On 01. 06. 22 at 14:51, Vivek Goyal wrote:
> On Mon, May 30, 2022 at 11:28:42AM +0200, Zdenek Kabelac wrote:
> On 30. 05. 22 at 4:34, Baoquan He wrote:
>> On 05/27/22 at 11:39am, Vivek Goyal wrote:
>>> On Fri, May 27, 2022 at 04:59:38PM +0200, Zdenek Kabelac wrote:
>>>> On 27. 05. 22 at 16:50, Vivek Goyal wrote:
>>>>> On Fri, May 27, 2022 at 04:42:25PM +0200, Zdenek Kabelac wrote:
>>>>>> On 27. 05. 22 at 14:20, Vivek Goyal wrote:
>>>>>>> On Fri, May 27, 2022 at 02:45:14PM +0800, Tao Liu wrote:
>>>>>>>
>>> Technically speaking, one could first run makedumpfile just to determine
>>> what the size of the vmcore will be, and then actually save the vmcore in
>>> a second round. But that would double the filtering time.
>> Yeah. Besides, the memory content of the system is changing dynamically
>> all the time. E.g. whether your Oracle DB is running or not, the user
>> space data is definitely not the same. And doing the work twice would
>> involve people's manual work; automation is still expected where it can
>> be achieved.
>>
>>>> Running the very resource-hungry dmeventd (it locks all the process
>>>> memory in RAM - could be many, many MB) in the kdump environment is
>>>> IMHO not the best option here - I'd prefer to avoid execution of
>>>> dmeventd in this ramfs image.
>>> I understand. We also want to keep the size of the kdump initramfs to a
>>> minimum.
>> Right.
>>
>> I talked to Tao; he tested on a kvm guest with 500M of memory and 100M of
>> disk space to trigger the insufficient-disk-space condition. Tao said
>> dmeventd will consume about 40MB when executing. I am not familiar with
>> dmeventd; if running it costs a roughly constant 40M of memory, no matter
>> how much disk space needs to be extended at one time, we can adjust our
>> kdump script to increase the default crashkernel= value when lvm2 thinp
>> is detected. That looks acceptable on the kdump side.
>
> Dmeventd runs in 'mlockall()' mode - so the whole executable with all its
> libraries and all its memory allocations is pinned in RAM (so IMHO 40MiB
> is a rather low estimate).
>
> The reason for this is that in normal 'running' mode lvm2 protects
> dmeventd from being blocked when the rootfs runs out of space and dmeventd
> suspends a DM device with the rootfs on it - by having the whole binary
> mlocked in RAM it cannot 'deadlock' waiting on itself when it suspends the
> given DM device.
>
> For the kdump execution environment in a ramdisk this condition is not
> really relevant (though dmeventd was not designed to be executed in such
> an environment). However, as mentioned in my previous post - it's actually
> more useful to run 'lvm lvextend --use-policies' with the given thin-pool
> name in a plain shell loop running in parallel - as it gives basically the
> same result with far less memory 'obstruction' and with far better control
> as well (i.e. leaving the user a defined minimum to be sure the system can
> actually boot afterwards - so dumping only when there really is some
> space...)
> Hi Zdenek,
>
> Is running "lvm lvextend --use-policies" racy as well? I mean, is it
> possible that the dump process fills up the pool before lvextend gets a
> chance to extend it? Or is it fine even if the thin pool gets full - once
> it is extended again, will it unblock the dumping process automatically?
This is the *very* same command dmeventd runs internally (with the luxury
of being locked in RAM).
By default there is a 60-second 'delay' before a thin-pool starts to
'reject' IO operations when overfilled, so it should not present any
obstacle. Yeah - the extension might be slightly delayed (depending on the
sleep value in the shell loop).
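Such a parallel loop might be sketched like this (the pool name, interval,
and the LVM_CMD override are illustrative assumptions, not part of any
existing kdump script; dmeventd runs the same lvextend call internally):

```shell
extend_loop() {
    # $1 = PID of the running dump process, $2 = max iterations (safety
    # bound for this sketch). Keep asking lvm2 to apply the configured
    # autoextend policy while the dump process is still alive.
    dump_pid=$1
    max=${2:-1000}
    i=0
    while [ "$i" -lt "$max" ] && kill -0 "$dump_pid" 2>/dev/null; do
        # LVM_CMD defaults to "lvm"; set it to "echo" to dry-run the sketch
        # on a machine without a thin-pool.
        "${LVM_CMD:-lvm}" lvextend --use-policies "${POOL:-vg00/thinpool}"
        sleep "${INTERVAL:-5}"
        i=$((i + 1))
    done
}
```

The iteration bound is just a safety net for the sketch; a real kdump
script would simply loop for as long as makedumpfile runs.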
> But this still does not protect against filling up the data LV completely
> and making the rootfs unusable/unbootable.
It actually gives you a position from which you can better 'estimate'
whether you actually want to kdump or not - by calculating the kdump space
and the free space and ensuring some guaranteed free space will be left
(since you are the only user of the thin-pool at this moment).
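That pre-dump check could look roughly like this (a sketch only; the pool
name, the reserve value, and the lvs-based free-space calculation are
assumptions, not existing kdump code):

```shell
pool_free_mib() {
    # Free space in MiB, derived from lv_size and data_percent as reported
    # by lvs ($1 = vg/pool). Needs a live lvm2 setup; parsing is
    # illustrative.
    lvs --noheadings --units m --nosuffix -o lv_size,data_percent "$1" |
        awk '{print int($1 * (100 - $2) / 100)}'
}

fits_with_reserve() {
    # $1 = estimated dump size (MiB), $2 = pool free space (MiB),
    # $3 = reserve (MiB) that must stay free so the rootfs can still boot.
    [ "$1" -le $(( $2 - $3 )) ]
}

# Example gate (commented out - needs a real thin-pool):
# free=$(pool_free_mib vg00/thinpool)
# fits_with_reserve "$estimated_mib" "$free" 512 || exit 0
```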
> Bao mentioned that makedumpfile has the capability to estimate the size
> of the core dump. Maybe we should run that instead in the second kernel,
> extend the thin pool accordingly and then initiate the dump. For
Yep - if you know how much data you want to store - and ensure there is
enough free space in the thin-pool to store it - that's the best case.
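A one-shot sketch of that flow, assuming the estimated size is already
known (e.g. obtained from a makedumpfile dry run on a recent version);
the names and the LVM_CMD hook are illustrative assumptions:

```shell
POOL="${POOL:-vg00/thinpool}"   # assumption: thin-pool backing the dump LV
LVM_CMD="${LVM_CMD:-lvm}"       # set to "echo" to dry-run the sketch

grow_pool_by() {
    # $1 = extra MiB the pool should gain before the dump is started.
    "$LVM_CMD" lvextend -L "+${1}m" "$POOL"
}

# Typical flow: grow first, start the dump only if the extension succeeded
# (paths and makedumpfile options are the usual kdump defaults, shown here
# only as an example):
# grow_pool_by "$estimated_mib" &&
#     makedumpfile -l -d 31 /proc/vmcore /sysroot/var/crash/vmcore
```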
Regards
Zdenek