On 27. 05. 22 at 19:26, Vivek Goyal wrote:
On Fri, May 27, 2022 at 12:16:38PM -0500, David Teigland wrote:
> On Fri, May 27, 2022 at 01:05:57PM -0400, Vivek Goyal wrote:
>> So if we fill up the thin pool completely, it might fail to activate over
>> a reboot? I do remember there were issues w.r.t. filling up the thin pool
>> completely, and it was not desired.
>>
>> So the above does not involve growing the thin pool at all? It just says:
>> query the currently available space in the thin pool and, when it is about
>> to be full, stop writing to it? This is suboptimal if there is
>> free space in the underlying volume group.
>>
>> Ok, this is going to be ugly given how kdump works right now. We have
>> this config option core_collector where the user can specify how the
>> vmcore should be saved (dd, cp, makedumpfile, ...).
>>
>> None of these tools knows about streaming, thin pool extension, etc.
>>
>> I guess one could think of making makedumpfile aware of thin pools. But
>> given there can be so many dump targets, it would be really ugly from a
>> design point of view: embedding knowledge of a target in a generic
>> filtering tool.
>>
>> Alternatively, we could probably write a tool of our own and pipe
>> makedumpfile output to it. But then the user would have to specify it
>> in core_collector for thin pool targets only.
>>
>> None of the solutions look clean or fit well into the current design.
> Maybe I'm not following, but all this sounds unnecessarily complicated.
> Roughly estimate largest possible kdump size (X MB).
> Check that the thin pool has X MB free.
> If not, lvextend -L+XMB the thin pool.
> If lvextend doesn't find X MB in the vg, then quit without kdump.
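David's four steps could be sketched roughly as below. This is only a sketch: `vg0/pool`, `EST_MB`, and the error handling are placeholders, and real code would need to parse `lvs` output more defensively.

```shell
# Rough sketch of the steps above. Given the pool size (MiB), its
# data_percent usage, and the estimated dump size (MiB), print the
# shortfall to extend by (nothing if the pool already has enough free
# space). Exit status: 0 = enough space, 1 = extension needed.
pool_shortfall_mb() {
    awk -v t="$1" -v u="$2" -v e="$3" 'BEGIN {
        free = t * (100 - u) / 100
        if (free >= e) exit 0
        printf "%d\n", e - free + 1
        exit 1
    }'
}

# Intended use (untested, placeholder names):
#   total=$(lvs --noheadings --units m --nosuffix -o lv_size vg0/pool)
#   used=$(lvs --noheadings --nosuffix -o data_percent vg0/pool)
#   if extra=$(pool_shortfall_mb "$total" "$used" "$EST_MB"); then
#       :                       # enough space already
#   elif ! lvextend -L +"${extra}m" vg0/pool; then
#       echo "no space in VG, skipping kdump" >&2; exit 1
#   fi
```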
Estimation is hard. We could just look at the raw (unfiltered) /proc/vmcore
size and extend by that. But the problem there is that we also support kdump
on multi-terabyte machines, and after filtering the final vmcore could
be just a few GB. So extending the thin pool to, say, 12TB might very well
fail, and then we fail to save the dump.
Maybe use the above trick for the dd and cp core_collectors, as they will not
filter anything.
And for makedumpfile, run it twice: the first run only gives a size estimate
and the second run actually saves the dump. And do this only for thin volume
targets. This will almost double the dump saving time.
So ideally it would be nice if we could enable automatic thin pool extension
from the initramfs.
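For context, LVM's automatic thin pool extension is driven by dmeventd monitoring plus two lvm.conf settings; something like the following fragment (values here are illustrative, not a recommendation) is what would have to take effect inside the initramfs:

```
# lvm.conf fragment (illustrative values). dmeventd must be running and
# monitoring enabled for autoextend to trigger, which is the hard part
# in an initramfs environment.
activation {
    monitoring = 1
    # When pool usage crosses 70%, extend the pool by 20% of its size.
    thin_pool_autoextend_threshold = 70
    thin_pool_autoextend_percent = 20
}
```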
For the kdump environment this is certainly not ideal, as this itself
requires a lot of RAM. Buffered processing should be doable even in plain
bash, if you can pipe 'dd' to it.
As mentioned previously, it would also be good to make sure the thin pool
leaves some 'configured' free space, so that e.g. those multi-TiB dumps do
not overfill the thin pool and possibly make the system hard to use after
such a captured kdump. Although, to keep things simple, one could imagine
just 'dropping' the kdump if the thin pool runs over some threshold.
(So e.g. if the kdump fills the thin pool over 99%, drop it, so the user can
still use the thin pool after reboot; better to have a usable system in this
case, I'd say.)
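The "drop it over a threshold" idea could be sketched in plain sh as below; the pool name, the 99% threshold, and the vmcore path are all assumptions for illustration.

```shell
# Sketch only: drop the captured dump if the pool ended up too full.
# Returns 0 when usage (e.g. lvs data_percent) exceeds the threshold.
over_threshold() {
    awk -v u="$1" -v t="$2" 'BEGIN { exit !(u > t) }'
}

# Intended use (untested; vg0/pool, 99 and the path are assumptions):
#   used=$(lvs --noheadings --nosuffix -o data_percent vg0/pool)
#   if over_threshold "$used" 99; then
#       rm -f /sysroot/var/crash/*/vmcore   # keep the pool usable
#   fi
```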
Regards
Zdenek