On Mon, Aug 22, 2022 at 05:44:26PM -0700, Adam Williamson wrote:
> Hey folks! I apologize for the wide distribution, but this seemed
> like a bug it'd be appropriate to get a wide range of input on.
>
> There's a bug that was proposed as an F37 Beta blocker:
> https://bugzilla.redhat.com/show_bug.cgi?id=1907030
>
> it's quite an old bug, but up until recently, the summary was
> apparently accurate - dnf would run out of memory with 512M of RAM, but
> was OK with 1G. However, as of quite recently, on F36 at least (not
> sure if anyone's explicitly tested F37), dnf operations are commonly
> failing on VMs/containers with 1G of RAM due to running out of RAM and
> getting OOM-killed.
The discussion in the bug indicates that this memory growth is related to
loading of the full filepath dataset. We have been discussing splitting
the non-primary filepath data (i.e. paths outside /etc, /usr/bin, and
/usr/sbin) out into a separate, lazily loaded file. If we manage to do
that, we'll kill two birds with one stone:
- the initial download of repo metadata on every freakin' dnf operation
  can go down from 80 MB to 20 MB
- peak memory use will go down
Apparently DNF5 makes this possible.
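To make the idea concrete, here's a rough sketch of the split in Python.
The names (PRIMARY_PREFIXES, split_filelist) are hypothetical and the
prefix set is just the one mentioned above; the actual rule createrepo_c
and libdnf would use to decide what stays in primary metadata may differ:

  # Sketch only, not the real createrepo_c/libdnf logic: partition a
  # package's file list into paths kept in primary metadata and paths
  # deferred to a separate, lazily loaded filelists file.
  PRIMARY_PREFIXES = ("/etc/", "/usr/bin/", "/usr/sbin/")  # per this thread

  def split_filelist(paths):
      primary, deferred = [], []
      for path in paths:
          # str.startswith accepts a tuple of prefixes
          if path.startswith(PRIMARY_PREFIXES):
              primary.append(path)
          else:
              deferred.append(path)
      return primary, deferred

  primary, deferred = split_filelist([
      "/usr/bin/dnf",                  # stays in primary metadata
      "/etc/dnf/dnf.conf",             # stays in primary metadata
      "/usr/share/doc/dnf/README.md",  # deferred, loaded only on demand
  ])

The payoff would be that ordinary installs never touch the deferred set;
only a query that has to match arbitrary paths (say, dnf provides on a
doc file) would need to download and load the big file.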
My vote is: yes, this is an issue. No, we shouldn't block F37 on this.
Apparently the only reasonable way to tackle this is with a major rework
of DNF, so let's get it right with DNF5.
Zbyszek