On Sun, Jan 05, 2020 at 10:08:07AM -0700, Chris Murphy wrote:
> In my testing, xz does provide better compression ratios, well suited
> for seldom-used images like archives. But it really makes the
> installation experience worse by soaking the CPU, times thousands of
> installations (openQA tests on every single nightly, every human QA
> tester for nightlies, betas, and then the final released product used
> by Fedora end users).
> Has zstandard been evaluated? In my testing of images compressed with
> zstd, the CPU hit is cut by more than 50%, and is no longer a
> bottleneck during installations. Image size does increase, although I
> haven't tested mksquashfs block sizes higher than 256K. Using zstd with
> Fedora images also builds on prior evaluation, testing, and effort
> moving RPM from xz to zstd.
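The ratio-versus-CPU trade-off described above can be sketched with Python's stdlib codecs. zstd itself isn't in the stdlib, so zlib stands in here for a lighter, faster codec, and the payload is a made-up repetitive blob rather than a real install image:

```python
import time
import zlib
import lzma

# Highly compressible sample payload standing in for an install image.
payload = (b"some repetitive filesystem data " * 64) * 1024  # ~2 MiB

xz_blob = lzma.compress(payload, preset=6)   # high ratio, CPU-heavy
zl_blob = zlib.compress(payload, level=6)    # lower ratio, lighter

for name, blob, decomp in (("xz", xz_blob, lzma.decompress),
                           ("zlib", zl_blob, zlib.decompress)):
    t0 = time.perf_counter()
    assert decomp(blob) == payload
    print(f"{name:4s}: {len(blob)} bytes compressed, "
          f"decompressed in {time.perf_counter() - t0:.4f}s")
```

On a toy payload like this both codecs do well; the point is only that the higher-ratio codec pays for it in decompression CPU, which is exactly the cost multiplied across thousands of installs.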
Block-based decompression works with xz, but not with zstd. We do
use this feature. Here's the GitHub issue to get block-based
decompression supported in zstd, with a bit of background on how we
use the feature:
https://github.com/facebook/zstd/issues/395#issuecomment-535875379
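For anyone unfamiliar with the feature, here's a minimal sketch of what block-based compression buys you: each block is compressed as an independent stream, so a reader can decompress one block in the middle of an image without touching the rest. This uses xz via Python's lzma module purely for illustration; the block size and the (offset, length) index are made up here, and squashfs does its own bookkeeping.

```python
import lzma

def compress_blocks(data: bytes, block_size: int):
    """Compress data as independent xz streams, one per block.
    Returns the concatenated compressed bytes plus an index of
    (offset, length) pairs, one entry per block."""
    out = bytearray()
    index = []
    for i in range(0, len(data), block_size):
        comp = lzma.compress(data[i:i + block_size])
        index.append((len(out), len(comp)))
        out += comp
    return bytes(out), index

def read_block(blob: bytes, index, n: int) -> bytes:
    """Decompress only block n -- earlier blocks are never read."""
    off, length = index[n]
    return lzma.decompress(blob[off:off + length])

data = bytes(range(256)) * 4096               # ~1 MiB of sample data
blob, index = compress_blocks(data, 256 * 1024)

# Random access: pull out just the third 256K block.
chunk = read_block(blob, index, 2)
assert chunk == data[2 * 256 * 1024 : 3 * 256 * 1024]
```

The cost, as Chris notes above, is a somewhat worse overall ratio, since each block resets the compressor's state.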
Rich.
--
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog:
http://rwmj.wordpress.com
libguestfs lets you edit virtual machines. Supports shell scripting,
bindings from many languages.
http://libguestfs.org