Hi folks! I'm proposing we cancel the QA meeting for Monday. I don't
have anything urgent this week, and it's a vacation day in Canada.
If you're aware of anything important we have to discuss this week,
please do reply to this mail, but someone else will need to run the
meeting :)
I can't run a blocker review meeting either. There are three proposed
Beta blockers; we can either wait until next week, vote in-bug, or
someone else can run a meeting.
Thanks!
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net
_______________________________________________
test-announce mailing list -- test-announce(a)lists.fedoraproject.org
To unsubscribe send an email to test-announce-leave(a)lists.fedoraproject.org
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/test-announce@lists.fedorapro…
Hey all,
So I was trying to update libseccomp last night, and I was able to
build it for everything except aarch64 on Rawhide, where configure
reports that the compiler can't build executables[1].
Looking a bit closer, it looks like the compiler stack is out of sync
again with annobin.
Is there anything that can be done to keep the compiler team from
submitting gcc into Rawhide without doing the required rebuild cycle
so that annobin keeps working?
And we're going to have the same problem with clang now that annobin
has grown a clang plugin, so I would want neither LLVM nor GCC to land
in Rawhide unless those teams are actually verifying that annobin
doesn't break the compiler afterward.
I'm personally very tired of having the compiler break so frequently
because of that plugin. Either a mechanism to hold back GCC builds
until annobin works gets implemented, or I'd much rather see the
whole thing go away. Obviously, you could just *bundle* annobin into
the GCC package and build them together to ensure it never breaks, but
that option was already discarded[2].
Somebody fix it. ASAP.
[1]: https://koji.fedoraproject.org/koji/taskinfo?taskID=47796366
[2]: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org…
--
真実はいつも一つ!/ Always, there's only one truth!
Hi,
The change proposal has a 'compression option' and we kinda need to
get organized.
https://fedoraproject.org/wiki/Changes/BtrfsByDefault#Compression
- Compression saves space, significantly reduces write amplification
and therefore increases flash lifespan, and in some cases increases
performance.
- Desired but not a requirement of the change proposal.
1. Goal: performance-wise, the goal is probably to perform as well as
or better than now. Is it OK if there's a write-time performance hit
for a small percentage of folks, for a high-value target like /usr that
isn't updated that often, and is also updated out of band (offline
updates typically, and in any case not something directly tied to the
daily workload)? How do we decide this?
2. Benchmarking: this is hard. A simple tool for doing comparisons
among algorithms on a specific bit of hardware is lzbench.
https://github.com/inikep/lzbench
How to compile on F32.
https://github.com/inikep/lzbench/issues/69
But is that adequate? How do we confirm/deny on a wide variety of
hardware that this meets the goal? And how is this test going to
account for parallelization, and read ahead? Do we need a lot of data
or is it adequate to get a sample "around the edges" (e.g. slow cpu
fast drive; fast cpu slow drive; fast cpu fast drive; slow cpu slow
drive). What algorithm?
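For a rough sense of what such a benchmark measures, here's a minimal sketch using only Python's stdlib codecs; zlib, bz2, and lzma stand in for the algorithms lzbench actually covers (zstd isn't in the stdlib), and the toy corpus is made up, so treat it purely as an illustration of the ratio-vs-throughput tradeoff, not as data:

```python
# Crude stand-in for lzbench: compress a sample corpus with several
# stdlib codecs and report compression ratio and throughput.
import bz2
import lzma
import time
import zlib

# Hypothetical corpus; real benchmarking would use actual /usr contents.
data = b"Fedora Workstation installs /usr mostly once and reads it often. " * 4096

codecs = {
    "zlib-1": lambda d: zlib.compress(d, 1),   # fast, lower ratio
    "zlib-9": lambda d: zlib.compress(d, 9),   # slow, higher ratio
    "bz2-9":  lambda d: bz2.compress(d, 9),
    "lzma-0": lambda d: lzma.compress(d, preset=0),
}

for name, fn in codecs.items():
    t0 = time.perf_counter()
    out = fn(data)
    dt = time.perf_counter() - t0
    ratio = len(data) / len(out)
    print(f"{name:8s} ratio {ratio:6.1f}x  {len(data) / dt / 1e6:8.1f} MB/s")
```

Even this toy run shows why "what algorithm, at what level" is a real question: the fast settings and the high-ratio settings can differ by an order of magnitude in speed.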
3. Improvements and upgrades. We'll do plan A, but learn new things
later, and come up with plan B. How do we get the plan A folks
upgraded to plan B? Or just don't worry?
4. The whole file system (using a mount option) or curated (using an
XATTR set on specific "high value" directories)? This part is
elaborated below.
A. do this with a mount option '-o compress=zstd:1'
- dilemma: it doesn't always lead to equal or better performance.
On some systems and workloads, write performance is slightly reduced.
What about LZO?
B. do this with per directory XATTR
- dilemma: the target directories don't exist at install time,
depending on whether the installation is rsync, rpm, or unsquashfs
based.
C. do the install with '-o compress=zstd', then set XATTR post-install
- dilemma: the files installed before the XATTR is set won't have
it; only files created afterward inherit it. Does a 'dnf update'
overwrite files in place (so the XATTR is not inherited), or does it
create new files that do inherit it?
D. Which directories? Some may be outside of the installer's scope.
/usr
/var/lib/flatpak
~/.local/share/flatpak
/var/lib/containers/
~/.local/share/containers/
~/.var
~/.cache
(Plausibly this list should be reversed: while compressing ~/.cache
may not save much space, it's likely hammered with more changes than
the other locations, hence offers more benefit in terms of reducing
write amplification.)
For reference, the above is mostly from the description in the RFE bug
attached to the feature's tracker bug. But I think it's best to have
most discussion here and leave the bug for implementing+testing the
implementation details.
https://bugzilla.redhat.com/show_bug.cgi?id=1851276
Thanks,
--
Chris Murphy
I see that this ticket is still NEW.
I've updated it with my experience that the suggested fix works.
It would be good to get this fix in for F33.
Is there more testing I can do to help?
Barry