Phoronix recently released an article about Intel's Clear Linux with some
cool graphs showing nice performance gains compared to Xubuntu.
I didn't have time to dig in and look at how it performs against Fedora,
but I'd assume Fedora is comparable to Xubuntu in terms of compiler settings.
I think it'll be interesting to look into it and find out whether Fedora
could tweak compiler settings (e.g. use LTO for critical things like Mesa,
the kernel, ...). I think it could be interesting for Fedora users to have
this enabled if there are no disadvantages other than compile time, compile
memory usage and so on.
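As a quick local experiment (this is just a sketch of how one could test the idea; the package and flags are illustrative, not what Clear Linux actually does), LTO can be tried on a single package by overriding the optimization flags at rebuild time:

```shell
# rebuild an SRPM with LTO appended to the standard optimization flags;
# the SRPM name here is only an example
rpmbuild --rebuild mesa-*.src.rpm \
    --define 'optflags -O2 -g -flto'
```

That would at least give a rough idea of the compile-time and memory cost before proposing anything distribution-wide.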
What do you think?
Best regards,
I like to have everything on my system in a package. So, I looked around and
found no recipe or RPM for RStudio. This is really a shame, because every
tutorial on R more or less tells you to install it. Even the Coursera classes
in the Data Science track make you install it and send a screenshot to prove
it. So, I spent some time getting it packaged and working. I am placing the
spec file and the necessary patch here so that Google finds them and saves
other people the trouble. I don't plan to submit the package to Fedora
because it's more work than I have time for. If anyone else wants to take it
from here and submit and/or maintain it, feel free.
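For anyone who wants to try it locally, building from a spec plus patch is roughly the following (the file names are placeholders for whatever is attached):

```shell
# drop the spec and patch into a standard rpmbuild tree
mkdir -p ~/rpmbuild/SPECS ~/rpmbuild/SOURCES
cp rstudio.spec ~/rpmbuild/SPECS/
cp rstudio-*.patch ~/rpmbuild/SOURCES/
# download the sources named in the spec into SOURCES, then build
spectool -g -R ~/rpmbuild/SPECS/rstudio.spec
rpmbuild -ba ~/rpmbuild/SPECS/rstudio.spec
```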
apitrace 5.0 bundles libbacktrace, which appears to live within the
GCC sources. libbacktrace is not built as a shared library from the GCC
sources, and is not packaged.
Is it feasible to build libbacktrace as a shared library and ship it in
a corresponding package? Or should I rather go for a bundling exception?
Does anyone use the xulrunner package (and gecko-devel, actually)?
Mozilla does not maintain it any more, and XUL as a technology is going
to be deprecated and removed. I'd like to remove the package from Fedora 24.
So far my approach to maintaining Fedora's iproute package has been to do
full version updates only in Rawhide and to backport patches selectively to
stable versions in response to bug reports.
But since stable versions indeed receive full kernel updates (not just
backported patches), there is an understandable amount of frustration
amongst users when the shiny new kernel that comes with e.g. F22
provides features userspace does not support.
Especially since upstream iproute2 does not really have a concept of
stable versions, I'm in a bit of a dilemma here: update to keep in sync
with the kernel, or don't update so as not to unnecessarily destabilize the
stable releases.
Any comments/advice are highly appreciated.
Some time back there was discussion of being able to roll back yum updates via
btrfs snapshotting. As I recall, it turned out that the default btrfs install
was not set up correctly to make this feasible (I had briefly tested it on my
machine). I haven't heard anything since - this still seems like a great idea.
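The manual version of the idea works today if / is a btrfs subvolume; a rough sketch (the snapshot directory and subvolume layout are assumptions about a typical install, not the Fedora default):

```shell
# take a read-only snapshot of the root subvolume before updating
sudo mkdir -p /snapshots
sudo btrfs subvolume snapshot -r / /snapshots/pre-update-$(date +%F)
sudo dnf update
# to roll back, the snapshot can be made the default subvolume
# (see btrfs-subvolume(8)) and the machine rebooted
```

Having anaconda set up a layout where this Just Works would be the interesting part.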
-- Those who don't understand recursion are doomed to repeat it
Since last Monday or so, I have not been able to run firefox over ssh
anymore. I thought it was my setup, but further investigation showed
that it is something specific to firefox.
My setup is a bit more convoluted than this, but I am able to do:
$ ssh -X localhost gnome-terminal
And it shows a terminal as expected
$ ssh -X localhost firefox
without a firefox already running, will hang there forever, with no output
at all. I tried running strace on it, and saw that it is waiting on
futexes, but I haven't been able to figure out what is happening.
Until last Tuesday or so (I am on F23) firefox worked over ssh without
any problems. I have been running it like that for something like a
Does anyone have any suggestions? I also tried
$ ssh -Y localhost firefox
And it didn't help at all (not that I am sure of the difference either).
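One thing worth ruling out: firefox tries to contact an already-running instance on the display before starting up, which can end in exactly this kind of silent hang. Forcing a fresh instance is a cheap test (these are real firefox options, but whether they cure this particular hang is a guess):

```shell
# force firefox to start a new instance instead of contacting an existing one
ssh -X localhost firefox --no-remote
# or, on newer builds:
ssh -X localhost firefox --new-instance
```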
P.S. Really, what I normally do is run ssh to a virtual machine in the
I just ran into this: https://bugzilla.redhat.com/show_bug.cgi?id=1309175
It's not a huge deal (and there are several workarounds, for git and for
other tools which default to using 'gpg'), but it highlights the mismatch
between the default /usr/bin/gpg running gpg1 while other tools, like
gpg-agent, are tailored for gpg2.
RHEL/CentOS has shipped /usr/bin/gpg with gnupg2 since at least sometime in
I'm not saying we shouldn't continue to ship gnupg1, but can we at least
rename things, so that the gnupg package is version 2 and the gnupg1
package provides /usr/bin/gpg1 instead? This seems overdue. Is there any
reason not to do this?
I am one of the maintainers of the ntl package, which is used by some
numeric applications (e.g., Macaulay2 and sagemath). Upstream
supports use of the PCLMUL instruction, the AVX instructions, and the
FMA instructions to speed up various computations. We can't use any
of those in Fedora, since we have to support a baseline x86_64.
Well, that's kind of a downer. I could advertise that people with
newer CPUs ought to rebuild the ntl package for their own CPUs, but
what's a distribution for if people have to rebuild packages? I've
been looking for a way to automatically support more recent CPUs.
Yesterday I sent a patch upstream that uses gcc's indirect function
support together with __attribute__((target ...)) to build vanilla
x86_64, PCLMUL-enabled, AVX-enabled, and FMA-enabled varieties of
several functions. Upstream was initially excited about this but
then, on further reflection, offered the opinion that this approach is
dangerous. The problem is that some of the types involved may change
ABI depending on the instruction set in use, and therefore it would be
necessary to build larger portions of the library for each supported
CPU variant. At that point, as upstream said, we might as well just
build the entire library for each variant. The problem then is how to
choose which version of the library to use at load time.
On some platforms, ld.so offers "hardware capabilities", such as sse2
on i386. By dropping a vanilla library into /usr/lib and an
SSE2-enabled build into /usr/lib/sse2, applications can get the
version of the library appropriate for the CPU in use. But there
don't seem to be any defined hardware capabilities for x86_64.
Has anybody already thought this through? What's the best approach to
take? For this package, the speedups are substantial, so this is
worth doing, if it can be done well.
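For the load-time selection question, the runtime check itself is the easy part; the hard part is agreeing on a directory convention. Just as an illustration (a sketch, not a packaging proposal), detecting the flags ntl cares about looks like:

```shell
# report which of the relevant instruction sets this CPU supports,
# based on the flags line of /proc/cpuinfo
flags=$(grep -m1 '^flags' /proc/cpuinfo)
for isa in pclmulqdq avx fma; do
    case " $flags " in
        *" $isa "*) echo "$isa: yes" ;;
        *)          echo "$isa: no"  ;;
    esac
done
```

The open question is where ld.so (or a wrapper) would look for the matching library build on x86_64.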
Just noticed this change on rawhide...
* systemd-logind will now by default terminate user processes that are
part of the user session scope unit (session-XX.scope) when the user
logs out. This behavior is controlled by the KillUserProcesses=
setting in logind.conf, and the previous default of "no" is now
changed to "yes". This means that user sessions will be properly
cleaned up after, but additional steps are necessary to allow
intentionally long-running processes to survive logout.
While the user is logged in at least once, user@.service is running,
and any service that should survive the end of any individual login
session can be started at a user service or scope using systemd-run.
systemd-run(1) man page has been extended with an example which shows
how to run screen in a scope unit underneath user@.service. The same
command works for tmux.
After the user logs out of all sessions, user@.service will be
terminated too, by default, unless the user has "lingering" enabled.
To effectively allow users to run long-term tasks even if they are
logged out, lingering must be enabled for them. See loginctl(1) for
details. The default polkit policy was modified to allow users to
set lingering for themselves without authentication.
Previous defaults can be restored at compile time by the
--without-kill-user-processes option to "configure".
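For reference, the two escape hatches the NEWS entry alludes to boil down to (per the systemd-run(1) example it mentions and loginctl(1); exact behavior may vary by systemd version):

```shell
# run screen in its own scope unit so it survives logout
systemd-run --scope --user screen
# or mark yourself as lingering so user@.service stays up after logout
loginctl enable-linger "$USER"
```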
So, now, I've read this, and I could possibly remember to use systemd-run
or to set myself as lingering... Except that I don't want to have to go
through the pain of remembering either to change the system config on
all my servers or to always start things with systemd-run whenever a job
runs a bit long and I think I might want to ^Z/bg/disown it to let it
finish.
Thinking further, when my users get that update I don't see myself
telling them to do that every time they want to start a screen/tmux/nohup
job; users do not read the changelog of every update (tbh I don't either,
unless there's a problem), and they probably wouldn't think of systemd if
they ever hit that particular issue... heck, they probably don't even know
what systemd and logind are (even if, yes, they are "advanced" enough to
ssh into other servers to run *long* tasks that must continue overnight or
after the user logs out; that doesn't mean they know what they're using).
Sure, this change will work for the probable target audience of simple
desktop users on shared workstations, where we probably do want to kill
lingering processes; but how many of those are there compared to servers?
I know that if this gets through I will have to change the system
default on all my servers... And while the big batches of thousands of
compute nodes are automated, there are still quite a few places to update,
especially since this will be the first time we need to change
logind.conf, so it's not just adding a line to a file that is already
propagated.
Anyway, I don't really want to start (yet) a(nother) troll about systemd;
I appreciate that it has also brought good things; I'd just like the
default values to be sane for most users.
I did not see any discussion about this particular setting on the
systemd-devel mailing list, so I have hope that it is still open to
change, but I'd rather start with a community where there are more
admins, who will likely agree that this change will do more harm than
good.
Even if nothing comes out of it, at least more people will be aware of
the issue and will be able to prepare, avoiding most of the chaos that
will come if this stays as it is...
Thanks for reading,