Previously discussed several times, most recently:
* 2015 https://email@example.com...
* 2011 https://lists.fedoraproject.org/pipermail/devel/2011-September/thread.htm...
Unison is a fairly widely used file synchronization package. Think of
it as a more efficient, multi-directional 'rsync'.
Unison has the unfortunate property that versions of Unison are not
compatible with each other unless they have the exact same major.minor
release. For example, Unison 2.40.128 is compatible with Unison 2.40.102, but
incompatible with Unison 2.48.3 (the latest upstream).
The reason that matters is you might be running Unison across multiple
machines, running different Linux distros, which have different
versions of Unison.
For this reason, Fedora packages three different Unison branches:
* unison213 (currently Unison 2.13.16)
* unison227 (currently Unison 2.27.157)
* unison240 (currently Unison 2.40.128)
* There was a "unison" package, but it is retired
We don't package the latest upstream versions (Unison 2.48.4,
Unison 2.51.2) at all.
Because of what I said above, it matters what Debian is shipping:
* unison 2.27.57
* unison 2.32.52
* unison 2.40
=> If you wanted to communicate between Fedora and Debian you could
use either 2.27 or 2.40.
It's not likely that Unison will use a stable, cross-version protocol
any time soon.
I'm proposing that we clear up this mess by creating a single unified
package called just "unison" which will build subpackages for each
version. It will contain multiple source tarballs, one for each
major.minor branch we ship.
Although this is very slightly dubious from a packaging point of view,
I believe it's the best solution here. It means we can build multiple
versions, we don't need to go through the new package review process
every time upstream releases a new major version, and it'll make
managing the package simpler at the cost of a somewhat more complex
spec file.
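To make the idea concrete, here is a hypothetical fragment of what such a unified spec could look like. This is only a sketch: the source filenames, subpackage names, and descriptions are illustrative, not a final design.

```spec
# Hypothetical unified unison.spec: one subpackage per protocol branch.
# Versions shown are the ones currently packaged in Fedora.
Name:           unison

Source0:        unison-2.40.128.tar.gz
Source1:        unison-2.27.157.tar.gz
Source2:        unison-2.13.16.tar.gz

%package 240
Summary:        File synchronizer, 2.40 protocol branch
%description 240
Unison speaking the 2.40 wire protocol (compatible with other
2.40.x builds, e.g. Debian's unison 2.40).

%package 227
Summary:        File synchronizer, 2.27 protocol branch
%description 227
Unison speaking the 2.27 wire protocol.
```

Each subpackage would be built from its own tarball, so a new upstream major.minor branch becomes a new Source line and subpackage rather than a new package review.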
If no one has any objections, I'll submit a unified unison package to
the new package process.
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
[ Broadening the audience to include devel@lists.fedoraproject.org ]
Anybody have thoughts about my question below? Some examples of places
which will need logic like this to replace PDC usage:
* Resolve a module name:stream to a particular build, and do dependency
expansion
* Get the flatpak-runtime modulemd to find what packages aren't in the
runtime and need to be bundled
* Load the modulemd for particular module builds to get profile
information and figure out what to install in a container
* Load modulemd and do dependency expansion to build a container locally
* Load the modulemd for the module we are building into a flatpak to
find what runtime it is using, and hence the right build target
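A sketch of the Koji-based lookup being described. Hedged heavily: the hub URL and the `build['extra']['typeinfo']['module']` layout are assumptions about how module builds look in Koji today, and "latest" is defined here as newest completion time, which is exactly the part that varies per use case.

```python
def pick_latest(builds, stream):
    """From a list of Koji build dicts, pick the newest one on `stream`.

    Assumes module builds carry their stream under
    build['extra']['typeinfo']['module']['stream'].
    """
    matching = [
        b for b in builds
        if b.get("extra", {})
            .get("typeinfo", {})
            .get("module", {})
            .get("stream") == stream
    ]
    # "latest" = highest completion timestamp; other policies are possible
    matching.sort(key=lambda b: b["completion_ts"], reverse=True)
    return matching[0] if matching else None


def latest_module_build(name, stream,
                        hub="https://koji.fedoraproject.org/kojihub"):
    """Query Koji for module builds of `name`, return the latest on `stream`."""
    import koji  # Fedora's koji client library
    session = koji.ClientSession(hub)
    builds = session.listBuilds(packageID=session.getPackageID(name),
                                type="module")
    return pick_latest(builds, stream)
```

The pure filtering/sorting part is separated out so the "what does latest mean" policy can be swapped without touching the Koji session code.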
On Mon, May 21, 2018 at 1:29 PM, Owen Taylor <otaylor@redhat.com> wrote:
> My understanding is that with the planned retirement of the PDC:
> Querying for module information should be done using the MBS and/or Koji
> Various code that I maintain (in OSBS, fedmod, and random tooling) wants
> to do module build lookups - different variations of "look up the modulemd
> for latest build of a NAME:STREAM[:VERSION]". Variations generally being
> exactly what "latest" means here.
> The code generally already is using Koji and the MBS api is quite limited,
> so I've chosen to do the lookups via Koji.
> What I have now is a test tool that incorporates most of the capability
> that I needed across my uses. It's distinctly more than a couple of lines
> of code - I can cut-and-paste it for now, but what's the right long-term
> home? Is there a simpler way?
> My best idea right now is that if the 'base_version' and 'status' part of
> my code was simplified to simply be "tag" and avoid reliance on the tag
> structure of Fedora, then this might make a reasonable addition to the Koji
> CLI and API - there are some things that using raw tags for the query makes
> trickier, but it's probably workable.
> Thanks for any input!
Hi, since our Koji builds have internet access disabled, I want to do
the same locally, so that my builds behave more consistently with Koji.
I have the following in /etc/mock/site-defaults.cfg:
config_opts['use_host_resolv'] = True # or False, no difference
config_opts['rpmbuild_networking'] = False
Network access is indeed blocked (desired), but failed requests take a
long time to time out (undesired).
This is especially tedious when building Python packages that use
intersphinx and try to fetch inventories from the internet during the
build (the fetch fails, which is fine). Each such request prolongs the
build by ~1 minute. I have Python packages that try (and fail) to
download 10+ intersphinx inventories during the build. That's 10+
minutes of waiting for something that will fail anyway. I've been
adding hacks to the sphinx calls to make intersphinx time out sooner,
but it's unnecessary clutter in the spec.
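For reference, the hack boils down to capping intersphinx's fetch timeout on the sphinx-build invocation in %build. intersphinx_timeout is a standard sphinx.ext.intersphinx option; the paths here are illustrative, not from any real spec.

```shell
# Illustrative: make intersphinx give up after 1 second instead of
# waiting out the full network timeout. Paths are hypothetical.
sphinx-build -b html -D intersphinx_timeout=1 docs/ docs/_build/html
```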
So I decided to test this. I've added the following to the spec:
time curl http://example.com/
Here are the times:
* my mock: real 0m56.595s Could not resolve host
* Koji: real 0m0.030s Could not resolve host
This seems to be purely about host resolution. Compare with a raw IP
address:
time curl http://188.8.131.52/
* my mock: real 2m10.953s Connection timed out
* Koji: real 2m11.337s Connection timed out
Is there a trick to make resolving fail early?
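One idea I've been considering, as an untested sketch: make the chroot's resolv.conf point at a local address where nothing answers, so lookups fail with an immediate connection-refused instead of a long timeout. This assumes mock's config_opts['files'] mechanism writes the file into the chroot as documented.

```python
# site-defaults.cfg sketch (untested assumption: mock honors
# config_opts['files'] and writes this into the chroot)
config_opts['use_host_resolv'] = False
config_opts['rpmbuild_networking'] = False
config_opts['files']['etc/resolv.conf'] = "nameserver 127.0.0.1\n"
</imports>
```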
I maintain MariaDB.
There is a subpackage containing the TokuDB storage engine (available
only on the x86_64 architecture). However, TokuDB can't be built
against jemalloc 5, and on F>=28 there is no older jemalloc version.
Building without jemalloc is not supported either.
Is it OK to drop the subpackage on F28 and later until upstream adds
jemalloc 5 support? I'm ready to push such an update to Bodhi.
Here's a bugzilla for this issue:
So far, on F>=28 TokuDB has been built without jemalloc. That may be
risky to rely on, yet no one has complained. That's why I'm also asking
the users mailing list, to get feedback from potential users I would
otherwise be unable to reach.
Associate Software Engineer
Core Services - Databases Team
On 29-04-18 09:25, Hans de Goede wrote:
> Hi All,
> I'm sending this to the fedora-kernel and -devel lists
> both to get the kernel team aware of this and because it
> is not entirely clear to me how to best deal with this.
> I guess we should get this added to the release-notes /
> common-bugs page for F28, but I'm not sure what the
> procedure is for that?
> I've just become aware that, at least for some users,
> the use of SATA LPM in F28 causes Lenovo 50 series
> laptops (confirmed on X250, T450s, G50-80) to
> freeze/hang hard under certain conditions,
> independent of the disk used (*).
> This is currently being tracked in:
> A known workaround is to add: "ahci.mobile_lpm_policy=0"
> to the kernel boot command line.
> Can someone help me to get this documented? Once we've
> figured out what is going on I hope to be able to fix this
> with a kernel update, but people may still need the
> workaround to install Fedora 28.
> Also if people are using Fedora 28 on a 50 series Lenovo
> laptop without issues, please let me know.
Some more info on this: the user with an X250 reports
a hard freeze about once a day, which goes away when
disabling LPM. So if you have a 50 series Lenovo
laptop and are seeing occasional hard freezes, this
may be the cause.
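For anyone who needs the workaround to persist across reboots, the usual Fedora way to edit kernel arguments for all installed kernels is grubby (run as root; only do this if you are actually hitting the freeze):

```shell
# Append the LPM workaround to every installed kernel's boot entry
grubby --update-kernel=ALL --args="ahci.mobile_lpm_policy=0"
```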