The fedora-packages code base is showing its age. The code base and
its technology stack (the Turbogears2 web framework and the Moksha
middleware) are not ready for Python 3, and I am not planning to do
the work required to make them Python 3 compatible, so the
application will stop working when Fedora 29 is EOL.
In order to keep the service running, I have started a Proof of
Concept (fedora-search) to replace the backend of the application.
Fedora-search would be a REST API service offering a full text search
API. Such a service would then be available for other applications to
use, and fedora-packages would become a frontend-only application
using the service provided by fedora-search.
While the POC shows that this is a viable solution, I don't think
we should proceed that way, for the simple reason that it adds
yet another code base to maintain. I think we should use this
opportunity to consider using Elasticsearch instead of maintaining our
own "search engine".
I think that Elasticsearch offers quite a few advantages:
- A powerful query language
- Python bindings
- Can be deployed in our infrastructure or used as a service
- Can be useful for other applications (docs.fp.o, pagure, ...?)
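For illustration, here is a rough sketch of what a package search query could look like through the official Python bindings (the elasticsearch package); the index name and field names ("packages", "name", "summary", "description") are assumptions for the example, not an existing schema.

```python
# Sketch of a fedora-packages full-text query via the Elasticsearch
# query DSL. Index and field names are hypothetical.

def build_package_query(term):
    """Build an Elasticsearch query body matching a search term against
    a package's name (boosted), summary, and description."""
    return {
        "query": {
            "multi_match": {
                "query": term,
                "fields": ["name^2", "summary", "description"],
            }
        }
    }

# Against a running cluster this would be used roughly as:
#   from elasticsearch import Elasticsearch
#   es = Elasticsearch(["http://localhost:9200"])
#   hits = es.search(index="packages", body=build_package_query("firewall"))
```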
So what is the general feeling about using Elasticsearch in our
infrastructure? Should we look at deploying a cluster in our infra,
or should we approach the Council to see if we can get funding to have
this service hosted by Elastic?
 - https://apps.fedoraproject.org/packages/
 - http://www.turbogears.org/
 - https://mokshaproject.github.io/mokshaproject.net/
 - https://github.com/fedora-infra/fedora-search
So, we've got a bit of a problem. The sigul package is not installable
in Fedora 29, and pygpgme is half-broken in Fedora 28 and was retired
during Fedora 29 development due to constant breakage.
This means that sigul is in danger of being retired in Fedora.
Unfortunately, sigul is the only supported signer system for Koji at
the moment.
What do we want to do here? It's well-known that sigul does not work
with GnuPG 2, though I vaguely recall that some work was done to try
to fix this.
Do we want to port sigul to python3-gpg, switching Sigul to Python 3
and the official gpgme bindings so that it works with GnuPG 2?
Or do we want to adapt the bridge to work with obs-signd (which is
already used by Copr)?
Always, there's only one truth!
The latest release of MirrorManager2 is still in updates-testing. I did
not want to push this version to updates-released because this is the
first Python 3 based release.
Could someone test the version from updates-testing? Update the
mirrorlist container and test it on one of the proxies?
This mail is about a new micro-service called Message-Tagging-Service (aka
MTS). It serves to tag module builds triggered by specific MBS events.
More detailed information is provided in the RFR ticket.
MTS works through a series of predefined rules to see if a module build
should be tagged with one or more tags. There is a requirement from
module maintainers to ensure a module build is tagged into the correct
platforms to fulfill the dependencies in the module metadata. A comment
in the ticket describes a specific use case for that.
So far, MTS has been containerized and deployed internally. The image
is available from quay.io. We would love to run MTS in Fedora as well,
in order to make it easier to manage module build tags for module
maintainers and rel-eng.
If anything is missing from this mail, please point it out. Questions
welcome! Thanks for your time.
Fedora Messaging, the replacement for fedmsg, uses AMQP and thus
requires a message broker. The current clusters we have deployed in
staging and prod are only accessible from inside our infrastructure.
There are two needs for an externally accessible broker:
- the CentOS folks, who are outside of our infrastructure, would like
to send messages
- people from the community would like to subscribe to messages and do
things based on them
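As a sketch of that second use case, community subscription with the fedora-messaging Python API essentially boils down to writing a callback; the topic prefix below is just an example, and a real consumer would point the library at the broker via its TOML configuration.

```python
# Minimal sketch of a community consumer with fedora-messaging.
# The topic filter is an example, not a required convention.

def on_message(message):
    """Callback invoked once per received message."""
    if message.topic.startswith("org.fedoraproject.prod.bodhi."):
        print("Bodhi activity:", message.topic)

# With a reachable broker, this would be hooked up as:
#   from fedora_messaging import api
#   api.consume(on_message)
```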
We have several options to make that happen.
1. Use our existing cluster and expose it to the world
The advantage is we don't maintain another cluster, but the downside
is in the case of a DoS attack we're directly affected. With RabbitMQ
3.7 there are some limits you can set on vhosts (max connections
and max queues), but we're not yet on 3.7.
2. Use a separate cluster and copy messages over
We could deploy a separate cluster that would get a copy of all
messages, and would be more limited in resources. It truly isolates
infrastructure, so it's better protected against DoS, but it's more
work for sysadmins.
In both cases, there are several paths we can take with regard to authentication.
A: make a single read-only account for everybody in the community to
use, and a few read-write accounts (with X509 certs) for people who
need to publish, i.e. CentOS CI. If we choose a separate broker we can
copy those messages back to the main cluster.
The issue here is that everybody in the community will be using the
same account, so it's harder to shut down bad actors. It would also be
theoretically possible for someone to consume from somebody else's
queue (unless people make sure they use UUIDs in their queue names; I
think we can enforce that, but it may have side effects).
However, it enables the same kind of usage that fedmsg provided before.
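To illustrate the queue-name point above, a well-behaved consumer on a shared account can simply generate a unique queue name; a minimal sketch:

```python
import uuid

def unique_queue_name(prefix="community"):
    """Return a queue name that will not collide with (or drain
    messages from) other consumers sharing the same account."""
    return "{}-{}".format(prefix, uuid.uuid4())
```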
B: require authentication with username & password, but make it easy to
get accounts. People could request accounts via tickets, for example.
This would make it much harder to abuse the service, and we could easily
shut down bad actors. However, it's an obviously heavier load on the
people who will handle the tickets and create the accounts.
My personal preference would be option 2A, so an external broker with
an anonymous read-only account, but all combinations of options
inflict different loads on the sysadmins (on deployment and in the
longer term), so I think it's really up to them.
What do you guys think?
As you know, we currently have an ancient RHOSP5 cloud. After a bunch of
work last year, we got a RHOSP13 cloud up and mostly working, but it was
a ton of work. After hearing from the Fedora Council and our various
management chains, we determined that it wouldn't really be a good use of
our time moving forward to keep maintaining an OpenStack cloud.
We have not yet determined what we want to do with the hardware that we
had allocated to this, but we are weighing our options. We may want to
set up OpenShift on bare-metal nodes so we can do kubevirt, or we may
want to just set up a normal virthost setup managed by ansible.
For the items currently in our cloud, we will be looking at options for
them; we are definitely not shutting things off until we have plans in
place.
Happy to answer any questions, and we will make sure everything is
properly communicated.
I want to retire the sse2fedmsg application. It is currently
deployed only on staging OpenShift as librariesio2fedmsg, and it looks
like the only application listening to it is Anitya.
Because I have implemented an SSE consumer directly in Anitya, and it's
really easy to do, I want to retire sse2fedmsg.
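To show how little an SSE consumer involves, here is a sketch of the parsing side (not Anitya's actual code): events are separated by blank lines, with payloads carried on "data:" lines.

```python
def iter_sse_events(lines):
    """Yield the payload of each Server-Sent Event from an iterable of
    text lines (e.g. a streaming HTTP response)."""
    data = []
    for line in lines:
        if line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif not line.strip() and data:
            # A blank line terminates the current event.
            yield "\n".join(data)
            data = []
    if data:
        yield "\n".join(data)

# With the requests library, the line stream could come from:
#   resp = requests.get(url, stream=True)
#   for event in iter_sse_events(resp.iter_lines(decode_unicode=True)):
#       ...
```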
But before I do this, I want to ask if anybody here is actually using it
or plans to use it for anything?
> [distgit/pagure] Drop --autoreload from our systemd service file
I noticed a few recent commits using this bracket prefix
style and I wanted to give a heads-up about how this can
cause problems, in case it's not well-known.
(If I've mentioned it before, please forgive my memory.)
Using brackets to prefix the commit message will cause grief
for any patches which are applied via 'git am', as the
contents within the brackets will be stripped.
Even after the ansible repo is available via pagure, there
are likely still times when someone will attach a patch to a
bug or send it with git send-email.
If this [prefix/area] convention is more widely adopted in
the ansible repo, it's worth keeping in mind the potential
for surprise (and subtle loss of commit message details) it
will likely cause down the road.
The prefix stripping can be avoided by passing the
'--keep-non-patch' option to git am. However, there's no
config variable to allow that to be easily set per repo, so
it'd be very easy to forget.
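One possible mitigation (my suggestion, nothing official) is a personal alias, since aliases, unlike per-repo git-am defaults, do live in config:

```ini
# ~/.gitconfig -- bake the flag into an alias so it isn't forgotten
[alias]
    amk = am --keep-non-patch
```

after which 'git amk patch.mbox' applies mailed patches with bracketed prefixes preserved.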
This has been discussed on the git list in the past. One of
the more recent discussions was from 2015:
I had a recent bug report filed about this as well:
(as evidence that it surprises others still).
Hope this is helpful,