At some point, it would be good to see fedrtc.org migrate to Fedora
infrastructure and use the fedoraproject.org domain.
I'd be happy to submit the full request for resources, but I first want
to see if there is any initial comment on it. Here is a list of what is
involved:
- it uses a PostgreSQL database schema
- it requires some DNS entries (SRV and NAPTR)
- it needs a TLS cert for fedoraproject.org on the host(s) where it runs
- it has static HTTP content and PHP, currently hosted on a RHEL7 httpd
  with all but one problem resolved. Content is in GitHub; it could
  be packaged as an RPM if necessary.
- all packages are in EPEL7, except:
cajun-json in EPEL6, in testing for EPEL7
resiprocate in Fedora, builds from SRPM on RHEL7
- the SIP proxy is a single daemon, managed by systemctl. All settings
in a single file, /etc/repro/repro.config
- the TURN server process is also a single daemon, managed by systemctl.
All settings in a single file, /etc/reTurn/reTurnServer.config
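For illustration only, the SRV and NAPTR entries mentioned above could look like the following, following the usual RFC 3263 pattern for SIP over TLS and the `_turn._udp` convention for TURN. The host names, priorities, and weights here are placeholders, not the actual fedrtc.org zone:

```
; hypothetical example records, not the real zone data
fedrtc.org.                IN NAPTR 10 50 "s" "SIPS+D2T" "" _sips._tcp.fedrtc.org.
_sips._tcp.fedrtc.org.     IN SRV   0 1 5061 sip01.fedrtc.org.
_turn._udp.fedrtc.org.     IN SRV   0 1 3478 turn01.fedrtc.org.
```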
Just to clarify the scope of this: it is not a full telephony service
like Asterisk, just a SIP proxy and TURN server. There is no persistent
state information (as there would be for voicemail, email service, etc)
and no customized routing.
Ongoing maintenance requirements:
- TLS certificate renewals
- monitoring the ports
- package updates from time to time
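As a rough sketch of the "monitoring the ports" item, a minimal check could just attempt a TCP connection to each listener. The host name and port numbers below are assumptions based on the usual SIP-over-TLS and TURN defaults, not the actual deployment:

```python
import socket

# Assumed service endpoints; 5061 (SIPS) and 3478 (TURN) are the
# conventional defaults, not confirmed values for this deployment.
SERVICES = {
    "SIP over TLS": ("fedrtc.org", 5061),
    "TURN": ("fedrtc.org", 3478),
}

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage:
# for name, (host, port) in SERVICES.items():
#     print(name, "up" if port_open(host, port) else "DOWN")
```

A real monitoring setup would of course go through Nagios or similar rather than an ad-hoc script; this only shows how small the check itself is.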
It currently runs on a lab machine; I'd be happy to arrange SSH access
for the Fedora Infrastructure team to see exactly what is involved and
verify that it is manageable.
I've been working on a new RHEL7/Ansible version of our current
fedorapeople.org. This is complicated somewhat by this being one of our
oldest services and one of the first ones added to Puppet so many years
ago.
I'm heading out for vacation Saturday and will be gone until the
following Saturday, so I don't want to do any migration in a hurry right
now, but it's going to be on my radar right when I get back.
Toward that end, I have the new instance up and available:
people01.fedoraproject.org - 18.104.22.168
If you all get a chance, please do try to log in, then reply to this
thread with any issues you run into: things you cannot do that you
expect to, or that you can do on the 'production' version but not here.
I'll take a look at these items after I get back and fix them up before
we migrate.
Note that very likely I will re-install and re-sync content before we
move this live.
IRC: pcreech, pcreech|work
For the past five years I've worked primarily as a C#/.NET developer.
In those roles, I've also had the pleasure of taking on systems
administration duties, from managing single-server Linux setups to
creating a highly available IIS web application infrastructure. Most
recently I also built a RAID 10 iSCSI server with CentOS 7.
I have a BS in Computer Science, and am interested in server
orchestration, automation, and high availability.
I have code contributions to the Pulp project and, after a recent
break, am working on items for them again. I've also been a
team member of Funtoo Linux for the past few years.
I'm currently trying to absorb the information you guys have out there
on the infrastructure, so I don't yet have a specific issue to help out
on. But I'd enjoy helping out with anything in the previously mentioned
arenas, whether code or systems administration.
Looking forward to working with you guys!
Yesterday and today I spent a little time going over the UMDL script of
MirrorManager2. Going through it, I ended up with a few questions.
* Repository name
UMDL's code clearly says:
# historically, Repository.name was a longer string with
# product and category deliniations. But we were getting
# unique constraint conflicts once we started introducing
# repositories under repositories. And .name isn't used for
# anything meaningful. So simply have it match dir.name,
# which can't conflict.
And quickly grepping through MM2's sources, I could not find a reference to
this; we always rely on the repository's prefix, not its name.
Question: should we drop this field?
It makes things confusing and is basically noise since we do not use it anywhere.
* Readable status of directories
The Directory table has a 'readable' property, and none of our directories
is marked as unreadable.
Question: what is the use-case for this boolean?
* Changes while running
Looking at the code, the UMDL seems to be very careful to handle changes on the
FS while it is running.
One hope I have is to speed up the UMDL run time, but I'm curious.
Question: does anyone know if the FS actually changes often while the UMDL
is running?
Gaining speed of course does not mean being reckless, but I'm curious how
often this situation occurs. IIRC, we trigger the UMDL via fedmsg now, right?
So in theory, the FS shouldn't change too much under the UMDL's feet.
* The directory table
So looking at the database, and more precisely the directory table in that
database, it seems we store all the directories of the tree, not just the
repositories.
This makes me wonder a little: what is the interest of keeping the whole
list of directories in the DB?
After all, as far as I understand, the UMDL finds the repos in the tree (a
repo being identified by the presence of a 'repodata' folder containing the
repomd.xml, or by the presence of a 'summary' file and an 'objects' folder).
For these repos, we look for the most recent files, store this info in the DB,
and later use it to check if the mirrors are up to date.
But do we need to check that ``pub/fedora/linux`` exists when we later check
that ``pub/fedora/linux/updates/testing/21/x86_64/`` exists and is up to date?
I am currently under the impression that dropping unnecessary directories would
save DB space (the directories being linked in the host_category_dir table,
which lists, for each host and each category, which dirs are present) as well
as crawling time (both in the UMDL and in the crawler).
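To make the point concrete, here is a minimal sketch of the detection rule
described above (function names are mine, not UMDL's): only directories that
are themselves repositories need to be recorded, and the walk never needs to
descend below one.

```python
import os

def is_repository(path):
    """A directory is a repo if it has repodata/repomd.xml (yum/dnf
    metadata) or both a 'summary' file and an 'objects' dir (ostree)."""
    if os.path.isfile(os.path.join(path, "repodata", "repomd.xml")):
        return True
    return (os.path.isfile(os.path.join(path, "summary"))
            and os.path.isdir(os.path.join(path, "objects")))

def find_repositories(root):
    """Yield only repository directories, skipping their ancestors
    and never descending below a repository."""
    for dirpath, dirnames, filenames in os.walk(root):
        if is_repository(dirpath):
            dirnames[:] = []  # prune: nothing below a repo matters
            yield dirpath
```

Under this scheme, only the ``.../21/x86_64/`` leaf would be stored, not every intermediate directory on the way down.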
* Non-directory-based support in UMDL
So the UMDL script currently supports three ways of crawling the tree: an
`rsync` mode, a file mode, and a directory mode.
We, in Fedora, are only using the last one. I believe the `rsync` mode was
added to support Ubuntu, and the file mode is basically a simplified version
of the directory mode that we do not use at the moment.
I would like to propose that we drop support for rsync. I feel that it may be
simpler and easier to create a UMDL and a crawler for each distro that would
like to use MirrorManager than to maintain a one-script-fits-all UMDL that is
in fact tested for only one of the scenarios.
That being said, if we ever have interest from Ubuntu, CentOS, or any other
communities, we should definitely look into making the UMDL and crawler as
re-usable as possible for them, while keeping the distro-specific bits
separated.
Looking forward to hearing your thoughts on these points and questions,
As briefly discussed today during the meeting, I would like to evaluate
the possibility of using our packaged Jenkins in Fedora infrastructure,
instead of upstream binaries. (Jenkins is available in Fedora 21 and
later.)
To get started with development of the new Jenkins machines I would need
someone to create 2 new cloud machines: a master with Fedora 22 and a slave
with any OS (RHEL 6 would be my choice), each machine with at least 1 CPU, 2
GB RAM, a public IP, and root access for me (FAS: mizdebsk).
From that I will try to come up with my proof of concept of "the new
Jenkins". If people like it, the old data can be migrated and it can
become the new production instance. If not, we can just scratch these
machines and keep using upstream binaries.
So if you don't mind I'd start working on this.
Should I open a ticket for creating the new cloud instances?
Some more technical notes:
1) It is possible to use third-party plugins with packaged Jenkins,
which means that missing plugins can be installed as binary blobs until
they are packaged in Fedora.
2) Jenkins RPMs need to be installed only on the master node. All slave
nodes can connect to the master and download the Jenkins code from there.
Slaves still need a basic environment installed (such as Java, git, mock),
but not Jenkins itself. This means that only the master node must be Fedora
21+; slaves can be anything (RHEL 6/7, older Fedoras).
3) Michal Srb is currently looking into packaging Jenkins as a software
collection for RHEL 6 and 7. Once (and if) that is done, it could allow
having a RHEL 7 master, with the disadvantage of using "unofficial" RPMs from
softwarecollections.org. This is just the beginning and we can evaluate
having a non-Fedora master later, if needed.
Software Engineer, Red Hat
As part of the on-going Council updates, I'm organizing a Fedora
Engineering update. This includes the various WGs, rel-eng,
infrastructure, and a few others. The idea behind this is to give a
brief update of the work your group is doing towards the F23 and F24
releases. Think of this as a 5-10 minute "lightning talk" of the
highlights you want to see in those releases.
The meeting is July 7th. It would be excellent if you could have a
volunteer present the update for your group and stay around to answer
any questions. Worst case, please gather the information and send it
to me and I can do the overview, but representation from the group is
preferred.
If you have questions, please let me know. The idea and format are
somewhat new, so we'll work through this as best we can.
I am Ardian Haxha, a student from Kosovo. I have been part of Fedora for
five years as a Fedora ambassador, mainly organizing events and teaching my
fellow countrymen about GNU/Linux.
Apart from being a people person, I really like coding and system
administration, so I thought that maybe I could also help on the more
technical parts of Fedora. My main skills are Ruby and Python, a little more
Ruby/Rails. I am more than open to learning new technologies, and I see
Fedora as the place where we can build new things together. I am also open
to any new duties. As a start I am thinking that I can help on the web
applications side, since that's also my field of study. I am looking forward
to meeting you in today's IRC meeting; my IRC handle is ardian.
I'm glad to announce that as of yesterday the Koschei production instance
has been moved to Fedora infrastructure and can now be considered an
officially supported Fedora service.
Koschei is a continuous integration service for Fedora packages.
Koschei is aimed at helping Fedora developers by detecting problems as
soon as they appear in rawhide: it tries to detect packages failing to
build from source (FTBFS) in rawhide by scratch-building them in Koji.
More information can be found at the Fedora Wiki.
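As a sketch of what "scratch-building in Koji" amounts to, the equivalent manual step with the koji CLI looks roughly like this. Koschei itself drives Koji through its API rather than the CLI, and the target and SRPM names below are made up for illustration:

```python
import subprocess

def scratch_build_command(target, srpm):
    """Assemble a koji CLI invocation for a scratch build.

    'target' and 'srpm' are illustrative values; this mirrors the idea
    of Koschei's builds, not its actual code.
    """
    return ["koji", "build", "--scratch", target, srpm]

def submit_scratch_build(target, srpm):
    # Without --nowait, koji waits for the build; a non-zero exit here
    # is the kind of failure Koschei would report as FTBFS.
    subprocess.check_call(scratch_build_command(target, srpm))

# Example (hypothetical SRPM name):
# submit_scratch_build("rawhide", "example-1.0-1.fc23.src.rpm")
```

A scratch build exercises the full build in Koji without producing official packages, which is exactly what a rebuild-health check needs.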
Interested parties can be automatically notified when Koschei detects a
change in a package's FTBFS status. To subscribe to email or IRC
notifications, you can follow the instructions at .
At the time of writing, Koschei monitors about 20% of all Fedora
packages, but anyone with a FAS account can add packages they are
interested in. See  for details on how to add packages to Koschei.
I would like to thank everybody who helped make Koschei at Fedora
infrastructure possible, especially Kevin Fenzi, who sponsored Koschei's
request for resources and assisted us with the migration.
I just thought I would give everyone a heads up that I will
be heading out on vacation soon.
From 2015-06-27 to 2015-07-04 I will be off in the mountains
and taking time off, staying with all my brothers and sisters
and their families near Breckenridge, CO.
I should have both cell coverage and network access, but
will strive to not use them, so don't expect me to answer
questions on irc or reply to emails until I am back.
For issues with Fedora Infrastructure, please contact
Stephen John Smoogen <smooge(a)gmail.com>
Patrick Uiterwijk <puiterwijk(a)redhat.com>