We are now in the infrastructure freeze leading up to the Fedora 31
Beta release. This is a pre-release freeze.
We do this to ensure that our infrastructure is stable and ready to
release the Fedora 31 Beta when it's available.
You can see a list of hosts that do not freeze by checking out the
ansible repo and running the freezelist script:
ansible/scripts/freezelist -i inventory
Any host listed as 'freezes' is frozen until 2019-09-17 (or later if the
release slips or uses the secondary target). Frozen hosts should have no
changes made to them without a sign-off on the change from at least 2
sysadmin-main or rel-eng members, along with (in most cases) a patch of
the exact change to be made to this list.
The Fedora Infrastructure is planning to retire the infinote 
service. This service allows text collaboration using the Gobby
client and was mainly used by the Infrastructure team to coordinate
our weekly meeting and the mass update & reboot of the machines.
The service will be taken offline on August 30th 2019. If you wish to
back up some of the documents currently hosted, you should make sure
that you have downloaded them locally before that date.
The Infrastructure team will most likely use the service provided by
hackmd.io for their needs. Other alternatives like a public etherpad
can also be used to replace this service.
 - https://infinote.fedoraproject.org/infinote/
 - https://fedoraproject.org/wiki/Gobby
 - https://opensource.com/article/19/7/enable-collaboration-hackmd
I wrote to devel some time ago regarding the deprecation of the apps.fp.o
index and plan to move its content to the main docs. Kevin mentioned that it
could end up in the infrastructure docs and that the whole thing should be moved to
docs.fp.o at some point. I will take a look at both since I have wanted to play
with the new documentation pipeline for a while. I am not the best person to
meddle with the infrastructure docs, but I might as well do something useful
while playing with Antora. Tell me if that's not the case or if I missed something.
I might have something to show you at Flock if I have trouble sleeping.
See you in Budapest,
I'm Cristian, 27, and I've been working with Linux for the last 5 years or
so. For the last few days I've been looking for the right group to join so
I can contribute to the project, even if it's just a little bit, and I
think I finally found it!
In these 5 years I've worked with load balancers (haproxy), web servers
(apache), NFS shares, FTP, DNS and a bunch of other technologies, also
some automation with Ansible and monitoring with Zabbix/bit of Nagios. I
have to admit I'm not good at all when it comes to scripting, although I
can read code more or less. When it comes to certifications I recently
passed my RHCSA and find myself studying for RHCE at the moment :)
Anyways... I'm pretty sure my knowledge is not as big/broad as many of
yours, but I'd love to help with any tickets/issues if they are related
to any of the things I have experience with. I'm also keen on learning
from anyone and all down for sharing knowledge.
- My IRC username is: *cdt_*
- What skills do you have to offer?
*Haproxy, FTP, DNS, web servers, managing users, permissions, ACL, a bit
of Ansible, networking in Linux, etc. Basically everything that is
covered in the RH Certs + automation and loadbalancers.*
- Certs: *RHCSA*
- What I'd like to learn?
*I'm okay with learning anything from others. I'm interested in learning
some SQL and improving my scripting skills, among others :)*
I know it'll take some time for me to get used to everything and learn
how everything is connected until I can actually do some work/help, but
I'm willing to wait and also wouldn't mind introducing myself during
I have a question though, what's the primary IRC channel you guys use?
Is it #fedora-admin? Believe it or not this is my first time using IRC
haha, so I'd like to know in case I need to reach out to someone. Well,
that's pretty much it for me. I hope to be able to join you soon, and I
hope you're all enjoying your weekend!
The curl folks think they might have finally tracked down the http/2
issues we were hitting with composes.
So, I'd like to:
Apply this patch to enable h2 on kojipkgs again:
diff --git a/playbooks/include/proxies-websites.yml b/playbooks/include/proxies-websites.yml
index d66e829..47289ab 100644
@@ -585,7 +585,7 @@
- use_h2: false
+ use_h2: true
- role: httpd/website
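For context (my reading, not something stated in the patch): the use_h2 flag in the httpd/website role presumably ends up toggling Apache's mod_http2 protocol negotiation line in the generated vhost, roughly:

```apache
# Hedged illustration: with use_h2 true the vhost would advertise h2
# alongside HTTP/1.1 (mod_http2 must be loaded for this to take effect)
Protocols h2 http/1.1
```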
Then, run the proxies playbook to enable it.
Then, do a rawhide compose.
If it fails with any h2 or cannot download errors, revert the change and
let the curl folks know.
If it doesn't, leave it enabled.
I'd likely do this testing over the weekend when no other composes are running.
Can I get some +1s for this plan?
I would like to propose to upgrade the koji plugin for fedora-messaging so it
integrates these changes: https://pagure.io/koji-fedmsg-plugin/pull-request/5
which hopefully will fix https://pagure.io/fedora-infrastructure/issue/8158
Worst case they don't and we can revert and we're back to square one.
Best case it works :)
We tried testing this in staging, but the way koji and bodhi are currently
configured there is making it quite difficult (koji is still on rawhide=f31
while bodhi has already been synced from prod, and koji isn't configured to
let robosignatory sign builds in the f30 tags...).
A bit before the freeze started, we upgraded pagure to 5.7.4 on src.fp.o but
forgot to do pagure.io.
Since then there have been 4 small bug-fix releases (most of which are related
to src.fp.o); the changelog is at: https://docs.pagure.org/pagure/changelog.html
I'd like to upgrade src.fp.o from 5.7.4 to 5.7.8 which will also make Tomas
Tomecek's life easier on packit and upgrade pagure.io to the latest version.
Downtime for src.fp.o is as small as an apache restart; on pagure.io there is
a database migration to apply, but it should only take a few minutes.
I would like to apply the following patch to solve
We have started to quite often have builds failing in OpenShift because the
nodes are running out of disk space. This patch creates a cron job that runs
every week (on Monday) and deletes docker "dangling" images.
A dangling image, for docker, is an image that is not used or has not been
used by a container.
diff --git a/playbooks/groups/os-cluster.yml b/playbooks/groups/os-cluster.yml
index 4b56286dc..52a4e2635 100644
@@ -248,3 +248,18 @@
     - name: Enable wildcard routes
       command: oc -n default set env dc/router
+
+- name: Add a cleanup cron job to the nodes
+  hosts: os_nodes_stg:os_nodes
+  tags:
+    - os-node-cleanup
+  tasks:
+    - name: Ensure a job that runs every Monday to clean old docker images from the nodes
+      cron:
+        name: "remove docker dangling images"
+        weekday: "1"  # Monday
+        minute: "0"
+        hour: "0"
+        job: "docker rmi $(docker images --filter dangling=true -q)"
+        state: present
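For reference, the cron task in this patch should render roughly this entry in the node's crontab (a hedged approximation; Ansible's cron module marks its entries with an "#Ansible:" comment matching the task's name):

```crontab
#Ansible: remove docker dangling images
0 0 * * 1 docker rmi $(docker images --filter dangling=true -q)
```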
Just a heads up that I am going to do a prod->staging fas database sync
here soon (later this morning).
This means that your groups/password in prod will all be your
groups/password in staging with one exception: we can't sync the ipa
database, so if you never had a stg account or you had a different
password in staging you will need to go into the staging fas web
interface and change your password to sync with ipa. Note that you need
to do the normal password change, NOT the 'forgot my password' option.
Sorry for any troubles and if you have any questions, let me know.