A few things I'd like to change starting tomorrow (post-freeze).
* change the MM update-master-directory-list cronjob to start at 0 and
  30 past the hour, from its current schedule of trying to start every
  15 minutes. It takes about 20 minutes on average to run, so it really
  only runs twice an hour anyway.
* bump back the MM update-mirrorlist cronjob to start at :40 past the
hour. It takes about 20 minutes to complete, and I would like the
new content to land at the top of the hour.
* in modules/rsync/files/rsyncd.conf.secondary1, exclude alt/stage.
Mirrors shouldn't be able to sync this content.
* in MM prod.cfg, exclude pub/alt/stage. Mirrors shouldn't have
this content, and it's extra directory walks we don't need.
* increase the number of crawlers, from 45 to 75. A full run is
  taking about 3 hours now; I'd like to bring this down to under 2.
  This only affects bapp1, whose load average is still under 1 and
  which seems to have plenty of free RAM and CPU.
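For reference, the schedule and exclude changes above would look roughly like this; the job paths, user, and rsync module name are placeholders, not the actual values on the MM hosts:

```
# crontab sketch (hypothetical paths/user):
# update-master-directory-list: twice an hour instead of every 15 minutes
0,30 * * * *  mirrormanager  /usr/bin/mm-update-master-directory-list
# update-mirrorlist: start at :40 so new content lands by the top of the hour
40   * * * *  mirrormanager  /usr/bin/mm-update-mirrorlist

# modules/rsync/files/rsyncd.conf.secondary1 sketch
# (module name hypothetical): keep alt/stage off the mirrors
[fedora-buffet]
    exclude = alt/stage
```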
Objections or comments?
Linux Technology Strategist, Dell Office of the CTO
linux.dell.com & www.dell.com/linux
I found that our download statistics have not been properly capturing
requests for Live ISO images from our download links on the
fp.o/get-fedora page. I'll be updating the numbers on the wiki's
[[Statistics]] page in the next day or two.
I did two counts, one for DVD and Live without uniq'ing the IP
addresses doing the retrievals (because there could be multiple
downloads from people behind firewalls), and one with. In both cases
I think the numbers are very significantly higher than our current
stats show. The raw numbers per day since F10 release are attached.
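The two counts can be reproduced straightforwardly from the httpd access logs. A minimal sketch, assuming combined-format log lines with the client IP in the first field; the filename pattern is an assumption and would need adjusting to the real F10 image names:

```python
import re

# GET requests for F10 Live ISO images; the exact filename pattern is
# an assumption -- adjust it to match the real image names.
LIVE_ISO = re.compile(r'GET /\S*Fedora-10\S*Live\S*\.iso')

def count_downloads(log_lines):
    """Return (total requests, unique client IPs) for Live ISO hits.

    The first count keeps duplicate IPs (multiple downloaders behind
    one firewall); the second uniq's them.
    """
    ips = [line.split()[0] for line in log_lines if LIVE_ISO.search(line)]
    return len(ips), len(set(ips))
```

The un-uniq'd total over-counts retries but catches NAT'd users; the uniq'd total does the opposite, which is why both numbers are worth publishing.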
Paul W. Frields http://paul.frields.org/
gpg fingerprint: 3DA6 A0AC 6D58 FEC4 0233 5906 ACDB C937 BD11 3717
http://redhat.com/ - - - - http://pfrields.fedorapeople.org/
irc.freenode.net: stickster @ #fedora-docs, #fedora-devel, #fredlug
So doing a little looking around, I came across some options that look
interesting. The following options would mean you need to physically have
something to log in.
It would require a PAM module, and for us to set up a server for managing keys.
It looks to be fairly low cost. It would implement a 2-factor setup:
it moves the public key from your hard drive to something you physically need.
A YubiKey is at most USD$25, where the eToken is probably at least USD$30. I would
think that with YubiKey we could work out a deal with them to get a discount
in return for us being a case study/prominent user of their product. All of
the software for the YubiKey, AFAICT, is open source. Some of it would require
As stated by Jonathan Dieter in the bug below, deltarpms are mucking
up rawhide updates right now because the drpms were created before the
packages were signed, and the signed versions don't match the deltarpm
reconstructed versions. For me at least, this is causing a problem
because I'm not using a mirrorlist right now (too many problems with
metalink mismatches). So when yum fails to accept the drpm-patched
package, the yum update just fails outright because there are no more
mirrors to get the full updated package from.
Is there anything that can be done on the infrastructure side as
Comment #2 from Jonathan Dieter (jdieter(a)gmail.com) 2009-04-24 11:18:36 EDT:
This is not a deltarpm bug or a yum-presto bug, but rather an
Infrastructure bug. The deltarpm was created before the target rpm
was gpg signed. So it does indeed build to a valid rpm with exactly
the same data as the downloaded rpm, but without the signature.
Because it's not exactly the same file, yum refuses to use it and
redownloads the full (signed) rpm (which is what it should do).
The infrastructure should either delete and regenerate drpms after the
rpm signatures have changed or they should use the code fragment from
https://fedorahosted.org/koji/ticket/38#comment:3 to attach rpm
signatures to deltarpms.
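The mismatch Jonathan describes is purely byte-level: the rebuilt rpm carries identical payload data but lacks the signature header, so its checksum no longer matches the one in the repodata. A minimal illustration of the check yum effectively performs, with stand-in byte strings in place of the two rpm files:

```python
import hashlib

def sha256(data: bytes) -> str:
    """Hex digest of a byte string (standing in for hashing an rpm file)."""
    return hashlib.sha256(data).hexdigest()

# Stand-ins: same payload, but the signed copy carries an extra header.
payload = b"rpm payload bytes"
rebuilt_rpm = payload                           # reconstructed from the deltarpm
signed_rpm = b"gpg-signature-header" + payload  # what the mirror actually serves

# yum compares the repodata checksum against the rebuilt file; a mismatch
# means the drpm result is discarded and the full rpm is downloaded instead.
assert sha256(rebuilt_rpm) != sha256(signed_rpm)
```

This is why regenerating the drpms after signing (or grafting the signature header onto them, per the koji ticket) makes the problem disappear: the rebuilt bytes then hash identically to the download.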
Not sure how to reassign to Infrastructure.
Can I get 2 +1's to reboot app3 (which is currently frozen)?
I'd like to power down xen13 and power it back up now that the BMC has
been flashed. There shouldn't be any impact to the users except that
/transifex/ will go down during the reboot, which shouldn't be a problem
as I believe most people are using /tx/ now anyway.
2 +1's ?
19:59 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Meeting who's here?
19:59 * ricky (but will probably leave in the middle)
19:59 < ggruener> pong
20:00 * SmootherFrOgZ is
20:00 < mmcgrath> ok, lets get started
20:00 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Cloud Stuff
20:00 * nirik is the seats in the back.
20:00 -!- maploin [n=mapleoin@fedora/maploin] has joined #fedora-meeting
20:00 < mmcgrath> So the cloud stuff is coming along. Not much to report on an ETA for when we'll be giving out guests yet.
20:01 < mmcgrath> We've found (and filed) bugs that will probably need to be taken care of.
20:01 < mmcgrath> Generally though it's moving along
20:01 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Xen13
20:01 -!- dkovalsk [n=dkovalsk(a)ip-89-103-122-242.karneval.cz] has quit [Client Quit]
20:01 < mmcgrath> so xen13 still hasn't been fixed and IBM has been slow getting back to me about whether or not this is the type of thing our on site warranty will cover.
20:01 * johe around to (just to tell)
20:02 < mmcgrath> johe: hey
20:02 * abadger1999 here
20:02 * mmcgrath makes note to get back to them today, haven't heard back
20:02 < mmcgrath> We're change frozen for the release.
20:03 < abadger1999> was the fan on xen13 a red herring?
20:03 < mmcgrath> abadger1999: well it was fan 7 (or something) which the server didn't have
20:03 < mmcgrath> The techs wanted me to power it down, disconnect the cmos battery and wait 5 minutes
20:03 < mmcgrath> but I'm not there
20:03 < mmcgrath> and no one will be on site.
20:03 -!- knurd is now known as knurd_afk
20:03 < mmcgrath> they think something buggy is happening in the cmos.
20:03 < mmcgrath> quite odd
20:04 < mmcgrath> But the preview release is scheduled for the 28th currently
20:04 < mmcgrath> AFAIK it's on schedule
20:04 < mmcgrath> https://fedorahosted.org/fedora-infrastructure/report/9
20:04 < mmcgrath> looks like everything is assigned and ready
20:04 < mmcgrath> Anyone have any questions or comments on the open preview tickets?
20:06 < mmcgrath> k
20:06 < mmcgrath> So that's really all I had to discuss for this meeting
20:06 < mmcgrath> sorry it's short but I've got a couple of things going on at once
20:06 < mmcgrath> just the same
20:06 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Open Floor
20:06 < mmcgrath> does anyone have anything to discuss?
20:07 < mmcgrath> If not I'll close the meeting in 30 :)
20:08 < johe> that was fast
20:08 < f13> I have.... nothing.
20:08 < mmcgrath> johe: yeah this was a quick one.
20:08 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Meeting Closed