I'm attaching my latest incarnation of the PackageDB schema. Here's the
rundown on what's changed:
* Implementation of Collection inheritance/overlays/sets
* Add a reviewURL field to Package
* Restructure the Package hierarchy slightly:
              Package
             /       \
    PackageListing   PackageVersion
The new structure allows a PackageVersion to exist in multiple
collections, which Jesse says is beneficial when spinning short-term
releases.
* PackageInterest revamped as a PackageACL set of tables. Instead of
having user roles (comaintainer, watcher, etc.) we have functions that
the user can perform. The functions I've come up with so far are:
- commit: Commit changes to the VCS
- build: Request builds from the buildsystem
- watchbugzilla: Be informed of bugzilla tickets
- watchcommits: Be informed of VCS commits for this package
- approveacls: Change these ACLs for this package
- checkout: Checkout the package from the VCS. We'll normally set
this to true for a group that includes everyone but certain embargoed
branches may need to be set to a more restricted group.
Owners will implicitly have access to all of these. Sponsors or the
equivalent group of trusted contributors under the merged FC/FE will
belong to a group with commit-build-approveacls-checkout permissions.
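A rough sketch of how these function-based ACLs might be checked in application code; the names and data structures below are placeholders for illustration, not the actual PackageDB schema:

```python
# Function-based package ACLs: instead of roles, each user holds a set
# of permitted actions per package. Everything here is illustrative.
ACL_FUNCTIONS = {"commit", "build", "watchbugzilla",
                 "watchcommits", "approveacls", "checkout"}

def can_perform(user, action, owner, acls):
    """Return True if `user` may perform `action` on a package.

    `acls` maps username -> set of granted ACL functions.
    """
    if action not in ACL_FUNCTIONS:
        raise ValueError("unknown ACL function: %s" % action)
    if user == owner:
        # Owners implicitly have access to all functions.
        return True
    return action in acls.get(user, set())

# A trusted-contributor (sponsor) group would get this combination:
SPONSOR_ACL = {"commit", "build", "approveacls", "checkout"}
```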
Things that still need work:
* Integrating comps/categories. I have some notes in comments. In
order to implement this we need to 1) decide if comps is the right thing
for this or if something closer to Nicolas Mailhot's suggestion is the
right way to go. 2) Give the PackageDB an understanding of binary
packages/subpackages as groups and categories won't be the same for each
subpackage. [Note unless someone wants to work on this, I'm deferring it
to after the first iteration.]
* Logging. Make sure we have all the tables we need for logging.
Verify that we really need to log on all these areas. Collections,
Packages, PackageListing(aka package in Collection), PackageVersion, and
PackageACL all have status fields so they should all have logs or we
need to evaluate whether we really want status for each of these.
* CollectionSets: Overlays can now be tracked in the database but the
logic for looking for packages has to be implemented. After talking
with Jesse, I think the simplest may be to implement this in application
code so that searching for a package will first check the collection,
then each of its bases. However, this imposes a performance penalty on
each select. It *may* be worthwhile to look into copying the packages
to the relevant overlay collections on insert instead. I have some
notes as comments in the schema.
* Set things up for SQLObject and SQLAlchemy. I'm going to try loading
the objects through database introspection and see how the two ORMs
compare.
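The fallback lookup described under CollectionSets (check the overlay collection first, then each of its bases) could look roughly like this in application code; the dictionaries here are stand-ins for what would be one SELECT each in practice:

```python
def find_package(name, collection, bases):
    """Search an overlay `collection` first, then each base in order.

    `collection` and each entry of `bases` are mappings of
    package name -> package record (stand-ins for real queries).
    Returns the first match, or None if the package is nowhere.
    """
    if name in collection:
        return collection[name]
    for base in bases:
        # Overlay misses fall back to the base collections, in order.
        if name in base:
            return base[name]
    return None
```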
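As a toy illustration of loading table structure by introspection (using the stdlib sqlite3 module here rather than SQLObject or SQLAlchemy, and a made-up table):

```python
import sqlite3

# An in-memory database with a toy table standing in for the real schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE package (id INTEGER PRIMARY KEY, name TEXT, reviewurl TEXT)"
)

# Introspect the table's columns from the database catalog. Both ORMs
# can build their mapped objects from this kind of information instead
# of hand-written model definitions.
columns = [row[1] for row in conn.execute("PRAGMA table_info(package)")]
```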
As you know, after the release of FC6 we had problems with the number of
connections to our servers. This made the wiki unavailable, as well as
other services.
With this concern in mind, the Fedora Infrastructure Team has been
working on a way to prevent this from happening again. Our first step is
the wiki migration.
We are now ready to test the future wiki platform, and this is why I'm
sending this email.
URL: http://webtest.fedora.redhat.com/wiki (MoinMoin 1.5.6)
Please track all problems at
http://fedoraproject.org/wiki/Infrastructure/WikiMigration and edit that
page as you see fit, so that we can fix any problems you encounter.
Feel free to contact me if necessary.
As many of you know, we've been looking to make our configuration
management system a bit more robust, primarily by trying to find a
technological solution to actually enforce our config management
policies.
One of the systems I've looked at is glump, provided by the Duke guys
and Seth. The system itself isn't *just* a configuration management
system. It's really a systems framework that is very modular in
nature. It's a bit rough around the edges right now, but in true Fedora
spirit I'd like to suggest we adopt this technology and make it
better. It'll work for us out of the box, and with Duke as upstream
we're not alone in using it.
I've got one working sample that just copies a file to your /tmp/
directory. An interesting item to note: once /tmp/test1 is created,
if you alter it and re-run the script, a backup noting the date and
time is created. This is especially handy in our environment, where
not everyone always follows the rules. Consider it a safe and gentle
reminder.
Be warned, there is a slight learning curve. The actual 'config
management' stuff is done in a script here called 'head'; glump itself
really just glues a bunch of files together into this one script.
Once you start poking around at it you'll see what I mean. But think
of the files listed in glue.xml as groups of config files. For
example, we could have a phx file and an app server file for app
servers in the phx colo. You get the idea. Check out the source if
you're curious:
http://mmcgrath.net/~mmcgrath/glump-example.tar.gz (The actual glump
source and configuration)
http://mmcgrath.net/~mmcgrath/configfiles.tar.gz (sample configs)
You can run the script by typing:
wget -qO - http://mmcgrath.net/cgi-bin/glump.py | sh
Don't take my word that it won't fark your system up; take a look for
yourself at what it's running! It should just create two log files in
/tmp and a file called /tmp/test.
We would use this in addition to our current CVS system though we
should probably give all the servers a good once-over and re-sync the
configs for those servers that are out of sync.
Seth, please correct or make more clear anything that I've munged up.
What do you all think?