On Mon, 2009-12-14 at 08:03 -0500, Kamil Paral wrote:
Hello,
I'm happy to announce that a kparal/rpmguard-integration branch is now
available in git, containing my latest effort to bring rpmguard to
autoqa. I would be glad for any feedback you can provide. If you want
to see it in action easily, you can run e.g. this command:
$ autoqa post-koji-build --name ctdb --kojitag dist-f12-updates-candidate \
    --arch x86_64 --arch i686 ctdb-1.0.108-1.fc12 -t rpmguard
(append --local if you have only autotest-client installed)
That should keep working until the updates-candidate tag is removed from ctdb.
Example output is here:
http://pastebin.com/m73c45d08
I have some concerns about the current implementation, about what
should be improved, and about when the test could fail, but I'll cover
those in further emails.
What about the output format, Will? Should something be changed?
Nice work, Kamil, this looks great. I don't mind the output format, to
be honest, but I think the discussion you and Will had during the
recent QA meeting [1] about collapsing results from different
architectures makes sense. We probably want to err on the side of
displaying less information for developer review, rather than scaring
developers away with tons of repeated information. Besides, we can
scare them later just by providing the results against their builds :)
I don't think this is specific to the rpmguard test, but it's a little
confusing for me (not knowing the internals) to see two different
scripts 'rpmguard' and 'rpmguard.py' in the test directory. Looking
further, I believe rpmguard.py provides the class that autotest calls
from the provided control file and is intended to be imported only. Is
that correct?
Some other thoughts ...
* I like how you're using the setup() method and checking for
  specific versions of required software. I imagine this might
  become a common thing; I wonder if in the future we could offer
  a common require_version() method in the autoqa base package
* In the initialize() method ... should we look into using mktemp
  (or similar) here instead of os.makedirs? Could multiple runs
  of the test pick up stale data?
* In run_once(), I like that you are displaying errors on stdout
  as well as in the log. I wonder if we could rely on the Python
  logging module (or a common autoqa logging subclass, to provide
  proper format and loglevels) to send results to multiple places?
  The trick, I think, will be writing this such that it can still
  run in stand-alone mode. Looks like Lucas and Michael are doing
  similar things with integrating the KVM tests into upstream
  autotest [2]. Is this something we could make use of as well?
[1] https://fedoraproject.org/wiki/QA/Meetings/20091214#rpmguard_integration
[2] http://patchwork.kernel.org/patch/40190/