I really can't tell whether this is a particularly bad pain point just
for me, since I work remotely and my internet connection isn't great
(frequent SSL timeouts against koji, package downloads take forever),
but I was wondering how you all go about testing AutoQA.
For me, it really depends on what I'm trying to poke at. Most of the
time, if I'm doing general testing, I'll run 'watch-koji-builds.py
--verbose'. I keep track of the events that get fired so that I can
re-run one by hand later ('autoqa post-bodhi-update-batch --targettag
dist-f13-updates --arch x86_64 --arch i386 oxygen-gtk-1.0.4-1.fc13',
for example).
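Concretely, replaying a recorded event boils down to something like
the throwaway sketch below (the wrapper script is purely illustrative,
not anything that exists in the tree; the command is the one from my
example above):

    #!/usr/bin/python
    # replay-autoqa-event.py -- re-run a previously recorded autoqa
    # invocation so I don't have to retype the whole command line.
    import subprocess

    cmd = ['autoqa', 'post-bodhi-update-batch',
           '--targettag', 'dist-f13-updates',
           '--arch', 'x86_64', '--arch', 'i386',
           'oxygen-gtk-1.0.4-1.fc13']
    subprocess.check_call(cmd)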
If I'm trying to poke at something very specific, I find myself
cobbling together a one-off script that runs just that one test, like
depcheck or upgradepath.
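Those one-offs usually have roughly the shape below. The import, the
constructor and the run_once() arguments are placeholders rather than
the real test interface; the point is just the pattern of hand-feeding
one test the exact inputs I care about:

    #!/usr/bin/python
    # Throwaway driver for poking at a single test.  'upgradepath' and
    # the run_once() signature are stand-ins, not the real API.
    import pprint

    from upgradepath import upgradepath   # placeholder import

    def main():
        # Hand-picked inputs so I can force the code path I want to hit.
        test = upgradepath()
        result = test.run_once(name='oxygen-gtk-1.0.4-1.fc13',
                               targettag='dist-f13-updates')
        pprint.pprint(result)

    if __name__ == '__main__':
        main()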
Testing the interaction with Koji or Bodhi? I still haven't figured
out a good way to do that. So far I've been hacking print statements
into bodhi_utils or koji_utils, but that doesn't quite cover
everything.
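For the record, the kind of hack I mean looks roughly like the
standalone snippet below. The function name is invented and the shape
of the wrapper is an assumption on my part; ClientSession() and
listTagged() are real koji calls, though:

    #!/usr/bin/python
    # Standalone approximation of the print-statement tracing I hack
    # into koji_utils while debugging; tagged_builds_with_tracing() is
    # a made-up name, not the real wrapper.
    import koji

    KOJI_HUB = 'https://koji.fedoraproject.org/kojihub'

    def tagged_builds_with_tracing(tag):
        session = koji.ClientSession(KOJI_HUB)
        print('asking %s for builds tagged %s' % (KOJI_HUB, tag))
        builds = session.listTagged(tag, latest=True)
        print('got %d builds back' % len(builds))
        return builds

    if __name__ == '__main__':
        tagged_builds_with_tracing('dist-f13-updates')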
I ask because, speaking for myself, I'm human. The more of a PITA it
is to test something, the more likely I am to skip it or to cut down
how many times I test it. I think I've been pretty good about testing
stuff before pushing to master or stable, but I'm bothered by the
amount of time I waste setting up tests and figuring out ways to trick
AutoQA into going down the code paths I want to test.
I'm not saying that testing is a waste of time, just that some of this
test setup time could be better spent on coding or on additional
testing.
This isn't meant to be aimless complaining. I have some ideas on how
to make testing AutoQA easier, but I wanted to check whether I was
missing something obvious before I went too far down that road.
Thanks,
Tim