On 04/28/2011 02:46 AM, Kamil Paral wrote:
> I really can't tell if this is a particularly bad pain point for
> me since I work remotely and my internet connection isn't great
> (frequent SSL timeouts on koji, package downloads take forever)
That's also the case in our Brno office. Our internet is painfully
slow; downloads from Koji often run at around 100kB/s. When working on
depcheck I sometimes save the work for home, where I have 10-20 times
faster access. So I know what you're talking about.
Ouch, I thought I had it bad at 600kB/s. I'm envious of the faster
internet at home, though. I'm currently at the fastest I can get for my
area. There is a company offering FTTP but they aren't in my immediate
area yet :(
Running bandwidth-intensive tests is certainly problematic without
fast access to the main Fedora Infrastructure machines. I don't know
how to improve that.
That reminds me: I want to implement package caching for depcheck (and
maybe some other tests), which could help us enormously, at least for
subsequent runs.
Unless I'm misunderstanding you here, I was thinking of something a
little less intrusive on our code for the short term:
- Modify at least the rpm download code to support proxies
- Implement a squid cache locally (http://www.squid-cache.org/)
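For the first bullet, I'm imagining something small along these lines
(just a sketch, not what our download code actually looks like today;
the function name and the http_proxy handling are made up):

import os
import shutil
import urllib2

def fetch_rpm(url, dest, proxy=None):
    """Download one RPM, going through an HTTP proxy if one is set.

    'proxy' would normally point at a local squid instance, e.g.
    http://localhost:3128, or fall back to the http_proxy variable.
    """
    proxy = proxy or os.environ.get('http_proxy')
    if proxy:
        opener = urllib2.build_opener(urllib2.ProxyHandler({'http': proxy}))
    else:
        opener = urllib2.build_opener()
    response = opener.open(url)
    try:
        with open(dest, 'wb') as destfile:
            shutil.copyfileobj(response, destfile)
    finally:
        response.close()

With something like that in place, squid does the actual caching, so
repeated runs against the same builds would mostly hit the local cache.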
Would our production systems benefit as much from rpm caching? I'm
tempted to say leave that for a little later, when we have a better
idea about how exactly the production infrastructure is going to look,
and focus on making tests work well and correctly for now.
I don't get many timeouts, though. I guess a timeout means the test
aborts, right? Maybe we could improve that in our library (connection
retries).
Yeah, I haven't looked into this much, so I'm not really sure why I'm
seeing so many timeouts, but it's usually during watch-koji-builds,
which then kills the process.
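If we go the retry route, even a dumb wrapper like this in the library
would probably be enough (a sketch; the exception list, attempt count
and delay are guesses):

import socket
import time

def retry(call, attempts=3, delay=30):
    """Run 'call' up to 'attempts' times, sleeping 'delay' seconds
    between tries, and re-raise the last error if they all fail."""
    for attempt in range(attempts):
        try:
            return call()
        except (socket.error, socket.timeout):
            if attempt == attempts - 1:
                raise
            time.sleep(delay)

# e.g. builds = retry(lambda: session.listTagged('dist-f13-updates-candidate'))
# where 'session' is a koji ClientSession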
> but I was wondering how you all went about testing AutoQA.
>
> For me, it really depends on what I'm trying to poke at. Most of
> the time, I'll run 'watch-koji-builds.py --verbose' if I'm trying
> to do some general testing. I keep track of the events that are
> called if I want to run something again ('autoqa
> post-bodhi-update-batch --targettag dist-f13-updates --arch x86_64
> --arch i386 oxygen-gtk-1.0.4-1.fc13' as an example).
I usually do "watch-koji-builds.py --verbose --dryrun" to get correct
command syntax and then run desired tests by appending "-t TEST
(--local)" to the command.
'--dryrun' hadn't occurred to me, but that's a good idea. I usually
end up killing off all the rpmguard and rpmlint jobs when I'm running
locally.
>
> If I'm trying to poke at something very specific, I find myself
> manually cobbling together some one-off script that runs something
> specific, like depcheck or upgradepath.
>
> Testing the interaction with Koji or Bodhi? I still haven't
> figured out a good way to do that. Thus far, I have been hacking in
> print statements into bodhi_utils or koji_utils but that doesn't
> quite cover everything.
>
> I ask because speaking for myself, I'm human. The more of a PITA it
> is to test something, the more likely I am to not do it or limit
> the number of times that I do test it. I think that I've been
> pretty good about testing stuff before pushing to master or stable,
> but I'm bothered by the amount of time I'm wasting on setting up
> tests and figuring out ways that I can trick AutoQA into going down
> code paths I want to test.
I understand that. If I want to run depcheck and all builds are
currently accepted, I change the username in fas.conf so that all
builds are considered unaccepted. It would be nice not to have to do
these tweaks by hand over and over again.
I've actually started writing manual scripts to download all the
packages for depcheck and run the command manually. That doesn't cover
all the code paths, but it does let me muck with accepted/non-accepted
builds without having to download everything every time.
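For reference, this is the rough shape of what I mean (a hypothetical
helper, not in the tree; it just leans on 'koji download-build' and a
local cache directory):

import os
import subprocess

CACHE = os.path.expanduser('~/depcheck-cache')

def fetch_build(nvr, arches=('x86_64', 'noarch')):
    """Download one build's RPMs into the cache, unless they are
    already there from a previous run."""
    destdir = os.path.join(CACHE, nvr)
    if os.path.isdir(destdir):
        return destdir  # already cached, nothing to download
    os.makedirs(destdir)
    cmd = ['koji', 'download-build']
    for arch in arches:
        cmd.append('--arch=' + arch)
    cmd.append(nvr)
    subprocess.check_call(cmd, cwd=destdir)
    return destdir

# e.g. fetch_build('oxygen-gtk-1.0.4-1.fc13'), then point depcheck at the
# cached directory and flip builds between accepted/unaccepted by hand.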
>
> I'm not saying that testing is a waste of time, just thinking that
> some of this test setup time could be much better spent on coding
> or additional testing.
When I send out a "please test" email, I suppose around two or three
people will actually run something. Personally, I ran several depcheck
tests and several upgradepath tests (with different target
repositories). That's it; it's not much, I know. Just the basic
testing.
It would be nice to alleviate this manual effort. A staging server
could help us greatly. Also, the testing framework you're preparing is
a great thing (tm). I'm all for automated tests; the only question is
how far we want to go. In a world where 90% of our code is a
workaround of some kind (e.g. bodhi comments) and we're constantly
rewriting architecture basics, I don't see 100% code coverage as a
viable approach. But I would love to see some basic tests that cover
important areas of our code (e.g. Koji/Bodhi interaction) while
requiring reasonable effort to write.
I'm all for automated testing, but I think we have a long way to go
before we get there. It also doesn't quite solve the problem of being
able to induce conditions in the environment for development or
debugging.
I guess I'm just wondering where developer pain falls on our priority
list. I'm not in any way suggesting that we should drop everything and
start making our lives easier, but I am thinking about some stuff for
farther down the road (it might make sense to chip away at some of it
as we go forward):
- Start thinking about better test integration with koji and bodhi
  - After looking at the bodhi 2.0 roadmap, I get the feeling that we
    might want to start testing more with bodhi in the staging
    environment.
  - Not sure if it makes sense to write a mock bodhi interface to help
    us induce error conditions (see the sketch after this list).
  - Maybe work with the bodhi guys (not sure if it's just lmacken) to
    see what makes sense.
- Think about a more testable interface for AutoQA eventually
  - More ability to poke at stuff for debugging and devel
  - Reproducing specific tests, inducing specific error conditions
- Allow for more paradigms than just 'download from koji, run tests'
  - I'm thinking in terms of pluggable modules, but that is going to be
    a gradual process
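To make the mock bodhi bullet a little more concrete, here's the rough
shape I have in mind (all names are made up, and it assumes we funnel
Bodhi access through one thin wrapper that a test run could swap out):

class MockBodhi(object):
    """Stand-in for the real Bodhi client used by bodhi_utils, letting
    us induce error conditions without touching a real server."""

    def __init__(self, updates=None, fail_comments=False):
        self.updates = updates or {}        # update title -> fake metadata
        self.fail_comments = fail_comments  # simulate a Bodhi outage
        self.comments = []                  # record what would be posted

    def query(self, title):
        # Return canned update data, or None to act like "update not found".
        return self.updates.get(title)

    def comment(self, title, text, karma=0):
        # Either pretend the comment went through, or blow up on purpose.
        if self.fail_comments:
            raise IOError('simulated Bodhi failure')
        self.comments.append((title, text, karma))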
But we have bigger fish to fry for the immediate future :) I should
probably put these in the form of tickets if there is any interest.
>
> This isn't meant to be aimless complaining. I have some ideas on
> how to make the testing of AutoQA easier but I wanted to know if I
> was missing something obvious before I went too far down that
> road.
I don't think you're missing anything. It's our project that is
missing important parts (it's all a question of manpower). Improvements
welcome.
I'm all for improvements, but I think this will be a gradual process
(as I'm sure you're thinking). Food for thought, anyway.
Tim