[AutoQA] #223: upgradepath: Compare release correctly
by fedora-badges
#223: upgradepath: Compare release correctly
--------------------+-------------------------------------------------------
Reporter: kparal | Owner: kparal
Type: defect | Status: new
Priority: major | Milestone: Package Update Acceptance Test Plan
Component: tests | Version: 1.0
Keywords: |
--------------------+-------------------------------------------------------
{{{
[root@aqd autoqa]# autoqa post-koji-build --kojitag dist-f14
htop-0.8.3-3.fc14 --local -t upgradepath
...
12:18:47 INFO | ========================================
12:18:47 INFO | htop-0.8.3-3.fc14
12:18:47 INFO | ========================================
12:18:47 INFO | Warning: Pushing into stable repository
12:18:48 INFO | Warning: Pushing older or current version of package
12:18:48 INFO | [ OK ] dist-f10
12:18:48 INFO | Pushing package: htop-0.8.3-3.fc14
12:18:48 INFO | Latest package: htop-0.8.1-1.fc10
12:18:49 INFO | [ OK ] dist-f10-updates
12:18:49 INFO | Pushing package: htop-0.8.3-3.fc14
12:18:49 INFO | Latest package: htop-0.8.3-1.fc10
12:18:49 INFO | [ OK ] dist-f11
12:18:49 INFO | Pushing package: htop-0.8.3-3.fc14
12:18:49 INFO | Latest package: htop-0.8.1-4.fc11
12:18:50 INFO | [ OK ] dist-f11-updates
12:18:50 INFO | Pushing package: htop-0.8.3-3.fc14
12:18:50 INFO | Latest package: htop-0.8.3-1.fc11
12:18:50 INFO | [ OK ] dist-f12
12:18:50 INFO | Pushing package: htop-0.8.3-3.fc14
12:18:50 INFO | Latest package: htop-0.8.3-2.fc12
12:18:51 INFO | [ OK ] dist-f12-updates
12:18:51 INFO | Package htop in dist-f12-updates doesn't exist
12:18:51 INFO | [FAIL] dist-f13
12:18:51 INFO | Pushing package: htop-0.8.3-3.fc14
12:18:51 INFO | Latest package: htop-0.8.3-3.fc13
12:18:52 INFO | [ OK ] dist-f13-updates
12:18:52 INFO | Package htop in dist-f13-updates doesn't exist
12:18:53 INFO | [FAIL] dist-f14
12:18:53 INFO | Pushing package: htop-0.8.3-3.fc14
12:18:53 INFO | Latest package: htop-0.8.3-3.fc14
12:18:53 INFO | [ OK ] dist-f14-updates
12:18:53 INFO | Package htop in dist-f14-updates doesn't exist
12:18:54 INFO | [ OK ] dist-f15
12:18:54 INFO | Package htop in dist-f15 doesn't exist
12:18:54 INFO | SUMMARY: 0 OK, 1 FAILED
}}}
AFAICT it should not fail for dist-f13: the pushed build (htop-0.8.3-3.fc14)
is newer than the latest one there (htop-0.8.3-3.fc13); only the dist tags
differ. Releases are not being compared correctly.
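A minimal sketch of the kind of comparison upgradepath likely needs here (this is an illustration, not the actual test code; names like `newer_or_equal` are made up, and a real fix should lean on rpm's labelCompare with the dist tag handled explicitly):

```python
import re

def parse_release(release):
    """Split a release like '3.fc14' into ('3', 14). Hypothetical helper."""
    m = re.match(r'^(.*?)\.fc(\d+)$', release)
    if m:
        return m.group(1), int(m.group(2))
    return release, None

def newer_or_equal(push_vr, latest_vr):
    """True if the pushed (version, release) is >= the latest one,
    treating the .fcNN dist tag as the least significant component."""
    pv, pr = push_vr
    lv, lr = latest_vr
    key_p = tuple(int(x) for x in pv.split('.'))
    key_l = tuple(int(x) for x in lv.split('.'))
    if key_p != key_l:
        return key_p > key_l
    rel_p, dist_p = parse_release(pr)
    rel_l, dist_l = parse_release(lr)
    if rel_p != rel_l:
        return int(rel_p) > int(rel_l)
    # same version and release number: the higher dist tag wins
    return (dist_p or 0) >= (dist_l or 0)

# the dist-f13 case from the log: fc14 build vs fc13 build, same 0.8.3-3
print(newer_or_equal(('0.8.3', '3.fc14'), ('0.8.3', '3.fc13')))
```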
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/223>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
13 years, 8 months
Re: [PATCH 3/5] Improve code of multihook tests
by Kamil Paral
----- "Will Woods" <wwoods(a)redhat.com> wrote:
> On Fri, 2010-09-10 at 15:52 +0200, Kamil Páral wrote:
> > This patch fixes some issues created by adding multihook capabilities
> > to rpmlint, rpmguard and initscripts tests with patch
> > f16b2646fa397b0cd55e3e4bf9918d21541e8840. It re-enables opt-in emails.
> > It fixes problem where rpmlint's rpm dir cache wasn't cleared between
> > successive runs. It also cleans up the code a lot. A lot of recent
> > changes made the code almost unmaintainable and very hard to read
> > (especially in printing/output appending/log appending tasks). This
> > patch reworked all of that quite a lot, it should be much more
> > readable and simpler now. In short, it tries again to have these
> > scripts ready to be served as examples for other people.
> >
> > This patch also removes autotest exception throwing and uses
> > assertions.
>
> This patch is the most complex of the 5, but it looks quite reasonable
> and cleans up the code nicely.
>
> I also really like the use of assertions to catch unexpected/buggy
> conditions - I assume that will show up as a crash/error in autotest?
Yes, the whole test throws AssertionError. I'm not sure what color the
field in the autotest web UI will be :)
As for autoqa-results, we are still missing one patch that would make
the AutoQATest class catch *any* exception and report it as CRASHED.
It's quite an important piece, but it requires changes in all our tests.
I'll try to create it today.
[PATCH 1/5] enable assertions for autoqa tests
by Kamil Paral
Let's enable assertions for autoqa tests. That allows us to quickly
check for situations we consider showstoppers, and it lets us end the
test quickly (it will be automatically reported as CRASHED). Of course
an alternative to assertions is to throw exceptions by hand, but using
assertions is much easier and there is no big drawback (only our own
tests are compiled with assertions enabled, so the performance hit is
negligible).
---
Makefile | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/Makefile b/Makefile
index 0931a69..a964e1e 100644
--- a/Makefile
+++ b/Makefile
@@ -31,7 +31,7 @@ install: build
for t in tests/*; do cp -a $$t $(PREFIX)$(TEST_DIR); done
install -d $(PREFIX)$(AUTOTEST_DIR)/client/bin
install -m 0644 lib/autotest/site_utils.py $(PREFIX)$(AUTOTEST_DIR)/client/bin
- ( cd lib/python; $(PYTHON) setup.py install -O1 --skip-build --root $(PREFIX)/ )
+ ( cd lib/python; $(PYTHON) setup.py install --skip-build --root $(PREFIX)/ )
## front-ends/israwhidebroken
install -d $(PREFIX)/usr/sbin
install -d $(PREFIX)/usr/share/autoqa-israwhidebroken
--
1.7.2.2
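For context, the reason dropping -O1 matters: setup.py install -O1 byte-compiles the installed modules with optimization, and optimized Python bytecode drops assert statements entirely. A tiny illustration (hypothetical test code, not from autoqa):

```python
def require_koji_tag(tag):
    # Under optimized bytecode (python -O, which is what
    # 'setup.py install -O1' produces) this assert is compiled away,
    # so a bogus tag would slip through silently instead of crashing.
    assert tag.startswith('dist-'), 'unexpected koji tag: %r' % tag
    return tag

print(require_koji_tag('dist-f14'))
print(__debug__)  # True when asserts are live, False under -O
```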
Creating autoqa-watchers sub-package?
by James Laska
Greetings,
As autoqa is currently packaged, all tests, watchers, configs and
cronjobs are included in the main autoqa package. This works fine
except for a minor annoyance: anytime you install 'autoqa', unless you
are actually scheduling jobs, you need to comment out or remove the
watcher notification cronjob /etc/cron.d/autoqa. This gets more annoying
the more often I deploy autoqa.
I'd like to propose moving the watcher scripts and cronjob into a
sub-package (see the attached patch for details).
When most people install 'autoqa', they want the test library and tests.
This won't change that behavior. The only change will be for anyone
setting up a test server. I'll need to update the existing wiki
documentation to note installing 'autoqa-watchers' when setting up a
test server [1].
Comments/concerns/ideas?
Thanks,
James
[1] https://fedoraproject.org/wiki/Install_and_configure_AutoQA
Update to autoqa-0.4.0-1
by James Laska
Greetings,
As previously discussed, I'd like to update the autoqa.spec to capture the
0.4.0 release changes. Thanks to wwoods for helping create a pared down (but
accurate) %changelog. I've tested the changes so far on
autoqa.fedoraproject.org (based on commit
5251bdccebb9a9612f19fa008276ebc5a07ae29c as suggested by Kamil). Note,
autoqa.fp.org is deployed as an EPEL5 system, so compatibility with python-2.4
is needed.
Once Kamil has addressed the multihook patches to his liking, I can build
autoqa-0.4.1-1 and deploy. Questions/concerns/ideas?
If no concerns, I'll commit these patches to master later this week.
Thanks,
James
Re: proposal: deprecate using raise error.TestFail
by Kamil Paral
----- "Will Woods" <wwoods(a)redhat.com> wrote:
> On Wed, 2010-09-08 at 15:03 -0400, James Laska wrote:
> > On Wed, 2010-09-08 at 07:41 -0400, Kamil Paral wrote:
> > >
> > > I'd like to propose that we stop using those autotest-internal
> > > TestFail and TestWarn exceptions.
> > >
> > > What will change:
> > > 1. We no longer see red/purple boxes in autotest frontend.
> >
> > Hah, I was thinking the same thing when inspecting the autotest tko
> > results viewer. It's hard to distinguish between a successful test
> > run that found problems, vs a test case failure.
>
> Yeah - I think this would clean things up significantly for us, and
> is therefore a pretty great idea.
>
> But I wish I knew why the autotest developers originally decided to
> use these exceptions rather than return values from the tests - surely
> there was some good reason for that design, and I'd like to know what
> we're losing by discarding it.
I'm looking into autotest/client/common_lib/error.py and I see a lot of
exception classes there - many more than I expected. It seems there are
exception types for all kinds of events. But those events relate more to
job execution itself than to testcase semantics. Currently we mix those
two things.
The reason it is done with exceptions rather than method return values
is, I think, that it's just easier - you can wrap the whole test object
in a try-except block and catch any problem. It also ends any further
methods, so normally when you raise a TestFail exception in run_once,
postprocess_iteration is no longer executed (in a standard autotest
environment, not in our case).
If we catch all exceptions in our AutoQATest class, it doesn't mean we
can no longer raise exceptions; it just means we shouldn't raise them
for tests that ended correctly but that we deem to have failed some
testcase (which is quite different semantics). So for example when I run
some external script and that script crashes (and there is no point in
continuing), I can still raise TestFail. That means we will catch that
exception and automatically report the test run as CRASHED. Which is
quite cool, I think.
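To illustrate the catch-and-report behavior described above, here is a rough sketch. This is an assumption about how it could look; the real AutoQATest class lives in our autotest library and its internals may differ:

```python
class AutoQATest(object):
    """Sketch of the proposed parent class: any exception escaping the
    test body marks the run as CRASHED instead of polluting the log."""
    def __init__(self):
        self.result = None

    def run_once(self):
        raise NotImplementedError

    def execute(self):
        try:
            self.run_once()
        except Exception:
            # a deliberate TestFail, an AssertionError, or any bug in
            # the test itself all get reported the same way
            self.result = 'CRASHED'

class BrokenScriptTest(AutoQATest):
    def run_once(self):
        raise RuntimeError('external script crashed')

t = BrokenScriptTest()
t.execute()
print(t.result)  # CRASHED
```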
proposal: deprecate using raise error.TestFail
by Kamil Paral
Hi,
currently many tests use this code in their test objects:
raise error.TestFail
-or-
raise error.TestWarn
to end a test that "failed" (from our perspective). It then creates a few
ugly exceptions in the output, see here, even though everything went just
fine (from the autotest perspective):
http://autoqa.fedoraproject.org/results/250-autotest/qa03.c.fedoraproject...
and finally it reports:
09/08 12:48:13 INFO | job:1254| FAIL rpmlint rpmlint timestamp=1283950093 localtime=Sep 08 12:48:13
09/08 12:48:13 INFO | job:1254| END FAIL rpmlint rpmlint timestamp=1283950093 localtime=Sep 08 12:48:13
(which is also visible in autotest web frontend).
However, since we now use jskladan's patchset defining AutoQATest parent
class (using self.result variable), I no longer see a reason to raise those
exceptions. Quite the contrary.
Those exceptions don't even match our result states, of which we have a
much richer set (passed, failed, info, aborted, crashed and maybe some
others).
I'd like to propose that we stop using those autotest-internal TestFail
and TestWarn exceptions.
What will change:
1. We no longer see red/purple boxes in autotest frontend.
2. The AutoQATest parent class will be adjusted to automatically output
the test result after a test has finished.
3. Tests will just need to store the proper result in the self.result
variable, that's all. The methods may then be ended in a standard manner
(return or no statement).
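Under this convention a test body would look roughly like the following (a hypothetical example, not an existing autoqa test; the failure condition is made up):

```python
class ExampleTest(object):  # stands in for an AutoQATest subclass
    def run_once(self, rpms):
        # record the verdict in self.result instead of raising
        # error.TestFail / error.TestWarn
        problems = [r for r in rpms if r.endswith('.src.rpm')]
        self.result = 'FAILED' if problems else 'PASSED'
        # plain return; no exception even when the package is bad

t = ExampleTest()
t.run_once(['htop-0.8.3-3.fc14.x86_64.rpm'])
print(t.result)  # PASSED
```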
What will be the benefits:
1. We will know that if we see a traceback it's a problem. We won't have to
examine in detail whether this traceback is "good" or "bad".
2. It will allow us to report any crashed test object (throwing an exception)
to the mailing list. We have already started to do that and it's great:
https://fedorahosted.org/pipermail/autoqa-results/2010-September/thread.html
But currently it works only for those tests that haven't filled self.result
before the crash occurred. We need to catch all exceptions.
What do you think?
Re: [PATCH 2/2] Update autoqa and watch-bodhi-requests.py to be python-2.4 friendly so it can run on EPEL5.
by Kamil Paral
----- "James Laska" <jlaska(a)redhat.com> wrote:
> ---
> autoqa | 2 +-
> hooks/post-bodhi-update/watch-bodhi-requests.py | 4 ++--
> 2 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/autoqa b/autoqa
> index e0848ea..e6bd8c9 100755
> --- a/autoqa
> +++ b/autoqa
> @@ -258,7 +258,7 @@ test_vars = {} # dict of test->its test vars
> for test in tests[:]:
> try:
> test_vars[test] = eval_test_vars(test, default_test_vars)
> - except IOError as e:
> + except IOError, e:
> print "Error: Can't evaluate test '%s': %s" % (test, e)
> tests.remove(test)
> testlist = [test for test,vars in test_vars.iteritems() if vars['execute'] == True]
> diff --git a/hooks/post-bodhi-update/watch-bodhi-requests.py b/hooks/post-bodhi-update/watch-bodhi-requests.py
> index 45f168f..597853a 100755
> --- a/hooks/post-bodhi-update/watch-bodhi-requests.py
> +++ b/hooks/post-bodhi-update/watch-bodhi-requests.py
> @@ -197,7 +197,7 @@ when new requests are filed in bodhi')
> try:
> new_updates = bodhi_new_requests_since(r, int(lastcheck),
> testing_cachefile,
> updatecache=(not opts.dryrun))
> - except fedora.client.FedoraServiceError as e:
> + except fedora.client.FedoraServiceError, e:
> print "ERROR: %s" % e
> # Sort the new updates based on their target/request
> updates = {'testing': [], 'stable': [], 'critpath': []}
> @@ -213,7 +213,7 @@ when new requests are filed in bodhi')
> new_updates = []
> new_updates = bodhi_new_stable_requests(r,
> stable_cachefile,
> updatecache=(not opts.dryrun))
> - except fedora.client.FedoraServiceError as e:
> + except fedora.client.FedoraServiceError, e:
> print "ERROR: %s" % e
> print "%i stable update requests found" % len(new_updates)
> updates['stable'] += new_updates
> --
> 1.7.2.2
I believe I created those as-es :) I didn't suspect they would be
incompatible with older Pythons. Good to know, thanks.
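For code that has to parse on both old and new Pythons, the usual trick is to avoid the exception-binding syntax entirely and pull the exception from sys.exc_info() instead (a general sketch, not part of the patch; the function name is made up):

```python
import sys

def read_cachefile(path):
    try:
        return open(path).read()
    except IOError:
        # works unchanged on python 2.4 and python 3, unlike
        # 'except IOError, e' (2.x only) or 'except IOError as e' (2.6+)
        e = sys.exc_info()[1]
        print("ERROR: %s" % e)
        return None

print(read_cachefile('/no/such/cachefile'))
```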