[AutoQA] #246: Add temporary hack for test re-scheduling
by fedora-badges
--------------------+-------------------------------------------------------
Reporter: kparal | Owner:
Type: task | Status: new
Priority: major | Milestone: 0.4.4
Component: core | Keywords:
--------------------+-------------------------------------------------------
Until we have proper support for test re-scheduling (ticket #245), we
need a temporary hack to enable test re-scheduling in the most common
cases.
We plan to have upgradepath and depcheck send comments to Bodhi.
Depcheck is probably not a problem, because it re-evaluates the whole
contents of the -pending tag on every update. But there are some issues
with upgradepath; one use case is described in ticket #245.
We need to:
1. get a list of the use cases for which we need to re-schedule
upgradepath or depcheck
2. find out whether Bodhi changes the timestamp (or similar) of the
updates in those cases (i.e. whether our current watcher implementation
would already detect the change and run the tests again properly)
3. for the remaining cases, decide what to do. For example, we could
temporarily make upgradepath behave the same way depcheck does, i.e. test
all builds in -updates-pending on every update of that tag. Since we
will have Bodhi comment duplication prevention, that could work quite well.
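The comment duplication prevention mentioned in step 3 could be as simple as checking the most recent comment for a given test before posting. A minimal sketch with hypothetical names (the real logic would live in autoqa's Bodhi utilities):

```python
def should_post_comment(previous_comments, test_name, new_result):
    """Decide whether a new Bodhi comment is worth posting.

    previous_comments: list of (test_name, result) tuples, oldest first.
    Posting is skipped when the most recent comment for this test
    already reports the same result, so repeated whole-tag runs do
    not spam maintainers with duplicates.
    """
    for name, result in reversed(previous_comments):
        if name == test_name:
            # only the latest comment for this test matters
            return result != new_result
    # no previous comment for this test -> always post
    return True
```

With something like this in place, re-testing the whole -pending tag only produces a new comment when a result actually changes.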
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/246>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
13 years, 2 months
AutoQA Self-Test Proof of Concept
by Tim Flink
This took a little longer than I thought it would, but I have finished an
initial proof of concept for unit testing, built around a rather trivial
fix for #265. The code is up on github
(https://github.com/tflink/autoqa-devel/tree/pytest).
There are certainly some parts of it that are sub-optimal, but overall I
think it's enough for a proof of concept. Unless people are thrilled
with the py.test implementation that I currently have, I'm planning to
do another proof of concept using nose/unittest.
Instructions for running the tests are in README.test.
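For reference, py.test discovers files named test_*.py and functions named test_*, and plain assert statements are enough. A minimal, hypothetical example (not taken from the repo):

```python
def strip_pending_suffix(tag):
    """Hypothetical helper: map 'dist-f14-updates-pending' to its base tag."""
    if tag.endswith('-pending'):
        return tag[:-len('-pending')]
    return tag

# py.test collects these automatically; no TestCase boilerplate needed
def test_pending_suffix_stripped():
    assert strip_pending_suffix('dist-f14-updates-pending') == 'dist-f14-updates'

def test_plain_tag_unchanged():
    assert strip_pending_suffix('dist-f14') == 'dist-f14'
```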
In the interest of not sending out a huge email, I wrote up some
documentation on why I structured things the way I did, the goals,
possible tools, etc. that could be used to test AutoQA. That document is
also in the git repo
(https://github.com/tflink/autoqa-devel/blob/pytest/doc/design/selfTesting...).
I tend to write in LaTeX, and since I didn't see any other conventions I
just wrote in what I'm most used to. It wouldn't be hard to convert it
to something else (rst, etc.) if that is what people prefer.
Feel free to modify the document or suggest changes. Just because I
wrote it up doesn't make it final in any sense. Also, I'm sure that I
forgot to explain something - questions are good :)
Tim
[PATCH] upgradepath: run on post-bodhi-update-batch, test all updates, send bodhi comments
by Kamil Paral
This patch solves ticket #246 and part of ticket #240.
Upgradepath is now triggered on batch Bodhi events and tests all
updates present in the -pending tag. That means some updates will be
tested many times over, but our intelligent Bodhi comment duplication
prevention algorithm should keep us from hammering the maintainers with
the same results. This behavior is probably temporary, until we have a
means to re-schedule a test (ticket #245).
As stated, this patch also enables upgradepath to send Bodhi comments
for every update result.
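In outline, the new whole-tag behavior looks like this (a simplified sketch; list_pending_nvrs and test_build stand in for the Koji calls the patch actually makes):

```python
RESULT_ORDER = ('PASSED', 'INFO', 'FAILED', 'ABORTED')  # higher index wins

def run_whole_tag(kojitag, list_pending_nvrs, test_build):
    """Test every build requesting a move into kojitag.

    list_pending_nvrs(tag) -> NVRs currently tagged into '<tag>-pending'
    test_build(nvr, tag)   -> one value from RESULT_ORDER
    Returns (overall_result, per_nvr_results, summary_string).
    """
    sourcetag = kojitag + '-pending'
    nvr_results = {}
    overall = 'PASSED'
    for nvr in sorted(list_pending_nvrs(sourcetag)):
        result = test_build(nvr, kojitag)
        nvr_results[nvr] = result
        # keep the "worst" result seen so far
        if RESULT_ORDER.index(result) > RESULT_ORDER.index(overall):
            overall = result
    # build a summary like "1 PASSED, 2 FAILED"
    parts = ['%d %s' % (sum(1 for r in nvr_results.values() if r == res), res)
             for res in RESULT_ORDER if res in nvr_results.values()]
    return overall, nvr_results, ', '.join(parts)
```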
diff --git a/tests/upgradepath/control.autoqa b/tests/upgradepath/control.autoqa
index e63091a..9e17ca2 100644
--- a/tests/upgradepath/control.autoqa
+++ b/tests/upgradepath/control.autoqa
@@ -4,8 +4,8 @@
# upgradepath can be run just once and on any architecture
archs = ['noarch']
-# run only with post-bodhi-update hook
-if hook not in ['post-bodhi-update']:
+# do batch run for new updates
+if hook not in ['post-bodhi-update-batch']:
execute = False
# don't check requests going into *-updates-testing, upgradepath currently
diff --git a/tests/upgradepath/upgradepath.py b/tests/upgradepath/upgradepath.py
index 15e5c38..567cad5 100755
--- a/tests/upgradepath/upgradepath.py
+++ b/tests/upgradepath/upgradepath.py
@@ -20,6 +20,7 @@
import operator
import autoqa.koji_utils
+import autoqa.bodhi_utils
import rpmUtils.miscutils
from autoqa.repoinfo import repoinfo
from autoqa.test import AutoQATest
@@ -34,7 +35,7 @@ class upgradepath(AutoQATest):
self.result = 'PASSED'
# order for evaluation of final result; higher index means preference
self.result_order = ('PASSED','INFO','FAILED','ABORTED')
- self.envr_results = {} # results for invidual packages
+ self.nvr_results = {} # results for invidual packages
self.outputs = []
self.highlights = []
@@ -85,9 +86,19 @@ class upgradepath(AutoQATest):
return result
@ExceptionCatcher()
- def run_once(self, envrs, kojitag, **kwargs):
+ def run_once(self, kojitag, **kwargs):
super(self.__class__, self).run_once()
- update_id = kwargs['name'] or kwargs['id']
+
+ # Ideally upgradepath should check just the new updates. But since we don't
+ # yet support test re-scheduling [1], we have to work around somehow [2].
+ # So let's just run upgradepath for *all* updates requesting their move to
+ # <kojitag>. It is a little inefficient, but if for some package the result
+ # changes, we will report it correctly (and that's more important).
+ # When test re-scheduling is supported, we can revert this test from whole
+ # tag testing to just new updates testing.
+ # [1] https://fedorahosted.org/autoqa/ticket/245
+ # [2] https://fedorahosted.org/autoqa/ticket/246
+
# Get a list of all repos we monitor (currently not -testing)
# FIXME - perhaps we should only query for 'active' repos
@@ -103,20 +114,22 @@ class upgradepath(AutoQATest):
koji = autoqa.koji_utils.SimpleKojiClientSession()
- for envr in envrs:
- msg = '%s\n%s into %s\n%s' % (40*'=', envr, kojitag, 40*'=')
+ # get the list of all builds requesting move to the kojitag
+ sourcetag = kojitag + '-pending'
+ print 'Koji tag to be tested: %s' % sourcetag
+ updates = koji.listTagged(sourcetag)
+ nvrs = sorted([u['nvr'] for u in updates])
+ print 'Builds to be tested: %s' % ' '.join(nvrs)
+
+ # for every build let's test it
+ for nvr in nvrs:
+ msg = '%s\n%s into %s\n%s' % (40*'=', nvr, kojitag, 40*'=')
print msg
self.outputs.append(msg)
- # our testing package
- (name, version, release, epoch, arch) = rpmUtils.miscutils.splitFilename(envr)
- matching_build = {
- 'name': name,
- 'version' : version,
- 'release' : release + '.' + arch,
- 'epoch' : epoch,
- 'envr' : envr,
- }
+ # get all info about this current build
+ matching_build = koji.getBuild(nvr)
+ assert matching_build is not None, 'This build does not exist in Koji: %s' % nvr
if kojitag.find('updates') < 0 and repotags[current_tag] != repotags[-1]:
# not *-updates* and not rawhide
@@ -144,7 +157,7 @@ class upgradepath(AutoQATest):
# compute pkg result
if self.result_order.index(result2) > self.result_order.index(result):
result = result2
- self.envr_results[envr] = result
+ self.nvr_results[nvr] = result
msg = 'RESULT: %s' % result
print msg
@@ -154,11 +167,13 @@ class upgradepath(AutoQATest):
if self.result_order.index(result) > self.result_order.index(self.result):
self.result = result
+ print # empty line
+
# create summary like "1 PASSED, 2 FAILED, 3 INFO"
summary = []
for res in self.result_order:
- if res in self.envr_results.values():
- count = len([k for k in self.envr_results.keys() if self.envr_results[k] == res])
+ if res in self.nvr_results.values():
+ count = len([k for k in self.nvr_results.keys() if self.nvr_results[k] == res])
summary.append('%d %s' % (count, res))
summary = ', '.join(summary)
@@ -168,5 +183,25 @@ class upgradepath(AutoQATest):
self.outputs.append(msg)
self.highlights.append(msg)
- self.summary = '%s for %s' % (summary, update_id)
-
+ self.summary = '%s for %s' % (summary, sourcetag)
+
+ # send results to Bodhi
+ print 'Sending results to Bodhi...'
+ exc = False
+ for nvr in nvrs:
+ result = self.nvr_results[nvr]
+ try:
+ update = autoqa.bodhi_utils.query_update(nvr)
+ assert update is not None, 'No such update object in Bodhi: %s' % nvr
+ update_title = update['title']
+ print 'Sending results to Bodhi: %s %s' % (update_title, result)
+ autoqa.bodhi_utils.bodhi_post_testresult(update_title, 'upgradepath',
+ result, self.autotest_url, 'noarch')
+ except AssertionError, e:
+ msg = 'Failed to send Bodhi results to %s:\n %s' % (nvr, e)
+ print msg
+ self.outputs.append(msg)
+ exc = True
+ # if assert failed (some Bodhi update doesn't exist), end the test as CRASHED
+ if exc:
+ raise e
[PATCH] stop using epoch in NVR variables
by Kamil Paral
stop using epoch in NVR variables
We have found out that Koji uses NVR as a unique identifier (though it
supports epochs). That means we don't have to get messy with epochs
when it's not necessary; working with NVR is usually sufficient.
This patch converts all ENVR references to NVR ones, where appropriate.
Let's keep it simple.
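For context, an NVR is name-version-release joined by dashes; since package names may themselves contain dashes, it has to be split from the right. A rough stand-in for what rpmUtils.miscutils.splitFilename does (simplified: no epoch, no arch handling):

```python
def split_nvr(nvr):
    """Split 'name-version-release' on the last two dashes.

    E.g. 'python-nose-0.11.2-2.fc14' -> ('python-nose', '0.11.2', '2.fc14').
    """
    name, version, release = nvr.rsplit('-', 2)
    return name, version, release
```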
diff --git a/hooks/post-bodhi-update/README b/hooks/post-bodhi-update/README
index 4e72282..38460ce 100644
--- a/hooks/post-bodhi-update/README
+++ b/hooks/post-bodhi-update/README
@@ -10,13 +10,13 @@ The required arguments are:
plus one of:
--updateid: the bodhi-provided Update ID for this update, or
--title: the title of the update
-and finally a list of the ENVRs of the package builds in this update.
+and finally a list of the NVRs of the package builds in this update.
AutoQA tests can expect the following variables from post-bodhi-update hook:
name: title of the update request (one of 'name' or 'id' may be empty)
id: Unique ID of the update request (one of 'name' or 'id' may be empty)
kojitag: target koji tag for this update
- envrs: list of package envrs in this update request
+ nvrs: list of package NVRs in this update request
NOTE:
The watcher is actually in the post-koji-build directory.
diff --git a/hooks/post-bodhi-update/hook.py b/hooks/post-bodhi-update/hook.py
index bb9d14c..b6e3a50 100644
--- a/hooks/post-bodhi-update/hook.py
+++ b/hooks/post-bodhi-update/hook.py
@@ -25,7 +25,7 @@ name = 'post-bodhi-update'
def extend_parser(parser):
'''Extend the given OptionParser object with settings for this hook.'''
- parser.set_usage('%%prog %s [options] PKG_ENVR [PKG_ENVR ...]' % name)
+ parser.set_usage('%%prog %s [options] PKG_NVR [PKG_NVR ...]' % name)
group = optparse.OptionGroup(parser, '%s options' % name)
group.add_option('--title', default='',
help='Title for the given update')
@@ -44,13 +44,13 @@ def process_testdata(parser, opts, args, **extra):
populated.'''
if not args:
- parser.error('No ENVR was specified as a test argument!')
+ parser.error('No NVR was specified as a test argument!')
if not opts.targettag:
parser.error('--target-tag is a mandatory option!')
if not opts.title and not opts.updateid:
parser.error('At least one of --title or --updateid must be supplied!')
- testdata = {'envrs': args,
+ testdata = {'nvrs': args,
'kojitag': opts.targettag,
'id': opts.updateid,
'name': opts.title}
diff --git a/hooks/post-koji-build/README b/hooks/post-koji-build/README
index 2989bb9..7b4a4f1 100644
--- a/hooks/post-koji-build/README
+++ b/hooks/post-koji-build/README
@@ -5,9 +5,9 @@ to ensure the changes are reasonable, simple functional tests, etc.
The required arguments are:
--kojitag: the koji tag applied to the new build (e.g. dist-f14-updates-candidate)
-and finally the ENVR of the package build to be tested.
+and finally the NVR of the package build to be tested.
AutoQA tests can expect the following variables from post-koji-build hook:
- envr: package ENVR (epoch may be skipped if 0)
+ nvr: package NVR
name: package name
kojitag: koji tag applied to this package
diff --git a/hooks/post-koji-build/hook.py b/hooks/post-koji-build/hook.py
index af29b59..d0f5619 100644
--- a/hooks/post-koji-build/hook.py
+++ b/hooks/post-koji-build/hook.py
@@ -26,7 +26,7 @@ name = 'post-koji-build'
def extend_parser(parser):
'''Extend the given OptionParser object with settings for this hook.'''
- parser.set_usage('%%prog %s [options] PACKAGE_ENVR' % name)
+ parser.set_usage('%%prog %s [options] PACKAGE_NVR' % name)
group = optparse.OptionGroup(parser, '%s options' % name)
group.add_option('-k', '--kojitag', default='',
help='Koji tag that has just been applied to this new build')
@@ -41,14 +41,14 @@ def process_testdata(parser, opts, args, **extra):
populated.'''
if not args:
- parser.error('No ENVR was specified as a test argument!')
+ parser.error('No NVR was specified as a test argument!')
if not opts.kojitag:
parser.error('--kojitag is a mandatory option!')
- # parse name from ENVR
- envr = args[0]
- nvrea = rpmUtils.miscutils.splitFilename(envr + '.noarch')
+ # parse name from NVR
+ nvr = args[0]
+ nvrea = rpmUtils.miscutils.splitFilename(nvr + '.noarch')
name = nvrea[0]
- testdata = {'envr': envr, 'kojitag': opts.kojitag, 'name': name}
+ testdata = {'nvr': nvr, 'kojitag': opts.kojitag, 'name': name}
return testdata
diff --git a/hooks/post-koji-build/watch-koji-builds.py b/hooks/post-koji-build/watch-koji-builds.py
index 0d37600..c05a1de 100755
--- a/hooks/post-koji-build/watch-koji-builds.py
+++ b/hooks/post-koji-build/watch-koji-builds.py
@@ -375,11 +375,8 @@ class KojiWatcher(object):
for arch in testarches:
harnesscall += ['--arch', arch]
- if b['epoch']:
- envr = '%s:%s' % (b['epoch'], b['nvr'])
- else:
- envr = b['nvr']
- harnesscall.append(envr)
+ nvr = b['nvr']
+ harnesscall.append(nvr)
# if some builds were skipped during bodhi-event planning
# add the nvrs to the respective call
@@ -416,7 +413,7 @@ class KojiWatcher(object):
if len(new_builds[tag]) == 0:
continue
harness_arches = set()
- harness_envrs = []
+ harness_nvrs = []
p_tag = tag.replace('-pending', '')
repoarches = set(repoinfo.getrepo_by_tag(p_tag).get("arches"))
@@ -429,12 +426,8 @@ class KojiWatcher(object):
testarches = set(repoarches).intersection(arches)
harness_arches.update(testarches)
- if b['epoch']:
- envr = '%s:%s' % (b['epoch'], b['nvr'])
- else:
- envr = b['nvr']
-
- harness_envrs.append(envr)
+ nvr = b['nvr']
+ harness_nvrs.append(nvr)
harnesscall = ['autoqa']
# bodhi events
@@ -447,7 +440,7 @@ class KojiWatcher(object):
for a in harness_arches:
harnesscall += ['--arch', a]
- harnesscall.extend(harness_envrs)
+ harnesscall.extend(harness_nvrs)
if self.dry_run or self.verbose:
print " ".join(harnesscall)
diff --git a/lib/python/koji_utils.py b/lib/python/koji_utils.py
index 84c4959..7290680 100644
--- a/lib/python/koji_utils.py
+++ b/lib/python/koji_utils.py
@@ -48,7 +48,7 @@ class SimpleKojiClientSession(koji.ClientSession):
def latest_by_tag(self, tag, pkgname, max_evr=None):
'''Get the latest package for the given name in the given tag. If you
- set max_nvr, it is *exclusive*.
+ set max_evr, it is *exclusive*.
max_evr = (epoch, version, release)
'''
# allow epoch to be empty, transcode it to zero
diff --git a/tests/anaconda_storage/anaconda_storage.py b/tests/anaconda_storage/anaconda_storage.py
index 51f5044..24ff43f 100755
--- a/tests/anaconda_storage/anaconda_storage.py
+++ b/tests/anaconda_storage/anaconda_storage.py
@@ -55,10 +55,10 @@ class anaconda_storage(AutoQATest):
# Build arguments for the test script (runtest.py)
repos = []
if kwargs.get("hook","") == "post-koji-build":
- envr = kwargs.get('envr','')
+ nvr = kwargs.get('nvr','')
kojitag = kwargs.get('kojitag','')
name = kwargs.get('name','')
- assert envr.startswith("anaconda"), "This test is only applies to anaconda (not '%s')" % envr
+ assert nvr.startswith("anaconda"), "This test is only applies to anaconda (not '%s')" % nvr
# Add requested repo
repo = repoinfo.getrepo_by_tag(kojitag)
diff --git a/tests/compose_tree/compose_tree.py b/tests/compose_tree/compose_tree.py
index 4a662fb..9e7c5d7 100644
--- a/tests/compose_tree/compose_tree.py
+++ b/tests/compose_tree/compose_tree.py
@@ -47,8 +47,8 @@ class compose_tree(AutoQATest):
assert kwargs['hook'] in ['post-koji-build',], \
'Unexpected hook argument: %s' % kwargs['hook']
- assert kwargs.has_key('envr'), \
- 'Missing required argument: envr'
+ assert kwargs.has_key('nvr'), \
+ 'Missing required argument: nvr'
assert kwargs.has_key('kojitag'), \
'Missing required argument: kojitag'
@@ -71,7 +71,7 @@ class compose_tree(AutoQATest):
# Run test
cmd = '%s/compose_tree.sh -r %s -d %s -e "%s" ' % (self.bindir, \
- releasever, self.resultsdir, kwargs.get('envr'))
+ releasever, self.resultsdir, kwargs.get('nvr'))
cmd = 'su -c "%s" - autotest' % cmd
try:
out = utils.system_output(cmd, retain_output=True)
diff --git a/tests/compose_tree/compose_tree.sh b/tests/compose_tree/compose_tree.sh
index 4863832..b5e67e9 100755
--- a/tests/compose_tree/compose_tree.sh
+++ b/tests/compose_tree/compose_tree.sh
@@ -10,7 +10,7 @@ Options:
fedora-14, fedora-rawhide)
-a ARCH Value used for \$basearch
-d RESULTDIR Directory to store results (default: \$PWD)
- -e ENVR One or more RPM ENVR's to download from koji
+ -e NVR One or more RPM NVR's to download from koji
-t TMPDIR Temporary directory to use (default: /tmp)
EOF
exit 1
@@ -27,7 +27,7 @@ detect_releasever() {
elif [ -f /etc/redhat-release -a ! -L /etc/redhat-release ]; then
RELEASE="epel"
VER=$(gawk '{print $5}' /etc/redhat-release)
- case $VER in
+ case $VER in
4* ) VER=4 ;;
5* ) VER=5 ;;
6* ) VER=6 ;;
@@ -56,7 +56,7 @@ BASEARCH=$(uname -i)
RELEASEVER=$(detect_releasever)
RESULTSDIR=${PWD}/results/$(date +%Y%m%d)
TMPDIR=/tmp
-ENVR=""
+NVR=""
MOCKDIR=/etc/mock
# Process arguments
@@ -67,7 +67,7 @@ do
r ) RELEASEVER=$OPTARG ;;
d ) RESULTSDIR=$OPTARG ;;
t ) TMPDIR=$OPTARG ;;
- e ) ENVR=$OPTARG ;;
+ e ) NVR=$OPTARG ;;
\?|h ) usage ;;
* ) usage ;;
esac
@@ -92,7 +92,7 @@ cat > ${SETUP} << EOF
#!/bin/sh
## Use koji to download updated packages
-for E in ${ENVR}
+for E in ${NVR}
do
koji download-build --arch noarch --arch ${BASEARCH} \$E
done
@@ -101,7 +101,7 @@ done
yum -y localupdate *.rpm
## Make sure the requested packages were installed
-rpm -q ${ENVR}
+rpm -q ${NVR}
exit \$?
EOF
diff --git a/tests/initscripts/control.autoqa b/tests/initscripts/control.autoqa
index 00ed666..1fede00 100644
--- a/tests/initscripts/control.autoqa
+++ b/tests/initscripts/control.autoqa
@@ -6,11 +6,11 @@
labels = ['virt']
# because we install the package, the autotest label of the correct distribution
-# must be present (like 'fc13'); strip it from the envr (last part)
-if autoqa_args.has_key('envr'):
- labels.append(autoqa_args['envr'].split('.')[-1])
+# must be present (like 'fc13'); strip it from the NVR (last part)
+if autoqa_args.has_key('nvr'):
+ labels.append(autoqa_args['nvr'].split('.')[-1])
-# we want to run initscripts just for post-koji-build for now
+# we want to run initscripts just for post-koji-build for now
# (and post-bodhi-build in the near future)
if hook not in ['post-koji-build']:
execute = False
diff --git a/tests/initscripts/initscripts.py b/tests/initscripts/initscripts.py
index 51462c9..2261ceb 100644
--- a/tests/initscripts/initscripts.py
+++ b/tests/initscripts/initscripts.py
@@ -104,10 +104,10 @@ class initscripts(AutoQATest):
def run_once(self, kojitag, **kwargs):
super(self.__class__, self).run_once()
if kwargs['hook'] == 'post-koji-build':
- envrs = [kwargs['envr']]
- update_id = kwargs['envr']
+ nvrs = [kwargs['nvr']]
+ update_id = kwargs['nvr']
elif kwargs['hook'] == 'post-bodhi-update':
- envrs = kwargs['envrs']
+ nvrs = kwargs['nvrs']
update_id = kwargs['name'] or kwargs['id']
self.result = 'PASSED'
@@ -117,13 +117,13 @@ class initscripts(AutoQATest):
self.outputs = []
self.highlights = []
- for envr in envrs:
+ for nvr in nvrs:
# add header
- msg = '%s\n%s\n%s' % ('='*40, envr, '='*40)
+ msg = '%s\n%s\n%s' % ('='*40, nvr, '='*40)
print msg
self.outputs.append(msg)
#find all tests for package $name
- nvrea = rpmUtils.miscutils.splitFilename(envr + '.noarch')
+ nvrea = rpmUtils.miscutils.splitFilename(nvr + '.noarch')
name = nvrea[0]
testdir = os.path.join(self.bindir, "./tests/%s" % name)
assert os.path.isdir(testdir), "No initscript checker found for package %s" % name
@@ -140,7 +140,7 @@ class initscripts(AutoQATest):
#install packages from koji
koji = autoqa.koji_utils.SimpleKojiClientSession()
- pkgurls = koji.nvr_to_urls(envr, arches = os.uname()[-1])
+ pkgurls = koji.nvr_to_urls(nvr, arches = os.uname()[-1])
rpms = []
print "Saving RPMs to %s" % self.rpmdir
#download packages
diff --git a/tests/rpmguard/rpmguard.py b/tests/rpmguard/rpmguard.py
index faba8e0..c51a9a1 100644
--- a/tests/rpmguard/rpmguard.py
+++ b/tests/rpmguard/rpmguard.py
@@ -46,28 +46,28 @@ class rpmguard(AutoQATest):
def run_once(self, kojitag, **kwargs):
super(self.__class__, self).run_once()
if kwargs['hook'] == 'post-koji-build':
- envrs = [kwargs['envr']]
- update_id = kwargs['envr']
+ nvrs = [kwargs['nvr']]
+ update_id = kwargs['nvr']
elif kwargs['hook'] == 'post-bodhi-update':
- envrs = kwargs['envrs']
+ nvrs = kwargs['nvrs']
update_id = kwargs['name'] or kwargs['id']
self.result = 'PASSED'
# order for evaluation of final result; higher index means preference
self.result_order = ('PASSED','INFO','FAILED','ABORTED')
- self.envr_results = {} # results for invidual packages
+ self.nvr_results = {} # results for invidual packages
self.outputs = []
self.highlights = []
- for envr in envrs:
+ for nvr in nvrs:
# add header
- msg = '%s\n%s\n%s' % ('='*40, envr, '='*40)
+ msg = '%s\n%s\n%s' % ('='*40, nvr, '='*40)
print msg
self.outputs.append(msg)
- # run the test for this envr
- (result, highlights, outputs, warn_count) = self.test_envr(envr, kojitag)
+ # run the test for this nvr
+ (result, highlights, outputs, warn_count) = self.test_nvr(nvr, kojitag)
# collect output
- self.envr_results[envr] = result
+ self.nvr_results[nvr] = result
if self.result_order.index(result) > self.result_order.index(self.result):
self.result = result
if highlights:
@@ -82,7 +82,7 @@ class rpmguard(AutoQATest):
# email results to mailing list and to pkg owner if they optin
repo = repoinfo.getrepo_by_tag(kojitag)
- pkg_name = rpmUtils.miscutils.splitFilename(envr + '.noarch')[0]
+ pkg_name = rpmUtils.miscutils.splitFilename(nvr + '.noarch')[0]
send_optin_email = getbool(self.config.get('notifications', 'send_optin_email'))
if repo is not None and send_optin_email and \
autoqa.util.check_opt_in(pkg_name, repo['collection_name']):
@@ -99,16 +99,16 @@ class rpmguard(AutoQATest):
# create result line like "1 PASSED, 2 FAILED, 3 INFO"
result_count = []
for res in self.result_order:
- if res in self.envr_results.values():
- count = len([k for k in self.envr_results.keys() if self.envr_results[k] == res])
+ if res in self.nvr_results.values():
+ count = len([k for k in self.nvr_results.keys() if self.nvr_results[k] == res])
result_count.append('%d %s' % (count, res))
result_count = ', '.join(result_count)
self.summary = '%s for %s' % (result_count, update_id)
- def test_envr(self, envr, kojitag):
+ def test_nvr(self, nvr, kojitag):
'''
- Test a single ENVR.
+ Test a single NVR.
Returns (result, highlights, outputs, warn_count).
'''
result = 'PASSED'
@@ -127,20 +127,20 @@ class rpmguard(AutoQATest):
# get the most recent release available
# add .noarch to parse filename correctly
- nvrea = rpmUtils.miscutils.splitFilename(envr + '.noarch')
+ nvrea = rpmUtils.miscutils.splitFilename(nvr + '.noarch')
name = nvrea[0]
lastBuild = koji.list_previous_release(name, kojitag,
max_evr=(nvrea[3], nvrea[1], nvrea[2]))
# if there is no such build, we don't have anything to compare
if not lastBuild:
- msg = "N: There is no previous build of %s in %s tag (or its parents)." % (envr, kojitag)
+ msg = "N: There is no previous build of %s in %s tag (or its parents)." % (nvr, kojitag)
print msg
outputs.append(msg)
return get_result()
# now we need list of RPMs available for each build
- new_rpms = koji.nvr_to_rpms(envr, src=False)
+ new_rpms = koji.nvr_to_rpms(nvr, src=False)
old_rpms = koji.nvr_to_rpms(lastBuild['nvr'], src=False)
# and match the RPMs according to build name and architecture as
# (old one, new one)
diff --git a/tests/rpmlint/rpmlint.py b/tests/rpmlint/rpmlint.py
index b1ebb35..73d798f 100644
--- a/tests/rpmlint/rpmlint.py
+++ b/tests/rpmlint/rpmlint.py
@@ -46,24 +46,24 @@ class rpmlint(AutoQATest):
def run_once(self, kojitag, **kwargs):
super(self.__class__, self).run_once()
if kwargs['hook'] == 'post-koji-build':
- envrs = [kwargs['envr']]
- update_id = kwargs['envr']
+ nvrs = [kwargs['nvr']]
+ update_id = kwargs['nvr']
elif kwargs['hook'] == 'post-bodhi-update':
- envrs = kwargs['envrs']
+ nvrs = kwargs['nvrs']
update_id = kwargs['name'] or kwargs['id']
self.result = 'PASSED'
# order for evaluation of final result; higher index means preference
self.result_order = ('PASSED','INFO','FAILED','ABORTED')
- self.envr_results = {} # results for invidual packages
+ self.nvr_results = {} # results for invidual packages
self.outputs = []
self.highlights = []
koji = autoqa.koji_utils.SimpleKojiClientSession()
- for envr in envrs:
+ for nvr in nvrs:
# add header
- msg = '%s\n%s\n%s' % ('='*40, envr, '='*40)
+ msg = '%s\n%s\n%s' % ('='*40, nvr, '='*40)
print msg
self.outputs.append(msg)
@@ -74,7 +74,7 @@ class rpmlint(AutoQATest):
os.remove(os.path.join(self.rpmdir, rpm))
# download packages
- pkgurls = koji.nvr_to_urls(envr)
+ pkgurls = koji.nvr_to_urls(nvr)
print "Saving RPMs to %s" % self.rpmdir
for p in pkgurls:
print "Grabbing %s" % p
@@ -109,14 +109,14 @@ class rpmlint(AutoQATest):
result = 'INFO'
# collect output
- self.envr_results[envr] = result
+ self.nvr_results[nvr] = result
if self.result_order.index(result) > self.result_order.index(self.result):
self.result = result
self.outputs.append(outputs)
# email results to mailing list and to pkg owner if they optin
repo = repoinfo.getrepo_by_tag(kojitag)
- pkg_name = rpmUtils.miscutils.splitFilename(envr + '.noarch')[0]
+ pkg_name = rpmUtils.miscutils.splitFilename(nvr + '.noarch')[0]
send_optin_email = getbool(self.config.get('notifications', 'send_optin_email'))
if repo is not None and send_optin_email and \
autoqa.util.check_opt_in(pkg_name, repo['collection_name']):
@@ -133,8 +133,8 @@ class rpmlint(AutoQATest):
# create result line like "1 PASSED, 2 FAILED, 3 INFO"
result_count = []
for res in self.result_order:
- if res in self.envr_results.values():
- count = len([k for k in self.envr_results.keys() if self.envr_results[k] == res])
+ if res in self.nvr_results.values():
+ count = len([k for k in self.nvr_results.keys() if self.nvr_results[k] == res])
result_count.append('%d %s' % (count, res))
result_count = ', '.join(result_count)
self.summary = '%s for %s' % (result_count, update_id)
depcheck test, version 2
by Will Woods
This patch adds the depcheck test. Detailed change info/development history
can be found by poking around the 'depcheck' branch.
This version of the patch should address all the issues found by James and
Kamil[1] in their reviews. (Thanks for the feedback, guys.)
The most interesting bit is that depcheck_main was split into two parts:
do_depcheck and print_depcheck_results. do_depcheck returns much more info
than depcheck_main did, which allows for more unit tests and the like.
(nearly) full list of changes since the last version of this patch:
* test:
- separate files for CLI, library, and unittests
- handle pending/accepted like normal yum repos (use YumBase.add_enable_repo)
- handle new packages (i.e. not updates to existing packages) correctly
- refactor depcheck_main into do_depcheck and print_depcheck_results
- add unittests for new packages and ignored packages
* wrapper:
- install mash in setup()
- add missing 'import re'
- initialize results dict
- fetch_nvrs(): fix missing 'koji', fix filename arg, download status output
- use new query_update() and bodhi_already_commented() methods
- fix NameError caused by missing 'updateid' variable
* control files:
- fix incorrect comment in control.autoqa
[1] ..except the 'tags and timing' problem of pending/accepted changing during
the test. That still needs to be addressed, but first we need code to work with!
tests/depcheck/control | 14 +
tests/depcheck/control.autoqa | 6 +
tests/depcheck/depcheck | 96 +++++++
tests/depcheck/depcheck.py | 185 +++++++++++++
tests/depcheck/depcheck_lib.py | 337 +++++++++++++++++++++++
tests/depcheck/depcheck_unittests.py | 489 ++++++++++++++++++++++++++++++++++
6 files changed, 1127 insertions(+), 0 deletions(-)
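The do_depcheck / print_depcheck_results split follows a common testability pattern: keep the computation in a pure function that returns data, and format that data separately. A generic sketch of the idea (hypothetical names and fields, not the actual depcheck_lib API):

```python
def do_check(installable, broken):
    """Pure computation: return structured results, print nothing."""
    return {'ok': sorted(installable),
            'broken': sorted(broken),
            'result': 'FAILED' if broken else 'PASSED'}

def print_results(results):
    """Formatting only, built on the data do_check returns."""
    lines = ['RESULT: %s' % results['result']]
    lines += ['  ok: %s' % nvr for nvr in results['ok']]
    lines += ['  broken: %s' % nvr for nvr in results['broken']]
    return '\n'.join(lines)
```

Because do_check never prints, unit tests can assert directly on the returned dict instead of scraping output.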
-w
Re: depcheck test, version 2
by Kamil Paral
Forwarding wwoods' replies here.
----- Forwarded Message -----
From: "Will Woods" <wwoods(a)redhat.com>
To: "Kamil Paral" <kparal(a)redhat.com>
Sent: Friday, February 11, 2011 8:37:02 PM
Subject: Re: depcheck test, version 2
On Fri, 2011-02-11 at 14:26 -0500, Kamil Paral wrote:
> ----- Original Message -----
> > Ha, yeah, I forgot to wire up --selftest. You can run the selftests
> > just
> > by doing ./depcheck_unittest.py.
>
> One question to wwoods:
> depcheck_unittest.py needs to be run on amd64 because of those
> multilib test cases, right? I haven't managed to get it running on
> i386.
>
> If that is the case, I'll add a short architecture detection to the
> beginning of the test and print out a warning if it is executed on the
> wrong arch.
...ick. Yes, the tests that exist are currently correct for x86_64 only.
They used to also work on i386 but not anymore, I guess. We should
probably have different versions of the multilib tests that expect
different results, depending on whether you're on a multiarch or
singlearch system.
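The arch guard Kamil proposes could be as small as this (a sketch; in the real test it would sit at the top of depcheck_unittest.py):

```python
import platform

def check_arch(required='x86_64'):
    """Warn when the multilib unit tests run on an unsupported arch.

    Returns True when the machine arch matches 'required'; the caller
    can then skip or abort the multilib cases.
    """
    machine = platform.machine()
    if machine != required:
        print('WARNING: these tests expect %s results, not %s'
              % (required, machine))
        return False
    return True
```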
>
> One question to sheer public:
(I'm not sure what you mean by "sheer public" here? But anyway..)
> I have found out that depcheck_unittest.py requires a few packages I
> didn't have installed, specifically python-rpmfluff, rpm-build and
> mash. How are we gonna track these dependencies? We use autoqa.spec
> for managing runtime dependencies of autoqa and we use setup() methods
> for managing dependencies of individual tests. But what about
> dependencies of our unittests?
>
> Should we add those to the autoqa.spec? Should we add "make test-deps"
> command to install those deps?
Probably the latter one - the unittests are only interesting to people
who are probably going to be modifying the test and testing its changes,
so we don't want to pull those in for everyone.
It might make sense to add a BuildRequires: line for them, though?
>
> > Yeah, this is unnecessary but not technically wrong. Mostly I wanted
> to
> > avoid encoding any multilib policy/logic into the test wrapper, but
> if
> > you want to do:
> >
> > if arch == 'i686':
> > download_pkgs(arch)
> > else:
> > download_pkgs(both_arches)
> >
> > that's fine.
>
> We will probably need to hold off on this because of
> https://fedorahosted.org/autoqa/ticket/272
>
> I don't know how to detect the OS arch, and the HW arch may not match.
Right. And hopefully, downloading packages should be a fairly quick
operation when we're running this in production; it's not a huge loss if
it takes 20 seconds instead of 10 seconds on i686. We can fix that
later.
-w