#154: pst: Abort when old and new package versions match
-------------------------+--------------------------------------------------
Reporter: kparal | Owner: psss
Type: enhancement | Status: new
Priority: major | Milestone: Package Sanity Tests
Component: tests | Version: 1.0
Keywords: |
-------------------------+--------------------------------------------------
Currently this command works:
{{{
pst -y -o tzdata-2009o-2.fc12 -n tzdata-2009o-2.fc12
}}}
but it prints many FAILs in the process. I think we could just detect
whether any of the old packages is equal to any of the new packages and
abort the test in that case. There is no sense in testing the same
package anyway, is there?
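A minimal sketch of what such a guard might look like (the function and
argument names here are hypothetical, not actual pst code): compare the
old and new NEVR strings up front and abort before any checks run.

```python
import sys

def abort_if_versions_match(old_pkgs, new_pkgs):
    """Abort when any old package NEVR equals a new one.

    old_pkgs / new_pkgs: lists of name-version-release strings,
    e.g. 'tzdata-2009o-2.fc12'. Returns the (empty) set of
    duplicates when the run may proceed; exits on overlap.
    """
    duplicates = set(old_pkgs) & set(new_pkgs)
    if duplicates:
        print("Old and new packages are identical: %s" % ", ".join(sorted(duplicates)))
        print("Nothing to test, aborting.")
        sys.exit(1)
    return duplicates
```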
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/154>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
#151: RFE: opt-in email from autoqa's post-koji-build tests
-----------------------+----------------------------------------------------
Reporter: skvidal | Owner:
Type: task | Status: new
Priority: minor | Milestone:
Component: reporting | Version: 1.0
Keywords: |
-----------------------+----------------------------------------------------
Right now all koji builds are being tested with rpmguard and rpmlint on
the backend.
The output is going to:
https://fedorahosted.org/mailman/listinfo/autoqa-results
It'd be great if there were a way for a maintainer to opt-in to this kind
of info for their package.
Not sure where the info would be stored - but the results from each
build, instead of being emailed to autoqa-results, would be sent to
$srpmname-owner(a)fedoraproject.org, and then anyone on that email
alias gets the notices.
so what we need is:
1. a place to say which packages have opted in
2. maybe a partial solution to ticket 148
3. that sounds like it.
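To make point 1 concrete, here is one possible shape for the routing
logic (everything here - the function name, the opt-in set, and the
fallback list address - is invented for illustration; only the
owner-alias pattern follows the ticket's suggestion):

```python
# Placeholder address for the default destination; the real list
# address may differ.
AUTOQA_RESULTS_LIST = "autoqa-results@lists.fedorahosted.org"

def result_recipient(srpm_name, opted_in):
    """Pick the address a build's test results should be mailed to.

    opted_in: set of package names whose maintainers opted in to
    receiving results directly.
    """
    if srpm_name in opted_in:
        return "%s-owner@fedoraproject.org" % srpm_name
    return AUTOQA_RESULTS_LIST
```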
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/151>
Hello,
I have had similar ideas in my head for a long time; I've now created
a ticket:
https://fedorahosted.org/autoqa/ticket/153
Of course rpmguard should be modular, at least from a source-code
perspective. That would allow easier addition of new checks. Completely
separate plugins, yum-style, are maybe too much, but that's open for
consideration. I really like the modular approach used in rpmlint.
I created rpmguard as a quick hack and also as my first longer Python
script. If time allows, I would like to convert it into a tool with a
more decent architecture (modular design, among other things), which
would also let me get my hands dirty with more complex Python
programming - I'm looking forward to it.
Now the only question is the time frame. I believe our top priority
right now is to have AutoQA with ResultDB up and running, which will
finally allow us to inform people about glitches in our/their
repositories and packages. Having nicely structured test scripts comes
only after that, I assume. Right, wrong?
So it can take some time until I'm given the opportunity
to work on it. But hey, any contribution is welcome,
Seth (or anyone else)! :)
Greetings folks,
Please don't take this post to mean, "drop what you are doing and let's
focus on rpmguard". I believe our current focus is correctly centered
around automating the package update acceptance test plan [1]. Kamil
and I have discussed some long-term plans for rpmguard, but I also
wanted to bring the discussion to the list for group consumption/review.
Quick background: rpmguard does a fair amount of comparative tests
against the old and new sets of packages. Heck, the tests are even
documented [2] (nicely done, Kamil!). Today, Seth Vidal directed me to
bug #564018 and asked whether rpmguard had support for testing whether a
package has a *large* change in its %provides. In the bug, the erlang
package went from 50 provides to 22157 provides. This package did go
through rpmguard and warnings were generated (for a good laugh, see
[3]). So kudos are in order for capturing the change in an automated
manner.
What interested me about this exercise was how a new test might be added
to rpmguard. The power of rpmguard is in abstracting the details of
locating the previous packages and providing a common framework for
comparisons against the two package sets. Presently, the tests exist in
the main driver.
How do we want to extend this in the future to make it easier/faster to
support new comparison tests? Should tests exist in stand-alone python
scripts, where they all accept a common set of arguments? It seems like
rpmlint is structured this way [4], is this good/bad ... does this make
user-contributed comparative tests easier?
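Purely for discussion, an rpmlint-style common interface for
stand-alone checks might look like the sketch below (the class names,
the dict-based header representation, and the threshold are all made
up - this is not how rpmguard is structured today):

```python
class ComparativeCheck(object):
    """Interface a stand-alone check could implement; the driver
    locates the old/new package sets and calls run() on each check."""
    name = "unnamed-check"

    def run(self, old_hdr, new_hdr):
        """Return a list of (severity, message) tuples."""
        raise NotImplementedError

class ProvidesCountCheck(ComparativeCheck):
    """Warn on a large jump in Provides, as in bug #564018."""
    name = "provides-count"
    factor = 10  # arbitrary threshold, chosen for illustration

    def run(self, old_hdr, new_hdr):
        old_n = len(old_hdr.get("provides", []))
        new_n = len(new_hdr.get("provides", []))
        if old_n and new_n > old_n * self.factor:
            return [("WARNING", "provides grew from %d to %d" % (old_n, new_n))]
        return []
```

The appeal of such a split is that a user-contributed check only has to
implement run(), while the framework keeps owning package location and
pairing.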
Something Kamil proposed a while back, should rpmguard be a stand-alone
tool (much like rpmlint)? Instead of being bundled inside autoqa,
anyone could do: `yum install rpmguard`
Apologies for the ranty nature of this email; I'm still thinking this
through. My main objective is to get a sense of where rpmguard should
go. Hopefully these thoughts can lead to a wiki page or Trac roadmap,
and something we could implement once our immediate objectives are
behind us.
Thanks,
James
[1]
https://fedoraproject.org/wiki/QA:Package_Update_Acceptance_Test_Plan
[2] https://fedorahosted.org/autoqa/wiki/RpmguardChecks
[3]
https://fedorahosted.org/pipermail/autoqa-results/2010-April/017795.html
[4] http://rpmlint.zarb.org/cgi-bin/trac.cgi/browser/trunk
Hello gang,
take a look at the patch attached - both autotest_server and job.tag are
stored in the autoqa_conf variable.
To get the job.tag there, I had to do a little 'hack' - the autoqa_conf
is stored in the control file as a string, so when creating this string,
I add a parameter
jobtag = %s
to the [general] section of the config, and append
% (job.tag, )
behind the string, so it gets evaluated when the control file is
imported (i.e. when the job.tag value is known).
The respective part of the control file then looks like this:
autoqa_conf = '''
[test]
smtpserver = localhost
result_email =
mail_from = autoqa(a)fedoraproject.org
[general]
autotest_server = dhcp-30-103.brq.redhat.com
jobtag = %s
notification_email =
local = false
hookdir = /usr/share/autoqa
testdir = /usr/share/autotest/client/site_tests
''' % (job.tag, )
so nothing needs to change in control files (no new parameter is
required), only the tests will need minor adjustments in the initialize
method:
def initialize(self, envr, name, kojitag, config):
    self.config = config_loader(config, self.tmpdir)
    autotest_server = self.config.get('general', 'autotest_server')
    jobtag = self.config.get('general', 'jobtag')
    self.autotest_url = "http://%s/results/%s/" % (autotest_server, jobtag)
And then of course the test needs to take advantage of the
self.autotest_url variable (i.e. add the URL to the e-mail).
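As a sketch of that last step (the helper name and wording below are
invented, not part of the patch), a test could fold the URL into its
mail body like this:

```python
def compose_mail_body(test_output, autotest_url):
    """Append the full-results URL to a test's notification e-mail body."""
    return "%s\n\nFull results: %s\n" % (test_output.rstrip(), autotest_url)
```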
What do you think about this approach? I believe it's quite clean
(apart from the minor job.tag 'hack') and I hope you'll like it.
joza
-------
diff --git a/autoqa b/autoqa
index efb94e3..cf8b6ee 100755
--- a/autoqa
+++ b/autoqa
@@ -27,6 +27,7 @@ import optparse
import tempfile
import StringIO
import urlgrabber
+import socket
from ConfigParser import *
from subprocess import call
@@ -37,6 +38,7 @@ conf = {
'testdir': '/usr/share/autotest/client/site_tests',
'hookdir': '/usr/share/autoqa',
'notification_email': '',
+ 'autotest_server': socket.gethostname(),
}
cfg_parser = SafeConfigParser() # used by prep_controlfile
try:
@@ -58,6 +60,10 @@ def prep_controlfile(controlfile, extradata):
prepended with key='value' lines for each item in extradata and an
'autoqa_conf' variable which holds the contents of the autoqa
configfile.
'''
+ # the jobtag must be evaluated on every single testrun (since it changes :-))
+ # so line 'jobtag = %s' is added to the [general] part of the autoqa_conf
+ # string, and then is replaced with job.tag when the code is executed
+ cfg_parser.set('general', 'jobtag', '%s')
controldata = open(controlfile)
(fd, name) = tempfile.mkstemp(prefix='autoqa-control.')
os.write(fd, '# -*- coding: utf-8 -*-\n\n')
@@ -65,7 +71,7 @@ def prep_controlfile(controlfile, extradata):
cfgdata = StringIO.StringIO()
cfg_parser.write(cfgdata)
cfgdata.seek(0)
- os.write(fd, "autoqa_conf = '''\n%s'''\n\n" % cfgdata.read())
+ os.write(fd, "autoqa_conf = '''\n%s''' %% (job.tag, )\n\n" % cfgdata.read())
except IOError:
pass
for k,v in extradata.iteritems():
@@ -139,6 +145,9 @@ parser.add_option('--local', action='store_true',
dest='local',
help='Do not schedule jobs - run test(s) directly on the local machine')
parser.add_option('-l', '--list-tests', action='store_true',
dest='listtests',
help='list the tests for the given hookname - do not run any tests')
+parser.add_option('--autotest-server', action='store', default=None,
+ help='Sets the autotest-server hostname. Used for creating URLs to results.\
+Hostname of the local machine is used by default.')
# Read and validate the hookname
# Check for no args, or just -h/--help
if len(sys.argv) == 1 or sys.argv[1] in ('-h', '--help'):
@@ -197,6 +206,11 @@ for arch in opts.arch:
# N.B. process_testdata may grow new keyword arguments if we add new autoqa
# args that add another loop here..
testdata = hook.process_testdata(opts, args, arch=arch)
+ if not 'autotest_server' in testdata.keys():
+ if opts.autotest_server is not None:
+ testdata['autotest_server'] = opts.autotest_server
+ else:
+ testdata['autotest_server'] = conf['autotest_server']
# XXX FIXME: tests need to be able to indicate that they do not require
# any specific arch (e.g. rpmlint can run on any arch)
for test in testlist: