#104: virtguest.py: accept iso image(s) as install location
--------------------+-------------------------------------------------------
Reporter: wwoods | Owner:
Type: task | Status: new
Priority: major | Milestone: Automate installation test plan
Component: tests | Version: 1.0
Keywords: |
--------------------+-------------------------------------------------------
Allow {{{VirtGuest.create()}}} to accept an iso image (or images) for its
{{{location}}} arg, and automatically choose the appropriate {{{--location
URL}}} or {{{--cdrom IMAGE}}} flag.
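A minimal sketch of how the choice could be made inside
{{{VirtGuest.create()}}}; the helper name and the idea of attaching extra
iso images as additional cdrom disks are assumptions, not existing code:
{{{
def _install_source_args(location):
    """Hypothetical helper: map the `location` arg to virt-install flags.

    `location` may be a URL/tree path or one or more .iso files; iso
    inputs get --cdrom (extra images attached as additional cdrom disks
    is an assumption), everything else falls through to --location.
    """
    images = list(location) if isinstance(location, (list, tuple)) else [location]
    if all(str(img).lower().endswith('.iso') for img in images):
        args = ['--cdrom', images[0]]
        for extra in images[1:]:
            args += ['--disk', 'path=%s,device=cdrom' % extra]
        return args
    return ['--location', images[0]]
}}}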
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/104>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
#308: depcheck: Handle packages with "Conflicts:" declarations better than just
failing them
-------------------------+--------------------------------------------------
Reporter: tflink | Owner:
Type: enhancement | Status: new
Priority: minor | Milestone: Package Update Acceptance Test Plan
Component: core | Keywords:
-------------------------+--------------------------------------------------
In a recent
[http://autoqa.fedoraproject.org/results/82955-autotest/qa03.c.fedoraproject…
test for mediawiki on f13], depcheck rejected the update because it
conflicted with php-common.
The package conflicts with php-common because of a "Conflicts: php-common
= 5.3.1" in the spec file for mediawiki starting with 1.16.2-56
([http://pkgs.fedoraproject.org/gitweb/?p=mediawiki.git;a=blob;f=mediawiki.sp…
current spec file])
I can't imagine that this is the only package with a "Conflicts:"
declaration, and I don't think we should simply be failing packages like
that. Maybe the "Conflicts:" should be checked somewhere; I'm just
thinking that somewhere should be other than depcheck.
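If the check does move out of depcheck, here is a tiny sketch of how the
declared conflicts could be read straight from a package via rpm's header
tags (what to do with a hit is deliberately left open):
{{{
import rpm

def declared_conflicts(rpm_path):
    """Return the (name, version) pairs a package declares in Conflicts:."""
    ts = rpm.TransactionSet()
    ts.setVSFlags(rpm._RPMVSF_NOSIGNATURES)  # unsigned test packages are fine
    fo = open(rpm_path)
    try:
        hdr = ts.hdrFromFdno(fo.fileno())
    finally:
        fo.close()
    return zip(hdr[rpm.RPMTAG_CONFLICTNAME], hdr[rpm.RPMTAG_CONFLICTVERSION])
}}}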
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/308>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
#248: depcheck: simultaneous run may report incorrect results
--------------------+-------------------------------------------------------
Reporter: kparal | Owner:
Type: task | Status: new
Priority: major | Milestone: 0.4.4
Component: tests | Keywords:
--------------------+-------------------------------------------------------
This is taken from:[[BR]]
https://fedorahosted.org/pipermail/autoqa-devel/2010-November/001306.html
I have imagined a situation where two updates are accepted into
the -pending tag in quick succession. Thus we can end up with two
depcheck tests running simultaneously, and the one that was started
earlier can finish later. It could then report an incorrect result.
Do you have an idea what we can do about that? Maybe we can
somehow record the state of the -pending tag at the time the test
starts, and if it is different when the test finishes, we just throw
away the results? Just a wild idea.
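A minimal sketch of that snapshot-and-compare idea, assuming koji's
{{{listTagged()}}} call; the hub URL and tag name below are placeholders:
{{{
import koji

HUB_URL = 'https://koji.fedoraproject.org/kojihub'   # placeholder
PENDING_TAG = 'dist-f14-updates-testing-pending'     # placeholder

def pending_snapshot(session, tag=PENDING_TAG):
    """Return the set of NVRs currently sitting in the -pending tag."""
    return set(build['nvr'] for build in session.listTagged(tag))

session = koji.ClientSession(HUB_URL)
before = pending_snapshot(session)
# ... run depcheck here ...
after = pending_snapshot(session)
if before != after:
    # the tag changed under us, so this run's results may be stale
    print('-pending changed during the run, throwing the results away')
}}}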
Wwoods mentioned that he's working on an --accepted option for depcheck; here
is a short IRC excerpt, hopefully he will provide more info soon:
{{{
(05:16:13 PM) wwoods: I'm working on a mailing list post about the
"accepted" flag
(05:19:13 PM) wwoods: the short answer is this: we mark packages as
"accepted" when they pass depcheck
(05:19:48 PM) wwoods: exactly *how* we mark the package is kind of an
implementation detail - could be a koji tag or a local database of
accepted package names or whatever
(05:20:04 PM) wwoods: but since we're already planning to give +1 karma
for packages that are accepted
(05:21:02 PM) wwoods: then we can (I hope!) just check Bodhi for each
thing in -pending
(05:21:11 PM) wwoods: if it has +1, it was previously accepted
(05:21:21 PM) wwoods: err - has a +1 from autoqa / depcheck
(05:21:51 PM) wwoods: in the future we might want to query resultsdb
instead, or use a second koji tag
(05:22:39 PM) wwoods: I'll still write a (possibly longer) explanation for
the list - or maybe a blog post
}}}
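A rough sketch of the "previously accepted" check described above; the
autoqa username and the comment field names are assumptions, not the real
Bodhi API:
{{{
AUTOQA_USER = 'autoqa'   # assumption: the account depcheck comments from

def previously_accepted(comments):
    """True if autoqa/depcheck already gave this update a +1 comment."""
    return any(c.get('author') == AUTOQA_USER and c.get('karma', 0) > 0
               for c in comments)

# anything in -pending that already carries an autoqa +1 can be treated
# as accepted and need not be re-judged by a slower, older run
}}}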
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/248>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
#312: depcheck: RuntimeError: maximum recursion depth exceeded
--------------------+-------------------------------------------------------
Reporter: kparal | Owner:
Type: defect | Status: new
Priority: major | Milestone: Hot issues
Component: tests | Keywords:
--------------------+-------------------------------------------------------
This happened while running:
{{{
# autoqa post-bodhi-update-batch --targettag dist-f14-updates-testing --arch x86_64 --arch i386 root-5.28.00a-1.fc14 -t depcheck --local
}}}
{{{
File "/usr/lib/python2.7/site-
packages/yum/__init__.py", line 3633, in update
tx_return):
File "/usr/lib/python2.7/site-
packages/yum/__init__.py", line 3392, in _newer_update_in_trans
available_pkg)
File "/usr/lib/python2.7/site-
packages/yum/__init__.py", line 3373, in _check_new_update_provides
tx_return.extend(self.update(po=pkg))
File "/usr/lib/python2.7/site-
packages/yum/__init__.py", line 3633, in update
tx_return):
File "/usr/lib/python2.7/site-
packages/yum/__init__.py", line 3392, in _newer_update_in_trans
available_pkg)
File "/usr/lib/python2.7/site-
packages/yum/__init__.py", line 3373, in _check_new_update_provides
tx_return.extend(self.update(po=pkg))
File "/usr/lib/python2.7/site-
packages/yum/__init__.py", line 3633, in update
tx_return):
File "/usr/lib/python2.7/site-
packages/yum/__init__.py", line 3392, in _newer_update_in_trans
available_pkg)
File "/usr/lib/python2.7/site-
packages/yum/__init__.py", line 3373, in _check_new_update_provides
tx_return.extend(self.update(po=pkg))
File "/usr/lib/python2.7/site-
packages/yum/__init__.py", line 3612, in update
updated_pkg =
self.getInstalledPackageObject(updated)
File "/usr/lib/python2.7/site-
packages/yum/__init__.py", line 2811, in getInstalledPackageObject
pkgs = self.rpmdb.searchPkgTuple(pkgtup)
File "/usr/lib/python2.7/site-
packages/yum/packageSack.py", line 114, in searchPkgTuple
return self.searchNevra(name=n, arch=a, epoch=e,
ver=v, rel=r)
File "/usr/lib/python2.7/site-
packages/yum/packageSack.py", line 406, in searchNevra
return
self._computeAggregateListResult("searchNevra", name, epoch, ver, rel,
arch)
File "/usr/lib/python2.7/site-
packages/yum/packageSack.py", line 584, in _computeAggregateListResult
sackResult = apply(method, args)
File "/usr/lib/python2.7/site-
packages/yum/sqlitesack.py", line 46, in newFunc
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-
packages/yum/sqlitesack.py", line 1691, in searchNevra
for pkg in self.searchNames(names=[name]):
File "/usr/lib/python2.7/site-
packages/yum/sqlitesack.py", line 46, in newFunc
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-
packages/yum/sqlitesack.py", line 1270, in searchNames
if self._skip_all():
File "/usr/lib/python2.7/site-
packages/yum/sqlitesack.py", line 840, in _skip_all
if repo not in self._all_excludes:
RuntimeError: maximum recursion depth exceeded
}}}
Full log attached.
Yum bug, or depcheck bug?
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/312>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
#183: Write a script and a daemon to send signals between virt machine and host
----------------------------+-----------------------------------------------
Reporter: kparal | Owner:
Type: task | Status: new
Priority: major | Milestone: Virtualization
Component: infrastructure | Version: 1.0
Keywords: |
----------------------------+-----------------------------------------------
We need to exchange some signals between the virt machine and its host, for
example "autotest job finished, need to revert to previous state". We need
to write a simple script that will send this signal from the virt machine,
and a daemon that will receive the signal on the host machine and act
accordingly.
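A minimal sketch of what the script/daemon pair could look like over TCP;
the address, port and message text are made up for illustration:
{{{
import socket

HOST_ADDR = ('192.168.122.1', 5299)   # host address/port as seen from the guest (made up)

def send_signal(message='autotest job finished'):
    """Guest-side script: send one signal to the host and exit."""
    conn = socket.create_connection(HOST_ADDR)
    conn.sendall(message + '\n')
    conn.close()

def run_daemon(handler):
    """Host-side daemon: receive signals and act on each one."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('', HOST_ADDR[1]))
    srv.listen(5)
    while True:
        conn, _addr = srv.accept()
        message = conn.makefile().readline().strip()
        conn.close()
        handler(message)   # e.g. revert the VM to its previous snapshot
}}}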
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/183>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
> Hello Scott,
>
> I'm not exactly sure I get the idea. The autotest server already
> handles starting the task and collecting results (arrows 2. and 3. in
> the picture I added to
> https://fedorahosted.org/autoqa/ticket/183#comment:description) Ticket
> #183 mainly concerns arrow 4., signaling to the host (of that autotest
> client VM). We can use it for reverting the VM to the previous state
> (I think that's the main reason we need all of this). In the Virtualization
> milestone there are other tickets that cover remaining parts of that
> picture.
>
> So, the problem of ticket #183 is not getting the results (autotest handles
> that for us), but telling the host (of that VM) "do something" right after
> completing the test.
>
> Are we on the same page? Maybe I have missed something. Tell me.
>
> Thanks,
> Kamil
>
Kamil,
We were on a different page and it's completely due to lack-of-sleep
on my end, but that's another story (4-week-old daughter). Thanks for
clarifying. All should be well now :)
Best,
Scott
#258: implement 'make test'
-------------------------+--------------------------------------------------
Reporter: kparal | Owner:
Type: enhancement | Status: new
Priority: major | Milestone: 0.5.0
Component: core | Keywords:
-------------------------+--------------------------------------------------
Some of our modules have unit tests. Let's execute them all with a "make
test" command and report the result. We can use this to periodically check
that nothing has broken.
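A minimal sketch of what a "make test" target could invoke, assuming the
unit tests follow the {{{test_*.py}}} naming pattern (the start directory is
a guess):
{{{
import unittest

# discover and run every test_*.py under the current directory
suite = unittest.defaultTestLoader.discover('.', pattern='test_*.py')
result = unittest.TextTestRunner(verbosity=2).run(suite)
raise SystemExit(0 if result.wasSuccessful() else 1)
}}}
A {{{test:}}} Makefile target would then just run this (or {{{python -m
unittest discover}}} directly) and propagate the exit code.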
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/258>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
#314: Decrease the volume of 'PASSED' email sent to maintainers from bodhi
--------------------+-------------------------------------------------------
Reporter: tflink | Owner:
Type: task | Status: new
Priority: major | Milestone: Hot issues
Component: core | Keywords:
--------------------+-------------------------------------------------------
With the current setup, maintainers are sent an email with every single
comment made by AutoQA.
These emails aren't particularly useful and increase the amount of noise
that maintainers need to filter out.
Find a way to decrease the number of useless emails sent to maintainers.
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/314>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
#298: test.py - split postprocess_iteration reporting into standalone methods
----------------------+-----------------------------------------------------
Reporter: jskladan | Owner:
Type: task | Status: new
Priority: minor | Milestone: 0.5.0
Component: core | Keywords:
----------------------+-----------------------------------------------------
At the moment, postprocess_iteration() handles all of the reporting and
result sending. When run_once() ends, this method takes the content of
self.{result, summary, highlights, outputs} and automagically sends an
email to the mailing list, creates the output.log file, etc.
This approach was very reasonable for tests which actually test just one
thing (update/build/etc), because just a single result needs to be sent.
With the new tests like Depcheck and Upgradepath, we'd love to be able to
force-send several results as the test proceeds - e.g. to be able to send a
bodhi comment for every update, but send just one overall email...
So what I'd like to have is:
1) Take postprocess_iteration() and split it into standalone methods
according to the 'destination' of the report (e.g. send_email(),
create_output_log(), ...). These will be able to take {result, summary,
highlights, outputs} parameters, which will override the 'automagical'
self.{result, summary, highlights, outputs}. I.e. if I set only the result
parameter, then summary, highlights and outputs will be filled from the
'self' variables, and so on.
2) Take the bodhi-reporting method (as used in depcheck and upgradepath),
and move it to the AutoQATest class.
3) Add a method report_results(), again with the same parameters ({result,
summary, highlights, outputs}) and one more (e.g. a dictionary) to control
which reporting methods to call (i.e. to be able to say "send email, store
in resultsdb, but do not send bodhi comment"). By default, this will call
all the reporting routines.
4) In postprocess_iteration(), call the report_results(), so the
'automagical reporting' behaviour is not changed.
5) Add a variable which turns the call to report_results() on/off (True by
default) in postprocess_iteration() (it needs to be an attribute of the
AutoQATest class, since postprocess_iteration() does not take arguments).
This is for the wrapper-writer, to be able to control whether results are
sent in postprocess_iteration() or not (imagine Upgradepath - we will
report all the results in run_once(), and do not need
postprocess_iteration() to actually send anything). A rough sketch of this
layout follows below.
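Here is that sketch of how points 1) - 5) could fit together; the method
names follow the ticket, while the signatures and the control dictionary
keys are assumptions:
{{{
class AutoQATest(object):
    # 5) wrapper-writers can switch the automagic reporting off
    report_in_postprocess = True

    def send_email(self, result=None, summary=None, highlights=None, outputs=None):
        # 1) fall back to the 'automagic' self.* values when not overridden
        result = result if result is not None else self.result
        summary = summary if summary is not None else self.summary
        # ... compose and send the mail to the list ...

    def create_output_log(self, result=None, summary=None, highlights=None, outputs=None):
        outputs = outputs if outputs is not None else self.outputs
        # ... write output.log ...

    def report_to_bodhi(self, result=None, summary=None):
        # 2) moved here from the depcheck/upgradepath wrappers
        pass

    def report_results(self, result=None, summary=None, highlights=None,
                       outputs=None, control=None):
        # 3) the control dict says which destinations get this report
        control = control or {'email': True, 'log': True, 'bodhi': True}
        if control.get('email'):
            self.send_email(result, summary, highlights, outputs)
        if control.get('log'):
            self.create_output_log(result, summary, highlights, outputs)
        if control.get('bodhi'):
            self.report_to_bodhi(result, summary)

    def postprocess_iteration(self):
        # 4) keep the automagic behaviour, guarded by the flag from 5)
        if self.report_in_postprocess:
            self.report_results()
}}}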
-------------------------------
This is also preparation for resultsdb, because it solves the problem of
reporting multiple results from one test. Also, turning resultsdb reporting
on then becomes a matter of adding one method and calling it from
report_results() (and adding more types of reporting [I'm looking at you,
fedora message bus!] in the future will also be this simple).
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/298>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project