#338: AutoQATest.version - is it needed?
-------------------------+--------------------------------------------------
Reporter: kparal | Owner:
Type: enhancement | Status: new
Priority: trivial | Milestone: Finger Food
Component: tests | Keywords:
-------------------------+--------------------------------------------------
In all our tests we have code like this:
{{{
class rpmlint(AutoQATest):
version = 1 # increment if setup() changes
}}}
The question is - is the ''version'' attribute needed and being used at
all? I have never seen any difference in behavior; the setup() phase is
run every time.
We should have a look at the autotest documentation or source code to see
whether this is being used and when. If we don't need it, let's remove it
from all our tests and templates (the less code the better).
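For context, the way I understand it (a rough sketch of the idea only, not
autotest's actual code), the harness is supposed to remember the version it
last ran setup() with and re-run setup() only when the class attribute
changes:
{{{
# hypothetical sketch of version-gated setup(); 'test.srcdir' and the
# '.version' file name are assumptions, not verified against autotest
import os
import pickle

def setup_if_needed(test):
    version_file = os.path.join(test.srcdir, '.version')
    old_version = None
    if os.path.exists(version_file):
        old_version = pickle.load(open(version_file, 'rb'))
    if old_version != test.version:
        test.setup()
        pickle.dump(test.version, open(version_file, 'wb'))
}}}
If setup() really is re-run on every execution regardless of ''version'',
the attribute is dead weight and can go.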
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/338>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
#186: Automate media kit sanity tests
--------------------+-------------------------------------------------------
Reporter: jlaska | Owner: lili
Type: task | Status: new
Priority: major | Milestone: Automate installation test plan
Component: tests | Version: 1.0
Keywords: |
--------------------+-------------------------------------------------------
This ticket is intended to track automating the mediakit sanity tests
(http://fedoraproject.org/wiki/Category:Installer_Image_Sanity_Test_Cases)
Discussion is already underway, along with code in Liam's private branch.
For details, see
https://fedorahosted.org/pipermail/autoqa-devel/2010-June/000595.html
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/186>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
#194: Incorporate automated anaconda storage suite
--------------------+-------------------------------------------------------
Reporter: jlaska | Owner:
Type: task | Status: new
Priority: major | Milestone: Automate installation test plan
Component: tests | Version: 1.0
Keywords: |
--------------------+-------------------------------------------------------
Chris Lumens has developed a virt-based test suite to automate different
disk/storage install scenarios. I've discussed this briefly with Chris,
but I'd be interested in seeing his test suite incorporated into autoqa
and run at a regular interval (perhaps alongside Liam's install tests).
The tests are currently available at
http://clumens.fedorapeople.org/anaconda-storage-test/
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/194>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
#361: improve pretty log output for anaconda tests
--------------------+-------------------------------------------------------
Reporter: jlaska | Owner:
Type: task | Status: new
Priority: minor | Milestone: Finger Food
Component: tests | Keywords:
--------------------+-------------------------------------------------------
With the advent of the super awesome html pretty logs, some of the less
frequented tests are in need of some care and feeding. While reviewing
installer test results with the anaconda team, I noticed that the
installer tests don't all look good with pretty logs.
Sample output for existing anaconda test results ...
* [http://autoqa.fedoraproject.org/results/143510-autotest/qa02.qa.fedoraproje… anaconda_storage] - Doesn't look bad ... I don't think I'd recommend changes here.
* [http://autoqa.fedoraproject.org/results/143512-autotest/qa07.qa.fedoraproje… compose_tree] - Missing some info
* [http://autoqa.fedoraproject.org/results/143577-apache/10.5.124.249/anaconda… anaconda_checkbot] - Yuck, no info at all :(
This ticket is intended to address improving the html result output for
the above installer tests.
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/361>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
#363: depcheck: don't require specific release
-------------------------+--------------------------------------------------
Reporter: kparal | Owner:
Type: enhancement | Status: new
Priority: minor | Milestone: Finger Food
Component: tests | Keywords:
-------------------------+--------------------------------------------------
Depcheck currently requires a machine with a specific Fedora release
installed in order to test the matching repositories.
From tests/depcheck/control.autoqa:
{{{
# because we simulate installing packages, the autotest label of the correct
# distribution must be present (like 'fc13'). If the proper label is not found
# the test will not execute
if autoqa_args.has_key('nvrs'):
    from autoqa.util import get_distro
    distro = get_distro(autoqa_args['nvrs'][0])
    labels.append(distro)
}}}
Josef says it's possible to cancel this constraint. That would allow us to
test an arbitrary repository on an arbitrary machine (improving performance
and maybe allowing us to get by with fewer machines).
There is one thing worth mentioning: Depcheck is tightly tied to yum, and
any weird behavior might be caused by yum. If we cancel the above-mentioned
constraint, we can no longer assume that "depcheck crashed on
f15-updates, therefore I should try to reproduce it on my F15 machine". It
will be necessary to look up which machine the test was executed on and try
to reproduce the problem in the same environment if possible.
The goal of this ticket is to properly test whether cancelling the
requirement works correctly. That means executing depcheck for F15 repos on
an F14 machine, or vice versa, and comparing the results with the behavior
prior to this change.
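As a starting point, the change itself could be as simple as dropping (or
making optional) the label append shown above; a hypothetical sketch, where
the switch is made up and not a real autoqa option:
{{{
# hypothetical sketch: keep the distro detection code around, but stop
# constraining which machine the test may run on
REQUIRE_MATCHING_DISTRO = False   # invented switch, not a real autoqa option

if autoqa_args.has_key('nvrs') and REQUIRE_MATCHING_DISTRO:
    from autoqa.util import get_distro
    distro = get_distro(autoqa_args['nvrs'][0])
    labels.append(distro)
}}}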
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/363>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
#335: autotest_server autodetection fails
--------------------+-------------------------------------------------------
Reporter: kparal | Owner:
Type: defect | Status: new
Priority: minor | Milestone: Finger Food
Component: core | Keywords:
--------------------+-------------------------------------------------------
Currently, if we want to have working hyperlinks to logs, we have to fill
in 'autotest_server' in autoqa.conf, because the value autodetected from
the hostname gets overwritten by an empty value from the config file.
The links then look like this:
http:///results/20-root/brutus.test.redhat.com/helloworld/results/full.log
From the autoqa script:
{{{
# Hardcoded defaults for the 'general' section
conf = {
    'local': 'false',
    'testdir': '/usr/share/autotest/client/site_tests',
    'eventdir': '/usr/share/autoqa/events',
    'notification_email': '',
    'autotest_server': socket.gethostname(),
}
conf = autoqa_conf.get_section('general', conf)
conf = autoqa_conf.get_section('notifications', conf)
# FIXME: conf['autotest_server'] gets overwritten here by empty value coming
# from autoqa_conf
}}}
We can solve this with an easy hack (sketched below) or, more properly, by
solving ticket #255.
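A minimal sketch of such a hack, assuming get_section() simply overlays the
config file values onto the passed-in defaults, would be to skip empty
values before they clobber the hardcoded ones:
{{{
# hypothetical sketch: merge the config file on top of the hardcoded
# defaults, but ignore empty values so socket.gethostname() survives
file_conf = autoqa_conf.get_section('general', {})
for key, value in file_conf.items():
    if value:
        conf[key] = value
}}}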
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/335>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
#311: Improve Koji call performance with koji.ClientSession.multiCall
-------------------------+--------------------------------------------------
Reporter: kparal | Owner:
Type: enhancement | Status: new
Priority: minor | Milestone: Finger Food
Component: core | Keywords:
-------------------------+--------------------------------------------------
We use koji calls quite extensively in watchers/tests/libraries. Some of
the calls may be sped up substantially by using multicalls (executing
multiple calls at once and waiting for a grouped result). See docstring at
koji.!ClientSession.multiCall:
{{{
Execute a multicall (multiple function calls passed to the server and
executed at the same time, with results being returned in a batch).

Before calling this method, the self.multicall field must have been set
to True, and then one or more methods must have been called on the
current session (those method calls will return None). On executing the
multicall, the self.multicall field will be reset to False (so
subsequent method calls will be executed immediately) and results will
be returned in a list. The list will contain one element for each
method added to the multicall, in the order it was added to the
multicall.

Each element of the list will be either a one-element list containing
the result of the method call, or a map containing "faultCode" and
"faultString" keys, describing the error that occurred during the
method call.
}}}
Go through our code and rewrite standard Koji calls as multicalls whenever
possible.
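As an illustration, converting a loop of getBuild() calls into a single
multicall could look roughly like this (a sketch only; the hub URL and the
NVRs are just examples):
{{{
import koji

session = koji.ClientSession('http://koji.fedoraproject.org/kojihub')
nvrs = ['foo-1.0-1.fc15', 'bar-2.3-4.fc15']   # example NVRs

# sequential version: one round trip to the hub per build
builds = [session.getBuild(nvr) for nvr in nvrs]

# multicall version: all calls are batched into a single round trip
session.multicall = True
for nvr in nvrs:
    session.getBuild(nvr)        # only queued, returns None
results = session.multiCall()    # one element per queued call
builds = [r[0] for r in results if isinstance(r, list)]
}}}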
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/311>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
#289: Missing autotest debug logs
----------------------+-----------------------------------------------------
Reporter: kparal | Owner:
Type: defect | Status: new
Priority: major | Milestone: Hot issues
Component: autotest | Keywords:
----------------------+-----------------------------------------------------
I've found a test run where autotest debug logs are missing. What's wrong?
We need them.
http://autoqa.fedoraproject.org/results/72107-autotest/qa06.c.fedoraproject…
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/289>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
Greetings gang,
Just a heads up ...
It looks like autoqa-results@ hasn't been doing its exercises. It has become
so saturated with autoqa test result emails (~8-13k/day) that it's slowing
down mailman on fedorahosted.org and causing delivery issues for other lists.
After discussing with the Fedora infrastructure team, it was determined that
immediate action was needed to allow other lists on fedorahosted.org to
continue receiving mail. We performed the following ...
1) Disabled autoqa-results@ delivery on autoqa.fedoraproject.org immediately
I commented out result_email in /etc/autoqa/autoqa.conf
2) Increased the autoqa.fp.org results retention time from 30 days to 60 days
(we have the disk space to allow this)
/etc/cron.d/autoqa
0 3 * * * autotest /usr/sbin/tmpwatch -vv --dirmtime -umc -f 1440 /usr/share/autotest/results/ -X README
This should address the problem in the short term, but adjustments may be
needed to recognize that the autoqa-results@ archive is no longer available.
It seems that the volume and size of autoqa-results@ mails increased
dramatically recently; I'm not clear on why, so that might be interesting to
diagnose. This also helps us determine whether autoqa-results@ is still the
best option for long-term test result storage, or whether there are other
options worth pursuing.
Thanks,
James
#321: Store test logs for at least a month
------------------------+---------------------------------------------------
Reporter: kparal | Owner:
Type: task | Status: new
Priority: major | Milestone: Nice to have soon
Component: production | Keywords:
------------------------+---------------------------------------------------
Today is May 3rd and some test output from April 30th is already
inaccessible
(https://admin.fedoraproject.org/updates/zabbix-1.8.5-1.fc15). That is
problematic for package maintainers and also for us when dealing with test
issues. We should aim to store results for at least a month.
James mentioned that log pruning was run every 15 days, but that sometimes
it had to run more often, because e.g. some depcheck result logs can take up
several hundred MB. James, can you provide links to such logs so we can
evaluate that issue? Also, can you provide some disk space statistics, e.g.
how much data we usually generate per week, how much disk space we have
available, etc.?
If we can't extend the available disk space easily, let's aim at reducing
log size. Can we do some form of transparent filesystem compression? In my
experience, depcheck logs have an extremely good compression ratio.
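An alternative to filesystem-level compression would be to simply gzip old
logs explicitly; here is a hypothetical sketch (the age threshold is
arbitrary, and the results path assumes the /usr/share/autotest/results
location used by the tmpwatch cron job):
{{{
import gzip, os, shutil, time

RESULTS_DIR = '/usr/share/autotest/results'   # assumed results location
MAX_AGE = 3 * 24 * 3600                       # compress anything older than 3 days

for root, dirs, files in os.walk(RESULTS_DIR):
    for name in files:
        if not name.endswith('.log'):
            continue
        path = os.path.join(root, name)
        if time.time() - os.path.getmtime(path) < MAX_AGE:
            continue
        # gzip the log next to the original, then drop the uncompressed copy
        src = open(path, 'rb')
        dst = gzip.open(path + '.gz', 'wb')
        shutil.copyfileobj(src, dst)
        src.close()
        dst.close()
        os.remove(path)
}}}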
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/321>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project