[AutoQA] #408: depcheck: Argument list too long
by fedora-badges
#408: depcheck: Argument list too long
---------------------+-------------------------------
Reporter: kparal | Owner:
Type: defect | Status: new
Priority: minor | Milestone: Nice to have soon
Component: tests | Keywords:
Blocked By: | Blocking:
---------------------+-------------------------------
{{{
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/autoqa/decorators.py", line 72, in newf
    f_result = f(*args, **kwargs) #call the decorated function
  File "/usr/share/autotest/tests/depcheck/depcheck.py", line 174, in run_once
    depcheck_output = utils.system_output(cmd, retain_output=True)
  File "/usr/share/autotest/common_lib/base_utils.py", line 931, in system_output
    args=args).stdout
  File "/usr/share/autotest/common_lib/base_utils.py", line 654, in run
    stderr_level=get_stderr_level(stderr_is_expected)),),
  File "/usr/share/autotest/common_lib/base_utils.py", line 79, in __init__
    stdin=stdin)
  File "/usr/lib64/python2.7/subprocess.py", line 672, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/subprocess.py", line 1202, in _execute_child
    raise child_exception
OSError: [Errno 7] Argument list too long
}}}
http://autoqa-
stg.fedoraproject.org/results/27644-autotest/virt24.qa/depcheck/results/f...
Sometimes we hit system limits.
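For reference, errno 7 on Linux is E2BIG: execve() refuses the call when the combined size of the argument list and the environment exceeds the kernel's ARG_MAX limit. A minimal check of that limit (illustrative only):

```python
import errno
import os

# On Linux, "Argument list too long" is E2BIG, which is errno 7.
assert errno.E2BIG == 7

# The kernel's limit on the combined size of argv + environment
# passed to execve():
arg_max = os.sysconf('SC_ARG_MAX')
print("ARG_MAX on this system: %d bytes" % arg_max)
```

A depcheck command line listing every pending and accepted build NVR can exceed that limit once enough builds pile up.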
Proposed solution: Write pending builds to one file, accepted builds to a
different file, then provide these files using different command line
options to depcheck. Place both files in the results directory.
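A sketch of that approach, assuming depcheck grows two new file-based options (the helper and the option names below are hypothetical, not existing depcheck flags):

```python
import os

def write_build_lists(resultsdir, pending_builds, accepted_builds):
    """Write the pending and accepted build lists to two files in the
    results directory and return their paths. Keeping the NVRs in
    files keeps the depcheck command line short, no matter how many
    builds are queued."""
    pending_file = os.path.join(resultsdir, 'pending-builds.txt')
    accepted_file = os.path.join(resultsdir, 'accepted-builds.txt')
    with open(pending_file, 'w') as f:
        f.write('\n'.join(pending_builds) + '\n')
    with open(accepted_file, 'w') as f:
        f.write('\n'.join(accepted_builds) + '\n')
    return pending_file, accepted_file
```

depcheck would then be invoked as, e.g., `depcheck --pending-file <path> --accepted-file <path>` (option names are a guess) instead of listing every NVR on the command line.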
I don't think this issue is urgent, because it happens only seldom. Setting
the milestone to "Nice to have soon".
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/408>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
12 years, 2 months
ResultsDB integration in master *fireworks* (also ticket 408)
by Josef Skladanka
Hi gang,
after a long, long time, ResultsDB integration has landed in master.
Kamil will push it to staging shortly, and we'll see what goes wrong
before going live to production.
Also, the patch for ticket #408 (depcheck argument list too long) is in,
so we should have fewer CRASHED results from now on.
j.
[AutoQA] #194: Incorporate automated anaconda storage suite
by fedora-badges
#194: Incorporate automated anaconda storage suite
--------------------+-------------------------------------------------------
Reporter: jlaska | Owner:
Type: task | Status: new
Priority: major | Milestone: Automate installation test plan
Component: tests | Version: 1.0
Keywords: |
--------------------+-------------------------------------------------------
Chris Lumens has developed a virt-based test suite to automate different
disk/storage install scenarios. I've discussed this briefly with Chris,
but I'd be interested in seeing his test suite incorporated within autoqa
and run at regular intervals (perhaps alongside Liam's install tests).
The tests are currently available at http://clumens.fedorapeople.org
/anaconda-storage-test/
--
Ticket URL: <https://fedorahosted.org/autoqa/ticket/194>
AutoQA <http://autoqa.fedorahosted.org>
Automated QA project
require maintainer defined for each test & clean up current tests
by Kamil Paral
Before releasing the new version of AutoQA, I want to solve one more issue: the current mess in the tests we manage. Honestly stated, for some tests we have no idea whether they work, what exactly they do, or how to fix them if needed. Some of them are probably not being used at all, and yet we still need to care about them when refactoring our framework. Worst of all, for many of them we have no idea who should be responsible for keeping them in shape.
I have spent some time and executed all of them. I also tried to provide my best guess about their current maintainer. This is the result:
Test               Maintainer   Works   Currently useful
====               ==========   =====   ================
anaconda_checkbot  clumens ?    no      no
anaconda_storage   clumens ?    no      no
compose_tree       ?            no      no
conflicts          ?            yes     unlikely
depcheck           jskladan ?   yes     yes
helloworld         kparal       yes     yes
rats_install       hongqing ?   yes     yes
rats_sanity        hongqing ?   yes     probably
repoclosure        ?            yes     unlikely
rpmguard           kparal       yes     somewhat
rpmlint            kparal       yes     somewhat
upgradepath        kparal       yes     yes
My proposal is:
1. Every test will have a maintainer defined. In its 'control' file we will change the AUTHOR line to a MAINTAINER line (a patch is ready). This ensures that we always know whom to talk to when a test seems broken. It doesn't mean you can't work on a test you don't maintain, no. But at least the contact point will always be defined. I placed my best guesses in the matrix above.
Please speak up who wants to maintain some test.
2. Tests without a maintainer will be archived and deleted. They can stay in a separate git branch and wait for a future revival (if any), but they won't be in master.
3. To save resources, we should also archive tests that don't currently seem very useful. We can re-enable them once the required architecture is in place. More specifically:
* rpmlint, rpmguard - the results are sent to opted-in maintainers, some of whom said it's useful. I'd keep them enabled.
* repoclosure, conflicts - these list potential dependency problems and file conflicts for the whole repository. Until now no one has cared. With the resultsdb frontend we can finally have a page that lists all the results day by day, which means someone can go through the results occasionally and file some bugs. The question is: who? It's nice to have some results, but if we just *hope* someone will do something about them, that seems too uncertain to me. They are also somewhat obsoleted by depcheck. I'm sitting on the fence here.
* compose_tree - this tries to build boot.iso and pxeboot images. It is basically a releng test that we execute. Its previous maintainer was James, but I doubt we should ask him for future maintenance. Currently it is broken. Even when it worked, was anyone reporting the issues? It's easier to look at the releng logs online:
http://kojipkgs.fedoraproject.org/mash/branched-20120227/logs/
Does the above link render the compose_tree test useless?
* anaconda_* - anaconda build, unit tests, and automated installation using various test cases. Great work, but currently broken. I talked to clumens some time ago and asked whether he received results. He said he did not. I fixed the opt-in emails and told him. No response since, so I guess he just doesn't care, and I'm not surprised. Given the speed of anaconda development, it's much easier for anaconda devs to execute the tests on their own machines (at least I suppose they do). They can't send us patches all the time. Since there hasn't been any drive from the anaconda devs, I propose to obsolete these tests until we have something better to offer.
* the rest of the tests stay enabled
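To enforce point 1 mechanically, a small helper could scan the test tree for control files missing a MAINTAINER line. This is only a sketch; the tests/<name>/control layout is assumed from autotest conventions:

```python
import glob
import os

def find_unmaintained(tests_dir):
    """Return the names of tests whose 'control' file contains no
    MAINTAINER line (layout assumed: tests_dir/<testname>/control)."""
    missing = []
    for control in glob.glob(os.path.join(tests_dir, '*', 'control')):
        with open(control) as f:
            has_maintainer = any(line.lstrip().startswith('MAINTAINER')
                                 for line in f)
        if not has_maintainer:
            missing.append(os.path.basename(os.path.dirname(control)))
    return sorted(missing)
```

Running this before a release would flag any test that slipped through without a defined contact point.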
For the next autoqa release I'd like to focus mainly on easy deployment of tests, so that we (for our tests) and other teams (like anaconda) can easily update tests without the tedious process of a "new autoqa release". After that I expect some of the tests to return.
What do you think?
And please, put your name next to the test you want to maintain.