On Mon, 2010-03-22 at 11:45 +0100, Josef Skladanka wrote:
Hello gang,
last week, I made a simple proof of concept using the resultdb database
schema <https://fedoraproject.org/wiki/AutoQA_resultsdb_schema> and an
XML-RPC interface <http://rajcze.homelinux.net/resultdb/xmlrpc.py>. This
can only start/stop a testrun (you can try it like this
<http://rajcze.homelinux.net/resultdb/example.txt> - you can invent any
test name/version combination; if it's not already in the database
<http://rajcze.homelinux.net/resultdb/frontend/simple_php/?act...>, a
new test will be created. Watch the state here
<http://rajcze.homelinux.net/resultdb/frontend/simple_php/?action=show_tes...>),
but it made me realize some things I'd like to share:
I told Josef on IRC, but I love the schema diagrams. I've always been
interested in a good way to graphically represent a schema.
Tests and Testruns:
===================
1) Even though it's not really required for storing the results, we
certainly need to store some metadata to be able to show the results in
a reasonable way (table Test in the schema). For basic usage, I suggest
the fields Name, Version, Tested Package and Description. These should
make it possible to search the tests in a useful way.
To be honest, this seems like a sound starting point. Do we need more
than just...
* what is the test? (aka a test_name and, only if really needed, a
test_version)
* what are we testing? - some unique, human-readable (and
human-created) identifier for a test run? The most obvious
choice being a package envra, but also a build stamp for an ISO
install test run (e.g. F-13-Alpha-TC0)?
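The two items above could be captured in a minimal record along these lines. The field names below are my own sketch, mirroring the Name/Version/Tested Package/Description fields suggested for the Test table, not the actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestRecord:
    """Sketch of the metadata the Test table might hold; field names
    mirror the suggestion above and are not the final schema."""
    name: str                # what the test is (test_name)
    version: Optional[str]   # test_version, only if really needed
    tested_item: str         # e.g. a package envra, or "F-13-Alpha-TC0"
    description: str = ""    # free-form text for the frontends
```

Keeping `version` optional reflects the "only if really needed" caveat above.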
2) We need a way to identify which test is actually executed in the
testrun. For now, I use identification based on a
$test_name/$test_version scheme, which is converted to a UUID5 [1] in
the URL namespace. I'm not sure whether the UUID is duplicate
information (since it can be derived from two other known values in the
database), but it seems reasonable at least as a unique identifier in
the database.
For now, my API uses name/version parameters for identification;
maybe we would like to store the UUID inside the test source (even
though I'm not a big fan of that solution) and use it directly. (I hope
this is not too confusing :) )
I would think that UUIDs make good sense for internally referencing
data, but I really don't want to be passing around UUIDs in URLs when
directing people to test result dashboards. Does this help?
Testplans and Jobs
==================
My starting idea was that we would have a number of standalone Tests
(one Test equals one Testrun), and Testplans would be just a set of
these Tests, run in a specified order. One would basically create the
testplan 'on the fly' from existing Tests (and/or Testplans) using the
TCMS-like thingie, and the rest would be taken care of automatically.
As you can imagine, this could be quite hard to implement using AutoQA,
so I talked with wwoods about it, and I believe we agreed that we would
love to have this functionality, but it's not a problem to solve *now*.
Definitely a cool concept. But I agree with you guys, this is probably
outside the scope of what we need to accomplish in the short-term.
So how could Testplans work *now*
---------------------------------
1) Testplans will be hand-written and 'hard coded', using the resultdb
only as metadata/results storage.
2) From AutoQA's point of view, a Testplan is just an ordinary test,
which will subsequently run each required Test and report the results
to the resultdb.
3) At the beginning of executing a Testplan, it will create a new record
in the Job table, and will add a record to the _Job-Testrun table for
each executed Testrun (aka Test). This way, we'll be able to show
overall progress (as James had in his mockup), and we will also use this
information in the frontends - for example, one could want to compare
subsequent executions of a given Testplan.
I know it says 'hard coded', but this doesn't seem bad for now.
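Step 3 above could be sketched as follows. This is a toy in-memory version: the table and column names are simplified stand-ins for the real Job and _Job-Testrun tables in the resultdb schema:

```python
import sqlite3

# Toy stand-in for the resultdb Job / _Job-Testrun tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE job (id INTEGER PRIMARY KEY, testplan TEXT);
    CREATE TABLE job_testrun (
        job_id   INTEGER REFERENCES job(id),
        testrun  TEXT,
        finished INTEGER DEFAULT 0  -- 1 once the Testrun has reported
    );
""")

def start_job(testplan, testruns):
    """Create a Job record plus one job_testrun row per planned Testrun."""
    job_id = conn.execute("INSERT INTO job (testplan) VALUES (?)",
                          (testplan,)).lastrowid
    conn.executemany("INSERT INTO job_testrun (job_id, testrun) VALUES (?, ?)",
                     [(job_id, t) for t in testruns])
    return job_id

def progress(job_id):
    """Overall progress as (finished testruns, planned testruns)."""
    return conn.execute(
        "SELECT SUM(finished), COUNT(*) FROM job_testrun WHERE job_id = ?",
        (job_id,)).fetchone()
```

Registering all planned Testruns up front is what makes the overall-progress view (and plan-to-plan comparison) a simple query.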
Questions
=========
1) Are there any tests we would like to use in more than one Testplan?
I.e. is there a need to tell a Test apart from a Testplan? (for me, it's
certainly a good thing)
I think it will be common for a test to live in multiple test plans.
For example, we have the Rawhide Acceptance Test Plan [1], which includes
a *small* subset of tests to validate that the repo and the install
images are sane. I envision some of those tests would be used again
(possibly in a slightly different context) in the installation test plan
[2].
[1] https://fedoraproject.org/wiki/QA:Rawhide_Acceptance_Test_Plan
[2] https://fedoraproject.org/wiki/QA:Fedora_13_Install_Test_Plan
2) What do you think about the UUID identification? I'm sure we need to
have some way to tell the tests apart (at least to be able to
automatically store the results :-D), but is the UUID generated from
name/version better than a "random" UUID or not? (for me, it's better to
derive it from name/version, since one could almost automatically re-use
the metadata in a simple way when only the test version changes, and
generating the UUID from human-readable values makes more sense to me)
As long as I never have to use the UUID (or using it is an exception),
it's fine with me. I mean, if it's just an internal representation (like
how git uses hash strings for representing commits), that seems fine.
But if we expect users/testers to be passing around the UUIDs, I don't
know if that improves things.
What I love about storing our test cases and plans in the wiki right now
is that I don't need to remember some internal unique ID to reference
the test. I just call it what it is, for example
'QA:Testcase_Mediakit_ISO_Size'. What is nice is that the wiki takes an
optional version parameter in the event you are referencing something
other than HEAD
(https://fedoraproject.org/w/index.php?title=QA:Testcase_Mediakit_ISO_Size...).
Thanks,
James