It's time for the weekly check-in! Rejoice! :)
jskladan and I have consulted psplicha about his ideas on AutoQA
ResultsDB. He knows RHTS well and works on Test Package Sanity. The
main things he pointed out:
• He asked whether the ResultsDB functionality isn't already (at least
partially) provided by Beaker. That would mean we wouldn't have to
write things from scratch, but could, for example, collaborate on
finishing it in Beaker - provided that piece could be used standalone
(because Beaker as a whole is probably not yet finished enough to be
used instead of Autotest). As a local Brno contact for Beaker he
guessed it could be mcsontos.
• In RHTS all the main metadata about a test live in its Makefile:
RunFor (the list of packages for which to run the test), Requires
(dependencies), MaxTime, Destructive, Archs, Releases (the list of
distributions to run the test for), etc. Some of this metadata is also
kept in TCMS (Testopia), so it's duplicated.
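To make that concrete, here's a minimal sketch of those fields as one
record (the field names mirror the Makefile fields above; the values
and the Python representation are made up for illustration):

    # Hypothetical record of RHTS-style test metadata; field names
    # mirror the Makefile fields, values are invented.
    test_metadata = {
        "RunFor":      ["coreutils"],         # run the test when these change
        "Requires":    ["coreutils", "gcc"],  # dependencies
        "MaxTime":     "30m",                 # time limit for the run
        "Destructive": False,                 # does the test damage the machine?
        "Archs":       ["i386", "x86_64"],
        "Releases":    ["F12", "F13"],        # distributions to run the test for
    }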
• He described how you can group individual test cases into recipes,
then into recipe sets, and then into jobs. This provides a means to
run a predefined set of tests on particular architectures and
particular distributions, for example. Should we also consider some
kind of grouping?
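If we go for grouping, it could be as simple as nested collections; a
minimal sketch (the class names are ours, not RHTS's, and this is only
one possible shape):

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Recipe:              # test cases run together on one machine
        tests: List[str]
        arch: str              # e.g. "x86_64"
        distro: str            # e.g. "F13"

    @dataclass
    class RecipeSet:           # recipes that must run together (e.g. multi-host)
        recipes: List[Recipe]

    @dataclass
    class Job:                 # the top-level unit that gets scheduled
        recipe_sets: List[RecipeSet]

    # a predefined set of tests on a particular arch and distribution:
    job = Job([RecipeSet([Recipe(["/tools/gcc/sanity"], "x86_64", "F13")])])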
• RHTS collects and stores, among others, these artifacts:
∘ global test result (PASS, FAIL, WARN) - should ABORT also be considered in the future?
∘ test phase results - currently the global result is FAIL if any of
the phases fails. A nice improvement would be the ability to define
which phases may fail and which may not, or to provide some custom
specification of how the global result should depend on the phases
(see the sketch after this list). This can be illustrated by what RHTS
calls a "real comparative workflow": you install an old package, run
the test (which is most probably supposed to fail), then you
upgrade/patch the package and run the test again. The global result in
RHTS is FAIL (since the old package version failed the test), but the
real outcome should depend on the result of testing the new package.
∘ score - an integer? It can be used to measure anything, from the
number of errors to performance.
∘ logs - arbitrary files; some system logs are also collected
(installed packages, messages.log, kickstart/anaconda logs, etc.)
∘ summary output - a short summary generated from assertions and
other beakerlib commands, but you may also write something there
yourself; this summary is shown by default when reviewing the test
∘ run time
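Putting the artifact list together, here's a minimal sketch of what
one stored result might carry, plus one possible custom aggregation
for the comparative workflow above (all names are invented for
illustration; this is not how RHTS actually stores things):

    # Hypothetical shape of one stored test result, following the
    # artifact list above.
    result = {
        "result":  "FAIL",                     # global: PASS / FAIL / WARN
        "phases":  [("setup", "PASS"),
                    ("test-old-package", "FAIL"),
                    ("upgrade", "PASS"),
                    ("test-new-package", "PASS")],
        "score":   3,                          # number of errors, a perf figure, ...
        "logs":    ["journal.xml", "messages.log", "anaconda.log"],
        "summary": "old package fails as expected, new one passes",
        "runtime": 417,                        # seconds
    }

    # One possible custom aggregation: only the phases declared as
    # decisive determine the global result.
    def global_result(phases, decisive=("test-new-package",)):
        outcomes = [res for name, res in phases if name in decisive]
        return "FAIL" if "FAIL" in outcomes else "PASS"

    print(global_result(result["phases"]))  # -> PASS, despite the failed phase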
• beakerlib stores the log (journal) of the test in an XML structure.
This could be the basis for all the logs of all tests, so we could
extract useful information from it with automated tools and display it
in the front-end. This means one could easily have different levels of
detail - it's possible to produce a "quick summary" (just the
pass/fail state of all the test phases), a "detailed view" (for
example which asserts failed), and a "complete view" (the complete
log, with stdout/stderr logged, etc.) from just one file, since the
XML produced by beakerlib is quite well structured. This of course has
the minor drawback that beakerlib is for shell while our wrappers are
in Python. But the journalling part of beakerlib is written in Python,
and Petr Muller is working on (if I recall correctly) making this
journalling part a standalone library, so the journalling could be
done directly.
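To illustrate how the extraction could work, here's a minimal sketch
in Python; note that the element and attribute names are assumptions
for the sake of the example, not the actual beakerlib journal schema:

    import xml.etree.ElementTree as ET

    def quick_summary(journal_path):
        """Just the pass/fail state of all test phases."""
        root = ET.parse(journal_path).getroot()
        return [(phase.get("name"), phase.get("result"))
                for phase in root.iter("phase")]

    def failed_asserts(journal_path):
        """The "detailed view": which asserts failed, per phase."""
        root = ET.parse(journal_path).getroot()
        return [(phase.get("name"), assertion.get("message"))
                for phase in root.iter("phase")
                for assertion in phase.iter("assert")
                if assertion.get("result") == "FAIL"]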
I also consulted dkovalsk, who knows a great deal about RHTS. What he
pointed out:
• He drew me a little diagram of what RHTS looks like and what would
be better. Currently there's a Scheduler handling job scheduling. It
has access to an Inventory that contains information about available
hardware. And there's an RHTS Hub that executes the tests and collects
the results. What would be better is if the results went through a
TCMS and were stored there.
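For the record, roughly what he drew (the last arrow is the proposed
change, not the current state):

    Scheduler --uses--> Inventory (available hardware)
        |
        | schedules jobs
        v
    RHTS Hub (executes tests, collects results)
        |
        | proposed: report results through
        v
    TCMS (results stored here)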
• He said that in the past every test could use any output format it
wanted. Then they started to use rhtslib (now beakerlib), which
unified the output. He stressed that this approach really simplified
everything, and he really recommends having a unified output format.
• Ideally the whole process should be centered around the TCMS. The
tests and jobs should be defined in the TCMS, all the test metadata
should live there, and all the test results should be reported there.
He sees this as a very important building block of the whole process
and recommends it.
• Some test phases should be mandatory and some optional, so the test
won't stop on a failing optional phase (sketched below, together with
the next point).
• The current set of test results (PASS, FAIL, WARN) could be extended
to a finer granularity.
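A minimal sketch of both ideas together (the extra result values and
the runner logic are only examples of what this might mean, not an
existing design):

    from enum import Enum

    class Result(Enum):       # finer granularity than just PASS/FAIL/WARN
        PASS = 1
        WARN = 2
        FAIL = 3
        ABORT = 4             # the test never ran to completion
        CRASH = 5             # the test itself broke

    def run_phases(phases):
        """phases: list of (name, func, mandatory) tuples. A failing
        optional phase is recorded but does not stop the test."""
        results = []
        for name, func, mandatory in phases:
            outcome = func()
            results.append((name, outcome))
            if outcome is Result.FAIL and mandatory:
                break         # stop only on a mandatory failure
        return results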
• He heavily recommended looking at the Nitrate TCMS, so we don't
develop something of our own. It should be available and ready, and
more XML-RPC API is to be added soon. The author is Victor Chen.
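Since Nitrate speaks XML-RPC, talking to it from our Python code
should be straightforward; a hedged sketch (the endpoint URL and the
method name are placeholders - check the real Nitrate API docs before
relying on either):

    import xmlrpc.client

    # Placeholder URL and method; verify against the actual Nitrate
    # XML-RPC documentation.
    server = xmlrpc.client.ServerProxy("https://tcms.example.com/xmlrpc/")
    for case in server.TestCase.filter({"component": "autoqa"}):
        print(case["case_id"], case["summary"])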
• He basically stressed that it's important to look around first
before developing something, because we may find a lot of stuff
already available inside Red Hat and ready for pushing upstream. For
example, the guys working with cobbler and kickstart have really good
tools and techniques for clean machine installation/repair, etc.
• Some other interesting contacts: psplicha, pmuller, cward, bpeck;
mailing lists rhts, test-auto, tcms
• We can have a look at the Inventory at
https://inventory.engineering.redhat.com/