Thanks Adam for getting the ball rolling on this topic.
On Tue, 2010-12-21 at 17:11 +0000, Adam Williamson wrote:
> Hi, everyone. So, in the recent debate about the update process it again
> became clear that we were lacking a good process for providing
> package-specific test instructions, and particularly specific
> instructions for testing critical path functions.
>
> I've been working on a process for this, and now have two draft Wiki
> pages up for review which together describe it:
>
> https://fedoraproject.org/wiki/User:Adamwill/Draft_QA_SOP_test_case_creation
> https://fedoraproject.org/wiki/User:Adamwill/Draft_QA_SOP_package_test_pl...
>
> The first isn't particularly specific to this, but it was a prerequisite
> that I discovered was missing: it's a guide to test case creation in
> general, explaining the actual practical process of how you create a
> test case, and the best principles to consider in doing it.
Nice job here; this is something that's difficult to explain once you've
done it a lot, but I think you've captured the key points. If possible,
it might be helpful to highlight a few existing examples that stand out
for the different characteristics you mention (comprehensive, but able
to stand the test of time).
Another thought: is there any reason we wouldn't want to keep all wiki
tests in the QA: namespace (and with the prefix QA:Testcase_)? The draft
leaves the door open for other names; I wonder if we should cut that off
ahead of time and keep our sanity by having all tests in the same
namespace.
The page also talks about using [[Category:Test_Cases]]. I worry that if
we are too lax in categorizing new tests, we'll end up with a large
number of random tests sitting directly in [[Category:Test_Cases]],
making that category a maintenance nightmare to clean up. Should we
instead direct users to your other page
(https://fedoraproject.org/wiki/User:Adamwill/Draft_QA_SOP_package_test_pl...)
for guidance on categorizing test cases?
> The second is what's really specific to this subject. It describes how
> to create a set of test cases for a particular package, and a proposed
> standardized categorization scheme which will allow us to denote test
> cases as being associated with specific packages, and also denote them
> as concerning critical path functionality.
I think I mentioned this previously: in the 'Preparation' section, I
appreciate the distinction between 'core' and 'extended', but it
resonates with me better in the context of test "priority". I don't see
why we can't keep using the terms 'core' and 'extended'; I just want to
clarify their purpose. They're intended to add some sense of execution
priority to a list of test cases, right? Where critpath comes first,
then core, then extended, then other? Also, you describe
categorizing/grouping test cases in more detail below; maybe just link
to that instead?
In the section 'Simple (required)', would it help to add a link to
http://fedoraproject.org/wiki/BugZappers/CorrectComponent#Which_component...
(or a similar page)? Something to help testers find the right src.rpm
name of the component under test. Side note: this might also be a
maintenance task we can define, where I, or anyone interested, could
manually scrub (or script) a scan of Category:Test_Cases for incorrectly
named category pages.
Also in 'Simple (required)', we don't tell the author to add their
'Category:Package_${sourcename}_test_cases' to 'Category:Test_Cases'. I
think we want all newly created package categories anchored under
'Category:Test_Cases'.
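To make the anchoring point concrete, here is a minimal sketch of what such a category page might contain. The package name (yum) and the exact description text are illustrative placeholders, not content from the drafts; the anchoring itself works because, in MediaWiki, adding a category tag to a category page makes that page a subcategory of the tagged category:

```wikitext
<!-- Page: Category:Package_yum_test_cases (hypothetical example) -->
Test cases for the '''yum''' package.

<!-- This tag anchors the new package category under Test_Cases,
     making it show up as a subcategory there. -->
[[Category:Test_Cases]]
```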
General comment: I know we've got an eye towards integrating this work
with Bodhi and/or fedora-easy-karma. Until that work is complete, I
wonder if those notes will introduce confusion or speculation. Should we
leave out the bits about possible future tool integration until such
support is active?
> Given that mediawiki has a handy API which also allows you to deal with
> categories, this should make it easy to both manually and
> programmatically derive a list of test cases for a given package, and a
> list of *critical path* test cases for a given package. You can do this
> manually, but I also envision Bodhi and fedora-easy-karma utilizing the
> API so that when an update is pushed for a package for which test cases
> have been created under this system, they will link to those test cases;
> and when an update is pushed for a critical path package, they will be
> able to display separately (and more prominently, perhaps) the list of
> test cases relevant to the critical path functionality of the package.
I should add, too, that we've explored the MediaWiki remote API already
used by other scripts/tools to extract data from the wiki, and are
confident that the desired queries, and data, are available.
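As a rough illustration, here is the kind of query a tool like Bodhi or fedora-easy-karma could issue. This is only a sketch: it builds the standard MediaWiki `action=query&list=categorymembers` request, and the category name follows the proposed Package_${sourcename}_test_cases scheme but is a hypothetical example, not an existing page:

```python
# Sketch: build a MediaWiki API query listing the pages in a
# test-case category. "Package_yum_test_cases" is a hypothetical
# example of the proposed naming scheme.
from urllib.parse import urlencode

API = "https://fedoraproject.org/w/api.php"

def category_members_url(category, limit=50):
    """Return the API URL that lists members of the given category."""
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": f"Category:{category}",
        "cmlimit": limit,
        "format": "json",
    }
    return f"{API}?{urlencode(params)}"

url = category_members_url("Package_yum_test_cases")
print(url)
# A tool would then fetch this URL (e.g. with urllib.request.urlopen)
# and read the page titles from result["query"]["categorymembers"].
```

A critical-path-only list would work the same way, just querying whatever critpath category the scheme settles on.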
> Comments, suggestions and rotten fruit welcome :) I'm particularly
> interested in feedback from package maintainers and QA contributors in
> whether you feel, just after reading these pages, that you'd be
> confident in going ahead and creating some test cases, or if there's
> stuff that's scary or badly explained or that you feel like something is
> missing and you wouldn't know where to start, etc.
Agreed, would love to hear from others.
> The trac ticket on this is probably valuable for background, explaining
> why some things in the proposal are the way they are:
>
> https://fedorahosted.org/fedora-qa/ticket/154
>
> It also mentions one big current omission: dependencies. For instance,
> it would be very useful to be able to express 'when yum is updated, we
> should also run the PackageKit test plan' (because it's possible that a
> change in yum could be fine 'within itself', and all the yum test cases
> pass, but could break PackageKit). That's rather complex, though,
> especially with a Wiki-based system. If anyone has any bright ideas on
> how to achieve this, do chip in! Thanks.
Certainly seems like a feature worth noting on the TCMS requirements
page Hurry has been building
(https://fedoraproject.org/wiki/Rhe/tcms_requirements_proposal). I've
added that to the Talk page.
Thanks,
James