Hi,
Let me know what you think...
First of all, let me begin with my motives. I think that Fedora Review
can be useful not only for doing initial package reviews but also for
performing regular QA checks of existing packages. It's especially
useful if you maintain many packages of the same type (like Java or
Perl packages), for which you can write many automated tests.
Writing single checks should be easy and shouldn't require knowing any
complex API, so that any user can write their own checks. I think that
a simple shell script is good enough in most cases. Some variables
like the package name, version etc. would be exported as environment
variables; more complex things would be extracted on disk (as F-R does
now, but more things could be extracted too). So my goal was to create
a layer that integrates those shell scripts with Fedora Review. A
proof of concept will be available soon.
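To illustrate, here is a minimal sketch of what such a check could
look like. The variable names (FR_NAME, FR_SPECFILE) and the
result-on-stdout convention are assumptions made up for the example,
not an existing API:

    #!/bin/sh
    # Hypothetical check: spec file name must match the package name.
    # FR_NAME and FR_SPECFILE are assumed to be exported by the
    # framework; the result is reported by printing a word on stdout.
    if [ "$(basename "$FR_SPECFILE" .spec)" = "$FR_NAME" ]; then
        echo pass
    else
        echo fail
    fi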
I wrote a JSON plugin instead of a native Python plugin only because
I don't know Python, but I talked to Stano and we considered
reimplementing it in Python to overcome some limitations of the JSON
API.
One question is how to create a reasonable unit test. You seem to
have done some testing... and I more or less rewrote the existing test
suite as part of the 0.2.0 release. Maybe we could achieve something
together?
If you don't mind having non-Python test cases then I could write some
basic tests. Moreover, a working plugin framework (which I am
developing as free software) can be a kind of test too, especially if
there are not that many other users of this API (if there are any at
all).
Mikolaj
PS. <rant>
I also think that more tests could be automated, and it doesn't have
to be "pass or fail" automation. Let's take "Large documentation files
must go in a -doc subpackage" as an example. AFAIR it's not
implemented at all for now. But you could check the size: if it's less
than 1 kB, pass; more than 1 MB, fail; in between, inconclusive. The
advantage is that both worst cases are avoided. Case 1: there is huge
documentation and the reviewer didn't notice it -- users suffer. That
simple test detects this, and if the result was inconclusive, users
won't notice much (what's 1 MB today?). Case 2: there is no
documentation, or only a tiny 5-line README, and the reviewer spent
time looking for doc files which are absent. It's obvious that 1 kB is
OK. That's only an example, but you get what I mean.
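A sketch of that check, with the same caveat that FR_DOC_DIR (the
directory holding the extracted %doc files) is a made-up name:

    #!/bin/sh
    # Hypothetical three-state check for documentation size.
    # FR_DOC_DIR is assumed to point at the extracted %doc files.
    bytes=$(du -sb "$FR_DOC_DIR" | cut -f1)
    if [ "$bytes" -lt 1024 ]; then
        echo pass          # under 1 kB: clearly fine as-is
    elif [ "$bytes" -gt 1048576 ]; then
        echo fail          # over 1 MB: belongs in a -doc subpackage
    else
        echo inconclusive  # borderline: leave it to the reviewer
    fi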
Also, some test results imply others. For example, "The package
successfully compiles and builds into binary rpms on at least one
primary architecture" together with "Package supports only 1
architecture" implies "The package successfully compiles and builds
into binary rpms on all supported architectures." This rule would
apply to many packages.
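Such implications could be encoded as simple rules. A minimal sketch,
assuming each check writes its result to a file named after it (all
the names below are made up):

    #!/bin/sh
    # Hypothetical rule: if both premises passed, record the implied
    # check as passed without running it.
    if [ "$(cat results/BuildsOnOnePrimaryArch)" = pass ] &&
       [ "$(cat results/SupportsOnlyOneArch)" = pass ]; then
        echo pass > results/BuildsOnAllSupportedArchs
    fi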
</rant>