On Tue, 2010-05-11 at 17:16 -0400, Seth Vidal wrote:
> On Tue, 11 May 2010, James Laska wrote:
> > What interested me about this exercise was how a new test might be added
> > to rpmguard. The power of rpmguard is in abstracting the details of
> > locating the previous packages and providing a common framework for
> > comparisons against the two package sets. Presently, the tests exist in
> > the main driver.
> >
> > How do we want to extend this in the future to make it easier/faster to
> > support new comparison tests? Should tests exist in stand-alone python
> > scripts, where they all accept a common set of arguments? It seems like
> > rpmlint is structured this way [4], is this good/bad ... does this make
> > user-contributed comparative tests easier?
> >
> > Something Kamil proposed a while back: should rpmguard be a stand-alone
> > tool (much like rpmlint)? Instead of being bundled inside autoqa,
> > anyone could do: `yum install rpmguard`
> >
> > Apologies for the ranty nature of the email, I'm still thinking this
> > through. My main objective is to get a sense of where rpmguard should go.
> > Hopefully these thoughts can lead to a wiki or TRAC roadmap, and
> > something we could implement once our immediate objectives are behind us.
> >
> okay - so this might be my own imagination but I thought there was a goal
> of autoqa to do the following:
> 1. show issues with pkgs
> 2. show significant/dangerous CHANGES to pkgs
> 3. provide a way for a packager/the-powers-that-be to stop a package if it
> doesn't get at least a score of N
Yeah, in a nutshell. AutoQA allows for the above to occur. Perhaps the
steps above are more the focus of specific tests like rpmlint,
package_sanity, depcheck, and rpmguard. Same result in the end.
> That last one might be my own wishful-thinking.
> But I was sorta thinking that rpmguard (or something else above it) could
> have a dir of python scripts - like yum plugins.
Love the plugin idea! It's always fun to think of code in this regard,
but I always forget that yum plugins didn't just appear out of thin air.
It took time for the need to develop and the API to mature.
> Each one of the [enabled] scripts would be passed a 'conduit' that maybe
> looks like:
What's a conduit? Is that a way for each test to gain access to the old
and new packages, etc., or something else?
> conduit.get_old_package_file()
> conduit.get_new_package_file()
> conduit.get_old_package_hdr()
> conduit.get_new_package_hdr()
> # and this is just my own wishfulness
> conduit.get_old_package_object()  # using yum's package objects
> conduit.get_new_package_object()  # using yum's package objects
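To make the idea concrete, here is a minimal sketch of what such a conduit object might look like. Everything here is an assumption on my part - the class name, the constructor, and especially the dict-based header stubs; a real implementation would read the actual rpm headers from the package files (and, per the wishlist items, build yum package objects):

```python
class Conduit:
    """Hypothetical conduit handed to each rpmguard plugin.

    Headers are stubbed as plain dicts here; a real implementation
    would read the rpm headers from the two package files instead.
    """

    def __init__(self, old_path, new_path, old_hdr=None, new_hdr=None):
        self._old_path = old_path
        self._new_path = new_path
        self._old_hdr = old_hdr or {}
        self._new_hdr = new_hdr or {}

    def get_old_package_file(self):
        # Path to the previous build of the package
        return self._old_path

    def get_new_package_file(self):
        # Path to the new build under test
        return self._new_path

    def get_old_package_hdr(self):
        return self._old_hdr

    def get_new_package_hdr(self):
        return self._new_hdr
```

A plugin would then only ever touch the conduit, never the files or repositories directly.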
> Then the scripts could do whatever they need to do and maybe feed back to
> the code a result object of some kind, for example:
> test_result.code = RPMGUARD_PASS
> test_result.output = "a lot of strings here"
> test_result.score = 23
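That result object could be a trivial container. This sketch assumes the fields shown in the example above; the set of result codes is my own guess:

```python
# Hypothetical result codes; the actual set of states is an assumption.
RPMGUARD_PASS, RPMGUARD_WARN, RPMGUARD_FAIL = range(3)

class TestResult:
    """Per-plugin result, mirroring the fields in the example above."""

    def __init__(self, code, output="", score=0):
        self.code = code      # one of the RPMGUARD_* constants
        self.output = output  # human-readable detail text
        self.score = score    # numeric weight for the overall verdict
```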
How were you thinking score would be used here?
> We then collect and compile those to pitch back to the
> user/builder/resultsdb.
> Does that make sense?
Yeah, certainly does. At this high level, it seems like it would
provide a nice structure to allow for future test additions and
dynamically enabling or disabling any subset of tests.
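As a sketch of that structure - again purely hypothetical, assuming each enabled plugin script defines a `main(conduit)` function that returns a result - the collect-and-compile driver might look like:

```python
import importlib.util
import os

def run_plugins(plugin_dir, conduit):
    """Run every .py plugin in plugin_dir and collect its result.

    Assumes (hypothetically) that each enabled plugin defines a
    main(conduit) function; disabling a test is then just a matter
    of removing or renaming its script in the plugin directory.
    """
    results = []
    for name in sorted(os.listdir(plugin_dir)):
        if not name.endswith(".py"):
            continue
        path = os.path.join(plugin_dir, name)
        # Load the plugin module directly from its source file
        spec = importlib.util.spec_from_file_location(name[:-3], path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        results.append(module.main(conduit))
    return results
```

The collected results would then be compiled into the single verdict that gets pitched back to the user/builder/resultsdb.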
Thanks,
James