On 08/09/2012 05:01 PM, Stanislav Ochotnicky wrote:
Quoting Alec Leamas (2012-08-09 16:16:02)
> On 08/09/2012 03:29 PM, Mikolaj Izdebski wrote:
>>> Examples are good, especially examples like these. To be fair, I hadn't
>>> realized how simple things could be. Have you implemented this using a
>>> json plugin handling the communication between "your" plugins and
f-r?!
>> Yes, I have implemented this as a single JSON plugin which basically:
>> 1) asks F-R for all possible info (spec file sections etc)
>> 2) extracts more stuff on disk
>> 3) reads metadata of all scripts
>> 4) orders scripts based on their dependencies
>> 5) executes individual scripts
>> 6) returns one reply to F-R containing results of all tests
>>
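[Editor's note: the steps above, particularly (3)-(5), could be sketched roughly as below. The metadata format (`# Requires:` header lines) and all names are assumptions for illustration, not the actual JSON plugin's code.]

```python
# Sketch: read each script's declared dependencies, order the scripts
# topologically, run them, and collect exit codes into one result.
import subprocess
from graphlib import TopologicalSorter  # Python 3.9+


def parse_deps(script_text):
    """Extract '# Requires: foo bar' lines (assumed metadata format)."""
    deps = []
    for line in script_text.splitlines():
        if line.startswith("# Requires:"):
            deps.extend(line.split(":", 1)[1].split())
    return deps


def run_in_order(scripts):
    """scripts: dict name -> shell source. Returns dict name -> exit code."""
    graph = {name: parse_deps(text) for name, text in scripts.items()}
    results = {}
    for name in TopologicalSorter(graph).static_order():
        if name not in scripts:
            continue  # dependency satisfied elsewhere, nothing to run
        proc = subprocess.run(["sh", "-c", scripts[name]])
        results[name] = proc.returncode
    return results
```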
> I raised the issue to simply drop the json api in another message. One
> strategy might be to support either "advanced" plugins written in
> python or "simple" ones following your approach here. This means that if
> we could reimplement your json plugin in python, the json interface
> could be dropped and we would have:
> - python plugins: multiple tests, attachment handling, access to
> internal python classes.
> - "simple" plugins: one test per file, simplified registration, no
I wanted to have external tests on par with internal (Python) ones so
no one would feel left out. But it is most probably true that covering
90% of the functionality in external tests in a simpler way would be
more productive (and attractive).
> In any case, the modelling of a test needs an overhaul both in python
> and to some extent also for the simple ones. Things like
> dependencies/execution order, access to other tests, selecting tests to
> run. The registration of plugin tests should really be done by the
> python ones as well...
Indeed, my biggest gripe is that internal tests have no idea about
external ones (from a deprecation point of view). For this we'd need a
2-step plugin interaction (i.e. registration and then run).
So, an alternative approach: let's reimplement Mikolaj's interface in
Python, where we put stuff either in ENV or in special predefined places
(i.e. spec/main, spec/prep, spec/build, spec/install etc.). We then get
information about plugins from decorators (name, description text etc.).
Attachments can be put in a special subdirectory (as Mikolaj suggested).
Normally they are supposed to be linked with a specific test; I'm not
sure that is really needed. The result of a test is based on its return
value (0 - success, 1 - fail, etc.).
Sounds like a plan?
Yes, it's a plan....(#2)
I have pushed a feature branch, check-api, which handles the problems I
have seen from the tool's point of view. The most important part is that
plugins are registered, so the user can list, select etc. among the
defined tests. Also, to keep the code reasonable, all tests, internal or
external, are represented in the same way and stored in a common place.
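[Editor's note: the common representation could look roughly like this. Class and attribute names are assumptions about the check-api branch, not its actual code.]

```python
# Sketch: one Test representation for both internal (Python) and
# external (script) tests, stored in a single registry the user can
# list and select from.
class Test:
    def __init__(self, name, kind, run):
        self.name = name
        self.kind = kind  # "internal" or "external"
        self.run = run    # callable returning True/False


class Registry:
    def __init__(self):
        self._tests = {}

    def register(self, test):
        self._tests[test.name] = test

    def list(self):
        return sorted(self._tests)

    def select(self, names):
        return [self._tests[n] for n in names]
```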
The next step depends, if I understand it right, on Mikolaj's employer
and whether he (she?) is willing to publish his work under an open
license. Note that rewriting the json plugin doesn't really require the
code, but rather the environment expected by the shell tools. If
Mikolaj could define this, I don't think writing a python "bridge"
plugin is much work.