I have been thinking about ticket #52: https://fedorahosted.org/autoqa/ticket/52
The main idea behind it is: "I have the test script written, let me see if it actually works in AutoQA." I don't know if my patch is the best approach available, but it served my needs.
Imagine a situation: you have a working test, and you have written a control file and a test wrapper. Now, how do you know the whole thing works? You know your test works on its own, but it is hard to be sure that it will receive the correct command-line parameters, because you have no way to try it. Forwarding (and transformation) of parameters goes like this: watcher -> autoqa -> hook -> control file -> wrapper -> your test. Oof, that's a long chain where many mistakes can happen.
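To illustrate the last two hops of that chain, here is a minimal sketch of a control file and a wrapper. The test name, the parameter names, and the autoqa_args dictionary are made up for the example; only job.run_test() and the test.test base class are the standard autotest API:

    # control file (hypothetical sketch) -- autotest executes this inside a job;
    # 'autoqa_args' stands for whatever dictionary the hook hands to the control
    # file, the name is an assumption, not actual AutoQA code.
    job.run_test('mytest', koji_tag=autoqa_args.get('kojitag'))

    # --- mytest.py: the test wrapper (hypothetical sketch) ---
    import os
    from autotest_lib.client.bin import test

    class mytest(test.test):
        version = 1

        def run_once(self, koji_tag=None):
            # If the control file passes 'kojitag' instead of 'koji_tag', this
            # silently runs with koji_tag=None -- exactly the kind of mistake
            # that is hard to catch without running the whole chain.
            os.system('mytest --tag %s' % koji_tag)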
The only current way is to install autotest-server, which is painful. I would like to lower this barrier to entry by avoiding that.
I have found that many watcher scripts have a hidden, undocumented --dryrun option. With it, they only print the command they would have run. That's great. Still, there is a long way from that command to the actual test execution command.
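For reference, making such a --dry-run standard is only a few lines per watcher. The sketch below is hypothetical: the hook name, its arguments, and the way real watchers assemble the autoqa command are assumptions, it only shows the print-instead-of-run pattern:

    import subprocess
    from optparse import OptionParser

    parser = OptionParser()
    parser.add_option('--dry-run', action='store_true', dest='dry_run',
                      default=False,
                      help='only print the autoqa command, do not run it')
    (opts, args) = parser.parse_args()

    # The hook name and its arguments are made up for this example.
    cmd = ['autoqa', 'post-koji-build', '--name', 'foo-1.0-1.fc12']

    if opts.dry_run:
        print(' '.join(cmd))
    else:
        subprocess.call(cmd)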
Also, there are some use cases in which you can't really separate your test from the watcher. For example the repoclosure test: it can work perfectly when you test it manually, but it depends on some repository metadata that the _watcher_ downloads. Then you simply don't know whether all the data is downloaded correctly, in the right formats, in the right locations, etc. And if you don't want to hack and slash the watcher source code, the only method is to run the whole thing.
Because Will is developing some more libraries for tests, the code may soon become even more intertwined, and the desire to run the whole thing together will only grow.
The last use case is contributing to autoqa directly and wanting to see "something" running, just to get an idea of how it works.
Which brings me to a list of possible improvements:

1. Standardize a "--dry-run" option for all the watchers and document it (let them have --help). It helps and it is easy (see the small sketch above).

2a. Provide "--dry-run" for autoqa itself as well, so that it wouldn't actually run the tests; it would just print all the commands that would be run for the particular hook (printing lines like "repoclosure --foo=bar ..." etc.). This is hard, because autotest assembles the execution commands from the control files/wrappers we provide, and I haven't found any --dry-run support in autotest. Therefore I think this can't be done. But if it could be, it would be cool.

2b. Allow people to run a single hook/test without needing autotest-server. That can most easily be done by calling autotest-client directly (a rough sketch follows after this list). This is just for development/demonstration purposes, just to see whether everything works and whether the new test can be deployed on the "real" autoqa instance. It can be done as a config option (my patch), a command-line option, whatever.
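To make 2b a bit more concrete: stripped of the config plumbing, the core of it is handing the control file straight to the autotest client, roughly as below. Everything here is an assumption about a local setup (the flag, the client path, and the control file location), so take it as a sketch of the idea, not as a recipe:

    import subprocess

    # Hypothetical switch: True means "run locally via autotest-client",
    # False means "submit to the autotest server as AutoQA normally does".
    LOCAL_RUN = True

    # Both paths are assumptions about a local installation.
    AUTOTEST_CLIENT = '/usr/local/autotest/client/bin/autotest'
    CONTROL_FILE = '/usr/share/autoqa/repoclosure/control'

    if LOCAL_RUN:
        # No autotest server involved: the client executes the control file,
        # which runs the wrapper and the test on this very machine.
        subprocess.call([AUTOTEST_CLIENT, CONTROL_FILE])
    else:
        # The usual path: schedule a job on the autotest server (omitted here,
        # since that is what AutoQA already does today).
        pass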
Suggestions, comments, remarks, corrections, anything... welcome.
On Fri, 2009-11-13 at 03:33 -0500, Kamil Paral wrote:
Great mail, Kamil! You're right on to point out barriers to entry and work up plans to address them. Will and I were discussing this topic earlier this week. One thing that jumped out was that there might be different approaches to solving the problems. Which is fine, really, since that can often lead to new ways to solve a problem. But more than different methods, perhaps it isn't yet clear how we expect people to solve problems for themselves.
With AutoQA so early in its development, and with FUDCon looming, the thought occurred that it might be helpful to outline all the ways we anticipate people will interact with AutoQA. Really, I see this as an exercise/thought experiment to outline use cases ... and ensure we have plans in place to document or address them.
I've started a *rough* outline of what I was thinking at https://fedoraproject.org/wiki/AutoQA_Use_Cases.
Can you help make sure I have your use cases listed? I'm going to continue fleshing this out over the next few days. Please feel free to update the wiki directly, or add thoughts to the Talk page.
Thanks, James