On 2/20/2020 7:02 AM, Ankur Sinha wrote:
> There were some interesting questions in the discussion as well,
> but the most interesting was: how do we actually test the tools
> before packaging them? Apart from the basic unit tests (that
> upstream provides), do we do any other tests to ensure their
> correctness?
> Since we lack neuroscientists in the SIG who actually use the
> packaged tools, I believe we should test the packages ourselves, as
> a QA step to verify that they actually work in Fedora.
> As an example, if we package a DICOM image viewer, then we should
> make sure that we are able to view images in it, beyond the actual
> math behind it, which is verified by the unit tests. This may turn
> out to be quite difficult for some complicated tools, but it would
> help us gradually expand from packaging into testing as well.
Ah, that's a coincidence. I filed this 2 days ago:
Issue #339: Write test cases for NeuroFedora packages? - NeuroFedora -
Pagure.io:
https://pagure.io/neuro-sig/NeuroFedora/issue/339
So, yes, it's a good idea. I'll try to find the time to write an example
test case sometime, but if anyone else wants to investigate this too,
please feel free to do so.
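For illustration, an example test case could start as a plain smoke test
that checks a tool as installed from the Fedora package rather than from
an upstream source checkout. The sketch below assumes a Python-packaged
tool; the "json" module is only a stand-in for a real packaged module,
not an actual NeuroFedora test:

```python
import importlib

def smoke_test(module_name):
    """Verify that a packaged Python module imports cleanly.

    This goes one step beyond upstream's unit tests: it exercises the
    module as installed on a Fedora system, so a broken dependency or
    bad packaging would surface here even if upstream tests pass.
    """
    mod = importlib.import_module(module_name)
    return mod.__name__ == module_name

# "json" is a placeholder; a real test would import the packaged tool,
# e.g. a DICOM library.
print(smoke_test("json"))  # → True
```

A real test would go further (open a sample file, run a small analysis),
but even this level of check catches packaging breakage that unit tests
run at build time can miss.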
Very interesting coincidence; kismet? I wonder if there might be some
way to invite neuroscience students to participate in either writing
tests or using them. It's not really a priority, but once some
groundwork is laid, a presentable website where the tests are explained
and linked to the software could attract students to write and submit
tests and data, or to learn how to use the software.