Reconsidering time for the meeting
by Honza Horak
Hi,
as we discussed at yesterday's meeting, we may want to change the
meeting time so that it better fits our needs, especially once DST is
over. I've created a whenisgood poll:
http://whenisgood.net/2h5zadw
Please fill it in by 30th Oct; I'll summarize the results on Fri, 31st Oct.
Honza
9 years, 6 months
Proposal for integration tests infrastructure
by Honza Horak
Fedora lacks integration testing (unit testing done during build is not
enough). Taskotron will be able to fill some gaps in the future, so
maintainers will be able to set up various tasks to run after their
component is built. But even before this works, we can benefit from
having the tests available (and run them manually if needed).
With this thread, I'd like to gather ideas and figure out how and where
to keep the tests. A similar discussion already took place before, which
I'd like to continue:
https://lists.fedoraproject.org/pipermail/devel/2014-January/193498.html
And some short discussion has already taken place here as well:
https://lists.fedoraproject.org/pipermail/env-and-stacks/2014-October/000...
Some high level requirements:
* tests will be written by maintainers or the broader community, not by a
dedicated team
* tests will be easy to run on anybody's computer (but they might be
destructive; providing a secure environment will not be the tests'
responsibility)
* tests will be run automatically after the related components get built
(probably by Taskotron)
Where to keep tests?
a/ in the current dist-git of the related component (problems: sharing
code between tests, and where to keep tests related to multiple components)
b/ in a separate git with functionality similar to dist-git (needs new
infrastructure, and components are not directly connected with their
tests, but it won't clutter the current dist-git)
c/ in the current dist-git, but as ordinary components (no new
infrastructure needed, but components are not directly connected with
their tests)
How to deliver tests?
a/ just use them directly from git (we need to keep some metadata about
dependencies anyway)
b/ package them as RPMs (we can keep the metadata there; e.g. Taskotron
would run only tests that have "Provides: ci-tests(mariadb)" after
mariadb is built; we might also automate packaging the tests as RPMs)
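Option b/ could look roughly like the following spec fragment; note that
the ci-tests() provide namespace and the ci-mariadb package name are
only illustrative conventions -- nothing like this is defined anywhere yet:

```spec
# Hypothetical spec fragment for a packaged test suite; the package name
# and the ci-tests() provide namespace are made up for illustration.
Name:     ci-mariadb
Version:  1.0
Release:  1%{?dist}
Summary:  Integration tests for mariadb
License:  MIT
# The virtual provide Taskotron could query after a mariadb build:
Provides: ci-tests(mariadb)
Requires: mariadb-server
```

A virtual provide like this would let the post-build task find the tests
with an ordinary repo query, without any extra metadata service.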
Structure for tests?
a/ similar to what components use (branches for Fedora versions)
b/ only one branch
Test maintainers should be allowed to behave the same way package
maintainers do -- one maintainer likes keeping branches identical and
uses "%if %fedora" macros, someone else likes clean specs and would
rather maintain more divergent branches -- we won't find one structure
that fits all, so allowing both ways seems better.
Which framework to use?
People have no time to learn new things, so we should let them write
the tests in any language and just define some conventions for how to
run them.
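Such a convention could be as simple as "anything executable under a
tests/ directory is a test, and exit status 0 means pass". A minimal
sketch of a language-agnostic runner along those lines (the tests/
layout and the pass/fail convention are assumptions, not anything the WG
has agreed on):

```python
#!/usr/bin/env python
# Minimal convention-based runner sketch: every executable file in the
# given directory is a test; exit status 0 means PASS, anything else FAIL.
import os
import subprocess
import sys


def run_tests(tests_dir):
    """Run every executable in tests_dir; return {name: 'PASS'/'FAIL'}."""
    results = {}
    for name in sorted(os.listdir(tests_dir)):
        path = os.path.join(tests_dir, name)
        if os.path.isfile(path) and os.access(path, os.X_OK):
            rc = subprocess.call([path])
            results[name] = "PASS" if rc == 0 else "FAIL"
    return results


if __name__ == "__main__":
    for name, status in run_tests(sys.argv[1]).items():
        print("%s: %s" % (status, name))
```

Because only the exit status matters, the tests themselves can be shell,
Python, or anything else executable.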
Cheers,
Honza
9 years, 6 months
Pulp plugin for managing Python packages
by Nick Coghlan
I actually opened up my Pulp devel mailing list folder for the first
time in a while, and the first thing I saw was that they're currently
working on a plugin for Python packages: https://github.com/pulp/pulp_python
One of the things that has been worrying me about the idea of language
specific package repos is the sheer complexity of managing them all.
(The fact Slavek found 35 new packages he'd need to respin as RPMs to
deploy devpi for the pilot did nothing whatsoever to reassure me...)
For those that aren't aware, Pulp is a plugin based repository
management system written in Python, where the different repositories
can all share common infrastructure for things like scheduling updates,
uploading new content, and mirroring files out to remote sites, but
publish content in a way that can be consumed by application specific
packaging tools (it's actually one of the upstream projects for Red Hat
Satellite 6+).
The already-released plugins cover RPMs and Puppet modules, but in
addition to the Python support mentioned above, there are also
experimental modules for Docker image registry support.
I've actually used Pulp before (version 1 though, when the plugin model
was still in alpha), and rather liked their approach, as well as finding
their dev team quite easy to work with. I hadn't previously thought of
it in the context of language specific repositories for Fedora, but now
that I have, I'll explore the idea further.
Cheers,
Nick.
--
Nick Coghlan
Red Hat Hosted & Shared Services
Software Engineering & Development, Brisbane
HSS Provisioning Architect
9 years, 6 months
Integration per-component tests in Fedora
by Honza Horak
Hi guys,
I've been thinking about how to implement integration testing in Fedora
in a way that would allow executing per-package tests. This basically
means solving the following questions:
1) Where to keep such tests:
I don't see any reason not to include them in Fedora dist-git, but
there are at least two ways to do so:
1a) Include the tests in the component's git, say in a 'tests/' directory.
With this approach it would not be easy to share fragments with other
components (e.g. general tests valid for a whole set of components) --
well, dist-git could use relative paths leading to other components'
test directories, or we might create some meta-tests repos used by
multiple components, but neither feels much like a clean solution.
1b) Another approach would be to package the test suites as separate
components, named by some convention, built using the standard packaging
process and installed as RPMs. With this approach we'd be able to
prepare real integration tests spanning multiple components, we would be
able to use RPM dependencies to require dependent tests, etc. The only
concern is that it could be too much overhead for tests that have only
two lines and just check whether a daemon can be started (which IMHO is
a valid test worth doing). But still, this would be my preferred option.
2) Which testing framework to use:
We might allow anything executable that produces output in a
standardized format, but if we agree on a shell library called beakerlib
[1], we could profit from having one platform for both Fedora and RH
internal tests. Still, that should be a recommendation, not a must.
3) The most tricky part -- how to integrate the tests into the packaging
process:
First, I thought we needed to wait until Taskotron implements a feature
for running some tasks only for some components. But now I think we can
implement the "per-package" part in a general task that is run for all
components; the decision about which tests to fetch/run would then be
made in the task itself, not in Taskotron.
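The decision step inside such a general task could be sketched like
this; the ci-tests(<name>) provide convention is assumed from the
earlier thread, and the repo query is replaced by a plain data structure
so the selection logic itself is visible:

```python
# Sketch of the "which tests to run" decision inside a general Taskotron
# task. In reality the provides would come from a repo query; here they
# are passed in as a plain dict for illustration.
def select_test_packages(built_component, available_provides):
    """Return test packages whose provides match the built component.

    available_provides maps package name -> list of Provides strings,
    e.g. {"ci-mariadb": ["ci-tests(mariadb)"]}.
    """
    wanted = "ci-tests(%s)" % built_component
    return sorted(
        pkg for pkg, provides in available_provides.items()
        if wanted in provides
    )
```

For a mariadb build this would pick up both a dedicated ci-mariadb
package and any shared package (say, a hypothetical ci-db-common) that
also provides ci-tests(mariadb).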
4) How to run destructive tests:
Even a test that just starts a service can be destructive if the service
is broken, so we need to have some level of defence. Taskotron plans to
support destructive tests in the future, but we might be able to handle
this in the task itself again. I'm not sure which technology would be
best; there are several options, from VM-in-VM to docker or nspawn
solutions, but I'm not sure whether any of them would be good enough.
Anyway, this is how it might work in practice (using approach 1b):
* maintainer writes a few lines of code that just install, start and
stop the mariadb daemon, using beakerlib
* this script, plus a small config file that specifies how to run it, is
packaged as a ci-mariadb RPM (using the standard Fedora process)
* Taskotron has a task that is run after every build and does the
following:
- checks the repo for a ci-mariadb package
- installs that package
- reads the config file to see how to run the tests
- runs the tests in some safe environment (might be handled by the
task itself or by Taskotron)
- stores the results
* maintainer is happy to catch mistakes soon
* user is happy to not get broken stuff
[1] https://fedorahosted.org/beakerlib/
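The "small config file" above has no defined format yet; something as
simple as this hypothetical layout (every field name is invented for
illustration) might be enough for the task to know how to run the script:

```yaml
# Hypothetical ci-mariadb test config; no such format is defined yet,
# all field names here are made up for illustration.
run: ./mariadb-basic.sh      # beakerlib script shipped in the same RPM
destructive: true            # ask the task for an isolated environment
needs-root: true
timeout: 600                 # seconds before the run is declared failed
```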
Oh, I'm a bit too chatty today, sorry for that... :)
Any ideas welcome!
Honza
9 years, 6 months
Agenda for Env-and-Stacks WG meeting (2014-10-21)
by Honza Horak
WG meeting will be at 13:00 UTC (14:00 London, 15:00 Brno, 9:00 Boston,
22:00 Tokyo) in #fedora-meeting on Freenode.
= Topics =
* Follow-up -- Docker docs
* Integration tests for packages
* Per-component integration tests in Fedora
* Picking chairman for the next meeting
* OpenFloor
9 years, 6 months
Agenda for Env-and-Stacks WG meeting (2014-10-14)
by Honza Horak
WG meeting will be at 13:00 UTC (14:00 London, 15:00 Brno, 9:00 Boston,
22:00 Tokyo) in #fedora-meeting on Freenode.
= Topics =
* Docker, Docker, Docker :)
* Dockerfile_lint -- where to include it
* New checks for Dockerfile_lint
* Packaging rules for Dockerfiles in Fedora
* Picking chairman for the next meeting
* OpenFloor
9 years, 6 months
Idea: Ability to define dependencies between coprs (correctly)
by Honza Horak
Hi all,
I have a proposal that would change how dependencies are defined in copr:
Problem:
Currently, copr allows adding a link to an arbitrary repo URL that is
made available for installing dependencies while building in copr. Using
such a dependent-repo link, we are able to build packages in coprA with
dependencies from another coprB.
However, when enabling only coprA and installing packages from it, we
can miss some packages from coprB, because those packages are not
available while coprB is not enabled.
Solution:
We should be able to define dependencies between coprs properly. When
creating coprA, we would add one or more dependent coprs ('userB/coprB')
instead of a repo link. Then all packages from these coprs would be
available during the build, the correct buildroots would be used (no
need to specify variables such as $releasever), and in addition we would
be able to provide the correct (i.e. all) RPMs also when *installing*
coprs.
There are basically two ways to implement this on the user's side:
1) Simpler, preferred by Mirek, the copr maintainer (CC'd):
The 'copr' plugin in dnf would gain a -r option that would basically
install all related coprs. That means that when running `dnf copr enable
-r userA/coprA`, the user would end up with two coprs enabled:
userA/coprA and userB/coprB.
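With option 1, the user's /etc/yum.repos.d would simply end up
containing repo files for both coprs, e.g. something like the following
(the section names and baseurl layout are illustrative only, not what
copr actually generates):

```ini
# Hypothetical result of `dnf copr enable -r userA/coprA` under option 1:
# the dependent copr's repo file is dropped in as well.
[userA-coprA]
name=Copr repo for coprA owned by userA
baseurl=https://copr-be.cloud.fedoraproject.org/results/userA/coprA/fedora-$releasever-$basearch/
enabled=1

[userB-coprB]
name=Copr repo for coprB owned by userB
baseurl=https://copr-be.cloud.fedoraproject.org/results/userB/coprB/fedora-$releasever-$basearch/
enabled=1
```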
2) More complicated, preferred by me :) :
The coprA repository from the example above would include not only the
RPMs built as part of this copr, but also the packages from coprB. That
means that when running `dnf copr enable userA/coprA`, the user would
not need to enable the userB/coprB repository and would still have all
packages available.
Both ways struggle with refreshing data:
* in 1) we might need to refresh the set of enabled coprs (on the
user's side)
* in 2) we would need to re-create the repodata in the dependent coprA
whenever coprB changes (on the server's side).
Let's discuss this a bit, I'm eager to hear your opinions.
Cheers,
Honza
9 years, 7 months
Agenda for Env-and-Stacks WG meeting (2014-09-16)
by Honza Horak
WG meeting will be at 13:00 UTC (14:00 London, 15:00 Brno, 9:00 Boston,
22:00 Tokyo) in #fedora-meeting on Freenode.
= Topics =
* Language specific mirrors for Fedora Playground compliant packages
* SCLs, building above them and their position in Fedora/EPEL
* Picking chairman for the next meeting
* OpenFloor
9 years, 7 months
Language Stacks in Docker Hub
by Václav Pavlín
Hi all,
Docker announced language stacks images in Docker Hub last week:
http://blog.docker.com/2014/09/docker-hub-official-repos-announcing-langu...
They are all based on Debian and pretty outdated. We should consider
creating such images for our stacks based on Fedora -- we are thinking
of releasing an F21 Alpha base image, so we could maybe build them on
top of it.
As we still don't have the build service I proposed some time ago, we
could use Docker Automated Builds -- we would "just" need to prepare
the Dockerfiles.
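Such a Dockerfile could be as small as the following sketch (the
fedora:21 tag, the chosen stack, and the package set are assumptions;
nothing has been agreed on yet):

```dockerfile
# Hypothetical minimal Python stack image on top of a Fedora 21 base
# image; the tag and package set are illustrative only.
FROM fedora:21
RUN yum install -y python python-pip && yum clean all
CMD ["python"]
```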
Have a look at what they provide and let me know if you think it's
interesting for us.
Thanks,
Vašek
--
Lead Infrastructure Engineer
Developer Experience
Brno, Czech Republic
9 years, 7 months