Council Engineering update
by Josh Boyer
Hi Env&Stacks WG,
As part of the ongoing Council updates, I'm organizing a Fedora
Engineering update. This includes the various WGs, rel-eng,
infrastructure, and a few others. The idea behind this is to give a
brief update of the work your group is doing towards the F23 and F24
releases. Think of this as a 5-10 minute "lightning talk" of the
highlights you want to see in those releases.
The meeting is July 7th. It would be excellent if you could have a
volunteer present the update for your group and stay around to answer
any questions. Worst case, please gather the information and send it
to me and I can do the overview, but representation from the group is
clearly preferred.
If you have questions, please let me know. The idea and format are
somewhat new, so we'll work through this as best we can.
josh
Skip Env-and-Stacks WG meeting? (2015-06-25)
by Honza Horak
Hey guys, I'm not available tomorrow, and I know of a couple of other
people who aren't either. So I'm proposing to skip tomorrow's meeting,
unless someone else volunteers to organize one.
Btw, don't forget to vote in the Env & Stacks and other elections :)
Honza
CTF - Containers (not only) Testing Framework
by Tomas Hozza
Hi.
Petr Hracek asked me to share a brief introduction to CTF with the
Env&Stacks WG, since I'm one of the authors of the CTF project.
CTF [2] is a relatively simple framework built on top of Behave [1],
which is a BDD testing framework written in Python. Tests written for
Behave consist of Features and Steps (and environment).
Features
--------
You specify the Features and Scenarios you want to test as a set of Steps
described in simple English. Those Steps can be parametrized, so you can
pass strings or tables to the Step implementation. Steps are connected
together using keywords like "Given", "When", "Then", "And", "But", and
so on. A nice thing about Features is that you can reuse the same Step in
multiple Scenarios, so you can write new test cases without having to
implement any new test code, as long as existing Steps cover them. Also,
when reading the Feature files, it is usually obvious what the test is
testing, since you are not reading the code, but a "human-readable"
representation of the test case.
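As a purely hypothetical illustration (the feature, scenario, and step
names below are invented for this example, not taken from the CTF
repositories), a Feature file could look like:

```gherkin
Feature: MariaDB container basics

  Scenario: Server starts and accepts connections
    Given a "mariadb" container image is available
    When I start the container with the default configuration
    Then the database accepts connections on port 3306
```

Each line after the Scenario heading is one Step; swapping "mariadb" for
another image name reuses the same Step implementations unchanged.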
Steps
-----
The Steps used in Features to describe the test cases are
implemented in separate files as plain Python functions, decorated
with decorators provided by Behave so that Behave can match the
Steps in Feature files to their implementations. As mentioned above, you
can pass arguments such as strings or tables (or even your own types, if
you implement a proper parser) to a Step. A context object is also
passed to each Step, so Steps can share state.
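As a rough, stdlib-only sketch of the matching idea (a real step file
would use `from behave import given` instead; the tiny registry below
just imitates Behave's parametrized matching, and the step text is an
invented example):

```python
import re

# Maps a compiled pattern to the function implementing that step.
STEP_REGISTRY = {}

def given(pattern):
    """Register a step implementation under a parametrized pattern.

    "{name}" placeholders become named regex groups, loosely like
    Behave's default parse-style matching. (Assumes the rest of the
    pattern contains no regex metacharacters.)"""
    regex = re.compile(re.sub(r"\{(\w+)\}", r"(?P<\1>.+)", pattern) + "$")
    def decorator(func):
        STEP_REGISTRY[regex] = func
        return func
    return decorator

class Context:
    """Stands in for Behave's context object, shared between Steps."""
    pass

@given('a "{image}" container image is available')
def step_image_available(context, image):
    # A real step would pull or inspect the image; here we just record it.
    context.image = image

def run_step(context, text):
    """Find the registered implementation matching the step text, call it."""
    for regex, func in STEP_REGISTRY.items():
        match = regex.match(text)
        if match:
            func(context, **match.groupdict())
            return
    raise LookupError("no step implementation matches: " + text)

ctx = Context()
run_step(ctx, 'a "mariadb" container image is available')
print(ctx.image)  # -> mariadb
```

The same registered function handles any image name, which is exactly
why one Step can serve many Scenarios.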
Environment
-----------
There is also an environment.py file, where you can specify code that
will be run before/after each feature/scenario/step, and you can even
filter the execution based on tags applied to a feature/scenario/step.
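An environment.py along these lines might look like the sketch below.
The hook names are Behave's standard ones; the "wip" tag and the
cleanup bookkeeping are invented for illustration:

```python
# environment.py -- Behave discovers these hook functions by name.

def before_all(context):
    # Runs once before the whole test run; good for global setup.
    context.started_containers = []

def before_scenario(context, scenario):
    # Filter by tags: skip any scenario tagged @wip.
    if "wip" in scenario.tags:
        scenario.skip("work in progress")

def after_scenario(context, scenario):
    # Clean up anything the scenario started.
    context.started_containers = []
```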
All of this so far is pure Behave. You can read more about it
on the Behave project page [1], which also has some nice examples.
CTF
---
CTF is meant as a way to distribute and reuse existing test cases across
various projects, with a focus on containers (although we want to make it
really general and not limit ourselves only to containers).
The basic idea is that you can have multiple test cases (Features and
Steps) that are valid for multiple containers (e.g. any layered image
with a database like MariaDB/MongoDB/...), and you want to reuse them.
Tests (Features and Steps) are stored in remote git repositories.
When executed in a project repository, CTF clones, based on the
configuration, all the remote test Features and Steps as git
submodules. It then creates a working directory for Behave, in which it
combines the local tests (Features & Steps) with all the remote tests,
prepares environment.py, and executes Behave in that working directory.
It is then possible to add/remove/update the cloned tests (git
submodules) if they were modified in the remote repositories.
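The assembly step described above could be sketched roughly as follows.
The directory layout and function name are assumptions for illustration
only; the real logic lives in ctf-cli [3]:

```python
import shutil
from pathlib import Path

def assemble_working_dir(project_dir, remote_repos, working_dir):
    """Combine local and cloned remote Features/Steps into one Behave tree.

    Roughly the idea behind CTF's working directory: copy every
    *.feature file into working_dir/ and every steps module into
    working_dir/steps/, so a single Behave run over working_dir sees
    the local and remote tests together.
    """
    working = Path(working_dir)
    steps_dir = working / "steps"
    steps_dir.mkdir(parents=True, exist_ok=True)
    sources = [Path(project_dir)] + [Path(r) for r in remote_repos]
    for source in sources:
        for feature in source.glob("**/*.feature"):
            shutil.copy(feature, working / feature.name)
        for step in source.glob("**/steps/*.py"):
            shutil.copy(step, steps_dir / step.name)
```

Behave would then be invoked on the combined directory, exactly as it
would be on a hand-written local test tree.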
This enables developers to write, for example, a set of tests that are
common to all Docker containers, and then execute them as part of testing
a specific container along with that container's local tests. It is also
possible to reuse already implemented remote Steps in local Features you
want to test. This means less code duplication, better test sharing, and
hopefully higher test coverage in the end.
There is a pretty comprehensive description of how ctf-cli works on the
project page on GitHub [3].
Feel free to reach out to the developers. The project is still at an
early stage, but development is active, and it is already used by some
teams to execute tests.
[1] http://pythonhosted.org/behave/
[2] https://github.com/Containers-Testing-Framework
[3] https://github.com/Containers-Testing-Framework/ctf-cli
Regards,
--
Tomas Hozza
Software Engineer - EMEA ENG Developer Experience
PGP: 1D9F3C2D
Red Hat Inc. http://cz.redhat.com
Building and publishing Software Collections from COPR or Koji?
by Nick Coghlan
There's more context for this question below, but has anyone looked at
what would be involved in building software collections in COPR or
Koji, and then publishing them to softwarecollections.org?
The context is that seeing https://github.com/squeaky-pl/portable-pypy
reminded me that I'd been meaning to ask for a while whether anyone has
looked at turning the PyPy and PyPy3 packages for Fedora/EPEL into
software collections for softwarecollections.org.
That then led to a follow-on thought, which is that building them is
only the first step; the next trick is updating them for future
releases, which then led to asking about the possible automation
paths.
Cheers,
Nick.
--
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
Latest thoughts on user level package management
by Nick Coghlan
In relation to the Fedora Modularization objective approved in
https://fedorahosted.org/council/ticket/26, I've been thinking further
about the ideas in
https://fedoraproject.org/wiki/Env_and_Stacks/Projects/UserLevelPackageMa...
and how it might be possible to evolve them into a maintainable,
multi-tiered ecosystem of components.
This isn't quite the same concept as Fedora.next's "rings", and the
word "levels" on its own is boring, so I'm going to take inspiration
from Fedora's logo and use "Aleph"
(http://en.wikipedia.org/wiki/Aleph_number), which also nicely
captures the fact that we expect there to be increasingly more
software in each tier (because the barriers to entry will be lower).
For each tier, there'd be different expectations on the degree of
scrutiny applied to affected components and how closely they track
upstream, with the outermost tiers being a Do-It-Yourself adventure
where all we're saying is "We've checked, and it's legal for you to
use this, and the folks publishing it aren't obviously malicious".
Each tier would also have different constraints on how the components
were published, and the typically available means of consuming them.
This is the first time I've written this down, so I expect it to
require a lot of modification to become a workable proposal, and it
may not be fixable at all. If folks at the WG meeting this evening
agree there's at least the seed of a good idea here, I'll move it over
to the wiki :)
The 6 proposed levels:
* Aleph 0: Essential components
* Aleph 1: Integrated components
* Aleph 2: Redistributed components
* Aleph 3: Experimental components
* Aleph 4: Developer components
* Aleph 5: Upstream components
Alephs 3-5 would be the domain of Environments & Stacks, Aleph 0
would be the domain of the Base WG and the Edition WGs, Aleph 1 would
include everything else specifically taken into account for release
management purposes (and perhaps a bit more), while Aleph 2 would
primarily be the domain of individual package maintainers.
Tiers using RPM as the publication format may contain distro-specific
patches; tiers using other formats (i.e. Alephs 4 & 5) would always
provide vanilla upstream packages. (As a result, I consider the fact
that the proposal switches technologies at Aleph 4 to be a positive
feature rather than a problem to be resolved.)
= Aleph 0: Essential components =
Publication format: RPM
Build system: koji
Consumption formats: RPM, ISO, AMI, OSTree, base Docker images
All the RPMs that go into Fedora Cloud, Fedora Atomic Host, and the
base installs for Fedora Server and Fedora Workstation are Aleph 0
components. Any component proposed for inclusion at this tier *must*
be available as a policy compliant RPM.
Some essential components could eventually be made available as
xdg-app sandboxes or layered Docker images. Whether they're classed as
"essential" or not would still be determined by whether or not they
were a default component in one of the Fedora Editions.
If essential components aren't closely tracking their upstream
counterparts, we should want to know why. We'd expect essential
components to be smoothly handed over to new maintainers rather than
getting orphaned without a migration plan.
= Aleph 1: Integrated components =
Publication format: RPM
Build system: koji
Consumption formats: RPM, ISO (, layered Docker image?, xdg-app?)
All the RPMs that go into Labs and Spins would be Aleph 1 components.
There might be other components that would fit here as well if
Fedora starts providing them in a pre-integrated form, like a layered
Docker image or an xdg-app. The upstream packages that feed into
softwarecollections.org should likely also be Aleph 1.
If integrated components aren't closely tracking their upstream
counterparts, we should want to know why. We'd expect integrated
components to be smoothly handed over to new maintainers rather than
getting orphaned without a migration plan.
= Aleph 2: Redistributed components =
Publication format: RPM
Build system: koji
Consumption formats: RPM(, layered Docker image?, xdg-app?)
Everything else in the main Fedora repos, as well as everything in EPEL.
The key difference from Alephs 0 and 1 is that these components may
not closely track their upstream counterparts, and they may be
orphaned and retired without a specific migration plan in place.
= Aleph 3: Experimental components =
Publication format: RPM
Build system: COPR
Consumption formats: RPM
Fedora Playground, COPR repos in general.
= Aleph 4: Developer components =
Publication format: nix???
Build system: ????
Consumption formats: nix???
This level would be developer components in a language and distro
independent format. Nix is the current frontrunner in my mind, mostly
due to the good discussions among myself, Randy Barlow (Pulp), and
Domen Kožar (NixOS), and the fact that Nix already uses the model of a
central system-wide package store, with user and environment specific
views into that store (this is a much nicer architecture for auditing
purposes, since you can audit the package store directly, rather than
having to check each environment).
The intended use of these components would be local development and
data analysis, as well as incorporation into layered Docker image and
xdg-app sandbox builds.
Generation of these components from their corresponding upstream
components should be fully automated (letting us easily track upstream
closely after the initial approval process), so the fact that Domen has
been working on automated PyPI -> Nix conversions
(https://twitter.com/iElectric/status/605686824443494400) does play a
part in my inclination towards Nix here :)
= Aleph 5: Upstream components =
Publication format: language dependent
Build system: ????
Consumption formats: language dependent
For ecosystems that are designed to accommodate redistribution, this
level would be for republishing approved upstream components in a
language dependent format. The key aspects would be the licensing
review to check it's actually permissible for us to redistribute it in
accordance with Fedora's policies, and that the publishers of the
software don't appear to be obviously malicious.
For language ecosystems where redistribution isn't well supported, it
would be possible to skip this tier and go straight to Aleph 3 or 4.
Cheers,
Nick.
--
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia