#9: default templates use /tmp
----------------------------+--------------------------------
Reporter: tflink | Owner: somebody
Type: defect | Status: new
Priority: major | Milestone: Initial Deployment
Component: Image Building | Version:
Keywords: |
----------------------------+--------------------------------
The templates currently in git specify /tmp as the working directory
for lorax and pungi. This is fine for F17 and earlier, but in F18 /tmp
is a size-limited tmpfs backed by RAM, so large compose working
directories can fill it up.
This is a simple fix - just substitute /var/tmp for /tmp in the
templates. It might be worth adding a warning if someone tries to use
/tmp as a work directory, but I really don't think that's necessary at
this time.
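For illustration, a rough sketch of the substitution; the file names
below are placeholders, not the actual template paths in git:

    import fileinput
    import re
    import sys

    # Hypothetical file names -- substitute the real template paths.
    templates = ['lorax.tmpl', 'pungi.conf']

    for line in fileinput.input(templates, inplace=True):
        # Rewrite a leading /tmp path component to /var/tmp, leaving
        # already-correct paths like /var/tmp alone.
        sys.stdout.write(
            re.sub(r'(?<![\w/])/tmp(?=[/\s]|$)', '/var/tmp', line))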
--
Ticket URL: <https://fedorahosted.org/fedora-build-service/ticket/9>
fedora-build-service <https://fedorahosted.org/fedora-build-service>
A service to build fedora images with a somewhat arbitrary package set from the standard fedora repos or builds from koji.
I'm going to be taking down the autoqa and autoqa-stg hosts for
updates shortly, since the virthost they are on is going to be
rebooted. I don't expect any issues, but I'll send out another email
when everything is done.
Tim
I don't think this particular conversation ever made it very far and,
IIRC, it hasn't been on the list yet, but I want to get it started
before FUDCon NA this weekend. I've cc'd David and Matthew because
I've talked with them about Fedora test automation recently and they
might have input.
In my opinion (and I suspect that other people feel similarly), AutoQA
in its current form is not capable of meeting Fedora's test automation
needs, mostly because we don't have a clear path towards external
tests, and it seems pretty clear that the current devs (myself
included) don't have the bandwidth to add any more tests to the
current set.
There has been some casual conversation about looking into switching
over to Beaker [1] so that we can leverage some of the tests currently
being used by various groups within Red Hat instead of having to
rewrite them for AutoQA/Autotest. However, I don't want this to sound
like 'autotest is bad'. I sincerely doubt that there is a single
framework/runner out there which will satisfy 100% of our needs; I'm
just looking to re-evaluate what we want from test automation before
deciding how we get there.
Instead of getting into the minutiae of what we can do with
beaker/autotest/robot/whatever at the moment, I want to get a better
idea of what we actually want and need so that we don't end up coding
ourselves into a corner in the future.
At the end of this email, I've listed the requirements for our test
automation from my perspective. I want to emphasize again that I
_don't_ want to get into specific frameworks/solutions yet - just what
we want an ideal framework to do. We can get into the
advantages/disadvantages of particular setups and other practicality
issues later.
I'm planning to make this into a wiki page but figured that I would put
it on the list first for some discussion.
Tim
[1] http://beaker-project.org/
For the sake of consistency:
- 'must' means that something is a requirement. Maybe not in the
  initial release, but it has to at least seem possible without too
  much effort or too many dirty hacks.
- 'should' means that it would be preferred but not absolutely required.
- 'would be nice' means that it would be cool, but nothing to lose
  much sleep over.
=== Basic Requirements ===
* should be written mostly in Python
* must be packageable in the Fedora and EPEL repos
  - mostly a licensing thing; other packaging issues could be
    overlooked if upstream is at least interested in taking patches
    to fix them
* should have a friendly, responsive upstream
* must have an understandable codebase
* must be extendable without dirty hacks
=== Reporting ===
* must be able to coordinate with bodhi (1.0, 2.0)
* must be able to report some information via fedbus
* must support the ability to report to external systems
* must be clear about which test version, package-under-test version,
  and Fedora release each report corresponds to
* must be clear about the test system's state (package versions,
installed packages etc.)
* should have some standardized reporting format
  - based on something standard like XML, JSON, YAML, etc. (a rough
    sketch of one possible result record follows this list)
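To make that last point a bit more concrete, here is a rough sketch of
what a single result record might look like if we went the JSON route;
every field name here is made up for illustration, not a proposed
schema:

    import json

    # Hypothetical result record -- all field names are illustrative.
    result = {
        'test': {'name': 'depcheck', 'version': '0.4.2'},
        'item': {'package': 'foo-1.2-3.fc18', 'release': 'F18'},
        'outcome': 'PASSED',
        'system': {
            'arch': 'x86_64',
            'packages': ['foo-1.2-3.fc18', 'bar-2.0-1.fc18'],
        },
    }
    print(json.dumps(result, indent=2))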
=== Automation Framework ===
* must be able to support spawning VMs in Fedora infra's cloud
  - or at least some other solution that supports rapid VM
    provisioning without the need to install from scratch for every
    test. Installing from scratch every time is not an acceptable
    option.
* must be able to differentiate between Fedora release numbers and
  package versions
* should be able to tell the difference between VMs and bare metal
where appropriate
* would be nice to have the VM type used during tests as a variable
when hooked up to a cloud-ish setup
* would be nice to support graphical installation testing
* would be nice to include support for grabbing new images from the
  image builder when that's supported
=== Library ===
* should be written mostly in Python
* should not make writing tests in languages other than Python more
  difficult than it needs to be
* would be nice to support basic reporting options in other languages
=== Test Runner ===
* must support any language (within reason)
* must be able to pull in new/updated tests independently of runner
  or framework updates
* must support version-specific tests (i.e. variants for different
  Fedora releases; see the sketch after this list)
* should be runnable outside the framework for testing and development
purposes
* would be nice to be able to support multiple libraries
  (beakerlib, application-specific stuff, non-Python support, etc.)
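As a rough sketch of the version-specific variant selection mentioned
above; the directory layout and helper here are invented for
illustration, not an existing AutoQA mechanism:

    import os

    def pick_variant(testdir, release):
        # Return the test variant for a given Fedora release, falling
        # back to a generic variant. The layout (test-f18.py versus a
        # generic test.py fallback) is hypothetical.
        specific = os.path.join(testdir, 'test-%s.py' % release.lower())
        if os.path.exists(specific):
            return specific
        return os.path.join(testdir, 'test.py')

    # e.g. pick_variant('tests/upgradepath', 'F18') returns the F18
    # variant if one exists, otherwise the generic test.py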
=== Tests ===
* must be decoupled from the automation framework
* must be able to run outside the automation framework
* must report sane results (looking at you, depcheck)
=== Test Repository ===
* must support non-Python languages
* must enable a review process for new tests before they are accepted
* must allow for test updates without admin or dev intervention
* should be in an existing package format (Python egg, RPM, etc.)