Signed-off-by: Steven Dake <sdake(a)redhat.com>
man/oz-install.1 | 11 ++++++++---
1 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/man/oz-install.1 b/man/oz-install.1
index b497813..7cf0077 100644
@@ -70,9 +70,14 @@ will undefine the libvirt guest with the same name or UUID and delete
the diskimage, so it should be used with caution.
-Use a timeout value of \fBtimeout\fR for installation, rather than the
-oz default. This can be useful if you know you have slower storage
-and want to wait longer for the installation to timeout.
+Terminate the installation of the guest image in \fBtimeout\fR seconds
+rather then the default of 1200 seconds. This value can be increased in the
+case of slow storage or multiple oz-install operations on the same machine
+consuming the disk bandwidth.
+Please note there is a separate termination action that occurs if 300 seconds
+elapses before any data is written to the VM image. This timer value is not
Customize the image after installation. This generally installs
Greg Blomquist and I have put together an Audrey design document for the 0.4.0 effort.
It can be found here:
Although we would appreciate input from anyone I'm requesting review feedback from the following people:
I am asking that all feedback be provided by COB one week from today, August 3, 2011. We are already moving forward with some development according to the presented design so direction changing feedback would be greatly appreciated as early as possible. If anyone needs more time please let us know.
I tried to provide XML examples that align with the format outlined in past discussion. If I missed that please let me know.
Joe and Greg
Some of these have been discussed, while some of these have been sitting
on my backlog for ages.
Currently configure allows us to generate ssl certificates for
conductor, we need to expand this to generate ssl certs for
imagefactory, iwhd, condor and deltacloud,
Furthermore, AFAIK a cert creation resource type and provider is lacking
in the upstream puppet project. We can split these bits into its own
puppet resource type and provider and send that upstream for inclusion
Currently we use a users table in the db to store user identity. We need
to be able to integrate a user store from LDAP, whether it be a server
we setup ourselves via configure or an external one we integrate in
(both need to be supported)
Configuration profiles / interactive installer:
The start of this was sent to list, but essentially a mechanism to
install different aeolus components on different machines, configuring
them to be interoperable with each other and existing infrastructure
resources (qmf, ldap, or postgres servers for example).
Furthermore a user friendly interactive installer should be provided so
that an admin can setup / configure only the specific components and
seed data that they are looking for
No work has been done to make Aeolus IPv6 compliant and this should be
done at some point.
Single click deployments:
The simpler the Aeolus experience the better, there should be a small
widget on the first conductor page after logging in allowing the user to
select a template and select a provider to deploy to before clicking to
actually launch it. The rest of the nitty-gritty details should be taken
care of by the app itself.
This would be a patentable feature for a 'one-click cloud deployment'
Improved rubygem/rpm and ruby/fedora integration:
The gem2rpm and ruby-rpm bindings can be improved significantly to
assist in our bundling of project dependencies and the ruby -> Fedora
release process. Specifically we should be able to do alot more in terms
of dependency analysis and making sure any given gem stack will work as
intended on our infrastructure.
Furthermore our existing tooling, such as the rpm/yum rake tasks, can be
improved and made available to the greater ruby community.
Code audit and improvements
There are alot of things that can be cleaned up, optimized and improved
in general in the aeolus codebase. Particularly conductor can be
optimized and should be improved to make use of ActiveResource,
ActiveModel, and other Rails 3 features
Condor REST frontend
If we can accomplish this, we can remove alot of the needless
replication from our conductor db, replacing the ActiveRecord models w/
ActiveResource derived ones. Furthermore, we can get rid of condormatic
We've had a lot of good discussion on the Aeolus list  about
authorization and identity lately, and also about the intersection
points of Aeolus and Katello. However I have been slow (sorry!) to
suggest and update the requirements and user stories for this
integration, and that has left folks asking a lot of questions about
the features that are affected by that integration, in particular:
* We will eventually need a UI for creating and managing templates,
assemblies, and deployables -- where's that gonna go?
* Precisely which components have to talk to each other securely in a
system where Katello and Conductor work together?
* Is there a need for an independent permission system in the
Warehouse? What about an independent identity system?
* One of the very cool features VCloud has is the ability to take a
bunch of applications -- really, VM templates -- and put them in a
"Catalog" that an admin can grant or revoke permission on for a set
of users. The virtue of the catalog is that it gives admins an easy
way to manage, say, "all the financial apps that are approved for
production," or "the apps that are in testing this week." We need
something like this for Aeolus, but how does it work exactly, and
which part of the system owns it?
I've been in a lot of meetings this week where we talked about these
questions, so I'm going to propose some answers here based on
that. There are a ton of unanswered questions as well, and I'm hopeful
that Aeolus and Katello folks can help answer them here.
(Katello folks, apologies for dumping a mostly-aeolus-centered design
doc into your midst, but I thought you guys might want to know what
we're thinking about too. Please feel free to comment on either or
1. Template design and management
(By "Template," I mean "Image templates, assembly definitions, and
We've had quite a bit of early user feedback asking, in no uncertain
terms, for a nice GUI to let users define templates. We removed our
nascent template definition UI from Conductor for the 0.3.0 release
because it didn't feel right there, which I continue to think was the
right idea, but there is clearly a need for such a GUI to exist
Fortunately, there is another project very close to Aeolus called
Katello . Katello's mission in life is allowing users to define
bare-metal systems and manage their access to the software that should
be provisioned on those systems. As such, it has as a core feature
providing a UI for creating "System templates."
I believe, and I think the Katello folks agree, that Katello users
would benefit from being able to use it to define systems that will
run in the cloud, as well as bare-metal systems. From there it's not a
big leap to say that Katello users will want to define the entire
range of templates that Aeolus can use and understand -- image
templates, for stuff that gets built pre-boot; system templates
(Katello) or assemblies (Audrey), for stuff that happens to a system
post-boot; and application templates (Katello) or deployables
(Audrey) for hooking systems together post-boot.
Note that the last of these things -- application templates -- are
also really useful for the bare-metal world. Given a set of system
templates, and the ability to specify a set of systems that work
together, it's not a big stretch to think that Katello users will
want a way to provision and launch that set of systems -- an
application -- either on a set of bare-metal machines, *or* in a cloud
provided by Conductor.
Of course, it will be some time before the Katello folks can design
and build the whole UI for managing all three kinds of Aeolus
templates and their interactions and dependencies. However, we don't
have to wait for that to happen before we can start thinking about
making Conductor and Katello work together. For release 0.4.0, I would
think we could get by with the following features:
* Katello provides an API that, given a user Katello knows about, will
return all the Katello system templates that user has permission to
see. (Note that templates in Katello are grouped by "environment," and
that a user can have permission on more than one environment, so
we'll have to deal with that hierarchy in the API somehow.)
* The Aeolus team builds a UI piece in or next to Conductor will allow
the user to:
* map those templates to a catalog (more about this later)
* map non-Katello templates (assemblies/deployables) to a
catalog. For now this could be pretty much the same mechanism that
our "Suggested Deployable" UI uses now -- store a URL of a
deployable to use
* Another UI piece, in or next to Conductor, will allow a user to:
* map a catalog to a pool
* check to see if the templates in that catalog have been built into
images that have been pushed to the cloud provider accounts that the
pool has access to
* ask the Image Factory to build and push any templates that have
*not* been built in the available accounts
* display the status of the build(s)
Somewhere in this process a translation step between the Katello
system template and the Factory image template will need to happen. I
don't particularly care where that step is. And it's important to note
that eventually we will be passing Katello image templates to Factory,
not Katello system templates -- hacking up the system template to get
an image definition out of it is a temporary first step.
I also think that in general, we should not be importing the actual
template into a catalog, but rather just importing a reference to it
so that we don't have duplicate copies of templates floating
around. I'm willing to be argued out of this however.
Some folks I talked to on the Katello side wondered if they should be
able to tell something on the Aeolus side "Build this thing right
now." I'm willing to say Conductor should provide a convenience API to
do this, but it's important to note that Katello would need to be able
to tell Conductor what pool (and hence what cloud providers) it wants
the image built for, and provide a user who is authorized to push
images to those providers.
(One thing I notice as I write this is that we are going to need to
pass user identity around fairly soon to accomplish this, which makes
me think a central identity store is probably a firmer requirement for
0.4.0 than I initially thought.)
So if we take the above feature list as "good enough" for 0.4.0, what
does that tell us about who needs to talk to whom?
2. Secured communications
I said above that we will have some UI in a catalog manager that hits
a Katello API to retrieve all the system templates a user is allowed
to see. For that to work, we're going to need to be able to give
Katello a user; and for that to work, we're going to need a common
identity store. Whether any of it is encrypted or not for this
go-round is not terribly important I don't think, although of course
it will have to be for production.
I also said that the catalog manager will need to ask the Factory to
build images for a set of cloud provider accounts. So there is going
to have to be a channel to do that. However, I do not see anywhere
here a user story that would require the catalog manager to act *on
behalf of a user*; Conductor has already checked to see if the
building user is allowed to build and push to the set of cloud
provider accounts for the given pool. So I don't think the Factory
needs to know anything about identity -- and, in fact, I don't think
it really matters if non-Conductor users call the Factory and ask it
to do stuff. Without the cloud provider account credentials, which are
locked up in Conductor, there shouldn't be anything the user can do
with the Factory that is harmful.
Finally, I implied (without really saying so) that Conductor, or the
catalog manager wherever that lives, needs to read information about
the state of images from the warehouse, and push images to the
warehouse. This means there will need to be a channel between the
catalog manager and the Warehouse. However, I do not see anywhere here
a user story that would require the catalog manager to talk to the
Warehouse *on behalf of a user* - the ability to push, update, or
launch images is all governed by Conductor's control of the cloud
provider account credentials, so I don't think there is any reason for
the Warehouse to have a notion of images belonging to users, or any
understanding of user identity. Note that THIS IS A BIG CHANGE from
what I and others have said in the past, but it does simplify what we
have to do in the short term quite a bit.
I believe the remaining secured communications -- between Conductor
and Deltacloud, Conductor and Audrey, etc. -- are well enough
understood that I don't have to discuss them here.
So, to sum up:
* Conductor (and its catalog manager) and Katello need to share
user identity, and fairly soon
* Conductor and Factory and Warehouse do not need to share user
* In fact, Conductor and Katello are the *only* two components in this
system that need to share identity.
I realize I've just answered my third question above, about whether
the Warehouse needs to know anything about authentication or
authorization. I believe the answer is no.
Although my first thought about catalogs as a feature was "I guess
we're doing this just because VMWare has them and we need to check the
feature complete box," the more we talked about the feature the more I
began to see it as a useful thing.
Things administrators can do with catalogs:
* For now, group image templates into a catalog; eventually, group
application templates into a catalog
* Connect catalogs to pools, making it possible for users to run the
applications in the catalog
* Check that the images that the applications in a catalog depend on,
have been built for the cloud provider accounts connected to a pool,
so that it is possible to launch the applications.
* Disconnect catalogs from pools. Maybe, also remove images that
are no longer referenced by an application that is connected to a
So from a user story perspective, you can imagine that an
administrator, on creating or maintaining a pool, would be able to
browse catalogs of applications to add to that pool. On adding the
catalog to the pool, the admin would be able to check to make sure the
images required by the apps in that catalog are built for all the
provider accounts the pool has access to. The users of that pool would
gain the ability to launch the listed applications.
I would like to see the design activity for catalogs, and hopefully UX
and wireframes, done soon. It would be really good to have a
super-simple (just system templates) implementation by the 0.4.0
All right. That was long. I'll be waiting to hear how wrong I am from
the assembled multitude. Please be thorough.
== Hugh Brock, hbrock(a)redhat.com ==
== Engineering Manager, Cloud BU ==
== Aeolus Project: Manage virtual infrastructure across clouds. ==
== http://aeolusproject.org ==
"I know that you believe you understand what you think I said, but I’m
not sure you realize that what you heard is not what I meant."
After fixing this, I discovered a failing test. The test didn't make sense to me because I'm pretty sure those features were only in the old UI, so I deleted the test. It would probably be good if someone else could give this bit a quick sanity-check...
Initial thoughts for Release 0.4.0 from Identity planning meeting:
Support authentication against external LDAP
Conductor will integrate with LDAP Server for authentication. It will
follow the same principles as Katello, in that it will use the local DB
as its primary data source for users and fall back on LDAP (TBC). e.g.
If a user does not already exist in the local DB it will: 1)
authenticate against LDAP 2) create the user in the DB.
Deleting users will consist of deleting the user in the local DB only.
this can then be created again, the next time a user logs in using LDAP
Listing users in Conductor, will consist of only listing the users in
the local database. Warehouse should share the same set of users as
conductor. Warehouse is likely supporting GSSAPI. We need to decide
whether warehouse will be authenticating against conductor or another
We intend to use OAuth across components for authentication. This would
require adding OAuth Provider support to conductor and OAuth client
support to each component accessing protected resources. Katello
already supports a OAuth (two-legged) which hopefully means relatively
straight forward integration once we have the other parts in place.
- support authentication against external LDAP
- provide authentication mechanism across aeolus components
* LDAP Support:
* Conductor auth against LDAP with local DB Fallback
* conductor first tries authenticate user against external LDAP
server. If user is found there, user account in local db is created
(except credentials) if it doesn't exist yet. If user is not found in
LDAP, local db is searched.
* Keep Consistent With Katello
* This means Using DB as initial resource and falling back to LDAP
* if admin deletes a LDAP user in admin section, user is deleted
only from local db, but this user can login again (as she will be
authenticated against LDAP)
* user listing in conductor - shoud we list only local users or all
* do we need UI for LDAP config (it's probably only IP + port,
maybe LDAP domain), if so we will need some table for saving
configuration (we don't have this yet)
* should be LDAP server setup part of aeolus-configure script? I
don't think so.
* but we should probably have something in aeolus-configure to make
conductor/etc aware of the ldap store
* IWHD Auth against:
* warehouse should know about same set of users as conductor? No
* warehouse is going to support GSSAPI. So if someone is
authenticating to warehouse, identity check should be done against
* Should IWHD and imagefactory users be independent
* should IWHD authenticate always against conductor or against LDAP
or other resource? if not against conductor local db users won't have
access to iwhd
* Should ImageFactory Authenticate users?Yes
* Authentication across components
* Use OAuth?
* Already Support By Katello
* supports OAuth (two-legged), so it's not a problem for conductor
to be a 'consumer' of Katello service and use two-legged OAuth too (in
case conductor will use Katello as a service)
* OAuth Providers:
* Will IWHD Support OAuth as well as GSSAPI?
* OAuth Clients
* if iwhd supports only GSSAPI (and should authenticate against
conductor) then conductor should act as GSSAPI server. Also user creds
in image build process will have to be passed through
* any other alternative to OAuth?
* Add LDAP support in conductor
* Update authentication model to auth againsts LDAP id available
* Create non existing users to local db
* Add OAuth Provider Support to
* Add OAuth Client Support to
* Add Deps to componets and fedora
* gem used in conductor for ldap auth, could be part of some more
sophisticated gem (devise, omniauth)
* oauth, gssapi libs
Fix for https://www.aeolusproject.org/redmine/issues/1510
This should work and shouldn't break any new tests, but I didn't test it on
providers that actually get affected by this as I don't have access to them.
Ian, please take a look at it and let me know if it solves the problem.
Taking a quick look around the code, it doesn't seem like we need the
rack-restful_submit gem anymore. Is that true? If so, I would like to remove
it from the Gemfile and the specfile, but I wanted to make sure I wasn't
missing anything before doing that.
Here is the proposal for a API feature planned for this iteration.
The general idea is to create a RESTful API for conductor allowing us to
manage all conductor functionality through this API.
Currently we have some xml and json responses implemented in controllers
For the beginning I suggest to use responder object to directly render
XML responses for collections,
lately when we want to stabilize the API we should use XML templates.
Here I propose features and tasks:
=== As a developer I want to have clean controllers.
* Move more code to before filters
* Move more code to models
=== As a developer I want to have full test suite for the API
* Implement all Rspec tests for the API
=== As a user I want to be able to consume xml API
* Add responder object to actions index and show for all controllers
=== As a user I want to be able to modify data through xml API
* Add responder object to actions update and destroy for all
=== As a user I want to have overview of the API
* Implement entry point for the API
* Implement/configure HTTP authentication