I've been contributing to the Heat project lately and I think it's
gotten to a point where we (the Aeolus community) could look at it and
figure out if it's something we want to use.
Please forgive me the length of this email. I tried to make it as short
as I could, but there's a lot to cover. Grab your favourite beverage and sit back.
A little background
Heat is a project that provides orchestration and high availability to
OpenStack. Its API is modelled after Amazon CloudFormation.
What it does, in Aeolus terms: you create a deployable and pass it to the
Heat API along with any launch-time parameters. Heat will work out the
correct order of the resources (instances, databases, load balancers,
remote storage, floating IP addresses, etc.) and launch and configure
the whole thing to produce a running deployment.
Heat started about five months ago and hopes to become one of the core
OpenStack components. It's written in Python following the OpenStack
development model and tools.
What's this got to do with Aeolus?
We're trying to solve some of the same problems:
* launching multiple instances in correct order
* passing parameters that are only known when an instance is launched
* configuring and advertising the services running inside the VMs
If we could outsource this to a project whose sole focus is solving
these issues, we could focus on things that make Aeolus unique: building
great UI, tools and APIs for managing cross-cloud deployments.
As a side benefit, we would get the current and future features of Heat:
* high availability
Heat is an OpenStack project and the developers did not put effort into
making sure that it works with anything else.
However, Ian Main and I looked into the code and it seems that we could
integrate Deltacloud support with only small changes to the Heat codebase.
The Deltacloud backend would be an external package that would not
pollute Heat's code and dependencies. The changes needed to add support
for a custom backend would be small, so there's a good chance they'd be accepted.
Proof of concept
It would be cool if we had a working prototype to demonstrate the
strengths and weaknesses as a reference for the discussion.
We're not there yet, but Ian and I have come up with a plan that should
produce a basic prototype reasonably quickly.
The first step is to get Conductor to launch multi-instance deployments
using Heat, nothing else. The plan is at the end of this email.
Heat's image management is very similar to Aeolus'.
The images are built using Oz and stored in Glance (OpenStack's image service).
Heat supports both installing everything at runtime from bare (JEOS)
images and using preconfigured images and just launching those.
So far the focus has been on the former, but both approaches work, and the
Heat developers are spending more time improving the image-based experience.
Since Conductor supports both modes as well (prebuilt images with Image
Factory, post-launch with Audrey), there shouldn't be an impedance mismatch.
We will keep building images in Image Factory; we'll pass the image IDs
from Deployable XMLs to Heat, which will in turn pass them to Deltacloud.
You can read the plan for the proof-of-concept and the long-term phases
below, and I would like your feedback. Please post corrections on anything
I've missed or got wrong, and anything else that comes to mind. Do you
think integrating with Heat makes sense?
I want to stress that this is not a done deal. I think that Heat can
be very useful to Aeolus, but it's quite possible that I've missed
something, and we still need to see it in action before we commit to anything.
There's a real possibility that this just is not a good match in which
case we'll go our separate ways (unless we decide to make it a good
match). Feel free to ask anything that's unclear.
Gimme teh links!
If you want to try Heat out, follow these steps:
Heat developers hang out at #heat on freenode and are all really nice chaps.
The repo for experimental integration of Deltacloud into Heat:
The Deltacloud client for Python lives in the official Deltacloud
The client is very barebones at the moment. We'll be adding the
necessary features here:
and work on getting them accepted upstream.
The API Heat uses at the moment is based on AWS CloudFormation. You can
read the documentation here:
Here are some Deployable descriptions that Heat accepts and is able to
launch (in Heat and CloudFormation terminology, these are called
"templates"; the deployment is called a "stack").
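For reference, a minimal CloudFormation-style template looks roughly like this (the resource name, image ID, and parameter are placeholders, not taken from any real stack):

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal single-instance stack",
  "Parameters": {
    "InstanceType": { "Type": "String", "Default": "m1.small" }
  },
  "Resources": {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-00000000",
        "InstanceType": { "Ref": "InstanceType" }
      }
    }
  }
}
```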
What follows is the short-term and long-term plan on getting this done.
Proof of concept
### Getting Deltacloud to work with Python ###
The Deltacloud API is language-agnostic but we still need Python bindings.
Deltacloud ships with a Python client but it hasn't been touched for
over a year. I've sent a few patches to fix it but more work is required
for it to be usable.
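To give an idea of the scope, here is a rough sketch of what a minimal Deltacloud binding could look like. The class and method names are illustrative only, not the actual client's API:

```python
import base64
import json
import urllib.request


class DeltacloudClient:
    """Illustrative Deltacloud API wrapper; not the shipped client."""

    def __init__(self, url, user, password):
        self.url = url.rstrip('/')
        # Deltacloud uses plain HTTP basic auth against the provider credentials
        token = base64.b64encode(f'{user}:{password}'.encode()).decode()
        self.auth_header = 'Basic ' + token

    def resource_url(self, collection, resource_id=None):
        # Deltacloud exposes REST collections such as /instances and /images
        url = f'{self.url}/{collection}'
        if resource_id is not None:
            url += f'/{resource_id}'
        return url

    def get(self, collection, resource_id=None):
        req = urllib.request.Request(self.resource_url(collection, resource_id))
        req.add_header('Authorization', self.auth_header)
        req.add_header('Accept', 'application/json')
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
```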
### Passing the Provider credentials from Heat API to Deltacloud ###
Currently, the Heat API user can authenticate using either Keystone or
the EC2-style credentials.
We need a new authentication handler that will receive Deltacloud-style
credentials and pass them to the Deltacloud API.
Ian has written a patch that does this.
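A sketch of what such a handler could do; the header names and the way the credentials are stashed are hypothetical, not Ian's actual patch:

```python
class DeltacloudAuthMiddleware:
    """Hypothetical WSGI middleware that pulls Deltacloud credentials out of
    request headers instead of validating the request against Keystone."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # Header names are made up for illustration
        creds = {
            'deltacloud_url': environ.get('HTTP_X_DELTACLOUD_URL'),
            'username': environ.get('HTTP_X_DELTACLOUD_USER'),
            'password': environ.get('HTTP_X_DELTACLOUD_PASSWORD'),
        }
        if not all(creds.values()):
            start_response('401 Unauthorized', [('Content-Type', 'text/plain')])
            return [b'Missing Deltacloud credentials']
        # Stash the credentials so the backend can open a Deltacloud session
        environ['deltacloud.credentials'] = creds
        return self.app(environ, start_response)
```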
### Integrating Deltacloud client with Heat ###
Heat is currently using the Nova python client to launch instances, etc.
The calls are already reasonably isolated.
I've written a shim module that provides the same API as the Nova client
and we hooked it up to Ian's authentication code.
It's not connected to Deltacloud yet (that's the next step), but as far
as the majority of Heat's codebase is concerned, everything works without
making a single call to OpenStack.
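The idea behind the shim, roughly: expose the handful of Nova client calls Heat actually makes and translate them into Deltacloud ones. The method subset and the backend call names below are illustrative, not the real code:

```python
class NovaShim:
    """Illustrative stand-in for the Nova client, backed by any object with
    Deltacloud-style create_instance/instances/destroy_instance methods."""

    def __init__(self, backend):
        self.backend = backend
        # novaclient callers write client.servers.create(...), so mirror that
        self.servers = self

    def create(self, name, image, flavor, **kwargs):
        # Translate a Nova-style launch into a Deltacloud-style one
        return self.backend.create_instance(image_id=image,
                                            hwp_id=flavor, name=name)

    def list(self):
        return self.backend.instances()

    def delete(self, server_id):
        self.backend.destroy_instance(server_id)
```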
### Getting Aeolus Conductor to generate AWS::CFN Template ###
Conductor already has all the info it needs to launch the instances
stored in the Deployable XML.
We'll take the XML, parse it and generate the equivalent JSON template
that Heat accepts.
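The conversion could look something like this. The Deployable XML element names below are a simplification of the real schema, just to show the shape of the translation:

```python
import xml.etree.ElementTree as ET


def deployable_to_template(xml_text):
    """Turn a (simplified) Deployable XML into a CloudFormation-style dict.

    The element and attribute names here are illustrative, not the actual
    Deployable schema Conductor uses."""
    root = ET.fromstring(xml_text)
    resources = {}
    for assembly in root.findall('assembly'):
        name = assembly.get('name')
        image_id = assembly.find('image').get('id')
        # Each assembly becomes one instance resource in the template
        resources[name] = {
            'Type': 'AWS::EC2::Instance',
            'Properties': {'ImageId': image_id,
                           'InstanceType': assembly.get('hwp', 'm1.small')},
        }
    return {'AWSTemplateFormatVersion': '2010-09-09',
            'Description': root.get('name', ''),
            'Resources': resources}
```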
### Passing the generated template from Conductor to Heat ###
Once we have the converted deployable, Conductor will send it to Heat
via the current CloudFormation API.
Heat will launch the deployment/stack via its Deltacloud bindings.
### Querying Heat data from Conductor ###
Heat doesn't support any callbacks. When Conductor wants to know details
about the stack it launched, it will use the CloudFormation API to query it.
For the proof of concept stage, we will just issue the query to Heat
upon every relevant UI action: e.g. `ListStacks` when showing
deployables in the UI, `DescribeStackResource` when showing the details of
a single deployable, `DescribeStackEvents` to get deployable events, etc.
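As a sketch, the UI-to-API mapping could be as thin as this; the `cfn` object stands in for whatever CloudFormation client Conductor ends up using, and the method names mirror the API calls listed above:

```python
class StackInfo:
    """Maps Conductor UI actions onto CloudFormation-style queries.

    `cfn` is any object exposing list_stacks / describe_stack_resource /
    describe_stack_events; a real CloudFormation client would fit here."""

    def __init__(self, cfn):
        self.cfn = cfn

    def deployables(self):
        # Shown when listing deployables in the UI
        return self.cfn.list_stacks()

    def deployable_details(self, stack, resource):
        # Shown on a single deployable's detail page
        return self.cfn.describe_stack_resource(stack, resource)

    def deployable_events(self, stack):
        # Shown in the deployable's event log
        return self.cfn.describe_stack_events(stack)
```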
### Dead Nova client dependencies ###
Parts of the Heat code import the Nova and Glance clients but don't
actually use them. These imports should be removed.
### Nova client exceptions ###
Several libraries import the exception classes from Nova client.
If we could make the Deltacloud backend raise the same exceptions
without depending on the `python-novaclient` library, that would be cool.
Barring that, we'll either have to wrap the backend exceptions or have
the Deltacloud code bite the bullet and depend on the novaclient lib
(the Heat package depends on it anyway so it may not be as bad).
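The wrapping option could look like this. The exception names are modelled on novaclient's but defined locally, so there is no hard dependency; the backend exception is hypothetical:

```python
class NotFound(Exception):
    """Local stand-in for novaclient's NotFound exception."""


class BackendNotFound(Exception):
    """Hypothetical exception raised by the Deltacloud backend."""


def translate_errors(func):
    """Decorator mapping backend exceptions onto the Nova-style ones
    the rest of Heat's code expects to catch."""
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except BackendNotFound as exc:
            raise NotFound(str(exc))
    return wrapper
```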
### Make the backends swappable at the configuration level ###
Similarly to how Heat can change between the AMQP implementations for
Fedora and Ubuntu, we need a configuration option that replaces
OpenStack with Deltacloud.
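Mechanically, that option could be as simple as a dotted module path in the config file that gets imported at startup. The option name and module paths here are hypothetical:

```python
import importlib


def load_backend(module_path):
    """Load a cloud backend from a dotted module path taken from the
    config file, e.g. cloud_backend = heat.engine.nova or
    cloud_backend = heat_deltacloud.backend (both names hypothetical)."""
    return importlib.import_module(module_path)
```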
### Make the Heat Deltacloud backend a separate package ###
Heat doesn't have to ship additional backends if they don't want to. The
configuration option from the previous task would take a Python module
path; that way, it doesn't matter whether the backend lives in the main
codebase or in a separate package.
### Template format ###
Aeolus uses a different format from Heat to describe deployables.
We need to settle on what's the best thing to use. Options:
1) Translate Aeolus Deployable XML to Amazon CloudFormation JSON
2) Implement Aeolus Deployable XML format in Heat
3) Adopt CloudFormation JSON in Conductor
4) Have both Aeolus and Heat adopt another (open standard) format
In all likelihood Heat will reject 2), we won't switch to 3), and since
*both* communities would have to accept and implement 4), we'll probably
stick with 1) for some time.
Still, listing it here as it needs to be decided.
### Heat-Conductor communication ###
Heat is working on implementing a proper RESTful API that will
1) be more consistent with other OpenStack components
2) suck less
Once that's done, Conductor should probably adopt this API.
We then need to figure out how to pass data from Heat back to Conductor.
If Conductor is launching instances via Heat, how does that information
get back to it? Options:
1) Conductor asks Heat every time it needs the data
2) Conductor asks Heat periodically and caches the results
3) Heat allows Conductor to register a callback -- either via HTTP
(webhooks) or via RPC -- and tells Conductor when stuff happens (e.g. an
instance was launched, crashed, etc.)
4) Conductor continues to use DBomatic to query the instances directly
via Deltacloud, ignoring any additional information Heat may provide
For the proof of concept, anything other than 1) is a premature
optimization; we'll revisit this once we have something working.
I think 4) would be rather brittle especially since adopting Heat might
possibly let us drop DBomatic entirely.
### Is Heat going to be the only way to launch deployments in Conductor
or will it be an optional part? ###
We will need to see how stable and usable Heat is, how easy it is to
deploy and keep running (it would be another dependency, after all), and
how much benefit and code cleanup it brings to Aeolus.
### Missing parts in Deltacloud ###
Deltacloud provides only a subset of the information OpenStack clients
have access to. A lot of these are provider-specific.
We need to identify these cases and decide what to do with them:
1) Have Deltacloud provide the access for providers that support them
and advertise the support properly
2) Don't let Heat provide them for non-OpenStack providers
3) Implement them in Heat for providers that support them and keep
Deltacloud as the lowest common denominator
This includes working with the Deltacloud people: we're going to let
them know what's missing and discuss the appropriate solution.
Since the Deltacloud API is already focused on discoverability (e.g.
when showing an instance, it lists the actions that can be performed on
it), that may be the best approach if all parties agree.
I was just offered a slot to present Aeolus at the Ohio LinuxFest.
The OLF is one of the biggest community-run Linux conferences in the
States so it'll be a great opportunity to share / promote the project to
a wide / diverse audience.
The conference is at the end of September, so besides my usual security
and other work up to then, I'm going to be taking some time to throw
together and send around a quality presentation for the event. I would
like to heavily lean on a demo, using the wui and cli tools to deploy to
the cloud using a hosted instance of Aeolus (even if that is non-public
at the time being). I'd like to skip over the configure / setup stuff
and just demonstrate the meat and bones of using the application to
deploy to the cloud.
Any thoughts, comments, presentation material, etc. would be appreciated.
To follow rubygem tradition we have decided to come up with a more
catchy name for Image Management Engine. Because, well, Image
Management Engine is pretty dull. This task is harder than it seems.
It turns out the creative side of my mind has been locked away in a
little box. I'm hoping one day I'll find the key, but right now it's
looking grim so I'm relying on you guys for inspiration :)
We're open to all suggestions; the wackier, the more memorable :). One
thing we would like to do is keep with the Cloud or Conductor theme.
I'm pleased to announce release 0.9.0 of Oz. Oz is a program for
doing automated installation of guest operating systems with limited
input from the user. Release 0.9.0 is a (long overdue) bugfix and
feature release for Oz. Some of the highlights between Oz 0.8.0 and 0.9.0:
- Easier to create Debian/Ubuntu packages
- Ability to specify the disk size in the TDL
- Ability to specify the number of CPUs and amount of memory used
for the installation VM
- Cleanup and bugfixes to oz-cleanup-cache
- Ability to install Fedora-17 disk images
- Ability to install guests as a non-root user. This has several
caveats; please see the documentation on
http://github.com/clalancette/oz for more information.
- Ability to install RHEL-6.3 disk images
- Ability to install ScientificLinuxCERN disk images
- Ability to install Mandrake 8.2 disk images
- Ability to install OpenSUSE 10.3 disk images
- Ability to install Ubuntu 12.04 disk images
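For example, the new disk-size knob goes into the TDL like this (the rest of the template is a minimal Fedora example; please check the Oz documentation for the exact schema):

```xml
<template>
  <name>fedora17</name>
  <os>
    <name>Fedora</name>
    <version>17</version>
    <arch>x86_64</arch>
    <install type='url'>
      <url>http://download.fedoraproject.org/pub/fedora/linux/releases/17/Fedora/x86_64/os/</url>
    </install>
  </os>
  <disk>
    <size>20</size>
  </disk>
  <description>Fedora 17 with a 20 GB disk</description>
</template>
```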
A tarball of this release is available, as well as packages for
Fedora-16. Instructions on how to get and use Oz are available at
If you have any questions or comments about Oz, please feel free to
contact aeolus-devel@lists.fedorahosted.org or me
Thanks to everyone who contributed to this release through bug
reports, patches, and suggestions for improvement.