Daniel P. Berrange wrote:
On Thu, Feb 21, 2008 at 01:57:08AM +0000, Daniel P. Berrange wrote:
> On Wed, Feb 20, 2008 at 08:01:32PM -0500, Jeremy Katz wrote:
>> On Tue, 2008-02-19 at 19:16 +0000, Daniel P. Berrange wrote:
>>>>> from its kickstart recipe. Currently developers building the appliance have to
>>>>> boot a VM using the F8 boot.iso and run the kickstart script in the VM and so
>>>>> on. While this works, it involves many steps with potential for failure. This
>>>>> new tool reduces the problem to simply
>>>> As opposed to virt-install --name ... ? I'm not convinced there's a
>>>> huge gain in terms of number of places for failure :-/
>>> You have to have a working virtualization stack & it takes more resource
>>> overhead than just doing the chroot'd build. Empirically the part of our
>>> build process doing the guest based installs has been less reliable than
>>> the livecd-creator part. Having a disk-creator will address that problem
>> You have to have a working virt stack to *use* virtualization or test
>> things. I'm not sure that's the argument to use. And as far as
>> reliability, what bugs are you talking about? livecd-creator has
>> managed to have lower numbers of bugs largely due to a) less
>> functionality and b) fewer users.
> The host you use virtualization on is not necessarily the same as the host
> you build your images on, though. If people don't have hardware virt in their
> dev machine it can be beneficial to build images via this tool, in preference
> to using the very slow QEMU emulator. Even if they do have virtualization, the
> CPU & particularly memory overhead of building in the host is reduced. One
> specific bug is that the disk image ends up with the hostname of the guest
> VM embedded in /etc/sysconfig/network & /etc/hosts. Another was that the user
> in question wanted to run the tool on a host without hardware virt support.
Some other examples of scenarios where you want to build appliance images but
do not have virtualization capabilities directly accessible:
- Machines where the user's primary OS is running under an embedded hypervisor.
QEMU is tolerable for booting an image to verify that it works, but building
the image in QEMU is too slow to be practical.
Obviously, since my project uses precisely that (qemu) I'll defend it a
bit: some examples of where 'too slow to be practical' is IMHO an
oversweeping generalization:
- when a few hours or overnight is not a big deal
- when you can use a standardized pre-existing image as a base (that
might have taken hours to build once), and all you need to do is add or
remove a small set of packages, which only takes a few minutes
- in the future, when qemu, either via kvm/kqemu or just plain faster
hardware, reduces the install time from hours to minutes, and the
simplicity and security of no-root-privs becomes more valuable than the
time saved using alternate methods (at least for some use cases).
Naturally these might not be situations you are interested in, but I
think your statement of 'too slow to be practical' was an
oversimplification which you knew I would take the bait on and defend ;)
Basically, I do agree that, given the current relative immaturity of these
sorts of imaging tools, we do find ourselves repeatedly building images
the 'long way'. But in the future, I see a polished pipeline, where
you have one or more standard images built and cached (because disk is
cheap), and further one-off appliances are quickly built as minor
modifications of those cached images. But that's just the itch I'm
scratching... I grant that for your purposes, running disk-creator as
root is clearly better for the sorts of virtualization tasks you seem to
be targeting (right now).
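One way the cached-base workflow above could be realized today is with
qemu-img's copy-on-write overlays. This is only a sketch, not something
disk-creator or livecd-creator does; the image names are hypothetical, and it
assumes the qemu-img tool from the qemu package is installed:

```shell
# Build (or download) one standard base image once -- the slow step.
# base-f8.img is a hypothetical pre-built raw disk image.

# Create a cheap qcow2 overlay that records only the differences from the
# base; the base image itself is never modified, so many one-off appliances
# can share it:
qemu-img create -f qcow2 -b base-f8.img appliance-overlay.qcow2

# Boot the overlay and make the small package add/remove changes there:
qemu -hda appliance-overlay.qcow2 -m 256
```

The per-appliance cost is then just the few minutes of package churn, not the
hours of a full kickstart install, and no root privileges are needed on the
build host.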
- Building images to deploy to a virtual machine hosted by an ISP, eg
  linode.com, where you have the option of either providing a pre-installed
  disk image, or using one the ISP built for you.
- The virtualization technology on your local machine may be different from the
  target machine. eg you can run VirtualBox on your desktop, but want to build
  Xen based images to deploy to Amazon's EC2 hosting environment.
Yup, two more cases where qemu (or *cough* perhaps vmware ;) can fit the
bill, if you aren't in a massive hurry.
-dmc