#182: Write virt management script
----------------------------+-----------------------------------------------
 Reporter:  kparal          |       Owner:
     Type:  task            |      Status:  new
 Priority:  major           |   Milestone:  Virtualization
Component:  infrastructure  |     Version:  1.0
 Keywords:                  |
----------------------------+-----------------------------------------------
We must write a virt management script that will be able to do the basic tasks for us:
 * query for available virt systems that we can use
 * switch between "real" disk and a disk snapshot for a virt system
 * revert a disk snapshot for a virt system
 * perform some command in a virt system - e.g. "yum update" for keeping it periodically up-to-date
 * install/reinstall a virt system for us with a specified distro
 * (some more to come?)
We may want to use some pre-defined naming syntax to recognize what virt systems we have. For example:
{{{
/dev/vg_autoqa/F12_i686_1
/dev/vg_autoqa/F12_i686_1-snap
}}}
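To make this concrete, here is a rough, untested sketch of what the query and revert operations could look like on top of that naming convention. The volume group matches the example above, but the script interface and the snapshot size are my own assumptions:

{{{
#!/bin/bash
# Hypothetical sketch only -- interface and snapshot size are
# assumptions, not a finished design.
VG=vg_autoqa
SYSTEM=$1            # e.g. F12_i686_1
SNAP_SIZE=${2:-5G}   # copy-on-write space reserved for the snapshot

# "query": list the virt systems we have (one LV per system)
lvs --noheadings -o lv_name "$VG"

# "revert": drop the stale snapshot and take a fresh one, so the
# guest starts again from the pristine "real" disk
lvremove -f "/dev/$VG/${SYSTEM}-snap" 2>/dev/null
lvcreate --snapshot --size "$SNAP_SIZE" \
         --name "${SYSTEM}-snap" "/dev/$VG/$SYSTEM"
}}}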
#182: Write virt management script
-----------------------------+----------------------------------------------
   Reporter:  kparal         |       Owner:
       Type:  task           |      Status:  new
   Priority:  major          |   Milestone:  Virtualization
  Component:  infrastructure |     Version:  1.0
 Resolution:                 |    Keywords:
-----------------------------+----------------------------------------------
Comment (by jlaska):
I was curious how this might work, so I've modified the script supplied by mmcgrath (slightly newer than what is attached to ticket #128).
Basically, I started by creating master guests for each of the stable system configurations:
 * f13-i386-master
 * f13-x86_64-master
 * f12-i386-master
 * f12-x86_64-master
 * f11-i386-master
 * f11-x86_64-master
These systems are tuned and adjusted as desired. They are intended as the master copies used for all subsequent guests. They'll be updated at some pre-defined interval (daily, weekly, etc.), along these lines (see the sketch below):
 * Update packages: {{{yum update}}}
 * Remove system-specific udev net rules: {{{rm /etc/udev/rules.d/70-persistent-net.rules}}}
 * Handle other system-specific details - ssh keys, /etc/sysconfig/network, /etc/sysconfig/network-scripts/ifcfg-eth0
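For what it's worth, the refresh could be as simple as a cron job along these lines; the assumption that each master is booted and reachable over ssh by its guest name is mine, not something we've settled on:

{{{
#!/bin/bash
# Hypothetical nightly refresh of the master guests. Assumes each
# master resolves in DNS under its guest name and has our ssh key;
# the other per-system cleanups (ssh host keys, ifcfg-eth0) are
# left out for brevity.
MASTERS="f13-i386-master f13-x86_64-master f12-i386-master
         f12-x86_64-master f11-i386-master f11-x86_64-master"

for guest in $MASTERS; do
    virsh start "$guest" 2>/dev/null   # harmless error if already running
    sleep 60                           # crude wait for the guest to boot
    ssh "root@$guest" '
        yum -y update &&
        rm -f /etc/udev/rules.d/70-persistent-net.rules
    '
    virsh shutdown "$guest"
done
}}}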
Next, I use the attached script to create a slave guest using a specified ''release'' and ''arch''. The script creates an LVM snapshot of the specified master image, then creates and boots a virt slave using the LVM snapshot. For example:

{{{
# bash create-virt-slave.sh -r f13 -a i386
Using slave guest name 'f13-i386-slave-1'
One or more specified logical volume(s) not found.
Creating LVM snapshot - f13-i386-slave-1 ...
Logical volume "f13-i386-slave-1" created
Creating virt guest using /var/tmp/quickguest.Ddr3CAr1/f13-i386-slave-1.xml ...
Domain f13-i386-slave-1 created from /var/tmp/quickguest.Ddr3CAr1/f13-i386-slave-1.xml
Slave 'f13-i386-slave-1' created.
Use 'virsh console f13-i386-slave-1' to connect to guest.
}}}
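For anyone who doesn't want to open the attachment, the core of the script boils down to roughly the following two steps. This is my paraphrase using virt-install rather than the generated XML the script feeds to virsh create, and the volume group and sizes are guesses:

{{{
#!/bin/bash
# Rough paraphrase, not the attached script's actual code.
RELEASE=$1        # e.g. f13
ARCH=$2           # e.g. i386
VG=vg_autoqa      # assumed volume group
SLAVE="${RELEASE}-${ARCH}-slave-1"

# snapshot the master image...
lvcreate --snapshot --size 5G --name "$SLAVE" \
         "/dev/$VG/${RELEASE}-${ARCH}-master"

# ...and boot a guest on top of the snapshot
virt-install --name "$SLAVE" --ram 1024 \
             --disk "path=/dev/$VG/$SLAVE" \
             --import --noautoconsole
}}}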
This seems like it would satisfy the need for quick'n'dirty guest management. There is still some work needed on the script. For example:
 * The script attaches to the guest console to record it to a log file. In production this is probably fine, but for testing it breaks using {{{virsh console slave}}}.
 * I'd really like to get this set up so that slaves use a fresh LVM snapshot on boot. Without this, we run the risk of a test corrupting the snapshot. This still offers some level of protection over a bare-metal setup, since we can just re-run the ''create-virt-slave.sh'' script. However, it would be handy if the slaves used a new snapshot on each boot (one possible approach is sketched below). We can then rely on autotest rebooting the system after each test run (thus cleaning the system).
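One possible way to get the fresh-snapshot-per-boot behaviour, assuming a new enough libvirt, is a qemu hook script: libvirt calls /etc/libvirt/hooks/qemu with the guest name and an operation before every start, so an untested sketch like this could rebuild the snapshot there (the volume group and size are again assumptions):

{{{
#!/bin/bash
# Untested sketch of /etc/libvirt/hooks/qemu. libvirt invokes it as
# "qemu <guest> <operation> <sub-operation> <extra>"; on "prepare"
# (just before boot) we rebuild the slave's snapshot from its master.
GUEST=$1
OP=$2
VG=vg_autoqa   # assumed volume group

if [ "$OP" = "prepare" ] && [[ "$GUEST" == *-slave-* ]]; then
    MASTER="${GUEST%-slave-*}-master"  # f13-i386-slave-1 -> f13-i386-master
    lvremove -f "/dev/$VG/$GUEST" 2>/dev/null
    lvcreate --snapshot --size 5G --name "$GUEST" "/dev/$VG/$MASTER"
fi
}}}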
Hope this helps!
----- "AutoQA" trac@fedorahosted.org wrote:
#182: Write virt management script -----------------------------+---------------------------------------------- Reporter: kparal | Owner: Type: task | Status: new Priority: major | Milestone: Virtualization Component: infrastructure | Version: 1.0 Resolution: | Keywords: -----------------------------+---------------------------------------------- Comment (by jlaska):
I was curious how this might work, so I've modified the script supplied by mmcgrath (slightly newer than what is attached to ticket#128).
Basically, I started by creating master guests for each of the stable system configurations
- f13-i386-master
- f13-x86_64-master
- f12-i386-master
- f12-x86_64-master
- f11-i386-master
- f11-x86_64-master
These systems are tuned and adjusted as desired. They are intended as the master copies used for all subsequent guests. They'll be updated at some pre-defined interval (daily, weekly etc...).
- Updated packages: {{{yum update}}}
- Removed system-specific udev net rules: {{{rm
/etc/udev/rules.d/70 -persistent-net.rules}}}
- Handle other system-specific details - ssh keys,
/etc/sysconfig/network, /etc/sysconfig/network-scripts/ifcfg-eth0
Next, I use the attached script to create a slave guests using a specified ''release'' and ''arch''. The script will creates an LVM snapshot of the specified master image, then creates and boots a virt slave using the LVM snapshot. For example ... {{{ # bash create-virt-slave.sh -r f13 -a i386
Using slave guest name 'f13-i386-slave-1' One or more specified logical volume(s) not found. Creating LVM snapshot - f13-i386-slave-1 ... Logical volume "f13-i386-slave-1" created Creating virt guest using /var/tmp/quickguest.Ddr3CAr1/f13-i386-slave-1.xml ... Domain f13-i386-slave-1 created from /var/tmp/quickguest.Ddr3CAr1/f13-i386-slave-1.xml
Slave 'f13-i386-slave-1' created. Use 'virsh console f13-i386-slave-1' to connect to guest. }}}
This seems like it would satisfy the need for a quick'n'dirty guest management. There is still some work needed with the script. For example,
- The script attaches to the guest console to record it to a log
file. In production, this is probably fine, but for testing, it breaks using {{{virsh console slave}}}
- I'd really like to get this setup so that slaves use a fresh LVM
snapshot on boot. Without this, we run the potential of a test corrupting the snapshot. This still offers some level of protection over a bare- metal setup, we just re-run the ''create-virt-slave.sh'' script. However, it would be handy if the slaves would use a new snapshot on each boot. We can then rely on autotest rebooting the system after each test run (thus cleaning the system).
Hope this helps!
Wow, that seems like a great start. Thanks, James!
#182: Write virt management script
--------------------+-------------------------------------------------------
 Reporter:  kparal  |       Owner:
     Type:  task    |      Status:  new
 Priority:  major   |  Milestone:  Virtualization
Component:  core    |  Resolution:
 Keywords:          |
--------------------+-------------------------------------------------------
Comment (by kparal):
I have found the Oz tool:
http://aeolusproject.org/oz.html
that could help us with some tasks. Linking here.
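In case it saves someone a look at the docs: Oz builds a guest image from a TDL template file via oz-install. An untested sketch, where the TDL fields follow the Oz documentation and the install URL is a placeholder, not a real mirror:

{{{
# Untested sketch of driving Oz; the install URL is a placeholder.
cat > f13-i386.tdl <<'EOF'
<template>
  <name>f13-i386-master</name>
  <os>
    <name>Fedora</name>
    <version>13</version>
    <arch>i386</arch>
    <install type='url'>
      <url>http://example.com/fedora/releases/13/Fedora/i386/os/</url>
    </install>
  </os>
  <description>Fedora 13 i386 master image</description>
</template>
EOF
oz-install f13-i386.tdl
}}}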
#182: Write virt management script
--------------------+-------------------------------------------------------
 Reporter:  kparal  |       Owner:
     Type:  task    |      Status:  new
 Priority:  major   |  Milestone:  Virtualization
Component:  core    |  Resolution:
 Keywords:          |
--------------------+-------------------------------------------------------
Comment (by kparal):
Another tool:
http://lukas.zapletalovi.com/2011/11/quick-provision-script-now-on-githubcom.html