Improving our Xen Usage
Posted on Tue 31 May 2016 • Tagged with Institute for Computer Vision and Computer Graphics, Work
The software we use at ICG for virtualizing servers is Xen. That's fine because it has not made my life unnecessarily hard yet. There are, however, some things that could be done better - especially when handling things with Puppet.
- This post was updated once (2016-08-28)
How it used to work
When I initially found the infrastructure for our Xen guests, the configuration files were located in a subdirectory of /srv that turned out to be an NFS share (mounted hard, because that's the default). This was the same for all our Xen hosts apart from one, which had local configurations but symlinked to a similar folder.
Inside these folders were many leftover configuration files of VMs that had been retired long ago, which made finding the currently used files an annoying task.
The main reason I chose to rework this was the NFS mount - when the host providing the NFS share wouldn't reboot during a standard maintenance, I had no configuration for any guest on all but one of our Xen hosts. That was an inconvenient situation I hoped to avoid in the future.
How it works right now
One of my issues with the previous solution was that it left important configuration files in a non-standard path instead of somewhere under /etc. Furthermore, I wanted to use version control (Git, to be precise) in order to keep the directory clean and current while also maintaining the file histories.
I integrated everything into our Git-versioned Puppet code by writing a class which installs the xen-hypervisor-4.4-amd64 package and maintains the /etc/xen/xl.conf and /etc/xen/xend-config.sxp files as well as the directory /etc/xen/domains - the latter being a single flat directory where I keep all Xen guest configuration files.
The files are named according to a special syntax so that it's possible to see at a glance where the domains are supposed to run (e.g. 02-example_domain.cfg).
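To illustrate, here is a minimal sketch of what such a class might look like. The module name, source paths and modes are assumptions for illustration, not our actual code.

# Hypothetical sketch of the class described above.
class xen_host {
  package { 'xen-hypervisor-4.4-amd64':
    ensure => installed,
  }

  # Hypervisor configuration files, served from the (assumed) module.
  file { '/etc/xen/xl.conf':
    ensure => file,
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
    source => 'puppet:///modules/xen_host/xl.conf',
  }

  file { '/etc/xen/xend-config.sxp':
    ensure => file,
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
    source => 'puppet:///modules/xen_host/xend-config.sxp',
  }

  # The flat directory holding all guest configurations,
  # e.g. 02-example_domain.cfg; purging keeps retired configs out.
  file { '/etc/xen/domains':
    ensure  => directory,
    owner   => 'root',
    group   => 'root',
    mode    => '0755',
    recurse => true,
    purge   => true,
    source  => 'puppet:///modules/xen_host/domains',
  }
}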
While further improving our Xen hosts - adding monitoring and unattended upgrades, and optimizing the DRBD running on some of them - I soon found out that this solution wasn't great either. The flat directory prevented me from writing simple Puppet code to use Xen's /etc/xen/auto directory to have certain guests automatically started (or resumed, depending on circumstances) on boot of the host.
What the suggested solution looks like
Since Puppet is not a scripting language, your established way of thinking (mine being, "I know, I'll use a 'for' loop") often can't solve the problem, and you either have to research new ways of working around it or find idiomatic ways to solve it.
I needed a way to make sure the right Xen configurations would end up in each host's /etc/xen/auto directory without them trying to start configurations meant for other hosts. Given the naming scheme, this could be as easy as the following snippet.
# NOTE: untested and only here for illustration purposes.
# You need to get the host number from somewhere,
# but that wouldn't be the main issue.
exec { 'link-xen-configurations':
  command     => '/usr/bin/find /etc/xen/domains -type f -name "NUMBER-*.cfg" | /usr/bin/xargs -t -I FILENAME /bin/ln -f -s FILENAME /etc/xen/auto/',
  provider    => shell,
  user        => 'root',
  refreshonly => true,
}
Of course, you would need to remove existing links first, and using execs is a messy business after all. Besides - something I hadn't touched on yet - there are also VM configurations that carry two prefixes to signify on which hosts they can run (e.g. 01-03-other_example.cfg), because DRBD syncs their contents on a block level between two hosts. Given this, it's even more complex to build such a system in a way that won't break in spectacular fashion the first time you look away after a deploy.
My plan is to create host-specific folders in our Puppet code and have Puppet symlink those, since the $::hostname variable provided by Puppet's Facter makes this extremely easy. In addition, disentangling the multiple-host configurations will be necessary - this will avoid having DRBD-capable hosts starting the same VM at the same time. I might combine this with changing the device specified in the Xen configurations.
-disk = ["phy:/dev/drbd13,xvda,w"]
+disk = ["drbd:myresource,xvda,w"]
This will direct Xen to put the DRBD resource named 'myresource' into the Primary role, and configure it as device xvda in your domU.
~ /etc/xen/scripts/block-drbd (slightly changed to use whole disk instead of partition)
The interesting thing here is that the resource will automatically become primary when the Xen domain is started - there is no need to have DRBD itself promote a particular node on startup; this happens on demand as soon as a Xen guest requires it.
In time - with DRBD 9 - it might even become reasonable to have every VM host able to run every guest, thanks to cluster-mode block syncing.
Update (2016-08-28)
After some planning, some research and testing, I arrived at the following setup.
DRBD 9 auto promotion
I previously thought that one could use the drbd:resource syntax to automatically mount the storage for our VMs. This does not work because it is incompatible with HVM guests; the drbd syntax is only enabled for PV guests. This seems to be due to a timing issue and is a known complication. The bad news is that previously this has been solved by patching in a sleep(5), which I really didn't want to do.
The great news, however, is that this is obsolete since DRBD 9 supports auto-promotion. This means that when there is currently no primary for a resource in a cluster and one node wants write access, that node is promoted to primary. This works great and requires no further configuration. With a simple xl create my_vm.cfg the node becomes primary and the VM is booted.
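A guest configuration relying on auto-promotion can therefore reference the DRBD device directly. The following sketch is hypothetical - apart from the /dev/drbd13 device from the diff above, the name, sizes and bridge are made up for illustration.

# 02-example_domain.cfg - hypothetical guest relying on DRBD 9
# auto-promotion: opening /dev/drbd13 for writing promotes this
# node to primary, no block-drbd helper script needed.
name    = "example_domain"
builder = "hvm"
memory  = 4096
vcpus   = 2
disk    = ["phy:/dev/drbd13,xvda,w"]
vif     = ["bridge=xenbr0"]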
Folders per host
There was an easier option than symlinking available. Instead of relying on symlinks all the time, I created folders matching the hostnames of all Xen hosts. Then I have Puppet modify the /etc/default/xendomains script to automatically start the configurations from said directory by using a template.
file { '/etc/default/xendomains':
  ensure  => file,
  owner   => 'root',
  group   => 'root',
  mode    => '0644',
  content => template('automation/xen_configurations/xendomains.erb'),
  require => File['/etc/xen/domains'],
}
The only line deviating from the previous version of the xendomains file is the one setting the folder according to the hostname.
XENDOMAINS_AUTO=/etc/xen/domains/<%= @hostname %>
This ensures that only the correct VMs will be booted on startup of a machine. Furthermore, I can modify the place where a VM will be booted on its next start right from our GitLab by changing the file path - which makes working with others much easier. You want to move where the VM runs? Just move the config over to the other folder, shut the machine down, and boot it on the other host.
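In practice, such a move could look like the following - the host and file names are hypothetical, with the folders mirroring the hosts' hostnames.

# Move example_domain from host 'xen02' to host 'xen03' in the repo.
git mv domains/xen02/02-example_domain.cfg domains/xen03/02-example_domain.cfg
git commit -m 'Move example_domain to xen03' && git push

# After Puppet has distributed the change:
xl shutdown example_domain                                  # on xen02
xl create /etc/xen/domains/xen03/02-example_domain.cfg      # on xen03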
While creating the hostname folders, one should not forget to touch a .gitkeep file inside each of them, to have them in the repository and distributed to the machines even if they are ostensibly empty.