Preparing the Virtual Reality course at ICG

For a while now, a lot of my working time has been spent preparing the technical part of a Virtual Reality course at ICG. Since the setup was fairly complex, I thought a review might be interesting.

  • This write-up contains notes on fabric, puppet, apt, dpkg, reprepro, unattended-upgrades, synergy and equalizer.
  • I worked with Daniel Brajko, Bernhard Kerbl and Thomas Geymayer on this project.
  • This post was updated 4 times.

The setup

The students will be controlling 8 desktop-style computers (“clients”) as well as one additional desktop computer (“master”) which will be used to control the clients. The master is the single computer the students will be working on – it provides a “terminal” into our 24(+1)-display video wall cluster.

Each of the 8 computers is equipped with a current, capable NVIDIA GPU (a GTX 970) which powers 3 large, 1080p, stereo-enabled screens stacked vertically along a metal construction. The construction serves as the mount for the displays, for the computer at its back, and for all cables. Additionally, each mount was designed to be easily and individually movable by attaching wheels to the bottom plate. The design of these constructions, as well as the planning, organization and acquisition of all components, was done by Daniel Brajko.

the videowall, switched off


I could go into detail here about how my colleague planned and organized the new Deskotheque (that’s the name of the lab) and oversaw the construction of the mobile mounts. However, since I am very thankful for not having had to deal with shipping or assembly, I will spare you that part. Instead I will tell how one of our researchers and I scrambled to get a demo working in little to no time.

All computers were set up with Ubuntu 14.04. We intended to use puppet from the start – it was initially suggested by Dieter Schmalstieg, the head of our institute. At that time our puppet infrastructure was not yet ready, so I had to set up the computers individually. After installing openssh-server and copying my public key over to each computer, I used Python fabric scripts I had written to execute the following command:

fabric allow_passwordless_sudo:desko-admin 
  set_password_login:False change_password:local -H deskoN

This command accessed the host whose alias I had previously set up in my ~/.ssh/config. The code for those commands can be found on GitHub. The desko-admin account has since been deleted.
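In essence, those two tasks write a sudoers drop-in and flip PasswordAuthentication in sshd_config. A rough sketch of the effect in plain Python – the helper names are made up for illustration and are not the actual code from the repository:

```python
# Sketch of what the fabric tasks roughly do on each host.
# These helpers are illustrative, not the real implementation.

def sudoers_entry(user):
    """Content written to /etc/sudoers.d/<user> for passwordless sudo."""
    return "{} ALL=(ALL) NOPASSWD:ALL\n".format(user)

def disable_password_login(sshd_config):
    """Return sshd_config with SSH password authentication turned off."""
    lines = []
    for line in sshd_config.splitlines():
        # drop any existing PasswordAuthentication directive
        if line.strip().startswith("PasswordAuthentication"):
            continue
        lines.append(line)
    lines.append("PasswordAuthentication no")
    return "\n".join(lines) + "\n"
```

On the real hosts, the fabric tasks run these changes over SSH with sudo and reload sshd afterwards.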

A while later our puppet solution was ready and we connected those computers to it. Puppet now handles a variety of tasks:

  • the ICG apt repository is used as an additional source (this happens before the main stage)
  • a PPA is used as an additional apt source to enable the latest NVIDIA drivers (this happens before the main stage)
  • NVIDIA drivers, a set of developer tools, a set of admin tools, and the templates, binaries and libraries for the VRVU lecture are installed
  • unattended-upgrades, ntp and openssh-server are enabled and configured
  • apport is disabled (because honestly, I have no clue why Ubuntu ships this pain enabled)
  • deskotheque users are managed
  • SSH public keys for administrative access are distributed
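To give an idea of what such a node definition looks like, here is an illustrative puppet excerpt – not our actual manifests; the URL is a placeholder and the apt::source type assumes the puppetlabs-apt module:

```puppet
# Illustrative sketch only: an additional apt source plus a few of the
# packages and services mentioned above.
apt::source { 'icg':
  location => 'https://apt.example.org/icg',   # placeholder URL
  release  => $::lsbdistcodename,
  repos    => 'main',
}

package { ['ntp', 'openssh-server', 'unattended-upgrades']:
  ensure => installed,
}

service { 'ssh':
  ensure  => running,
  enable  => true,
  require => Package['openssh-server'],
}
```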


First impression

If you don’t care for ranting about Ubuntu, please skip ahead to “Moving parts”, thank you. Setting up a different wallpaper for two or more screens in Ubuntu happens to be a rather complicated task. For the first impression I needed to:

  • log in as desko-admin
    • create the demo user account
    • have demo log in automatically
  • log in via SSH as desko-admin
    • add PPA for nitrogen
    • install nitrogen and gnome-tweak-tool
    • copy 3 distinct pictures to a given location on the system
  • log in as demo
    • disable desktop-icons via gnome-tweak-tool
    • set monitor positions (do this the second time after doing it for desko-admin because monitor positions are account-specific. This, btw, is incredibly stupid.)
    • set images via nitrogen (because who would ever want to see two different pictures on his two screens, right?)
    • disable the screen saver (don’t want people having to log in over and over during work)
    • enable autostart of nitrogen (that’s right, we are only faking a desktop background by starting an application that runs in the background)

Only after this had been done for every single computer was the big picture visible: all the small images formed one big photograph and made an impressive multi-screen wallpaper – at least if you stood back far enough not to notice the pixels. Getting a picture that’s 3*1080 x 8*1920 (i.e. 3240 x 15360 pixels) is rather hard, so we upscaled an existing one.
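The slicing itself is simple arithmetic: 8 columns (one per node) of 1920 px each, 3 stacked rows of 1080 px each. A minimal sketch of the per-screen crop rectangles (just the geometry, not the tool we actually cut the image with):

```python
# Per-screen crop rectangles for the 8x3 video wall:
# 8 columns of 1920 px (one per node), 3 stacked rows of 1080 px.
SCREEN_W, SCREEN_H = 1920, 1080
COLS, ROWS = 8, 3

def crop_boxes():
    """Return {(node, screen): (left, top, right, bottom)} in pixels."""
    boxes = {}
    for node in range(COLS):
        for screen in range(ROWS):
            left, top = node * SCREEN_W, screen * SCREEN_H
            boxes[(node, screen)] = (left, top,
                                     left + SCREEN_W, top + SCREEN_H)
    return boxes
```

Each resulting box can be fed to any image tool that crops by pixel rectangle; node 0/screen 0 is the top-left tile, node 7/screen 2 the bottom-right one.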

The result of this pain: one switches on all computers and they all start displaying parts of the same picture, logged in via the same account. You can immediately start a demo using all screens with this user. (This procedure was made even simpler by having puppet deploy SSH public and private keys for this user – so, as demo, you can instantly jump from one deskotheque computer to another.)

Moving parts

For the first big demo, held for a selected number of people during WARM 2015, I worked together with Thomas Geymayer, the main developer of our in-house fork of synergy, on setting up said program. It took us some attempts to get everything working in the first place, since he had used Ubuntu 14.10 for development while the cluster ran the 14.04 LTS I had rolled out earlier. Since the puppet solution wasn’t ready by then, we spent two frantic days copying, trying, compiling, trying again and copying via SFTP between the individual nodes in order to get everything to work properly. Thomas also had to rework some of the implementation – our fork was originally designed for presentations, not for remote-controlling several devices – which he did in admirably little time. Though we had some issues during the presentation, the attendees seemed interested and impressed by our setup.

Soon after that deadline I prioritized finishing our puppet solution, since I had gotten very, very annoyed with manually syncing directories.


Bernhard Kerbl wanted to work with the Equalizer framework in order to enable complex rendering tasks. Each of the computers in the cluster is supposed to compute a single part of the whole image (or rather 3 parts, given that 3 monitors are connected to each node). The parts must be synchronized by the master so that the whole image makes sense (e.g. no part of the image may be further ahead in a timeline than the others). Usually I expect bigger projects to offer Ubuntu packages, prebuilt Linux binaries or even a PPA. Their PPA doesn’t offer packages for the current Ubuntu LTS though, so we ended up compiling everything ourselves.

That took a while, even after figuring out that one can use apt-get and Ubuntu packages instead of compiling libraries like boost from source. After some trial and error we arrived at a portable (by which I mean “portable between systems in the cluster”) solution, which I packaged using fpm. Since the students will be using the headers and libraries of the framework, we could not simply ship that package and be done with it – we also had to ensure that everything could be compiled and run without issue. The result is a package with the Equalizer libraries and almost everything else that was built, with a seemingly endless list of dependencies, since we had to include both build-time and runtime dependencies.

In order to package everything, we installed all the dependencies, did an out-of-source build and packaged the result with fpm.
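The fpm invocation boils down to “take this directory, emit a .deb, and declare a (long) dependency list”. A sketch of how such a call can be assembled – the package name matches ours, but the dependency list and directory are short examples, not our exact values:

```python
# Assemble an fpm command line: directory -> .deb with explicit deps.
# fpm flags used: -s (source type), -t (target type), -n (name),
# -v (version), -d (one flag per dependency).
def fpm_command(name, version, deps, root_dir):
    cmd = ["fpm", "-s", "dir", "-t", "deb", "-n", name, "-v", version]
    for dep in deps:
        cmd += ["-d", dep]   # both build-time and runtime dependencies
    cmd.append(root_dir)
    return cmd
```

Joining the resulting list with spaces gives the shell command; in our case the `-d` section alone ran to dozens of entries.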

In the last weeks before this article, I’ve seen a 3D rendering on almost all screens of the cluster, which was great. I enjoy seeing people use systems I helped build.

Puppet: apt or dpkg

Having a prepared .deb file didn’t solve all my trouble though. I had two options for installing the file via puppet: apt or dpkg. This was troubling: dpkg does not resolve dependencies when used this way – a bad thing given that the dependencies of our vrvu-equalizer package were a pretty long list. apt, however, doesn’t offer a source parameter – so we had to provide a way to install the package from a repository.
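In puppet terms, the two alternatives look roughly like this (a sketch, with a placeholder file path; only one of the two resources would actually appear in a manifest):

```puppet
# Option 1: dpkg provider -- installs a local file, ignores dependencies.
package { 'vrvu-equalizer':
  ensure   => installed,
  provider => 'dpkg',
  source   => '/var/cache/icg/vrvu-equalizer.deb',  # placeholder path
}

# Option 2: default apt provider -- resolves dependencies, but the
# package has to be available from a configured repository.
package { 'vrvu-equalizer':
  ensure => installed,
}
```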

After a bit of research I decided to set up an in-house repository for the institute, hosting those packages which we cannot comfortably use from other sources. At the time of this writing it holds patched versions of unattended-upgrades for Trusty, Precise, Wheezy and Jessie as well as our vrvu-equalizer version for Trusty. (I recommend against using our repository for your computers since I haven’t found the time to repair the slightly broken unattended-upgrades for systems other than Jessie.)

deb <codename> main

I created the repository using reprepro and we sign our packages with a dedicated GPG key.
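A reprepro repository is mostly defined by its conf/distributions file; ours looks roughly like the following (field values here are illustrative, the key ID is a placeholder):

```
# conf/distributions (sketch) -- one such block per supported codename
Origin: ICG
Label: ICG
Codename: trusty
Architectures: amd64 i386
Components: main
SignWith: ABCDEF01
```

Packages are then added with `reprepro includedeb trusty package.deb`, which also updates and signs the repository indices.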


I’ve automated the installation of upgrades on most of our Linux-based machines at the institute, mostly because I don’t want to babysit package upgrades when security-critical updates are released. *cough* openssl *cough* However, I ran into one problematic issue: I ran out of space on the /boot partition due to frequent kernel updates which don’t remove the previous kernels.

I’ve since set the Remove-unused-dependencies parameter, but that didn’t do everything I wanted. This parameter only instructs the script to remove dependencies that happen to be no longer needed during this run. Dependencies which were “orphaned” before the current run will be ignored. This means that manual upgrades have the potential to lead to orphaned packages which remain on the system permanently.
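For reference, the setting in question lives in /etc/apt/apt.conf.d/50unattended-upgrades:

```
// /etc/apt/apt.conf.d/50unattended-upgrades
// Only removes dependencies orphaned during the same run.
Unattended-Upgrade::Remove-Unused-Dependencies "true";
```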

Since the unattended-upgrades script is written in Python, I took a stab at implementing the functionality I wanted to have for use with our installations. After I had done that, I packaged everything for Ubuntu Precise Pangolin, Ubuntu Trusty Tahr and Debian Wheezy and put everything in our ICG apt repository to have it automatically installed.

Unattended-upgrades, again

A review of my previous modification to unattended-upgrades became necessary since root kept getting mail from the cronjob associated with unattended-upgrades, even though I had specifically configured the package via puppet to only send mail in case of errors. Still, every few days, we would get emails containing the output of the script. Here’s an example:

debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
dpkg-preconfigure: unable to re-open stdin: 
(Reading database ... 117338 files and directories currently installed.)
Preparing to replace subversion 1.6.17dfsg-4+deb7u8 (using .../subversion_1.6.17dfsg-4+deb7u9_amd64.deb) ...
Unpacking replacement subversion ...
Preparing to replace libsvn1:amd64 1.6.17dfsg-4+deb7u8 (using .../libsvn1_1.6.17dfsg-4+deb7u9_amd64.deb) ...
Unpacking replacement libsvn1:amd64 ...
Processing triggers for man-db ...
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
Setting up libsvn1:amd64 (1.6.17dfsg-4+deb7u9) ...
Setting up subversion (1.6.17dfsg-4+deb7u9) ...

I am currently in the process of solving this by rewriting my modification in a cleaner, more structured way – one much closer to the original script, and one that keeps in mind that the necessary environment variable for debconf must be set in the execution path.

My initial error was that cache.commit() in the script immediately applied all changes made to the cache. While I intended to only apply the deletion of marked packages at the point of my call to the method, all pending changes got applied – even those for installing/upgrading packages. The script then returned prematurely and stdout got written to. This in turn meant that root would get mail, since root always receives mail if cronjobs produce output.

Update 1: While my current progress does no longer call commit prematurely, it still sends me e-mails. I probably forgot to return True somewhere.

Update 2: In the meantime I think I fixed that issue by returning the success status of the auto-removal process and assigning it to the pkg_install_success variable if it does not already contain an error.
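The pattern from Update 2, stripped of the python-apt specifics (names are simplified from the actual script): the auto-removal result may only ever downgrade the overall status, never mask an earlier error.

```python
# Simplified sketch: fold the auto-removal result into the run status
# without turning an earlier failure back into a success.
def finish_run(pkg_install_success, do_auto_remove):
    """do_auto_remove() returns True on success, False on failure."""
    auto_remove_success = do_auto_remove()
    if pkg_install_success:
        # only overwrite the status if it does not already hold an error
        pkg_install_success = auto_remove_success
    return pkg_install_success
```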

Update 3: Fixed every issue I found and submitted a pull request on Github. However, I don’t know if it will be accepted since I implemented my preferred behaviour instead of the old one. I am not sure whether I should’ve added an additional parameter instead.

Update 4: The pull request was merged. Unfortunately, I will still be stuck patching my older systems.


A Letter to the Dev: thoughts about Audiosurf 2’s “Autofind Music”

Dear Dylan,

First of all, I tremendously enjoy playing Audiosurf 2. I bought it as soon as it was available on OS X – I had longed for that to happen, since Audiosurf (1) was one of my favorite games I had to leave behind when switching operating systems. While I personally find Mono a little harder than in AS1 (or do I recall it having an “easy” mode?), I still love every minute I play.

However, I am under the firm impression that the “autofind music” feature is not well implemented. From this forum thread I gather that you previously were not scanning external disks. Right now, you are doing some things that are worse and will probably result in the scan never finishing its run. Here are some recommendations on how to make it better.

  • Build either a blacklist or a whitelist of folders, with its contents varying by operating system (Windows, OS X, Linux)
  • Exclude system folders
  • Set a maximum depth to which you follow symlinks / NTFS junctions. The lower, the better.
  • Think about implementing a time-out. (This is not necessarily a good idea, just something to think about when a scan runs for more than, say, 30 minutes.)

Here are some suggestions for exclusions, prefixed by operating system for your convenience:

  • OS X: ~/Library (contains preferences, caches, etc for your user account)
  • OS X: /Volumes/Time Machine (contains the external copy of time machine, the Apple provided backup system)
  • OS X: /Volumes/MobileBackups (contains the local version of time machine, enabled for all laptops on which Time Machine is active)
  • OS X: /Volumes/BOOTCAMP (NTFS volume which is there when someone enables dual-booting with Windows on their Mac)
  • OS X: Generally don’t read outside of a user’s home, unless it’s a portable device (/Volumes/…)
  • OS X: Don’t access hidden folders (starting with “.”)
  • Windows: C:\Windows (system components)
  • Windows: C:\Program Files (installation data)
  • Windows: C:\Program Files (x86) (installation data for 32-bit applications on 64-bit systems)
  • Windows: %appdata%, %localappdata% and %appdata%/…/locallow (Microsoft explains this better than I would)
  • Linux: Generally don’t read outside of a user’s home, unless it’s a portable device (/mnt/…, /media/…, /mount/…)
  • Linux: Don’t access hidden folders (starting with “.”)

I have built this list to try and help you make AS2 an even greater game – one which actually finishes automatically finding my music instead of digging through my local and external backups, accidentally indexing music that might be gone the next time, and following potential symlink circles. I sincerely hope this helps you.


I originally wrote this in the steam forums, however it might be useful to keep around in case someone needs advice on the same topic.


Media Recap: Q4 2014

I have not stopped writing down the media I consume, nor do I feel the need to stop sharing it with you. However, my current schedule exhausts me a lot more easily than previous ones did. Because of this change in workload and scheduling, I am switching my Media Recap series from every month to 4 times a year. Here’s what I checked out in Q4 2014.



State of e-mail 2014

I’ve been chatting with @stefan2904 about mail clients recently and we came to the conclusion that we’re rather unsatisfied with the current status of desktop mailing software.

Only a few weeks back I reorganized my complete e-mail workflow again. I had done this once before; it was unpleasant the first time and still annoying the second. Moving your mails from one provider to another is crappy, slow and error-prone – the more advanced tools are complicated and not suited for an impatient mood. I’m not sure what the preferred tool for this task is, but migrating your existing mails with Apple Mail or Thunderbird is every bit as shitty as it sounds. (CMD+A, drag to a folder on the other mail provider, wait for a timeout to occur, repeat.)

Anyway. My previous setup looked like this:

  • IMAP (standard, or rather sub-standard) at my website host, whose SquirrelMail web interface is crappy and whose filtering sucks in every single category you can think of, be that spam or rules for regular mail.
  • IMAP (standard, but somewhat better) Horde mail interface at my institute at university.
  • Exchange (the name is precisely the activity I wanted to do with it) mail for university, directly at university
  • iCloud IMAP (sub-standard) – holy shit. I had wanted to switch to iCloud for its Push delivery of new mail to my iOS devices. I’ve never before seen such a ridiculous spam-reporting technique: you are supposed to forward mail that their filter has missed to a special address. And iCloud completely blocks spam instead of collecting it in a dedicated folder like Gmail does.

In essence, I had all those accounts set up on all of my devices (3, about to become 4) and that led to the occasional confusion and a lot of micromanagement for identities and preferences when setting up a device or changing a tiny detail.

There were a few points I intensely disliked about the former setup, the most annoying one being getting spam onto my mobile phone. Since there is no automated spam filtering in the iOS world, you have to rely on your server component. If your server part happens to be crap, you are syncing every tiny piece of unwanted mail to your mobile devices regardless of its importance (read: spam is not important). That means more irrelevant notifications and less battery life. I had arrived at this setup after realizing I wanted Push notifications for at least some of my mails. Newer versions of iOS do not provide Push for Gmail accounts, so I had switched everything to iCloud.

However, working with multiple e-mail accounts, aliases and different push/fetch settings as well as redirects quickly proved painful and actively discouraged me from using my preferred address, the one associated with my domain.

To Google again

In order to remedy this, as well as to get better push support, I’ve moved back to Gmail. Since Gmail support on iOS is not exactly the best (although quite good) and the Gmail iOS app feels more like a wrapper around its website than a responsive app, I’ve also decided to make Mailbox my new mail client on both iOS and OS X (admittedly, the desktop version is only in beta at the moment, but it works okay).

Another big reason for my renewed use of Gmail is its automated spam filtering: in contrast to other solutions, which require you to follow a certain process for reporting spam, Gmail allows you to simply move an unwanted mail to its Junk folder via IMAP. Learning happens automatically on the server side. Let me repeat this, so you can appreciate it better: there is no need to create rules or other procedures to combat spam other than marking unwanted mails as spam when they arrive.

What works great

Swiping is a great interaction method for clearing messages quickly. Auto-swipes sync across your devices (as they should). While it would be preferable to have absolutely all filtering on the server side, creating simple rules is extremely fast and very handy. Thanks to the Dropbox integration, both rules and preferences sync to your other devices if you choose so.

In contrast to Google’s Inbox, which I also tested for a few hours, I vastly prefer Mailbox’s simple white interface to Google’s Material Design. As you are probably aware, Google tries its best to keep you immersed in its ecosystem, which makes working on iOS harder if you prefer to use tools from multiple companies.

What’s decidedly bad


It seems there is no (outgoing) attachment support in the desktop version yet. From looking around the forums I arrived at the conclusion that the preferred method is to put a file into one’s Dropbox and send a link to it. I am curious whether this will be automated via the GUI in the future.

While drafts are accessible on the desktop, it’s simply not possible to save a draft. I’ve tried hitting CMD+S, I’ve checked whether there is a prompt on closing an unsaved message, and I’ve double-checked the menus for an option regarding the saving of drafts. It seems I will keep my habit of keeping e-mails as short as possible.


Mailbox for iOS seems to choke on particularly long e-mails – even on the latest iPad (iPad Air 2), so I assume it is not a CPU problem. Since this only happens with the extremely long log file one of our servers creates every day, it is not a problem for me.

Another slight problem is iOS’s unwillingness to let users change the default mail program. While this could easily be remedied by Mailbox providing an iOS 8 share extension, it is currently necessary for me to have my Gmail account configured in Apple’s Mail in order to share articles from Instapaper and Reeder easily. I’ve set its refresh to ‘manually’ to avoid syncing everything twice.
