Using Continuous Integration for puppet

Posted on Sun 01 November 2015 • Tagged with Institute for Computer Vision and Computer Graphics, Work

I’ll admit the bad stuff right away: I’ve checked in bad code, I’ve put wrong configuration files on our services, and quite often files referenced in .pp manifests had a different name than the one specified or were not moved to the correct directory during refactoring. I’ve made mistakes that in other languages would’ve been considered “breaking the build”.

Given that most of the time I’m both developing and deploying our puppet code, I’ve found many of my mistakes the hard way. Still, I’ve wished for a kind of safety net for some time. Gitlab 8.0 finally gave me the chance by integrating easy-to-use CI.

  • This post was updated once (2016-08-28)

Getting started with Gitlab CI

  1. Set up a runner. We use a private runner on a separate machine for our administrative configuration (puppet, etc.) to keep a barrier between it and the regular CI our researchers are provided with (or, as of the time of this writing, will be provided with soonish). I haven’t had any problems with our docker runners yet (see the registration sketch after this list).
  2. Enable Continuous Integration for your project in the Gitlab web interface.
  3. Add a .gitlab-ci.yml file to the root of your repository to give instructions to the CI (a minimal example follows after this list).
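
For step 1, registering a private Docker runner looks roughly like this; the URL, token, image and description are placeholders, and the exact binary name and flags depend on your runner version:

gitlab-ci-multi-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/ci" \
  --registration-token "REPLACE_ME" \
  --executor "docker" \
  --docker-image "buildpack-deps:trusty" \
  --description "private admin runner"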
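
For step 3, a minimal .gitlab-ci.yml could start out as small as this sketch; the full version used here appears further down:

puppet:
  stage: build
  script:
    - tests/puppet-validate.sh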

Test setup

I’ve improved the test setup quite a bit before writing this and aim to improve it further. I’ve also considered making the tests completely public on my github account, parameterizing some scripts, handling configuration-specific data in gitlab-ci.yml and using the github repository as a git submodule.

before_script

In the before_script section, which is run in every instance immediately before a job, I set some environment variables and run apt's update procedure once to ensure only the latest versions of packages are installed when packages are requested.

before_script:
  - export DEBIAN_FRONTEND=noninteractive
  - export NOKOGIRI_USE_SYSTEM_LIBRARIES=true
  - apt-get -qq update
  • DEBIAN_FRONTEND is set to suppress configuration prompts and just tell dpkg to use safe defaults.
  • NOKOGIRI_USE_SYSTEM_LIBRARIES greatly reduces build time for ruby's native extensions by linking against libraries already on the system instead of building its own.

Optimizations

  • Whenever apt-get install is called, I supply -qq and -o=Dpkg::Use-Pty=0 to reduce the amount of text output generated.
  • Whenever gem install is called, I supply --no-rdoc and --no-ri to improve installation speed.

Puppet tests

All tests which I consider to belong to puppet itself run in the build stage. As is usual with Gitlab CI, the tests in the next stage only run if all tests in this stage pass. Given that it doesn’t make a lot of sense to sanity-check application configurations which puppet won’t even be able to apply, I’ve moved those checks into a later stage.

I employ two of the three default stages of gitlab-ci: build and test. I haven’t yet had the time to set up automatic deployment after all tests pass using the deploy stage.

puppet:
  stage: build
  script:
    - apt-get -qq -o=Dpkg::Use-Pty=0 install puppet ruby-dev
    - gem install --no-rdoc --no-ri rails-erb-lint puppet-lint
    - make libraries
    - make links
    - tests/puppet-validate.sh
    - tests/puppet-lint.sh
    - tests/erb-syntax.sh
    - tests/puppet-missing-files.py
    - tests/puppet-apply-noop.sh
    - tests/documentation.sh

While puppet-lint exists as a .deb file, this installs it as a gem so that our Ubuntu docker containers run the latest puppet-lint.

I use a Makefile to install the dependencies of our puppet code quickly, and to create symlinks that simplify the test process instead of copying files around the test VM.

libraries:
  @echo "Info: Installing required puppet modules from forge.puppetlabs.com."
  puppet module install puppetlabs/stdlib
  puppet module install puppetlabs/ntp
  puppet module install puppetlabs/apt --version 1.8.0
  puppet module install puppetlabs/vcsrepo

links:
  @echo "Info: Symlinking provided modules for CI."
  ln -s `pwd`/modules/core /etc/puppet/modules/core
  ln -s `pwd`/modules/automation /etc/puppet/modules/automation
  ln -s `pwd`/modules/packages /etc/puppet/modules/packages
  ln -s `pwd`/modules/services /etc/puppet/modules/services
  ln -s `pwd`/modules/users /etc/puppet/modules/users
  ln -s `pwd`/hiera.yaml /etc/puppet/hiera.yaml

As you can see, I haven’t had the chance to migrate to puppetlabs/apt 2.x yet.

puppet-validate

I run puppet parser validate on every .pp file I come across to make sure it is parseable. It is my first line of defense, given that files which can’t even make it past the parser are certainly not going to do what I want in production.

#!/bin/bash
set -euo pipefail

# -print0/-0 keeps filenames containing spaces intact
find . -type f -name "*.pp" -print0 | xargs -0 puppet parser validate --debug

puppet-lint

While puppet-lint is by no means perfect, I make it a habit to enable linters for most languages I work with so that others have an easier time reading my code should the need arise. I’m not above asking for help in a difficult situation, and having readable code available means getting help for your problems will be much easier.

#!/bin/bash
set -euo pipefail

# allow lines longer then 80 characters
# code should be clean of warnings

puppet-lint . \
  --no-80chars-check \
  --fail-on-warnings

As you can see, I consider everything apart from the 80-characters-per-line check to be a deadly sin. Well, I’m exaggerating, but as I said, I like to keep things clean when working.

erb-syntax

ERB is a Ruby templating language used by puppet. I have only ventured into using templates two or three times, but that was enough to make me wish for extra checking there too. I initially wanted to use rails-erb-check, but after much cursing, rails-erb-lint turned out to be easier to use. Helpfully, it just scans the whole directory recursively.

#!/bin/bash
set -euo pipefail

rails-erb-lint check

puppet-missing-files

While I’ve used puppet-lint locally before, it caught fewer errors than I would’ve liked because it does not check whether files referenced as file sources or templates actually exist. I was negatively surprised to realize that puppet parser validate doesn’t do that either, so I slapped together my own checker in Python.

Basically the script first builds a set of all .pp files and then greps them for lines containing either puppet: or template(, the telltale signs of file sources and templates respectively. Each entry of the resulting set is then verified by checking for its existence as either a path or a symlink.

#!/usr/bin/env python2
"""Test puppet sourced files and templates for existence."""

import os.path
import subprocess
import sys


def main():
    """The main flow."""

    manifests = get_manifests()
    paths = get_paths(manifests)
    check_paths(paths)


def check_paths(paths):
    """Check the set of paths for existence (or symlinked existence)."""

    for path in paths:
        if not os.path.exists(path) and not os.path.islink(path):
            sys.exit("{} does not exist.".format(path))


def get_manifests():
    """Find all .pp files in the current working directory and subfolders."""

    try:
        manifests = subprocess.check_output(["find", ".", "-type", "f",
                                             "-name", "*.pp"])
        manifests = manifests.strip().splitlines()
        return manifests
    except subprocess.CalledProcessError as error:
        # sys.exit() takes a single argument; a string message is
        # printed to stderr and the exit status is set to 1
        sys.exit("Failed to list manifests: {}".format(error))


def get_paths(manifests):
    """Extract and construct paths to check."""

    paths = set()

    for line in manifests:
        try:
            results = subprocess.check_output(["grep", "puppet:", line])
            hits = results.splitlines()

            for hit in hits:
                working_copy = hit.strip()
                working_copy = working_copy.split("'")[1]
                working_copy = working_copy.replace("puppet://", ".")

                segments = working_copy.split("/", 3)
                segments.insert(3, "files")

                path = "/".join(segments)
                paths.add(path)

        # we don't care if grep does not find any matches in a file
        except subprocess.CalledProcessError:
            pass

        try:
            results = subprocess.check_output(["grep", "template(", line])
            hits = results.splitlines()

            for hit in hits:
                working_copy = hit.strip()
                working_copy = working_copy.split("'")[1]

                segments = working_copy.split("/", 1)
                segments.insert(0, ".")
                segments.insert(1, "modules")
                segments.insert(3, "templates")

                path = "/".join(segments)
                paths.add(path)

        # we don't care if grep does not find any matches in a file
        except subprocess.CalledProcessError:
            pass

    return paths

if __name__ == "__main__":
    main()

puppet-apply-noop

To cover the most common kind of test in the puppet world, I wanted to run every .pp file in a module’s tests directory through puppet apply --noop, which is a kind of dry run that outputs information about what would be done in case of a real run. Unfortunately, this information is highly misleading.

#!/bin/bash
set -euo pipefail

content=(core automation packages services users)

for item in "${content[@]}"
do
  printf "Info: Running tests for module %s.\n" "$item"
  find modules -type f -path "modules/$item/tests/*.pp" -execdir puppet apply --modulepath=/etc/puppet/modules --noop {} \;
done
done

When run in this mode, puppet does not seem to perform any sanity checks at all. For example, it can be instructed to install a package with an arbitrary name regardless of the package’s existence in the specified (or default) package manager.
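
To illustrate, here is a throwaway manifest referencing a package that doesn’t exist anywhere (the name is made up); it passes the dry run without complaint:

# a manifest with a nonexistent package sails through a dry run
echo "package { 'completely-made-up-package': ensure => installed }" > /tmp/demo.pp
puppet apply --noop /tmp/demo.pp   # reports a pending "change" and exits 0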

Upon deciding this mode was not providing any value to my testing process, I took a stab at implementing “real” tests by running puppet apply without --noop. The value added by this procedure is mediocre at best, given that puppet returns 0 even if it fails to apply all given instructions. Your CI will not realize that there have been puppet failures at all and will happily report your build as passing.
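
This is easy to demonstrate with the same made-up package name as above:

echo "package { 'completely-made-up-package': ensure => installed }" > /tmp/demo.pp
puppet apply /tmp/demo.pp   # the package resource fails to apply...
echo $?                     # ...yet this prints 0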

puppet provides the --detailed-exitcodes flag for checking failure to apply changes. Let me quote the manual for you:

Provide transaction information via exit codes. If this is enabled, an exit code of '2' means there were changes, an exit code of '4' means there were failures during the transaction, and an exit code of '6' means there were both changes and failures.

I’m sure I don’t need to point out that this mode is not suitable for testing either given that there will always be changes in a testing VM.

Now, one could solve this by writing a small wrapper around the puppet apply --detailed-exitcodes call which checks for 4 and 6 and fails accordingly; a sketch follows below. I was tempted to do that. I might still do it in the future. The reason I haven’t implemented it yet is that actually applying the changes slowed things down to a crawl: the installation and configuration of a gitlab instance added more than 90 seconds to each build.
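
A minimal sketch of such a wrapper, assuming the manifest to apply is passed as an argument:

#!/bin/bash
# hypothetical wrapper around `puppet apply --detailed-exitcodes`:
# 2 (changes) is fine, 4 (failures) and 6 (changes + failures) are not
set -uo pipefail
# note: no `set -e` here, since an exit code of 2 is a success case

puppet apply --detailed-exitcodes "$@"
status=$?

case $status in
  0|2) exit 0 ;;   # clean run, or changes applied successfully
  *)   exit 1 ;;   # 1 = run error, 4/6 = failed resources
esac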

A shortened sample of what is done in the gitlab build:

  • add gitlab repository
  • make sure apt-transport-https is installed
  • install gitlab
  • overwrite gitlab.rb
  • provide TLS certificate
  • start gitlab

Should I ever decide to implement tests which really apply their changes, the infrastructure needed to run those checks for everything we do with puppet in a timely manner would drastically increase.

documentation

I am adamant when it comes to documenting software since I don’t want to imagine working without docs, ever.

In my Readme.markdown each H3 header is equivalent to one puppet class.
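
For illustration, the convention looks like this (class names are hypothetical):

### core::ssh

Configures the OpenSSH daemon.

### services::gitlab

Installs and configures our GitLab instance.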

This test checks whether the amount of documentation in my preferred style matches the number of puppet manifest files (.pp). If the Readme.markdown does not contain exactly as many ### headers as there are puppet manifest files, that counts as a build failure, since someone obviously forgot to update the documentation.

#!/bin/bash
set -euo pipefail

count_headers=$(grep -e "^### " Readme.markdown | wc -l | awk '{print $1}')
count_manifests=$(find . -type f -name "*.pp" | grep -v "tests" | wc -l | awk '{print $1}')

if test "$count_manifests" -eq "$count_headers"; then
  printf "Documentation matches number of manifests.\n"
  exit 0
else
  printf "Documentation does not match number of manifests.\n"
  printf "There might be missing manifests or missing documentation entries.\n"
  printf "Manifests: %s, h3 documentation sections: %s\n" "$count_manifests" "$count_headers"
  exit 1
fi

Application tests

As previously mentioned, I use the test stage for testing configurations of other applications. Currently I only test postfix's /etc/aliases file as well as our /etc/postfix/forwards, which is an extension of the former.

applications:
  stage: test
  script:
    - apt-get -qq -o=Dpkg::Use-Pty=0 install postfix
    - tests/postfix-aliases.py

Future: There are plans for handling both shorewall and isc-dhcp-server configurations with puppet. Both of those would profit from having automated tests available.

Future: The different software setups will probably be split into separate jobs to allow them to run concurrently as soon as the CI solution is ready for general use by our researchers; see the sketch below.
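
A hypothetical split into independent jobs might look like this (job names are illustrative; tests/dhcpd.sh does not exist yet at this point):

postfix:
  stage: test
  script:
    - apt-get -qq -o=Dpkg::Use-Pty=0 install postfix
    - tests/postfix-aliases.py

dhcp:
  stage: test
  script:
    - apt-get -qq -o=Dpkg::Use-Pty=0 install isc-dhcp-server
    - tests/dhcpd.sh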

postfix-aliases

In order to test the aliases, an extremely minimalistic configuration for postfix is written, the aliases and forwards files are copied into place, and newaliases is run against them. If there is any output whatsoever, I assume that the test failed.

Future: I plan to automatically apply both a minimal configuration and a full configuration in order to test both the main server and relay configurations for postfix.

#!/usr/bin/env python2
"""Test postfix aliases and forwards syntax."""

import subprocess
import sys


def main():
    """The main flow."""
    write_configuration()
    copy_aliases()
    copy_forwards()
    run_newaliases()


def write_configuration():
    """Write /etc/postfix/main.cf file."""

    configuration_stub = ("alias_maps = hash:/etc/aliases, "
                          "hash:/etc/postfix/forwards\n"

                          "alias_database = hash:/etc/aliases, "
                          "hash:/etc/postfix/forwards")

    with open("/etc/postfix/main.cf", "w") as configuration:
        configuration.write(configuration_stub)


def copy_aliases():
    """Find and copy aliases file."""

    aliases = subprocess.check_output(["find", ".", "-type", "f", "-name",
                                       "aliases"])
    subprocess.call(["cp", aliases.strip(), "/etc/"])


def copy_forwards():
    """Find and copy forwards file."""

    forwards = subprocess.check_output(["find", ".", "-type", "f", "-name",
                                        "forwards"])
    subprocess.call(["cp", forwards.strip(), "/etc/postfix/"])


def run_newaliases():
    """Run newaliases and report errors."""

    result = subprocess.check_output(["newaliases"], stderr=subprocess.STDOUT)
    if result != "":
        print result
        sys.exit(1)

if __name__ == "__main__":
    main()

Conclusion

While I’ve run into plenty of frustrating moments, building a CI for puppet was quite fun and I’m constantly thinking about how to improve it further. One way would be to create “real” test instances for configurations, like “spin up one gitlab server with all its required classes”.

The main drawback of our current setup is two-fold:

  1. I haven’t enabled more than one concurrent instance of our private runner.
  2. I haven’t considered the performance impact of moving to whole instance testing in other stages and parallelizing those tests.

I look forward to implementing deployment on passing tests instead of my current method of automatically deploying every change in master.


Update (2016-08-28)

Prebuilt Docker image

In order to reduce the run time of each build, I eventually decided to prebuild our Docker image. I’ve also enabled automatic builds for the repository by linking our Docker Hub account with our GitHub organisation, which was easy and quick to set up. Mind you, similar setups using GitLab have become possible in the meantime. This is a solution with relatively little maintenance, as long as you don’t forget to add the base image you use as a dependency in the Docker Hub settings.

FROM buildpack-deps:trusty
MAINTAINER Alexander Skiba <alexander.skiba@icg.tugraz.at>

ENV DEBIAN_FRONTEND noninteractive

RUN apt-get update && apt-get install -y \
    isc-dhcp-server \
    postfix \
    puppet \
    ruby-dev \
    rsync \
    shorewall \
 && apt-get clean \
 && rm -rf /var/lib/apt/lists/*

RUN gem install --no-rdoc --no-ri \
    puppet-lint \
    rails-erb-lint

Continuous Deployment

When using CD, you want two runners for your puppetmaster: a Docker runner in which you test your changes, and a shell runner on the puppetmaster itself with which you deploy them.
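
In the runner's config.toml, that combination might look roughly like this; names, URL, token and image are placeholders:

concurrent = 2

[[runners]]
  name = "puppet-tests"
  url = "https://gitlab.example.com/ci"
  token = "REPLACE_ME"
  executor = "docker"
  [runners.docker]
    image = "ourorg/puppet-ci:latest"

[[runners]]
  name = "puppetmaster-deploy"
  url = "https://gitlab.example.com/ci"
  token = "REPLACE_ME"
  executor = "shell"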

This is what our .gitlab-ci.yml looks like:

puppet:
  stage: build
  script:
    - tests/puppet-validate.sh
    - tests/puppet-lint.sh
    - tests/erb-syntax.sh
    - tests/puppet-missing-files.py
    - tests/documentation.sh
    - tests/postfix-aliases.py
    - tests/dhcpd.sh
    - tests/dhcpd-todos.sh
    - tests/shorewall.sh
  tags:
    - puppet
    - ubuntu trusty

puppetmaster:
  stage: deploy
  only:
    - master
  script:
    - make modules
    - sudo service apache2 stop
    - rsync --omit-dir-times --recursive --group --owner --links --perms --human-readable --sparse --force --delete --stats . /etc/puppet
    - sudo service apache2 start
  tags:
    - puppetmaster
  environment: production

As you can see, there are no installation steps in the puppet job anymore; those have moved into the Dockerfile. The deployment consists of installing all the modules and rsyncing the files into the Puppet directory, where they are used as soon as the server is started again. I chose to stop and start the server as a precaution, since I prefer a “Puppet is down” message to incomplete Puppet runs in case a client would otherwise pick up an inconsistent state.

We also use the environment instruction to easily display the currently deployed commit in the GitLab interface.

In order for this to work easily and transparently, I’ve updated the Makefile.

INSTALL := puppet module install
TARGET_DIR := `pwd`/modules
TARGET := --target-dir $(TARGET_DIR)

modules:
  @echo "Info: Installing required puppet modules from forge.puppetlabs.com."
  mkdir -p $(TARGET_DIR)
  # Puppetlabs modules
  $(INSTALL) puppetlabs-stdlib $(TARGET)
  $(INSTALL) puppetlabs-apt $(TARGET)
  $(INSTALL) puppetlabs-ntp $(TARGET)
  $(INSTALL) puppetlabs-rabbitmq $(TARGET)
  $(INSTALL) puppetlabs-vcsrepo $(TARGET)
  $(INSTALL) puppetlabs-postgresql $(TARGET)
  $(INSTALL) puppetlabs-apache $(TARGET)

  # Community modules
  $(INSTALL) REDACTED $(TARGET)


test:
  @echo "Installing manually via puppet from site.pp."
  puppet apply --verbose /etc/puppet/manifests/site.pp

.PHONY: modules test

A major difference to before is that puppet modules are now installed into the current path and only moved into the Puppet directory by the rsync step after testing, instead of being installed there directly. This also serves to prevent deploying an inconsistent state into production in case something couldn’t be fully downloaded. Furthermore, it ensures we are always using the most recent released version of each module and thus get bugfixes.


Notes

  • Build stages run one after another; however, they do not use the same instance of the docker container, and are therefore not suited for installing prerequisites in one stage and running tests in another. Read: if you need an additional package in every stage, you need to install it during every stage (see the sketch after this list).
  • If you are curious what the set -euo pipefail commands on top of all my shell scripts do, refer to Aaron Maxwell’s Use the Unofficial Bash Strict Mode.
  • Our runners originally used buildpack-deps:trusty as their image; we use a custom image now (see the update above).
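
To illustrate the first note, a sketch with made-up job and script names, where both stages need curl:

build_job:
  stage: build
  script:
    - apt-get -qq update && apt-get -qq install curl   # fresh container
    - tests/build-checks.sh

test_job:
  stage: test
  script:
    - apt-get -qq update && apt-get -qq install curl   # fresh container, install again
    - tests/application-checks.sh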