Reading recommendations (2017-07-26)

Posted on Wed 26 July 2017 in reading recommendations

Pieter Hintjens has Ten Steps to Better Public Speaking for you. Amongst them is to avoid using slides since they send the audience into passive 'consumer only' mode. I'm definitely guilty of doing that as a listener.

Here is an interview with Craig Schaefer, author of one of my favorite book series, the Faust books: Cover Reveal and Mini-Q&A with Craig Schaefer (by Mihir Wanchoo). I'm very sad that he still hasn't resumed selling his books on Apple's iBooks, my preferred source.

Jason Schreier writes in Final Fantasy XII: The Zodiac Age: The Kotaku Review that one of the most interesting games in the Final Fantasy series is as good as I remember it and might be even better in its new version. I'm very pleased to hear that even though I don't currently own a PS4. They even removed that one quirk where you mustn't open a specific chest for the whole game in order to get one of the best weapons.

Matt Gemmell's Regulars is about people-watching. It's about the what-ifs. It's what happens when observation and imagination meet and have a great time in a coffee shop. (pun intended)

With every new framework release comes the fresh chance of masking your lack of fundamental JavaScript knowledge. @iamdevloper

I also happened to read the comic books I got with The Witcher 2 and Alan Wake, but sadly those didn't click with me.


Sidenotes.


Example of a Sensu Puppet class

Posted on Thu 13 July 2017 in work • Tagged with Sensu

On the sensu-users mailing list, someone asked how they could deploy Sensu plugins with Puppet. After I gave a short snippet, I was asked for further help: should the snippet be implemented as a class, and what would I recommend? Therefore I present you: a slightly redacted example of a Sensu Puppet class taken from production.

I will attempt to walk you through the sections I used and at the end there will be a big code block with the complete class, for easier copy & paste.

Please note that this class was written with Sensu 0.26 and sensu-puppet 2.2.0 in mind. It may not include all the latest features you expect and does not use features that are only available in newer versions of Sensu or sensu-puppet.

Detailed explanation

docs

# Class: services::sensu
# Manages configuration, checks, handlers and certs for
# the sensu monitoring system
#
# parameters:
# (bool) is_main_server: makes this server the main host on which sensu is run
# (bool) consistent_connection: if set to `false`, enables high-value timeouts
#        for sensu keepalive checks
# (array) subscriptions: the check groups a host should subscribe to

You will always want some form of documentation. Leaving a little bit in the code is considered good practice and puppet-lint will (rightfully) complain if you don't. I make sure to also leave hints about class parameters and their types since I don't use them a lot in this project.

default parameters

class services::sensu($is_main_server = false,
                      $consistent_connection = true,
                      $subscriptions = [])
{

As you might have seen, I use is_main_server to denote the sensu-server instance, so it defaults to false. consistent_connection will be manually set to false for desktop or laptop machines that are turned off regularly and is true by default. In a later version of Sensu and sensu-puppet this can be solved more easily with deregistration. The subscriptions array will be filled with strings that enable subscriptions and checks which are not automatically detected; it is empty by default.
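
As a hedged usage sketch (the hostname is made up), a laptop that is switched off regularly could be declared like this in site.pp:

# site.pp
node 'laptop.example.com'
{
  class{ 'services::sensu': consistent_connection => false }
}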

manual configuration

# configuration
$rabbitmq_password = 'REDACTED'
$gitlab_health_token = 'REDACTED'
$gitlab_issues_token = 'REDACTED'
$assignments_health_token = 'REDACTED'
$sensu_monitoring_password = 'REDACTED'

# installed sensu plugins
$plugins = ['sensu-plugins-cpu-checks',
            'sensu-plugins-disk-checks',
            'sensu-plugins-environmental-checks',
            'sensu-plugins-filesystem-checks',
            'sensu-plugins-http',
            'sensu-plugins-load-checks',
            'sensu-plugins-memory-checks',
            'sensu-plugins-network-checks',
            'sensu-plugins-nvidia',
            'sensu-plugins-ntp',
            'sensu-plugins-postfix',
            'sensu-plugins-process-checks',
            'sensu-plugins-puppet',
            'sensu-plugins-raid-checks',
            'sensu-plugins-uptime-checks']

# kibana URL - allows clicking to jump to filtered log results
$kibana_url = "https://REDACTED/#/discover?_g=()&_a=(columns:!(_source),interval:auto,query:(query_string:(analyze_wildcard:!t,query:'host:${::hostname}')),sort:!('@timestamp',desc),index:%5Blogstash-%5DYYYY.MM.DD)#"
# grafana URL - allows clicking to jump to filtered metrics
$grafana_url = "https://REDACTED/dashboard/db/single-host-overview?var-hostname=${::hostname}"
# runbook prefix - allows linking directly to a proposed solution
$runbook_prefix = 'https://REDACTED/administrators/documentation/blob/master/runbooks/sensu'

# how many times should keepalive fire before notifications
$keepalive_occurrences = '1'
# how much time needs to pass until keepalive notification is repeated (in seconds)
$keepalive_refresh = '3600'
# impact text for keepalive
$keepalive_impact = 'Host is not checking in with monitoring and may be completely unavailable.'
# suggestion text for keepalive
$keepalive_suggestion = 'Check if the host is frozen, stuck, down or offline.'

This is the section where details specific to our deployment reside. There is one block that holds tokens and passwords, some of which are used by Puppet during the deployment (rabbitmq_password) and others by Sensu during standard operation (e.g. gitlab_health_token when monitoring GitLab's health API).

  • plugins: lists Sensu plugins that should be installed on all machines.
  • kibana_url, grafana_url: We have systems in place to collect log files and metrics from the systems we monitor. These are convenient links, displayed in Uchiwa and in notifications (e-mail, Mattermost), that lead directly to the data for the host in question.
  • runbook_prefix: I wrote runbooks for most checks so that my colleagues can resolve issues while I'm on vacation. This is prepended in checks, so that one only needs to concatenate the prefix with the filename of the runbook in question to get a full URL.

The next block describes Sensu's keepalive events - you get these when Sensu has lost contact with a client (meaning your client hasn't checked in with the Sensu server for some time). The keepalive_occurrences and keepalive_refresh attributes are used for filtering of notifications.

keepalive_impact and keepalive_suggestion are part of a concept I use throughout our Sensu deployment: every check that can trigger a notification needs to carry information on what the real-world impact of a failure is and what the quickest and most common solution to the problem could be.
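
To make this concrete, here is a hedged sketch of how a regular check could carry these attributes via sensu-puppet's custom parameter. The check name, command, thresholds and texts are invented for illustration, and the snippet assumes it is declared where $runbook_prefix is in scope.

sensu::check
{
  'check_disk_usage':
  command     => 'check-disk-usage.rb -w 85 -c 95',
  handlers    => ['default', 'mail', 'mattermost'],
  subscribers => ['client_specific'],
  interval    => 300,
  occurrences => 3,
  refresh     => 3600,
  custom      =>
  {
    runbook    => "${runbook_prefix}/disk-usage.markdown",
    impact     => 'The host may run out of disk space and services could stop working.',
    suggestion => 'Delete or archive large files, or extend the volume.',
  },
}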

automatic subscriptions

# automatic subscriptions computed from machine properties
if (str2bool($::is_virtual) == true)
{
  $machine_type = ['virtual']
}
else
{
  $machine_type = ['physical']
}

if (str2bool($::has_nvidia_graphics_card) == true and str2bool($::using_nouveau_driver) == false)
{
  $gpu = ['nvidia']
}
else
{
  $gpu = []
}

if (($::operatingsystem == 'Ubuntu' and versioncmp($::operatingsystemrelease, '16.04') >= 0) or
    ($::operatingsystem == 'Debian' and versioncmp($::operatingsystemrelease, '8.0') >= 0))
{
  $systemd_enabled = ['systemd']
}
else
{
  $systemd_enabled = []
}

$automatic_subscriptions = concat($machine_type, $gpu, $systemd_enabled, ['client_specific'])

After a while, hardcoding checks gets annoying and that's why I try to automatically detect some things based on hardware or operating system.

  • ::is_virtual is a default Puppet fact. I'll add checks for S.M.A.R.T. as well as RAID checks and sensors metrics if run on a physical machine. (not included in this example)
  • ::has_nvidia_graphics_card is a fact taken from jaredjennings/puppet-nvidia_graphics. I'll add GPU specific metrics based on that. (not included in this example)
  • I'll also try to decide whether Systemd is managing the host or not. I'll add some specific service checks based on that. (not included in this example)

The automatic subscriptions are then combined with a pseudo-subscription called client_specific, which I use to distribute only the configuration of various client-specific checks to hosts.
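
As a hedged illustration of how such an automatic subscription is consumed (the concrete checks are not part of this article), a RAID check could be limited to physical machines by subscribing it to the automatically added physical group; the check name and interval are invented:

sensu::check
{
  'check_raid':
  command     => 'check-raid.rb',
  handlers    => ['default', 'mail'],
  subscribers => ['physical'],
  interval    => 600,
}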

metrics templates

# template variables (must be in class scope)
$default_scheme = 'sensu.host.$(hostname)'
$metrics_handler = ['graphite_tcp']
$timestamp = '`date +%s`'

For easier use of metric checks that are not written with sensu-plugin (the framework), I have some variables that are reused whenever I hack together a quick check; a small sketch follows the list below.

  • default_scheme is prepended to a metric, resulting in something like sensu.host.myawesomehostname.cpu.usage
  • metrics_handler is an easier way of specifying the handler should we ever need to change it (or extend it).
  • timestamp is a simple way to get a UNIX timestamp.
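
Here is a hedged sketch of how these variables could be combined into a quick, hand-rolled metric check. The check name, command and subscription are invented, and the snippet assumes it lives where these class variables are in scope:

sensu::check
{
  'metrics_logged_in_users':
  type        => 'metric',
  handlers    => $metrics_handler,
  command     => "echo ${default_scheme}.users.logged_in `who | wc -l` ${timestamp}",
  interval    => 60,
  subscribers => ['client_specific'],
}

After interpolation the command becomes echo sensu.host.$(hostname).users.logged_in `who | wc -l` `date +%s`, which produces the usual "name value timestamp" line that a Graphite handler expects.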

sensu-server: packages and subscriptions

# SENSU SERVER
if ($is_main_server == true)
{
  $combined_subscriptions = unique(concat(['proxy'], $subscriptions, $automatic_subscriptions))

  $server_packages = ['redis-server', 'curl', 'jq']

  $server_plugins = [ 'sensu-plugins-imap',
                      'sensu-plugins-slack',
                      'sensu-plugins-ssl',
                      'sensu-extensions-occurrences']

  # install server-only packages
  package
  {
    $server_packages:
    ensure => present,
  }

  # install plugins for proxy group

  package
  {
    $server_plugins:
    ensure   => present,
    provider => 'sensu_gem',
    require  => Package[$server_packages],
  }

The Sensu server is the machine that handles proxy checks for me. That means a check that, for example, tests whether a site on another machine is reachable via HTTP is a proxy check and, in my deployment, runs on the Sensu server. To achieve this, a proxy subscription is added to the server's subscriptions.
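
As a hedged sketch (host names, URL and interval are invented), such a proxy check is published to the proxy subscription so that only the server runs it, and uses the source attribute so the result is filed under the monitored host instead of the Sensu server:

sensu::check
{
  'check_website_example':
  command     => 'check-http.rb -u https://REDACTED/',
  handlers    => ['default', 'mail'],
  subscribers => ['proxy'],
  source      => 'webhost.example.com',
  interval    => 60,
}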

Next, the server_packages are installed via the default package manager (apt in my case), and the server_plugins, Sensu-specific Ruby gems, are installed via the sensu_gem provider that comes with sensu-puppet.

sensu-server: workaround

# Workaround for sensu-api not subscribing to check updates.
Class['::sensu::client::service'] ~> Class['::sensu::api::service']

Sometimes the results for some queries in Uchiwa were not the most recent ones, and this snippet seems to have solved that.

sensu-server: configuration

This is the part where the sensu-puppet module is configured by my class.

class
{
  '::sensu':
  rabbitmq_password           => $rabbitmq_password,
  server                      => true,
  client                      => true,
  api                         => true,
  api_bind                    => '127.0.0.1',
  use_embedded_ruby           => true,
  rabbitmq_reconnect_on_error => true,
  redis_reconnect_on_error    => true,
  redis_auto_reconnect        => true,
  subscriptions               => $combined_subscriptions,
  rabbitmq_host               => '127.0.0.1',
  redis_host                  => '127.0.0.1',
  redact                      => ['password', 'pass', 'api_key','token'],
  purge                       => true,
  safe_mode                   => true,

  require                     => Package[$server_packages],

  client_custom               =>
  {
    kibana_url       => $kibana_url,
    grafana_url      => $grafana_url,
    type             => $::virtual,
    operating_system => $::lsbdistdescription,
    kernel           => $::kernelrelease,
    puppet_version   => $::puppetversion,

    gitlab_health    =>
    {
      token => $gitlab_health_token,
    },
    ldap_sensu       =>
    {
      password => $sensu_monitoring_password,
    },
    gitlab_issues    =>
    {
      token => $gitlab_issues_token,
    },
    assignments_health =>
    {
      token => $assignments_health_token,
    }
  }
}

You can read about most parameters in the docs. Here are some general hints:

  • api_bind: I bind the API to localhost only, so everything needs to be proxied (e.g. with Apache or Nginx).
  • rabbitmq_reconnect_on_error, redis_reconnect_on_error, redis_auto_reconnect: I want my deployment to be potentially self-healing.
  • redact: I have some additional keywords here that will be redacted in the API output. Please check out the Sensu docs on redaction, it's a great feature.
  • purge: I enable this since I control all changes centrally. [cue Mass Effect 2 taking direct control soundbite]
  • safe_mode: Though it is more work, you probably do not want your hosts to run arbitrary commands.

The client_custom section is where additional attributes are defined. I've already talked about kibana_url and grafana_url. I find that the operating system, the kernel version, the puppet_version and whether the host is virtual or physical are helpful pieces of information to display on a host's dashboard page, so I include them.

The tokens and passwords are written to files on the host, and can then easily be referenced in Sensu commands using e.g. :::gitlab_health.token:::.
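
A hedged sketch of how such a token might be consumed (the actual checks are not shown here; the URL, check name and interval are placeholders). Since the gitlab_health attribute is part of the server's client_custom, a check running on the server can reference it directly:

sensu::check
{
  'check_gitlab_health':
  command     => 'check-http.rb -u "https://REDACTED/health_check.json?token=:::gitlab_health.token:::"',
  handlers    => ['default', 'mail'],
  subscribers => ['proxy'],
  interval    => 120,
}

At execution time Sensu replaces :::gitlab_health.token::: with the value from the client definition.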

sensu-server: uchiwa

class
{
  '::uchiwa':
  install_repo => false,
  host         => '127.0.0.1',
  require      => Class['::sensu'],
}

I run Uchiwa, the dashboard for Sensu, on the same machine and have it proxied. Note that this requires the yelp/uchiwa Puppet module.

sensu-server: includes

  # sensu server specific checks
  include services::sensu::core

  # include all checks here, so that the master has all in order to run
  # with safe_mode => true

  # subscription: proxy
  include services::sensu::imap
  include services::sensu::certificates
  include services::sensu::client_specific
  include services::sensu::api_health
  include services::sensu::availability
  include services::sensu::remote_metrics

  # automatic subscriptions
  include services::sensu::nvidia
  include services::sensu::physical
  include services::sensu::systemd
  include services::sensu::virtual

  # last part is subscription name
  include services::sensu::elasticsearch
  include services::sensu::fail2ban
  include services::sensu::kibana
  include services::sensu::ldap
  include services::sensu::mailman
  include services::sensu::logstash
  include services::sensu::seafile
  include services::sensu::seahub

  # include handler definitions
  include services::sensu::handlers
}

Since I'm using safe_mode, the Sensu server needs to have every single check that should be run. I include them here, manually.

Structuring your checks into neatly partitioned and readable files is a daunting task. I've tried to do it the following way: there is one file that holds checks common to all hosts (core). Proxy subscription checks are grouped into the first block, automatic subscriptions into the second, and files that are automatically included based on the content of the subscriptions array the class receives into the third. Handler definitions also get their own file (handlers) since they get unwieldy even with only a few handlers.
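
To make the layout concrete, here is a hedged sketch of what one of the per-subscription files could contain; the article does not show their contents, so the check command and options are invented:

## fail2ban.pp
# Class: services::sensu::fail2ban
# checks for hosts subscribed to 'fail2ban'

class services::sensu::fail2ban
{
  sensu::check
  {
    'check_fail2ban_process':
    command     => 'check-process.rb -p fail2ban-server',
    handlers    => ['default', 'mail'],
    subscribers => ['fail2ban'],
    interval    => 300,
  }
}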

sensu-client: subscriptions

# SENSU CLIENT
else
{
  # default client configuration
  $combined_subscriptions = unique(concat($subscriptions, $automatic_subscriptions))

  # default include checks and metrics
  include services::sensu::core
  include services::sensu::client_specific

  # automatically include checks for subscriptions
  services::sensu::combined_subscriptions{$combined_subscriptions:}

Similar to the server, the client gets a combination of the (manual) subscriptions and the automatic_subscriptions. Then the core checks and metrics are included, as well as any client_specific ones. Puppet classes for checks are then included automatically based on combined_subscriptions. For your convenience, here is that Puppet hack:

## combined_subscriptions.pp
# Define: services::sensu::combined_subscriptions
# use a define to dynamically include classes with checks

define services::sensu::combined_subscriptions
{
  include "services::sensu::${name}"
}

sensu-client: keepalive configuration

# if the client is not consistently connected, warn after 2 weeks
# and throw a critical error after 4 weeks
# something will be wrong, outdated or the client can be removed
if ($consistent_connection == false)
{
  $client_keepalive =
  {
    thresholds =>
    {
      warning => 1209600,
      critical => 2419200,
    },
    handlers    => ['default', 'mail', 'mattermost'],
    runbook     => "${runbook_prefix}/keepalive.markdown",
    occurrences => $keepalive_occurrences,
    refresh     => $keepalive_refresh,
    impact      => $keepalive_impact,
    suggestion  => $keepalive_suggestion,
  }
}
else
{
  $client_keepalive =
  {
    handlers    => ['default', 'mail', 'mattermost'],
    runbook     => "${runbook_prefix}/keepalive.markdown",
    occurrences => $keepalive_occurrences,
    refresh     => $keepalive_refresh,
    impact      => $keepalive_impact,
    suggestion  => $keepalive_suggestion,
  }
}

The configuration for keepalive events is part of the client attributes, not a separate check. If I set consistent_connection to false, it will take some weeks until I am notified of a "missing" device. Filters are configured via occurrences and refresh. The Sensu developers wrote a helpful blog post on that. Again, if you have a new enough version of Sensu, you should not need this.

As you can see, the "check" also has a runbook, an impact description and an operator suggestion defined to make manual intervention very easy.

sensu-client: configuration

This is the part where the sensu-puppet module is configured by my class.

  class
  {
    '::sensu':
    rabbitmq_password           => $rabbitmq_password,
    rabbitmq_host               => 'REDACTED',
    rabbitmq_port               => '5671',
    server                      => false,
    api                         => false,
    client                      => true,
    client_keepalive            => $client_keepalive,
    subscriptions               => $combined_subscriptions,
    rabbitmq_ssl                => true,
    rabbitmq_ssl_private_key    => 'puppet:///modules/services/sensu/client-key.pem',
    rabbitmq_ssl_cert_chain     => 'puppet:///modules/services/sensu/client-cert.pem',
    use_embedded_ruby           => true,
    rabbitmq_reconnect_on_error => true,
    purge                       => true,
    safe_mode                   => true,

    require                     => Package['ruby-json'],

    client_custom               =>
    {
      kibana_url       => $kibana_url,
      grafana_url      => $grafana_url,
      type             => $::virtual,
      operating_system => $::lsbdistdescription,
      kernel           => $::kernelrelease,
      puppet_version   => $::puppetversion,
    },
  }
}

There is nothing especially fancy here except client_keepalive, which gets filled with the values from the previous section. Everything else should either be taken from the docs or was already explained earlier.

Of note: rabbitmq_ssl_private_key and rabbitmq_ssl_cert_chain are the same for every host. This is an (unfortunate) implementation detail which allows only one certificate to be used for the whole Sensu transport deployment. I think I would have liked to piggyback onto Puppet's certificates if possible, but I am quite aware this is neither good in terms of compartmentalization nor good design.

common

  package
  {
    $plugins:
    ensure   => installed,
    provider => 'sensu_gem',
  }

  file
  {
    '/etc/sudoers.d/sensu':
    ensure  => file,
    owner   => 'root',
    group   => 'root',
    mode    => '0440',
    source  => 'puppet:///modules/services/sensu/sudoers.d',
    require => Package['sudo'],
  }

  # all nodes need development dependencies for native extensions

  $client_packages = ['g++', 'make', 'ruby-json', 'sudo']

  Class['apt::update']
  -> Package[$client_packages]

  package
  {
    $client_packages:
    ensure => present,
  }
}

This section is for both the server and the client part. The list of sensu-plugins is installed via sensu_gem. Some checks I use with Sensu require sudo rights, so I distribute a customized sudoers file directly into /etc/sudoers.d/ which whitelists some commands for Sensu.

Since Ruby gems often try to build native extensions on installation, we require development tools on each host.

As a last little detail, I make sure to only install packages after an apt-get update run. I think I added this since I was often testing my setup in a Docker container via GitLab's CI feature. It is good practice to keep containers as small as possible, so people delete the cached apt sources, which leads to errors when installing packages if apt-get update is not run before apt-get install PACKAGE.

Minimal example

Alright, so now I've written quite a bit about this specific class, but how would one actually use all of this? Let's see a minimal working example.

# site.pp
node 'myhostname.mydomain.com'
{
  include services::sensu
}

If you wanted to add additional (previously implemented) subscriptions, you would use something like this:

# site.pp
node 'example.domain.com'
{
  class{ 'services::sensu': subscriptions => ['fail2ban', 'ldap']}
}

sensu.pp

# Class: services::sensu
# Manages configuration, checks, handlers and certs for
# the sensu monitoring system
#
# parameters:
# (bool) is_main_server: makes this server the main host on which sensu is run
# (bool) consistent_connection: if set to `false`, enables high-value timeouts
#        for sensu keepalive checks
# (array) subscriptions: the check groups a host should subscribe to

class services::sensu($is_main_server = false,
                      $consistent_connection = true,
                      $subscriptions = [])
{
  # configuration
  $rabbitmq_password = 'REDACTED'
  $gitlab_health_token = 'REDACTED'
  $gitlab_issues_token = 'REDACTED'
  $assignments_health_token = 'REDACTED'
  $sensu_monitoring_password = 'REDACTED'

  # installed sensu plugins
  $plugins = ['sensu-plugins-cpu-checks',
              'sensu-plugins-disk-checks',
              'sensu-plugins-environmental-checks',
              'sensu-plugins-filesystem-checks',
              'sensu-plugins-http',
              'sensu-plugins-load-checks',
              'sensu-plugins-memory-checks',
              'sensu-plugins-network-checks',
              'sensu-plugins-nvidia',
              'sensu-plugins-ntp',
              'sensu-plugins-postfix',
              'sensu-plugins-process-checks',
              'sensu-plugins-puppet',
              'sensu-plugins-raid-checks',
              'sensu-plugins-uptime-checks']

  # kibana URL - allows clicking to jump to filtered log results
  $kibana_url = "https://REDACTED/#/discover?_g=()&_a=(columns:!(_source),interval:auto,query:(query_string:(analyze_wildcard:!t,query:'host:${::hostname}')),sort:!('@timestamp',desc),index:%5Blogstash-%5DYYYY.MM.DD)#"
  # grafana URL - allows clicking to jump to filtered metrics
  $grafana_url = "https://REDACTED/dashboard/db/single-host-overview?var-hostname=${::hostname}"
  # runbook prefix - allows linking directly to a proposed solution
  $runbook_prefix = 'https://REDACTED/administrators/documentation/blob/master/runbooks/sensu'

  # how many times should keepalive fire before notifications
  $keepalive_occurrences = '1'
  # how much time needs to pass until keepalive notification is repeated (in seconds)
  $keepalive_refresh = '3600'
  # impact text for keepalive
  $keepalive_impact = 'Host is not checking in with monitoring and may be completely unavailable.'
  # suggestion text for keepalive
  $keepalive_suggestion = 'Check if the host is frozen, stuck, down or offline.'

  # automatic subscriptions computed from machine properties
  if (str2bool($::is_virtual) == true)
  {
    $machine_type = ['virtual']
  }
  else
  {
    $machine_type = ['physical']
  }

  if (str2bool($::has_nvidia_graphics_card) == true and str2bool($::using_nouveau_driver) == false)
  {
    $gpu = ['nvidia']
  }
  else
  {
    $gpu = []
  }

  if (($::operatingsystem == 'Ubuntu' and versioncmp($::operatingsystemrelease, '16.04') >= 0) or
      ($::operatingsystem == 'Debian' and versioncmp($::operatingsystemrelease, '8.0') >= 0))
  {
    $systemd_enabled = ['systemd']
  }
  else
  {
    $systemd_enabled = []
  }

  $automatic_subscriptions = concat($machine_type, $gpu, $systemd_enabled, ['client_specific'])

  # template variables (must be in class scope)
  $default_scheme = 'sensu.host.$(hostname)'
  $metrics_handler = ['graphite_tcp']
  $timestamp = '`date +%s`'


  # SENSU SERVER
  if ($is_main_server == true)
  {
    $combined_subscriptions = unique(concat(['proxy'], $subscriptions, $automatic_subscriptions))

    $server_packages = ['redis-server', 'curl', 'jq']

    $server_plugins = [ 'sensu-plugins-imap',
                        'sensu-plugins-slack',
                        'sensu-plugins-ssl',
                        'sensu-extensions-occurrences']

    # install server-only packages
    package
    {
      $server_packages:
      ensure => present,
    }

    # install plugins for proxy group

    package
    {
      $server_plugins:
      ensure   => present,
      provider => 'sensu_gem',
      require  => Package[$server_packages],
    }


    # Workaround for sensu-api not subscribing to check updates.
    Class['::sensu::client::service'] ~> Class['::sensu::api::service']

    class
    {
      '::sensu':
      rabbitmq_password           => $rabbitmq_password,
      server                      => true,
      client                      => true,
      api                         => true,
      api_bind                    => '127.0.0.1',
      use_embedded_ruby           => true,
      rabbitmq_reconnect_on_error => true,
      redis_reconnect_on_error    => true,
      redis_auto_reconnect        => true,
      subscriptions               => $combined_subscriptions,
      rabbitmq_host               => '127.0.0.1',
      redis_host                  => '127.0.0.1',
      redact                      => ['password', 'pass', 'api_key','token'],
      purge                       => true,
      safe_mode                   => true,

      require                     => Package[$server_packages],

      client_custom               =>
      {
        kibana_url       => $kibana_url,
        grafana_url      => $grafana_url,
        type             => $::virtual,
        operating_system => $::lsbdistdescription,
        kernel           => $::kernelrelease,
        puppet_version   => $::puppetversion,

        gitlab_health    =>
        {
          token => $gitlab_health_token,
        },
        ldap_sensu       =>
        {
          password => $sensu_monitoring_password,
        },
        gitlab_issues    =>
        {
          token => $gitlab_issues_token,
        },
        assignments_health =>
        {
          token => $assignments_health_token,
        }
      }
    }

    class
    {
      '::uchiwa':
      install_repo => false,
      host         => '127.0.0.1',
      require      => Class['::sensu'],
    }

    # sensu server specific checks
    include services::sensu::core

    # include all checks here, so that the master has all in order to run
    # with safe_mode => true

    # subscription: proxy
    include services::sensu::imap
    include services::sensu::certificates
    include services::sensu::client_specific
    include services::sensu::api_health
    include services::sensu::availability
    include services::sensu::remote_metrics

    # automatic subscriptions
    include services::sensu::nvidia
    include services::sensu::physical
    include services::sensu::systemd
    include services::sensu::virtual

    # last part is subscription name
    include services::sensu::elasticsearch
    include services::sensu::fail2ban
    include services::sensu::kibana
    include services::sensu::ldap
    include services::sensu::mailman
    include services::sensu::logstash
    include services::sensu::seafile
    include services::sensu::seahub

    # include handler definitions
    include services::sensu::handlers
  }

  # SENSU CLIENT
  else
  {
    # default client configuration
    $combined_subscriptions = unique(concat($subscriptions, $automatic_subscriptions))

    # default include checks and metrics
    include services::sensu::core
    include services::sensu::client_specific

    # automatically include checks for subscriptions
    services::sensu::combined_subscriptions{$combined_subscriptions:}

    # if the client is not consistently connected, warn after 2 weeks
    # and throw a critical error after 4 weeks
    # something will be wrong, outdated or the client can be removed
    if ($consistent_connection == false)
    {
      $client_keepalive =
      {
        thresholds =>
        {
          warning => 1209600,
          critical => 2419200,
        },
        handlers    => ['default', 'mail', 'mattermost'],
        runbook     => "${runbook_prefix}/keepalive.markdown",
        occurrences => $keepalive_occurrences,
        refresh     => $keepalive_refresh,
        impact      => $keepalive_impact,
        suggestion  => $keepalive_suggestion,
      }
    }
    else
    {
      $client_keepalive =
      {
        handlers    => ['default', 'mail', 'mattermost'],
        runbook     => "${runbook_prefix}/keepalive.markdown",
        occurrences => $keepalive_occurrences,
        refresh     => $keepalive_refresh,
        impact      => $keepalive_impact,
        suggestion  => $keepalive_suggestion,
      }

    }

    class
    {
      '::sensu':
      rabbitmq_password           => $rabbitmq_password,
      rabbitmq_host               => 'REDACTED',
      rabbitmq_port               => '5671',
      server                      => false,
      api                         => false,
      client                      => true,
      client_keepalive            => $client_keepalive,
      subscriptions               => $combined_subscriptions,
      rabbitmq_ssl                => true,
      rabbitmq_ssl_private_key    => 'puppet:///modules/services/sensu/client-key.pem',
      rabbitmq_ssl_cert_chain     => 'puppet:///modules/services/sensu/client-cert.pem',
      use_embedded_ruby           => true,
      rabbitmq_reconnect_on_error => true,
      purge                       => true,
      safe_mode                   => true,

      require                     => Package['ruby-json'],

      client_custom               =>
      {
        kibana_url       => $kibana_url,
        grafana_url      => $grafana_url,
        type             => $::virtual,
        operating_system => $::lsbdistdescription,
        kernel           => $::kernelrelease,
        puppet_version   => $::puppetversion,
      },
    }
  }

  package
  {
    $plugins:
    ensure   => installed,
    provider => 'sensu_gem',
  }

  file
  {
    '/etc/sudoers.d/sensu':
    ensure  => file,
    owner   => 'root',
    group   => 'root',
    mode    => '0440',
    source  => 'puppet:///modules/services/sensu/sudoers.d',
    require => Package['sudo'],
  }

  # all nodes need development dependencies for native extensions

  $client_packages = ['g++', 'make', 'ruby-json', 'sudo']

  Class['apt::update']
  -> Package[$client_packages]

  package
  {
    $client_packages:
    ensure => present,
  }
}

Site structure updates

Posted on Fri 23 June 2017 in random notes

I've archived the pages for "Companion Gaming" and "The Tea" since I haven't updated them for a while and no longer plan to, for several reasons.

In order to make up for that, I've put up a page for Cooking with Friends, a social event I hold for a circle of my friends and acquaintances and have come to enjoy quite a lot.


Stellaris: Stories about an Empire

Posted on Sat 17 June 2017 in video games • Tagged with Stories

I had the chance to play more Stellaris when Final Fantasy XIV was in maintenance mode to prepare for the release and Early Access period of its expansion, Stormblood. This time I took some notes. While the storytelling of Stellaris tends to be very direct, I find it fascinating to see an empire evolve and make different decisions when playing - leading to an experience that is never quite the same.

Let me tell you about the Rhator, of the Rhatorian Collective. They were a race of arachnoids hailing from the desert planet Yuria. Their interest in technology and robots to serve them was almost as old as their civilization and their prevalent reason to take to space was fear - fear of the cycle of destruction and recreation that is life in their galaxy.

While taking their first baby steps exploring via the Warp drive engines they discovered structures in space - temples to the very gods of their own religion but much older than their civilization. This in turn triggered a revolution and their culture turned to newfound spiritualism, casting aside their former wishes to create artificial servants, for they themselves were but servants to beings of greater nature.

After investigating several worlds that had harbored life in the past, the Rhatorian scientists arrived at the conclusion that it was highly unlikely their planet would be eradicated of life anytime soon - as a result, the report of these findings was made public. Another thing they found while exploring nearby star systems was the existence of another society, from 600 000 years ago. The Rhator termed them "Cybrex". The Cybrex were sentient machines that had developed a massive empire in their time, until they started a crusade to wipe out organic life in the galaxy. Traces of their ruins could be found in various star systems.

When they colonized their first world, it was a curious act. Their thirst for scientific discoveries prompted them not to colonize a neighboring star system but one further away. The first colony was named Memenos and it was a desert world, for that is what the Rhator felt most comfortable with.

The Rhatorian people had begun to wonder whether there actually was intelligent life out in the galaxy besides themselves when they finally encountered a space commerce station. It was a curious finding: a neutral space station not belonging to any other spacefaring society.

Then began the emigration waves. The next planet to be settled was Lazon Prime, in the neighboring star system. However, not all was great in the empire. They failed. They, as a collective, failed to protect their people on Memenos when they could not stop the impact of a gigantic asteroid. Millions died that day and would be mourned for generations to come.

After several attempts at centrally governing all colonies and the home world, the Rhatorian Collective decided that Lazon Prime should become their first individual sector.

It was not long until they made contact with the Mandasura Empire, a society of evangelizing plantoid zealots. They were initially sceptical of the Rhatorian ways and later became the reason for the Rhatorian empire's decline. When several disputes about border rights triggered a war, the Rhator took heavy losses and had to cede four of their systems to the Mandasura. The Rhator population there was enslaved and the indigenous population of primitives was even killed off. The Rhator faced the dreaded end of their civilization - but by different means than they had expected when they took to the stars.

This was the point where I decided that I didn't want to continue this game, since the Mandasura attacked me with a fleet valued at 4600 while my whole fleet stood at 600 at the start of the conflict. Needless to say, the situation was more than bleak.

I really like how much storytelling potential there is in Stellaris, but I really wish there was a log of events or the like - some player-accessible record of things that happened during one game. Such a log would allow me to better review how my empire developed and to share my experiences with others.


Reading recommendations (2017-04-15)

Posted on Sat 15 April 2017 in reading recommendations

I feel bad about dropping so many things into the Sidenotes uncommented - however, delaying this post any longer would only make it worse. I've been very, very busy in the last few weeks and expect it to stay this way for some more time.

Here's a great micro story by the ever interesting @microsff Twitter account:

"Assassin?" the emperor said.
"Yes?" the assassin said.
"I employed you, once, did I not?"
"In case you became a tyrant."
"Did I?"
"Yes."

A lack of nature in the office could be decreasing your productivity by Belle B. Cooper (blog.rescuetime.com feed)

But at lunchtime or in the afternoon when you’re facing a slump in energy and struggling to focus, a walk through nature could be just what you need to get through the rest of your workday.

Why You Need a Morning Ritual, not Just Morning Routine by Alan Henry (lifehacker.com feed)

It’s a simple mind shift, but super empowering when you realize that before you even left the house, you’ve done something good, crossed an item off your to-do list, and practiced a little self-care.

Thwart my OSINT Efforts while Binging TV! by Lesley Carhart (tisiphone.net feed)
In which @hacks4pancakes shows you how not to show up in every identity database, ever.

This browser tweak saved 60% of requests to Facebook by Ben Maurer, Nate Schloss (minus points for the awful title, probably via Bulletproof TLS newsletter)
Technical post about static resources and how browsers treat them when reloading the current page.

New Filing Confirms Yahoo Was Aware of Large-Scale Email Hack in 2014 by Mitchel Broussard (macrumors.com feed)

In September, Yahoo confirmed that at least 500 million of its users' accounts had been compromised during an attack in late 2014. Now, in a recent filing with the Securities and Exchange Commission, it was revealed that the company knew about the hack when it originally happened in 2014, but waited two years to divulge it to the public

'Amazon Go' Stores Will Let You Grab Groceries and Go, No Checkout Needed by Joe Rossignol (macrumors.com feed)

Amazon Go provides a checkout-free shopping experience that, to the naked eye, looks exactly like shoplifting.

You might agree that the promise of such a convenient shopping experience has its allure.

Redesigning Bluetooth Settings by Daniel Foré (twitter)
Even if you're not that into application design, you might want to check out the images from this iterative design process to see how a user interface can change. Even better, Foré has provided reasons for every time a design was changed.

CMD challenge - a browser-based command-line interface that asks you to perform many different tasks using standard command-line tools. I was intrigued by this coding project and managed to complete 2/3 of the challenges when I took it. Or, to be perfectly honest, I didn't want to give up until I had solved 2/3. Whichever version you prefer.


Sidenotes.