Archive for the ‘ Uncategorized ’ Category

MAINTENANCE WINDOW: 14.02.2014 – 15.02.2014

We hereby announce a planned maintenance window for our core systems.
During this window, new patches and updates will be applied to various
core systems.

The maintenance window extends from 14.02.2014 07:00 to 15.02.2014 12:00.

As part of the work, both our core network components and the firmware
of our storage systems will be brought up to date. In addition, an extra
storage system will be put into operation in order to reduce access times
to the home directories and to allow higher throughput via Fibre Channel.

Goals of the work:

  • Higher throughput between the labs and the storage systems
  • Various bug fixes and performance tweaks
  • Improved WLAN performance through autonomous channel management on 2.4 and 5 GHz
  • Improved operational reliability of our virtual server infrastructure

Due to the scope of the work, partial and complete outages of all our
services and systems will occur repeatedly within the time window given
above. This date was chosen for organizational reasons (holidays).

Fix ejabberd’s LDAP / AD authentication timeouts

In larger environments, integrating ejabberd into your existing infrastructure by binding it against your existing AD server seems like a great idea at first. We are provided with a read-only domain controller (RODC) by our central IT department and use it for all our authentication matters. However, we noticed that our ejabberd server – while initially working perfectly – stopped successfully authenticating our users although they provided valid credentials.

Restarting ejabberd resolved the issue for several hours until it started to throw error messages like:

Failed authentication for xxx@ourdomain.ext

Checking the authentication sequence with tcpdump, we eventually found out that ejabberd did not manage to talk to the AD server. It set up the connection when the service started, but after a few hours the connection timed out without ejabberd noticing. Since we could not convince the AD server to keep TCP connections open indefinitely (we have no privileges on the respective RODC), we had to play with the knobs on our end – the underlying Debian system that hosts our ejabberd service.

The Linux IP stack is able to notice broken connections as long as there is traffic on them. For reliably detecting broken connections that carry very little traffic, there is TCP keepalive support, but ejabberd does not make use of it. Fortunately, there is a way to trick software into using TCP keepalive without recompiling it: libkeepalive.

The rather small download makes use of the LD_PRELOAD environment variable – you can set it within your service startup script to enable keepalive for a single service / application, or you can use the configuration file /etc/ to change userspace behaviour for the whole system.
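For the single-service variant, a startup-script excerpt could look like the following sketch. The path to libkeepalive.so is an assumption and depends on your distribution and on where you installed the library:

```shell
# Hypothetical excerpt from an ejabberd init/startup script.
# The library path below is an assumption – adjust it to wherever
# libkeepalive.so ended up on your system.
LD_PRELOAD=/usr/lib/libkeepalive.so
export LD_PRELOAD
/etc/init.d/ejabberd start
```

With this in place, every TCP socket ejabberd opens gets the keepalive flag set, without touching the ejabberd binary itself.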

For further information about where to get libkeepalive and how to configure the TCP keepalive feature on Linux, visit the following two links:


We achieved our best results with the following setup:

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 5
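Under the hood, libkeepalive simply flips the per-socket keepalive switch that most applications leave off; the sysctl values above then govern when and how often probes are sent. A minimal Python sketch of the equivalent per-socket calls (the function name is ours, and the TCP_KEEP* options are Linux-specific):

```python
import socket

def enable_keepalive(sock, idle=600, interval=30, probes=5):
    # Enable keepalive on this socket – this is the switch that
    # libkeepalive flips via LD_PRELOAD for unmodified programs.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Per-socket counterparts of the tcp_keepalive_* sysctls.
    # They are Linux-specific, so guard for portability.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)
    return sock

s = enable_keepalive(socket.socket(socket.AF_INET, socket.SOCK_STREAM))
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # non-zero once enabled
s.close()
```

With these values, an idle LDAP connection is probed after 600 seconds of silence, every 30 seconds, and declared dead after 5 failed probes – so ejabberd sees a socket error instead of silently talking to a connection the AD server has long dropped.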





RaspBMC vs. XBian: The dark side of the (open-) source.

We have been on the open source train for many years now, and we were really convinced that it would improve so many of the qualities software should bring along. Supporting open source should mean letting people all over the world contribute to the quality, security and stability of the software, either by reporting back or by doing some optimizations themselves. Some licenses even allow forking the software if <random guy> thinks he or she could do it better than the original author.

But over the years, something apparently changed…

Read more

Resurrect MacPorts on OS X Mountain Lion (10.8)

Installing Mountain Lion on release day came with some issues,
not that we assumed that everything would just work out of the box 😉

If you are using MacPorts, as do most technicians who choose OS X as
their operating system, you might notice that it stops working:


creo$ sudo port selfupdate
Installing new MacPorts release in /opt/local as root:admin; permissions 0755; Tcl-Package in /Library/Tcl
Error: /opt/local/bin/port: port selfupdate failed: Error installing new MacPorts base: command execution failed

Well, no need to panic, help is here!

Read more after the break…

Read more

Using Puppet to configure OpenStack Instances

About Puppet

Puppet is a configuration management tool which can be used to configure servers / instances at runtime. For further information visit:

Our Approach

Install and configure the puppet client directly on the image which will be used to launch instances. After an instance has booted, the puppet client connects to its configured puppet server and retrieves its associated configuration specification (puppet calls these manifests).
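As an illustration, a manifest on the puppet server could look like the following minimal sketch. The node name and the resources are hypothetical, not taken from our setup:

```puppet
# /etc/puppet/manifests/site.pp (sketch – node name is hypothetical)
node 'instance-01' {
  # Keep ntp installed and running on every matching instance.
  package { 'ntp':
    ensure => installed,
  }
  service { 'ntp':
    ensure  => running,
    require => Package['ntp'],
  }
}
```

When a freshly booted instance whose certname matches checks in, the puppet client applies exactly this resource set.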

We installed the puppet server on our cloud controller c2n2. The puppet client runs inside a virtual instance. Further information about our test environment can be found here.

Client and Server Installation and configuration after the break…

Read more

OpenStack (Ubuntu 11.10 and OpenStack Diablo)

Introduction to OpenStack

The following links give you a feeling for the architecture of OpenStack and for how its several components interact:

OpenStack Administration Guide: “Getting Started with OpenStack”

OpenStack Administration Guide: “Networking Options”

OpenStack Wiki: “Understanding FlatNetworking”

We recommend reading through the links as well as the OpenStack Starter Guide.


Our Starting Point…

This blog entry describes an OpenStack multi-node installation. As the name implies, this configuration is based on two (or more) physical hosts. We use one of the nodes as the so-called "cloud controller" and the other one as an additional computing resource. The hostname of the cloud controller is "c2n2", the hostname of the additional compute node is "c2n1". Make sure that domain name resolution works in both directions!

The cloud controller c2n2 will host all basic nova services that are needed for providing a cloud computing environment:

  • Glance (manages images for virtual instances)
  • Compute (runs virtual instances)
  • Network (manages the virtual logical network infrastructure and network traffic within the cloud environment)
  • Scheduler (manages the distribution of virtual instances to physical hosts based on metadata they deliver)
  • Volume (manages persistent block device storage)
  • Rabbit Message Server (provides a communication channel between the several components of OpenStack)

The other physical node solely provides additional computing power, so only the nova-compute (and nova-common) services will be installed on that node. In the end we have one cloud management node (c2n2) which simultaneously works as a compute node, together with the other node (c2n1).
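To make the split concrete, here is a sketch of what a Diablo-era flag-file style /etc/nova/nova.conf on c2n2 could roughly contain. Hostnames, credentials and the network manager choice are placeholders and assumptions, not our literal values:

```
# /etc/nova/nova.conf (sketch – all values are placeholders)
--sql_connection=mysql://nova:secret@c2n2/nova
--rabbit_host=c2n2
--glance_api_servers=c2n2:9292
--network_manager=nova.network.manager.FlatDHCPManager
--flat_interface=eth1
```

The compute-only node c2n1 would carry a matching file, pointing at the same database, rabbit and glance endpoints on c2n2.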

Our OpenStack test installation is hosted in a VLAN-separated lab environment, so our physical nodes are separated from other physical nodes by means of OSI layer 2 VLANs.

Each of our physical nodes is equipped with 2 network interface cards. The following picture shows our topology.

our topology

The initial configuration of our 2 physical hosts is as follows:

  • operating system: Ubuntu Server 11.10 (Oneiric Ocelot)
  • Manual Partitioning:
  • 1. Partition: Boot Flag, 50G, ext4, /, (UbuntuServer), sda1
  • 2. Partition: 25G, sda2
  • 3. Partition: 5G, swap, sda3
  • Main Network Interface Card: eth0
  • Local user: localadmin / localadmin (no prerequisite)
  • Grub installed in MBR
  • OpenSSH Server installed

    Static Network Configuration


    root@c2n2:~# cat /etc/network/interfaces
    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).

    # The loopback network interface
    auto lo
    iface lo inet loopback

    # The primary network interface
    auto eth0
    iface eth0 inet static

    auto eth1
    iface eth1 inet static



    root@c2n1:~# cat /etc/network/interfaces
    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).

    # The loopback network interface
    auto lo
    iface lo inet loopback

    # The primary network interface
    auto eth0
    iface eth0 inet static

    auto eth1
    iface eth1 inet static

    The eth1 interfaces on the nodes are left unconfigured – they will be used as bridging devices by OpenStack.


    More on the installation process of OpenStack after the break…

    Read more

    PAM: Authenticate Linux / Unix Users against ActiveDirectory (without installing Unix Extensions)


    If you’re going to deal with authentication of *x systems against ActiveDirectory, you will need to install the Unix extensions to your ActiveDirectory schema to provide your accounts with proper UID, GID and SHELL attributes. Unfortunately it is sometimes not possible to install those extensions to your AD, especially when the company’s core IT department refuses to do so (of course they might have their reasons).

    To resolve this issue, we decided to create a hybrid authentication scenario, featuring a local LDAP directory which stores the account information, while the passphrase is checked directly against the central AD infrastructure.
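On a Debian-like system, such a hybrid could be sketched roughly as follows, assuming the local LDAP carries the posixAccount data and pam_ldap is pointed at the AD for the password check. The file contents are illustrative, not our literal configuration:

```
# /etc/nsswitch.conf – uid, gid and shell come from the local LDAP
passwd: files ldap
group:  files ldap
shadow: files ldap

# /etc/pam.d/common-auth – the passphrase is verified against the AD
# via pam_ldap; local unix accounts remain usable as a fallback
auth  [success=1 default=ignore]  pam_ldap.so
auth  required                    pam_unix.so nullok_secure
```

The effect: name service lookups never touch the AD, while every interactive login still requires a bind against the central AD infrastructure.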

    Read more after the break

    Read more