MAINTENANCE WINDOW: 14.02.2014 – 15.02.2014

We hereby announce a planned maintenance window for our core systems. During
this window, new patches and updates will be applied to various core systems.

The maintenance window extends from 14.02.2014 07:00 to 15.02.2014 12:00.

As part of the work, both our core network components and the firmware of our
storage systems will be brought up to date. In addition, an extra storage
system will be put into operation to reduce access times to the home
directories and to make use of higher throughput via Fibre Channel.

Goals of the work:

  • Higher throughput between the labs and the storage systems
  • Various bug fixes and performance tweaks
  • Improved WLAN performance through autonomous channel management on 2.4 and 5 GHz
  • Improved operational reliability of our virtual server infrastructure

Due to the scope of the work, there will be repeated partial and total outages
of all our services and systems within the time window stated above. The date
was chosen for organizational reasons (holidays).

Fix ejabberd’s LDAP / AD authentication timeouts

In larger environments, integrating ejabberd into your existing infrastructure by binding it against your existing AD server seems like a great idea at first. We are provided with a read-only domain controller (RODC) by our central IT and use it for all our authentication matters. However, we noticed that our ejabberd server – while working perfectly at first – stopped successfully authenticating our users although they provided valid credentials.

Restarting ejabberd resolved the issue for several hours until it started to throw error messages like:

Failed authentication for xxx@ourdomain.ext

Checking the authentication sequence with tcpdump, we eventually found out that ejabberd did not manage to talk to the AD server. It set up the connection when the service started, but after a few hours the connection timed out without ejabberd noticing. Since we could not convince the AD server to let TCP connections live indefinitely (we have no privileges on the respective RODC), we had to play with the knobs on our end – the underlying Debian system that hosts our ejabberd service.

The Linux IP stack is able to notice broken connections if there is traffic on them. For reliably detecting broken ultra-low-traffic connections there is TCP keepalive support, but ejabberd does not make use of it. Fortunately, there is a way of tricking software into using TCP keepalive without recompiling it: libkeepalive.

The rather small library is activated via the LD_PRELOAD environment variable – you can set it within your service startup script to apply keepalive to a single service / application, or you can use the configuration file /etc/ld.so.preload to change userspace behaviour for the whole system.
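A minimal sketch of both approaches (the library path below is an assumption – adjust it to wherever you installed libkeepalive):

```shell
# In the ejabberd startup script, export the preload before the daemon
# starts (the library path is an assumption -- adjust to your install):
export LD_PRELOAD=/usr/lib/libkeepalive.so
# ...then start the service as usual, e.g.:
#   /etc/init.d/ejabberd restart

# System-wide alternative: list the library in /etc/ld.so.preload so it
# is preloaded into every dynamically linked userspace process:
#   echo "/usr/lib/libkeepalive.so" >> /etc/ld.so.preload
```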

For further information about where to get libkeepalive and how to configure the TCP keepalive feature on Linux, visit the following two links:



We achieved our best results with the following setup:

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 5
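These are system-wide defaults of the Linux TCP stack; to persist them across reboots they can go into /etc/sysctl.conf and be applied with `sysctl -p` as root:

```
# /etc/sysctl.conf -- persist the keepalive tuning across reboots:
# start probing after 600 s of idle, send a probe every 30 s, and give
# up (killing the connection) after 5 unanswered probes.
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 5
```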





NX Connection to Ubuntu with Unity Desktop

If you want to connect with an NX client to an Ubuntu machine running the Unity desktop, you have to set the window manager manually.

The command to start Unity as the window manager:

 gnome-session --session=ubuntu-2d 


Other available gnome-sessions are listed on the server in the directory
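For illustration, on a stock Ubuntu installation the session definitions typically live in /usr/share/gnome-session/sessions/ (this path is an assumption); each *.session file name is a value usable with --session=:

```shell
# List the session names usable with --session= (the directory is an
# assumption -- the stock Ubuntu location for *.session files):
for f in /usr/share/gnome-session/sessions/*.session; do
  if [ -e "$f" ]; then
    basename "$f" .session
  fi
done
```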



RaspBMC vs. XBian: The dark side of the (open-) source.

We have been on the open source train for many years now, and we were truly convinced that it would be all for the better regarding the many characteristics software should bring along. Supporting open source should mean letting people all over the world contribute to the quality, security and stability of the software, either by reporting back or by doing some optimizations themselves. Some licenses even allow forking the software if <random guy> thinks he or she would do it way better than the original author in the first place.

But over the years, something apparently changed…


Resurrect MacPorts on OS X Mountain Lion (10.8)

Installing Mountain Lion on release day came with some issues,
not that we assumed that everything would just work out of the box 😉

If you are using MacPorts, as do most technicians who choose OS X as their
operating system, you might notice that it stops working:


creo$ sudo port selfupdate
Installing new MacPorts release in /opt/local as root:admin; permissions 0755; Tcl-Package in /Library/Tcl
Error: /opt/local/bin/port: port selfupdate failed: Error installing new MacPorts base: command execution failed

Well, no need to panic, help is here!

Read more after the break…


We provide a full featured internal Debian and Ubuntu mirror

From now on, we provide an internal Debian and Ubuntu repository mirror based on apt-cacher-ng.

This allows you to get your updates and applications with up to 1 Gigabit/s from our servers,
saving time and bandwidth for you and all the other users in our labs.

There is no limitation on which repositories you use, since apt-cacher-ng is not a classic “prefetching” mirror
– it caches your packages on demand and allows you to use 3rd-party repositories as you like –
no special server-side configuration is required.

You can use it for fresh installs, upgrades, dist-upgrades, software installation … you name it!
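Pointing a client at the mirror takes a single apt proxy option – a sketch, with a placeholder hostname (3142 is apt-cacher-ng's default port; see the service page for the real address):

```
# /etc/apt/apt.conf.d/01proxy -- route apt's HTTP traffic through the
# apt-cacher-ng mirror (hostname is a placeholder):
Acquire::http::Proxy "http://mirror.example.local:3142";
```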


We provide further information and configuration guides for your clients
at http://start.nwt.fhstp.ac.at/service_packagemirror.php


All the best,




Austria starts with 6 months of data retention on the 1st of April 2012!

No, there’s no joke inside, the data retention (also known as “Vorratsdatenspeicherung”
or – in Newspeak – “Mindestdatenspeicherung”) starts on the 1st of April 2012.

Although the national media now covers the topic – with just a few days left –
they refused to cover it for the last few months to keep the people calm.

Several independent groups have tried to inform our citizens, but the vast
majority is still unsure about the effects of data retention and unaware that
Austrian providers have to start collecting data in a few days.

You may be disappointed, shocked or frustrated, you may feel indifferent
about the whole topic or just resign …


… or you can respond with proper action!

Links, Tools and Information to protect your privacy after the break…


Using Puppet to configure OpenStack Instances

About Puppet

Puppet is a configuration management tool which can be used to configure servers / instances at runtime. For further information visit: http://docs.puppetlabs.com/

Our Approach

Install and configure the Puppet client directly on the image which will be used to launch instances. After the instance has booted, the Puppet client connects to its configured Puppet server and retrieves its associated configuration specification (Puppet calls these manifests).

We installed the Puppet server on our cloud controller c2n2. The Puppet client runs inside a virtual instance. Further information about our test environment can be found here.
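One possible shape of such a manifest (the node name and the managed resources are assumptions for illustration, not our actual configuration):

```
# site.pp -- minimal example manifest (node name and resources are
# assumptions): ensure ntp is installed and its service is running
node 'instance-0001.novalocal' {
  package { 'ntp':
    ensure => installed,
  }
  service { 'ntp':
    ensure  => running,
    require => Package['ntp'],
  }
}
```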

Client and server installation and configuration after the break…

OpenStack (Ubuntu 11.10 and OpenStack Diablo)

Introduction to OpenStack

The following links give you a feeling for the architecture of OpenStack and how its several components interact:

OpenStack Administration Guide: “Getting Started with OpenStack”

OpenStack Administration Guide: “Networking Options”

OpenStack Wiki: “Understanding FlatNetworking”

We recommend reading through the links as well as the OpenStack Starter Guide.


Our Starting Point…

This blog entry describes an OpenStack multi-node installation. As the name implies, this configuration is based on 2 (or more) physical hosts. We use one of the nodes as the so-called “cloud controller” and the other one as an additional computing resource. The hostname of the cloud controller is “c2n2”, the hostname of the additional computing resource node is “c2n1”. Make sure that the domain name resolution works in both directions!

The cloud controller c2n2 will host all basic nova services that are needed for providing a cloud computing environment:

  • Glance (manages images for virtual instances)
  • Compute (runs virtual instances)
  • Network (manages the virtual logical network infrastructure and network traffic within the cloud environment)
  • Scheduler (manages the distribution of virtual instances to physical hosts based on metadata they deliver)
  • Volume (manages persistent block device storage)
  • Rabbit Message Server (provides a communication channel between the several components of OpenStack)

The other physical node solely provides additional computing power, so only the nova-compute (and nova-common) services will be installed on that node. In the end we have one cloud management node (c2n2) which simultaneously works as a computing node, together with the other node (c2n1).

Our OpenStack test installation is hosted in a VLAN-separated lab environment, so our physical nodes are separated from other physical nodes based on OSI layer 2 VLANs.

Each of our physical nodes is equipped with 2 network interface cards. The following picture shows our topology.

[Figure: our topology]

The initial configuration of our 2 physical hosts is as follows:

  • operating system: Ubuntu Server 11.10 (Oneiric Ocelot)
  • Manual Partitioning:
  • 1. Partition: Boot Flag, 50G, ext4, /, (UbuntuServer), sda1
  • 2. Partition: 25G, sda2
  • 3. Partition: 5G, swap, sda3
  • Main Network Interface Card: eth0
  • Local user: localadmin / localadmin (no prerequisite)
  • Grub installed in MBR
  • OpenSSH Server installed

Static Network Configuration


    root@c2n2:~# cat /etc/network/interfaces
    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).
    # The loopback network interface
    auto lo
    iface lo inet loopback
    # The primary network interface 
    auto eth0
    iface eth0 inet static        
    auto eth1
    iface eth1 inet static



    root@c2n1:~# cat /etc/network/interfaces
    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).
    # The loopback network interface
    auto lo
    iface lo inet loopback
    # The primary network interface 
    auto eth0
    iface eth0 inet static
    auto eth1
    iface eth1 inet static

The eth1 interfaces on the nodes are left unconfigured – they will be used as bridging devices by OpenStack.
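For illustration, with Diablo's flagfile-style nova.conf the bridging setup over eth1 can be expressed roughly like this (the network manager choice and the bridge name are assumptions, not our actual configuration):

```
# /etc/nova/nova.conf (Diablo flagfile syntax) -- sketch, values are
# assumptions: bridge instance traffic over the unconfigured eth1
--network_manager=nova.network.manager.FlatDHCPManager
--flat_interface=eth1
--flat_network_bridge=br100
```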


More on the installation process of OpenStack after the break…

PAM: Authenticate Linux / Unix Users against ActiveDirectory (without installing Unix Extensions)


If you are going to deal with authentication of *nix systems against Active Directory, you will need to install the Unix extensions into your Active Directory schema to provide your accounts with proper UID, GID and SHELL attributes. Unfortunately it is sometimes not possible to install those extensions into your AD, especially when the company’s core IT department refuses to do so (of course they might have their reasons).

To resolve this issue, we decided to create a hybrid authentication scenario, featuring a local LDAP which stores the account information while checking the passphrase directly against the central AD infrastructure.
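A rough sketch of the PAM side of such a hybrid setup (module order, options and file contents are assumptions for illustration; the account data itself would come from the local LDAP via nsswitch):

```
# /etc/pam.d/common-auth -- sketch (order and options are assumptions):
# pam_ldap verifies the passphrase against the AD, pam_unix keeps
# local accounts working as a fallback.
auth  sufficient  pam_ldap.so
auth  required    pam_unix.so try_first_pass

# /etc/nsswitch.conf -- UID, GID and shell come from the local LDAP:
passwd: files ldap
group:  files ldap
```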

Read more after the break