Author Archive

MAINTENANCE WINDOW: 14.02.2014 – 15.02.2014

We hereby announce a planned maintenance window for our core systems.
During this window, new patches and updates will be applied to various
core systems.

The maintenance window extends from 14.02.2014, 07:00 to 15.02.2014, 12:00.

As part of this work, both our core network components and the firmware
of our storage systems will be brought up to date. In addition, an extra
storage system will be put into operation to reduce access times to the
home directories and to allow higher throughput via Fibre Channel.

Goals of the work:

  • Higher throughput between the labs and the storage systems
  • Various bug fixes and performance tweaks
  • Improved WLAN performance through autonomous channel management on 2.4 and 5 GHz
  • Improved operational reliability of our virtual server infrastructure

Due to the scope of the work, recurring partial and complete outages of
all our services and systems are to be expected within the time window
stated above. The date was chosen for organizational reasons (holidays).

fix ejabberd’s LDAP / AD authentication timeouts

In larger environments, integrating ejabberd into your existing infrastructure by binding it against your existing AD server seems like a great idea at first. We are provided with a read-only domain controller (RODC) by our central IT and use it for all our authentication matters. However, we noticed that our ejabberd server – while working perfectly at first – stopped successfully authenticating our users even though they provided valid credentials.

Restarting ejabberd resolved the issue for several hours until it started to throw error messages like:

Failed authentication for xxx@ourdomain.ext

Checking the authentication sequence with tcpdump, we eventually found out that ejabberd did not manage to talk to the AD server any more. It set up the connection when the service started, but after a few hours the connection timed out without ejabberd noticing. Since we could not convince the AD server to keep TCP connections open indefinitely (we have no privileges on the respective RODC), we had to play with the knobs on our end – the underlying Debian system that hosts our ejabberd service.

The Linux IP stack only notices broken connections when there is traffic using them. For reliably detecting broken connections that carry very little traffic, there is TCP keepalive support – but ejabberd does not make use of it. Fortunately, there is a way of tricking software into using TCP keepalive without recompiling it: libkeepalive.

The rather small library is used via the LD_PRELOAD environment variable – you can set it within your service startup script to enable keepalive for a single service or application, or use the configuration file /etc/ld.so.preload to change the behaviour of all userspace programs system-wide.
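As a sketch of how this looks in a startup script (the .so path below is an assumption and depends on where your distribution installs libkeepalive; the override variables are the ones libkeepalive documents):

```shell
# Hypothetical startup-script snippet: preload libkeepalive for ejabberd only.
# The library path is an assumption -- adjust it to wherever libkeepalive
# was installed on your system.
export LD_PRELOAD=/usr/lib/libkeepalive.so

# libkeepalive also reads per-process overrides for the kernel defaults
# from the environment:
export KEEPIDLE=600    # idle seconds before the first probe is sent
export KEEPINTVL=30    # seconds between unanswered probes
export KEEPCNT=5       # unanswered probes before the connection is dropped

# /etc/init.d/ejabberd start   # then start the service as usual
```

Using LD_PRELOAD here (rather than /etc/ld.so.preload) keeps the change scoped to ejabberd instead of affecting every process on the box.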

For further information on where to get libkeepalive and how to configure the TCP keepalive feature on Linux, visit the following two links:

http://libkeepalive.sourceforge.net
http://tldp.org/HOWTO/html_single/TCP-Keepalive-HOWTO


We achieved our best results with the following setup:

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 5
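To make these values survive a reboot, they can go into /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/) and be loaded with `sysctl -p`; a commented sketch of that fragment:

```
# /etc/sysctl.conf -- TCP keepalive tuning for the ejabberd host.
# Start probing after 10 minutes of idle time ...
net.ipv4.tcp_keepalive_time = 600
# ... send a probe every 30 seconds ...
net.ipv4.tcp_keepalive_intvl = 30
# ... and declare the connection dead after 5 unanswered probes.
net.ipv4.tcp_keepalive_probes = 5
```

With these numbers, a dead LDAP connection is detected after at most 600 + 5 × 30 = 750 seconds, so ejabberd reconnects well before users notice.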


creo
http://creo.blackmesa.at

RaspBMC vs. XBian: The dark side of the (open-)source.

So we have been on the open source railroad for many years now, and we were really convinced it would be all for the better for so many characteristics software should bring along. Supporting open source should mean letting people all over the world contribute to the quality, security and stability of the software, either by reporting back or by doing some optimizations themselves. Some licenses even allow forking the software if <random guy> thinks he or she can do it better than the original author.

But over the years, something apparently changed…


Resurrect MacPorts on OS X Mountain Lion (10.8)

Installing Mountain Lion on release day came with some issues –
not that we assumed everything would just work out of the box 😉

If you are using MacPorts, as do most technicians who choose OS X as
their operating system, you might notice that it stops working:

Example:

creo$ sudo port selfupdate
Installing new MacPorts release in /opt/local as root:admin; permissions 0755; Tcl-Package in /Library/Tcl
Error: /opt/local/bin/port: port selfupdate failed: Error installing new MacPorts base: command execution failed

Well, no need to panic, help is here!

Read more after the break…


We provide a full featured internal Debian and Ubuntu mirror

From now on, we provide an internal Debian and Ubuntu repository mirror based on apt-cacher-ng.

This allows you to get your updates and applications at up to 1 Gigabit/s from our servers,
saving time and bandwidth for you and all the other users in our labs.

There is no limitation on which repositories you can use, since apt-cacher-ng is not a classic “prefetched” mirror – it caches your packages on demand and lets you use third-party repositories as you like – no special server-side configuration is required.

You can use it for fresh installs, upgrades, dist-upgrades, software installation … you name it!
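Client-side setup is a one-liner; assuming the mirror answers on apt-cacher-ng's usual port 3142 (the hostname below is a placeholder – see the service page linked underneath for the actual values), an apt proxy drop-in looks like this:

```
# /etc/apt/apt.conf.d/02proxy -- route all apt HTTP traffic through the cache.
# Hostname and port are placeholders; check the service page for the real ones.
Acquire::http::Proxy "http://aptcache.example.fhstp.ac.at:3142";
```

Since apt-cacher-ng acts as a caching HTTP proxy, your sources.list can stay untouched.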


We provide further information and configuration guides for your clients
at http://start.nwt.fhstp.ac.at/service_packagemirror.php


All the best,


creo
http://creo.blackmesa.at 


Austria starts with 6 months of data retention on the 1st of April 2012!

No, there’s no joke inside, the data retention (also known as “Vorratsdatenspeicherung”
or – in Newspeak – “Mindestdatenspeicherung”) starts on the 1st of April 2012.

Although the national media are covering the topic – now that there are just a few days left –
they refused to cover it for the last few months to keep the people calm.

Several independent groups have tried to inform our citizens, but the vast majority is
still unaware of the effects of data retention – and of the fact that Austrian providers
have to start collecting data in a few days.

You may be disappointed, shocked or frustrated, you may feel indifferent
about the whole topic or just resign …


… or you can respond with proper action!

Links, Tools and Information to protect your privacy after the break…


PAM: Authenticate Linux / Unix Users against ActiveDirectory (without installing Unix Extensions)


If you’re going to deal with authentication of *x systems against ActiveDirectory, you will usually need to install the Unix extensions into your ActiveDirectory schema to provide your accounts with proper UID, GID and SHELL attributes. Unfortunately, it is sometimes not possible to install those extensions into your AD, especially when the company’s core IT department refuses to do so (of course they might have their reasons).

To work around this, we decided to create a hybrid authentication scenario, featuring a local LDAP server that stores the account information while the passphrase is checked directly against the central AD infrastructure.
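As a minimal sketch of one way such a split can be wired up (not our exact setup – the file paths and module options here are assumptions): the NSS side pulls UID/GID/shell from the local LDAP, while the PAM auth stack points pam_ldap at the domain controller instead:

```
# /etc/nsswitch.conf -- account data (UID, GID, shell) from the local LDAP:
passwd: files ldap
group:  files ldap

# /etc/pam.d/common-auth -- password check against the AD; pam_ldap is
# given an alternate configuration file that points at the RODC
# (path and option are assumptions):
auth  sufficient  pam_ldap.so  config=/etc/pam_ldap_ad.conf
auth  required    pam_unix.so  nullok_secure try_first_pass
```

The full walkthrough follows after the break.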

Read more after the break


openVZ: Automatic VETH Bridging Script

When you use openVZ, you will sooner or later need virtual ethernet interfaces in your VEs that are bridged to an interface on your host node – for example, to put your container into some special VLAN. Although openVZ added the optional “bridge” parameter to their configuration specification, they do not provide an automatic way of dealing with it once it is set up, so you’ll have to take care of it yourself.

We wrote a script that automatically bridges the virtual ethernet interface to the host interface you specified in your VE’s configuration once you start a virtual entity. All you have to do is save it to /etc/vz/scripts, make it executable and run it along with the VE mount event. To do this, insert “/etc/vz/vzbridge.sh $VEID &” into /etc/vz/conf/vps.mount (you might have to create the vps.mount file first and make it executable, too).
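To illustrate the idea (this is a sketch, not the actual script – the config path and the veth naming scheme are assumptions that may differ between openVZ versions), such a hook only has to read the bridge name from the VE’s configuration and enslave the freshly created host-side veth device:

```shell
#!/bin/sh
# Sketch of a VE mount hook: attach the container's host-side veth device
# to the bridge named in the VE configuration (BRIDGE="..." parameter).

VZCONF_DIR=${VZCONF_DIR:-/etc/vz/conf}

# Extract the BRIDGE="..." value from the VE's configuration file.
bridge_for_veid() {
    sed -n 's/^BRIDGE="*\([^"]*\)"*$/\1/p' "$VZCONF_DIR/$1.conf"
}

# Wait for the host-side veth device to appear, then enslave it.
# "veth${VEID}.0" is an assumed naming scheme -- check yours with "ip link".
attach_veth() {
    veid=$1; bridge=$2; dev="veth${veid}.0"
    while ! ip link show "$dev" >/dev/null 2>&1; do sleep 1; done
    ip link set "$dev" up
    brctl addif "$bridge" "$dev"
}

if [ -n "${1:-}" ]; then
    br=$(bridge_for_veid "$1")
    [ -n "$br" ] && attach_veth "$1" "$br"
fi
```

The wait loop matters because vps.mount fires before the veth device exists, which is why the script is started in the background with “&”.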

You’ll find the script available for download here
http://creo.blackmesa.at/random_stuff.html


[creo]

Differential block-based backup using rdiff-backup

The time has come for us to stop worrying about a safe, reliable and efficient (aka cheap) backup strategy. After weeks of comparing lots of backup solutions, we found the one we are going to use from now on (for our *x infrastructure).

If you are looking for a solid and easy way to back up your data (from single files to full machines), rdiff-backup may be your weapon of choice. While backing up your data from a given source directory (or set of directories), it preserves all meta-information and provides a full copy of the source directory, plus reverse diffs that are stored in a special subdirectory of the target directory. This way you can recover files easily by copying them back to their original location with cp, but you may also go back in time and recover files you deleted (or modified) a given period of time ago. It fully resembles the feature set of Apple’s “Time Machine” – while having been introduced back in mid-2001.
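The workflow described above boils down to a handful of commands; a small self-contained demo (the paths are throwaway temp directories, and we assume rdiff-backup is installed):

```shell
# Tiny end-to-end rdiff-backup demo; skips silently if the tool is missing.
command -v rdiff-backup >/dev/null 2>&1 || exit 0

SRC=$(mktemp -d); DST=$(mktemp -d)
echo "important data" > "$SRC/notes.txt"

# Take a backup: DST becomes a full mirror of SRC, with reverse diffs kept
# under DST/rdiff-backup-data on subsequent runs.
rdiff-backup "$SRC" "$DST"

# Show which increments (points in time) the repository holds.
rdiff-backup --list-increments "$DST"

# Reclaim space by dropping increments older than one month.
rdiff-backup --force --remove-older-than 1M "$DST"

# Restore a file as of a given time ("now" here; e.g. "-r 10D" goes
# ten days back).
rdiff-backup -r now "$DST/notes.txt" "$SRC/restored.txt"
```

For everyday restores of the current state, a plain cp out of the mirror works just as well; the -r switch is only needed to travel back in time.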

rdiff-backup is available for all major platforms (yes – even Windows) and is licensed under the GPL.

For more information, visit the rdiff-backup website.


[creo]