Snippets for moving WP mu hosts

This one-liner can be very helpful when you need to move a running WordPress Multi User installation to another host. It fetches the domains and their aliases, reverses each domain name from a.b.c to c.b.a, sorts the result, and prints it out. With that, you have a nice overview of all FQDNs handled by your WP mu, which helps you not to miss one.

The first snippet is everything in a readable form; the second is the actual one-liner.

You will have to set WP_MYSQL_USER, WP_MYSQL_PASSWORD, WP_MYSQL_HOST, and WP_MYSQL_DATABASE according to your specific settings.

for entry in $(
   mysql -u ${WP_MYSQL_USER} \
    --password=${WP_MYSQL_PASSWORD} \
    -h ${WP_MYSQL_HOST} \
    -s \
    -e "SELECT domain FROM ${WP_MYSQL_DATABASE}.wp_domain_mapping;"
); do
   echo $entry | \
    awk 'BEGIN {FS = "."; output_string=""}
         {if ($3 == "") output_string=$2"."$1;
          else output_string=$3"."$2"."$1}
         END {print output_string}'
done | sort

for entry in $(mysql -u ${WP_MYSQL_USER} --password=${WP_MYSQL_PASSWORD} -h ${WP_MYSQL_HOST} -s -e "SELECT domain FROM ${WP_MYSQL_DATABASE}.wp_domain_mapping;");do echo $entry | awk 'BEGIN {FS = ".";output_string=""};{if ($3 == "") output_string=$2"."$1; else output_string=$3"."$2"."$1};END{print output_string}' ;done | sort
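To see what the awk part does, here is a stand-alone run on some made-up domains (the names are placeholders, not from a real installation):

```shell
printf 'blog.example.com\nshop.example.com\nexample.org\n' | \
  awk 'BEGIN {FS = "."} {if ($3 == "") print $2"."$1; else print $3"."$2"."$1}' | sort
# → com.example.blog
#   com.example.shop
#   org.example
```

Reversing before sorting groups all hosts of the same second-level domain next to each other.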

Grab what the _parent won’t give you.

Though IFrames are an evil remnant of the dark age commonly known as the 90s, you still have to deal with them from time to time. Say you want to have a microsite in a YouTube channel. Back in the old days, we were not so keen on all the URI parameters, but now, in the days of Big Data and massive user tracking, we need to know where a user came from. Unfortunately, within an IFrame we have to rely on what the surrounding page is willing to give us.

Luckily, there’s a handy workaround: Work with the referrer’s RequestURI.

Here’s a code snippet for nginx:

location / {
   set $forward 0;

   # take the referrer's arguments and
   # add another to prevent an infinite redirection loop
   if ($http_referer ~* "\?(.+)$") {
      set $newargs "$1&D43ft=42";
      set $forward 1;
   }

   # if this is the redirected call,
   # stop nginx from forwarding
   if ($arg_D43ft = 42) { set $forward 0; }

   if ( $forward = 1 ) {
      # clear the local RequestURI.
      # This is necessary to prevent nginx
      # from adding the current RequestURI to the redirection
      set $args "";
      rewrite ^ $scheme://$host$uri?$newargs;
   }

   proxy_pass http://your_target:your_port;
   include /etc/nginx/proxy_params;
}

Howto speed-up initial imports to MySQL innodb

Well, it’s an evil hack; however, it works.

  1. Create two databases that differ only in the storage engine used. One uses MyISAM, the other InnoDB.
  2. Import everything into the MyISAM database.
  3. Connect to MySQL and copy the data over:
    INSERT INTO innodb_db.table SELECT * FROM myisam_db.table;
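The steps above can be sketched in SQL; the database and table names here (myisam_db, innodb_db, mytable) are placeholders:

```sql
-- create the two databases
CREATE DATABASE myisam_db;
CREATE DATABASE innodb_db;
-- import the dump into myisam_db first (with its tables as MyISAM),
-- then recreate each table under InnoDB and copy the rows:
CREATE TABLE innodb_db.mytable LIKE myisam_db.mytable;
ALTER TABLE innodb_db.mytable ENGINE=InnoDB;
INSERT INTO innodb_db.mytable SELECT * FROM myisam_db.mytable;
```

CREATE TABLE ... LIKE copies the MyISAM definition, so the ALTER is what switches the copy to InnoDB before the bulk INSERT ... SELECT.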

nginx, memcached and static files

For some use cases it might make sense to deliver static assets via memcached. As nginx is able to talk to memcached directly, this is even more obvious. But sometimes there is no backend to put the files into the cache, so we have to do it ourselves. Here’s a little howto that extends what is written here. I had to extend it because some files were crippled. Therefore, we read the files into a buffer, do some fancy GD operations (mainly not to lose transparency on PNG and GIF), and finally push the buffer into memcached, from where nginx takes it.

Nginx expects DOCUMENT_URI as key and the file as value.

First of all, you will need the right nginx config, like this:

  # this is just to make sure we deliver the proper content type
  location ~* \.jpg$ {
    gzip off;
    expires      max;
    # this is to know whether the caching works
    add_header   x-header-memcached true;
    default_type image/jpeg;
    # here we define the key
    set $memcached_key $uri;
    memcached_pass localhost:11211;
    memcached_buffer_size 8k;
    # cache misses will redirect to this
    error_page 404 = @cache_miss;
  }

  location ~* \.png$ {
    gzip off;
    expires      max;
    add_header x-header-memcached true;
    default_type       image/png;
    set $memcached_key $uri;
    memcached_pass     localhost:11211;
    error_page         404 = @cache_miss;
  }

  location ~* \.gif$ {
    gzip off;
    expires      max;
    add_header x-header-memcached true;
    default_type       image/gif;
    set $memcached_key $uri;
    memcached_pass     localhost:11211;
    error_page         404 = @cache_miss;
  }

  # here is what we do in case a file could not be found
  location @cache_miss {
    # we need to specify the document root as @cache_miss is a virtual location
    root /var/www/;
    # this is just for us so that we know the delivered file came from the file system
    add_header x-header-memcached false;
  }

And finally, we need a script that imports all the files into the cache, like the one below. Important: you have to run the script in your document root, as the final file operations are performed with the mangled file name.

$mylist=rscandir("/var/www/"); // where to scan
$srch = array('/var/www/'); // file name mangling
$newval = array(''); //here you could enter something that will be replaced 

$memcache = new Memcache;
$memcache->addServer(''); // Add more servers if you like to

// here we recursively scan the given directory and add .jpg, .png and .gif files to the array
function rscandir($base='', &$data=array()) {
  $array = array_diff(scandir($base), array('.', '..'));

  foreach ($array as $value) {
    if (is_dir($base."/".$value)) {
      $data = rscandir($base."/".$value, $data);
    } elseif (is_file($base."/".$value)) {
      $rest = substr($value, -4);
      if ((!strcmp($rest, '.jpg')) || (!strcmp($rest, '.png')) || (!strcmp($rest, '.gif'))) {
        $data[] = $base."/".$value;
      }
    }
  }
  return $data;
}

while (list($key, $val) = each($mylist)) {
  // the URL will be used as key (the document root is stripped via $srch/$newval)
  $url = str_replace($srch, $newval, $val);
  // detect the file type from the file's content
  $mime_type = exif_imagetype($val);
  ob_start();
  if ($mime_type == 2) { // JPEG
    $image = imagecreatefromjpeg($val);
    imageinterlace($image, true);
    imagejpeg($image);
  } elseif ($mime_type == 3) { // PNG
    $image = imagecreatefrompng($val);
    imageinterlace($image, true);
    imagealphablending($image, true);
    imagesavealpha($image, true);
    imagepng($image);
  } elseif ($mime_type == 1) { // GIF
    $image = imagecreatefromgif($val);
    imageinterlace($image, true);
    $background = imagecolorallocate($image, 0, 0, 0);
    imagecolortransparent($image, $background);
    imagegif($image);
  }
  // push the buffered image into memcached, from where nginx takes it
  $memcache->set($url, ob_get_clean(), 0, 0);
  imagedestroy($image);
  echo "$val - $mime_type\n";
}

modify a RequestURI with nginx

Sometimes it is necessary to modify the RequestURI, e.g. to strip some GET pairs. Here is how this can be done using nginx. Though we all know that if is evil, it is sometimes quite handy.

# the »Evil Facebook-Like-Hack.«
location / {
  # initialize variables with empty values
  set $01 "";set $02 "";set $03 "";set $04 "";
  set $05 "";set $06 "";set $07 "";set $08 "";

  # if a GET key exists, copy its value (with a trailing & as separator)
  if ($arg_productName) { set $01 "productName=$arg_productName&";}
  if ($arg_linkId) { set $02 "linkId=$arg_linkId&";}
  if ($arg_target) { set $03 "target=$arg_target&";}
  if ($arg_id) { set $04 "id=$arg_id&";}
  if ($arg_linkValue) { set $05 "linkValue=$arg_linkValue&";}
  if ($arg_groupId) { set $06 "groupId=$arg_groupId&";}
  if ($arg_articleId) { set $07 "articleId=$arg_articleId&";}
  if ($arg_version) { set $08 "version=$arg_version&";}

  # if a GET key has fb_ in its name, then do the funky URL magic.
  if ($args ~* fb_ ) {
    # clear the GET array so that the URI has no GET pairs
    set $args "";
    # rewrite the given URL and add the GET pairs we created above
    rewrite ^ $scheme://$host$uri?$01$02$03$04$05$06$07$08 break;
  }
}
The only drawback is that the URL we receive contains at least a ? even when none of the given keys exists – but that’s a minor thing, I’d say.


find out if your Nginx is really serving the right hosts

Quite a while ago, I posted an article on how to find out whether Apache is still serving the hosts it is configured for. This time, I will show the bash code for doing the same with nginx.

for entry in $(grep -h server_name _config/nginx/* | sed -e 's/server_name//g' -e 's/;//g' -e 's/#.*$//g' -e 's/_//g' );do echo -n $(nslookup "$entry" | grep -A1 Name | grep Address | awk '{print $2}');echo " $entry";done
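To see what the sed pipeline extracts, here is a stand-alone run on a sample config line (the domain names are made up):

```shell
echo '    server_name example.com www.example.com; # our hosts' | \
  sed -e 's/server_name//g' -e 's/;//g' -e 's/#.*$//g' -e 's/_//g'
# prints: example.com www.example.com (with leftover whitespace around it)
```

The for-loop's word splitting then turns each remaining token into one entry, which gets resolved via nslookup.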

Gentoo ebuild: Nginx With Support For Upstream Fair Proxy Load Balancer And HTTP-Auth against LDAP

In a previous article we already presented a modified nginx ebuild containing support for gnosek’s Upstream Fair Proxy Load Balancer. Now, we wanted to have HTTP-Auth against a LDAP server. So we crawled the almighty internet and stumbled over nginx-auth-ldap. Therefore, we updated our last ebuild and extended it with this plugin.

As documentation, there is only a small config example. However, anyone who is already familiar with HTTP-Auth against LDAP with Apache and/or Lighttpd will find this extension pretty straightforward.

Here you can have a look at the ebuild and there is the SVN checkout path.

The new USE flag is called auth_ldap; gnosek’s plugin can be used with upstream_fair. Add them to your NGINX_MODULES_HTTP in /etc/make.conf.

Have a lot of fun!


Gentoo ebuild: Nginx With Support For Upstream Fair Proxy Load Balancer

Nginx is a powerful web server and therefore our choice. Unfortunately, the Gentoo ebuild is missing one essential extension: a load balancer that is querying the servers not via round robin but by their current load. Therefore, we had to extend the default ebuild to support gnosek’s upstream_fair module.

How to use this ebuild:

  1. download the ebuild (currently, we only have one for nginx 1.1.12)
  2. place it anywhere portage has access to (e.g. /usr/portage/www-servers/nginx/)
  3. run ebuild nginx-1.1.12-r2.ebuild digest
  4. add upstream_fair to NGINX_MODULES_HTTP in /etc/make.conf
  5. add your keywords to /etc/portage/package.keywords
  6. emerge nginx
  7. done.

dovecot: remove maildirs

We are running dovecot as MDA. Dovecot gets its user details from OpenLDAP and adds new users automatically. But removing a user in LDAP does not mean it gets removed in Dovecot as well. To make this a little more comfortable, I created this little script:



  for DOMAINDIR in $MAILDIR/*;do
    if [ -d $DOMAINDIR ];then
      DOMAIN=$(echo $DOMAINDIR | sed 's/\// /g' | awk '{print $4}')
      for USERDIR in $DOMAINDIR/*;do
        if [ -d $USERDIR ];then
          USER=$(echo $USERDIR | sed 's/\// /g' | awk '{print $5}')
          MAIL="$USER@$DOMAIN"
          EXISTS=$(ldapsearch -H $LDAP_HOST -D $LDAP_BIND_USER \
                     -w $LDAP_BIND_PASS -x -b $LDAP_BASE_DN \
                     mail=$MAIL mail | grep -c dn:)
          if [ "$EXISTS" == "0" ];then
            echo "$MAIL is obsolete."
            echo -n "Removing userdir..."
            rm -rf $USERDIR
            echo " done."
          fi
        fi
      done
    fi
  done

What the script does is crawl every subdirectory of MAILDIR. This is where we extract the domain names through a sed/awk combination. For every domain name, we crawl its user base. A similar sed/awk combination is used to extract the user names. Then we build an eMail address out of the two retrieved bits of information. Now we are ready to check this mail address against the LDAP. If we receive a negative answer (the address is not found and therefore there is no “dn”), we can be sure the eMail account has been removed. Finally, we remove the mail directory of the non-existing user.
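Here is a stand-alone demonstration of the path parsing, assuming a hypothetical maildir layout of /var/vmail/domains/&lt;domain&gt;/&lt;user&gt; (so MAILDIR would be /var/vmail/domains; the awk field numbers depend on the path depth):

```shell
# hypothetical user directory as the loop would see it
USERDIR=/var/vmail/domains/example.com/alice
# replacing / with spaces turns the path into awk fields
DOMAIN=$(echo $USERDIR | sed 's/\// /g' | awk '{print $4}')
USER=$(echo $USERDIR | sed 's/\// /g' | awk '{print $5}')
echo "$USER@$DOMAIN"
# → alice@example.com
```

If your MAILDIR sits at a different depth, the $4/$5 field numbers have to be adjusted accordingly.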

The script itself should be handed over to cron, I’d say.

Here is the download for the lazy.


Locate all Apple serial numbers in a subnet

The command ioreg -l | grep IOPlatformSerialNumber will show you your Mac’s serial number. With this, you can find out when your Mac was built if you enter the number into the corresponding lookup page.

For a whole network of Macs, this can become very tedious. However, there is a nice way to solve this task more conveniently. The following script is your little helper.

It needs two arguments: an IP range (e.g. 192.168.0) and a user that is allowed to log into every Mac. Of course, it would be great if every Mac already had your public key in its authorized_keys file.

You will need nmap installed, since we use it to check whether port 22 is open.

What the script does is check every IP from IP-range.1 to IP-range.254 for an open port 22. If it is open, we try to log in and run the ioreg command. The output is grep’d and awk’d so that we receive nothing but the serial number. If we have the serial, we add it together with the host’s IP to a file.
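The port check simply counts lines containing open in nmap’s output; here it is simulated with a canned sample instead of a real scan:

```shell
# simulated nmap output; the real script runs: nmap $IP -p22 | grep -c open
printf 'PORT   STATE SERVICE\n22/tcp open  ssh\n' | grep -c open
# → 1
```

A count of 1 means port 22 answered, so the script proceeds with the ssh login.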

if [ "$#" != "2" ] ; then
  echo "usage $0 IP-range ssh-user";
  echo "IP-range should be of format xxx.yyy.zzz (no trailing .)"
  echo "ssh-user must exist on every accessible host"
  exit 1;
fi

_IP_RANGE=$1
_SSH_USER=$2
_I=1

while [ ${_I} -lt 254 ]; do
  _CURRENT_IP="${_IP_RANGE}.${_I}"
  if [ $(nmap ${_CURRENT_IP} -p22 | grep -c open ) -eq 1 ]; then
    ssh-keyscan ${_CURRENT_IP} 2>/dev/null 1>> ~/.ssh/known_hosts
    _CURRENT_SERIAL=$(ssh ${_SSH_USER}@${_CURRENT_IP} ioreg -l | grep IOPlatformSerialNumber | awk '{print $4}')
    if [ ! -z "${_CURRENT_SERIAL}" ];then
      echo "${_CURRENT_IP} ${_CURRENT_SERIAL}" >> ${_IP_RANGE}.txt
    fi
  fi
  _I=$((_I + 1))
done