Gallery3 Installed on a Nginx Server

A few months ago I converted my web cluster from Apache to Nginx. Initially I was only concerned with my WordPress sites, as they get, by far, the most traffic. Their conversion went well, with only very minor issues.

Since I was happy with the results, I quickly moved on to my MediaWiki site. Its conversion went very well and actually ended up allowing the site to accept requests both with and without index.php in them.

During this time, my other random sites, using odd or old software, were simply proxied back to the still-running Apache install on my server. Using Nginx's proxy configuration, I was able to just change the port Apache listened on and leave the old configurations as they were.  This kept everything up, but didn't give me any of the benefits of Nginx.  Images from my Gallery3 gallery were still being served by Apache, through Nginx.

After getting everything else working, I decided to start on Gallery3.  Since Gallery3 uses .htaccess files to secure images, I wasn't sure how to go about this.  After some investigating, I found that Gallery3 only uses the .htaccess files to block downloading of the actual images themselves; the PHP pages are still secured through normal permissions.

Configuring Nginx

Setting up Gallery3 under Nginx is pretty straightforward, once you account for the .htaccess files discussed above.  As with all my Nginx setups, I'm using php-fpm via a UNIX socket.  A TCP connection (such as 127.0.0.1:9000) can be substituted with only minor tweaks.
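Note that gallery3.conf below hands PHP off with fastcgi_pass php;, which expects an upstream named php to exist in your nginx.conf (the full WordPress example later in this post defines one). A minimal sketch, with an assumed socket path — match it to your php-fpm configuration:

```nginx
# Upstream "php" referenced by fastcgi_pass in gallery3.conf; the socket
# path here is an example.
upstream php {
    server unix:/var/run/php-fpm.socket;
    # For a TCP connection instead, substitute something like:
    # server 127.0.0.1:9000;
}
```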

To start, create a new file in your Nginx configuration directory named gallery3.conf.  This file is generic enough that you will be able to use it for any Gallery3 install on the server; your site-specific information will live in your main nginx.conf file.

Copy the below into gallery3.conf:

location ~* \.(js|css|png|jpg|jpeg|gif|ico|ttf)$ {
    expires 180d;
#    if_modified_since off;
#    add_header Last-Modified "";
}

if (!-e $request_filename) {
    rewrite ^/(.+)$ /index.php?kohana_uri=$1 last;
}

location /var/ {
    try_files $uri /index.php?kohana_uri=$uri;
}

location = /downloadalbum/zip/album/1 {
    return 404;
}

location ~* \.php$ {
    include fastcgi_params;
    fastcgi_index  index.php;
    fastcgi_split_path_info ^(.+\.php)(.*)$;
    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    fastcgi_param  PATH_INFO        $fastcgi_path_info;
    fastcgi_pass php;

    index index.php;

    if (-f $request_filename) {
        rewrite ^/var/albums/(.*)$ /file_proxy/$1 last;
        expires max;
        break;
    }

    if (!-e $request_filename) {
        rewrite ^/(.+)$ /index.php?kohana_uri=$1 last;
    }
}

Now you need to set up your new site in Nginx's main config file.  Below is the most basic setup; it assumes you have a running Nginx server and are only adding this site to it.  Note that the server name and the root location of the files need to be updated.

Added to nginx.conf:

server {
    listen 80;
    server_name gallery.example.com;
    root /var/www/gallery.example.com;
    include gallery3.conf;
}

This next part really sucks, and I wish there were a way around it: in your Gallery3 installation, go into application/config/config.php and modify the "index_page" setting as shown below.

application/config/config.php:

$config["index_page"] = "";

What sucks about this is that you will need to do it after EVERY update. If you update Gallery3 and then lose your style sheets and JavaScript, this is why.

The config file should not live in a location that gets updated automatically with new versions of the software.
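Since the edit has to be reapplied after each upgrade, it's worth scripting. Here's a hedged sketch; the file path and the exact original value of index_page are assumptions, so check your install before relying on it:

```shell
# Hypothetical helper: re-apply the index_page fix after a Gallery3 upgrade.
CONFIG=application/config/config.php

# In a real install this file already exists; it is created here only so
# the example is self-contained.
mkdir -p application/config
[ -f "$CONFIG" ] || echo '$config["index_page"] = "index.php";' > "$CONFIG"

# Blank out the index_page setting in place.
sed -i 's/\$config\["index_page"\] = "index\.php";/$config["index_page"] = "";/' "$CONFIG"
grep index_page "$CONFIG"
```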

MediaWiki Installed on a Nginx Server

Here is my quick guide to running MediaWiki on a Nginx server using php-fpm.

There are many ways of doing this. I like pretty URLs, so this configuration will remove index.php from the URL. This is great, but it will not redirect links containing index.php to the new index.php-less links. You also need to configure MediaWiki to use short URLs. Also, the below configuration does assume you're using a UNIX socket connection to php-fpm.
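For the short-URL half of that, MediaWiki itself is configured in LocalSettings.php. A minimal sketch of the relevant settings, appended here with a heredoc (this assumes the wiki lives at the web root; adjust $wgScriptPath otherwise):

```shell
# Append short-URL settings to LocalSettings.php (run from the wiki root).
# These pair with the nginx rewrite below, which maps /Page to
# /index.php?title=Page.
cat >> LocalSettings.php <<'EOF'
$wgScriptPath  = "";
$wgArticlePath = "/$1";
$wgUsePathInfo = true;
EOF
```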

So here we go, inside your http { block, add the following.

/etc/nginx/nginx.conf:

    server {
        listen 80;
        server_name wiki.example.com;
        root /var/www/wiki.example.com;
        include mediawiki.conf;
    }

/etc/nginx/mediawiki.conf:

# MediaWiki Config
location ~ \.htaccess {
        deny all;
}

location ~* /sitemaps/ {
        autoindex  on;
}

# Remove index.php from the URI
rewrite ^/index.php/(.*) /$1  permanent;

location / {
        if (!-e $request_filename) {
                rewrite ^/([^?]*)(?:\?(.*))? /index.php?title=$1&$2 last;
        }
        if ($uri ~* "\.(ico|css|js|gif|jpe?g|png)(\?[0-9]+)?$") {
                expires max;
                break;
        }
}

location ~* \.php$ {
        if (!-e $request_filename) {
                return 404;
        }

        include /etc/nginx/fastcgi_params;

        fastcgi_pass  unix:/var/run/php-fpm.socket;
        fastcgi_index index.php;

        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
}

If you only have a single wiki on your server, there's really no reason to create a separate mediawiki.conf file; you can just add its contents directly to the server block.

Installing WordPress on Nginx

I, like most people, started out using Apache and really didn't see anything wrong with it.  It's relatively easy to set up, it's used by most sites so support is a snap, and it's installed by default on most distributions. The thing is, it's slow; the ease of use is paid for with speed.

So this is where Nginx comes in.  Nginx is not the easiest software to set up.  It requires you to tell it what file types you will be using and how to handle them.  It requires you to tell it where scripts live and where static files live.  It also requires you to run an external PHP server that speaks FastCGI.

PHP-FPM Setup

Nginx does not provide FastCGI for you (FastCGI is what your web server uses to interact with WordPress’s PHP code), so you’ve got to have a way to spawn your own FastCGI processes.

My preferred method of running FastCGI is php-fpm.  Since I'm using Fedora, and there is a yum package already built for php-fpm, it's quick and easy to install.

Installing php-fpm is pretty straightforward:

yum install php-fpm

After installing php-fpm you have to start it. The rpm for php-fpm installs the service script for you; you only need to enable starting at boot and start the process.

chkconfig php-fpm on
service php-fpm start

Nginx

The next part is to install Nginx on your server.  Using yum on Fedora, this is as straightforward as installing php-fpm.

yum install nginx

Once Nginx is installed, you need to set it up to serve your site.

Configuring Nginx for WordPress

So we now have the needed software installed, next we need to set it all up. Below is the config for a standard, simple WordPress site named example.com.
nginx.conf:

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] $http_host "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    #tcp_nopush     on;
    rewrite_log     on;
    keepalive_timeout  5;
    index              index.php index.html index.htm;

    # Upstream to abstract backend connection(s) for PHP.
    upstream php {
        server unix:/var/run/php-fpm.socket;
    }

    server {
        listen 80;
        server_name example.com;
        server_name www.example.com;
        root /var/www/example.com;

        if ($http_host != "example.com") {
                rewrite ^ http://example.com$request_uri permanent;
        }

        include wordpress.conf;
    }
}

wordpress.conf:

# WordPress single blog rules.
# Designed to be included in any server {} block.

# Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
}

# Deny access to any files with a .php extension in the uploads directory
location ~* ^/wp-content/uploads/.*\.php$ {
        deny all;
        access_log off;
        log_not_found off;
}

# Deny access to any files with a .php extension in the uploads directory for multisite
location ~* /files/(.*)\.php$ {
        deny all;
        access_log off;
        log_not_found off;
}

# This order might seem weird - this is attempted to match last if rules below fail.
# http://wiki.nginx.org/HttpCoreModule
location / {
        try_files $uri $uri/ /index.php?$args;
}

# Add trailing slash to */wp-admin requests.
rewrite /wp-admin$ $scheme://$host$uri/ permanent;

# Directives to send expires headers and turn off 404 error logging.
location ~* \.(js|css|png|jpg|jpeg|gif|ico|ttf)$ {
        expires 180d;
        log_not_found off;
}

# Pass all .php files onto a php-fpm/php-fcgi server.
location ~ \.php$ {
        # Zero-day exploit defense.
        # http://forum.nginx.org/read.php?2,88845,page=3
        # Won't work properly (404 error) if the file is not stored on this server, which is entirely possible with php-fpm/php-fcgi.
        # Comment the 'try_files' line out if you set up php-fpm/php-fcgi on another machine.  And then cross your fingers that you won't get hacked.
        try_files $uri =404;

        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        #NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini

        include fastcgi_params;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
#       fastcgi_intercept_errors on;
        fastcgi_pass php;
}

Configuring WordPress to use Memcached

When using the self-hosted WordPress software, it's generally a good idea to cache as much as possible.  Object caching allows you to store parts of pages in memory for quicker retrieval, since the server will not need to look up as much from the SQL database.

Installing the needed parts

To start out, you will need to have memcached installed on your server. If you're using Fedora, you may install memcached via yum as follows.

yum -y install memcached php-pecl-memcached perl-Cache-Memcached

Configuring memcached

After installing memcached you need to configure it. If working on Fedora, using the yum install discussed above, you will need to change the memcached configuration by modifying its sysconfig file, located at /etc/sysconfig/memcached.

A default configuration may look like this.

PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1"

If you would like to share this memcached server with other web servers, change the address from 127.0.0.1 to the server's actual address.
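For example, if the server's LAN address were 192.168.1.10 (a made-up address), the last line would become:

```
OPTIONS="-l 192.168.1.10"
```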

To set memcached to start automatically when the server gets rebooted, run:

chkconfig memcached on

And of course, don’t forget to start it:

service memcached start

Configuring WordPress

After memcached is installed, you need to configure the WordPress side.

Next you should install the Memcached Object Cache plugin, but be careful: this is not a normal plugin.  You should not activate it as you would a normal plugin. Instead, download it as you normally would, then copy the object-cache.php file into your wp-content folder.

From the root of your WordPress install, run the following:

cp wp-content/plugins/memcached/object-cache.php wp-content/

Now we need to configure WordPress to use the memcached server. Add the following near the end of your wp-config.php file.

global $memcached_servers;
$memcached_servers = array('default' => array('127.0.0.1:11211'));

Note that we’re using the same server and port (127.0.0.1:11211) as was configured above when we set up memcached.

Checking in on memcached

Memcached is one of those things that just sort of runs. There’s not much direct feedback, besides the speed difference on your site.

One quick tool is memcache-top, which will show you the current status of your memcached server.

To install, run the following.

wget http://memcache-top.googlecode.com/files/memcache-top-v0.6
chmod +x memcache-top-v0.6
./memcache-top-v0.6

Running ./memcache-top-v0.6 with no arguments will assume the default configuration we used here.

WordPress Auto Backup Script

A quick backup script for WordPress. If you pass it a config file containing the block of variables below, the script will run automatically against your WordPress install.
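Here is a hypothetical example of such a config file (written out as example.com.conf; every value is a placeholder you would change for your site, and the restore/rsync variables are omitted for brevity):

```shell
# Example config for the backup script; all values are placeholders.
cat > example.com.conf <<'EOF'
BKNAME=example.com
DIR=/var/www/example.com
DBHOST=localhost
DBNAME=wordpress
DBUSER=wpuser
DBPASS=secret

# Where local backups land
BKDIR=/backup/example.com

# Offsite method: rsync or scp
OFFSITE=scp
SCPUSER=backup
SCPHOST=backup.example.com
SCPDIR=/backups/example.com
EOF
```

You would then invoke the script as, e.g., ./wp-backup.sh example.com.conf backup (the script filename is an assumption; the post doesn't name the file).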

#!/bin/bash
BKNAME=
DIR=
DBHOST=
DBNAME=
DBUSER=
DBPASS=

# Backup Variables
BKDIR=

# Restore Variables
RSDIR=
RSARDIR=
RSUSER=
RSGROUP=

OFFSITE=

# Offsite Rsync
RSYNCHOST=
RSYNCUSERNAME=
RSYNCPASSWORD=

# Offsite SCP
SCPUSER=
SCPHOST=
SCPDIR=

if [ `date +%e` =  1 ]; then
 DAY=`date +%b`
 DATE=$DAY.`date +%Y-%m-%d`
else
 DAY=`date +%w`
 DATE=$DAY.`date +%Y-%m-%d`
fi

if [ -e $1 ]; then
 source $1
 CONFIGSTATUS="Using config file found at $1"
else
 echo "No Config file found"
 exit
fi

if [ -z "$DBHOST" ]; then
  DBHOST=`grep "DB_HOST" $DIR/wp-config.php |sed "s/define('DB_HOST', '//" |sed "s/');//"`
  DBNAME=`grep "DB_NAME" $DIR/wp-config.php |sed "s/define('DB_NAME', '//" |sed "s/');//"`
  DBUSER=`grep "DB_USER" $DIR/wp-config.php |sed "s/define('DB_USER', '//" |sed "s/');//"`
  DBPASS=`grep "DB_PASSWORD" $DIR/wp-config.php |sed "s/define('DB_PASSWORD', '//" |sed "s/');//"`
fi
BKDIRNAME=$BKNAME

if [ `date +%e` = 1 ]; then
 ARCHIVE=1
else
 if [ `date +%w` = 0 ]; then
  ARCHIVE=1
 else
  ARCHIVE=0
 fi
fi

if [ ! -e $BKDIR ];then
 mkdir -p $BKDIR
fi

chown -R root:root $DIR/
chown -R apache:apache $DIR/wp-content $DIR/wp-admin/update.php
chown apache:apache $DIR

if [ -e $DIR/sitemap.xml ]; then
  chown apache:apache $DIR/sitemap.xml
fi

if [ -e $DIR/sitemap.xml.gz ]; then
  chown apache:apache $DIR/sitemap.xml.gz
fi
case "$2" in
restore)
  rm -rf $RSARDIR/$BKNAME.$DAY.*
  rm -rf $RSDIR/$BKNAME
  rm -rf $DIR/$BKNAME.$DAY.*
  rm -rf $DIR/$DBNAME.*
  if [ ! -e $RSARDIR ]; then
   mkdir -p $RSARDIR
  fi
  export RSYNC_PASSWORD=$RSYNCPASSWORD
  rsync -rvzht --delete --stats $RSYNCUSERNAME@$RSYNCHOST::ibackup/odin/$BKNAME/$BKNAME.$DAY.* $RSARDIR/ --port=45873 >> $DIR/$BKNAME.log 2>&1
  cd $RSARDIR
  md5sum -c $BKNAME.$DATE.tgz.md5 > /dev/null 2>&1
  RESTOREMD5=`echo $(($?))`
  if [ $RESTOREMD5 = 0 ]; then
   cd $RSDIR/
   cp $RSARDIR/$BKNAME.$DATE.tgz $RSDIR/$BKNAME.$DATE.tgz
   tar -xzf $RSDIR/$BKNAME.$DATE.tgz
   mv $RSDIR$DIR $RSDIR
   cd $RSDIR/$BKNAME
   md5sum -c $DBNAME.$DATE.sql.md5 > /dev/null 2>&1
   RESTORESQLMD5=`echo $(($?))`
   if [ $RESTORESQLMD5 = 0 ]; then
    cd $RSDIR
    mysql -u $DBUSER -p$DBPASS $DBNAME < $RSDIR/$BKNAME/$DBNAME.$DATE.sql > $DIR/$BKNAME.$DATE.log 2>&1
   fi
  fi
  ;;
backup)
  mysqldump -h $DBHOST -u $DBUSER -p$DBPASS $DBNAME > $DIR/$DBNAME.$DATE.sql 2> $DIR/$BKNAME.$DATE.log
  a=$?
  mysqldump -h $DBHOST -u $DBUSER -p$DBPASS $DBNAME --xml > $DIR/$DBNAME.$DATE.xml 2> $DIR/$BKNAME.$DATE.log
  a2=$?
  cd $DIR/
  md5sum $DBNAME.$DATE.sql $DBNAME.$DATE.xml > $DBNAME.$DATE.sql.md5 2>> $DIR/$BKNAME.$DATE.log
  b=$?
  SQLERROR=`echo $(($a+$a2+$b))`
  if [ $SQLERROR != 0 ]; then
   BKNAME=$BKNAME-NOSQL
   echo "Exiting on SQL error"
   echo "DBHOST $DBHOST"
   echo "DBNAME $DBNAME"
   echo "DBUSER $DBUSER"
   echo "DBPASS $DBPASS"
   exit 1
  fi
  tar -czf $BKDIR/$BKNAME.$DATE.tgz --totals $DIR >> $DIR/$BKNAME.$DATE.log 2>&1
  c=$?
  cd $BKDIR/
  md5sum $BKNAME.$DATE.tgz > $BKNAME.$DATE.tgz.md5 2>> $DIR/$BKNAME.$DATE.log
  d=$?
  TOTAL=`ls -lh $BKDIR/$BKNAME.$DATE.tgz |awk '{ print $5 }'`
  ERROR=`echo $(($a+$a2+$b+$c+$d))`
  if [ $ERROR = 0 ]; then
   STATUS=Good
  else
   STATUS=Failed
  fi

  echo "" >> $DIR/$BKNAME.$DATE.log
  echo "Starting GPG encryption" >> $DIR/$BKNAME.$DATE.log
  for a in $GPGKEY
  do
   GPGKEYD="$GPGKEYD -r $a"
  done
  #gpg -e -r $GPGKEY $BKDIR/$BKNAME.$DATE.tgz
  echo "" >> $DIR/$BKNAME.$DATE.log
  if [ $ARCHIVE = '0' ]; then
   case "$OFFSITE" in
    rsync|RSYNC)
      export RSYNC_PASSWORD=$RSYNCPASSWORD
      rsync -rvzht --delete --stats $BKDIR/ $RSYNCUSERNAME@$RSYNCHOST::ibackup/odin/$BKDIRNAME --port=45873 --exclude=Archive/ >> $DIR/$BKNAME.log 2>&1
    ;;

    scp|SCP)
      echo "" >> $DIR/$BKNAME.$DATE.log
      echo "Starting SCP Transmission" >> $DIR/$BKNAME.$DATE.log
      ssh $SCPUSER@$SCPHOST "mkdir -p $SCPDIR"
      ssh $SCPUSER@$SCPHOST "rm -f $SCPDIR/$BKNAME.$DAY.*"
      scp -q $BKDIR/$BKNAME.$DATE.tgz $SCPUSER@$SCPHOST:$SCPDIR/ 2>> $DIR/$BKNAME.$DATE.log
      scp -q $BKDIR/$BKNAME.$DATE.tgz.md5 $SCPUSER@$SCPHOST:$SCPDIR/ 2>> $DIR/$BKNAME.$DATE.log
      #ssh $SCPUSER@$SCPHOST "chmod 666 $SCPDIR/$BKNAME.$DAY.*"
      echo "" >> $DIR/$BKNAME.$DATE.log
    ;;
   esac
  fi

  echo "final error status is: $ERROR" >> $DIR/$BKNAME.$DATE.log
  sed '/tar: Removing leading */d' $DIR/$BKNAME.$DATE.log > $BKDIR/backuptmp.log
  #echo "INSERT INTO Backup_Log ( System, Backup_Job, Label, Output, Bytes, Status, Log ) VALUES ('`hostname -s`', 'Wiki', '$DIR', '$BKNAME.$DATE', '$TOTAL', '$STATUS', '`cat $BKDIR/backuptmp.log`');" |mysql -t -h localhost -u backup Status
  rm -rf $BKDIR/backuptmp.log
  rm -rf $DIR/$DBNAME.$DAY.*
  rm -rf $DIR/$BKNAME.$DAY.*
  ;;
*)
  echo "The valid commands are backup & restore"
  exit 1
  ;;
esac
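To run this unattended, a cron entry along the following lines works; the script path, config path, and schedule here are all assumptions:

```
# /etc/cron.d/wp-backup (hypothetical): nightly backup at 3:00 AM
0 3 * * * root /usr/local/bin/wp-backup.sh /etc/wp-backup/example.com.conf backup
```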

Dovecot Auto Update Script

Here is a small auto-update script for a Fedora build. This script will automatically install all needed software and start Dovecot once it’s finished.

#!/bin/bash

DIR=/var/src

if [ ! -e $DIR ]; then
  echo "Creating base directory $DIR"
  mkdir -p $DIR
fi

if [ ! -e $DIR/dovecot-2.0 ]; then
  echo "Dovecot 2.0 is not installed, Downloading a fresh copy of Dovecot 2.0"
  cd $DIR
  hg clone http://hg.dovecot.org/dovecot-2.0/ dovecot-2.0 > /dev/null
  if [ $? != 0 ]; then
    echo "Failed to download a fresh copy of Dovecot 2.0"
    exit 1
  fi
  echo "Installing any needed dependencies"
  yum -y install mercurial gcc autoconf automake libtool perl-Text-Iconv gettext gettext-devel gettext-libs openssl openssl-devel sqlite sqlite-devel zlib zlib-devel > /dev/null
  cd $DIR/dovecot-2.0
else
  cd $DIR/dovecot-2.0
  echo "Downloading updated version of Dovecot 2.0"
  echo -n "Current Version is: "
  dovecot --version
  hg pull -u > /dev/null
  if [ $? != 0 ]; then
    echo "Failed to update the Dovecot 2.0 install"
    exit 1
  fi
  dovecotver=0
  hgrepover=1
  dovecotver=`dovecot --version |awk '{print $2}' |sed 's/(//g' |sed 's/)//g'`
  hgrepover=`hg summary |grep parent |sed 's/:/ /g' |awk '{print $3}'`
  if [ ! $dovecotver ]; then
    dovecotver=0
  fi
  if [ ! $hgrepover ]; then
    hgrepover=1
  fi
  if [ $dovecotver == $hgrepover ]; then
    echo "The installed version and the downloaded version are both `dovecot --version`, not updating."
    exit 0
  else
    echo "The installed version is $dovecotver, but the current version on hg is $hgrepover, updating!"
  fi
  rm -rf *
  hg revert --all > /dev/null
  if [ $? != 0 ]; then
    echo "Failed to restore Dovecot 2.0 install"
    exit 1
  fi
fi

echo "Starting to compile Dovecot 2.0"
./autogen.sh > /dev/null 2> /dev/null
if [ $? != 0 ]; then
  echo "Failed to run autogen.sh for Dovecot 2.0"
  exit 1
fi

./configure --prefix=/usr --with-ssl=openssl --with-sqlite --with-mysql --with-ldap --with-zlib > /dev/null 2> /dev/null

if [ $? != 0 ]; then
  echo "Failed to run configure for Dovecot 2.0"
  exit 1
fi

make > /dev/null 2> /dev/null
if [ $? != 0 ]; then
  echo "Failed to run make for Dovecot 2.0"
  exit 1
fi

if [ ! -e $DIR/dovecot-2.0-pigeonhole ]; then
  echo "Dovecot Pigeonhole is not installed, Downloading a fresh copy of Dovecot Pigeonhole"
  cd $DIR
  hg clone http://hg.rename-it.nl/dovecot-2.0-pigeonhole/ > /dev/null
  if [ $? != 0 ]; then
    echo "Failed to download a fresh copy of Dovecot Pigeonhole"
    exit 1
  fi
  cd $DIR/dovecot-2.0-pigeonhole
else
  cd $DIR/dovecot-2.0-pigeonhole
  hg pull -u > /dev/null
  if [ $? != 0 ]; then
    echo "Failed to update the Dovecot Pigeonhole install"
    exit 1
  fi
  rm -rf *
  hg revert --all > /dev/null
  if [ $? != 0 ]; then
    echo "Failed to restore Dovecot Pigeonhole install"
    exit 1
  fi
fi

./autogen.sh > /dev/null 2> /dev/null
if [ $? != 0 ]; then
  echo "Failed to run autogen.sh for Sieve"
  exit 1
fi

./configure --prefix=/usr --with-dovecot=../dovecot-2.0 > /dev/null
if [ $? != 0 ]; then
  echo "Failed to run configure for Sieve"
  exit 1
fi

make > /dev/null 2> /dev/null
if [ $? != 0 ]; then
  echo "Failed to run make for Sieve"
  exit 1
fi

echo "Stopping Dovecot"
service dovecot stop
if [ $? != 0 ]; then
  echo "Failed to stop the running Dovecot process"
fi

cd /var/src/dovecot-2.0
make install > /dev/null 2> /dev/null
if [ $? != 0 ]; then
  echo "Failed to install Dovecot"
  exit 1
fi

cd /var/src/dovecot-2.0-pigeonhole
make install > /dev/null 2> /dev/null
if [ $? != 0 ]; then
  echo "Failed to install Sieve"
  exit 1
fi

rm -rf /usr/local/etc/dovecot
ln -s /etc/dovecot /usr/local/etc/dovecot
rm -rf /usr/etc/dovecot
ln -s /etc/dovecot /usr/etc/dovecot

echo "Restarting Dovecot"
service dovecot start

Converting a MediaWiki database from MySQL to SQLite

So the plan here is to convert a fully working MediaWiki install running on MySQL to run on SQLite instead.  To do this you will need to set up a second MediaWiki install on a test or development system.  Once you are done, you can move your new MediaWiki install wherever you would like.

Backing up your Old MediaWiki Installation

To start, from within your working MediaWiki install, first back up your data.

php maintenance/dumpBackup.php --full --uploads > wiki-backup.xml

This will create a file named wiki-backup.xml in your MediaWiki root directory; copy that file to a safe place.  We aren’t going to touch the MySQL database until we’re done, but it’s always a good idea to have backups safe and sound in case you need them.

The backup script run above does not back up your MediaWiki’s uploaded images and other files.  These files are stored in the ‘images’ folder in the root of MediaWiki’s directory.  You need to back those up also.

tar -czf wiki-images.tgz images/

You should now have everything you need from your old MediaWiki install. Next you will need to install MediaWiki in a new location on your server (or a development server).

Installing your new SQLite MediaWiki site

You should download and install the newest version of MediaWiki.  I always use the development trunk, since this is what’s used on Wikipedia.

During the install process you will be asked which database you would like to use; you must choose ‘SQLite’, since this is the point of reinstalling MediaWiki.  Bring your new install all the way up, so you have a fresh install running on your server.  MediaWiki will create a default home page for you, and you should be able to modify that page.  If you are unable to get MediaWiki installed, or if you have a problem modifying the Main Page after the install, please see the MediaWiki mailing lists or the FAQ for assistance.

Restoring your Data on your new SQLite site

After you have your new SQLite version of MediaWiki installed and working, you need to restore your data.  The database part of this is pretty straightforward.

Start by copying the xml file you created in the first step to your new MediaWiki install.  Then run the following:

php maintenance/importDump.php wiki-backup.xml

Depending on the number of pages you have, this may take quite some time to process. Once it’s done, all your pages should be on your new install (except the Main Page; you will need to copy that manually).

To restore your images and other uploaded files, first extract the tarball you made earlier to a temporary location.

mkdir temporary
cp wiki-images.tgz temporary
cd temporary/
tar -xzf wiki-images.tgz

This will create a bunch of folders in the temporary directory; you need to copy everything in those folders into a single folder. The name of the folder doesn’t matter. I’m using tempimages, but you may use whatever you would like.

cd ../
mkdir tempimages
cp temporary/images/*/*/* tempimages

Now that you have everything in a single folder, import the content of that folder into your new MediaWiki install.

php maintenance/importImages.php tempimages/

And that should do it: you should now have a fully working MediaWiki install using SQLite.