XP-Dev.com: Another Milestone

So, it has been a bit over 3 months since the upgrade to the new platform that runs the current version of XP-Dev.com, and it has been eventful. There was the release that took a whole day, and then there were some functional releases as well. The latest release brings some really cool features to XP-Dev.com.

Another milestone has been reached, and the functionality gap has been narrowing drastically over the past 3 months. However, I will be brave enough to admit that the gap is still there, and at least for me, there's still a mountain to climb ahead.

Enjoy the new releases, and as usual, your feedback is appreciated. Do give a shout in the forums, or just raise a support ticket. You can contact me via this blog as well.

Moving Over to Nginx

Running XP-Dev.com has its set of unique problems, and it has not always been easy. I’ve always tried to run the whole infrastructure on a shoe-string budget at the same time trying not to compromise on quality.

One of the problems is hardware resources.

The truth is: Apache is a memory hog, and to keep things scalable for serving Subversion repositories, I decided to move all PHP websites out of Apache and run them under nginx and PHP-CGI (sudo apt-get install php5-cgi). To be honest, I did not notice any difference in the performance of the websites (Apache/mod_php vs nginx/FastCGI/php-cgi). However, the main motivation for this exercise was to limit the maximum amount of memory that my non-critical PHP websites take, and at the same time give Apache more room to grow for serving the Subversion repositories. I could have run two Apache installations and given them different limits (by tweaking MaxSpare*, MaxRequests* and friends), but that's an outright pain to manage. Moreover, I needed a simple webserver that could just serve static content as well.

And let's not forget the users of virtual private servers (VPS) with limited amounts of memory. Nginx with PHP-CGI is a much more appropriate solution for those memory-limited configurations.

I had a look around, and it was basically down to lighttpd or nginx as a replacement to serve the PHP websites. I picked nginx, as there were some odd bugs with lighttpd serving large files. The FastCGI performance is almost the same (I did not really do any scientific benchmarks). However, the part that really got me sold on these two was that they use a master/worker model rather than the (out of date) one thread/process per client model, which does not scale at all. Both of them are event driven, rather than “client socket” driven. BTW, this includes the awesome J2EE web container Jetty (if you use the SelectChannelConnector).

Migrating the websites across from Apache to nginx/FastCGI/php-cgi was an absolute breeze, and here are a few pointers that will help ease the burden.


Just to clarify: in the Apache/mod_php world, PHP files are served by the Apache process itself. The strategy under nginx is to have nginx pass the request on to a set of long-running php-cgi processes that do the actual PHP processing. The response is then passed back to nginx, which sends it on to the web browser.


Use the English Nginx wiki extensively. There’s a lot of documentation there on configuring and tweaking nginx, especially the module reference pages. Here’s a quick and dirty howto on getting nginx+fastcgi and php-cgi working.

PHP FastCGI Start/Stop Scripts

Save yourself the trouble of writing a custom PHP FastCGI start/stop script. Install lighttpd and use its spawn-fcgi script wrapper. It's really going to save you a lot of painful hours. I wrote a simple wrapper around that script, as I wanted PHP-CGI to start up on every server boot, or whenever I wanted a quick restart of the processes. You might want to adjust the variables pidfile and cgidir for your setup.


#!/bin/sh

# Adjust these for your setup
pidfile=/var/run/php-cgi/php-cgi.pid
cgidir=/var/run/php-cgi
sock=$cgidir/unix.sock

me=`whoami`
if [ "$me" != "root" ]; then
        echo Not root!
        exit 1
fi

[ ! -d $cgidir ] && echo creating $cgidir && mkdir $cgidir && chown www-data.www-data $cgidir

# Kill the old php-cgi processes, if any are still running
if [ -f $pidfile ]; then
        pid=`cat $pidfile`
        echo Killing $pid
        kill $pid
        rm $pidfile
        sleep 1
fi

/usr/bin/spawn-fcgi -f /usr/bin/php-cgi -s $sock -C 5 -P $pidfile -u www-data -g www-data

# Make sure nginx (running as www-data) can use the socket
[ -S $sock ] && chown www-data.www-data $sock
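If you want the wrapper to run on every boot on a Debian-style system, one way (the script name and paths here are just an example, not from my actual setup) is to drop it into /etc/init.d and register it:

```
# Hypothetical installation; adjust the script name and paths to your setup
cp php-fastcgi /etc/init.d/php-fastcgi
chmod +x /etc/init.d/php-fastcgi
update-rc.d php-fastcgi defaults    # run at boot
/etc/init.d/php-fastcgi             # or invoke by hand for a quick restart
```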

Stop serving .htaccess

Plenty of web apps out there have built-in support for Apache, and include .htaccess files in their distribution to reduce the configuration overhead for the installer. However, nginx will serve these files by default, which may be fine in most cases, but it's always good practice to deny access to them. A simple nginx config does the trick:

location ~ /\.ht {
    deny  all;
}

Serving PHP files

To serve PHP files, nginx will pass the request to the PHP-CGI handlers.

location ~ .*\.php$ {
	fastcgi_pass   unix:/var/run/php-cgi/unix.sock;
	fastcgi_index  index.php;
	include /etc/nginx/fastcgi_params;
	fastcgi_param  SCRIPT_FILENAME  /home/rs/local/wordpress/$fastcgi_script_name;
}

Notice that I've included a /etc/nginx/fastcgi_params file above. This file contains all the regular FastCGI directives, and I've put it in a separate file to avoid too much repetition. The contents of /etc/nginx/fastcgi_params are below:

fastcgi_param  QUERY_STRING       $query_string;
fastcgi_param  REQUEST_METHOD     $request_method;
fastcgi_param  CONTENT_TYPE       $content_type;
fastcgi_param  CONTENT_LENGTH     $content_length;

fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
fastcgi_param  REQUEST_URI        $request_uri;
fastcgi_param  DOCUMENT_URI       $document_uri;
fastcgi_param  DOCUMENT_ROOT      $document_root;
fastcgi_param  SERVER_PROTOCOL    $server_protocol;

fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
fastcgi_param  SERVER_SOFTWARE    nginx/$nginx_version;

fastcgi_param  REMOTE_ADDR        $remote_addr;
fastcgi_param  REMOTE_PORT        $remote_port;
fastcgi_param  SERVER_ADDR        $server_addr;
fastcgi_param  SERVER_PORT        $server_port;
fastcgi_param  SERVER_NAME        $server_name;

WordPress Rewrite

The final tip is for all the WordPress junkies out there. To get nice URLs for WordPress, you will need the following rewrite directive. If I'm not mistaken, one will be given to you for Apache when you're setting up custom URLs via the admin screen, but not for nginx:

if (!-e $request_filename) {
    rewrite ^(.+)$ /index.php?q=$1 last;
}

And that's about it. I really do hope these tips will help someone out there. I know it would have shaved a couple of hours off my setup time had I known them beforehand.

Ext3 – handling large number of files in a directory

If you've used Linux in the past, I am pretty sure you've heard of the Ext3 file system. It is one of the most common file systems out there, used mainly on Linux-based systems.

I've noticed something really annoying about how it handles large numbers of files in a single directory. Essentially, I have a directory with almost a million files, and I found that creating a new file in it took ages (in the region of tens of seconds), which is not ideal at all for my purposes.

After some reading, and much research, I learnt that Ext3 stores directory indices in a flat table, which causes much of the headache when a directory contains many files. There are a few options.

One option is to restructure the directory so that it does not contain that many files. I did some tests, and on a default (untuned) Ext3 partition, each subsequent write degrades horribly past roughly 2000 files. So, keeping a directory to within 2000 files should be fine.
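One common way to do that restructuring (a sketch of mine, not from my original setup) is to shard files into nested subdirectories keyed on a hash of the filename, so no single directory ever grows large:

```python
import hashlib
import os


def sharded_path(base, name, levels=2):
    """Return a nested path for `name`, derived from a hash of the filename."""
    digest = hashlib.md5(name.encode('utf-8')).hexdigest()
    # Take two hex characters per level, e.g. 'ab/cd' -> 256 buckets per level
    parts = [digest[i * 2:(i + 1) * 2] for i in range(levels)]
    return os.path.join(base, *parts, name)


# A million files spread over 256 * 256 buckets averages ~15 files per
# directory, comfortably under the ~2000-file mark where untuned Ext3 crawls.
print(sharded_path('/tmp/listdir', '123456'))  # -> /tmp/listdir/e1/0a/123456
```

The lookup is deterministic, so the same filename always maps back to the same subdirectory when you want to read the file later.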

Second, enable the dir_index option on the Ext3 file system. Run the following as root and you should find that things improve a lot. Do note that dir_index only takes effect for directories created after it is enabled; to rebuild the indices of existing directories, you will need to run e2fsck -fD on the unmounted partition. The indexing will also take up more space, but then hard disk space is not too expensive nowadays:

$ sudo tune2fs -O dir_index /dev/hda1

Finally, just use something like ReiserFS, which stores directory contents in a balanced tree. It is pretty darn fast, and you don't have to muck around tweaking things.

If you've got your main partition formatted as Ext3, and can't really afford to reformat it as ReiserFS, there is an alternative: create a blank file, format it as a ReiserFS file system, and mount it using loopback.

So, let's create the file first. Its size depends on how much data you need to handle; in this example, I'll just create a roughly 100 MB file full of zeros (100,000 blocks of 1 KB each):

$ dd if=/dev/zero of=reiser.img bs=1k count=100000

Next, format the file using ReiserFS as below. It will complain about the file ‘reiser.img’ not being a special block device (and we know that!). Just say yes and carry on.

$ mkreiserfs -f reiser.img

Finally, mount it where you would like to read/write files into it (need to do this as root):

$ sudo mount -t reiserfs -o loop reiser.img /tmp/listdir

You might need to do a chown so that your normal user can write into it. Moreover, if you need it mounted during boot, do remember to put it in /etc/fstab!
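For reference, a hypothetical /etc/fstab line for the loopback mount could look like this (the image path is made up for the example; use wherever you actually saved reiser.img):

```
/home/rs/reiser.img  /tmp/listdir  reiserfs  loop  0  0
```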

FYI, I used the Python script below to see how long it took to write new files:

import time

# Create a million empty files in /tmp/listdir, timing only the file
# creation itself, then report the average time per file.
count = 1000000
total = 0.0
for i in range(count):
	if i % 1000 == 0:
		print 'Creating %i' % i
	start = time.time()
	open('/tmp/listdir/%s' % i, 'w').close()
	total += (time.time() - start)
print 'Avg is %0.8f' % (total / count)