Asymmetric Follow, Pub/Sub and Systems Design

James Governor mentions a very interesting pattern of web 2.0 – asymmetrical follow. In a nutshell, it is an unbalanced communication network – some nodes on the network (hubs) tend to have a lot of inbound links compared to others. In other words, it is a situation where popular people's words/thoughts/opinions/tweets get read by the masses, while they (the popular people) do not reciprocate.

The thing is, asymmetrical follow exists everywhere. Celebrities have a lot of inbound links (tabloids, fans, press, etc), but do not necessarily have a reverse link back. Blogs are by nature asymmetrical as well – the blogger publishes a post that's read by visitors, and comments on the blog, or even pingbacks, do not necessarily get read or responded to by the poster. There is nothing rude or anti-social about it; it's a pattern, and James is right – it is core to Web 2.0. Back in Dec ‘07, JP mentioned that Twitter is neither a push nor a pull network – it is actually publish-subscribe.

The point of this post centres on what James mentions in his article:

But Twitter wasn’t designed for whales. It was designed for small shoals of fish. Which brings us to one of the big issues with Asymmetrical Follow – it introduces unexpected scaling problems. Twitter’s architecture didn’t cope all that well at first, but has performed a lot better since the message broker was re-architected using Scala (LIFT, a new web application programming framework). The technical approach that is most appropriate to support Asymmetrical Follow is well known in the world of high scale enterprise messaging – it’s called Publish And Subscribe.

Publish-Subscribe is a very common pattern in technology. Having worked in two investment banks, I have seen plenty of implementations that do the exact same thing: publishers fire data once at a middleware layer, and that layer sends the data off to many subscribers. Sounds simple enough to implement, right? Well, it’s not.

Designing a good, reliable, highly performant Publish-Subscribe framework is not easy. Getting the initial bits working is trivial, but the problem a lot of people face is scalability. If you are looking to build a Pub/Sub layer on your own, the first thing you have to do is stop and take a reality check. It’s not worth the trouble. Buy it from someone, or reuse another framework (like Twitter have done with Scala LIFT). I am not kidding. I have seen millions of dollars go down the drain – in missed opportunities, direct trading losses, etc. – all due to poorly designed and implemented Pub/Sub layers.

Pub/Sub frameworks are a lot like caches (e.g. memcached), but with a twist: not only do you have to cache data, you also have to tell subscribers when that data has changed. In fact, they are closer to finite state machines than caches.
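To make that concrete, here is a minimal sketch in Java of the idea – a cache of the latest value per topic that also notifies subscribers on change. All the names (TopicBus, Subscriber) are illustrative, not any particular vendor's API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal pub/sub core: a cache of the latest value per topic,
// plus change notification to subscribers. Names are illustrative.
class TopicBus {
    interface Subscriber { void onUpdate(String topic, String value); }

    private final Map<String, String> cache = new HashMap<>();
    private final Map<String, List<Subscriber>> subs = new HashMap<>();

    // Deliver the current cached value (if any) immediately, then future updates.
    synchronized void subscribe(String topic, Subscriber s) {
        subs.computeIfAbsent(topic, t -> new ArrayList<>()).add(s);
        String current = cache.get(topic);
        if (current != null) s.onUpdate(topic, current);
    }

    // Publishers fire once; the bus fans the update out to all subscribers.
    synchronized void publish(String topic, String value) {
        cache.put(topic, value);
        for (Subscriber s : subs.getOrDefault(topic, List.of())) {
            s.onUpdate(topic, value);
        }
    }
}
```

A late subscriber still sees the last published value – that is the cache-with-a-twist behaviour described above.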

Here are a few that I have used in the past, and I can highly recommend any of them:

Now, if you’re still stubborn and think you’re up for the challenge, then here are a few pointers:

Design it really, really well

Sit down with a few people and walk them through your design. Your design has to cover how memory is managed, the threading model, the communication mechanism, and so on. Find as many defects as possible and don’t take it personally. Do this before you write a single line of code.

Have a solid, clean API

I have used some really arcane APIs in the past, and oddly, some of them are provided by electronic markets (no names mentioned here). Remember, the API will be used by both publishers and subscribers. The cleaner the API, the fewer bugs it will introduce into publisher/subscriber code.

Non-blocking IO

If you’re using TCP sockets as a communication layer, do use select() (non-blocking IO). You need to break away from the one-thread-per-client model. That model, while easy to code to, just does not scale at all. I have been in way too many situations where I have inherited a system that uses the one-thread-per-client model, and all of a sudden it does not work in production because they’ve just scaled from 30 connected clients to 3000. BTW, if you’re developing in Java, I highly recommend using Apache MINA to reduce the stress of writing non-blocking IO code.
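To illustrate the difference, here is a bare-bones sketch of the select()-style model using java.nio directly (MINA wraps this same machinery for you). One thread services every connection; the roundTrip() helper and its in-process client exist purely so the sketch is self-contained:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

// One thread multiplexes every connection via a Selector – no thread-per-client.
class SelectorEcho {
    static String roundTrip(String msg) {
        try (ServerSocketChannel server = ServerSocketChannel.open();
             Selector selector = Selector.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            // Side client, only to drive the loop; non-blocking after connect.
            SocketChannel client =
                SocketChannel.open((InetSocketAddress) server.getLocalAddress());
            byte[] data = msg.getBytes(StandardCharsets.UTF_8);
            client.write(ByteBuffer.wrap(data));
            client.configureBlocking(false);

            ByteBuffer reply = ByteBuffer.allocate(data.length);
            while (reply.hasRemaining()) {
                selector.select(100);  // wait for readiness events
                for (SelectionKey key : selector.selectedKeys()) {
                    if (key.isAcceptable()) {
                        SocketChannel conn = server.accept();
                        conn.configureBlocking(false);
                        conn.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel conn = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(1024);
                        conn.read(buf);
                        buf.flip();
                        conn.write(buf);  // echo it straight back
                    }
                }
                selector.selectedKeys().clear();
                client.read(reply);  // non-blocking; returns 0 until echoed
            }
            client.close();
            return new String(reply.array(), StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The single select loop happily services 3000 clients where 3000 dedicated threads would fall over.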

Watch out for data state inconsistencies

A common approach that a lot of frameworks use is to send a snapshot followed by updates of changes.

Publishers should send the following messages upon startup:

  1. An initial message saying “This is the beginning of my initial data”
  2. The initial data itself
  3. A final message saying “This is the end of my initial data”

From then on, publishers should just send updates.

Subscribers will get the reverse. A call to subscribe() should result in at least the following callbacks:

  1. A callback saying “This is the beginning of the data”
  2. The initial snapshot data itself
  3. A callback saying “This is the end of the data”

From then on, subscribers will just receive updates.

Handling the subscribe() call in your framework is going to be tricky. You’ll need to be careful to lock your cache, to ensure that no one updates it while you’re taking the initial snapshot for the subscriber. Alternatively, you could create a snapshot copy of the cache, but keep an eye on your memory usage.
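Here is one way that subscribe() handling could be sketched (illustrative names, not a specific product): copy the cache under the lock, then deliver the begin/snapshot/end callbacks outside it. A production framework would also have to queue concurrent updates for the new subscriber until its snapshot has been delivered – that subtlety is glossed over here:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Snapshot-then-updates delivery for a new subscriber. Names are illustrative.
class SnapshotFeed {
    interface Listener {
        void onSnapshotBegin();
        void onData(String key, String value);
        void onSnapshotEnd();
    }

    private final Map<String, String> cache = new HashMap<>();
    private final List<Listener> listeners = new ArrayList<>();

    void publish(String key, String value) {
        List<Listener> targets;
        synchronized (this) {
            cache.put(key, value);
            targets = new ArrayList<>(listeners);
        }
        for (Listener l : targets) l.onData(key, value);
    }

    void subscribe(Listener l) {
        Map<String, String> snapshot;
        synchronized (this) {
            // Copy under the lock so no publish can interleave with the snapshot.
            snapshot = new HashMap<>(cache);
            listeners.add(l);
        }
        // Deliver outside the lock; real frameworks must buffer updates for l
        // until onSnapshotEnd() has fired.
        l.onSnapshotBegin();
        for (Map.Entry<String, String> e : snapshot.entrySet())
            l.onData(e.getKey(), e.getValue());
        l.onSnapshotEnd();
    }
}
```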

Correctness over performance

Don’t worry so much about reducing latency from 200ms to 20ms. Getting your implementation correct is far more important than performance. I’m not saying performance is not important, but you need to get it working correctly before you start tuning.

Build a load test framework

You will definitely need one of these. There have been way too many times when I needed to reproduce a production problem related to scalability, only to find that the original authors of the system never bothered building a load test framework.
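Even a crude harness is better than none – spin up a bunch of publisher threads, push a known number of messages through, and check that nothing was lost. A throwaway sketch (the counter here is a stand-in for a real framework's publish-and-consume path):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// A crude load generator: N publisher threads, M messages each,
// hammering a stand-in sink. Swap the sink for your framework's publish().
class LoadTest {
    static long run(int threads, int messagesPerThread) {
        AtomicLong delivered = new AtomicLong();          // stand-in subscriber
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int m = 0; m < messagesPerThread; m++) {
                    delivered.incrementAndGet();          // publish() goes here
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return delivered.get();                           // must equal N * M
    }
}
```

If the count at the end doesn't match what went in, you have a scalability bug worth finding before production does.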

I hope by now you realise that designing and implementing a Publish/Subscribe framework is not trivial. Buy it off the shelf – someone out there has gone through all of this pain for you.

I am a big believer that your system architecture should reflect the underlying business. Trying to retrofit an incorrect architecture will not be a good fit, as you will end up with loads of problems (scalability, maintenance, etc.) in the long run. Asymmetric follow is here to stay; on your next project, think about how it is going to affect your architecture and what you need to do at the outset to get it right.

Moving Over to Nginx

Running XP-Dev.com has its own set of unique problems, and it has not always been easy. I’ve always tried to run the whole infrastructure on a shoestring budget while trying not to compromise on quality.

One of those problems is hardware resources.

The truth is: Apache is a memory hog, and to keep things scalable for serving Subversion repositories, I decided to move all the PHP websites out of Apache and run them under nginx and PHP-CGI (sudo apt-get install php5-cgi). To be honest, I did not notice any difference in the performance of the websites (apache/mod_php vs nginx/fastcgi/php-cgi); the main motivation for this exercise was to limit the maximum amount of memory that my non-critical PHP websites take and, at the same time, give Apache more room to grow for serving the Subversion repositories. I could have run two Apache installations with different limits (by tweaking MaxSpare*, MaxRequests* and friends), but that’s an outright pain to manage. Moreover, I needed a simple webserver that could just serve static content as well.

And let’s not forget the users of virtual private servers (VPS) with a limited amount of memory. Nginx and PHP-CGI is a much more appropriate solution for those memory-limited configurations.

I had a look around, and it basically came down to lighttpd or nginx as a replacement for serving the PHP websites. I picked nginx, as there were some odd bugs with lighttpd serving large files. The FastCGI performance is almost the same (I did not really do any scientific benchmarks). However, the part that really sold me on these two was that they use a master-slave threading model, rather than the (out of date) one thread/process per client model, which does not scale at all. Both of them are event driven, rather than “client socket” driven. BTW, this includes the awesome J2EE web container Jetty (if you use the SelectChannelConnector).

Migrating the websites across from apache to nginx/fastcgi/php-cgi was an absolute breeze; here are a few pointers that will help ease the burden.

Strategy

Just to clarify, in the apache/mod_php world, PHP files are served via the apache process itself. The strategy under nginx is to get nginx to pass on the request to another set of long running php-cgi processes that do the actual PHP processing. The response will then be passed back to nginx, which will send it back to the web browser.

Documentation

Use the English Nginx wiki extensively. There’s a lot of documentation there on configuring and tweaking nginx, especially the module reference pages. Here’s a quick and dirty howto on getting nginx+fastcgi and php-cgi working.

PHP FastCGI Start/Stop Scripts

Save yourself the trouble of writing a custom PHP FastCGI start/stop script. Install lighttpd and use its spawn-fcgi wrapper. It’s really going to save you a lot of painful hours. I wrote a simple wrapper around that script, as I wanted php-cgi to start up on every server boot and to allow a quick restart of the processes. You might want to adjust the pidfile and cgidir variables for your setup.

#!/bin/bash
# Wrapper around lighttpd's spawn-fcgi to (re)start the PHP FastCGI processes.

me=`whoami`
if [ "$me" != "root" ]; then
        echo "Not root!"
        exit 1
fi

pidfile=/root/php.PID
cgidir=/var/run/php-cgi
sock=$cgidir/unix.sock

# Create the socket directory on first run
[ ! -d "$cgidir" ] && echo "creating $cgidir" && mkdir "$cgidir" && chown www-data:www-data "$cgidir"

# Kill any previously spawned php-cgi master before respawning
if [ -f "$pidfile" ]; then
        pid=`cat "$pidfile"`
        echo "Killing $pid"
        kill "$pid"
        rm "$pidfile"
        sleep 1
fi

[ -S "$sock" ] && chown www-data:www-data "$sock"

# Spawn 5 php-cgi children on the unix socket, running as www-data
/usr/bin/spawn-fcgi -f /usr/bin/php-cgi -s "$sock" -C 5 -P "$pidfile" -u www-data -g www-data

Stop serving .htaccess

Plenty of web apps out there have built-in support for apache, and include .htaccess files in their distribution to reduce the configuration overhead for the installer. However, nginx will serve these files by default, which may be fine in most cases, but it’s always good practice to deny access to them. A simple nginx config block does the trick:

location ~ /\.ht {
    deny  all;
}

Serving PHP files

To serve PHP files, nginx will pass the request to the PHP-CGI handlers.

location ~ .*\.php$ {
	fastcgi_pass   unix:/var/run/php-cgi/unix.sock;
	fastcgi_index  index.php;
	include /etc/nginx/fastcgi_params;
	fastcgi_param  SCRIPT_FILENAME  /home/rs/local/wordpress/$fastcgi_script_name;
}

Notice that I’ve included a /etc/nginx/fastcgi_params file above. This file contains all the regular FastCGI directives, and I’ve put it in a separate file to avoid too much repetition. The content of /etc/nginx/fastcgi_params is below:

fastcgi_param  QUERY_STRING       $query_string;
fastcgi_param  REQUEST_METHOD     $request_method;
fastcgi_param  CONTENT_TYPE       $content_type;
fastcgi_param  CONTENT_LENGTH     $content_length;

fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
fastcgi_param  REQUEST_URI        $request_uri;
fastcgi_param  DOCUMENT_URI       $document_uri;
fastcgi_param  DOCUMENT_ROOT      $document_root;
fastcgi_param  SERVER_PROTOCOL    $server_protocol;

fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
fastcgi_param  SERVER_SOFTWARE    nginx/$nginx_version;

fastcgi_param  REMOTE_ADDR        $remote_addr;
fastcgi_param  REMOTE_PORT        $remote_port;
fastcgi_param  SERVER_ADDR        $server_addr;
fastcgi_param  SERVER_PORT        $server_port;
fastcgi_param  SERVER_NAME        $server_name;

WordPress Rewrite

The final tip is for all the WordPress junkies out there. To get nice URLs for WordPress, you will need the following rewrite directive. If I’m not mistaken, you will be given one for apache when you’re setting up custom URLs via the admin screen, but not for nginx:

if (!-e $request_filename) {
    rewrite ^(.+)$ /index.php?q=$1 last;
}

And that’s about it. I really do hope these tips will help someone out there. I know it would have shaved a couple of hours off my setup time had I known them beforehand.

Spring and Jetty Integration

Jetty is a pretty darn awesome J2EE web container. With amazing features like non-blocking IO, continuations and immediate integration with Cometd, I feel that it is a solid, production-ready container.

I hate war files, I hate web.xml files – there’s just way too much black magic needed to get things up and running. It is nice once someone has done the dirty work and constructed the initial web.xml, but I wouldn’t want to be the person who starts it all off.

Oh – another thing – I absolutely LOVE dependency injection. Using the web.xml approach, you’ll almost always have to start off a servlet of some sort to initialise the various services that you’ll need. Moreover, the easiest way to access these services from other servlets is to use singletons, and we all know why singletons are bad!

So, I ended up using Jetty in an embedded setup, and used to write various wrappers around the configuration so that I could do most of the common things with minimal code. A good example would be setting up a bunch of contexts and a DefaultServlet for regular file serving. However, the way Jetty is written makes it really easy to use from Spring – everything is a simple bean with a bunch of setters.

To start off, let’s write down the bean. Notice I’ve added an init-method attribute pointing at start(). If you don’t want Spring to kick off your server, just grab hold of the bean and call start() on it explicitly.

<bean name="WebServer" class="org.mortbay.jetty.Server" init-method="start">
</bean>

Then, let’s add some connectors to it:

<property name="connectors">
  <list>
  <bean name="LocalSocket" class="org.mortbay.jetty.nio.SelectChannelConnector">
      <property name="host" value="localhost"/>
      <property name="port" value="8080"/>
  </bean>
  </list>
</property>

You will need some handlers (one of them will be a context handler to serve your servlets). I’ve added a logging handler so that the server logs requests in the same format as apache’s combined log.

<property name="handlers">
  <list>
    <bean class="org.mortbay.jetty.servlet.Context">
      <property name="contextPath" value="/"/>
      <property name="sessionHandler">
        <bean class="org.mortbay.jetty.servlet.SessionHandler"/>
      </property>
      <property name="resourceBase" value="/var/www"/>
      <property name="servletHandler">
        <bean class="org.mortbay.jetty.servlet.ServletHandler">
          <property name="servlets"> <!-- servlet definition -->
            <list>
            <!-- default servlet -->
            <bean class="org.mortbay.jetty.servlet.ServletHolder">
              <property name="name" value="DefaultServlet"/>
              <property name="servlet">
                <bean class="org.mortbay.jetty.servlet.DefaultServlet"/>
              </property>
              <property name="initParameters">
                <map>
                  <entry key="resourceBase" value="/var/www"/>
                </map>
              </property>
            </bean>
            </list>
          </property>
          <property name="servletMappings">
            <list><!-- servlet mapping -->
            <bean class="org.mortbay.jetty.servlet.ServletMapping">
              <property name="pathSpecs">
                <list><value>/</value></list>
              </property>
              <property name="servletName" value="DefaultServlet"/>
            </bean>
            </list>
          </property>
        </bean>
      </property>
    </bean>
    <!-- log handler -->
    <bean class="org.mortbay.jetty.handler.RequestLogHandler">
      <property name="requestLog">
        <bean class="org.mortbay.jetty.NCSARequestLog">
          <property name="append" value="true"/>
          <property name="filename" value="/var/log/jetty/request.log.yyyy_mm_dd"/>
          <property name="extended" value="true"/>
          <property name="retainDays" value="999"/>
          <property name="filenameDateFormat" value="yyyy-MM-dd"/>
        </bean>
      </property>
    </bean>
  </list>
</property>

And that’s about it. If you need to add more servlets, all you have to do is add an entry to the ServletHandler’s servlets and servletMappings properties.
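For example, adding a second servlet could look like the fragment below – the servlet class com.example.ApiServlet and the /api/* path are hypothetical, and the bean classes assume the Jetty 6 (org.mortbay) API:

```xml
<!-- extra entry under the "servlets" list -->
<bean class="org.mortbay.jetty.servlet.ServletHolder">
  <property name="name" value="ApiServlet"/>
  <property name="servlet">
    <bean class="com.example.ApiServlet"/> <!-- hypothetical servlet -->
  </property>
</bean>

<!-- extra entry under the "servletMappings" list -->
<bean class="org.mortbay.jetty.servlet.ServletMapping">
  <property name="pathSpecs">
    <list><value>/api/*</value></list>
  </property>
  <property name="servletName" value="ApiServlet"/>
</bean>
```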

Now, imagine I had to get a reference to a DAO, or some other service, in my servlet – it’s just going to be a matter of adding a member, exposing it via a setter and whacking the dependency into the servlet’s Spring config above. All done in a nice dependency-injected way. No more overriding init() on the servlet and picking up some context attribute via some magic string.

Here’s the whole Spring config – hack away to your needs!

<bean name="WebServer" class="org.mortbay.jetty.Server" init-method="start">
<property name="connectors">
  <list>
  <bean name="LocalSocket" class="org.mortbay.jetty.nio.SelectChannelConnector">
    <property name="host" value="localhost"/>
    <property name="port" value="8080"/>
  </bean>
  </list>
</property>
<property name="handlers">
  <list>
    <bean class="org.mortbay.jetty.servlet.Context">
      <property name="contextPath" value="/"/>
      <property name="sessionHandler">
        <bean class="org.mortbay.jetty.servlet.SessionHandler"/>
      </property>
      <property name="resourceBase" value="/var/www"/>
      <property name="servletHandler">
        <bean class="org.mortbay.jetty.servlet.ServletHandler">
          <property name="servlets"> <!-- servlet definition -->
            <list>
            <!-- default servlet -->
            <bean class="org.mortbay.jetty.servlet.ServletHolder">
              <property name="name" value="DefaultServlet"/>
              <property name="servlet">
                <bean class="org.mortbay.jetty.servlet.DefaultServlet"/>
              </property>
              <property name="initParameters">
                <map>
                  <entry key="resourceBase" value="/var/www"/>
                </map>
              </property>
            </bean>
            </list>
          </property>
          <property name="servletMappings">
            <list><!-- servlet mapping -->
            <bean class="org.mortbay.jetty.servlet.ServletMapping">
              <property name="pathSpecs">
                <list><value>/</value></list>
              </property>
              <property name="servletName" value="DefaultServlet"/>
            </bean>
            </list>
          </property>
        </bean>
      </property>
    </bean>
    <!-- log handler -->
    <bean class="org.mortbay.jetty.handler.RequestLogHandler">
      <property name="requestLog">
        <bean class="org.mortbay.jetty.NCSARequestLog">
          <property name="append" value="true"/>
          <property name="filename" value="/var/log/jetty/request.log.yyyy_mm_dd"/>
          <property name="extended" value="true"/>
          <property name="retainDays" value="999"/>
          <property name="filenameDateFormat" value="yyyy-MM-dd"/>
        </bean>
      </property>
    </bean>
  </list>
</property>
</bean>

Converting PEM certificates and private keys to JKS

If there is one irritating, arcane thing about Java, it is its SSL and crypto framework. It is a mess. I remember using openssl as a library about 3-4 years ago in a project that was pretty crypto-heavy, and it is a library any junior developer can pick up – it’s that simple to use.

However, Java’s crypto framework is just absolutely irritating to use – tons of unnecessary boilerplate, and not enough self-discovery of file formats (as an example). Try to do SSL client certificate authentication from the ground up and you’ll know what I mean. Knife, wrist – sound familiar?

Last night, I had to convert some PEM-formatted certificates and private keys to JKS (I was getting SSL nicely configured under Jetty). I remember doing this a few years back; there were mountains of issues to jump across, and I did pull my hair out back then. Last night was no different. However, I did manage to solve it, and ended up with much less hair.

So, to save everyone else the trouble (and their hair!), I’m jotting down some notes here on how to convert a certificate and private key in PEM format into Java’s keystore and truststore in JKS format.

The Keystore

If we’re starting with PEM format, we need to convert the certificate and key to a PKCS12 file. We’ll use openssl for that:

Remember to use a password for the command below, otherwise, the Jetty converter (the following step) will barf in your face!

openssl pkcs12 -export -out cert.pkcs12 \
  -in cert.pem -inkey key.pem

Once that’s done, you need to convert the PKCS12 to a JKS. Here, I will be using a small utility bundled with Jetty called PKCS12Import. You can download the necessary library (you’ll need the main jetty.jar), which can be a huge download for such a small thing, or just grab the jar from here. Run the following command and enter the password from the step above, followed by your keystore password:

java -cp /path/to/jetty-6.1.7.jar \
  org.mortbay.jetty.security.PKCS12Import \
  cert.pkcs12 keystore.jks
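If you want to sanity-check the result from Java, the store should load straight into java.security.KeyStore. The snippet below is a self-contained sketch – createEmpty() builds a throwaway store purely so the example can run anywhere; point countAliases() at your real keystore.jks and password instead:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.util.Collections;

class CheckStore {
    // Load a JKS back to make sure the conversion produced a usable store.
    static int countAliases(String path, char[] password) {
        try (FileInputStream in = new FileInputStream(path)) {
            KeyStore ks = KeyStore.getInstance("JKS");
            ks.load(in, password);        // throws on a bad password or format
            return Collections.list(ks.aliases()).size();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Create a throwaway empty store so this snippet is self-contained.
    static void createEmpty(String path, char[] password) {
        try (FileOutputStream out = new FileOutputStream(path)) {
            KeyStore ks = KeyStore.getInstance("JKS");
            ks.load(null, password);      // initialise a fresh in-memory store
            ks.store(out, password);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

After the PKCS12Import step, countAliases() on your real keystore should report at least one entry.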

The Truststore

Next, you’ll almost definitely need to import the certificate into your truststore whenever you need to do anything related to SSL.

First, export the certificate as a DER:

openssl x509 -in cert.pem -out cert.der -outform der

Then import it into the truststore:

keytool -importcert -alias mycert -file cert.der \
  -keystore truststore.jks \
  -storepass password

And that’s it! You have your key in the keystore, and your certificate in the truststore. Hope this helps some of you out there.