On Performance & Content Management

Introduction

I created and managed my first ever website, The Lair! (an Amiga e-zine), entirely by hand. This consisted of coding pages in a text editor and uploading them and their assets to the web-server manually, via FTP. This was a tedious and time-consuming business but it resulted in pages that were meticulously coded and optimised. They had to be, as the average Internet user browsed the web with a 28.8Kbps modem at that time.

In subsequent years, tools for creating web-pages appeared and evolved into full-blown Content Management Systems. With a CMS one could create content via a web interface and with a WYSIWYG editor. The CMS would then merge the content with a previously built template and generate the completed web page. It would manage the website’s assets (images, downloads, etc.) and provide an interface to integrate these into a page. Other utilities would also be available, empowering the content producer and making things like global changes easy to apply and instantaneous to deploy. I have used the well-known WordPress CMS to produce and manage websites over the years. I also wrote my own, in Perl, which I used to maintain my own personal weblog.

Eventually I switched to WordPress for my weblog, but I always felt that it was overkill for my needs. I wanted to run lean and mean. I wanted a fast CMS with a small footprint. I searched the Internet but couldn’t find any that appealed, so I did what any self-respecting hacker would do: I started writing my own (again). However, that effort soon petered out as I didn’t have the time to invest in the project. My weblog quickly grew stale and eventually disappeared when I decided not to renew its domain name.

In the intervening years I have done massive amounts of online development, but none of it has been for myself and, during that whole time, I have missed having a little bit of Web real-estate that I could call my own: a plot of digital land where I could build my castles and my follies, where I could paint the walls whatever colour I liked and, most importantly, where I could control the architecture and the building materials used.

The Speed of Light

A public-facing website has to be fast. We all hate it when we visit a website that doesn’t respond quickly enough to our requests. We all hate bloated web-pages that take an age to download all the assets of their fancy designs.

While researching via Google, I found amazing differences of opinion as to how fast page delivery should be. They ranged from 8 seconds down to 200-500 milliseconds. I found articles that illustrated a correlation between page load times and online sale conversion rates. I also found essays that described how two of the most frequently visited websites, Google and Amazon, recorded measurable increases in traffic and stickiness (how long a visitor stays on a site) by reducing load times by just a couple of hundred milliseconds.

It seemed obvious then that my goal should simply be to make my website as fast as I possibly could. How would I achieve this?

Content Delivery

Yahoo! has what they call an “Exceptional Performance Team” within their hallowed walls. The team is responsible for streamlining and optimising Yahoo!’s web services to ensure that they are as fast as they can be. These are obsessive geeks: they analyse every small detail of Yahoo!’s web services then tune and modify, shaving off a millisecond here and there. Yahoo! has acquired a wealth of knowledge from this team and, fortunately for us, has shared the results. I referred to their published guidelines extensively while building this platform.

Make fewer HTTP requests
Rather than design my own (hey, I’m no designer), I took an existing template, Simplico, for this website. To reduce HTTP requests I Base-64 encoded the theme’s images and embedded them in its style-sheet. At around the same time, a colleague introduced me to Font Awesome, which I immediately decided to use for the site’s icons. The template had multiple JavaScript and CSS files, which I merged into a single file for each type.
Result: On pages with no content-specific imagery, we make fewer than a dozen HTTP requests.
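As an illustration, embedding an image in a style-sheet as a Base-64 data URI takes only a few lines of PHP (the file names below are hypothetical, not the theme’s actual assets):

    <?php
    // Read a theme image and embed it in the style-sheet as a data URI,
    // saving the browser a separate HTTP request for that file.
    $image   = file_get_contents('theme/images/header.png');   // hypothetical path
    $dataUri = 'data:image/png;base64,' . base64_encode($image);

    $css = file_get_contents('theme/style.css');
    $css = str_replace("url('images/header.png')", "url('" . $dataUri . "')", $css);
    file_put_contents('theme/style.css', $css);
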
Add an Expires header
Done, with an expiry date set to 1 year after access. “Whoa, hold on,” you say, “a year, that’s crazy. What if you change your CSS or JavaScript?” We load the CSS and JavaScript files with a versioned file-name like this: 04b97e30b89fa514.css. The versioning value is a CRC-32 of the file’s content. If the content of the file changes, so does its CRC-32 signature and therefore its file-name, and the browser requests the renamed file from the server regardless of what’s already in its cache (unless the revised file exactly matches a previously cached version). This process is entirely automatic.
As for the images, I don’t foresee a circumstance that would make me change an image once I’d published it. But, if I did, I’d simply change its file-name.
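A minimal sketch of how such a versioned file-name can be generated in PHP (the paths are hypothetical and this isn’t the exact code used on this site):

    <?php
    // Name the merged style-sheet after a CRC-32 of its contents, so the
    // file-name changes whenever the contents change and the year-long
    // Expires header can be applied safely.
    function versionedName($path, $extension)
    {
        return hash_file('crc32b', $path) . '.' . $extension;   // e.g. "04b97e30.css"
    }

    $merged = 'cache/merged.css';                                // hypothetical merged file
    copy($merged, 'public/' . versionedName($merged, 'css'));
    // The site template then references the versioned name in its <link> tag.
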
Additional Guidelines
Gzip components
Put CSS at the top
Move scripts to the bottom
Avoid CSS expressions
Make JavaScript and CSS external
Reduce DNS look-ups
Minify JavaScript & CSS
Avoid redirects
Remove duplicate scripts
Configure ETags

There were more esoteric recommendations on the Yahoo! page that I also applied. As a result, the delivery of my pages is as fast as it could be for the platform it’s served from. I would have to work really hard to make further improvements and/or change the server hardware and connectivity.
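For illustration, a couple of those guidelines (compression and far-future Expires headers) can be applied with an Apache configuration along these lines, assuming mod_deflate and mod_expires are available on the server:

    # Gzip text-based components before sending them to the browser.
    <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/html text/css application/javascript
    </IfModule>

    # Tell browsers they may cache static assets for a year.
    <IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType text/css "access plus 1 year"
        ExpiresByType application/javascript "access plus 1 year"
        ExpiresByType image/png "access plus 1 year"
    </IfModule>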

[Image: F-22 Raptor with afterburner. Caption: “I feel the need… the need for speed!”]

The State of Play

The fastest web sites are those consisting of static HTML files. With such content the web server need only go to the file system, read in the file and pump it out to the client browser. It can do so quickly and with minuscule demands on the server’s CPU and disk systems. A properly configured web server can service half a million requests every second, with negligible latency, while handling 10,000+ concurrent users.

Unfortunately the demands of a modern website almost always mean that a system based on static files is unsuitable due to the mutable nature of the content. We could reasonably anticipate that the platform for such a website consists of a heavy duty CMS and a SQL database which stores the content and associated data. There might also be an in-memory data store to support performance critical features (such as live search). There might also be systems in place to handle caching and/or load balancing.

The bottlenecks of such a platform lie within the CMS/framework and the SQL database. These heavy-duty, computationally expensive components generally trade performance for features. Unfortunately the scope of these platforms is often unnecessary, with the websites sitting atop them neither needing all the features they offer nor benefiting from their “industrial strength”.

So why does this behemoth of a stack (database, framework and CMS) continue to be so widely deployed? Because the stack works, it’s that simple. It’s reliable, easy to deploy, easy to develop upon, reasonably secure and, properly configured, fast enough for most projects. Furthermore, developers and designers are familiar with it; it has become the de facto standard. But is it really the right choice?

In my particular case, with a simple website and striving for the best performance, the answer is no. This website is basically a weblog, and a pared-down one at that. I’ve forgone most of the traditional weblog “features”: comments; social networking; relationships between pages (bar their shared taxonomies and the links I explicitly create between them); ratings; advertisements or other affiliate programmes; user accounts; analytics… none of these components are in use here.

So, when developing this website, I asked myself, “do I even need a relational database for this project?”

I decided that I didn’t. The obvious next question was, “what will I use for my data store?”

Exploiting the File System

That the file system was the answer was beyond doubt. Working directly with the file system would be fast and efficient. I could organise my website using the same hierarchical structure that I use to organise my files on disk and, as an added bonus, that same arrangement would map directly to my website’s URL patterns. The file system would be easily accessible and navigable through the shell, FTP and the operating system’s GUI. It would be easy to back up, package (as a tarball for example) and relocate. It would be but the work of a moment to synchronise my local development folder with its counterpart on the live server. I could also put 100% of the site under version control. I would be able to use any old text editor to prepare my content, both online and offline, and I’d be able to write in plain text, HTML, Markdown or any other format.
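To illustrate, with hypothetical page names, a layout along these lines would map directly onto the site’s URLs:

    site/
    ├── index.htm              → /
    ├── about.htm              → /about
    └── articles/
        ├── index.htm          → /articles/
        └── going-static.htm   → /articles/going-static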

There would be some trade-offs in not having an SQL database, of course. The three that immediately troubled me were search, taxonomies and indexing. I figured I’d be able to implement search with grep, but I would have to parse the DOM of each file so as to search only the text content. Taxonomies were an issue because I had no immediate solution for cross-referencing the “tags” of my pages, bar parsing the document store and building some kind of look-up table. Finally, indexing: I needed to be able to maintain an index of my content so that I could create navigation, tag and category pages. The search corpus, taxonomies and index would all need to be updated automatically whenever I created, edited or deleted content. Taking care of these indices manually would be tedious and increasingly prone to error as the site grew.
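A rough sketch of the search idea in PHP, using strip_tags as a crude stand-in for the proper DOM parsing I had in mind (the store path is hypothetical):

    <?php
    // Naive search over the document store: strip the markup so that only
    // the visible text is searched, then test each document for the term.
    function searchStore($term, $dir)
    {
        $matches = array();
        foreach (glob($dir . '/*') as $file) {
            $text = strip_tags(file_get_contents($file));
            if (stripos($text, $term) !== false) {
                $matches[] = basename($file);
            }
        }
        return $matches;
    }

    print_r(searchStore('performance', 'data/pages'));   // hypothetical store path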

I also realised that if I stored my content as complete HTML pages, I wouldn’t be able to easily implement design changes, perform search-and-replace operations or add site-wide features. It would be difficult to make any kind of global change. It seemed obvious that storing my documents as HTML probably wasn’t going to be workable, even though the idea was an attractive one from a performance perspective.

A Pseudo Database

I realised that I could use XML files to store my content. An XML representation of data is analogous to a table in a traditional database: each XML item represents a record or row, and within an item I could have any number of fields for my title, slug, tags, publish date, content and anything else I deemed necessary. I would be able to query my XML in a variety of ways. I would also benefit from the separation of content and presentation.
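A hypothetical record, with field names and values chosen purely for illustration, might look like this:

    <item>
        <title>On Performance &amp; Content Management</title>
        <slug>on-performance-and-content-management</slug>
        <tags>performance, cms, baking</tags>
        <pubDate>2014-01-01</pubDate>
        <content><![CDATA[<p>The page’s marked-up content goes here…</p>]]></content>
    </item>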

Baking or Frying

To the best of my knowledge the terms “bake” and “fry”, in the context of web publishing, were first used by Ian Kallen in an article on salon.com that described their content production process and CMS (Bricolage). A fried page is one built at request time, for each and every request. A baked page is one built at publish time and which remains static from then on.

The main benefit of a fried page is that it can be dynamic. It can change on each request based on whatever governing factors the developers have implemented. It can adapt to the individual user. It can also have frequently changing elements like reader comments, ratings and so on. However, because the server creates it for each request, there is an overhead in CPU time on each view and thus a negative impact on performance.

A baked page does not lend itself to dynamic components. We produce the page once, then deliver that same page for each request. Because we require only minimal processing to deliver a baked page, the overall performance of the website is considerably improved. Content producers avoid baked pages due to the perception that they offer fewer opportunities for personalisation or context-sensitive presentation. But in these enlightened times, where client-side processing, dynamic Ajax components and modern browser facilities such as local storage are the norm, there is scope for baked pages to include dynamic content. Therefore, I would be baking my pages.

As an aside, the front page of this website has a near-real-time feed of my Github activity, despite being a baked page. We add this data feed to the front page using Ajax.

Get Simple

At this point, I had a much clearer picture of what I needed from a CMS: it should be as single purpose as possible, with no extraneous bloat; it should bake pages; it should use XML as a data store; it should be either a PHP or Perl application, so that I could extend or modify it; it should have as small a footprint as possible (in terms of CPU usage) and it should be secure. I settled on GetSimple.

GetSimple would manage my pages, provide an interface for styling my plain text content and handle file uploads. The CMS would also take care of producing the XML files I needed. GetSimple also offers an “undo everything” work-flow that really appealed to me. Unfortunately GetSimple doesn’t bake, provide a search engine or generate RSS feeds. But that didn’t matter. I had a decent enough core to build upon and that’s all I needed.

Plug It In

It turned out that GetSimple has a library of plug-ins. I didn’t realise this until I’d installed the CMS. I logged into the back office and noticed a “Plugins” menu option. On the plug-in management page there were two plug-ins installed by default (which I immediately disabled). There was also an enticing “Download More Plugins” link which, of course, I clicked.

I ended up browsing a large plug-in and theme repository. I chose a theme, Simplico, and settled on three plug-ins: I18N Search, News Manager and News Manager RSS. Suddenly I had all the major components of my website in place and I hadn’t written a single line of code or HTML!

[Image: Computer code on a laptop. License: CC BY-SA 2.0]

Hacking the Code

I needed to modify the I18N Search code to get it to play nicely with the blog pages created by the News Manager. Then I had to modify News Manager to improve its navigation and a couple of other minor details. I also performed extensive surgery within the GetSimple core, related to text processing (there were certain typographic rules I wanted to apply to my content), file handling and, of course, baking the final pages.

Work-flow

So what happens when we publish or edit a page? First, the application analyses the content in order to update the search word corpus. The system then creates the XML file for the page and updates the XML indices for navigation and tags. The CMS then bakes a new page (writing a static HTML file to the appropriate path on the file system). Then as a final step, the software deletes all the existing baked pages.
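In outline, and with hypothetical function names rather than GetSimple’s actual API, the publish hook behaves roughly like this:

    <?php
    // Pseudo-sketch of the publish/edit workflow described above.
    // (All of the helper functions called here are hypothetical.)
    function onPublish($page)
    {
        updateSearchCorpus($page);    // 1. analyse the content, refresh the word corpus
        writePageXml($page);          // 2. write the page's XML record
        rebuildIndices();             // 3. regenerate the navigation and tag indices
        bakePage($page);              // 4. write a static HTML file for this page
        purgeBakedPages();            // 5. delete the existing baked pages; they
                                      //    re-bake on their next request (see below)
    }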

But why does the system delete the existing pages? When I change something — either by publishing a new page or editing an existing one — there is a ripple effect throughout the document store: navigation links change, indices change, the tag cloud changes and so on. A purge of the document store removes this stale content. Then the CMS can bake the pages again with up-to-date information.

The system could, at this point, rebuild every single page, but that would be a naive implementation. A full rebuild takes time to complete and consumes server resources, and any page that isn’t viewed before the next rebuild has been baked for nothing. A better approach is to bake each page on its first request only: if a page doesn’t get requested, it doesn’t get baked, and that’s how this system operates.

Smart 404 Handler

You might be wondering how the software decides whether or not it needs to bake a page. The launcher for baking lies within the site’s 404 error handler. This is how it works:

  • The server gets a request for the file /test;
  • The server checks the file system for a matching test.htm file in the path /;
  • If the server finds the file /test.htm it delivers that page and the request is complete;
  • If the server can’t locate the file it sends a message to the CMS saying, in effect, “Hey, I’m looking for a page called ‘/test.htm’, is that one of yours?”;
  • If the CMS says, “Nope, never heard of it,” then the server delivers the default 404 error page and the request is complete;
  • If that page matches an XML file in the CMS’ data store then the CMS replies, “Yeah, hang on for a couple of milliseconds and I’ll bake it for you.”;
  • The CMS then bakes the page, writes the HTML file to the correct path and passes the file to the server;
  • The server delivers the page to the client browser and the request is complete.

As you can see, at no time does the server waste resources creating a page unnecessarily. When a user requests a page it’s baked once and served repeatedly, without further modification, until the CMS discards it at the next publish/edit/delete operation. This is the most efficient work-flow I could come up with.
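A minimal sketch of that handler in PHP, with hypothetical paths and helper names rather than GetSimple’s actual code (input validation omitted for brevity):

    <?php
    // Smart 404 handler: bake the page on its first request, or fall back
    // to the genuine 404 page if the CMS has never heard of it.
    $slug = trim(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH), '/');
    $xml  = 'data/pages/' . $slug . '.xml';     // hypothetical data store path
    $htm  = $slug . '.htm';                     // the baked file the server looked for

    if (is_file($xml)) {
        $html = bakePage($xml);                 // hypothetical: build the HTML from the XML
        file_put_contents($htm, $html);         // subsequent requests are served statically
        header('HTTP/1.1 200 OK');
        echo $html;
    } else {
        header('HTTP/1.1 404 Not Found');
        readfile('404.htm');                    // hypothetical default error page
    }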

Summary

The more time I spend with this system, the more convinced I am that the heavy-duty CMS applications — WordPress, Joomla, Drupal, et al — are overkill for some of the websites they service. I’ve proven that you can run a weblog off the file system, out-performing most of the dedicated weblog applications into the bargain. Similarly, any site with non-volatile content can be easily serviced by a stack similar to the one described here. I have also demonstrated that, with the right content (static text), the file-system is a perfectly adequate data store.

Of course it would be ridiculous to suggest that the traditional stack is unnecessary and that we should all jump ship. I wouldn’t recommend that anyone try to run an e-commerce site on anything else for example. Also, where a site is community oriented, a relational database is a must.

It’s also important to note that I have no idea how my GetSimple project will scale. Will it fall apart after 100 posts? Maybe 1,000? Surely it’ll have imploded before I get to 100,000 posts? Or perhaps it will keep on going until it runs out of disk space? I see no reason why it would break on volume but, without testing, it’s impossible to know for sure.

I began this article by stating that the principal aim of this project was performance. I have put together a stack that services requests quickly and with a high level of concurrency. It can service most requests, from the moment the user clicks a link to the delivery of the final element of content, in under a second (I’ve actually hit a magical 300ms from time to time).

There are things I could do to make it faster: I could scrap the Apache web-server and build on top of NGINX, PHP-FPM & APC; I could switch to a faster hosting platform; I could swap out the server’s mechanical hard disks for solid-state drives; I could, for the best performance, load up the server with inexpensive RAM and run the baked-page cache out of a RAM disk!

I think this could scale nicely.

Updated: 20th April, 2014.

I am no longer using the platform described here. I have switched to the nanoc static site generator.