In this series of articles, we have covered key aspects of load balancing and health monitoring to improve the availability of web applications; now it's time to look at how simple content caching can improve their performance.
Imagine a website that uses a database to hold transactions and customer data. Every hit on the front page goes through the database, and the page is then assembled by a chunk of code, maybe PHP or Java. This is CPU-intensive, so there is a limit to the number of page requests that can be handled per second without adding more resources.
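To make the bottleneck concrete, here is a minimal Python sketch of that per-request pipeline. The function names and the page data are purely illustrative, not taken from any real application; the point is that every single hit pays the full database-plus-templating cost, even when the output barely changes between requests.

```python
import time

def query_database():
    # Stand-in for the real transaction/customer lookup; in production
    # this is a network round trip plus query execution time.
    time.sleep(0.05)
    return {"headline": "Latest scores", "visits": 12345}

def render_page(data):
    # Stand-in for the PHP/Java templating layer: CPU-bound string assembly.
    return ("<html><body><h1>{headline}</h1>"
            "<p>{visits} visits</p></body></html>").format(**data)

def handle_request():
    # Every front-page hit repeats the full cost: DB query + render.
    return render_page(query_database())
```

With even a modest per-request cost like this, a single CPU saturates at a few dozen requests per second, which matches the figures the article reports below.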
Content caching sounds like a great idea – but if you have a dynamic application, maybe displaying sports scores or news tickers, how can you cache the web pages for performance, without causing problems?
In this example, you can see the Stingray Activity Monitor showing simulated users hitting the front page of the website as quickly as possible. We're averaging about 18 transactions per second, the limit being the CPU of the machine being used. Wouldn't it be great if we could use content caching to improve the performance of the site? Generally, that's not possible for a dynamic site like this: when you make a change to the content, you want that change to appear on the website immediately.
But I can add a rule that intercepts web documents coming back and applies a cache time of one second, modifying the cache headers in the response to tell any downstream devices, such as the Riverbed Stingray Traffic Manager, that they may cache this content for one second at a time. That ensures the content is never more than one second out of date.
So what is the effect if we now turn on content caching, but cache the content for just one second at a time? To do this, I can make a simple change to the configuration for the virtual server, and enable content caching:
We can see the effect of this caching by looking again at the number of requests per second we are able to handle. Within a matter of seconds, the activity monitor jumps from less than twenty transactions per second to well over 2000 transactions per second.
That's roughly a hundredfold performance improvement: we can now handle 100 times as many page requests on the same infrastructure, while still giving our users fresh, up-to-date content, even on a dynamic site!
For more information:
Health monitoring with Traffic Manager
Control and Flexibility with Stingray Traffic Manager
(This article is part of a series starting with Back to Basics - What is a Traffic Manager, Anyway?)