
Feature Brief: Application Acceleration with Traffic Manager

Traffic Manager employs a range of protocol optimization and specialized offload functions to improve the performance and capacity of a wide range of networked applications.


  • TCP Offload applies to most protocol types. It offloads slow client-side connections and presents them to the server as if they were fast local transactions. This shortens the duration of each connection, reducing server concurrency and allowing the server to recycle limited resources more quickly.
  • HTTP Optimizations apply to the HTTP and HTTPS protocols. Efficient use of HTTP keepalives (including carefully limiting concurrency to avoid overloading servers with thread-per-connection or process-per-connection models) and upgrading client connections to the most appropriate HTTP protocol level reduce resource usage and connection churn on the servers.
  • Performance-sensitive Load Balancing selects the optimal server node for each transaction based on current and historic performance, and also considers load-balancing hints such as LARD to prefer the node with the hottest cache for each resource.
  • Processing Offload: highly efficient implementations of SSL, compression and XML processing offload these tasks from server applications, allowing them to focus on their core application code.
  • Content Caching caches static and dynamic content (discriminated by use of a 'cache key') and eliminates unnecessary requests for duplicate information from your server infrastructure.
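To illustrate the last point, a cache key can be sketched as a tuple of the request method, the URL and any headers the response varies on. Everything below (the function names, the dict-backed cache) is an illustrative sketch, not Traffic Manager's actual implementation:

```python
# Sketch of content caching discriminated by a 'cache key'. Requests that
# differ in any key component are cached separately; identical requests are
# answered from the cache without touching the origin server.

def make_cache_key(method, url, headers, vary=("Accept-Encoding",)):
    """Build a cache key from the request line plus any varying headers."""
    varying = tuple((h, headers.get(h, "")) for h in vary)
    return (method.upper(), url, varying)

cache = {}

def fetch(method, url, headers, origin_fetch):
    """Serve from the cache when possible; otherwise fetch from the origin."""
    key = make_cache_key(method, url, headers)
    if key not in cache:
        cache[key] = origin_fetch(method, url, headers)  # only on a miss
    return cache[key]
```

A repeated request with the same key never reaches the origin; changing a varying header (here, Accept-Encoding) produces a different key and a separate cached entry.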


Further specialized functions such as Web Content Optimization, Rate Shaping (dynamically rate-shaping slow applications) and Prioritization (detecting and managing abusive referers) give you control over how content is delivered, so that you can optimize the end-user experience.


The importance of HTTP optimization


There's one important class of applications where ADCs make a very significant performance difference using TCP offload, request/response buffering and HTTP keepalive optimization.


A number of application frameworks have fixed concurrency limits. Apache is the most notable (the prefork MPM has a default limit of 256 concurrent processes); Mongrel (Ruby) and others have a fixed number of worker processes, and some Java application servers impose an equivalent limit. These fixed limits are pragmatic: each TCP connection occupies a concurrency slot, which corresponds to a heavyweight process or thread, and too many concurrent processes or threads will bring the server to its knees, a weakness that can easily be exploited remotely if the limit is not low enough.
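The ceiling is visible directly in the server configuration. For example, Apache's prefork MPM caps concurrent connections with the MaxRequestWorkers directive; the values shown here are the stock Apache 2.4 defaults:

```apache
# Apache prefork MPM: one process per connection, hard concurrency ceiling.
<IfModule mpm_prefork_module>
    StartServers              5
    MinSpareServers           5
    MaxSpareServers          10
    MaxRequestWorkers       256   # formerly MaxClients: at most 256 concurrent connections
    MaxConnectionsPerChild    0
</IfModule>
```

Connection number 257 is not refused outright; it waits in the kernel's listen queue until one of the 256 processes frees up.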


The implication of this limit is that the server cannot service more than a fixed number of TCP connections concurrently. Additional connections are queued in the operating system's listen queue until a concurrency slot is released. In most cases, an idle client keepalive connection occupies a concurrency slot (hence the common Apache performance-tuning advice to disable or limit keepalives).


When you benchmark a concurrency-limited server over a fast local network, connections are established, serviced and closed rapidly. Concurrency slots are only occupied for a short period of time, connections are not queued for long, so the performance achieved is high.


However, when you place the same server in a production environment, connections last much longer (slow, lossy TCP; client keepalives), so concurrency slots are held for much longer. It's not uncommon to see an application server running in production at under 10% utilization, yet struggling to achieve 10% of the performance that was measured in the lab.
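The lab-versus-production gap follows directly from Little's law: slots in use = request rate x connection duration, so a fixed slot count caps throughput at slots divided by connection duration. A quick sanity check with illustrative numbers (256 slots, 5 ms lab connections, 500 ms production connections; the figures are assumptions, not measurements from the source):

```python
# Little's law: a fixed pool of concurrency slots caps throughput at
# slots / connection_duration, so longer-lived connections mean fewer req/s.
def max_throughput(slots, connection_duration_s):
    return slots / connection_duration_s

lab        = max_throughput(256, 0.005)  # fast local network: ~5 ms per connection
production = max_throughput(256, 0.500)  # slow clients + keepalives: ~500 ms

# Roughly 51,200 req/s in the lab versus 512 req/s in production:
# a 100x drop on identical hardware, purely from connection duration.
```

This is why a server can sit nearly idle in production yet queue connections: the bottleneck is slot occupancy, not CPU.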


The solution is to put a scalable proxy in front of the concurrency-limited server to offload the TCP connection, buffer the request data, use server-side connections efficiently, offload the response (releasing the concurrency slot as soon as the server has written it), and absorb the lingering keepalive connection.
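The "buffer the request data" step can be sketched as follows: the proxy accumulates bytes from the slow client and only claims a server connection once it holds a complete request. The class below is a simplified illustration (it assumes Content-Length framing and ignores chunked encoding, pipelining and many other details a real proxy handles):

```python
# Minimal sketch of HTTP request buffering: accumulate bytes as they trickle
# in from a slow client, and report completeness so that a backend connection
# is only used once the whole request has arrived.
class RequestBuffer:
    def __init__(self):
        self.data = b""

    def feed(self, chunk):
        """Add received bytes; return True once the full request is buffered."""
        self.data += chunk
        head, sep, body = self.data.partition(b"\r\n\r\n")
        if not sep:
            return False              # headers not yet complete
        length = 0
        for line in head.split(b"\r\n")[1:]:
            name, _, value = line.partition(b":")
            if name.strip().lower() == b"content-length":
                length = int(value.strip())
        return len(body) >= length    # body fully buffered?
```

While feed() keeps returning False, only the cheap proxy is tied up; the server's expensive concurrency slot is claimed only for the brief, fully-buffered exchange.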


Customer Stories



"Since Traffic Manager was deployed, there has been a major improvement to the performance and response times of the site."

David Turner, Systems Architect, PLAY.COM


"With Traffic Manager we effortlessly achieved between 10-40 times improvement in performance over the application working alone."

Steve Broadhead, BroadBand Testing


"Traffic Manager has allowed us to dramatically improve the performance and reliability of the TakingITGlobal website, effectively managing three times as much traffic without any extra burden."

 Michael Furdyk, Director of Technology, TakingITGlobal


"The performance improvements were immediately obvious for both our users and to our monitoring systems – on some systems, we can measure a 400% improvement in performance."

Philip Jensen, IT Section Manager, Sonofon


"700% improvement in application response times… The real challenge was to maximize existing resources, rather than having to continually add new servers."

Kent Wright, Systems Administrator, QuantumMail




