Pulse Secure vADC
Overview list of SteelApp Videos
This video provides an overview on how you can use FlyScript and Stingray to automatically add the JavaScript snippet required by Riverbed OPNET BrowserMetrix to web pages.
Vinay Reddy demonstrates Riverbed's Stingray Traffic Manager virtual application delivery controller in a VMware vFabric Application Director environment.
This video gives a general overview of Load Balancing with Stingray as well as recommendations on what Load Balancing algorithms to use depending on the situation.
Video: Introduction to TrafficScript
This video discusses what SSL Decryption with Stingray is, why to use it, and how to configure it.
Vinay Reddy and Nick Amato discuss how you can avoid public cloud outages using Global Load Balancing (GLB) with the Stingray Traffic Manager.
This video provides step by step instructions on how to properly deploy the Stingray Traffic Manager on Amazon Web Services (AWS) Virtual Private Cloud (VPC), along with best practices.
Vinay Reddy discusses setting up Global Load Balancing (GLB) with the Stingray Traffic Manager running in Amazon AWS.
In this hands-on technical video, Vinay Reddy, Senior Technical Marketing Engineer at Riverbed Technology, takes you through a step-by-step demo of Stingray Traffic Manager in Amazon AWS Cloud, including: exploring the AWS Marketplace, launching Stingray from the Amazon console, using the Amazon console to choose instance types and deployment configuration, and opening the Stingray admin console to prepare to configure nodes and virtual servers.
Video archived here: Faisal Memon demonstrates Stingray Aptimizer accelerating Magento eCommerce.
Vinay Reddy discusses how to migrate your Cisco ACE configuration to the Stingray Traffic Manager.
TrafficScript is Traffic Manager's scripting and configuration language. It lets you specify precisely how Traffic Manager must handle each type of request, in as much detail as you need. Without TrafficScript, you would have to configure your load balancer with a single, 'lowest-common-denominator' policy that describes how to handle all your network traffic. With TrafficScript, you control how Traffic Manager handles your traffic, inspecting and modifying each request and response as you wish, and pulling in each of Traffic Manager's features as you require.

What is TrafficScript?

TrafficScript is a high-level programming language used to create 'rules' which are invoked by the traffic manager each time a transaction request or response is received.

TrafficScript rules have full access to all request and response data, and give you full control over how end users interact with the load-balanced services. They are commonly used to selectively enable particular Traffic Manager features (for example, bandwidth control, caching or security policies) and to modify request and response data to handle error cases or augment web page data.

Although TrafficScript is a new language, the syntax is intentionally familiar. It is deeply integrated with the traffic management kernel for two reasons:

- Performance: the integration allows for very efficient, high-performance interaction with the internal state of the traffic manager.
- Abstraction: TrafficScript presents a very easy-to-use request/response event model that abstracts the internal complexities of managing network traffic away from the developer.

You can use TrafficScript to create a wide range of solutions, and the familiar syntax means that complex code can be prototyped and deployed rapidly.

Example 1 - Modifying Requests

   # Is this request a video download?
   $url = http.getPath();
   if( string.wildmatch( $url, "/videos/*.flv" ) ) {
      # Rewrite the request to target an f4v container, not flv
      $url = string.replace( $url, ".flv", ".f4v" );
      http.setPath( $url );

      # We encode flash videos at 1088 Kbits max. Apply a limit of 2 Mbits
      # to control download tools and other greedy clients
      response.setBandwidthClass( "Videos 2Mbits" );

      # We don't want to cache the response in the Stingray cache, even if
      # the HTTP headers state that it is cacheable
      http.cache.disable();
   }

A simple request rule that modifies the request and instructs the traffic manager to apply bandwidth and cache customizations.

TrafficScript's close integration with the traffic management kernel makes it as easy to rewrite HTTP responses as HTTP requests:

Example 2 - Modifying Responses

   $type = http.getResponseHeader( "Content-Type" );
   if( !string.startsWith( $type, "text/html" ) ) break;

   $response = http.getResponseBody();
   $response = string.replaceAll( $response,
      "http://intranet.mycorp.com/",
      "https://extranet.mycorp.com/" );
   http.setResponseBody( $response );

A response rule that makes a simple replacement to change links embedded in HTTP responses.

TrafficScript can invoke external systems in a synchronous or asynchronous fashion:

- TrafficScript functions like http.request.get(), auth.query() and net.dns.resolveIP() will query an external HTTP, LDAP or DNS server and return the result of the query. They operate synchronously (the rule is 'blocked' while the query is running), but the Traffic Manager will process other network traffic while the current rule is temporarily suspended.
- The TrafficScript function event.emit() raises an event to Stingray's Event Handling system. The TrafficScript rule continues to execute and the event is processed asynchronously. Events can trigger a variety of actions, ranging from syslog or email alerts to complex user-provided scripts.
These capabilities allow the Traffic Manager to interface with external systems to retrieve data, verify credentials or initiate external control-plane actions.

Example 3 - Accessing an external source (Google News)

   $type = http.getResponseHeader( "Content-Type" );
   if( $type != "text/html" ) break; # Stop processing this rule

   $res = http.request.get( "https://ajax.googleapis.com/ajax/services/search/news?".
      "v=1.0&q=Riverbed" );
   $r = json.deserialize( $res );
   $rs = $r['responseData']['results'];

   $html = "<ul>\n";
   foreach( $e in $rs ) {
      $html .= '<li>' .
         '<a href="'.$e['unescapedUrl'].'">'.$e['titleNoFormatting'].'</a>'.
         '</li>';
   }
   $html .= "</ul>\n";

   $body = http.getResponseBody();
   $body = string.replace( $body, "<!--RESULTS-->", $html );
   http.setResponseBody( $body );

An advanced response rule that queries an external datasource and inserts additional data into the web page response.

TrafficScript rules may also invoke Java Extensions. Extensions may be written in any language that targets the JVM, such as Python or Ruby as well as Java. They allow developers to use third-party code libraries and to write sophisticated rules that maintain long-term state or perform complex calculations.

Getting started with RuleBuilder

The full TrafficScript language gives you access to over 200 functions, with the support of a proper programming language - variables, tests, loops and other flow control. You can write TrafficScript rules much as you'd write Perl scripts (or Python, JavaScript, Ruby, etc.).

The RuleBuilder gives you a UI that lets you configure tests, and actions which are executed if one-of or all-of the tests are satisfied. The tests and actions you can use are predefined, and cover a subset of the full functions of TrafficScript. You can use the RuleBuilder much as you'd use the filtering rules in your email client.
RuleBuilder provides a simple way to create basic policies to control Traffic Manager.

If you're not familiar with programming languages, then RuleBuilder is a great way to get started. You can create simple policies to control Traffic Manager's operation, and then, with one click, transform them into the equivalent TrafficScript rule so that you can learn the syntax and extend them as required. There's a good example to start with in the Stop hot-linking and bandwidth theft! article.

Examples

Collected Tech Tips: TrafficScript examples
Top Examples of Traffic Manager in action (many solutions depend on TrafficScript)

Read More

TrafficScript Guide in the Product Documentation
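The HTML splice that Example 3 performs is plain string manipulation, so the same logic can be illustrated outside TrafficScript. Below is a minimal Python sketch of that splice; the result fields and the `<!--RESULTS-->` marker follow Example 3 above, while the sample data is invented for illustration:

```python
import json

def splice_results(body, results_json, marker="<!--RESULTS-->"):
    """Build an HTML list from a deserialized search result and splice it into a page body."""
    r = json.loads(results_json)
    html = "<ul>\n"
    for e in r["responseData"]["results"]:
        html += '<li><a href="%s">%s</a></li>' % (e["unescapedUrl"], e["titleNoFormatting"])
    html += "</ul>\n"
    # Replace the placeholder marker in the page with the generated list
    return body.replace(marker, html)

# Invented sample data shaped like the Google News response used in Example 3
sample = json.dumps({"responseData": {"results": [
    {"unescapedUrl": "http://example.com/a", "titleNoFormatting": "Story A"}]}})
page = "<html><body><!--RESULTS--></body></html>"
print(splice_results(page, sample))
```

In the TrafficScript version, the fetch, deserialization and body rewrite are all handled by built-in functions; the sketch only mirrors the string-assembly step.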
Traffic Manager's SOAP Control API is a standards-conformant SOAP-based API that makes it possible for other applications to query and modify the configuration of a Traffic Manager cluster. For example, a network monitoring or intrusion detection system may reconfigure Traffic Manager's traffic management rules as a result of abnormal network traffic; a server provisioning system could reconfigure Traffic Manager when new servers come online.

The SOAP Control API can be used by any programming language and application environment that supports SOAP services.

Examples

Collected Tech Tips: SOAP Control API examples

Read more

SOAP Control API Guide in the Product Documentation
Overview

Traffic Manager's RESTful Control API allows HTTP clients to access and modify Traffic Manager cluster configuration data. For example, a program using standard HTTP methods can create or modify virtual servers and pools, or work with other Traffic Manager configuration objects.

The RESTful Control API can be used by any programming language and application environment that supports HTTP.

Resources

The Traffic Manager RESTful API is HTTP-based and published on port 9070. Requests are made as standard HTTP requests, using the GET, PUT or DELETE methods. Every RESTful call deals with a "resource". A resource can be one of the following:

- A list of resources, for example, a list of Virtual Servers or Pools.
- A configuration resource, for example a specific Virtual Server or Pool.
- A file, for example a rule or a file from the extra directory.

Resources are referenced through a URI with a common directory structure. For this first version of the Traffic Manager RESTful API, the URI for all resources starts with "/api/tm/1.0/config/active". So, for example, to get a list of pools the URI would be "/api/tm/1.0/config/active/pools", and to reference the pool named "testpool" the URI would be "/api/tm/1.0/config/active/pools/testpool".

When accessing the RESTful API from a remote machine, HTTPS must be used, but when accessing it from a local Traffic Manager instance, HTTP can be used.

By default, the RESTful API is disabled; when enabled, it listens on port 9070. The RESTful API can be enabled, and the port changed, in the Traffic Manager GUI by going to System -> Security -> REST API.

To complete the example: to reference the pool named "testpool" on the Traffic Manager instance with a host name of "stingray.example.com", the full URI would be "https://stingray.example.com:9070/api/tm/1.0/config/active/pools/testpool".
To get a list of all the types of resources available, you can access the URI "https://stingray.example.com:9070/api/tm/1.0/config/active".

To retrieve the data for a resource you use the GET method, to add or change a resource you use the PUT method, and to delete a resource you use the DELETE method.

Data Format

Data for resource lists and configuration resources is returned as JSON structures with a MIME type of "application/json". JSON allows complex data structures to be represented as strings that can be easily passed in HTTP requests. When the resource is a file, the data is passed in its raw format with a MIME type of "application/octet-stream".

For lists of resources, the data returned has the format:

   {
      "children": [
         { "name": "<name>", "href": "/api/tm/1.0/config/active/pools/<name>" },
         { "name": "<name>", "href": "/api/tm/1.0/config/active/pools/<name>" }
      ]
   }

For example, the list of pools, given two pools, "pool1" and "pool2", would be:

   {
      "children": [
         { "name": "pool1", "href": "/api/tm/1.0/config/active/pools/pool1" },
         { "name": "pool2", "href": "/api/tm/1.0/config/active/pools/pool2" }
      ]
   }

For configuration resources, the data will contain one or more sections of properties, always with at least one section named "basic", and the property values can be of different types. The format looks like:

   {
      "properties": {
         "<section name>": {
            "<property name>": "<string value>",
            "<property name>": <numeric value>,
            "<property name>": <boolean value>,
            "<property name>": [<value>, <value>],
            "<property name>": {<key>: <value>, <key>: <value>}
         },
         "<section name>": {
            "<property name>": "<string value>",
            "<property name>": <numeric value>
         }
      }
   }

Accessing the RESTful API

Any client or program that can handle HTTP requests can be used to access the RESTful API. Basic authentication is used, with the usernames and passwords matching those used to administer Traffic Manager.
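To make the URI structure and list format concrete, here is a small Python sketch: one helper that builds a resource URI of the form described above, and one that pulls resource names out of a "children" list returned by a GET. The host name is illustrative:

```python
import json

# Version 1.0 configuration root, as described above
API_ROOT = "/api/tm/1.0/config/active"

def resource_uri(host, *segments):
    """Build the HTTPS URI for a resource (remote access must use HTTPS on port 9070)."""
    path = "/".join((API_ROOT,) + segments)
    return "https://%s:9070%s" % (host, path)

def child_names(list_body):
    """Extract resource names from a resource-list response: {"children": [...]}."""
    return [c["name"] for c in json.loads(list_body)["children"]]

print(resource_uri("stingray.example.com", "pools", "testpool"))
# -> https://stingray.example.com:9070/api/tm/1.0/config/active/pools/testpool

sample = json.dumps({"children": [
    {"name": "pool1", "href": "/api/tm/1.0/config/active/pools/pool1"},
    {"name": "pool2", "href": "/api/tm/1.0/config/active/pools/pool2"}]})
print(child_names(sample))
# -> ['pool1', 'pool2']
```

Because the API is discoverable, the same `child_names` helper works on any list resource, starting from the root URI and walking down.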
To view the data returned by the RESTful API without having to do any programming, there are browser plug-ins that can be used. One that is available is the Chrome REST Console; it is very helpful during testing to have something like this available. One nice thing about a REST API is that it is discoverable, so using something like the Chrome REST Console, you can walk the resource tree and see everything that is available via the RESTful API. You can also add, change and delete data. For more information on using the Chrome REST Console see: Tech Tip: Using Traffic Manager's RESTful Control API with the Chrome REST Console.

When adding or changing data, use the PUT method; for configuration resources, the data sent in the request must be in JSON format and must match the data format returned when doing a GET on the same type of resource. When adding a configuration resource you do not need to include all properties, just the minimum sections and properties required to add the resource, and this will vary for each resource. When changing data you only need to include the sections and properties that need to be changed. To delete a resource, use the DELETE method.

Notes

An important caution when changing or deleting data is that this version of the RESTful API does not do data integrity checking. The RESTful API will allow you to make changes that would not be allowed in the GUI or CLI. For example, you can delete a Pool that is being used by a Virtual Server. This means that when using the RESTful API, you should be sure to understand the data integrity requirements for the resources that you are changing or deleting, and put validation in any programs you write.

This release of the RESTful API is not compatible with Multi-Site Manager, so both cannot be enabled at the same time.
Read more

REST API Guide in the Product Documentation
Collected Tech Tips: Using the RESTful Control API with Python
Tech Tip: Using Stingray's RESTful Control API with the Chrome REST Console
Traffic Manager's autoscaling capability helps you dynamically control the resources that a service uses, so that you can deliver services to a desired SLA while minimizing cost. The intention of this feature is that you can:

- Define the desired SLA for a service (based on response time of nodes in a pool)
- Define the minimum number of nodes needed to deliver the service (e.g. 2 nodes for fault-tolerance reasons)
- Define the maximum number of nodes (acting as a brake - this limits how much resource Traffic Manager will deploy in the event of a denial-of-service attack, traffic surge, application fault, etc.)

You also need to configure Traffic Manager to deploy instances of the nodes, typically from a template in Amazon, Rackspace or VMware.

You can then leave Traffic Manager to provision the nodes, and to dynamically scale the number of nodes up or down to minimize the cost (number of nodes) while preserving the SLA.

Details

Autoscaling is a property of a pool. A pool contains a set of 'nodes' - back-end servers that provide a service on an IP address and port. All of the nodes in a pool provide the same service. Autoscaling monitors the service level (i.e. response time) delivered by a pool. If the response time falls outside the desired SLA, then autoscaling will add or remove nodes from the pool to increase or reduce resource in order to meet the SLA at the lowest cost.

The feature consists of a monitoring and decision engine, and a collection of driver scripts that interface with the relevant platform.

The decision engine

The decision engine monitors the response time from the pool. Configure it with the desired SLA, and the scale-up/scale-down thresholds.

Example: my SLA is 1000 ms. I want to scale up (add nodes) if less than 40% of transactions are completed within this SLA, and scale down (remove nodes) if more than 95% of transactions are completed within the SLA.
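The threshold logic in this example can be sketched in a few lines. This is a simplified Python illustration of the decision, not the real engine - the function name, node counts and the 40%/95% defaults simply mirror the example above:

```python
def autoscale_decision(pct_within_sla, nodes, min_nodes=2, max_nodes=10,
                       scale_up_below=40, scale_down_above=95):
    """Decide a scaling action from the percentage of transactions meeting the SLA.

    Add a node when SLA conformance is poor, remove one when conformance is
    comfortably high, and otherwise hold steady - always respecting the
    configured minimum and maximum pool sizes.
    """
    if pct_within_sla < scale_up_below and nodes < max_nodes:
        return "scale-up"
    if pct_within_sla > scale_down_above and nodes > min_nodes:
        return "scale-down"
    return "hold"

print(autoscale_decision(35, nodes=3))   # poor conformance -> scale-up
print(autoscale_decision(99, nodes=3))   # comfortable -> scale-down
print(autoscale_decision(99, nodes=2))   # already at min-nodes -> hold
```

The real decision engine adds the hysteresis and cooldown timers described next, so that transient blips do not trigger churn.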
To avoid flip-flopping, I want to wait for 20 seconds before initiating the change (in case the problem is transient and goes away), and I want to wait 180 seconds before considering another change.

Other parameters control the minimum and maximum number of nodes in a pool, and how we access the service on new nodes.

The driver

Traffic Manager includes drivers for Amazon EC2, Rackspace and VMware vSphere. You will need to configure a set of 'cloud credentials' (authentication details for the management API for the virtual platform). You'll also need to specify the details of the virtual machine template that instantiates the service in the pool.

The decision engine initiates a scale-up or scale-down action by invoking the driver with the configured credentials and parameters. The driver instructs the virtualization layer to deploy or terminate a virtual machine. Once the action is complete, the driver returns the new list of nodes in the pool and the decision engine updates the pool configuration.

Notes:

You can manually provision nodes by editing the max-nodes and min-nodes settings in the pool. If Traffic Manager notices that there is a mismatch between the max/min and the actual number of nodes active, then it will initiate a series of scale-up or scale-down actions.

Creating a custom driver for a new platform

You can create a custom driver for any platform that is capable of deploying new service instances on demand. Creating a new driver involves:

- Creating the driver script, conforming to the API below
- Uploading the script to the Extra Files -> Miscellaneous store using the UI (or copying it to $ZEUSHOME/zxtm/conf/extra)
- Creating a Credentials object that contains the uids, passwords etc. necessary to talk to the cloud platform
- Configuring the pool to autoscale, and providing the details of the virtual machine that should be provisioned

Specification of the driver scripts

The settings in the UI are interpreted by the Cloud API script.
Traffic Manager will invoke this script and pass the details in. Use the ZEUSHOME/zxtm/bin/rackspace.pl or vsphere-client scripts as examples (the ZEUSHOME/zxtm/bin/awstool script is multi-purpose, and is also used by Traffic Manager's handling of EC2 EIPs for failover).

Arguments: the scripts should support several actions - status, createnode, destroynode, listimageids and listsizeids. Run --help:

   root@stingray-1:/opt/zeus/zxtm/bin# ./rackspace.pl --help
   Usage: ./rackspace.pl [--help] action options
   action: [status|createnode|destroynode|listimageids|listsizeids]
   common options: --verbose=1 --cloudcreds=name
   other valid options depend on the chosen action:

   status:      --deltasince=tstamp Only report changes since timestamp tstamp (unix time)

   createnode:  --name=newname Associate name newname (must be unique) with the new instance
                --imageid=i_id Create an instance of image uniquely identified by i_id
                --sizeid=s_id  Create an instance with size uniquely identified by s_id

   destroynode: --id=oldid destroy instance uniquely identified by oldid

Note: the '--deltasince' option isn't supported by many cloud APIs, but has been added for Rackspace. If the cloud API in question supports reporting only changes since a given date/time, it should be implemented.

The value of the --name option will be chosen by the autoscaler on the basis of the 'autoscale!name' setting: a different integer will be appended to the name for each node.
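A custom driver written in Python could parse this command-line interface with the standard argparse module. The sketch below only covers argument handling - the program name is hypothetical, and the cloud-platform calls behind each action are left out:

```python
import argparse

def build_parser():
    """Argument parser matching the driver-script interface shown above."""
    p = argparse.ArgumentParser(prog="mycloud-driver")  # hypothetical driver name
    p.add_argument("action", choices=["status", "createnode", "destroynode",
                                      "listimageids", "listsizeids"])
    p.add_argument("--verbose", type=int, default=0)
    p.add_argument("--cloudcreds")
    p.add_argument("--deltasince", type=int)  # status: only changes since this unix time
    p.add_argument("--name")                  # createnode: unique name for the new instance
    p.add_argument("--imageid")               # createnode: image to instantiate
    p.add_argument("--sizeid")                # createnode: instance size
    p.add_argument("--id")                    # destroynode: instance to destroy
    return p

args = build_parser().parse_args(["createnode", "--cloudcreds=mycreds",
                                  "--name=webserver9", "--imageid=41", "--sizeid=1"])
print(args.action, args.name)
# -> createnode webserver9
```

A real driver would dispatch on `args.action` and emit the JSON responses described in the next section.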
The script should return a JSON-formatted response for each action.

Status action

Example response:

   {"NodeStatusResponse": {
      "version": 1,
      "code": 200,
      "nodes": [
         {"sizeid": 1,
          "status": "active",
          "name": "TrafficManager",
          "public_ip": "174.143.156.25",
          "created": 1274688603,
          "uniq_id": 98549,
          "private_ip": "10.177.4.216",
          "imageid": 8,
          "complete": 100},
         {"sizeid": 1,
          "status": "active",
          "name": "webserver0",
          "public_ip": "174.143.153.20",
          "created": 1274688003,
          "uniq_id": 100768,
          "private_ip": "10.177.1.212",
          "imageid": "apache_xxl",
          "complete": 100}
      ]
   }}

version and code must be JSON integers in decimal notation; sizeid, uniq_id and imageid can be decimal integers or strings.

name must be a string. Some clouds do not give every instance a name; in this case it should be left out or set to the empty string. The autoscaler process will then infer the relevance of a node for a pool on the basis of the imageid (which must match 'autoscale!imageid' in the pool's configuration).

created is the unix timestamp of when the node was created, and hence must be a decimal integer. When the autoscaler destroys nodes, it will try to destroy the oldest node first. Some clouds do not provide this information; in this case it should be set to zero.

complete must be a decimal integer indicating the percentage of progress when a node is created.

A response code of 304 to a 'status' request with a '--deltasince' option is interpreted as 'no change since the last status request'.

CreateNode action

The response is a JSON-formatted object as follows:

   {"CreateNodeResponse": {
      "version": 1,
      "code": 202,
      "nodes": [
         {"sizeid": 1,
          "status": "pending",
          "name": "webserver9",
          "public_ip": "173.203.222.113",
          "created": 0,
          "uniq_id": 230593,
          "private_ip": "10.177.91.9",
          "imageid": 41,
          "complete": 0}
      ]
   }}

The 202 corresponds to the HTTP response code 'Accepted'.
DestroyNode action

The response is a JSON-formatted object as follows:

   {"DestroyNodeResponse": {
      "version": 1,
      "code": 202,
      "nodes": [
         {"created": 0,
          "sizeid": "unknown",
          "uniq_id": 230593,
          "status": "destroyed",
          "name": "unknown",
          "imageid": "unknown",
          "complete": 100}
      ]
   }}

Error conditions

The autoscaling driver script should communicate error conditions using response codes >= 400 and/or by writing output to stderr. When the autoscaler detects an error from an API script, it disables autoscaling for all pools using the Cloud Credentials in question until an API call using those Cloud Credentials is successful.
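Two details of the response contract are easy to get wrong in a custom driver: the envelope shape, and the rule that the autoscaler destroys the oldest node first (by the 'created' timestamp). A small Python sketch of both, using invented node data shaped like the examples above:

```python
import json

def node_status_response(nodes):
    """Wrap a node list in the NodeStatusResponse envelope expected from 'status'."""
    return json.dumps({"NodeStatusResponse": {"version": 1, "code": 200,
                                              "nodes": nodes}})

def oldest_node(nodes):
    """Pick the node the autoscaler would destroy first: the oldest 'created' stamp."""
    return min(nodes, key=lambda n: n["created"])

# Invented node records following the field conventions described above
nodes = [
    {"sizeid": 1, "status": "active", "name": "webserver0", "created": 1274688003,
     "uniq_id": 100768, "imageid": "apache_xxl", "complete": 100},
    {"sizeid": 1, "status": "active", "name": "webserver1", "created": 1274688603,
     "uniq_id": 100769, "imageid": "apache_xxl", "complete": 100},
]
print(oldest_node(nodes)["name"])
# -> webserver0
```

If a cloud cannot report creation times, 'created' should be zero, in which case the oldest-first preference effectively becomes arbitrary.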
Traffic Manager's Content Caching capability allows Traffic Manager to identify web page responses that are the same for each request, and to remember ('cache') the content. The content may be 'static', such as a file on disk on the web server, or it may have been generated by an application running on the web server.

Why use Content Caching?

When another client asks for content that Traffic Manager has cached in its internal web cache, Traffic Manager can return the content directly to the client without having to forward the request to a back-end web server. This has the effect of reducing the load on the back-end web servers, particularly if Traffic Manager has detected that it can cache content generated by complex applications which consume resources on the web server machine.

What are the pitfalls?

A content cache may store a document that should not be cached. Traffic Manager conforms to the caching recommendations of RFC 2616, which describe how web browsers and servers can specify cache behaviour. However, if a web server is misconfigured and does not provide the correct cache control information, then a TrafficScript or RuleBuilder rule can be used to override Traffic Manager's default caching logic.

A content cache may need a very large amount of memory to be effective. Depending on the spread of content for your service, and the proportion that is cacheable and frequently used compared to the long tail of less-used content, you may need a very large content cache to get the best possible hit rates. Traffic Manager allows you to specify precisely how much memory you wish to use for your cache, and to impose fine limits on the sizes of files to be cached and the duration that they should be cached for.
Traffic Manager's 64-bit software overcomes the 2-4 GB limit of older solutions, and Traffic Manager can operate with a two-tier (in-memory and on-SSD) cache in situations where you need a very large cache and the cost of server memory is prohibitive.

How does it work?

Not all web content can be cached. Information in the HTTP request and the HTTP response drives Traffic Manager's decisions as to whether or not a request should be served from the web cache, and whether or not a response should be cached.

Requests

- Only HTTP GET and HEAD requests are cacheable; all other methods are not cacheable.
- The Cache-Control header in an HTTP request can force Traffic Manager to ignore the web cache and to contact a back-end node instead.
- Requests that use HTTP basic-auth are uncacheable.

Responses

- The Cache-Control header in an HTTP response can indicate that an HTTP response should never be placed in the web cache. The header can also use the max-age value to specify how long the cached object can be cached for; this may cause a response to be cached for less than the configured webcache!time parameter.
- HTTP responses can use the Expires header to control how long to cache the response for. Note that using the Expires header is less efficient than using the max-age value in the Cache-Control response header.
- The Vary HTTP response header controls how variants of a resource are cached, and which variant is served from the cache in response to a new request.

If a web application wishes to prevent Traffic Manager from caching a response, it should add a 'Cache-Control: no-cache' header to the response.

Debugging Traffic Manager's Cache Behaviour

You can use the global setting webcache!verbose if you wish to debug your cache behaviour. This setting is found in the Cache Settings section of the System, Global Settings page.
If you enable this setting, Traffic Manager will add a header named 'X-Cache-Info' to the HTTP response to indicate how the cache policy has taken effect. You can inspect this header using Traffic Manager's access logging, or using the developer extensions in your web browser.

X-Cache-Info values

   X-Cache-Info: cached
   X-Cache-Info: caching
   X-Cache-Info: not cacheable; request had a content length
   X-Cache-Info: not cacheable; request wasn't a GET or HEAD
   X-Cache-Info: not cacheable; request specified "Cache-Control: no-store"
   X-Cache-Info: not cacheable; request contained Authorization header
   X-Cache-Info: not cacheable; response had too large vary data
   X-Cache-Info: not cacheable; response file size too large
   X-Cache-Info: not cacheable; response code not cacheable
   X-Cache-Info: not cacheable; response contains "Vary: *"
   X-Cache-Info: not cacheable; response specified "Cache-Control: no-store"
   X-Cache-Info: not cacheable; response specified "Cache-Control: private"
   X-Cache-Info: not cacheable; response specified "Cache-Control: no-cache"
   X-Cache-Info: not cacheable; response specified max-age <= 0
   X-Cache-Info: not cacheable; response specified "Cache-Control: no-cache=..."
   X-Cache-Info: not cacheable; response has already expired
   X-Cache-Info: not cacheable; response is 302 without expiry time

Overriding Traffic Manager's default cache behaviour

Several TrafficScript and RuleBuilder cache control functions are available to control Traffic Manager's caching behaviour. In most cases, these functions eliminate the need to manipulate headers in the HTTP requests and responses.

http.cache.disable()

Invoking http.cache.disable() in a response rule prevents Traffic Manager from caching the response. The RuleBuilder 'Make response uncacheable' action has the same effect.

http.cache.enable()

Invoking http.cache.enable() in a response rule reverts the effect of a previous call to http.cache.disable().
It causes Traffic Manager's default caching logic to take effect. Note that it is possible to force Traffic Manager to cache a response that would normally be uncacheable by rewriting the headers of that response using TrafficScript or RuleBuilder (response rewriting occurs before cacheability testing).

http.cache.setkey()

The http.cache.setkey() function is used to differentiate between different versions of the same request, in much the same way that the Vary response header functions. It is used in request rules, but may also be used in response rules.

It is more flexible than the RFC 2616 Vary support, because it lets you partition requests on any calculated value - for example, different content based on whether the source address is internal or external, or whether the client's User-Agent header indicates an IE or Gecko-based browser. This capability is not available via RuleBuilder.

Simple control

http.cache.enable and http.cache.disable allow you to easily implement either a 'default on' or a 'default off' policy, where you either wish to cache everything cacheable unless you explicitly disallow it, or you wish to only let Traffic Manager cache things you explicitly allow. For example, you may have identified a particular set of transactions, out of a large working set, that account for 90% of your web server usage, and you wish to cache just those requests and not let less painful transactions knock them out of the cache. Alternatively, you may be trying to cache a web-based application which is not HTTP compliant, in that it does not properly mark up pages which are not cacheable, and caching them would break the application. In this scenario, you wish to enable caching only for particular code paths which you have tested and confirmed do not break the application.
An example TrafficScript rule implementing a 'default off' policy might be:

   # Only cache what we explicitly allow
   http.cache.disable();

   if( string.regexmatch( http.geturl(), "^/sales/(order|view).asp" ) ) {
      # these are our most painful pages for the DB, and are cacheable
      http.cache.enable();
   }

RuleBuilder offers only the simple 'default on' policy, overridden either by the response headers or the 'Make response uncacheable' action.

Caching multiple resource versions for the same URL

Suppose that your web service returns different versions of your home page, depending on whether the client is coming from an internal network (10.0.0.0) or an external network. If you were to put a content cache in front of your web service, you would need to arrange that your web server sent a 'Cache-Control: no-cache' header with each response so that the page was not cached. Use the following request rule to manipulate the request and set a 'cache key' so that Traffic Manager caches the two different versions of your page:

   # We're only concerned about the home page...
   if( http.getPath() != "/" ) break;

   # Set the cache key depending on where the client is located
   $client = request.getRemoteIP();
   if( string.ipmaskmatch( $client, "10.0.0.0/8" ) ) {
      http.cache.setkey( "internal" );
   } else {
      http.cache.setkey( "external" );
   }

   # Remove the Cache-Control response header - it's no longer needed!
   http.removeResponseHeader( "Cache-Control" );

Forcing pages to be cached

You may have an application, say a JSP page, that says it is not cacheable, but you know that under certain circumstances it is, and you want to force Traffic Manager to cache this page because it is a heavy user of resources on the web server.
You can force Traffic Manager to cache such pages by rewriting their response headers; any TrafficScript rewrites happen before the content caching logic is invoked, so you can perform extremely fine-grained caching control by manipulating the HTTP response headers of pages you wish to cache.

In this example, we have a JSP page that sets a 'Cache-Control: no-cache' header, which prevents Stingray from caching the page. We can make this response cacheable by removing the Cache-Control header (and potentially the Expires header as well), for example:

    if( http.getPath() == "/testpage.jsp" ) {
       # We know this request is cacheable; remove the 'Cache-Control: no-cache' header
       http.removeResponseHeader( "Cache-Control" );
    }

Granular cache timeouts

For extra control, you may wish instead to use the http.setResponseHeader() function to set a Cache-Control header with a max-age= parameter specifying exactly how long this particular piece of content should be cached, or to add a Vary header specifying which parts of the input request this response depends on (e.g. user language, or a cookie). You can use these methods to set cache parameters on entire sets of URLs (e.g. all *.jsp) or on individual requests, for maximum flexibility.

The RuleBuilder 'Set Response Cache Time' action has the same effect.

Read more

Stingray Product Documentation
Cache your website - just for one second?
Managing consistent caches across a Stingray Cluster
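The 'Granular cache timeouts' technique described in this article can be sketched as a response rule along the following lines. This is an illustration only: the one-hour lifetime, the *.jsp match and the choice of Vary header are arbitrary assumptions, not recommendations.

```trafficscript
# Response rule: cache all JSP-generated pages for one hour,
# storing a separate variant per client language
if( string.endswith( http.getPath(), ".jsp" ) ) {
   # max-age is in seconds; 3600 = 1 hour (arbitrary choice)
   http.setResponseHeader( "Cache-Control", "max-age=3600" );
   # store one cached copy per Accept-Language value
   http.setResponseHeader( "Vary", "Accept-Language" );
}
```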
View full article
Why do you need Session Persistence?

Consider the process of conducting a transaction in your application - perhaps your user is uploading and annotating an image, or concluding a shopping cart transaction.

This process may entail a number of individual operations - for example, several HTTP POSTs to upload and annotate the image - and you may want to ensure that these network transactions are routed to the same back-end server. Your application design may mandate this (because intermediate state is not shared between nodes in a cluster), or it may just be highly desirable (for performance and cache-hit reasons).

In this situation, Traffic Manager's load balancing (Feature Brief: Load Balancing in Traffic Manager) will work against you. Traffic Manager will process each network operation independently, and it is very likely that the network transactions will be routed to different machines in your cluster. In this case, you need to be firm with Traffic Manager and require that all transactions in the same 'session' are routed to the same machine.

Enter 'Session Persistence' - the means to override the load-balancing decision and pin 'sessions' to the same server machine.

Session Persistence Methods

Traffic Manager employs a range of session persistence methods, each with a different way to identify a session. You should generally select the session persistence method that most accurately identifies user sessions for the application you are load balancing.
| Persistence Type | Session identifier | Session data store |
|---|---|---|
| IP-based persistence | Source IP address | Internal session cache |
| Universal session persistence | TrafficScript-generated key | Internal session cache |
| Named Node session persistence | TrafficScript specifies node | None |
| Transparent session affinity | HTTP browser session | Client-side cookie (set by Stingray) |
| Monitor application cookies | Named application cookie | Client-side cookie (set by Stingray) |
| J2EE session persistence | J2EE session identifier | Internal session cache |
| ASP and ASP.NET session persistence | ASP/ASP.NET session identifiers | Internal session cache |
| X-Zeus-Backend cookies | Provided by backend node | Client-side cookie (set by backend) |
| SSL Session ID persistence | SSL Session ID | Internal session cache |

For a detailed description of the various session persistence methods, please refer to the User Manual (Product Documentation).

Where is session data stored?

Client-side cookies

Traffic Manager issues a Set-Cookie header to store the name of the desired node in a client-side cookie. The cookie identifier and the name of the node are both hashed to prevent tampering or information leakage.

In the case of 'Monitor Application Cookies', the session cookie is given the same expiry time as the cookie it is monitoring. In the case of 'Transparent Session Affinity', the session cookie is not given an expiry time: it lasts for the duration of the browser session. See also: What's the X-Mapping- cookie for, and does it constitute a security or privacy risk?

Internal session cache

Session data is stored in Traffic Manager in a fixed-size cache, and replicated across the cluster according to the 'State Synchronization Settings' (Global Settings).

All session persistence classes of the same type share the same cache space. The session persistence caches function in a 'Least Recently Used' fashion: each time an entry is accessed, its timestamp is updated.
When an entry must be removed to make room for a new session, the entry with the oldest timestamp is dropped.

Controlling Session Persistence

Session persistence ties the requests from one client (i.e. in one 'session') to the same back-end server node. It overrides the intelligence of the load-balancing algorithm, which tries to select the fastest, most available node for each request.

In a web session, it is often only necessary to tie some requests to the same server node. For example, you may want to tie requests that begin with "/servlet" to a server node, but leave Traffic Manager free to load-balance all other requests (images, static content) as appropriate.

Session persistence may be a property of a pool - all requests processed by that pool are assigned to a session and routed accordingly - but if you want more control, you can manage it using TrafficScript.

Configure a session persistence class with the desired configuration for your /servlet requests, then use the following request rule:

    if( string.startswith( http.getPath(), "/servlet" ) ) {
       connection.setPersistenceClass( "servlet persistence" );
    }

Missing Session Persistence entries

If a client connects and no session persistence entry exists in the internal table, then the connection is handled as if it were a new session. Traffic Manager applies load balancing to select the most appropriate node, and then records the selection in the session table. The record is broadcast to the other Traffic Manager machines in the cluster.

Failed Session Persistence attempts

If the session data (client cookie or internal table) references a node that is not available (it has failed or has been removed), then the default behavior is to delete the session record and load-balance to a working node.
This behavior may be modified on a per-persistence-class basis, to send a 'sorry' message or just drop the connection:

Configure how to respond and how to manage the session if a target node cannot be reached

Draining and disabled nodes

If a node is marked as draining, then existing sessions will be directed to that node, but no new sessions will be established. Once the existing sessions have completed, it is safe to remove the node without interrupting connections or sessions.

Traffic Manager provides a counter indicating when the node was last used. If you wish to time sessions out after 10 minutes of inactivity, then you can remove the node once the counter passes 10 minutes:

The 'Connection Draining' report indicates how long ago the last session was active on a node

If a node is marked as disabled, no connections are sent to it. Existing connections will continue until they are closed. In addition, Traffic Manager stops running health monitors against the disabled node. Disabling a node is a convenient way to take it out of service temporarily (for example, to apply a software update) without removing it completely from the configuration.

Monitoring and Debugging Session Persistence

SNMP and Activity Monitor counters may be used to monitor the behavior of the session cache. You will observe that the cache gradually fills up as sessions are added, and then remains full. The maximum age of cache entries will likely follow a fine saw-tooth pattern as the oldest entry gradually ages and is then either dropped or refreshed, although this is only visible if new entries are added infrequently:

In the first 4 minutes, traffic is steady at 300 new sessions per minute and the session cache fills. Initially, the maximum age grows steadily, but when the cache fills (after 2 minutes) the maximum age remains fairly stable as older entries are dropped. In the last minute, no new entries were added, so the cache remains full and the maximum age increases steadily.
The 'Current Connections' table displays the node that was selected for each transaction that the traffic manager processed:

Observe that requests have been evenly distributed between nodes 201, 202 and 203 because no session persistence is active

Transaction logging can give additional information. Access logs support webserver-style macros, and the following macros are useful:

| Macro | Description |
|---|---|
| %F | The favored node; this is a hint to the load-balancing algorithm to optimize node cache usage |
| %N | The required node (may be blank): defined by a session persistence method |
| %n | The actual node used by the connection; may differ from %F if the favored node is overloaded, and from %N if the required node has failed |

Finally, TrafficScript can be used to annotate pages with the name of the node they were served from:

    if( http.getResponseHeader( "Content-Type" ) != "text/html" ) break;

    $body = http.getResponseBody();
    $html = '<div style="position:absolute;top:0;left:0;border:1px solid black;background:white">'.
            'Served by ' . connection.getNode() . '</div>';
    $body = string.regexsub( $body, "(<body[^>]*>)", "$1\n".$html."\n", "i" );
    http.setResponseBody( $body );

Final Observations

Like caching, session persistence breaks the simple model of load-balancing each transaction to the least-loaded server. If used without a full understanding of the consequences, it can provoke strange and unexpected behavior.

The built-in session persistence methods in Traffic Manager are suitable for a wide range of applications, but it is always possible to construct situations - with fragile applications or small numbers of clients - where session persistence is not the right solution for the problem at hand.

Session persistence should be regarded as a performance optimization, ensuring that users are directed to a node that has their session data ready and fresh in a local cache.
No application should depend absolutely upon session persistence, because to do so would introduce a single point of failure for every user's session.

Pragmatically, it is not always possible to achieve this. Traffic Manager's TrafficScript language provides the means to fine-tune session persistence: to accurately recognize individual sessions, apply session persistence judiciously to the transactions that require it, and implement timeouts if required.

Read more

Session Persistence - implementation and timeouts
HowTo: Controlling Session Persistence
HowTo: Delete Session Persistence records
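As a further illustration of fine-tuning session persistence from TrafficScript, the sketch below keys 'universal' session persistence on an application's own session cookie. The persistence class name "universal persistence" and the JSESSIONID cookie are assumptions for the example: you would configure a Universal-type persistence class and substitute your application's cookie name.

```trafficscript
# Request rule: persist on the application's own session cookie,
# so all requests carrying the same session ID reach the same node
$session = http.getCookie( "JSESSIONID" );
if( $session != "" ) {
   connection.setPersistenceClass( "universal persistence" );
   connection.setPersistenceKey( $session );
}
```

Requests without the cookie fall through to normal load balancing, which is usually the desired behavior for first-time visitors.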
View full article
The Traffic Manager Documentation (user manual) describes how to configure and use Traffic Manager's Service Level Monitoring capability. This article summarizes how the SLM statistics are calculated, so that you can better appreciate what they mean and how they can be used to detect service level problems.

Overview

Service Level Monitoring classes are used to determine how well a service is meeting a response-time-based service level. One key goal of the implementation of Service Level Monitoring is that the data it measures is not affected by the performance or reliability of the remote clients; as far as possible, the SLM class gives an accurate measure of performance and reliability covering only the factors that the application administrator has control over.

By default, connections are not measured using Service Level Monitoring classes. Typically, an administrator would select the types of connections he or she is interested in (for example, just requests for .asp resources) and assign those to a service level monitoring class:

    connection.setServiceLevelClass( "My SLM Class Name" );

A virtual server may also be configured with a 'default' service level class, so that all of the connections it manages are timed using that class.

Timers

Traffic Manager starts a timer for each connection when it receives the request from the remote client. The timer is stopped when either the first data from the server is received, or the connection is unexpectedly closed (perhaps due to a client or server timeout).

The high-resolution timer measures the time taken to run any TrafficScript request rules (including any delay if these rules perform a blocking action such as communicating with an external server), the time taken to read any additional request data (such as HTTP body data), the time to connect to a node and write the request, and the time to read the first response data.
When the timer is stopped, Traffic Manager checks to see whether a Service Level Class was assigned to the connection. If so, the elapsed time is recorded against the SLM class, and the per-second 'max' and 'min' response times managed by the class are updated if necessary. Traffic Manager also maintains a per-second count of how many requests conformed and how many failed to conform.

Note: connections which close unexpectedly before the 'conformance' time limit are disregarded completely, because they never completed. Connections which close unexpectedly after the 'conformance' time limit has passed are counted as non-conforming, and the elapsed time is counted towards the performance of the SLM class.

Calculations

For each service level class, Traffic Manager maintains a rolling list of the last 10 seconds' worth of data: min, max and average response times, and the numbers of conforming and non-conforming requests. When asked for the percentage conforming for the SLM class, Traffic Manager sums the results from the last 10 seconds.

Note: Traffic Manager commonly runs in multi-process mode (one process per CPU core). In that case, each child process counts SLM data in a shared memory segment, so the results are consistent no matter which process handles a given connection.

Note: when running in a cluster, Traffic Manager automatically shares and merges SLM data from the other members of the cluster. There may be a slight time delay in the state sharing, so the SLM calculations from different cluster members running in active-active mode may be slightly inconsistent. If the cluster has only one active traffic manager for a given SLM class, the passive traffic managers will be able to 'see' the SLM statistics, but they may be delayed by a second or so.

Using SLM classes

SLM class data may be used in a variety of ways.
You can configure 'warning' and 'serious' conformance levels, and Traffic Manager will log a message whenever the class transitions between the 'ok', 'warning' and 'serious' states. A transition can also trigger an event using Traffic Manager's Event Handling capability, and you can assign custom actions to these events.

You can inspect the state of an SLM class in TrafficScript, to return an error message or different content when a service begins to underperform. Service Level Monitoring is a key measurement tool when determining whether or not to prioritize traffic.
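For example, a request rule along these lines could shed load when conformance drops. This is a sketch only: the class name, the 70% threshold and the error page content are illustrative assumptions.

```trafficscript
# If fewer than 70% of recent requests met the service level,
# return a static 'sorry' page instead of queuing more work
if( slm.conforming( "My SLM Class Name" ) < 70 ) {
   http.sendResponse( "503 Service Unavailable", "text/html",
                      "<h1>Please try again later</h1>", "" );
}
```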
View full article
'Server First', 'Client First' and 'Generic Streaming' are the most basic L7 protocol types that Traffic Manager can use to manage traffic. They are useful when managing custom protocols or simple TCP connections, because they do not expect the traffic to conform to any specific format. This document describes the differences between the Server First, Client First and Generic Streaming protocols, and explains some of the details of managing them.

Server-first protocols

Server-first is the simplest. When Traffic Manager receives a connection from a client, it immediately runs any TrafficScript rules, then immediately connects to a back-end server. It then listens on both connections (client-side and server-side) and passes data from one side to the other as it arrives.

Load-balancing a simple server-first protocol, such as TIME or MySQL

Server-first protocols may be one-shot protocols such as Time (where the server writes the response data, then immediately closes the connection), or complex protocols like MySQL (where the server opens the dialog with a password challenge).

Client-first protocols

Client-first is an optimization of server-first. On the client side, Traffic Manager is only alerted once a connection has been established and data has been received. Traffic Manager then proceeds in the style of server-first, i.e. it runs TrafficScript rules, connects to the back end and relays data back and forth. In practice, you can use server-first any time you could use client-first, but client-first is more efficient when the client is expected to write the first data.

Load-balancing a client-first protocol such as HTTP

When you use a client-first protocol, you can use TrafficScript rules to inspect the client's request data before making your load-balancing decision.

Server-first with 'server banner'

Server-first with a 'server banner' is a different optimization, catering for servers which broadcast a banner on connect, such as SMTP.
When a client connects, Traffic Manager immediately writes the configured 'server-first banner' to the client, then proceeds as for a regular client-first connection. In addition, Traffic Manager reads and discards the first line of data (terminated by a \n) that the server sends.

Load-balancing a simple server-first protocol with a greeting banner, such as SMTP or FTP

Once again, you can use TrafficScript rules to inspect the client's request data before making your load-balancing decision.

Generic Streaming

Generic Streaming is a very basic connection handling mode that does not impose any synchronization. It is best suited to long-lived, asynchronous protocols where either party may initiate a transaction, such as WebSockets or chat protocols.

Load-balancing an unsynchronized protocol such as chat or WebSockets

If the protocol has a well-defined handshake (such as WebSockets), you can attach rules to manage the handshake. You can create rules that manage the subsequent, asynchronous packets, but you should take care not to call TrafficScript functions that would block (such as request.read()), as these will stall data flowing in the other direction on the connection.

Troubleshooting protocol problems

Timeouts

All connections have associated connect and data timeouts. If a connect does not complete within the 'connect' timeout, or the connection is idle for the 'idle timeout', the connection is discarded.

On a server-side connection, this counts as a server failure; three successive failures mark the node as dead.

The timeouts and the 'three failures' count can all be tuned if necessary, via the Connection Management settings in the Virtual Server and Pool configuration, and the Global Settings page.

Deadlocks

A request or response rule can cause the connection to block. For example, if Traffic Manager runs a request rule that calls request.read(), the connection will block until the required data has been read from the client.
During this period, Traffic Manager stops relaying data from the server.

This may stall a connection, or even trigger a timeout; be very careful if you use read (or write) TrafficScript functions with a bi-directional protocol.

Examples

The generic protocols typically function 'out of the box' - all of the other protocols implemented by Traffic Manager are layered on top of these, adding protocol-specific handlers and optimizations.

Inspecting and managing generic protocols is challenging. The following articles describe such advanced uses of these protocols:

Virtual Hosting FTP services
Managing WebSockets traffic with Traffic Manager
Building a load-balancing MySQL proxy with TrafficScript
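To illustrate inspecting a generic client-first protocol before load balancing, here is a minimal sketch of a request rule. The 'STATUS' command and the pool names are hypothetical, standing in for whatever command verbs and pools your own protocol and configuration define.

```trafficscript
# Request rule on a client-first virtual server: read up to 1 KB of the
# client's initial data and route status queries to a dedicated pool
$data = request.get( 1024 );
if( string.startswith( $data, "STATUS" ) ) {
   pool.use( "status-pool" );
} else {
   pool.use( "app-pool" );
}
```

Because this rule calls request.get(), it blocks until the client has sent data, so it is only appropriate for a protocol where the client genuinely speaks first.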
View full article