Pulse Secure vADC

The Pulse Secure Virtual Traffic Manager includes a useful benchmarking tool named 'zeusbench' that we use for internal performance testing and as a load-generation tool for training courses. The first incarnation of ZeusBench was donated to the Apache project a long time ago and is now well known as ApacheBench. The new incarnation starts from a completely fresh codebase; it has a new 'rate' mode of operation, is less resource-intensive and is often more accurate in its reporting.

You'll find ZeusBench in the admin/bin directory of your Traffic Manager installation ($ZEUSHOME). Run zeusbench with the --help option to display the comprehensive help documentation:

$ $ZEUSHOME/admin/bin/zeusbench --help

Using zeusbench

zeusbench generates HTTP load by sending requests to the desired server. It can run in two different modes:

Concurrency

When run with the -c N option, zeusbench simulates N concurrent HTTP clients. Each client makes a request, reads the response and then repeats, as quickly as possible.

Concurrency mode is very stable and will quickly push a web server to its maximum capacity if sufficient concurrent connections are used. It's very difficult to overpower a web server unless you select far too many concurrent connections, so it's a good way to get stable, repeatable transactions-per-second results. This makes it suitable for experimenting with different performance tunings, or looking at the effect of adding TrafficScript rules.

Rate

When run with the -r N option, zeusbench sends new HTTP requests at the specified rate (requests per second) and reads responses when they are returned.

Rate mode is suitable for testing whether a web server can cope with a desired transaction rate, but it is very easy to overwhelm a server with requests. It's great for testing how a service copes with a flash crowd - try running one zeusbench instance for background traffic, then fire off a second instance to simulate a short flash-crowd effect. Rate-based tests are very variable; it's difficult to get repeatable results when the server is overloaded, and it's difficult to determine the maximum capacity of the server (use a concurrency test for this).

Comparing concurrency and rate

The charts below illustrate two zeusbench tests against the same service; one where the concurrency is varied, and one where the rate is varied:

[Chart: Measuring transactions-per-second (left-hand axis, blue) and response times (right-hand axis, red) in concurrency and rate-based tests]

The concurrency-based tests apply load in a stable manner, so they are effective at measuring the maximum achievable transactions-per-second. However, they can create a backlog of requests at high concurrencies, so the response time will grow accordingly.

The rate-based tests are less prone to creating a backlog of requests so long as the request rate is lower than the maximum transactions-per-second. For lower request rates, they give a good estimate of the best achievable response time, but they quickly overload the service when the request rate nears or exceeds the maximum sustainable transaction rate.

Controlling the tests

The -n N option instructs zeusbench to run until it has sent N requests, then stop and report the results. The -t N option instructs zeusbench to run for N seconds, then stop and report the results. The -f option instructs zeusbench to run forever (or until you hit Ctrl-C, at which point zeusbench stops and reports the results).
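For example, the two modes might be invoked as follows (an illustrative sketch using only the options described above - the target URL and the numbers are placeholders, and you should confirm the exact argument syntax against the --help output for your version):

$ $ZEUSHOME/admin/bin/zeusbench -c 50 -n 100000 http://server.example.com/index.html
$ $ZEUSHOME/admin/bin/zeusbench -r 2000 -t 60 http://server.example.com/index.html

The first command simulates 50 concurrent clients and stops after 100,000 requests; the second sends 2,000 requests per second for 60 seconds.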
The -v option instructs zeusbench to report each second on the progress of the tests, including the number of connections started and completed, and the number of timeouts or unexpected error responses.

Keepalives

Keepalives can make a very large difference to the nature of a test. By default, zeusbench opens a TCP connection and uses it for one request and response before closing it. If zeusbench is instructed to use keepalives (with the -k flag), it will reuse TCP connections indefinitely; the -k N option specifies the number of times a connection is reused before it is closed.

Other zeusbench options

We've just scratched the surface of the options that zeusbench offers. zeusbench gives you great control over timeouts, the HTTP requests that it issues, the ability to ramp up concurrency or rate values over time, the SSL parameters and more. Run:

$ $ZEUSHOME/admin/bin/zeusbench --help

for the detailed help documentation.

Running benchmarks

When running a benchmark, it is always wise to sanity-check the results you obtain by comparing several different measurements. For example, you can compare the results reported by zeusbench with the connection counts charted in the Traffic Manager Activity Monitor. Some discrepancies are inevitable, due to differences in per-second counting, or differences in data transfer and network traffic counts.

Benchmarking requires a careful, scientific approach, and you need to understand what load you are generating and what you are measuring before drawing any detailed conclusions. Furthermore, the performance you measure over a low-latency local network may be very different from the performance you can achieve when you put a service online at the edge of a high-latency, lossy WAN.

For more information, check out the document Tuning Virtual Traffic Manager. The article Dynamic Rate Shaping of Slow Applications gives a worked example of the use of zeusbench to determine how to rate-shape traffic to a slow application.

Also note that many Traffic Manager licenses impose a maximum bandwidth limit (development licenses are typically limited to 1 Mbit/s); this will obviously impede any benchmarking attempts. You can commence a managed evaluation and obtain a license key that gives unrestricted performance if required.
View full article
A selection of SteelApp security articles, for SteelApp Traffic Manager and SteelApp Web App Firewall, listed from the most recent to the oldest. Let me know if you have other articles to add to this list.

Poodle 2.0:
SteelApp not vulnerable to POODLE 2.0 (CVE 2014-8730)
CVE-2014-8730

Poodle:
Disabling SSL v3.0 for SteelApp
Re: Assuming TLS, what ciphers does SteelApp 9.8 support?
CVE-2014-3566

ShellShock/Bash:
TrafficScript rule to protect against "Shellshock" bash vulnerability (CVE-2014-6271)
CVE-2014-6271

Heartbleed:
Heartbleed: Using TrafficScript to detect TLS heartbeat records
Would Stingray automatically protect servers from Heartbleed?
CVE-2014-0160

Whitepapers:
Global-scale Web Application Security for DOD
Why Web Application Firewalls Matter

Miscellaneous Articles:
The "Contact Us" attack against mail servers
Protecting against Java and PHP floating point bugs
Managing DDoS attacks with Stingray Traffic Manager
Enhanced anti-DDoS using TrafficScript, Event Handlers and iptables
How to stop 'login abuse', using TrafficScript
Bind9 Exploit in the Wild...
Protecting against the range header denial-of-service in Apache HTTPD
Checking IP addresses against a DNS blacklist with Stingray Traffic Manager
SteelApp Traffic Manager SAML 2.0 Protocol Validation with TrafficScript
View full article
Stingray Traffic Manager version 9.5 includes some important enhancements to the RESTful API. These enhancements include the following:

A new API version

The API version has moved to 2.0. Versions 1.0 and 1.1 are still available but have been deprecated.

Statistics and Version Information

A new resource, "status", is available that contains the child resources "information" and "statistics", under the host name. Data can only be retrieved for these resources; no updates are allowed. The URL for "information" is:

http(s)://<host>:<port>/api/tm/2.0/status/<host>/information

and the URL for "statistics" is:

http(s)://<host>:<port>/api/tm/2.0/status/<host>/statistics

<host> can also be "local_tm", which is an alias for the Traffic Manager processing the REST request. For this release, only statistics for the local Traffic Manager are available.

The "information" resource contains the version of the Stingray Traffic Manager, so for example the request:

http(s)://<host>:<port>/api/tm/2.0/status/local_tm/information

for version 9.5 would return the key "tm_version" with the value "9.5".

The "statistics" resource contains the Stingray statistics that are also available with SNMP or the SOAP API. The following child resources are available under "statistics":

actions, bandwidth, cache, cloud_api_credentials, connection_rate_limit, events, glb_services, globals, listen_ips, locations, network_interface, nodes, per_location_service, per_node_slm, pools, rule_authenticators, rules, service_level_monitors, service_protection, ssl_ocsp_stapling, traffic_ips, virtual_servers

The statistics that are available vary by resource.

Example: to get the statistics for the pool "demo" on the Stingray Traffic Manager "stingray.example.com":

https://stingray.example.com:9070/api/tm/2.0/status/local_tm/statistics/pools/demo

{
  "statistics": {
    "algorithm": "roundrobin",
    "bytes_in": 20476976,
    "bytes_out": 53323,
    "conns_queued": 0,
    "disabled": 0,
    "draining": 0,
    "max_queue_time": 0,
    "mean_queue_time": 0,
    "min_queue_time": 0,
    "nodes": 1,
    "persistence": "none",
    "queue_timeouts": 0,
    "session_migrated": 0,
    "state": "active",
    "total_conn": 772
  }
}

Resource Name Changes

Some resources have been renamed to be clearer:

actionprogs -> action_programs
auth -> user_authenticators
authenticators -> rule_authenticators
cloudcredentials -> cloud_api_credentials
events -> event_types
extra -> extra_files
flipper -> traffic_ip_groups
groups -> user_groups
scripts -> monitor_scripts
services -> glb_services
settings.cfg -> global_settings
slm -> service_level_monitors
vservers -> virtual_servers
zxtms -> traffic_managers

New Resource

One new resource, "custom", has been added to support the new Custom Configuration Sets feature. This allows arbitrary name:value configuration pairs to be stored in the Traffic Manager configuration system. As part of the Traffic Manager configuration, this data is replicated across a cluster and is accessible using the REST API, SOAP API and ZCLI. All data structures supported by the Stingray REST API are also supported for Custom Configuration Sets. Please see the REST API Guide for more information.
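As a quick sanity check from the command line, the statistics example above can be fetched with curl (a sketch: it assumes the REST API is enabled on its default port 9070 and that you authenticate as an admin-level user with HTTP basic authentication; -k skips certificate verification for the self-signed admin certificate):

$ curl -k -u admin:password https://stingray.example.com:9070/api/tm/2.0/status/local_tm/statistics/pools/demo

The response should be the JSON "statistics" document shown in the example above.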
View full article
Multi-hosted IP addresses allow the same traffic IP to be hosted on multiple traffic managers at the same time. This can provide benefits to traffic distribution and reduce the number of IPs needed to run a service.

For a background on Stingray's fault tolerance and clustering approach, please refer to the document Feature Brief: Clustering and Fault Tolerance in Stingray Traffic Manager.

How do Multi-Hosted IP addresses work?

Multi-hosted IPs make use of multicast MAC addresses. When Stingray observes an ARP request for the target traffic IP address, it responds with a multicast MAC address that is calculated from the value of the multicast IP address used for clustered communications.

The upstream switch will relay packets destined for the traffic IP address to that MAC address; because it is a multicast MAC, the switch will learn which nodes are using that MAC address and forward the traffic to each of them. (If your switch has problems learning the location of the MAC address, check out this solution: Why can't users connect to my multi-hosted IPs?)

The zcluster kernel module (Stingray Kernel Modules for Linux Software) implements a filter in the host's TCP stack that partitions the traffic to that traffic IP address and discards all but the host's share of the traffic. The method used to determine the shares that each host takes is stable and guarantees statistically-perfect distribution of traffic. It handles failover gracefully, redistributing the failed traffic share evenly between the remaining traffic managers, while ensuring that the remaining traffic managers don't need to rebalance their own shares to preserve the statistically-perfect distribution, and it does so in a stable fashion that does not require any per-connection synchronization or inter-cluster negotiation.

In more detail

Suppose you have 4 traffic managers in a cluster, all listening on the same user-specified IP address. These traffic managers all ARP the same multicast MAC address, so the switch forwards all incoming traffic for that IP to all traffic managers.

The zcluster kernel module will:

Take the source IP, source port, destination IP and destination port (sufficient to identify a TCP or UDP session) and hash them together
Give its local traffic manager a fixed quarter of the hash space and silently discard the remaining three-quarters

This means that every connection is handled by just one traffic manager and connections are evenly distributed (on a statistical basis) between traffic managers.

If one or more traffic managers fail, then the distribution method makes three guarantees:

The hash space is balanced perfectly equally between the running traffic managers
When a traffic manager fails or recovers, the only part of the hash space that is redistributed is the portion relating to that traffic manager (i.e. it's not necessary to move any of the hash space between running traffic managers to rebalance)
The method only depends on the instantaneous state of the cluster

This means that the only 'synchronization' is the shared view of the health of the cluster (i.e. for the method to be stable, each running traffic manager has the same view as to which traffic managers are running and which have failed). Stingray's health broadcasts ensure that this is the case (apart from possible momentary differences when one or more traffic managers fail or recover).

A worked example

Suppose you have four traffic managers, A, B, C and D. They each get 1/4 of the hash space.
Traffic manager D fails: its 1/4 is shared three ways between A, B and C (so the hash space is still balanced).
Traffic manager C fails: its original 1/4, and the 1/3 * 1/4 it inherited from D, are both split half and half between A and B.
Traffic manager D recovers: traffic managers A and B stop listening for 1/3 of their traffic (i.e. all of the traffic they inherited from D and 1/3rd of the traffic they inherited from C), and D starts listening for the portions that A and B released.

Observations

Multi-hosted traffic IP addresses are useful when you want to ensure even distribution of traffic across a cluster (for example, to spread SSL processing load) or when you have a very limited number of public IP addresses at your disposal.

They do replicate ingress traffic across multiple ports in the internal network. The implication of this is that each Stingray Traffic Manager needs the same ingress bandwidth as the upstream switch; this is rarely a significant problem.

There is a built-in limit of 8: the maximum number of traffic managers that can be present in a multi-hosted traffic IP group.

Using multi-hosted traffic IP groups with IP transparency to your back-end servers is challenging. You need to configure appropriate source-based routing on each back-end server so that it directs the egress traffic to the correct node in the Stingray cluster.
View full article
Why do you need Session Persistence?

Consider the process of conducting a transaction in your application - perhaps your user is uploading and annotating an image, or concluding a shopping cart transaction.

This process may entail a number of individual operations - for example, several HTTP POSTs to upload and annotate the image - and you may want to ensure that these network transactions are routed to the same back-end server. Your application design may mandate this (because intermediate state is not shared between nodes in a cluster), or it may just be highly desirable (for performance and cache-hit reasons).

Stingray's load balancing (Feature Brief: Load Balancing in Stingray Traffic Manager) will work against you. Stingray will process each network operation independently, and it's very likely that the network transactions will be routed to different machines in your cluster. In this case, you need to be firm with Stingray and require that all transactions in the same 'session' are routed to the same machine.

Enter 'Session Persistence' - the means to override the load balancing decision and pin 'sessions' to the same server machine.

Session Persistence Methods

Stingray Traffic Manager employs a range of session persistence methods, each with a different way to identify a session. You should generally select the session persistence method that most accurately identifies user sessions for the application you are load balancing.

Persistence Type | Session identifier | Session data store
IP-based persistence | Source IP address | Internal Stingray session cache
Universal session persistence | TrafficScript-generated key | Internal Stingray session cache
Named Node session persistence | TrafficScript specifies node | None
Transparent session affinity | HTTP browser session | Client-side cookie (set by Stingray)
Monitor application cookies | Named application cookie | Client-side cookie (set by Stingray)
J2EE session persistence | J2EE session identifier | Internal Stingray session cache
ASP and ASP.NET session persistence | ASP/ASP.NET session identifiers | Internal Stingray session cache
X-Zeus-Backend cookies | Provided by backend node | Client-side cookie (set by backend)
SSL Session ID persistence | SSL Session ID | Internal Stingray session cache

For a detailed description of the various session persistence methods, please refer to the Stingray User Manual (Stingray Product Documentation).

Where is session data stored?

Client-side cookies

Stingray Traffic Manager will issue a Set-Cookie header to store the name of the desired node in a client-side cookie. The cookie identifier and the name of the node are both hashed to prevent tampering or information leakage.

In the case of 'Monitor Application Cookies', the session cookie is given the same expiry time as the cookie it is monitoring.
In the case of 'Transparent Session Affinity', the session cookie is not given an expiry time; it will last for the duration of the browser session. See also: What's the X-Mapping- cookie for, and does it constitute a security or privacy risk?

Internal Stingray session cache

Session data is stored in Stingray Traffic Manager in a fixed-size cache, and replicated across the cluster according to the 'State Synchronization Settings' (Global Settings).

All session persistence classes of the same type share the same cache space. The session persistence caches function in a 'Least Recently Used' fashion: each time an entry is accessed, its timestamp is updated.
When an entry must be removed to make room for a new session, the entry with the oldest timestamp is dropped.

Controlling Session Persistence

Session persistence ties the requests from one client (i.e. in one 'session') to the same back-end server node. It defeats the intelligence of the load-balancing algorithm, which tries to select the fastest, most available node for each request.

In a web session, often it's only necessary to tie some requests to the same server node. For example, you may want to tie requests that begin "/servlet" to a server node, but let Stingray be free to load-balance all other requests (images, static content) as appropriate.

Session persistence may be a property of a pool - all requests processed by that pool are assigned to a session and routed accordingly - but if you want more control you can control it using TrafficScript.

Configure a session persistence class with the desired configuration for your /servlet requests, then use the following request rule:

if( string.startsWith( http.getPath(), "/servlet" ) ) {
   connection.setPersistenceClass( "servlet persistence" );
}

Missing Session Persistence entries

If a client connects and no session persistence entry exists in the internal table, then the connection will be handled as if it were a new session. Stingray will apply load-balancing to select the most appropriate node, and then record the selection in the session table. The record will be broadcast to other Stingray machines in the cluster.

Failed Session Persistence attempts

If the session data (client cookie or internal table) references a node that is not available (it has failed or has been removed), then the default behavior is to delete the session record and load-balance to a working node.

This behavior may be modified on a per-persistence-class basis, to send a 'sorry' message or just drop the connection:

[Screenshot: Configure how to respond and how to manage the session if a target node cannot be reached]

Draining and disabled nodes

If a node is marked as draining, then existing sessions will be directed to that node, but no new sessions will be established. Once the existing sessions have completed, it is safe to remove the node without interrupting connections or sessions.

Stingray provides a counter indicating when the node was last used. If you wish to time sessions out after 10 minutes of inactivity, then you can remove the node once the counter passes 10 minutes:

[Screenshot: The 'Connection Draining' report indicates how long ago the last session was active on a node]

If a node is marked as disabled, no new connections are sent to it. Existing connections will continue until they are closed. In addition, Stingray stops running health monitors against the disabled node. Disabling a node is a convenient way to take it out of service temporarily (for example, to apply a software update) without removing it completely from the configuration.

Monitoring and Debugging Session Persistence

SNMP and Activity Monitor counters may be used to monitor the behavior of the session cache. You will observe that the cache gradually fills up as sessions are added, and then remains full. The max age of cache entries will likely follow a fine saw-tooth pattern as the oldest entry gradually ages and then is either dropped or refreshed, although this is only visible if new entries are added infrequently.

In the first 4 minutes, traffic is steady at 300 new sessions per minute and the session cache fills.
Initially, the max age grows steadily, but when the cache fills (after 2 minutes) the max age remains fairly stable as older entries are dropped. In the last minute, no new entries were added, so the cache remains full and the max age increases steadily.

The 'Current Connections' table will display the node that was selected for each transaction that the traffic manager processed:

[Screenshot: Requests have been evenly distributed between nodes 201, 202 and 203 because no session persistence is active]

Transaction logging can give additional information. Access logs support webserver-style macros, and the following macros are useful:

Macro | Description
%F | The favored node; this is a hint to the load-balancing algorithm to optimize node cache usage
%N | The required node (may be blank): defined by a session persistence method
%n | The actual node used by the connection; may differ from %F if the favored node is overloaded, and differ from %N if the required node has failed

Finally, TrafficScript can be used to annotate pages with the name of the node they were served from:

if( http.getResponseHeader( "Content-Type" ) != "text/html" ) break;

$body = http.getResponseBody();
$html = '<div style="position:absolute;top:0;left:0;border:1px solid black;background:white">'.
   'Served by ' . connection.getNode() . '</div>';
$body = string.regexsub( $body, "(<body[^>]*>)", "$1\n".$html."\n", "i" );
http.setResponseBody( $body );

Final Observations

Like caching, session persistence breaks the simple model of load-balancing each transaction to the least-loaded server. If used without a full understanding of the consequences, it can provoke strange and unexpected behavior.

The built-in session persistence methods in Stingray Traffic Manager are suitable for a wide range of applications, but it's always possible to construct situations with fragile applications or small numbers of clients where session persistence is not the right solution for the problem at hand.

Session persistence should be regarded as a performance optimization, ensuring that users are directed to a node that has their session data ready and fresh in a local cache. No application should absolutely depend upon session persistence, because to do so would introduce a single point of failure for every user's session.

Pragmatically, it is not always possible to achieve this. Stingray's TrafficScript language provides the means to fine-tune session persistence to accurately recognize individual sessions, apply session persistence judiciously to the transactions that require it, and implement timeouts if required.

Read more

Session Persistence - implementation and timeouts
HowTo: Controlling Session Persistence
HowTo: Delete Session Persistence records
View full article
Stingray Traffic Manager operates as a full network proxy. Incoming TCP and UDP traffic is terminated on the traffic manager, and new TCP or UDP sessions are initiated from the traffic manager to the selected target server.

This approach has the benefit that Stingray can apply a range of TCP optimizations (such as independent window scaling) and higher-level optimizations (HTTP connection reuse), and it's an architectural necessity for any complex content inspection and rewriting (including compression, SSL decryption and all manner of TrafficScript-based solutions).

However, the approach has the side effect that the target servers observe the connection as originating from the Stingray device, not from the remote client. There are several situations where this may be a problem:

Security and access control measures that need to observe the source IP address of a connection will not work
Access logs will identify the Stingray IP address as the source of the connection, and compliance requirements may mandate that the true origin is recorded

There are a number of steps you can take to address problems that arise from this situation.

Offload the task that requires the IP address on to the Stingray device

In many cases, it's possible to move the task that requires access to the IP address from the back-end servers and deploy it on the traffic manager instead:

Access logging: Stingray provides full webserver-style access logging. Another advantage of logging transactions on Stingray rather than the webserver cluster is that you don't need to worry about merging log files from multiple servers.
Security: You can implement a range of IP-based security measures on Stingray, such as Checking IP addresses against a DNS blacklist with Stingray Traffic Manager.

Modify the behavior of the Server Application

When Stingray manages an HTTP connection, it adds an X-Cluster-Client-Ip header to the request that identifies the true source address. A web-based application that wishes to know the source address of the connection can inspect the value of this header instead.

For example, if you are logging transactions using the common log format in a server such as Apache:

LogFormat "%h %l %u %t \"%r\" %>s %b"

... you can replace the %h macro with a macro that records the value of the custom request header that Stingray inserts:

LogFormat "%{X-Cluster-Client-Ip}i %l %u %t \"%r\" %>s %b"

If you are using Apache, you should consider using the mod_remoteip - Apache HTTP Server module (thanks to Julian Midgley for the following). Enable it as follows:

LoadModule remoteip_module modules/mod_remoteip.so
RemoteIPHeader X-Cluster-Client-Ip
RemoteIPTrustedProxy 1.2.3.4

... where 1.2.3.4 is the trusted source of the traffic (i.e. the IP address of the Stingray device). If you want to trust the header even when the proxy has a private IP (e.g. 192.168.0.1), then use RemoteIPInternalProxy instead of RemoteIPTrustedProxy.

Note that Apache will continue to log the Stingray IP address when using %h in the log file; replace this with %a to get the client IP address.

If you're using iPlanet, SunONE or a related webserver, you can look at this alternative: Preserving the Client IP address to iPlanet/SunONE/Sun Java System Web Server servers and apps.

Use Stingray's IP Transparency Feature

Stingray's IP Transparency feature rewrites the source IP address in the server-side connection so that the TCP and UDP traffic appears to originate from the remote client.
It is a very effective solution, but it requires careful network configuration and it incurs an additional workload on the Stingray host because it needs to maintain a large NAT table for the rewritten connections.

On the Stingray Virtual Appliance, the ztrans kernel module that provides IP Transparency is pre-installed; if you are using the Stingray software on a Linux host, you will need to install the Stingray Kernel Modules for Linux Software. The feature is not available for Solaris.

Once installed, you enable IP Transparency on a per-pool basis. Please refer to the Stingray Product Documentation for more information.

Read more

HowTo: Spoof Source IP Addresses with IP Transparency
Transparent Load Balancing with Stingray Traffic Manager
Preserving the Client IP address to iPlanet/SunONE/Sun Java System Web Server servers and apps
View full article
Stingray's autoscaling capability is intended to help you dynamically control the resources that a service uses, so that you can deliver services to a desired SLA while minimizing the cost. The intention of this feature is that you can:

Define the desired SLA for a service (based on response time of nodes in a pool)
Define the minimum number of nodes needed to deliver the service (e.g. 2 nodes for fault-tolerance reasons)
Define the maximum number of resources (acting as a brake - this limits how much resource Stingray will deploy in the event of a denial-of-service attack, traffic surge, application fault, etc.)

You also need to configure Stingray to deploy instances of the nodes, typically from a template in Amazon, Rackspace or VMware.

You can then leave Stingray to provision the nodes, and to dynamically scale the number of nodes up or down to minimize the cost (number of nodes) while preserving the SLA.

Details

Autoscaling is a property of a pool.

A pool contains a set of 'nodes' - back-end servers that provide a service on an IP address and port. All of the nodes in a pool provide the same service. Autoscaling monitors the service level (i.e. response time) delivered by a pool. If the response time falls outside the desired SLA, then autoscaling will add or remove nodes from the pool to increase or reduce resource in order to meet the SLA at the lowest cost.

The feature consists of a monitoring and decision engine, and a collection of driver scripts that interface with the relevant platform.

The decision engine

The decision engine monitors the response time from the pool. Configure it with the desired SLA, and the scale-up/scale-down thresholds.

Example: my SLA is 1000 ms. I want to scale up (add nodes) if less than 40% of transactions are completed within this SLA, and scale down (remove nodes) if more than 95% of transactions are completed within the SLA. To avoid flip-flopping, I want to wait for 20 seconds before initiating the change (in case the problem is transient and goes away), and I want to wait 180 seconds before considering another change.

Other parameters control the minimum and maximum number of nodes in a pool, and how we access the service on new nodes.

The driver

Stingray Traffic Manager includes drivers for Amazon EC2, Rackspace and VMware vSphere. You will need to configure a set of 'cloud credentials' (authentication details for the management API of the virtual platform). You'll also need to specify the details of the virtual machine template that instantiates the service in the pool.

The decision engine initiates a scale-up or scale-down action by invoking the driver with the configured credentials and parameters. The driver instructs the virtualization layer to deploy or terminate a virtual machine. Once the action is complete, the driver returns the new list of nodes in the pool and the decision engine updates the pool configuration.

Notes:

You can manually provision nodes by editing the max-nodes and min-nodes settings in the pool. If Stingray notices that there is a mismatch between the max/min and the actual number of nodes active, then it will initiate a series of scale-up or scale-down actions.

Creating a custom driver for a new platform

You can create a custom driver for any platform that is capable of deploying new service instances on demand.
Creating a new driver involves:

Create the driver script, conforming to the API below
Upload the script to the Extra Files -> Miscellaneous store using the UI (or copy it to $ZEUSHOME/zxtm/conf/extra)
Create a Credentials object that contains the uids, passwords etc. necessary to talk to the cloud platform
Configure the pool to autoscale, and provide the details of the virtual machine that should be provisioned

Specification of the driver scripts

The settings in the UI are interpreted by the Cloud API script. Stingray will invoke this script and pass the details in. Use the $ZEUSHOME/zxtm/bin/rackspace.pl or vsphere-client scripts as examples (the $ZEUSHOME/zxtm/bin/awstool script is multi-purpose and is also used by Stingray's handling of EC2 EIPs for failover).

Arguments:

The scripts should support several actions - status, createnode, destroynode, listimageids and listsizeids. Run the script with --help to see the expected arguments:

root@stingray-1:/opt/zeus/zxtm/bin# ./rackspace.pl --help
Usage: ./rackspace.pl [--help] action options

action: [status|createnode|destroynode|listimageids|listsizeids]
common options: --verbose=1 --cloudcreds=name
other valid options depend on the chosen action:

status:      --deltasince=tstamp  Only report changes since timestamp tstamp (unix time)

createnode:  --name=newname       Associate name newname (must be unique) with the new instance
             --imageid=i_id       Create an instance of image uniquely identified by i_id
             --sizeid=s_id        Create an instance with size uniquely identified by s_id

destroynode: --id=oldid           destroy instance uniquely identified by oldid

Note: the '--deltasince' option isn't supported by many cloud APIs, but has been added for Rackspace. If the cloud API in question supports reporting only changes since a given date/time, it should be implemented.

The value of the --name option will be chosen by the autoscaler on the basis of 'autoscale!name': a different integer will be appended to the name for each node.

The script should return a JSON-formatted response for each action:

Status action

Example response:

{"NodeStatusResponse": {
  "version":1,
  "code":200,
  "nodes":[
     {"sizeid":1,
      "status":"active",
      "name":"TrafficManager",
      "public_ip":"174.143.156.25",
      "created":1274688603,
      "uniq_id":98549,
      "private_ip":"10.177.4.216",
      "imageid":8,
      "complete":100},
     {"sizeid":1,
      "status":"active",
      "name":"webserver0",
      "public_ip":"174.143.153.20",
      "created":1274688003,
      "uniq_id":100768,
      "private_ip":"10.177.1.212",
      "imageid":"apache_xxl",
      "complete":100}
  ]
}}

version and code must be JSON integers in decimal notation; sizeid, uniq_id and imageid can be decimal integers or strings.

name must be a string. Some clouds do not give every instance a name; in this case it should be left out or set to the empty string. The autoscaler process will then infer the relevance of a node for a pool on the basis of the imageid (which must match 'autoscale!imageid' in the pool's configuration).

created is the unix timestamp of when the node was created and hence must be a decimal integer. When the autoscaler destroys nodes, it will try to destroy the oldest node first. Some clouds do not provide this information; in this case it should be set to zero.

complete must be a decimal integer indicating the percentage of progress when a node is created.

A response code of 304 to a 'status' request with a '--deltasince' option is interpreted as 'no change from last status request'.
CreateNode action

The response is a JSON-formatted object as follows:

{"CreateNodeResponse": {
  "version":1,
  "code":202,
  "nodes":[
    {"sizeid":1,
     "status":"pending",
     "name":"webserver9",
     "public_ip":"173.203.222.113",
     "created":0,
     "uniq_id":230593,
     "private_ip":"10.177.91.9",
     "imageid":41,
     "complete":0}
  ]
}}

The 202 corresponds to the HTTP response code 'Accepted'.

DestroyNode action

The response is a JSON-formatted object as follows:

{"DestroyNodeResponse": {
  "version":1,
  "code":202,
  "nodes":[
    {"created":0,
     "sizeid":"unknown",
     "uniq_id":230593,
     "status":"destroyed",
     "name":"unknown",
     "imageid":"unknown",
     "complete":100}
  ]
}}

Error conditions

The autoscaling driver script should communicate error conditions using response codes >= 400 and/or by writing output to stderr. When the autoscaler detects an error from an API script, it disables autoscaling for all pools using the Cloud Credentials in question until an API call using those Cloud Credentials is successful.
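When developing a custom driver, it can help to exercise it by hand with the same kind of arguments the autoscaler passes, and to check that the JSON it prints matches the formats above. An illustrative session (the script name, credentials name, node name, image id and size id are placeholders; the real values come from your Cloud Credentials object and platform):

$ ./mydriver.pl status --cloudcreds=mycreds --verbose=1
$ ./mydriver.pl createnode --cloudcreds=mycreds --name=webserver9 --imageid=apache_xxl --sizeid=1
$ ./mydriver.pl destroynode --cloudcreds=mycreds --id=230593

Each command should print a NodeStatusResponse, CreateNodeResponse or DestroyNodeResponse document respectively, as described above.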
View full article
What is Policy Based Routing?

Policy Based Routing (PBR) is simply the ability to choose a different routing policy based on various criteria, such as the last hop used or the local IP address of the connection. As you may have guessed, PBR is only necessary where your Stingray Traffic Manager is multi-homed (i.e. has multiple default routes) and asymmetric routing is either not possible or not desired.

There are really only two types of multi-homing which we commonly deal with in Stingray deployments. I am going to refer to them as "Multiple ISP" and "Multiple Link".

Multiple ISP

This is the simpler scenario, and it is seen when a Stingray is deployed in an infrastructure with two or more independent ISPs. The ISPs all provide different network ranges, and Stingray Traffic IP Groups are the end points for the addresses in those ranges. Stingray must choose the default gateway based on the local Traffic IP address of the connection.

Multiple Link

This is slightly more complicated, because traffic destined for Stingray's Traffic IP can come in via a number of different gateways. Stingray must ensure that return traffic is sent out of the same gateway it arrived through. This is also known as "Auto-Last-Hop", and is achieved by keeping track of the Layer 2 MAC address associated with the connection.

Setting up Policy Based Routing on Stingray

This guide will show you how to set up a process within Stingray Traffic Manager (STM) such that a PBR policy is applied during software start-up. The advantage of configuring Stingray this way is that there are no changes to the underlying OS configuration, and as such it is fully compatible with the Virtual Appliance as well as the software (Linux) version. The steps to set up the PBR are as follows:

Configure gateways.conf for your environment
Upload the gateways.conf to Catalogs -> Extra -> Misc
Create a new action called "DynamicPBR" in System -> Alerting -> Actions; this should be a program action, and execute the dynamic-pbr.sh script
Create a new event called "Dynamic PBR" in System -> Alerting -> Events; you want to hook the software started event here

Step 1: Upload the dynamic-pbr.sh script

Navigate to Catalogs -> Extra Files -> Action Programs and upload the dynamic-pbr.sh script found attached to this article.

Step 2: Configure the gateways.conf for your environment

When the dynamic-pbr.sh script is executed, it will attempt to load and process a file called gateways.conf from miscellaneous files. You will need to create that configuration file.

The configuration is a simple text file with a number of fields separated by white space. The first column should be either MAC (to indicate a "Multiple Link" config) or SRC (to indicate "Multiple ISP").

If you are using the MAC method, then you only need to supply the IP address of each of your gateways and their Layer 2 MAC address. Each MAC line should read: "MAC <Gateway IP> <Gateway MAC>".

If you are using the SRC method, then you should include the local source IP (this can be an individual Traffic IP or a subnet) and the gateway IP. You should also include information on the local network if you need the Stingray to be able to access local machines other than the gateway; do this using two additional/optional columns: local subnet and device. Each SRC line should read: "SRC <Local IP> <Gateway IP> <Local subnet> <local device>".
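For illustration, a small gateways.conf combining both styles might look like this (a sketch only - the IP addresses, MAC addresses, subnet and device name are placeholders, and you should check the exact subnet notation the dynamic-pbr.sh script expects):

MAC 192.0.2.1 00:50:56:00:00:01
MAC 192.0.2.2 00:50:56:00:00:02
SRC 198.51.100.10 198.51.100.1 198.51.100.0/24 eth1

The first two lines cover a "Multiple Link" deployment with two upstream gateways; the last line covers a "Multiple ISP" deployment where connections using the local Traffic IP 198.51.100.10 should be routed via the gateway 198.51.100.1.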
Step 3: Upload the gateways.conf

Once you have configured the gateways.conf for your environment, you should upload it to Catalogs -> Extra Files -> Miscellaneous.

Step 4: Create the Dynamic PBR action

Now that we have the script and configuration file uploaded to Stingray, the next step is to configure the alerting system to execute them at software start-up. First we must create a new program action under System -> Alerting -> Manage Actions.

Create a new action called "Dynamic PBR" of type Program. In the edit action screen, you should then be able to select dynamic-pbr.sh from the drop-down list.

Step 5: Create the Dynamic PBR event

Now that we have an action, we need to create an event which hooks the "software is running" event. Navigate to System -> Alerting -> Manage Event Types and create a new event type called "Dynamic PBR".

In the event list, select the software running event under General, Information Messages.

Step 6: Link the event to the action

Navigate back to the System -> Alerting page and link our new "Dynamic PBR" event type to the "Dynamic PBR" action.

Finished

Now every time the Stingray software is started, the configuration from the gateways.conf will be applied.

How do I check the policy?

If you want to check what policy has been applied to the OS, you can do so on the command line. Either open the console or SSH into the Stingray Traffic Manager machine. The policy is applied by setting up a rule and a matching routing table for each of the lines in the gateways.conf configuration file. You can check the routing policy by using the iproute2 utility.

To check the routing rules, run: "ip rule list".

There are three default rules/tables in Linux: rule 0 looks up the "local" table, rule 32766 looks up "main", and rule 32767 looks up "default". The rules are executed in order. The local rule (0) is maintained by the kernel, so you shouldn't touch it. The main table (lookup rule 32766) and the default table (lookup rule 32767) go last. The main table holds the main routing table of your machine and is the one returned by "netstat -rn". The default table is usually empty. All other rules in the list are custom, and you should see a rule entry for each of the lines in your gateway configuration file.

So where are the routes? The rules are processed in order, and each lookup points to a table. You can have up to 255 tables in Linux. The "main" table is actually table 254. To see the routes in a table, use the "ip route list" command; executing "ip route list table main" and "ip route list table 254" should return the same routing information.

You will note that the rules added by Stingray are referenced by their number only, so to look at one of your tables you would use its number, for example "ip route list table 10". Enjoy!

Updates

20150317: Modified the script to parse configuration files which use Windows-format line endings.
View full article
Stingray Traffic Manager operates as a layer 7 proxy. It receives traffic on nominated IP addresses and ports and reads the client request. After processing the request internally, Stingray selects a candidate 'node' (back-end server). It writes the request using a new connection (or an existing keepalive connection) to that server and reads the server response. Stingray processes the response, then writes it back to the client.

Stingray performs a wide range of traffic inspection, manipulation and routing tasks, from SSL decryption and service protection, through load balancing and session persistence, to content compression and bandwidth management. This article explains how each task fits within the architecture of Stingray.

Virtual Servers and Pools

The key configuration objects in Stingray are the Virtual Server and the Pool:

The Virtual Server manages the connections between the remote clients and Stingray. It listens for requests on the published IP address and port of the service.
The Pool manages the connections between Stingray and the back-end nodes (the servers which provide the service). A pool represents a group of back-end nodes.

Everything else

All other data-plane functions of Stingray (those relating to client traffic) are associated with either a virtual server or a pool. Health Monitors run asynchronously, probing the servers with built-in and custom tests to verify that they are operating correctly; if a server fails, it is taken out of service.

1. Virtual Server's processing (request)

The Virtual Server listens for TCP connections or UDP datagrams on its nominated IP and port. It reads the client request and processes it:

SSL Decryption is performed by a virtual server. It references certificates and CRLs that are stored in the configuration catalog.
Service Protection is configured by Service Protection Classes which reside in the catalog. Service Protection defines which requests are acceptable, and which should be discarded immediately.
A Virtual Server then executes any Request Rules. These rules reside in the catalog. They can manipulate traffic, and select a pool for each request.

2. Pool's processing

The request rules may select a pool to handle the request. If they complete without selecting a pool, the virtual server's 'default pool' is used:

The pool performs load-balancing calculations, as specified by its configuration. A number of load balancing algorithms are available.
A virtual server's request rule may have selected a session persistence class, or a pool may have a preferred session persistence class. In this case, the pool will endeavour to send requests in the same session to the same node, overriding the load-balancing decision. Session persistence classes are stored in the catalog and referenced by a pool or rule.
Finally, a pool may SSL-encrypt traffic before sending it to a back-end node. SSL encryption may reference client certificates, root certs and CRLs in the catalog to authenticate and authorize the connection.

3. Virtual Server's processing (response)

The pool waits for a response from a back-end node, and may retry requests if an error is detected or a response is not received within a timeout period. When a response is received, it is handed back to the virtual server:

The virtual server may run Response Rules to modify the response, or to retry it if it was not acceptable. Response rules are stored in the catalog.
A virtual server may be configured to compress HTTP responses.
They will only be compressed if the remote client has indicated that it can accept compressed content.
The virtual server may be configured to write a log file entry to record the request and response. HTTP access log formats are available, and formats for other protocols can be configured.
A request rule may have selected a Service Level Monitoring class to monitor the connection time, or the virtual server may have a default class. These service level monitoring classes are stored in the catalog, and are used to detect poor response times from back-end nodes.
Finally, a virtual server may assign the connection to a Bandwidth Management Class. A bandwidth class is used to restrict the bandwidth available to a connection; these classes are stored in the catalog.

Many of the more complex configuration objects are stored in the configuration catalog. These objects are referenced by a virtual server, pool or rule, and they can be used by a number of different services if desired.

Other configuration objects

Two other configuration objects are worthy of note:

Monitors are assigned to a pool, and are used to asynchronously probe back-end nodes to detect whether they are available or not. Monitors reside in the catalog.
Traffic IP Groups are used to configure the fault-tolerant behavior of Stingray. They define groups of IP addresses that are shared across a fault-tolerant cluster.

Configuration

Core service objects - Virtual Servers, Pools, Traffic IP Groups - are configured using the 'Services' part of the Stingray Admin server. Catalog objects and classes - Rules, Monitors, SSL certificates, Service Protection, Session Persistence, Bandwidth Management and Service Level Monitoring classes - are configured using the 'Catalogs' part of the Stingray Admin server. Most types of catalog objects are referenced by a virtual server or pool configuration, or by a rule invoked by the virtual server.

Read more

For more information on the key features, refer to the Product Briefs for Stingray Traffic Manager.
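To make step 2 above concrete, here is a minimal TrafficScript request-rule sketch that selects a pool (the path prefix and pool name are illustrative; the pool must already exist in your configuration):

# Send API traffic to a dedicated pool; everything else falls through
# to the virtual server's default pool
if( string.startsWith( http.getPath(), "/api/" ) ) {
   pool.use( "api-servers" );
}

If the rule completes without selecting a pool, the virtual server's default pool handles the request, as described above.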
View full article
This article describes how TrafficScript manages the memory needed when a rule executes, and how it references connection and global data that is stored outside of a rule's execution environment. It will help you understand the differences between local variables in TrafficScript, connection-local variables (connection.data.get/set), resource data (resource.get) and global data (data.get/set).

Overview

TrafficScript is a very lightweight compiled language. TrafficScript rules are compiled into a Stingray-internal bytecode with about a dozen simple stack-based instructions and are executed on an internal 'virtual machine' (code-named 'ichor'). All of the 'heavy lifting' (i.e. all of the TrafficScript functions) is implemented by internal Stingray procedures (not by the TrafficScript language), so the performance of TrafficScript is driven by native Stingray performance rather than the performance of the virtual machine bytecode.

The biggest single determinant of performance in an optimized, lightweight virtual machine like ichor is the use of memory. Ichor goes to great pains to minimize memory copies by using references and region-based memory management where possible, to reduce the overhead as far as possible.

In this article, we'll consider how TrafficScript addresses memory via the String datatype. Internally, Strings are implemented as references (pointer-and-length) to memory that is often managed outside of the Ichor runtime. From Ichor's perspective, this memory is read-only; multiple strings can refer to the same memory area, and memory is only copied when a new string with new contents is created. This allows ichor to make assumptions that significantly improve execution speed and memory footprint.

A model of memory

The diagram below outlines five types of memory that TrafficScript can address.

Constants

Stored with the compiled TrafficScript rule: String, integer and double values that are declared in a TrafficScript rule are stored with the rule bytecode and referenced directly. They are deduped, simply to reduce the memory footprint of the compiled TrafficScript rule.

Local variables and temporary values

Stored for the scope of the rule execution: Local variables and temporary values that are created during the execution of a TrafficScript rule are stored on the execution stack and in the growable heap used by the TrafficScript virtual machine; this temporary memory region is discarded once the rule has finished executing.

Because string data uses references rather than private copies, code like the following is very efficient:

$body = http.getResponseBody();
if( string.regexMatch( $body, "(.*?)(<body.*?>)(.*)" ) ) {
  $head = $1;
  $bodytag = $2;
  $remainder = $3;
}

No memory copies are made during the regex search and the assignment of the results to $head, $bodytag and $remainder. These variables simply contain references to substrings within Stingray's internal copy of the response body.

TrafficScript variables and temporaries are only valid for the duration of the rule's execution; persistent data is copied out of the heap as a side-effect of the relevant TrafficScript operations, and the heap is safely and quickly discarded in a single operation when the rule execution completes.
Per-connection data

Data can be stored with a connection using connection.data.set(), and retrieved by a later rule using connection.data.get(). This is used when sharing state between the rules that process a connection.

Stored for the duration of the connection: A connection that is processed by Stingray uses a variety of memory data structures for various data types. HTTP headers and other connection data are stored with the connection. If a TrafficScript rule requests the value of the path or a header (for example), it is given a reference to the connection-local memory containing the path, so there are no memory copies. If a TrafficScript rule updates the value of the path (for example, using http.setPath()), then the connection-local copy is updated so that the value persists when the TrafficScript rule completes.

In the example above where the TrafficScript rule used http.getResponseBody(), all of the strings refer to the connection-local copy of the body, and this single copy is used by all TrafficScript rules that need to access it in a read-only fashion.

Note: Stingray's HTTP virtual server type abstracts an HTTP transaction from the underlying TCP connection. Connection-local data is associated with the HTTP transaction and discarded when the transaction completes.

connection.data.set(): You can explicitly store data with connection scope using the connection.data.set() TrafficScript function. This places a copy of the data in the connection's growable memory pool, and this data can be retrieved by a later TrafficScript rule (connection.data.get()) or a transaction log macro. The connection's memory pool is discarded in a single operation once the connection has completed.

Per-process data

Resource files are stored per-process and can be referenced efficiently using resource.get() and related functions.

The most common per-process data that TrafficScript will address is the contents of resource files. Resource files sit in the extra section of the Stingray configuration. They are loaded into memory and stored persistently at startup, and whenever they change on disk.

resource.get() returns a reference to the body of the already-loaded resource file. In a similar fashion, resource.getMTime() and resource.getMD5() return the pre-calculated values, so there is no disk or compute overhead from invoking these functions.

Global data

Data can be shared between all rules using the global key-value store. Access the data using data.get(), data.set() and related functions.

On a multi-core machine, Stingray will typically run one zeus.zxtm traffic manager process per core. These processes share a fixed-size shared memory segment that is allocated at startup. This shared memory segment is used for a number of purposes - sharing session persistence data, bandwidth and rate data, the web content cache, etc. It includes a key-value store called the Global Data Table that you address using TrafficScript functions such as data.set() and data.get().

The Global Data Table is the key memory resource to use if you want to share data between different connections (the alternative is an external solution accessed using a Java Extension or other external callout, or a client-side cookie). Keys and values are stored as strings (other types are serialized and deserialized on demand), and the size of the table is fixed (trafficscript!data_size), so you must track and discard entries yourself.
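As a small illustration, the following request-rule sketch keeps a per-path hit counter shared by every zeus.zxtm process (the key prefix is illustrative, and a real deployment would also need to remove or expire keys to stay within trafficscript!data_size):

# Count requests per path across all traffic manager processes
$key = "hits:" . http.getPath();
$count = data.get( $key );   # returns the empty string if the key is unknown
if( $count == "" ) { $count = 0; }
data.set( $key, $count + 1 );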
Without locking, iterators or memory management, using the global data table effectively can be a challenge.

data.set( $key, $value ): put a copy of the key/value pair in the global data table, serializing non-string data structures where necessary
data.get( $key ): return the corresponding value, de-serializing where necessary, or the empty string if $key is not recognised
data.remove( $key ): removes the key/value pair from the table, freeing the memory used by both
data.reset( [$prefix] ): removes every entry, or just entries where the key begins with $prefix, from the table, freeing memory
data.getMemoryUsage() and data.getMemoryFree(): indicate memory usage and can be used to detect impending memory exhaustion

Read more

HowTo: TrafficScript Arrays and Hashes
Investigating the performance of TrafficScript - storing tables of data (illustrates efficient use of the global data table)
TrafficScript is Stingray's scripting and configuration language that lets you specify precisely how Stingray must handle each type of request, in as much detail as you need. Without TrafficScript, you would have to configure your load balancer with a single, 'lowest-common-denominator' policy that describes how to handle all your network traffic. With TrafficScript, you control how Stingray handles your traffic, inspecting and modifying each request and response as you wish, and pulling in each of Stingray's features as you require.

What is TrafficScript?

TrafficScript is a high-level programming language used to create 'rules' which are invoked by the traffic manager each time a transaction request or response is received. TrafficScript rules have full access to all request and response data, and give you full control over how end users interact with the load-balanced services. They are commonly used to selectively enable particular Traffic Manager features (for example, bandwidth control, caching or security policies) and to modify request and response data to handle error cases or augment web page data.

Although TrafficScript is a new language, the syntax is intentionally familiar. It is deeply integrated with the traffic management kernel for two reasons:

Performance: the integration allows for very efficient, high-performance interaction with the internal state of the traffic manager

Abstraction: TrafficScript presents a very easy-to-use request/response event model that abstracts the internal complexities of managing network traffic away from the developer.

You can use TrafficScript to create a wide range of solutions, and the familiar syntax means that complex code can be prototyped and deployed rapidly.

Example 1 - Modifying Requests

# Is this request a video download?
$url = http.getPath();
if( string.wildmatch( $url, "/videos/*.flv" ) ) {
   # Rewrite the request to target an f4v container, not flv
   $url = string.replace( $url, ".flv", ".f4v" );
   http.setPath( $url );

   # We encode flash videos at 1088 Kbits max.  Apply a 2 Mbit limit
   # to control download tools and other greedy clients
   response.setBandwidthClass( "Videos 2Mbits" );

   # We don't want to cache the response in the Stingray cache, even if
   # the HTTP headers state that it is cacheable
   http.cache.disable();
}

A simple request rule that modifies the request and instructs the traffic manager to apply bandwidth and cache customizations.

TrafficScript's close integration with the traffic management kernel makes it as easy to rewrite HTTP responses as HTTP requests:

Example 2 - Modifying Responses

$type = http.getResponseHeader( "Content-Type" );
if( !string.startsWith( $type, "text/html" ) ) break;

$response = http.getResponseBody();
$response = string.replaceAll( $response,
    "http://intranet.mycorp.com/", "https://extranet.mycorp.com/" );
http.setResponseBody( $response );

A response rule that makes a simple replacement to change links embedded in HTTP responses.

TrafficScript can invoke external systems in a synchronous or asynchronous fashion:

TrafficScript functions like http.request.get(), auth.query() and net.dns.resolveIP() will query an external HTTP, LDAP or DNS server and return the result of the query. They operate synchronously (the rule is 'blocked' while the query is running), but the Traffic Manager will process other network traffic while the current rule is temporarily suspended.

The TrafficScript function event.emit() raises an event to Stingray's Event Handling system.
The TrafficScript rule continues to execute and the event is processed asynchronously. Events can trigger a variety of actions, ranging from syslog or email alerts to complex user-provided scripts.

These capabilities allow the Traffic Manager to interface with external systems to retrieve data, verify credentials or initiate external control-plane actions.

Example 3 - Accessing an external source (Google News)

$type = http.getResponseHeader( "Content-Type" );
if( $type != "text/html" ) break;  # Stop processing this rule

$res = http.request.get(
   "https://ajax.googleapis.com/ajax/services/search/news?".
   "v=1.0&q=Riverbed" );

$r = json.deserialize( $res );
$rs = $r['responseData']['results'];

$html = "<ul>\n";
foreach( $e in $rs ) {
   $html .= '<li>' . '<a href="'.$e['unescapedUrl'].'">'.$e['titleNoFormatting'].'</a>'. '</li>';
}
$html .= "</ul>\n";

$body = http.getResponseBody();
$body = string.replace( $body, "<!--RESULTS-->", $html );
http.setResponseBody( $body );

An advanced response rule that queries an external datasource and inserts additional data into the web page response.

TrafficScript rules may also invoke Java Extensions. Extensions may be written in any language that targets the JVM, such as Python or Ruby as well as Java. They allow developers to use third-party code libraries and to write sophisticated rules that maintain long-term state or perform complex calculations.

Getting started with RuleBuilder

The full TrafficScript language gives you access to over 200 functions, with the support of a proper programming language - variables, tests, loops and other flow control. You can write TrafficScript rules much as you'd write Perl scripts (or Python, JavaScript, Ruby, etc).

The RuleBuilder gives you a UI that lets you configure tests, and actions which are executed if one or all of the tests are satisfied. The tests and actions you can use are predefined, and cover a subset of the full functions of TrafficScript. You can use the RuleBuilder much as you'd use the filtering rules in your email client.

RuleBuilder provides a simple way to create basic policies to control Traffic Manager.

If you're not familiar with programming languages, then RuleBuilder is a great way to get started. You can create simple policies to control Stingray's operation, and then, with one click, transform them into the equivalent TrafficScript rule so that you can learn the syntax and extend them as required. There's a good example to start with in the Stop hot-linking and bandwidth theft! article.

Examples

Collected Tech Tips: TrafficScript examples
Top Stingray Solutions and Deployments (many solutions depend on TrafficScript)

Read More

Stingray TrafficScript Guide in the Stingray Product Documentation
This technical brief discusses Stingray's Clustering and Fault Tolerance mechanisms ('TrafficCluster').

Clustering

Stingray Traffic Managers are routinely deployed in clusters of two or more devices, for fault-tolerance and scalability reasons. A cluster is a set of traffic managers that share the same basic configuration (for variations, see 'Multi-Site Management' below). These traffic managers act as a single unit with respect to the web-based administration and monitoring interface: configuration updates are automatically propagated across the cluster, and diagnostics, logs and statistics are automatically gathered and merged by the admin interface.

Architecture - fully distributed, no 'master'

There is no explicit or implicit 'master' in a Stingray cluster - like the Knights of the Round Table, all Stingrays have equal status. This design improves the reliability of the cluster as there is no need to nominate and track the status of a single master device.

Administrators can use the admin interface on any Stingray device to manage the cluster. Intra-cluster communication is secured using SSL and fingerprinting to protect against the interception of configuration updates or false impersonation (man-in-the-middle) of a cluster member.

Note: You can remove administration privileges from selected traffic managers by disabling the control!canupdate configuration setting for those traffic managers. Once a traffic manager is restricted in that way, its peers will refuse to accept configuration updates from it, and the administration interface is disabled on that traffic manager. If a restricted traffic manager is in some way compromised, it cannot be used to further compromise the other traffic managers in your cluster. If you find yourself in the position that you cannot access any of the unrestricted traffic managers and you need to promote a restricted traffic manager to regain control, please refer to the technical note What to do if you need to access a restricted Stingray Traffic Manager.

Traffic Distribution across a Cluster

Incoming network traffic is distributed across a cluster using a concept named 'Traffic IP Groups'. A Traffic IP Group contains a set of floating (virtual) IP addresses (known as 'Traffic IPs') and it spans some or all of the traffic managers in a cluster. For example:

The Stingray cluster contains traffic managers 1, 2, 3 and 4.
Traffic IP Group A contains traffic IP addresses 'x' and 'y' and is managed by traffic managers 1, 2 and 3.
Traffic IP Group B contains traffic IP address 'z' and is managed by traffic managers 3 and 4.

The traffic managers handle traffic that is destined to the traffic IP addresses using one of two methods:

Single-hosted traffic IP groups: if a group is configured to operate in a 'single-hosted' fashion, each IP address is raised on a single traffic manager. If there are multiple IP addresses in the group, the IP addresses will be shared between the traffic managers in an even fashion.

Multi-hosted traffic IP groups: if a group is configured to operate in a 'multi-hosted' fashion, each IP address is raised on all of the traffic managers. The traffic managers publish the IP address using a multicast MAC address and employ the zcluster kernel module (see Stingray Kernel Modules for Linux Software) to filter the incoming traffic so that each connection is processed by one traffic manager and the workload is shared evenly.
Single-hosted is typically easier to manage and debug in the event of problems, because all of the traffic to a traffic IP address is targeted at the same traffic manager. In high-traffic environments, it's common to assign multiple IP addresses to a single-hosted traffic IP group, let the traffic managers distribute those IP addresses evenly, and publish all of the IP addresses in a round-robin DNS fashion. This gives approximately even distribution of traffic across these IP addresses.

Multi-hosted traffic IP groups are more challenging to manage, but they have the advantage that all traffic is evenly distributed across the machines that manage the traffic IP group. For more information, refer to the article Feature Brief: Deep-dive on Multi-Hosted IP addresses in Stingray Traffic Manager.

If possible, you should use single-hosted traffic IP groups in very high traffic environments. Although multi-hosted gives even traffic distribution, this comes at a cost:

Incoming packets are sprayed to all of the traffic managers in the multi-hosted traffic IP group, resulting in an increase in network traffic

Each traffic manager must run the zcluster kernel module to filter incoming traffic; this module will increase the CPU utilization of the kernel on that traffic manager

Fault Tolerance

The traffic managers in a cluster each perform frequent self-tests, verifying network connectivity, correct operation and internal health. They broadcast health messages periodically (every 500 ms by default - see flipper!monitor_interval) and listen for the health messages from their peers.

If a traffic manager fails, it either broadcasts a health message indicating the problem, or (in the event of a catastrophic situation) it stops broadcasting health messages completely. Either way, its peers in the Stingray cluster will rapidly identify that it has failed. In this situation, two actions are taken:

An event is raised to notify the Event System that a failure has occurred. This will typically raise an alert in the event log and UI, and may send an email or perform other actions if they have been configured

Any traffic IP addresses that the failed traffic manager was responsible for are redistributed appropriately across the remaining traffic managers in each traffic IP group

Note that if a traffic manager fails, it will voluntarily drop any traffic IP addresses that it is responsible for.

Failover

If a traffic manager fails, the traffic IP addresses that it is responsible for are redistributed. The goal of the redistribution method is to share the orphaned IP responsibilities as evenly as possible with the remaining traffic managers in the group, without reassigning any other IP allocations. This minimizes disruption and seeks to ensure that traffic is as evenly shared as possible across the remaining cluster members.

The single-hosted method is granular to the level of individual traffic IP addresses. The failover method is described in the article How are single-hosted traffic IP addresses distributed in a Stingray cluster (TODO).

The multi-hosted method is granular to the level of an individual TCP connection. Its failover method is described in the article How are multi-hosted traffic IP addresses distributed in a Stingray cluster (TODO).
State sharing within a cluster

Stingray machines within a cluster share some state information:

Configuration: configuration is automatically replicated across the cluster, and all traffic managers hold an identical copy of the entire configuration at all times

Health Broadcasts: Stingray machines periodically broadcast their health to the rest of the cluster

Session Persistence data: some session persistence methods depend on Stingray's internal store (see Session Persistence - implementing timeouts). Local updates to that store are automatically replicated across the cluster at a sub-second granularity

Bandwidth Data: bandwidth classes that share a bandwidth allocation across a cluster (see Feature Brief: Bandwidth and Rate Shaping in Stingray Traffic Manager) periodically exchange state so that each traffic manager can dynamically negotiate its share of the bandwidth class based on current demand

Stingray does not share detailed connection information across a cluster (SSL state, rules state and so on), so if a Stingray Traffic Manager were to fail, any TCP connections it is currently managing will be dropped. You can guarantee that no connections are ever dropped by using a technique like VMware Fault Tolerance to run a shadow traffic manager that tracks the state of the active traffic manager completely. This solution is supported by Riverbed and is in use in a number of deployments where 5- or 6-9's uptime is not sufficient: VMware Fault Tolerance is used to ensure that no connections are dropped in the event of a Stingray failure.

Multi-Site Management

Recall that all of the traffic managers in a Stingray cluster have identical copies of the configuration and therefore operate in an identical fashion.

Stingray Traffic Manager clusters may span multiple locations, and in some situations, you may need to run slightly different configurations in each location. For example, you may wish to use a different pool of web servers when your service is running in your New York datacenter compared to your Docklands datacenter.

In simple situations, this can be achieved with judicious use of TrafficScript to apply slightly different traffic management actions based on the identity of the traffic manager that is processing the request (sys.hostname()), or the IP address that the request was received on:

$ip = request.getLocalIP();

# Traffic IPs in the range 31.44.1.* are hosted in Docklands
if( string.ipmaskMatch( $ip, "31.44.1.0/24" ) )
   pool.select( "Docklands Webservers" );

# Traffic IPs in the range 154.76.87.* are hosted in New Jersey
if( string.ipmaskMatch( $ip, "154.76.87.0/24" ) )
   pool.select( "New Jersey Webservers" );

In more complex situations, you can enable the Multi-Site Management option for the Stingray configuration. This option allows you to apply a layer of templating to your configuration - you define a set of locations, assign each traffic manager to one of these locations, and then you can template individual configuration keys so that they take different values depending on the location in which the configuration is read.

There are limitations to the scope of Multi-Site Manager (it currently does not interoperate with certain other features, and the REST API is not able to manage configuration that is templated using Multi-Site Manager). Please refer to the What is Stingray Multi-Site Manager? feature brief for more information, and to the relevant chapter in the Stingray Product Documentation for details of limitations and caveats.
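The same per-location selection can also key off the traffic manager's own identity using sys.hostname(), as mentioned above. A minimal sketch, assuming hypothetical hostnames (the pool names match the example rule, but the hostnames are purely illustrative):

# Select a pool based on which traffic manager is running this rule.
# The hostnames below are examples only - substitute your own.
$me = sys.hostname();
if( $me == "tm1.docklands.example" || $me == "tm2.docklands.example" ) {
   pool.select( "Docklands Webservers" );
} else {
   pool.select( "New Jersey Webservers" );
}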
Read More Stingray User Manual in the Stingray Product Documentation
In situations where TrafficScript does not provide the necessary functionality, you may wish to turn to a general-purpose language. In this case, TrafficScript can call out to a locally-installed JVM (Java Virtual Machine) and invoke a 'Java Extension' to perform the necessary processing.

This allows you to run code written in a variety of languages (Python and Ruby can target the JVM as well as Java), and to use third-party libraries to provide functionality that would be difficult to code up in TrafficScript. For example, you can use a Java Extension and the relevant class libraries to inspect image and PDF data 'on-the-wire' and apply watermarks or steganography to secure the data and trace unauthorized redistribution. Java Extensions provide a means to interface with third-party systems that are not natively supported by TrafficScript, such as MySQL or Oracle databases, or an XML-RPC server. Finally, they make it easier to accumulate data outside of the regular request/response event model and act on that data asynchronously.

How do Java Extensions function?

Stingray Java Extensions use an extended version of the standard Java Servlet API that adds implementations of significant TrafficScript control functions and allows a chain of Java Extensions to process requests and responses.

A Java Extension is loaded into the local JVM on demand (the first time java.run( 'ExtensionName' ) is invoked). Stingray passes the basic request (and response) data to the extension and then invokes the servlet 'service' or 'doGet' method in the extension. The extension runs to completion and then control is handed back to the TrafficScript rule that invoked it. A minimal invocation sketch appears at the end of this article.

Java Extensions run in a dedicated Java Virtual Machine (JVM), on the same server system as the core Stingray traffic management software. The JVM isolates the Java Extensions from the Stingray kernel, so that extensions can safely perform operations such as database access without blocking or unduly interfering with the Stingray kernel. The Java Extensions engine can run multiple extensions concurrently (subject to a global limit), and extensions can build up and store persistent data, spawn threads for background processing tasks and perform all manner of blocking operations, subject to configured connection timeouts. The engine supports remote debugging and hot code patching so that running extensions can be debugged and updated on-the-fly.

Java Extensions may be used against HTTP and non-HTTP traffic, and may also be used to implement additional functions that can be called from TrafficScript. Extensions can be written in Java, Python or Ruby.

What can you do with Java Extensions?

Java Extensions let you draw on the capabilities of Java, the functionality of the JRE and the huge variety of supporting Java class libraries. You can perform detailed content inspection and modification, communicate with external databases, applications and services, and implement very sophisticated application delivery logic.
Possible applications include:

Content Watermarking - protect intellectual property by applying unique visible or hidden watermarks to outgoing documents or images;

Authentication, Authorization and Access Control - checking user credentials against an LDAP, Radius, TACACS+, Active Directory or other database and applying access control;

Proxying multiple applications behind a single-sign-on gateway, using Java Extensions to create the necessary logic to broker, authenticate and translate requests between remote clients and the different back-end applications and services;

Application Mash-ups, built from multiple different sources. Java Extensions can even communicate with raw sockets to interface with proprietary applications;

XML Signature Verification and Generation - verify and strip signatures from incoming XML data, or replace signatures with locally generated ones to indicate that a document has passed initial checking at the gateway;

Transaction logging and alerting - logging key events to a remote database or raising an alert, reconfiguring Stingray via the Control API or performing other custom actions as appropriate;

Complex Request Routing based on location data, URL hash values or real-time information retrieved from an external database or service.

Examples

Collected Tech Tips: Java Extension Examples

Read More

Stingray Java Development Guide in the Stingray Product Documentation
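As promised above, here is a minimal TrafficScript sketch of a response rule handing a transaction to an extension. The extension name 'WatermarkImages' is hypothetical - it stands in for an extension you have written and uploaded to the Java Extensions catalog:

# Hand image responses to a Java Extension for watermarking.
# 'WatermarkImages' is an illustrative extension name, not one that ships
# with the product; the extension must already be uploaded and compiled.
$type = http.getResponseHeader( "Content-Type" );
if( string.startsWith( $type, "image/" ) ) {
   java.run( "WatermarkImages" );
}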
Overview

Stingray's RESTful Control API allows HTTP clients to access and modify Stingray cluster configuration data. For example, a program using standard HTTP methods can create or modify virtual servers and pools, or work with other Stingray configuration objects. The RESTful Control API can be used by any programming language and application environment that supports HTTP.

Resources

The Stingray RESTful API is HTTP-based and published on port 9070. Requests are made as standard HTTP requests, using the GET, PUT or DELETE methods. Every RESTful call deals with a "resource". A resource can be one of the following:

A list of resources, for example, a list of Virtual Servers or Pools.
A configuration resource, for example a specific Virtual Server or Pool.
A file, for example a rule or a file from the extra directory.

Resources are referenced through a URI with a common directory structure. For this first version of the Stingray RESTful API the URI for all resources starts with "/api/tm/1.0/config/active", so for example to get a list of pools, the URI would be "/api/tm/1.0/config/active/pools" and to reference the pool named "testpool", the URI would be "/api/tm/1.0/config/active/pools/testpool".

When accessing the RESTful API from a remote machine, HTTPS must be used, but when accessing the RESTful API from a local Stingray instance, HTTP can be used.

By default, the RESTful API is disabled; when enabled it listens on port 9070. The RESTful API can be enabled and the port can be changed in the Stingray GUI by going to System > Security > REST API.

To complete the example, to reference the pool named "testpool" on the Stingray instance with a host name of "stingray.example.com", the full URI would be "https://stingray.example.com:9070/api/tm/1.0/config/active/pools/testpool". To get a list of all the types of resources available, you can access the URI "https://stingray.example.com:9070/api/tm/1.0/config/active".

To retrieve the data for a resource you use the GET method, to add or change a resource you use the PUT method, and to delete a resource you use the DELETE method.

Data Format

Data for resource lists and configuration resources is returned as JSON structures with a MIME type of "application/json". JSON allows complex data structures to be represented as strings that can be easily passed in HTTP requests. When the resource is a file, the data is passed in its raw format with a MIME type of "application/octet-stream".

For lists of resources, the data returned has the following format (the names and hrefs are placeholders):

{ "children": [
   { "name": "<resource name>", "href": "/api/tm/1.0/config/active/pools/<resource name>" },
   { "name": "<resource name>", "href": "/api/tm/1.0/config/active/pools/<resource name>" }
] }

For example, the list of pools, given two pools, "pool1" and "pool2", would be:

{ "children": [
   { "name": "pool1", "href": "/api/tm/1.0/config/active/pools/pool1" },
   { "name": "pool2", "href": "/api/tm/1.0/config/active/pools/pool2" }
] }

For configuration resources, the data will contain one or more sections of properties, always with at least one section named "basic", and the property values can be of different types.
The format looks like:

{
  "properties": {
    "<section name>": {
      "<property name>": "<string value>",
      "<property name>": <numeric value>,
      "<property name>": <boolean value>,
      "<property name>": [<value>, <value>],
      "<property name>": [<key>: <value>, <key>: <value>]
    },
    "<section name>": {
      "<property name>": "<string value>",
      "<property name>": <numeric value>
    }
  }
}

Accessing the RESTful API

Any client or program that can handle HTTP requests can be used to access the RESTful API. Basic authentication is used, with the usernames and passwords matching those used to administer Stingray. To view the data returned by the RESTful API without having to do any programming, there are browser plug-ins that can be used. One that is available is the Chrome REST Console, and it is very helpful during testing to have something like this available. One nice thing about a REST API is that it is discoverable, so using something like the Chrome REST Console, you can walk the resource tree and see everything that is available via the RESTful API. You can also add, change and delete data. For more information on using the Chrome REST Console see: Tech Tip: Using Stingray's RESTful Control API with the Chrome REST Console.

When adding or changing data, use the PUT method; for configuration resources, the data sent in the request must be in JSON format and must match the data format returned when doing a GET on the same type of resource. When adding a configuration resource you do not need to include all properties, just the minimum sections and properties required to add the resource, and this will vary for each resource. When changing data you only need to include the sections and properties that need to be changed. To delete a resource, use the DELETE method.

Notes

An important caution when changing or deleting data is that this version of the RESTful API does not do data integrity checking. The RESTful API will allow you to make changes that would not be allowed in the GUI or CLI. For example, you can delete a Pool that is being used by a Virtual Server. This means that when using the RESTful API, you should be sure to understand the data integrity requirements for the resources that you are changing or deleting, and put validation in any programs you write.

This release of the RESTful API is not compatible with Multi-Site Manager, so both cannot be enabled at the same time.

Read more

Stingray REST API Guide in the Stingray Product Documentation
Collected Tech Tips: Using the RESTful Control API with Python
Tech Tip: Using Stingray's RESTful Control API with the Chrome REST Console
The Brocade Virtual Traffic Manager employs a range of protocol optimization and specialized offload functions to improve the performance and capacity of a wide range of networked applications.   TCP Offload applies to most protocol types and is used to offload slow client-side connections and present them to the server as if they were fast local transactions.  This reduces the duration of a connection, reducing server concurrency and allowing the server to recycle limited resources more quickly HTTP Optimizations apply to HTTP and HTTPS protocols.  Efficient use of HTTP keepalives (including carefully limiting concurrency to avoid overloading servers with thread-per-connection or process-per-connection models) and upgrading client connections to the most appropriate HTTP protocol level will reduce resource usage and connection churn on the servers Performance-sensitive Load Balancing selects the optimal server node for each transaction based on current and historic performance, and will also consider load balancing hints such as LARD to prefer the node with the hottest cache for each resource Processing Offload: Highly efficient implementations of SSL, compression and XML processing offload these tasks from server applications, allowing them to focus on their core application code Content Caching will cache static and dynamic content (discriminated by use of a 'cache key') and eliminate unnecessary requests for duplicate information from your server infrastructure   Further specialised functions such as Web Content Optimization, Rate Shaping (Dynamic rate shaping slow applications) and Prioritization (Detecting and Managing Abusive Referers) give you control over how content is delivered so that you can optimize the end user experience.   The importance of HTTP optimization   There's one important class of applications where ADCs make a very significant performance difference using TCP offload, request/response buffering and HTTP keepalive optimization.   A number of application frameworks have fixed concurrency limits. Apache is the most notable (the worker MPM has a default limit of 256 concurrent processes), mongrel (Ruby) and others have a fixed number of worker processes; some Java app servers also have an equivalent limit. The reason the fixed concurrency limits are applied is a pragmatic one; each TCP connection takes a concurrency slot, which corresponds to a heavyweight process or thread; too many concurrent processes or threads will bring the server to its knees and this can easily be exploited remotely if the limit is not low enough.   The implication of this limit is that the server cannot service more than a certain number of TCP connections concurrently. Additional connections are queued in the OS' listen queue until a concurrency slot is released. In most cases, an idle client keepalive connection can occupy a concurrency slot (leading to the common performance detuning advice for apache recommending that keepalives are disabled or limited).   When you benchmark a concurrency-limited server over a fast local network, connections are established, serviced and closed rapidly. Concurrency slots are only occupied for a short period of time, connections are not queued for long, so the performance achieved is high.   However, when you place the same server in a production environment, the duration of connections is much greater (slow, lossy TCP; client keepalives) so concurrency slots are held for much longer. 
It's not uncommon to see an application server running in production at <10% utilization, but struggling to achieve 10% of the performance that was measured in the lab.   The solution is to put a scalable proxy in front of the concurrency-limited server to offload the TCP connection, buffer the request data, use connections to the server efficiently, offload the response, free up a concurrency slot and offload the lingering keepalive connection.   Customer Stories     "Since Traffic Manager was deployed, there has been a major improvement to the performance and response times of the site." David Turner, Systems Architect, PLAY.COM   "With Traffic Manager we effortlessly achieved between 10-40 times improvement in performance over the application working alone." Steve Broadhead, BroadBand Testing   "Traffic Manager has allowed us to dramatically improve the performance and reliability of the TakingITGlobal website, effectively managing three times as much traffic without any extra burden."  Michael Furdyk, Director of Technology, TakingITGlobal   "The performance improvements were immediately obvious for both our users and to our monitoring systems – on some systems, we can measure a 400% improvement in performance." Philip Jensen, IT Section Manager, Sonofon   "700% improvement in application response times… The real challenge was to maximize existing resources, rather than having to continually add new servers." Kent Wright, Systems Administrator, QuantumMail     Read more   Feature Brief: Brocade vTM Content Caching
Stingray's Content Caching capability allows Stingray Traffic Manager to identify web page responses that are the same for each request and to remember ('cache') the content. The content may be 'static', such as a file on disk on the web server, or it may have been generated by an application running on the web server.

Why use Content Caching?

When another client asks for content that Stingray has cached in its internal web cache, Stingray can return the content directly to the client without having to forward the request to a back-end web server. This has the effect of reducing the load on the back-end web servers, particularly if Stingray has detected that it can cache content generated by complex applications which consume resources on the web server machine.

What are the pitfalls?

A content cache may store a document that should not be cached. Stingray conforms to the caching recommendations of RFC 2616, which describe how web browsers and servers can specify cache behaviour. However, if a web server is misconfigured and does not provide the correct cache control information, then a TrafficScript or RuleBuilder rule can be used to override Stingray's default caching logic.

A content cache may need a very large amount of memory to be effective. Depending on the spread of content for your service, and the proportion that is cacheable and frequently used compared to the long tail of less-used content, you may need a very large content cache to get the best possible hit rates. Stingray Traffic Manager allows you to specify precisely how much memory you wish to use for your cache, and to impose fine limits on the sizes of files to be cached and the duration that they should be cached for. Stingray's 64-bit software overcomes the 2-4 GB limit of older solutions, and Stingray can operate with a two-tier (in-memory and on-SSD) cache in situations where you need a very large cache and the cost of server memory is prohibitive.

How does it work?

Not all web content can be cached. Information in the HTTP request and the HTTP response drives Stingray's decisions as to whether or not a request should be served from the web cache, and whether or not a response should be cached.

Requests

Only HTTP GET and HEAD requests are cacheable; all other methods are not cacheable. The Cache-Control header in an HTTP request can force Stingray to ignore the web cache and to contact a back-end node instead. Requests that use HTTP basic-auth are uncacheable.

Responses

The Cache-Control header in an HTTP response can indicate that an HTTP response should never be placed in the web cache. The header can also use the max-age value to specify how long the cached object can be cached for; this may cause a response to be cached for less than the configured webcache!time parameter. HTTP responses can use the Expires header to control how long to cache the response for, although using the Expires header is less efficient than using the max-age value in the Cache-Control response header. The Vary HTTP response header controls how variants of a resource are cached, and which variant is served from the cache in response to a new request.

If a web application wishes to prevent Stingray from caching a response, it should add a 'Cache-Control: no-cache' header to the response.

Debugging Stingray's Cache Behaviour

You can use the global setting webcache!verbose if you wish to debug your cache behaviour. This setting is found in the Cache Settings section of the System > Global Settings page.
If you enable this setting, Stingray will add a header named 'X-Cache-Info' to the HTTP response to indicate how the cache policy has taken effect. You can inspect this header using Stingray's access logging, or using the developer extensions in your web browser.

X-Cache-Info values

X-Cache-Info: cached
X-Cache-Info: caching
X-Cache-Info: not cacheable; request had a content length
X-Cache-Info: not cacheable; request wasn't a GET or HEAD
X-Cache-Info: not cacheable; request specified "Cache-Control: no-store"
X-Cache-Info: not cacheable; request contained Authorization header
X-Cache-Info: not cacheable; response had too large vary data
X-Cache-Info: not cacheable; response file size too large
X-Cache-Info: not cacheable; response code not cacheable
X-Cache-Info: not cacheable; response contains "Vary: *"
X-Cache-Info: not cacheable; response specified "Cache-Control: no-store"
X-Cache-Info: not cacheable; response specified "Cache-Control: private"
X-Cache-Info: not cacheable; response specified "Cache-Control: no-cache"
X-Cache-Info: not cacheable; response specified max-age <= 0
X-Cache-Info: not cacheable; response specified "Cache-Control: no-cache=..."
X-Cache-Info: not cacheable; response has already expired
X-Cache-Info: not cacheable; response is 302 without expiry time

Overriding Stingray's default cache behaviour

Several TrafficScript and RuleBuilder cache control functions are available to facilitate the control of Stingray Traffic Manager's caching behaviour. In most cases, these functions eliminate the need to manipulate headers in the HTTP requests and responses.

http.cache.disable()

Invoking http.cache.disable() in a response rule prevents Stingray from caching the response. The RuleBuilder 'Make response uncacheable' action has the same effect.

http.cache.enable()

Invoking http.cache.enable() in a response rule reverts the effect of a previous call to http.cache.disable(). It causes Stingray's default caching logic to take effect. Note that it is possible to force Stingray to cache a response that would normally be uncacheable by rewriting the headers of that response using TrafficScript or RuleBuilder (response rewriting occurs before cacheability testing).

http.cache.setkey()

The http.cache.setkey() function is used to differentiate between different versions of the same request, in much the same way that the Vary response header functions. It is used in request rules, but may also be used in response rules. It is more flexible than the RFC 2616 Vary support, because it lets you partition requests on any calculated value - for example, different content based on whether the source address is internal or external, or whether the client's User-Agent header indicates an IE or Gecko-based browser. This capability is not available via RuleBuilder.

Simple control

http.cache.enable() and http.cache.disable() allow you to easily implement either a 'default on' or a 'default off' policy, where you either wish to cache everything cacheable unless you explicitly disallow it, or you wish to only let Stingray cache things you explicitly allow. For example, you may have identified a particular set of transactions out of a large working set that account for 90% of your web server usage, and you wish to cache just those requests, and not let less painful transactions knock these out of the cache. Alternatively, you may be trying to cache a web-based application which is not HTTP compliant, in that it does not properly mark up pages which are not cacheable, and caching them would break the application.
In this scenario, you wish to only enable caching for particular code paths which you have tested to not break the application. An example TrafficScript rule implementing a 'default off' policy might be:

# Only cache what we explicitly allow
http.cache.disable();

if( string.regexmatch( http.geturl(), "^/sales/(order|view).asp" )) {
   # these are our most painful pages for the DB, and are cacheable
   http.cache.enable();
}

RuleBuilder offers only the simple 'default on' policy, overridden either by the response headers or the 'Make response uncacheable' action.

Caching multiple resource versions for the same URL

Suppose that your web service returns different versions of your home page, depending on whether the client is coming from an internal network (10.0.0.0) or an external network. If you were to put a content cache in front of your web service, you would need to arrange that your web server sent a 'Cache-Control: no-cache' header with each response so that the page was not cached. Use the following request rule to manipulate the request and set a 'cache key' so that Stingray caches the two different versions of your page:

# We're only concerned about the home page...
if( http.getPath() != "/" ) break;

# Set the cache key depending on where the client is located
$ip = request.getRemoteIP();
if( string.ipmaskmatch( $ip, "10.0.0.0/8" ) ) {
   http.cache.setkey( "internal" );
} else {
   http.cache.setkey( "external" );
}

# Remove the Cache-Control response header - it's no longer needed!
http.removeResponseHeader( "Cache-Control" );

Forcing pages to be cached

You may have an application, say a JSP page, that says it is not cacheable, but you know that under certain circumstances it is, and you want to force Stingray to cache this page because generating it consumes significant resources on the web server. You can force Stingray to cache such pages by rewriting their response headers; any TrafficScript rewrites happen before the content caching logic is invoked, so you can perform extremely fine-grained caching control by manipulating the HTTP response headers of pages you wish to cache.

In this example, we have a JSP page that sets a 'Cache-Control: no-cache' header, which prevents Stingray from caching the page. We can make this response cacheable by removing the Cache-Control header (and potentially the Expires header as well), for example:

if( http.getPath() == "/testpage.jsp" ) {
   # We know this request is cacheable; remove the 'Cache-Control: no-cache'
   http.removeResponseHeader( "Cache-Control" );
}

Granular cache timeouts

For extra control, you may wish instead to use the http.setResponseHeader() function to set a Cache-Control header with a max-age= parameter to specify exactly how long this particular piece of content should be cached for, or add a Vary header to specify which parts of the input request this response depends on (e.g. user language, or cookie). You can use these methods to set cache parameters on entire sets of URLs (e.g. all *.jsp) or individual requests for maximum flexibility. The RuleBuilder 'Set Response Cache Time' action has the same effect. A short sketch illustrating this approach follows the links below.

Read more

Stingray Product Documentation
Cache your website - just for one second?
Managing consistent caches across a Stingray Cluster
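As flagged above, here is a minimal sketch of the granular-timeout approach; the path, the 60-second lifetime and the choice of Vary header are arbitrary values chosen for the example:

# Response rule: allow this (hypothetical) report page to be cached for
# 60 seconds, and vary the cached copy by the client's language preference.
if( http.getPath() == "/reports/summary.jsp" ) {
   http.setResponseHeader( "Cache-Control", "max-age=60" );
   http.setResponseHeader( "Vary", "Accept-Language" );
}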
Bandwidth Management and Rate Shaping are two key techniques to prioritize traffic using Stingray Traffic Manager:

Bandwidth Management is used to limit the bandwidth used by network traffic
Rate Shaping is used to limit the rate of transactions

Bandwidth Management

Stingray's Bandwidth Management is applied by assigning connections to a Bandwidth Class. A bandwidth class limits the bandwidth used by its connections in one of two ways:

Per connection: you could use a bandwidth class to limit the bandwidth of each video download to, for example, 400Mbits, to limit the effect of download applications that would otherwise use all your available bandwidth

Per class: all of the connections assigned to the class share the total bandwidth limit in a fair and equitable fashion. For example, you may wish to limit the amount of bandwidth that unauthenticated users use so that a proportion of your bandwidth is reserved for other traffic

The 'per class' bandwidth can be counted on a per-traffic-manager basis (simple) or can be shared across a traffic manager cluster (sophisticated). When it is shared, the traffic managers negotiate between themselves on an approximately per-second basis to share out parts of the bandwidth allocation in proportion to the demand on each traffic manager.

Assigning Bandwidth Management to connections

A bandwidth management class may be assigned in one of two different ways:

Per service: all of the connections processed by a virtual server will be assigned to a common bandwidth management class

Per connection: a TrafficScript rule can assign a connection to a bandwidth class based on any criteria, for example, whether the user is logged in, what type of content the user is requesting, or the geographic location of the user.

Examples of Bandwidth Management in action

HowTo: Control Bandwidth Management
Detecting and Managing Abusive Referers

Rate Shaping

Stingray's Rate Shaping is most commonly used to control the rate of particular types of transactions. For example, you could use Rate Shaping to control the rate at which users attempt to log in to a web form, in order to mitigate against dictionary attacks, or you could use Rate Shaping to protect a vulnerable application that is prone to being overloaded.

Rates are defined using Rate Classes, which can specify rates on a per-second or per-minute basis. Rate Shaping is implemented using a queue: a TrafficScript rule can invoke a rate class, and the execution of that rule is immediately queued.

If the queue limits (per minute or per second) have not been exceeded, the rule is immediately released from the queue and can continue executing

If the queue limits have been exceeded, the rule execution is paused until the queue limits are met

For example, to rate-limit requests for the /search.cgi resource using the limits defined in the 'DDoS Protect' rate class, you would use the following TrafficScript snippet:

$path = http.getPath();
if( $path == "/search.cgi" ) rate.use( "DDoS Protect" );

You can use the functions rate.getBacklog() and rate.use.noQueue() to query the length of the queue, or to test a connection against the current queue length without suspending it. A short sketch using rate.getBacklog() follows at the end of this article. Rate limits are applied by each traffic manager; the limit is not shared across the cluster in the way that bandwidth limits can be.

Rate shaping with contexts

In some cases, you may need to apply a rate limit per-user or per-URL. You can use rate.use() with an additional 'context' argument; the rate limit is applied to each context individually.
For example, to limit the number of requests to /search.cgi from each individual IP address, you would use:

$path = http.getPath();
$ip = request.getRemoteIP();
if( $path == "/search.cgi" ) rate.use( "DDoS Protect", $ip );

Examples of Rate Shaping in action

Dynamic rate shaping slow applications
Stingray Spider Catcher
The "Contact Us" attack against mail servers
Rate Limit only client Traffic from CDN/Proxy or Header Value
Detecting and Managing Abusive Referers

Read more

Stingray Product Documentation
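Building on the queue description above, here is a minimal sketch that checks the backlog before queueing a request. The threshold of 100 queued requests is an arbitrary value for the example, and whether you discard or queue over-limit requests is a design choice, not product behaviour:

# If the 'DDoS Protect' queue is already deep, drop the request outright
# rather than adding to the backlog (the threshold of 100 is illustrative).
$path = http.getPath();
if( $path == "/search.cgi" ) {
   if( rate.getBacklog( "DDoS Protect" ) > 100 ) {
      connection.discard();
   } else {
      rate.use( "DDoS Protect" );
   }
}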
Stingray's SOAP Control API is a standards-conformant SOAP-based API that makes it possible for other applications to query and modify the configuration of a Stingray cluster. For example, a network monitoring or intrusion detection system may reconfigure Stingray's traffic management rules as a result of abnormal network traffic; a server provisioning system could reconfigure Stingray when new servers came online. The SOAP Control API can be used by any programming language and application environment that supports SOAP services. Examples Collected Tech Tips: SOAP Control API examples Read more Stingray Control API Guide in the Stingray Product Documentation
Stingray's sophisticated 'Event Handling' system allows an administrator to configure precisely what actions Stingray should take when a wide range of events occur. The range of events covers internal changes (configuration modified, SSL certificate timing out), external changes (back-end server failure, service-level class falling outside parameters) and traffic-initiated events (a TrafficScript rule raising an event due to certain user traffic).

Stingray can be configured to perform any of a range of actions when an event is raised - log a message, raise an SNMP trap or syslog alert, send an email or even run a custom script. The Event Handler configuration maps groups of events to user-defined actions.

Configuration

Stingray already contains one built-in event handler that causes all events to be written to the global Event Log. This built-in handler and its "Log all Events" action cannot be deleted, but you can add additional actions if required. You can also add additional event handlers, such as the handler illustrated above that sends an email when there is a problem with a license key.

Components of an Event Handler

Each Event Handler is triggered by events of a particular type. When an event that matches the Event Type occurs, the Event Handler will invoke the Action it is configured to perform:

An Event Type is a set of events that can trigger an Event Handler. Stingray includes a number of predefined event types, such as 'Service Failed' or 'Any critical problem', so that you can easily create simple event handlers, such as 'email the administrator if any part of the infrastructure fails'. You can create new event types, and even create new events that you can raise from TrafficScript or Java Extensions.

There are a number of built-in Actions, such as 'send an SNMP trap', and you can create additional actions using either the SOAP alerting interface or scripts and programs that you can upload to Stingray.

Event formats

Information about the event is provided to the action that it triggered in the following format:

LEVEL (tab) [section] (tab) primary tag (tab) [tags (tab)]* text

LEVEL may be one of 'INFO' (for example, a configuration file was modified), 'WARNING' (for example, a node in a pool failed), 'SERIOUS' (for example, all of the nodes in a pool have failed) or 'FATAL' (a catastrophic error that prevents the traffic manager from functioning).

For example, if you stop a virtual server called 'Server1' the following event will be raised:

INFO (tab) vservers/Server1 (tab) vsstop (tab) Virtual server stopped

The first two components indicate that the event is an informational message about the virtual server "Server1". The primary tag, vsstop, defines what has happened to that virtual server. There are no additional tags because this event does not affect any other aspects of the configuration. Finally, there is a human-readable description of the event that occurred.

You can find a full list of the events that may be raised in the Stingray UI when you create a set of events: there are hundreds of events that can be trapped and acted upon.

Raising events from TrafficScript

You can use the TrafficScript function event.emit() to raise an event. For example, you may wish to log certain requests to the main Event Log, and this is an appropriate way to do so. A short sketch follows the links below.

Read more

Custom event handling in Stingray Traffic Manager
Traffic Managers can Tweet Too
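As flagged above, here is a minimal sketch of a rule raising a custom event. It assumes that event.emit() is called with a custom event name and a message string, and that a matching custom event type named 'WebApp.AdminLogin' has been created in the Event Handling configuration; both the name and the path are illustrative only:

# Raise a custom event when someone requests the admin login page.
# 'WebApp.AdminLogin' is an example custom event name - an event type
# matching it must exist for a handler to act on this event.
if( http.getPath() == "/admin/login" ) {
   event.emit( "WebApp.AdminLogin",
               "Admin login page requested from " . request.getRemoteIP() );
}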
Feature briefs

Feature Brief: Introduction to the Stingray Architecture
Feature Brief: Load Balancing in Stingray Traffic Manager
Feature Brief: Session Persistence in Stingray Traffic Manager
Feature Brief: Application Acceleration with Stingray Traffic Manager
Feature Brief: TrafficScript
Feature Brief: Server First, Client First and Generic Streaming Protocols
Feature Brief: Clustering and Fault Tolerance in Stingray Traffic Manager
Feature Brief: Health Monitoring in Stingray Traffic Manager
Feature Brief: Stingray Content Caching
Feature Brief: Bandwidth and Rate Shaping in Stingray Traffic Manager
Feature Brief: Stingray's Autoscaling capability
Feature Brief: Service Level Monitoring
Feature Brief: Event Handling in Stingray Traffic Manager
Feature Brief: Java Extensions in Stingray Traffic Manager
Feature Brief: Stingray's RESTful Control API
Feature Brief: Stingray's SOAP Control API
Stingray Product Documentation
Deployment Guide - Global Load Balancing with Parallel DNS

Other product briefs

Installation and Configuration
Hardware and Software requirements for Stingray Traffic Manager
Stingray Kernel Modules for Linux Software
A guide to Policy Based Routing with Stingray (Linux and VA)
Techniques for Direct Server Return with Stingray Traffic Manager
Tuning Stingray Traffic Manager
IP Transparency: Preserving the Client IP address in Stingray Traffic Manager

Operation
Ten Administration good practices
What happens when Stingray Traffic Manager receives more HTTP connections than the servers can handle?
Connection mirroring and failover with Stingray Traffic Manager
Managing consistent caches across a Stingray Cluster
Session Persistence - implementation and timeouts

TrafficScript
How is memory managed in TrafficScript?
Investigating the performance of TrafficScript - storing tables of data
Evaluating and Prioritizing Traffic with Stingray Traffic Manager
Managing XML SOAP data with TrafficScript

Other briefs
Introducing Zeusbench
Technical Tips: How to use Stingray Traffic Manager
Collected Tech Tips: TrafficScript examples
Collected Tech Tips: Java Extension Examples
Collected Tech Tips: Using the RESTful Control API
Collected Tech Tips: SOAP Control API examples