Pulse Secure vADC

Stingray Traffic Manager can run as either a forward or a reverse proxy. But what is a proxy? A reverse proxy? A forward proxy? And what can you do with such a feature?

Let's clarify what all these proxies are. In computing, a proxy is a service that accepts network connections from clients and forwards them on to a server. So in essence, any load balancer or traffic manager is a kind of proxy. Web caches are another example of proxy servers: they keep a copy of frequently requested web pages and deliver these pages themselves, rather than forwarding the request on to the 'real' server.

Forward and Reverse Proxies

The difference between a 'forward' and a 'reverse' proxy is determined by where the proxy is running.

Forward proxies: Your ISP probably uses a web cache to reduce its bandwidth costs. In this case, the proxy sits between your computer and the whole Internet. This is a 'forward proxy': it has a limited set of users (the ISP's customers), and can forward requests on to any machine on the Internet (the web sites that the customers are browsing).

Reverse proxies: Alternatively, a company can put a web cache in the same data center as its web servers, and use it to reduce the load on its systems. This is a 'reverse proxy': it has an unlimited set of users (anyone who wants to view the web site), but proxies requests on to a specific set of machines (the web servers running the company's web site). This is a typical role for traffic managers: they are traditionally deployed as reverse proxies.

Using Stingray Traffic Manager as a Forward Proxy

You can use Stingray Traffic Manager to forward requests on to any other computer, not just to a pre-configured set of machines in a pool.
TrafficScript is used to select the exact address to forward the request on to:

pool.use( "Pool name", $ipaddress, $port );

The pool.use() function is used just as you would normally pick a pool of servers for Stingray Traffic Manager to load balance across; the extra parameters specify the exact machine to use. This machine does not have to belong to the named pool: the pool name is there only so that Stingray Traffic Manager can use its settings for the connection (timeout settings, SSL encryption, and so on). We refer to this technique as 'Forward Proxy mode', or 'Forward Proxy' for short.

What use is a Forward Proxy?

Combined with TrafficScript, the Forward Proxy feature gives you complete control over the load balancing of requests. For example, you could use Stingray Traffic Manager to load balance RDP (Remote Desktop Protocol), using TrafficScript to pick out the user name of a new connection, look the name up in a database, and find the hostname of a desktop to allocate to that user.

Forward proxying also allows Stingray Traffic Manager to be used nearer the clients on a network. With some TrafficScript, Stingray Traffic Manager can operate as a caching web proxy, speeding up local Internet usage. You can then tie in other Stingray Traffic Manager features such as bandwidth shaping and service level monitoring, and TrafficScript response rules can filter the incoming data if needed.

Example: a web caching proxy using Stingray Traffic Manager and TrafficScript

You will need to set up Stingray Traffic Manager with a virtual server listening for HTTP proxy traffic. Set HTTP as the protocol, and enable web caching. Also, be sure to disable Stingray's "Location Header rewriting" on the connection management page. Then you will need to add a TrafficScript rule to examine the incoming connections and pick a suitable machine.
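To make the RDP example concrete, here is a standalone Python sketch of the selection step such a TrafficScript rule would perform before calling pool.use(): look a user name up and return an explicit (host, port) node. The names (select_backend, desktop-01.example.com, fallback.example.com) and the in-memory dict standing in for the database are purely illustrative, not part of any Stingray API.

```python
def select_backend(username, directory, default=("fallback.example.com", 3389)):
    """Return the (host, port) node to forward this user's connection to.

    `directory` stands in for the database lookup described in the article;
    unknown users fall back to a default node.
    """
    return directory.get(username, default)

# Hypothetical user-to-desktop allocation table
desktops = {
    "alice": ("desktop-01.example.com", 3389),
    "bob":   ("desktop-02.example.com", 3389),
}
```

In the real deployment, the returned host and port would be passed to pool.use() along with a pool name that supplies connection settings.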
Here's how you would build such a rule:

# Sanity check: ensure that only proxy traffic is being received
$host = http.getHostHeader();
if( http.headerExists( "X-Forwarded-For" ) || $host == "" ) {
   http.sendResponse( "400 Bad request", "text/plain",
      "This is a proxy service, you must send proxy requests", "" );
}

# Trim the leading http://host from the URL if necessary
$url = http.getRawUrl();
if( string.startswith( $url, "http://" ) ) {
   $slash = string.find( $url, "/", 8 );
   $url = string.substring( $url, $slash, -1 );
}
http.setPath( string.unescape( $url ) );

# Extract the port from the Host: header, if it is there
$pos = string.find( $host, ":" );
if( $pos >= 0 ) {
   $port = string.skip( $host, $pos + 1 );
   $host = string.substring( $host, 0, $pos - 1 );
} else {
   $port = 80;
}

# Supply the true IP address of the client requesting the page,
# and remove any proxy-specific headers
http.setHeader( "X-Forwarded-For", request.getRemoteIP() );
http.removeHeader( "Range" );            # Removing this header makes the request more cacheable
http.removeHeader( "Proxy-Connection" );

# The user might have requested a page on an unresolvable host, e.g.
# http://fakehostname.nowhere/. Resolve the IP and check.
$ip = net.dns.resolveHost( $host );
if( $ip == "" ) {
   http.sendResponse( "404 Unknown host", "text/plain",
      "Failed to resolve " . $host . " to an IP address", "" );
}

# The last task is to forward the request on to the target website
pool.use( "Forward Proxy Pool", $ip, $port );

Done! Now try using the proxy: go to your web browser's settings page or your operating system's network configuration (as appropriate) and configure an HTTP proxy. Fill in the hostname of your Stingray Traffic Manager and the port number of the virtual server running this TrafficScript rule, then try browsing to a few different web sites.
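The URL and Host-header parsing in the rule is easy to check in isolation. Below is a standalone Python sketch of the same two steps: trimming a leading http://host from a proxy-style absolute URL, and splitting an optional port out of the Host header (defaulting to 80). The helper names are invented for illustration and do not correspond to any Traffic Manager API.

```python
def strip_origin(url):
    """Trim a leading 'http://host' from an absolute proxy URL,
    leaving just the path, as the TrafficScript rule does."""
    if url.startswith("http://"):
        slash = url.find("/", len("http://"))
        return url[slash:] if slash != -1 else "/"
    return url

def split_host_port(host_header, default_port=80):
    """Split 'host[:port]' from a Host: header into (host, port)."""
    host, sep, port = host_header.partition(":")
    return host, int(port) if sep else default_port
```

For example, strip_origin("http://www.example.com/index.html") yields "/index.html", and split_host_port("www.example.com:8080") yields ("www.example.com", 8080).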
You will be able to see the URLs on the Current Activity page in the UI, and the Web Cache page will show you details of the content that has been cached by Stingray: the 'Recent Connections' report lists connections proxied to remote sites, and the Content Cache report lists the resources that Stingray has cached locally.

This is just one use of the forward proxy. You could easily put the feature to other uses, e.g. email delivery, SSL-encrypted proxies, and so on. Try it and see!
This article describes how to inspect and load-balance WebSockets traffic using Stingray Traffic Manager, and, when necessary, how to manage WebSockets and HTTP traffic that is received on the same IP address and port.

Overview

WebSockets is an emerging protocol used by many web developers to provide responsive and interactive applications. It is commonly used for chat and email applications, real-time games, and stock market and other monitoring applications.

By design, WebSockets is intended to resemble HTTP. It is transported over tcp/80, and the initial handshake resembles an HTTP transaction, but the underlying protocol is a simple bidirectional TCP connection. For more information on the protocol, refer to the Wikipedia summary and RFC 6455.

Basic WebSockets load balancing

Basic WebSockets load balancing is straightforward. You must use the 'Generic Streaming' protocol type to ensure that Stingray will correctly handle the asynchronous nature of WebSockets traffic.

Inspecting and modifying the WebSocket handshake

A WebSocket handshake message resembles an HTTP request, but you cannot use the built-in http.* TrafficScript functions to manage it, because these are only available in HTTP-type virtual servers. The libWebSockets.rts library (see below) implements analogous functions that you can use instead. Paste the library into your Rules catalog as libWebSockets.rts and reference it from your TrafficScript rule as follows:

import libWebSockets.rts as ws;

You can then use the ws.* functions to inspect and modify WebSockets handshakes. Common operations include fixing up Host headers and URLs in the request, and selecting the target servers (the 'pool') based on the attributes of the request.
import libWebSockets.rts as ws;

if( ws.getHeader( "Host" ) == "echo.example.com" ) {
   ws.setHeader( "Host", "www.example.com" );
   ws.setPath( "/echo" );
   pool.use( "WebSockets servers" );
}

Ensure that the rules associated with the WebSockets virtual server are configured to run at the Request stage, and to run 'Once', not 'Every'. The rule only needs to read and process the initial client handshake; it does not need to run against subsequent messages on the WebSocket connection. In other words, code to handle the WebSocket handshake should be configured as a Request Rule, with 'Run Once'.

SSL-encrypted WebSockets

Stingray can SSL-decrypt TCP connections, and this works fully with the SSL-encrypted wss:// protocol:

Configure your virtual server to listen on port 443 (or another port if necessary)
Enable SSL decryption on the virtual server, using a suitable certificate

Note that when testing this capability, we found that Chrome refused to connect to WebSocket services with untrusted or invalid certificates, and did not issue a warning or prompt to trust the certificate. Other web browsers may behave similarly. In Chrome's case, it was necessary to access the virtual server directly over https://, save the certificate, and then import it into the certificate store.

Stingray can also SSL-encrypt downstream TCP connections (enable SSL encryption in the pool containing the real WebSocket servers), and this works fully with SSL-enabled origin WebSockets servers.

Handling HTTP and WebSockets traffic

HTTP traffic should be handled by an HTTP-type virtual server rather than a Generic Streaming one. HTTP virtual servers can employ HTTP optimizations (keepalive handling, HTTP upgrades, compression, caching, HTTP session persistence) and can access the http.* TrafficScript functions in their rules.

If possible, you should run two public-facing virtual servers, listening on two separate IP addresses.
For example, HTTP traffic should be directed to www.site.com (which resolves to the public IP for the HTTP virtual server) and WebSockets traffic should be directed to ws.site.com (resolving to the other public IP), with two virtual servers each listening on the appropriate IP address.

Sometimes this is not possible: the WebSockets code is hardwired to the main www domain, or a second public IP address cannot be obtained. In that case, all traffic can be directed to the WebSockets virtual server, and HTTP traffic can then be demultiplexed and forwarded internally to an HTTP virtual server. The following TrafficScript code, attached to the WebSockets virtual server, detects whether the request is a plain HTTP request (rather than a WebSockets one) and hands it off internally to an HTTP virtual server by way of a special 'Loopback Pool':

import libWebSockets.rts as ws;

if( !ws.isWS() ) pool.use( "Loopback Pool" );

Notes: testing WebSockets

The implementation described in this article was developed using the following browser-based client, load-balancing traffic to public 'echo' servers (ws://echo.websocket.org/, wss://echo.websocket.org, ws://ajf.me:8080/): testclient.html

At the time of testing:
echo.websocket.org did not respond to ping tests, so the default ping health monitor needed to be removed
Chrome 24 refused to connect to SSL-enabled wss resources unless they had a trusted certificate, and did not warn otherwise

If you find this solution useful, please let us know in the comments below.
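As a rough illustration of what an isWS()-style check has to do, here is a standalone Python sketch that classifies a raw request as a WebSocket opening handshake or plain HTTP by looking for the Upgrade and Connection headers defined in RFC 6455. This is a simplification (a full check would also verify Sec-WebSocket-Key and the HTTP version) and is not the library's actual implementation.

```python
def is_websocket_handshake(request_lines):
    """Return True if the request looks like an RFC 6455 client handshake:
    a GET request carrying 'Upgrade: websocket' and a Connection header
    that includes the 'Upgrade' token."""
    if not request_lines or not request_lines[0].startswith("GET "):
        return False
    headers = {}
    for line in request_lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip().lower()
    return (headers.get("upgrade") == "websocket"
            and "upgrade" in headers.get("connection", ""))
```

A demultiplexing rule would route requests failing this test to the HTTP virtual server, exactly as the Loopback Pool snippet above does.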
Top Deployment Guides

The following is a list of tested and validated deployment guides for common enterprise applications. Ask your sales team for the latest information.

Microsoft
Virtual Traffic Manager and Microsoft Lync 2013
Virtual Traffic Manager and Microsoft Lync 2010
Virtual Traffic Manager and Microsoft Skype for Business
Virtual Traffic Manager and Microsoft Exchange 2010
Virtual Traffic Manager and Microsoft Exchange 2013
Virtual Traffic Manager and Microsoft Exchange 2016
Virtual Traffic Manager and Microsoft SharePoint 2013
Virtual Traffic Manager and Microsoft SharePoint 2010
Virtual Traffic Manager and Microsoft Outlook Web Access
Virtual Traffic Manager and Microsoft Intelligent Application Gateway
Virtual Traffic Manager and Microsoft IIS

Oracle
Virtual Traffic Manager and Oracle EBS 12.1
Virtual Traffic Manager and Oracle Enterprise Manager 12c
Virtual Traffic Manager and Oracle Application Server 10G
Virtual Traffic Manager and Oracle WebLogic Applications (e.g. PeopleSoft and Blackboard)
Virtual Traffic Manager and Glassfish Application Server

VMware
Virtual Traffic Manager and VMware Horizon View Servers
Virtual Traffic Manager Plugin for VMware vRealize Orchestrator

Other Applications
Virtual Traffic Manager and SAP NetWeaver
Virtual Traffic Manager and Magento
This guide will walk you through deploying Global Server Load Balancing on Traffic Manager using the Global Load Balancing (GLB) feature. In this guide, we will be using the "company.com" domain.

DNS primer and concept of operations

This document is designed to be used in conjunction with the Traffic Manager User Guide. Specifically, this guide assumes that the reader:
is familiar with load balancing concepts;
has configured local load balancing for the resources requiring Global Load Balancing on their existing Traffic Managers; and
has read the "Global Load Balancing" section of the Traffic Manager User Guide, in particular the "DNS Primer" and "About Global Server Load Balancing" sections.

Prerequisites:
You have a DNS sub-domain to use for GLB. In this example we will be using "glb.company.com", a sub-domain of "company.com";
You have access to create A records in the glb.company.com (or equivalent) domain; and
You have access to create CNAME records in the company.com (or equivalent) domain.

Design

Our goal in this exercise is to configure GLB to send users to their geographically closest data centre, as pictured in the design diagram. Traffic Manager will present a DNS virtual server in each data centre. This DNS virtual server will take DNS requests for resources in the "glb.company.com" domain from external DNS servers, forward the requests to an internal DNS server, and intelligently filter the records based on the GLB load balancing logic.

In this design, we will use the zone "glb.company.com". The zone will have NS records set to the two Traffic IP addresses presented by vTM for DNS load balancing in each data centre (172.16.10.101 and 172.16.20.101). This set-up is done in the "company.com" domain zone. You will need to set this up yourself, or ask your DNS administrator to do it.
DNS zone file overview

On the DNS server that hosts the "glb.company.com" zone file, we will create two Address (A) records, one for each Web virtual server that the vTMs are hosting in their respective data centres.

Step 0: DNS zone file set-up

Before we can set up GLB on Traffic Manager, we need to set up our DNS zone files so that we can intelligently filter the results.

Create the GLB zone: in our example, we will configure the "glb.company.com" zone to have two NameServer (NS) records. Each NS record is pointed at the Traffic IP address of the DNS virtual server as it is configured on vTM. See the Design section above for details of the IP addresses used in this sample setup. You will need an A record for each data centre resource you want Traffic Manager to GLB. In this example, we will have two A records for the DNS host "www.glb.company.com". On ISC BIND name servers, the zone file will look something like this:

;
; BIND data file for glb.company.com
;
$TTL 604800
@ IN SOA stm1.glb.company.com. info.glb.company.com. (
        201303211322 ; Serial
        7200         ; Refresh
        120          ; Retry
        2419200      ; Expire
        604800 )     ; Default TTL
@ IN NS stm1.glb.company.com.
@ IN NS stm2.glb.company.com.
;
stm1 IN A 172.16.10.101
stm2 IN A 172.16.20.101
;
www IN A 172.16.10.100
www IN A 172.16.20.100

Pre-deployment testing

Using DNS tools such as dig or nslookup (do not use ping as a DNS testing tool), make sure that you can query your "glb.company.com" zone and get both A records returned. This means the DNS zone file is ready for your GLB logic. In the following example, we use dig on a Linux client to query the name servers that the vTM load balances directly, to check that we are served back two A records for "www.glb.company.com".
We have added comments to the output below, marked with <--(i)--| :

user@localhost$ dig @172.16.10.40 www.glb.company.com A

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.40 www.glb.company.com A
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19013
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com. IN A

;; ANSWER SECTION:
www.glb.company.com. 604800 IN A 172.16.20.100 <--(i)--| HERE ARE THE A RECORDS WE ARE TESTING
www.glb.company.com. 604800 IN A 172.16.10.100 <--(i)--|

;; AUTHORITY SECTION:
glb.company.com. 604800 IN NS stm1.glb.company.com.
glb.company.com. 604800 IN NS stm2.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com. 604800 IN A 172.16.10.101
stm2.glb.company.com. 604800 IN A 172.16.20.101

;; Query time: 0 msec
;; SERVER: 172.16.10.40#53(172.16.10.40)
;; WHEN: Wed Mar 20 16:39:52 2013
;; MSG SIZE rcvd: 139

Step 1: GLB locations

GLB uses locations to help Traffic Manager understand where things are located. First we need to create a GLB location for every data centre you need to provide GLB between. In our example, we will use two locations, named DataCentre-1 and DataCentre-2:

Navigate to "Catalogs > Locations > GLB Locations > Create new Location"
Create a GLB location called DataCentre-1
Select the appropriate geographic location from the options provided
Click Update Location
Repeat this process for DataCentre-2 and any other locations you need to set up.

Step 2: Set up the GLB service

First we create a GLB service so that vTM knows how to distribute traffic using the GLB system:
Navigate to "Catalogs > GLB Services > Create a new GLB service"
Create your GLB service.
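If you script this pre-deployment test, you can extract the A records from dig output programmatically rather than reading them by eye. The following standalone Python sketch parses the ANSWER SECTION of dig output like that shown above; the function name is invented and the parsing is deliberately minimal.

```python
def answer_a_records(dig_output):
    """Collect the IPs of A records from the ANSWER SECTION of `dig` output."""
    records, in_answer = [], False
    for line in dig_output.splitlines():
        line = line.strip()
        if line.startswith(";; ANSWER SECTION"):
            in_answer = True
            continue
        if in_answer:
            if not line or line.startswith(";;"):
                break  # blank line or next section ends the answers
            fields = line.split()
            # A record lines look like: name TTL IN A address
            if len(fields) >= 5 and fields[2] == "IN" and fields[3] == "A":
                records.append(fields[4])
    return records
```

A pre-deployment check could then simply assert that exactly two A records come back for www.glb.company.com.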
In this example we will create a GLB service with the following settings; you should use settings to match your environment:

Service Name: GLB_glb.company.com
Domains: *.glb.company.com
Add Locations: select "DataCentre-1" and "DataCentre-2"

Then we enable the GLB service:
Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Basic Settings"
Set "Enabled" to "Yes"

Next we tell the GLB service which resources are in which location:
Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Locations and Monitoring"
Add the IP addresses of the resources you will be doing GSLB between into the relevant location. In this example they are allocated as follows:
DataCentre-1: 172.16.10.100
DataCentre-2: 172.16.20.100
Don't worry about the "Monitors" section just yet; we will come back to it.

Next we configure the GLB load balancing mechanism:
Navigate to "GLB Services > GLB_glb.company.com > Load Balancing"
By default the load balancing algorithm is set to "Adaptive" with a "Geo Effect" of 50%. For this set-up, set the algorithm to "Round Robin" while we are testing.

The last step is to bind the GLB service "GLB_glb.company.com" to our DNS virtual server:
Navigate to "Services > Virtual Servers > vs_GLB_DNS > GLB Services > Add new GLB Service"
Select "GLB_glb.company.com" from the list and click "Add Service"
Step 3: Testing Round Robin

Now that we have GLB applied to the "glb.company.com" zone, we can test GLB in action. Using DNS tools such as dig or nslookup (again, do not use ping as a DNS testing tool), query your Traffic Manager DNS virtual servers and see what happens to requests for "www.glb.company.com". The following is test output from the Linux dig command; we have added comments marked with <--(i)--| :

user@localhost $ dig @172.16.10.101 www.glb.company.com

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.101 www.glb.company.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17761
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com. IN A

;; ANSWER SECTION:
www.glb.company.com. 60 IN A 172.16.20.100 <--(i)--| DataCentre-2 response

;; AUTHORITY SECTION:
glb.company.com. 604800 IN NS stm1.glb.company.com.
glb.company.com. 604800 IN NS stm2.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com. 604800 IN A 172.16.10.101
stm2.glb.company.com. 604800 IN A 172.16.20.101

;; Query time: 1 msec
;; SERVER: 172.16.10.101#53(172.16.10.101)
;; WHEN: Thu Mar 21 13:32:27 2013
;; MSG SIZE rcvd: 123

user@localhost $ dig @172.16.10.101 www.glb.company.com

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.101 www.glb.company.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9098
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com. IN A

;; ANSWER SECTION:
www.glb.company.com. 60 IN A 172.16.10.100 <--(i)--| DataCentre-1 response

;; AUTHORITY SECTION:
glb.company.com. 604800 IN NS stm2.glb.company.com.
glb.company.com. 604800 IN NS stm1.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com. 604800 IN A 172.16.10.101
stm2.glb.company.com. 604800 IN A 172.16.20.101

;; Query time: 8 msec
;; SERVER: 172.16.10.101#53(172.16.10.101)
;; WHEN: Thu Mar 21 13:32:27 2013
;; MSG SIZE rcvd: 123

Step 4: GLB health monitors

Now that we have GLB running in Round Robin mode, the next step is to set up HTTP health monitors so that GLB knows whether the application in each data centre is available before we send customers to that data centre:

Navigate to "Catalogs > Monitors > Monitors Catalog > Create new monitor"
Fill out the form with the following values:
Name: GLB_mon_www_AU
Type: HTTP monitor
Scope: GLB/Pool
IP or Hostname to monitor: 172.16.10.100:80
Repeat for the other data centre:
Name: GLB_mon_www_US
Type: HTTP monitor
Scope: GLB/Pool
IP or Hostname to monitor: 172.16.20.100:80

Then attach the monitors:
Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Locations and Monitoring"
In DataCentre-1, in the field labelled "Add new monitor to the list", select "GLB_mon_www_AU" and click Update.
In DataCentre-2, in the field labelled "Add new monitor to the list", select "GLB_mon_www_US" and click Update.

Step 5: Activate your preferred GLB load balancing logic

Now that you have GLB set up and can detect application failures in each data centre, you can turn on the GLB load balancing algorithm that is right for your application. You can choose between:
Load
Geo
Round Robin
Adaptive
Weighted Random
Active-Passive
The online help has a good description of each of these load balancing methods. Take care to read it and select the one most appropriate for your business requirements and environment.

Step 6: Test everything

Once you have your GLB up and running, it is important to test it for all the failure scenarios you want it to cover. Remember: failover that has not been tested is not failover...
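Conceptually, the monitor-driven behaviour configured in Step 4 amounts to filtering the zone's A records down to the data centres whose monitors report healthy. Here is a minimal standalone Python sketch of that idea, with illustrative names; this is not how vTM implements GLB internally.

```python
def serveable_records(locations, health):
    """Return only the A-record IPs whose data centre's monitor is healthy,
    mimicking how the GLB service filters DNS answers on monitor state."""
    return [ip for dc, ip in locations.items() if health.get(dc, False)]

# The two A records from the example zone, keyed by GLB location
locations = {
    "DataCentre-1": "172.16.10.100",
    "DataCentre-2": "172.16.20.100",
}
```

With both monitors passing, both addresses remain candidates for the load balancing algorithm; if DataCentre-1's monitor fails, only DataCentre-2's address would be served.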
The following test matrix covers the essentials:

Test # | Condition | Failure detected by / logic implemented by | GLB responded as designed
1 | All pool members in DataCentre-1 unavailable | GLB health monitor | Yes / No
2 | All pool members in DataCentre-2 unavailable | GLB health monitor | Yes / No
3 | Failure of STM1 | GLB health monitor on STM2 | Yes / No
4 | Failure of STM2 | GLB health monitor on STM1 | Yes / No
5 | Customers are sent to the geographically correct data centre | GLB load balancing mechanism | Yes / No

Notes on testing GLB: the reason we instruct you to use dig or nslookup for testing your DNS, rather than a tool such as ping that performs its own DNS resolution, is that dig and nslookup bypass your local host's DNS cache. Cached DNS records would prevent you from seeing changes in the status of your GLB while the cache entries are valid.

The final step: create your CNAME

Now that you have a working GLB entry for "www.glb.company.com", all that is left to do is to create or change the record for the real site "www.company.com" to be a CNAME for "www.glb.company.com":

;
; BIND data file for company.com
;
$TTL 604800
@ IN SOA ns1.company.com. info.company.com. (
        201303211312 ; Serial
        7200         ; Refresh
        120          ; Retry
        2419200      ; Expire
        604800 )     ; Default TTL
;
@ IN NS ns1.company.com.
; Here is our CNAME
www IN CNAME www.glb.company.com.
This document provides step-by-step instructions for setting up Brocade Virtual Traffic Manager for Microsoft Exchange 2013.
This short article explains how you can match the IP addresses of remote clients against a DNS blacklist. In this example, we'll use the Spamhaus XBL blacklist service (http://www.spamhaus.org/xbl/).

This article was updated following discussion and feedback from Ulrich Babiak - thanks!

Basic principles

The basic principle of a DNS-based blacklist such as Spamhaus' is as follows:
Perform a reverse DNS lookup of the IP address in question, using xbl.spamhaus.org rather than the traditional in-addr.arpa domain
Entries that are not in the blacklist don't return a response (NXDOMAIN); entries that are in the blacklist return a particular IP/domain response indicating their status

Important note: some public DNS servers don't respond to spamhaus.org lookups (see http://www.spamhaus.org/faq/section/DNSBL%20Usage#261). Ensure that Traffic Manager is configured to use a working DNS server.

Simple implementation

A simple implementation is as follows:

$ip = request.getRemoteIP();

# Reverse the IP, and append ".xbl.spamhaus.org"
$bytes = string.dottedToBytes( $ip );
$bytes = string.reverse( $bytes );
$query = string.bytesToDotted( $bytes ) . ".xbl.spamhaus.org";

if( $res = net.dns.resolveHost( $query ) ) {
   log.warn( "Connection from IP " . $ip . " should be blocked - status: " . $res );
   # Refer to the return codes at http://www.spamhaus.org/zen/
}

This implementation issues a DNS request on every request, but Traffic Manager caches DNS responses internally, so there is little risk of overloading the target DNS server with duplicate requests. You may wish to increase the dns!negative_expiry setting in the Global configuration, because DNS lookups against non-blacklisted IP addresses will 'fail'.
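The reverse-and-append step in the rule above is simple enough to sketch outside TrafficScript. Assuming a dotted-quad IPv4 address, the equivalent Python looks like this (the function name is invented for illustration):

```python
def dnsbl_query(ip, zone="xbl.spamhaus.org"):
    """Reverse the octets of a dotted-quad IPv4 address and append
    the DNSBL zone, producing the name to look up."""
    return ".".join(reversed(ip.split("."))) + "." + zone
```

For the article's example 'bad guy' 201.116.241.246, this produces the query name 246.241.116.201.xbl.spamhaus.org; a resolver response for that name indicates the address is listed.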
A more sophisticated implementation might interpret the response codes and decide to block requests from open proxies (the Spamhaus XBL list) while ignoring requests from known spam sources.

What if my DNS server is slow, or fails? What if I want to use a different resolver for the blacklist lookups?

One undesired consequence of this configuration is that it makes the DNS server a single point of failure and a performance bottleneck. Each unrecognised (or expired) IP address must be matched against the DNS server, and the connection is blocked while this happens. In normal usage, a single delay of 100 ms or so on the very first request is acceptable, but a DNS failure (Stingray times out after 12 seconds by default) or slowdown is more serious. In addition, Traffic Manager uses a single system-wide resolver for all DNS operations. If you are hosting a local cache of the blacklist, you'd want to separate DNS traffic accordingly.

Use Traffic Manager to manage the DNS traffic?

A potential solution is to configure Traffic Manager to use itself (127.0.0.1) as a DNS resolver, and create a virtual server/pool listening on UDP:53. All locally-generated DNS requests would be delivered to that virtual server, which would then forward them to the real DNS server. The virtual server could inspect the DNS traffic and route blacklist lookups to the local cache, and other requests to a real DNS server.

You could then use a health monitor (such as the included dns.pl) to check the operation of the real DNS server and mark it as down if it has failed or timed out after a short period. In that event, the virtual server can determine that the pool is down ( pool.activenodes() == 0 ) and respond directly to the DNS request using a response generated as described in "HowTo: Respond directly to DNS requests using libDNS.rts".
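The routing decision that such a demultiplexing virtual server would make can be sketched as a pure function: blacklist lookups go to the local cache, everything else to the real resolver. This is a standalone Python illustration of the idea only; the resolver addresses below are placeholders, not recommendations.

```python
def pick_resolver(qname,
                  blacklist_zone="xbl.spamhaus.org",
                  cache="127.0.0.2",      # placeholder: local blacklist cache
                  upstream="192.0.2.53"): # placeholder: real DNS server
    """Route a DNS query name to the local blacklist cache if it falls
    under the blacklist zone, otherwise to the upstream resolver."""
    return cache if qname.rstrip(".").endswith(blacklist_zone) else upstream
```

In the Traffic Manager design described above, this split would be performed by inspecting the DNS question inside the UDP:53 virtual server rather than by a standalone function.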
Re-implement the resolver

An alternative is to re-implement the TrafficScript resolver using Matthew Geldert's libDNS.rts library ("Interrogating and managing DNS traffic in Traffic Manager") to construct the queries and analyse the responses. You can then use the TrafficScript function tcp.send() to submit your DNS lookups to the local cache (unfortunately, there is no udp.send() function yet!):

sub resolveHost( $host, $resolver ) {
   import libDNS.rts as dns;

   $packet = dns.newDnsObject();
   $packet = dns.setQuestion( $packet, $host, "A", "IN" );
   $data = dns.convertObjectToRawData( $packet, "tcp" );

   $sock = tcp.connect( $resolver, 53, 1000 );
   tcp.write( $sock, $data, 1000 );
   $rdata = tcp.read( $sock, 1024, 1000 );
   tcp.close( $sock );

   $resp = dns.convertRawDatatoObject( $rdata, "tcp" );

   if( $resp["answercount"] >= 1 ) return $resp["answer"][0]["host"];
}

Note that we apply 1000 ms timeouts to each network operation.

Let's try this, and compare the responses from OpenDNS and from Google's DNS servers. Our 'bad guy' is 201.116.241.246, so we're going to resolve 246.241.116.201.xbl.spamhaus.org:

$badguy = "246.241.116.201.xbl.spamhaus.org";

$text .= "Trying OpenDNS...\n";
$host = resolveHost( $badguy, "208.67.222.222" );
if( $host ) {
   $text .= $badguy . " resolved to " . $host . "\n";
} else {
   $text .= $badguy . " did not resolve\n";
}

$text .= "Trying Google...\n";
$host = resolveHost( $badguy, "8.8.8.8" );
if( $host ) {
   $text .= $badguy . " resolved to " . $host . "\n";
} else {
   $text .= $badguy . " did not resolve\n";
}

http.sendResponse( 200, "text/plain", $text, "" );

(This is just a snippet - remember to paste the resolveHost() implementation, and anything else you need, in here.)

This illustrates that OpenDNS resolves the spamhaus.org domain fine, and Google does not issue a response.

Caching the responses

This approach has one disadvantage: because it does not use Traffic Manager's resolver, it does not cache the responses, so you'll hit the resolver on every request unless you cache the responses yourself. Here's a function that calls the resolveHost() function above and caches the result locally for 3600 seconds. It returns 'B' for a bad guy and 'G' for a good guy:

sub getStatus( $ip, $resolver ) {
   $key = "xbl-spamhaus-org-" . $resolver . "-" . $ip;  # Any key prefix will do

   $cache = data.get( $key );
   if( $cache ) {
      $status = string.left( $cache, 1 );
      $expiry = string.skip( $cache, 1 );

      if( $expiry < sys.time() ) {
         data.remove( $key );
         $status = "";
      }
   }

   if( !$status ) {
      # We don't have a (valid) entry in our cache, so look the IP up

      # Reverse the IP, and append ".xbl.spamhaus.org"
      $bytes = string.dottedToBytes( $ip );
      $bytes = string.reverse( $bytes );
      $query = string.bytesToDotted( $bytes ) . ".xbl.spamhaus.org";

      $host = resolveHost( $query, $resolver );

      if( $host ) {
         $status = "B";
      } else {
         $status = "G";
      }
      data.set( $key, $status . (sys.time() + 3600) );
   }
   return $status;
}
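The cache-with-expiry pattern in getStatus() is quite general. Here is a small standalone Python equivalent, using an injectable clock so the expiry behaviour can be exercised without waiting an hour; the class is illustrative and not part of any Traffic Manager API.

```python
import time

class TTLCache:
    """Tiny expiry cache mirroring the data.get/data.set pattern in the rule:
    entries are stored with an expiry time and dropped once it passes."""

    def __init__(self, ttl=3600, clock=time.time):
        self.ttl = ttl
        self.clock = clock        # injectable for testing
        self.store = {}

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if expiry < self.clock():
            del self.store[key]   # expired: behave like data.remove()
            return None
        return value

    def set(self, key, value):
        self.store[key] = (value, self.clock() + self.ttl)
```

A blacklist checker would call get() first, and only fall through to the (slow) DNS lookup and a subsequent set() on a miss, exactly as the TrafficScript version does.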
Top examples of Pulse vADC in action

Examples of how SteelApp can be deployed to address a range of application delivery challenges.

Modifying Content

- Simple web page changes - updating a copyright date
- Adding meta-tags to a website with Traffic Manager
- Tracking user activity with Google Analytics and Google Analytics revisited
- Embedding RSS data into web content using Traffic Manager
- Add a Countdown Timer
- Using TrafficScript to add a Twitter feed to your web site
- Embedded Twitter Timeline
- Embedded Google Maps
- Watermarking PDF documents with Traffic Manager and Java Extensions
- Watermarking Images with Traffic Manager and Java Extensions
- Watermarking web content with Pulse vADC and TrafficScript

Prioritizing Traffic

- Evaluating and Prioritizing Traffic with Traffic Manager
- HowTo: Control Bandwidth Management
- Detecting and Managing Abusive Referers
- Using Pulse vADC to Catch Spiders
- Dynamic rate shaping slow applications
- Stop hot-linking and bandwidth theft!
- Slowing down busy users - driving the REST API from TrafficScript

Performance Optimization

- Cache your website - just for one second?
- HowTo: Monitor the response time of slow services
- HowTo: Use low-bandwidth content during periods of high load

Fixing Application Problems

- No more 404 Not Found...?
- Hiding Application Errors
- Sending custom error pages

Compliance Problems

- Satisfying EU cookie regulations using The cookiesDirective.js and TrafficScript

Security Problems

- The "Contact Us" attack against mail servers
- Protecting against Java and PHP floating point bugs
- Managing DDoS attacks with Traffic Manager
- Enhanced anti-DDoS using TrafficScript, Event Handlers and iptables
- How to stop 'login abuse', using TrafficScript
- Bind9 Exploit in the Wild...
- Protecting against the range header denial-of-service in Apache HTTPD
- Checking IP addresses against a DNS blacklist with Traffic Manager
- Heartbleed: Using TrafficScript to detect TLS heartbeat records
- TrafficScript rule to protect against "Shellshock" bash vulnerability (CVE-2014-6271)
- SAML 2.0 Protocol Validation with TrafficScript
- Disabling SSL v3.0 for SteelApp

Infrastructure

- Transparent Load Balancing with Traffic Manager
- HowTo: Launch a website at 5am
- Using Stingray Traffic Manager as a Forward Proxy
- Tunnelling multiple protocols through the same port
- AutoScaling Docker applications with Traffic Manager
- Elastic Application Delivery - Demo
- How to deploy Traffic Manager Cluster in AWS VPC

Other Solutions

- Building a load-balancing MySQL proxy with TrafficScript
- Serving Web Content from Traffic Manager using Python
- Serving Web Content from Traffic Manager using Java
- Virtual Hosting FTP services
- Managing WebSockets traffic with Traffic Manager
- TrafficScript can Tweet Too
- Instrument web content with Traffic Manager
- Antivirus Protection for Web Applications
- Generating Mandelbrot sets using TrafficScript
- Content Optimization across Equatorial Boundaries
When you need to scale out your MySQL database, replication is a good way to proceed. Database writes (UPDATEs) go to a 'master' server and are replicated across a set of 'slave' servers. Reads (SELECTs) are load-balanced across the slaves.   Overview   MySQL's replication documentation describes how to configure replication:   MySQL Replication   A quick solution...   If you can modify your MySQL client application to direct 'Write' (i.e. 'UPDATE') connections to one IP address/port and 'Read' (i.e. 'SELECT') connections to another, then this problem is trivial to solve. This generally needs a code update (Using Replication for Scale-Out).   You will need to direct the 'Update' connections to the master database (or through a dedicated Traffic Manager virtual server), and direct the 'Read' connections to a Traffic Manager virtual server (in 'generic server first' mode) and load-balance the connections across the pool of MySQL slave servers using the 'least connections' load-balancing method: Routing connections from the application   However, in most cases, you probably don't have that degree of control over how your client application issues MySQL connections; all connections are directed to a single IP:port. A load balancer will need to discriminate between different connection types and route them accordingly.   Routing MySQL traffic   A MySQL database connection is authenticated by a username and password. In most database designs, multiple users with different access rights are used; less privileged user accounts can only read data (issuing 'SELECT' statements), and more privileged users can also perform updates (issuing 'UPDATE' statements). A well architected application with sound security boundaries will take advantage of these multiple user accounts, using the account with least privilege to perform each operation. This reduces the opportunities for attacks like SQL injection to subvert database transactions and perform undesired updates.   
This article describes how to use Traffic Manager to inspect and manage MySQL connections, routing connections authenticated with privileged users to the master database and load-balancing other connections to the slaves:

Load-balancing MySQL connections

Designing a MySQL proxy

Stingray Traffic Manager functions as an application-level (layer-7) proxy. Most protocols are relatively easy for layer-7 proxies like Traffic Manager to inspect and load-balance, and work 'out of the box' or with relatively little configuration. For more information, refer to the article Server First, Client First and Generic Streaming Protocols.

Proxying MySQL connections

MySQL is much more complicated to proxy and load-balance. When a MySQL client connects, the server immediately responds with a randomly generated challenge string (the 'salt'). The client then authenticates itself by responding with the username for the connection and a copy of the 'salt' encrypted using the corresponding password:

Connect and Authenticate in MySQL

If the proxy is to route and load-balance based on the username in the connection, it needs to correctly authenticate the client connection first. When it finally connects to the chosen MySQL server, it will then have to re-authenticate the connection with the back-end server using a different salt.

Implementing a MySQL proxy in TrafficScript

In this example, we're going to proxy MySQL connections from two users - 'mysqlmaster' and 'mysqlslave' - directing connections to the 'SQL Master' and 'SQL Slaves' pools as appropriate. The proxy is implemented using two TrafficScript rules ('mysql-request' and 'mysql-response') on a 'server-first' Virtual Server listening on port 3306 for MySQL client connections.
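The challenge-response exchange described above (MySQL 4.1+ 'native' password authentication) can be sketched in Python to show why the proxy only ever needs the stored double-SHA1 hash from the mysql.user table, never the plaintext password. This is an illustrative sketch, not part of the Traffic Manager configuration:

```python
import hashlib
import os

def sha1(b: bytes) -> bytes:
    return hashlib.sha1(b).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def client_scramble(password: str, salt: bytes) -> bytes:
    """What the client sends: SHA1(pw) XOR SHA1(salt + SHA1(SHA1(pw)))."""
    stage1 = sha1(password.encode())
    stage2 = sha1(stage1)              # this double-SHA1 is what mysql.user stores
    return xor(stage1, sha1(salt + stage2))

def proxy_verify(scramble: bytes, salt: bytes, stage2: bytes) -> bool:
    """What the proxy does: recover stage1 from the scramble, then
    check that SHA1(stage1) matches the stored stage2 hash."""
    stage1 = xor(scramble, sha1(salt + stage2))
    return sha1(stage1) == stage2

salt = os.urandom(20)
stage2 = sha1(sha1(b"secret"))         # as read from the mysql.user table
print(proxy_verify(client_scramble("secret", salt), salt, stage2))  # → True
```

Recovering stage1 this way is also what lets the proxy re-encrypt the password against the back-end server's different salt later on.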
Together, the rules implement a simple state machine that mediates between the client and server:

Implementing a MySQL proxy in TrafficScript

The state machine authenticates and inspects the client connection before deciding which pool to direct the connection to. The rule needs to know the encrypted password and desired pool for each user. The virtual server should be configured to send traffic to the built-in 'discard' pool by default.

The request rule:

Configure the following request rule on a 'server first' virtual server. Edit the values at the top to reflect the encrypted passwords (copied from the MySQL users table) and desired pools:

sub encpassword( $user ) {
   # From the mysql users table - double-SHA1 of the password
   # Do not include the leading '*' in the long 40-byte encoded password
   if( $user == "mysqlmaster" ) return "B17453F89631AE57EFC1B401AD1C7A59EFD547E5";
   if( $user == "mysqlslave" ) return "14521EA7B4C66AE94E6CFF753453F89631AE57EF";
}

sub pool( $user ) {
   if( $user == "mysqlmaster" ) return "SQL Master";
   if( $user == "mysqlslave" ) return "SQL Slaves";
}

$state = connection.data.get( "state" );

if( !$state ) {
   # First time in; we've just received a fresh connection
   $salt1 = randomBytes( 8 );
   $salt2 = randomBytes( 12 );
   connection.data.set( "salt", $salt1.$salt2 );

   $server_hs = "\0\0\0\0" .            # length - fill in below
      "\012" .                          # protocol version
      "Stingray Proxy v0.9\0" .         # server version
      "\01\0\0\0" .                     # thread 1
      $salt1."\0" .                     # salt(1)
      "\054\242" .                      # capabilities
      "\010\02\0" .                     # lang and status
      "\0\0\0\0\0\0\0\0\0\0\0\0\0" .    # unused
      $salt2."\0";                      # salt(2)

   $l = string.length( $server_hs )-4;  # will be <= 255
   $server_hs = string.replaceBytes( $server_hs, string.intToBytes( $l, 1 ), 0 );
   connection.data.set( "state", "wait for clienths" );
   request.sendResponse( $server_hs );
   break;
}

if( $state == "wait for clienths" ) {
   # We've received the client handshake.
   $chs = request.get( 1 );
   $chs_len = string.bytesToInt( $chs );
   $chs = request.get( $chs_len + 4 );

   # user starts at byte 36; password follows after
   $i = string.find( $chs, "\0", 36 );
   $user = string.subString( $chs, 36, $i-1 );
   $encpasswd = string.subString( $chs, $i+2, $i+21 );

   $passwd2 = string.hexDecode( encpassword( $user ) );
   $salt = connection.data.get( "salt" );
   $passwd1 = string_xor( $encpasswd, string.hashSHA1( $salt.$passwd2 ) );

   if( string.hashSHA1( $passwd1 ) != $passwd2 ) {
      log.warn( "User '" . $user . "': authentication failure" );
      connection.data.set( "state", "authentication failed" );
      connection.discard();
   }

   connection.data.set( "user", $user );
   connection.data.set( "passwd1", $passwd1 );
   connection.data.set( "clienths", $chs );
   connection.data.set( "state", "wait for serverhs" );
   request.set( "" );

   # Select pool based on user
   pool.select( pool( $user ) );
   break;
}

if( $state == "wait for client data" ) {
   # Write the client handshake we remembered from earlier to the server,
   # and piggyback the request we've just received on the end
   $req = request.get();
   $chs = connection.data.get( "clienths" );
   $passwd1 = connection.data.get( "passwd1" );
   $salt = connection.data.get( "salt" );
   $encpasswd = string_xor( $passwd1, string.hashSHA1( $salt . string.hashSHA1( $passwd1 ) ) );
   $i = string.find( $chs, "\0", 36 );
   $chs = string.replaceBytes( $chs, $encpasswd, $i+2 );
   connection.data.set( "state", "do authentication" );
   request.set( $chs.$req );
   break;
}

# Helper function
sub string_xor( $a, $b ) {
   $r = "";
   while( string.length( $a ) ) {
      $a1 = string.left( $a, 1 ); $a = string.skip( $a, 1 );
      $b1 = string.left( $b, 1 ); $b = string.skip( $b, 1 );
      $r = $r . chr( ord( $a1 ) ^ ord( $b1 ) );
   }
   return $r;
}

The response rule

Configure the following as a response rule, set to run every time, for the MySQL virtual server.
$state = connection.data.get( "state" );
$authok = "\07\0\0\2\0\0\0\02\0\0\0";

if( $state == "wait for serverhs" ) {
   # Read server handshake, remember the salt
   $shs = response.get( 1 );
   $shs_len = string.bytesToInt( $shs )+4;
   $shs = response.get( $shs_len );
   $salt1 = string.substring( $shs, $shs_len-40, $shs_len-33 );
   $salt2 = string.substring( $shs, $shs_len-13, $shs_len-2 );
   connection.data.set( "salt", $salt1.$salt2 );

   # Write an authentication confirmation now to provoke the client
   # to send us more data (the first query). This will prepare the
   # state machine to write the authentication to the server
   connection.data.set( "state", "wait for client data" );
   response.set( $authok );
   break;
}

if( $state == "do authentication" ) {
   # We're expecting two responses.
   # The first is the authentication confirmation which we discard.
   $res = response.get();
   $res1 = string.left( $res, 11 );
   $res2 = string.skip( $res, 11 );

   if( $res1 != $authok ) {
      $user = connection.data.get( "user" );
      log.info( "Unexpected authentication failure for " . $user );
      connection.discard();
   }

   connection.data.set( "state", "complete" );
   response.set( $res2 );
   break;
}

Testing your configuration

If you have several MySQL databases to test against, testing this configuration is straightforward. Edit the request rule to add the correct passwords and pools, and use the mysql command-line client to make connections:

$ mysql -h zeus -u username -p
Enter password: *******

Check the 'current connections' list in the Traffic Manager UI to see how it has connected each session to a back-end database server.

If you encounter problems, try the following steps:

Ensure that trafficscript!variable_pool_use is set to 'Yes' in the Global Settings page on the UI. This setting allows you to use non-literal values in the pool.use() and pool.select() TrafficScript functions.
Turn on the log!client_connection_failures and log!server_connection_failures settings in the Virtual Server > Connection Management configuration page; these settings will configure the traffic manager to write detailed debug messages to the Event Log whenever a connection fails. Then review your Traffic Manager Event Log and your MySQL logs in the event of an error.

Traffic Manager's access logging can be used to record every connection. You can use the special *{name}d log macro to record information stored using connection.data.set(), such as the username used in each connection.

Conclusion

This article has demonstrated how to build a fairly sophisticated protocol parser where the Traffic Manager-based proxy performs full authentication and inspection before making a load-balancing decision. The proxy then performs the authentication again against the chosen back-end server. Once the client-side and server-side handshakes are complete, Traffic Manager simply forwards data back and forth between the client and the server.

This example addresses the problem of scaling out your MySQL database, giving load-balancing and redundancy for database reads ('SELECTs'). It does not address the problem of scaling out your master 'write' server - you need to address that by investing in a sufficiently powerful server, architecting your database and application to minimise the number and impact of write operations, or by selecting a full clustering solution.

The solution leaves a single point of failure, in the form of the master database. This problem could be effectively dealt with by creating a monitor that tests the master database for correct operation. If it detects a failure, the monitor could promote one of the slave databases to master status and reconfigure the 'SQL Master' pool to direct write (UPDATE) traffic to the new MySQL master server.
Acknowledgements   Ian Redfern's MySQL protocol description was invaluable in developing the proxy code.     Appendix - Password Problems? This example assumes that you are using MySQL 4.1.x or later (it was tested with MySQL 5 clients and servers), and that your database has passwords in the 'long' 41-byte MySQL 4.1 (and later) format (see http://dev.mysql.com/doc/refman/5.0/en/password-hashing.html)   If you upgrade a pre-4.1 MySQL database to 4.1 or later, your passwords will remain in the pre-4.1 'short' format.   You can verify what password format your MySQL database is using as follows:   mysql> select password from mysql.user where user='username'; +------------------+ | password         | +------------------+ | 6a4ba5f42d7d4f51 | +------------------+ 1 rows in set (0.00 sec)   mysql> update mysql.user set password=PASSWORD('password') where user='username'; Query OK, 1 rows affected (0.00 sec) Rows matched: 1  Changed: 1  Warnings: 0   mysql> select password from mysql.user where user='username'; +-------------------------------------------+ | password                                  | +-------------------------------------------+ | *14521EA7B4C66AE94E6CFF753453F89631AE57EF | +-------------------------------------------+ 1 rows in set (0.00 sec)   If you can't create 'long' passwords, your database may be stuck in 'short' password mode. Run the following command to resize the password table if necessary:   $ mysql_fix_privilege_tables --password=admin password   Check that 'old_passwords' is not set to '1' (see here) in your my.cnf configuration file.   Check that the mysqld process isn't running with the --old-passwords option.   Finally, ensure that the privileges you have configured apply to connections from the Stingray proxy. You may need to GRANT... TO 'user'@'%' for example.
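For reference, the 'long' password format discussed in this appendix is simply '*' followed by the uppercase hex of SHA1(SHA1(password)). A quick illustrative sketch (Python; the function name is ours, and the authoritative definition is MySQL's password-hashing documentation linked above):

```python
import hashlib

def mysql_password_hash(password: str) -> str:
    """MySQL 4.1+ PASSWORD(): '*' + uppercase hex of SHA1(SHA1(password))."""
    stage1 = hashlib.sha1(password.encode()).digest()   # SHA1 of the plaintext
    stage2 = hashlib.sha1(stage1).hexdigest().upper()   # SHA1 of that digest, hex
    return "*" + stage2

# 41-character string beginning with '*', matching the 'long' format above:
print(mysql_password_hash("password"))
```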
We spend a great deal of time focusing on how to speed up customers' web services. We constantly research new techniques to load balance traffic, optimise network connections and improve the performance of overloaded application servers. The techniques and options available from us (and yes, from our competitors too!) may seem bewildering at times. So I would like to spend a short time singing the praises of one specific feature, which I can confidently say will improve your website's performance above all others - caching your website.   "But my website is uncacheable! It's full of dynamic, changing pages. Caching is useless to me!"   We'll answer that objection soon, but first, it is worth a quick explanation of the two main styles of caching:     Client-side caching   Most people's experience of a web cache is on their web browser. Internet Explorer or Firefox will store copies of web pages on your hard drive, so if you visit a site again, it can load the content from disk instead of over the Internet.   There's another layer of caching going on though. Your ISP may also be doing some caching. The ISP wants to save money on their bandwidth, and so puts a big web cache in front of everyone's Internet access. The cache keeps copies of the most-visited web pages, storing bits of many different websites. A popular and widely used open-source web cache is Squid.   However, not all web pages are cacheable near the client. Websites have dynamic content, so for example any web page containing personalized or changing information will not be stored in your ISP's cache. Generally the cache will fill up with "static" content such as images, movies, etc. These get stored for hours or days. For your ISP, this is great, as these big files take up the most of their precious bandwidth.   For someone running their own website, the browser caching or ISP caching does not do much. 
They might save a little bandwidth from the ISP image caching if they have lots of visitors from the same ISP, but the bulk of the website, including most of the content generated by their application servers, will not be cached and their servers will still have lots of work to do.   Server-side caching (with Traffic Manager)   Here, the main aim is not to save bandwidth, but to accelerate your website. The traffic manager sits in your datacenter (or at your cloud provider), in front of your web and application servers. Access to your website is through the Traffic Manager software, so it sees both the requests and responses. Traffic Manager can then start to answer these requests itself, delivering cached responses. Your servers then have less work to do. Less work = faster responses = fewer servers needed = saves money!   "But I told you - my website isn't cacheable!"   There's a reason why your website is marked uncacheable. Remember the ISP caches...? They mustn't store your changing, constantly updating web pages. To enforce this, application servers send back instructions with every web page, the Cache-Control HTTP header, saying "Don't cache this". Traffic Manager obeys these cache instructions too, because it's well-behaved.   But think - how often does your website really change? Take a very busy site, for example a popular news site. Its front page may be labelled as uncacheable so that visitors always see the latest news, since it changes as new stories are added. But new additions aren't happening every second of the day. What if the page was marked as cacheable - for just one second? Visitors would still see the most up-to-date news, but the load on the site's servers would plummet. Even if the website had as few as ten views in a second, this simple change would reduce the load on the app servers ten-fold.   This isn't an isolated example - there are plenty of others: think Twitter searches, auction listings, "live" graphing, and so on. 
All such content can be cached briefly without any noticeable change to the "liveness" of the site. Traffic Manager can deliver a cached version of your web page much faster than your application servers - not just because it is highly optimized, but because sending a cached copy of a page is so much less work than generating it from scratch.

So if this simple cache change is so great, why don't people use this technique more - surely app servers can mark their web pages as cacheable for one or two seconds without Traffic Manager's help, and those browser/ISP caches can then do the magic? Well, the browser caches aren't going to be any use - an individual isn't going to be viewing the same page on your website multiple times a second (and if they keep hitting the reload button, their page requests are not cacheable anyway). So how about those big ISP caches? Unfortunately, they aren't always clever enough either. Some see a web page marked as cacheable for a short time and will either:

- not cache it at all (it's going to expire soon, so what's the point in keeping it?), or
- cache it for much longer (if it is cacheable for 3 seconds, why not cache it for 300?).

Also, by leaving the caching to the client side, the cache hit rate gets worse. A user in France isn't going to be able to make use of a cached copy of your site stored in a US ISP's cache, for instance.

If you use Traffic Manager to do the caching, these issues can be solved. First, the cache is held in one place - your datacenter - so it is available to all visitors. Second, Traffic Manager can tweak the cache instructions for the page, so it caches the page while forcing other people not to. Here is what's going on:

1. A request arrives at Traffic Manager, which sends it on to your application server.
2. The app server sends the web page response back to the traffic manager. The page has a Cache-Control: no-cache header, since the app server thinks the page can't be cached.
3. A TrafficScript response rule identifies the page as one that can be cached, for a short time. It changes the cache instructions to Cache-Control: max-age=3, meaning that the page can now be cached for three seconds.
4. Traffic Manager's web cache stores the page.
5. Traffic Manager sends out the response to the user (and to anyone else for the next three seconds), but changes the cache instructions to Cache-Control: no-cache, to ensure downstream caches, ISP caches and web browsers do not try to cache the page further.

Result: a much faster web site, yet it still serves dynamic and constantly updating pages to viewers. Give it a try - you will be amazed at the performance improvements possible, even when caching for just a second. Remember, almost anything can be cached if you configure your servers correctly!

How to set up Traffic Manager

On the admin server, edit the virtual server that you want to cache, and click on the "Content Caching" link. Enable the cache. There are options here for the default cache time for pages. These can be changed as desired, but are primarily for the "ordinary" content that is cacheable normally, such as images. The "webcache!control_out" setting allows you to change the Cache-Control header for your pages after they have been cached by the Traffic Manager software, so you can put "no-cache" here to stop others from caching your pages.

The "webcache!refresh_time" setting is a useful extra here. Set this to one second. This will smooth out the load on your app servers. When a cached page is about to expire (i.e. it's too old to stay cached) and a new request arrives, Traffic Manager will hand over a single request to your app servers, to see if there is a newer page available. Other requests continue to be served from the cache. This can prevent 'waves' of requests hitting your app servers when a page is about to expire from the cache.
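The two-sided header rewrite described above - cache the page internally for a few seconds, but tell downstream caches not to cache it at all - can be sketched in Python (illustrative pseudo-logic only; in Traffic Manager this is a response rule plus the webcache!control_out setting, and the path list and max-age value below are assumptions for the example):

```python
CACHEABLE_PATHS = ("/news",)   # pages we choose to cache briefly (assumed example)
MAX_AGE = 3                    # seconds a page may live in the proxy's own cache

def cache_control_for_store(path: str, upstream_header: str) -> str:
    """Header the proxy's cache acts on when storing the page:
    override the app server's no-cache for the selected pages."""
    if any(path.startswith(p) for p in CACHEABLE_PATHS):
        return "max-age=%d" % MAX_AGE
    return upstream_header           # leave everything else alone

def cache_control_for_client(path: str, upstream_header: str) -> str:
    """Header sent downstream, so browsers and ISP caches don't re-cache it."""
    if any(path.startswith(p) for p in CACHEABLE_PATHS):
        return "no-cache"
    return upstream_header

print(cache_control_for_store("/news", "no-cache"))   # → max-age=3
print(cache_control_for_client("/news", "no-cache"))  # → no-cache
```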
Now, we need to make Traffic Manager cache the specific pages of your site that the app server claims are uncacheable. We do this using the RuleBuilder system for defining rules, so click on the "Catalogs" icon and then select the "Rules" tab. Now create a new RuleBuilder rule.

This rule needs to run for the specific web pages that you wish to make cacheable for short amounts of time. For an example, we'll make "/news" cacheable. Add a condition of "HTTP:URL Path" to match "/news", then add an action to set a HTTP response header. The rule should look like this:

Finally, add this rule as a response rule to your virtual server. That's it! Your site should now start to be cached. Just a final few words of caution:

- Be selective in the pages that you mark as cacheable; remember that personalized pages (e.g. showing a username) cannot be cached, otherwise other people will see those pages too! If necessary, some page redesign might be called for to split the content into "generic" and "user-specific" iframes or AJAX requests.
- Server-side caching saves you CPU time, not bandwidth. If your website is slow because you are hitting your site throughput limits, then other techniques are needed.
With more services being delivered through a browser, it's safe to say web applications are here to stay. The rapid growth of web-enabled applications and an increasing number of client devices mean that organizations are dealing with more document transfer methods than ever before. Providing easy access to these applications (web mail, intranet portals, document storage, etc.) can expose vulnerable points in the network.

When it comes to security and protection, application owners typically cover the common threats and vulnerabilities. What is often overlooked is one of the first things we learned about on the internet: virus protection. Some application owners consider the response "We have virus scanners running on the servers" sufficient. These same owners implement security plans that involve extending protection as far out as possible, yet surprisingly allow a virus to travel several layers deep into the architecture before it is caught.

Pulse vADC can extend protection for your applications with unmatched software flexibility and scale. Utilize existing investments by installing Pulse vADC on your infrastructure (Linux, Solaris, VMware, Hyper-V, etc.) and integrating with existing antivirus scanners. Deploy Pulse vADC (available with many providers: Amazon, Azure, CoSentry, Datapipe, Firehost, GoGrid, Joyent, Layered Tech, Liquidweb, Logicworks, Rackspace, Sungard, Xerox, and many others) and externally proxy your applications to remove threats before they are inside your infrastructure. Additionally, when serving as a forward proxy for clients, Pulse vADC can be used to mitigate virus propagation by scanning outbound content.

The Pulse Web Application Firewall ICAP Client Handler provides the ability to integrate with an ICAP server. ICAP (Internet Content Adaptation Protocol) is a protocol aimed at providing simple object-based content vectoring for HTTP services. The Web Application Firewall acts as an ICAP client and passes requests to a specified ICAP server.
This enables you to integrate with third-party products based on the ICAP protocol. In particular, you can use the ICAP Client Handler as a virus scanner interface for scanning uploads to your web application.

Example Deployment

This deployment uses version 9.7 of the Pulse Traffic Manager with the open source applications ClamAV and c-icap installed locally. If utilizing a cluster of Traffic Managers, this deployment should be performed on all nodes of the cluster. Additionally, Traffic Manager could be utilized as an ADC to extend availability and performance across multiple external ICAP application servers. I would also like to credit Thomas Masso, Jim Young, and Brian Gautreau - thank you for your assistance!

"ClamAV is an open source (GPL) antivirus engine designed for detecting Trojans, viruses, malware and other malicious threats." - http://www.clamav.net/

"c-icap is an implementation of an ICAP server. It can be used with HTTP proxies that support the ICAP protocol to implement content adaptation and filtering services." - The c-icap project

Installation of ClamAV, c-icap, and libc-icap-mod-clamav

For this example, public repositories are used to install the packages on version 9.7 of the Traffic Manager virtual appliance with the default configuration. To install in a different manner or operating system, consult the ClamAV and c-icap documentation.

Run the following command (copy and paste) to back up the sources.list file:

cp /etc/apt/sources.list /etc/apt/sources.list.rvbdbackup

Run the following commands to update the sources.list file. *Tested with Traffic Manager virtual appliance version 9.7. For other Ubuntu releases, replace 'precise' with the current version installed. Run "lsb_release -sc" to find out your release.
cat <<EOF >> /etc/apt/sources.list
deb http://ch.archive.ubuntu.com/ubuntu/ precise main restricted
deb-src http://ch.archive.ubuntu.com/ubuntu/ precise main restricted
deb http://us.archive.ubuntu.com/ubuntu/ precise universe
deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe
deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
EOF

Run the following command to retrieve the updated package lists:

apt-get update

Run the following command to install ClamAV, c-icap, and libc-icap-mod-clamav:

apt-get install clamav c-icap libc-icap-mod-clamav

Run the following command to restore your sources.list:

cp /etc/apt/sources.list.rvbdbackup /etc/apt/sources.list

Configure the c-icap ClamAV service

Run the following commands to add lines to /etc/c-icap/c-icap.conf:

cat <<EOF >> /etc/c-icap/c-icap.conf
Service clamav srv_clamav.so
ServiceAlias avscan srv_clamav?allow204=on&sizelimit=off&mode=simple
srv_clamav.ScanFileTypes DATA EXECUTABLE ARCHIVE GIF JPEG MSOFFICE
srv_clamav.MaxObjectSize 100M
EOF

*Consult the ClamAV and c-icap documentation and customize the configuration and settings for ClamAV and c-icap (i.e. definition updates, ScanFileTypes, restricting c-icap access, etc.) for your deployment.

Just for fun, run the following command to manually update the ClamAV database:

/usr/bin/freshclam

Configure the ICAP Server to Start

This process can be completed a few different ways; for this example we are going to use the Event Alerting functionality of Traffic Manager to start the c-icap server when the Web Application Firewall is started.

Save the following bash script (for this example, start_icap.sh) on your computer:

#!/bin/bash
/usr/bin/c-icap
#END

Upload the script via the Traffic Manager UI under Catalogs > Extra Files > Action Programs.
(See Figure 1.)

Create a new event type (for this example named "Firewall Started") under System > Alerting > Manage Event Types. Select "appfirewallcontrolstarted: Application firewall started" and click Update to save. (See Figure 2.)

Create a new action (for this example named "Start ICAP") under System > Alerting > Manage Actions. Select the "Program" radio button and click "Add Action" to save. (See Figure 3.)

Configure the "Start ICAP" Action Program to use the "start_icap.sh" script; for this example we will adjust the timeout setting to 300. Click Update to save. (See Figure 4.)

Configure the Alert Mapping under System > Alerting to use the event type and action previously created. Click Update to save your changes. (See Figure 5.)

Restart the Application Firewall or reboot to automatically start the c-icap server. Alternatively, you can run the /usr/bin/c-icap command from the console or select "Update and Test" under the "Start ICAP" alert configuration page of the UI to manually start c-icap.

Configure the Web Application Firewall

Within the Web Application Firewall UI, add and configure the ICAPClientHandler using the following attributes and values:

icap_server_location - 127.0.0.1
icap_server_resource - /avscan

Testing Notes

Check the WAF application logs. Use Full logging for the Application configuration and enable_logging for the ICAPClientHandler. As with any system, use full logging with caution - the logs could fill fast! Check the c-icap logs (cat /var/log/c-icap/access.log and server.log). Note: changing the /etc/c-icap/c-icap.conf "DebugLevel" value to 9 is useful for testing and recording to /var/log/c-icap/server.log. *You may want to change this back to 1 when you are done testing.
The Action Settings page in the Traffic Manager UI (for this example Alerting > Actions > Start ICAP) also provides an "Update and Test" option that allows you to trigger the action and start the c-icap server. Enable verbose logging for the "Start ICAP" action in the Traffic Manager for more information from the event mechanism. *You may want to disable this setting again when you are done testing.

Additional Information

Pulse Secure Virtual Traffic Manager
Pulse Secure Virtual Web Application Firewall
Product Documentation
RFC 3507 - Internet Content Adaptation Protocol (ICAP)
The c-icap project
Clam AntiVirus
Distributed denial of service (DDoS) attacks are the worst nightmare of every web presence. Common wisdom has it that there is nothing you can do to protect yourself when a DDoS attack hits you. Nothing? Well, unless you have Stingray Traffic Manager. In this article we'll describe how Stingray helped a customer keep their site available to legitimate users when they came under massive attack from the "dark side".

What is a DDoS attack?

DDoS attacks have risen to considerable prominence even in mainstream media recently, especially after the BBC published a report on how botnets can be used to send SPAM or take web sites down, and another story detailing that even computers of UK government agencies are taken over by botnets.

A botnet is an interconnected group of computers, often home PCs running MS Windows, normally used by their legitimate owners but actually under the control of cyber-criminals who can send commands to programs running on those computers. The fact that their machines are controlled by somebody else is due to operating system or application vulnerabilities and is often unknown to the unassuming owners. When such a botnet member goes online, the malicious program starts to receive its orders. One such order can be to send SPAM emails, another to flood a certain web site with requests, and so on.

There are quite a few scary reports about online extortions in which web-site owners are forced to pay money to avoid disruption to their services.

Why are DDoS attacks so hard to defend against?

The reason DDoS attacks are so hard to counter is that they are using the very service a web site is providing and wants to provide: its content should be available to everybody. An individual PC connected to the internet via DSL usually cannot take down a server, because servers tend to have much more computing power and more networking bandwidth.
By distributing the requests to as many different clients as possible, the attacker solves three problems in one go:

They get more bandwidth to hammer the server.
The victim cannot thwart the attack by blocking individual IP addresses: that will only reduce the load by a negligible fraction. Also, clever DDoS attackers gradually change the clients sending the requests. It's impossible to keep up with this by manually adapting the configuration of the service.
It's much harder to identify that a client is part of the attack, because each individual client may be sending only a few requests per second.

How to Protect against DDoS Attacks?

There is an article on how to ban IP addresses of individual attackers here: Dynamic Defense Against Network Attacks. The mechanism described there involves a Java Extension that modifies Stingray Traffic Manager's configuration via a SOAP call to add an offending IP address to the list of banned IPs in a Service Protection Class. In principle, this could be used to block DDoS attacks as well. In reality it can't, because SOAP is a rather heavyweight protocol that involves far too much overhead to be able to run hundreds of times per second. (Stingray's Java support is highly optimized and can handle tens of thousands of requests per second.)

The performance of the Java/SOAP combination can be improved by leveraging the fact that all SOAP calls in the Stingray API are array-based, so a list of IP addresses can be gathered in TrafficScript and added to Stingray's configuration in one call. But still, the blocking of unwanted requests would happen too late: at the application level rather than at the OS (or, preferably, the network) level. The attacker could still inflict the cost of accepting a connection, passing it up to Stingray Traffic Manager, checking the IP address inside Stingray Traffic Manager, and so on. It's much better to find a way to block the majority of connections before they reach Stingray Traffic Manager.
Introducing iptables

Linux offers an extensive framework for controlling network connections called iptables. One of its features is that it allows an administrator to block connections based on many different properties and conditions. We are going to use it in a very simple way: to ignore connection initiations based on the IP address of their origin. iptables can handle thousands of such conditions, but of course this has an impact on CPU utilization. However, the impact is still much lower than having to accept the connection and cope with it at the application layer.

iptables checks its rules against each and every packet that enters the system (potentially also packets that are forwarded by and created in the system, but we are not going to use that aspect here). What we want to impede are new connections from IP addresses that we know are part of the DDoS attack. No expensive processing should be done on packets belonging to connections that have already been established and on which data is being exchanged. Therefore, the first rule to add is to let through all TCP packets that do not establish a new connection, i.e. that do not have the SYN flag set:

# iptables -I INPUT -p tcp \! --syn -j ACCEPT

Once an IP address has been identified as 'bad', it can be blocked with the following command:

# iptables -A INPUT -s [ip_address] -j DROP

Using Stingray Traffic Manager and TrafficScript to detect and block the attack

The rule that protects the service from the attack consists of two parts: identifying the offending requests and blocking their origin IPs.

Identifying the Bad Guys: The Attack Signature

A gift shopping site that uses Stingray Traffic Manager to manage the traffic to their service recently noticed a surge of requests to their home page that threatened to take the web site down.
They contacted us, and upon investigation of the request logs it became apparent that there were many requests with unconventional 'User-Agent' HTTP headers. A quick web search revealed that this was indicative of an automated distributed attack.

The first thing for the rule to do is therefore to look up the value of the User-Agent header in a list of agents that are known to be part of the attack:

sub isAttack()
{
   $ua = http.getHeader( "User-Agent" );

   if( $ua == "" || $ua == " " ) {
      #log.info( "Bad Agent [null] from " . request.getRemoteIP() );
      counter.increment( 1, 1 );
      return 1;
   } else {
      $agentmd5 = resource.getMD5( "bad-agents.txt" );
      if( $agentmd5 != data.get( "agentmd5" ) ) {
         reloadBadAgentList( $agentmd5 );
      }
      if( data.get( "BAD" . $ua ) ) {
         #log.info( "Bad agent " . $ua . " from " . request.getRemoteIP() );
         counter.increment( 2, 1 );
         return 1;
      }
   }
   return 0;
}

The rule fetches the relevant header from the HTTP request and makes a quick check whether the client sent an empty User-Agent or just a whitespace. If so, a counter is incremented that can be used in the UI to track how many such requests were found, and then 1 is returned, indicating that this is indeed an unwanted request.

If a non-trivial User-Agent has been sent with the request, the list is queried. If the user-agent string has been marked as 'bad', another counter is incremented and again 1 is returned to the calling function. The techniques used here are similar to those in the more detailed HowTo: Store tables of data in TrafficScript article; when needed, the resource file is parsed and an entry in the system-wide data hash-table is created for each black-listed user agent.
This is accomplished by the following sub-routine:

sub reloadBadAgentList( $newmd5 )
{
   # do this first to minimize race conditions:
   data.set( "agentmd5", $newmd5 );
   $badagents = resource.get( "bad-agents.txt" );
   $i = 0;
   data.reset( "BAD" ); # clear old entries
   while( ( $j = string.find( $badagents, "\n", $i ) ) != -1 ) {
      $line = string.substring( $badagents, $i, $j-1 );
      $i = $j+1;
      $entry = "BAD" . string.trim( $line );
      log.info( "Adding bad UA '" . $entry . "'" );
      data.set( $entry, 1 );
   }
}

Most of the time, however, it won't be necessary to read from the file system because the list of 'bad' agents does not change often (if ever for a given botnet attack). You can download the file with the black-listed agents here, and there is even a web-page dedicated to user-agents, good and bad.

Configuring iptables from TrafficScript

Now that TrafficScript 'knows' that it is dealing with a request whose IP has to be blocked, this address must be added to the iptables 'INPUT' chain with target 'DROP'. The most lightweight way to get this information from inside Stingray Traffic Manager to somewhere else is to use the HTTP client functionality in TrafficScript provided by the function http.request.get(). Since many such 'evil' IP addresses are expected per second, it is a good idea to buffer up a certain number of IPs before making an HTTP request (the first of which will have some overhead due to TCP's three-way handshake, but of course much less than forking a new process; subsequent requests will be made over the kept-alive connection).

Here is the rule that accomplishes the task:

if( isAttack() ) {
   $ip = request.getRemoteIP();
   $iplist = data.get( "badiplist" );
   if( string.count( $iplist, "/" ) + 1 >= 10 ) {
      data.remove( "badiplist" );
      $url = "http://127.0.0.1:44252" . $iplist .
"/" . $ip ;         http.request.get( $url , "" , 5);      } else {         data.set( "badiplist" , $iplist . "/" . $ip );      }      connection. sleep ( $sleep );      connection.discard();  }    A simple 'Web Server' that Adds Rules for iptables   Now who is going to handle all those funny HTTP GET requests? We need a simple web-server that reads the URL, splits it up into the IPs to be blocked and adds them to iptables (unless it is already being blocked). On startup this process checks which addresses are already in the black-list to make sure they are not added again (which would be a waste of resources), makes sure that a fast path is taken for packets that do not correspond to new connections and then listens for requests on a configurable port (in the rule above we used port 44252).   This daemon doesn't fork one iptables process per IP address to block. Instead, it uses the 'batch-mode' of the iptables framework, iptables-restore. With this tool, you compile a list of rules and send all of them down to the kernel with a single commit command.   A lot of details (like IPv6 support, throttling etc) have been left out because they are not specific to the problem at hand, but can be studied by downloading the Perl code (attached) of the program.   To start this server you have to be root and invoke the following command:   # iptablesrd.pl   Preventing too many requests with Stingray Traffic Manager's Rate Shaping   As it turned out when dealing with the DDoS attack that plagued our client, the bottleneck in the whole process described up until now was the addition of rules to iptables. This is not surprising as the kernel has to lock some of its internal structures before each such manipulation. On a moderately-sized workstation, for example, a few hundred transactions can be committed per second when starting from an empty rule set. 
Once there are, say, 10,000 IP addresses in the list, adding more becomes slower and slower, down to a few dozen per second at best. If we keep sending requests to the 'iptablesrd' web server at a high rate, it won't be able to keep up with them. Basically, we have to take into account that this is the place where processing is channeled from a massively parallel, highly scalable process (Stingray) into the sequential, one-at-a-time mechanism that is needed to keep the iptables configuration consistent across CPUs.

Queuing up all these requests is pointless, as it will only eat resources on the server. It is much better to let Stingray Traffic Manager sleep on the connection for a short time (to slow down the attacker) and then close it. If the IP address continues to be part of the botnet, the next request will come soon enough and we can try and handle it then.

Luckily, Stingray comes with rate-shaping functionality that can be used in TrafficScript. Setting up a 'Rate' class in the 'Catalog' tab looks like this:

The Rate Class can now be used in the rule to restrict the number of HTTP requests Stingray makes per second:

if( rate.getBackLog( "DDoS Protect" ) < 1 ) {
   $url = "http://localhost:44252" . $iplist . "/" . $ip;
   rate.use( "DDoS Protect" );
   # our 'webserver' never sends a response
   http.request.get( $url, "", 5 );
}

Note that we simply don't do anything if the rate class already has a backlog, i.e. there are outstanding requests to block IPs. If there is no request queued up, we impose the rate limitation on the current connection and then send out the request.
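The batching logic used by the rule ('collect a batch of offending IPs, then flush them as a single URL path') can be sketched outside TrafficScript. The following Python is purely illustrative — the class and method names are hypothetical and not part of any Stingray API:

```python
# Sketch of the rule's batching behaviour: accumulate bad IPs and flush
# them as one "/ip1/ip2/..." request path once the batch is full.
# Names and structure are illustrative; the real logic runs in TrafficScript.

IPS_PER_REQUEST = 10          # mirrors $ips_per_httprequest in the rule

class BadIpBatcher:
    def __init__(self, port=44252):
        self.port = port
        self.buffer = []       # stands in for the "badiplist" data entry
        self.flushed_urls = [] # URLs that would be fetched with http.request.get()

    def report(self, ip):
        """Record one offending IP; flush a full batch as a single URL."""
        self.buffer.append(ip)
        if len(self.buffer) >= IPS_PER_REQUEST:
            path = "/" + "/".join(self.buffer)
            self.flushed_urls.append("http://127.0.0.1:%d%s" % (self.port, path))
            self.buffer = []

b = BadIpBatcher()
for i in range(25):
    b.report("10.0.0.%d" % i)
# 25 reports with a batch size of 10 -> two flushed URLs, five IPs still buffered
```

The point of the batch is the same as in the rule: one kept-alive HTTP request carries ten addresses, so the per-IP overhead of notifying the blocking daemon stays small.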
The Complete Rule

To wrap this section up, here is the rule in full:

$sleep = 300; # in milliseconds
$maxbacklog = 1;
$ips_per_httprequest = 10;
$http_timeout = 5; # in seconds
$port = 44252; # keep in sync with argument to iptablesrd.pl

if( isAttack() ) {
   $ip = request.getRemoteIP();
   $iplist = data.get( "badiplist" );
   if( string.count( $iplist, "/" ) + 1 >= $ips_per_httprequest ) {
      data.remove( "badiplist" );
      if( rate.getBackLog( "ddos_protect" ) < $maxbacklog ) {
         $url = "http://127.0.0.1:" . $port . $iplist . "/" . $ip;
         rate.use( "ddos_protect" );
         # our 'webserver' never sends a response
         http.request.get( $url, "", $http_timeout );
      }
   } else {
      data.set( "badiplist", $iplist . "/" . $ip );
   }
   connection.sleep( $sleep );
   connection.discard();
}

$rawurl = http.getRawURL();
if( $rawurl == "/" ) {
   counter.increment( 3, 1 );
   # Small delay - shouldn't annoy real users, will at least slow down attackers
   connection.sleep( 100 );
   # Attackers will probably ignore the redirection. Real browsers will come back
   http.redirect( "/temp-redirection" );
}

# Re-write the URL before passing it on to the web servers
if( $rawurl == "/temp-redirection" ) {
   http.setPath( "/" );
}

sub isAttack()
{
   $ua = http.getHeader( "User-Agent" );

   if( $ua == "" || $ua == " " ) {
      counter.increment( 1, 1 );
      return 1;
   } else {
      $agentmd5 = resource.getMD5( "bad-agents.txt" );
      if( $agentmd5 != data.get( "agentmd5" ) ) {
         reloadBadAgentList( $agentmd5 );
      }
      if( data.get( "BAD" .
$ua ) ) {
         counter.increment( 2, 1 );
         return 1;
      }
   }
   return 0;
}

sub reloadBadAgentList( $newmd5 )
{
   # do this first to minimize race conditions:
   data.set( "agentmd5", $newmd5 );
   $badagents = resource.get( "bad-agents.txt" );
   $i = 0;
   data.reset( "BAD" );
   while( ( $j = string.find( $badagents, "\n", $i ) ) != -1 ) {
      $line = string.substring( $badagents, $i, $j-1 );
      $i = $j+1;
      $entry = "BAD" . string.trim( $line );
      data.set( $entry, 1 );
   }
}

Note that there are a few tunables at the beginning of the rule. Also, since in the particular case of the gift shopping site all attack requests went to the home page ("/"), a small slowdown and subsequent redirect was added for that page.

Further Advice

The method described here can help mitigate the server-side effects of DDoS attacks. It is important, however, to adapt it to the particular nature of each attack and to the system Stingray Traffic Manager is running on. The most obvious adjustment is to change the isAttack() sub-routine to reliably detect attacks without blocking legitimate requests.

Beyond that, a careful eye has to be kept on the system to make sure Stingray strikes the right balance between adding bad IPs (which is expensive but keeps further requests from that IP out) and throwing away connections the attackers have managed to establish (which is cheap but won't block future connections from the same source). After a while, the rules for iptables will block all members of the botnet. However, botnets are dynamic; they change over time: new nodes are added while others drop out.

A useful improvement to the iptablesrd.pl process described above would therefore be to speculatively remove blocks if they were added a long time ago and/or if the number of entries crosses a certain threshold (which will depend on the hardware available).
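That speculative-unblock improvement could be structured along these lines. A minimal Python sketch (the thresholds and names are illustrative assumptions, not taken from iptablesrd.pl) of a blocklist that expires entries by age and caps its size:

```python
import time

# Sketch of a self-pruning blocklist: entries older than MAX_AGE seconds
# are dropped, and if the table grows past MAX_ENTRIES the oldest entries
# are evicted first. Both thresholds are illustrative, not from the article.

MAX_AGE = 3600        # seconds an IP stays blocked
MAX_ENTRIES = 10000   # cap to keep the iptables rule set manageable

class ExpiringBlocklist:
    def __init__(self, now=time.time):
        self.now = now          # injectable clock, handy for testing
        self.blocked = {}       # ip -> time it was added

    def block(self, ip):
        self.blocked[ip] = self.now()

    def prune(self):
        cutoff = self.now() - MAX_AGE
        self.blocked = {ip: t for ip, t in self.blocked.items() if t >= cutoff}
        if len(self.blocked) > MAX_ENTRIES:
            # evict the oldest entries first
            keep = sorted(self.blocked.items(), key=lambda kv: kv[1])[-MAX_ENTRIES:]
            self.blocked = dict(keep)

    def is_blocked(self, ip):
        return ip in self.blocked

# Demo with a fake clock so the expiry is visible without waiting:
clock = [0]
bl = ExpiringBlocklist(now=lambda: clock[0])
bl.block("192.0.2.1")
clock[0] = 4000               # more than MAX_AGE seconds later
bl.block("192.0.2.2")
bl.prune()                    # 192.0.2.1 ages out, 192.0.2.2 remains
```

In the real daemon, each prune() pass would be followed by regenerating the rule set via iptables-restore, so the kernel table shrinks along with the in-memory one.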
Most DDoS attacks are short-lived, however, so it may suffice to just wait until it's over.   The further upstream in the network the attack can be blocked, the better. With the current approach, blocking occurs at the machine Stingray Traffic Manager is running on. If the upstream router can be remote-controlled (e.g. via SNMP), it would be preferable to do the blocking there. The web server we are using in this article can easily be adapted to such a scenario.   A word of warning and caution: The method presented here is no panacea that can protect against arbitrary attacks. A massive DDoS attack can, for instance, saturate the bandwidth of a server with a flood of SYN packets and such an attack can only be handled further upstream in the network. But Stingray Traffic Manager can certainly be used to scale down the damage inflicted upon a web presence and take a great deal of load from the back-end servers.   Footnote   The image at the top of the article is a graphical representation of the distribution of nodes on the internet produced by the opte project. It is protected by the Creative Commons License.
What is Direct Server Return?

Layer 2/3 Direct Server Return (DSR), also referred to as 'triangulation', is a network routing technique used in some load balancing situations:

Incoming traffic from the client is received by the load balancer and forwarded to a back-end node
Outgoing (return) traffic from the back-end node is sent directly to the client and bypasses the load balancer completely

Incoming traffic (blue) is routed through the load balancer, and return traffic (red) bypasses the load balancer

Direct Server Return is fundamentally different from the normal load balancing mode of operation, where the load balancer observes and manages both inbound and outbound traffic. In contrast, there are two other common load balancing modes of operation:

NAT (Network Address Translation): Layer 4 load balancers and simple layer 7 application delivery controllers use NAT to rewrite the destination value of individual network packets. Network connections are load-balanced by the choice of destination value. They often use a technique called 'delayed binding' to delay and inspect a new network connection before sending the packets to a back-end node; this allows them to perform content-based routing. NAT-based load balancers can switch TCP streams, but have limited capabilities to inspect and rewrite network traffic.

Proxy: Modern general-purpose load balancers like Stingray Traffic Manager operate as full proxies. The proxy mode of operation is the most compute-intensive, but current general-purpose hardware is more than powerful enough to manage traffic at multi-gigabit speeds. Whereas NAT-based load balancers manage traffic on a packet-by-packet basis, proxy-based load balancers can read entire requests and responses. They can manage and manipulate the traffic based on a full understanding of the transaction between the client and the application server.
Note that some load balancers can operate in a dual-mode fashion - a service can be handled either in a NAT-like fashion or in a Proxy-like fashion. This introduces a tradeoff between hardware performance and software sophistication - see SOL4707 - Choosing appropriate profiles for HTTP traffic for an example. Stingray Traffic Manager functions only in a Proxy-like fashion.

This article describes how the benefits of direct server return can be applied to a layer 7 traffic management device such as Stingray Traffic Manager.

Why use Direct Server Return?

Layer 2/3 Direct Server Return was very popular from 1995 to about 2000 because the load balancers of the time were seriously limited in performance and compute power; DSR uses less compute resource than a full NAT or Proxy load balancer. DSR is no longer necessary for high-performance services, as modern load balancers on modern hardware can easily handle multiple gigabits of traffic without requiring DSR.

DSR is still an appealing option for organizations who serve large media files, or who have very large volumes of traffic. Stingray Traffic Manager does not support a traditional DSR mode of operation, but it is straightforward to manage traffic to obtain a similar layer 7 DSR effect.

Disadvantages of Layer 2/3 Direct Server Return

There are a number of distinct limitations and disadvantages with DSR:

1. The load balancer does not observe the response traffic

The load balancer has no way of knowing if a back-end server has responded correctly to the remote client. The server may have failed, or it may have returned a server error message. An external monitoring service is necessary to verify the health and correct operation of each back-end server.

2. Proper load balancing is not possible

The load balancer has no idea of service response times, so it is difficult for it to perform effective, performance-sensitive load balancing.

3.
Session persistence is severely limited

Because the load balancer only observes the initial 'SYN' packet before it makes a load balancing decision, it can only perform session persistence based on the source IP address and port of the packet, i.e. the IP address of the remote client. The load balancer cannot perform cookie-based session persistence, SSL session ID persistence, or any of the many other session persistence methods offered by other load balancers.

4. Content-based routing is not possible

Again, because the load balancer does not observe the initial request, it cannot perform content-based routing.

5. Limited traffic management and reporting

The load balancer cannot manage traffic by performing operations like SSL decryption, content compression, security checking, SYN cookies, bandwidth management, etc. It cannot retry failed requests, or perform any traffic rewriting. The load balancer cannot report on traffic statistics such as bandwidth sent.

6. DSR can only be used within a datacenter

There is no way to perform DSR between datacenters (other than proprietary tunnelling, which may be limited by ISP egress filtering).

In addition, many of the advanced capabilities of an application delivery controller that depend on inspection and modification (security, acceleration, caching, compression, scrubbing etc.) cannot be deployed when a DSR mode is in use.

Performance of Direct Server Return

The performance benefits of DSR are often assumed to be greater than they really are. Central to this doubt is the observation that client applications will send TCP 'ACK' packets via the load balancer in response to the data they receive from the server, and the volume of the ACK packets can overwhelm the load balancer. Although ACK packets are small, in many cases the rated capacities of network hardware assume that all packets are the size of the maximum MTU (typically 1500 bytes).
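That MTU-based rating translates directly into a packet budget. A quick back-of-envelope calculation (assuming full 1500-byte frames and ignoring link-layer framing overhead):

```python
# Rated capacity is usually quoted assuming maximum-MTU packets, so
# the implied packet budget of a link is simply bits/sec over bits/packet.
link_bps = 100_000_000    # 100 Mbit/s link
mtu_bytes = 1500          # typical Ethernet MTU

packets_per_second = link_bps / (mtu_bytes * 8)
print(int(packets_per_second))   # ~8333 full-size packets per second
```

This is where the "a little over 8,000 packets per second" figure for a 100 Mbit device comes from: if the hardware is only rated for that many full-size frames, a comparable flood of small ACKs can still exhaust it.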
A load balancer running on a 100 Mbit network could receive a little over 8,000 ACK packets per second. On a low-latency network, ACK packets are relatively infrequent (1 ACK packet for every 4 data packets), but for large downloads over a high-latency network (8 hops) the number of ACK packets closely approaches 1:1 as the server and client attempt to optimize the TCP session. Therefore, over high-latency networks, a DSR-equipped load balancer will receive a similar volume of ACK packets to the volume of outgoing data packets (and the difference in size between the ACK and data packets has little effect on packet-based load balancers).

Stingray alternatives to Layer 2/3 DSR

There are two alternatives to direct server return:

Use Stingray Traffic Manager in its usual full proxy mode

Stingray Traffic Manager is comfortably able to manage many Gbits of traffic in its normal 'proxy' mode on appropriate hardware, and can be scaled horizontally for increased capacity. In benchmarks, modern Intel and AMD-based systems can achieve multiple 10's of Gbits of fully load-balanced traffic, and up to twice as much when serving content from Stingray Traffic Manager's content cache.

Redirect requests to the chosen origin server (a.k.a. Layer 7 DSR)

For the most common protocols (HTTP and RTSP), it is possible to handle them in 'proxy' mode, and then redirect the client to the chosen server node once the load balancing and session persistence decision has been made.
For the large file download, the client communicates directly with the server node, bypassing Stingray Traffic Manager completely:

Client issues HTTP or RTSP request to Stingray Traffic Manager
Stingray Traffic Manager issues 'probe' request via pool to back-end server
Stingray Traffic Manager verifies that the back-end server returns a correct response
Stingray Traffic Manager sends a 302 redirect to the client, telling it to retry the request against the chosen back-end server

Requests for small objects (blue) are proxied directly to the origin. Requests for large objects (red) elicit a lightweight probe to locate the resource, and then the client is instructed (green) to retrieve the resource directly from the origin.

This technique would generally be used selectively. Small file downloads (web pages, images, etc.) would be managed through the Stingray Traffic Manager. Only large files - embedded media for example - would be handled in this redirect mode. For this reason, the HTTP session will always run through the Stingray Traffic Manager.

Layer 7 DSR with HTTP

Layer 7 DSR with HTTP is fairly straightforward. In the following example, incoming requests that begin "/media" will be converted into simple probe requests and sent to the 'Media Servers' pool. The Stingray Traffic Manager will determine which node was chosen, and send the client an explicit redirect to retrieve the requested content from the chosen node:

Request rule: Deploy the following TrafficScript request rule:

$path = http.getPath();
if( string.startsWith( $path, "/media/" ) ) {
   # Store the real path
   connection.data.set( "path", $path );
   # Convert the request to a lightweight HEAD for '/'
   http.setMethod( "HEAD" );
   http.setPath( "/" );
   pool.use( "Media Servers" );
}

Response rule: This rule reads the response from the server; load balancing and session persistence (if relevant) will ensure that we've connected with the optimal server node.
The rule only takes effect if we did the request rewrite: in that case the saved path will begin with '/media/', so we can issue the redirect.

$saved_path = connection.data.get( "path" );
if( string.startsWith( $saved_path, "/media" ) ) {
   $chosen_node = connection.getNode();
   http.redirect( "http://" . $chosen_node . $saved_path );
}

Layer 7 DSR with RTSP

An RTSP connection is a persistent TCP connection. The client and server communicate with HTTP-like requests and responses. In this example, Stingray Traffic Manager will receive initial RTSP connections from remote clients and load-balance them on to a pool of media servers. In the RTSP protocol, a media download is always preceded by a 'DESCRIBE' request from the client; Stingray Traffic Manager will replace the 'DESCRIBE' response with a 302 Redirect response that tells the client to connect directly to the back-end media server.

This code example has been tested with the QuickTime, Real and Windows media clients, and against pools of QuickTime, Helix (Real) and Windows media servers.

The details

Create a virtual server listening on port 554 (the standard port for RTSP traffic). Set the protocol type to be "RTSP".

In this example, we have three pools of media servers, and we're going to select the pool based on the User-Agent field in the RTSP request. The pools are named "Helix Servers", "QuickTime Servers" and "Windows Media Servers".

Request rule: Deploy the following TrafficScript request rule:

$client = rtsp.getRequestHeader( "User-Agent" );

# Choose the pool based on the User-Agent
if( string.contains( $client, "RealMedia" ) ) {
   pool.select( "Helix Servers" );
} else if( string.contains( $client, "QuickTime" ) ) {
   pool.select( "QuickTime Servers" );
} else if( string.contains( $client, "WMPlayer" ) ) {
   pool.select( "Windows Media Servers" );
}

This rule uses pool.select() to specify which pool to use when Stingray is ready to forward the request to a back-end server.
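The User-Agent dispatch above is a plain substring match, which is easy to sketch outside TrafficScript. The pool names come from the example; the fallback behaviour is an assumption (in practice the virtual server's default pool would handle unmatched clients):

```python
# Mirrors the substring checks the request rule performs on the RTSP
# User-Agent header. The fallback pool name is hypothetical - the rule
# itself simply makes no pool.select() call for unmatched clients.

POOL_BY_AGENT = [
    ("RealMedia", "Helix Servers"),
    ("QuickTime", "QuickTime Servers"),
    ("WMPlayer",  "Windows Media Servers"),
]

def choose_pool(user_agent, default="Default Pool"):
    for marker, pool in POOL_BY_AGENT:
        if marker in user_agent:
            return pool
    return default
```

Because the checks run in order, a User-Agent that happened to contain more than one marker would be routed by the first match, just as in the if/else chain of the rule.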
Response rule: All of the work takes place in the response rule. This rule reads the response from the server. If the request was a 'DESCRIBE' method, the rule then replaces the response with a 302 redirect, telling the client to connect directly to the chosen back-end server. Add this rule as a response rule, setting it to run every time (not once).

# Wait for a DESCRIBE response since this contains the stream
$method = rtsp.getMethod();
if( $method != "DESCRIBE" ) break;

# Get the chosen node
$node = connection.getNode();

# Instruct the client to retry directly against the chosen node
rtsp.redirect( "rtsp://" . $node . "/" . $path );

Appendix: How does DSR work?

It's useful to have an appreciation of how DSR (and Delayed Binding) functions in order to understand some of its limitations (such as content inspection).

TCP overview

A simplified overview of a TCP connection is as follows:

Connection setup

The client initiates a connection with a server by sending a 'SYN' packet. The SYN packet contains a randomly generated client sequence number (along with other data). The server replies with a 'SYN ACK' packet, acknowledging the client's SYN and sending its own randomly generated server sequence number. The client completes the TCP connection setup by sending an ACK packet to acknowledge the server's SYN.

The TCP connection setup is often referred to as a 3-way TCP handshake. Think of it as the following conversation:

Client: "Can you hear me?" (SYN)
Server: "Yes. Can you hear me?" (ACK, SYN)
Client: "Yes" (ACK)

Data transfer

Once the connection has been established by the 3-way handshake, the client and server exchange data packets with each other. Because packets may be dropped or re-ordered, each packet contains a sequence number; the sequence number is incremented for each packet sent. When a client receives intact data packets from the server, it sends back an ACK (acknowledgement) with the packet sequence number.
When a client acknowledges a sequence number, it is acknowledging that it received all packets up to that number, so ACKs may be sent less frequently than data packets. The server may send several packets in sequence before it receives an ACK (determined by the "window size"), and will resend packets if they are not ACK'd rapidly enough.

Simple NAT-based Load Balancing

There are many variants of IP and MAC rewriting used in simple NAT-based load balancing. The simplest NAT-based load balancing technique uses Destination NAT (DNAT) and works as follows:

The client initiates a connection by sending a SYN packet to the Virtual IP (VIP) that the load balancer is listening on
The load balancer makes a load balancing decision and forwards the SYN packet to the chosen node. It rewrites the destination IP address in the packet to the IP address of the node. The load balancer also remembers the load-balancing decision it made.
The node replies with a SYN/ACK. The load balancer rewrites the source IP address to be the VIP and forwards the packet on to the remote client.
As more packets flow between the client and the server, the load balancer checks its internal NAT table to determine how the IP addresses should be rewritten.

This implementation is very amenable to a hardware (ASIC) implementation. The TCP connection is load-balanced on the first SYN packet; one of the implications is that the load balancer cannot inspect the content in the TCP connection before making the routing decision.

Delayed Binding

Delayed binding is a variant of the DNAT load balancing method. It allows the load balancer to inspect a limited amount of the content before making the load balancing decision.

When the load balancer receives the initial SYN, it chooses a server sequence number and returns a SYN/ACK response
The load balancer completes the TCP handshake with the remote client and reads the initial few data packets in the client's request.
The load balancer reassembles the request, inspects it and makes the load-balancing decision.  It then makes a TCP connection to the chosen server, using DNAT (i.e., the client's source IP address), and writes the request to the server.
Once the request has been written, the load balancer must splice the client-side and server-side connections together.  It does this by using DNAT to forward packets between the two endpoints, and by rewriting the sequence numbers chosen by the server so that they match the initial sequence numbers that the load balancer used.

This implementation is still amenable to hardware (ASIC) implementation.  However, layer 4-7 tasks such as detailed content inspection and content rewriting are beyond implementation in specialized hardware alone, and are often implemented using software approaches (such as F5's FastHTTP profile), albeit with significant functional limitations.

Direct Server Return

Direct Server Return is most commonly implemented using MAC address translation (layer 2).

A MAC (Media Access Control) address is a unique, unchanging hardware address bound to a network card.  Network devices read all network packets destined for their MAC address.

Network devices use ARP (Address Resolution Protocol) to announce the MAC address that is hosting a particular IP address.  In a Direct Server Return configuration, the load balancer and the server nodes all listen on the same VIP.  However, only the load balancer makes ARP broadcasts to tell the upstream router that the VIP maps to its MAC address.

When a packet destined for the VIP arrives at the router, the router places it on the local network, addressed to the load balancer's MAC address, and the load balancer picks it up.  The load balancer then makes a load-balancing decision, choosing which node to send the packet to.  It rewrites the MAC address in the packet and puts it back on the wire.
The chosen node picks the packet up just as if it were addressed directly to it.  When the node replies, it sends its packets directly to the remote client; they are immediately picked up by the upstream router and forwarded on.  In this way, reply packets completely bypass the load balancer.

Why content inspection is not possible

Content inspection (delayed binding) is not possible with DSR, because it requires that the load balancer first complete the three-way handshake with the remote client, and possibly ACK some of the data packets.

When the load balancer then sends the first SYN to the chosen node, the node will respond with a SYN/ACK packet directly back to the remote client.  The load balancer is out of line and cannot suppress this SYN/ACK.  Additionally, the sequence number that the node selects cannot be translated to the one that the remote client is expecting.  There is no way to persuade the node to pick up the TCP connection from where the load balancer left off.

For similar reasons, the load balancer cannot use SYN cookies to offload SYN floods from the server nodes.

Alternative Implementations of Direct Server Return

There are two alternative implementations of DSR (see the 2002 paper entitled 'The State of the Art'), but neither is widely used any more:

TCP Tunnelling: IP tunnelling (aka IP encapsulation) can be used to tunnel the client IP packets from the load balancer to the server.  All client IP packets are encapsulated within IP datagrams, and the server runs a tunnel device (an OS driver and configuration) to strip off the datagram header before sending the client IP packet up the network stack.  This configuration does not support delayed binding, or any equivalent means of inspecting content before making the load-balancing decision.

TCP Connection Hopping: Resonate implemented a proprietary protocol (Resonate Exchange Protocol, RXP) which interfaces deeply with the server node's TCP stack.
Once a TCP connection has been established with the Resonate Central Dispatch load balancer and the initial data has been read, the load balancer can hand the response side of the connection off to the selected server node using RXP.  The RXP driver on the server suppresses the initial TCP handshake packets and forces the use of the correct TCP sequence number.  This uniquely allows for content-based routing and direct server return in one solution.
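The sequence-number "splicing" that delayed binding relies on, where the load balancer rewrites the sequence numbers chosen by the server so they match the ones it originally gave the client, can be sketched in Python.  This is an illustrative model only, not load-balancer code; the function names and ISN values are invented:

```python
# Sketch of delayed-binding sequence-number splicing (illustrative only).
# The load balancer answered the client's SYN with its own initial sequence
# number (ISN); the real server later chose a different ISN. Once the two
# connections are spliced, every server-side sequence number must be shifted
# by the difference, and every client-side ACK shifted back.

def make_translator(lb_isn: int, server_isn: int):
    # TCP sequence numbers live in a 32-bit space, so all arithmetic is mod 2^32
    delta = (lb_isn - server_isn) & 0xFFFFFFFF

    def server_to_client(seq: int) -> int:
        return (seq + delta) & 0xFFFFFFFF

    def client_to_server(ack: int) -> int:
        return (ack - delta) & 0xFFFFFFFF

    return server_to_client, client_to_server

s2c, c2s = make_translator(lb_isn=1000, server_isn=5000)
print(s2c(5001))  # the server's first data byte, renumbered to 1001 for the client
print(c2s(1001))  # the client's ACK, translated back to 5001 for the server
```

A hardware implementation performs exactly this constant-offset rewrite on each packet, which is why delayed binding remains ASIC-friendly while deeper content rewriting is not.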
Introduction

Do you ever face any of these requirements?

"I want to best-effort provide certain levels of service for certain users."
"I want to prioritize some transactions over others."
"I want to restrict the activities of certain types of users."

This article explains that to address these problems, you must consider the following questions:

"Under what circumstances do you want the policy to take effect?"
"How do you wish to categorise your users?"
"How do you wish to apply the differentiation?"

It then describes some of the measures you can take to monitor performance more deeply and apply prioritization to nominated traffic:

Service Level Monitoring - Measure system performance, and apply policies only when they are needed.
Custom Logging - Log and analyse activity to record and validate policy decisions.
Application Traffic Inspection - Determine source, user, content and value; XML processing with XPath searches and calculations.
Request Rate Shaping - Apply fine-grained rate limits for transactions.
Bandwidth Control - Allocate and reserve bandwidth.
Traffic Routing and Termination - Route high- and low-priority traffic differently; terminate undesired requests early.
Selective Traffic Optimization - Selective caching and compression.

Whether you are running an eCommerce web site, online corporate services or an internal intranet, there's always the need to squeeze more performance from limited resources and to ensure that your most valuable users get the best possible levels of service from the services you are hosting.

An example

Imagine that you are running a successful gaming service in a glamorous location.  The usage of your service is growing daily, and many of your long-term users are becoming very valuable.  Unfortunately, much of your bandwidth and server capacity is consumed by competitors' robots that screen-scrape your betting statistics, and by poorly-written bots that spam your gaming tables and occasionally place low-value bets.
At certain times of the day, this activity is so great that it impacts the quality of the service you deliver, and your most valuable customers are affected.

Using Traffic Manager to measure, classify and prioritize traffic, you can construct a service policy that comes into effect when your web site begins to run slowly, enforcing different levels of service:

Competitors' screen-scraping robots are tightly restricted to one request per second each.  A ten-second delay reduces the value of the information they screen-scrape.
Users who have not yet logged in are limited to a small proportion of your available bandwidth and directed to a pair of basic web servers, reserving capacity for users who are logged in.
Users who have made large transactions in the past are tagged with a cookie, and the performance they receive is measured.  If they are receiving poor levels of service (over 100ms response time), some of the transaction servers are reserved for these high-value users, and the activity of other users is shaped by a system-wide queue.

Whether you are operating a gaming service, a content portal, a B2B or B2C eCommerce site or an internal intranet, this kind of service policy helps ensure that key customers get the best possible service, minimizes the churn of valuable users, and prevents undesirable visitors from degrading the service for others.

Designing a service policy

"I want to best-effort guarantee certain levels of service for certain users."
"I want to prioritize some transactions over others."
"I want to restrict the activities of certain users."

To address these problems, you must consider the following questions:

Under what circumstances do you want the policy to take effect?
How do you wish to categorise your users?
How do you wish to apply the differentiation?

One or more TrafficScript rules can be used to apply the policy.
They take advantage of the following features:

When does the policy take effect?

Service Level Monitoring - Measure system performance, and apply policies only when they are needed.
Custom Logging - Log and analyse activity to record and validate policy decisions.

How are users categorized?

Application Traffic Inspection - Determine source, user, content and value; XML processing with XPath searches and calculations.

How are they given different levels of service?

Request Rate Shaping - Apply fine-grained rate limits for transactions.
Bandwidth Control - Allocate and reserve bandwidth.
Traffic Routing and Termination - Route high- and low-priority traffic differently; terminate undesired requests early.
Selective Traffic Optimization - Selective caching and compression.

TrafficScript

TrafficScript (see Feature Brief: TrafficScript) is the key to defining the traffic management policies that implement these prioritization rules.  TrafficScript brings together functionality to monitor and classify behavior, and then applies the appropriate prioritization rules.

For example, the following TrafficScript request rule inspects HTTP requests.  If the request is for a .jsp page, the rule looks at the client's 'Priority' cookie and routes the request to the 'high-priority' or 'low-priority' server pool as appropriate:

$url = http.getPath();
if( string.endsWith( $url, ".jsp" ) ) {
   $cookie = http.getCookie( "Priority" );
   if( $cookie == "high" ) {
      pool.use( "high-priority" );
   } else {
      pool.use( "low-priority" );
   }
}

Generally, if you can describe the traffic management logic you require, you can implement it using TrafficScript.

Capability 1: Service Level Monitoring

Using Service Level Monitoring (see Feature Brief: Service Level Monitoring), Traffic Manager can measure and react to changes in response times for your hosted services, by comparing response times against a desired time.
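Conceptually, an SLM class keeps a record of recent response times and reports what percentage of them met the desired time.  The following Python sketch is an illustrative model of that behaviour; it is not Traffic Manager code, and the class and method names are invented:

```python
# Minimal model of a Service Level Monitoring class: record response times,
# then report the percentage of recent requests that met the desired time.
from collections import deque

class SLMClass:
    def __init__(self, desired_ms, window=1000):
        self.desired_ms = desired_ms
        self.samples = deque(maxlen=window)  # most recent response times

    def record(self, response_ms):
        self.samples.append(response_ms)

    def conforming(self):
        # Percentage of recent requests that met the desired response time
        if not self.samples:
            return 100.0
        met = sum(1 for t in self.samples if t <= self.desired_ms)
        return 100.0 * met / len(self.samples)

slm = SLMClass(desired_ms=100)
for t in [50, 80, 120, 90, 200]:
    slm.record(t)
print(slm.conforming())  # 3 of the 5 requests met the 100ms target -> 60.0
```

A conformance figure like this is what the thresholds described below (log a warning under 80%, raise an alert under 50%) would be compared against.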
You configure Service Level Monitoring by creating a Service Level Monitoring class (SLM class).  The SLM class is configured with the desired response time (for example, 100ms), and thresholds that define actions to take.  For example, if fewer than 80% of requests meet the desired response time, Traffic Manager can log a warning; if fewer than 50% meet the desired time, Traffic Manager can raise a system alert.

Suppose that we were concerned about the performance of our Java servlets.  We can configure an SLM class with the desired performance, and use it to monitor all requests for Java servlets:

$url = http.getPath();
if( string.startsWith( $url, "/servlet/" ) ) {
   connection.setServiceLevelClass( "Java servlets" );
}

You can then monitor the performance figures generated by the 'Java servlets' SLM class to discover the response times, and the proportion of requests that fall outside the desired response time.

Once requests are monitored by an SLM class, you can discover the proportion of requests that meet (or fail to meet) the desired response time from within a TrafficScript rule.  This makes it possible to write TrafficScript logic that is only invoked when services are underperforming.

Example: Simple Differentiation

Suppose we have a TrafficScript rule that tests whether a request comes from a 'high value' customer.  When our service is running slowly, high-value customers should be sent to one server pool ('gold') and other customers sent to a lower-performing server pool ('bronze').  However, when the service is running at normal speed, we want to send all customers to all servers (the server pool named 'all servers').
The following TrafficScript rule shows how this logic can be implemented:

# Monitor all traffic with the 'response time' SLM class, which is
# configured with a desired response time of 200ms
connection.setServiceLevelClass( "response time" );

# Now, check the historical activity (last 10 seconds) to see if it's
# been acceptable (more than 90% of requests served within 200ms)
if( slm.conforming( "response time" ) > 90 ) {
   # select the 'all servers' server pool and terminate the rule
   pool.use( "all servers" );
}

# If we get here, things are running slowly.
# Here, we decide a customer is 'high value' if they have a login cookie,
# so we penalize customers who are not logged in. You can put your own
# test here instead.
$logincookie = http.getCookie( "Login" );
if( $logincookie ) {
   pool.use( "gold" );
} else {
   pool.use( "bronze" );
}

For a more sophisticated example of this technique, check out the article Dynamic rate shaping slow applications.

Capability 2: Application Traffic Inspection

There's no limit to how you can inspect and evaluate your traffic.  Traffic Manager lets you look at any aspect of a client's request, so that you can categorize clients as you need.  For example:

# What is the client asking for?
$url = http.getPath();
# ... and the query string
$qs = http.getQueryString();

# Where has the client come from?
$referrer = http.getHeader( "Referer" );
$country = geo.getCountryCode( request.getRemoteIP() );

# What sort of browser is the client using?
$ua = http.getHeader( "User-Agent" );

# Is the client trying to spend more than $49.99?
if( http.getPath() == "/checkout.cgi"
    && http.getFormParam( "total" ) > 4999 ) ...

# What's the value of the CustomerName field in the XML purchase order
# in the SOAP request?
$body = http.getBody();
$name = xml.xpath.matchNodeSet( $body, "", "//Info/CustomerName/text()" );

# Take the name, post it to a database server with a web interface and
# inspect the response.
# Does the response contain the value 'Premium'?
$response = http.request.post( "http://my.database.server/query",
                               "name=" . string.htmlEncode( $name ) );
if( string.contains( $response, "Premium" ) ) { ... }

Remembering the Classification with a Cookie

Often, it only takes one request to identify the status of a user, but you want to remember this decision for all subsequent requests.  For example, if a user places an item in his shopping cart by accessing the URL '/cart.php', then you want to remember this information for all of his subsequent requests.

Adding a response cookie is the way to do this.  You can do this in either a request or response rule with the 'http.setResponseCookie()' function:

if( http.getPath() == "/cart.php" ) {
   http.setResponseCookie( "GotItems", "Yes" );
}

This cookie will be sent by the client on every subsequent request, so to test if the user has placed items in his shopping cart, you just need to test for the presence of the 'GotItems' cookie in each request rule:

if( http.getCookie( "GotItems" ) ) { ... }

If necessary, you can encrypt and sign the cookie so that it cannot be spoofed or reused:

# Setting the cookie:
# Create an encryption key using the client's IP address and user agent.
# Encrypt the current time using the encryption key; it can only be
# decrypted using the same key.
$key = http.getHeader( "User-Agent" ) . ":" . request.getRemoteIP();
$encrypted = string.encrypt( sys.time(), $key );
$encoded = string.hexencode( $encrypted );
http.setResponseHeader( "Set-Cookie", "GotItems=" . $encoded );

# Validating the cookie:
$isValid = 0;
if( $cookie = http.getCookie( "GotItems" ) ) {
   $encrypted = string.hexdecode( $cookie );
   $key = http.getHeader( "User-Agent" ) . ":" . request.getRemoteIP();
   $secret = string.decrypt( $encrypted, $key );
   # If the cookie has been tampered with, or the IP address or user
   # agent differ, string.decrypt will return an empty string.
   # If it worked and the data was less than 1 hour old, it's valid:
   if( $secret && sys.time() - $secret < 3600 ) {
      $isValid = 1;
   }
}

Capability 3: Request Rate Shaping

Having decided when to apply your service policy (using Service Level Monitoring), and having classified your users (using Application Traffic Inspection), you now need to decide how to prioritize valuable users and penalize undesirable ones.

Request Rate Shaping (see Feature Brief: Bandwidth and Rate Shaping in Traffic Manager) is used to apply maximum request rates:

on a global basis ("no more than 100 requests per second to my application servers");
on a very fine-grained per-user or per-class basis ("no user can make more than 10 requests per minute to any of my statistics pages").

You can construct a service policy that places limits on a wide range of events, with very fine-grained control over how events are identified.  You can impose per-second and per-minute rates on these events.  For example:

You can rate-shape individual web spiders, to stop them overwhelming your web site.  Each web spider, from each remote IP address, can be given a maximum request rate.
You can throttle individual SMTP connections, or groups of connections from the same client, so that each connection is limited to a maximum number of sent emails per minute.  You can also rate-shape new SMTP connections, so that a remote client can only establish new connections at a particular rate.
You can apply a global rate-shape to the number of connections per second that are forwarded to an application.
You can identify individual users' attempts to log in to a service, and then impede dictionary-based login attacks by restricting each user to a limited number of attempts per minute.

Request rate limits are imposed using the TrafficScript rate.use() function, and you can configure per-second and per-minute limits in the rate class.
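To illustrate how two simultaneous limits interact, here is a minimal Python model using simple fixed-window counters.  It is not Traffic Manager code, and unlike a real rate class it rejects excess requests instead of queueing them:

```python
# Illustrative model of a rate class with both a per-second and a
# per-minute limit: a request is admitted only if it is within both.
import time

class RateClass:
    def __init__(self, per_second, per_minute):
        self.per_second, self.per_minute = per_second, per_minute
        self.sec_window, self.sec_count = 0, 0
        self.min_window, self.min_count = 0, 0

    def admit(self, now):
        sec, minute = int(now), int(now // 60)
        if sec != self.sec_window:          # new second: reset its counter
            self.sec_window, self.sec_count = sec, 0
        if minute != self.min_window:       # new minute: reset its counter
            self.min_window, self.min_count = minute, 0
        if self.sec_count >= self.per_second or self.min_count >= self.per_minute:
            return False
        self.sec_count += 1
        self.min_count += 1
        return True

rc = RateClass(per_second=10, per_minute=100)
admitted = sum(rc.admit(now=0.0) for _ in range(20))
print(admitted)  # only 10 of the 20 requests in the same second are admitted
```

Note that with a per-second limit of 10, a per-minute limit above 600 could never trigger in this model, which is the same observation made about the real limits below.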
Both limits are applied (note that if the per-minute limit is more than 60 times the per-second limit, it has no effect).

Using a Rate Class

Rate classes function as queues.  When the TrafficScript rate.use() function is called, the connection is suspended and added to the queue that the rate class manages.  Connections are then released from the queue according to the per-second and per-minute limits.

There is no limit to the size of the backlog of queued connections.  For example, if 1,000 requests arrived in quick succession to a rate class that admitted 10 per second, 990 of them would be immediately queued.  Each second, 10 more requests would be released from the front of the queue.

While they are queued, connections may time out or be closed by the remote client.  If this happens, they are immediately discarded.

You can use the rate.getBacklog() function to discover how many requests are currently queued.  If the backlog is too large, you may decide to return an error page to the user rather than risk their connection timing out.  For example, to rate-shape .jsp requests, but defer requests when the backlog gets too large:

$url = http.getPath();
if( string.endsWith( $url, ".jsp" ) ) {
   if( rate.getBacklog( "shape requests" ) > 100 ) {
      http.redirect( "http://mysite/too_busy.html" );
   } else {
      rate.use( "shape requests" );
   }
}

Rate Classes with Keys

In many circumstances, you may need to apply more fine-grained rate-shaping limits.  For example, imagine a login page: we wish to limit how frequently each individual user can attempt to log in, to just 2 attempts per minute.

The rate.use() function can take an optional 'key' which identifies a specific instance of the rate class.  This key can be used to create multiple, independent rate classes that share the same limits, but enforce them independently for each individual key.
For example, the 'login limit' class is restricted to 2 requests per minute, limiting how often each user can attempt to log in:

$url = http.getPath();
if( string.endsWith( $url, "login.cgi" ) ) {
   $user = http.getFormParam( "username" );
   rate.use( "login limit", $user );
}

This rule can help to defeat dictionary attacks, where attackers try to brute-force crack a user's password.  The rate-shaping limits are applied independently to each different value of $user.  As each new user accesses the system, they are limited to 2 requests per minute, independently of all other users who share the 'login limit' rate-shaping class.

For another example, check out The "Contact Us" attack against mail servers.

Applying service policies with rate shaping

Of course, once you've classified your users, you can apply different rate settings to different categories of users:

# If they have an odd-looking user agent, or if there's no Host header,
# the client is probably a web spider. Limit it to 1 request per second.
$ua = http.getHeader( "User-Agent" );
if( ( ! string.startsWith( $ua, "Mozilla/" ) &&
      ! string.startsWith( $ua, "Opera/" ) ) ||
    ! http.getHeader( "Host" ) ) {
   rate.use( "spiders", request.getRemoteIP() );
}

If the service is running slowly, rate-shape users who have not placed items into their shopping cart with a global limit, and rate-shape other users to 8 requests per second each:

if( slm.conforming( "timer" ) < 80 ) {
   $cookie = http.getCookie( "Cart" );
   if( ! $cookie ) {
      rate.use( "casual users" );
   } else {
      # Get a unique id for the user
      $cookie = http.getCookie( "JSPSESSIONID" );
      rate.use( "8 per second", $cookie );
   }
}

Capability 4: Bandwidth Shaping

Bandwidth shaping (see Feature Brief: Bandwidth and Rate Shaping in Traffic Manager) allows Traffic Manager to limit the number of bytes per second used by inbound or outbound traffic, for an entire service or by the type of request.
Bandwidth limits are automatically shared and enforced across all the Traffic Managers in a cluster.  Individual Traffic Managers take different proportions of the total limit depending on the load on each, and unused bandwidth is equitably allocated across the cluster depending on the need of each machine.

Like request rate shaping, you can use bandwidth shaping to limit the activities of subsets of your users.  For example, you may have a 1 Gbit/s network connection which is being over-utilized by a certain type of client, affecting the responsiveness of the service.  You may therefore wish to limit the bandwidth available to those clients to 20 Mbit/s.

Using Bandwidth Shaping

Like request rate shaping, you configure a bandwidth class with a maximum bandwidth limit.  Connections are allocated to a class as follows:

response.setBandwidthClass( "class name" );

All of the connections allocated to the class share the same bandwidth limit.

Example: Managing Flash Floods

The following example helps to mitigate the 'Slashdot effect', a common example of a flash-flood problem.  In this situation, a web site is overwhelmed by traffic as a result of a high-profile link (for example, from the Slashdot news site), and the level of service that regular users experience suffers as a result.

The example looks at the 'Referer' header, which identifies where a user has come from to access a web site.  If the user has come from 'slashdot.org', he is tagged with a cookie so that all of his subsequent requests can be identified, and he is allocated to a low-bandwidth class:

$referrer = http.getHeader( "Referer" );
if( string.contains( $referrer, "slashdot.org" ) ) {
   http.addResponseHeader( "Set-Cookie", "slashdot=1" );
   connection.setBandwidthClass( "slashdot" );
}
if( http.getCookie( "slashdot" ) ) {
   connection.setBandwidthClass( "slashdot" );
}

For a more in-depth discussion, check out Detecting and Managing Abusive Referers.
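The shared bytes-per-second allowance of a bandwidth class can be modelled as a token bucket.  The following Python sketch is illustrative only (the class name and method are invented); a real Traffic Manager cluster additionally distributes the allowance across its members:

```python
# Token-bucket sketch of a bandwidth class: every connection in the class
# draws from one shared bytes-per-second allowance.
class BandwidthClass:
    def __init__(self, bytes_per_sec):
        self.rate = bytes_per_sec
        self.tokens = float(bytes_per_sec)  # start with one second's allowance
        self.last = 0.0

    def sendable(self, now, wanted):
        # Refill tokens for the elapsed time, capped at one second's allowance
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        granted = min(wanted, int(self.tokens))
        self.tokens -= granted
        return granted

bw = BandwidthClass(bytes_per_sec=20_000_000 // 8)  # roughly 20 Mbit/s
print(bw.sendable(now=0.0, wanted=5_000_000))  # capped at 2,500,000 bytes
```

Because all connections in the class draw from the same bucket, the limit is shared rather than applied per connection, matching the behaviour described above.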
Capability 5: Traffic Routing and Termination

Different levels of service can be provided by routing traffic differently or, in extreme events, by dropping some requests.

For example, some large media sites provide different levels of content: high-bandwidth rich-media versions of news stories are served during normal usage, and low-bandwidth versions are served when traffic levels are extremely high.  Many websites provide Flash-enabled and simple HTML versions of their home page and navigation.  This is also commonplace when presenting content to a range of browsing devices with different capabilities and bandwidth.

The switch between high- and low-bandwidth versions can take place as part of a service policy: as the service begins to under-perform, some (or all) users can be forced onto the low-bandwidth versions so that a better level of service is maintained.

# Forcibly change requests that begin /high/ to /low/
$url = http.getPath();
if( string.startsWith( $url, "/high" ) ) {
   $url = string.replace( $url, "/high", "/low" );
   http.setPath( $url );
}

Example: Ticket Booking Systems

Ticket booking systems for major events often suffer enormous floods of demand when tickets become available.

You can use Stingray's request rate shaping system to limit how many visitors are admitted to the service, and if the service becomes overwhelmed, you can send back a 'please try again' message rather than keeping the user 'on hold' in the queue indefinitely.

Suppose the 'booking' rate-shaping class is configured to admit 10 users per second, and that users enter the booking process by accessing the URL /bookevent?eventID=<id>.  This rule ensures that no user is queued for more than 30 seconds, by keeping the queue length to no more than 300 users (10 users/second * 30 seconds):

# Limit how users can book events
$url = http.getPath();
if( $url == "/bookevent" ) {
   # How many users are already queued?
   if( rate.getBacklog( "booking" ) > 300 ) {
      http.redirect( "http://www.mysite.com/too_busy.html" );
   } else {
      rate.use( "booking" );
   }
}

Example: Prioritizing Resource Usage

In many cases, resources are limited, and when a site is overwhelmed, users' requests still need to be served.  Consider the following scenario:

The site runs a cluster of 4 identical application servers (servers '1' to '4').
Users are categorized into casual visitors and customers; customers have a 'Cart' cookie, and casual visitors do not.

Our goal is to give all users the best possible level of service, but if customers begin to get a poor level of service, we want to prioritize them over casual visitors.  We want more than 80% of customers to get responses within 100ms.

This can be achieved by splitting the 4 servers into 2 pools: the 'allservers' pool contains servers 1 to 4, and the 'someservers' pool contains servers 1 and 2 only.

When the service is poor for the customers, we restrict the casual visitors to just the 'someservers' pool.  This effectively reserves servers 3 and 4 for the customers' exclusive use.

The following code uses the 'response' SLM class to measure the level of service that customers receive:

$customer = http.getCookie( "Cart" );
if( $customer ) {
   connection.setServiceLevelClass( "response" );
   pool.use( "allservers" );
} else {
   if( slm.conforming( "response" ) < 80 ) {
      pool.use( "someservers" );
   } else {
      pool.use( "allservers" );
   }
}

Capability 6: Selective Traffic Optimization

Some of Traffic Manager's features can be used to improve the end user's experience, but they consume resources on the system:

Pulse Web Accelerator (Aptimizer) rewrites page content for faster download and rendering, but is very CPU-intensive.
Content Compression reduces the bandwidth used in responses and gives better response times, but it takes considerable CPU resources and can degrade performance.
Content Caching (see Feature Brief: Traffic Manager Content Caching) can give much faster responses, and it is possible to cache multiple versions of content for each user; however, this consumes memory on the system.

All of these features can be enabled and disabled on a per-user basis, as part of a service policy.

Pulse Web Accelerator (Stingray Aptimizer)

Use the http.aptimizer.bypass() and http.aptimizer.use() TrafficScript functions to control whether Traffic Manager employs the Aptimizer optimization module for web content.

Note that these functions only refer to optimizations of the base HTML document (e.g. index.html, or other content of type text/html); all other resources are served as appropriate.  For example, if a client receives an aptimized version of the base content and then requests the image sprites, Traffic Manager will always serve up the sprites.

# Optimize web content for clients based in Australia
$ip = request.getRemoteIP();
if( geo.getCountry( $ip ) == "Australia" ) {
   http.aptimizer.use( "All", "Remote Users" );
}

Content Compression

Use the http.compress.enable() and http.compress.disable() TrafficScript functions to control whether Traffic Manager compresses response content sent to the remote client.  Note that Traffic Manager will only compress content if the remote browser has indicated that it supports compression.

On a lightly loaded system, it's appropriate to compress all response content whenever possible:

http.compress.enable();

On a system where CPU usage is becoming too high, you can compress content selectively:

# Don't compress by default
http.compress.disable();
if( $isvaluable ) {
   # Do compress in this case
   http.compress.enable();
}

Content Caching

Traffic Manager can cache multiple different versions of an HTTP response.
For example, if your home page is generated by an application that customizes it for each user, Traffic Manager can cache each version separately, and return the correct version from the cache for each user who accesses the page.

Traffic Manager's cache has a limited size so that it does not consume too much memory and cause performance to degrade.  You may wish to prioritize which pages you put in the cache, using the http.cache.disable() and http.cache.enable() TrafficScript functions.

Note: you also need to enable Content Caching in your virtual server configuration; otherwise the TrafficScript cache control functions will have no effect.

# Get the user name
$user = http.getCookie( "UserName" );

# Don't cache any pages by default:
http.cache.disable();

if( $isvaluable ) {
   # Do cache these pages for better performance.
   # Each user gets a different version of the page, so we need to cache
   # the page indexed by the user name.
   http.cache.setkey( $user );
   http.cache.enable();
}

Custom Logging

A service policy can be complicated to construct and implement.  The TrafficScript functions log.info(), log.warn() and log.error() write messages to the event log, and so are very useful debugging tools when developing complex TrafficScript rules.

For example, the following code:

if( $isvaluable && slm.conforming( "timer" ) < 70 ) {
   log.info( "User " . $user . " needs priority" );
}

... will append the following message to your error log file:

$ tail $ZEUSHOME/zxtm/log/errors
[20/Jan/2013:10:24:46 +0000] INFO rulename rulelogmsginfo vsname User Jack needs priority

You can also inspect your error log file by viewing the 'Event Log' on the Admin Server.  When you are debugging a rule, you can use log.info() to print out progress messages as the rule executes.
The log.info() function takes a string parameter; you can construct complex strings by appending variables and literals together using the '.' operator:

$msg = "Received " . connection.getDataLen() . " bytes.";
log.info( $msg );

The functions log.warn() and log.error() are similar to log.info().  They prefix error log messages with a higher priority, either "WARN" or "ERROR", and you can filter and act on these using the Event Handling system.

You should be careful when printing out connection data verbatim, because the connection data may contain control characters or other non-printable characters.  You can encode data using either 'string.hexEncode()' or 'string.escape()'; use 'string.hexEncode()' if the data is binary, and 'string.escape()' if the data contains readable text with a small number of non-printable characters.

Conclusion

Traffic Manager is a powerful toolkit for network and application administrators.  This white paper describes a number of techniques that use tools in the kit to solve a range of traffic valuation and prioritization tasks.

For more examples of how Traffic Manager and TrafficScript can manipulate and prioritize traffic, check out the Top Examples of Traffic Manager in action on the Pulse Community.
In Stingray, each virtual server is configured to manage traffic of a particular protocol.  For example, the HTTP virtual server type expects to see HTTP traffic, and automatically applies a number of optimizations (keepalive pooling, HTTP upgrades, pipelining) and offers a set of HTTP-specific functionality (caching, compression, etc).   A virtual server is bound to a specific port number (e.g. 80 for HTTP, 443 for HTTPS) and a set of IP addresses.  Although you can configure several virtual servers to listen on the same port, they must be bound to different IP addresses; you cannot have two virtual servers bound to the same IP:port pair, as Stingray would not know which virtual server to route traffic to.   "But I need to use one port for several different applications!"   Sometimes, perhaps due to firewall restrictions, you can't publish services on arbitrary ports.  Perhaps you can only publish services on ports 80 and 443; all other ports are judged unsafe and are firewalled off. Furthermore, it may not be possible to publish several external IP addresses.   You need to accept traffic for several different protocols on the same IP:port pair, and each protocol needs a particular virtual server to manage it. How can you achieve this?   The scenario   Let's imagine you are hosting several very different services:   A plain-text web application that needs an HTTP virtual server, listening on port 80 A second web application serving HTTPS traffic, listening on port 443 An XML-based service load-balanced across several servers, listening on port 180 SSH login to a back-end server (a 'server-first' protocol), listening on port 22   Clearly, you'll need four different virtual servers (one for each service), but due to firewall limitations, all traffic must be tunnelled to port 80 on a single IP address.  How can you resolve this?   The solution - version 1   The solution is relatively straightforward for the first three protocols.  
They are all 'client-first' protocols (see Feature Brief: Server First, Client First and Generic Streaming Protocols), so Stingray can read the initial data written by the client.   Virtual servers to handle individual protocols   First, create three internal virtual servers, listening on unused private ports (I've added 7000 to the public ports).  Each virtual server should be configured to manage its protocol appropriately, and to forward traffic to the correct target pool of servers.  You can test each virtual server by directing your client application to the correct port (e.g. http://stingray-ip-address:7080/ ), provided that you can access the relevant port (e.g. you are behind the firewall).   For security, you can bind these virtual servers to localhost so that they can only be accessed from the Stingray device.   A public 'demultiplexing' virtual server   Create three 'loopback' pools (one for each protocol), directing traffic to localhost:7080, localhost:7180 and localhost:7443.   Create a 'public' virtual server listening on port 80 that interrogates traffic using the following rule, and then selects the appropriate pool based on the data the clients send.  The virtual server should be 'client first', meaning that it will wait for data from the client connection before triggering any rules:

# Get what data we have...
$data = request.get();

# SSL/TLS record layer:
# handshake(22), ProtocolVersion.major(3), ProtocolVersion.minor(0-3)
if( string.regexmatch( $data, '^\026\003[\000-\003]' )) {
   # Looks like SSLv3 or TLS v1/2/3
   pool.use( "Internal HTTPS loopback" );
}

if( string.startsWithI( $data, "<xml" )) {
   # Looks like our XML-based protocol
   pool.use( "Internal XML loopback" );
}

if( string.regexmatch( $data, "^(GET |POST |PUT |DELETE |OPTIONS |HEAD )" )) {
   # Looks like HTTP
   pool.use( "Internal HTTP loopback" );
}

log.info( "Request: '".$data."' unrecognised!" );
connection.discard();

The 'Detect protocol' rule is triggered once we receive client data.   Now you can target all your client applications at port 80, tunnel through the firewall and demultiplex the traffic on the Stingray device.   The solution - version 2   You may have noticed that we omitted SSH from the first version of the solution.   SSH is a challenging protocol to manage in this way because it is 'server first': the client connects and waits for the server to respond with a banner (greeting) before writing any data on the connection.  This means that we cannot use the approach above to identify the protocol type before we select a pool.   However, there's a good workaround.  We can modify the solution presented above so that it waits for client data.  If it does not receive any data within (for example) 5 seconds, then we assume that the connection is the server-first SSH type.   First, create an "SSH" virtual server and pool listening on (for example) port 7022 and directing traffic to your target SSH server (for example, localhost:22 - the local SSH daemon on the Stingray host).   Note that this is a 'Generic server first' virtual server type, because that's the appropriate type for SSH.   Second, create an additional 'loopback' pool named 'Internal SSH loopback' that forwards traffic to localhost:7022 (the SSH virtual server).   Thirdly, reconfigure the public port-80 virtual server to be 'Generic streaming' rather than 'Generic client first'.  This means that it will run the request rule immediately on a client connection, rather than waiting for client data.   Finally, update the request rule to read the client data.  Because request.get() returns whatever is in the network buffer for client data, we spin and poll this buffer every 10 ms until we either get some data, or we time out after 5 seconds:

# Get what data we have...
$data = request.get();

$count = 500;
while( $data == "" && $count-- > 0 ) {
   connection.sleep( 10 ); # milliseconds
   $data = request.get();
}

if( $data == "" ) {
   # We've waited long enough... this must be a server-first protocol
   pool.use( "Internal SSH loopback" );
}

# SSL/TLS record layer:
# handshake(22), ProtocolVersion.major(3), ProtocolVersion.minor(0-3)
if( string.regexmatch( $data, '^\026\003[\000-\003]' )) {
   # Looks like SSLv3 or TLS v1/2/3
   pool.use( "Internal HTTPS loopback" );
}

if( string.startsWithI( $data, "<xml" )) {
   # Looks like our XML-based protocol
   pool.use( "Internal XML loopback" );
}

if( string.regexmatch( $data, "^(GET |POST |PUT |DELETE |OPTIONS |HEAD )" )) {
   # Looks like HTTP
   pool.use( "Internal HTTP loopback" );
}

log.info( "Request: '".$data."' unrecognised!" );
connection.discard();

This solution isn't perfect (the spin and poll may incur a hit for a busy service over a slow network connection), but it's an effective solution to the single-port firewall problem, and it explains how to tunnel SSH over port 80 (not that you'd ever do such a thing, would you?)   Read more   Check out Feature Brief: Server First, Client First and Generic Streaming Protocols for background. The WebSockets example (libWebSockets.rts: Managing WebSockets traffic with Stingray Traffic Manager) uses a similar approach to demultiplex WebSockets and HTTP traffic.
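As a footnote, the first-bytes fingerprinting that both rules perform can be sketched outside TrafficScript. This is a minimal Python illustration of the matching logic only; the function name and return values are ours, not part of the product:

```python
import re

def identify_protocol(data: bytes) -> str:
    """Classify a connection by its first bytes, mirroring the rule:
    TLS record layer, XML prefix, HTTP method, or no data at all
    (assumed to be a server-first protocol such as SSH)."""
    if data == b"":
        return "ssh"          # no client data yet: server-first protocol
    # TLS/SSLv3 record: handshake(22), major version 3, minor 0-3
    if len(data) >= 3 and data[0] == 22 and data[1] == 3 and data[2] <= 3:
        return "https"
    if data.lower().startswith(b"<xml"):
        return "xml"
    if re.match(rb"^(GET|POST|PUT|DELETE|OPTIONS|HEAD) ", data):
        return "http"
    return "unknown"

print(identify_protocol(b"\x16\x03\x01\x00\x05hello"))  # https
print(identify_protocol(b"GET / HTTP/1.1\r\n"))         # http
print(identify_protocol(b""))                           # ssh
```

The ordering matters just as it does in the rule: the cheap, most distinctive signatures (TLS record header) are checked first, and the empty-buffer case maps to the server-first fallback.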
This article describes how to gather activity statistics across a cluster of traffic managers using Perl, SOAP::Lite and Stingray's SOAP Control API. Overview Each local Stingray Traffic Manager tracks a very wide range of activity statistics. These may be exported using SNMP or retrieved using the System/Stats interface in Stingray's SOAP Control API. When you use the Activity monitoring in Stingray's Administration Interface, a collector process communicates with each of the Traffic Managers in your cluster, gathering the local statistics from each and merging them before plotting them on the activity chart. 'Aggregate data across all traffic managers' However, when you use the SNMP or Control API interfaces directly, you will only receive the statistics from the Traffic Manager machine you have connected to. If you want to get a cluster-wide view of activity using SNMP or the Control API, you will need to poll each machine and merge the results yourself. Using Perl and SOAP::Lite to query the traffic managers and merge activity statistics The following code sample determines the total TCP connection rate across the cluster as follows: Connect to the named traffic manager and use the getAllClusterMachines() method to retrieve a list of all of the machines in the cluster; Poll each machine in the cluster for its current value of TotalConn (the total number of TCP connections processed since startup); Sleep for 10 seconds, then poll each machine again; Calculate the number of connections processed by each traffic manager in the 10-second window, and calculate the per-second rate accurately using high-res time. 
The code:

#!/usr/bin/perl -w

use SOAP::Lite 0.6;
use Time::HiRes qw( time sleep );

$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME}=0;

my $userpass    = "admin:admin";      # SOAP-capable authentication credentials
my $adminserver = "stingray:9090";    # Details of an admin server in the cluster
my $sampletime  = 10;                 # Sample time (seconds)

sub getAllClusterMembers( $$ );
sub makeConnections( $$$ );
sub makeRequest( $$ );

my $machines = getAllClusterMembers( $adminserver, $userpass );
print "Discovered cluster members ". ( join ", ", @$machines ) . "\n";

my $connections = makeConnections( $machines, $userpass,
   "http://soap.zeus.com/zxtm/1.0/System/Stats/" );

# sample the value of getTotalConn
my $start = time();
my $res1 = makeRequest( $connections, "getTotalConn" );
sleep( $sampletime-(time()-$start) );
my $res2 = makeRequest( $connections, "getTotalConn" );

# Determine connection rate per traffic manager
my $totalrate = 0;
foreach my $z ( keys %{$res1} ) {
   my $conns   = $res2->{$z}->result - $res1->{$z}->result;
   my $elapsed = $res2->{$z}->{time} - $res1->{$z}->{time};
   my $rate = $conns / $elapsed;
   $totalrate += $rate;
}

print "Total connection rate across all machines: " .
      sprintf( '%.2f', $totalrate ) . "\n";

sub getAllClusterMembers( $$ ) {
    my( $adminserver, $userpass ) = @_;

    # Discover cluster members
    my $mconn = SOAP::Lite
         -> ns('http://soap.zeus.com/zxtm/1.0/System/MachineInfo/')
         -> proxy("https://$userpass\@$adminserver/soap")
         -> on_fault( sub {
              my( $conn, $res ) = @_;
              die ref $res ? $res->faultstring : $conn->transport->status; } );
    $mconn->proxy->ssl_opts( SSL_verify_mode => 0 );

    my $res = $mconn->getAllClusterMachines();

    # $res->result is a reference to an array of System.MachineInfo.Machine objects
    # Pull out the name:port of the traffic managers in our cluster
    my @machines = grep s@https://(.*?)/@$1@,
       map { $_->{admin_server}; } @{$res->result};

    return \@machines;
}

sub makeConnections( $$$ ) {
    my( $machines, $userpass, $ns ) = @_;
    my %conns;
    foreach my $z ( @$machines ) {
       $conns{ $z } = SOAP::Lite
         -> ns( $ns )
         -> proxy("https://$userpass\@$z/soap")
         -> on_fault( sub {
              my( $conn, $res ) = @_;
              die ref $res ? $res->faultstring : $conn->transport->status; } );
       $conns{ $z }->proxy->ssl_opts( SSL_verify_mode => 0 );
    }
    return \%conns;
}

sub makeRequest( $$ ) {
    my( $conns, $req ) = @_;
    my %res;
    foreach my $z ( keys %$conns ) {
       my $r = $conns->{$z}->$req();
       $r->{time} = time();
       $res{$z} = $r;
    }
    return \%res;
}

Running the script

$ ./getConnections.pl
Discovered cluster members stingray1-ny:9090, stingray1-sf:9090
Total connection rate across all machines: 5.02
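The per-machine arithmetic the script performs is easy to check in isolation. A minimal Python sketch with made-up sample numbers (the machine names and counter values are illustrative only):

```python
def cluster_connection_rate(sample1, sample2):
    """Given two {machine: (total_conns, timestamp)} samples, return
    the summed per-second connection rate across the cluster, exactly
    as the Perl loop does: delta(conns) / delta(time) per machine."""
    total = 0.0
    for machine, (conns1, t1) in sample1.items():
        conns2, t2 = sample2[machine]
        total += (conns2 - conns1) / (t2 - t1)
    return total

# Two samples taken ~10 seconds apart (illustrative numbers):
s1 = {"stingray1-ny:9090": (1000, 100.0), "stingray1-sf:9090": (5000, 100.0)}
s2 = {"stingray1-ny:9090": (1030, 110.0), "stingray1-sf:9090": (5020, 110.0)}
print(cluster_connection_rate(s1, s2))  # 3.0 + 2.0 = 5.0 connections/sec
```

Note that each machine keeps its own timestamp, as in the Perl version: using high-resolution per-machine elapsed times avoids skew when polling a large cluster takes a noticeable fraction of the sample window.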
In a recent conversation, a user wished to use the Traffic Manager's rate shaping capability to throttle back the requests to one part of his web site that was particularly sensitive to high traffic volumes (think a CGI, JSP Servlet, or other type of dynamic application). This article describes how you might go about doing this, testing and implementing a suitable limit using Service Level Monitoring, Rate Shaping and some TrafficScript magic.   The problem   Imagine that part of your website is particularly sensitive to traffic load and is prone to overloading when a crowd of visitors arrives. Connections queue up, response time becomes unacceptable and it looks like your site has failed.   If your website were a tourist attraction or a club, you’d employ a gatekeeper to manage entry rates. As the attraction began to fill up, you’d employ a queue to limit entry, and if the queue got too long, you’d want to encourage new arrivals to leave and return later rather than to join the queue.   This is more-or-less the solution we can implement for a web site. In this worked example, we're going to single out a particular application (named search.cgi) that we want to control the traffic to, and let all other traffic (typically for static content, etc) through without any shaping.   The approach   We'll first measure the maximum rate at which the application can process transactions, and use this value to determine the rate limit we want to impose when the application begins to run slowly.   Using Traffic Manager's Service Level Monitoring classes, we'll monitor the performance (response time) of the search.cgi application. If the application begins to run slower than normal, we'll deploy a queuing policy that rate-limits new requests to the application. We'll monitor the queue and send a 'please try later' message when the rate limit is met, rather than admitting users to the queue and forcing them to wait.   
Our goal is to maximize utilization (supporting as many transactions as possible) but minimize response time, returning a 'please wait' message rather than queueing a user.   Measuring performance   We first use zeusbench to determine the optimal performance that the application can achieve. We perform several runs, increasing the concurrency until the performance (responses-per-second) stabilizes at a consistent level:   zeusbench -c 5 -t 20 http://host/search.cgi zeusbench -c 10 -t 20 http://host/search.cgi zeusbench -c 20 -t 20 http://host/search.cgi   ... etc   Run:   zeusbench -c 20 -t 20 http://host/search.cgi     From this, we conclude that the maximum number of transactions-per-second that the application can comfortably sustain is 100.   We then use zeusbench to send transactions at that rate (100/second) and verify that performance and response times are stable. Run:   zeusbench -r 100 -t 20 http://host/search.cgi     Our desired response time can be deduced to be approximately 20 ms.   Now we perform the 'destructive' test, to elicit precisely the behaviour we want to avoid. Use zeusbench again to send requests to the application at higher than the sustainable transaction rate:   zeusbench -r 110 -t 20 http://host/search.cgi     Observe how the response time for the transactions steadily climbs as requests begin to be queued, and the successful transaction rate falls steeply. Eventually, when the response time falls past acceptable limits, transactions are timed out and the service appears to have failed.   This illustrates how sensitive a typical application can be to floods of traffic that overwhelm it, even for just a few seconds. The effects of the flood can last for tens of seconds afterwards as the connections complete or time out.   Defining the policy   We wish to implement the following policy:   If all transactions complete within 50 ms, do not attempt to shape traffic. 
If some transactions take more than 50 ms, assume that we are in danger of overload. Rate-limit traffic to 100 requests per second, and if requests exceed that rate limit, send back a '503 Too Busy' message rather than queuing them. Once transaction time comes down to less than 50 ms, remove the rate limit.   Our goal is to repeat the previous zeusbench test, showing that the maximum transaction rate can be sustained within the desired response time, and that any extra requests receive an error message quickly rather than being queued.   Implementing the policy   The Rate Class   Create a rate shaping class named Search limit with a limit of 100 requests per second.     The Service Level Monitoring class   Create a Service Level Monitoring class named Search timer with a target response time of 50 ms.     If desired, you can use the Activity monitor to chart the percentage of requests that conform, i.e. complete within 50 ms, while you conduct your zeusbench runs. You'll notice a strong correlation between these figures and the increase in response time figures reported by zeusbench.   The TrafficScript rule   Now use these two classes with the following TrafficScript request rule:

# We're only concerned with requests for /search.cgi
$url = http.getPath();
if( $url != "/search.cgi" ) break;

# Time this request using the Service Level Monitoring class
connection.setServiceLevelClass( "Search timer" );

# Test if any of the recent requests fell outside the desired SLM threshold
if( slm.conforming( "Search timer" ) < 100 ) {
   if( rate.getBacklog( "Search limit" ) > 0 ) {
      # To minimize response time, always send a 503 Too Busy response if the
      # request exceeds the configured rate of 100/second.
      # You could also use http.redirect() to a more pleasant 'sorry' page, but
      # 503 errors are easier to monitor when testing with ZeusBench
      http.sendResponse( "503 Too busy", "text/html",
         "<h1>We're too busy!!!</h1>",
         "Pragma: no-cache" );
   } else {
      # Shape the traffic to 100/second
      rate.use( "Search limit" );
   }
}

Testing the policy   Rerun the 'destructive' zeusbench run that produced the undesired behaviour previously:   zeusbench -r 110 -t 20 http://host/search.cgi     Observe that:   Traffic Manager processes all of the requests without excessive queuing; the response time stays within desired limits. Traffic Manager typically processes 110 requests per second. There are approximately 10 'Bad' responses per second (these are the 503 Too Busy responses generated by the rule), so we can deduce that the remaining 100 (approx) requests were served correctly.   These tests were conducted in a controlled environment, on an otherwise-idle machine that was not processing any other traffic. You could reasonably expect much more variation in performance in a real-world situation, and would be advised to set the rate class to a lower value than the experimentally-proven maximum.   In a real-world situation, you would probably choose to redirect a user to a 'sorry' page rather than returning a '503 Too Busy' error. However, because ZeusBench counts 4xx and 5xx responses as 'Bad', it is easy to determine how many requests complete successfully, and how many return the 'sorry' response.   For more information on using ZeusBench, take a look at the Introducing Zeusbench article.
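The shape of this policy (admit everything while response times conform; otherwise enforce a fixed per-second budget and reject the excess rather than queue it) can be sketched outside Traffic Manager. A minimal Python illustration follows; the class and method names are ours, and the real mechanism is Traffic Manager's rate and SLM classes as shown above:

```python
class OverloadGuard:
    """Sketch of the policy: while recent response times are conforming,
    admit everything; otherwise enforce a fixed requests-per-second
    budget and reject (rather than queue) anything over it."""
    def __init__(self, limit_per_sec):
        self.limit = limit_per_sec
        self.window_start = 0.0
        self.count = 0

    def admit(self, conforming, now):
        if conforming:
            return True                      # healthy: no shaping at all
        if now - self.window_start >= 1.0:   # start a new one-second window
            self.window_start = now
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True                      # within the rate budget
        return False                         # over budget: send 503 Too Busy

guard = OverloadGuard(limit_per_sec=100)
admitted = sum(guard.admit(conforming=False, now=0.5) for _ in range(110))
print(admitted)  # 100 of 110 admitted; the other 10 would get a 503
```

Rejecting immediately, rather than queuing, is the key design choice: a fast 503 keeps the response time bounded for everyone else, exactly as the zeusbench results above demonstrate.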
Dynamic information is more abundant now than ever, but we still see web applications serve static content. Unfortunately, many websites still use a static picture for a location map because of the application code changes required. Traffic Manager provides the ability to insert the required code into your site with no changes to the application. This simplifies the ability to provide users with dynamic, interactive content tailored for them.  Fortunately, Google provides an API to use embedded Google Maps in your application. These maps can be implemented with few code changes and support many applications. This document will focus on using Traffic Manager to provide embedded Google Maps without configuration or code changes to the application.   "The Google Maps Embed API uses a simple HTTP request to return a dynamic, interactive map. The map can be easily embedded in your web page by setting the Embed API URL as the src attribute of an iframe...   Google Maps Embed API maps are easy to add to your webpage—just set the URL you build as the value of an iframe's src attribute. Control the size of the map with the iframe's height and width attributes. No JavaScript required." -- Google Maps Embed API — Google Developers   Google Maps Embed API Notes   Please reference the Google documentation at Google Maps Embed API — Google Developers for additional information and options not covered in this document.   Google API Key   Before you get started with the TrafficScript, you need to get a Google API key. Requests to the Google Embed API must include a free API key as the value of the URL key parameter. Your key enables you to monitor your application's Maps API usage, and ensures that Google can contact you about your website/application if necessary. Visit Google Maps Embed API — Google Developers for directions to obtain an API key.   By default, a key can be used on any site. 
We strongly recommend that you restrict the use of your key to domains that you administer, to prevent use on unauthorized sites. You can specify which domains are allowed to use your API key by clicking the Edit allowed referrers... link for your key. -- Google Maps Embed API — Google Developers   The API key is included in clear text to the client (search nerdydata for "https://www.google.com/maps/embed/v1/place?key="). I also recommend you restrict use of your key to your domains.   Map Modes   Google provides four map modes available for use, and the mode is specified in the request URL.   Place mode displays a map pin at a particular place or address, such as a landmark, business, geographic feature, or town. Directions mode displays the path between two or more specified points on the map, as well as the distance and travel time. Search mode displays results for a search across the visible map region. It's recommended that a location for the search be defined, either by including a location in the search term (record+stores+in+Seattle) or by including a center and zoom parameter to bound the search. View mode returns a map with no markers or directions.   A few use cases:   Display a map of a specific location with labels using Place mode (covered in this document). Display parking and transit information for a location with Search mode (covered in this document). Provide directions (between locations or from the airport to a location) using Directions mode. Display nearby hotels or tourist information with Search mode using keywords such as "lodging" or "landmarks". Use geolocation and TrafficScript to provide a dynamic Search map of gyms local to each visitor for your fitness blog. 
My personal favorite for intranets: save time figuring out where to eat lunch around the office by using Search mode with the keyword "restaurant", or improve my TrafficScript productivity by using Search mode with the keyword "coffee+shops".   TrafficScript Examples   Example 1: Place Map (Replace a string)   This example covers a basic method to replace a string in the HTML code. This rule will replace a string within the existing HTML with the Google Place map iframe HTML, and has been formatted for easy customization and readability.

#Only process text/html content
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$nearaddress = "680+Folsom+St.+San+Francisco,+CA+94107";
$googleapikey = "YOUR_KEY_HERE";
$googlemapurl = "https://www.google.com/maps/embed/v1/place";
#Map height and width
$mapheight = "420";
$mapwidth = "420";

#String of HTML to be replaced
$insertstring = "<!-- TAB 2 Content (Office Locations) -->";

#Replacement HTML
$googlemaphtml = "<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" " .
   "frameborder=\"0\" style=\"border:0\" src=\"" . $googlemapurl . "?q=" .
   $nearaddress . "&key=" . $googleapikey . "\"></iframe>";

#Get the existing HTTP body for modification
$body = http.getResponseBody();

#Replace the defined string in the body
$body = string.replaceall( $body, $insertstring, $googlemaphtml );
http.setResponseBody( $body );

Example 2: Search Map (Replace a string)   This example is the same as Example 1, but with a change in the map type (note the change to $googlemapurl and the q=parking+near query). This rule will replace a string within the existing HTML with the Google Search map iframe HTML, and has been formatted for easy customization and readability.   
#Only process text/html content
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$nearaddress = "680+Folsom+St.+San+Francisco,+CA+94107";
$googleapikey = "YOUR_KEY_HERE";
$googlemapurl = "https://www.google.com/maps/embed/v1/search";
#Map height and width
$mapheight = "420";
$mapwidth = "420";

#String of HTML to be replaced
$insertstring = "<!-- TAB 2 Content (Office Locations) -->";

#Replacement HTML
$googlemaphtml = "<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" " .
   "frameborder=\"0\" style=\"border:0\" src=\"" . $googlemapurl . "?q=parking+near+" .
   $nearaddress . "&key=" . $googleapikey . "\"></iframe>";

#Get the existing HTTP body for modification
$body = http.getResponseBody();

#Replace the defined string in the body
$body = string.replaceall( $body, $insertstring, $googlemaphtml );
http.setResponseBody( $body );

Example 3: Search Map (Replace a section)   This example provides a different method to insert code into the existing HTML. This rule uses a regex to replace a section of the existing HTML with the Google map iframe HTML, and has also been formatted for easy customization and readability. The changes from Example 2 can be noted (see $insertstring and string.regexsub).   
#Only process text/html content
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$nearaddress = "680+Folsom+St.+San+Francisco,+CA+94107";
$googleapikey = "YOUR_KEY_HERE";
$googlemapurl = "https://www.google.com/maps/embed/v1/search";
#Map height and width
$mapheight = "420";
$mapwidth = "420";

#Regex match for the HTML section to be replaced
$insertstring = "</a>Parking</h4>(?s)(.*)<!-- TAB 2 Content \\(Office Locations\\) -->";

#Replacement HTML
$googlemaphtml = "<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" " .
   "frameborder=\"0\" style=\"border:0\" src=\"" . $googlemapurl . "?q=parking+near+" .
   $nearaddress . "&key=" . $googleapikey . "\"></iframe>";

#Get the existing HTTP body for modification
$body = http.getResponseBody();

#Regex sub against the body, looking for the defined section
$body = string.regexsub( $body, $insertstring, $googlemaphtml );
http.setResponseBody( $body );

Example 3.1 (Shortened)   For reference, a shortened version of the Example 3 rule above (with line breaks for readability):

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

http.setResponseBody( string.regexsub( http.getResponseBody(),
   "</a>Parking</h4>(?s)(.*)<!-- TAB 2 Content \\(Office Locations\\) -->",
   "<iframe width=\"420\" height=\"420\" frameborder=\"0\" style=\"border:0\" " .
   "src=\"https://www.google.com/maps/embed/v1/search?" .
   "q=parking+near+680+Folsom+St.+San+Francisco,+CA+94107" .
   "&key=YOUR_KEY_HERE\"></iframe>" ) );

Example 4: Search Map (Replace a section with formatting, select URL, & additional map)   This example is closer to a production use case. Specifically, this was created with www.riverbed.com as my pool nodes. 
This rule has the following changes from Example 3: it uses HTML formatting to visually integrate with the existing application (<div class=\"six columns\">), only processes responses for the desired URL path (/contact), and provides an additional Transit Stop map.

#Only process text/html content in the contact path
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" )
   || http.getPath() != "/contact" ) break;

$nearaddress = "680+Folsom+St.+San+Francisco,+CA+94107";
$mapcenter = string.urlencode( "37.784465,-122.398570" );
$mapzoom = "14";
#Google API key
$googleapikey = "YOUR_KEY_HERE";
$googlemapurl = "https://www.google.com/maps/embed/v1/search";
#Map height and width
$mapheight = "420";
$mapwidth = "420";

#Regex match for the HTML section to be replaced
$insertstring = "</a>Parking</h4>(?s)(.*)<!-- TAB 2 Content \\(Office Locations\\) -->";

#Replacement HTML
$googlemapshtml =
   #HTML cleanup (2x "</div>") and new section title
   "</div></div></a><h4>Parking and Transit Information</h4>" .
   #BEGIN Parking map. Using existing css for layout
   "<div class=\"six columns\"><h5>Parking Map</h5>" .
   "<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" frameborder=\"0\" " .
   "style=\"border:0\" src=\"" . $googlemapurl . "?q=parking+near+" . $nearaddress .
   "&key=" . $googleapikey . "\"></iframe></div>" .
   #BEGIN Transit map. Using existing css for layout
   "<div class=\"six columns\"><h5>Transit Stops</h5>" .
   "<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" frameborder=\"0\" " .
   "style=\"border:0\" src=\"" . $googlemapurl . "?q=Transit+Stop+near+" . $nearaddress .
   "&center=" . $mapcenter . "&zoom=" . $mapzoom . "&key=" . $googleapikey . "\"></iframe></div>" .
   #Include the removed HTML comment
   "<!-- TAB 2 Content (Office Locations) -->";

#Get the existing HTTP body for modification
$body = http.getResponseBody();

#Regex sub against the body, looking for the defined section
$body = string.regexsub( $body, $insertstring, $googlemapshtml );
http.setResponseBody( $body );

Example 4.1 (Shortened)   For reference, a shortened version of the Example 4 rule above (with line breaks for readability):

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" )
   || http.getPath() != "/contact" ) break;

http.setResponseBody( string.regexsub( http.getResponseBody(),
   "</a>Parking</h4>(?s)(.*)<!-- TAB 2 Content \\(Office Locations\\) -->",
   "</div></div></a><h4>Parking and Transit Information</h4><div class=\"six columns\">" .
   "<h5>Parking Map</h5><iframe width=\"420\" height=\"420\" frameborder=\"0\" " .
   "style=\"border:0\" src=\"https://www.google.com/maps/embed/v1/search" .
   "?q=parking+near+680+Folsom+St.+San+Francisco,+CA+94107&key=YOUR_KEY_HERE\"></iframe>" .
   "</div><div class=\"six columns\"><h5>Transit Stops</h5><iframe width=\"420\" " .
   "height=\"420\" frameborder=\"0\" style=\"border:0\" " .
   "src=\"https://www.google.com/maps/embed/v1/search?q=Transit+Stop+near+" .
   "680+Folsom+St.+San+Francisco,+CA+94107&center=37.784465%2C-122.398570&zoom=14" .
   "&key=YOUR_KEY_HERE\"></iframe></div><!-- TAB 2 Content (Office Locations) -->" ) );
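The iframe that all of these rules build is plain string assembly. A hedged Python equivalent of the URL and markup construction (the helper name is ours, and the key and address are placeholders):

```python
from urllib.parse import urlencode

def maps_embed_iframe(mode, query, api_key, width=420, height=420):
    """Build a Google Maps Embed API iframe, as the TrafficScript rules
    do by concatenation. 'mode' is one of place/search/directions/view."""
    base = "https://www.google.com/maps/embed/v1/" + mode
    src = base + "?" + urlencode({"q": query, "key": api_key})
    return ('<iframe width="%d" height="%d" frameborder="0" '
            'style="border:0" src="%s"></iframe>' % (width, height, src))

html = maps_embed_iframe("search",
                         "parking near 680 Folsom St. San Francisco, CA 94107",
                         "YOUR_KEY_HERE")
print(html)
```

One advantage of building the query with urlencode rather than hand-concatenation is that spaces, commas, and other reserved characters in the address are escaped automatically, where the TrafficScript rules pre-encode them with '+' in the literal.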
Important note - this article illustrates an example of authenticating traffic using Java Extensions.  Stingray TrafficScript also includes LDAP/Active Directory primitives, in the form of auth.query(), and these are generally simpler and easier to use than a Java-based solution.   Overview   A very common requirement for intranet and extranet applications is the need to authenticate users against an Active Directory (or LDAP) database. The Java Extension in this article shows how to do exactly that.   This article describes two Java Extensions that manage the HTTP Basic Authentication process and validate the supplied username and password against an Active Directory database. It shows how to use Initialization Parameters to provide configuration to an extension, and how authentication results can be cached to reduce the load on the Active Directory server.   A basic Java Extension   The first Java Extension verifies that the supplied username and password can bind directly to the LDAP database.  It's appropriate for simple LDAP deployments, but enterprise AD deployments may not give end users permission to bind directly to the entire database, so the second example may be more appropriate.   
The Java Extension (version 1)

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.zeus.ZXTMServlet.ZXTMHttpServletRequest;

public class LdapAuthenticate extends HttpServlet {
   private static final long serialVersionUID = 1L;

   private String dirServer;
   private String realm;

   public void init( ServletConfig config ) throws ServletException {
      super.init( config );
      dirServer = config.getInitParameter( "DB" );
      realm = config.getInitParameter( "Realm" );
      if( dirServer == null ) throw new ServletException( "No DB configured" );
      if( realm == null ) realm = "Secure site";
   }

   public void doGet( HttpServletRequest req, HttpServletResponse res )
      throws ServletException, IOException
   {
      try {
         ZXTMHttpServletRequest zreq = (ZXTMHttpServletRequest)req;
         String[] userPass = zreq.getRemoteUserAndPassword();
         if( userPass == null ) throw new Exception( "No Authentication details" );

         Hashtable<String, String> env = new Hashtable<String, String>();
         env.put( Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory" );
         env.put( Context.PROVIDER_URL, "LDAP://" + dirServer );
         env.put( Context.SECURITY_AUTHENTICATION, "DIGEST-MD5" );
         env.put( Context.SECURITY_PRINCIPAL, userPass[0] );
         env.put( Context.SECURITY_CREDENTIALS, userPass[1] );

         DirContext ctx = new InitialDirContext( env );
         ctx.close();
         // No exceptions thrown... must have been successful ;-)
         return;
      } catch( Exception e ) {
         res.setHeader( "WWW-Authenticate", "Basic realm=\"" + realm + "\"" );
         res.setHeader( "Content-Type", "text/html" );
         res.setStatus( 401 );

         String message = "<html>" +
            "<head><title>Unauthorized</title></head>" +
            "<body>" +
            "<h2>Unauthorized - please log in</h2>" +
            "<p>Please log in with your system username and password</p>" +
            "<p>Error: " + e.toString() + "</p>" +
            "</body>" +
            "</html>";

         PrintWriter out = res.getWriter();
         out.println( message );
      }
   }

   public void doPost( HttpServletRequest req, HttpServletResponse res )
      throws ServletException, IOException {
      doGet( req, res );
   }
}

Configuring the Java Extension

Upload the LdapAuthenticate Java Extension on the Java Catalog page. Click on the LdapAuthenticate link to edit the properties of the extension, and add two Initialization Parameters: DB specifies the name of the LDAP or Active Directory database, and Realm specifies the authentication realm.

These parameters are read when the extension is initialized, which occurs the first time the extension is used by Stingray:

public void init( ServletConfig config ) throws ServletException {
   super.init( config );
   dirServer = config.getInitParameter( "DB" );
   realm = config.getInitParameter( "Realm" );
   if( dirServer == null ) throw new ServletException( "No DB configured" );
   if( realm == null ) realm = "Secure site";
}

If you change the value of one of these parameters, use the 'Force Reload' option in the Stingray Admin Server to unload and reload the extension.
Either use the auto-generated rule, or create a new TrafficScript rule to call the extension on every request to an HTTP virtual server:

java.run( "LdapAuthenticate" );

Testing the Java Extension

When you try to access the web site through Stingray, you will be prompted for a username and password; the LdapAuthenticate extension checks that the username and password can bind to the configured Active Directory database, and refuses access if not.

If you are unable to log in, cancel the prompt dialog box to see the error reported by the Java extension. In one such case, there was a networking problem: the extension could not contact the database server named in the 'DB' parameter.

Caching the Authentication Results

Caching the authentication response from the Java extension will improve the performance of the web site and reduce the load on the database server. You can modify the TrafficScript rule that calls the extension so that it records successful logins, caching them for a period of time. The following rule uses the data.set() TrafficScript function to record successful logins, caching this information for 10 minutes before re-authenticating the user against the database server.

$auth = http.getHeader( "Authorization" );
if( data.get( $auth ) < sys.time() ) {
   data.remove( $auth );
   java.run( "LdapAuthenticate" );

   # If we got here, we were authenticated.
   # Cache this information for 600 seconds
   data.set( $auth, sys.time()+600 );
}

A more sophisticated Java Extension implementation

In enterprise deployments, users often cannot bind to the LDAP or Active Directory database directly. This example runs a custom search against the Active Directory database to locate the distinguishedName corresponding to the userid provided in the login attempt, then verifies that the user can bind using that distinguishedName and the password they provided.
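The data.get()/data.set() caching rule above is, in effect, an expiry-timestamp cache keyed on the Authorization header: an entry whose expiry time has passed (or which is absent) forces a fresh authentication. A Python sketch of the same logic, where the authenticate callback stands in for running the Java extension:

```python
import time

# Sketch of the TrafficScript caching rule above: successful
# authentications are cached for 600 seconds, keyed on the raw
# Authorization header value.
CACHE_TTL = 600
_auth_cache = {}  # header value -> expiry timestamp

def check_auth(auth_header: str, authenticate) -> bool:
    """Re-run authentication only if no unexpired cache entry exists."""
    if _auth_cache.get(auth_header, 0) < time.time():
        _auth_cache.pop(auth_header, None)
        if not authenticate(auth_header):  # stands in for java.run(...)
            return False
        _auth_cache[auth_header] = time.time() + CACHE_TTL
    return True
```

As in the TrafficScript version, the trade-off is that a password change or account revocation only takes effect once the cached entry expires, so the TTL bounds both the database load and the revocation delay.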
The Java Extension (version 2)

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.zeus.ZXTMServlet.ZXTMHttpServletRequest;

public class LdapAuthenticate extends HttpServlet {
   private static final long serialVersionUID = 1L;

   private String dirServer;
   private String realm;
   private String authentication;
   private String bindDN;
   private String bindPassword;
   private String baseDN;
   private String filter;

   public void init( ServletConfig config ) throws ServletException {
      super.init( config );

      dirServer = config.getInitParameter( "DB" );
      if( dirServer == null ) throw new ServletException( "No DB configured" );

      realm = config.getInitParameter( "Realm" );
      if( realm == null ) realm = "Secure site";

      authentication = config.getInitParameter( "authentication" );
      if( authentication == null ) authentication = "simple";

      bindDN = config.getInitParameter( "bindDN" );
      if( bindDN == null ) throw new ServletException( "No bindDN configured" );

      bindPassword = config.getInitParameter( "bindPassword" );
      if( bindPassword == null ) throw new ServletException( "No bindPassword configured" );

      baseDN = config.getInitParameter( "baseDN" );
      if( baseDN == null ) throw new ServletException( "No baseDN configured" );

      filter = config.getInitParameter( "filter" );
      if( filter == null ) throw new ServletException( "No filter configured" );
   }

   public void doGet( HttpServletRequest req, HttpServletResponse res )
      throws ServletException, IOException
   {
      try {
         ZXTMHttpServletRequest zreq = (ZXTMHttpServletRequest)req;
         String[] userPass = zreq.getRemoteUserAndPassword();
         if( userPass == null ) throw new Exception( "No Authentication details" );

         Hashtable<String, String> env = new Hashtable<String, String>();
         env.put( Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory" );
         env.put( Context.PROVIDER_URL, "LDAP://" + dirServer );
         env.put( Context.SECURITY_AUTHENTICATION, authentication );

         /* Bind with admin credentials and search for the user's DN */
         NamingEnumeration<SearchResult> ne;
         String searchfilter = filter.replace( "%u", userPass[0] );
         try {
            env.put( Context.SECURITY_PRINCIPAL, bindDN );
            env.put( Context.SECURITY_CREDENTIALS, bindPassword );
            DirContext ctx = new InitialDirContext( env );

            String[] attrIDs = { "distinguishedName" };
            SearchControls sc = new SearchControls();
            sc.setReturningAttributes( attrIDs );
            sc.setSearchScope( SearchControls.SUBTREE_SCOPE );
            ne = ctx.search( baseDN, searchfilter, sc );
            ctx.close();
         } catch( Exception e ) {
            throw new Exception( "Failed to bind with master credentials: " + e.toString() );
         }

         if( ne == null || !ne.hasMore() ) {
            throw new Exception( "No such user " + userPass[0] );
         }

         SearchResult sr = (SearchResult) ne.next();
         Attributes attrs = sr.getAttributes();
         Attribute dnAttr = attrs.get( "distinguishedName" );
         String dn = (String) dnAttr.get();

         /* Now bind using the DN with the user's credentials */
         try {
            env.put( Context.SECURITY_PRINCIPAL, dn );
            env.put( Context.SECURITY_CREDENTIALS, userPass[1] );
            DirContext ctx = new InitialDirContext( env );
            ctx.close();
         } catch( Exception e ) {
            throw new Exception( "Failed to bind with user credentials: " + e.toString() );
         }

         // No exceptions thrown... must have been successful ;-)
         return;
      } catch( Exception e ) {
         res.setHeader( "WWW-Authenticate", "Basic realm=\"" + realm + "\"" );
         res.setHeader( "Content-Type", "text/html" );
         res.setStatus( 401 );

         String message = "<html>" +
            "<head><title>Unauthorized</title></head>" +
            "<body>" +
            "<h2>Unauthorized - please log in</h2>" +
            "<p>Please log in with your system username and password</p>" +
            "<p>Error: " + e.toString() + "</p>" +
            "</body>" +
            "</html>";

         PrintWriter out = res.getWriter();
         out.println( message );
      }
   }

   public void doPost( HttpServletRequest req, HttpServletResponse res )
      throws ServletException, IOException {
      doGet( req, res );
   }
}

This version of the Java Extension takes additional initialization parameters: authentication, bindDN, bindPassword, baseDN and filter.

It first searches the database for a distinguishedName using a query resembling:

$ ldapsearch -h DB -D bindDN -w bindPassword -b baseDN filter distinguishedName

where the %u in the filter is replaced with the username in the login attempt. It then attempts to bind to the database using a query resembling:

$ ldapsearch -h DB -D distinguishedName -w userpassword

... and permits access if that bind is successful.
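One caveat with the search-then-bind approach: filter.replace( "%u", ... ) splices the user-supplied name directly into the LDAP search filter. A hardening refinement (not present in the extension above) is to escape the LDAP filter metacharacters defined in RFC 4515 before substitution. A Python sketch, where the sAMAccountName filter template is an illustrative assumption of what the 'filter' initialization parameter might contain:

```python
# Sketch: substitute the login name into an LDAP search filter template,
# escaping the RFC 4515 special characters so user input cannot change
# the structure of the filter. This is a hardening step, not part of
# the original extension, which does a plain replace().
_ESCAPES = {"\\": r"\5c", "*": r"\2a", "(": r"\28", ")": r"\29", "\0": r"\00"}

def escape_ldap(value: str) -> str:
    return "".join(_ESCAPES.get(ch, ch) for ch in value)

def build_filter(template: str, username: str) -> str:
    return template.replace("%u", escape_ldap(username))

# A typical AD filter template for the 'filter' parameter:
print(build_filter("(sAMAccountName=%u)", "jbloggs"))  # (sAMAccountName=jbloggs)
# An injection attempt is neutralized by the escaping:
print(build_filter("(sAMAccountName=%u)", "a)(cn=*"))
```

With escaping in place, a malicious username such as a)(cn=* becomes an ordinary literal value rather than an extra filter clause.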
Update 2013-06-18: I had to do 50 conversions today, so I have attached a shell script to automate this process.

==

Assumptions:

You have a pkcs12 bundle with a private key and certificate in it - in this example we will use a file called www.website.com.p12. I use SimpleAuthority as it is cross platform and the free edition lets you create up to 5 keypairs, which is plenty for the lab...

You don't have a password on the private key (passwords on machine-loaded keys are a waste of time IMHO).

You have a Linux / Mac OS X / Unix system with openssl installed (Mac OS X does by default, as do most Linux installs...).

The 3 commands you need:

First, we take the p12 and export just the private key (-nocerts), unencrypted (-nodes):

openssl pkcs12 -in www.website.com.p12 -nocerts -out www.website.com.key.pem -nodes

Second, we take the p12 and export just the certificate (-nokeys):

openssl pkcs12 -in www.website.com.p12 -nokeys -out www.website.com.cert.pem -nodes

Third, we convert the private key into the format Stingray wants it in (-text):

openssl rsa -in www.website.com.key.pem -out www.website.com.key.txt.pem -text

You are left with a list of files; only two of them are needed to import into the Stingray:

www.website.com.key.txt.pem is the private key you need
www.website.com.cert.pem is the certificate you need

These can then be imported into the STM under Catalogues > SSL > Server Certs.

Hope this helps..

~ $ ./p12_convert.sh -h
./p12_convert.sh written by Aidan Clarke <aidan.clarke at riverbed.com>
Copyright Riverbed Technologies 2013

usage: ./p12_convert.sh -i inputfile -o outputfile

This script converts a p12 bundle to PEM formatted key and certificate ready for import into Stingray Traffic Manager

OPTIONS:
   -h      Show this message
   -i      Input file name
   -o      Output file name
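The attached p12_convert.sh wraps these three openssl invocations in shell. For illustration, here is a Python sketch that builds the same three command lines from a file name stub (the helper name is hypothetical; pass each list to subprocess.run() to actually execute it):

```python
# Sketch: build the three openssl command lines described above for a
# given bundle name stub, e.g. "www.website.com" -> www.website.com.p12.
def p12_convert_commands(basename: str):
    p12     = basename + ".p12"
    key     = basename + ".key.pem"
    cert    = basename + ".cert.pem"
    key_txt = basename + ".key.txt.pem"
    return [
        # 1. export just the private key, unencrypted
        ["openssl", "pkcs12", "-in", p12, "-nocerts", "-out", key, "-nodes"],
        # 2. export just the certificate
        ["openssl", "pkcs12", "-in", p12, "-nokeys", "-out", cert, "-nodes"],
        # 3. rewrite the key in the format Stingray expects
        ["openssl", "rsa", "-in", key, "-out", key_txt, "-text"],
    ]

for cmd in p12_convert_commands("www.website.com"):
    print(" ".join(cmd))
```

Keeping the commands as argument lists (rather than one shell string) avoids quoting problems if the bundle name contains spaces.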
Installation

Unzip the download (Stingray Traffic Manager Cacti Templates.zip).

Via the Cacti UI, "Import Templates" and import the Data, Host, and Graph templates. (The included graph templates are not required for functionality.)

Copy the files from the Cacti folder in the zip file to their corresponding directory in your Cacti install:
Stingray Global Values script query - /cacti/site/scripts/stingray_globals.pl
Stingray Virtual Server Table snmp query - cacti/resource/snmp_queries/stingray_vservers.xml

Assign the host template to Traffic Manager(s) and create new graphs.

* Due to the method used by Cacti for creating graphs and the related RRD files, my recommendation is NOT to create all graphs via the Device page. If you create all the graphs via the "*Create Graphs for this Host" link on the device page, Cacti will create an individual data source (an RRD file and SNMP query for each graph), resulting in a significant amount of wasted Cacti and device resources. Test this yourself with the Stingray SNMP graph.

My recommendation is to create a single initial graph for each Data Query or Data Input method (i.e. one for Virtual Servers and one for Global Values) and add any additional graphs via Cacti's Graph Management using the existing Data Source drop-downs.
Data Queries

Stingray Global Values script query - /cacti/site/scripts/stingray_globals.pl
* Perl script to query the STM for most of the sys.globals values

Stingray Virtual Server Table snmp query - cacti/resource/snmp_queries/stingray_vservers.xml
* Cacti XML snmp query for the Virtual Servers Table MIB

Graph Templates

Stingray_-_global_-_cpu.xml
Stingray_-_global_-_dns_lookups.xml
Stingray_-_global_-_dns_traffic.xml
Stingray_-_global_-_memory.xml
Stingray_-_global_-_snmp.xml
Stingray_-_global_-_ssl_-_client_cert.xml
Stingray_-_global_-_ssl_-_decryption_cipher.xml
Stingray_-_global_-_ssl_-_handshakes.xml
Stingray_-_global_-_ssl_-_session_id.xml
Stingray_-_global_-_ssl_-_throughput.xml
Stingray_-_global_-_swap_memory.xml
Stingray_-_global_-_system_-_misc.xml
Stingray_-_global_-_traffic_-_misc.xml
Stingray_-_global_-_traffic_-_tcp.xml
Stingray_-_global_-_traffic_-_throughput.xml
Stingray_-_global_-_traffic_script_data_usage.xml
Stingray_-_virtual_server_-_total_timeouts.xml
Stingray_-_virtual_server_-_connections.xml
Stingray_-_virtual_server_-_timeouts.xml
Stingray_-_virtual_server_-_traffic.xml

Sample Graphs (click image for full size)

Compatibility

This template has been tested with STM 9.4 and Cacti 0.8.8a.

Known Issues

Cacti will create unnecessary queries and data files if the "*Create Graphs for this Host" link on the device page is used. See the install notes for a workaround.

Conclusion

Cacti is sufficient for providing SNMP-based RRD graphs, but is limited in the information available, analytics, correlation, scale, stability and support. This is not just a shameless plug; Brocade offers a MUCH more robust set of monitoring and performance tools.