Pulse Secure vADC
Stingray Traffic Manager can run as either a forward or a reverse proxy. But what is a proxy? A reverse proxy? A forward proxy? And what can you do with such a feature?

Let's clarify what all these proxies are. In computing, a proxy is a service that accepts network connections from clients and then forwards them on to a server. So in essence, any load balancer or traffic manager is a kind of proxy. Web caches are another example of proxy servers: they keep a copy of frequently requested web pages and deliver these pages themselves, rather than forwarding the request on to the 'real' server.

Forward and Reverse Proxies

The difference between a 'forward' and a 'reverse' proxy is determined by where the proxy is running.

Forward proxies: Your ISP probably uses a web cache to reduce its bandwidth costs. In this case, the proxy sits between your computer and the whole Internet. This is a 'forward proxy'. The proxy has a limited set of users (the ISP's customers), and can forward requests on to any machine on the Internet (i.e. the web sites that the customers are browsing).

Reverse proxies: Alternatively, a company can put a web cache in the same data center as its web servers, and use it to reduce the load on its systems. This is a 'reverse proxy'. The proxy has an unlimited set of users (anyone who wants to view the web site), but proxies requests on to a specific set of machines (the web servers running the company's web site). This is a typical role for traffic managers - they are traditionally used as reverse proxies.

Using Stingray Traffic Manager as a Forward Proxy

You may use Stingray Traffic Manager to forward requests on to any other computer, not just to a pre-configured set of machines in a pool. TrafficScript is used to select the exact address to forward the request on to:

   pool.use( "Pool name", $ipaddress, $port );

The pool.use() function is used in the same way as you would normally pick a pool of servers for Stingray Traffic Manager to load balance across. The extra parameters specify the exact machine to use. This machine does not have to belong to the named pool; the pool name is there just so Stingray Traffic Manager can use its settings for the connection (e.g. timeout settings, SSL encryption, and so on).

We refer to this technique as 'Forward Proxy mode', or 'Forward Proxy' for short.

What use is a Forward Proxy?

Combined with TrafficScript, the Forward Proxy feature gives you complete control over the load balancing of requests. For example, you could use Stingray Traffic Manager to load balance RDP (Remote Desktop Protocol), using TrafficScript to pick out the user name of a new connection, look the name up in a database and find the hostname of a desktop to allocate for that user. A sketch of this pattern is shown below.

Forward proxying also allows Stingray Traffic Manager to be used nearer the clients on a network. With some TrafficScript, Stingray Traffic Manager can operate as a caching web proxy, speeding up local Internet usage. You can then tie in other Stingray Traffic Manager features like bandwidth shaping, service level monitoring and so on. TrafficScript response rules can then filter the incoming data if needed.
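For illustration, here is a minimal sketch of that desktop-allocation pattern for a generic (non-HTTP) virtual server. It assumes the user name has already been extracted into $user, and that the user-to-desktop mapping lives in a 'desktops.txt' resource file of 'username hostname' lines rather than a database; the file name, the 'RDP Desktops' pool name and the omitted protocol parsing are all hypothetical:

   # Assumption: $user has already been extracted from the new connection
   # (the protocol parsing details are omitted from this sketch)
   $desktops = resource.get( "desktops.txt" );   # hypothetical 'username hostname' lines
   $desktop = "";
   if( string.regexmatch( $desktops, '\n' . $user . '\s+([^\n]+)' )
       || string.regexmatch( $desktops, '^' . $user . '\s+([^\n]+)' ) ) {
      $desktop = string.trim( $1 );
   }

   if( $desktop == "" ) {
      # No desktop is allocated for this user; drop the connection
      connection.close( "" );
   } else {
      # Resolve the desktop's hostname and forward the connection to it, borrowing
      # the connection settings of the hypothetical 'RDP Desktops' pool (RDP port 3389)
      $ip = net.dns.resolveHost( $desktop );
      pool.use( "RDP Desktops", $ip, 3389 );
   }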
Example: A web caching proxy using Stingray Traffic Manager and TrafficScript

You will need to set up Stingray Traffic Manager with a virtual server listening for HTTP proxy traffic. Set HTTP as the protocol, and enable web caching. Also, be sure to disable Stingray's "Location Header rewriting" on the connection management page. Then you will need to add a TrafficScript rule to examine the incoming connections and pick a suitable machine. Here's how you would build such a rule:

   # Put a sanity check in the rule, to ensure that only proxy traffic is being received:
   $host = http.getHostHeader();
   if( http.headerExists( "X-Forwarded-For" ) || $host == "" ) {
      http.sendResponse( "400 Bad request", "text/plain",
         "This is a proxy service, you must send proxy requests", "" );
   }

   # Trim the leading http://host from the URL if necessary
   $url = http.getRawUrl();
   if( string.startswith( $url, "http://" ) ) {
      $slash = string.find( $url, "/", 8 );
      $url = string.substring( $url, $slash, -1 );
   }
   http.setPath( string.unescape( $url ) );

   # Extract the port out of the Host: header, if it is there
   $pos = string.find( $host, ":" );
   if( $pos >= 0 ) {
      $port = string.skip( $host, $pos + 1 );
      $host = string.substring( $host, 0, $pos - 1 );
   } else {
      $port = 80;
   }

   # We need to alter the HTTP request to supply the true IP address of the client
   # requesting the page, and we need to tweak the request to remove any proxy-specific headers.
   http.setHeader( "X-Forwarded-For", request.getRemoteIP() );
   http.removeHeader( "Range" ); # Removing this header will make the request more cacheable
   http.removeHeader( "Proxy-Connection" );

   # The user might have requested a page that is unresolvable, e.g.
   # http://fakehostname.nowhere/. Let's resolve the IP and check
   $ip = net.dns.resolveHost( $host );
   if( $ip == "" ) {
      http.sendResponse( "404 Unknown host", "text/plain",
         "Failed to resolve " . $host . " to an IP address", "" );
   }

   # The last task is to forward the request on to the target website
   pool.use( "Forward Proxy Pool", $ip, $port );

Done! Now try using the proxy: go to your web browser's settings page or your operating system's network configuration (as appropriate) and configure an HTTP proxy. Fill in the hostname of your Stingray Traffic Manager and the port number of the virtual server running this TrafficScript rule. Now try browsing to a few different web sites. You will be able to see the URLs on the Current Activity page in the UI, and the Web Cache page will show you details of the content that has been cached by Stingray:

   - The 'Recent Connections' report lists connections proxied to remote sites
   - The Content Cache report lists the resources that Stingray has cached locally

This is just one use of the forward proxy. You could easily use the feature for other purposes, e.g. email delivery, SSL-encrypted proxies, and so on. Try it and see!
SOA applications need just as much help as traditional web applications when it comes to reliability, performance and traffic management. This article provides four down-to-earth TrafficScript examples to show you how you can inspect XML messages and manage SOA transactions.

Why is XML difficult?

SOA traffic generally uses the SOAP protocol, sending XML data over HTTP. It's not possible to reliably inspect or modify the XML data using simple tools like search-and-replace or regular expressions. In Computer Science terms, regular expressions match regular languages, whereas XML is a much more structured context-free language. Instead of regular expressions, standards like XPath and XSLT are used to inspect and manipulate XML data.

Using TrafficScript rules, Stingray can inspect the payload of a SOAP request or response and use XPath operations to extract data from it, making traffic management decisions on this basis. Stingray can check the validity of XML data, and use XSLT operations to transform the payload into a different dialect of XML. The following four examples illustrate traffic inspection and management in an SOA application context. Other examples of XML processing include embedding RSS data in an HTML document.

Routing SOAP traffic

Let's say that the network is handling requests for a number of different SOAP methods. The traffic manager is the single access point - all SOAP traffic is directed to it. Behind the scenes, some of the methods have dedicated SOAP servers because they are particularly resource intensive; all other methods are handled by a common set of servers.

The following example uses Stingray's pools. A pool is a group of servers that provide the same service. Individual pools have been created for some SOA components, and a 'SOAP-Common-Servers' pool contains the nodes that host the common SOA components.

   # Obtain the XML body of the SOAP request
   $request = http.getBody();

   $namespace = "xmlns:SOAP-ENV=\"http://schemas.xmlsoap.org/soap/envelope/\"";
   $xpath = "/SOAP-ENV:Envelope/SOAP-ENV:Body/*[1]";

   # Extract the SOAP method using an XPath expression
   $method = xml.XPath.matchNodeSet( $request, $namespace, $xpath );

   # For 'special' SOAP methods, we have a dedicated pool of servers for each
   if( pool.getActiveNodes( "SOAP-".$method ) > 0 ) {
      pool.select( "SOAP-".$method );
   } else {
      pool.select( "SOAP-Common-Servers" );
   }

TrafficScript: Routing SOAP requests according to the method

Why is this useful? This allows you to deploy SOA services in a very flexible manner. When a new instance of a service is added, you do not need to modify every caller that may invoke this service. Instead, you need only add the service endpoint to the relevant pool. You can rapidly move a service from one server to another, for resourcing or security reasons (red zone, green zone), and an application can easily be built from services that are found in different locations.

Ensuring fair access to resources

With Stingray, you can also monitor the performance of each pool of servers to determine which SOAP methods are running the slowest. This can help troubleshoot performance problems and inform decisions to re-provision resources where they are needed the most. You can shape traffic - bandwidth or transactions per second - to limit the resources used and smooth out flash floods of traffic. With this programmability, you can shape different types of traffic in different ways.
For example, the following TrafficScript code sample extracts a 'username' node from the SOAP request. It then rate-shapes SOAP requests so that each remote source (identified by remote IP address and 'username' node value) can submit SOAP requests at a maximum of 60 times per minute:

   # Obtain the source of the request
   $ip = request.getRemoteIP();

   # Obtain the XML body of the SOAP request
   $request = http.getBody();

   $namespace = "xmlns:SOAP-ENV=\"http://schemas.xmlsoap.org/soap/envelope/\"";
   $xpath = "/SOAP-ENV:Envelope/SOAP-ENV:Body/*/username/text()";

   # Extract the username using an XPath expression
   $username = xml.XPath.matchNodeSet( $request, $namespace, $xpath );

   # $key uniquely identifies this type of request from this source.
   $key = $ip . ", " . $username;

   # The 'transactions' rate shaping class limits each type to 60 per minute
   rate.use( "transactions", $key );

TrafficScript: Rate-shaping different users of SOAP traffic

Why is this important? An SOA component may be used by multiple different SOA applications. Different applications may have different business priorities, so you might wish to prioritize some requests to a component over others. Applying 'service governance' policies using Stingray's rate shaping functionality ensures that all SOA applications get fair and appropriate access to critical components, and that no one application can overwhelm a component to the detriment of other applications. This can be compared to time-sharing systems - each SOA application is a different 'user', and users can be granted specific access to resources, with individual limits where required. When some SOA applications are externally accessible (via a web-based application, for example), this is particularly important because a flash flood or malicious denial-of-service attack could ripple through, affecting many internal SOA components and internal applications.

Securing Traffic

Suppose that someone created a web services component for a travel company that enumerated all of the possible flights from one location to another on a particular day. The caller of the component could specify how many hops they were prepared to endure on the journey. Unfortunately, once the component was deployed, a serious bug was found. If a caller asked for a journey with the same start and finish, the component got stuck in an infinite loop. If a caller asked for a journey with a large number of hops (1000 hops, perhaps), the computation cost grew exponentially, creating a simple, effective denial-of-service attack.

Fixing the component is obviously the preferred solution, but it's not always possible to do so in a timely fashion. Often, procedural barriers make it difficult to make changes to a live application. However, by controlling and manipulating the SOA requests as they travel over the network, you can very quickly roll out a security rule on your SDC (Service Delivery Controller) to drop or modify the SOAP request.
Here's a snippet:

   $request = http.getBody();
   $from = xml.XPath.matchNodeSet( $request, $namespace, "//from/text()" );
   $dest = xml.XPath.matchNodeSet( $request, $namespace, "//dest/text()" );

   # The error response; we can read a precanned response from disk and return
   # it as a SOAP response
   if( $from == $dest ) {
      $response = resource.get( "FlightPathFaultResponse.xml" );
      connection.sendResponse( $response );
   }

   $hops = xml.XPath.matchNodeSet( $request, $namespace, "//maxhops/text()" );
   if( $hops > 3 ) {
      # Apply an XSLT that sets the hops node to 3
      $transform = resource.get( "FlightPath3Hops.xslt" );
      http.setBody( xml.XSLT.transform( $request, $transform ) );
   }

TrafficScript: Checking validity of SOAP requests

Why is this important? Using the Service Delivery Controller to manage and rewrite SOA traffic is a very rapid and lightweight alternative to rewriting SOA components. Patching the application in this way may not be a permanent solution, although it's often sufficient to resolve problems. The real benefit is that once a fault is detected, it can be resolved quickly, without requiring in-depth knowledge of the application. Development staff need not be pulled away from other projects immediately. A full application-level fix can wait until the staff and resources are available; for example, at the next planned update of the component code.

Validating SOAP responses

If a SOAP server encounters an error, it may still return a valid SOAP response with a 'Fault' element inside. If you can look deep inside the SOAP response, you've got a great opportunity to work around such transient application errors. If a server returns a fault message where the faultcode indicates there was a problem with the server, wouldn't it be great if you could retry the request against a different SOAP server in the cluster?

   $response = http.getResponseBody();

   $ns = "xmlns:SOAP-ENV=\"http://schemas.xmlsoap.org/soap/envelope/\"";
   $xpath = "/SOAP-ENV:Envelope/SOAP-ENV:Body/SOAP-ENV:Fault/faultcode/text()";
   $faultcode = xml.XPath.matchNodeSet( $response, $ns, $xpath );

   if( string.endsWith( $faultcode, "Server" ) ) {
      if( request.retries() < 2 ) {
         request.avoidNode( connection.getNode() );
         request.retry();
      }
   }

TrafficScript: If we receive a Server fault code in the SOAP response, retry the request at most 2 times against different servers

Why is this important? This particular example shows how the error checking used by an SDC can be greatly extended to detect a wide range of errors, even in responses that appear "correct" to less intelligent traffic managers. It is one example of a wide range of applications where responses can be verified, scrubbed and filtered. Undesirable responses may include fault codes, sensitive information (like credit card or social security numbers), or even incorrectly-localized or formatted responses that may be entirely legitimate, but cannot be interpreted by the calling application.

Pinpointing errors in a loosely-coupled SOA application is a difficult and invasive process, often involving the equivalent of adding 'printf' debug statements to the code of individual components. By inspecting responses at the network level, it becomes much easier to investigate and diagnose application problems and then work around them, either by retrying requests or by transforming and rewriting responses as outlined in the previous example.
Bandwidth can be expensive, so it is annoying when other websites steal it from you. A common problem is 'hot-linking' or 'deep-linking': placing images from your site on someone else's pages. Every time someone views their website, you pick up the bandwidth tab, and users of your own website may be impacted because of the reduced bandwidth. So how can this be stopped?

When a web browser requests a page or an image from your site, the request includes a 'Referer' header (the misspelling is required by the specs!). This referrer gives the URL of the page that linked to the file. So, if you go to https://splash.riverbed.com/, your browser will load the HTML for the page, and then load all the images. Each time it asks the web server for an image, it will report that the referrer was https://splash.riverbed.com/. We can use this referrer header to check that the image is being loaded for your own site, and not for someone else's. If another website embedded a link to one of these images, the Referer: header would contain the URL of their site instead. This site has a more in-depth discussion of bandwidth-stealing; the Stingray approach is an alternative to the Apache solution it presents.

Solving the problem with RuleBuilder

RuleBuilder is a simple, GUI front-end to TrafficScript that lets you create straightforward 'if condition then action'-style policies. Use the Stingray Admin Server to create a new RuleBuilder rule, then associate it with your virtual server, configuring it to run as a Request Rule.

All done. This rule will catch any hotlink requests for content where the URL ends with '.gif', '.jpg' or '.png', and redirect to the image:

   http://upload.wikimedia.org/wikipedia/commons/thumb/2/2a/Stop_sign.svg/200px-Stop_sign.svg.png

TrafficScript improvements

We can make some simple improvements to this rule:

   - We can provide a simple list of file extensions to check against, rather than using a regular expression. This is easier to manage, though not necessarily faster.
   - We can check that the referrer matches the host header for the site. That is a simple approach that avoids embedding the domain (e.g. riverbed.com) in the rule, making it less likely to surprise you when you apply the rule to a different website.

First convert the rule to TrafficScript. That will reveal the implementation of the rule, and you can edit the TrafficScript version to implement the additional features you require:

   $headerReferer = http.getheader( "Referer" );
   $path = http.getpath();

   if( string.contains( $headerReferer, "riverbed.com" ) == 0
       && $headerReferer != ""
       && string.regexmatch( $path, "\\.(jpg|gif|png)$" ) ) {
      http.redirect( "http://upload.wikimedia.org/wikipedia/commons/thumb/2/2a/Stop_sign.svg/200px-Stop_sign.svg.png" );
   }

The RuleBuilder rule, converted to TrafficScript

Edit the rule so that it resembles the following:

   $extensions = [ 'jpg', 'jpeg', 'gif', 'png', 'svg' ];
   $redirectto = "http://upload.wikimedia.org/wikipedia/commons/thumb/2/2a/Stop_sign.svg/200px-Stop_sign.svg.png";

   #######################################

   $referer = http.getheader( "Referer" );
   $host    = http.getHostHeader();
   $path    = http.getpath();

   $ext = "";
   if( string.regexMatch( $path, '\.(.*?)$' ) ) $ext = $1;

   if( array.contains( $extensions, $ext )
       && $referer != ""
       && !string.contains( $referer, $host )
       && $path != $redirectto ) {
      http.redirect( $redirectto );
   }

Alternate rule implementation
On 27th February 2006, we took part in VMware's launch of their Virtual Appliance initiative. Riverbed Stingray (or 'Zeus Extensible Traffic Manager / ZXTM') was the first ADC product packaged as a virtual appliance. We were delighted to be a launch partner with VMware, and to gain certification in November 2006 when they opened up their third-party certification program.

We had to synchronize the release of our own community web content with VMware's website launch, which was scheduled for 9pm PST. That's 5am in our Cambridge UK dev labs!

With a simple bit of TrafficScript, we were able to test and review our new web content internally before the release, and make the new content live at 5am while everyone slept soundly in their beds.

Our problem...

The community website we operated was a reasonably sophisticated website. It was based on a blogging engine, and the configuration and content for the site was split between the filesystem and a local database. Content was served up from the database via the website and an RSS feed. To add a new section with new content to the site, it was necessary to coordinate a number of changes to the filesystem and the database together.

We wanted to make the new content live for external visitors at 5am on Monday 27th, but we also wanted the new content to be visible internally and to selected partners before the release, so that we could test and review it.

The obvious solution of scripting the database and filesystem changes to take place at 5am was not satisfactory. It was hard to test on the live site, and it did not let us publish the new site internally beforehand.

How we did it

We had a couple of options. We have a staging system that we use to develop new code and content before putting it on the live site. This system has its own database and filesystem, and when we publish a change, we copy the new settings to the live site manually. We could have elected to run the new site on the staging system, and use Stingray to direct traffic to the live or staging server as appropriate:

   if( $usenew ) {
      pool.use( "Staging server" );
   } else {
      pool.use( "DMZ Server" );
   }

However, this option would have exposed our staging website (running on a developer's desktop behind the DMZ) to live traffic, and created a vulnerable single point of failure. Instead, we modified the current site so that it could select the database to use based on the presence of an HTTP header:

   $usenew = 0;

   # For requests from internal users, always use the new site
   $remoteip = request.getremoteip();
   if( string.ipmaskmatch( $remoteip, "10.0.0.0/8" ) ) {
      $usenew = 1;
   }

   # If it's after 5am on the 27th, always use the new site
   # Fix this before the 1st of the following month!
   if( sys.time.monthday() == 27 && sys.time.hour() >= 5 ) {
      $usenew = 1;
   }
   if( sys.time.monthday() > 27 ) {
      $usenew = 1;
   }

   http.removeHeader( "NEWSITE" );
   if( $usenew ) {
      http.addHeader( "NEWSITE", "1" );
   }

PHP code

   // The PHP code overrides the database host
   $DB_HOST = "livedb.internal.zeus.com";
   if( isset( $_ENV['HTTP_NEWSITE'] ) && ( $_ENV['HTTP_HOST'] == 'knowledgehub.zeus.com' ) ) {
      $DB_HOST = "stagedb.internal.zeus.com";
   }

This way, only the secured DMZ webserver processed external traffic, but it would use the internal staging database for the new content.

Did it work?

Of course it did! Because we used Stingray to categorize the traffic, we could safely test the new content, confident that the switchover would be seamless.
No one was awake at 5am when the site went live, but traffic to the site jumped after the launch.
Introduction

Many DDoS attacks work by exhausting the resources available to a website for handling new connections. In most cases, the tool used to generate this traffic can make HTTP requests and follow HTTP redirect messages, but lacks the sophistication to store cookies. As such, one of the most effective ways of combating DDoS attacks is to drop connections from clients that don't store cookies during a redirect.

Before you Proceed

It's important to point out that the solution described here may prevent at least the following legitimate uses of your website (and possibly others):

   - Visits by user-agents that do not support cookies, or where cookies are disabled for any reason (such as privacy); some people may think that your website has gone down!
   - Visits by internet search engine web-crawlers; this will prevent new content on your website from appearing in search results!

If either of the above items concerns you, I would suggest seeking advice (either from the community, or through your technical support channels).

Solution Planning

Implementing a solution in pure TrafficScript will prevent traffic from reaching the web servers, but attackers are still free to consume connection-handling resources on the traffic manager. To make the solution more robust, we can use iptables to block traffic a bit earlier in the network stack. This presents us with a couple of challenges:

   - TrafficScript cannot execute shell commands, so how do we add rules to iptables?
   - Assuming we don't want to permanently block all IP addresses that are involved in a DDoS attack, how can we expire the rules?

Even though TrafficScript cannot directly run shell commands, the Event Handling system can. We can use the event.emit() TrafficScript function to send jobs to a custom event handler shell script that adds an iptables rule to block the offending IP address. To expire each rule we can use the at command to schedule a job that removes it. This hands the scheduling and running of that job over to the OS (which is something it was designed to do).

The overall plan looks like this:

   - Write a TrafficScript rule that emits a custom event when it detects a client that doesn't support cookies and redirects.
   - Write a shell script that takes as its input: an --eventtype argument (the event handler includes this automatically), a --duration argument (to define the length of time that an IP address stays blocked for), and a string of information that includes the IP address that is to be blocked.
   - Create an event handler for the events that our TrafficScript is going to emit.

TrafficScript Code

   $cookie = http.getCookie( "DDoS-Test" );

   if ( ! $cookie ) {

      # Either it's the visitor's first time to the site, or they don't support cookies
      $test = http.getFormParam( "cookie-test" );

      if ( $test != "1" ) {

         # It's their first time. Set the cookie, redirect to the same page
         # and add a query parameter so we know they have been redirected.
         # Note: if they supplied a query string or used a POST,
         # we'll respond with a bare redirect
         $path = http.getPath();

         http.sendResponse( "302 Found", "text/plain", "",
            "Location: " . string.escape( $path ) .
            "?cookie-test=1\r\nSet-Cookie: DDoS-Test=1" );

      } else {

         # We've redirected them and attempted to set the cookie, but they have not
         # accepted. Either they don't support cookies, or (more likely) they are a bot.

         # Emit the custom event that will trigger the firewall script.
         event.emit( "firewall", request.getremoteip() );

         # Pause the connection for 100 ms to give the firewall time to catch up.
         # Note: This may need tuning.
         connection.sleep( 100 );

         # Close the connection.
         connection.close( "HTTP/1.1 200 OK\n" );
      }
   }

Installation

This code will need to be applied to the virtual server as a request rule. To do that, take the following steps:

   1. In the traffic manager GUI, navigate to Catalogs → Rules
   2. Enter ts-firewaller in the Name field
   3. Click the Use TrafficScript radio button
   4. Click the Create Rule button
   5. Paste the code from the attached ts-firewaller.rts file
   6. Click the Save button
   7. Navigate to the Virtual Server that you want to protect (Services → <Service Name>)
   8. Click the Rules link
   9. In the Request Rules section, select ts-firewaller from the drop-down box
   10. Click the Add Rule button

Your virtual server should now be configured to execute the rule.

Shell Script Code

   #!/bin/bash

   # Use getopt to collect parameters.
   params=`getopt -o e:,d: -l eventtype:,duration: -- "$@"`

   # Evaluate the set of parameters.
   eval set -- "$params"

   while true; do
     case "$1" in
     --duration ) DURATION="$2"; shift 2 ;;
     --eventtype ) EVENTTYPE="$2"; shift 2 ;;
     -- ) shift; break ;;
     * ) break ;;
     esac
   done

   # Awk the IP address out of ARGV
   IP=$(echo "${BASH_ARGV}" | awk '{ print ( $(NF) ) }')

   # Add a new rule to the INPUT chain.
   iptables -A INPUT -s ${IP} -j DROP &&

   # Queue a new job to delete the rule after DURATION minutes.
   # Prevents a warning about executing the command using /bin/sh from
   # going in the traffic manager event log.
   echo "iptables -D INPUT -s ${IP} -j DROP" | at -M now + ${DURATION} minutes &> /dev/null

Installation

To use this script as an action program, you'll need to upload it via the GUI. To do that, take the following steps:

   1. Open a new file with the editor of your choice (depends on what OS you're using)
   2. Copy and paste the script code into the editor
   3. Save the file as ts-firewaller.sh
   4. In the traffic manager UI, navigate to Catalogs → Extra Files → Action Programs
   5. Click the Choose File button
   6. Select the ts-firewaller.sh file that you just created
   7. Click the Upload Program button

Event Handler

Now that we have a rule that emits a custom event, and a script that we can use as an action program, we can configure the event handler that will tie the two together.

First, we need to create a new event type:

   1. In the traffic manager's UI, navigate to System → Alerting
   2. Click the Manage Event Types button
   3. Enter Firewall in the Name field
   4. Click the Add Event Type button
   5. Click the + next to the Custom Events item in the event tree
   6. Click the Some custom events... radio button
   7. Enter firewall in the empty field
   8. Click the Update button

Now that we have an event type, we need to create a new action:

   1. In the traffic manager UI, navigate to System → Alerting
   2. Click on the Manage Actions button
   3. In the Create New Action section, enter firewall in the Name field
   4. Click the Program radio button
   5. Click the Add Action button
   6. In the Program Arguments section, enter duration in the Name field
   7. Enter "Determines the length of time in minutes that an IP will be blocked for" in the Description field
   8. Click the Update button
   9. Enter 10 in the newly-appeared arg!duration field
   10. Click the Update button

Now that we have an action configured, the only thing left to do is to connect the custom event to the new action:

   1. In the traffic manager UI, navigate to System → Alerting
   2. In the Event Type column, select firewall from the drop-down box
   3. In the Actions column, select firewall from the drop-down box
   4. Click the Update button

That concludes the installation steps; this solution should now be live!

Testing

Testing the functionality is pretty simple for this solution. Basically, you can monitor the state of iptables while you run specific commands from a command line. To do this, ssh into your traffic manager and execute iptables -L as root. You should check this after each of the upcoming tests.

Since I'm using a Linux machine for testing, I'm going to use the curl command to send crafted requests to my traffic manager. The three scenarios that I want to test are:

   1. Initial visit: the user-agent has no query string and no cookie
   2. Successful second visit: the user-agent has a query string and has provided the correct cookie
   3. Failed second visit: the user-agent has a query string (indicating that it was redirected), but hasn't provided a cookie

The respective curl commands that need to be run are:

   curl -v http://<tmhost>/
   curl -v http://<tmhost>/?cookie-test=1 -b "DDoS-Test=1"
   curl -v http://<tmhost>/?cookie-test=1

Note: If you run these commands from your workstation, you will be unable to connect to the traffic manager in any way for a period of 10 minutes!
Stingray's TrafficScript rules can inspect and modify an entire request and response stream. This provides many opportunities for securing content against unauthorized breaches.

For example, over a period of 9 months, a hacker named Nicolas Jacobsen used a compromised customer account on T-Mobile's servers to exploit a vulnerability and leech a large amount of sensitive information (see http://www.securityfocus.com/news/10271). This information included US Secret Service documents and customer records including their Social Security Numbers.

This article describes how to use a simple TrafficScript rule to detect and mask out suspicious data in a response.

The TrafficScript rule

Here is a simple rule to remove social security numbers from any web documents served from a CGI script:

   if( string.contains( http.getPath(), "/cgi-bin/" ) ) {
      $payload = http.getResponseBody();
      $new_response = string.regexsub( $payload, "\\d{3}-\\d{2}-\\d{4}",
                              "xxx-xx-xxxx", "g" );
      if( $new_response != $payload )
         http.setResponseBody( $new_response );
   }

Configure this rule as a 'Response Rule' for a virtual server that handles HTTP traffic.

How it works

How does this simple-looking TrafficScript rule work? The specification for the rule is: if the request is for a resource in /cgi-bin/, then mask anything in the response that looks like a social security number. In this case, we recognize social security numbers as sequences of digits and '-' (for example, '123-45-6789') and we replace them with 'xxx-xx-xxxx'.

1. If the request is for a resource in /cgi-bin/:

   if( string.contains( http.getPath(), "/cgi-bin/" ) ) {

The http.getPath() function returns the path of the HTTP request, having removed any %-encoding which obscures the request. You can use this function in a request or response rule. The string.contains() test checks whether the request is for a resource in /cgi-bin/.

2. Get the entire response:

   $payload = http.getResponseBody();

The http.getResponseBody() function reads the entire HTTP response. It seamlessly handles cases where no content length is provided, and it dechunks a chunk-transfer-encoded response - these are common cases when handling responses from dynamic web pages and applications. It interoperates perfectly with performance features like HTTP keepalive connections and pipelined requests.

3. Replace any social security numbers:

   $new_response = string.regexsub( $payload, "\\d{3}-\\d{2}-\\d{4}",
                           "xxx-xx-xxxx", "g" );

The string.regexsub() function applies a regular expression substitution to the $payload data, replacing potential social security numbers with anonymous data. Regular expressions are commonly used to inspect and manipulate textual data, and Stingray supports the full POSIX regular expression specification.

4. Change the response:

   if( $new_response != $payload )
      http.setResponseBody( $new_response );

The http.setResponseBody() function replaces the HTTP response with the supplied data. You can safely replace the response with a message of a different length - Stingray will take care of the Content-Length header, as well as compressing and SSL-encrypting the response as required. http.setResponseBody() interoperates with keepalives and pipelined requests.

In action...
Here is the vulnerable application, before (left) and after (right) the TrafficScript rule is applied - the rule masks social security numbers with a string of 'xxx'.

Summary

Although Stingray is not a total application security solution (look to the Stingray Application Firewall for this), this example demonstrates how Stingray Traffic Manager can be used as one layer in a larger belt-and-braces system. Stingray is one location where security measures can be very easily added - perhaps as a rapid reaction to a vulnerability elsewhere in the network, patching over the problem until a more permanent solution can be deployed.

In a real deployment, you might do something firmer than masking content. For example, if a web page contains unexpected, sensitive data it might be best just to forcibly redirect the client to the home page of your application, to avoid the risk of any sensitive content being leaked.
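By way of illustration, here is a minimal sketch of that firmer approach. It assumes that http.sendResponse() may be used in a response rule to replace the server's response, and that '/' is the application's home page:

   if( string.contains( http.getPath(), "/cgi-bin/" ) ) {
      $payload = http.getResponseBody();
      # If the response appears to contain a social security number, discard it
      # and send the client back to the (assumed) home page instead
      if( string.regexmatch( $payload, "\\d{3}-\\d{2}-\\d{4}" ) ) {
         http.sendResponse( "302 Found", "text/plain", "",
            "Location: /" );
      }
   }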
Google Analytics is a great tool for monitoring and tracking visitors to your web sites. Perhaps best of all, it's entirely web based - you only need a web browser to access the analysis services it provides.

To enable tracking for your web sites, you need to embed a small fragment of JavaScript code in every web page. This extension makes that easy, by inspecting all outgoing content and inserting the code into each HTML page, while honoring the user's 'Do Not Track' preference.

Installing the Extension

Requirements

This extension has been tested against Stingray Traffic Manager 9.1, and should function with all versions from 7.0 onwards.

Installation

Copy the contents of the User Analytics rule below into a new response rule and associate it with your virtual server. Verify that the extension is functioning correctly by accessing a page through the traffic manager and using 'View Source' to confirm that the Google Analytics code has been added near the end of the document head, just before the closing </head> tag.

User Analytics rule

   # Edit the following to set your profile ID
   $defaultProfile = "UA-123456-1";

   # You may override the profile ID on a site-by-site basis here
   $overrideProfile = [
      "support.mysite.com" => "UA-123456-2",
      "secure.mysite.com" => "UA-123456-3"
   ];

   # End of configuration settings

   # Only process text/html responses
   $contentType = http.getResponseHeader( "Content-Type" );
   if( !string.startsWith( $contentType, "text/html" )) break;

   # Honor any Do-Not-Track preference
   $dnt = http.getHeader( "DNT" );
   if ( $dnt == "1" ) break;

   # Determine the correct $uacct profile ID
   $uacct = $overrideProfile[ http.getHostHeader() ];
   if( !$uacct ) $uacct = $defaultProfile;

   # See http://www.google.com/support/googleanalytics/bin/answer.py?answer=174090
   $script = '
   <script type="text/javascript">
      var _gaq = _gaq || [];
      _gaq.push(["_setAccount", "' . $uacct . '"]);
      _gaq.push(["_trackPageview"]);
      (function() {
         var ga = document.createElement("script");
         ga.type = "text/javascript";
         ga.async = true;
         ga.src = ("https:" == document.location.protocol ? "https://ssl" : "http://www") + ".google-analytics.com/ga.js";
         var s = document.getElementsByTagName("script")[0];
         s.parentNode.insertBefore(ga, s);
      })();
   </script>';

   $body = http.getResponseBody();

   # Find the location of the closing </head> tag
   $i = string.find( $body, "</head>" );
   if( $i == -1 ) $i = string.findI( $body, "</head>" );
   if( $i == -1 ) break; # Give up

   http.setResponseBody( string.left( $body, $i ) . $script . string.skip( $body, $i ) );

For some extensions to this rule, check out Faisal Memon's article Google Analytics revisited.
Introduction

While I was thinking of writing an article on how to use the traffic manager to satisfy EU cookie regulations, I figured "somebody else has probably done all the hard work". Sure enough, a quick search turned up an excellent and (more importantly) free utility called cookiesDirective.js. In addition to cookiesDirective.js being patently nifty, its website left me with a nostalgic craving for a short, wide glass of milk.

Background

If you're reading this article, you probably have a good idea of why you might want (need) to disclose to your users that your site uses cookies. You should visit http://cookiesdirective.com to gain a richer understanding of what the cookiesDirective script actually does and why you might want to use it. For the impatient, let's just assume that you're perfectly happy for random code to run in your visitors' browsers.

Requirements

   - A website.
   - A TrafficScript-enabled traffic manager, configured to forward traffic to your web servers.

Preparation

According to the directions, one must follow "just a simple 3-step process" in order to use cookiesDirective.js:

   1. Move cookie-generating JavaScript in your page (such as Google Analytics) into a separate file, and pass the name of the file to a function that causes it to get loaded before the closing </head> tag. Basically, this makes it possible to display the cookie disclosure message before the cookie-generating code gets run by the browser. That much moving of code around is not within the scope of this article. For now, let's assume that displaying the message to the user is "good enough".
   2. Add a snippet of code to the end of the HTML body that causes the browser to download cookiesDirective.js. In the example code, it gets downloaded directly from cookiesdirective.com, but you should really download it and host it on your own web server if you're going to be using it in production.
   3. Add another snippet of code that runs the JavaScript. This is the bit that causes the popup to appear.

The Goods

   # The path to your home page?
   $homepath = '/us/';

   # The location on the page where the cookie notification should appear (top or bottom)?
   $noticelocation = 'bottom';

   # The URL that contains your privacy statement.
   $privacyURL = 'http://www.riverbed.com/us/privacy_policy.php';

   # ==== DO NOT EDIT BELOW THIS LINE! (unless you really want to) ====

   sub insert_before_endbody($response, $payload){
      # Search from the end of the document for the closing body tag.
      $idx = string.findr($response, "</body>");

      # Insert the payload.
      $response = string.substring($response, 0, $idx-1) . $payload . string.substring($response, $idx, -1);

      # Return the response.
      return $response;
   }

   $path = http.getpath();

   if ( $path == $homepath ){
      # Initialize the response body.
      $response = http.getresponsebody();

      # Cookie-generating JavaScript gets loaded in this function.
      $wrapper = '<script type="text/javascript">function cookiesDirectiveScriptWrapper(){}</script>';

      # Imports the cookiesdirective code.
      # FIXME: Download the package and host it locally!
      # (src path shown is indicative - check cookiesdirective.com for the current script URL)
      $code = '<script type="text/javascript" src="http://cookiesdirective.com/cookiesdirective.js"></script>';

      # Executes the cookiesdirective code, providing the appropriate arguments.
      $run = '<script type="text/javascript">cookiesDirective(\'' . $noticelocation . '\',0,\'' . $privacyURL . '\',\'\');</script>';

      # Insert everything into the response body.
      foreach($snippet in [$wrapper, $code, $run]){
         $response = insert_before_endbody($response, $snippet);
      }

      # Update the response data.
      http.setresponsebody($response);
   }

This particular example works on the main Riverbed site. To get the code to work, you'll need to change at least the $homepath and $privacyURL variables. If you want the notice to appear at the top of the page, you can change the $noticelocation variable.

NOTE: Remember to apply this rule to your virtual server as a response rule!
This article presents a TrafficScript library that gives you easy and efficient access to tables of data stored as files in the Stingray configuration: libTable.rts

libTable.rts

Download the following TrafficScript library from github and import it into your Rules Catalog, naming it libTable.rts:

   # libTable.rts
   #
   # Efficient lookups of key/value data in large resource files (>100 lines)
   # Use getFirst() and getNext() to iterate through the table

   sub lookup( $filename, $key ) {
      update( $filename );
      $pid = sys.getPid();
      return data.get( "resourcetable".$pid.$filename."::".$key );
   }

   sub getFirst( $filename ) {
      update( $filename );
      $pid = sys.getPid();
      return data.get( "resourcetable".$pid.$filename.":first" );
   }

   sub getNext( $filename, $key ) {
      update( $filename );
      $pid = sys.getPid();
      return data.get( "resourcetable".$pid.$filename.":next:".$key );
   }

   # Internal functions

   sub update( $filename ) {
      $pid = sys.getPid();
      $md5 = resource.getMD5( $filename );
      if( $md5 == data.get( "resourcetable".$pid.$filename.":md5" ) ) return;

      data.reset( "resourcetable".$pid.$filename.":" );
      data.set( "resourcetable".$pid.$filename.":md5", $md5 );

      $contents = resource.get( $filename );
      $pkey = "";

      foreach( $l in string.split( $contents, "\n" ) ) {
         if( ! string.regexmatch( $l, "(.*?)\\s+(.*)" ) ) continue;
         $key = string.trim( $1 );
         $value = string.trim( $2 );
         data.set( "resourcetable".$pid.$filename."::".$key, $value );
         if( !$pkey ) {
            data.set( "resourcetable".$pid.$filename.":first", $key );
         } else {
            data.set( "resourcetable".$pid.$filename.":next:".$pkey, $key );
         }
         $pkey = $key;
      }
   }

Usage:

   import libTable.rts as table;
   $filename = "data.txt";

   # Look up a key/value pair
   $value = table.lookup( $filename, $key );

   # Iterate through the table
   for( $key = table.getFirst( $filename ); $key != "";
        $key = table.getNext( $filename, $key ) ) {
      $value = table.lookup( $filename, $key );
   }

The library caches the contents of the file internally, and is very efficient for large files. For smaller files, it may be slightly more efficient to search the file directly using a regular expression, but the convenience of this library may outweigh the small performance gains.

Data file format

This library provides access to files stored in the Stingray conf/extra folder (by way of the Extra Files > Miscellaneous Files section of the catalog). These files can be uploaded using the UI, the SOAP or REST API, or by manually copying them in place and initiating a configuration replication.

Files should contain key-value pairs, one per line, space separated:

   key1 value1
   key2 value2
   key3 value3

Preservation of order

The lookup operation uses an open hash table, so it is efficient for large files. The getFirst() and getNext() operations will iterate through the data table in order, returning the keys in the order they appear in the file.

Performance and alternative implementations

The performance of this library is investigated in the article "Investigating the performance of TrafficScript - storing tables of data". It is very efficient for large tables of data, and marginally less efficient than a simple regular-expression string search for small files.

If performance is a concern and you only need to work with small datasets, then you could use the following library instead:

libTableSmall.rts
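The original listing is truncated at this point. A minimal sketch of such a small-file lookup, following the 'simple search' approach described as Implementation 1 in the article that follows, might look like this (treat it as illustrative rather than the published library):

   # libTableSmall.rts: Efficient lookups of key/value data in a small resource file
   # Sketch only: searches the file with a regular expression on every call

   sub lookup( $filename, $key ) {
      $contents = resource.get( $filename );
      if( string.regexmatch( $contents, '\n' . $key . '\s+([^\n]+)' ) )
         return string.trim( $1 );
      if( string.regexmatch( $contents, '^' . $key . '\s+([^\n]+)' ) )
         return string.trim( $1 );
      return "";
   }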
TrafficScript rules often need to refer to tables of data - redirect mappings, user lists, IP black lists and the like. For small tables that are not updated frequently, you can place these inline in the TrafficScript rule:

   $redirect = [
      "/widgets" => "/sales/widgets",
      "/login" => "/cgi-bin/login.cgi"
   ];

   $path = http.getPath();
   if( $redirect[ $path ] ) http.redirect( $redirect[ $path ] );

This approach becomes difficult to manage if the table becomes large, or if you want to update it without having to edit the TrafficScript rule. In that case, you can store the table externally (in a resource file) and reference it from the rule.

The following examples consider a file that follows a standard space-separated 'key value' pattern, and we'll look at alternative TrafficScript approaches to efficiently handle the data and look up key-value pairs:

   # cat /opt/zeus/zxtm/conf/extra/redirects.txt
   /widgets /sales/widgets
   /login   /cgi-bin/login.cgi
   /support http://support.site.com

We'll propose several 'ResourceTable' TrafficScript library implementations that expose a lookup() function to be used in the following fashion:

   # ResourceTable provides a lookup( filename, key ) function
   import ResourceTable as table;

   $path = http.getPath();
   $redirect = table.lookup( "redirects.txt", $path );

We'll then look at the performance of each to see which is the best. For a summary of the solutions in this article, jump straight to "libTable.rts: Interrogating tables of data in TrafficScript".

Implementation 1: Search the file on each lookup

ResourceTable1

   sub lookup( $filename, $key ) {
      $contents = resource.get( $filename );
      if( string.regexmatch( $contents, '\n'.$key.'\s+([^\n]+)' ) )
         return $1;
      if( string.regexmatch( $contents, '^'.$key.'\s+([^\n]+)' ) )
         return $1;
      return "";
   }

This simple implementation searches the file on each and every lookup, using a regular expression to locate the key and the text on the remainder of the line. It pins the key to the start of the line so that it does not mistakenly match lines where $key is a substring (suffix) of another key. The implementation is simple and effective, but we would reasonably expect it to become less and less efficient as the resource file grows.

Implementation 2: Store the table in a TrafficScript hash table for easy lookup

The following code builds a TrafficScript hash table from the contents of the resource file:

   $contents = resource.get( $filename );
   $h = [ ];

   foreach( $l in string.split( $contents, "\n" ) ) {
      if( ! string.regexmatch( $l, '(.*?)\s+(.*)' ) ) continue;
      $key = string.trim( $1 );
      $value = string.trim( $2 );
      $h[$key] = $value;
   }

You can then quickly look up values in the hash table using $h[ $key ]. However, we don't want to have to create the hash table every time we call the lookup function; we would like to create it once and then cache it somewhere. We can use the global data table to store persistent data, and we can verify that the data is still current by checking that the MD5 of the resource file has not changed:

ResourceTable2a

   sub update( $filename ) {
      # Store the md5 of the resource file we have cached. No need to update if the file has not changed
      $md5 = resource.getMD5( $filename );
      if( $md5 == data.get( "resourcetable:".$filename.":md5" ) ) return;

      # Do the update
      $contents = resource.get( $filename );
      $h = [ ];
      foreach( $l in string.split( $contents, "\n" ) ) {
         if( ! string.regexmatch( $l, "(.*?)\\s+(.*)" ) ) continue;
         $key = string.trim( $1 );
         $value = string.trim( $2 );
         $h[$key] = $value;
      }

      data.set( "resourcetable:".$filename.':data', $h );
      data.set( "resourcetable:".$filename.':md5', $md5 );
   }

   sub lookup( $filename, $key ) {
      # Check to see if the file has been updated, and update our table if necessary
      update( $filename );

      $h = data.get( "resourcetable:".$filename.':data' );
      return $h[$key];
   }

Version 2a: we store the MD5 of the file in the global key 'resourcetable:filename:md5', and the hash table in the global key 'resourcetable:filename:data'.

This implementation has one significant fault. If two TrafficScript rules are running concurrently, they may both try to update the keys in the global data table, and a race condition may result in inconsistent data. This situation is not possible on a single-core system with one zeus.zxtm process, because rules are run serially and only pre-empted if they invoke a blocking operation, but it's entirely possible on a multi-core system, and TrafficScript does not implement mutexes or locks to help protect against this.

The simplest solution is to give each core its own private copy of the data. Because system memory should be scaled with the number of cores, the additional overhead of these copies is generally acceptable:

ResourceTable2b

   sub update( $filename ) {
      $pid = sys.getPid();
      $md5 = resource.getMD5( $filename );
      if( $md5 == data.get( "resourcetable:".$pid.$filename.":md5" ) ) return;

      $contents = resource.get( $filename );
      $h = [ ];
      foreach( $l in string.split( $contents, "\n" ) ) {
         if( ! string.regexmatch( $l, "(.*?)\\s+(.*)" ) ) continue;
         $key = string.trim( $1 );
         $value = string.trim( $2 );
         $h[$key] = $value;
      }

      data.set( "resourcetable:".$pid.$filename.':data', $h );
      data.set( "resourcetable:".$pid.$filename.':md5', $md5 );
   }

   sub lookup( $filename, $key ) {
      update( $filename );
      $pid = sys.getPid();
      $h = data.get( "resourcetable:".$pid.$filename.':data' );
      return $h[$key];
   }

Version 2b: by including the pid in the name of the key, we avoid multi-core race conditions at the expense of multiple copies of the data.

Implementation 3: Store the key/value data directly in the global hash table

data.set() and data.get() address a global key/value table. We could use that directly, rather than constructing a TrafficScript hash:

   sub update( $filename ) {
      $pid = sys.getPid();
      $md5 = resource.getMD5( $filename );
      if( $md5 == data.get( "resourcetable".$pid.$filename.":md5" ) ) return;

      data.reset( "resourcetable".$pid.$filename.":" );
      data.set( "resourcetable".$pid.$filename.":md5", $md5 );

      $contents = resource.get( $filename );
      foreach( $l in string.split( $contents, "\n" ) ) {
         if( ! string.regexmatch( $l, "(.*?)\\s+(.*)" ) ) continue;
         $key = string.trim( $1 );
         $value = string.trim( $2 );
         data.set( "resourcetable".$pid.$filename."::".$key, $value );
      }
   }

   sub lookup( $filename, $key ) {
      update( $filename );
      $pid = sys.getPid();
      return data.get( "resourcetable".$pid.$filename."::".$key );
   }

Version 3: key/value pairs are stored in the global data table. Keys begin with the string "resourcetable:pid:filename:", so it's easy to delete all of the key/value pairs using data.reset() before rebuilding the dataset.

How do these implementations compare?
We tested the number of lookups per second that each implementation could achieve (using a single-core virtual machine running on a laptop Core 2 processor) to investigate performance for different dataset sizes:

   Resource file size (entries):                                  10        100       1,000     10,000
   Implementation 1: simple search                                300,000   100,000   17,500    1,000
   Implementation 2: TrafficScript hash, cached in global table   27,000    2,000     250       10
   Implementation 3: key/value pairs in the global data table     200,000   200,000   200,000   200,000

ResourceTable lookups per second (single core, lightweight processor)

The test just exercised the rate of lookups in resource files of various sizes; the cost of building the cached data structures (implementations 2 and 3) and other one-off costs are not included in the tests.

Interpreting the results

The degradation of performance in implementation 1 as the file size increases was to be expected. The constant performance of implementation 3 was also expected, as hash tables should generally give O(1) lookup speed, unaffected by the number of entries.

The abysmal performance of implementation 2 is surprising, until you note that on every lookup we retrieve the entire hash table from the global data table:

   $h = data.get( "resourcetable:".$pid.$filename.':data' );
   return $h[$key];

The global data table is a key/value store; all keys and values are serialized as strings. The data.get() operation reads the serialized version of the hash table and reconstructs the entire table (up to 10,000 entries) before the O(1) lookup operation.

What is perhaps most surprising is the speed at which you can search and extract data from a string using regular expressions (implementation 1). For small and medium datasets (up to approximately 50 entries), this is the simplest and fastest method; it's only worth considering the more complex data.get() key/value implementation for large datasets.

Read more

Check out the article "How is memory managed in TrafficScript?" for more detail on the ways that TrafficScript handles data and memory.
This article describes how to gather activity statistics across a cluster of traffic managers using Perl, SOAP::Lite and Stingray's SOAP Control API.

Overview

Each local Stingray Traffic Manager tracks a very wide range of activity statistics. These may be exported using SNMP or retrieved using the System/Stats interface in Stingray's SOAP Control API.

When you use the Activity monitoring in Stingray's Administration Interface, a collector process communicates with each of the Traffic Managers in your cluster, gathering the local statistics from each and merging them before plotting them on the activity chart ('Aggregate data across all traffic managers').

However, when you use the SNMP or Control API interfaces directly, you will only receive the statistics from the Traffic Manager machine you have connected to. If you want to get a cluster-wide view of activity using SNMP or the Control API, you will need to poll each machine and merge the results yourself.

Using Perl and SOAP::Lite to query the traffic managers and merge activity statistics

The following code sample determines the total TCP connection rate across the cluster as follows:

   1. Connect to the named traffic manager and use the getAllClusterMachines() method to retrieve a list of all of the machines in the cluster;
   2. Poll each machine in the cluster for its current value of TotalConn (the total number of TCP connections processed since startup);
   3. Sleep for 10 seconds, then poll each machine again;
   4. Calculate the number of connections processed by each traffic manager in the 10-second window, and calculate the per-second rate accurately using high-res time.

The code:

   #!/usr/bin/perl -w

   use SOAP::Lite 0.6;
   use Time::HiRes qw( time sleep );

   $ENV{PERL_LWP_SSL_VERIFY_HOSTNAME}=0;

   my $userpass = "admin:admin";      # SOAP-capable authentication credentials
   my $adminserver = "stingray:9090"; # Details of an admin server in the cluster
   my $sampletime = 10;               # Sample time (seconds)

   sub getAllClusterMembers( $$ );
   sub makeConnections( $$$ );
   sub makeRequest( $$ );

   my $machines = makeConnections( $machines, $userpass ) if 0; # (placeholder removed)
   my $machines = getAllClusterMembers( $adminserver, $userpass );
   print "Discovered cluster members " . ( join ", ", @$machines ) . "\n";

   my $connections = makeConnections( $machines, $userpass,
      "http://soap.zeus.com/zxtm/1.0/System/Stats/" );

   # sample the value of getTotalConn
   my $start = time();
   my $res1 = makeRequest( $connections, "getTotalConn" );
   sleep( $sampletime-(time()-$start) );
   my $res2 = makeRequest( $connections, "getTotalConn" );

   # Determine connection rate per traffic manager
   my $totalrate = 0;
   foreach my $z ( keys %{$res1} ) {
      my $conns   = $res2->{$z}->result - $res1->{$z}->result;
      my $elapsed = $res2->{$z}->{time} - $res1->{$z}->{time};
      my $rate = $conns / $elapsed;
      $totalrate += $rate;
   }

   print "Total connection rate across all machines: " .
         sprintf( '%.2f', $totalrate ) . "\n";

   sub getAllClusterMembers( $$ ) {
       my( $adminserver, $userpass ) = @_;

       # Discover cluster members
       my $mconn = SOAP::Lite
            -> ns('http://soap.zeus.com/zxtm/1.0/System/MachineInfo/')
            -> proxy("https://$userpass\@$adminserver/soap")
            -> on_fault( sub {
                 my( $conn, $res ) = @_;
                 die ref $res ? $res->faultstring : $conn->transport->status; } );
       $mconn->proxy->ssl_opts( SSL_verify_mode => 0 );

       my $res = $mconn->getAllClusterMachines();

       # $res->result is a reference to an array of System.MachineInfo.Machine objects
       # Pull out the name:port of the traffic managers in our cluster
       my @machines = grep s@https://(.*?)/@$1@,
          map { $_->{admin_server}; } @{$res->result};

       return \@machines;
   }

   sub makeConnections( $$$ ) {
       my( $machines, $userpass, $ns ) = @_;

       my %conns;
       foreach my $z ( @$machines ) {
          $conns{ $z } = SOAP::Lite
            -> ns( $ns )
            -> proxy("https://$userpass\@$z/soap")
            -> on_fault( sub {
                 my( $conn, $res ) = @_;
                 die ref $res ? $res->faultstring : $conn->transport->status; } );
          $conns{ $z }->proxy->ssl_opts( SSL_verify_mode => 0 );
       }
       return \%conns;
   }

   sub makeRequest( $$ ) {
       my( $conns, $req ) = @_;

       my %res;
       foreach my $z ( keys %$conns ) {
          my $r = $conns->{$z}->$req();
          $r->{time} = time();
          $res{$z} = $r;
       }
       return \%res;
   }

Running the script

   $ ./getConnections.pl
   Discovered cluster members stingray1-ny:9090, stingray1-sf:9090
   Total connection rate across all machines: 5.02
View full article
What is Direct Server Return?

Layer 2/3 Direct Server Return (DSR), also referred to as 'triangulation', is a network routing technique used in some load balancing situations:

Incoming traffic from the client is received by the load balancer and forwarded to a back-end node.
Outgoing (return) traffic from the back-end node is sent directly to the client and bypasses the load balancer completely.

Incoming traffic (blue) is routed through the load balancer, and return traffic (red) bypasses the load balancer

Direct Server Return is fundamentally different from the normal load balancing mode of operation, where the load balancer observes and manages both inbound and outbound traffic. In contrast, there are two other common load balancing modes of operation:

NAT (Network Address Translation): Layer-4 load balancers and simple layer 7 application delivery controllers use NAT to rewrite the destination value of individual network packets. Network connections are load-balanced by the choice of destination value. They often use a technique called 'delayed binding' to delay and inspect a new network connection before sending the packets to a back-end node; this allows them to perform content-based routing. NAT-based load balancers can switch TCP streams, but have limited capabilities to inspect and rewrite network traffic.

Proxy: Modern general-purpose load balancers like Stingray Traffic Manager operate as full proxies. The proxy mode of operation is the most compute-intensive, but current general-purpose hardware is more than powerful enough to manage traffic at multi-gigabit speeds. Whereas NAT-based load balancers manage traffic on a packet-by-packet basis, proxy-based load balancers can read entire requests and responses. They can manage and manipulate the traffic based on a full understanding of the transaction between the client and the application server.

Note that some load balancers can operate in a dual-mode fashion - a service can be handled either in a NAT-like fashion or in a Proxy-like fashion. This introduces a tradeoff between hardware performance and software sophistication - see SOL4707 - Choosing appropriate profiles for HTTP traffic for an example. Stingray Traffic Manager functions only in a Proxy-like fashion.

This article describes how the benefits of direct server return can be applied to a layer 7 traffic management device such as Stingray Traffic Manager.

Why use Direct Server Return?

Layer 2/3 Direct Server Return was very popular from 1995 to about 2000 because the load balancers of the time were seriously limited in performance and compute power; DSR uses fewer compute resources than a full NAT or Proxy load balancer. DSR is no longer necessary for high-performance services, as modern load balancers on modern hardware can easily handle multiple gigabits of traffic without requiring DSR. DSR is still an appealing option for organizations that serve large media files, or that have very large volumes of traffic.

Stingray Traffic Manager does not support a traditional DSR mode of operation, but it is straightforward to manage traffic to obtain a similar layer 7 DSR effect.

Disadvantages of Layer 2/3 Direct Server Return

There are a number of distinct limitations and disadvantages with DSR:

1. The load balancer does not observe the response traffic

The load balancer has no way of knowing if a back-end server has responded correctly to the remote client. The server may have failed, or it may have returned a server error message.
An external monitoring service is necessary to verify the health and correct operation of each back-end server. 2. Proper load balancing is not possible The load balancer has no idea of service response times so it is difficult for it to perform effective, performance-sensitive load balancing. 3. Session persistence is severely limited Because the load balancer only observes the initial ‘SYN’ packet before it makes a load balancing decision, it can only perform session persistence based on the source IP address and port of the packet, i.e. the IP address of the remote client. The load balancer cannot perform cookie-based session persistence, SSL session ID persistence, or any of the many other session persistence methods offered by other load balancers. 4. Content-based routing is not possible Again, because the load balancer does not observe the initial request, it cannot perform content based routing. 5. Limited traffic management and reporting The load balancer cannot manage traffic, performing operations like SSL decryption, content compression, security checking, SYN cookies, bandwidth management, etc.  It cannot retry failed requests, or perform any traffic rewriting.  The load balancer cannot report on traffic statistics such as bandwidth sent. 6. DSR can only be used within a datacenter There is no way to perform DSR between datacenters (other than proprietary tunnelling, which may be limited by ISP egress filtering). In addition, many of the advanced capabilities of an application delivery controller that depend on inspection and modification (security, acceleration, caching, compression, scrubbing etc) cannot be deployed when a DSR mode is in use. Performance of Direct Server Return The performance benefits of DSR are often assumed to be greater than they really are.  Central to this doubt is the observation that client applications will send TCP ‘ACK’ packets via the load balancer in response to the data they receive from the server, and the volume of the ACK packets can overwhelm the load balancer. Although ACK packets are small, in many cases the rated capacities of network hardware assume that all packets are the size of the maximum MTU (typically 1500 bytes).  A load balancer running on a 100 MBit network could receive a little over 8,000 ACK packets per second. On a low-latency network, ACK packets are relatively infrequent (1 ACK packet for every 4 data packets), but for large downloads over a high-latency network (8 hops) the number of ACK packets closely approaches 1:1 as the server and client attempt to optimize the TCP session.  Therefore, over high-latency networks, a DSR-equipped load balancer will receive a similar volume of ACK packets to the volume of outgoing data packets (and the difference in size between the ACK and data packets has little effect to packet-based load balancers). Stingray alternatives to Layer 2/3 DSR There are two alternatives to direct server return: Use Stingray Traffic Manager in its usual full proxy mode Stingray Traffic Manager is comfortably able to manage over many Gbits of traffic in its normal ‘proxy’ mode on appropriate hardware, and can be scaled horizontally for increased capacity.  In benchmarks, modern Intel and AMD-based systems can achieve multiple 10's of Gbits of fully load-balanced traffic, and up to twice as much when serving content from Stingray Traffic Manager’s content cache. Redirect requests to the chosen origin server (a.k.a. 
Layer 7 DSR)

For the most common protocols (HTTP and RTSP), it is possible to handle them in 'proxy' mode, and then redirect the client to the chosen server node once the load balancing and session persistence decision has been made. For the large file download, the client communicates directly with the server node, bypassing Stingray Traffic Manager completely:

Client issues HTTP or RTSP request to Stingray Traffic Manager
Stingray Traffic Manager issues 'probe' request via pool to back-end server
Stingray Traffic Manager verifies that the back-end server returns a correct response
Stingray Traffic Manager sends a 302 redirect to the client, telling it to retry the request against the chosen back-end server

Requests for small objects (blue) are proxied directly to the origin. Requests for large objects (red) elicit a lightweight probe to locate the resource, and then the client is instructed (green) to retrieve the resource directly from the origin.

This technique would generally be used selectively. Small file downloads (web pages, images, etc.) would be managed through the Stingray Traffic Manager. Only large files - embedded media, for example - would be handled in this redirect mode. For this reason, the HTTP session will always run through the Stingray Traffic Manager.

Layer 7 DSR with HTTP

Layer 7 DSR with HTTP is fairly straightforward. In the following example, incoming requests that begin "/media" will be converted into simple probe requests and sent to the 'Media Servers' pool. The Stingray Traffic Manager will determine which node was chosen, and send the client an explicit redirect to retrieve the requested content from the chosen node:

Request rule: Deploy the following TrafficScript request rule:

$path = http.getPath();

if( string.startsWith( $path, "/media/" ) ) {
   # Store the real path
   connection.data.set( "path", $path );
   # Convert the request to a lightweight HEAD for '/'
   http.setMethod( "HEAD" );
   http.setPath( "/" );
   pool.use( "Media Servers" );
}

Response rule: This rule reads the response from the server; load balancing and session persistence (if relevant) will ensure that we've connected with the optimal server node. The rule only takes effect if we did the request rewrite: in that case the $saved_path value will begin with '/media/', so we can issue the redirect.

$saved_path = connection.data.get( "path" );

if( string.startsWith( $saved_path, "/media" ) ) {
   $chosen_node = connection.getNode();
   http.redirect( "http://" . $chosen_node . $saved_path );
}

Layer 7 DSR with RTSP

An RTSP connection is a persistent TCP connection. The client and server communicate with HTTP-like requests and responses. In this example, Stingray Traffic Manager will receive initial RTSP connections from remote clients and load-balance them on to a pool of media servers. In the RTSP protocol, a media download is always preceded by a 'DESCRIBE' request from the client; Stingray Traffic Manager will replace the 'DESCRIBE' response with a 302 Redirect response that tells the client to connect directly to the back-end media server.

This code example has been tested with the QuickTime, Real and Windows Media clients, and against pools of QuickTime, Helix (Real) and Windows Media servers.

The details

Create a virtual server listening on port 554 (the standard port for RTSP traffic). Set the protocol type to be "RTSP". In this example, we have three pools of media servers, and we're going to select the pool based on the User-Agent field in the RTSP request.
The pools are named “Helix Servers”, “QuickTime Servers” and “Windows Media Servers”. Request rule: Deploy the following TrafficScript request rule: $client = rtsp.getRequestHeader( "User-Agent" ); # Choose the pool based on the User-Agent if( string.Contains( $client, "RealMedia" ) ) {    pool.select( "Helix Servers" ); } else if ( string.Contains( $client, "QuickTime" ) ) {    pool.select( "QuickTime Servers" ); } else if ( string.Contains( $client, "WMPlayer" ) ) {    pool.select( "Windows Media Servers" ); } This rule uses pool.select() to specify which pool to use when Stingray is ready to forward the request to a back-end server.  Response rule: All of the work takes place in the response rule.  This rule reads the response from the server.  If the request was a ‘DESCRIBE’ method, the rule then replaces the response with a 302 redirect, telling the client to connect directly to the chosen back-end server.  Add this rule as a response rule, setting it to run every time (not once). # Wait for a DESCRIBE response since this contains the stream $method = rtsp.getMethod(); if( $method != "DESCRIBE" ) break; # Get the chosen node $node = connection.getnode(); # Instruct the client to retry directly against the chosen node rtsp.redirect( "rtsp://" . $node . "/" . $path ); Appendix: How does DSR work? It’s useful to have an appreciation of how DSR (and Delayed Binding) functions in order to understand some of its limitations (such as content inspection). TCP overview A simplified overview of a TCP connection is as follows: Connection setup The client initiates a connection with a server by sending a ‘SYN’ packet.  The SYN packet contains a randomly generated client sequence number (along with other data). The server replies with a ‘SYN ACK’ packet, acknowledging the client’s SYN and sending its own randomly generated server sequence number. The client completes the TCP connection setup by sending an ACK packet to acknowledge the server’s SYN.  The TCP connection setup is often referred to as a 3-way TCP handshake.  Think of it as the following conversation: Client: “Can you hear me?” (SYN) Server: “Yes.  Can you hear me?” (ACK, SYN) Client: “Yes” (ACK) Data transfer Once the connection has been established by the 3-way handshake, the client and server exchange data packets with each other.  Because packets may be dropped or re-ordered, each packet contains a sequence number; the sequence number is incremented for each packet sent. When a client receives intact data packets from the server, it sends back an ACK (acknowledgement) with the packet sequence number.  When a client acknowledges a sequence number, it is acknowledging it received all packets up to that number, so ACKs may be sent less frequently than data packets.  The server may send several packets in sequence before it receives an ACK (determined by the (“window size”), and will resend packets if they are not ACK’d rapidly enough. Simple NAT-based Load Balancing There are many variants for IP and MAC rewriting used in simple NAT-based load balancing.  The simplest NAT-based load balancing technique uses Destination-NAT (DNAT) and works as follows: The client initiates a connection by sending a SYN packet to the Virtual IP (VIP) that the load balancer is listening on The load balancer makes a load balancing decision and forwards the SYN packet to the chosen node.  It rewrites the destination IP address in the packet to the IP address of the node.  The load-balancer also remembers the load-balancing decision it made. 
The node replies with a SYN/ACK.  The load-balancer rewrites the source IP address to be the VIP and forwards the packet on to the remote client. As more packets flow between the client and the server, the load balancer checks its internal NAT table to determine how the IP addresses should be rewritten. This implementation is very amenable to a hardware (ASIC) implementation.  The TCP connection is load-balanced on the first SYN packet; one of the implications is that the load balancer cannot inspect the content in the TCP connection before making the routing decision. Delayed Binding Delayed binding is a variant of the DNAT load balancing method.  It allows the load balancer to inspect a limited amount of the content before making the load balancing decision. When the load balancer receives the initial SYN, it chooses a server sequence number and returns a SYN/ACK response The load balancer completes the TCP handshake with the remote client and reads the initial few data packets in the client’s request. The load balancer reassembles the request, inspects it and makes the load-balancing decision.  It then makes a TCP connection to the chosen server, using DNAT (i.e., the client’s source IP address) and writes the request to the server. Once the request has been written, the load balancer must splice the client-side and server-side connection together.  It does this by using DNAT to forward packets between the two endpoints, and by rewriting the sequence numbers chosen by the server so that they match the initial sequence numbers that the load balancer used. This implementation is still amenable to hardware (ASIC) implementation.  However, layer 4-7 tasks such as detailed content inspection and content rewriting are beyond implementation in specialized hardware alone and are often implemented using software approaches (such as F5's FastHTTP profile), albeit with significant functional limitations. Direct Server Return Direct Server Return is most commonly implemented using MAC address translation (layer 2). A MAC (Media Access Control) address is a unique, unchanging hardware address that is bound to a network card.  Network devices will read all network packets destined for their MAC address. Network devices use ARP (address resolution protocol) to announce the MAC address that is hosting a particular IP address.  In a Direct Server Return configuration, the load balancer and the server nodes will all listen on the same VIP.  However, only the load balancer makes ARP broadcasts to tell the upstream router that the VIP maps to its MAC address. When a packet destined for the VIP arrives at the router, the router places it on the local network, addressed to the load balancer’s MAC address.  The load balancer picks that packet up. The load balancer then makes a load balancing decision, choosing which node to send it to.  The load balancer rewrites the MAC address in the packet and puts it back on the wire. The chosen node picks the packet up just as if it were addressed directly to it. When the node replies, it sends its packets directly to the source node.  They are immediately picked up by the upstream router and forwarded on. In this way, reply packets completely bypass the load balancer machine. Why content inspection is not possible Content inspection (delayed binding) is not possible because it requires that the load balancer first completes the three-way handshake with the remote source node, and possibly ACK’s some of the data packets. 
When the load balancer then sends the first SYN to the chosen node, the node will respond with a SYN/ACK packet directly back to the remote source.  The load balancer is out-of-line and cannot suppress this SYN/ACK.  Additionally, the sequence number that the node selects cannot be translated to the one that the remote client is expecting.  There is no way to persuade the node to pick up in the TCP connection from where the load balancer left off. For similar reasons, SYN cookies cannot be used by the load balancer to offload SYN floods from the server nodes. Alternative Implementations of Direct Server Return There are two alternative implementations of DSR (see this 2002 paper entitled 'The State of the Art'), but neither is widely used any more: TCP Tunnelling: IP tunnelling (aka IP encapsulation) can be used to tunnel the client IP packets from the load balancer to the server.  All client IP packets are encapsulated within IP datagrams, and the server runs a tunnel device (an OS driver and configuration) to strip off the datagram header before sending the client IP packet up the network stack. This configuration does not support delayed binding, or any equivalent means of inspecting content before making the load balancing decision TCP Connection Hopping: Resonate have implemented a proprietary protocol (Resonate Exchange Protocol, RXP) which interfaces deeply with the server node’s TCP stack.  Once a TCP connection has been established with the Resonate Central Dispatch load balancer and the initial data has been read, the load balancer can hand the response side of the connection off to the selected server node using RXP.  The RXP driver on the server suppresses the initial TCP handshake packets, and forces the use of the correct TCP sequence number.  This uniquely allows for content-based routing and direct server return in one solution. Neither of these methods are in wide use now.
View full article
FTP is an example of a 'server-first' protocol. The back-end server sends a greeting message before the client sends its first request. This means that the traffic manager must establish the connection to the back-end node before it can inspect the client's request.   Fortunately, it's possible to implement a full protocol proxy in Stingray's TrafficScript language. This article (dating from 2005) explains how.   FTP Virtual Hosting scenario   We're going to manage the following scenario:   A service provider is hosting FTP services for organizations - ferrari-f1.com, sauber-f1.com and minardi-f1.com. Each organization has their own cluster of FTP servers:   Ferrari have 3 Sun E15Ks in a pool named 'ferrari ftp' Sauber have a couple of old, ex-Ferrari servers in Switzerland, in a pool named 'sauber ftp' Minardi have a capable and cost-effective pair of pizza-box servers in a pool named 'minardi ftp'   The service provider hosts the FTP services through Stingray, and requires that users log in with their email address. If a user logs in as 'rbraun@ferrarif1.com ', Stingray will connect the user to the 'ferrari ftp' pool and log in with username 'rbraun'.   This is made complicated because an FTP connection begins with a 'server hello' message as follows:   220 ftp.zeus.com FTP server (Version wu-2.6.1-0.6x.21) ready.   ... before reading the data from the client.   Configuration   Create the virtual server (FTP, listening on port 21) and the three pools ('ferrari ftp' etc).  Configure the default pool for the virtual server to be the discard pool.   Configure the virtual server connection management settings, setting the FTP serverfirst_banner to:   220 F1 FTP server ready.   Add the following trafficscript request rule to the virtual server, setting it to run every time:   $req = string.trim( request.endswith( "\n" ) ); if( !string.regexmatch( $req, "USER (.*)", "i" ) ) { # if we're connected, forward the message; otherwise # return a login prompt if( connection.getNode() ) { break; } else { request.sendresponse( "530 Please log in!!\r\n" ); break; } } $loginname = $1; # The login name should look like 'user@host' if( ! string.regexmatch( $loginname, "(.*)@(.*)" ) ) { request.sendresponse( "530 Incorrect user or password!!\r\n" ); break; } $user = $1; $domain = string.lowercase( $2 ); request.set( "USER ".$user."\r\n" ); # select the pool we want... if( $domain == "ferrarif1.com" ) { pool.use( "ferrari ftp" ); } else if( $domain == "sauberf1.com" ) { pool.use( "sauber ftp" ); } else if( $domain == "minardif1.com" ) { pool.use( "minardi ftp" ); } else { request.sendresponse( "530 Incorrect user or password!!\r\n" ); }   And that's it! Stingray automatically slurps and discards the serverfirst banner message from the back-end ftp servers when it connects on the first request.   More...   Here's a more sophisticated example which reads the username and password from the client before attempting to connect. 
You could add your own authentication at this stage (for example, using http.request.get or auth.query to query an external server) before initiating the connect to the back-end ftp server:   TrafficScript request rule   $req = string.trim( request.endswith( "\n" ) ); if( string.regexmatch( $req, "USER (.*)" ) ) { connection.data.set( "user", $1 ); $msg = "331 Password required for ".$1."!!\r\n"; request.sendresponse( $msg ); break; } if( !string.regexmatch( $req, "PASS (.*)" ) ) { # if we're connected, forward the message; otherwise # return a login prompt if( connection.getNode() ) { break; } else { request.sendresponse( "530 Please log in!!\r\n" ); break; } } $loginname = connection.data.get( "user" ); $pass = $1; # The login name should look like 'user@host' if( ! string.regexmatch( $loginname, "(.*)@(.*)" ) ) { request.sendresponse( "530 Incorrect user or password!!\r\n" ); break; } $user = $1; $domain = string.lowercase( $2 ); # You could add your own authentication at this stage. # If the username and password is invalid, do the following: # # if( $badpassword ) { # request.sendresponse( "530 Incorrect user or password!!\r\n" ); # break; # } # now, replay the correct request against a new # server instance connection.data.set( "state", "connecting" ); request.set( "USER ".$user."\r\nPASS ".$pass."\r\n" ); # select the pool we want... if( $domain == "ferrarif1.com" ) { pool.use( "ferrari ftp" ); } else if( $domain == "sauberf1.com" ) { pool.use( "sauber ftp" ); } else if( $domain == "minardif1.com" ) { pool.use( "minardi ftp" ); } else { request.sendresponse( "530 Incorrect user or password!!\r\n" ); }   TrafficScript response rule   if( connection.data.get("state") == "connecting" ) { # We've just connected, but Stingray doesn't slurp the serverfirst # banner until after this rule has run. # Slurp the first line (the serverfirst banner), the second line # (the 331 need password) and then replace the serverfirst banner $first = response.getLine(); $second = response.getLine( "\n", $1 ); $remainder = string.skip( response.get(), $1 ); response.set( $first.$remainder ); connection.data.set( "state", "" ); }   Remember that both rules must be set to 'run every time'.
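If you want to flesh out the authentication placeholder in the rule above, one option is to call out to an internal HTTP authentication service using http.request.get(), in place of the commented-out block. Everything about the service in this sketch - its address and port, the query parameters, and the convention that it returns a body containing "OK" for valid credentials - is an assumption for illustration, not part of the original example:

# Hypothetical internal auth service; adjust the URL to suit your environment.
# A real deployment should also URL-encode the parameter values.
$authurl = "http://127.0.0.1:8080/ftpauth?user=" . $user
         . "&domain=" . $domain . "&pass=" . $pass;
$answer = http.request.get( $authurl, "", 5 );

if( !string.contains( $answer, "OK" ) ) {
   request.sendresponse( "530 Incorrect user or password!!\r\n" );
   break;
}

This slots in after $user, $domain and $pass have been extracted and before the request is rewritten and sent to the chosen pool.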
View full article
Lots of websites provide a protected area for authorized users to log in to. For instance, you might have a downloads section for products on your site where customers can access the software that they have bought.   There are many different ways to protect web pages with a user name and password. Their login and password could be quickly spread around. Once the details are common knowledge, anyone could login and access the site without paying.   Stingray and TrafficScript to the rescue!   Did you know that TrafficScript can be used to detect when a username and password are used from several different locations? You can then choose whether to disable the account or give the user a new password. All this can be done without replacing any of your current authentication systems on your website:   Looks like the login details for user 'ben99' have been leaked! How can we stop people leeching from this account?   For this example, we'll use a website where the entire site is protected with a PHP script that handles the authentication. It will check a user's password, and then set a USER cookie filled in with the user name. The details of the authentication scheme are not important. In this instance, all that matters is that TrafficScript can discover the user name of the account.   Writing the TrafficScript rule   First of all, TrafficScript needs to ignore any requests that aren't authenticated:   $user = http.getCookie( "USER" ); if( $user == "" ) break;   Next, we'll need to discover where the user is coming from. We'll use the IP address of their machine. However, they may also be connecting via a proxy, in which case we'll use the address supplied by the proxy.   $from = request.getRemoteIP(); $proxy = http.getHeader( "X-Forwarded-For" ); if( $proxy != "" ) $from = $proxy;   TrafficScript needs to keep track of which IP addresses have been used for each account. We will have to store a list of the IP addresses used. TrafficScript provides persistent storage with the data.get() and data.set() functions.   $list = data.get( $user ); if( !string.contains( $list, $from )) { # Add this entry in, padding list with spaces $list = sprintf( "%19s %s", $from, $list ); ...   Now we need to know how many unique IP addresses have been used to access this account. If the list has grown too large, then don't let this person fetch any more pages.   # Count the number of entries in the list. Each entry is 20 # characters long (the 19 in the sprintf plus a space) $entries = string.length( $list ) / 20; if( $entries > 4 ) { # Kick the user out with an error message http.sendResponse( "403 Permission denied", "text/plain", "Account locked", "" ); } else { # Update the list of IP addresses data.set( $user, $list ); } }   That's it! If a single account on your site is accessed from more than four different locations, the account will be locked out, preventing abuse.   As this is powered by TrafficScript, further improvements can be made. We can extend the protection in many ways, without having to touch the code that runs your actual site. Remember, this can be deployed with any kind of authentication being used - TrafficScript just needs the user name.   A more advanced example   This has a few new improvements. First of all, the account limits are given a timeout, enabling someone to access the site from different locations (e.g. home and office), but will still catch abuse if the account is being used simultaneously in different locations. 
Secondly, any abuse is logged, so that an administrator can check up on leaked accounts and take appropriate action. Finally, to show that we can work with other login schemes, this example uses HTTP Basic Authentication to get the user name.   # How long to keep data for each userid (seconds) $timelimit = 3600; # Maximum number of different IP addresses to allow a client # to connect from $maxips = 4; # Only interested in HTTP Basic authentication $h = http.getHeader( "Authorization" ); if( !string.startsWith( $h, "Basic " )) continue; # Extract and decode the username:password combination $enc = string.skip( $h, 6 ); $userpasswd = string.base64decode( $enc ); # Work out where the user came from. If they came via a proxy, # then ensure that we don't log the proxy's IP address(es) $from = request.getRemoteIP(); $proxy = http.getHeader( "X-Forwarded-For" ); if( $proxy != "" ) $from = $proxy; # Have we seen this user before? We will store a space separated # list of all the IPs that we have seen the user connect from $list = data.get( $userpasswd ); # Also check the timings. Only keep the records for a fixed period # of time, then delete them. $time = data.get( "time-" . $userpasswd ); $now = sys.time(); if(( $time == "" ) || (( $now - $time ) > $timelimit )) { # Entry expired (or hasn't been created yet). Start with a new # list and timestamp. $list = ""; $time = $now; data.set( "time-" . $userpasswd, $time ); } if( !string.contains( $list, $from )) { # Pad each entry in the list with spaces $list = sprintf( "%19s %s", $from, $list ); # Count the number of entries in the list. Each entry is 20 # characters long (the 19 in the sprintf plus a space) $entries = string.length( $list ) / 20; # Check if the list of used IP addresses is too large - if so, # send back an error page! if( $entries > $maxips ) { # Put a message in the logs so the admins can see the abuse # (Ensure that we show the username but not the password) $user = string.substring( $userpasswd, 0, string.find( $userpasswd, ":" ) - 1 ); log.info( "Login abuse for account: " . $user . " from " . $list ); http.sendResponse( "403 Permission denied", "text/html", "Your account is being accessed by too many users", "" ); } else { # Update the list and let the user through data.set( $userpasswd, $list ) ; } }   This article was originally written by Ben Mansell in March 2007
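One further refinement you might bolt on, sketched here purely for illustration: a tiny rule that lets an administrator clear the stored IP list for an account, so a legitimate user who tripped the limit can be unlocked without waiting for the data to expire. The URL and parameter name are made up, this matches the first (cookie-based) example where the data table is keyed by the bare user name, and any such endpoint must itself be protected (for example, restricted to admin source addresses):

# Hypothetical admin URL: /admin/unlock?user=ben99
if( http.getPath() == "/admin/unlock" ) {
   $user = http.getFormParam( "user" );
   if( $user != "" ) {
      # Remove the stored list of IP addresses for this account
      data.remove( $user );
   }
   http.sendResponse( "200 OK", "text/plain", "Unlocked: " . $user, "" );
}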
View full article
"The 'contact us' feature on many websites is often insecure, and makes it easy to launch denial of service (DoS) attacks on corporate mail servers," according to UK-based security consultancy SecureTest, as reported in The Register.   This article describes how such an attack might be launched, and how it can be easily mitigated against by using a traffic manager like Stingray.   Mounting an Attack   Many websites contain a "Contact Us" web-based form that generates an internal email. An attacker can use a benchmarking program like ApacheBench to easily submit a large number of requests to the form, bombarding the target organization's mail servers with large volumes of traffic.   Step 1. Identify the target web form and deduce the POST request     An appropriate POST request for the http://www.site.com/cgi-bin/mail.aspx page would contain the form parameters and desired values as an application/x-www-form-urlencoded file (ignore line breaks):   email_subject=Site+Feedback&mailto=target%40noname.org& email_name=John+Doe&email_from=target%40noname.org&email_country=US& email_comments=Ha%2C+Ha%2C+Ha%21%0D%0ADon%27t+try+this+at+home   Step 2. Mount the attack   The following example uses ApacheBench to submit the POST request data in postfile.txt. ApacheBench creates 10 users who send 10,000 requests to the target system.   # ab -p postfile.txt -c 10 -n 10000 -T application/x-www-form-urlencoded http://www.site.com/cgi-bin/mail.aspx   The attack is worsened because the web server typically resides inside the trusted DMZ and is not subject to the filtering that untrusted external clients must face. Additionally, this direct attack bypasses any security or validation that is built into the web form.   Ken Munro of SecureTest described the results of the firm's penetration testing work with clients. "By explicit agreement we conduct a 'contact us' DoS, and in every case we've tried so far, the client's mail server stops responding during the test window."   Defending against the Attack   There is a variety of ways to defend against this form of attack, but one of the easiest ways would be to rate-limit requests to the web-based form.   In Stingray, you can create a 'Rate Shaping Class'; we'll create one named 'mail limit' that restricts traffic to 2 requests per minute:     Using TrafficScript, we rate-limit traffic to the mail.aspx page to 2 requests per minute in total:   if( http.getPath() == "/cgi-bin/mail.aspx" ) { rate.use( "mail limit" ); }   In this case, one attacker could dominate the form and prevent other legitimate users from using it. So, we could instead limit each individual user (identified by source IP address) to 2 requests per minute:   if( http.getPath() == "/cgi-bin/mail.aspx" ) { rate.use( "mail limit", request.getRemoteIP() ); }   In the case of a distributed denial of service attack, we can rate limit on other criteria. For example, we could extract the 'name' field from the submitted data and rate-shape on that basis:   if( http.getPath() == "/cgi-bin/mail.aspx" ) { $name = http.getFormParam( "name" ); rate.use( "mail limit", $name ); }   Stingray gives you a very quick, simple and non-disruptive method to limit accesses to a vulnerable or resource-heavy web-based form like this. This solution illustrates one of the many ways that Stingray's traffic inspection and control can be used to help secure your public facing services.
View full article
Distributed denial of service (DDoS) attacks are the worst nightmare of every web presence. Common wisdom has it that there is nothing you can do to protect yourself when a DDoS attack hits you. Nothing? Well, unless you have Stingray Traffic Manager. In this article we'll describe how Stingray helped a customer keep their site available to legitimate users when they came under massive attack from the "dark side".   What is a DDoS attack?   DDoS attacks have risen to considerable prominence even in mainstream media recently, especially after the BBC published a report on how botnets can be used to send SPAM or take web-sites down and another story detailing that even computers of UK government agencies are taken over by botnets.   A botnet is an interconnected group of computers, often home PCs running MS Windows, normally used by their legitimate owners but actually under the control of cyber-criminals who can send commands to programs running on those computers. The fact that their machines are controlled by somebody else is due to operating system or application vulnerabilities and often unknown to the unassuming owners. When such a botnet member goes online, the malicious program starts to receive its orders. One such order can be to send SPAM emails, another to flood a certain web-site with requests and so on.   There are quite a few scary reports about online extortions in which web-site owners are forced to pay money to avoid disruption to their services.   Why are DDoS attacks so hard to defend against?   The reason DDoS attacks are so hard to counter is that they are using the service a web-site is providing and wants to provide: its content should be available to everybody. An individual PC connected to the internet via DSL usually cannot take down a server, because servers tend to have much more computing power and more networking bandwidth. By distributing the requests to as many different clients as possible, the attacker solves three problems in one go:   They get more bandwidth to hammer the server. The victim cannot thwart the attack by blocking individual IP addresses: that will only reduce the load by a negligible fraction. Also, clever DDoS attackers gradually change the clients sending the request. It's impossible to keep up with this by manually adapting the configuration of the service. It's much harder to identify that a client is part of the attack because each individual client may be sending only a few requests per second.   How to Protect against DDoS Attacks?   There is an article on how to ban IP addresses of individual attackers here: Dynamic Defense Against Network Attacks.The mechanism described there involves a Java Extension that modifies Stingray Traffic Manager's configuration via a SOAP call to add an offending IP address to the list of banned IPs in a Service Protection Class. In principle, this could be used to block DDoS attacks as well. In reality it can't, because SOAP is a rather heavyweight process that involves much too much overhead to be able to run hundreds of times per second. (Stingray's Java support is highly optimized and can handle tens of thousands of requests per second.)   The performance of the Java/SOAP combination can be improved by leveraging the fact that all SOAP calls in the Stingray API are array-based. So a list of IP addresses can be gathered in TrafficScript and added to Stingray's configuration in one call. 
But still, the blocking of unwanted requests would happen too late: at the application level rather than at the OS (or, preferably, the network) level. Therefore, the attacker could still inflict the load of accepting a connection, passing it up to Stingray Traffic Manager, checking the IP address inside Stingray Traffic Manager etc. It's much better to find a way to block the majority of connections before they reach Stingray Traffic Manager.   Introducing iptables   Linux offers an extensive framework for controlling network connections called iptables. One of its features is that it allows an administrator to block connections based on many different properties and conditions. We are going to use it in a very simple way: to ignore connection initiations based on the IP address of their origin. iptables can handle thousands of such conditions, but of course it has an impact on CPU utilization. However, this impact is still much lower than having to accept the connection and cope with it at the application layer.   iptables checks its rules against each and every packet that enters the system (potentially also on packets that are forwarded by and created in the system, but we are not going to use that aspect here). What we want to impede are new connections from IP addresses that we know are part of the DDoS attack. No expensive processing should be done on packets belonging to connections that have already been established and on which data is being exchanged. Therefore, the first rule to add is to let through all TCP packets that do not establish a new connections, i.e. that do not have the SYN flag set:   # iptables -I INPUT -p tcp \! --syn -j ACCEPT   Once an IP address has been identified as 'bad', it can be blocked with the following command:   # iptables -A INPUT -s [ip_address] -J DROP   Using Stingray Traffic Manager and TrafficScript to detect and block the attack   The rule that protects the service from the attack consists of two parts: Identifying the offending requests and blocking their origin IPs.   Identifying the Bad Guys: The Attack Signature   A gift shopping site that uses Stingray Traffic Manager to manage the traffic to their service recently noticed a surge of requests to their home page that threatened to take the web site down. They contacted us, and upon investigation of the request logs it became apparent that there were many requests with unconventional 'User-Agent' HTTP headers. A quick web search revealed that this was indicative of an automated distributed attack.   The first thing for the rule to do is therefore to look up the value of the User-Agent header in a list of agents that are known to be part of the attack:   1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 sub isAttack()  {      $ua = http.getHeader( "User-Agent" );      if ( $ua == "" || $ua == " " ) {         #log.info("Bad Agent [null] from ". request.getRemoteIP());         counter.increment(1,1);         return 1;      } else {         $agentmd5 = resource.getMD5( "bad-agents.txt" );         if ( $agentmd5 != data.get( "agentmd5" ) ) {            reloadBadAgentList( $agentmd5 );         }         if ( data.get( "BAD" . $ua ) ) {            #log.info("Bad agent ".$ua." from ". request.getRemoteIP());            counter.increment(2,1);            return 1;         }      }      return 0;  }    The rule fetches the relevant header from the HTTP request and makes a quick check whether the client sent an empty User-Agent or just a whitespace. 
If so, a counter is incremented that can be used in the UI to track how many such requests were found and then 1 is returned, indicating that this is indeed an unwanted request.   If a non-trivial User-Agent has been sent with the request, the list is queried. If the user-agent string has been marked as 'bad', another counter is incremented and again 1 is returned to the calling function. The techniques used here are similar to those in the more detailed HowTo: Store tables of data in TrafficScript article; when needed, the resource file is parsed and an entry in the system-wide data hash-table is created for each black-listed user agent.   This is accomplished by the following sub-routine:   1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 sub reloadBadAgentList( $newmd5 )  {      # do this first to minimize race conditions:      data.set( "agentmd5" , $newmd5 );      $badagents = resource.get( "bad-agents.txt" );      $i = 0;      data. reset ( "BAD" ); # clear old entries      while ( ( $j = string.find( $badagents , "\n" , $i ) ) != -1 ) {         $line = string.substring( $badagents , $i , $j -1 );         $i = $j +1;         $entry = "BAD" .string.trim( $line );         log .info( "Adding bad UA '" . $entry . "'" );         data.set( $entry , 1 );      }  }    Most of the time, however, it won't be necessary to read from the file system because the list of 'bad' agents does not change often (if ever for a given botnet attack). You can download the file with the black-listed agents here and there is even a web-page dedicated to user-agents, good and bad.   Configuring iptables from TrafficScript   Now that TrafficScript 'knows' that it is dealing with a request whose IP has to be blocked, this address must be added to the iptables 'INPUT' chain with target 'DROP'. The most lightweight way to get this information from inside Stingray Traffic Manager somewhere else is to use the HTTP client functionality in TrafficScript provided by the function http.request.get(). Since many such 'evil' IP addresses are expected per second, it is a good idea to buffer up a certain number of IPs before making an HTTP request (the first of which will have some overhead due to TCP's three-way handshake, but of course much less than forking a new process; subsequent requests will be made over the kept-alive connection).   Here is the rule that accomplishes the task:   1 2 3 4 5 6 7 8 9 10 11 12 13 if ( isAttack() ) {      $ip = request.getRemoteIP();      $iplist = data.get( "badiplist" );      if ( string.count( $iplist , "/" )+1 >= 10 ) {         data.remove( "badiplist" );         $url = "http://127.0.0.1:44252" . $iplist . "/" . $ip ;         http.request.get( $url , "" , 5);      } else {         data.set( "badiplist" , $iplist . "/" . $ip );      }      connection. sleep ( $sleep );      connection.discard();  }    A simple 'Web Server' that Adds Rules for iptables   Now who is going to handle all those funny HTTP GET requests? We need a simple web-server that reads the URL, splits it up into the IPs to be blocked and adds them to iptables (unless it is already being blocked). On startup this process checks which addresses are already in the black-list to make sure they are not added again (which would be a waste of resources), makes sure that a fast path is taken for packets that do not correspond to new connections and then listens for requests on a configurable port (in the rule above we used port 44252).   This daemon doesn't fork one iptables process per IP address to block. 
Instead, it uses the 'batch-mode' of the iptables framework, iptables-restore. With this tool, you compile a list of rules and send all of them down to the kernel with a single commit command.   A lot of details (like IPv6 support, throttling etc) have been left out because they are not specific to the problem at hand, but can be studied by downloading the Perl code (attached) of the program.   To start this server you have to be root and invoke the following command:   # iptablesrd.pl   Preventing too many requests with Stingray Traffic Manager's Rate Shaping   As it turned out when dealing with the DDoS attack that plagued our client, the bottleneck in the whole process described up until now was the addition of rules to iptables. This is not surprising as the kernel has to lock some of its internal structures before each such manipulation. On a moderately-sized workstation, for example, a few hundred transactions can be committed per second when starting from an empty rule set. Once there are, say, 10,000 IP addresses in the list, adding more becomes slower and slower, down to a few dozen per second at best. If we keep sending requests to the 'iptablesrd' web-server at a high rate, it won't be able to keep up with them. Basically, we have to take into account that this is the place where processing is channeled from a massively parallel, highly scalable process (Stingray) into the sequential, one-at-a-time mechanism that is needed to keep the iptables configuration consistent across CPUs.   Queuing up all these requests is pointless, as it will only eat resources on the server. It is much better to let Stingray Traffic Manager sleep on the connection for a short time (to slow down the attacker) and then close it. If the IP address continues to be part of the botnet, the next request will come soon enough and we can try and handle it then.   Luckily, Stingray comes with rate-shaping functionality that can be used in TrafficScript. Setting up a 'Rate' class in the 'Catalog' tab looks like this:     The Rate Class can now be used in the rule to restrict the number of HTTP requests Stingray makes per second:   1 2 3 4 5 6 if ( rate.getBackLog( "DDoS Protect" ) < 1 ) {      $url = "http://localhost:44252" . $iplist . "/" . $ip ;      rate. use ( "DDoS Protect" );      # our 'webserver' never sends a response      http.request.get( $url , "" , 5);  }    Note that we simply don't do anything if the rate class already has a back-log, i.e. there are outstanding requests to block IPs. If there is no request queued up, we impose the rate limitation on the current connection and then send out the request.   The Complete Rule   To wrap this section up, here is the rule in full:   1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 $sleep = 300; # in milliseconds  $maxbacklog = 1;  $ips_per_httprequest = 10;  $http_timeout = 5; # in seconds  $port = 44252; # keep in sync with argument to iptablesrd.pl       if ( isAttack() ) {      $ip = request.getRemoteIP();      $iplist = data.get( "badiplist" );      if ( string.count( $iplist , "/" )+1 >= $ips_per_httprequest ) {         data.remove( "badiplist" );         if ( rate.getBackLog( "ddos_protect" ) < $maxbacklog ) {            $url = "http://127.0.0.1:" . $port . $iplist . "/" . $ip ;            rate. 
use ( "ddos_protect" );            # our 'webserver' never sends a response            http.request.get( $url , "" , $http_timeout );         }      } else {         data.set( "badiplist" , $iplist . "/" . $ip );      }      connection. sleep ( $sleep );      connection.discard();  }       $rawurl = http.getRawURL();  if ( $rawurl == "/" ) {      counter.increment(3, 1);  # Small delay - shouldn't annoy real users, will at least slow down attackers      connection. sleep (100);      http.redirect( "/temp-redirection" );  # Attackers will probably ignore the redirection. Real browsers will come back  }       # Re-write the URL before passing it on to the web servers  if ( $rawurl == "/temp-redirection" ) {      http.setPath( "/" );  }       sub isAttack()  {      $ua = http.getHeader( "User-Agent" );           if ( $ua == "" || $ua == " " ) {         counter.increment(1,1);         return 1;      } else {         $agentmd5 = resource.getMD5( "bad-agents.txt" );         if ( $agentmd5 != data.get( "agentmd5" ) ) {            reloadBadAgentList( $agentmd5 );         }         if ( data.get( "BAD" . $ua ) ) {            counter.increment(2,1);            return 1;         }      }      return 0;  }       sub reloadBadAgentList( $newmd5 )  {      # do this first to minimize race conditions:      data.set( "agentmd5" , $newmd5 );      $badagents = resource.get( "bad-agents.txt" );      $i = 0;      data. reset ( "BAD" );      while ( ( $j = string.find( $badagents , "\n" , $i ) ) != -1 ) {         $line = string.substring( $badagents , $i , $j -1 );         $i = $j +1;         $entry = "BAD" .string.trim( $line );         data.set( $entry , 1 );      }  }   Note that there are a few tunables at the beginning of the rule. Also, since in the particular case of the gift shopping site all attack requests went to the home page ("/"), a small slowdown and subsequent redirect was added for that page.   Further Advice   The method described here can help mitigate the server-side effect of DDoS attacks. It is important, however, to adapt it to the particular nature of each attack and to the system Stingray Traffic Manager is running on. The most obvious adjustment is to change the isAttack() sub-routine to reliably detect attacks without blocking legitimate requests.   Beyond that, a careful eye has to be kept on the system to make sure Stingray strikes the right balance between adding bad IPs (which is expensive but keeps further requests from that IP out) and throwing away connections the attackers have managed to establish (which is cheap but won't block future connections from the same source). After a while, the rules for iptables will block all members of the botnet. However, botnets are dynamic, they change over time: new nodes are added while others drop out.   An useful improvement to the iptablesrd.pl process described above would therefore be to speculatively remove blocks if they have been added a long time ago and/or if the number of entries crosses a certain threshold (which will depend on the hardware available).   Most DDoS attacks are short-lived, however, so it may suffice to just wait until it's over.   The further upstream in the network the attack can be blocked, the better. With the current approach, blocking occurs at the machine Stingray Traffic Manager is running on. If the upstream router can be remote-controlled (e.g. via SNMP), it would be preferable to do the blocking there. The web server we are using in this article can easily be adapted to such a scenario.   
A word of warning and caution: The method presented here is no panacea that can protect against arbitrary attacks. A massive DDoS attack can, for instance, saturate the bandwidth of a server with a flood of SYN packets and such an attack can only be handled further upstream in the network. But Stingray Traffic Manager can certainly be used to scale down the damage inflicted upon a web presence and take a great deal of load from the back-end servers.   Footnote   The image at the top of the article is a graphical representation of the distribution of nodes on the internet produced by the opte project. It is protected by the Creative Commons License.
View full article
...Riverbed customers protected!

When I got into the office this morning, I wasn't expecting to read about a new BIND 9 exploit!! So as soon as I'd had my first cup of tea I sat down to put together a little TrafficScript magic to protect our customers.

BIND Dynamic Update DoS

The exploit works by sending a specially crafted DNS Update packet to a zone's master server. Upon receiving the packet, the DNS server will shut down. ISC, the creators of BIND, have this to say about the new exploit:

"Receipt of a specially-crafted dynamic update message to a zone for which the server is the master may cause BIND 9 servers to exit. Testing indicates that the attack packet has to be formulated against a zone for which that machine is a master. Launching the attack against slave zones does not trigger the assert."

"This vulnerability affects all servers that are masters for one or more zones – it is not limited to those that are configured to allow dynamic updates. Access controls will not provide an effective workaround."

Sounds nasty, but how easy is it to get access to code that exploits this vulnerability? Well, the guy who found the bug posted a fully functional Perl script with the Debian Bug Report.

TrafficScript to the Rescue

I often talk to customers about how TrafficScript can be used to quickly patch bugs and vulnerabilities while they wait for a fix from the vendor or their own development teams. It's time to put my money where my mouth is, so here's the workaround for this particular vulnerability:

$data = request.get( 3 );

if( string.regexmatch( $data, "..[()]" ) ) {
   log.warn( "FOUND UPDATE PACKET" );
   connection.discard();
}

The above TrafficScript checks the query type of the incoming request and, if it's an UPDATE, discards the connection. Obviously you could extend this script with a whitelist of servers that are allowed to send updates, if necessary. However, you should only have this script in place while your servers are vulnerable, and you should apply patches as soon as you can.

Be safe!
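Here is roughly what that whitelist extension might look like - a sketch only, with placeholder addresses that you would replace with the hosts (DHCP or DDNS controllers, for example) that legitimately send dynamic updates:

$data = request.get( 3 );

if( string.regexmatch( $data, "..[()]" ) ) {
   $src = request.getRemoteIP();
   # Placeholder whitelist - substitute the addresses of your own update sources
   if( $src == "10.0.0.5" || $src == "10.0.0.6" ) break;
   log.warn( "Dropping DNS UPDATE packet from " . $src );
   connection.discard();
}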
View full article
If you are unfortunate enough to suffer a total failure of all of your webservers, all is not lost. Stingray Traffic Manager can host custom error pages for you, and this article shows you how! If all of the servers in a pool have failed, you have several options:   Each pool can be configured to have a 'Failure Pool'. This is used when all of the nodes in the primary pool have completely failed. You may configure the traffic manager to send an HTTP Redirect message, directing your visitors to an alternate website. However, you may reach the point where you've got nowhere to send your users. All your servers are down, so failure pools are not an option, and you can't redirect a visitor to a different site for the same reason.   In this case, you can use a third option:   You may configure a custom error page which is returned with every request. Custom error pages   Use the error_file setting in the Virtual Server configuration to specify the response if the back-end servers are not functioning. The error file should be placed in your Extra Files catalog (in the 'miscellaneous files' class:     <html> <head> <title>Sorry</title> <link rel="stylesheet" href="main.css" type="text/css" media="screen" > </head> <body> <img src="logo.gif"> <h1>Our apologies</h1> We're sorry. All of our operators are busy. Please try again later. </body> </html>   This HTML error page will now be returned whenever an HTTP request is received, and all of your servers are down.   Embedding images and other resources   Note that the HTML page has embedded images and stylesheets. Where are these files hosted? With the current configuration, the error page will be returned for every request.   You can use a little TrafficScript to detect requests for files referenced by the error page, and serve content directly from the conf/extra/ directory.   First, we'll modify the error page slightly to may it easier to recognize requests for files used by the error page:   <link rel="stylesheet" href="https://community.brocade.com/.extra/main.css" type="text/css" media="screen">   and   <img src="/.extra/logo.gif">   Then, upload the main.css and logo.gif files, and any others you use, to the Extra Files catalog.   Finally, the following TrafficScript request rule can detect requests for those files and will make the traffic manager serve the response directly:   # Only process requests that begin '/.extra/' $url = http.getPath(); if( ! string.regexmatch( $url, "^/\\.extra/(.*)$" ) ) { break; } else { $file = $1; } # If the file does not exist, stop if( ! resource.exists( $file ) ) break; # Work out the MIME type of the file from its extension $mimes = [ "html" => "text/html", "jpg" => "image/jpeg", "jpeg" => "image/jpeg", "png" => "image/png", "gif" => "image/gif", "js" => "application/x-javascript", "css" => "text/css" ]; if( string.regexmatch( $file, ".*\\.([^.]+)$" ) ) { $mime = $mimes[ $1 ]; } if( ! $mime ) $mime = "text/plain"; # Serve the file from the conf/extra directory $contents = resource.get( $file ); http.sendResponse( "200 OK", $mime, $contents, "" );   Copy and paste this TrafficScript into the Rules Catalog, and assign it as a request rule to the virtual server. Images (and css or js files) that are placed in the Extra Files catalog can be refered to using /.extra/imagename.png . You will also be able to test your error page by browsing to /.extra/errorpage.html (assuming the file is called errorpage.html in the extra directory).
View full article
What can you do if an isolated problem causes one or more of your application servers to fail? How can you prevent visitors to your website from seeing the error, and instead send them a valid response?

This article shows how to use TrafficScript to inspect responses from your application servers and retry the requests against several different machines if a failure is detected.

The Scenario

Consider the following scenario. You're running a web based service on a cluster of four application servers, running .NET, Java, PHP, or some other application environment. An occasional error on one of the machines means that one particular application sometimes fails on that one machine. It might be caused by a runaway process, a race condition when you update configuration, or by failing system memory.

With Stingray, you can check the responses coming back from your application servers. For example, application errors may be identified by a '500 Internal Server Error' or '502 Bad Gateway' message (refer to the HTTP spec for a full list of error codes).

You can then write a response rule that retries the request a certain number of times against different servers to see if it gets a better response before sending it back to the remote user.

$code = http.getResponseCode();
if( $code >= 500 && $code != 503 ) {
   # Not retrying 503s here, because they get retried
   # automatically before response rules are run
   if( request.getRetries() < 3 ) {
      # Avoid the current node when we retry,
      # if possible
      request.avoidNode( connection.getNode() );
      log.warn( "Request " . http.getPath() .
                " to site " . http.getHostHeader() .
                " from " . request.getRemoteIP() .
                " caused error " . http.getResponseCode() .
                " on node " . connection.getNode() );
      request.retry();
   }
}

How does the rule work?

The rule does a few checks before telling Stingray to retry the request:

1. Did an error occur?

First of all, the rule checks to see if the response code indicated that an error occurred:

if( $code >= 500 && $code != 503 ) { ... }

If your service was prone to other types of error - for example, Java backtraces might be found in the middle of a response page - you could write a TrafficScript test for those errors instead (see the sketch at the end of this article).

2. Have we retried this request before?

Some requests may always generate an error response. We don't want to keep retrying a request in this case - we've got to stop at some point:

if( request.getRetries() < 3 ) { ... }

request.getRetries() returns the number of times that this request has been resent to a back-end node. It's initially 0; each time you call request.retry(), it is incremented.

This code will retry a request 3 times, in addition to the first time that it was processed.

3. Don't use the same node again!

When you retry a request, the load-balancing decision is recalculated to select the target node. However, you will probably want to avoid the node that generated the error before, as it is likely to generate the error again.

request.avoidNode( connection.getNode() );

connection.getNode() returns the name of the node that was last used to process the request. request.avoidNode() gives the load balancing algorithm a hint that it should avoid that node. The hint is advisory only - if there are no other available nodes in the pool, that node will be used anyway.

4. Log what we're about to do.

This rule conceals problems with the service so that the end user does not see them. If it works well, these problems may never be found!

log.warn( "Request " . http.getPath() .
          " to site " . http.getHostHeader() .
          " from " . request.getRemoteIP() .
          " caused error " . http.getResponseCode() .
          " on node " . connection.getNode() );

It's a sensible idea to log the fact that a request caused an unexpected error so that the problem can be investigated later.

5. Retry the request

Finally, tell Stingray to resubmit the request, in the hope that this time we'll get a better response:

request.retry();

And that's it.

Notes

If a malicious user finds an HTTP request that always causes an error, perhaps because of an application bug, then this rule will replay the malicious request against up to 3 additional machines in your cluster. This makes it easier for the user to mount a DoS-style attack against your site, because they only need to send a quarter as many requests.

However, the rule explicitly logs that a failure occurred, and logs both the request that caused the failure and the source of the request. This information is vital when performing triage, i.e. rapid fault fixing. Once you have noticed that the problem exists, you can very quickly add a request rule to drop the bad request before it is ever processed:

if( http.getPath() == "/known/bad/request" ) connection.discard();
View full article
When you move content around a web site, links break. Even if you've patched up all your internal links, visitors arriving from external links, outdated search results and old bookmarks will still hit broken links and see a '404 Not Found' error.

Rather than giving each user a sorry "404 Not Found" apology page, how about trying to send them to a useful page? The following TrafficScript example shows you exactly how to do that, without having to modify any of your web site content or configuration.

The TrafficScript rule works by inspecting the response from the webserver before it's sent back to the remote user. If the status code of the response is '404', the rule sends back a redirect to a higher level page:

http://www.site.com/products/does/not/exist.html returns 404, so try:
http://www.site.com/products/does/not/ returns 404, so try:
http://www.site.com/products/does/ returns 404, so try:
http://www.site.com/products/ which works fine!

Here is the code (it's a Stingray response rule):

if( http.getResponseCode() == 404 ) {
   $path = http.getPath();

   # If the home page gives a 404, nothing we can do!
   if( $path == "/" ) http.redirect( "http://www.google.com/" );

   if( string.endsWith( $path, "/" ) ) $path = string.drop( $path, 1 );
   $i = string.findr( $path, "/" );
   $path = string.substring( $path, 0, $i-1 )."/";
   http.redirect( $path );
}

Your users will never get a 404 Not Found message for any web page on your site; Stingray will try higher and higher pages until it finds one that exists.

Of course, you could use a similar strategy for other application errors, such as 503 Service Unavailable; a simple example is sketched at the end of this article.

The same for images...

This strategy works fine for web pages, but it's not appropriate for embedded content such as missing images, stylesheets or javascript files.

For some content types, a 404 response is not user visible and is acceptable. For images, it may not be: some browsers will display a broken image icon, where a simple transparent GIF image would be more appropriate:

if( http.getResponseCode() == 404 ) {
   $path = http.getPath();

   # If the home page gives a 404, nothing we can do!
   if( $path == "/" ) http.redirect( "http://www.google.com/" );

   # Is it an image? If so, serve a 1x1 transparent GIF and stop
   if( string.endsWith( $path, ".gif" ) || string.endsWith( $path, ".jpg" )
       || string.endsWith( $path, ".png" ) ) {
      http.sendResponse( "200 OK", "image/gif",
         "GIF89a\x01\x00\x01\x00\x80\xff\x00\xff\xff\xff\x00\x00\x00\x2c\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02\x44\x01\x00\x3b",
         "" );
      break;
   }

   # Is it a stylesheet (.css) or javascript file (.js)?
   if( string.endsWith( $path, ".css" ) || string.endsWith( $path, ".js" ) ) {
      http.sendResponse( "404", "text/plain", "", "" );
      break;
   }

   if( string.endsWith( $path, "/" ) ) $path = string.drop( $path, 1 );
   $i = string.findr( $path, "/" );
   $path = string.substring( $path, 0, $i-1 )."/";
   http.redirect( $path );
}
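Following up on the 503 suggestion above, here is a minimal sketch of the same idea for an overloaded application. The "/sorry.html" page is an assumption, not part of the original rule; it would need to be hosted somewhere that stays up when the application is busy (the Extra Files technique described earlier in this collection is one option):

# Sketch: send visitors somewhere friendlier than a raw 503 error.
# "/sorry.html" is an example path - substitute your own static page.
if( http.getResponseCode() == 503 ) {
   http.redirect( "/sorry.html" );
}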
View full article