Pulse Secure vADC

In a recent conversation, a user wished to use the Traffic Manager's rate shaping capability to throttle back the requests to one part of their web site that was particularly sensitive to high traffic volumes (think a CGI, JSP servlet, or other type of dynamic application). This article describes how you might go about doing this, testing and implementing a suitable limit using Service Level Monitoring, Rate Shaping and some TrafficScript magic.

The problem

Imagine that part of your website is particularly sensitive to traffic load and is prone to overloading when a crowd of visitors arrives. Connections queue up, response time becomes unacceptable and it looks like your site has failed.

If your website were a tourist attraction or a club, you'd employ a gatekeeper to manage entry rates. As the attraction began to fill up, you'd employ a queue to limit entry, and if the queue got too long, you'd want to encourage new arrivals to leave and return later rather than to join the queue.

This is more or less the solution we can implement for a web site. In this worked example, we're going to single out a particular application (named search.cgi) whose traffic we want to control, and let all other traffic (typically for static content, etc.) through without any shaping.

The approach

We'll first measure the maximum rate at which the application can process transactions, and use this value to determine the rate limit we want to impose when the application begins to run slowly.

Using Traffic Manager's Service Level Monitoring classes, we'll monitor the performance (response time) of the search.cgi application. If the application begins to run slower than normal, we'll deploy a queuing policy that rate-limits new requests to the application. We'll monitor the queue and send a 'please try later' message when the rate limit is met, rather than admitting users to the queue and forcing them to wait.

Our goal is to maximize utilization (supporting as many transactions as possible) but minimize response time, returning a 'please wait' message rather than queuing a user.

Measuring performance

We first use zeusbench to determine the optimal performance that the application can achieve. We perform several runs, increasing the concurrency until the performance (responses per second) stabilizes at a consistent level:

zeusbench -c 5 -t 20 http://host/search.cgi
zeusbench -c 10 -t 20 http://host/search.cgi
zeusbench -c 20 -t 20 http://host/search.cgi
... etc.

Run:

zeusbench -c 20 -t 20 http://host/search.cgi

From this, we conclude that the maximum number of transactions per second that the application can comfortably sustain is 100.

We then use zeusbench to send transactions at that rate (100 per second) and verify that performance and response times are stable. Run:

zeusbench -r 100 -t 20 http://host/search.cgi

Our desired response time can be deduced to be approximately 20 ms.

Now we perform the 'destructive' test, to elicit precisely the behaviour we want to avoid. Use zeusbench again to send requests to the application at higher than the sustainable transaction rate:

zeusbench -r 110 -t 20 http://host/search.cgi

Observe how the response time for the transactions steadily climbs as requests begin to be queued, and the successful transaction rate falls steeply. Eventually, when the response time exceeds acceptable limits, transactions are timed out and the service appears to have failed.
This illustrates how sensitive a typical application can be to floods of traffic that overwhelm it, even for just a few seconds. The effects of the flood can last for tens of seconds afterwards as the connections complete or time out.

Defining the policy

We wish to implement the following policy:

If all transactions complete within 50 ms, do not attempt to shape traffic.
If some transactions take more than 50 ms, assume that we are in danger of overload. Rate-limit traffic to 100 requests per second, and if requests exceed that rate limit, send back a '503 Too Busy' message rather than queuing them.
Once transaction time comes down to less than 50 ms, remove the rate limit.

Our goal is to repeat the previous zeusbench test, showing that the maximum transaction rate can be sustained within the desired response time, and that any extra requests receive an error message quickly rather than being queued.

Implementing the policy

The Rate Class

Create a rate shaping class named Search limit with a limit of 100 requests per second.

The Service Level Monitoring class

Create a Service Level Monitoring class named Search timer with a target response time of 50 ms.

If desired, you can use the Activity monitor to chart the percentage of requests that conform (i.e. complete within 50 ms) while you conduct your zeusbench runs. You'll notice a strong correlation between these figures and the increase in response time figures reported by zeusbench.

The TrafficScript rule

Now use these two classes with the following TrafficScript request rule:

# We're only concerned with requests for /search.cgi
$url = http.getPath();
if ( $url != "/search.cgi" ) break;

# Time this request using the Service Level Monitoring class
connection.setServiceLevelClass( "Search timer" );

# Test if any of the recent requests fell outside the desired SLM threshold
if ( slm.conforming( "Search timer" ) < 100 ) {
   if ( rate.getBacklog( "Search limit" ) > 0 ) {
      # To minimize response time, always send a 503 Too Busy response if the
      # request exceeds the configured rate of 100/second.
      # You could also use http.redirect() to a more pleasant 'sorry' page, but
      # 503 errors are easier to monitor when testing with ZeusBench
      http.sendResponse( "503 Too busy", "text/html",
         "<h1>We're too busy!!!</h1>",
         "Pragma: no-cache" );
   } else {
      # Shape the traffic to 100/second
      rate.use( "Search limit" );
   }
}

Testing the policy

Rerun the 'destructive' zeusbench run that produced the undesired behaviour previously. Running:

zeusbench -r 110 -t 20 http://host/search.cgi

Observe that:

Traffic Manager processes all of the requests without excessive queuing; the response time stays within the desired limits.
Traffic Manager typically processes 110 requests per second. There are approximately 10 'Bad' responses per second (these are the 503 Too Busy responses generated by the rule), so we can deduce that the remaining 100 (approx.) requests were served correctly.

These tests were conducted in a controlled environment, on an otherwise idle machine that was not processing any other traffic. You could reasonably expect much more variation in performance in a real-world situation, and would be advised to set the rate class to a lower value than the experimentally proven maximum.
In a real-world situation, you would probably choose to redirect a user to a 'sorry' page rather than returning a '503 Too Busy' error. However, because ZeusBench counts 4xx and 5xx responses as 'Bad', it is easy to determine how many requests complete successfully, and how many return the 'sorry' response.   For more information on using ZeusBench, take a look at the Introducing Zeusbench article.
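Returning to the 'sorry' page suggestion above, the change to the rule is small. The sketch below is illustrative only: the /sorry.html path is a placeholder, and note that ZeusBench will no longer count these redirects as 'Bad' responses:

if ( slm.conforming( "Search timer" ) < 100 ) {
   if ( rate.getBacklog( "Search limit" ) > 0 ) {
      # Redirect the visitor to a static 'sorry' page instead of sending a 503
      http.redirect( "/sorry.html" );
   } else {
      # Shape the traffic to 100/second
      rate.use( "Search limit" );
   }
}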
Web spiders are clever critters - they are automated programs designed to crawl over web pages, retrieving information from the whole of a site (for example, spiders power search engines and shopping comparison sites). But what do you do if your website is being overrun by the bugs? How can you prevent your service from being slowed down by a badly written, over-eager web spider?

Web spiders (sometimes called robots, or bots) are meant to adhere to the Robot exclusion standard. By putting a file called robots.txt at the top of your site, you can restrict the pages that a web spider should load. However, not all spiders bother to check this file. Even worse, the standard gives no control over how often a spider may fetch pages. A poorly written spider could hammer your site with requests, trying to discover the price of everything that you are selling every minute of the day. The problem is, how do you stop these spiders while allowing your normal visitors to use the site without restrictions?

As you might expect, Stingray has the answer! The key feature to use is the 'Request Rate Shaping' classes. These will prevent any one user from fetching too many pages from your site.

Let's see how to put them to use:

Create a Rate Shaping Class

You can create a new class from the Catalogs page. You need to pick at least one rate limit: the maximum allowed requests per minute, or per second. For our example, we'll create a class called 'limit' that allows up to 100 requests a minute.

Put the rate shaping class into use - first attempt

Now, create a TrafficScript rule to use this class. Don't forget to add this new rule to the Virtual Server that runs your web site.

rate.use( "limit" );

This rule is run for each HTTP request to your site. It applies the rate shaping class to each of them.

However, this isn't good enough. We have just limited the entire set of visitors to the site to 100 requests a minute, in total. If we leave the settings as is, this would have a terrible effect on the site. We need to apply the rate shaping limits to each individual user.

Rate shaping - second attempt

Edit the TrafficScript rule and use this code instead:

rate.use( "limit", connection.getRemoteIP() );

We have provided a second parameter to the rate.use() function. The rule takes the client IP address and uses this to identify a user. It then applies the rate shaping class separately to each unique IP address. So, a user coming from IP address 1.2.3.4 can make up to 100 requests a minute, and a user from 5.6.7.8 could also make 100 requests at the same time.

Now, if a web spider browses your site, it will be rate limited.

Improvements

We can make this rate shaping work even better. One slight problem with the above code is that sometimes you may have multiple users arriving at your site from one IP address. For example, a company may have a single web proxy: everyone in that company will appear to come from the same IP address. We don't want to collectively slow them down.

To work around this, we can use cookies to identify individual users. Let's assume your site already sets a cookie called 'USERID' whose value is unique for each visitor. We can use this in the rate shaping:

# Try reading the cookie
$userid = http.getCookie( "USERID" );
if( $userid == "" ) {
   $userid = connection.getRemoteIP();
}
rate.use( "limit", $userid );

This TrafficScript rule tries to use the cookie to identify a user. If it isn't present, it falls back to using the client IP address.
Even more improvements

There are many other possibilities for further improvements. We could detect web spiders by their User-Agent names, or perhaps we could only rate shape users who aren't accepting cookies. But we have already achieved our goal - now we have a means to limit the page requests made by automated programs, while allowing ordinary users to fully use the site.

This article was originally written by Ben Mansell in December 2006.
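As a rough sketch of the User-Agent idea mentioned above (the 'spider-limit' class name and the keyword list are illustrative assumptions, not part of the original article), you could apply a tighter rate class to clients that identify themselves as crawlers:

# Apply a stricter rate class to self-identified bots
$ua = http.getHeader( "User-Agent" );
if( string.regexmatch( $ua, "bot|spider|crawler", "i" ) ) {
   rate.use( "spider-limit", $ua );
} else {
   # Everyone else gets the per-user limit described above
   $userid = http.getCookie( "USERID" );
   if( $userid == "" ) {
      $userid = connection.getRemoteIP();
   }
   rate.use( "limit", $userid );
}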
Many services now use RSS feeds to distribute frequently updated information like news stories and status reports. Traffic Manager's powerful TrafficScript language lets you process RSS XML data, and this article describes how you can embed several RSS feeds into a web document.

It illustrates Traffic Manager's response rewriting capabilities, XML processing and its ability to query several external datasources while processing a web request.

In this example, we'll show how you can embed special RSS tags within a static web document. Traffic Manager will intercept these tags in the document and replace them with the appropriate RSS feed data:

<!RSS http://community.brocade.com/community/product-lines/stingray/view-browse-feed.jspa?browseSite=place-content&browseViewID=placeContent&userID=9503&containerType=14&containerID=2005&filterID=contentstatus%5Bpublished%5D~objecttype~objecttype%5Bthread%5D !>

We'll use a TrafficScript rule to process web responses, seek out the RSS tag and retrieve, format and insert the appropriate RSS data.

Check the response

First, the TrafficScript rule needs to obtain the response data and verify that the response is a simple HTML document. We don't want to process images or other document types!

# Check the response type
$contentType = http.getResponseHeader( "Content-Type" );
if( ! string.startsWith( $contentType, "text/html" ) ) break;

# Get the response data
$body = http.getResponseBody();

Find the embedded RSS tags

Next, we can use a regular expression to search through the response data and find any RSS tags in it:

(.*?)<!RSS\s+(.*?)\s+!>(.*)

Stingray supports Perl-compatible regular expressions (regexes). This regex will find the first RSS tag in the document, and will assign text to the internal variables $1, $2 and $3:

$1: the text before the tag
$2: the RSS URL within the tag
$3: the text after the tag

The following code searches for RSS tags:

while( string.regexmatch( $body, '(.*?)<!RSS\s+(.*?)\s*!>(.*)' )) {
   $start = $1;
   $url = $2;
   $end = $3;
}

Retrieve the RSS data

An asynchronous HTTP request is sufficient to retrieve the RSS XML data:

$rss = http.request.get( $url );

Transform the RSS data using an XSLT transform

The following XSLT transform can be used to extract the first 4 RSS items and format them as an HTML <ul> list:

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
   <xsl:template match="/">
      <ul>
         <xsl:apply-templates select="//item[position()&lt;5]"/>
      </ul>
   </xsl:template>

   <xsl:template match="item">
      <xsl:param name="URL" select="link/text()"/>
      <xsl:param name="TITLE" select="title/text()"/>
      <li><a href="{$URL}"><xsl:value-of select="$TITLE"/></a></li>
   </xsl:template>
</xsl:stylesheet>

Store the XSLT file in the Traffic Manager conf/extra directory, naming it 'rss.xslt', so that the rule can look it up using resource.get().

You can apply the XSLT transform to the XML data using the xml.xslt.transform function. The function returns the result with HTML entity encoding; use string.htmldecode to remove these:

$xsl = resource.get( "rss.xslt" );
$html = string.htmldecode( xml.xslt.transform( $rss, $xsl ) );
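Before assembling the complete rule, a quick stand-alone illustration of how the capture variables from the tag-matching step are populated may help (the input string here is invented purely for the example):

# Illustration only: how the three capture groups are populated for one tag
$doc = "<p>Before</p><!RSS http://example.com/feed.xml !><p>After</p>";
if( string.regexmatch( $doc, '(.*?)<!RSS\s+(.*?)\s*!>(.*)' )) {
   log.info( "start: " . $1 );   # <p>Before</p>
   log.info( "url:   " . $2 );   # http://example.com/feed.xml
   log.info( "end:   " . $3 );   # <p>After</p>
}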
The entire rule

The entire response rule, with a little additional error checking, looks like this:

$contentType = http.getResponseHeader( "Content-Type" );
if( ! string.startsWith( $contentType, "text/html" ) ) break;

$body = http.getResponseBody();
$new = "";
$changed = 0;

while( string.regexmatch( $body, '(.*?)<!RSS\s+(.*?)\s*!>(.*)' )) {
   $start = $1;
   $url = $2;
   $end = $3;

   $rss = http.request.get( $url );
   if( $1 != 200 ) {
      $html = "<ul><li><b>Failed to retrieve RSS feed</b></li></ul>";
   } else {
      $xsl = resource.get( "rss.xslt" );
      $html = string.htmldecode( xml.xslt.transform( $rss, $xsl ) );
      if( $html == -1 ) {
         $html = "<ul><li><b>Failed to parse RSS feed</b></li></ul>";
      }
   }

   $new = $new . $start . $html;
   $body = $end;
   $changed = 1;
}

if( $changed )
   http.setResponseBody( $new . $body );

Note that the regex captures are copied into $start, $url and $end before issuing the subrequest; http.request.get() sets $1 to the HTTP status code of the subrequest, which is what the rule checks next.
Meta-tags and the meta-description are used by search engines and other tools to infer more information about a website, and their judicious and responsible use can have a positive effect on a page's ranking in search engine results. Suffice to say, a page without any 'meta' information is likely to score lower than the same page with some appropriate information.

This article (originally published December 2006) describes how you can automatically infer and insert meta-tags into a web page, on the fly.

Rewriting a response

First, decide what to use to generate a list of related keywords.

It would have been nice to have been able to slurp up all the text on the page and calculate the most commonly occurring unusual words. Surely that would have been the über-geek thing to do? Well, not really: unless I was careful I could end up slowing down each response, and there would be the danger that I produced a strange list of keywords that didn't accurately represent what the page is trying to say (and could also be widely "off-message").

So I instead turned to three sources of on-message page summaries: the title tag, the contents of the big h1 tag and the elements of the page path.

The script

First I had to get the response body:

$body = http.getResponseBody();

This will be grepped for keywords, mangled to add the meta-tags and then returned by setting the response body:

http.setResponseBody( $body );

Next I had to make a list of keywords. As I mentioned before, my first plan was to look at the path: by converting slashes to commas I should be able to generate some correct keywords, something like this:

$path = http.getPath();
$path = string.regexsub( $path, "/+", ", ", "g" );

After adding a few lines to first tidy up the path - removing slashes at the beginning and end, and replacing underscores with spaces - it worked pretty well. And, for solely aesthetic reasons, I added:

$path = string.uppercase( $path );

Then, I took a look at the title tag. Something like this did the trick:

if( string.regexmatch( $body, "<title>\\s*(.*?)\\s*</title>", "i" ) ) {
   $title_tag_text = $1;
}

(The "i" flag here makes the search case-insensitive, just in case.)

This, indeed, worked fine. With a little cleaning up, I was able to generate a meta-description similarly: I just stuck them together after adding some punctuation (solely to make it nicer when read: search engines often return the meta-description in the search result).

After playing with this for a while I wasn't completely satisfied with the results: the meta-keywords looked great, but the meta-description was a little lacking in the real English department.

So, instead I turned my attention to the h1 tag on each page: it should already be a mini-description of each page. I grepped it in a similar fashion to the title tag and the generated description looked vastly improved.

Lastly, I added some code to check if a page already has a meta-description or meta-keywords, to prevent the automatic tags being inserted in this case. This allows us to gradually add meta-tags by hand to our pages - and it means we always have a backup should we forget to add metas to a new page in the future.
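Before looking at the finished script, here is a small stand-alone illustration of the path clean-up described above (the example path is invented):

# Illustration only: the path clean-up applied to an invented path
$path = "/products/traffic_manager/overview/";
$path = string.regexsub( $path, "^/?(.*?)/?$", "$1" );   # strip leading/trailing slashes
$path = string.replaceAll( $path, "_", " " );            # underscores become spaces
$path = string.replaceAll( $path, "/", ", " );           # remaining slashes become commas
log.info( $path );   # logs: products, traffic manager, overview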
The finished script looked like this:

# Only process HTML responses
$ct = http.getResponseHeader( "Content-Type" );
if( ! string.startsWith( $ct, "text/html" ) ) break;

$body = http.getResponseBody();
$path = http.getPath();

# remove the first and last slashes; convert remaining slashes
$path = string.regexsub( $path, "^/?(.*?)/?$", "$1" );
$path = string.replaceAll( $path, "_", " " );
$path = string.replaceAll( $path, "/", ", " );

if( string.regexmatch( $body, "<h1.*?>\\s*(.*?)\\s*</h1>", "i" ) ) {
   $h1 = $1;
   $h1 = string.regexsub( $h1, "<.*?>", "", "g" );
}

if( string.regexmatch( $body, "<title>\\s*(.*?)\\s*</title>", "i" ) ) {
   $title = $1;
   $title = string.regexsub( $title, "<.*?>", "", "g" );
}

if( $h1 ) {
   $description = "Riverbed - " . $h1 . ": " . $title;
   $keywords = "Riverbed, " . $path . ", " . $h1 . ", " . $title;
} else {
   $description = "Riverbed - " . $path . ": " . $title;
   $keywords = "Riverbed, " . $path . ", " . $title;
}

# only rewrite the meta-keywords if we don't already have some
if( ! string.regexmatch( $body, "<meta\\s+name='keywords'", "i" ) ) {
   $meta_keywords = " <meta name='keywords' content='" . $keywords . "'/>\n";
}

# only rewrite the meta-description if we don't already have one
if( ! string.regexmatch( $body, "<meta\\s+name='description'", "i" ) ) {
   $meta_description = " <meta name='description' content='" . $description . "'/>";
}

# find the title and stick the new meta tags in afterwards
if( $meta_keywords || $meta_description ) {
   $body = string.regexsub( $body, "(<title>.*</title>)", "$1\n" . $meta_keywords . $meta_description );
   http.setResponseBody( $body );
}

It should be fairly easy to adapt it to another site, assuming the pages are built consistently.

This article was originally written by Sam Phillips in December 2006, and was modified and tested in February 2013.
Loggly is a cloud-based log management service. The idea with Loggly is that you direct all your applications, hardware, software, etc. to send their logs to Loggly. Once all the logs are in the Loggly cloud you can:

Root-cause and solve problems by performing powerful and flexible searches across all your devices and applications
Set up alerts on log events
Measure application performance
Create custom graphs and analytics to better understand user behavior and experience

Having your Virtual Traffic Manager (vTM) logs alongside your application logs will provide valuable information to help further analyze and debug your applications. You can export both the vTM event log and the request logs for each individual Virtual Server to Loggly.

vTM Event Log

The vTM event log contains both error logs and informational messages. To export the vTM event log to Loggly, we will first create an Input in Loggly. In the Loggly web interface navigate to Incoming Data -> Inputs and click on "+ Add Input". The key field is the Service Type, which must be set to Syslog UDP. After creating the input you'll be given a destination to send the logs to. The next step is to tell the vTM to send logs to this destination.

In the vTM web interface navigate to System > Alerting and select Syslog under the drop-down menu for All Events. Click Update to save the changes. The final step is to click on Syslog and update the sysloghost to the Loggly destination.

Virtual Server Request Logs

Connections to a virtual server can be recorded in request logs. These logs can help track the usage of the virtual server, and can record many different pieces of information about each connection. To export virtual server request logs to Loggly, first navigate to Services > Virtual Servers > (your virtual server) > Request Logging. First set log!enabled to Yes; it's not on by default. Then scroll down, set syslog!enabled to Yes and set syslog!endpoint to the same destination as for the vTM event log. Click Update to save the changes. Alternatively, you can create a separate input in Loggly for request logs if you don't want them mixed up with the event logs.

Making sure it works

An easy way to make sure it works is to modify the configuration, by creating and deleting a virtual server for example. This will generate an event in the vTM event log. In Loggly you should see the light turn green for this input.

The Virtual Traffic Manager is designed to be flexible, being the only software application delivery controller that can be seamlessly deployed in private, public, and hybrid clouds. And now, by exporting your vTM logs, you can take full advantage of the powerful analysis tools available within Loggly.
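As a side note - and this is an assumption worth verifying in your own setup rather than something from the original article - messages written from TrafficScript rules with log.info() land in the vTM event log, so once the syslog export above is in place they should travel to Loggly as well. A minimal example:

# A custom marker written to the vTM event log from a rule; with the
# event log exported via syslog, this should also show up in Loggly.
log.info( "checkout request from " . connection.getRemoteIP() );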
The libLDAP.rts library and supporting library files (written by Mark Boddington) allow you to interrogate and modify LDAP traffic from a TrafficScript rule, and to respond directly to an LDAP request when desired.

You can use the library to meet a range of use cases, as described in the document Managing LDAP traffic with libLDAP.rts.

Note: This library allows you to inspect and modify LDAP traffic as it is balanced by Stingray. If you want to issue LDAP requests from Stingray, check out the auth.query() TrafficScript function, or the equivalent Authenticating users with Active Directory and Stingray Java Extensions Java Extension.

Overview

A long, long time ago on a Traffic Manager far, far away, I (Mark Boddington) wrote some libraries for processing LDAP traffic in TrafficScript:

libBER.rts - a TrafficScript library which implements all of the Basic Encoding Rules (BER) functionality required for LDAP. It does not completely implement BER: LDAP doesn't use all of the available types, and this library hasn't implemented those not required by LDAP.
libLDAP.rts - a TrafficScript library of functions which can be used to inspect and manipulate LDAP requests and responses. It requires libBER.rts to encode the LDAP packets.
libLDAPauth.rts - a small library which uses libLdap to provide simple LDAP authentication to other services.

That library (version 1.0) mostly focused on inspecting LDAP requests; it was not particularly well suited to processing LDAP responses. Now, thanks to a Stingray PoC being run in partnership with the guys over at Clever Consulting, I've had cause to revisit this library and improve upon the original. I'm pleased to announce that libLDAP.rts version 1.1 has arrived.

What's new in libLdap Version 1.1?

Lazy decoding. The library now only decodes the envelope when getPacket() or getNextPacket() is called. This gets you the MessageID and the Operation. If you want to process further, the other functions handle decoding additional data as needed.
New support for processing streams of LDAP responses. Unlike requests, LDAP responses are typically made up of multiple LDAP messages. The library can now be used to process multiple packets in a response.
New SearchResult processing functions: getSearchResultDetails(), getSearchResultAttributes() and updateSearchResultDetails().

Lazy Decoding

Now that the decoding is lazier, you can almost entirely bypass decoding for packets which you have no interest in. So if you only want to check BindRequests and/or BindResponses, then those are the only packets you need to fully decode. The rest are sent through un-inspected (well, except for the envelope).

Support for LDAP Response streams

We now have several functions to allow you to process responses which are made up of multiple LDAP messages, such as those for Search Requests. You can use a loop with the getNextPacket($packet["lastByte"]) function to process each LDAP message as it is returned from the LDAP server. The LDAP packet hash now has a "lastByte" entry to help you keep track of the messages in the stream. There is also a new skipPacket() function to allow you to skip the encoder for packets which you aren't modifying.

Search Result Processing

With the ability to process response streams, I have added a number of functions specifically for processing SearchResults. The getSearchResultDetails() function will return a SearchResult hash which contains the ObjectName decoded.
If you are then interested in the object, you can call getSearchResultAttributes() to decode the attributes which have been returned. If you make any changes to the search result you can then call updateSearchResultDetails() to update the packet, and then encodePacket() to re-encode it. Of course, if at any point you determine that no changes are needed, you can call skipPacket() instead.

Example - Search Result Processing

import libLDAP.rts as ldap;

$packet = ldap.getNextPacket(0);
while ( $packet ) {
   # Get the Operation
   $op = ldap.getOp($packet);

   # Are we a Search Result Entry?
   if ( $op == "SearchResultEntry" ) {
      $searchResult = ldap.getSearchResultDetails($packet);

      # Is the LDAPDN within example.com?
      if ( string.endsWith($searchResult["objectName"], "dc=example,dc=com") ) {

         # We have a search result in the tree we're interested in. Get the Attributes
         ldap.getSearchResultAttributes($searchResult);

         # Process all User Objects
         if ( array.contains($searchResult["attributes"]["objectClass"], "inetOrgPerson") ) {

            # Log the DN and all of the attributes
            log.info("DN: " . $searchResult["objectName"] );
            foreach ( $att in hash.keys($searchResult["attributes"]) ) {
               log.info($att . " = " . lang.dump($searchResult["attributes"][$att]) );
            }

            # Add the user's favourite colour
            $searchResult["attributes"]["Favourite_Colour"] = [ "Riverbed Orange" ];

            # If the password attribute is included.... remove it
            hash.delete($searchResult["attributes"], "userPassword");

            # Update the search result
            ldap.updateSearchResultDetails($packet, $searchResult);

            # Commit the changes
            $stream .= ldap.encodePacket( $packet );
            $packet = ldap.getNextPacket($packet["lastByte"]);
            continue;
         }
      }
   }

   # Not an interesting packet. Skip and move on.
   $stream .= ldap.skipPacket( $packet );
   $packet = ldap.getNextPacket($packet["lastByte"]);
}

response.set($stream);
response.flush();

This example reads each packet in turn by calling getNextPacket(), passing the lastByte attribute of the previously processed packet as the argument. We're looking for SearchResultEntry operations. If we find one, we pass the packet to getSearchResultDetails() to decode the object which the search was for, in order to determine the DN. If it's in example.com then we decide to process further and decode the attributes with getSearchResultAttributes(). If the object has an objectClass of inetOrgPerson, we then print the attributes to the event log, remove the userPassword attribute if it exists, and set a favourite colour for the user. Finally, we encode the packet and move on to the next one. Packets which we aren't interested in modifying are skipped.

Of course, rather than doing all this checking in the response, we could have checked the SearchRequest in a request rule and then used connection.data.set() to flag the message ID for further processing.

We should also have a request rule which ensures that the objectClass is in the list of attributes requested by the end user. But I'll leave that as an exercise for the reader ;-)

If you want more examples of how this library can be used, then please check out the additional use cases here: Managing LDAP traffic with libLDAP.rts
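As a very rough sketch of that request-rule idea - note that the "messageID" hash key and the "SearchRequest" operation name below are assumptions about the library's internals rather than documented behaviour, so check them against libLDAP.rts before use:

import libLDAP.rts as ldap;

# Request side: flag search requests so a response rule knows to inspect them
$packet = ldap.getPacket();
if ( $packet ) {
   $op = ldap.getOp( $packet );
   if ( $op == "SearchRequest" ) {
      # Assumed key name for the message ID
      connection.data.set( "ldap-inspect-" . $packet["messageID"], 1 );
   }
}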
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft Lync 2010.
Following up on this earlier article, try using the TrafficScript code snippet below to automatically insert the Google Analytics code on all your web pages. To use it:

Copy the rule onto your Stingray Traffic Manager by first navigating to Catalogs -> Rules.
Scroll down to Create new rule, give the rule a name, and select Use TrafficScript Language. Click Create Rule to create the rule.
Copy and paste the rule below.
Change $account to your Google Analytics account number.
If you are using multiple domains as described here, set $multiple_domains to TRUE and set $tld to your Top Level Domain as specified in your Google Analytics account.
Set the rule as a Response Rule in your Virtual Server by navigating to Services -> Virtual Servers -> <your virtual server> -> Rules -> Response Rules and Add rule.

After that you should be good to go. No need to individually modify your web pages; TrafficScript will take care of it all.

#
# Replace UA-XXXXXXXX-X with your Google Analytics Account Number
#
$account = 'UA-XXXXXXXX-X';

#
# If you are tracking multiple domains, i.e. yourdomain.com,
# yourdomain.net, etc., then set $multiple_domains to TRUE and
# replace yourdomain.com with your Top Level Domain as specified
# in your Google Analytics account
#
$multiple_domains = FALSE;
$tld = 'yourdomain.com';

#
# Only modify text/html pages
#
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" )) break;

#
# This variable contains the code to be inserted in the web page. Do not modify.
#
$html = "\n<script type=\"text/javascript\"> \n \
  var _gaq = _gaq || []; \n \
  _gaq.push(['_setAccount', '" . $account . "']); \n";

if( $multiple_domains == TRUE ) {
   $html .= " _gaq.push(['_setDomainName', '" . $tld . "']); \n \
  _gaq.push(['_setAllowLinker', true]); \n";
}

$html .= " _gaq.push(['_trackPageview']); \n \
  (function() { \n \
  var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; \n \
  ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; \n \
  var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); \n \
  })(); \n \
</script>\n";

#
# Insert the code right before the </head> tag in the page
#
$body = http.getResponseBody();
$body = string.replace( $body, "</head>", $html . "</head>");
http.setResponseBody( $body );
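If some of your pages already include the Google Analytics snippet by hand, a small guard around the final insertion (a sketch, not part of the original rule) avoids adding it a second time:

# Only insert the snippet if ga.js isn't already referenced in the page
$body = http.getResponseBody();
if( !string.count( $body, "google-analytics.com/ga.js" ) ) {
   $body = string.replace( $body, "</head>", $html . "</head>" );
   http.setResponseBody( $body );
}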
A great use of TrafficScript is managing and inserting widgets on your site. The TrafficScript code snippet below inserts a Twitter Profile Widget into your site, as described here (sign-in required).

To use it:

In the Stingray web interface navigate to Catalogs -> Rules and scroll down to Create new rule. Give it a name such as Twitter Feed and select Use TrafficScript Language. Click Create Rule to create the rule.
Copy and paste the code and save the rule.
Modify $user and $tag as described in the TrafficScript code snippet. $user is your Twitter username and $tag specifies where in the web page the feed should go.
Navigate to the Rules page of your Virtual Server (Services -> Virtual Servers -> <your virtual server> -> Rules) and add Twitter Feed as a Response Rule.

Reload your web page and you should see the Twitter feed.

#
# This TrafficScript code snippet will insert a Twitter Profile widget
# for user $user as described here:
# https://twitter.com/about/resources/widgets/widget_profile
# The widget will be added directly after $tag. The resultant page will
# look like:
# ...
# <tag>
# <Twitter feed>
# ...
#

# Replace 'riverbed' with your Twitter username
$user = "riverbed";

#
# You can keep the tag as <!--TWITTER FEED--> and insert that tag into
# your web page, or change $tag to some existing text in your web page, i.e.
# $tag = "Our Twitter Feed:"
#
$tag = "<!--TWITTER FEED-->";

# Only modify text/html pages
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" )) break;

#
# The actual code that will be inserted. Various values such as color,
# height, width, etc. can be changed here.
#
$html = "\n\
<script charset=\"utf-8\" src=\"http://widgets.twimg.com/j/2/widget.js\"></script> \n \
<script> \n \
new TWTR.Widget({ \n \
  version: 2, \n \
  type: 'profile', \n \
  rpp: 4, \n \
  interval: 30000, \n \
  width: 250, \n \
  height: 300, \n \
  theme: { \n \
    shell: { \n \
      background: '#333333', \n \
      color: '#ffffff' \n \
    }, \n \
    tweets: { \n \
      background: '#000000', \n \
      color: '#ffffff', \n \
      links: '#eb8507' \n \
    } \n \
  }, \n \
  features: { \n \
    scrollbar: false, \n \
    loop: false, \n \
    live: false, \n \
    behavior: 'all' \n \
  } \n \
}).render().setUser('".$user."').start(); \n \
</script><br>\n";

# This section inserts $html into the HTTP response after $tag
$body = http.getResponseBody();
$body = string.replace( $body, $tag, $tag . $html );
http.setResponseBody( $body );

Give it a try and let us know how you get on!

More Twitter solutions:

Traffic Managers can Tweet Too
TrafficScript can Tweet Too
If you're running Apache HTTPD, you might have seen the recent advisory (and update) describing how "significant CPU and memory usage" can be caused by abusing the HTTP/1.1 Range header.

If you're using Stingray Application Firewall, simply update your baseline rules and you will be immediately protected. Otherwise, you can use TrafficScript to block this attack:

# Updated: remove (if present) an old header name that Apache accepts, MSIE 3 vintage
http.removeHeader( "Request-Range" );

$r = http.getHeader( "Range" );
if( $r && string.count( $r, "," ) >= 5 ) {
   # Too many ranges, refuse the request
   http.sendResponse( "403 Forbidden", "text/plain", "Forbidden\n", "" );
}

This simply returns a 403 Forbidden response for any request asking for more than 5 ranges (at least 5 commas in the Range header). This is in line with the advisory's suggested mitigation: we don't block multiple ranges completely because they have legitimate uses, such as PDF readers that request parts of a document as you scroll through it, and the attack requires many more ranges to be effective.
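If you would rather degrade gracefully than reject outright, a variation on the same rule (a sketch, not from the original article) is to strip the Range header from suspicious requests so they are simply served in full:

# Alternative: drop the Range header instead of rejecting the request,
# so the client receives the complete resource rather than a 403.
http.removeHeader( "Request-Range" );

$r = http.getHeader( "Range" );
if( $r && string.count( $r, "," ) >= 5 ) {
   http.removeHeader( "Range" );
}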