Pulse Secure vADC

A great use of TrafficScript is for managing and inserting widgets onto your site. The attached TrafficScript code snippet inserts a Twitter Profile Widget into your site, as described here (sign in required).

To use it:

In the Stingray web interface navigate to Catalogs -> Rules and scroll down to Create new rule. Give it a name such as Twitter Feed and select Use TrafficScript Language. Click Create Rule to create the rule.
Copy and paste the code and save the rule.
Modify the $user and $tag as described in the TrafficScript code snippet. $user is your Twitter username and $tag specifies where in the web page the feed should go.
Navigate to the Rules page of your Virtual Server (Services -> Virtual Servers -> <your virtual server> -> Rules) and add Twitter Feed as a Response Rule.
Reload your webpage and you should see the Twitter feed.

# This TrafficScript code snippet will insert a Twitter Profile widget
# for user $user as described here:
# https://twitter.com/about/resources/widgets/widget_profile
#
# The widget will be added directly after $tag. The resultant page will
# look like:
# ...
# <tag>
# <Twitter feed>
# ...

# Replace 'riverbed' with your Twitter username
$user = "riverbed";

# You can keep the tag as <!--TWITTER FEED--> and insert that tag into
# your web page, or change $tag to some existing text in your web page, e.g.
# $tag = "Our Twitter Feed:"
$tag = "<!--TWITTER FEED-->";

# Only modify text/html pages
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ))
   break;

# The actual code that will be inserted. Various values such as color,
# height, width, etc. can be changed here.
$html = "\n\
<script charset=\"utf-8\" src=\"http://widgets.twimg.com/j/2/widget.js\"></script> \n\
<script> \n\
new TWTR.Widget({ \n\
   version: 2, \n\
   type: 'profile', \n\
   rpp: 4, \n\
   interval: 30000, \n\
   width: 250, \n\
   height: 300, \n\
   theme: { \n\
      shell: { \n\
         background: '#333333', \n\
         color: '#ffffff' \n\
      }, \n\
      tweets: { \n\
         background: '#000000', \n\
         color: '#ffffff', \n\
         links: '#eb8507' \n\
      } \n\
   }, \n\
   features: { \n\
      scrollbar: false, \n\
      loop: false, \n\
      live: false, \n\
      behavior: 'all' \n\
   } \n\
}).render().setUser('".$user."').start(); \n\
</script><br>\n";

# This section inserts $html into the HTTP response after $tag
$body = http.getResponseBody();
$body = string.replace( $body, $tag, $tag . $html );
http.setResponseBody( $body );

Give it a try and let us know how you get on!

More Twitter solutions:

Traffic Managers can Tweet Too
TrafficScript can Tweet Too
View full article
This Document provides step by step instructions on how to set up Brocade Virtual Traffic Manager for VMware Horizon View Servers.   This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.
View full article
When you move content around a web site, links break. Even if you've patched up all your internal links, site visitors from external links, outdated search results and people's bookmarks will be broken and return a '404 Not Found' error.   Rather than giving each user a sorry "404 Not Found" apology page, how about trying to send them to a useful page? The following TrafficScript example shows you exactly how to do that, without having to modify any of your web site content or configuration.   The TrafficScript rule works by inspecting the response from the webserver before it's sent back to the remote user. If the status code of the response is '404', the rule sends back a redirect to a higher level page:   http://www.site.com/products/does/not/exist.html returns 404, so try: http://www.site.com/products/does/not/ returns 404, so try: http://www.site.com/products/does/ returns 404, so try: http://www.site.com/products/ which works fine!     Here is the code (it's a Stingray response rule):   if( http.getResponseCode() == 404 ) { $path = http.getPath(); # If the home page gives a 404, nothing we can do! if( $path == "/" ) http.redirect( "http://www.google.com/" ); if( string.endsWith( $path, "/" ) ) $path = string.drop( $path, 1 ); $i = string.findr( $path, "/" ); $path = string.substring( $path, 0, $i-1 )."/"; http.redirect( $path ); }   Your users will never get a 404 Not Found message for any web page on your site; Stingray will try higher and higher pages until it finds one that exists.   Of course, you could use a similar strategy for other application errors, such as 503 Too Busy.   The same for images...   This strategy works fine for web pages, but it's not appropriate for embedded content such as missing images, stylesheets or javascript files.   For some content types, a 404 response is not user visible and is acceptable. For images, it may not be. Some browsers will display a broken image icon, where a simple transparent GIF image would be more appropriate:   if( http.getResponseCode() == 404 ) { $path = http.getPath(); # If the home page gives a 404, nothing we can do! if( $path == "/" ) http.redirect( "http://www.google.com/" ); # Is it an image? if( string.endsWith( $path, ".gif" ) || string.endsWith( $path, ".jpg" ) || string.endsWith( $path, ".png" ) ) { http.sendResponse( "200 OK", "image/gif", "GIF89a\x01\x00\x01\x00\x80\xff\x00\xff\xff\xff\x00\x00\x00\x2c\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02\x44\x01\x00\x3b", "" ); } # Is it a stylesheet (.css) or javascript file (.js)? if( string.endsWith( $path, ".css" ) || string.endsWith( $path, ".js" ) ) { http.sendResponse( "404", "text/plain", "", "" ); break; } if( string.endsWith( $path, "/" ) ) $path = string.drop( $path, 1 ); $i = string.findr( $path, "/" ); $path = string.substring( $path, 0, $i-1 )."/"; http.redirect( $path ); }  
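As a footnote to the suggestion above about other application errors: the following is a minimal sketch (not from the original article) of the same idea applied to 503 responses, sending a friendlier "try again" page instead of the raw error. The message text and Retry-After value are arbitrary examples.

# Response rule: catch 503s from the back ends and return a simple holding page
if( http.getResponseCode() == 503 ) {
   http.sendResponse( "503 Service Unavailable", "text/html",
      "<html><body><h1>We're a little busy right now</h1>" .
      "Please try again in a few moments.</body></html>",
      "Retry-After: 30\r\n" );
}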
View full article
For a comprehensive description of how this Stingray Java Extension operates, check out Yvan Seth's excellent article Making Stingray more RAD with Jython!   Overview   Stingray can invoke TrafficScript rules (see Feature Brief: TrafficScript) against requests and responses, and these rules run directly in the traffic manager kernel as high-performance bytecode.   A TrafficScript rule can also reach out to the local JVM to run servlets (Feature Brief: Java Extensions in Stingray Traffic Manager), and the PyRunner.jar library uses the JVM to run Python code against network traffic.  This is a great solution if you need to deploy complex traffic management policies and your development expertise lies with Python.   Requirements   Download and install Jython (http://www.jython.org/downloads.html).  This code was developed against Jython 2.5.3, but should run against other Jython versions.  For best compatibility across platforms, use the Jython installer from www.jython.org rather than the jython packages distributed by your OS vendor:   $ java -jar jython_installer-2.5.2.jar --console   Select installation option 1 (all components) or explicitly include the src part - this installs additional modules in extlibs that we will use later.   Locate the jython.jar file included in the install and upload this file to your Stingray Java Extensions catalog.   Download the PyRunner.jar file attached to this document and upload that to your Java Extensions catalog.  Alternatively, you can compile the Jar file from source:   $ javac -cp servlet.jar:zxtm-servlet.jar:jython.jar PyRunner.java $ jar -cvf PyRunner.jar PyRunner*.class   You can now run simple Python applications directly from TrafficScript!   A simple 'HelloWorld' example   Save the following Python code as Hello.py and upload the file to your Catalog > Extra Files catalog:     from javax.servlet.http import HttpServlet import time class Hello(HttpServlet): def doGet(self, request, response): toClient = response.getWriter() response.setContentType ("text/html") toClient.println("<html><head><title>Hello World</title>" + "<body><h1 style='color:red;'>Hello World</h1>" + "The current time is " + time.strftime('%X %x %Z') + "</body></html>")   Assign the following TrafficScript request rule to your Virtual Server:   java.run( "PyRunner", "Hello.py" );   Now, whenever the TrafficScript rule is called, it will run the Hello.py code.  The PyRunner extension loads and compiles the Python code, and caches the compiled bytecode to optimize performance.   More sophisticated Python examples   The PyRunner.jar/jython.jar combination is capable of running simple Python examples, but it does not have access to the full set of Python core libraries.  These are to be found in additional jar files in the extlibs part of the Jython installation.   If you install Jython on the same machine you are running the Stingray software on, then you can point PyRunner.jar at that location:   Install Jython in a known location, such as /usr/local/jython - make sure to install all components (option 1 in the installation types) or explicitly add the src part Navigate to Catalogs > Java > PyRunner and add a parameter named python_home , set to /usr/local/jython (or other location as appropriate) In Catalogs > Java, delete the WEB-INF files generated previously - they won't be required any more From the System > Traffic Managers page, restart your Java runner.   
You can install Jython in this way on the Stingray Virtual Appliance, but please be aware that the installation will not be preserved during a major upgrade, and it will not form part of the supported configuration of the virtual appliance.

Here's an updated version of Hello.py that uses the Python and Java md5 implementations to compare MD5s of the string 'foo' (they should give the same result!):

from javax.servlet.http import HttpServlet
from java.security import MessageDigest
from md5 import md5
import time

class Hello(HttpServlet):
    def doGet(self, request, response):
        toClient = response.getWriter()
        response.setContentType("text/html")
        htmlOut = "<html><head><title>Hello World</title><body>"
        htmlOut += "<h1>Hello World</h1>"
        htmlOut += "The current time is " + time.strftime('%X %x %Z') + "<br/>"
        # try a Python md5
        htmlOut += "Python MD5 of 'foo': %s<br/>" % md5("foo").hexdigest()
        # try a Java md5
        htmlOut += "Java MD5 of 'foo': "
        jmd5 = MessageDigest.getInstance("MD5")
        digest = jmd5.digest("foo")
        for byte in digest:
            htmlOut += "%02x" % (byte & 0xFF)
        htmlOut += "<br/>"
        # yes, the Stingray attributes are available
        htmlOut += "Virtual Server: %s<br/>" % request.getAttribute("virtualserver")
        # 'args' is the parameter list for java.run(), beginning with the script name
        htmlOut += "Args: %s<br/>" % ", ".join(request.getAttribute("args"))
        htmlOut += "</body></html>"
        toClient.println(htmlOut)

Upload this file to your Extra Files catalog to replace the existing Hello.py script and try it out.

Rapid test and development

Check out publish.py - a simple Python script that automates the task of uploading your Python code to the Extra Files catalog: Deploying Python code to Stingray Traffic Manager
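One detail from the Hello.py example above that is easy to miss: any extra parameters you pass to java.run() after the extension name are made available to the Python code through the 'args' request attribute, with the script name first. A minimal sketch (the parameter values here are just placeholders):

# Inside the servlet, request.getAttribute( "args" ) will be
# [ "Hello.py", "first", "second" ]
java.run( "PyRunner", "Hello.py", "first", "second" );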
View full article
What can you do if an isolated problem causes one or more of your application servers to fail? How can you prevent visitors to your website from seeing the error, and instead send them a valid response?

This article shows how to use TrafficScript to inspect responses from your application servers and retry the requests against several different machines if a failure is detected.

The Scenario

Consider the following scenario. You're running a web-based service on a cluster of four application servers, running .NET, Java, PHP, or some other application environment. An occasional error on one of the machines means that one particular application sometimes fails on that one machine. It might be caused by a runaway process, a race condition when you update configuration, or by failing system memory.

With Stingray, you can check the responses coming back from your application servers. For example, application errors may be identified by a '500 Internal Error' or '502 Bad Gateway' message (refer to the HTTP spec for a full list of error codes).

You can then write a Response rule that retries the request a certain number of times against different servers to see if it gets a better response before sending it back to the remote user.

$code = http.getResponseCode();
if( $code >= 500 && $code != 503 ) {
   # Not retrying 503s here, because they get retried
   # automatically before response rules are run
   if( request.getRetries() < 3 ) {
      # Avoid the current node when we retry,
      # if possible
      request.avoidNode( connection.getNode() );
      log.warn( "Request " . http.getPath() .
                " to site " . http.getHostHeader() .
                " from " . request.getRemoteAddr() .
                " caused error " . http.getResponseCode() .
                " on node " . connection.getNode() );
      request.retry();
   }
}

How does the rule work?

The rule does a few checks before telling Stingray to retry the request:

1. Did an error occur?

First of all, the rule checks to see if the response code indicated that an error occurred:

if( $code >= 500 && $code != 503 ) { ... }

If your service was prone to other types of error - for example, Java backtraces might be found in the middle of a response page - you could write a TrafficScript test for those errors instead (see the sketch at the end of this article).

2. Have we retried this request before?

Some requests may always generate an error response. We don't want to keep retrying a request in this case - we've got to stop at some point:

if( request.getRetries() < 3 ) { ... }

request.getRetries() returns the number of times that this request has been resent to a back-end node. It's initially 0; each time you call request.retry(), it is incremented.

This code will retry a request 3 times, in addition to the first time that it was processed.

3. Don't use the same node again!

When you retry a request, the load-balancing decision is recalculated to select the target node. However, you will probably want to avoid the node that generated the error before, as it may be likely to generate the error again.

request.avoidNode( connection.getNode() );

connection.getNode() returns the name of the node that was last used to process the request. request.avoidNode() gives the load balancing algorithm a hint that it should avoid that node. The hint is just advisory - if there are no other available nodes in the pool, that node will be used anyway.

4. Log what we're about to do.

This rule conceals problems with the service so that the end user does not see them. If it works well, these problems may never be found!

log.warn( "Request " . http.getPath() .
" to site " . http.getHostHeader() . " from " . request.getRemoteAddr() . " caused error " . http.getResponseCode() . " on node " . connection.getNode() );   It's a sensible idea to log the fact that a request caused an unexpected error so that the problem can be investigated later.   5. Retry the request   Finally, tell Stingray to resubmit the request again, in the hope that this time we'll get a better response:   request.retry();   And that's it.   Notes   If a malicious user finds an HTTP request that always causes an error, perhaps because of an application bug, then this rule will replay the malicious request against 3 additional machines in your cluster. This makes it easier for the user to mount a DoS-style attack against your site, because he only needs to send 1/4 of the number of requests.   However, the rule explicitly logs that a failure occured, and logs both the request that caused the failure and the source of the request. This information is vital when performing triage, i.e., rapid fault fixing. Once you have noticed that the problem exists, you can very quickly add a request rule to drop the bad request before it is ever processed:   if( http.getPath() == "/known/bad/request" ) connection.discard();
View full article
The famous TrafficScript Mandelbrot generator!
View full article
This article presents a TrafficScript library that gives you easy and efficient access to tables of data stored as files in the Stingray configuration: libTable.rts

Download the following TrafficScript library from github and import it into your Rules Catalog, naming it libTable.rts:

libTable.rts

# libTable.rts
#
# Efficient lookups of key/value data in large resource files (>100 lines)
# Use getFirst() and getNext() to iterate through the table

sub lookup( $filename, $key ) {
   update( $filename );
   $pid = sys.getPid();
   return data.get( "resourcetable".$pid.$filename."::".$key );
}

sub getFirst( $filename ) {
   update( $filename );
   $pid = sys.getPid();
   return data.get( "resourcetable".$pid.$filename.":first" );
}

sub getNext( $filename, $key ) {
   update( $filename );
   $pid = sys.getPid();
   return data.get( "resourcetable".$pid.$filename.":next:".$key );
}

# Internal functions

sub update( $filename ) {
   $pid = sys.getPid();
   $md5 = resource.getMD5( $filename );
   if( $md5 == data.get( "resourcetable".$pid.$filename.":md5" ) ) return;

   data.reset( "resourcetable".$pid.$filename.":" );
   data.set( "resourcetable".$pid.$filename.":md5", $md5 );

   $contents = resource.get( $filename );
   $pkey = "";
   foreach( $l in string.split( $contents, "\n" ) ) {
      if( ! string.regexmatch( $l, "(.*?)\\s+(.*)" ) ) continue;
      $key = string.trim( $1 );
      $value = string.trim( $2 );

      data.set( "resourcetable".$pid.$filename."::".$key, $value );

      if( !$pkey ) {
         data.set( "resourcetable".$pid.$filename.":first", $key );
      } else {
         data.set( "resourcetable".$pid.$filename.":next:".$pkey, $key );
      }
      $pkey = $key;
   }
}

Usage:

import libTable.rts as table;
$filename = "data.txt";

# Look up a key/value pair
$value = table.lookup( $filename, $key );

# Iterate through the table
for( $key = table.getFirst( $filename );
     $key != "";
     $key = table.getNext( $filename, $key ) ) {
   $value = table.lookup( $filename, $key );
}

The library caches the contents of the file internally, and is very efficient for large files. For smaller files, it may be slightly more efficient to search these files using a regular expression, but the convenience of this library may outweigh the small performance gains.

Data file format

This library provides access to files stored in the Stingray conf/extra folder (the Extra Files > Miscellaneous Files section of the catalog). These files can be uploaded using the UI, the SOAP or REST API, or by manually copying them in place and initiating a configuration replication.

Files should contain key-value pairs, one per line, space separated:

key1 value1
key2 value2
key3 value3

Preservation of order

The lookup operation uses an open hash table, so is efficient for large files. The getFirst() and getNext() operations will iterate through the data table in order, returning the keys in the order they appear in the file.

Performance and alternative implementations

The performance of this library is investigated in the article Investigating the performance of TrafficScript - storing tables of data. It is very efficient for large tables of data, and marginally less efficient than a simple regular-expression string search for small files.
If performance is a concern and you only need to work with small datasets, then you could use the following library instead:

libTableSmall.rts

# libTableSmall.rts: Efficient lookups of key/value data in a small resource file (<100 lines)

sub lookup( $filename, $key ) {
   $contents = resource.get( $filename );
   if( string.regexmatch( $contents, '\n'.$key.'\s+([^\n]+)' ) ) return $1;
   if( string.regexmatch( $contents, '^'.$key.'\s+([^\n]+)' ) ) return $1;
   return "";
}
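Usage is the same as for libTable.rts, just with the smaller library imported instead. A brief sketch, reusing the example filename from above; note that libTableSmall.rts only provides lookup() - there are no getFirst()/getNext() iterators:

import libTableSmall.rts as table;

# Look up a key/value pair from the small data file
$value = table.lookup( "data.txt", $key );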
View full article
Imagine you're running a popular image hosting site, and you're concerned that some users are downloading images too rapidly. Or perhaps your site publishes airfares, or gaming odds, or auction prices, or real estate details, and screen-scraping software is spidering your site and overloading your application servers. Wouldn't it be great if you could identify the users who are abusing your web services and then apply preventive measures - for example, a bandwidth limit - for a period of time to limit those users' activity?

In this example, we'll look at how you can drive the control plane (the traffic manager configuration) from the data plane (a TrafficScript rule):

Identify a user by some id, for example, the remote IP address or a cookie value
Measure the activity of each user using a rate class
If a user exceeds the desired rate (their terms of service), add a resource file identifying the user and their 'last sinned' time
Check the time stored in the resource file to see if we should apply a short-term limit to that user's activity

Basic rule

# We want to monitor image downloads only
if( !string.wildMatch( http.getPath(), "*.jpg" ) ) break;

# Identify each user by their remote IP.
# Could use a cookie value here, although that is vulnerable to spoofing
# Note that we'll use $uid as a filename, so it needs to be secured
$uid = request.getRemoteIP();

if( !rate.use.noQueue( "10 per minute", $uid ) ) {
   # They have exceeded the desired rate and broken the terms of use
   # Let's create a config file named $uid, containing the current time
   http.request.put( "http://localhost:9070/api/tm/1.0/config/active/extra/".$uid,
      sys.time(),
      "Content-type: application/octet-stream\r\n".
      "Authorization: Basic ".string.base64encode( "admin:admin" ) );
}

# Now test - did the user $uid break their terms of use recently?
$lastbreach = resource.get( $uid );
if( ! $lastbreach ) break;   # config file does not exist

if( sys.time()-$lastbreach < 60 ) {
   # They last breached the limits less than 60 seconds ago
   response.setBandwidthClass( "Very slow" );
} else {
   # They have been forgiven their sins. Clean up the config file
   http.request.delete( "http://localhost:9070/api/tm/1.0/config/active/extra/".$uid,
      "Authorization: Basic ".string.base64encode( "admin:admin" ) );
}

This example uses a rate class named '10 per minute' to monitor the request rate for each user, and a bandwidth class named 'Very slow' to apply an appropriate bandwidth limit. You could potentially implement a similar solution using client-side cookies to identify users who should be bandwidth-limited, but this solution has the advantage that the state is stored locally and is not dependent on trusting the user to honor cookies.

There's scope to improve this rule. The biggest danger is that if a user exceeds the limit consistently, this will result in a flurry of http.request.put() calls to the local REST daemon. We can solve this problem quite easily with a rate class that will limit how frequently we update the configuration. If that slows down a user who has just exceeded their terms of service, that's not really a problem for us!

rate.use( "10 per minute" ); # stall the user if necessary to avoid overload
http.request.put( ... );

Note that we can safely use the rate class in two different contexts in one rule. The first usage ( rate.use( "name", $uid ) ) will rate-limit each individual value of $uid; the rate.use( "name" ) is a global rate limit that will limit all calls to the REST API.
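To put that improvement in context, the relevant part of the rule might look something like the sketch below. This is an illustration rather than the article's definitive version; it reuses the same REST endpoint, 'admin:admin' credentials and '10 per minute' rate class already shown above.

if( !rate.use.noQueue( "10 per minute", $uid ) ) {
   # Throttle how often we write to the REST API; it's fine to stall
   # a user who has just broken their terms of service
   rate.use( "10 per minute" );
   http.request.put( "http://localhost:9070/api/tm/1.0/config/active/extra/".$uid,
      sys.time(),
      "Content-type: application/octet-stream\r\n".
      "Authorization: Basic ".string.base64encode( "admin:admin" ) );
}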
Read more   Check out the other prioritization and rate shaping suggestions on splash, including:   Dynamic rate shaping slow applications The "Contact Us" attack against mail servers Stingray Spider Catcher Evaluating and Prioritizing Traffic with Stingray Traffic Manager
View full article
Introduction Many DDoS attacks work by exhausting the resources available to a website for handling new connections.  In most cases, the tool used to generate this traffic has the ability to make HTTP requests and follow HTTP redirect messages, but lacks the sophistication to store cookies.  As such, one of the most effective ways of combatting DDoS attacks is to drop connections from clients that don't store cookies during a redirect. Before you Proceed It's important to point out that using the solution herein may prevent at least the following legitimate uses of your website (and possibly others): Visits by user-agents that do not support cookies, or where cookies are disabled for any reason (such as privacy); some people may think that your website has gone down! Visits by internet search engine web-crawlers; this will prevent new content on your website from appearing in search results! If either of the above items concern you, I would suggest seeking advice (either from the community, or through your technical support channels). Solution Planning Implementing a solution in pure TrafficScript will prevent traffic from reaching the web servers.  But, attackers are still free to consume connection-handling resources on the traffic manager.  To make the solution more robust, we can use iptables to block traffic a bit earlier in the network stack.  This solution presents us with a couple of challenges: TrafficScript cannot execute shell commands, so how do we add rules to iptables? Assuming we don't want to permanently block all IP addresses that are involved in a DDoS attack, how can we expire the rules? Even though TrafficScript cannot directly run shell commands, the Event Handling system can.  We can use the event.emit() TrafficScript function to send jobs to a custom event handler shell script that will add an iptables rule that blocks the offending IP address.  To expire each rule can use the at command to schedule a job that removes it.  This means that we hand over the scheduling and running of that job over to the control of the OS (which is something that it was designed to do). The overall plans looks like this: Write a TrafficScript rule that emits a custom event when it detects a client that doesn't support cookies and redirects Write a shell script that takes as its input: an --eventtype argument (the event handler includes this automatically) a --duration argument (to define the length of time that an IP address stays blocked for) a string of information that includes the IP address that is to be blocked Create an event handler for the events that our TrafficScript is going to emit TrafficScript Code $cookie = http.getCookie( "DDoS-Test" ); if ( ! $cookie ) {       # Either it's the visitor's first time to the site, or they don't support cookies    $test = http.getFormParam( "cookie-test" );       if ( $test != "1" ) {       # It's their first time.  Set the cookie, redirect to the same page       # and add a query parameter so we know they have been redirected.       # Note: if they supplied a query string or used a POST,       # we'll respond with a bare redirect       $path = http.getPath();             http.sendResponse( "302 Found" , "text/plain" , "" ,          "Location: " . string.escape( $path ) .          "?cookie-test=1\r\nSet-Cookie: DDoS-Test=1" );          } else {             # We've redirected them and attempted to set the cookie, but they have not       # accepted.  Either they don't support cookies, or (more likely) they are a bot.             
# Emit the custom event that will trigger the firewall script.
      event.emit( "firewall" , request.getremoteip());

      # Pause the connection for 100 ms to give the firewall time to catch up.
      # Note: This may need tuning.
      connection.sleep( 100 );

      # Close the connection.
      connection.close( "HTTP/1.1 200 OK\n" );
   }
}

Installation

This code will need to be applied to the virtual server as a request rule. To do that, take the following steps:

In the traffic manager GUI, navigate to Catalogs → Rule
Enter ts-firewaller in the Name field
Click the Use TrafficScript radio button
Click the Create Rule button
Paste the code from the attached ts-firewaller.rts file
Click the Save button
Navigate to the Virtual Server that you want to protect ( Services → <Service Name> )
Click the Rules link
In the Request Rules section, select ts-firewaller from the drop-down box
Click the Add Rule button

Your virtual server should now be configured to execute the rule.

Shell Script Code

#!/bin/bash

# Use getopt to collect parameters.
params=`getopt -o e:,d: -l eventtype:,duration: -- "$@"`

# Evaluate the set of parameters.
eval set -- "$params"
while true; do
  case "$1" in
  --duration ) DURATION="$2"; shift 2 ;;
  --eventtype ) EVENTTYPE="$2"; shift 2 ;;
  -- ) shift; break ;;
  * ) break ;;
  esac
done

# Awk the IP address out of ARGV
IP=$(echo "${BASH_ARGV}" | awk ' { print ( $(NF) ) }' )

# Add a new rule to the INPUT chain.
iptables -A INPUT -s ${IP} -j DROP &&

# Queue a new job to delete the rule after DURATION minutes.
# Prevents warning about executing the command using /bin/sh from
# going in the traffic manager event log.
echo "iptables -D INPUT -s ${IP} -j DROP" | at -M now + ${DURATION} minutes &> /dev/null

Installation

To use this script as an action program, you'll need to upload it via the GUI. To do that, take the following steps:

Open a new file with the editor of your choice (depends on what OS you're using)
Copy and paste the script code into the editor
Save the file as ts-firewaller.sh
In the traffic manager UI, navigate to Catalogs → Extra Files → Action Programs
Click the Choose File button
Select the ts-firewaller.sh file that you just created
Click the Upload Program button

Event Handler

Now that we have a rule that emits a custom event, and a script that we can use as an action program, we can configure the event handler that will tie the two together. First, we need to create a new event type:

In the traffic manager's UI, navigate to System → Alerting
Click the Manage Event Types button
Enter Firewall in the Name field
Click the Add Event Type button
Click the + next to the Custom Events item in the event tree
Click the Some custom events... radio button
Enter firewall in the empty field
Click the Update button

Now that we have an event type, we need to create a new action:

In the traffic manager UI, navigate to System → Alerting
Click on the Manage Actions button
In the Create New Action section, enter firewall in the Name field
Click the Program radio button
Click the Add Action button
In the Program Arguments section, enter duration in the Name field
Enter Determines the length of time in minutes that an IP will be blocked for in the Description field
Click the Update button
Enter 10 in the newly-appeared arg!duration field
Click the Update button

Now that we have an action configured, the only thing left to do is to connect the custom event to the new action:

In the traffic manager UI, navigate to System → Alerting
In the Event Type column, select firewall from the drop-down box
In the Actions column, select firewall from the drop-down box
Click the Update button

That concludes the installation steps; this solution should now be live!

Testing

Testing the functionality is pretty simple for this solution. Basically, you can monitor the state of iptables while you run specific commands from a command line. To do this, ssh into your traffic manager and execute iptables -L as root. You should check this after each of the upcoming tests.

Since I'm using a Linux machine for testing, I'm going to use the curl command to send crafted requests to my traffic manager. The 3 scenarios that I want to test are:

Initial visit: The user-agent is missing a query string and a cookie
Successful second visit: The user-agent has a query string and has provided the correct cookie
Failed second visit: The user-agent has a query string (indicating that it was redirected), but hasn't provided a cookie

The respective curl commands that need to be run are:

curl -v http://<tmhost>/
curl -v http://<tmhost>/?cookie-test=1 -b "DDoS-Test=1"
curl -v http://<tmhost>/?cookie-test=1

Note: If you run these commands from your workstation, you will be unable to connect to the traffic manager in any way for a period of 10 minutes!
View full article
Popular news and blogging sites such as Slashdot and Digg have huge readerships. They are community driven and allow their members to post articles on various topics ranging from hazelnut chocolate bars to global warming. These sites, due to their massive readership, have the power to generate huge spikes in the web traffic to those (un)fortunate enough to get mentioned in their articles. Fortunately Traffic Manager and TrafficScript can help.

If the referenced site happens to be yours, you are faced with dealing with this sudden and unpredictable spike in bandwidth and request rate, causing:

a large proportion or all of your available bandwidth to be consumed by visitors referred to you by this popular site; and
in extreme cases, a cascade failure across your web servers as each one becomes overloaded, fails and, in doing so, adds further load onto the remaining web servers.

Bandwidth Management and Rate Shaping

Traffic Manager has the ability to shape traffic in two important ways. Firstly, you can restrict the amount of bandwidth any client or group of clients is allowed to consume. This is commonly known as "Bandwidth Management" and in Traffic Manager it's configured by using a bandwidth class. Bandwidth classes are used to specify the maximum bits per second to make available. The alternative method is to limit the number of requests that those clients or group of clients can make per second and/or per minute. This is commonly known as "Rate Shaping" and is configured within a rate class.

Both Rate Shaping and Bandwidth Management classes are configured and stored within the catalog section of Traffic Manager. Once you have created a class it is ready for use and can be applied to one or more of your Virtual Servers. However, the true power of these Traffic Shaping features really becomes apparent when you make use of them with TrafficScript.

What is an Abusive Referer?

I would class an Abusive Referer as any site on the internet that refers enough traffic to your server to overwhelm it and effectively deny service to other users. This abuse is usually unintentional; the problem lies in the sheer number of people wanting to visit your site at that one time. This slashdot effect can be detected and dealt with by a TrafficScript rule and either a Bandwidth or a Rate Class.

Detecting and Managing Abusive Referers

Example One

Take a look at the TrafficScript below for an example of how you could stop a site (in this instance Slashdot) from using a large proportion or all of your available bandwidth.

$referrer = http.getHeader( "Referer" );
if( string.contains( $referrer, "slashdot" ) ) {
   http.addResponseHeader( "Set-Cookie", "slashdot=1" );
   response.setBandwidthClass( "slashdot" );
}
if( http.getCookie( "slashdot" ) ) {
   response.setBandwidthClass( "slashdot" );
}

In this example we are specifically targeting Slashdot users and preventing them from using more bandwidth than we have allotted them in our "slashdot" bandwidth class. This rule requires you to know the name of the site you want protection from, but this rule or similar could be modified to defend against other high traffic sites.

Example Two

The next example is a little more complicated, but will automatically limit all requests from any referer. I've chosen to use two rate classes here: BusyReferer for those sites I allow to send a large amount of traffic, and StandardReferer for those I don't.
At the top I specify a $whitelist, which contains sites I never want to rate shape, and $highTraffic which is a list of sites I'm going to shape with my BusyReferer class. By default, all traffic not in the white list is sent through one of my rate classes, but only on entry to the site. That's because subsequent requests will have myself as the referer and will be whitelisted. In times of high load, when a referer is sending more traffic than the rate class allows, a back log will build up, at that point we will also start issuing cookies to put the offending referers into a bandwidth class.   # Referer whitelist. These referers are never rate limited. $whitelist = "localhost 172.16.121.100"; # Referers that are allowed to pass a higher number of clients. $highTraffic = "google mypartner.com"; # How many queued requests are allowed before we track users. $shapeQueue = 2; # Retrieve the referer and strip out the domain name part. $referer = http.getheader("Referer"); $referer = String.regexsub($referer, ".*?://(.*?)/.*", "$1", "i" ); # Check to see if this user has already been given an abuse cookie. # If they have we'll force them into a bandwidth class if ( $cookie = http.getCookie("AbusiveReferer") ) { response.setBandwidthClass("AbusiveReferer"); } # If the referer is whitelisted then exit. if ( String.contains( $whitelist, $referer ) ) { break; } # Put the incoming users through the busy or standard rate classes # and check the queue length for their referer. if ( String.contains( $highTraffic, $referer ) ) { $backlog = rate.getbacklog("BusyReferer", $referer); rate.use("BusyReferer", $referer); } else { $backlog = rate.getbacklog("StandardReferer", $referer); rate.use("StandardReferer", $referer); } # If we have exceeded our backlog limit, then give them a cookie # this will enforce bandwidth shaping for subsequent requests. if ( $backlog > $shapeQueue ) { http.setResponseCookie("AbusiveReferer", $referer); response.setBandwidthClass("AbusiveReferer"); }   In order for the TrafficScript to function optimally, you must enter your servers own domainname(s) into the white list. If you do not, then the script will perform rate shaping on everyone surfing your website!   You also need to set appropriate values for the BusyReferer and StandardReferer shaping classes. Remember we're only counting the clients entry to the site, so Perhaps you want to set 10/minute as a maximum standard rate and then 20/minute for your BusyReferer rate.   In this script we also use a bandwidth class for when things get busy. You will need to create this class, called "AbusiveReferer" and assign it an appropriate amount of bandwidth. Users are only put into this class when their referer is exceeding the rate of referrals set by the relevant rate class.   Shaping with Context   Rate Shaping classes can be given a context so you can apply the class to a subset of users, based on a piece of key data. The second script uses context to create an instance of the Rate Shaping class for each referer. If you do not use context, then all referers will share the same instance of the rate class.   Conclusion   Traffic Manager can use bandwidth and rate shaping classes to control the number of requests that can be made by any group of clients. In this article, we have covered choosing the class based on the referer, which has allowed us to restrict the rate at which any one site can refer visitors to us. 
These examples could be modified to base the restrictions on other data, such as cookies, or even extended to work with other protocols. A good example would be FTP, where you could extract the username from the FTP logon data and apply a bandwidth class based on the username.
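As a simple illustration of that last point, here is a minimal sketch (not from the original article) that keys the rate class on a session cookie rather than the Referer header. The 'JSESSIONID' cookie name and the 'StandardReferer' class are just placeholders for whatever your site actually uses.

# Shape each session independently, falling back to the client IP
# if the session cookie is not present
$session = http.getCookie( "JSESSIONID" );
if( $session == "" ) $session = request.getRemoteIP();

rate.use( "StandardReferer", $session );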
View full article
Recent investigations have revealed an error in the PHP and Java floating point library that may be exploited to cause a denial of service against a web service. You can use Stingray Traffic Manager to filter incoming traffic and discard requests that may seek to exploit this fault.   Background   In January 2011, a bug was discovered in PHP's floating point library. Under certain circumstances, an attempt to convert the string '2.2250738585072011e-308' into a floating point value would hang the PHP runtime.   A similar problem was discovered in the Java runtime (and compiler). The two articles give a detailed description of the nature of the problem and its cause (relating to the parsing of a number close to DBL_MIN, the smallest non-zero number that can be represented as a floating point).   The implications   What are the implications to a web developer or security team? This fault can be exploited to mount a denial-of-service attack if an attacker can send a carefully-crafted request that causes the PHP or Java runtime to attempt to convert a string into the problematic floating-point value. Web developers are accustomed to treating user input with suspicion - for example, careful escaping to prevent SQL injection attacks - but who would have thought that an innocuous floating point number could pose a similar threat? Any application code that parses input into a floating point could be vulnerable; for example, a mapping API that takes coordinates as input may be vulnerable.   However, there's an even simpler potential problem that is inherent the HTTP protocol; the family of 'Accept' HTTP headers use floating point scores that may be exploitable in certain implementations.   Accept: text/*;q=0.3, text/html;q=0.7, text/html;level=1, text/html;level=2;q=0.4, /;q=0.5 Accept-Charset: iso-8859-5, unicode-1-1;q=0.8 Accept-Encoding: gzip;q=1.0, identity; q=0.5, *;q=0 Accept-Language: da, en-gb;q=0.8, en;q=0.7   The wisest solution to protecting against this vulnerability would be to deploy a Web Application Firewall (see Stingray Application Firewall) and verify that the baseline protection rules detect attempts to exploit this attack. It's also possible to detect and drop these attacks using a TrafficScript rule, and this article presents a couple of solutions.   Floating Point: Solution 1   The following TrafficScript request rule checks all of the headers in each HTTP request. If the headers contain the sequence of digits that are the signature of this number, then the rule logs a warning and drops the request immediately.   
$headers = http.getHeaders(); foreach( $header in hash.keys( $headers ) ) { $value = $headers[$header]; # remove any decimal points $value = string.replace( $value, '.', '' ); if( string.contains( $value, "2225073858507201" ) ) { log.warn( "suspect request - dropping" ); connection.discard(); } }   The result:     Floating Point: Solution 2   This more advanced solution checks both headers and form parameters, logs a more descriptive error message and illustrates the use of TrafficScript subroutines to minimise duplicated code:   # Checks the array of key-value (headers or form parameters) # If any value contains the suspect floating point value, return the # name of the header or form parameter sub check( $h ) { foreach( $k in hash.keys( $h ) ) { $v = $h[$k]; # remove any decimal points $v = string.replace( $v, '.', '' ); if( string.contains( $v, "2225073858507201" ) ) return $k; } } # Log the request and drop it immediately sub logAndDrop( $reason, $k, $v ) { $ip = request.getRemoteIP(); $country = geo.getCountry( $ip ); if( !$country ) $country = 'unknown'; $msg = 'Request from ' . $ip . ' (' . $country . ') ' . ' contained suspicious ' . $reason . ': ' . $k . ': ' . $v; log.warn( $msg ); # Optional - raise an event to trigger a configured event handler # event.emit( "FloatingPointAttack", $msg ); connection.discard(); } $headers = http.getHeaders(); if( $h = check( $headers ) ) logAndDrop( "header", $h, $headers[$h] ); $params = http.getFormParams(); if( $h = check( $params ) ) logAndDrop( "parameter", $h, $params[$h] );   The result, from an internal IP address (192.168.35.1) and using a querystring ?userid=2.2250738585072011e-308:     There is a very, very slim risk of false positives with these rules (dropping connections which would not have a malicious effect), but the probability of the string "2225073858507201" appearing is miniscule (except perhaps for blog posts about this very vulnerability...).
View full article
This Document provides step by step instructions on how to set up Brocade Virtual Traffic Manager for Oracle WebLogic Applications. Sample applications that can be deployed using this document include Oracle's PeopleSoft and Blackboard's Academic Suite.   This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.
View full article
Loggly is a cloud-based log management service. The idea with Loggly is that you direct all your applications, hardware, software, etc. to send their logs to Loggly. Once all the logs are in the Loggly cloud you can:

Root cause and solve problems by performing powerful and flexible searches across all your devices and applications
Set up alerts on log events
Measure application performance
Create custom graphs and analytics to better understand user behavior and experience

Having your Virtual Traffic Manager (vTM) logs alongside your application logs will provide valuable information to help further analyze and debug your applications. You can export both the vTM event log and the request logs for each individual Virtual Server to Loggly.

vTM Event Log

The vTM event log contains both error logs and informational messages. To export the vTM event log to Loggly we will first create an Input in Loggly. In the Loggly web interface navigate to Incoming Data -> Inputs and click on "+ Add Input". The key field is the Service Type, which must be set to Syslog UDP. After creating the input you'll be given a destination to send the logs to. The next step is to tell the vTM to send logs to this destination. In the vTM web interface navigate to System > Alerting and select Syslog under the drop-down menu for All Events. Click Update to save the changes. The final step is to click on Syslog and update the sysloghost to the Loggly destination.

Virtual Server Request Logs

Connections to a virtual server can be recorded in request logs. These logs can help track the usage of the virtual server, and can record many different pieces of information about each connection. To export virtual server request logs to Loggly, first navigate to Services > Virtual Servers > (your virtual server) > Request Logging. First set log!enabled to Yes - it's not on by default. Scroll down, set syslog!enabled to Yes and set the syslog!endpoint to the same destination as for the vTM event log. Click Update to save the changes. Alternatively, you can create a new input in Loggly for request logs if you don't want them to get mixed up with the event log.

Making sure it works

An easy way to make sure it works is to modify the configuration, for example by creating and deleting a virtual server. This will generate an event in the vTM event log. In Loggly you should see the light turn green for this input.

The Virtual Traffic Manager is designed to be flexible, being the only software application delivery controller that can be seamlessly deployed in private, public, and hybrid clouds. And now, by exporting your vTM logs, you can take full advantage of the powerful analysis tools available within Loggly.
View full article
Web spiders are clever critters - they are automated programs designed to crawl over web pages, retrieving information from the whole of a site. (For example, Spiders power search engines and shopping comparison sites). But what do you do if your website is being overrun by the bugs? How can you prevent your service from being slowed down by a badly written, over-eager web spider?   Web spiders, (sometimes called robots, or bots), are meant to adhere to the Robot exclusion standard. By putting a file called robots.txt at the top of your site, you can restrict the pages that a web spider should load. However, not all spiders bother to check this file. Even worse, the standard gives no control over how often a spider may fetch pages. A poorly written spider could hammer your site with requests, trying to discover the price of everything that you are selling every minute of the day. The problem is, how do you stop these spiders while allowing your normal visitors to use the site without restrictions?   As you might expect, Stingray has the answer! The key feature to use is the 'Request Rate Shaping' classes. These will prevent any one user from fetching too many pages from your site.   Let's see how to put them to use:   Create a Rate Shaping Class   You can create a new class from the Catalogs page. You need to pick at least one rate limit: the maximum allowed requests per minute, or per second. For our example, we'll create a class called 'limit' that allows up to 100 requests a minute.   Put the rate shaping class into use - first attempt   Now, create a TrafficScript rule to use this class. Don't forget to add this new rule to the Virtual Server that runs your web site.   rate.use( "limit" );   This rule is run for each HTTP request to your site. It applies the rate shaping class to each of them.   However, this isn't good enough. We have just limited the entire range of visitors to the site to 100 requests a minute, in total. If we leave the settings as is, this would have a terrible effect on the site. We need to apply the rate shaping limits to each individual user.   Rate shaping - second attempt   Edit the TrafficScript rule and use this code instead:   rate.use( "limit", connection.getRemoteIP() );   We have provided a second parameter to the rate.use() function. The rule is taking the client IP address and using this to identify a user. It then applies the rate shaping class separately to each unique IP address. So, a user coming from IP address 1.2.3.4 can make up to 100 requests a minute, and a user from 5.6.7.8 could also make 100 requests at the same time.   Now, if a web spider browses your site, it will be rate limited.   Improvements   We can make this rate shaping work even better. One slight problem with the above code is that sometimes you may have multiple users arriving at your site from one IP address. For example, a company may have a single web proxy. Everyone in that company will appear to come from the same IP address. We don't want to collectively slow them down.   To work around this, we can use cookies to identify individual users. Let's assume your site already sets a cookie called 'USERID'. The value is unique for each visitor. We can use this in the rate shaping:   # Try reading the cookie $userid = http.getCookie( "USERID" ); if( $userid == "" ) { $userid = connection.getRemoteIP(); } rate.use( "limit", $userid );   This TrafficScript rule tries to use the cookie to identify a user. If it isn't present, it falls back to using the client IP address.   
Even more improvements   There are many other possibilities for further improvements. We could detect web spiders by their User-Agent names, or perhaps we could only rate shape users who aren't accepting cookies. But we have already achieved our goal - now we have a means to limit the page requests by automated programs, while allowing ordinary users to fully use the site.   This article was originally written by Ben Mansell in December 2006.
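As a footnote to the 'Even more improvements' suggestion above, here is a minimal sketch of the User-Agent approach. It is not from the original article; the bot signatures listed are just examples, and 'limit' is the rate class created earlier.

# Apply the rate class only to clients that identify themselves as spiders
$ua = http.getHeader( "User-Agent" );
if( string.contains( $ua, "Googlebot" ) ||
    string.contains( $ua, "bingbot" )   ||
    string.contains( $ua, "spider" ) ) {
   rate.use( "limit", connection.getRemoteIP() );
}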
View full article
Lots of websites provide a protected area for authorized users to log in to. For instance, you might have a downloads section for products on your site where customers can access the software that they have bought.   There are many different ways to protect web pages with a user name and password. Their login and password could be quickly spread around. Once the details are common knowledge, anyone could login and access the site without paying.   Stingray and TrafficScript to the rescue!   Did you know that TrafficScript can be used to detect when a username and password are used from several different locations? You can then choose whether to disable the account or give the user a new password. All this can be done without replacing any of your current authentication systems on your website:   Looks like the login details for user 'ben99' have been leaked! How can we stop people leeching from this account?   For this example, we'll use a website where the entire site is protected with a PHP script that handles the authentication. It will check a user's password, and then set a USER cookie filled in with the user name. The details of the authentication scheme are not important. In this instance, all that matters is that TrafficScript can discover the user name of the account.   Writing the TrafficScript rule   First of all, TrafficScript needs to ignore any requests that aren't authenticated:   $user = http.getCookie( "USER" ); if( $user == "" ) break;   Next, we'll need to discover where the user is coming from. We'll use the IP address of their machine. However, they may also be connecting via a proxy, in which case we'll use the address supplied by the proxy.   $from = request.getRemoteIP(); $proxy = http.getHeader( "X-Forwarded-For" ); if( $proxy != "" ) $from = $proxy;   TrafficScript needs to keep track of which IP addresses have been used for each account. We will have to store a list of the IP addresses used. TrafficScript provides persistent storage with the data.get() and data.set() functions.   $list = data.get( $user ); if( !string.contains( $list, $from )) { # Add this entry in, padding list with spaces $list = sprintf( "%19s %s", $from, $list ); ...   Now we need to know how many unique IP addresses have been used to access this account. If the list has grown too large, then don't let this person fetch any more pages.   # Count the number of entries in the list. Each entry is 20 # characters long (the 19 in the sprintf plus a space) $entries = string.length( $list ) / 20; if( $entries > 4 ) { # Kick the user out with an error message http.sendResponse( "403 Permission denied", "text/plain", "Account locked", "" ); } else { # Update the list of IP addresses data.set( $user, $list ); } }   That's it! If a single account on your site is accessed from more than four different locations, the account will be locked out, preventing abuse.   As this is powered by TrafficScript, further improvements can be made. We can extend the protection in many ways, without having to touch the code that runs your actual site. Remember, this can be deployed with any kind of authentication being used - TrafficScript just needs the user name.   A more advanced example   This has a few new improvements. First of all, the account limits are given a timeout, enabling someone to access the site from different locations (e.g. home and office), but will still catch abuse if the account is being used simultaneously in different locations. 
Secondly, any abuse is logged, so that an administrator can check up on leaked accounts and take appropriate action. Finally, to show that we can work with other login schemes, this example uses HTTP Basic Authentication to get the user name.   # How long to keep data for each userid (seconds) $timelimit = 3600; # Maximum number of different IP addresses to allow a client # to connect from $maxips = 4; # Only interested in HTTP Basic authentication $h = http.getHeader( "Authorization" ); if( !string.startsWith( $h, "Basic " )) continue; # Extract and decode the username:password combination $enc = string.skip( $h, 6 ); $userpasswd = string.base64decode( $enc ); # Work out where the user came from. If they came via a proxy, # then ensure that we don't log the proxy's IP address(es) $from = request.getRemoteIP(); $proxy = http.getHeader( "X-Forwarded-For" ); if( $proxy != "" ) $from = $proxy; # Have we seen this user before? We will store a space separated # list of all the IPs that we have seen the user connect from $list = data.get( $userpasswd ); # Also check the timings. Only keep the records for a fixed period # of time, then delete them. $time = data.get( "time-" . $userpasswd ); $now = sys.time(); if(( $time == "" ) || (( $now - $time ) > $timelimit )) { # Entry expired (or hasn't been created yet). Start with a new # list and timestamp. $list = ""; $time = $now; data.set( "time-" . $userpasswd, $time ); } if( !string.contains( $list, $from )) { # Pad each entry in the list with spaces $list = sprintf( "%19s %s", $from, $list ); # Count the number of entries in the list. Each entry is 20 # characters long (the 19 in the sprintf plus a space) $entries = string.length( $list ) / 20; # Check if the list of used IP addresses is too large - if so, # send back an error page! if( $entries > $maxips ) { # Put a message in the logs so the admins can see the abuse # (Ensure that we show the username but not the password) $user = string.substring( $userpasswd, 0, string.find( $userpasswd, ":" ) - 1 ); log.info( "Login abuse for account: " . $user . " from " . $list ); http.sendResponse( "403 Permission denied", "text/html", "Your account is being accessed by too many users", "" ); } else { # Update the list and let the user through data.set( $userpasswd, $list ) ; } }   This article was originally written by Ben Mansell in March 2007
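One further extension worth sketching (an assumption-laden example, not part of the original article): skip the shared-account check entirely for requests from your own trusted network, so internal monitoring or office proxies never trip the limit. This assumes string.ipmaskmatch() is available in your TrafficScript version and that 10.0.0.0/8 is your internal range.

# Don't apply the shared-account check to requests from the internal network
if( string.ipmaskmatch( request.getRemoteIP(), "10.0.0.0/8" ) ) break;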
View full article
This article illustrates how to write data to a MySQL database from a Java Extension, and how to use a background thread to minimize latency and control the load on the database.   Being Lazy with Java Extensions   With a Java Extension, you can log data in real time to an external database. The example in this article describes how to log the ‘referring’ source that each visitor comes in from when they enter a website. Logging is done to a MySQL database, and it maintains a count of how many times each key has been logged, so that you can determine which sites are sending you the most traffic.   The article then presents a modification that illustrates how to lazily perform operations such as database writes in the background (i.e. asynchronously) so that the performance the end user observes is not impaired.   Overview - let's count referers!   It’s often very revealing to find out which web sites are referring the most traffic to the sites that you are hosting. Tools like Google Analytics and web log analysis applications are one way of doing this, but in this example we’ll show an alternative method where we log the frequency of referring sites to a local database for easy access.   When a web browser submits an HTTP request for a resource, it commonly includes a header called "Referer" which identifies the page that linked to that resource. We’re not interested in internal referrers – where one page in the site links to another. We’re only interested in external referrers. We're going to log these 'external referrers' to a MySQL database, counting the frequency of each so that we can easily determine which occur most commonly. Create the database   Create a suitable MySQL database, with limited write access for a remote user: % mysql –h dbhost –u root –p Enter password: ******** mysql> CREATE DATABASE website; mysql> CREATE TABLE website.referers ( data VARCHAR(256) PRIMARY KEY, count INTEGER ); mysql> GRANT SELECT,INSERT,UPDATE ON website.referers TO 'web'@'%' IDENTIFIED BY 'W38_U5er'; mysql> GRANT SELECT,INSERT,UPDATE ON website.referers TO 'web'@'localhost' IDENTIFIED BY 'W38_U5er'; mysql> QUIT;   Verify that the table was correctly created and the ‘web’ user can access it:   % mysql –h dbhost –u web –p Enter password: W38_U5er mysql> DESCRIBE website.referers; +-------+--------------+------+-----+---------+-------+ | Field | Type         | Null | Key | Default | Extra | +-------+--------------+------+-----+---------+-------+ | data  | varchar(256) | NO   | PRI |         |       | | count | int(11)      | YES  |     | NULL    |       | +-------+--------------+------+-----+---------+-------+ 2 rows in set (0.00 sec)   mysql> SELECT * FROM website.referers; Empty set (0.00 sec)   The database looks good...   Create the Java Extension   We'll create a Java Extension that writes to the database, adding rows with the provided 'data' value, and setting the 'count' value to '1', or incrementing it if the row already exists.   
CountThis.java   Compile up the following 'CountThis' Java Extension:

import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CountThis extends HttpServlet {
   private static final long serialVersionUID = 1L;

   private Connection conn = null;
   private String userName = null;
   private String password = null;
   private String database = null;
   private String table = null;
   private String dbserver = null;

   // Read the database settings from the extension's initialization properties
   public void init( ServletConfig config) throws ServletException {
      super.init( config );

      userName = config.getInitParameter( "username" );
      password = config.getInitParameter( "password" );
      table    = config.getInitParameter( "table" );
      dbserver = config.getInitParameter( "dbserver" );

      if( userName == null || password == null || table == null || dbserver == null )
         throw new ServletException( "Missing username, password, table or dbserver config value" );

      try {
         Class.forName("com.mysql.jdbc.Driver").newInstance();
      } catch( Exception e ) {
         throw new ServletException( "Could not initialize mysql: "+e.toString() );
      }
   }

   // Insert the supplied value with a count of 1, or increment the count if it already exists
   public void doGet( HttpServletRequest req, HttpServletResponse res )
      throws ServletException, IOException {
      try {
         String[] args = (String[])req.getAttribute( "args" );
         String data = args[0];
         if( data == null ) return;

         if( conn == null ) {
            conn = DriverManager.getConnection( "jdbc:mysql://"+dbserver+"/", userName, password);
         }

         PreparedStatement s = conn.prepareStatement(
            "INSERT INTO " + table + " ( data, count ) VALUES( ?, 1 ) " +
            "ON DUPLICATE KEY UPDATE count=count+1" );
         s.setString(1, data);
         s.executeUpdate();
      } catch( Exception e ) {
         conn = null;
         log( "Could not log data to database table '" + table + "': " + e.toString() );
      }
   }

   public void doPost( HttpServletRequest req, HttpServletResponse res )
      throws ServletException, IOException {
      doGet( req, res );
   }
}

Upload the resulting CountThis.class file to Traffic Manager's Java Catalog. Click on the class name to configure the following initialization properties: username, password, table and dbserver, matching the values used when the database was created (web, W38_U5er, website.referers and dbhost in this example).

You must also upload the mysql connector (I used mysql-connector-java-5.1.24-bin.jar) from dev.mysql.com to your Traffic Manager Java Catalog.

Add the TrafficScript rule

You can test the Extension very quickly using the following TrafficScript rule to log each request:

java.run( "CountThis", http.getPath() );

Check the Traffic Manager event log for any error messages, and query the table to verify that it is getting populated by the extension:

mysql> SELECT * FROM website.referers ORDER BY count DESC LIMIT 5;
+--------------------------+-------+
| data                     | count |
+--------------------------+-------+
| /media/riverbed.png      |     5 |
| /articles                |     3 |
| /media/puppies.jpg       |     2 |
| /media/ponies.png        |     2 |
| /media/cats_and_mice.png |     2 |
+--------------------------+-------+
5 rows in set (0.00 sec)

mysql> TRUNCATE website.referers;
Query OK, 0 rows affected (0.00 sec)

Use 'TRUNCATE' to delete all of the rows in a table.
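When testing with http.getPath() as above, every image, stylesheet and script request also ends up in the table, which makes the results harder to read. A minimal variation of the test rule that skips common static assets (the file extensions listed here are just examples) keeps the data more useful:

# Don't bother counting requests for static assets
$path = http.getPath();
if( string.endsWith( $path, ".gif" ) || string.endsWith( $path, ".jpg" ) ||
    string.endsWith( $path, ".png" ) || string.endsWith( $path, ".css" ) ||
    string.endsWith( $path, ".js" ) ) break;

java.run( "CountThis", $path );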
Log and count referer headers   We only want to log referrers from remote sites, so use the following TrafficScript rule to call the Extension only when it is required:   # This site $host = http.getHeader( "Host" ); # The referring site $referer = http.getHeader( "Referer" ); # Only log the Referer if it is an absolute URI and it comes from a different site if( string.contains( $referer, "://" ) && !string.contains( $referer, "://".$host."/" ) ) { java.run( "CountThis", $referer ); }   Add this rule as a request rule to a virtual server that processes HTTP traffic.   As users access the site, the referer header will be pushed into the database. A quick database query will tell you what's there: % mysql –h dbhost –u web –p Enter password: W38_U5er mysql> SELECT * FROM website.referers ORDER BY count DESC LIMIT 4; +--------------------------------------------------+-------+ | referer                                          | count | +--------------------------------------------------+-------+ | http://www.google.com/search?q=stingray        |    92 | | http://www.riverbed.com/products/stingray      |    45 | | http://www.vmware.com/appliances               |    26 | | http://www.riverbed.com/                       |     5 | +--------------------------------------------------+-------+ 4 rows in set (0.00 sec)   Lazy writes to the database   This is a useful application of Java Extensions, but it has one big drawback. Every time a visitor arrives from a remote site, his first transaction is stalled while the Java Extension writes to the database. This breaks one of the key rules of website performance architecture – do everything you can asynchronously (i.e. in the background) so that your users are not impeded (see "Lazy Websites run Faster").   Instead, a better solution would be to maintain a separate, background thread that wrote the data in bulk to the database, while the foreground threads in the Java Extension simply appended the Referer data to a table:     CountThisAsync.java   The following Java Extension (CountThisAsync.java) is a modified version of CountThis.java that illustrates this technique:   import java.io.IOException; import java.sql.Connection; import java.sql.DriverManager; import java.sql.PreparedStatement; import java.sql.SQLException; import java.util.LinkedList; import javax.servlet.ServletConfig; import javax.servlet.ServletException; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; public class CountThisAsync extends HttpServlet { private static final long serialVersionUID = 1L; private Writer writer = null; protected static LinkedList theData = new LinkedList(); protected class Writer extends Thread { private Connection conn = null; private String table; private int syncRate = 20; public void init( String username, String password, String url, String table ) throws Exception { Class.forName("com.mysql.jdbc.Driver").newInstance(); conn = DriverManager.getConnection( url, username, password); this.table = table; start(); } public void run() { boolean running = true; while( running ) { try { sleep( syncRate*1000 ); } catch( InterruptedException e ) { running = false; }; try { PreparedStatement s = conn.prepareStatement( "INSERT INTO " + table + " ( data, count ) VALUES( ?, 1 )" + "ON DUPLICATE KEY UPDATE count=count+1" ); conn.setAutoCommit( false ); synchronized( theData ) { while( !theData.isEmpty() ) { String data = theData.removeFirst(); s.setString(1, data); s.addBatch(); } } 
s.executeBatch(); } catch ( Exception e ) { log( e.toString() ); running = false; } } } } public void init( ServletConfig config ) throws ServletException { super.init( config ); String userName = config.getInitParameter( "username" ); String password = config.getInitParameter( "password" ); String table = config.getInitParameter( "table" ); String dbserver = config.getInitParameter( "dbserver" ); if( userName == null || password == null || table == null || dbserver == null ) throw new ServletException( "Missing username, password, table or dbserver config value" ); try { writer = new Writer(); writer.init( userName, password, "jdbc:mysql://"+dbserver+"/", table ); } catch( Exception e ) { throw new ServletException( e.toString() ); } } public void doGet( HttpServletRequest req, HttpServletResponse res ) throws ServletException, IOException { String[] args = (String[])req.getAttribute( "args" ); String data = args[0]; if( data != null && writer.isAlive() ) { synchronized( theData ) { theData.add( data ); } } } public void doPost( HttpServletRequest req, HttpServletResponse res ) throws ServletException, IOException { doGet( req, res ); } public void destroy() { writer.interrupt(); try { writer.join( 1000L ); } catch( InterruptedException e ) {}; super.destroy(); } }   When the Extension is invoked by Traffic Manager , it simply stores the value of the Referer header in a local list and returns immediately. This minimizes any latency that the end user may observe.   The Extension creates a separate thread (embodied by the Writer class) that runs in the background. Every syncRate seconds, it removes all of the values from the list and writes them to the database.   Compile the extension: $ javac -cp servlet.jar:zxtm-servlet.jar CountThisAsync.java $ jar -cvf CountThisAsync.jar CountThisAsync*.class   ... and upload the resulting CountThisAsync.jar Jar file to your Java catalog . Remember to apply the four configuration parameters to the CountThisAsync.jar Java Extension so that it can access the database, and modify the TrafficScript rule so that it calls the CountThisAsync Java Extension.   You’ll observe that database updates may be delayed by up to 20 seconds (you can tune that delay in the code), but the level of service that end users experience will no longer be affected by the speed of the database.
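On a very busy site you may not even need every request to reach the extension: a sampled count is often enough to rank referrers. The sketch below, which would sit inside the referrer check shown earlier, only invokes the extension for roughly one request in ten. It assumes math.random( n ) returns an integer from 0 to n-1, so check the TrafficScript reference for your version before relying on it:

# Sample roughly 10% of matching requests to reduce the load on the
# extension and the database (assumes math.random( 10 ) returns 0..9)
if( math.random( 10 ) == 0 ) {
   java.run( "CountThisAsync", $referer );
}

The counts then become estimates (multiply by the sampling factor), but the relative ranking of referring sites is usually what matters.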
View full article
SOA applications need just as much help as traditional web applications when it comes to reliability, performance and traffic management. This article provides four down-to-earth TrafficScript examples to show you how you can inspect the XML messages and manage SOA transactions.

Why is XML difficult?

SOA traffic generally uses the SOAP protocol, sending XML data over HTTP. It's not possible to reliably inspect or modify the XML data using simple tools like search-and-replace or regular expressions. In Computer Science terms, regular expressions match regular languages, whereas XML is a much more structured context-free language. Instead of regular expressions, standards like XPath and XSLT are used to inspect and manipulate XML data.

Using TrafficScript rules, Stingray can inspect the payload of a SOAP request or response and use XPath operations to extract data from it, making traffic management decisions on this basis. Stingray can check the validity of XML data, and use XSLT operations to transform the payload to a different dialect of XML. The following four examples show traffic inspection and management in an SOA application context. Other examples of XML processing include embedding RSS data in an HTML document.

Routing SOAP traffic

Let’s say that the network is handling requests for a number of different SOAP methods. The traffic manager is the single access point – all SOAP traffic is directed to it. Behind the scenes, some of the methods have dedicated SOAP servers because they are particularly resource intensive; all other methods are handled by a common set of servers.

The following example uses Stingray's pools. A pool is a group of servers that provide the same service. Individual pools have been created for some SOA components, and a ‘SOAP-Common-Servers’ pool contains the nodes that host the common SOA components.

# Obtain the XML body of the SOAP request
$request = http.getBody();

$namespace = "xmlns:SOAP-ENV=\"http://schemas.xmlsoap.org/soap/envelope/\"";
$xpath = "/SOAP-ENV:Envelope/SOAP-ENV:Body/*[1]";

# Extract the SOAP method using an XPath expression
$method = xml.XPath.matchNodeSet( $request, $namespace, $xpath );

# For ‘special’ SOAP methods, we have a dedicated pool of servers for each
if( pool.getActiveNodes( "SOAP-".$method ) > 0 ) {
   pool.select( "SOAP-".$method );
} else {
   pool.select( "SOAP-Common-Servers" );
}

TrafficScript: Routing SOAP requests according to the method

Why is this useful? This allows you to deploy SOA services in a very flexible manner. When a new instance of a service is added, you do not need to modify every caller that may invoke this service. Instead, you need only add the service endpoint to the relevant pool. You can rapidly move a service from one server to another, for resourcing or security reasons (red zone, green zone), and an application can be easily built from services that are found in different locations.

Ensuring fair access to resources

With Stingray, you can also monitor the performance of each pool of servers to determine which SOAP methods are running the slowest. This can help troubleshoot performance problems and inform decisions to re-provision resources where they are needed the most. You can shape traffic – bandwidth or transactions per second – to limit the resources used and smooth out flash floods of traffic. With TrafficScript's programmability, you can shape different types of traffic in different ways.
For example, the following TrafficScript code sample extracts a ‘username’ node from the SOAP request. It then rate-shapes SOAP requests so that each remote source (identified by remote IP address and ‘username’ node value) can submit SOAP requests at a maximum of 60 times per minute:

# Obtain the source of the request
$ip = request.getRemoteIP();

# Obtain the XML body of the SOAP request
$request = http.getBody();

$namespace = "xmlns:SOAP-ENV=\"http://schemas.xmlsoap.org/soap/envelope/\"";
$xpath = "/SOAP-ENV:Envelope/SOAP-ENV:Body/*/username/text()";

# Extract the username using an XPath expression
$username = xml.XPath.matchNodeSet( $request, $namespace, $xpath );

# $key uniquely identifies this type of request from this source.
$key = $ip . ", " . $username;

# The 'transactions' rate shaping class limits each type to 60 per minute
rate.use( "transactions", $key );

TrafficScript: Rate-shaping different users of SOAP traffic

Why is this important? An SOA component may be used by multiple different SOA applications. Different applications may have different business priorities, so you might wish to prioritize some requests to a component over others. Applying ‘service governance’ policies using Stingray's rate shaping functionality ensures that all SOA applications get fair and appropriate access to critical components, and that no one application can overwhelm a component to the detriment of other applications.

This can be compared to time-sharing systems – each SOA application is a different ‘user’, and users can be granted specific access to resources, with individual limits where required. When some SOA applications are externally accessible (via a web-based application for example), this is particularly important because a flash flood or malicious denial-of-service attack could ripple through, affecting many internal SOA components and internal applications.

Securing Traffic

Suppose that someone created a web services component for a travel company that enumerated all of the possible flights from one location to another on a particular day. The caller of the component could specify how many hops they were prepared to endure on the journey. Unfortunately, once the component was deployed, a serious bug was found. If a caller asked for a journey with the same start and finish, the component got stuck in an infinite loop. If a caller asked for a journey with a large number of hops (1000 hops perhaps), the computation cost grew exponentially, creating a simple, effective denial of service attack.

Fixing the component is obviously the preferred solution, but it’s not always possible to do so in a timely fashion. Often, procedural barriers make it difficult to make changes to a live application. However, by controlling and manipulating the SOA requests as they travel over the network, you can very quickly roll out a security rule on your Service Delivery Controller (SDC) to drop or modify the SOAP request.
Here’s a snippet (using the same $namespace definition as before):

$request = http.getBody();

$from = xml.XPath.matchNodeSet( $request, $namespace, "//from/text()" );
$dest = xml.XPath.matchNodeSet( $request, $namespace, "//dest/text()" );

# The error response: read a pre-canned response from disk and return
# it as a SOAP response
if( $from == $dest ) {
   $response = resource.get( "FlightPathFaultResponse.xml" );
   connection.sendResponse( $response );
}

$hops = xml.XPath.matchNodeSet( $request, $namespace, "//maxhops/text()" );
if( $hops > 3 ) {
   # Apply an XSLT that sets the hops node to 3
   $transform = resource.get( "FlightPath3Hops.xslt" );
   http.setBody( xml.XSLT.transform( $request, $transform ) );
}

TrafficScript: Checking validity of SOAP requests

Why is this important? Using the Service Delivery Controller to manage and rewrite SOA traffic is a very rapid and lightweight alternative to rewriting SOA components. Patching the application in this way may not be a permanent solution, although it’s often sufficient to resolve problems. The real benefit is that once a fault is detected, it can be resolved quickly, without requiring in-depth knowledge of the application. Development staff need not be pulled away from other projects immediately. A full application-level fix can wait until the staff and resources are available; for example, at the next planned update of the component code.

Validating SOAP responses

If a SOAP server encounters an error, it may still return a valid SOAP response with a ‘Fault’ element inside. If you can look deep inside the SOAP response, you’ve got a great opportunity to work around such transient application errors. If a server returns a fault message where the faultcode indicates there was a problem with the server, wouldn’t it be great if you could retry the request against a different SOAP server in the cluster?

$response = http.getResponseBody();

$ns = "xmlns:SOAP-ENV=\"http://schemas.xmlsoap.org/soap/envelope/\"";
$xpath = "/SOAP-ENV:Envelope/SOAP-ENV:Body/SOAP-ENV:Fault/faultcode/text()";
$faultcode = xml.XPath.matchNodeSet( $response, $ns, $xpath );

if( string.endsWith( $faultcode, "Server" ) ) {
   if( request.retries() < 2 ) {
      request.avoidNode( connection.getNode() );
      request.retry();
   }
}

TrafficScript: If we receive a Server fault code in the SOAP response, retry the request at most 2 times against different servers

Why is this important? This particular example shows how the error checking used by an SDC can be greatly extended to detect a wide range of errors, even in responses that appear “correct” to less intelligent traffic managers. It is one example of a wide range of applications where responses can be verified, scrubbed and filtered. Undesirable responses may include fault codes, sensitive information (like credit card or social security numbers), or even incorrectly-localized or formatted responses that may be entirely legitimate, but cannot be interpreted by the calling application.

Pinpointing errors in a loosely-coupled SOA application is a difficult and invasive process, often involving the equivalent of adding ‘printf’ debug statements to the code of individual components. By inspecting responses at the network level, it becomes much easier to investigate and diagnose application problems and then work around them, either by retrying requests or transforming and rewriting responses as outlined in the previous example.
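As a final touch, you might not want callers to see a raw server fault once the retries in the rule above have been exhausted. The sketch below would follow the retry logic in the same response rule and swaps the fault for a pre-canned SOAP response retrieved with resource.get(), as in the earlier example; the file name "GenericFault.xml" is just an illustration of an uploaded resource file:

# If we have already retried and still see a Server fault, replace the
# response with a friendlier, pre-canned SOAP fault.
# "GenericFault.xml" is an example name for an uploaded resource file.
if( string.endsWith( $faultcode, "Server" ) && request.retries() >= 2 ) {
   http.setResponseBody( resource.get( "GenericFault.xml" ) );
}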
View full article
Following is a library that I am working on that has one simple design goal: Make it easier to do authentication overlay with Stingray.   I want to have the ability to deploy a configuration that uses a single line to input an authentication element (basic auth or forms based) that takes the name of an authenticator, and uses a simple list to define what resources are protected and which groups can access them.   Below is the beginning of this library.  Once we have better code revision handling in splash (hint hint Owen Garrett!!) I will move it to something more re-usable.  Until then, here it is.   As always, comments, suggestions, flames or gifts of mutton and mead most welcome...   The way I want to call it is like this:

import lib_auth_overlay as aaa;

# Here we challenge for user/pass
$userpasswd = aaa.promptAuth401();

# extract the entered username / password into variables for clarity
$username = $userpasswd[0];
$password = $userpasswd[1];

# Here we authenticate and check that the user is a member of the listed group
# We are using the "user_ldap" authenticator that I set up against my laptop.snrkl.org
# AD domain controller.
$authResult = aaa.doAuthAndCheckGroup( "user_ldap", $username, $password, "CN=staff,CN=Users,DC=laptop,DC=snrkl,DC=org" );

# for convenience we will tell the user the result of their Auth in an http response
aaa.doHtmlResponse.200( "Auth Result:" . $authResult );

Here is the lib_auth_overlay that is referenced in the above element.  Please note the promptAuthHttpForm() subroutine is not yet finished...

sub doHtmlResponse.200( $message ){

   http.sendResponse(
      "200 OK",
      "text/html",
      $message,
      ""
   );
}

sub challengeBasicAuth( $errorMessage, $realm ){

   http.sendResponse(
      "401 Access Denied",
      "text/html",
      $errorMessage,
      "WWW-Authenticate: Basic realm=\"" . $realm . "\"" );

}

sub basicAuthExtractUserPass( $ah ){ #// $ah is $authHeader

   $enc = string.skip( $ah, 6 );
   $up = string.split( string.base64decode( $enc ), ":" );
   return $up;

}

sub doAuthAndGetGroups( $authenticator, $u, $p ){

   $auth = auth.query( $authenticator, $u, $p );

   if( $auth['Error'] ) {
      log.error( "Error with authenticator " . $authenticator . ": " . $auth['Error'] );
      return "Authentication Error";

   } else if( !$auth['OK'] ) { #//Auth is not OK
      # Unauthorised
      log.warn( "Access Denied - invalid username or password for user: \"" . $u . "\"" );
      return "Access Denied - invalid username or password";

   } else if( $auth['OK'] ){
      log.info( "Authenticated \"" . $u . "\" successfully at " . sys.localtime.format( "%a, %d %b %Y %T EST" ));
      return $auth['memberOf'];
   }

}

sub doAuthAndCheckGroup( $authenticator, $u, $p, $g ){

   $auth = auth.query( $authenticator, $u, $p );

   if( $auth['Error'] ) {
      log.error( "Error with authenticator \"" . $authenticator . "\": " . $auth['Error'] );
      return "Authentication Error";

   } else if( !$auth['OK'] ) { #//Auth is not OK
      # Unauthorised
      log.warn( "Access Denied - invalid username or password for user: \"" . $u . "\"" );
      return "Access Denied - invalid username or password";

   } else if( $auth['OK'] ){
      log.info( "Authenticated \"" . $u . "\" successfully at " . sys.localtime.format( "%a, %d %b %Y %T EST" ));

      if( lang.isArray( $auth['memberOf'] )){ #//More than one group returned

         foreach( $group in $auth['memberOf'] ){
            if( $group == $g ) {
               log.info( "User \"" . $u . "\" permitted access at " . sys.localtime.format( "%a, %d %b %Y %T EST" ));
               return "PASS";
               break;
            } else {
               log.warn( "User \"" . $u . "\" denied access - not a member of \"" . $g . "\" at " . sys.localtime.format( "%a, %d %b %Y %T EST" ));
            }
         }

         #// If we get to here, we have exhausted list of groups with no match
         return "FAIL";

      } else { #// This means that only one group is returned

         $group = $auth['memberOf'];
         if( $group == $g ) {
            log.info( "User \"" . $u . "\" permitted access " . sys.localtime.format( "%a, %d %b %Y %T EST" ));
            return "PASS";
            break;
         } else {
            log.warn( "User \"" . $u . "\" denied access - not a member of \"" . $g . "\" at " . sys.localtime.format( "%a, %d %b %Y %T EST" ));
            return "FAIL";
         }

      }

   }
}

sub promptAuth401(){

   if( !http.getHeader( "Authorization" ) ) { #// no Authorization header present, let's challenge for credentials
      challengeBasicAuth( "Error Message", "Realm" );
   } else {
      $authHeader = http.getHeader( "Authorization" );
      $up = basicAuthExtractUserPass( $authHeader );

      return $up;
   }
}

sub promptAuthHttpForm(){

   $response = "<html>
<head>Authenticate me...</head>
<form action=/login method=POST>
<input name=user required>
<input name=realm type=hidden value=stingray>
<input name=pass type=password required>
<button>Log In</button>
</form>
</html>";

   doHtmlResponse.200( $response );
}
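To give a feel for the 'simple list of protected resources' goal mentioned above, here is a rough sketch of how a request rule might drive the library: it challenges for credentials only on selected path prefixes and requires a different group for each. The paths and group DNs are placeholders, and the group check mirrors the doAuthAndCheckGroup() usage shown earlier — treat it as a starting point rather than a finished policy.

import lib_auth_overlay as aaa;

$path = http.getPath();

# Example mapping of protected path prefixes to required AD groups
$group = "";
if( string.startsWith( $path, "/admin" ) ) {
   $group = "CN=admins,CN=Users,DC=laptop,DC=snrkl,DC=org";
} else if( string.startsWith( $path, "/staff" ) ) {
   $group = "CN=staff,CN=Users,DC=laptop,DC=snrkl,DC=org";
}

# Anything else is unprotected
if( $group == "" ) break;

$userpasswd = aaa.promptAuth401();
if( aaa.doAuthAndCheckGroup( "user_ldap", $userpasswd[0], $userpasswd[1], $group ) != "PASS" ) {
   http.sendResponse( "403 Forbidden", "text/html", "Access denied", "" );
}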
View full article
FTP is an example of a 'server-first' protocol. The back-end server sends a greeting message before the client sends its first request. This means that the traffic manager must establish the connection to the back-end node before it can inspect the client's request.   Fortunately, it's possible to implement a full protocol proxy in Stingray's TrafficScript language. This article (dating from 2005) explains how.

FTP Virtual Hosting scenario

We're going to manage the following scenario:

A service provider is hosting FTP services for three organizations - ferrarif1.com, sauberf1.com and minardif1.com. Each organization has their own cluster of FTP servers:

Ferrari have 3 Sun E15Ks in a pool named 'ferrari ftp'
Sauber have a couple of old, ex-Ferrari servers in Switzerland, in a pool named 'sauber ftp'
Minardi have a capable and cost-effective pair of pizza-box servers in a pool named 'minardi ftp'

The service provider hosts the FTP services through Stingray, and requires that users log in with their email address. If a user logs in as 'rbraun@ferrarif1.com', Stingray will connect the user to the 'ferrari ftp' pool and log in with username 'rbraun'.

This is made complicated because an FTP connection begins with a 'server hello' message as follows:

220 ftp.zeus.com FTP server (Version wu-2.6.1-0.6x.21) ready.

... before reading the data from the client.

Configuration

Create the virtual server (FTP, listening on port 21) and the three pools ('ferrari ftp' etc).  Configure the default pool for the virtual server to be the discard pool.

Configure the virtual server connection management settings, setting the FTP serverfirst_banner to:

220 F1 FTP server ready.

Add the following TrafficScript request rule to the virtual server, setting it to run every time:

$req = string.trim( request.endswith( "\n" ) );

if( !string.regexmatch( $req, "USER (.*)", "i" ) ) {
   # if we're connected, forward the message; otherwise
   # return a login prompt
   if( connection.getNode() ) {
      break;
   } else {
      request.sendresponse( "530 Please log in!!\r\n" );
      break;
   }
}
$loginname = $1;

# The login name should look like 'rbraun@ferrarif1.com'
if( ! string.regexmatch( $loginname, "(.*)@(.*)" ) ) {
   request.sendresponse( "530 Incorrect user or password!!\r\n" );
   break;
}
$user = $1;
$domain = string.lowercase( $2 );

request.set( "USER ".$user."\r\n" );

# select the pool we want...
if( $domain == "ferrarif1.com" ) {
   pool.use( "ferrari ftp" );
} else if( $domain == "sauberf1.com" ) {
   pool.use( "sauber ftp" );
} else if( $domain == "minardif1.com" ) {
   pool.use( "minardi ftp" );
} else {
   request.sendresponse( "530 Incorrect user or password!!\r\n" );
}

And that's it! Stingray automatically slurps and discards the serverfirst banner message from the back-end ftp servers when it connects on the first request.

More...

Here's a more sophisticated example which reads the username and password from the client before attempting to connect.
You could add your own authentication at this stage (for example, using http.request.get or auth.query to query an external server) before initiating the connect to the back-end ftp server:

TrafficScript request rule

$req = string.trim( request.endswith( "\n" ) );

if( string.regexmatch( $req, "USER (.*)" ) ) {
   connection.data.set( "user", $1 );
   $msg = "331 Password required for ".$1."!!\r\n";
   request.sendresponse( $msg );
   break;
}

if( !string.regexmatch( $req, "PASS (.*)" ) ) {
   # if we're connected, forward the message; otherwise
   # return a login prompt
   if( connection.getNode() ) {
      break;
   } else {
      request.sendresponse( "530 Please log in!!\r\n" );
      break;
   }
}

$loginname = connection.data.get( "user" );
$pass = $1;

# The login name should look like 'rbraun@ferrarif1.com'
if( ! string.regexmatch( $loginname, "(.*)@(.*)" ) ) {
   request.sendresponse( "530 Incorrect user or password!!\r\n" );
   break;
}
$user = $1;
$domain = string.lowercase( $2 );

# You could add your own authentication at this stage.
# If the username and password is invalid, do the following:
#
# if( $badpassword ) {
#    request.sendresponse( "530 Incorrect user or password!!\r\n" );
#    break;
# }

# now, replay the correct request against a new
# server instance
connection.data.set( "state", "connecting" );
request.set( "USER ".$user."\r\nPASS ".$pass."\r\n" );

# select the pool we want...
if( $domain == "ferrarif1.com" ) {
   pool.use( "ferrari ftp" );
} else if( $domain == "sauberf1.com" ) {
   pool.use( "sauber ftp" );
} else if( $domain == "minardif1.com" ) {
   pool.use( "minardi ftp" );
} else {
   request.sendresponse( "530 Incorrect user or password!!\r\n" );
}

TrafficScript response rule

if( connection.data.get("state") == "connecting" ) {
   # We've just connected, but Stingray doesn't slurp the serverfirst
   # banner until after this rule has run.
   # Slurp the first line (the serverfirst banner), the second line
   # (the 331 need password) and then replace the serverfirst banner
   $first = response.getLine();
   $second = response.getLine( "\n", $1 );
   $remainder = string.skip( response.get(), $1 );
   response.set( $first.$remainder );
   connection.data.set( "state", "" );
}

Remember that both rules must be set to 'run every time'.
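As a concrete illustration of the auth.query suggestion above, the commented-out $badpassword block in the request rule could be replaced with something like the following sketch. The authenticator name "ftp_users" is hypothetical — it would need to exist in your Authenticators catalog — and the FTP reply codes are the same ones used elsewhere in the rule:

# Validate the credentials against an external LDAP / Active Directory
# authenticator ("ftp_users" is a placeholder name)
$auth = auth.query( "ftp_users", $user, $pass );
if( $auth['Error'] ) {
   log.error( "Authenticator error: " . $auth['Error'] );
   request.sendresponse( "421 Service not available!!\r\n" );
   break;
} else if( !$auth['OK'] ) {
   request.sendresponse( "530 Incorrect user or password!!\r\n" );
   break;
}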
View full article
Having described Watermarking PDF documents with Stingray and Java and Watermarking Images with Java Extensions, this article describes the much simpler task of adding a watermark to an HTML document.

Stingray's TrafficScript language is fully capable of managing text content, so there's no need to resort to a more complex Java Extension to modify a web page:

# Only process text/html responses
$ct = http.getResponseHeader( "Content-Type" );
if( !string.startsWith( $ct, "text/html" ) ) break;

# Calculate the watermark text
$text = "Hello, world!";

# A new style, named watermark, that defines how the watermark text should be displayed:
$style = '
<style type="text/css">
.watermark {
   color: #d0d0d0;
   font-size: 100pt;
   -webkit-transform: rotate(-45deg);
   -moz-transform: rotate(-45deg);
   -o-transform: rotate(-45deg);
   transform: rotate(-45deg);
   position: absolute;
   width: 100%;
   height: 100%;
   margin: 0;
   z-index: 100;
   top:200px;
   left:25%;
   opacity: .5;
}
</style>';

# A div that contains the watermark text
$div = '<div class="watermark">' . $text . '</div>';

# Imprint in the body of the document
$body = http.getResponseBody();
if( string.regexmatch( $body, "^(.*)</body>(.*?)$", "i" ) ) {
   http.setResponseBody( $1 . $style . $div . "</body>" . $2 );
}

This rule overlays the watermark text diagonally across each HTML page that Stingray serves.

Of course, you can easily change the watermark text:

$text = "Hello, world!";

... perhaps to add more debugging or instrumentation to the page.

The CSS style for the watermark is based on this article, and other conversations on stackoverflow; you'll probably need to adapt it to get precisely the effect that you want.

This rule uses a simple technique to append text to an HTML document (see the Tracking user activity with Google Analytics article for another example). You could use it to perform other page transforms, such as the common attempt to apply copy-protection by putting a full-size transparent layer over the entire HTML document.
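If the watermark is being used for debugging rather than branding, the $text calculation at the top of the rule is the natural place to surface request details. Here is a small sketch, assuming the rule runs as a response rule so that connection.getNode() reports the back-end node that served the page:

# Show which back-end node served the page, and when
$text = "Served by " . connection.getNode() .
        " at " . sys.localtime.format( "%H:%M:%S" );

This turns the watermark into a lightweight diagnostic overlay: anyone looking at the page can see immediately which server produced it, without digging through logs.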
View full article