Pulse Secure vADC

Content and intellectual property protection is a serious issue for any web site providing paid-for content. The article How to stop 'login abuse', using TrafficScript describes how Stingray can be used to detect when a username and password are reused from different locations; this article showcases the power of Stingray's Java Extensions by applying a dynamically-generated, visible watermark to every image served up by your website.

The article covers how to use the Eclipse IDE or the command line to build the extension, how to apply the extension to your traffic, some optimization tricks to maximise performance, and how to debug and patch the code on-the-fly.

For more information on Java Extensions, you may like to read the article Feature Brief: Java Extensions in Stingray Traffic Manager.

Prerequisites

Before you begin, make sure that you have:

- A working copy of Stingray, with the correct Java Runtime Environment (Sun JRE 1.5+) installed on the server (or just deploy the Stingray virtual appliance);
- A copy of the Java Platform JDK (to compile from the command line), and optionally an IDE such as Eclipse.

Configure Stingray to load-balance traffic to a suitable website; you can use a public website like www.riverbed.com (remember to add a rule to set the Host header if necessary). Check that you can receive the website content through Stingray.

Step 1: Create your Java Extension

If you want to skip this section, just grab the ImageWatermark.class file attached to this article and proceed with that.

Go to the Stingray Admin interface and locate the Catalog > Java Extensions page. On that page, locate the links to the Java Servlet API and Stingray Java Extensions API files, and save these two jar files in a convenient, long-term location.

If you are using the command line: save the .jar files in the current directory, and create a new file named ImageWatermark.java.
If you are using Eclipse: create a new project:

- Project Type: Java Project; Project Name: WaterMark; use the default options;
- Java Settings: select the 'Libraries' tab and add the two external jar files that you stored in the previous step.

Once you've created the project, go to the Package Explorer. Right-click on your project and create a new class named 'ImageWatermark' with the default options.

The Java Code

Paste the following code into the ImageWatermark.java file that is created:

   import java.io.IOException;
   import javax.servlet.ServletException;
   import javax.servlet.http.HttpServlet;
   import javax.servlet.http.HttpServletRequest;
   import javax.servlet.http.HttpServletResponse;

   // Additional imports
   import java.awt.*;
   import java.awt.color.*;
   import java.awt.geom.*;
   import java.awt.image.*;
   import java.io.*;
   import javax.imageio.ImageIO;
   import com.zeus.ZXTMServlet.*;

   public class ImageWatermark extends HttpServlet {
      private static final long serialVersionUID = 1L;

      public void doGet( HttpServletRequest req, HttpServletResponse res )
         throws ServletException, IOException
      {
         try {
            ZXTMHttpServletResponse zres = (ZXTMHttpServletResponse) res;

            String ct = zres.getHeader( "Content-Type" );
            if( ct == null || ! ct.startsWith( "image/" ) ) return;

            InputStream is = zres.getInputStream();
            BufferedImage img = ImageIO.read( is );
            Graphics2D g = (Graphics2D)img.getGraphics();

            int width = img.getWidth();
            int height = img.getHeight();
            if( width < 200 || height < 30 ) return;

            String[] args = (String[])req.getAttribute( "args" );
            String message = ( args != null ) ? args[0] : "Hello world!";

            g.setRenderingHint( RenderingHints.KEY_ANTIALIASING,
                                RenderingHints.VALUE_ANTIALIAS_ON );
            g.setComposite( AlphaComposite.getInstance( AlphaComposite.SRC_OVER, (float)0.5 ));

            Font myFont = new Font( "Sans", Font.PLAIN, 18 );
            Rectangle2D bb = myFont.getStringBounds( message, g.getFontRenderContext() );
            int x = 2;
            int y = (int)bb.getHeight();

            g.setFont( myFont );
            g.setColor( Color.darkGray );
            g.drawString( message, x, y );

            zres.setHeader( "Content-Type", "image/png" );
            ImageIO.write( img, "PNG", zres.getOutputStream() );
         } catch( Exception e ) {
            log( req.getRequestURI() + ": " + e.toString() );
         }
      }

      public void doPost( HttpServletRequest req, HttpServletResponse res )
         throws ServletException, IOException
      {
         doGet( req, res );
      }
   }

From the command line, you can compile this as follows:

   $ javac -cp servlet.jar:zxtm-servlet.jar ImageWatermark.java

This will create an 'ImageWatermark.class' file in the current directory.

Using Eclipse, paste this source in and hit Ctrl-Shift-O to get the correct imports. Then save the file; this will automatically compile it. Check that there were no errors in the compilation. This should create the output file ImageWatermark.class in somewhere like HOMEDIR/workspace/ImageWatermark/bin.

Step 2: Load the extension into Stingray and watermark some images

Go to the Java Extensions catalog page and upload the Java Extension 'class' file for the WaterMark extension. When you upload the class file, the Stingray Admin Server will automatically create a simple RuleBuilder rule that invokes the Java Extension.

Configure your virtual server to run the RuleBuilder rule on each response, then shift-reload the webpage that is delivered through Stingray to clear your cache and reload each image. Note the little "Hello world!" watermark on the top left of any images larger than 200x30 pixels.
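The drawing calls at the heart of the extension can be exercised outside Stingray. The sketch below is a minimal, self-contained version of the same logic (plain Java SE, no ZXTM classes; the class name and image dimensions are made up for illustration): it renders a half-transparent message onto an in-memory image and re-encodes it as PNG.

```java
import java.awt.*;
import java.awt.geom.Rectangle2D;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import javax.imageio.ImageIO;

public class WatermarkDemo {
    // Draw a half-transparent message in the top-left corner of the image,
    // using the same rendering hints and alpha composite as the extension.
    static BufferedImage watermark(BufferedImage img, String message) {
        Graphics2D g = (Graphics2D) img.getGraphics();
        g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                           RenderingHints.VALUE_ANTIALIAS_ON);
        g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
        Font font = new Font("Sans", Font.PLAIN, 18);
        Rectangle2D bb = font.getStringBounds(message, g.getFontRenderContext());
        g.setFont(font);
        g.setColor(Color.darkGray);
        g.drawString(message, 2, (int) bb.getHeight());
        g.dispose();
        return img;
    }

    public static void main(String[] args) throws Exception {
        BufferedImage img = new BufferedImage(320, 240, BufferedImage.TYPE_INT_RGB);
        watermark(img, "Hello world!");
        // Re-encode as PNG, as the extension does before rewriting the response
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(img, "PNG", out);
        System.out.println("PNG bytes: " + out.size());
    }
}
```

This is useful for checking font metrics and alpha settings locally before uploading a new class file to the catalog.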
Step 3: Optimize the way that the extension is called

The Java Extension is called by the RuleBuilder rule on every HTTP response. However, the extension only processes images; HTML, CSS and other document types are ignored.

Selectively running Java Extensions

Invoking a Java Extension carries some overhead, so it is prudent to ensure that extensions are only invoked when they are needed. With a small change to the rule, you can ensure that this is the case.

First, convert the "ImageWatermark" RuleBuilder rule to TrafficScript by editing the rule and using the "Convert Rule" button. This will create a simple TrafficScript rule which calls the WaterMark extension:

   java.run( "ImageWatermark" );

Edit the rule to add a condition that only runs the WaterMark extension when the object type is an image:

   $contenttype = http.getResponseHeader( "Content-Type" );
   if( string.startsWith( $contenttype, "image/" ) ) {
      java.run( "ImageWatermark" );
   }

Passing parameters to a Java Extension

You can pass parameters from TrafficScript to a Java Extension. They are passed in as additional arguments to the java.run() TrafficScript function:

   $ip = request.getRemoteIP();
   $time = sys.timeToString( sys.time() );
   $message = "IP: " . $ip . ", " . $time;

   $contenttype = http.getResponseHeader( "Content-Type" );
   if( string.startsWith( $contenttype, "image/" ) ) {
      java.run( "ImageWatermark", $message );
   }

The Java Extension reads these arguments using the 'args' attribute, which returns a string array of the argument values:

   String[] args = (String[])req.getAttribute( "args" );
   String message = ( args != null ) ? args[0] : "Hello world!";

Use the Stingray Admin Interface to load in the new copy of the Java Extension.

Now, when you shift-reload the web page (to clear the cache), the watermark text on the image will contain the message created in the TrafficScript rule, with the IP address of the remote user and the time when the image was downloaded.
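When reading arguments passed in from TrafficScript, it is worth guarding against both a missing 'args' attribute and an empty array before indexing into it. A small helper (hypothetical, plain Java; not part of the extension as published) makes the fallback explicit:

```java
public class ArgHelper {
    // Return the first argument if present, else a default message.
    // Mirrors: String[] args = (String[]) req.getAttribute("args");
    //          String message = (args != null) ? args[0] : "Hello world!";
    // but also tolerates a zero-length array or a null element.
    static String firstArgOrDefault(String[] args, String fallback) {
        return (args != null && args.length > 0 && args[0] != null)
            ? args[0] : fallback;
    }

    public static void main(String[] a) {
        System.out.println(firstArgOrDefault(new String[] { "IP: 10.0.0.1" }, "Hello world!"));
        System.out.println(firstArgOrDefault(null, "Hello world!"));
    }
}
```

With this guard, a rule that calls java.run( "ImageWatermark" ) with no extra arguments still produces a sensible watermark instead of an ArrayIndexOutOfBoundsException logged per request.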
Of course, you could also generate this message directly in the Java Extension, but quick code changes (such as modifying the text of the message) are easier when the code resides in TrafficScript rather than in a compiled Java class.

Step 4: Live debugging and hot code patching

Finally, refer to the "Remote Debugging" section of the Java Development Guide (Product Documentation). This describes how to configure the arguments used to start the Java Virtual Machine (JVM) so that it can accept live debugging sessions from a remote debugger, and how to configure Eclipse to connect to the JVM.

You can edit and save the code in Eclipse. When you save the code, Eclipse compiles it and patches the code in the JVM on the fly. Try changing the point size of the font or the color and see the effects immediately:

   Font myFont = new Font( "Serif", Font.BOLD, 12 );

and...

   g.setColor( Color.red );

Live patching in Eclipse is a great way to debug, test and update code, but remember that it only patches the code in the live JVM. When Stingray restarts, it will fall back to the version of the Java class that you originally uploaded, so once you're finished, remember to upload the compiled class through the Admin interface so that it persists.

Read more:

- Feature Brief: Java Extensions in Traffic Manager
- Writing Java Extensions - an introduction
- Watermarking PDF documents with Java
Content protection is a key concern for many online services, and watermarking downloaded documents with a unique ID is one way to discourage and track unauthorized sharing. This article describes how to use Stingray to uniquely watermark every PDF document served from a web site.

In this example, Stingray will run a Java Extension to process all outgoing PDF documents from the web sites it is managing. The Java Extension can watermark each download with a custom message, including details such as the IP address, time of day and authentication credentials (if available) of the client. The extension then encrypts the PDF document to make it difficult to remove the watermark.

Quick Start

Upload the attached PdfWatermark.jar file to your Java Extensions Catalog in Stingray Traffic Manager.

Create the following 'PDFWatermark' rule and apply it as a response rule to your virtual server:

   if( http.getresponseheader( "Content-Type" ) != "application/pdf" ) break;

   java.run( "PdfWatermark",
      "x", 10,
      "y", 20,
      "textAlpha", 30,
      "textSize", 40,
      "textColor", "0xff7f00",
      "drawText", "Downloaded by " . request.getRemoteIP(),
      "textSize", 26,
      "drawText", sys.gmtime.format( "%a, %d %b %Y %T GMT" ),
      "textSize", 14,
      "drawText", http.getHostHeader() . http.getPath(),
      "x", 40,
      "y", 25,
      "textAlpha", 70,
      "textColor", "0xcccccc",
      "textSize", 16,
      "textAngle", 0,
      "drawText", "Copyright " . sys.time.year(),
      "drawText", "For restricted distribution"
   );

Download a PDF document from your website, managed by the virtual server configured above. Verify that the PDF document has been watermarked with the URL, client IP address, and time of download.
Troubleshooting

The Java Extension applies the watermark to PDF documents, and then encrypts them to make the watermark difficult to remove.

The Java Extension will not be able to apply a watermark to PDF documents that are already encrypted, or which are served with a MIME type that does not begin 'application/pdf'.

Customizing the extension

The behaviour of the extension is controlled by the parameters passed into the Java Extension by the java.run() function.

The following example applies a simple watermark:

   if( http.getresponseheader( "Content-Type" ) != "application/pdf" ) break;

   $msg1 = http.getHostHeader() . http.getPath();
   $msg2 = "Downloaded by " . request.getRemoteIP();
   $msg3 = sys.gmtime.format( "%a, %d %b %Y %T GMT" );

   java.run( "PdfWatermark",
      "drawText", $msg1,
      "drawText", $msg2,
      "drawText", $msg3
   );

Advanced use of the Java Extension

This Java Extension takes a list of commands to control how and where it applies the watermark text:

- x (default 30): as a percentage between 0 and 100, places the cursor horizontally on the page.
- y (default 30): as a percentage between 0 and 100, places the cursor vertically on the page.
- textAngle (default "auto"): in degrees, sets the angle of the text; 0 is horizontal (left to right), 90 is vertical (upwards). The special value "auto" sets the text angle from bottom-left to top-right in accordance with the aspect ratio of the page.
- textAlign (default "L"): value is "L" (left), "R" (right) or "C" (center); controls the alignment of the text relative to the cursor placement.
- textAlpha (default 75): as a percentage, sets the alpha of the text when drawn with drawText; 0 is completely transparent, 100 is solid (opaque).
- textColor (default "0xAAAAAA"): the color of the text when it is drawn with drawText, as a hex value in a string.
- textSize (default 20): in points, sets the size of the text when it is drawn with drawText.
- drawText: draws the value (a string) using the current cursor placement and text attributes, then automatically moves the cursor down one line so that multiple lines of text can be rendered with successive calls to drawText.

Dependencies and Licenses

For convenience, the .jar extension contains the iText 5.4.0 library from iText software corp (http://www.itextpdf.com) and the bcprov-148 and bcmail-148 libraries from The Legion of the Bouncy Castle (http://www.bouncycastle.org), in addition to the PdfWatermark.class file. The jar file was packaged using JarSplice (http://ninjacave.com/jarsplice).

Building the extension from source

If you'd like to build the Java Extension from source, here's the code:

   import java.awt.Color;
   import java.io.IOException;
   import java.io.InputStream;
   import java.io.OutputStream;
   import java.util.ArrayList;
   import java.util.Enumeration;
   import java.util.Hashtable;

   import javax.servlet.ServletConfig;
   import javax.servlet.ServletException;
   import javax.servlet.http.HttpServlet;
   import javax.servlet.http.HttpServletRequest;
   import javax.servlet.http.HttpServletResponse;

   import com.itextpdf.text.BaseColor;
   import com.itextpdf.text.pdf.BaseFont;
   import com.itextpdf.text.pdf.PdfContentByte;
   import com.itextpdf.text.pdf.PdfGState;
   import com.itextpdf.text.pdf.PdfReader;
   import com.itextpdf.text.pdf.PdfStamper;
   import com.itextpdf.text.pdf.PdfWriter;
   import com.zeus.ZXTMServlet.ZXTMHttpServletResponse;

   public class PdfWatermark extends HttpServlet {

      private static final long serialVersionUID = 1L;

      Hashtable<String, String> defaults = new Hashtable<String, String>();

      public void init(ServletConfig config) throws ServletException {
         super.init(config);

         // Initialize defaults.  These are 'commands' that are run before any
         // commands passed in to the extension through the args list
         defaults.put("x", "30");
         defaults.put("y", "30");
         defaults.put("textAngle", "auto");
         defaults.put("textAlign", "L");
         defaults.put("textAlpha", "75");
         defaults.put("textSize", "20");
         defaults.put("textColor", "0xAAAAAA");

         // Read any values defined in the ZXTM configuration for this class
         // to override the defaults
         Enumeration<String> e = defaults.keys();
         while (e.hasMoreElements()) {
            String k = e.nextElement();
            String v = config.getInitParameter(k);
            if (v != null)
               defaults.put(k, v);
         }
      }

      public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
         try {
            ZXTMHttpServletResponse zres = (ZXTMHttpServletResponse) res;

            String ct = zres.getHeader("Content-Type");
            if (ct == null || !ct.startsWith("application/pdf"))
               return;

            // process args
            String[] args = (String[]) req.getAttribute("args");
            if (args == null)
               throw new Exception("Missing argument list");
            if (args.length % 2 != 0)
               throw new Exception(
                     "Malformed argument list (expected even number of args)");

            ArrayList<String[]> actions = new ArrayList<String[]>();

            Enumeration<String> e = defaults.keys();
            while (e.hasMoreElements()) {
               String k = e.nextElement();
               actions.add(new String[] { k, defaults.get(k) });
            }
            for (int i = 0; i < args.length; i += 2) {
               actions.add(new String[] { args[i], args[i + 1] });
            }

            InputStream is = zres.getInputStream();
            OutputStream os = zres.getOutputStream();

            PdfReader reader = new PdfReader(is);
            int n = reader.getNumberOfPages();

            PdfStamper stamp = new PdfStamper(reader, os);
            stamp.setEncryption(
                  PdfWriter.STANDARD_ENCRYPTION_128 | PdfWriter.DO_NOT_ENCRYPT_METADATA,
                  null, null,
                  PdfWriter.ALLOW_PRINTING | PdfWriter.ALLOW_COPY
                     | PdfWriter.ALLOW_FILL_IN | PdfWriter.ALLOW_SCREENREADERS
                     | PdfWriter.ALLOW_DEGRADED_PRINTING);

            for (int i = 1; i <= n; i++) {
               PdfContentByte pageContent = stamp.getOverContent(i);
               com.itextpdf.text.Rectangle pageSize = reader
                     .getPageSizeWithRotation(i);

               watermarkPage(pageContent, actions, pageSize.getWidth(),
                     pageSize.getHeight());
            }

            stamp.close();

         } catch (Exception e) {
            log(req.getRequestURI() + ": " + e.toString());
            e.printStackTrace();
         }
      }

      public void doPost(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
         doGet(req, res);
      }

      private void watermarkPage(PdfContentByte pageContent,
            ArrayList<String[]> actions, float width, float height)
            throws Exception {
         float x = 0;
         float y = 0;
         double textAngle = 0;
         int textAlign = PdfContentByte.ALIGN_CENTER;
         int fontSize = 14;

         pageContent.beginText();

         for (int i = 0; i < actions.size(); i++) {
            String action = actions.get(i)[0];
            String value = actions.get(i)[1];

            if (action.equals("x")) {
               x = Float.parseFloat(value) / 100 * width;
               continue;
            }

            if (action.equals("y")) {
               y = Float.parseFloat(value) / 100 * height;
               continue;
            }

            if (action.equals("textColor")) {
               Color c = Color.decode(value);
               pageContent.setColorFill(
                  new BaseColor(c.getRed(), c.getGreen(), c.getBlue()));
               continue;
            }

            if (action.equals("textAlpha")) {
               PdfGState gs1 = new PdfGState();
               gs1.setFillOpacity(Float.parseFloat(value) / 100f);
               pageContent.setGState(gs1);
               continue;
            }

            if (action.equals("textAngle")) {
               if (value.equals("auto")) {
                  textAngle = (float) Math.atan2(height, width);
               } else {
                  textAngle = Math.toRadians(Double.parseDouble(value));
               }
               continue;
            }

            if (action.equals("textAlign")) {
               if (value.equals("L"))
                  textAlign = PdfContentByte.ALIGN_LEFT;
               else if (value.equals("R"))
                  textAlign = PdfContentByte.ALIGN_RIGHT;
               else
                  textAlign = PdfContentByte.ALIGN_CENTER;
               continue;
            }

            if (action.equals("textSize")) {
               fontSize = Integer.parseInt(value);
               pageContent.setFontAndSize(BaseFont
                     .createFont(BaseFont.HELVETICA, BaseFont.WINANSI,
                           BaseFont.EMBEDDED), fontSize);
               continue;
            }

            // x,y is top left/center/right of text, so that when we move the
            // cursor at the end of a line, we can cater for subsequent fontSize
            // changes
            if (action.equals("drawText")) {
               pageContent.showTextAligned(textAlign, value,
                     (float) (x + fontSize * Math.sin(textAngle)),
                     (float) (y - fontSize * Math.cos(textAngle)),
                     (float) Math.toDegrees(textAngle));

               x += fontSize * 1.2 * Math.sin(textAngle);
               y -= fontSize * 1.2 * Math.cos(textAngle);
               continue;
            }

            throw new Exception("Unknown command '" + action + "'");
         }

         pageContent.endText();
      }
   }

Compile against the Stingray servlet libraries (see Writing Java Extensions - an introduction), and the most recent versions of the iText library (http://www.itextpdf.com) and the bcprov and bcmail libraries (http://www.bouncycastle.org):

   $ javac -cp servlet.jar:zxtm-servlet.jar:bcprov-jdk15on-148.jar:\
   bcmail-jdk15on-148.jar:itextpdf-5.4.0.jar PdfWatermark.java

You can then upload the generated PdfWatermark.class file and the three iText/bcmail/bcprov jar files to the Stingray Java Catalog.

Creating a Fat Jar

Alternatively, you can package the class files and their jar dependencies as a single Fat Jar (http://ninjacave.com/jarsplice):

1. Package the PdfWatermark.class file as a jar file:

   $ jar cvf PdfWatermark.jar PdfWatermark.class

2. Run JarSplice:

   $ java -jar ~/Downloads/jarsplice-0.40.jar

3. Add PdfWatermark.jar and the three dependency jar files as input jars.
4. Set the main class.
5. Hit 'CREATE FAT JAR' to generate your single fat jar.

You can upload the resulting jar file to the Stingray Java Catalog, and Stingray will identify the PdfWatermark.class within it.
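The defaults-plus-arguments scheme the extension uses can be sketched in isolation. The helper below (hypothetical, standard Java only; not part of the extension) merges a defaults map with the flat key/value argument array into an ordered action list, exactly the shape the extension iterates over; because commands are replayed in order, a later "textSize" simply overrides an earlier one.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ActionList {
    // Merge default commands with a flat (key, value, key, value, ...) argument
    // array, preserving order: defaults first, then caller-supplied commands.
    static List<String[]> build(Map<String, String> defaults, String[] args) {
        if (args.length % 2 != 0)
            throw new IllegalArgumentException("expected even number of args");
        List<String[]> actions = new ArrayList<>();
        for (Map.Entry<String, String> e : defaults.entrySet())
            actions.add(new String[] { e.getKey(), e.getValue() });
        for (int i = 0; i < args.length; i += 2)
            actions.add(new String[] { args[i], args[i + 1] });
        return actions;
    }

    public static void main(String[] a) {
        Map<String, String> defaults = new LinkedHashMap<>();
        defaults.put("textSize", "20");
        defaults.put("textColor", "0xAAAAAA");
        for (String[] act : build(defaults,
                new String[] { "textSize", "26", "drawText", "Hello" }))
            System.out.println(act[0] + "=" + act[1]);
    }
}
```

This also shows why an odd-length argument list to java.run() is rejected outright: every command must travel with a value.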
Having described Watermarking PDF documents with Stingray and Java and Watermarking Images with Java Extensions, this article describes the much simpler task of adding a watermark to an HTML document.

Stingray's TrafficScript language is fully capable of manipulating text content, so there's no need to resort to a more complex Java Extension to modify a web page:

   # Only process text/html responses
   $ct = http.getResponseHeader( "Content-Type" );
   if( ! string.startsWith( $ct, "text/html" ) ) break;

   # Calculate the watermark text
   $text = "Hello, world!";

   # A new style, named watermark, that defines how the
   # watermark text should be displayed
   $style = '
   <style type="text/css">
   .watermark {
      color: #d0d0d0;
      font-size: 100pt;
      -webkit-transform: rotate(-45deg);
      -moz-transform: rotate(-45deg);
      -o-transform: rotate(-45deg);
      transform: rotate(-45deg);
      position: absolute;
      width: 100%;
      height: 100%;
      margin: 0;
      z-index: 100;
      top: 200px;
      left: 25%;
      opacity: .5;
   }
   </style>';

   # A div that contains the watermark text
   $div = '<div class="watermark">' . $text . '</div>';

   # Imprint it in the body of the document
   $body = http.getResponseBody();
   if( string.regexmatch( $body, "^(.*)</body>(.*?)$", "i" ) ) {
      http.setResponseBody( $1 . $style . $div . "</body>" . $2 );
   }

This rule overlays the rotated, translucent watermark text across the page. Of course, you can easily change the watermark text:

   $text = "Hello, world!";

... perhaps to add more debugging or instrumentation to the page.

The CSS style for the watermark is based on this article, and other conversations on stackoverflow; you'll probably need to adapt it to get precisely the effect that you want.
This rule uses a simple technique to append text to an HTML document (see the Tracking user activity with Google Analytics article for another example). You could use it to perform other page transforms, such as the common attempt to apply copy-protection by putting a full-size transparent layer over the entire HTML document.
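The insert-before-</body> transform in the rule above is easy to express in other languages as well. Here is a hypothetical Java equivalent of the regex logic, useful for testing the pattern outside Stingray (class and method names are made up for illustration):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HtmlWatermark {
    // Insert a style block and watermark div just before the closing </body>
    // tag, case-insensitively, mirroring "^(.*)</body>(.*?)$" with flag "i".
    static String imprint(String body, String style, String div) {
        Pattern p = Pattern.compile("^(.*)</body>(.*?)$",
                Pattern.CASE_INSENSITIVE | Pattern.DOTALL);
        Matcher m = p.matcher(body);
        if (!m.matches()) return body;   // no </body>: leave the page untouched
        return m.group(1) + style + div + "</body>" + m.group(2);
    }

    public static void main(String[] args) {
        String html = "<html><body><p>content</p></body></html>";
        String div = "<div class=\"watermark\">Hello, world!</div>";
        System.out.println(imprint(html, "", div));
        // -> <html><body><p>content</p><div class="watermark">Hello, world!</div></body></html>
    }
}
```

Note that the greedy leading group means the insertion happens before the last </body> in the document, which is the behaviour you want if a page (incorrectly) contains more than one.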
Stingray allows you to inspect and manipulate both incoming and outgoing traffic with a customized version of Java's Servlet API. In this article we'll delve more deeply into some of the semantics of Stingray's Java Extensions and show how to validate XML files in uploads and downloads using TrafficScript and Java Extensions.

The example that will allow us to illustrate the use of XML processing by Stingray is a website that allows users to share music play-lists. We'll first look at the XML capabilities of 'conventional' TrafficScript, and then investigate the use of Java Extensions.

A play-list sharing web site

You have spent a lot of time developing a fancy website where users can upload their personal play-lists, making them available to others who can then search for music they like and download it. Of course you went for XML as the data format, not least because it allows you to make sure uploads are valid. You can help your users' applications by providing them with a way of validating their XML files as they are uploaded. Also, to be on the safe side, whenever an application downloads an XML play-list it should be checked, and only reach the user if it passes the validation.

XML provides the concept of schema files to describe what a valid document has to look like. One popular schema language is the W3C's XML Schema Definition (XSD); see http://www.w3.org/TR/xmlschema-0/. Given an XSD file, you can hand an XML document to a validator to find out whether it actually conforms to the data structure specified in the schema.

Coming back to our example of a play-list sharing website, you have downloaded the popular xspf (XML Shareable Playlist Format, 'spiff') schema description from http://xspf.org/validation/. One of the tags allowed inside a track in XML files of this type is image.
By specifying tags like <image>http://images.amazon.com/images/P/B000002J0B.01.MZZZZZZZ.jpg</image>, a user can have the corresponding cover pictures displayed with the play-list.

Validating XML with TrafficScript

How do you validate an XML file from a user against that schema? Stingray's TrafficScript provides the xml.validate.xsd() function. Here's a simple rule to check the response of a web server against an XSD:

   $doc = http.getResponseBody();
   $schema = resource.get( "xspf.xsd" );
   $result = xml.validate.xsd( $doc, $schema );
   if( 1 == $result ) {
      log.info( "Validation succeeded" );
   } else if( 0 == $result ) {
      log.info( "Validation failed" );
   } else {
      log.info( "Validation error" );
   }

Let's have a closer look at what this rule does:

First, it reads in the whole response by calling http.getResponseBody(). This function is very practical, but you have to be extremely careful with it: you do not know beforehand how big the response actually is. It might be an audio stream totalling many hundred megabytes, and you don't want Stingray to buffer all that data. Therefore, when using http.getResponseBody() you should always check the MIME type and the content length of the response (see below for code that does this). The rule then goes on to load the schema definition file with resource.get(); the file must be located in ZEUSHOME/zxtm/conf/extra/ for this step to work. Finally, it does the actual validation and checks the result. In this simple example we only log the result; on your music-sharing web site you would take the appropriate action.

The last rule was a response rule that worked on the result from the back-end web server. These files are actually under your control (at least theoretically), so validation is not that urgent. Things are different if you allow uploads to your web site: any user-provided data must be validated before you let it through to your back-ends.
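The same defensive length check that the request rule below performs in TrafficScript can be sketched in Java (a hypothetical helper, not part of any extension in this article): reject a missing, empty or non-numeric Content-Length, and cap the acceptable body size before buffering anything.

```java
public class BodyGuard {
    static final long MAX_BODY = 1024 * 1024;  // 1 MB cap, as in the rule below

    // Parse a Content-Length header defensively: a missing, empty or
    // non-numeric value is rejected, as is anything outside (0, MAX_BODY].
    static boolean isAcceptableLength(String contentLength) {
        if (contentLength == null) return false;
        long clen;
        try {
            clen = Long.parseLong(contentLength.trim());
        } catch (NumberFormatException e) {
            return false;   // empty string or garbage: refuse to buffer
        }
        return clen > 0 && clen <= MAX_BODY;
    }

    public static void main(String[] args) {
        System.out.println(isAcceptableLength("512"));      // acceptable
        System.out.println(isAcceptableLength(""));         // rejected: empty header
        System.out.println(isAcceptableLength("10485760")); // rejected: too large
    }
}
```

The explicit NumberFormatException branch plays the role of TrafficScript's silent empty-string-to-0 conversion: in both languages, a post without a usable Content-Length must fail the check rather than be buffered.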
The following request rule does the XML validation for you:

   $m = http.getMethod();
   if( 0 == string.icmp( $m, "POST" ) ) {
      $clen = http.getHeader( "Content-Length" );
      if( $clen > 0 && $clen <= 1024*1024 ) {
         $schema = resource.get( "xspf.xsd" );
         $doc = http.getBody();
         $result = xml.validate.xsd( $doc, $schema );
         # handle result
      } else {
         # handle over-sized posts
      }
   }

Note how we first look at the HTTP method, then retrieve the length of the post's body and check it. The check in the line

   if( $clen > 0 && $clen <= 1024*1024 ) {

deserves a bit more comment: the variable $clen was initialized from the post's Content-Length header, so it could be the empty string at that stage. When TrafficScript converts data to integers, variables that do not actually represent numbers are converted to 0 (see the TrafficScript reference for more details). Therefore, we have to check that $clen is greater than zero and at most 1 megabyte (or whatever limit you choose to impose on the size of uploads). After checking the content length we can safely invoke http.getBody().

A malicious user might fake the HTTP header to specify a length larger than the actual post. This would lead Stingray to try to read more data than the client sends, pausing on a file descriptor until the connection times out. Due to Stingray's non-blocking IO multiplexing, however, other requests would still be processed normally.

Validating XML with Stingray's Java Extensions

After having explored TrafficScript's built-in XML support, let's now see how XML validation can be done using Java Extensions.

If you are at all familiar with Java Servlets, Stingray's Java Extensions should feel like home to you. The main differences are:

- You have a lot of Stingray's built-in functionality ready at hand via attributes.
- You can manipulate both the response (as in conventional Servlets) and the request (unique to Stingray's Servlet Extensions, as Stingray sits between the client and the server).

There's lots more detail in Feature Brief: Java Extensions in Traffic Manager.

The interesting thing here is that this flow actually applies twice: first when the request is sent to the server (you can invoke the Java Extension from a request rule), and then again when the response is sent back to the client (allowing you to change the result from a response rule). This is very practical for your music-sharing web site, as you only have to write one Servlet. However, you have to be able to tell whether you are working on the request or the response. The ZXTMHttpServletResponse object, which is passed to both the doGet() and doPost() methods of the HttpServlet object, has a method to find out which direction of the traffic flow you are currently in: boolean isResponseRule(). This distinction is never needed in conventional Servlet programming, as in that scenario it is the Servlet's task to create the response, not to modify an existing one.

These considerations make it easy to design the Stingray Servlet for your web site:

- There will be an init() method to read in the schema definition and to set up the javax.xml.validation.Validator object.
- We'll have a single private validate() method to do the actual work.
The doGet() method will invoke validate() on the server's response, whereas the doPost() method does the same on the body of the request.

After all that theory it's high time for some real code (note that the import directives have been removed for the sake of readability as they don't add anything to our discussion - see Writing Java Extensions - an introduction):

public class XmlValidate extends HttpServlet {

   private static final long serialVersionUID = 1L;
   private static Validator validator = null;

   public void init( ServletConfig config ) throws ServletException {
      super.init( config );
      String schema_file = config.getInitParameter( "schema_file" );

      if ( schema_file == null )
         throw new ServletException( "No schema file specified" );

      SchemaFactory factory = SchemaFactory.newInstance(
         XMLConstants.W3C_XML_SCHEMA_NS_URI );

      Source schemaFile = new StreamSource( new File( schema_file ) );
      try {
         Schema schema = factory.newSchema( schemaFile );
         validator = schema.newValidator();
      } catch( SAXException saxe ) {
         throw new ServletException( saxe.getMessage() );
      }
   }

   // ... other methods below
}

The validate() function is actually very simple as all the hard work is done inside the Java library. 
The only thing to be careful about is to make sure that we don't allow concurrent access to the Validator object from multiple threads:

   private boolean validate( InputStream in, HttpServletResponse res, String errmsg )
      throws IOException
   {
      Source src = new StreamSource( in );
      try {
         synchronized( validator ) {
            validator.validate( src );
         }
      } catch( SAXException saxe ) {
         String msg = saxe.getMessage();
         res.setContentType( "text/plain" );
         PrintWriter out = res.getWriter();
         out.println( errmsg );
         out.print( "Validation of the xml file has failed with error message: " );
         out.println( msg );
         return false;
      }
      return true;
   }

Note that the only thing we have to do in case of a failure is to write to the stream that makes up the response. No matter whether this is being done in a request or a response rule, Stingray will take that as an indication that this is what should be sent back to the client. In the case of a request rule, Stingray won't even bother to hand the request on to a back-end server and will instead send the result of the Java Servlet; in a response rule, the server's answer will be replaced by what the Servlet has produced.

Now we are ready for the doGet() method:

   public void doGet( HttpServletRequest req, HttpServletResponse res )
      throws ServletException, IOException
   {
      try {
         ZXTMHttpServletResponse zres = (ZXTMHttpServletResponse) res;
         if ( !zres.isResponseRule() ) {
            log( "doGet called in request rule ... bailing out" );
            return;
         }
         InputStream in = zres.getInputStream();
         validate( in, zres, "The file you requested was rejected." );
      } catch( Exception e ) {
         throw new ServletException( e.getMessage() );
      }
   }

There's not really much work left apart from calling our validate() method with the error message to append in case of failure. As discussed previously, we make sure that we are actually working in the context of a response rule because otherwise the response would be empty. Exactly the opposite has to be done when processing a post:

   public void doPost( HttpServletRequest req, HttpServletResponse res )
      throws ServletException, IOException
   {
      try {
         ZXTMHttpServletRequest zreq = (ZXTMHttpServletRequest) req;
         ZXTMHttpServletResponse zres = (ZXTMHttpServletResponse) res;

         if ( zres.isResponseRule() ) {
            log( "doPost called in response rule ... bailing out" );
            return;
         }

         InputStream in = zreq.getInputStream();
         if ( validate( in, zres, "Your upload was unsuccessful" ) ) {
            // just let the post through to the backends
         }
      } catch( Exception e ) {
         throw new ServletException( e.getMessage() );
      }
   }

The only things missing are the rules to invoke the Servlet, so here they are (assuming that the Servlet has been loaded via the 'Java' tab of the 'Catalogs' section in Stingray's UI as a file called XmlValidate.class). First the request rule:

$m = http.getMethod();
if( 0 == string.icmp( $m, "POST" ) ) {
   java.run( "XmlValidate" );
}

and the response rule is almost the same:

$m = http.getMethod();
if( 0 == string.icmp( $m, "GET" ) ) {
   java.run( "XmlValidate" );
}

It's your choice: TrafficScript or Java Extensions

Which is better? So now you are left with a difficult decision: you have two implementations of the same functionality, which one do you choose? 
Bearing in mind that the unassuming java.run() leads to a considerable amount of inter-process communication between the Stingray child process and the Java Servlet runner, whereas the xml.validate.xsd() call is handled in C++ inside the same process, it is a rather obvious choice. But there are still situations in which you might prefer the Java solution.

One example would be that you have to do XML processing not supported directly by Stingray; Java is more flexible and complete in the XML support it provides. But there is another advantage to using Java: you can replace the actual implementation of the XML functionality. You might want to use Intel's XML Software Suite for Java, for example. But how do you tell Stingray's Java runner to use another XML library? Only two settings have to be adapted:

java!classpath /opt/intel/xmlsoftwaresuite/java/1.0/lib/intel-xss.jar
java!command java -Djava.library.path=/opt/intel/xmlsoftwaresuite/java/1.0/bin/intel64 -server

This applies if you have installed Intel's XML Software Suite for Java in /opt/intel/xmlsoftwaresuite/java/1.0/ and are using the 64-bit version of the shared library. Both changes can be made in the 'Global Settings' tab of the 'System' section in Stingray's UI.
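As an aside, the Content-Length sanity check from the TrafficScript rule at the start of this article can be modelled in Python (purely illustrative; the function name is invented for the sketch). It shows why a missing or non-numeric header must be treated as zero, mirroring TrafficScript's integer conversion:

```python
def body_size_ok(headers, max_len=1024 * 1024):
    """Mirror the TrafficScript guard: a missing or non-numeric
    Content-Length converts to 0 and is therefore rejected."""
    raw = headers.get("Content-Length", "")
    try:
        clen = int(raw)
    except ValueError:
        clen = 0
    # Only read the body for a plausible, bounded length
    return 0 < clen <= max_len
```

Only when this guard passes is it safe to read the full request body.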
TrafficScript rules often need to refer to tables of data - redirect mappings, user lists, IP black lists and the like.

For small tables that are not updated frequently, you can place these inline in the TrafficScript rule:

$redirect = [ "/widgets" => "/sales/widgets",
              "/login" => "/cgi-bin/login.cgi" ];
$path = http.getPath();
if( $redirect[ $path ] ) http.redirect( $redirect[ $path ] );

This approach becomes difficult to manage if the table becomes large, or if you want to update it without having to edit the TrafficScript rule. In this case, you can store the table externally (in a resource file) and reference it from the rule.

The following examples will consider a file that follows a standard space-separated 'key value' pattern, and we'll look at alternative TrafficScript approaches to efficiently handle the data and look up key-value pairs:

# cat /opt/zeus/zxtm/conf/extra/redirects.txt
/widgets /sales/widgets
/login /cgi-bin/login.cgi
/support http://support.site.com

We'll propose several 'ResourceTable' TrafficScript library implementations that express a lookup() function that can be used in the following fashion:

# ResourceTable provides a lookup( filename, key ) function
import ResourceTable as table;
$path = http.getPath();
$redirect = table.lookup( "redirects.txt", $path );

We'll then look at the performance of each to see which is the best.

For a summary of the solutions in this article, jump straight to libTable.rts: Interrogating tables of data in TrafficScript.

Implementation 1: Search the file on each lookup

ResourceTable1:

sub lookup( $filename, $key ) {
   $contents = resource.get( $filename );
   if( string.regexmatch( $contents, '\n'.$key.'\s+([^\n]+)' ) ) return $1;
   if( string.regexmatch( $contents, '^'.$key.'\s+([^\n]+)' ) ) return $1;
   return "";
}

This simple implementation searches the file on each and every lookup, using a regular expression to locate the key and also the text on the remainder of the line. 
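The logic of implementation 1 can be sketched in Python for clarity (illustrative only; unlike the TrafficScript version, this sketch escapes the key so regex metacharacters in it cannot misfire, and it uses `[ \t]+` so the separator cannot run across a newline):

```python
import re

def lookup(contents, key):
    # Pin the key to the start of a line, then capture the rest of the line
    m = re.search(r'(?m)^' + re.escape(key) + r'[ \t]+([^\n]+)', contents)
    return m.group(1) if m else ""
```

For example, with the redirects.txt contents shown above, `lookup(contents, "/login")` returns "/cgi-bin/login.cgi".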
It pins the key to the start of a line so that it does not mistakenly match entries where $key is a suffix of a longer key.

The implementation is simple and effective, but we would reasonably expect it to become less and less efficient the larger the resource file becomes.

Implementation 2: Store the table in a TrafficScript hash table for easy lookup

The following code builds a TrafficScript hash table from the contents of the resource file:

$contents = resource.get( $filename );
$h = [ ];
foreach( $l in string.split( $contents, "\n" ) ) {
   if( ! string.regexmatch( $l, '(.*?)\s+(.*)' ) ) continue;
   $key = string.trim( $1 );
   $value = string.trim( $2 );
   $h[$key] = $value;
}

You can then quickly look up values in the hash table using $h[ $key ].

However, we don't want to have to create the hash table every time we call the lookup function; we would like to create it once and then cache it somewhere. We can use the global data table to store persistent data, and we can verify that the data is still current by checking that the MD5 of the resource file has not changed:

ResourceTable2a:

sub update( $filename ) {
   # Store the md5 of the resource file we have cached.
   # No need to update if the file has not changed
   $md5 = resource.getMD5( $filename );
   if( $md5 == data.get( "resourcetable:".$filename.":md5" ) ) return;

   # Do the update
   $contents = resource.get( $filename );
   $h = [ ];
   foreach( $l in string.split( $contents, "\n" ) ) {
      if( ! string.regexmatch( $l, "(.*?)\\s+(.*)" ) ) continue;
      $key = string.trim( $1 );
      $value = string.trim( $2 );
      $h[$key] = $value;
   }

   data.set( "resourcetable:".$filename.':data', $h );
   data.set( "resourcetable:".$filename.':md5', $md5 );
}

sub lookup( $filename, $key ) {
   # Check to see if the file has been updated, and update our table if necessary
   update( $filename );
   $h = data.get( "resourcetable:".$filename.':data' );
   return $h[$key];
}

Version 2a: we store the MD5 of the file in the global key 'resourcetable:filename:md5', and the hash table in the global key 'resourcetable:filename:data'.

This implementation has one significant fault. If two TrafficScript rules are running concurrently, they may both try to update the keys in the global data table, and a race condition may result in inconsistent data. This situation is not possible on a single-core system with one zeus.zxtm process, because rules are run serially and only pre-empted if they invoke a blocking operation, but it's entirely possible on a multi-core system, and TrafficScript does not implement mutexes or locks to help protect against this.

The simplest solution is to give each core its own private copy of the data. Because system memory should be scaled with the number of cores, the additional overhead of these copies is generally acceptable:

ResourceTable2b:

sub update( $filename ) {
   $pid = sys.getPid();
   $md5 = resource.getMD5( $filename );
   if( $md5 == data.get( "resourcetable:".$pid.$filename.":md5" ) ) return;

   $contents = resource.get( $filename );
   $h = [ ];
   foreach( $l in string.split( $contents, "\n" ) ) {
      if( ! string.regexmatch( $l, "(.*?)\\s+(.*)" ) ) continue;
      $key = string.trim( $1 );
      $value = string.trim( $2 );
      $h[$key] = $value;
   }

   data.set( "resourcetable:".$pid.$filename.':data', $h );
   data.set( "resourcetable:".$pid.$filename.':md5', $md5 );
}

sub lookup( $filename, $key ) {
   update( $filename );
   $pid = sys.getPid();
   $h = data.get( "resourcetable:".$pid.$filename.':data' );
   return $h[$key];
}

Version 2b: by including the pid in the name of the key, we avoid multi-core race conditions at the expense of multiple copies of the data.

Implementation 3: Store the key/value data directly in the global hash table

data.set() and data.get() address a global key/value table. We could use that directly, rather than constructing a TrafficScript hash:

sub update( $filename ) {
   $pid = sys.getPid();
   $md5 = resource.getMD5( $filename );
   if( $md5 == data.get( "resourcetable".$pid.$filename.":md5" ) ) return;

   data.reset( "resourcetable".$pid.$filename.":" );
   data.set( "resourcetable".$pid.$filename.":md5", $md5 );

   $contents = resource.get( $filename );
   foreach( $l in string.split( $contents, "\n" ) ) {
      if( ! string.regexmatch( $l, "(.*?)\\s+(.*)" ) ) continue;
      $key = string.trim( $1 );
      $value = string.trim( $2 );
      data.set( "resourcetable".$pid.$filename."::".$key, $value );
   }
}

sub lookup( $filename, $key ) {
   update( $filename );
   $pid = sys.getPid();
   return data.get( "resourcetable".$pid.$filename."::".$key );
}

Version 3: key/value pairs are stored in the global data table. Keys begin with the string "resourcetable:pid:filename:", so it's easy to delete all of the key/value pairs using data.reset() before rebuilding the dataset.

How do these implementations compare?

We tested the number of lookups-per-second that each implementation could achieve (using a single-core virtual machine running on a laptop Core2 processor) to investigate performance for different dataset sizes:

Resource file size (entries):                                10      100    1,000   10,000
Implementation 1: simple search                         300,000  100,000   17,500    1,000
Implementation 2: hash cached in global data table       27,000    2,000      250       10
Implementation 3: key/value pairs in global data table  200,000  200,000  200,000  200,000

ResourceTable lookups per second (single core, lightweight processor)

The test just exercised the rate of lookups in resource files of various sizes; the costs of building the cached data structures (implementations 2 and 3) are one-off costs and are not included in the tests.

Interpreting the results

The degradation of performance in implementation 1 as the file size increases was to be expected.

The constant performance of implementation 3 was also expected, as hash tables generally give O(1) lookup speed, unaffected by the number of entries.

The abysmal performance of implementation 2 is surprising, until you note that on every lookup we retrieve the entire hash table from the global data table:

$h = data.get( "resourcetable:".$pid.$filename.':data' );
return $h[$key];

The global data table is a key/value store; all keys and values are serialized as strings. The data.get() operation will read the serialized version of the hash table and reconstruct the entire table (up to 10,000 entries) before the O(1) lookup operation.

What is perhaps most surprising is the speed at which you can search and extract data from a string using regular expressions (implementation 1). For small and medium datasets (up to approximately 50 entries), this is the simplest and fastest method; it's only worth considering the more complex data.get() key/value implementation for larger datasets.

Read more

Check out the article How is memory managed in TrafficScript? for more detail on the ways that TrafficScript handles data and memory.
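The cost profile of implementation 2 can be reproduced outside TrafficScript. The following Python sketch is purely illustrative (Stingray's global data table is not JSON), but the principle of serializing the whole table as one value, then rebuilding it on every lookup, is the same:

```python
import json

# 10,000-entry table, stored two ways
table = {"key%d" % i: "value%d" % i for i in range(10000)}
blob = json.dumps(table)  # stand-in for one serialized global-data-table value

def lookup_via_blob(key):
    # Implementation 2: deserialize all 10,000 entries, then do the O(1) lookup
    return json.loads(blob).get(key, "")

def lookup_direct(key):
    # Implementation 3: one entry per key, so a lookup touches only what it needs
    return table.get(key, "")
```

Timing the two functions (e.g. with the timeit module) shows lookup_via_blob paying the full deserialization cost on every call, which is exactly why implementation 2 collapses as the file grows.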
This article presents a TrafficScript library that gives you easy and efficient access to tables of data stored as files in the Stingray configuration.

libTable.rts

Download the following TrafficScript library from github and import it into your Rules Catalog, naming it libTable.rts:

# libTable.rts
#
# Efficient lookups of key/value data in large resource files (>100 lines)
# Use getFirst() and getNext() to iterate through the table

sub lookup( $filename, $key ) {
   update( $filename );
   $pid = sys.getPid();
   return data.get( "resourcetable".$pid.$filename."::".$key );
}

sub getFirst( $filename ) {
   update( $filename );
   $pid = sys.getPid();
   return data.get( "resourcetable".$pid.$filename.":first" );
}

sub getNext( $filename, $key ) {
   update( $filename );
   $pid = sys.getPid();
   return data.get( "resourcetable".$pid.$filename.":next:".$key );
}

# Internal functions

sub update( $filename ) {
   $pid = sys.getPid();
   $md5 = resource.getMD5( $filename );
   if( $md5 == data.get( "resourcetable".$pid.$filename.":md5" ) ) return;

   data.reset( "resourcetable".$pid.$filename.":" );
   data.set( "resourcetable".$pid.$filename.":md5", $md5 );

   $contents = resource.get( $filename );
   $pkey = "";
   foreach( $l in string.split( $contents, "\n" ) ) {
      if( ! string.regexmatch( $l, "(.*?)\\s+(.*)" ) ) continue;
      $key = string.trim( $1 );
      $value = string.trim( $2 );
      data.set( "resourcetable".$pid.$filename."::".$key, $value );
      if( !$pkey ) {
         data.set( "resourcetable".$pid.$filename.":first", $key );
      } else {
         data.set( "resourcetable".$pid.$filename.":next:".$pkey, $key );
      }
      $pkey = $key;
   }
}

Usage:

import libTable.rts as table;
$filename = "data.txt";

# Look up a key/value pair
$value = table.lookup( $filename, $key );

# Iterate through the table
for( $key = table.getFirst( $filename );
     $key != "";
     $key = table.getNext( $filename, $key ) ) {
   $value = table.lookup( $filename, $key );
}

The library caches the contents of the file internally, and is very efficient for large files. 
For smaller files, it may be slightly more efficient to search them using a regular expression, but the convenience of this library may outweigh the small performance gains.

Data file format

This library provides access to files stored in the Stingray conf/extra folder (by way of the Extra Files > Miscellaneous Files section of the catalog). These files can be uploaded using the UI, the SOAP or REST API, or by manually copying them in place and initiating a configuration replication.

Files should contain key-value pairs, one per line, space-separated:

key1 value1
key2 value2
key3 value3

Preservation of order

The lookup operation uses an open hash table, so it is efficient for large files. The getFirst() and getNext() operations will iterate through the data table, returning the keys in the order they appear in the file.

Performance and alternative implementations

The performance of this library is investigated in the article Investigating the performance of TrafficScript - storing tables of data. It is very efficient for large tables of data, and marginally less efficient than a simple regular-expression string search for small files.

If performance is a concern and you only need to work with small datasets, then you could use the following library instead:

libTableSmall.rts

# libTableSmall.rts: Efficient lookups of key/value data
# in a small resource file (<100 lines)
sub lookup( $filename, $key ) {
   $contents = resource.get( $filename );
   if( string.regexmatch( $contents, '\n'.$key.'\s+([^\n]+)' ) ) return $1;
   if( string.regexmatch( $contents, '^'.$key.'\s+([^\n]+)' ) ) return $1;
   return "";
}
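The first/next bookkeeping that update() performs can be modelled in Python (an illustrative sketch, not the library itself): each key records its successor, so iteration follows file order while lookups remain O(1):

```python
def build_table(contents):
    """Parse space-separated 'key value' lines into a lookup dict,
    plus a 'first' key and per-key successor pointers."""
    values, nxt = {}, {}
    first = prev = None
    for line in contents.splitlines():
        parts = line.split(None, 1)
        if len(parts) != 2:
            continue  # skip malformed lines, as the library does
        key, value = parts
        values[key] = value.strip()
        if prev is None:
            first = key          # remember the first key in the file
        else:
            nxt[prev] = key      # chain each key to its successor
        prev = key
    return values, first, nxt

def iterate(values, first, nxt):
    # Walk the successor chain, yielding pairs in file order
    key = first
    while key is not None:
        yield key, values[key]
        key = nxt.get(key)
```

Calling build_table on the three-line example above yields first == "key1", and iterate returns the pairs in exactly the order they appear in the file.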
In Stingray, each virtual server is configured to manage traffic of a particular protocol. For example, the HTTP virtual server type expects to see HTTP traffic, and automatically applies a number of optimizations - keepalive pooling, HTTP upgrades, pipelining - and offers a set of HTTP-specific functionality (caching, compression etc).

A virtual server is bound to a specific port number (e.g. 80 for HTTP, 443 for HTTPS) and a set of IP addresses. Although you can configure several virtual servers to listen on the same port, they must be bound to different IP addresses; you cannot have two virtual servers bound to the same IP:port pair, as Stingray would not know which virtual server to route traffic to.

"But I need to use one port for several different applications!"

Sometimes, perhaps due to firewall restrictions, you can't publish services on arbitrary ports. Perhaps you can only publish services on ports 80 and 443; all other ports are judged unsafe and are firewalled off. Furthermore, it may not be possible to publish several external IP addresses.

You need to accept traffic for several different protocols on the same IP:port pair, and each protocol needs a particular virtual server to manage it. How can you achieve this?

The scenario

Let's imagine you are hosting several very different services:

A plain-text web application that needs an HTTP virtual server listening on port 80
A second web application listening for HTTPS traffic on port 443
An XML-based service load-balanced across several servers listening on port 180
SSH login to a back-end server (this is a 'server-first' protocol) listening on port 22

Clearly, you'll need four different virtual servers (one for each service), but due to firewall limitations, all traffic must be tunnelled to port 80 on a single IP address. How can you resolve this?

The solution - version 1

The solution is relatively straightforward for the first three protocols. 
They are all 'client-first' protocols (see Feature Brief: Server First, Client First and Generic Streaming Protocols), so Stingray can read the initial data written by the client.

Virtual servers to handle individual protocols

First, create three internal virtual servers, listening on unused private ports (I've added 7000 to the public ports). Each virtual server should be configured to manage its protocol appropriately, and to forward traffic to the correct target pool of servers. You can test each virtual server by directing your client application at the correct port (e.g. http://stingray-ip-address:7080/), provided that you can access the relevant port (e.g. you are behind the firewall).

For security, you can bind these virtual servers to localhost so that they can only be accessed from the Stingray device.

A public 'demultiplexing' virtual server

Create three 'loopback' pools (one for each protocol), directing traffic to localhost:7080, localhost:7180 and localhost:7443.

Create a 'public' virtual server listening on port 80 that interrogates traffic using the following rule, and then selects the appropriate pool based on the data the clients send. The virtual server should be 'client first', meaning that it will wait for data from the client connection before triggering any rules:

# Get what data we have...
$data = request.get();

# SSL/TLS record layer:
# handshake(22), ProtocolVersion.major(3), ProtocolVersion.minor(0-3)
if( string.regexmatch( $data, '^\026\003[\000-\003]' )) {
   # Looks like SSLv3 or TLS v1/2/3
   pool.use( "Internal HTTPS loopback" );
}

if( string.startsWithI( $data, "<xml" )) {
   # Looks like our XML-based protocol
   pool.use( "Internal XML loopback" );
}

if( string.regexmatch( $data, "^(GET |POST |PUT |DELETE |OPTIONS |HEAD )" )) {
   # Looks like HTTP
   pool.use( "Internal HTTP loopback" );
}

log.info( "Request: '".$data."' unrecognised!" );
connection.discard();

The 'Detect protocol' rule is triggered once we receive client data.

Now you can target all your client applications at port 80, tunnel through the firewall and demultiplex the traffic on the Stingray device.

The solution - version 2

You may have noticed that we omitted SSH from the first version of the solution.

SSH is a challenging protocol to manage in this way because it is 'server first': the client connects and waits for the server to respond with a banner (greeting) before writing any data on the connection. This means that we cannot use the approach above to identify the protocol type before we select a pool.

However, there's a good workaround. We can modify the solution presented above so that it waits for client data. If it does not receive any data within (for example) 5 seconds, then we assume that the connection is the server-first SSH type.

First, create an "SSH" virtual server listening on (for example) port 7022 and directing traffic to your target SSH server (for example, localhost:22 - the local SSH daemon on the Stingray host).

Note that this is a 'Generic server first' virtual server type, because that's the appropriate type for SSH.

Second, create an additional 'loopback' pool named 'Internal SSH loopback' that forwards traffic to localhost:7022 (the SSH virtual server).

Thirdly, reconfigure the public virtual server listening on port 80 to be 'Generic streaming' rather than 'Generic client first'. This means that it will run the request rule immediately on a client connection, rather than waiting for client data.

Finally, update the request rule to read the client data. Because request.get() returns whatever is in the network buffer for client data, we spin and poll this buffer every 10 ms until we either get some data, or we time out after 5 seconds.

# Get what data we have... 
$data = request.get();
$count = 500;

while( $data == "" && $count-- > 0 ) {
   connection.sleep( 10 ); # milliseconds
   $data = request.get();
}

if( $data == "" ) {
   # We've waited long enough... this must be a server-first protocol
   pool.use( "Internal SSH loopback" );
}

# SSL/TLS record layer:
# handshake(22), ProtocolVersion.major(3), ProtocolVersion.minor(0-3)
if( string.regexmatch( $data, '^\026\003[\000-\003]' )) {
   # Looks like SSLv3 or TLS v1/2/3
   pool.use( "Internal HTTPS loopback" );
}

if( string.startsWithI( $data, "<xml" )) {
   # Looks like our XML-based protocol
   pool.use( "Internal XML loopback" );
}

if( string.regexmatch( $data, "^(GET |POST |PUT |DELETE |OPTIONS |HEAD )" )) {
   # Looks like HTTP
   pool.use( "Internal HTTP loopback" );
}

log.info( "Request: '".$data."' unrecognised!" );
connection.discard();

This solution isn't perfect (the spin-and-poll may incur a hit for a busy service over a slow network connection), but it's an effective solution to the single-port firewall problem, and it explains how to tunnel SSH over port 80 (not that you'd ever do such a thing, would you?)

Read more

Check out Feature Brief: Server First, Client First and Generic Streaming Protocols for background. The WebSockets example (libWebSockets.rts: Managing WebSockets traffic with Traffic Manager) uses a similar approach to demultiplex websockets and HTTP traffic.
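The classification logic of the final rule can be sketched in Python (illustrative only; the function name and return labels are invented for the sketch, and an empty buffer stands in for the 5-second timeout expiring):

```python
import re

def detect_protocol(data: bytes):
    """Classify a connection from its first bytes, as the rule above does."""
    if not data:
        return "ssh"    # no client data arrived: assume a server-first protocol
    if re.match(rb'\x16\x03[\x00-\x03]', data):
        return "https"  # TLS/SSLv3 handshake record
    if data[:4].lower() == b"<xml":
        return "xml"    # our XML-based protocol
    if re.match(rb'(GET|POST|PUT|DELETE|OPTIONS|HEAD) ', data):
        return "http"
    return None         # unrecognised: discard the connection
```

For example, `detect_protocol(b"GET / HTTP/1.1\r\n")` returns "http", while an empty buffer maps to "ssh".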
The libLDAP.rts library and supporting library files (written by Mark Boddington) allow you to interrogate and modify LDAP traffic from a TrafficScript rule, and to respond directly to an LDAP request when desired.

You can use the library to meet a range of use cases, as described in the document Managing LDAP traffic with libLDAP.rts.

Note: This library allows you to inspect and modify LDAP traffic as it is balanced by Stingray. If you want to issue LDAP requests from Stingray, check out the auth.query() TrafficScript function, or the equivalent Java Extension (see Authenticating users with Active Directory and Stingray Java Extensions).

Overview

A long, long time ago on a Traffic Manager far, far away, I (Mark Boddington) wrote some libraries for processing LDAP traffic in TrafficScript:

libBER.rts – This is a TrafficScript library which implements all of the Basic Encoding Rules (BER) functionality required for LDAP. It does not completely implement BER, though: LDAP doesn't use all of the available types, and this library hasn't implemented those it doesn't need.
libLDAP.rts – This is a TrafficScript library of functions which can be used to inspect and manipulate LDAP requests and responses. It requires libBER.rts to encode the LDAP packets.
libLDAPauth.rts – This is a small library which uses libLDAP.rts to provide simple LDAP authentication to other services.

That library (version 1.0) mostly focused on inspecting LDAP requests; it was not particularly well suited to processing LDAP responses. Now, thanks to a Stingray PoC run in partnership with the guys over at Clever Consulting, I've had cause to revisit this library and improve upon the original. I'm pleased to announce that libLDAP.rts version 1.1 has arrived.

What's new in libLDAP.rts version 1.1?

Lazy Decoding. The library now only decodes the envelope when getPacket() or getNextPacket() is called. This gets you the MessageID and the Operation. 
If you want to process further, the other functions handle decoding additional data as needed.
New support for processing streams of LDAP responses. Unlike requests, LDAP responses are typically made up of multiple LDAP messages. The library can now be used to process multiple packets in a response.
New SearchResult processing functions: getSearchResultDetails(), getSearchResultAttributes() and updateSearchResultDetails().

Lazy Decoding

Now that the decoding is lazier, you can almost entirely bypass decoding for packets in which you have no interest. So if you only want to check BindRequests and/or BindResponses, then those are the only packets you need to fully decode. The rest are passed through un-inspected (well, except for the envelope).

Support for LDAP Response streams

We now have several functions to allow you to process responses which are made up of multiple LDAP messages, such as those for search requests. You can use a loop with the getNextPacket($packet["lastByte"]) function to process each LDAP message as it is returned from the LDAP server. The LDAP packet hash now has a "lastByte" entry to help you keep track of the messages in the stream. There is also a new skipPacket() function to allow you to skip the encoder for packets which you aren't modifying.

Search Result Processing

With the ability to process response streams, I have added a number of functions specifically for processing SearchResults. The getSearchResultDetails() function will return a SearchResult hash which contains the decoded ObjectName. If you are then interested in the object, you can call getSearchResultAttributes() to decode the attributes which have been returned. If you make any changes to the search result, you can then call updateSearchResultDetails() to update the packet, and then encodePacket() to re-encode it. Of course, if at any point you determine that no changes are needed, you can call skipPacket() instead. 
Example - Search Result Processing

import libLDAP.rts as ldap;

$packet = ldap.getNextPacket(0);
while ( $packet ) {

   # Get the Operation
   $op = ldap.getOp($packet);

   # Is this a SearchResultEntry?
   if ( $op == "SearchResultEntry" ) {

      $searchResult = ldap.getSearchResultDetails($packet);

      # Is the LDAPDN within example.com?
      if ( string.endsWith($searchResult["objectName"], "dc=example,dc=com") ) {

         # We have a search result in the tree we're interested in.
         # Get the Attributes
         ldap.getSearchResultAttributes($searchResult);

         # Process all User Objects
         if ( array.contains($searchResult["attributes"]["objectClass"],
                             "inetOrgPerson") ) {

            # Log the DN and all of the attributes
            log.info("DN: " . $searchResult["objectName"] );
            foreach ( $att in hash.keys($searchResult["attributes"]) ) {
               log.info($att . " = " . lang.dump($searchResult["attributes"][$att]) );
            }

            # Add the user's favourite colour
            $searchResult["attributes"]["Favourite_Colour"] = [ "Riverbed Orange" ];

            # If the password attribute is included.... remove it
            hash.delete($searchResult["attributes"], "userPassword");

            # Update the search result
            ldap.updateSearchResultDetails($packet, $searchResult);

            # Commit the changes
            $stream .= ldap.encodePacket( $packet );
            $packet = ldap.getNextPacket($packet["lastByte"]);
            continue;
         }
      }
   }

   # Not an interesting packet. Skip and move on.
   $stream .= ldap.skipPacket( $packet );
   $packet = ldap.getNextPacket($packet["lastByte"]);
}
response.set($stream);
response.flush();

This example reads each packet in turn by calling getNextPacket(), passing the lastByte attribute of the previously processed packet as the argument. We're looking for SearchResultEntry operations; if we find one, we pass the packet to getSearchResultDetails() to decode the object which the search result is for, in order to determine the DN. If it's in example.com then we decide to process further and decode the attributes with getSearchResultAttributes(). 
If the object has an objectClass of inetOrgPerson, we then print the attributes to the event log, remove the userPassword if it exists, and set a favourite colour for the user. Finally, we encode the packet and move on to the next one. Packets which we aren't interested in modifying are skipped.

Of course, rather than do all this checking in the response, we could have checked the SearchRequest in a request rule and then used connection.data.set() to flag the message ID for further processing.

We should also have a request rule which ensures that the objectClass is in the list of attributes requested by the end-user. But I'll leave that as an exercise for the reader ;-)

If you want more examples of how this library can be used, then please check out the additional use cases here: Managing LDAP traffic with libLDAP.rts
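The getNextPacket()/lastByte cursor pattern described above can be illustrated outside TrafficScript. The following Python sketch uses a toy one-byte length-prefixed message format (not real BER/LDAP encoding) purely to show the loop structure; all names and the wire format are invented for illustration.

```python
# Illustrative analogue of the getNextPacket()/lastByte stream pattern.
# 'buf' stands in for the buffered response stream; the 1-byte length
# prefix is a toy format, not LDAP's BER encoding.

def get_next_packet(buf, last_byte):
    """Return (message, new_last_byte), or (None, last_byte) when exhausted."""
    if last_byte >= len(buf):
        return None, last_byte
    length = buf[last_byte]           # toy 1-byte length prefix
    start = last_byte + 1
    msg = buf[start:start + length]
    return msg, start + length

# Process every message in the stream, as the TrafficScript loop does.
stream = bytes([3]) + b"abc" + bytes([2]) + b"de"
cursor = 0
messages = []
while True:
    msg, cursor = get_next_packet(stream, cursor)
    if msg is None:
        break
    messages.append(msg)
```

Each iteration hands the previous cursor position back in, exactly as the rule passes $packet["lastByte"] to getNextPacket().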
Google Analytics is a great tool for monitoring and tracking visitors to your web sites. Perhaps best of all, it's entirely web based - you only need a web browser to access the analysis services it provides.

To enable tracking for your web sites, you need to embed a small fragment of JavaScript code in every web page. This extension makes this easy, by inspecting all outgoing content and inserting the code into each HTML page, while honoring the user's 'Do Not Track' preferences.

Installing the Extension

Requirements

This extension has been tested against Stingray Traffic Manager 9.1, and should function with all versions from 7.0 onwards.

Installation

Copy the contents of the User Analytics rule below, open it in an editor, and paste the contents into a new response rule.

Verify that the extension is functioning correctly by accessing a page through the traffic manager and using 'View Source' to check that the Google Analytics code has been added near the top of the document, just before the closing </head> tag.

User Analytics rule

# Edit the following to set your profile ID
$defaultProfile = "UA-123456-1";

# You may override the profile ID on a site-by-site basis here
$overrideProfile = [
   "support.mysite.com" => "UA-123456-2",
   "secure.mysite.com"  => "UA-123456-3"
];

# End of configuration settings

# Only process text/html responses
$contentType = http.getResponseHeader( "Content-Type" );
if( !string.startsWith( $contentType, "text/html" )) break;

# Honor any Do-Not-Track preference
$dnt = http.getHeader( "DNT" );
if ( $dnt == "1" ) break;

# Determine the correct $uacct profile ID
$uacct = $overrideProfile[ http.getHostHeader() ];
if( !$uacct ) $uacct = $defaultProfile;

# See http://www.google.com/support/googleanalytics/bin/answer.py?answer=174090
$script = '
<script type="text/javascript">
 var _gaq = _gaq || [];
 _gaq.push(["_setAccount", "' . $uacct . '"]);
 _gaq.push(["_trackPageview"]);
 (function() {
   var ga = document.createElement("script");
   ga.type = "text/javascript";
   ga.async = true;
   ga.src = ("https:" == document.location.protocol ? "https://ssl" : "http://www") +
      ".google-analytics.com/ga.js";
   var s = document.getElementsByTagName("script")[0];
   s.parentNode.insertBefore(ga, s);
 })();
</script>';

$body = http.getResponseBody();

# Find the location of the closing '</head>' tag
$i = string.find( $body, "</head>" );
if( $i == -1 ) $i = string.findI( $body, "</head>" );
if( $i == -1 ) break; # Give up

http.setResponseBody( string.left( $body, $i ) . $script . string.skip( $body, $i ));

For some extensions to this rule, check out Faisal Memon's article Google Analytics revisited
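As a rough illustration, the core logic of the rule — honor any Do-Not-Track preference, then splice a snippet in just before the closing </head> tag, with a case-insensitive fallback search — can be sketched in Python. The snippet text, function name, and header dictionary are placeholders, not the real rule or API:

```python
# Illustrative sketch of the insertion logic only. SNIPPET stands in for
# the generated Google Analytics script; 'headers' stands in for the
# client's request headers.

SNIPPET = '<script>/* analytics */</script>'

def insert_analytics(body, headers):
    # Honor any Do-Not-Track preference
    if headers.get("DNT") == "1":
        return body
    # Find the closing </head> tag, falling back to a case-insensitive search
    i = body.find("</head>")
    if i == -1:
        i = body.lower().find("</head>")
    if i == -1:
        return body  # give up: nowhere to insert
    return body[:i] + SNIPPET + body[i:]

html = "<html><head><title>x</title></head><body></body></html>"
result = insert_analytics(html, {})
```

A DNT request or a body without a head section passes through unmodified, mirroring the rule's early break statements.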
Following up on this earlier article, try using the TrafficScript code snippet below to automatically insert the Google Analytics code on all your web pages. To use it:

Copy the rule onto your Traffic Manager by first navigating to Catalogs -> Rules. Scroll down to Create new rule, give the rule a name, and select Use TrafficScript Language. Click Create Rule to create the rule. Copy and paste the rule below.
Change $account to your Google Analytics account number.
If you are using multiple domains as described here, set $multiple_domains to TRUE and set $tld to your Top Level Domain as specified in your Google Analytics account.
Set the rule as a Response Rule in your Virtual Server by navigating to Services -> Virtual Servers -> <your virtual server> -> Rules -> Response Rules and Add rule.

After that you should be good to go. No need to individually modify your web pages; TrafficScript will take care of it all.

#
# Replace UA-XXXXXXXX-X with your Google Analytics Account Number
#
$account = 'UA-XXXXXXXX-X';

#
# If you are tracking multiple domains, i.e. yourdomain.com,
# yourdomain.net, etc., then set $multiple_domains to TRUE and
# replace yourdomain.com with your Top Level Domain as specified
# in your Google Analytics account
#
$multiple_domains = FALSE;
$tld = 'yourdomain.com';

#
# Only modify text/html pages
#
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ))
   break;

#
# This variable contains the code to be inserted in the web page. Do not modify.
#
$html = "\n<script type=\"text/javascript\"> \n \
 var _gaq = _gaq || []; \n \
 _gaq.push(['_setAccount', '" . $account . "']); \n";

if( $multiple_domains == TRUE ) {
   $html .= " _gaq.push(['_setDomainName', '" . $tld . "']); \n \
 _gaq.push(['_setAllowLinker', true]); \n";
}

$html .= " _gaq.push(['_trackPageview']); \n \
 (function() { \n \
   var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; \n \
   ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; \n \
   var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); \n \
 })(); \n \
</script>\n";

#
# Insert the code right before the </head> tag in the page
#
$body = http.getResponseBody();
$body = string.replace( $body, "</head>", $html . "</head>" );
http.setResponseBody( $body );
This article describes how to gather activity statistics across a cluster of traffic managers using Perl, SOAP::Lite and Traffic Manager's SOAP Control API.   Overview   Each local Traffic Manager tracks a very wide range of activity statistics. These may be exported using SNMP or retrieved using the System/Stats interface in Traffic Manager's SOAP Control API.   When you use the Activity monitoring in Traffic Manager's Administration Interface, a collector process communicates with each of the Traffic Managers in your cluster, gathering the local statistics from each and merging them before plotting them on the activity chart.   'Aggregate data across all traffic managers'   However, when you use the SNMP or Control API interfaces directly, you will only receive the statistics from the Traffic Manager machine you have connected to. If you want to get a cluster-wide view of activity using SNMP or the Control API, you will need to poll each machine and merge the results yourself.   Using Perl and SOAP::Lite to query the traffic managers and merge activity statistics   The following code sample determines the total TCP connection rate across the cluster as follows:   Connect to the named traffic manager and use the getAllClusterMachines() method to retrieve a list of all of the machines in the cluster; Poll each machine in the cluster for its current value of TotalConn (the total number of TCP connections processed since startup); Sleep for 10 seconds, then poll each machine again; Calculate the number of connections processed by each traffic manager in the 10-second window, and calculate the per-second rate accurately using high-res time.   
The code:

#!/usr/bin/perl -w

use SOAP::Lite 0.6;
use Time::HiRes qw( time sleep );

$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME} = 0;

my $userpass    = "admin:admin";    # SOAP-capable authentication credentials
my $adminserver = "stingray:9090";  # Details of an admin server in the cluster
my $sampletime  = 10;               # Sample time (seconds)

sub getAllClusterMembers( $$ );
sub makeConnections( $$$ );
sub makeRequest( $$ );

my $machines = getAllClusterMembers( $adminserver, $userpass );
print "Discovered cluster members " . ( join ", ", @$machines ) . "\n";

my $connections = makeConnections( $machines, $userpass,
   "http://soap.zeus.com/zxtm/1.0/System/Stats/" );

# sample the value of getTotalConn
my $start = time();
my $res1 = makeRequest( $connections, "getTotalConn" );
sleep( $sampletime - ( time() - $start ) );
my $res2 = makeRequest( $connections, "getTotalConn" );

# Determine connection rate per traffic manager
my $totalrate = 0;
foreach my $z ( keys %{$res1} ) {
   my $conns   = $res2->{$z}->result - $res1->{$z}->result;
   my $elapsed = $res2->{$z}->{time} - $res1->{$z}->{time};
   my $rate    = $conns / $elapsed;
   $totalrate += $rate;
}
print "Total connection rate across all machines: " .
   sprintf( '%.2f', $totalrate ) . "\n";

sub getAllClusterMembers( $$ ) {
   my( $adminserver, $userpass ) = @_;

   # Discover cluster members
   my $mconn = SOAP::Lite
      -> ns('http://soap.zeus.com/zxtm/1.0/System/MachineInfo/')
      -> proxy("https://$userpass\@$adminserver/soap")
      -> on_fault( sub {
            my( $conn, $res ) = @_;
            die ref $res ? $res->faultstring : $conn->transport->status;
         } );
   $mconn->proxy->ssl_opts( SSL_verify_mode => 0 );

   my $res = $mconn->getAllClusterMachines();

   # $res->result is a reference to an array of System.MachineInfo.Machine objects.
   # Pull out the name/port of the traffic managers in our cluster
   my @machines = grep s@https?://(.*?)/@$1@,
      map { $_->{admin_server}; } @{$res->result};

   return \@machines;
}

sub makeConnections( $$$ ) {
   my( $machines, $userpass, $ns ) = @_;

   my %conns;
   foreach my $z ( @$machines ) {
      $conns{ $z } = SOAP::Lite
         -> ns( $ns )
         -> proxy("https://$userpass\@$z/soap")
         -> on_fault( sub {
               my( $conn, $res ) = @_;
               die ref $res ? $res->faultstring : $conn->transport->status;
            } );
      $conns{ $z }->proxy->ssl_opts( SSL_verify_mode => 0 );
   }
   return \%conns;
}

sub makeRequest( $$ ) {
   my( $conns, $req ) = @_;

   my %res;
   foreach my $z ( keys %$conns ) {
      my $r = $conns->{$z}->$req();
      $r->{time} = time();
      $res{$z} = $r;
   }
   return \%res;
}

Running the script

$ ./getConnections.pl
Discovered cluster members stingray1-ny:9090, stingray1-sf:9090
Total connection rate across all machines: 5.02
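The rate calculation performed by the script — sample each machine's TotalConn counter twice, divide the delta by the elapsed time per machine, then sum across the cluster — can be sketched in Python. The sample data below is invented for illustration:

```python
# Hedged sketch of the cluster-wide rate merge. Each sample maps
# machine -> (counter_value, unix_timestamp); the figures are invented.

def total_rate(sample1, sample2):
    """Sum per-machine connection rates between two counter samples."""
    total = 0.0
    for machine, (c1, t1) in sample1.items():
        c2, t2 = sample2[machine]
        total += (c2 - c1) / (t2 - t1)   # per-machine connections/second
    return total

first  = {"stingray1-ny:9090": (1000, 100.0), "stingray1-sf:9090": (2000, 100.0)}
second = {"stingray1-ny:9090": (1030, 110.0), "stingray1-sf:9090": (2020, 110.0)}
rate = total_rate(first, second)   # (30 + 20) connections over 10 seconds
```

Using the per-machine elapsed time, rather than the nominal 10-second sample window, keeps the rate accurate even when the two polls are not perfectly spaced.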
The libDNS.rts library provides a means to interrogate and modify DNS traffic from a TrafficScript rule, and to respond directly to DNS requests when desired.
The famous TrafficScript Mandelbrot generator!
In a recent conversation, a user wished to use the Traffic Manager's rate shaping capability to throttle back the requests to one part of his web site that was particularly sensitive to high traffic volumes (think a CGI, JSP Servlet, or other type of dynamic application). This article describes how you might go about doing this, testing and implementing a suitable limit using Service Level Monitoring, Rate Shaping and some TrafficScript magic.   The problem   Imagine that part of your website is particularly sensitive to traffic load and is prone to overloading when a crowd of visitors arrives. Connections queue up, response time becomes unacceptable and it looks like your site has failed.   If your website were a tourist attraction or a club, you’d employ a gatekeeper to manage entry rates. As the attraction began to fill up, you’d employ a queue to limit entry, and if the queue got too long, you’d want to encourage new arrivals to leave and return later rather than to join the queue.   This is more-or-less the solution we can implement for a web site. In this worked example, we're going to single out a particular application (named search.cgi) that we want to control the traffic to, and let all other traffic (typically for static content, etc) through without any shaping.   The approach   We'll first measure the maximum rate at which the application can process transactions, and use this value to determine the rate limit we want to impose when the application begins to run slowly.   Using Traffic Manager's Service Level Monitoring classes, we'll monitor the performance (response time) of the search.cgi application. If the application begins to run slower than normal, we'll deploy a queuing policy that rate-limits new requests to the application. We'll monitor the queue and send a 'please try later' message when the rate limit is met, rather than admitting users to the queue and forcing them to wait.   
Our goal is to maximize utilization (supporting as many transactions as possible), but minimise response time, returning a 'please wait' message rather than queueing a user.

Measuring performance

We first use zeusbench to determine the optimal performance that the application can achieve. We perform several runs, increasing the concurrency until the performance (responses-per-second) stabilizes at a consistent level:

zeusbench –c 5 –t 20 http://host/search.cgi
zeusbench –c 10 –t 20 http://host/search.cgi
zeusbench –c 20 –t 20 http://host/search.cgi
... etc

From this, we conclude that the maximum number of transactions-per-second that the application can comfortably sustain is 100.

We then use zeusbench to send transactions at that rate (100 per second) and verify that performance and response times are stable:

zeusbench –r 100 –t 20 http://host/search.cgi

Our desired response time can be deduced to be approximately 20 ms.

Now we perform the 'destructive' test, to elicit precisely the behaviour we want to avoid. Use zeusbench again to send requests to the application at higher than the sustainable transaction rate:

zeusbench –r 110 –t 20 http://host/search.cgi

Observe how the response time for the transactions steadily climbs as requests begin to be queued, and the successful transaction rate falls steeply. Eventually, when the response time rises past acceptable limits, transactions are timed out and the service appears to have failed.

This illustrates how sensitive a typical application can be to floods of traffic that overwhelm it, even for just a few seconds. The effects of the flood can last for tens of seconds afterwards as the connections complete or time out.

Defining the policy

We wish to implement the following policy:

If all transactions complete within 50 ms, do not attempt to shape traffic.
If some transactions take more than 50 ms, assume that we are in danger of overload. Rate-limit traffic to 100 requests per second, and if requests exceed that rate limit, send back a '503 Too Busy' message rather than queueing them.
Once transaction time comes down to less than 50 ms, remove the rate limit.

Our goal is to repeat the previous zeusbench test, showing that the maximum transaction rate can be sustained within the desired response time, and that any extra requests receive an error message quickly rather than being queued.

Implementing the policy

The Rate Class

Create a rate shaping class named 'Search limit' with a limit of 100 requests per second.

The Service Level Monitoring class

Create a Service Level Monitoring class named 'Search timer' with a target response time of 50 ms.

If desired, you can use the Activity monitor to chart the percentage of requests that conform, i.e. complete within 50 ms, while you conduct your zeusbench runs. You'll notice a strong correlation between these figures and the increase in response time figures reported by zeusbench.

The TrafficScript rule

Now use these two classes with the following TrafficScript request rule:

# We're only concerned with requests for /search.cgi
$url = http.getPath();
if ( $url != "/search.cgi" ) break;

# Time this request using the Service Level Monitoring class
connection.setServiceLevelClass( "Search timer" );

# Test if any of the recent requests fell outside the desired SLM threshold
if ( slm.conforming( "Search timer" ) < 100 ) {
   if ( rate.getBacklog( "Search limit" ) > 0 ) {
      # To minimize response time, always send a 503 Too Busy response if the
      # request exceeds the configured rate of 100/second.
      # You could also use http.redirect() to a more pleasant 'sorry' page, but
      # 503 errors are easier to monitor when testing with ZeusBench
      http.sendResponse( "503 Too busy", "text/html",
         "<h1>We're too busy!!!</h1>", "Pragma: no-cache" );
   } else {
      # Shape the traffic to 100/second
      rate.use( "Search limit" );
   }
}

Testing the policy

Rerun the 'destructive' zeusbench test that produced the undesired behaviour previously:

zeusbench –r 110 –t 20 http://host/search.cgi

Observe that:

Traffic Manager processes all of the requests without excessive queuing; the response time stays within desired limits.
Traffic Manager typically processes 110 requests per second. There are approximately 10 'Bad' responses per second (these are the 503 Too Busy responses generated by the rule), so we can deduce that the remaining (approximately) 100 requests were served correctly.

These tests were conducted in a controlled environment, on an otherwise-idle machine that was not processing any other traffic. You could reasonably expect much more variation in performance in a real-world situation, and would be advised to set the rate class to a lower value than the experimentally-proven maximum.

In a real-world situation, you would probably choose to redirect a user to a 'sorry' page rather than returning a '503 Too Busy' error. However, because ZeusBench counts 4xx and 5xx responses as 'Bad', it is easy to determine how many requests complete successfully, and how many return the 'sorry' response.

For more information on using ZeusBench, take a look at the Introducing Zeusbench article.
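The gatekeeper decision made by the rule can be modelled as a small Python function. The thresholds mirror the policy (shape only when SLM conformance drops below 100%, reject when the queue already has a backlog); the function name and return labels are purely illustrative:

```python
# Illustrative model of the gatekeeper policy. 'conformance_pct' stands in
# for slm.conforming(), 'backlog' for rate.getBacklog().

def gatekeeper(path, conformance_pct, backlog):
    if path != "/search.cgi":
        return "pass"        # only the sensitive application is shaped
    if conformance_pct < 100:
        if backlog > 0:
            return "503"     # fail fast rather than queue further
        return "queue"       # shape to the configured rate limit
    return "pass"            # service healthy: no shaping

decisions = [
    gatekeeper("/index.html", 50, 10),   # static content is never shaped
    gatekeeper("/search.cgi", 100, 0),   # healthy: pass straight through
    gatekeeper("/search.cgi", 90, 0),    # slow, queue empty: rate-limit
    gatekeeper("/search.cgi", 90, 5),    # slow, queue busy: reject
]
```

Rejecting as soon as a backlog exists is what keeps response times flat in the final zeusbench run: excess requests get a fast 503 instead of a slow place in the queue.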
Introduction   Do you ever face any of these requirements?   "I want to best-effort provide certain levels of service for certain users." "I want to prioritize some transactions over others." "I want to restrict the activities of certain types of users."   This article explains that to address these problems, you must consider the following questions:    "Under what circumstances do you want the policy to take effect?" "How do you wish to categorise your users?" "How do you wish to apply the differentiation?"   It then describes some of the measures you can take to monitor performance more deeply and apply prioritization to nominated traffic:   Service Level Monitoring – Measure system performance, and apply policies only when they are needed. Custom Logging - Log and analyse activity to record and validate policy decisions. Application traffic inspection - Determine source, user, content, value; XML processing with XPath searches and calculations. Request Rate Shaping - Apply fine-grained rate limits for transactions. Bandwidth Control - Allocate and reserve bandwidth. Traffic Routing and Termination - Route high and low priority traffic differently; Terminate undesired requests early Selective Traffic Optimization - Selective caching and compression.   Whether you are running an eCommerce web site, online corporate services or an internal intranet, there’s always the need to squeeze more performance from limited resources and to ensure that your most valuable users get the best possible levels of service from the services you are hosting.   An example   Imagine that you are running a successful gaming service in a glamorous location.  The usage of your service is growing daily, and many of your long-term users are becoming very valuable.   Unfortunately, much of your bandwidth and server hits are taken up by competitors’ robots that screen-scrape your betting statistics, and poorly-written bots that spam your gaming tables and occasionally place low-value bets. 
At certain times of the day, this activity is so great that it impacts the quality of the service you deliver, and your most valuable customers are affected.     Using Traffic Manager to measure, classify and prioritize traffic, you can construct a service policy that comes into effect when your web site begins to run slowly to enforce different levels of service:    Competitor’s screen-scraping robots are tightly restricted to one request per second each.  A ten-second delay reduces the value of the information they screen-scrape. Users who have not yet logged in are limited to a small proportion of your available bandwidth and directed to a pair of basic web servers, thus reserving capacity for users who are logged in. Users who have made large transactions in the past are tagged with a cookie and the performance they receive is measured.  If they are receiving poor levels of service (over 100ms response time), then some of the transaction servers are reserved for these high-value users and the activity of other users is shaped by a system-wide queue.   Whether you are operating a gaming service, a content portal, a B2B or B2C eCommerce site or an internal intranet, this kind of service policy can help ensure that key customers get the best possible service, minimize the churn of valuable users and prevent undesirable visitors from harming the service to the detriment of others.   Designing a service policy     “I want to best-effort guarantee certain levels of service for certain users.” “I want to prioritize some transactions over others.” “I want to restrict the activities of certain users.”   To address these problems, you must consider the following questions:   Under what circumstances do you want the policy to take effect? How do you wish to categorise your users? How do you wish to apply the differentiation?   One or more TrafficScript rules can be used to apply the policy.  
They take advantage of the following features:

When does the policy take effect?

Service Level Monitoring – Measure system performance, and apply policies only when they are needed.
Custom Logging – Log and analyse activity to record and validate policy decisions.

How are users categorized?

Application traffic inspection – Determine source, user, content, value; XML processing with XPath searches and calculations.

How are they given different levels of service?

Request Rate Shaping – Apply fine-grained rate limits for transactions.
Bandwidth Control – Allocate and reserve bandwidth.
Traffic Routing and Termination – Route high and low priority traffic differently; terminate undesired requests early.
Selective Traffic Optimization – Selective caching and compression.

TrafficScript

Feature Brief: TrafficScript is the key to defining traffic management policies to implement these prioritization rules. TrafficScript brings together functionality to monitor and classify behavior, and then applies functionality to impose the appropriate prioritization rules.

For example, the following TrafficScript request rule inspects HTTP requests. If the request is for a .jsp page, the rule looks at the client's 'Priority' cookie and routes the request to the 'high-priority' or 'low-priority' server pools as appropriate:

$url = http.getPath();
if( string.endsWith( $url, ".jsp" ) ) {
   $cookie = http.getCookie( "Priority" );
   if( $cookie == "high" ) {
      pool.use( "high-priority" );
   } else {
      pool.use( "low-priority" );
   }
}

Generally, if you can describe the traffic management logic that you require, it is possible to implement it using TrafficScript.

Capability 1: Service Level Monitoring

Using Feature Brief: Service Level Monitoring, Traffic Manager can measure and react to changes in response times for your hosted services, by comparing response times to a desired time.
You configure Service Level Monitoring by creating a Service Level Monitoring class (SLM class). The SLM class is configured with the desired response time (for example, 100 ms), and some thresholds that define actions to take. For example, if fewer than 80% of requests meet the desired response time, Traffic Manager can log a warning; if fewer than 50% meet the desired time, Traffic Manager can raise a system alert.

Suppose that we were concerned about the performance of our Java servlets. We can configure an SLM class with the desired performance, and use it to monitor all requests for Java servlets:

$url = http.getPath();
if( string.startsWith( $url, "/servlet/" ) ) {
   connection.setServiceLevelClass( "Java servlets" );
}

You can then monitor the performance figures generated by the 'Java servlets' SLM class to discover the response times, and the proportion of requests that fall outside the desired response time.

Once requests are monitored by an SLM class, you can discover the proportion of requests that meet (or fail to meet) the desired response time within a TrafficScript rule. This makes it possible to implement TrafficScript logic that is only called when services are underperforming.

Example: Simple Differentiation

Suppose we had a TrafficScript rule that tested whether a request came from a 'high value' customer.

When our service is running slowly, high-value customers should be sent to one server pool ('gold') and other customers sent to a lower-performing server pool ('bronze'). However, when the service is running at normal speed, we want to send all customers to all servers (the server pool named 'all servers').
The following TrafficScript rule describes how this logic can be implemented:

# Monitor all traffic with the 'response time' SLM class, which is
# configured with a desired response time of 200ms
connection.setServiceLevelClass( "response time" );

# Now, check the historical activity (last 10 seconds) to see if it's
# been acceptable (more than 90% of requests served within 200ms)
if( slm.conforming( "response time" ) > 90 ) {
   # select the 'all servers' server pool and terminate the rule
   pool.use( "all servers" );
}

# If we get here, things are running slowly.
# Here, we decide a customer is 'high value' if they have a login cookie,
# so we penalize customers who are not logged in. You can put your own
# test here instead
$logincookie = http.getCookie( "Login" );
if( $logincookie ) {
   pool.use( "gold" );
} else {
   pool.use( "bronze" );
}

For a more sophisticated example of this technique, check out the article Dynamic rate shaping slow applications.

Capability 2: Application Traffic Inspection

There's no limit to how you can inspect and evaluate your traffic. Traffic Manager lets you look at any aspect of a client's request, so that you can then categorize them as you need. For example:

# What is the client asking for?
$url = http.getPath();

# ... and the QueryString
$qs = http.getQueryString();

# Where has the client come from?
$referrer = http.getHeader( "Referer" );
$country = geo.getCountryCode( request.getRemoteIP() );

# What sort of browser is the client using?
$ua = http.getHeader( "User-Agent" );

# Is the client trying to spend more than $49.99?
if( http.getPath() == "/checkout.cgi" &&
    http.getFormParam( "total" ) > 4999 ) ...

# What's the value of the CustomerName field in the XML purchase order
# in the SOAP request?
$body = http.getBody();
$name = xml.xpath.matchNodeSet( $body, "", "//Info/CustomerName/text()" );

# Take the name, post it to a database server with a web interface and
# inspect the response. Does the response contain the value 'Premium'?
$response = http.request.post( "http://my.database.server/query",
   "name=" . string.htmlEncode( $name ) );
if( string.contains( $response, "Premium" ) ) { ... }

Remembering the Classification with a Cookie

Often, it only takes one request to identify the status of a user, but you want to remember this decision for all subsequent requests. For example, if a user places an item in his shopping cart by accessing the URL '/cart.php', then you want to remember this information for all of his subsequent requests.

Adding a response cookie is the way to do this. You can do this in either a Request or Response Rule with the 'http.setResponseCookie()' function:

if( http.getPath() == "/cart.php" ) {
   http.setResponseCookie( "GotItems", "Yes" );
}

This cookie will be sent by the client on every subsequent request, so to test whether the user has placed items in his shopping cart, you just need to test for the presence of the 'GotItems' cookie in each request rule:

if( http.getCookie( "GotItems" ) ) { ... }

If necessary, you can encrypt and sign the cookie so that it cannot be spoofed or reused:

# Setting the cookie
# Create an encryption key using the client's IP address and user agent.
# Encrypt the current time using the encryption key; it can only be
# decrypted using the same key
$key = http.getHeader( "User-Agent" ) . ":" . request.getRemoteIP();
$encrypted = string.encrypt( sys.time(), $key );
$encoded = string.hexencode( $encrypted );
http.setResponseHeader( "Set-Cookie", "GotItems=" . $encoded );

# Validating the cookie
$isValid = 0;
if( $cookie = http.getCookie( "GotItems" ) ) {
   $encrypted = string.hexdecode( $cookie );
   $key = http.getHeader( "User-Agent" ) . ":" . request.getRemoteIP();
   $secret = string.decrypt( $encrypted, $key );
   # If the cookie has been tampered with, or the IP address or user
   # agent differ, string.decrypt will return an empty string.
   # If it worked and the data was less than 1 hour old, it's valid:
   if( $secret && sys.time() - $secret < 3600 ) {
      $isValid = 1;
   }
}

Capability 3: Request Rate Shaping

Having decided when to apply your service policy (using Service Level Monitoring), and classified your users (using Application Traffic Inspection), you now need to decide how to prioritize valuable users and penalize undesirable ones.

The Feature Brief: Bandwidth and Rate Shaping in Traffic Manager capability is used to apply maximum request rates:

On a global basis ("no more than 100 requests per second to my application servers");
On a very fine-grained per-user or per-class basis ("no user can make more than 10 requests per minute to any of my statistics pages").

You can construct a service policy that places limits on a wide range of events, with very fine-grained control over how events are identified. You can impose per-second and per-minute rates on these events.

For example:

You can rate-shape individual web spiders, to stop them overwhelming your web site. Each web spider, from each remote IP address, can be given a maximum request rate.
You can throttle individual SMTP connections, or groups of connections from the same client, so that each connection is limited to a maximum number of sent emails per minute. You may also rate-shape new SMTP connections, so that a remote client can only establish new connections at a particular rate.
You can apply a global rate-shape to the number of connections per second that are forwarded to an application.
You can identify individual users' attempts to log in to a service, and then impede any dictionary-based login attacks by restricting each user to a limited number of attempts per minute.

Request rate limits are imposed using the TrafficScript rate.use() function, and you can configure per-second and per-minute limits in the rate class.
Both limits are applied (note that if the per-minute limit is more than 60 times the per-second limit, it has no effect).

Using a Rate Class

Rate classes function as queues. When the TrafficScript rate.use() function is called, the connection is suspended and added to the queue that the rate class manages. Connections are then released from the queue according to the per-second and per-minute limits.

There is no limit to the size of the backlog of queued connections. For example, if 1000 requests arrived in quick succession to a rate class that admitted 10 per second, 990 of them would be immediately queued. Each second, 10 more requests would be released from the front of the queue.

While they are queued, connections may time out or be closed by the remote client. If this happens, they are immediately discarded.

You can use the rate.getBacklog() function to discover how many requests are currently queued. If the backlog is too large, you may decide to return an error page to the user rather than risk their connection timing out. For example, to rate-shape .jsp requests, but defer requests when the backlog gets too large:

$url = http.getPath();
if( string.endsWith( $url, ".jsp" ) ) {
   if( rate.getBacklog( "shape requests" ) > 100 ) {
      http.redirect( "http://mysite/too_busy.html" );
   } else {
      rate.use( "shape requests" );
   }
}

Rate Classes with Keys

In many circumstances, you may need to apply more fine-grained rate-shape limits. For example, imagine a login page; we wish to limit how frequently each individual user can attempt to log in, to just 2 attempts per minute.

The rate.use() function can take an optional 'key' which identifies a specific instance of the rate class. This key can be used to create multiple, independent rate classes that share the same limits, but enforce them independently for each individual key.
For example, the ‘login limit’ class is restricted to 2 requests per minute, to limit how often each user can attempt to log in:   $url = http.getPath(); if( string.endsWith( $url, "login.cgi" ) ) { $user = http.getFormParam( "username" ); rate.use( "login limit", $user ); }   This rule can help to defeat dictionary attacks where attackers try to brute-force crack a user’s password.  The rate shaping limits are applied independently to each different value of $user.  As each new user accesses the system, they are limited to 2 requests per minute, independently of all other users who share the “login limit” rate shaping class.   For another example, check out The "Contact Us" attack against mail servers.   Applying service policies with rate shaping   Of course, once you’ve classified your users, you can apply different rate settings to different categories of users:   # If they have an odd-looking user agent, or if there’s no host header, # the client is probably a web spider. Limit it to 1 request per second. $ua = http.getHeader( "User-Agent" ); if( ( ! string.startsWith( $ua, "Mozilla/" ) && ! string.startsWith( $ua, "Opera/" ) ) || ! http.getHeader( "Host" ) ) { rate.use( "spiders", request.getRemoteIP() ); }   If the service is running slowly, rate-shape users who have not placed items into their shopping cart with a global limit, and rate-shape other users to 8 requests per second each:   if( slm.conforming( "timer" ) < 80 ) { $cookie = http.getCookie( "Cart" ); if( ! $cookie ) { rate.use( "casual users" ); } else { # Get a unique id for the user $cookie = http.getCookie( "JSPSESSIONID" ); rate.use( "8 per second", $cookie ); } }   Capability 4: Bandwidth Shaping   Feature Brief: Bandwidth and Rate Shaping in Traffic Manager allows Traffic Manager to limit the number of bytes per second used by inbound or outbound traffic, for an entire service, or by the type of request.   
Bandwidth limits are automatically shared and enforced across all the Traffic Managers in a cluster. Individual Traffic Managers take different proportions of the total limit, depending on the load on each, and unused bandwidth is equitably allocated across the cluster depending on the need of each machine.   Like Request Rate Shaping, you can use Bandwidth shaping to limit the activities of subsets of your users. For example, you may have a 1 Gbits/s network connection which is being over-utilized by a certain type of client, which is affecting the responsiveness of the service.  You may therefore wish to limit the bandwidth available to those clients to 20Mbits/s.   Using Bandwidth Shaping Like Request Rate Shaping, you configure a Bandwidth class with a maximum bandwidth limit.  Connections are allocated to a class as follows:   response.setBandwidthClass( "class name" );   All of the connections allocated to the class share the same bandwidth limit.   Example: Managing Flash Floods The following example helps to mitigate the ‘Slashdot Effect’, a common example of a Flash Flood problem.  In this situation, a web site is overwhelmed by traffic as a result of a high-profile link (for example, from the Slashdot news site), and the level of service that regular users experience suffers as a result.   The example looks at the ‘Referer’ header, which identifies where a user has come from to access a web site.  If the user has come from ‘slashdot.org’, he is tagged with a cookie so that all of his subsequent requests can be identified, and he is allocated to a low-bandwidth class:   $referrer = http.getHeader( "Referer" ); if( string.contains( $referrer, "slashdot.org" ) ) { http.addResponseHeader( "Set-Cookie", "slashdot=1" ); connection.setBandwidthClass( "slashdot" ); } if( http.getCookie( "slashdot" ) ) { connection.setBandwidthClass( "slashdot" ); }   For a more in depth discussion, check out Detecting and Managing Abusive Referers.   
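Conceptually, a bandwidth class behaves like a shared token bucket: the class refills at the configured byte rate, and every connection assigned to it draws from the same pool. The following Python sketch is a simplified, single-process illustration of that idea only; the TokenBucket class and its names are invented here, and Traffic Manager's real cluster-wide allocation and scheduling are considerably more sophisticated:

```python
class TokenBucket:
    """Toy model of a bandwidth class: a bucket refilled at a fixed byte
    rate. All senders sharing one bucket share one bandwidth limit, just
    as connections assigned to one bandwidth class share its limit."""

    def __init__(self, rate_bytes_per_sec):
        self.rate = rate_bytes_per_sec
        self.tokens = rate_bytes_per_sec   # start with one second of credit
        self.last = 0.0

    def consume(self, nbytes, now):
        # Refill according to elapsed time, capped at one second's burst
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True       # data may be sent now
        return False          # caller must wait for the bucket to refill

# A ~20 Mbit/s class for the tagged 'slashdot' visitors
slashdot = TokenBucket(20 * 1024 * 1024 // 8)

print(slashdot.consume(1_000_000, now=0.0))   # True  - within the first second's credit
print(slashdot.consume(2_000_000, now=0.0))   # False - bucket temporarily exhausted
print(slashdot.consume(2_000_000, now=1.0))   # True  - after a second's refill
```

In Traffic Manager itself none of this bookkeeping is written by hand: the rule simply assigns the connection to a class (for example, the ‘slashdot’ class above), and the sharing and scheduling happen inside the traffic manager.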
Capability 5: Traffic Routing and Termination   Different levels of service can be provided by different traffic routing, or in extreme events, by dropping some requests.   For example, some large media sites provide different levels of content; high-bandwidth rich media versions of news stories are served during normal usage, and low-bandwidth versions are served when traffic levels are extremely high.  Many websites provide flash-enabled and simple HTML versions of their home page and navigation.   This is also commonplace when presenting content to a range of browsing devices with different capabilities and bandwidth.   The switch between high and low bandwidth versions could take place as part of a service policy: as the service begins to under-perform, some (or all) users could be forced onto the low-bandwidth versions so that a better level of service is maintained.   # Forcibly change requests that begin /high/ to /low/ $url = http.getPath(); if( string.startsWith( $url, "/high" ) ) { $url = string.replace( $url, "/high", "/low" ); http.setPath( $url ); }   Example: Ticket Booking Systems   Ticket Booking systems for major events often suffer enormous floods of demand when tickets become available.   You can use Stingray's request rate shaping system to limit how many visitors are admitted to the service, and if the service becomes overwhelmed, you can send back a ‘please try again’ message rather than keeping the user ‘on hold’ in the queue indefinitely.   Suppose the ‘booking’ rate shaping class is configured to admit 10 users per second, and that users enter the booking process by accessing the URL /bookevent?eventID=<id>.  This rule ensures that no user is queued for more than 30 seconds, by keeping the queue length to no more than 300 users (10 users/second * 30 seconds):   # limit how quickly users can book events $url = http.getPath(); if( $url == "/bookevent" ) { # How many users are already queued? 
if( rate.getBacklog( "booking" ) > 300 ) { http.redirect( "http://www.mysite.com/too_busy.html"); } else { rate.use( "booking" ); } }   Example: Prioritizing Resource Usage In many cases, resources are limited, and when a site is overwhelmed, users’ requests still need to be served.   Consider the following scenario:   The site runs a cluster of 4 identical application servers (servers ‘1’ to ‘4’); Users are categorized into casual visitors and customers; customers have a ‘Cart’ cookie, and casual visitors do not.   Our goal is to give all users the best possible level of service, but if customers begin to get a poor level of service, we want to prioritize them over casual visitors.  We desire that more than 80% of customers get responses within 100ms.   This can be achieved by splitting the 4 servers into 2 pools: the ‘allservers’ pool contains servers 1 to 4, and the ‘someservers’ pool contains servers 1 and 2 only.   When the service is poor for the customers, we will restrict the casual visitors to just the ‘someservers’ pool.  This effectively reserves the additional servers 3 and 4 for the customers’ exclusive use.   The following code uses the ‘response’ SLM class to measure the level of service that customers receive:   $customer = http.getCookie( "Cart" ); if( $customer ) { connection.setServiceLevelClass( "response" ); pool.use( "allservers" ); } else { if( slm.conforming( "response" ) < 80 ) { pool.use( "someservers" ); } else { pool.use( "allservers" ); } }   Capability 6: Selective Traffic Optimization Some of Traffic Manager's features can be used to improve the end user’s experience, but they take up resources on the system:   Pulse Web Accelerator (Aptimizer) rewrites page content for faster download and rendering, but is very CPU intensive. Content Compression reduces the bandwidth used in responses and gives better response times, but it takes considerable CPU resources and can degrade performance. 
Feature Brief: Traffic Manager Content Caching can give much faster responses, and it is possible to cache multiple versions of content for each user.  However, this consumes memory on the system.   All of these features can be enabled and disabled on a per-user basis, as part of a service policy.   Pulse Web Accelerator (Stingray Aptimizer)   Use the http.aptimizer.bypass() and http.aptimizer.use() TrafficScript functions to control whether Traffic Manager will employ the Aptimizer optimization module for web content.    Note that these functions only refer to optimizations to the base HTML document (e.g. index.html, or other content of type text/html) - all other resources will be served as appropriate.  For example, if a client receives an aptimized version of the base content and then requests the image sprites, Traffic Manager will always serve up the sprites.   # Optimize web content for clients based in Australia $ip = request.getRemoteIP(); if( geo.getCountry( $ip ) == "Australia" ) { http.aptimizer.use( "All", "Remote Users" ); }   Content Compression Use the http.compress.enable() and http.compress.disable() TrafficScript functions to control whether or not Traffic Manager will compress response content to the remote client.   Note that Traffic Manager will only compress content if the remote browser has indicated that it supports compression.   On a lightly loaded system, it’s appropriate to compress all response content whenever possible:   http.compress.enable();   On a system where the CPU usage is becoming too high, you can selectively compress content:   # Don’t compress by default http.compress.disable(); if( $isvaluable ) { # do compress in this case http.compress.enable(); }   Content Caching Traffic Manager can cache multiple different versions of an HTTP response.  
For example, if your home page is generated by an application that customizes it for each user, Traffic Manager can cache each version separately, and return the correct version from the cache for each user who accesses the page.   Traffic Manager's cache has a limited size so that it does not consume too much memory and cause performance to degrade.  You may wish to prioritize which pages you put in the cache, using the http.cache.disable() and http.cache.enable() TrafficScript  functions.   Note: you also need to enable Content Caching in your Virtual Server configuration; otherwise the TrafficScript cache control functions will have no effect.   # Get the user name $user = http.getCookie( "UserName" ); # Don’t cache any pages by default: http.cache.disable(); if( $isvaluable ) { # Do cache these pages for better performance. # Each user gets a different version of the page, so we need to cache # the page indexed by the user name. http.cache.setkey( $user ); http.cache.enable(); }   Custom Logging A service policy can be complicated to construct and implement.   The TrafficScript functions log.info(), log.warn() and log.error() are used to write messages to the event log, and so are very useful debugging tools to assist in developing complex TrafficScript rules.   For example, the following code:   if( $isvaluable && slm.conforming( "timer" ) < 70 ) { log.info( "User ".$user." needs priority" ); }   … will append the following message to your error log file:   $ tail $ZEUSHOME/zxtm/log/errors [20/Jan/2013:10:24:46 +0000] INFO rulename rulelogmsginfo vsname User Jack needs priority   You can also inspect your error log file by viewing the ‘Event Log’ on the  Admin Server.   When you are debugging a rule, you can use log.info() to print out progress messages as the rule executes.  
The log.info() function takes a string parameter; you can construct complex strings by appending variables and literals together using the ‘.’ operator:   $msg = "Received ".connection.getDataLen()." bytes."; log.info( $msg );   The functions log.warn() and log.error() are similar to log.info().  They prefix error log messages with a higher priority - either “WARN” or “ERROR” and you can filter and act on these using the Event Handling system.   You should be careful when printing out connection data verbatim, because the connection data may contain control characters or other non-printable characters.  You can encode data using either ‘string.hexEncode()’ or ‘string.escape()’; you should use ‘string.hexEncode()’ if the data is binary, and ‘string.escape()’ if the data contains readable text with a small number of non-printable characters.   Conclusion Traffic Manager is a powerful toolkit for network and application administrators.  This white paper describes a number of techniques to use tools in the kit to solve a range of traffic valuation and prioritization tasks.   For more examples of how Traffic Manager and TrafficScript can manipulate and prioritize traffic, check out the Top Examples of Traffic Manager in action on the Pulse Community.
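As a footnote to the logging advice above, the difference between hex-encoding and escaping can be illustrated outside TrafficScript. This Python sketch is illustrative only: the function names are invented stand-ins and do not correspond to the implementations of string.hexEncode() and string.escape():

```python
def hex_encode(data: bytes) -> str:
    """Safe for fully binary data: every byte becomes two hex digits."""
    return data.hex()

def escape(data: bytes) -> str:
    """Better for mostly-readable text: keep printable ASCII characters,
    escape backslashes and non-printable bytes as \\xNN sequences."""
    out = []
    for b in data:
        if 32 <= b < 127 and b != ord('\\'):
            out.append(chr(b))
        else:
            out.append('\\x%02x' % b)
    return ''.join(out)

payload = b'GET /\x00\x01'
print(hex_encode(payload))   # 474554202f0001 - unreadable but unambiguous
print(escape(payload))       # GET /\x00\x01  - readable, control bytes escaped
```

The trade-off mirrors the advice in the article: hex-encoding loses readability but never produces surprising output, while escaping keeps text legible at the cost of longer output for binary-heavy data.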
Top examples of Pulse vADC in action   Examples of how SteelApp can be deployed to address a range of application delivery challenges.   Modifying Content   Simple web page changes - updating a copyright date Adding meta-tags to a website with Traffic Manager Tracking user activity with Google Analytics and Google Analytics revisited Embedding RSS data into web content using Traffic Manager Add a Countdown Timer Using TrafficScript to add a Twitter feed to your web site Embedded Twitter Timeline Embedded Google Maps Watermarking PDF documents with Traffic Manager and Java Extensions Watermarking Images with Traffic Manager and Java Extensions Watermarking web content with Pulse vADC and TrafficScript   Prioritizing Traffic   Evaluating and Prioritizing Traffic with Traffic Manager HowTo: Control Bandwidth Management Detecting and Managing Abusive Referers Using Pulse vADC to Catch Spiders Dynamic rate shaping slow applications Stop hot-linking and bandwidth theft! Slowing down busy users - driving the REST API from TrafficScript   Performance Optimization   Cache your website - just for one second? HowTo: Monitor the response time of slow services HowTo: Use low-bandwidth content during periods of high load   Fixing Application Problems   No more 404 Not Found...? Hiding Application Errors Sending custom error pages   Compliance Problems   Satisfying EU cookie regulations using The cookiesDirective.js and TrafficScript   Security problems   The "Contact Us" attack against mail servers Protecting against Java and PHP floating point bugs Managing DDoS attacks with Traffic Manager Enhanced anti-DDoS using TrafficScript, Event Handlers and iptables How to stop 'login abuse', using TrafficScript Bind9 Exploit in the Wild... 
Protecting against the range header denial-of-service in Apache HTTPD Checking IP addresses against a DNS blacklist with Traffic Manager Heartbleed: Using TrafficScript to detect TLS heartbeat records TrafficScript rule to protect against "Shellshock" bash vulnerability (CVE-2014-6271) SAML 2.0 Protocol Validation with TrafficScript Disabling SSL v3.0 for SteelApp   Infrastructure   Transparent Load Balancing with Traffic Manager HowTo: Launch a website at 5am Using Stingray Traffic Manager as a Forward Proxy Tunnelling multiple protocols through the same port AutoScaling Docker applications with Traffic Manager Elastic Application Delivery - Demo How to deploy Traffic Manager Cluster in AWS VPC   Other solutions   Building a load-balancing MySQL proxy with TrafficScript Serving Web Content from Traffic Manager using Python and Serving Web Content from Traffic Manager using Java Virtual Hosting FTP services Managing WebSockets traffic with Traffic Manager TrafficScript can Tweet Too Instrument web content with Traffic Manager Antivirus Protection for Web Applications Generating Mandelbrot sets using TrafficScript Content Optimization across Equatorial Boundaries
Many services now use RSS feeds to distribute frequently updated information like news stories and status reports. Traffic Manager's powerful TrafficScript language lets you process RSS XML data, and this article describes how you can embed several RSS feeds into a web document.   It illustrates Traffic Manager's response rewriting capabilities, XML processing and its ability to query several external data sources while processing a web request.   In this example, we'll show how you can embed special RSS tags within a static web document. Traffic Manager will intercept these tags in the document and replace them with the appropriate RSS feed data:   <!RSS http://community.brocade.com/community/product-lines/stingray/view-browse-feed.jspa?browseSite=place-content&browseViewID=placeContent&userID=9503&containerType=14&containerID=2005&filterID=contentstatus%5Bpublished%5D~objecttype~objecttype%5Bthread%5D !>   We'll use a TrafficScript rule to process web responses, seek out the RSS tag and retrieve, format and insert the appropriate RSS data.   Check the response   First, the TrafficScript rule needs to obtain the response data, and verify that the response is a simple HTML document. We don't want to process images or other document types!     # Check the response type $contentType = http.getResponseHeader( "Content-Type" ); if( ! string.startsWith( $contentType, "text/html" ) ) break; # Get the response data $body = http.getResponseBody();   Find the embedded RSS tags   Next, we can use a regular expression to search through the response data and find any RSS tags in it:   (.*?)<!RSS\s+(.*?)\s*!>(.*)   Stingray supports Perl compatible regular expressions (regexes). 
This regex will find the first RSS tag in the document, and will assign text to the internal variables $1, $2 and $3:   $1: the text before the tag $2: the RSS URL within the tag $3: the text after the tag   The following code searches for RSS tags:     while( string.regexmatch( $body, '(.*?)<!RSS\s+(.*?)\s*!>(.*)' )) {    $start = $1;    $url = $2;    $end = $3; }   Retrieve the RSS data   An asynchronous HTTP request is sufficient to retrieve the RSS XML data:     $rss = http.request.get( $url );     Transform the RSS data using an XSLT transform   The following XSLT transform can be used to extract the first 4 RSS items and format them up as an HTML <UL> list: <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">    <xsl:template match="/">     <ul>       <xsl:apply-templates select="//item[position()&lt;5]"/>     </ul>    </xsl:template>         <xsl:template match="item">       <xsl:param name="URL" select="link/text()"/>       <xsl:param name="TITLE" select="title/text()"/>       <li><a href="{$URL}"><xsl:value-of select="$TITLE"/></a></li>    </xsl:template> </xsl:stylesheet>   Store the XSLT file in the Traffic Manager conf/extra directory, naming it 'rss.xslt', so that the rule can look it up using resource.get().   You can apply the XSLT transform to the XML data using the xml.xslt.transform function. The function returns the result with HTML entity encoding; use string.htmldecode to remove these:   $xsl = resource.get( "rss.xslt" ); $html = string.htmldecode( xml.xslt.transform( $rss, $xsl ) );   The entire rule   The entire response rule, with a little additional error checking, looks like this: $contentType = http.getResponseHeader( "Content-Type" ); if( ! 
string.startsWith( $contentType, "text/html" ) ) break; $body = http.getResponseBody(); $new = ""; $changed = 0; while( string.regexmatch( $body, '(.*?)<!RSS\s+(.*?)\s*!>(.*)' )) {    $start = $1;    $url = $2;    $end = $3;      $rss = http.request.get( $url );    if( $1 != 200 ) {       $html = "<ul><li><b>Failed to retrieve RSS feed</b></li></ul>";    } else {       $xsl = resource.get( "rss.xslt" );       $html = string.htmldecode( xml.xslt.transform( $rss, $xsl ) );       if( $html == -1 ) {           $html = "<ul><li><b>Failed to parse RSS feed</b></li></ul>";       }    }      $new = $new . $start . $html;    $body = $end;    $changed = 1; } if( $changed )   http.setResponseBody( $new . $body );
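The structure of that rule - capture the text before the tag, substitute rendered feed markup, then continue scanning the remainder - can be sketched in Python for comparison. This is an illustration only: render_feed() is an invented stand-in for the http.request.get() call and XSLT transform performed by the TrafficScript rule.

```python
import re

# Same pattern as the TrafficScript rule: text-before, feed URL, text-after
TAG = re.compile(r'(.*?)<!RSS\s+(.*?)\s*!>(.*)', re.S)

def render_feed(url):
    # Stand-in for fetching the feed and applying rss.xslt
    return '<ul><li><b>RSS: %s</b></li></ul>' % url

def expand_rss_tags(body):
    out, changed = '', False
    while True:
        m = TAG.match(body)
        if not m:
            break
        start, url, body = m.groups()   # body becomes the remainder after the tag
        out += start + render_feed(url)
        changed = True
    return out + body if changed else body

page = '<p>News:</p><!RSS http://example.com/feed.xml !><p>end</p>'
print(expand_rss_tags(page))
```

As in the TrafficScript version, the loop consumes the document left to right, so multiple RSS tags in one page are each expanded exactly once.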
With the evolution of social media as a tool for marketing and current events, we commonly see the Twitter feed updated long before the website. It’s not surprising for people to rely on these outlets for information.   Fortunately, Twitter provides a suite of widgets and scripting tools to integrate Twitter information into your application. The tools available can be implemented with few code changes and support many applications. Unfortunately, the very reason a website is not as fresh as social media is the code changes required. The code could be owned by different people in the organization, or you may have limited access to the code due to security or the CMS environment. Traffic Manager provides the ability to insert the required code into your site with no changes to the application.     Twitter Overview "Embeddable timelines make it easy to syndicate any public Twitter timeline to your website with one line of code. Create an embedded timeline from your widgets settings page on twitter.com, or choose “Embed this…” from the options menu on profile, search and collection pages.   Just like timelines on twitter.com, embeddable timelines are interactive and enable your visitors to reply, Retweet, and favorite Tweets directly from your pages. Users can expand Tweets to see Cards inline, as well as Retweet and favorite counts. An integrated Tweet box encourages users to respond or start new conversations, and the option to auto-expand media brings photos front and center.   These new timeline tools are built specifically for the web, mobile web, and touch devices. They load fast, scale with your traffic, and update in real-time." -twitter.com   Thank you Faisal Memon for the original article Using TrafficScript to add a Twitter feed to your web site   As happens more often than not, platform access changes. This time Twitter is our prime example. 
When loading the Twitter js, http://widgets.twimg.com/j/2/widget.js, you can see the following notice:   The Twitter API v1.0 is deprecated, and this widget has ceased functioning.","You can replace it with a new, upgraded widget from <https://twitter.com/settings/widgets/new/"+H+">","For more information on alternative Twitter tools, see <https://dev.twitter.com/docs/twitter-for-websites>   To save you some time: Twitter really does mean deprecated, and the information link is broken. For more information on alternative Twitter tools, see Twitter for Websites | Home. For information related to this article, please see Embedded Timelines | Home.   One of the biggest changes in the current Twitter platform is the requirement for a "data-widget-id". The data-widget-id is unique, and is used by the Twitter platform to provide the information needed to generate the timeline. Before getting started with Traffic Manager and the web application, you will have to create a new widget using your Twitter account at https://twitter.com/settings/widgets/new/. Once you create your widget, you will see the "Copy and paste the code into the HTML of your site." section on the Twitter website. Along with other information, this code contains your "data-widget-id". See the Create widget image.   Create widget (click to zoom)   This example uses a TrafficScript response rule to rewrite the HTTP body from the application. Specifically, I know the body for my application includes an HTML comment <!--SIDEBAR-->.   This rule will insert the required client-side code into the HTTP body and send the updated body on to complete the request.  The $inserttag variable can be just about anything in the body itself, e.g. the "MORE LIKE THIS" text on the side of this page. Simply change the code below to:     $inserttag = "MORE LIKE THIS";   Some of the values used in the example (i.e. width, data-theme, data-link-color, data-tweet-limit) are not required. They have been included to demonstrate customization. 
When you create/save the widget on the Twitter website, the configuration options (see the Create widget image above) are associated with the "data-widget-id". For example, "data-theme": if you saved the widget with the light theme and you want the light theme, it can be excluded. Alternatively, you can use "data-theme=dark" to override the value saved with the widget.  In the example timeline picture, the data-link-color value is used to override the value provided with the saved "data-widget-id".   Example response rule, line-spaced for readability and using variables for easy customization:

# Only modify text/html pages
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" )) break;

$inserttag = "<!--SIDEBAR-->";

# Create a widget ID @ https://twitter.com/settings/widgets/new
# This is the id used by riverbed.com
$ttimelinedataid = "261517019072040960";
$ttimelinewidth = "520";         # max could be limited by ID config.
$ttimelineheight = "420";
$ttimelinelinkcolor = "#0080ff"; # 0 for default or ID config; #0080ff & #0099cc are nice
$ttimelinetheme = "dark";        # "light" or "dark"
$ttimelinelimit = "0";           # 0 = unlimited with scroll. >=1 will ignore height.
# See https://dev.twitter.com/web/embedded-timelines#customization for other options.

$ttimelinehtml = "<a class=\"twitter-timeline\" " .
   "width=\"" . $ttimelinewidth .
   "\" height=\"" . $ttimelineheight .
   "\" data-theme=\"" . $ttimelinetheme .
   "\" data-link-color=\"" . $ttimelinelinkcolor .
   "\" data-tweet-limit=\"" . $ttimelinelimit .
   "\" data-widget-id=\"" . $ttimelinedataid .
   "\"></a><script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)" .
   "[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id))" .
   "{js=d.createElement(s);js.id=id;js.src=p+" .
   "\"://platform.twitter.com/widgets.js\";fjs.parentNode.insertBefore(js," .
   "fjs);}}(document,\"script\",\"twitter-wjs\");" .
   "</script><br>" . $inserttag;

$body = http.getResponseBody();
$body = string.replace( $body, $inserttag, $ttimelinehtml );
http.setResponseBody( $body );

A short version of the rule above, still with line breaks for readability:

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" )) break;

http.setResponseBody( string.replace( http.getResponseBody(), "<!--SIDEBAR-->",
  "<a class=\"twitter-timeline\" width=\"520\" height=\"420\" data-theme=\"dark\" " .
  "data-link-color=\"#0080ff\" data-tweet-limit=\"0\" data-widget-id=\"261517019072040960\">" .
  "</a><script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test" .
  "(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;" .
  "js.src=p+\"://platform.twitter.com/widgets.js\";fjs.parentNode.insertBefore(js,fjs);}}" .
  "(document,\"script\",\"twitter-wjs\");</script><br><!--SIDEBAR-->" ));

Result from either rule:  
You may be familiar with the security concept of a 'honeypot' - a sandboxed, sacrificial computer system that sits safely away from the primary systems.  Any attempts to access that computer are a strong indicator that an attacker is at work, probing for weak points in a network.   A recent Slashdot article raised an interesting idea... 'honeywords' are fake accounts in a password database that don't correspond to real users.  Any attempts to log in with one of these accounts is a strong indicator that the password database has been stolen.   In a similar vein, attempts to log in with common, predictable admin accounts are a strong indicator that an attacker is scanning your system and looking for weaknesses.  This article describes how you can detect these attacks with ease, and then considers different methods you could use to block the attacker.   Detecting Attack Attempts   Attackers look for common account names and passwords (see [1], [2] and [3])   Traffic Manager is in an ideal position to detect attack attempts.  It can inspect the username and password in each login attempt, and flag an alert if a user appears to be scanning for familiar usernames.   Step 1: Determine how the login process functions   Credentials are usually presented to the server as HTTP form parameters, typically in an HTTP POST to an SSL-protected endpoint: Web Inspection tools such as the Chrome Developer tools (illustrated above) help you understand how the authentication credentials are presented to the login service.   You can use the TrafficScript function http.getFormParam() to look up the submitted HTTP form parameters - this function extracts parameters from both the query string (GET and POST requests) and HTTP request body (POST requests), handles any unusual body transfer encoding, and %-decodes the values:   $userid = http.getFormParam( "Email" ); $pass = http.getFormParam( "Password" );   Step 2: Does this constitute an attack?   
You'll need to make a judgement as to what constitutes an attack attempt against your service.  A single attempt to log-in with 'admin:admin' is probably sufficient to block a user, but multiple attempts in a short period of time certainly indicate a concerted attack.   An easy way to count user/password combinations is to use a rate shaping class to count events.  Stingray's rate classes are usually used to implement queues (delaying requests that exceed the per-second or per-minute queue), but you can also use the rate.use.noQueue() function to determine if an event has exceeded the rate limit or not, without queuing it.   Let's construct a policy that detects if a particular source IP address is trying to log in to one of our false 'admin' accounts too frequently:   $path = http.getPath(); if( $path != "/cgi-bin/login.cgi" ) break; $ip = request.getRemoteIP(); $user = http.getFormParam( "user" ); if( string.regexmatch( $user, "^(admin|root|phpadmin|test|guest|user|administrator|mysql|adm|oracle)$" ) ) { if( rate.use.noQueue( "5 per minute", $ip ) == 0 ) { # User has exceeded the limits .... } }   An aside: If you would like to maintain a large list of honeyword names (making sure that none of them correspond to real accounts), then you may find it easier to store them in an external table using libTable.rts: Interrogating tables of data in TrafficScript.       Responding to Attack Attempts   If you determine that a particular IP address is generating attack attempts and you want to block it, there are a number of ways that you can do so.  
They vary in complexity, accuracy and the ability to 'time out' the period that an IP address is blocked for:

Store data locally in the global data segment - Straightforward to code, timeouts possible, not cluster-aware
Store data in the resource directory - Straightforward to code, timeouts possible, is cluster-aware
Update configuration in service protection policy - Straightforward to code, difficult to avoid race conditions, not possible to time out the configuration, is cluster-aware
Provision iptables rules from an event - Complex to code accurately but very effective, not possible to time out, is cluster-aware

Updating the configuration in a service protection policy could be achieved by calling the REST API from TrafficScript - perform a GET on the configuration ( /api/tm/1.0/config/active/protection/ name ), update the banned array, and PUT the configuration back again.  However, there is no natural way to remove (timeout) a block on an IP address after a period of inactivity.   Provisioning iptables rules would be possible with a specific event handler that responded to the TrafficScript function event.emit( "block", $ip ), but once again, there's no easy way to time a block rule out.   Storing data locally in the resource directory is a good approach, and is described in detail in the article Slowing down busy users - driving the REST API from TrafficScript.  The basic premise is that you can use the REST API to 'touch' a file (named after an IP address) in the resource directory, and you block a user if their IP address corresponds to a file in the resource directory that is not too old.  However, if the user does not return, you will build up a large number of files in the resource directory that should be manually pruned.   Storing data in the global data segment (How is memory managed in TrafficScript?) is perhaps the best solution.  
The following code sample illustrates the basic premise:     $prefix = "blocked-ip-address:"; # Record that an IP address is blocked data.set( $prefix.$ip, 1 ); # Check if an IP address is blocked if( data.get( $prefix.$ip ) ) { connection.discard();#sthash.YB8cEYo7.dpuf } # Delete all records data.reset( $prefix );   You could implement timeouts in a simple fashion, for example, by calling data.reset() on the first transaction after the top of every hour:   $hour = sys.time.hour(); $last = data.get( $prefix."hour" ); if( $last != $hour ) { data.reset( $prefix ); data.set( $prefix."hour", $hour ); }   An aside: There is a very slight risk of a race condition here (if two cores run the rule simultaneously) but the effects are not significant.   This approach gives a simple and effective solution to the problem of detecting logins to fake admin accounts, and then blocking the IP address for up to an hour.   What if I want to block IP addresses for longer?   One weakness of the approach above is that if an IP address is added to the block table at 59 minutes past the hour, it will be removed a minute later.  This may not be a serious fault; if the user is continuing to try to force admin accounts, the rule will detect this and block the IP address shortly after.   An alternative solution is to store two tables - one for odd-numbered hours, and one for even-numbered hours:   When you add an IP address, place it in the odd or even table according to the current hour When you test for the presence of an IP address, check both tables When the hour rolls over and you switch to the even-numbered table (for example), delete all of the entries (using data.reset ) before proceeding - they will be between one and two hours old   $prefix = "blocked-ip-address:"; # Check if an IP address is blocked if( data.get( $prefix."0:".$ip ) || data.get( $prefix."1:".$ip ) ) { connection.discard(); } # Add an IP address (this is an infrequent operation we hope!) 
$hour = sys.time.hour(); $pp = ( $hour % 2 ) . ":"; # pp is either 0: or 1: $last = data.get( $prefix.$pp."hour" ); if( $last != $hour ) { data.reset( $prefix.$pp ); data.set( $prefix.$pp."hour", $hour ); } data.set( $prefix.$pp.$ip, 1 );   This extension to the rule could further be extended to any number of tables, and to any time interval, though this is almost certainly overkill for this solution.   Read More   Interested in knowing what usernames are most commonly used?  Check out the article Being Lazy with Java Extensions and the 'CountThis' extension Other security and denial-of-service -related articles - check out the Security section of the Top Stingray Examples and Use Cases article
View full article
Popular news and blogging sites such as Slashdot and Digg have huge readerships. They are community driven and allow their members to post articles on various topics ranging from hazelnut chocolate bars to global warming. These sites, due to their massive readership, have the power to generate huge spikes in the web traffic to those (un)fortunate enough to get mentioned in their articles. Fortunately Traffic Manager and TrafficScript can help.   If the referenced site happens to be yours, you are faced with dealing with this sudden and unpredictable spike in bandwidth and request rate, causing:   a large proportion or all of your available bandwidth to be consumed by visitors referred to you by this popular site; and in extreme cases, a cascade failure across your web servers as each one becomes overloaded, fails and, in doing so, adds further load onto the remaining web servers.   Bandwidth Management and Rate Shaping   Traffic Manager has the ability to shape traffic in two important ways. Firstly, you can restrict the amount of bandwidth any client or group of clients are allowed to consume. This is commonly known as "Bandwidth Management" and in Traffic Manager it's configured by using a bandwidth class. Bandwidth classes are used to specify the maximum bits per second to make available. The alternative method is to limit the number of requests that those clients or group of clients can make per second and/or per minute. This is commonly known as "Rate Shaping" and is configured within a rate class.   Both Rate Shaping and Bandwidth Management classes are configured and stored within the catalog section of Traffic Manager. Once you have created a class it is ready for use and can be applied to one or more of your Virtual Servers. However the true power of these Traffic Shaping features really becomes apparent when you make use of them with TrafficScript.   What is an Abusive Referer?   
I would class an Abusive Referer as any site on the internet that refers enough traffic to your server to overwhelm it and effectively deny service to other users. This abuse is usually unintentional, the problem lies in the sheer number of people wanting to visit your site at that one time. This slashdot effect can be detected and dealt with by a TrafficScript rule and either a Bandwidth or a Rate Class.   Detecting and Managing Abusive Referers   Example One   Take a look at the TrafficScript below for an example of how you could stop a site (in this instance slashdot) from from using a large proportion or all of your available bandwidth.   $referrer = http.getHeader( "Referer" ); if( string.contains( $referrer, "slashdot" ) ) { http.addResponseHeader( "Set-Cookie", "slashdot=1" ); response.setBandwidthClass( "slashdot" ); } if( http.getCookie( "slashdot" ) ) { response.setBandwidthClass( "slashdot" ); }   In this example we are specifically targeting slashdot users and preventing them from using more bandwidth than we have allotted them in our "slashdot" bandwidth class. This rule requires you to know the name of the site you want protection from, but this rule or similar could be modified to defend against other high traffic sites. Example Two The next example is a little more complicated, but will automatically limit all requests from any referer. I've chosen to use two rate classes here, BusyReferer for those sites I allow to send a large amount of traffic and StandardReferers for those I don't. At the top I specify a $whitelist, which contains sites I never want to rate shape, and $highTraffic which is a list of sites I'm going to shape with my BusyReferer class. By default, all traffic not in the white list is sent through one of my rate classes, but only on entry to the site. That's because subsequent requests will have myself as the referer and will be whitelisted. 
In times of high load, when a referer is sending more traffic than the rate class allows, a back log will build up, at that point we will also start issuing cookies to put the offending referers into a bandwidth class.   # Referer whitelist. These referers are never rate limited. $whitelist = "localhost 172.16.121.100"; # Referers that are allowed to pass a higher number of clients. $highTraffic = "google mypartner.com"; # How many queued requests are allowed before we track users. $shapeQueue = 2; # Retrieve the referer and strip out the domain name part. $referer = http.getheader("Referer"); $referer = String.regexsub($referer, ".*?://(.*?)/.*", "$1", "i" ); # Check to see if this user has already been given an abuse cookie. # If they have we'll force them into a bandwidth class if ( $cookie = http.getCookie("AbusiveReferer") ) { response.setBandwidthClass("AbusiveReferer"); } # If the referer is whitelisted then exit. if ( String.contains( $whitelist, $referer ) ) { break; } # Put the incoming users through the busy or standard rate classes # and check the queue length for their referer. if ( String.contains( $highTraffic, $referer ) ) { $backlog = rate.getbacklog("BusyReferer", $referer); rate.use("BusyReferer", $referer); } else { $backlog = rate.getbacklog("StandardReferer", $referer); rate.use("StandardReferer", $referer); } # If we have exceeded our backlog limit, then give them a cookie # this will enforce bandwidth shaping for subsequent requests. if ( $backlog > $shapeQueue ) { http.setResponseCookie("AbusiveReferer", $referer); response.setBandwidthClass("AbusiveReferer"); }   In order for the TrafficScript to function optimally, you must enter your servers own domainname(s) into the white list. If you do not, then the script will perform rate shaping on everyone surfing your website!   You also need to set appropriate values for the BusyReferer and StandardReferer shaping classes. 
Remember we're only counting the clients entry to the site, so Perhaps you want to set 10/minute as a maximum standard rate and then 20/minute for your BusyReferer rate.   In this script we also use a bandwidth class for when things get busy. You will need to create this class, called "AbusiveReferer" and assign it an appropriate amount of bandwidth. Users are only put into this class when their referer is exceeding the rate of referrals set by the relevant rate class.   Shaping with Context   Rate Shaping classes can be given a context so you can apply the class to a subset of users, based on a piece of key data. The second script uses context to create an instance of the Rate Shaping class for each referer. If you do not use context, then all referers will share the same instance of the rate class.   Conclusion   Traffic Manager can use bandwidth and rate shaping classes to control the number of requests that can be made by any group of clients. In this article, we have covered choosing the class based on the referer, which has allowed us to restrict the rate at which any one site can refer visitors to us. These examples could be modified to base the restrictions on other data, such as cookies, or even extended to work with other protocols. A good example would be FTP, where you could extract the username from the FTP logon data and apply a bandwidth class based on the username.
View full article