Pulse Secure vADC

To get started quickly with Python on Stingray Traffic Manager, go straight to PyRunner.jar: Running Python code in Stingray Traffic Manager. This article dives deep into how that extension was developed.

As is well documented, we support the use of Java extensions to manipulate traffic. One of the great things about supporting "Java" is that this really means supporting the JVM platform, which in turn means we support any language that runs on the JVM and can access the Java libraries.

Java isn't my first choice of languages, especially when it comes to web development. My language of choice in this sphere is Python, and thanks to the Jython project we can write our extensions in Python!

Jython comes with a servlet named PyServlet which works with Stingray out of the box, so this is a good place to start. First, let's quickly set up our Jython environment; working Java support is a prerequisite, of course. (If you're not already familiar with Stingray Java extensions, I recommend reviewing the Java Development Guide available in the Stingray Product Documentation and the Splash guide Writing Java Extensions - an introduction.)

Grab Jython 2.5 (if there is now a more recent version, I expect it will work fine as well). Jython comes in the form of a .jar installer; on Linux I recommend using the CLI install variant:

java -jar jython_installer-2.5.0.jar -c

Install to a path of your choosing; I've used /space/jython. Then upload /space/jython/jython.jar to your Java Extensions Catalog.

In the Java Extensions Catalog you can now see a couple of extensions provided by jython.jar, including org.python.util.PyServlet. By default this servlet maps URLs ending in .py to local Python files, which it compiles and caches.
Set up a test HTTP Virtual Server (I've created a test one called "Jython" tied to the "discard" pool), and add the following request rule to it:

if( string.endsWith( http.getPath(), ".py" ) ) {
   java.run( "org.python.util.PyServlet" );
}

The rule avoids errors by only invoking the extension for .py files. (Though if you invoke PyServlet for a .py file that doesn't exist, you will get a nasty NullPointerException stack trace in your event log.)

Next, create a file called Hello.py containing the following code:

from javax.servlet.http import HttpServlet

class Hello(HttpServlet):
    def doGet(self, request, response):
        toClient = response.getWriter()
        response.setContentType("text/html")
        toClient.println("<html><head><title>Hello from Python</title></head>" +
                         "<body><h1 style='color:green;'>Hello from Python!</h1></body></html>")

Upload Hello.py to your Java Extensions Catalog. Now if you visit the file Hello.py at the URL of your Virtual Server, you should see the message "Hello from Python!" Hey, we didn't have to compile anything! Your event log will have some messages about processing .jar files; this only happens on first invocation. You will also get a warning in your event log every time you visit Hello.py:

WARN java/* servleterror Servlet org.python.util.PyServlet: Unknown attribute (pyservlet)

This is just because Stingray doesn't set up or use this particular attribute. We'll ignore the warning for now, and get rid of it when we tailor PyServlet to be more convenient for Stingray use. The servlet will also have created some extra files in your Java Libraries & Data Catalog under a top-level directory WEB-INF. It is possible to change the location used for these files; we'll get back to that in a moment.

All quite neat so far, but the icing on the cake is yet to come. If you open up $ZEUSHOME/conf/jars/Hello.py in your favourite text editor, change the message to "Hello <em>again</em> from Python!"
and refresh your browser, you'll notice that the new message comes straight through. This is because the PyServlet code checks the .py file; if it is newer than the cached bytecode, it re-interprets and re-caches it. We now have somewhat-more-rapid application development for Stingray extensions, bringing together the excellence of Python and the extensiveness of the Java libraries.

However, what I really want to do is use TrafficScript at the top level to tell PyServlet which Python servlet to run. This will require a little tweaking of the code. We want to get rid of the code that resolves the .py file from the request URI and replace it with code that uses an argument passed in by TrafficScript. While we're at it, we'll make it non-HTTP-specific and add some other small tweaks. The changes are documented in comments in the code, which is attached to this document.

To compile the Stingray version of the servlet and pop the two classes into a single convenient .jar file, execute the following commands (adjusting paths to suit your environment if necessary):

$ javac -cp $ZEUSHOME/zxtm/lib/servlet.jar:/space/jython/jython.jar ZeusPyServlet.java
$ jar -cvf ZeusPyServlet.jar ZeusPyServlet.class ZeusPyServletCacheEntry.class

Upload the ZeusPyServlet.jar file to your Java Extensions Catalog. You should now have a ZeusPyServlet extension available. Change your TrafficScript rule to load this new extension and provide an appropriate argument:

if( string.endsWith( http.getPath(), ".py" ) ) {
   java.run( "ZeusPyServlet", "Hello.py" );
}

Now visiting Hello.py works just as before. In fact, visiting any URL that ends in .py will now generate the same result as visiting Hello.py. We have complete control over what Python code is executed from our TrafficScript rule, which is much more convenient.

If you continue hacking from this point you'll soon find that we're missing core parts of Python with the setup described so far.
For example, adding import md5 to your servlet code will break the servlet; you'd see this in your Stingray Event Log:

WARN servlets/ZeusPyServlet Servlet threw exception javax.servlet.ServletException: Exception during init of /opt/zeus/zws/zxtm/conf/jars/ServerStatus.py
WARN  Java: Traceback (most recent call last):
WARN  Java:    File "/opt/zeus/zws/zxtm/conf/jars/ServerStatus.py", line 2, in <module>
WARN  Java:      import md5
WARN  Java: ImportError: No module named md5

This is because the class files for the core Python libraries are not included in jython.jar. To get a fully functioning Jython we need to tell ZeusPyServlet where Jython is installed. To do this you must have Jython installed on the same machine as the Stingray software, and then set a configuration parameter for the servlet. In summary:

1. Install Jython on your Stingray machine; I've installed mine to /space/jython.
2. In Catalogs > Java > ZeusPyServlet add some parameters:
   Parameter: python_home, Value: /space/jython (or wherever you have installed Jython)
   Parameter: debug, Value: none required (this is optional; it will turn on some potentially useful debug messages)
3. Back in Catalogs > Java you can now delete all the WEB-INF files; now that Jython knows where it is installed, it doesn't need them.
4. Go to System > Traffic Managers and click the 'Restart Java Runner ...' button, then confirm the restart (this ensures no bad state is cached).

Now your Jython should be fully functional. Here's a script for you to try that uses MD5 functionality from both Python and Java. Just replace the content of Hello.py with the following code.
from javax.servlet.http import HttpServlet
from java.security import MessageDigest
from md5 import md5

class Hello(HttpServlet):
    def doGet(self, request, response):
        toClient = response.getWriter()
        response.setContentType("text/html")
        htmlOut = "<html><head><title>Hello from Python</title></head><body>"
        htmlOut += "<h1>Hello from Python!</h1>"
        # try a Python md5
        htmlOut += "<h2>Python MD5 of 'foo': %s</h2>" % md5("foo").hexdigest()
        # try a Java md5
        htmlOut += "<h2>Java MD5 of 'foo': "
        jmd5 = MessageDigest.getInstance("MD5")
        digest = jmd5.digest("foo")
        for byte in digest:
            htmlOut += "%02x" % (byte & 0xFF)
        htmlOut += "</h2>"
        # yes, the Stingray attributes are available
        htmlOut += "<h2>VS: %s</h2>" % request.getAttribute("virtualserver")
        htmlOut += "</body></html>"
        toClient.println(htmlOut)

An important point to realise about Jython is that beyond the usual core Python APIs you cannot expect all the third-party Python libraries out there to "just work". Non-core Python modules compiled from C (and any modules that depend on such modules) are the main issue here. For example, the popular Numeric package will not work with Jython. Not to worry, though: there are often pure-Python alternatives. Don't forget that you have all the Java libraries available too, and even special Java libraries designed to extend Python-like APIs to Jython, such as JNumeric, a Jython equivalent of Numeric. There's more information on the Jython wiki; I recommend reading through all the FAQs as a starting point. It is perhaps best to think of Jython as a language which gives you the neatness of Python syntax and the Python core with the utility of the massive collection of Java APIs out there.
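As an aside, the md5 module used in the servlet above has long been deprecated in CPython; in modern Python the same digest comes from hashlib. A quick sanity check (plain CPython, independent of Jython) of what both the Python and Java halves of the servlet should print for 'foo':

```python
import hashlib

def md5_hex(data: bytes) -> str:
    # Same digest that Jython's md5("foo").hexdigest() and Java's
    # MessageDigest.getInstance("MD5") produce for the same input.
    return hashlib.md5(data).hexdigest()

print(md5_hex(b"foo"))  # acbd18db4cc2f85cedef654fccc4a4d8
```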
View full article
Introduction

While I was thinking of writing an article on how to use the traffic manager to satisfy EU cookie regulations, I figured "somebody else has probably done all the hard work". Sure enough, a quick search turned up an excellent and (more importantly) free utility called cookiesDirective.js. In addition to cookiesDirective.js being patently nifty, its website left me with a nostalgic craving for a short, wide glass of milk.

Background

If you're reading this article, you probably have a good idea of why you might want (or need) to disclose to your users that your site uses cookies. You should visit http://cookiesdirective.com to gain a richer understanding of what the cookiesDirective script actually does and why you might want to use it. For the impatient, let's just assume that you're perfectly happy for random code to run in your visitors' browsers.

Requirements

A website.
A TrafficScript-enabled traffic manager, configured to forward traffic to your web servers.

Preparation

According to the directions, one must follow "just a simple 3-step process" in order to use cookiesDirective.js:

1. Move the cookie-generating JavaScript in your page (such as Google Analytics) into a separate file, and pass the name of the file to a function that causes it to be loaded before the closing </head> tag. Basically, this makes it possible to display the cookie disclosure message before the cookie-generating code gets run by the browser. That much moving of code around is not within the scope of this article; for now, let's assume that displaying the message to the user is "good enough".

2. Add a snippet of code to the end of your HTML body that causes the browser to download cookiesDirective.js. In the example code, it gets downloaded directly from cookiesdirective.com, but you should really download it and host it on your own web server if you're going to be using it in production.
3. Add another snippet of code that runs the JavaScript. This is the bit that causes the popup to appear.

The Goods

# The path to your home page?
$homepath = '/us/';
# The location on the page where the cookie notification should appear (top or bottom)?
$noticelocation = 'bottom';
# The URL that contains your privacy statement.
$privacyURL = 'http://www.riverbed.com/us/privacy_policy.php';

# ==== DO NOT EDIT BELOW THIS LINE! (unless you really want to) ====

sub insert_before_endbody($response, $payload){
   # Search from the end of the document for the closing body tag.
   $idx = string.findr($response, "</body>");
   # Insert the payload.
   $response = string.substring($response, 0, $idx-1) . $payload . string.substring($response, $idx, -1);
   # Return the response.
   return $response;
}

$path = http.getpath();
if( $path == $homepath ){
   # Initialize the response body.
   $response = http.getresponsebody();
   # Cookie-generating JavaScript gets loaded in this function.
   $wrapper = '<script type="text/javascript">function cookiesDirectiveScriptWrapper(){}</script>';
   # Imports the cookiesdirective code.
   # FIXME: Download the package and host it locally!
   $code = '<script type=_
   # Executes the cookiesdirective code, providing the appropriate arguments.
   $run = '<script type="text/javascript">cookiesDirective(\'' . $noticelocation . '\',0,\'' . $privacyURL . '\',\'\');</script>';
   # Insert everything into the response body.
   foreach($snippet in [$wrapper, $code, $run]){
      $response = insert_before_endbody($response, $snippet);
   }
   # Update the response data.
   http.setresponsebody($response);
}

This particular example works on the main Riverbed site. To get the code to work, you'll need to change at least the $homepath and $privacyURL variables. If you want the notice to appear at the top of the page, you can change the $noticelocation variable.

NOTE: Remember to apply this rule to your virtual server as a response rule!
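The insert_before_endbody helper simply splices markup in just before the last closing body tag. The same logic, sketched in Python purely for illustration (the function name mirrors the TrafficScript above; this is not a Stingray API):

```python
def insert_before_endbody(response: str, payload: str) -> str:
    # Search from the end of the document for the closing body tag.
    idx = response.rfind("</body>")
    if idx == -1:
        return response  # no </body> tag: leave the document untouched
    # Splice the payload in immediately before the tag.
    return response[:idx] + payload + response[idx:]

html = "<html><body><p>hi</p></body></html>"
print(insert_before_endbody(html, "<script>notice()</script>"))
```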
View full article
If tweeting is good enough for the Pope, then Stingray can certainly get in on the game! If you use tools to monitor Twitter accounts, this article will explain how you can direct Stingray's event log messages to a nominated twitter account so that you can be alerted immediately when a key event occurs.

You might also like to check out the companion article - TrafficScript can Tweet Too.

Creating the Twitter Application

Posting to Twitter is now a little involved. You need to create an application with the correct access (Read + Write), and then create the appropriate OAuth parameters so that clients can authenticate themselves to this application.

To start off, log on to https://dev.twitter.com/apps and create an application.

Follow the steps in https://dev.twitter.com/docs/auth/tokens-devtwittercom to create the access tokens so that you can post to your application from your own twitter account. The OAuth authentication for your application requires two public/secret key pairs that you can generate from the management website.

Configuring Stingray

Create the event action

Download and edit the following event handling script, updating the OAuth tokens to match your own application and access tokens:

#!/usr/bin/python
import sys
import socket
import oauth2 as oauth

def oauth_req(url, http_method="GET", post_body=None, http_headers=None):
    consumer = oauth.Consumer(key="Pplp3z4ogRW4wKP3YOlAA",
                              secret="jbWlWYOSgzC9XXXXXXXXXXXXXXXX")
    token = oauth.Token(key="1267105740-xhMWKdsNqoKAof7wptTZ5PmNrodBJcQm1tQ5ssR",
                        secret="p8GCJUZLXk1AeXXXXXXXXXXXX")
    client = oauth.Client(consumer, token)
    resp, content = client.request(url, method=http_method, body=post_body,
                                   headers=http_headers)
    return content

msg = sys.argv[-1]
words = msg.split('\t')
host = socket.gethostname().split('.')[0]
update = oauth_req("https://api.twitter.com/1.1/statuses/update.json", "POST",
                   "status=" + words[0] + ": " + words[-1] +
                   " (" + words[-2] + " " + words[1].split('/')[-1] + ") #" + host)

From your Stingray Admin Server, create a new External Program Action named 'Tweet', and upload the edited event handling script.

Note that the script won't work on the Stingray Virtual Appliance (because it does not have a Python interpreter), and on a Linux or Solaris host you will need to install the Python oauth2 module (pip install oauth2).

Test the action with the 'Update and Test' button in the edit page for the Tweet action, and review the event log to confirm that there are no errors. You should see the test tweet in your twitter feed.

Wire the action to the appropriate events

Wire up an event handler to trigger the Tweet action when the appropriate events are raised... and then sit back and follow your twitter stream to see what's up with your Stingrays.
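The script above assumes the event message passed in sys.argv[-1] is a tab-delimited string; the field layout here is inferred from how the script indexes into it (first field a severity, second a path-like event type, last the human-readable message), not from Stingray documentation. A quick sketch of how the status line gets assembled, using a hypothetical event message:

```python
def build_status(msg: str, host: str) -> str:
    # Hypothetical tab-delimited event message; the indexing mirrors
    # the action script above.
    words = msg.split('\t')
    return (words[0] + ": " + words[-1] +
            " (" + words[-2] + " " + words[1].split('/')[-1] + ") #" + host)

example = "WARN\tfaulttolerance/machinetimeout\tserious\tTraffic Manager timed out"
print(build_status(example, "stingray1"))
# WARN: Traffic Manager timed out (serious machinetimeout) #stingray1
```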
View full article
Not content with getting your Traffic Manager to tweet through the event handling system (Traffic Managers can Tweet Too), this article explains how you can tweet directly from TrafficScript.

This is more than just a trivial example: sending tweets from the event handling system requires an 'action program', and the Stingray Virtual Appliance does not include the necessary third-party libraries to conduct the OAuth-authenticated transaction. This example also illustrates how you can construct and perform OAuth authentication from TrafficScript.

Getting started - Create your Twitter 'Application'

To get started, you'll need to create a twitter 'Application' that will receive and process your twitter status updates. Follow the instructions in the Traffic Managers can Tweet Too article, up to the point where you have an application with the two public/secret key pairs.

The TrafficScript Rule

The following TrafficScript rule posts status updates using your application's and user's OAuth credentials. The document Authorizing a request | Twitter Developers describes the process for authorizing a request to the twitter service using OAuth.

The rule is for test and demonstration purposes. If you assign it to an HTTP virtual server (as a request rule), it will intercept all requests and look for a query-string parameter called 'status'. If that parameter exists, it will post the value of status to twitter using the credentials in the rule. Make sure to update the rule with the correct parameters from your twitter application.

The rule uses the TrafficScript HMAC library described in the document HowTo: Calculating HMAC hashes in TrafficScript.
# TrafficScript Twitter Client

import libHMAC.rts as hmac;

# Key client parameters
# https://dev.twitter.com/apps/<app-id>/oauth
$oauth_consumer_key = "Pplp3z4ogRW4wKP3YOlAA";
$oauth_token = "1267105740-xhMWKdsNqoKAof7wptTZ5PmNrodBJcQm1tQ5ssR";
$consumer_secret = "jbWlWYOSgzC9WXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX";
$oauth_token_secret = "p8GCJUZLXk1AeXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX";
$method = "POST";
$url = "https://api.twitter.com/1/statuses/update.json";
#-------

$status = http.getFormParam( "status" );
if( ! $status ) {
   http.sendResponse( 200, "text/plain",
      "Please provide a status in the querystring (?status=...)", "" );
}

$oauth = [
   "oauth_consumer_key" => $oauth_consumer_key,
   "oauth_nonce" => string.base64encode( string.randomBytes( 32 )),
   "oauth_signature_method" => "HMAC-SHA1",
   "oauth_timestamp" => sys.time(),
   "oauth_token" => $oauth_token,
   "oauth_version" => "1.0"
];

$pstring = "";
foreach( $k in array.sort( hash.keys( $oauth ))) {
   $pstring .= string_escapeStrict( $k ) . "=" . string_escapeStrict( $oauth[ $k ] ) . "&";
}
$pstring .= "status=" . string_escapeStrict( $status );

$sbstring = string.uppercase( $method ) . "&" . string_escapeStrict( $url ) . "&" . string_escapeStrict( $pstring );
$skey = string_escapeStrict( $consumer_secret ) . "&" . string_escapeStrict( $oauth_token_secret );
$oauth["oauth_signature"] = string.base64encode( hmac.SHA1( $skey, $sbstring ) );

$body = "status=" . string_escapeStrict( $status );

$oauthheader = "OAuth ";
foreach( $k in hash.keys( $oauth )) {
   $oauthheader .= string_escapeStrict( $k ) . "=\"" . string_escapeStrict( $oauth[ $k ] ) . "\", ";
}
$oauthheader = string.drop( $oauthheader, 2 );

$r = http.request.post( $url, $body, "Authorization: " . $oauthheader, 10 );

$response = "Response code: " . $1 . " (" . $4 . ")\n"
   . "Content Type: " . $2 . "\n\n"
   . $r . "\n\n"
   . "I sent: POST " . $url . "\n"
   . $body . "\n\n"
   . " pstring: " . $pstring . "\n"
   . " sbstring: " . $sbstring . "\n"
   . " skey: " . $skey . "\n"
   . " signature: " . $oauth["oauth_signature"] . "\n\n"
   . $oauthheader . "\n";
http.sendResponse( 200, "text/plain", $response, "" );

# %-encoding to the strict standards of RFC 3986
sub string_escapeStrict( $a ) {
   $r = "";
   while( string.length( $a ) ) {
      $c = string.left( $a, 1 );
      $a = string.skip( $a, 1 );
      $i = ord( $c );
      if( ($i >= 0x30 && $i <= 0x39) || ($i >= 0x41 && $i <= 0x5A)
       || ($i >= 0x61 && $i <= 0x7A)
       || $i == 0x2D || $i == 0x2E || $i == 0x5F || $i == 0x7E ) {
         $r .= $c;
      } else {
         $h = ($i & 0xF0) >> 4;
         $l = ($i & 0x0F);
         $r .= "%" . string.substring( "0123456789ABCDEF", $h, $h )
                   . string.substring( "0123456789ABCDEF", $l, $l );
      }
   }
   return $r;
}

Submit a request to this virtual server, and if all is well you should get a 200 status code response, along with your first 'Hello, World' from TrafficScript.

One thing to note: twitter does rate-limit and de-dupe tweets from the same source, so repeatedly submitting the same URL without changing the query string each time is not going to work so well.

Good luck!
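The string_escapeStrict subroutine implements the strict RFC 3986 percent-encoding that OAuth 1.0 signatures require: only the unreserved characters (letters, digits, '-', '.', '_', '~') pass through unescaped. In Python the same behaviour falls out of urllib.parse.quote with an explicit safe set, which is a handy cross-check when debugging signature mismatches:

```python
from urllib.parse import quote

def escape_strict(s: str) -> str:
    # quote's default safe set is "/", which OAuth must NOT leave
    # unescaped, so pass the RFC 3986 unreserved punctuation explicitly.
    return quote(s, safe="-._~")

print(escape_strict("Hello, World!"))  # Hello%2C%20World%21
print(escape_strict("https://api.twitter.com/1/statuses/update.json"))
```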
View full article
On 27th February 2006, we took part in VMware's launch of their Virtual Appliance initiative. Riverbed Stingray (or 'Zeus Extensible Traffic Manager / ZXTM') was the first ADC product packaged as a virtual appliance; we were delighted to be a launch partner with VMware, and to gain certification in November 2006 when they opened up their third-party certification program.

We had to synchronize the release of our own community web content with VMware's website launch, which was scheduled for 9pm PST. That's 5am in our Cambridge UK dev labs!

With a simple bit of TrafficScript, we were able to test and review our new web content internally before the release, and make the new content live at 5am while everyone slept soundly in their beds.

Our problem...

The community website we operated was reasonably sophisticated. It was based on a blogging engine, and the configuration and content for the site was split between the filesystem and a local database. Content was served up from the database via the website and an RSS feed. To add a new section with new content to the site, it was necessary to coordinate a number of changes to the filesystem and the database together.

We wanted to make the new content live for external visitors at 5am on Monday 27th, but we also wanted the new content to be visible internally and to selected partners before the release, so that we could test and review it.

The obvious solution of scripting the database and filesystem changes to take place at 5am was not satisfactory. It was hard to test on the live site, and it did not let us publish the new site internally beforehand.

How we did it

if( $usenew ) {
   pool.use( "Staging server" );
} else {
   pool.use( "DMZ Server" );
}

We had a couple of options. We have a staging system that we use to develop new code and content before putting it on the live site.
This system has its own database and filesystem, and when we publish a change, we copy the new settings to the live site manually. We could have elected to run the new site on the staging system, and use Stingray to direct traffic to the live or staging server as appropriate.

However, this option would have exposed our staging website (running on a developer's desktop behind the DMZ) to live traffic, and created a vulnerable single point of failure. Instead, we modified the current site so that it could select the database to use based on the presence of an HTTP header:

$usenew = 0;

# For requests from internal users, always use the new site
$remoteip = request.getremoteip();
if( string.ipmaskmatch( $remoteip, "10.0.0.0/8" ) ) {
   $usenew = 1;
}

# If it's after 5am on the 27th, always use the new site
# Fix this before the 1st of the following month!
if( sys.time.monthday() == 27 && sys.time.hour() >= 5 ) {
   $usenew = 1;
}
if( sys.time.monthday() > 27 ) {
   $usenew = 1;
}

http.removeHeader( "NEWSITE" );
if( $usenew ) {
   http.addHeader( "NEWSITE", "1" );
}

PHP code

The PHP code overrides the database host when the header is present:

// The PHP code selects the database host
$DB_HOST = "livedb.internal.zeus.com";
if( isset( $_ENV['HTTP_NEWSITE'] ) && ( $_ENV['HTTP_HOST'] == 'knowledgehub.zeus.com' ) ) {
    $DB_HOST = "stagedb.internal.zeus.com";
}

This way, only the secured DMZ webserver processed external traffic, but it would use the internal staging database for the new content.

Did it work?

Of course it did! Because we used Stingray to categorize the traffic, we could safely test the new content, confident that the switchover would be seamless. No one was awake at 5am when the site went live, but traffic to the site jumped after the launch:
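The gating logic above (internal subnet, or past the launch time) is easy to unit-test outside TrafficScript. Here is a Python sketch of the same decision, with the launch hard-coded to 5am on the 27th exactly as in the rule:

```python
import ipaddress
from datetime import datetime

LAUNCH_DAY, LAUNCH_HOUR = 27, 5
INTERNAL = ipaddress.ip_network("10.0.0.0/8")

def use_new_site(remote_ip: str, now: datetime) -> bool:
    # Internal users always see the new site.
    if ipaddress.ip_address(remote_ip) in INTERNAL:
        return True
    # Everyone does once the launch time has passed (within the launch month).
    if now.day == LAUNCH_DAY and now.hour >= LAUNCH_HOUR:
        return True
    return now.day > LAUNCH_DAY

print(use_new_site("10.1.2.3", datetime(2006, 2, 26, 12)))   # True (internal)
print(use_new_site("203.0.113.9", datetime(2006, 2, 27, 4))) # False (too early)
print(use_new_site("203.0.113.9", datetime(2006, 2, 27, 5))) # True
```

Like the original rule, this logic is only valid for the launch month itself, hence the "fix this before the 1st of the following month" comment in the TrafficScript.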
View full article
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for SAP NetWeaver.   It has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.
View full article
The article Using Pulse vADC with SteelCentral Web Analyzer shows how to create and customize a rule to inject JavaScript into web pages to track end-to-end performance and measure the actual user experience, and how to enhance it to create dynamic instrumentation for a variety of use cases.

But to make it even easier to use Traffic Manager and SteelCentral Web Analyzer - BrowserMetrix, we have created a simple, encapsulated rule (included in the file attached to this article, "SteelApp-BMX.txt") which can be copied directly into Traffic Manager, and includes a form to let you customize the rule to include your own ClientID and AppID in the snippet. In this example, we will add the new rule to our example web site, "http://www.northernlightsastronomy.com", using the following steps:

1. Create the new rule

The quickest way to create a new rule on the Traffic Manager console is to navigate to the virtual server for your web application, click through to the Rules linked to this virtual server, and then, at the foot of the page, click "Manage Rules in Catalog." Type in a name for your new rule, ensure the "Use TrafficScript" and "Associate with this virtual server" options are checked, then click on "Create Rule".

2. Copy in the encapsulated rule

In the new rule, simply copy and paste in the encapsulated rule (from the file attached to this article, "SteelApp-BMX.txt") and click on "Update" at the end of the form.

3. Customize the rule

The rule is now transformed into a simple form which you can customize, and you can enter the "clientId" and "appId" parameters from the Web Analyzer - BrowserMetrix console. In addition, you must enter the hostname which Traffic Manager uses to serve the web pages. Enter only the hostname itself, excluding any prefix such as "http://" or "https://".

The new rule is now enabled for your application, and you can track it via the SteelCentral Web Analyzer console.

4. 
How to find your clientId and appId parameters   Creating and modifying your JavaScript snippet requires that you enter the “clientId” and “appId” parameters from the Web Analyzer – BrowserMetrix console. To do this, go to the home page, and click on the “Application Settings” icon next to your application:     The next screen shows the plain JavaScript snippet – from this, you can copy the “clientId” and “appId” parameters:     5. Download the template rule now!   You can download the template rule from file attached to this article, "SteelApp-BMX.txt" - the rule can be copied directly into Traffic Manager, and includes a form to let you customize the rule to include your own ClientID and AppID in the snippet.
View full article
...Riverbed customers protected!   When I got in to the office this morning, I wasn't expecting to read about a new BIND 9 exploit! So as soon as I'd had my first cup of tea, I sat down to put together a little TrafficScript magic to protect our customers.

BIND Dynamic Update DoS

The exploit works by sending a specially crafted DNS Update packet to a zone's master server. Upon receiving the packet, the DNS server will shut down. ISC, the creators of BIND, have this to say about the new exploit:

"Receipt of a specially-crafted dynamic update message to a zone for which the server is the master may cause BIND 9 servers to exit. Testing indicates that the attack packet has to be formulated against a zone for which that machine is a master. Launching the attack against slave zones does not trigger the assert."

"This vulnerability affects all servers that are masters for one or more zones - it is not limited to those that are configured to allow dynamic updates. Access controls will not provide an effective workaround."

Sounds nasty, but how easy is it to get access to code to exploit this vulnerability? Well, the guy who found the bug posted a fully functional perl script with the Debian Bug Report.

TrafficScript to the Rescue

I often talk to customers about how TrafficScript can be used to quickly patch bugs and vulnerabilities while they wait for a fix from the vendor or their own development teams. It's time to put my money where my mouth is, so here's the workaround for this particular vulnerability:

$data = request.get( 3 );
if( string.regexmatch( $data, "..[()]" ) ) {
   log.warn( "FOUND UPDATE PACKET" );
   connection.discard();
}

The above TrafficScript checks the opcode in the header of the incoming request, and if it's an UPDATE, we discard the connection. Obviously you could extend this script to add a whitelist of servers from which you want to allow updates, if necessary.
However, you should only have this script in place while your servers are vulnerable, and you should apply patches as soon as you can.   Be safe!
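Why does the regex "..[()]" catch UPDATE packets? In a DNS message the opcode occupies bits 3-6 of the third byte (the high flags byte), and UPDATE is opcode 5 (RFC 2136), so for a request that byte is 0x28, ASCII '(', with the RD bit clear, or 0x29, ')', with it set. A Python sketch of the same check:

```python
def is_dns_update(packet: bytes) -> bool:
    # The opcode is bits 3-6 of the third header byte (after the 2-byte ID);
    # opcode 5 is UPDATE (RFC 2136).
    if len(packet) < 3:
        return False
    return (packet[2] >> 3) & 0x0F == 5

# Opcode 5 with RD clear gives flags byte 0x28 -- ASCII '(' -- which is
# exactly what the TrafficScript regex "..[()]" matches.
print(is_dns_update(b"\x12\x34\x28\x00"))  # True
print(is_dns_update(b"\x12\x34\x00\x00"))  # False (ordinary query)
```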
View full article
An interesting use case cropped up recently: one of our users wanted to do some smarts with the login credentials of an FTP session. This article steps through a few sample FTP rules and explains how to manage this sort of traffic.

Before you begin

Make sure you have a suitable FTP client. The command-line ftp tool shipped with most Unix-like systems supports a -d flag that reports the underlying FTP messages, so it's great for this exercise.

Pick a target FTP server. I tested against ftp.riverbed.com and ftp.debian.org, but other FTP servers may differ for subtle reasons.

Review the FTP protocol specification. It's sufficient to know that it's a single TCP control channel, requests are of the form 'VERB[ parameter]\r\n' and responses are of the form 'CODE message\n'. Multi-line responses are accepted; all but the last line of the response include an additional hyphen ('CODE-message\n').

Create your FTP virtual server

Use the 'Add a new service' wizard to create your FTP virtual server. Just for fun, add a server banner (Virtual Server > Connection Management > FTP-Specific Settings). Verify that you can log in to your FTP server through Stingray, and that the banner is rewritten. Now we're good to go!

Intercepting login credentials

We want to intercept FTP login attempts, and change all logins to 'anonymous'. If a user logs in with 'username:password', we're going to convert that to 'anonymous:username' and discard the password.

Create the following Request Rule, and assign it to the FTP virtual server:

log.info( "Received connection: state is '" . connection.data.get( "state" ) . "'" );

if( connection.data.get( "state" ) == "" ) {
   # This is server-first, so we have no data on the first connect
   connection.data.set( "state", "connected" );
   break;
}

if( connection.data.get( "state" ) == "connected" ) {
   # Get the request line
   $req = string.trim( request.endswith( "\n" ) );
   log.info( " ... got request '" . $req . "'" );

   if( string.regexmatch( $req, "USER (.*)" ) ) {
      connection.data.set( "user", $1 );
      # Translate this to an anonymous login
      log.info( " ... rewriting request to 'USER anonymous'" );
      request.set( "USER anonymous\r\n" );
   }

   if( string.regexmatch( $req, "PASS (.*)" ) ) {
      $pass = $1;
      connection.data.set( "pass", $pass );
      $user = connection.data.get( "user" );
      # Set the appropriate password
      log.info( " ... rewriting request to 'PASS " . $user . "'" );
      request.set( "PASS " . $user . "\r\n" );
   }
}

Now, if you log in with your email address (for example) and a password, the rule will switch your login to an anonymous one and will log the result.

Authenticating the user's credentials

You can extend this rule to authenticate the credentials that the user provided. At the point in the rule where you have the username and password, you can call a Stingray authenticator, a Java Extension, or reference a table of data (libTable.rts: Interrogating tables of data in TrafficScript) in your TrafficScript rule:

# AD authentication
$ldap = auth.query( "AD Auth", $user, $pass );
if( $ldap['Error'] ) {
   log.error( "Error with authenticator 'AD Auth': " . $ldap['Error'] );
   connection.discard();
} else if( !$ldap['OK'] ) {
   log.info( "User not authenticated. Username and/or password incorrect" );
   connection.discard();
}
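The rewrite logic above is simple enough to model outside TrafficScript. Here is an illustrative Python sketch (not a Stingray API, just the USER/PASS mapping) showing how a 'username:password' login becomes 'anonymous:username':

```python
import re

def rewrite_ftp_command(line: str, state: dict) -> str:
    """Rewrite USER/PASS commands so 'username:password' logins become
    'anonymous:username', mirroring the TrafficScript rule above."""
    m = re.match(r"USER (.*)", line)
    if m:
        state["user"] = m.group(1)
        return "USER anonymous\r\n"
    m = re.match(r"PASS (.*)", line)
    if m:
        state["pass"] = m.group(1)           # real password, discarded
        return "PASS " + state.get("user", "") + "\r\n"
    return line  # pass all other commands through untouched

state = {}
print(rewrite_ftp_command("USER alice@example.com", state))  # USER anonymous
print(rewrite_ftp_command("PASS s3cret", state))             # PASS alice@example.com
```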
View full article
Meta-tags and the meta-description are used by search engines and other tools to infer more information about a website, and their judicious and responsible use can have a positive effect on a page's ranking in search engine results. Suffice it to say, a page without any 'meta' information is likely to score lower than the same page with some appropriate information.   This article (originally published December 2006) describes how you can automatically infer and insert meta tags into a web page, on the fly.   Rewriting a response   First, decide what to use to generate a list of related keywords.   It would have been nice to have been able to slurp up all the text on the page and calculate the most commonly occurring unusual words. Surely that would have been the über-geek thing to do? Well, not really: unless I was careful I could end up slowing down each response, and there would be the danger that I produced a strange list of keywords that didn’t accurately represent what the page is trying to say (and could also be wildly “off-message”).   So I instead turned to three sources of on-message page summaries - the title tag, the contents of the big h1 tag and the elements of the page path.   The script   First I had to get the response body:   $body = http.getResponseBody();   This will be grepped for keywords, mangled to add the meta-tags and then returned by setting the response body:   http.setResponseBody( $body );   Next I had to make a list of keywords. As I mentioned before, my first plan was to look at the path: by converting slashes to separators I should be able to generate some correct keywords, something like this:   $path = http.getPath(); $path = string.regexsub( $path, "/+", "; ", "g" );   After adding a few lines to first tidy up the path: removing slashes at the beginning and end; and replacing underscores with spaces, it worked pretty well.    
And, for solely aesthetic reasons I added   $path = string.uppercase($path);   Then, I took a look at the title tag. Something like this did the trick:   if( string.regexmatch( $body, "<title>\\s*(.*?)\\s*</title>", "i" ) ) { $title_tag_text = $1; }   (the “i” flag here makes the search case-insensitive, just in case).   This, indeed, worked fine. With a little cleaning-up, I was able to generate a meta-description similarly: I just stuck them together after adding some punctuation (solely to make it nicer when read: search engines often return the meta-description in the search result).   After playing with this for a while I wasn’t completely satisfied with the results: the meta-keywords looked great; but the meta-description was a little lacking in the real English department.   So, instead I turned my attention to the h1 tag on each page: it should already be a mini-description of each page. I grepped it in a similar fashion to the title tag and the generated description looked vastly improved.   Lastly, I added some code to check if a page already has a meta-description or meta-keywords to prevent the automatic tags being inserted in this case. This allows us to gradually add meta-tags by hand to our pages - and it means we always have a backup should we forget to add metas to a new page in the future.   The finished script looked like this:

# Only process HTML responses
$ct = http.getResponseHeader( "Content-Type" );
if( ! 
string.startsWith( $ct, "text/html" ) ) break;

$body = http.getResponseBody();
$path = http.getPath();

# remove the first and last slashes; convert remaining slashes
$path = string.regexsub( $path, "^/?(.*?)/?$", "$1" );
$path = string.replaceAll( $path, "_", " " );
$path = string.replaceAll( $path, "/", ", " );

if( string.regexmatch( $body, "<h1.*?>\\s*(.*?)\\s*</h1>", "i" ) ) {
   $h1 = $1;
   $h1 = string.regexsub( $h1, "<.*?>", "", "g" );
}

if( string.regexmatch( $body, "<title>\\s*(.*?)\\s*</title>", "i" ) ) {
   $title = $1;
   $title = string.regexsub( $title, "<.*?>", "", "g" );
}

if( $h1 ) {
   $description = "Riverbed - " . $h1 . ": " . $title;
   $keywords = "Riverbed, " . $path . ", " . $h1 . ", " . $title;
} else {
   $description = "Riverbed - " . $path . ": " . $title;
   $keywords = "Riverbed, " . $path . ", " . $title;
}

# only rewrite the meta-keywords if we don't already have some
if( !string.regexmatch( $body, "<meta\\s+name='keywords'", "i" ) ) {
   $meta_keywords = " <meta name='keywords' content='" . $keywords . "'/>\n";
}

# only rewrite the meta-description if we don't already have one
if( !string.regexmatch( $body, "<meta\\s+name='description'", "i" ) ) {
   $meta_description = " <meta name='description' content='" . $description . "'/>";
}

# find the title and stick the new meta tags in afterwards
if( $meta_keywords || $meta_description ) {
   $body = string.regexsub( $body, "(<title>.*</title>)", "$1\n" . $meta_keywords . $meta_description );
   http.setResponseBody( $body );
}

It should be fairly easy to adapt it to another site assuming the pages are built consistently.   This article was originally written by Sam Phillips in December 2006, modified and tested in February 2013
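The path clean-up is the fiddliest part to adapt, so it is worth prototyping in isolation. Here's a hedged Python sketch reproducing the same three rewriting steps (the helper name path_to_keywords is mine, for illustration only):

```python
import re

def path_to_keywords(path):
    """Mirror the TrafficScript path clean-up: strip a leading and a
    trailing slash, turn underscores into spaces, slashes into ', '."""
    path = re.sub(r"^/?(.*?)/?$", r"\1", path)
    path = path.replace("_", " ")
    path = path.replace("/", ", ")
    return path

print(path_to_keywords("/products/traffic_manager/"))  # products, traffic manager
```

Feeding candidate URL paths from your own site through a function like this is a quick way to sanity-check the keywords before putting the rule live.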
View full article
Many services now use RSS feeds to distribute frequently updated information like news stories and status reports. Traffic Manager's powerful TrafficScript language lets you process RSS XML data, and this article describes how you can embed several RSS feeds into a web document.   It illustrates Traffic Manager's response rewriting capabilities, XML processing and its ability to query several external datasources while processing a web request.   In this example, we'll show how you can embed special RSS tags within a static web document. Traffic Manager will intercept these tags in the document and replace them with the appropriate RSS feed data:   <!RSS http://community.brocade.com/community/product-lines/stingray/view-browse-feed.jspa?browseSite=place-content&browseViewID=placeContent&userID=9503&containerType=14&containerID=2005&filterID=contentstatus%5Bpublished%5D~objecttype~objecttype%5Bthread%5D !>   We'll use a TrafficScript rule to process web responses, seek out the RSS tag and retrieve, format and insert the appropriate RSS data.   Check the response   First, the TrafficScript rule needs to obtain the response data, and verify that the response is a simple HTML document. We don't want to process images or other document types!

# Check the response type
$contentType = http.getResponseHeader( "Content-Type" );
if( !string.startsWith( $contentType, "text/html" ) ) break;

# Get the response data
$body = http.getResponseBody();

Find the embedded RSS tags   Next, we can use a regular expression to search through the response data and find any RSS tags in it:   (.*?)<!RSS\s+(.*?)\s+!>(.*)   Stingray supports Perl compatible regular expressions (regexes). 
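The capture behaviour of that expression is easy to demonstrate off-box. Here's a hedged Python sketch (Python's re module is Perl-compatible in the ways used here; the sample document and feed URL are made up):

```python
import re

doc = "<p>before</p><!RSS http://example.com/feed.xml !><p>after</p>"

# Same three capture groups as the TrafficScript rule:
# text before the tag, the URL inside the tag, text after the tag
m = re.match(r"(.*?)<!RSS\s+(.*?)\s*!>(.*)", doc, re.S)
print(m.group(1))  # <p>before</p>
print(m.group(2))  # http://example.com/feed.xml
print(m.group(3))  # <p>after</p>
```

The lazy `(.*?)` groups matter: they ensure the first RSS tag in the document is matched, so the rule can loop over tags one at a time.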
This regex will find the first RSS tag in the document, and will assign text to the internal variables $1, $2 and $3:   $1: the text before the tag $2: the RSS URL within the tag $3: the text after the tag   The following code searches for RSS tags:

while( string.regexmatch( $body, '(.*?)<!RSS\s+(.*?)\s*!>(.*)' )) {
   $start = $1;
   $url = $2;
   $end = $3;
}

Retrieve the RSS data   An asynchronous HTTP request is sufficient to retrieve the RSS XML data:   $rss = http.request.get( $url );   Transform the RSS data using an XSLT transform   The following XSLT transform can be used to extract the first 4 RSS items and format them up as an HTML <UL> list:

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
   <xsl:template match="/">
      <ul>
         <xsl:apply-templates select="//item[position()&lt;5]"/>
      </ul>
   </xsl:template>

   <xsl:template match="item">
      <xsl:param name="URL" select="link/text()"/>
      <xsl:param name="TITLE" select="title/text()"/>
      <li><a href="{$URL}"><xsl:value-of select="$TITLE"/></a></li>
   </xsl:template>
</xsl:stylesheet>

Store the XSLT file in the Traffic Manager conf/extra directory, naming it 'rss.xslt', so that the rule can look it up using resource.get().   You can apply the XSLT transform to the XML data using the xml.xslt.transform function. The function returns the result with HTML entity encoding; use string.htmldecode to remove these:   $xsl = resource.get( "rss.xslt" ); $html = string.htmldecode( xml.xslt.transform( $rss, $xsl ) );   The entire rule   The entire response rule, with a little additional error checking, looks like this: $contentType = http.getResponseHeader( "Content-Type" ); if( ! 
string.startsWith( $contentType, "text/html" ) ) break;

$body = http.getResponseBody();
$new = "";
$changed = 0;

while( string.regexmatch( $body, '(.*?)<!RSS\s+(.*?)\s*!>(.*)' )) {
   $start = $1;
   $url = $2;
   $end = $3;

   $rss = http.request.get( $url );
   if( $1 != 200 ) {
      $html = "<ul><li><b>Failed to retrieve RSS feed</b></li></ul>";
   } else {
      $xsl = resource.get( "rss.xslt" );
      $html = string.htmldecode( xml.xslt.transform( $rss, $xsl ) );
      if( $html == -1 ) {
         $html = "<ul><li><b>Failed to parse RSS feed</b></li></ul>";
      }
   }

   $new = $new . $start . $html;
   $body = $end;
   $changed = 1;
}

if( $changed )
   http.setResponseBody( $new . $body );
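The 'first four items as a linked list' transform can also be prototyped away from the traffic manager. As a hedged sketch, here is the same output built with Python's stdlib ElementTree instead of XSLT (the function name rss_to_ul and the sample feed are mine; real XSLT in Python would need lxml):

```python
import xml.etree.ElementTree as ET

def rss_to_ul(rss_xml, limit=4):
    """Build the same <ul> of linked titles that the rss.xslt stylesheet
    emits, limited to the first `limit` <item> elements in the feed."""
    root = ET.fromstring(rss_xml)
    items = root.findall(".//item")[:limit]
    links = [
        '<li><a href="%s">%s</a></li>' % (i.findtext("link"), i.findtext("title"))
        for i in items
    ]
    return "<ul>" + "".join(links) + "</ul>"

rss = """<rss><channel>
  <item><title>First</title><link>http://example.com/1</link></item>
  <item><title>Second</title><link>http://example.com/2</link></item>
</channel></rss>"""
print(rss_to_ul(rss))
```

This is handy for checking what a given feed will render as before you deploy the rule; the production rule itself keeps the XSLT approach, since xml.xslt.transform runs natively inside the traffic manager.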
View full article
The libLDAP.rts library and supporting library files (written by Mark Boddington) allow you to interrogate and modify LDAP traffic from a TrafficScript rule, and to respond directly to an LDAP request when desired.   You can use the library to meet a range of use cases, as described in the document Managing LDAP traffic with libLDAP.rts.   Note: This library allows you to inspect and modify LDAP traffic as it is balanced by Stingray.  If you want to issue LDAP requests from Stingray, check out the auth.query() TrafficScript function, or the equivalent Java Extension (see Authenticating users with Active Directory and Stingray Java Extensions).   Overview   A long, long time ago on a Traffic Manager far, far away, I (Mark Boddington) wrote some libraries for processing LDAP traffic in TrafficScript:   libBER.rts – This is a TrafficScript library which implements all of the Basic Encoding Rules (BER) functionality required for LDAP. It does not implement BER completely, though: LDAP doesn't use all of the available types, and this library omits those LDAP doesn't need. libLDAP.rts – This is a TrafficScript library of functions which can be used to inspect and manipulate LDAP requests and responses. It requires libBER.rts to encode the LDAP packets. libLDAPauth.rts – This is a small library which uses libLdap to provide simple LDAP authentication to other services.   That library (version 1.0) mostly focused on inspecting LDAP requests. It was not particularly well suited to processing LDAP responses. Now, thanks to a Stingray PoC being run in partnership with the guys over at Clever Consulting, I've had cause to revisit this library and improve upon the original. I'm pleased to announce libLDAP.rts version 1.1 has arrived.     What's new in libLdap Version 1.1?   Lazy Decoding. The library now only decodes the envelope when getPacket() or getNextPacket() is called. This gets you the MessageID and the Operation. 
If you want to process further, the other functions handle decoding additional data as needed. New support for processing streams of LDAP Responses. Unlike requests, LDAP responses are typically made up of multiple LDAP messages. The library can now be used to process multiple packets in a response. New SearchResult processing functions: getSearchResultDetails(), getSearchResultAttributes() and updateSearchResultDetails()   Lazy Decoding   Now that the decoding is lazier, you can almost entirely bypass decoding for packets which you have no interest in. So if you only want to check BindRequests and/or BindResponses then those are the only packets you need to fully decode. The rest are sent through un-inspected (well, except for the envelope).   Support for LDAP Response streams   We now have several functions to allow you to process responses which are made up of multiple LDAP messages, such as those for Search Requests. You can use a loop with the getNextPacket($packet["lastByte"]) function to process each LDAP message as it is returned from the LDAP server. The LDAP packet hash now has a "lastByte" entry to help you keep track of the messages in the stream. There is also a new skipPacket() function to allow you to skip the encoder for packets which you aren't modifying.   Search Result Processing   With the ability to process response streams I have added a number of functions specifically for processing SearchResults. The getSearchResultDetails() function will return a SearchResult hash which contains the ObjectName decoded. If you are then interested in the object you can call getSearchResultAttributes() to decode the Attributes which have been returned. If you make any changes to the Search Result you can then call updateSearchResultDetails() to update the packet, and then encodePacket() to re-encode it. Of course, if at any point you determine that no changes are needed, you can call skipPacket() instead.   
Example - Search Result Processing

import libLDAP.rts as ldap;

$packet = ldap.getNextPacket(0);
while ( $packet ) {
   # Get the Operation
   $op = ldap.getOp($packet);
   # Is this a SearchResultEntry?
   if ( $op == "SearchResultEntry" ) {
      $searchResult = ldap.getSearchResultDetails($packet);
      # Is the LDAPDN within example.com?
      if ( string.endsWith($searchResult["objectName"], "dc=example,dc=com") ) {
         # We have a search result in the tree we're interested in. Get the Attributes
         ldap.getSearchResultAttributes($searchResult);
         # Process all User Objects
         if ( array.contains($searchResult["attributes"]["objectClass"], "inetOrgPerson") ) {
            # Log the DN and all of the attributes
            log.info("DN: " . $searchResult["objectName"] );
            foreach ( $att in hash.keys($searchResult["attributes"]) ) {
               log.info($att . " = " . lang.dump($searchResult["attributes"][$att]) );
            }
            # Add the user's favourite colour
            $searchResult["attributes"]["Favourite_Colour"] = [ "Riverbed Orange" ];
            # If the password attribute is included.... remove it
            hash.delete($searchResult["attributes"], "userPassword");
            # Update the search result
            ldap.updateSearchResultDetails($packet, $searchResult);
            # Commit the changes
            $stream .= ldap.encodePacket( $packet );
            $packet = ldap.getNextPacket($packet["lastByte"]);
            continue;
         }
      }
   }
   # Not an interesting packet. Skip and move on.
   $stream .= ldap.skipPacket( $packet );
   $packet = ldap.getNextPacket($packet["lastByte"]);
}
response.set($stream);
response.flush();

This example reads each packet in turn by calling getNextPacket() and passing the lastByte attribute from the previously processed packet as the argument. We're looking for SearchResultEntry operations. If we find one, we pass the packet to getSearchResultDetails() to decode the object which the search was for, in order to determine the DN. If it's in example.com then we decide to process further and decode the attributes with getSearchResultAttributes(). 
If the object has an objectClass of inetOrgPerson we then print the attributes to the event log, remove the userPassword if it exists and set a favourite colour for the user. Finally we encode the packet and move on to the next one. Packets which we aren't interested in modifying are skipped.   Of course, rather than do all this checking in the response, we could have checked the SearchRequest in a request rule and then used connection.data.set() to flag the message ID for further processing.   We should also have a request rule which ensures that the objectClass is in the list of attributes requested by the end-user. But I'll leave that as an exercise for the reader ;-)   If you want more examples of how this library can be used, then please check out the additional use cases here: Managing LDAP traffic with libLDAP.rts
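As background on what libBER.rts has to get right under the hood: every LDAP message element is framed with a BER definite 'length' field that switches between a short and a long form. This hedged Python sketch (my own helper, not the library's API) shows the two encodings:

```python
def ber_encode_length(n):
    """Encode a BER definite length: a single byte below 128 (short form),
    otherwise 0x80|count-of-length-bytes followed by the big-endian length."""
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

print(ber_encode_length(5).hex())    # 05
print(ber_encode_length(300).hex())  # 82012c
```

Getting this framing right is what lets the library's lazy decoder read just an envelope, know exactly where the message ends, and skip the rest untouched.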
View full article
  1. The Issue   When using perpetual licensing on a Traffic Manager, throughput is restricted to the limit specified in the license.  If this limit is reached, traffic will be queued; in extreme situations, if throughput climbs much higher than expected, some traffic could be dropped because of the limitation.   2. The Solution   Automatically increase the allocated bandwidth for the Traffic Manager!!   3. A Brief Overview of the Solution   An SSC holds the licensed bandwidth configuration for the Traffic Manager instance.   The Traffic Manager is configured to execute a script on an event being raised, the bwlimited event.   The script makes REST calls to the SSC in order to obtain, and then increment if necessary, the Traffic Manager's bandwidth allocation.   I have written the script used here to increment only if the resulting bandwidth allocation is 5Mbps or under, but this restriction could be removed if it's not required.  The idea behind this was to allow the Traffic Manager to increment its allocation, but to only let it have a certain maximum amount of bandwidth from the SSC bandwidth "bucket".   4. The Solution in a Little More Detail   4.1. Move to an SSC Licensing Model   If you're currently running Traffic Managers with perpetual licenses, then you'll need to move from the perpetual licensing model to the SSC licensing model.  This effectively allows you to allocate bandwidth and features across multiple Traffic Managers within your estate.  The SSC has a "bucket" of bandwidth along with configured feature sets which can be allocated and distributed across the estate as required, allowing for right-sizing of instances, features and also allowing multi-tenant access to various instances as required throughout the organisation.   
Instance Hosts and Instance resources are configured on the SSC, after which a Flexible License is uploaded on each of the Traffic Manager instances which you wish to be licensed by the SSC, and those instances "call home" to the SSC regularly in order to assess their licensing state and to obtain their feature set.   For more information on SSC, visit the Riverbed website pages covering this product, here - SteelCentral Services Controller for SteelApp Software.   There's also a Brochure attached to this article which covers the basics of the SSC.   4.2. Traffic Manager Configuration and a Bit of Bash Scripting!   The SSC has a REST API that can be accessed from external platforms able to send and receive REST calls.  This includes the Traffic Manager itself.   To carry out the automated bandwidth allocation increase on the Traffic Manager, we'll need to carry out the following;   a. Create a script which can be executed on the Traffic Manager, which will issue REST calls in order to change the SSC configuration for the instance in the event of a bandwidth limitation event firing. b. Upload the script to be used, on to the Traffic Manager. c. Create a new event and action on the Traffic Manager which will be initiated when the bandwidth limitation is hit, calling the script mentioned in point a above.   4.2.a. The Script to Increment the Traffic Manager Bandwidth Allocation   This script, called SSC_Bandwidth_Increment and attached, is shown below.   Script Function:   Obtain the Traffic Manager instance configuration from the SSC. Extract the current bandwidth allocation for the Traffic Manager instance from the information obtained. If the current bandwidth is less than 5Mbps, then increment the allocation by 1Mbps and issue the REST call to the SSC to make the changes to the instance configuration as required.  If the bandwidth is currently 5Mbps, then do nothing, as we've hit the limit for this particular Traffic Manager instance.   
#!/bin/bash
#
# Bandwidth_Increment
# -------------------
# Called on event: bwlimited
#

# Request the current instance information
requested_instance_info=$(curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" \
    -X GET -u adminuser:adminpassword https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com-00002)

# Extract the current bandwidth figure for the instance
current_instance_bandwidth=$(echo $requested_instance_info | sed -e 's/.*"bandwidth": \(\S*\).*/\1/g' | tr -d \,)

# Add 1 to the original bandwidth figure, imposing a 5Mbps limitation on this instance bandwidth entry
if [ $current_instance_bandwidth -lt 5 ]
then
    new_instance_bandwidth=$(expr $current_instance_bandwidth + 1)

    # Set the instance bandwidth figure to the new bandwidth figure (original + 1)
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":'"${new_instance_bandwidth}"'}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com-00002
fi

There are some obvious parts to the script that will need to be changed to fit your own environment.  The admin username and password in the REST calls and the SSC name, port and path used in the curl statements.  Hopefully from this you will be able to see just how easy the process is, and how the SSC can be manipulated to contain the configuration that you require.   This script can be considered a skeleton which you can use to carry out whatever configuration is required on the SSC for a particular Traffic Manager.  Events and actions can be set up on the Traffic Manager which can then be used to execute scripts which can access the SSC and make the changes necessary based on any logic you see fit.   4.2.b. Upload the Bash Scripts to be Used   On the Traffic Manager, upload the bash script that will be needed for the solution to work.  
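As an aside, the sed-based extraction in the increment script is fragile if the SSC ever reorders or reformats its JSON. A hedged sketch of the same read-cap-increment decision using a real JSON parser (the helper name next_bandwidth and the 5Mbps cap constant are mine, mirroring the script above):

```python
import json

BANDWIDTH_CAP_MBPS = 5  # same self-imposed ceiling as the bash script

def next_bandwidth(instance_json):
    """Parse the SSC instance document and return the incremented
    bandwidth figure, or None once the cap has been reached."""
    current = json.loads(instance_json)["bandwidth"]
    if current < BANDWIDTH_CAP_MBPS:
        return current + 1
    return None

print(next_bandwidth('{"bandwidth": 3}'))  # 4
print(next_bandwidth('{"bandwidth": 5}'))  # None
```

The bash skeleton keeps everything in one self-contained script for the event handler, but if your action program grows beyond a couple of fields, parsing the instance document properly like this is the safer route.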
The scripts are uploaded in the Catalogs > Extra Files > Action Programs section of the Traffic Manager, and can then be referenced from the Actions when they are created later.   4.2.c. Create a New Event and Action for the Bandwidth Limitation Hit   On the Traffic Manager, create a new event type as shown in the screenshot below - I've created Bandwidth_Increment, but this event could be called anything relevant.  The important factor here is that the event is raised from the bwlimited event.     Once this event has been created, an action must be associated with it.   Create a new external program action as shown in the screenshot below - I've created one called Bandwidth_Increment, but again this could be called anything relevant.  The important factor for the action is that it's an external program action and that it calls the correct bash script, in my case called SSC_Bandwidth_Increment.     5. Testing   In order to test the solution, on the SSC, set the initial bandwidth for the Traffic Manager instance to 1Mbps.   Generate some traffic through to a service on the Traffic Manager that will force the Traffic Manager to hit its 1Mbps limitation for a sustained period.  This will cause the bwlimited event to fire and the Bandwidth_Increment action to be executed, running the SSC_Bandwidth_Increment script.   The script will increment the Traffic Manager bandwidth by 1Mbps.   Check and confirm this on the SSC.   Once confirmed, stop the traffic generation.   Note: As the Flexible License on the Traffic Manager polls the SSC every 3 minutes for an update on its licensed state, you may not see an immediate change to the bandwidth allocation of the Traffic Manager.   
You can force the Traffic Manager to poll the SSC by removing the Flexible License and re-adding the license again - the re-configuration of the Flexible License will then force the Traffic Manager to re-poll the SSC and you should then see the updated bandwidth in the System > Licenses (after expanding the license information) page of the Traffic Manager as shown in the screenshot below;     6. Summary   Please feel free to use the information contained within this post to experiment!!!   If you do not yet have an SSC deployment, then an Evaluation can be arranged by contacting your Partner or Riverbed Salesman.  They will be able to arrange for the Evaluation, and will be there to support you if required.
View full article
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Magento.  
View full article
  1. The Issue   When using perpetual licensing on Traffic Manager instances which are clustered, the failure of one of the instances results in licensed throughput capability being lost until that instance is recovered.   2. The Solution   Automatically adjust the bandwidth allocation across cluster members so that wasted or unused bandwidth is used effectively.   3. A Brief Overview of the Solution   An SSC holds the configuration for the Traffic Manager cluster members. The Traffic Managers are configured to execute scripts on two events being raised, the machinetimeout event and the allmachinesok event.   Those scripts make REST calls to the SSC in order to dynamically and automatically amend the Traffic Manager instance configuration held for the two cluster members.   4. The Solution in a Little More Detail   4.1. Move to an SSC Licensing Model   If you're currently running Traffic Managers with perpetual licenses, then you'll need to move from the perpetual licensing model to the SSC licensing model.  This effectively allows you to allocate bandwidth and features across multiple Traffic Managers within your estate.  The SSC has a "bucket" of bandwidth along with configured feature sets which can be allocated and distributed across the estate as required, allowing for right-sizing of instances, features and also allowing multi-tenant access to various instances as required throughout the organisation.   Instance Hosts and Instance resources are configured on the SSC, after which a Flexible License is uploaded on each of the Traffic Manager instances which you wish to be licensed by the SSC, and those instances "call home" to the SSC regularly in order to assess their licensing state and to obtain their feature set. For more information on SSC, visit the Riverbed website pages covering this product, here - SteelCentral Services Controller for SteelApp Software.   There's also a Brochure attached to this article which covers the basics of the SSC.   4.2. 
Traffic Manager Configuration and a Bit of Bash Scripting!   The SSC has a REST API that can be accessed from external platforms able to send and receive REST calls.  This includes the Traffic Manager itself.   To carry out automated bandwidth allocation on cluster members, we'll need to carry out the following;   a. Create a script which can be executed on the Traffic Manager, which will issue REST calls in order to change the SSC configuration for the cluster members in the event of a cluster member failure. b. Create another script which can be executed on the Traffic Manager, which will issue REST calls to reset the SSC configuration for the cluster members when all of the cluster members are up and operational. c. Upload the two scripts to be used, on to the Traffic Manager cluster. d. Create a new event and action on the Traffic Manager cluster which will be initiated when a cluster member fails, calling the script mentioned in point a above. e. Create a new event and action on the Traffic Manager cluster which will be initiated when all of the cluster members are up and operational, calling the script mentioned in point b above.   4.2.a. The Script to Re-allocate Bandwidth After a Cluster Member Failure This script, called Cluster_Member_Fail_Bandwidth_Allocation and attached, is shown below.   Script Function:   Determine which cluster member has executed the script. Make REST calls to the SSC to allocate bandwidth according to which cluster member is up and which is down.   
#!/bin/bash
#
# Cluster_Member_Fail_Bandwidth_Allocation
# ----------------------------------------
# Called on event: machinetimeout
#
# Checks which host calls this script and assigns bandwidth in SSC accordingly
# If demo-1 makes the call, then demo-1 gets 999 and demo-2 gets 1
# If demo-2 makes the call, then demo-2 gets 999 and demo-1 gets 1
#

# Grab the hostname of the executing host
Calling_Hostname=$(hostname -f)

# If demo-1.example.com is executing then issue REST calls accordingly
if [ $Calling_Hostname == "demo-1.example.com" ]
then
    # Set the demo-1.example.com instance bandwidth figure to 999 and
    # demo-2.example.com instance bandwidth figure to 1
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":999}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":1}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com
fi

# If demo-2.example.com is executing then issue REST calls accordingly
if [ $Calling_Hostname == "demo-2.example.com" ]
then
    # Set the demo-2.example.com instance bandwidth figure to 999 and
    # demo-1.example.com instance bandwidth figure to 1
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":999}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":1}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com
fi

There are some obvious parts to the script that will need to be changed to fit your own environment.  The hostname validation, the admin username and password in the REST calls and the SSC name, port and path used in the curl statements.  Hopefully from this you will be able to see just how easy the process is, and how the SSC can be manipulated to contain the configuration that you require.   This script can be considered a skeleton, as can the other script for resetting the bandwidth, shown later.   4.2.b. The Script to Reset the Bandwidth   This script, called Cluster_Member_All_Machines_OK and attached, is shown below.

#!/bin/bash
#
# Cluster_Member_All_Machines_OK
# ------------------------------
# Called on event: allmachinesok
#
# Resets bandwidth for demo-1.example.com and demo-2.example.com - both get 500
#

# Set both demo-1.example.com and demo-2.example.com bandwidth figure to 500
curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
    '{"bandwidth":500}' \
    https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com-00002
curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
    '{"bandwidth":500}' \
    https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com-00002

Again, there are some parts to the script that will need to be changed to fit your own environment.  The admin username and password in the REST calls and the SSC name, port and path used in the curl statements.   4.2.c. 
Upload the Bash Scripts to be Used   On one of the Traffic Managers, upload the two bash scripts that will be needed for the solution to work.  The scripts are uploaded in the Catalogs > Extra Files > Action Programs section of the Traffic Manager, and can then be referenced from the Actions when they are created later.     4.2.d. Create a New Event and Action for a Cluster Member Failure   On the Traffic Manager (any one of the cluster members), create a new event type as shown in the screenshot below - I've created Cluster_Member_Down, but this event could be called anything relevant.  The important factor here is that the event is raised from the machinetimeout event.   Once this event has been created, an action must be associated with it. Create a new external program action as shown in the screenshot below - I've created one called Cluster_Member_Down, but again this could be called anything relevant.  The important factor for the action is that it's an external program action and that it calls the correct bash script, in my case called Cluster_Member_Fail_Bandwidth_Allocation.   4.2.e. Create a New Event and Action for All Cluster Members OK   On the Traffic Manager (any one of the cluster members), create a new event type as shown in the screenshot below - I've created All_Cluster_Members_OK, but this event could be called anything relevant.  The important factor here is that the event is raised from the allmachinesok event.   Once this event has been created, an action must be associated with it. Create a new external program action as shown in the screenshot below - I've created one called All_Cluster_Members_OK, but again this could be called anything relevant.  The important factor for the action is that it's an external program action and that it calls the correct bash script, in my case called Cluster_Member_All_Machines_OK.   5. Testing   In order to test the solution, simply DOWN Traffic Manager A from an A/B cluster.  
Traffic Manager B should raise the machinetimeout event, which will in turn execute the Cluster_Member_Down event and its associated action and script, Cluster_Member_Fail_Bandwidth_Allocation. The script should allocate 999Mbps to Traffic Manager B, and 1Mbps to Traffic Manager A, within the SSC configuration.

As the Flexible License on the Traffic Manager polls the SSC every 3 minutes for an update on its licensed state, you may not see an immediate change to the bandwidth allocation of the Traffic Managers in question. You can force the Traffic Manager to poll the SSC by removing the Flexible License and re-adding it - the re-configuration of the Flexible License will force the Traffic Manager to re-poll the SSC, and you should then see the updated bandwidth in the System > Licenses page of the Traffic Manager (after expanding the license information), as shown in the screenshot below:

To test the resetting of the bandwidth allocation for the cluster, simply UP Traffic Manager A. Once Traffic Manager A re-joins the cluster communications, the allmachinesok event will be raised, which will execute the All_Cluster_Members_OK event and associated action and script, Cluster_Member_All_Machines_OK. The script should allocate 500Mbps to Traffic Manager B, and 500Mbps to Traffic Manager A, within the SSC configuration.

Just as before with the failure event, the Flexible License on the Traffic Manager polls the SSC every 3 minutes for an update on its licensed state, so you may not see an immediate change to the bandwidth allocation of the Traffic Managers in question.
You can force the Traffic Manager to poll the SSC once again by removing the Flexible License and re-adding it - the re-configuration of the Flexible License will force the Traffic Manager to re-poll the SSC, and you should then see the updated bandwidth in the System > Licenses page of the Traffic Manager (after expanding the license information), as before (and shown above).

6. Summary

Please feel free to use the information contained within this post to experiment!

If you do not yet have an SSC deployment, an evaluation can be arranged by contacting your Partner or Brocade salesman. They will be able to arrange the evaluation, and will be there to support you if required.
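One practical footnote for experimentation: the curl calls in the two scripts above differ only in the instance name and the bandwidth figure, so they can be wrapped in a small parameterised helper. The sketch below is my own, not one of the attached scripts; the SSC URL, credentials and instance names are the placeholder values used throughout this post, and setting DRY_RUN=1 makes the helper print the curl command instead of executing it, so you can check it before letting it loose on a real SSC:

```shell
#!/bin/bash
# Hypothetical wrapper around the per-instance bandwidth REST call shown
# above. SSC_BASE and SSC_AUTH are placeholders - substitute your own.
SSC_BASE="${SSC_BASE:-https://ssc.example.com:8000/api/tmcm/1.1}"
SSC_AUTH="${SSC_AUTH:-adminuser:adminpassword}"

set_bandwidth() {
    local instance="$1" mbps="$2"
    # Build the same curl invocation used in the scripts above.
    local cmd=(curl -k --basic
               -H "Content-Type: application/json"
               -H "Accept: application/json"
               -u "$SSC_AUTH"
               -d "{\"bandwidth\":$mbps}"
               "$SSC_BASE/instance/$instance")
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "${cmd[@]}"      # print the command instead of running it
    else
        "${cmd[@]}"
    fi
}

# Prints the curl command it would run, without touching the network:
DRY_RUN=1 set_bandwidth demo-1.example.com-00002 500
```

With DRY_RUN unset, the same call performs the real REST request, so the event-driven scripts in 4.2.a and 4.2.b reduce to one `set_bandwidth` line per instance.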
The VMware Horizon Mirage Load Balancing Solution Guide describes how to configure Riverbed SteelApp to load balance VMware Horizon Mirage servers.   VMware® Horizon Mirage™ provides unified image management for physical desktops, virtual desktops and BYOD.
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Oracle EBS 12.1.   This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.
This simple TrafficScript rule helps you debug and instrument the traffic manager. It adds a simple overlay on each web page containing key information that is useful in a test and debugging situation:

Served from: identifies the server node that generated the response
Took XX seconds: using the technique in HowTo: Log slow connections in Stingray Traffic Manager, identifies the time between receiving the HTTP request and completion of response rules
Took YY retries: if present, indicates that the traffic manager had to retry the request one or more times before getting a satisfactory response

The TrafficScript response rule is as follows:

$ct = http.getResponseHeader( "Content-Type" );
if( !string.startsWith( $ct, "text/html" ) ) break;

$style = '
<style type="text/css">
.stingrayTip { position:absolute; top:0; right:0; background:black; color:#acdcac; opacity:.6; z-index:100; padding:4px; }
.stingrayTip:hover { opacity:.8; }
</style>';

$text = "Served from " . connection.getNode();
if( request.getRetries() )
   $text .= "<br>Took " . request.getRetries() . " retries";
if( connection.data.get("start") )
   $text .= "<br>Took " . (sys.time.highres()-connection.data.get("start")) . " seconds";
if( connection.data.get("notes") )
   $text .= "<br>" . connection.data.get( "notes" );

$body = http.getResponseBody();
$html = '<div class="stingrayTip">' . $text . '</div>';
$body = string.regexsub( $body, "(<body[^>]*>)", $style."$1\n".$html."\n", "i" );
http.setResponseBody( $body );

Using the timing information

To get the 'Took XX seconds' information, place the rule as late as possible in your chain of response rules, and add the following rule as a request rule, to run as early as possible:

$tm = sys.time.highres();
connection.data.set("start", string.sprintf( "%f", $tm ) );

See HowTo: Log slow connections in Stingray Traffic Manager for more information.
Additional information

You can present additional information in the info box by storing it in a connection-local variable called 'notes', for example:

$cookies = http.getHeader( "Cookie" );
$msg = "Cookies: " . $cookies;
connection.data.set( 'notes', $msg );

Read more

This example uses techniques similar to the Watermarking web content with Stingray and TrafficScript article.
Bandwidth can be expensive, so it is annoying when other websites steal your bandwidth from you. A common problem is 'hot-linking' or 'deep-linking': people place images from your site on their own pages. Every time someone views their website, you pick up the bandwidth tab, and users of your own website may be impacted because of the reduced bandwidth. So how can this be stopped?

When a web browser requests a page or an image from your site, the request includes a 'Referer' header (the misspelling is required by the specs!). This referrer gives the URL of the page that linked to the file. So, if you go to https://splash.riverbed.com/, your browser will load the HTML for the page, and then load all the images. Each time it asks the web server for an image, it will report that the referrer was https://splash.riverbed.com/. We can use this referrer header to check that the image is being loaded for your own site, and not for someone else's. If another website embedded a link to one of these images, the Referer: header would contain the URL of their site instead. This site has a more in-depth discussion of bandwidth-stealing; the Stingray approach is an alternative to the Apache solution it presents.

Solving the problem with RuleBuilder

RuleBuilder is a simple GUI front-end to TrafficScript that lets you create straightforward 'if condition then action'-style policies. Use the Stingray Admin Server to create a new RuleBuilder rule as follows:

You should then associate that with your virtual server, configuring it to run as a Request Rule:

All done.
This rule will catch any hotlink requests for content where the URL ends with '.gif', '.jpg' or '.png', and redirect to the image: http://upload.wikimedia.org/wikipedia/commons/thumb/2/2a/Stop_sign.svg/200px-Stop_sign.svg.png

TrafficScript improvements

We can make some simple improvements to this rule:

We can provide a simple list of file extensions to check against, rather than using a regular expression. This is easier to manage, though not necessarily faster.
We can check that the referer matches the host header for the site. That is a simple approach that avoids embedding the domain (e.g. riverbed.com) in the rule, thus making it less likely to surprise you when you apply the rule to a different website.

First convert the rule to TrafficScript. That will reveal the implementation of the rule, and you can edit the TrafficScript version to implement the additional features you require:

$headerReferer = http.getheader( "Referer" );
$path = http.getpath();
if( string.contains( $headerReferer, "riverbed.com" ) == 0
      && $headerReferer != ""
      && string.regexmatch( $path, "\\.(jpg|gif|png)$" ) ) {
   http.redirect( "http://upload.wikimedia.org/wikipedia/commons/thumb/2/2a/Stop_sign.svg/200px-Stop_sign.svg.png" );
}

The RuleBuilder rule, converted to TrafficScript

Edit the rule so that it resembles the following:

$extensions = [ 'jpg', 'jpeg', 'gif', 'png', 'svg' ];
$redirectto = "http://upload.wikimedia.org/wikipedia/commons/thumb/2/2a/Stop_sign.svg/200px-Stop_sign.svg.png";

#######################################

$referer = http.getheader( "Referer" );
$host    = http.getHostHeader();
$path    = http.getpath();

$ext = "";
if( string.regexMatch( $path, '\.(.*?)$' ) ) $ext = $1;

if( array.contains( $extensions, $ext )
   && $referer != ""
   && !string.contains( $referer, $host )
   && $path != $redirectto ) {
      http.redirect( $redirectto );
}

Alternate rule implementation
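If you want to sanity-check the decision logic before deploying the rule, the same allow/redirect test can be mimicked as a small bash function. This is purely an illustration of the logic above, not product code; the extension list matches the TrafficScript version, and the hostnames used in the example calls are made up:

```shell
#!/bin/bash
# Sketch of the hotlink decision from the TrafficScript rule above:
# redirect when the path has an image extension AND the Referer is
# non-empty AND does not contain our own Host header value.
hotlink_action() {
    local referer="$1" host="$2" path="$3"
    local ext="${path##*.}"              # text after the last dot
    case "$ext" in
        jpg|jpeg|gif|png|svg) ;;         # image extension: keep checking
        *) echo "allow"; return ;;       # not an image: always allow
    esac
    if [ -n "$referer" ] && [[ "$referer" != *"$host"* ]]; then
        echo "redirect"
    else
        echo "allow"
    fi
}

hotlink_action "http://evil.example/page.html" "www.example.com" "/img/logo.png"       # redirect
hotlink_action "http://www.example.com/index.html" "www.example.com" "/img/logo.png"   # allow
hotlink_action "" "www.example.com" "/img/logo.png"   # empty referer (direct visit): allow
```

Note that, like the TrafficScript rule, an empty Referer is allowed through: browsers omit the header when the user types the URL directly, and blocking those requests would punish legitimate visitors.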
Stingray allows you to inspect and manipulate both incoming and outgoing traffic with a customized version of Java's Servlet API. In this article we'll delve more deeply into some of the semantics of Stingray's Java Extensions and show how to validate XML files in uploads and downloads using TrafficScript and Java Extensions.

The example that will allow us to illustrate the use of XML processing by Stingray is a website that allows users to share music play-lists. We'll first look at the XML capabilities of 'conventional' TrafficScript, and then investigate the use of Java Extensions.

A play-list sharing web site

You have spent a lot of time developing a fancy website where users can upload their personal play-lists, making them available to others who can then search for music they like and download it. Of course you went for XML as the data format, not least because it allows you to make sure uploads are valid. Therefore, you can help your users' applications by providing them with a way of validating their XML files as they are uploaded. Also, to be on the safe side, whenever an application downloads an XML play-list it should be checked, and only reach the user if it passes the validation.

XML provides the concept of schema files to describe what a valid document has to look like. One popular schema language is the W3C's XML Schema Definition (XSD), see http://www.w3.org/TR/xmlschema-0/. Given an XSD file, you can hand an XML document to a validator to find out whether it actually conforms to the data structure specified in the schema.

Coming back to our example of a play-list sharing website, you have downloaded the popular xspf (XML Shareable Playlist Format, 'spiff') schema description from http://xspf.org/validation/. One of the tags allowed inside a track in XML files of this type is image.
By specifying tags like <image>http://images.amazon.com/images/P/B000002J0B.01.MZZZZZZZ.jpg</image> a user could see the following pictures:

Validating XML with TrafficScript

How do you validate an XML file from a user against that schema? Stingray's TrafficScript provides the xml.validate() function. Here's a simple rule to check the response of a web server against an XSD:

$doc = http.getResponseBody();
$schema = resource.get( "xspf.xsd" );
$result = xml.validate.xsd( $doc, $schema );
if( 1 == $result ) {
   log.info( "Validation succeeded" );
} else if( 0 == $result ) {
   log.info( "Validation failed" );
} else {
   log.info( "Validation error" );
}

Let's have a closer look at what this rule does:

First, it reads in the whole response by calling http.getResponseBody(). This function is very practical, but you have to be extremely careful with it: you do not know beforehand how big the response actually is. It might be an audio stream totalling many hundred megabytes in size. Surely you don't want Stingray to buffer all that data. Therefore, when using http.getResponseBody() you should always check the mime type and the content length of the response (see below for code that does this). The rule then goes on to load the schema definition file with resource.get(); the file must be located in ZEUSHOME/zxtm/conf/extra/ for this step to work. Finally it does the actual validation and checks the result. In this simple example we are only logging the result; on your music-sharing web site you would have to take the appropriate action.

The last rule was a response rule that worked on the result from the back-end web server. These files are actually under your control (at least theoretically), so validation is not that urgent. Things are different if you allow uploads to your web site. Any user-provided data must be validated before you let it through to your back-ends.
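Before even parsing user-supplied XML, it pays to reject posts whose Content-Length is missing, non-numeric, or outsized, which is exactly what the request rule below does. As a stand-alone illustration of that guard (a bash sketch of my own, not product code, mirroring TrafficScript's behaviour of converting non-numeric values to 0), the accept/reject decision looks like this:

```shell
#!/bin/bash
# Mimics the rule's guard: accept a POST body only when Content-Length
# is a number between 1 byte and 1 MiB. A missing header arrives as the
# empty string and must fall through to rejection, just as TrafficScript
# converts non-numeric values to 0 so the "> 0" test fails.
check_clen() {
    local clen="$1"
    case "$clen" in
        ''|*[!0-9]*) echo "reject"; return ;;   # empty or non-numeric
    esac
    if [ "$clen" -gt 0 ] && [ "$clen" -le $((1024 * 1024)) ]; then
        echo "accept"
    else
        echo "reject"
    fi
}

check_clen 512       # prints "accept"
check_clen ""        # prints "reject" (missing header)
check_clen 5000000   # prints "reject" (over the 1 MiB limit)
```

The 1 MiB ceiling is an arbitrary example value, as in the rule below; pick whatever limit suits the uploads you expect.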
The following request rule does the XML validation for you:

$m = http.getMethod();
if( 0 == string.icmp( $m, "POST" ) ) {
   $clen = http.getHeader( "Content-Length" );
   if( $clen > 0 && $clen <= 1024*1024 ) {
      $schema = resource.get( "xspf.xsd" );
      $doc = http.getBody();
      $result = xml.validate.xsd( $doc, $schema );
      # handle result
   } else {
      # handle over-sized posts
   }
}

Note how we first look at the HTTP method, then retrieve the length of the post's body and check it. That check, which is done in the line

if( $clen > 0 && $clen <= 1024*1024 ) {

...deserves a bit more comment: the variable $clen was initialized from the post's Content-Length header, so it could be the empty string at that stage. When TrafficScript converts data to integers, variables that do not actually represent numbers are converted to 0 (see the TrafficScript reference for more details). Therefore, we have to check that $clen is greater than zero and at most 1 megabyte (or whatever limit you choose to impose on the size of uploads). After checking the content length we can safely invoke http.getBody().

A malicious user might have faked the HTTP header to specify a length larger than his actual post. This would lead Stingray to try to read more data than the client sends, pausing on a file descriptor until the connection times out. Due to Stingray's non-blocking IO multiplexing, however, other requests would be processed normally.

Validating XML with Stingray's Java Extensions

After having explored TrafficScript's built-in XML support, let's now see how XML validation can be done using Java Extensions.

If you are at all familiar with Java Servlets, Stingray's Java Extensions should feel like home to you. The main differences are:

You have a lot of Stingray's built-in functionality ready at hand via attributes.
You can manipulate both the response (as in conventional Servlets) and the request (unique to Stingray's Servlet Extensions as Stingray sits between the client and the server).   There's lots more detail in the Feature Brief: Java Extensions in Traffic Manager.   The interesting thing here is that this flow actually applies twice: First when the request is sent to the server (you can invoke the Java extension from a request rule) and then again when the response is sent back to the client (allowing you to change the result from a response rule). This is very practical for your music-sharing web site as you only have to write one Servlet. However, you have to be able to tell whether you are working on the response or the request. The ZXTMHttpServletResponse object which is passed to both the doGet() and doPost() methods of the HttpServlet object has a method to find out which direction of the traffic flow you are currently in: boolean isResponseRule() . This distinction is never needed in conventional Servlet programming as in that scenario it's the Servlet's task to create the response, not to modify an existing response.   These considerations make it easy to design the Stingray Servlet for your web site:   There will be an init() method to read in the schema definition and to set up the xml.validation.Validator object. We'll have a single private validate() method to do the actual work. 
The doGet() method will invoke validate() on the server's response, whereas the doPost() method does the same on the body of the request.

After all that theory it's high time for some real code (note that the import directives have been removed for the sake of readability as they don't add anything to our discussion - see Writing Java Extensions - an introduction):

public class XmlValidate extends HttpServlet {
   private static final long serialVersionUID = 1L;
   private static Validator validator = null;

   public void init( ServletConfig config ) throws ServletException {
      super.init( config );
      String schema_file = config.getInitParameter( "schema_file" );

      if ( schema_file == null )
         throw new ServletException( "No schema file specified" );

      SchemaFactory factory = SchemaFactory.newInstance(
         XMLConstants.W3C_XML_SCHEMA_NS_URI );

      Source schemaFile = new StreamSource( new File( schema_file ) );
      try {
         Schema schema = factory.newSchema( schemaFile );
         validator = schema.newValidator();
      } catch( SAXException saxe ) {
         throw new ServletException( saxe.getMessage() );
      }
   }

   // ... other methods below
}

The validate() function is actually very simple as all the hard work is done inside the Java library.
The only thing to be careful about is to make sure that we don't allow concurrent access to the Validator object from multiple threads:

private boolean validate( InputStream in, HttpServletResponse res, String errmsg )
   throws IOException
{
   Source src = new StreamSource( in );
   try {
      synchronized( validator ) {
         validator.validate( src );
      }
   } catch( SAXException saxe ) {
      String msg = saxe.getMessage();
      res.setContentType( "text/plain" );
      PrintWriter out = res.getWriter();
      out.println( errmsg );
      out.print( "Validation of the xml file has failed with error message: " );
      out.println( msg );
      return false;
   }
   return true;
}

Note that the only thing we have to do in case of a failure is to write to the stream that makes up the response. No matter whether this is being done in a request or a response rule, Stingray will take that as an indication that this is what should be sent back to the client. In the case of a request rule, Stingray won't even bother to hand on the request to a back-end server and will instead send the result of the Java Servlet; in a response rule, the server's answer will be replaced by what the Servlet has produced.

Now we are ready for the doGet() method:

public void doGet( HttpServletRequest req, HttpServletResponse res )
   throws ServletException, IOException
{
   try {
      ZXTMHttpServletResponse zres = (ZXTMHttpServletResponse) res;
      if ( !zres.isResponseRule() ) {
         log( "doGet called in request rule ... bailing out" );
         return;
      }
      InputStream in = zres.getInputStream();
      validate( in, zres, "The file you requested was rejected." );
   } catch( Exception e ) {
      throw new ServletException( e.getMessage() );
   }
}

There's not really much work left apart from calling our validate() method with the error message to append in case of failure. As discussed previously, we make sure that we are actually working in the context of a response rule because otherwise the response would be empty. Exactly the opposite has to be done when processing a post:

public void doPost( HttpServletRequest req, HttpServletResponse res )
   throws ServletException, IOException
{
   try {
      ZXTMHttpServletRequest zreq = (ZXTMHttpServletRequest) req;
      ZXTMHttpServletResponse zres = (ZXTMHttpServletResponse) res;

      if ( zres.isResponseRule() ) {
         log( "doPost called in response rule ... bailing out" );
         return;
      }

      InputStream in = zreq.getInputStream();
      if ( validate( in, zres, "Your upload was unsuccessful" ) ) {
         // just let the post through to the backends
      }
   } catch( Exception e ) {
      throw new ServletException( e.getMessage() );
   }
}

The only things missing are the rules to invoke the Servlet, so here they are (assuming that the Servlet has been uploaded via the 'Java' tab of the 'Catalogs' section in Stingray's UI as a file called XmlValidate.class). First the request rule:

$m = http.getMethod();
if( 0 == string.icmp( $m, "POST" ) ) {
   java.run( "XmlValidate" );
}

and the response rule is almost the same:

$m = http.getMethod();
if( 0 == string.icmp( $m, "GET" ) ) {
   java.run( "XmlValidate" );
}

It's your choice: TrafficScript or Java Extensions

Which is better?

So now you are left with a difficult decision: you have two implementations of the same functionality, which one do you choose?
Bearing in mind that the unassuming java.run() leads to a considerable amount of inter-process communication between the Stingray child process and the Java Servlet runner, whereas xml.validate() is handled in C++ inside the same process, it is a rather obvious choice. But there are still situations in which you might prefer the Java solution.

One example would be that you have to do XML processing not supported directly by Stingray. Java is more flexible and complete in the XML support it provides. But there is another advantage to using Java: you can replace the actual implementation of the XML functionality. You might want to use Intel's XML Software Suite for Java, for example. But how do you tell Stingray's Java runner to use another XML library? Only two settings have to be adapted:

java!classpath /opt/intel/xmlsoftwaresuite/java/1.0/lib/intel-xss.jar
java!command java -Djava.library.path=/opt/intel/xmlsoftwaresuite/java/1.0/bin/intel64 -server

This applies if you have installed Intel's XML Software Suite in /opt/intel/xmlsoftwaresuite/java/1.0/ and are using the 64-bit version of the shared library. Both changes can be made in the 'Global Settings' tab of the 'System' section in Stingray's UI.