Pulse Secure vADC

1. The Issue

When using perpetual licensing on Traffic Manager instances which are clustered, the failure of one of the instances results in licensed throughput capability being lost until that instance is recovered.

2. The Solution

Automatically adjust the bandwidth allocation across cluster members so that wasted or unused bandwidth is used effectively.

3. A Brief Overview of the Solution

An SSC holds the configuration for the Traffic Manager cluster members. The Traffic Managers are configured to execute scripts when two events are raised: the machinetimeout event and the allmachinesok event.

Those scripts make REST calls to the SSC in order to dynamically and automatically amend the Traffic Manager instance configuration held for the two cluster members.

4. The Solution in a Little More Detail

4.1. Move to an SSC Licensing Model

If you're currently running Traffic Managers with perpetual licenses, then you'll need to move from the perpetual licensing model to the SSC licensing model. This effectively allows you to allocate bandwidth and features across multiple Traffic Managers within your estate. The SSC has a "bucket" of bandwidth along with configured feature sets which can be allocated and distributed across the estate as required, allowing for right-sizing of instances and features, and also allowing multi-tenant access to various instances as required throughout the organisation.

Instance Hosts and Instance resources are configured on the SSC, after which a Flexible License is uploaded to each of the Traffic Manager instances which you wish to be licensed by the SSC. Those instances then "call home" to the SSC regularly in order to assess their licensing state and to obtain their feature set. For more information on SSC, visit the Riverbed website pages covering this product - SteelCentral Services Controller for SteelApp Software.

There's also a brochure attached to this article which covers the basics of the SSC.

4.2. Traffic Manager Configuration and a Bit of Bash Scripting!

The SSC has a REST API that can be accessed from external platforms able to send and receive REST calls. This includes the Traffic Manager itself.

To carry out automated bandwidth allocation on cluster members, we'll need to do the following:

a. Create a script which can be executed on the Traffic Manager, which will issue REST calls in order to change the SSC configuration for the cluster members in the event of a cluster member failure.
b. Create another script which can be executed on the Traffic Manager, which will issue REST calls to reset the SSC configuration for the cluster members when all of the cluster members are up and operational.
c. Upload the two scripts to the Traffic Manager cluster.
d. Create a new event and action on the Traffic Manager cluster which will be initiated when a cluster member fails, calling the script mentioned in point a above.
e. Create a new event and action on the Traffic Manager cluster which will be initiated when all of the cluster members are up and operational, calling the script mentioned in point b above.

4.2.a. The Script to Re-allocate Bandwidth After a Cluster Member Failure

This script, called Cluster_Member_Fail_Bandwidth_Allocation and attached, is shown below.

Script Function:

Determine which cluster member has executed the script.
Make REST calls to the SSC to allocate bandwidth according to which cluster member is up and which is down.
#!/bin/bash
#
# Cluster_Member_Fail_Bandwidth_Allocation
# ----------------------------------------
# Called on event: machinetimeout
#
# Checks which host calls this script and assigns bandwidth in SSC accordingly
# If demo-1 makes the call, then demo-1 gets 999 and demo-2 gets 1
# If demo-2 makes the call, then demo-2 gets 999 and demo-1 gets 1
#

# Grab the hostname of the executing host
Calling_Hostname=$(hostname -f)

# If demo-1.example.com is executing then issue REST calls accordingly
if [ "$Calling_Hostname" == "demo-1.example.com" ]
then
        # Set the demo-1.example.com instance bandwidth figure to 999 and
        # demo-2.example.com instance bandwidth figure to 1
        curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
             '{"bandwidth":999}' \
             https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com
        curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
             '{"bandwidth":1}' \
             https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com
fi

# If demo-2.example.com is executing then issue REST calls accordingly
if [ "$Calling_Hostname" == "demo-2.example.com" ]
then
        # Set the demo-2.example.com instance bandwidth figure to 999 and
        # demo-1.example.com instance bandwidth figure to 1
        curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
             '{"bandwidth":999}' \
             https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com
        curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
             '{"bandwidth":1}' \
             https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com
fi

There are some obvious parts of the script that will need to be changed to fit your own environment: the hostname validation, the admin username and password in the REST calls, and the SSC name, port and path used in the curl statements. Hopefully from this you can see just how straightforward the process is, and how the SSC can be manipulated to hold the configuration that you require.

This script can be considered a skeleton, as can the other script for resetting the bandwidth, shown later.

4.2.b. The Script to Reset the Bandwidth

This script, called Cluster_Member_All_Machines_OK and attached, is shown below.
#!/bin/bash
#
# Cluster_Member_All_Machines_OK
# ------------------------------
# Called on event: allmachinesok
#
# Resets bandwidth for demo-1.example.com and demo-2.example.com - both get 500
#

# Set both demo-1.example.com and demo-2.example.com bandwidth figure to 500
curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
     '{"bandwidth":500}' \
     https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com-00002
curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
     '{"bandwidth":500}' \
     https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com-00002

Again, there are some parts of the script that will need to be changed to fit your own environment: the admin username and password in the REST calls, and the SSC name, port and path used in the curl statements.

4.2.c. Upload the Bash Scripts to be Used

On one of the Traffic Managers, upload the two bash scripts that will be needed for the solution to work. The scripts are uploaded in the Catalogs > Extra Files > Action Programs section of the Traffic Manager, and can then be referenced from the Actions when they are created later.

4.2.d. Create a New Event and Action for a Cluster Member Failure

On the Traffic Manager (any one of the cluster members), create a new event type as shown in the screenshot below - I've created Cluster_Member_Down, but this event could be called anything relevant. The important factor here is that the event is raised from the machinetimeout event.

Once this event has been created, an action must be associated with it. Create a new external program action as shown in the screenshot below - I've created one called Cluster_Member_Down, but again this could be called anything relevant. The important factor for the action is that it's an external program action and that it calls the correct bash script, in my case called Cluster_Member_Fail_Bandwidth_Allocation.

4.2.e. Create a New Event and Action for All Cluster Members OK

On the Traffic Manager (any one of the cluster members), create a new event type as shown in the screenshot below - I've created All_Cluster_Members_OK, but this event could be called anything relevant. The important factor here is that the event is raised from the allmachinesok event.

Once this event has been created, an action must be associated with it. Create a new external program action as shown in the screenshot below - I've created one called All_Cluster_Members_OK, but again this could be called anything relevant. The important factor for the action is that it's an external program action and that it calls the correct bash script, in my case called Cluster_Member_All_Machines_OK.

5. Testing

In order to test the solution, simply DOWN Traffic Manager A from an A/B cluster. Traffic Manager B should raise the machinetimeout event, which will in turn execute the Cluster_Member_Down event and its associated action and script, Cluster_Member_Fail_Bandwidth_Allocation.

The script should allocate 999Mbps to Traffic Manager B, and 1Mbps to Traffic Manager A, within the SSC configuration.
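Before waiting for the licenses to re-poll, you can confirm that the SSC itself now holds the new figures. The sketch below reads the instance configuration back using the same credentials and endpoint as the scripts above; it assumes the SSC instance resource also answers GET requests, so treat it as illustrative rather than definitive.

# Read back the bandwidth figures currently held by the SSC for each instance
# (GET support on the instance resource is assumed here)
curl -k --basic -H "Accept: application/json" -u adminuser:adminpassword \
     https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com
curl -k --basic -H "Accept: application/json" -u adminuser:adminpassword \
     https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com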
As the Flexible License on the Traffic Manager polls the SSC every 3 minutes for an update on its licensed state, you may not see an immediate change to the bandwidth allocation of the Traffic Managers in question. You can force the Traffic Manager to poll the SSC by removing the Flexible License and re-adding it - the re-configuration of the Flexible License will force the Traffic Manager to re-poll the SSC, and you should then see the updated bandwidth in the System > Licenses page of the Traffic Manager (after expanding the license information), as shown in the screenshot below.

To test the resetting of the bandwidth allocation for the cluster, simply UP Traffic Manager A. Once Traffic Manager A re-joins the cluster communications, the allmachinesok event will be raised, which will execute the All_Cluster_Members_OK event and its associated action and script, Cluster_Member_All_Machines_OK. The script should allocate 500Mbps to Traffic Manager B, and 500Mbps to Traffic Manager A, within the SSC configuration.

Just as before, the Flexible License on the Traffic Manager polls the SSC every 3 minutes for an update on its licensed state, so you may not see an immediate change to the bandwidth allocation of the Traffic Managers in question. You can force the Traffic Manager to poll the SSC once again by removing the Flexible License and re-adding it; you should then see the updated bandwidth in the System > Licenses page of the Traffic Manager (after expanding the license information) as before (and shown above).

6. Summary

Please feel free to use the information contained within this post to experiment!

If you do not yet have an SSC deployment, then an evaluation can be arranged by contacting your Partner or Brocade sales representative. They will be able to arrange the evaluation, and will be there to support you if required.
View full article
With the evolution of social media as a tool for marketing and current events, we commonly see the Twitter feed updated long before the website. It's not surprising for people to rely on these outlets for information.

Fortunately, Twitter provides a suite of widgets and scripting tools to integrate Twitter information into your application. The tools available can be implemented with few code changes and support many applications. Unfortunately, the same reason a website is not as fresh as social media is the code changes required. The code could be owned by different people in the organization, or you may have limited access to the code due to security or the CMS environment. Traffic Manager provides the ability to insert the required code into your site with no changes to the application.

Twitter Overview

"Embeddable timelines make it easy to syndicate any public Twitter timeline to your website with one line of code. Create an embedded timeline from your widgets settings page on twitter.com, or choose “Embed this…” from the options menu on profile, search and collection pages.

Just like timelines on twitter.com, embeddable timelines are interactive and enable your visitors to reply, Retweet, and favorite Tweets directly from your pages. Users can expand Tweets to see Cards inline, as well as Retweet and favorite counts. An integrated Tweet box encourages users to respond or start new conversations, and the option to auto-expand media brings photos front and center.

These new timeline tools are built specifically for the web, mobile web, and touch devices. They load fast, scale with your traffic, and update in real-time." -twitter.com

Thank you Faisal Memon for the original article Using TrafficScript to add a Twitter feed to your web site.

As happens more often than not, platform access changes. This time Twitter is our prime example. When loading the Twitter js, http://widgets.twimg.com/j/2/widget.js, you can see the following notice:

The Twitter API v1.0 is deprecated, and this widget has ceased functioning.","You can replace it with a new, upgraded widget from <https://twitter.com/settings/widgets/new/"+H+">","For more information on alternative Twitter tools, see <https://dev.twitter.com/docs/twitter-for-websites>

To save you some time, Twitter really does mean deprecated, and the information link is broken. For more information on alternative Twitter tools, see Twitter for Websites | Home. For information related to this article, please see Embedded Timelines | Home.

One of the biggest changes in the current Twitter platform is the requirement for a "data-widget-id". The data-widget-id is unique, and is used by the Twitter platform to provide the information needed to generate the timeline. Before getting started with the Traffic Manager and web application, you will have to create a new widget using your Twitter account at https://twitter.com/settings/widgets/new/. Once you create your widget, you will see the "Copy and paste the code into the HTML of your site." section on the Twitter website. Along with other information, this code contains your "data-widget-id". See the Create widget image.

Create widget (click to zoom)

This example uses a TrafficScript response rule to rewrite the HTTP body from the application. Specifically, I know the body for my application includes an html comment <!--SIDEBAR-->. This rule will insert the required client-side code into the HTTP body and send the updated body on to complete the request.
The $inserttag variable can be just about anything in the body itself, e.g. the "MORE LIKE THIS" text on the side of this page. Simply change the code below to:

$inserttag = "MORE LIKE THIS";

Some of the values used in the example (i.e. width, data-theme, data-link-color, data-tweet-limit) are not required. They have been included to demonstrate customization. When you create/save the widget on the Twitter website, the configuration options (see the Create widget image above) are associated with the "data-widget-id". For example "data-theme": if you saved the widget with light and you want the light theme, it can be excluded. Alternatively, if you saved the widget with light, you can use "data-theme=dark" and override the value saved with the widget. In the example timeline picture, the data-link-color value is used to override the value provided with the saved "data-widget-id".

Example response rule, line-spaced for readability and using variables for easy customization:

# Only modify text/html pages
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" )) break;

$inserttag = "<!--SIDEBAR-->";

# create a widget ID @ https://twitter.com/settings/widgets/new
# This is the id used by riverbed.com
$ttimelinedataid = "261517019072040960";
$ttimelinewidth = "520";          # max could be limited by ID config.
$ttimelineheight = "420";
$ttimelinelinkcolor = "#0080ff";  # 0 for default or ID config, #0080ff & #0099cc are nice
$ttimelinetheme = "dark";         # "light" or "dark"
$ttimelinelimit = "0";            # 0 = unlimited with scroll. >=1 will ignore height.
# See https://dev.twitter.com/web/embedded-timelines#customization for other options.

$ttimelinehtml = "<a class=\"twitter-timeline\" " .
                 "width=\"" . $ttimelinewidth .
                 "\" height=\"" . $ttimelineheight .
                 "\" data-theme=\"" . $ttimelinetheme .
                 "\" data-link-color=\"" . $ttimelinelinkcolor .
                 "\" data-tweet-limit=\"" . $ttimelinelimit .
                 "\" data-widget-id=\"" . $ttimelinedataid .
                 "\"></a><script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)" .
                 "[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id))" .
                 "{js=d.createElement(s);js.id=id;js.src=p+" .
                 "\"://platform.twitter.com/widgets.js\";fjs.parentNode.insertBefore(js," .
                 "fjs);}}(document,\"script\",\"twitter-wjs\");" .
                 "</script><br>" . $inserttag;

$body = http.getResponseBody();
$body = string.replace( $body, $inserttag, $ttimelinehtml );
http.setResponseBody( $body );

A short version of the rule above, still with line breaks for readability:

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" )) break;

http.setResponseBody( string.replace( http.getResponseBody(), "<!--SIDEBAR-->",
  "<a class=\"twitter-timeline\" width=\"520\" height=\"420\" data-theme=\"dark\" " .
  "data-link-color=\"#0080ff\" data-tweet-limit=\"0\" data-widget-id=\"261517019072040960\">" .
  "</a><script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test" .
  "(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;" .
  "js.src=p+\"://platform.twitter.com/widgets.js\";fjs.parentNode.insertBefore(js,fjs);}}" .
  "(document,\"script\",\"twitter-wjs\");</script><br><!--SIDEBAR-->" ));

Result from either rule:
View full article
With more services being delivered through a browser, it's safe to say web applications are here to stay. The rapid growth of web-enabled applications and an increasing number of client devices mean that organizations are dealing with more document transfer methods than ever before. Providing easy access to these applications (web mail, intranet portals, document storage, etc.) can expose vulnerable points in the network.

When it comes to security and protection, application owners typically cover the common threats and vulnerabilities. What is often overlooked happens to be one of the first things we learned about on the internet: virus protection. Some application owners consider the response "We have virus scanners running on the servers" sufficient. These same owners implement security plans that involve extending protection as far as possible, but surprisingly allow a virus to travel several layers into the architecture before it is caught.

Pulse vADC can extend protection for your applications with unmatched software flexibility and scale. Utilize existing investments by installing Pulse vADC on your infrastructure (Linux, Solaris, VMware, Hyper-V, etc.) and integrate with existing antivirus scanners. Deploy Pulse vADC (available with many providers: Amazon, Azure, CoSentry, Datapipe, Firehost, GoGrid, Joyent, Layered Tech, Liquidweb, Logicworks, Rackspace, Sungard, Xerox, and many others) and externally proxy your applications to remove threats before they are in your infrastructure. Additionally, when serving as a forward proxy for clients, Pulse vADC can be used to mitigate virus propagation by scanning outbound content.

The Pulse Web Application Firewall ICAP Client Handler provides the ability to integrate with an ICAP server. ICAP (Internet Content Adaptation Protocol) is a protocol aimed at providing simple object-based content vectoring for HTTP services. The Web Application Firewall acts as an ICAP client and passes requests to a specified ICAP server. This enables you to integrate with third-party products based on the ICAP protocol. In particular, you can use the ICAP Client Handler as a virus scanner interface for scanning uploads to your web application.

Example Deployment

This deployment uses version 9.7 of the Pulse Traffic Manager with the open source applications ClamAV and c-icap installed locally. If utilizing a cluster of Traffic Managers, this deployment should be performed on all nodes of the cluster. Additionally, Traffic Manager could be utilized as an ADC to extend availability and performance across multiple external ICAP application servers. I would also like to credit Thomas Masso, Jim Young, and Brian Gautreau - thank you for your assistance!

"ClamAV is an open source (GPL) antivirus engine designed for detecting Trojans, viruses, malware and other malicious threats." - http://www.clamav.net/

"c-icap is an implementation of an ICAP server. It can be used with HTTP proxies that support the ICAP protocol to implement content adaptation and filtering services." - The c-icap project

Installation of ClamAV, c-icap, and libc-icap-mod-clamav

For this example, public repositories are used to install the packages on version 9.7 of the Traffic Manager virtual appliance with the default configuration. To install in a different manner or operating system, consult the ClamAV and c-icap documentation.
Run the following command (copy and paste) to back up the sources.list file:

cp /etc/apt/sources.list /etc/apt/sources.list.rvbdbackup

Run the following commands to update the sources.list file. *Tested with Traffic Manager virtual appliance version 9.7. For other Ubuntu releases, replace 'precise' with the codename of the installed release. Run "lsb_release -sc" to find out your release.

cat <<EOF >> /etc/apt/sources.list
deb http://ch.archive.ubuntu.com/ubuntu/ precise main restricted
deb-src http://ch.archive.ubuntu.com/ubuntu/ precise main restricted
deb http://us.archive.ubuntu.com/ubuntu/ precise universe
deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe
deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
EOF

Run the following command to retrieve the updated package lists:

apt-get update

Run the following command to install ClamAV, c-icap, and libc-icap-mod-clamav:

apt-get install clamav c-icap libc-icap-mod-clamav

Run the following command to restore your sources.list:

cp /etc/apt/sources.list.rvbdbackup /etc/apt/sources.list

Configure the c-icap ClamAV service

Run the following commands to add lines to /etc/c-icap/c-icap.conf:

cat <<EOF >> /etc/c-icap/c-icap.conf
Service clamav srv_clamav.so
ServiceAlias avscan srv_clamav?allow204=on&sizelimit=off&mode=simple
srv_clamav.ScanFileTypes DATA EXECUTABLE ARCHIVE GIF JPEG MSOFFICE
srv_clamav.MaxObjectSize 100M
EOF

*Consult the ClamAV and c-icap documentation and customize the configuration and settings for ClamAV and c-icap (i.e. definition updates, ScanFileTypes, restricting c-icap access, etc.) for your deployment.

Just for fun, run the following command to manually update the ClamAV database:

/usr/bin/freshclam

Configure the ICAP Server to Start

This process can be completed a few different ways; for this example we are going to use the Event Alerting functionality of Traffic Manager to start the c-icap server when the Web Application Firewall is started.

Save the following bash script (for this example start_icap.sh) on your computer:

#!/bin/bash
/usr/bin/c-icap
#END

Upload the script via the Traffic Manager UI under Catalogs > Extra Files > Action Programs. (See Figure 1.)

Create a new event type (for this example named "Firewall Started") under System > Alerting > Manage Event Types. Select "appfirewallcontrolstarted: Application firewall started" and click Update to save. (See Figure 2.)

Create a new action (for this example named "Start ICAP") under System > Alerting > Manage Actions. Select the "Program" radio button and click "Add Action" to save. (See Figure 3.)

Configure the "Start ICAP" action program to use the "start_icap.sh" script, and for this example adjust the timeout setting to 300. Click Update to save. (See Figure 4.)

Configure the alert mapping under System > Alerting to use the event type and action previously created. Click Update to save your changes. (See Figure 5.)

Restart the Application Firewall or reboot to automatically start the c-icap server. Alternatively, you can run the /usr/bin/c-icap command from the console, or select "Update and Test" under the "Start ICAP" alert configuration page of the UI to manually start c-icap.

Configure the Web Application Firewall

Within the Web Application Firewall UI, add and configure the ICAPClientHandler using the following attributes and values.
icap_server_location - 127.0.0.1
icap_server_resource - /avscan

Testing Notes

Check the WAF application logs. Use full logging for the application configuration and enable_logging for the ICAPClientHandler. As with any system, use full logging with caution - the logs can fill fast!

Check the c-icap logs (cat /var/log/c-icap/access.log and /var/log/c-icap/server.log). Note: changing the /etc/c-icap/c-icap.conf "DebugLevel" value to 9 is useful for testing and recording to /var/log/c-icap/server.log. *You may want to change this back to 1 when you are done testing.

The Action Settings page in the Traffic Manager UI (for this example Alerting > Actions > Start ICAP) also provides an "Update and Test" button that allows you to trigger the action and start the c-icap server.

Enable verbose logging for the "Start ICAP" action in the Traffic Manager for more information from the event mechanism. *You may want to change this setting back to disabled when you are done testing.

A simple end-to-end check using the EICAR test string is sketched below, after the Additional Information links.

Additional Information

Pulse Secure Virtual Traffic Manager
Pulse Secure Virtual Web Application Firewall
Product Documentation
RFC 3507 - Internet Content Adaptation Protocol (ICAP)
The c-icap project
Clam AntiVirus
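Here is that end-to-end check. It uploads the harmless, industry-standard EICAR test string through the protected application so that ClamAV has something to detect. The URL and form field name below are placeholders, and whether the request is rejected outright or simply logged depends on your application and WAF configuration, so treat this as a sketch rather than a definitive test plan.

# Create the EICAR test file (recognised by all mainstream virus scanners)
cat > eicar.txt <<'EOF'
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*
EOF

# Upload it through the WAF-protected service ("upload" and the URL are placeholders)
curl -i -F "upload=@eicar.txt" http://www.example.com/upload

# The ICAPClientHandler should hand the upload to c-icap/ClamAV; check the WAF
# application logs and /var/log/c-icap/access.log for the detection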
View full article
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for SAP NetWeaver.

This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.
View full article
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft SharePoint 2013.
View full article
The VMware Horizon Mirage Load Balancing Solution Guide describes how to configure Riverbed SteelApp to load balance VMware Horizon Mirage servers.   VMware® Horizon Mirage™ provides unified image management for physical desktops, virtual desktops and BYOD.
View full article
The article Using Pulse vADC with SteelCentral Web Analyzer shows how to create and customize a rule to inject JavaScript into web pages to track end-to-end performance and measure the actual user experience, and how to enhance it to create dynamic instrumentation for a variety of use cases.

To make it even easier to use Traffic Manager with SteelCentral Web Analyzer - BrowserMetrix, we have created a simple, encapsulated rule (included in the file attached to this article, "SteelApp-BMX.txt") which can be copied directly into Traffic Manager, and which includes a form to let you customize the rule with your own ClientID and AppID in the snippet. In this example, we will add the new rule to our example web site, "http://www.northernlightsastronomy.com", using the following steps:

1. Create the new rule

The quickest way to create a new rule on the Traffic Manager console is to navigate to the virtual server for your web application, click through to the Rules linked to this virtual server, and then, at the foot of the page, click "Manage Rules in Catalog." Type in a name for your new rule, ensure the "Use TrafficScript" and "Associate with this virtual server" options are checked, then click on "Create Rule".

2. Copy in the encapsulated rule

In the new rule, simply copy and paste in the encapsulated rule (from the file attached to this article, "SteelApp-BMX.txt") and click on "Update" at the end of the form.

3. Customize the rule

The rule is now transformed into a simple form which you can customize, and you can enter the "clientId" and "appId" parameters from the Web Analyzer - BrowserMetrix console. In addition, you must enter the hostname which Traffic Manager uses to serve the web pages. Enter only the hostname itself, excluding any prefix such as "http://" or "https://".

The new rule is now enabled for your application, and you can track it via the SteelCentral Web Analyzer console.

4. How to find your clientId and appId parameters

Creating and modifying your JavaScript snippet requires the "clientId" and "appId" parameters from the Web Analyzer - BrowserMetrix console. To find them, go to the home page and click on the "Application Settings" icon next to your application.

The next screen shows the plain JavaScript snippet - from this, you can copy the "clientId" and "appId" parameters.

5. Download the template rule now!

You can download the template rule from the file attached to this article, "SteelApp-BMX.txt" - the rule can be copied directly into Traffic Manager, and includes a form to let you customize the rule to include your own ClientID and AppID in the snippet.
View full article
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Magento.
View full article
An interesting use case cropped up recently - one of our users wanted to do some smarts with the login credentials of an FTP session. This article steps through a few sample FTP rules and explains how to manage this sort of traffic.

Before you begin

Make sure you have a suitable FTP client. The command-line ftp tool shipped with most Unix-like systems supports a -d flag that reports the underlying FTP messages, so it's great for this exercise.

Pick a target FTP server. I tested against ftp.riverbed.com and ftp.debian.org, but other FTP servers may differ for subtle reasons.

Review the FTP protocol specification - it's sufficient to know that it uses a single TCP control channel, requests are of the form 'VERB[ parameter]\r\n' and responses are of the form 'CODE message\n'. Multi-line responses are accepted; all but the last line of the response include an additional hyphen ('CODE-message\n').

Create your FTP virtual server

Use the 'Add a new service' wizard to create your FTP virtual server. Just for fun, add a server banner (Virtual Server > Connection Management > FTP-Specific Settings).

Verify that you can log in to your FTP server through Stingray, and that the banner is rewritten. Now we're good to go!

Intercepting login credentials

We want to intercept FTP login attempts, and change all logins to 'anonymous'. If a user logs in with 'username:password', we're going to convert that to 'anonymous:username' and discard the password.

Create the following request rule, and assign it to the FTP virtual server:

log.info( "Received connection: state is '" . connection.data.get( "state" ) . "'" );

if( connection.data.get( "state" ) == "" ) {
   # This is server-first, so we have no data on the first connect
   connection.data.set( "state", "connected" );
   break;
}

if( connection.data.get( "state" ) == "connected" ) {
   # Get the request line
   $req = string.trim( request.endswith( "\n" ) );
   log.info( " ... got request '" . $req . "'" );

   if( string.regexmatch( $req, "USER (.*)" ) ) {
      connection.data.set( "user", $1 );
      # Translate this to an anonymous login
      log.info( " ... rewriting request to 'USER anonymous'" );
      request.set( "USER anonymous\r\n" );
   }

   if( string.regexmatch( $req, "PASS (.*)" ) ) {
      $pass = $1;
      connection.data.set( "pass", $pass );
      $user = connection.data.get( "user" );
      # Set the appropriate password
      log.info( " ... rewriting request to 'PASS ".$user."'" );
      request.set( "PASS ".$user."\r\n" );
   }
}

Now, if you log in with your email address (for example) and a password, the rule will switch your login to an anonymous one and will log the result.

Authenticating the user's credentials

You can extend this rule to authenticate the credentials that the user provided. At the point in the rule where you have the username and password, you can call a Stingray authenticator, a Java Extension, or reference a table of users as described in libTable.rts: Interrogating tables of data in TrafficScript. For example, in your TrafficScript rule:

# AD authentication
$ldap = auth.query( "AD Auth", $user, $pass );
if( $ldap['Error'] ) {
   log.error( "Error with authenticator 'AD Auth': " . $ldap['Error'] );
   connection.discard();
} else if( !$ldap['OK'] ) {
   log.info( "User not authenticated. Username and/or password incorrect" );
   connection.discard();
}
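Whichever authentication back-end you choose, it helps to watch the control channel while you test the rule. One convenient option is curl's verbose mode; the hostname and credentials below are placeholders for your own virtual server and test account.

# Log in through the FTP virtual server and list the root directory;
# -v shows the commands curl sends and the server's responses - the rewrite itself
# shows up in the Stingray event log via the log.info() calls in the rule
curl -v --user bob@example.com:secret ftp://stingray.example.com/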
View full article
You may be familiar with the security concept of a 'honeypot' - a sandboxed, sacrificial computer system that sits safely away from the primary systems. Any attempts to access that computer are a strong indicator that an attacker is at work, probing for weak points in a network.

A recent Slashdot article raised an interesting idea... 'honeywords' are fake accounts in a password database that don't correspond to real users. Any attempt to log in with one of these accounts is a strong indicator that the password database has been stolen.

In a similar vein, attempts to log in with common, predictable admin accounts are a strong indicator that an attacker is scanning your system and looking for weaknesses. This article describes how you can detect these attacks with ease, and then considers different methods you could use to block the attacker.

Detecting Attack Attempts

Attackers look for common account names and passwords (see [1], [2] and [3]).

Traffic Manager is in an ideal position to detect attack attempts. It can inspect the username and password in each login attempt, and flag an alert if a user appears to be scanning for familiar usernames.

Step 1: Determine how the login process functions

Credentials are usually presented to the server as HTTP form parameters, typically in an HTTP POST to an SSL-protected endpoint. Web inspection tools such as the Chrome Developer tools (illustrated above) help you understand how the authentication credentials are presented to the login service.

You can use the TrafficScript function http.getFormParam() to look up the submitted HTTP form parameters - this function extracts parameters from both the query string (GET and POST requests) and the HTTP request body (POST requests), handles any unusual body transfer encoding, and %-decodes the values:

$userid = http.getFormParam( "Email" );
$pass = http.getFormParam( "Password" );

Step 2: Does this constitute an attack?

You'll need to make a judgement as to what constitutes an attack attempt against your service. A single attempt to log in with 'admin:admin' is probably sufficient to block a user, but multiple attempts in a short period of time certainly indicate a concerted attack.

An easy way to count user/password combinations is to use a rate shaping class to count events. Stingray's rate classes are usually used to implement queues (delaying requests that exceed the per-second or per-minute limit), but you can also use the rate.use.noQueue() function to determine whether an event has exceeded the rate limit or not, without queuing it.

Let's construct a policy that detects if a particular source IP address is trying to log in to one of our false 'admin' accounts too frequently:

$path = http.getPath();
if( $path != "/cgi-bin/login.cgi" ) break;

$ip = request.getRemoteIP();
$user = http.getFormParam( "user" );

if( string.regexmatch( $user,
      "^(admin|root|phpadmin|test|guest|user|administrator|mysql|adm|oracle)$" ) ) {
   if( rate.use.noQueue( "5 per minute", $ip ) == 0 ) {
      # User has exceeded the limits ....
   }
}

An aside: if you would like to maintain a large list of honeyword names (making sure that none of them correspond to real accounts), then you may find it easier to store them in an external table using libTable.rts: Interrogating tables of data in TrafficScript.

Responding to Attack Attempts

If you determine that a particular IP address is generating attack attempts and you want to block it, there are a number of ways that you can do so.
They vary in complexity, accuracy and the ability to 'time out' the period that an IP address is blocked for:

Store data locally in the global data segment - straightforward to code, timeouts possible, not cluster-aware
Store data in the resource directory - straightforward to code, timeouts possible, is cluster-aware
Update configuration in a service protection policy - straightforward to code, difficult to avoid race conditions, not possible to time out the configuration, is cluster-aware
Provision iptables rules from an event - complex to code accurately but very effective, not possible to time out, is cluster-aware

Updating the configuration in a service protection policy could be achieved by calling the REST API from TrafficScript - perform a GET on the configuration ( /api/tm/1.0/config/active/protection/name ), update the banned array, and PUT the configuration back again. However, there is no natural way to remove (time out) a block on an IP address after a period of inactivity.

Provisioning iptables rules would be possible with a specific event handler that responded to the TrafficScript function event.emit( "block", $ip ), but once again, there's no easy way to time a block rule out.

Storing data locally in the resource directory is a good approach, and is described in detail in the article Slowing down busy users - driving the REST API from TrafficScript. The basic premise is that you can use the REST API to 'touch' a file (named after an IP address) in the resource directory, and you block a user if their IP address corresponds to a file in the resource directory that is not too old. However, if the user does not return, you will build up a large number of files in the resource directory that should be manually pruned.

Storing data in the global data segment (How is memory managed in TrafficScript?) is perhaps the best solution. The following code sample illustrates the basic premise:

$prefix = "blocked-ip-address:";

# Record that an IP address is blocked
data.set( $prefix.$ip, 1 );

# Check if an IP address is blocked
if( data.get( $prefix.$ip ) ) {
   connection.discard();
}

# Delete all records
data.reset( $prefix );

You could implement timeouts in a simple fashion - for example, by calling data.reset() on the first transaction after the top of every hour:

$hour = sys.time.hour();
$last = data.get( $prefix."hour" );
if( $last != $hour ) {
   data.reset( $prefix );
   data.set( $prefix."hour", $hour );
}

An aside: there is a very slight risk of a race condition here (if two cores run the rule simultaneously) but the effects are not significant.

This approach gives a simple and effective solution to the problem of detecting logins to fake admin accounts, and then blocking the IP address for up to an hour.

What if I want to block IP addresses for longer?

One weakness of the approach above is that if an IP address is added to the block table at 59 minutes past the hour, it will be removed a minute later. This may not be a serious fault; if the user is continuing to try to force admin accounts, the rule will detect this and block the IP address again shortly after.
An alternative solution is to store two tables - one for odd-numbered hours, and one for even-numbered hours:

When you add an IP address, place it in the odd or even table according to the current hour.
When you test for the presence of an IP address, check both tables.
When the hour rolls over and you switch to the even-numbered table (for example), delete all of its entries (using data.reset) before proceeding - they will be between one and two hours old.

$prefix = "blocked-ip-address:";

# Check if an IP address is blocked
if( data.get( $prefix."0:".$ip ) || data.get( $prefix."1:".$ip ) ) {
   connection.discard();
}

# Add an IP address (this is an infrequent operation, we hope!)
$hour = sys.time.hour();
$pp = ( $hour % 2 ) . ":";   # pp is either 0: or 1:

$last = data.get( $prefix.$pp."hour" );
if( $last != $hour ) {
   data.reset( $prefix.$pp );
   data.set( $prefix.$pp."hour", $hour );
}

data.set( $prefix.$pp.$ip, 1 );

This rule could be extended further to any number of tables, and to any time interval, though this is almost certainly overkill for this solution.

Read More

Interested in knowing which usernames are most commonly used? Check out the article Being Lazy with Java Extensions and the 'CountThis' extension.
Other security and denial-of-service related articles - check out the Security section of the Top Stingray Examples and Use Cases article.
View full article
When Stingray load-balances a connection to an iPlanet/SunONE/Sun Java System Web Server server or application, the connection appears to originate from the Stingray machine. This can be a problem if the server wishes to perform access control based on the client's IP address, or if it wants to log the true source address of the request, and is well documented in the article IP Transparency: Preserving the Client IP address in Stingray Traffic Manager.

Stingray has an IP Transparency feature that preserves the client's IP address, but this requires the Stingray Kernel Modules for Linux Software (pre-installed on Stingray Virtual Appliances and available separately for Stingray software) and is currently only available under Linux. As an alternative, the mod_remoteip module is a good solution for Apache; this article presents a similar module for iPlanet and related web servers.

How it works

Stingray automatically inserts a special X-Cluster-Client-Ip header into each request, which identifies the true source address of the request. The iPlanet/Sun NSAPI module inspects this header and corrects the calculation of the source address. This change is transparent to the web server, and to any applications running on or behind the web server.

Obtaining the Module

Compile the module from source:

https://gist.github.com/5546803

To determine the appropriate compilation steps for an NSAPI module for your instance of iPlanet, you can first build the NSAPI examples in your SunONE installation:

$ cd plugins/nsapi/examples/
$ make
cc -DNET_SSL -DSOLARIS -D_REENTRANT -DMCC_HTTPD -DXP_UNIX -DSPAPI20 \
   -I../../include -I../../include/base -I../../include/frame -c addlog.c
ld -G addlog.o -o example.so

You can build the iprewrite.so module using similar options. Set NSHOME to the installation location for iPlanet:

$ export NSHOME=/opt/iplanet
$ cc -DNET_SSL -DSOLARIS -D_REENTRANT -DMCC_HTTPD -DXP_UNIX -DSPAPI20 \
     -I$NSHOME/plugins/include -I$NSHOME/plugins/include/base \
     -I$NSHOME/plugins/include/frame -c iprewrite.c
$ ld -G iprewrite.o -o iprewrite.so
$ cp iprewrite.so $NSHOME/plugins

Configuring the Module

To configure the module, you will need to edit the magnus.conf and obj.conf files for the virtual server you are using. If the virtual server is named 'test', you'll find these files in the https-test/config directory.

magnus.conf

Add the following lines to the end of the magnus.conf file. Ensure that the shlib option identifies the full path to the iprewrite.so module, and that you set TrustedIPs to either '*' or the list of Stingray back-end IP addresses:

Init fn="load-modules" funcs="iprewrite-init,iprewrite-all,iprewrite-func" \
     shlib="/usr/local/iplanet/plugins/iprewrite.so"
Init fn="iprewrite-init" TrustedIPs="10.100.1.68 10.100.1.69"

The TrustedIPs option specifies the back-end addresses of the Stingray machines. The iprewrite.so module will only trust the 'X-Cluster-Client-Ip' header in connections which originate from these IP addresses. This means that remote users cannot spoof their source addresses by inserting a false header and accessing the iPlanet/Sun servers directly.

obj.conf

Locate the 'default' object in your obj.conf file and add the following line at the start of the directives inside that object:

<Object name=default>
AuthTrans fn="iprewrite-all"
...

Restart your iPlanet/Sun servers, and monitor your servers' error logs (https-name/log/errors).
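Once the servers are back up, you can exercise the TrustedIPs check from a machine that is not in the trusted list - the module is described below as ignoring a spoofed header from such a source and logging the fact. A rough sketch, where the hostname and header value are placeholders:

# Send a forged X-Cluster-Client-Ip directly to the web server from an untrusted
# machine; the module should ignore the header rather than honour it
curl -s -H "X-Cluster-Client-Ip: 204.17.28.130" http://webserver.example.com/ > /dev/null

# The attempt should be recorded in the virtual server's error log
tail https-test/log/errors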
The Result

iPlanet/Sun, and applications running on the server, will see the correct source IP address for each request. The access log module will log the correct address when you use %a or %h in your log format string.

If you have misconfigured the TrustedIPs value, you will see messages like:

Ignoring X-Cluster-Client-Ip '204.17.28.130' from non-Load Balancer machine '10.100.1.31'

Add the IP address to the trusted IP list and restart.

Alternate Configuration

The 'iprewrite-all' SAF function changes the IP address for the entire duration of the connection. This may be too invasive for some environments, and it's possible that a later SAF function may modify the IP address again. You can use the 'iprewrite-func' SAF function to change the IP address for a single NSAPI function. For example, BEA's NSAPI WebLogic connector ('wl_proxy') is normally configured as follows:

<Object name="weblogic" ppath="/weblogic/">
Service fn=wl_proxy WebLogicHost=localhost WebLogicPort=7001 PathTrim="/weblogic"
</Object>

You can change the IP address just for that function call, using the iprewrite-func SAF function as follows:

<Object name="weblogic" ppath="/weblogic/">
Service fn=iprewrite-func func=wl_proxy WebLogicHost=localhost WebLogicPort=7001 PathTrim="/weblogic"
</Object>
View full article
This document provides step-by-step instructions on migrating a Cisco ACE configuration to Stingray Traffic Manager.
View full article
Imagine you're running a popular image hosting site, and you're concerned that some users are downloading images too rapidly. Or perhaps your site publishes airfares, or gaming odds, or auction prices, or real estate details, and screen-scraping software is spidering your site and overloading your application servers. Wouldn't it be great if you could identify the users who are abusing your web services and then apply preventive measures - for example, a bandwidth limit - for a period of time to limit those users' activity?

In this example, we'll look at how you can drive the control plane (the traffic manager configuration) from the data plane (a TrafficScript rule):

Identify a user by some id, for example, the remote IP address or a cookie value.
Measure the activity of each user using a rate class.
If a user exceeds the desired rate (their terms of service), add a resource file identifying the user and their 'last sinned' time.
Check the time recorded in the resource file to see if we should apply a short-term limit to that user's activity.

Basic rule

# We want to monitor image downloads only
if( !string.wildMatch( http.getPath(), "*.jpg" ) ) break;

# Identify each user by their remote IP.
# Could use a cookie value here, although that is vulnerable to spoofing
# Note that we'll use $uid as a filename, so it needs to be secured
$uid = request.getRemoteIP();

if( !rate.use.noQueue( "10 per minute", $uid ) ) {
   # They have exceeded the desired rate and broken the terms of use
   # Let's create a config file named $uid, containing the current time
   http.request.put( "http://localhost:9070/api/tm/1.0/config/active/extra/".$uid,
      sys.time(),
      "Content-type: application/octet-stream\r\n".
      "Authorization: Basic ".string.base64encode( "admin:admin" ) );
}

# Now test - did the user $uid break their terms of use recently?
$lastbreach = resource.get( $uid );
if( ! $lastbreach ) break; # config file does not exist

if( sys.time()-$lastbreach < 60 ) {
   # They last breached the limits less than 60 seconds ago
   response.setBandwidthClass( "Very slow" );
} else {
   # They have been forgiven their sins. Clean up the config file
   http.request.delete( "http://localhost:9070/api/tm/1.0/config/active/extra/".$uid,
      "Authorization: Basic ".string.base64encode( "admin:admin" ) );
}

This example uses a rate class named '10 per minute' to monitor the request rate for each user, and a bandwidth class named 'Very slow' to apply an appropriate bandwidth limit. You could potentially implement a similar solution using client-side cookies to identify users who should be bandwidth-limited, but this solution has the advantage that the state is stored locally and is not dependent on trusting the user to honor cookies.

There's scope to improve this rule. The biggest danger is that if a user exceeds the limit consistently, this will result in a flurry of http.request.put() calls to the local REST daemon. We can solve this problem quite easily with a rate class that will limit how frequently we update the configuration. If that slows down a user who has just exceeded their terms of service, that's not really a problem for us!

rate.use( "10 per minute" ); # stall the user if necessary to avoid overload
http.request.put( ... );

Note that we can safely use the rate class in two different contexts in one rule. The first usage ( rate.use( "name", $uid ) ) will rate-limit each individual value of $uid; the second ( rate.use( "name" ) ) is a global rate limit that will limit all calls to the REST API.
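If you want to exercise that REST endpoint by hand before wiring it into a rule - for example, to check the credentials and path - the equivalent calls from a shell look something like the sketch below. It reuses the localhost:9070 endpoint and admin:admin credentials that appear in the rule above; the IP address is just a documentation placeholder.

# Record a breach time for a client (the extra file is named after the client IP)
curl -X PUT -u admin:admin -H "Content-type: application/octet-stream" \
     --data "$(date +%s)" \
     http://localhost:9070/api/tm/1.0/config/active/extra/192.0.2.1

# Read it back
curl -u admin:admin http://localhost:9070/api/tm/1.0/config/active/extra/192.0.2.1

# Forgive the user (remove the file)
curl -X DELETE -u admin:admin http://localhost:9070/api/tm/1.0/config/active/extra/192.0.2.1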
Read more

Check out the other prioritization and rate shaping suggestions on splash, including:

Dynamic rate shaping slow applications
The "Contact Us" attack against mail servers
Stingray Spider Catcher
Evaluating and Prioritizing Traffic with Stingray Traffic Manager
View full article
Following is a library that I am working on that has one simple design goal: make it easier to do authentication overlay with Stingray.

I want to have the ability to deploy a configuration that uses a single line to input an authentication element (basic auth or forms based) that takes the name of an authenticator, and uses a simple list to define what resources are protected and which groups can access them.

Below is the beginning of this library. Once we have better code revision handling in splash (hint hint Owen Garrett!!) I will move it to something more re-usable. Until then, here it is.

As always, comments, suggestions, flames or gifts of mutton and mead most welcome...

The way I want to call it is like this:

import lib_auth_overlay as aaa;

# Here we challenge for user/pass
$userpasswd = aaa.promptAuth401();

# extract the entered username / password into variables for clarity
$username = $userpasswd[0];
$password = $userpasswd[1];

# Here we authenticate and check that the user is a member of the listed group.
# We are using the "user_ldap" authenticator that I set up against my laptop.snrkl.org
# AD domain controller.
$authResult = aaa.doAuthAndCheckGroup( "user_ldap", $username, $password, "CN=staff,CN=Users,DC=laptop,DC=snrkl,DC=org" );

# for convenience we will tell the user the result of their Auth in an http response
aaa.doHtmlResponse.200( "Auth Result:" . $authResult );

Here is the lib_auth_overlay that is referenced in the above element. Please note the promptAuthHttpForm() subroutine is not yet finished...

sub doHtmlResponse.200( $message ){
   http.sendResponse(
      "200 OK",
      "text/html",
      $message,
      ""
   );
}

sub challengeBasicAuth( $errorMessage, $realm ){
   http.sendResponse(
      "401 Access Denied",
      "text/html",
      $errorMessage,
      "WWW-Authenticate: Basic realm=\"" . $realm . "\"" );
}

sub basicAuthExtractUserPass( $ah ){ #// $ah is $authHeader
   $enc = string.skip( $ah, 6 );
   $up = string.split( string.base64decode( $enc ), ":" );
   return $up;
}

sub doAuthAndGetGroups( $authenticator, $u, $p ){
   $auth = auth.query( $authenticator, $u, $p );
   if( $auth['Error'] ) {
      log.error( "Error with authenticator " . $authenticator . ": " . $auth['Error'] );
      return "Authentication Error";
   } else if( !$auth['OK'] ) { #// Auth is not OK
      # Unauthorised
      log.warn( "Access Denied - invalid username or password for user: \"" . $u . "\"" );
      return "Access Denied - invalid username or password";
   } else if( $auth['OK'] ){
      log.info( "Authenticated \"" . $u . "\" successfully at " . sys.localtime.format( "%a, %d %b %Y %T EST" ));
      return $auth['memberOf'];
   }
}

sub doAuthAndCheckGroup( $authenticator, $u, $p, $g ){
   $auth = auth.query( $authenticator, $u, $p );
   if( $auth['Error'] ) {
      log.error( "Error with authenticator \"" . $authenticator . "\": " . $auth['Error'] );
      return "Authentication Error";
   } else if( !$auth['OK'] ) { #// Auth is not OK
      # Unauthorised
      log.warn( "Access Denied - invalid username or password for user: \"" . $u . "\"" );
      return "Access Denied - invalid username or password";
   } else if( $auth['OK'] ){
      log.info( "Authenticated \"" . $u . "\" successfully at " . sys.localtime.format( "%a, %d %b %Y %T EST" ));
      if( lang.isArray( $auth['memberOf'] )){ #// More than one group returned
         foreach( $group in $auth['memberOf'] ){
            if( $group == $g ) {
               log.info( "User \"" . $u . "\" permitted access at " . sys.localtime.format( "%a, %d %b %Y %T EST" ));
               return "PASS";
               break;
            } else {
               log.warn( "User \"" . $u . "\" denied access - not a member of \"" . $g . "\" at " . sys.localtime.format( "%a, %d %b %Y %T EST" ));
            }
         }
         #// If we get to here, we have exhausted the list of groups with no match
         return "FAIL";
      } else { #// This means that only one group is returned
         $group = $auth['memberOf'];
         if( $group == $g ) {
            log.info( "User \"" . $u . "\" permitted access at " . sys.localtime.format( "%a, %d %b %Y %T EST" ));
            return "PASS";
            break;
         } else {
            log.warn( "User \"" . $u . "\" denied access - not a member of \"" . $g . "\" at " . sys.localtime.format( "%a, %d %b %Y %T EST" ));
            return "FAIL";
         }
      }
   }
}

sub promptAuth401(){
   if( !http.getHeader( "Authorization" )) { #// no Authorization header present, let's challenge for credentials
      challengeBasicAuth( "Error Message", "Realm" );
   } else {
      $authHeader = http.getHeader( "Authorization" );
      $up = basicAuthExtractUserPass( $authHeader );
      return $up;
   }
}

sub promptAuthHttpForm(){
   $response = "<html>
<head>Authenticate me...</head>
<form action=/login method=POST>
<input name=user required>
<input name=realm type=hidden value=stingray>
<input name=pass type=password required>
<button>Log In</button>
</form>
</html>";
   doHtmlResponse.200( $response );
}
View full article
In Stingray, each virtual server is configured to manage traffic of a particular protocol.  For example, the HTTP virtual server type expects to see HTTP traffic, and automatically applies a number of optimizations - keepalive pooling, HTTP upgrades, pipelines - and offers a set of HTTP-specific functionality (caching, compression etc).   A virtual server is bound to a specific port number (e.g. 80 for HTTP, 443 for HTTPS) and a set of IP addresses.  Although you can configure several virtual servers to listen on the same port, they must be bound to different IP addresses; you cannot have two virtual servers bound to the same IP: port pair as Stingray will not know which virtual server to route traffic to.   "But I need to use one port for several different applications!"   Sometimes, perhaps due to firewall restrictions, you can't publish services on arbitrary ports.  Perhaps you can only publish services on port 80 and 443; all other ports are judged unsafe and are firewalled off. Furthermore, it may not be possible to publish several external IP addresses.   You need to accept traffic for several different protocols on the same IP: port pair.  Each protocol needs a particular virtual server to manage it;  How can you achieve this?   The scenario   Let's imagine you are hosting several very different services:   A plain-text web application that needs an HTTP virtual server listening on port 80 A second web application listening for HTTPS traffic listening on port 443 An XML-based service load-balanced across several servers listening on port 180 SSH login to a back-end server (this is a 'server-first' protocol) listening on port 22   Clearly, you'll need four different virtual servers (one for each service), but due to firewall limitations, all traffic must be tunnelled to port 80 on a single IP address.  How can you resolve this?   The solution - version 1   The solution is relatively straightforward for the first three protocols.  They are all 'client-first' protocols (see Feature Brief: Server First, Client First and Generic Streaming Protocols), so Stingray can read the initial data written from the client.   Virtual servers to handle individual protocols   First, create three internal virtual servers, listening on unused private ports (I've added 7000 to the public ports).  Each virtual server should be configured to manage its protocol appropriately, and to forward traffic to the correct target pool of servers.  You can test each virtual server by directing your client application to the correct port (e.g. http://stingray-ip-address:7080/ ), provided that you can access the relevant port (e.g. you are behind the firewall):   For security, you can bind these virtual servers to localhost so that they can only be accessed from the Stingray device.   A public 'demultiplexing' virtual server   Create three 'loopback' pools (one for each protocol), directing traffic to localhost:7080, localhost:7180 and localhost:7443.   Create a 'public' virtual server listening on port 80 that interrogates traffic using the following rule, and then selects the appropriate pool based on the data the clients send.  The virtual server should be 'client first', meaning that it will wait for data from the client connection before triggering any rules:     # Get what data we have... 
$data = request.get();

# SSL/TLS record layer:
# handshake(22), ProtocolVersion.major(3), ProtocolVersion.minor(0-3)
if( string.regexmatch( $data, '^\026\003[\000-\003]' )) {
   # Looks like SSLv3 or TLS v1/2/3
   pool.use( "Internal HTTPS loopback" );
}

if( string.startsWithI( $data, "<xml" )) {
   # Looks like our XML-based protocol
   pool.use( "Internal XML loopback" );
}

if( string.regexmatch( $data, "^(GET |POST |PUT |DELETE |OPTIONS |HEAD )" )) {
   # Looks like HTTP
   pool.use( "Internal HTTP loopback" );
}

log.info( "Request: '".$data."' unrecognised!" );
connection.discard();

The 'Detect protocol' rule is triggered once we receive client data

Now you can target all your client applications at port 80, tunnel through the firewall and demultiplex the traffic on the Stingray device.

The solution - version 2

You may have noticed that we omitted SSH from the first version of the solution.

SSH is a challenging protocol to manage in this way because it is 'server first' - the client connects and waits for the server to respond with a banner (greeting) before writing any data on the connection.  This means that we cannot use the approach above to identify the protocol type before we select a pool.

However, there's a good workaround.  We can modify the solution presented above so that it waits for client data.  If it does not receive any data within (for example) 5 seconds, then we assume that the connection is the server-first SSH type.

First, create an "SSH" virtual server and pool listening on (for example) port 7022 and directing traffic to your target SSH server (for example, localhost:22 - the local SSH daemon on the Stingray host).

Note that this is a 'Generic server first' virtual server type, because that's the appropriate type for SSH.

Second, create an additional 'loopback' pool named 'Internal SSH loopback' that forwards traffic to localhost:7022 (the SSH virtual server).

Third, reconfigure the public 'Port 80 listener' virtual server to be 'Generic streaming' rather than 'Generic client first'.  This means that it will run the request rule immediately on a client connection, rather than waiting for client data.

Finally, update the request rule to read the client data.  Because request.get() returns whatever is in the network buffer for client data, we spin and poll this buffer every 10 ms until we either get some data, or we time out after 5 seconds:

# Get what data we have...
$data = request.get();

$count = 500;
while( $data == "" && $count-- > 0 ) {
   connection.sleep( 10 );   # milliseconds
   $data = request.get();
}

if( $data == "" ) {
   # We've waited long enough... this must be a server-first protocol
   pool.use( "Internal SSH loopback" );
}

# SSL/TLS record layer:
# handshake(22), ProtocolVersion.major(3), ProtocolVersion.minor(0-3)
if( string.regexmatch( $data, '^\026\003[\000-\003]' )) {
   # Looks like SSLv3 or TLS v1/2/3
   pool.use( "Internal HTTPS loopback" );
}

if( string.startsWithI( $data, "<xml" )) {
   # Looks like our XML-based protocol
   pool.use( "Internal XML loopback" );
}

if( string.regexmatch( $data, "^(GET |POST |PUT |DELETE |OPTIONS |HEAD )" )) {
   # Looks like HTTP
   pool.use( "Internal HTTP loopback" );
}

log.info( "Request: '".$data."' unrecognised!" );
connection.discard();

This solution isn't perfect (the spin-and-poll may incur a hit for a busy service over a slow network connection), but it's an effective solution for the single-port firewall problem and explains how to tunnel SSH over port 80 (not that you'd ever do such a thing, would you?)
Read more

Check out Feature Brief: Server First, Client First and Generic Streaming Protocols for background
The WebSockets example (libWebSockets.rts: Managing WebSockets traffic with Stingray Traffic Manager) uses a similar approach to demultiplex WebSockets and HTTP traffic
View full article
This short article explains how you can match the IP addresses of remote clients against a DNS blacklist.  In this example, we'll use the Spamhaus XBL blacklist service (http://www.spamhaus.org/xbl/).

This article was updated following discussion and feedback from Ulrich Babiak - thanks!

Basic principles

The basic principle of a DNS-based blacklist such as Spamhaus' is as follows:

Perform a reverse DNS lookup of the IP address in question, using xbl.spamhaus.org rather than the traditional in-addr.arpa domain
Entries that are not in the blacklist don't return a response (NXDOMAIN); entries that are in the blacklist return a particular IP/domain response indicating their status

Important note: some public DNS servers don't respond to spamhaus.org lookups (see http://www.spamhaus.org/faq/section/DNSBL%20Usage#261). Ensure that Traffic Manager is configured to use a working DNS server.

Simple implementation

A simple implementation is as follows:

$ip = request.getRemoteIP();

# Reverse the IP, and append ".xbl.spamhaus.org".
$bytes = string.dottedToBytes( $ip );
$bytes = string.reverse( $bytes );
$query = string.bytesToDotted( $bytes ) . ".xbl.spamhaus.org";

if( $res = net.dns.resolveHost( $query ) ) {
   log.warn( "Connection from IP " . $ip . " should be blocked - status: " . $res );
   # Refer to Zen return codes at http://www.spamhaus.org/zen/
}

This implementation will issue a DNS request on every request, but Traffic Manager caches DNS responses internally, so there's little risk that you will overload the target DNS server with duplicate requests:

Traffic Manager DNS settings in the Global configuration

You may wish to increase the dns!negative_expiry setting, because DNS lookups against non-blacklisted IP addresses will 'fail'.

A more sophisticated implementation may interpret the response codes and decide, for example, to block requests from proxies (the Spamhaus XBL list) while ignoring requests from known spam sources.

What if my DNS server is slow, or fails?  What if I want to use a different resolver for the blacklist lookups?

One undesired consequence of this configuration is that it makes the DNS server a single point of failure and a performance bottleneck.  Each unrecognised (or expired) IP address needs to be matched against the DNS server, and the connection is blocked while this happens.

In normal usage, a single delay of 100ms or so against the very first request is acceptable, but a DNS failure (Traffic Manager times out after 12 seconds by default) or slowdown is more serious.

In addition, Traffic Manager uses a single system-wide resolver for all DNS operations.  If you are hosting a local cache of the blacklist, you'd want to separate the DNS traffic accordingly.

Use Traffic Manager to manage the DNS traffic?

A potential solution would be to configure Traffic Manager to use itself (127.0.0.1) as a DNS resolver, and create a virtual server/pool listening on UDP:53.  All locally-generated DNS requests would be delivered to that virtual server, which would then forward them to the real DNS server.  The virtual server could inspect the DNS traffic and route blacklist lookups to the local cache, and other requests to a real DNS server.

You could then use a health monitor (such as the included dns.pl) to check the operation of the real DNS server and mark it as down if it has failed or times out after a short period.
In that event, the virtual server can determine that the pool is down ( pool.activenodes() == 0 ) and respond directly to the DNS request using a response generated by HowTo: Respond directly to DNS requests using libDNS.rts.

Re-implement the resolver

An alternative is to re-implement the TrafficScript resolver using Matthew Geldert's libDNS.rts: Interrogating and managing DNS traffic in Traffic Manager TrafficScript library to construct the queries and analyse the responses.  Then you can use the TrafficScript function tcp.send() to submit your DNS lookups to the local cache (unfortunately, we've not got a udp.send function yet!):

sub resolveHost( $host, $resolver ) {
   import libDNS.rts as dns;

   $packet = dns.newDnsObject();
   $packet = dns.setQuestion( $packet, $host, "A", "IN" );
   $data = dns.convertObjectToRawData( $packet, "tcp" );

   $sock = tcp.connect( $resolver, 53, 1000 );
   tcp.write( $sock, $data, 1000 );
   $rdata = tcp.read( $sock, 1024, 1000 );
   tcp.close( $sock );

   $resp = dns.convertRawDatatoObject( $rdata, "tcp" );

   if( $resp["answercount"] >= 1 ) {
      return $resp["answer"][0]["host"];
   }
}

Note that we're applying 1000ms timeouts to each network operation.

Let's try this, and compare the responses from OpenDNS and from Google's DNS servers.  Our 'bad guy' is 201.116.241.246, so we're going to resolve 246.241.116.201.xbl.spamhaus.org:

$badguy = "246.241.116.201.xbl.spamhaus.org";

$text .= "Trying OpenDNS...\n";
$host = resolveHost( $badguy, "208.67.222.222" );
if( $host ) {
   $text .= $badguy . " resolved to " . $host . "\n";
} else {
   $text .= $badguy . " did not resolve\n";
}

$text .= "Trying Google...\n";
$host = resolveHost( $badguy, "8.8.8.8" );
if( $host ) {
   $text .= $badguy . " resolved to " . $host . "\n";
} else {
   $text .= $badguy . " did not resolve\n";
}

http.sendResponse( 200, "text/plain", $text, "" );

(This is just a snippet - remember to paste the resolveHost() implementation, and anything else you need, in here.)

This illustrates that OpenDNS resolves the spamhaus.org domain fine, whereas Google does not issue a response.

Caching the responses

This approach has one disadvantage: because it does not use Traffic Manager's resolver, it does not cache the responses, so you'll hit the resolver on every request unless you cache the responses yourself.

Here's a function that calls the resolveHost() function above and caches the result locally for 3600 seconds.  It returns 'B' for a bad guy, and 'G' for a good guy:

sub getStatus( $ip, $resolver ) {
   $key = "xbl-spamhaus-org-" . $resolver . "-" . $ip;   # Any key prefix will do

   $cache = data.get( $key );
   if( $cache ) {
      $status = string.left( $cache, 1 );
      $expiry = string.skip( $cache, 1 );

      if( $expiry < sys.time() ) {
         data.remove( $key );
         $status = "";
      }
   }

   if( !$status ) {
      # We don't have a (valid) entry in our cache, so look the IP up

      # Reverse the IP, and append ".xbl.spamhaus.org".
      $bytes = string.dottedToBytes( $ip );
      $bytes = string.reverse( $bytes );
      $query = string.bytesToDotted( $bytes ) . ".xbl.spamhaus.org";

      $host = resolveHost( $query, $resolver );

      if( $host ) {
         $status = "B";
      } else {
         $status = "G";
      }
      data.set( $key, $status . (sys.time()+3600) );
   }
   return $status;
}
View full article
There are many reasons why you may want to serve web content directly from Stingray Traffic Manager - simplification, performance, ease of administration and, perhaps most importantly, to host a 'Sorry Page' if your entire web infrastructure has failed and Stingray is all that is left.

The article Using Stingray Traffic Manager as a Webserver describes the rationale in more detail and presents a simple TrafficScript-based webserver.  However, we can do a lot more with a more complete programming language - mime types, index pages and more control over the location of the document root are all simple to implement with Python.

Get started with PyRunner.jar

Start with the procedure described in the article PyRunner.jar: Running Python code in Stingray Traffic Manager.  The PyRunner extension lets you run Python code in Stingray, using the local JVM and the Jython implementation.

Note that this example does not work reliably with versions of Jython prior to 2.7 beta1 - I hit problems when a library attempted to import the Jython errno module (possibly related to https://github.com/int3/doppio/issues/177).

webserver.py

Once you've installed PyRunner (using an appropriate version of Jython), upload the following Python script to your Extra Files catalog.  Make sure to call the script 'webserver.py', and edit the location of the docroot to an appropriate value:

from javax.servlet.http import HttpServlet
from urllib import url2pathname
from os import listdir
from os.path import normpath,isdir,isfile,getmtime,getsize
import mimetypes
import datetime

docroot = '/tmp'

def dirlist( uri, path, response ):
    files = ''
    for f in listdir( path ):
        if (f[0] == '.') and (f[1] != '.'): continue # hidden files
        if isdir( path+'/'+f ):
            size = '&lt;DIR&gt;'
            f += '/' # Add trailing slash for directories
        else:
            size = '{:,d} bytes'.format( getsize( path+'/'+f ))
        mtime = datetime.datetime.fromtimestamp( getmtime( path+'/'+f ))
        files += '<a href="{f}">{f:<30}</a> {t:14} {s:>17}\n'.format( f=f, t=mtime, s=size )

    html = '''<html><head><title>{uri}</title></head><body>
<h1>Directory listing for {uri}</h1>
<pre>{files}<a href="../">Up</a></pre>
</body></html>'''.format( uri=uri, files=files )

    response.setContentType( 'text/html' )
    toClient = response.getWriter()
    toClient.write( html )

class webserver(HttpServlet):
    def doGet(self, request, response):
        uri = request.getRequestURI()
        print "Processing "+uri

        path = normpath( docroot + url2pathname( uri ) )

        # Never access files outside the docroot
        if path.startswith( docroot ) == False:
            path = docroot
            uri = '/'

        if isdir( path ):
            if uri.endswith( '/' ) == False:
                response.sendRedirect( uri+'/' )
            if isfile( path + '/index.html' ):
                response.sendRedirect( uri + 'index.html' )
            else:
                dirlist( uri, path, response )
            return

        try:
            c = open( path, 'r' ).read()
        except Exception, e:
            response.sendError( 400, "Could not read "+path+": "+str( e ) )
            return

        mtype = mimetypes.guess_type( path )[0]
        if mtype == None: mtype = 'application/octet-stream'
        response.setContentType( mtype )
        toClient = response.getWriter()
        toClient.write( c )

If you want to edit the rule and try some changes, then you'll find the publish.py script in Deploying Python code to Stingray Traffic Manager useful, and you should follow the Stingray Event Log ( tail -f /opt/zeus/zxtm/log/errors ) to catch any problems.
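To serve traffic with it, attach a TrafficScript request rule to your virtual server that hands each request to the script via the PyRunner extension - a minimal sketch, following the same java.run() pattern used by the other PyRunner examples:

# Run the Python webserver for every request on this virtual server
java.run( "PyRunner", "webserver.py" );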
Performance Optimization

The article Using Stingray Traffic Manager as a Webserver describes how to front the web server virtual server with a second virtual server that caches responses and responds directly to web requests.

It also presents an alternative webserver implementation written in Java - take your pick!
View full article
Stingray Traffic Manager has a host of great, capable features to improve the performance and reliability of your web servers.  However, is there sometimes a case for putting the web content on Stingray directly, and using it as a webserver?

The role of a webserver in a modern application has shrunk over the years.  Now it's often just a front-end for a variety of application servers, performing authentication and serving simple static web content... not unlike the role of Stingray.  Often, its position in the application changes when Stingray is added.

Now, if you could move the static content on to Stingray Traffic Manager, wouldn't that help to simplify your application architecture still further?  This article presents three ways to do just that:

A simple webserver in TrafficScript

TrafficScript can send back web pages without difficulty, using the http.sendResponse() function.  It can load web content directly from Stingray's Resource Directory (the Extra Files catalog).

Here's a simple TrafficScript webserver that intercepts all requests for content under '/static' and attempts to satisfy them with files from the Resource directory:

# We will serve static web pages for all content under this directory
$static = "/static/";

$page = http.getPath();
if( !string.startsWith( $page, $static )) break;

# Look for the file in the resource directory
$file = string.skip( $page, string.length( $static ));

if( resource.exists( $file )) {
   # Page found!
   http.sendResponse( 200, "text/html", resource.get( $file ), "" );
} else {
   # Page not found, send an error back
   http.sendResponse( 404, "text/html", "Not found", "" );
}

Add this file (as a request rule) to your virtual server and upload some files to Catalog > Extra Files.  You can then browse them from Stingray, using URLs beginning /static.

This is a very basic example.  It does not support mime types (it assumes everything is text/html) - check out the Sending custom error pages article for a more sophisticated TrafficScript example that shows you how to host an entire web page (images, css and all) that you can use as an error page if your webservers are down.

The example also does not support directory indices... in fact, because the docroot is the Extra Files catalog, there's no (easy) way to manage a hierarchy of web content.  Wouldn't it be better if you could serve your web content directly from a directory on disk?

A more sophisticated Web Server in Java

The article Serving Web Content from Stingray using Java presents a more sophisticated web server written in Java.  It runs as a Java Extension and can access files in a nominated docroot outside of the Stingray configuration.  It supports mime types and directory indices.

Another webserver - in Python

Stingray can also run application code in Python, by way of the PyRunner.jar: Running Python code in Stingray Traffic Manager implementation that runs Python code on Stingray's local JVM.  The article Serving Web Content from Stingray using Python presents an alternative webserver written in Python.

Optimizing for Performance

Perhaps the most widely used feature in Stingray Traffic Manager, after basic load balancing, health monitoring and the like, is Content Caching.  Content Caching is a very easy and effective way to reduce the overhead of generating web content, whether the content is read from disk or generated by an application.
The content generated by our webserver implementations is fairly 'static' (it does not change), so it's ripe for caching to reduce the load on our webservers.

There's one complication - you can't just turn content caching on and expect it to work in this situation.  That's because content caching hooks into two stages in the transaction lifecycle in Stingray:

Once all response rules have completed, Stingray will inspect the final response and decide whether it can be cached for future reuse
Once all request rules have completed, Stingray will examine the current request and determine if there's a suitable response in the cache

In our webserver implementations, the content was generated during the request processing step and written back to the client using http.sendResponse (or equivalent).  We never run any response rules (so the content cannot be cached), and we never get to the end of the request rules (so we would not check the cache anyway).

The elegant solution is to create a virtual server in Stingray specifically to run the webserver extension.  The primary virtual server can forward traffic to your application servers, or to the internal 'webserver' virtual server as appropriate.  It can then cache the responses (and respond directly to future requests) without any difficulty.

The primary virtual server decides whether to direct traffic to the back-end servers, or to the internal web server, using an appropriate TrafficScript rule:

Create a new HTTP virtual server - this will be used to run the webserver extension.  Configure the virtual server to listen on localhost only, as it need not be reachable from anywhere else.  You can choose an unused high port, such as 8080 perhaps.
Add the appropriate TrafficScript rule to this virtual server to make it run the webserver extension.
Create a pool called 'Internal Web Server' that will direct traffic to this new virtual server.  The pool should contain the node localhost:[port].
Extend the TrafficScript rule on the original virtual server as follows:

# Use the internal web server for static content
if( string.startsWith( http.getPath(), "/static" )) {
   pool.use( "Internal Web Server" );
}

Enable the web cache on the original virtual server.

Now, all traffic goes to the original virtual server as before.  Static pages are directed to the internal web server, and the content from that will be cached.  With this configuration, Stingray should be able to serve web pages as quickly as needed.

Don't forget that all the other Stingray features like content compression, logging, rate-shaping, SSL encryption and so on can all be used with the new internal web server.  You can even use response rules to alter the static pages as they are sent out.

Now, time to throw your web servers away?
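Before you do, one final illustration of the point about response rules above: a minimal sketch of a response rule on the original virtual server that tags the static pages as they are served (the header name is just an example):

# Response rule sketch: tag responses for static content served internally.
# The X-Served-By header name is an arbitrary example.
if( string.startsWith( http.getPath(), "/static" )) {
   http.setResponseHeader( "X-Served-By", "Stingray internal web server" );
}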
View full article
To get started quickly with Python on Stingray Traffic Manager, go straight to PyRunner.jar: Running Python code in Stingray Traffic Manager.  This article dives deep into how that extension was developed.

As is well documented, we support the use of Java extensions to manipulate traffic.  One of the great things about supporting "Java" is that this really means supporting the JVM platform... which, in turn, means we support any language that will run on the JVM and can access the Java libraries.

Java isn't my first choice of languages, especially when it comes to web development.  My language of choice in this sphere is Python, and thanks to the Jython project we can write our extensions in Python!

Jython comes with a servlet named PyServlet which works with Stingray out of the box, so this is a good place to start.  First, let's quickly set up our Jython environment; working Java support is a prerequisite, of course.  (If you're not already familiar with Stingray Java extensions, I recommend reviewing the Java Development Guide available in the Stingray Product Documentation and the Splash guide Writing Java Extensions - an introduction.)

Grab Jython 2.5 (if there is now a more recent version, I expect it will work fine as well)
Jython comes in the form of a .jar installer; on Linux I recommend using the CLI install variant:

java -jar jython_installer-2.5.0.jar -c

Install to a path of your choosing; I've used: /space/jython
Upload /space/jython/jython.jar to your Java Extensions Catalog

In the Java Extensions Catalog you can now see a couple of extensions provided by jython.jar, including org.python.util.PyServlet.  By default this servlet maps URLs ending in .py to local Python files which it will compile and cache.  Set up a test HTTP virtual server (I've created a test one called "Jython" tied to the "discard" pool), and add the following request rule to it:

if (string.endsWith(http.getPath(), ".py")) {
   java.run( "org.python.util.PyServlet" );
}

The rule avoids errors by only invoking the extension for .py files.  (Though if you invoke PyServlet for a .py file that doesn't exist, you will get a nasty NullPointerException stack trace in your event log.)

Next, create a file called Hello.py containing the following code:

from javax.servlet.http import HttpServlet

class Hello(HttpServlet):
    def doGet(self, request, response):
        toClient = response.getWriter()
        response.setContentType ("text/html")
        toClient.println("<html><head><title>Hello from Python</title>" +
            "<body><h1 style='color:green;'>Hello from Python!</h1></body></html>")

Upload Hello.py to your Java Extensions Catalog.  Now if you visit Hello.py at the URL of your virtual server, you should see the message "Hello from Python!"  Hey, we didn't have to compile anything!  Your event log will have some messages about processing .jar files; this only happens on first invocation.  You will also get a warning in your event log every time you visit Hello.py:

WARN java/* servleterror Servlet org.python.util.PyServlet: Unknown attribute (pyservlet)

This is just because Stingray doesn't set up or use this particular attribute.  We'll ignore the warning for now, and get rid of it when we tailor PyServlet to be more convenient for Stingray use.  The servlet will also have created some extra files in your Java Libraries & Data Catalog under a top-level directory WEB-INF.  It is possible to change the location used for these files; we'll get back to that in a moment.
All quite neat so far, but the icing on the cake is yet to come.  If you open up $ZEUSHOME/conf/jars/Hello.py in your favourite text editor, change the message to "Hello <em>again</em> from Python!" and refresh your browser, you'll notice that the new message comes straight through.  This is because the PyServlet code checks the .py file: if it is newer than the cached bytecode, it will re-interpret and re-cache it.  We now have somewhat-more-rapid application development for Stingray extensions, bringing together the excellence of Python and the extensiveness of the Java libraries.

However, what I really want to do is use TrafficScript at the top level to tell PyServlet which Python servlet to run.  This will require a little tweaking of the code.  We want to get rid of the code that resolves the .py file from the request URI and replace it with code that uses an argument passed in by TrafficScript.  While we're at it, we'll make it non-HTTP-specific and add some other small tweaks.  The changes are documented in comments in the code, which is attached to this document.

To compile the Stingray version of the servlet and pop the two classes into a single convenient .jar file, execute the following commands (adjusting paths to suit your environment if necessary):

$ javac -cp $ZEUSHOME/zxtm/lib/servlet.jar:/space/jython/jython.jar ZeusPyServlet.java
$ jar -cvf ZeusPyServlet.jar ZeusPyServlet.class ZeusPyServletCacheEntry.class

Upload the ZeusPyServlet.jar file to your Java Extensions Catalog.  You should now have a ZeusPyServlet extension available.  Change your TrafficScript rule to load this new extension and provide an appropriate argument:

if (string.endsWith(http.getPath(), ".py")) {
   java.run( "ZeusPyServlet", "Hello.py" );
}

Now visiting Hello.py works just as before.  In fact, visiting any URL that ends in .py will now generate the same result as visiting Hello.py.  We have complete control over what Python code is executed from our TrafficScript rule, which is much more convenient.

If you continue hacking from this point you'll soon find that we're missing core parts of Python with the setup described so far.  For example, adding import md5 to your servlet code will break the servlet, and you'd see this in your Stingray Event Log:

WARN servlets/ZeusPyServlet Servlet threw exception javax.servlet.ServletException:
           Exception during init of /opt/zeus/zws/zxtm/conf/jars/ServerStatus.py
WARN  Java: Traceback (most recent call last):
WARN  Java:    File "/opt/zeus/zws/zxtm/conf/jars/ServerStatus.py", line 2, in <module>
WARN  Java:      import md5
WARN  Java: ImportError: No module named md5

This is because the class files for the core Python libraries are not included in jython.jar.  To get a fully functioning Jython we need to tell ZeusPyServlet where Jython is installed.  To do this you must have Jython installed on the same machine as the Stingray software, and then you just have to set a configuration parameter for the servlet.  In summary:

Install Jython on your Stingray machine; I've installed mine to /space/jython
In Catalogs > Java > ZeusPyServlet add some parameters:
Parameter: python_home, Value: /space/jython (or wherever you have installed Jython)
Parameter: debug, Value: none required (this is optional; it will turn on some potentially useful debug messages)
Back in Catalogs > Java you can now delete all the WEB-INF files - now that Jython knows where it is installed, it doesn't need them
Go to System > Traffic Managers and click the 'Restart Java Runner ...' button, then confirm the restart (this ensures no bad state is cached)

Now your Jython should be fully functional.  Here's a script for you to try that uses MD5 functionality from both Python and Java.  Just replace the content of Hello.py with the following code.

from javax.servlet.http import HttpServlet
from java.security import MessageDigest
from md5 import md5

class Hello(HttpServlet):
    def doGet(self, request, response):
        toClient = response.getWriter()
        response.setContentType ("text/html")
        htmlOut = "<html><head><title>Hello from Python</title><body>"
        htmlOut += "<h1>Hello from Python!</h1>"
        # try a Python md5
        htmlOut += "<h2>Python MD5 of 'foo': %s</h2>" % md5("foo").hexdigest()
        # try a Java md5
        htmlOut += "<h2>Java MD5 of 'foo': "
        jmd5 = MessageDigest.getInstance("MD5")
        digest = jmd5.digest("foo")
        for byte in digest:
            htmlOut += "%02x" % (byte & 0xFF)
        htmlOut += "</h2>"
        # yes, the Stingray attributes are available
        htmlOut += "<h2>VS: %s</h2>" % request.getAttribute("virtualserver")
        htmlOut += "</body></html>"
        toClient.println(htmlOut)

An important point to realise about Jython is that beyond the usual core Python APIs, you cannot expect all the 3rd-party Python libraries out there to "just work".  Non-core Python modules compiled from C (and any modules that depend on such modules) are the main issue here.  For example, the popular Numeric package will not work with Jython.  Not to worry though: there are often pure-Python alternatives.  Don't forget that you have all the Java libraries available too, and even special Java libraries designed to extend Python-like APIs to Jython, such as JNumeric, a Jython equivalent to Numeric.  There's more information on the Jython wiki; I recommend reading through all the FAQs as a starting point.  It is perhaps best to think of Jython as a language which gives you the neatness of Python syntax and the Python core with the utility of the massive collection of Java APIs out there.
View full article
For a comprehensive description of how this Stingray Java Extension operates, check out Yvan Seth's excellent article Making Stingray more RAD with Jython!

Overview

Stingray can invoke TrafficScript rules (see Feature Brief: TrafficScript) against requests and responses, and these rules run directly in the traffic manager kernel as high-performance bytecode.

A TrafficScript rule can also reach out to the local JVM to run servlets (Feature Brief: Java Extensions in Stingray Traffic Manager), and the PyRunner.jar library uses the JVM to run Python code against network traffic.  This is a great solution if you need to deploy complex traffic management policies and your development expertise lies with Python.

Requirements

Download and install Jython (http://www.jython.org/downloads.html).  This code was developed against Jython 2.5.3, but should run against other Jython versions.  For best compatibility across platforms, use the Jython installer from www.jython.org rather than the jython packages distributed by your OS vendor:

$ java -jar jython_installer-2.5.2.jar --console

Select installation option 1 (all components) or explicitly include the src part - this installs additional modules in extlibs that we will use later.

Locate the jython.jar file included in the install and upload this file to your Stingray Java Extensions catalog.

Download the PyRunner.jar file attached to this document and upload that to your Java Extensions catalog.  Alternatively, you can compile the Jar file from source:

$ javac -cp servlet.jar:zxtm-servlet.jar:jython.jar PyRunner.java
$ jar -cvf PyRunner.jar PyRunner*.class

You can now run simple Python applications directly from TrafficScript!

A simple 'HelloWorld' example

Save the following Python code as Hello.py and upload the file to your Catalog > Extra Files catalog:

from javax.servlet.http import HttpServlet
import time

class Hello(HttpServlet):
    def doGet(self, request, response):
        toClient = response.getWriter()
        response.setContentType ("text/html")
        toClient.println("<html><head><title>Hello World</title>" +
            "<body><h1 style='color:red;'>Hello World</h1>" +
            "The current time is " + time.strftime('%X %x %Z') +
            "</body></html>")

Assign the following TrafficScript request rule to your Virtual Server:

java.run( "PyRunner", "Hello.py" );

Now, whenever the TrafficScript rule is called, it will run the Hello.py code.  The PyRunner extension loads and compiles the Python code, and caches the compiled bytecode to optimize performance.

More sophisticated Python examples

The PyRunner.jar/jython.jar combination is capable of running simple Python examples, but it does not have access to the full set of Python core libraries.  These are to be found in additional jar files in the extlibs part of the Jython installation.

If you install Jython on the same machine you are running the Stingray software on, then you can point PyRunner.jar at that location:

Install Jython in a known location, such as /usr/local/jython - make sure to install all components (option 1 in the installation types) or explicitly add the src part
Navigate to Catalogs > Java > PyRunner and add a parameter named python_home, set to /usr/local/jython (or other location as appropriate)
In Catalogs > Java, delete the WEB-INF files generated previously - they won't be required any more
From the System > Traffic Managers page, restart your Java runner.
You can install Jython in this way on the Stingray Virtual Appliance, but please be aware that the installation will not be preserved during a major upgrade, and it will not form part of the supported configuration of the virtual appliance.

Here's an updated version of Hello.py that uses the Python and Java md5 implementations to compare md5s for the string 'foo' (they should give the same result!):

from javax.servlet.http import HttpServlet
from java.security import MessageDigest
from md5 import md5
import time

class Hello(HttpServlet):
    def doGet(self, request, response):
        toClient = response.getWriter()
        response.setContentType ("text/html")
        htmlOut = "<html><head><title>Hello World</title><body>"
        htmlOut += "<h1>Hello World</h1>"
        htmlOut += "The current time is " + time.strftime('%X %x %Z') + "<br/>"
        # try a Python md5
        htmlOut += "Python MD5 of 'foo': %s<br/>" % md5("foo").hexdigest()
        # try a Java md5
        htmlOut += "Java MD5 of 'foo': "
        jmd5 = MessageDigest.getInstance("MD5")
        digest = jmd5.digest("foo")
        for byte in digest:
            htmlOut += "%02x" % (byte & 0xFF)
        htmlOut += "<br/>"
        # yes, the Stingray attributes are available
        htmlOut += "Virtual Server: %s<br/>" % request.getAttribute("virtualserver")
        # 'args' is the parameter list for java.run(), beginning with the script name
        htmlOut += "Args: %s<br/>" % ", ".join(request.getAttribute("args"))
        htmlOut += "</body></html>"
        toClient.println(htmlOut)

Upload this file to your Extra Files catalog to replace the existing Hello.py script and try it out.

Rapid test and development

Check out publish.py - a simple Python script that automates the task of uploading your Python code to the Extra Files catalog: Deploying Python code to Stingray Traffic Manager
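Since the 'args' attribute shown above is simply the parameter list passed to java.run(), you can hand extra values from TrafficScript to your Python code.  A minimal sketch - the extra arguments here are arbitrary examples:

# Pass additional arguments to the Python script via java.run();
# they appear in the 'args' request attribute, after the script name.
java.run( "PyRunner", "Hello.py", "first-extra-arg", "second-extra-arg" );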
View full article