Pulse Secure vADC

The VMware Horizon Mirage Load Balancing Solution Guide describes how to configure Riverbed SteelApp to load balance VMware Horizon Mirage servers. VMware® Horizon Mirage™ provides unified image management for physical desktops, virtual desktops and BYOD.
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Magento.
1. The Issue

When using perpetual licensing on a Traffic Manager, throughput is restricted to the limit specified in the license. If that limit is reached, traffic is queued, and in extreme situations - if throughput rises far above the expected level - some traffic may be dropped.

2. The Solution

Automatically increase the allocated bandwidth for the Traffic Manager.

3. A Brief Overview of the Solution

An SSC holds the licensed bandwidth configuration for the Traffic Manager instance. The Traffic Manager is configured to execute a script when an event is raised - the bwlimited event. The script makes REST calls to the SSC to obtain the Traffic Manager's bandwidth allocation and then increment it if necessary. I have written the script used here to only increment the allocation if the result is 5Mbps or under, but this restriction could be removed if it's not required. The idea is to allow the Traffic Manager to increment its allocation, while only letting it take a certain maximum amount of bandwidth from the SSC bandwidth "bucket".

4. The Solution in a Little More Detail

4.1. Move to an SSC Licensing Model

If you're currently running Traffic Managers with perpetual licenses, you'll need to move from the perpetual licensing model to the SSC licensing model. This allows you to allocate bandwidth and features across multiple Traffic Managers within your estate. The SSC has a "bucket" of bandwidth, along with configured feature sets, which can be allocated and distributed across the estate as required. This allows right-sizing of instances and features, and also allows multi-tenant access to various instances throughout the organisation.
Instance Hosts and Instance resources are configured on the SSC; a Flexible License is then uploaded to each Traffic Manager instance you wish to be licensed by the SSC, and those instances "call home" to the SSC regularly to assess their licensing state and obtain their feature set. For more information on SSC, visit the Riverbed website pages covering this product: SteelCentral Services Controller for SteelApp Software. There is also a brochure attached to this article which covers the basics of the SSC.

4.2. Traffic Manager Configuration and a Bit of Bash Scripting

The SSC has a REST API that can be accessed from any external platform able to send and receive REST calls, including the Traffic Manager itself. To carry out the automated bandwidth allocation increase on the Traffic Manager, we need to:

a. Create a script, executed on the Traffic Manager, which issues REST calls to change the SSC configuration for the instance when a bandwidth limitation event fires.
b. Upload the script to the Traffic Manager.
c. Create a new event and action on the Traffic Manager, initiated when the bandwidth limitation is hit, which calls the script from point a.

4.2.a. The Script to Increment the Traffic Manager Bandwidth Allocation

This script, called SSC_Bandwidth_Increment and attached, is shown below.

Script function:

- Obtain the Traffic Manager instance configuration from the SSC.
- Extract the current bandwidth allocation for the instance from the information obtained.
- If the current bandwidth is less than 5Mbps, increment the allocation by 1Mbps and issue a REST call to the SSC to update the instance configuration. If the bandwidth is already 5Mbps, do nothing, as we've hit the limit for this particular Traffic Manager instance.
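The extraction step - pulling the bandwidth figure out of the JSON returned by the SSC - can be sketched and tested in isolation before wiring it into the full script. The payload below is a hypothetical, trimmed-down instance document; check the real response shape against your own SSC:

```shell
# extract_bandwidth: read an SSC instance JSON document on stdin and
# print the numeric "bandwidth" field. Relies only on sed, as the full
# script does; a JSON-aware tool such as jq would be more robust.
extract_bandwidth() {
    sed -n 's/.*"bandwidth":[[:space:]]*\([0-9][0-9]*\).*/\1/p'
}

# Hypothetical trimmed-down instance document:
sample='{"name": "demo-1.example.com-00002", "bandwidth": 4, "status": "Running"}'
echo "$sample" | extract_bandwidth    # prints: 4
```

Because the field is pulled out with a plain text substitution, reordering of fields in the response does not matter, but a second "bandwidth" key anywhere in the document would.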
#!/bin/bash
#
# Bandwidth_Increment
# -------------------
# Called on event: bwlimited

# Request the current instance information
requested_instance_info=$(curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" \
    -X GET -u adminuser:adminpassword https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com-00002)

# Extract the current bandwidth figure for the instance
current_instance_bandwidth=$(echo $requested_instance_info | sed -e 's/.*"bandwidth": \(\S*\).*/\1/g' | tr -d \,)

# Add 1 to the original bandwidth figure, imposing a 5Mbps cap on this instance
if [ $current_instance_bandwidth -lt 5 ]
then
    new_instance_bandwidth=$(expr $current_instance_bandwidth + 1)

    # Set the instance bandwidth figure to the new value (original + 1)
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":'"${new_instance_bandwidth}"'}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com-00002
fi

Some parts of the script will obviously need to be changed to fit your own environment: the admin username and password in the REST calls, and the SSC name, port and path used in the curl statements. Hopefully this shows just how easy the process is, and how the SSC can be driven to hold the configuration you require.

This script can be considered a skeleton which you can adapt to carry out whatever configuration is required on the SSC for a particular Traffic Manager. Events and actions can be set up on the Traffic Manager to execute scripts which access the SSC and make whatever changes your logic requires.

4.2.b. Upload the Bash Script to be Used

On the Traffic Manager, upload the bash script that will be needed for the solution to work.
The script is uploaded in the Catalogs > Extra Files > Action Programs section of the Traffic Manager, and can then be referenced from the Actions when they are created later.

4.2.c. Create a New Event and Action for the Bandwidth Limitation Hit

On the Traffic Manager, create a new event type as shown in the screenshot below - I've created Bandwidth_Increment, but this event could be called anything relevant. The important factor is that the event is raised from the bwlimited event.

Once this event has been created, an action must be associated with it. Create a new external program action as shown in the screenshot below - I've created one called Bandwidth_Increment, but again this could be called anything relevant. The important factor for the action is that it's an external program action and that it calls the correct bash script, in my case SSC_Bandwidth_Increment.

5. Testing

To test the solution, set the initial bandwidth for the Traffic Manager instance to 1Mbps on the SSC. Generate enough traffic through a service on the Traffic Manager to hold it at its 1Mbps limit for a sustained period. This will cause the bwlimited event to fire and the Bandwidth_Increment action to execute, running the SSC_Bandwidth_Increment script. The script will increment the Traffic Manager bandwidth by 1Mbps; check and confirm this on the SSC. Once confirmed, stop the traffic generation.

Note: as the Flexible License on the Traffic Manager polls the SSC every 3 minutes for an update on its licensed state, you may not see an immediate change to the Traffic Manager's bandwidth allocation.
You can force the Traffic Manager to poll the SSC by removing the Flexible License and re-adding it - re-configuring the Flexible License forces the Traffic Manager to re-poll the SSC, after which you should see the updated bandwidth in the System > Licenses page of the Traffic Manager (after expanding the license information), as shown in the screenshot below.

6. Summary

Please feel free to use the information contained within this post to experiment. If you do not yet have an SSC deployment, an evaluation can be arranged by contacting your partner or Riverbed sales representative, who will be able to arrange the evaluation and support you if required.
1. The Issue

When using perpetual licensing on clustered Traffic Manager instances, the failure of one instance means its licensed throughput capability is lost until that instance is recovered.

2. The Solution

Automatically adjust the bandwidth allocation across cluster members so that otherwise wasted or unused bandwidth is used effectively.

3. A Brief Overview of the Solution

An SSC holds the configuration for the Traffic Manager cluster members. The Traffic Managers are configured to execute scripts when two events are raised: the machinetimeout event and the allmachinesok event. Those scripts make REST calls to the SSC to dynamically and automatically amend the Traffic Manager instance configuration held for the two cluster members.

4. The Solution in a Little More Detail

4.1. Move to an SSC Licensing Model

If you're currently running Traffic Managers with perpetual licenses, you'll need to move from the perpetual licensing model to the SSC licensing model. This allows you to allocate bandwidth and features across multiple Traffic Managers within your estate. The SSC has a "bucket" of bandwidth, along with configured feature sets, which can be allocated and distributed across the estate as required. This allows right-sizing of instances and features, and also allows multi-tenant access to various instances throughout the organisation.

Instance Hosts and Instance resources are configured on the SSC; a Flexible License is then uploaded to each Traffic Manager instance you wish to be licensed by the SSC, and those instances "call home" to the SSC regularly to assess their licensing state and obtain their feature set. For more information on SSC, visit the Riverbed website pages covering this product: SteelCentral Services Controller for SteelApp Software. There is also a brochure attached to this article which covers the basics of the SSC.

4.2.
Traffic Manager Configuration and a Bit of Bash Scripting

The SSC has a REST API that can be accessed from any external platform able to send and receive REST calls, including the Traffic Manager itself. To carry out automated bandwidth allocation on cluster members, we need to:

a. Create a script, executed on the Traffic Manager, which issues REST calls to change the SSC configuration for the cluster members when a cluster member fails.
b. Create another script, executed on the Traffic Manager, which issues REST calls to reset the SSC configuration for the cluster members when all of the cluster members are up and operational.
c. Upload the two scripts to the Traffic Manager cluster.
d. Create a new event and action on the Traffic Manager cluster, initiated when a cluster member fails, which calls the script from point a.
e. Create a new event and action on the Traffic Manager cluster, initiated when all of the cluster members are up and operational, which calls the script from point b.

4.2.a. The Script to Re-allocate Bandwidth After a Cluster Member Failure

This script, called Cluster_Member_Fail_Bandwidth_Allocation and attached, is shown below.

Script function:

- Determine which cluster member has executed the script.
- Make REST calls to the SSC to allocate bandwidth according to which cluster member is up and which is down.
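The scripts in steps (a) and (b) above differ only in which instance gets which bandwidth figure, so both can be built from one shared helper plus a hostname dispatch. This is a sketch, not the attached scripts: the host names, credentials and SSC address are the demo values used in this article, and the SSC_DRY_RUN switch (which prints the request instead of sending it) is our own addition so the logic can be exercised without an SSC:

```shell
#!/bin/bash
# Sketch of a shared helper for the two event scripts. Assumptions:
# demo host names, adminuser:adminpassword credentials and
# ssc.example.com:8000, as elsewhere in this article.

SSC_HOST="${SSC_HOST:-ssc.example.com:8000}"
SSC_AUTH="${SSC_AUTH:-adminuser:adminpassword}"

# set_instance_bandwidth <instance-name> <mbps>
set_instance_bandwidth() {
    url="https://${SSC_HOST}/api/tmcm/1.1/instance/$1"
    if [ "${SSC_DRY_RUN:-0}" = "1" ]; then
        # Dry-run mode: just report what would be sent
        echo "would set $1 bandwidth=$2 via $url"
        return 0
    fi
    curl -k --basic -H "Content-Type: application/json" \
         -H "Accept: application/json" -u "$SSC_AUTH" \
         -d '{"bandwidth":'"$2"'}' "$url"
}

# on_member_failure <surviving-hostname>
# The surviving member gets 999Mbps; its peer gets 1Mbps.
on_member_failure() {
    case "$1" in
        demo-1.example.com)
            set_instance_bandwidth demo-1.example.com 999
            set_instance_bandwidth demo-2.example.com 1 ;;
        demo-2.example.com)
            set_instance_bandwidth demo-2.example.com 999
            set_instance_bandwidth demo-1.example.com 1 ;;
    esac
}

# Dry-run demonstration, as if demo-2 had survived a failure of demo-1:
SSC_DRY_RUN=1 on_member_failure demo-2.example.com
```

Keeping the hostname as a function argument, rather than calling `hostname -f` inline, is what makes the dispatch testable off the cluster.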
#!/bin/bash
#
# Cluster_Member_Fail_Bandwidth_Allocation
# ----------------------------------------
# Called on event: machinetimeout
#
# Checks which host calls this script and assigns bandwidth in SSC accordingly
# If demo-1 makes the call, then demo-1 gets 999 and demo-2 gets 1
# If demo-2 makes the call, then demo-2 gets 999 and demo-1 gets 1
#

# Grab the hostname of the executing host
Calling_Hostname=$(hostname -f)

# If demo-1.example.com is executing then issue REST calls accordingly
if [ $Calling_Hostname == "demo-1.example.com" ]
then
    # Set the demo-1.example.com instance bandwidth figure to 999 and
    # demo-2.example.com instance bandwidth figure to 1
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":999}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":1}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com
fi

# If demo-2.example.com is executing then issue REST calls accordingly
if [ $Calling_Hostname == "demo-2.example.com" ]
then
    # Set the demo-2.example.com instance bandwidth figure to 999 and
    # demo-1.example.com instance bandwidth figure to 1
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":999}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":1}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com
fi

Some parts of the script will need to be changed to fit your own environment: the hostname validation, the admin username and password in the REST calls, and the SSC name, port and path used in the curl statements. Hopefully this shows just how easy the process is, and how the SSC can be driven to hold the configuration you require. This script can be considered a skeleton, as can the other script for resetting the bandwidth, shown next.

4.2.b. The Script to Reset the Bandwidth

This script, called Cluster_Member_All_Machines_OK and attached, is shown below.

#!/bin/bash
#
# Cluster_Member_All_Machines_OK
# ------------------------------
# Called on event: allmachinesok
#
# Resets bandwidth for demo-1.example.com and demo-2.example.com - both get 500
#

# Set both demo-1.example.com and demo-2.example.com bandwidth figures to 500
curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
    '{"bandwidth":500}' \
    https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com-00002
curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
    '{"bandwidth":500}' \
    https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com-00002

Again, some parts of the script will need to be changed to fit your own environment: the admin username and password in the REST calls, and the SSC name, port and path used in the curl statements.

4.2.c.
Upload the Bash Scripts to be Used

On one of the Traffic Managers, upload the two bash scripts that will be needed for the solution to work. The scripts are uploaded in the Catalogs > Extra Files > Action Programs section of the Traffic Manager, and can then be referenced from the Actions when they are created later.

4.2.d. Create a New Event and Action for a Cluster Member Failure

On any one of the cluster members, create a new event type as shown in the screenshot below - I've created Cluster_Member_Down, but this event could be called anything relevant. The important factor is that the event is raised from the machinetimeout event.

Once this event has been created, an action must be associated with it. Create a new external program action as shown in the screenshot below - I've created one called Cluster_Member_Down, but again this could be called anything relevant. The important factor for the action is that it's an external program action and that it calls the correct bash script, in my case Cluster_Member_Fail_Bandwidth_Allocation.

4.2.e. Create a New Event and Action for All Cluster Members OK

On any one of the cluster members, create a new event type as shown in the screenshot below - I've created All_Cluster_Members_OK, but this event could be called anything relevant. The important factor is that the event is raised from the allmachinesok event.

Once this event has been created, an action must be associated with it. Create a new external program action as shown in the screenshot below - I've created one called All_Cluster_Members_OK, but again this could be called anything relevant. The important factor for the action is that it's an external program action and that it calls the correct bash script, in my case Cluster_Member_All_Machines_OK.

5. Testing

To test the solution, simply take down Traffic Manager A of an A/B cluster.
Traffic Manager B should raise the machinetimeout event, which will in turn execute the Cluster_Member_Down event and its associated action and script, Cluster_Member_Fail_Bandwidth_Allocation. The script should allocate 999Mbps to Traffic Manager B and 1Mbps to Traffic Manager A within the SSC configuration.

As the Flexible License on each Traffic Manager polls the SSC every 3 minutes for an update on its licensed state, you may not see an immediate change to the bandwidth allocation of the Traffic Managers in question. You can force a Traffic Manager to poll the SSC by removing the Flexible License and re-adding it - re-configuring the Flexible License forces the Traffic Manager to re-poll the SSC, after which you should see the updated bandwidth in the System > Licenses page (after expanding the license information), as shown in the screenshot below.

To test the resetting of the bandwidth allocation for the cluster, simply bring Traffic Manager A back up. Once Traffic Manager A re-joins the cluster communications, the allmachinesok event will be raised, which will execute the All_Cluster_Members_OK event and its associated action and script, Cluster_Member_All_Machines_OK. The script should allocate 500Mbps to Traffic Manager B and 500Mbps to Traffic Manager A within the SSC configuration.

Just as before, the Flexible License polls the SSC every 3 minutes for an update on its licensed state, so you may not see an immediate change to the bandwidth allocation of the Traffic Managers in question.
You can force the Traffic Manager to poll the SSC once again by removing the Flexible License and re-adding it - you should then see the updated bandwidth in the System > Licenses page of the Traffic Manager (after expanding the license information), as before and as shown above.

6. Summary

Please feel free to use the information contained within this post to experiment. If you do not yet have an SSC deployment, an evaluation can be arranged by contacting your partner or Brocade sales representative, who will be able to arrange the evaluation and support you if required.
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for SAP NetWeaver. It has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.
Installation

1. Unzip the download (Stingray Traffic Manager Cacti Templates.zip).
2. Via the Cacti UI, use "Import Templates" to import the Data, Host, and Graph templates. (The included graph templates are not required for functionality.)
3. Copy the files from the Cacti folder in the zip file to their corresponding directory in your Cacti install:
   - Stingray Global Values script query: /cacti/site/scripts/stingray_globals.pl
   - Stingray Virtual Server Table SNMP query: cacti/resource/snmp_queries/stingray_vservers.xml
4. Assign the host template to your Traffic Manager(s) and create new graphs.

Note: due to the method Cacti uses for creating graphs and the related RRD files, I recommend NOT creating all graphs via the device page. If you create all the graphs via the "Create Graphs for this Host" link on the device page, Cacti will create an individual data source (an RRD file and SNMP query for each graph), resulting in a significant amount of wasted Cacti and device resources; you can test this yourself with the Stingray SNMP graph. My recommendation is to create a single initial graph for each Data Query or Data Input method (i.e. one for Virtual Servers and one for Global Values) and add any additional graphs via Cacti's Graph Management using the existing data source drop-downs.
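The file-copy step above can be wrapped in a small helper so it is repeatable across Cacti upgrades. The function name is ours, and the target paths are the ones listed in the installation steps; adjust them for your own layout:

```shell
# install_stingray_queries <unpacked-zip-dir> <cacti-root>
# Copies the two Stingray query files into the expected places in the
# Cacti tree (scripts/ and resource/snmp_queries/), creating the
# directories if they do not already exist.
install_stingray_queries() {
    zip_dir="$1"; cacti_root="$2"
    mkdir -p "$cacti_root/scripts" "$cacti_root/resource/snmp_queries"
    cp "$zip_dir/scripts/stingray_globals.pl" "$cacti_root/scripts/"
    cp "$zip_dir/resource/snmp_queries/stingray_vservers.xml" \
       "$cacti_root/resource/snmp_queries/"
}

# Typical invocation (the Cacti root path is an assumption):
# install_stingray_queries ./cacti /var/www/cacti
```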
Data Queries

- Stingray Global Values script query (/cacti/site/scripts/stingray_globals.pl): a Perl script to query the STM for most of the sys.globals values.
- Stingray Virtual Server Table SNMP query (cacti/resource/snmp_queries/stingray_vservers.xml): a Cacti XML SNMP query for the Virtual Servers Table MIB.

Graph Templates

Stingray_-_global_-_cpu.xml
Stingray_-_global_-_dns_lookups.xml
Stingray_-_global_-_dns_traffic.xml
Stingray_-_global_-_memory.xml
Stingray_-_global_-_snmp.xml
Stingray_-_global_-_ssl_-_client_cert.xml
Stingray_-_global_-_ssl_-_decryption_cipher.xml
Stingray_-_global_-_ssl_-_handshakes.xml
Stingray_-_global_-_ssl_-_session_id.xml
Stingray_-_global_-_ssl_-_throughput.xml
Stingray_-_global_-_swap_memory.xml
Stingray_-_global_-_system_-_misc.xml
Stingray_-_global_-_traffic_-_misc.xml
Stingray_-_global_-_traffic_-_tcp.xml
Stingray_-_global_-_traffic_-_throughput.xml
Stingray_-_global_-_traffic_script_data_usage.xml
Stingray_-_virtual_server_-_total_timeouts.xml
Stingray_-_virtual_server_-_connections.xml
Stingray_-_virtual_server_-_timeouts.xml
Stingray_-_virtual_server_-_traffic.xml

Sample Graphs

Compatibility

This template has been tested with STM 9.4 and Cacti 0.8.8a.

Known Issues

Cacti will create unnecessary queries and data files if the "Create Graphs for this Host" link on the device page is used. See the installation notes for a workaround.

Conclusion

Cacti is sufficient for providing SNMP-based RRD graphs, but it is limited in the information available, analytics, correlation, scale, stability and support. This is not just a shameless plug: Brocade offers a much more robust set of monitoring and performance tools.
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft SharePoint 2013.
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft Lync 2013.
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft Exchange 2013.
An interesting use case cropped up recently - one of our users wanted to do some smarts with the login credentials of an FTP session. This article steps through a few sample FTP rules and explains how to manage this sort of traffic.

Before you begin

Make sure you have a suitable FTP client. The command-line ftp tool shipped with most Unix-like systems supports a -d flag that reports the underlying FTP messages, so it's great for this exercise.

Pick a target FTP server. I tested against ftp.riverbed.com and ftp.debian.org, but other FTP servers may differ for subtle reasons.

Review the FTP protocol specification - it's sufficient to know that it uses a single TCP control channel, requests are of the form 'VERB[ parameter]\r\n' and responses are of the form 'CODE message\r\n'. Multi-line responses are accepted; all but the last line of the response include an additional hyphen ('CODE-message\r\n').

Create your FTP virtual server

Use the 'Add a new service' wizard to create your FTP virtual server. Just for fun, add a server banner (Virtual Server > Connection Management > FTP-Specific Settings). Verify that you can log in to your FTP server through Stingray, and that the banner is rewritten. Now we're good to go!

Intercepting login credentials

We want to intercept FTP login attempts and change all logins to 'anonymous'. If a user logs in with 'username:password', we're going to convert that to 'anonymous:username' and discard the password. Create the following request rule and assign it to the FTP virtual server:

log.info( "Received connection: state is '" . connection.data.get( "state" ) . "'" );

if( connection.data.get( "state" ) == "" ) {
   # This is server-first, so we have no data on the first connect
   connection.data.set( "state", "connected" );
   break;
}

if( connection.data.get( "state" ) == "connected" ) {
   # Get the request line
   $req = string.trim( request.endswith( "\n" ) );
   log.info( " ... got request '" . $req . "'" );

   if( string.regexmatch( $req, "USER (.*)" ) ) {
      connection.data.set( "user", $1 );
      # Translate this to an anonymous login
      log.info( " ... rewriting request to 'USER anonymous'" );
      request.set( "USER anonymous\r\n" );
   }

   if( string.regexmatch( $req, "PASS (.*)" ) ) {
      $pass = $1;
      connection.data.set( "pass", $pass );
      $user = connection.data.get( "user" );
      # Set the appropriate password
      log.info( " ... rewriting request to 'PASS ".$user."'" );
      request.set( "PASS ".$user."\r\n" );
   }
}

Now, if you log in with your email address (for example) and a password, the rule will switch your login to an anonymous one and will log the result.

Authenticating the user's credentials

You can extend this rule to authenticate the credentials that the user provided. At the point in the rule where you have the username and password, you can call a Stingray authenticator, a Java Extension, or look the user up in a table of data (see libTable.rts: Interrogating tables of data in TrafficScript) from your TrafficScript rule:

# AD authentication
$ldap = auth.query( "AD Auth", $user, $pass );
if( $ldap['Error'] ) {
   log.error( "Error with authenticator 'AD Auth': " . $ldap['Error'] );
   connection.discard();
} else if( !$ldap['OK'] ) {
   log.info( "User not authenticated. Username and/or password incorrect" );
   connection.discard();
}
When Stingray load-balances a connection to an iPlanet/SunONE/Sun Java System Web Server or application, the connection appears to originate from the Stingray machine. This can be a problem if the server wishes to perform access control based on the client's IP address, or if it wants to log the true source address of the request; this is well documented in the article IP Transparency: Preserving the Client IP address in Stingray Traffic Manager.

Stingray has an IP Transparency feature that preserves the client's IP address, but this requires a kernel module (the Stingray Kernel Modules for Linux Software, pre-installed on Stingray Virtual Appliances and available separately for Stingray software) and is currently only available under Linux. As an alternative, the mod_remoteip module is a good solution for Apache; this article presents a similar module for iPlanet and related web servers.

How it works

Stingray automatically inserts a special X-Cluster-Client-Ip header into each request, which identifies the true source address of the request. The iPlanet/Sun NSAPI module inspects this header and corrects the calculation of the source address. This change is transparent to the web server and to any applications running on or behind the web server.

Obtaining the Module

Compile the module from source: https://gist.github.com/5546803

To determine the appropriate compilation steps for an NSAPI module for your instance of iPlanet, you can first build the NSAPI examples in your SunONE installation:

$ cd plugins/nsapi/examples/
$ make
cc -DNET_SSL -DSOLARIS -D_REENTRANT -DMCC_HTTPD -DXP_UNIX -DSPAPI20 \
   -I../../include -I../../include/base -I../../include/frame -c addlog.c
ld -G addlog.o -o example.so

You can build the iprewrite.so module using similar options.
Set NSHOME to the installation location for iPlanet:

$ export NSHOME=/opt/iplanet
$ cc -DNET_SSL -DSOLARIS -D_REENTRANT -DMCC_HTTPD -DXP_UNIX -DSPAPI20 \
   -I$NSHOME/plugins/include -I$NSHOME/plugins/include/base \
   -I$NSHOME/plugins/include/frame -c iprewrite.c
$ ld -G iprewrite.o -o iprewrite.so
$ cp iprewrite.so $NSHOME/plugins

Configuring the Module

To configure the module, you will need to edit the magnus.conf and obj.conf files for the virtual server you are using. If the virtual server is named 'test', you'll find these files in the https-test/config directory.

magnus.conf

Add the following lines to the end of the magnus.conf file. Ensure that the shlib option identifies the full path to the iprewrite.so module, and that you set TrustedIPs to either '*' or the list of Stingray back-end IP addresses:

Init fn="load-modules" funcs="iprewrite-init,iprewrite-all,iprewrite-func" \
     shlib="/usr/local/iplanet/plugins/iprewrite.so"
Init fn="iprewrite-init" TrustedIPs="10.100.1.68 10.100.1.69"

The TrustedIPs option specifies the back-end addresses of the Stingray machines. The iprewrite.so module will only trust the X-Cluster-Client-Ip header in connections which originate from these IP addresses. This means that remote users cannot spoof their source addresses by inserting a false header and accessing the iPlanet/Sun servers directly.

obj.conf

Locate the 'default' object in your obj.conf file and add the following line at the start of the directives inside that object:

<Object name=default>
AuthTrans fn="iprewrite-all"
...

Restart your iPlanet/Sun servers, and monitor your servers' error logs (https-name/log/errors).

The Result

iPlanet/Sun, and applications running on the server, will see the correct source IP address for each request. The access log module will log the correct address when you use %a or %h in your log format string.
If you have misconfigured the TrustedIPs value, you will see messages like:

Ignoring X-Cluster-Client-Ip '204.17.28.130' from non-Load Balancer machine '10.100.1.31'

Add the IP address to the trusted IP list and restart.

Alternate Configuration

The 'iprewrite-all' SAF function changes the IP address for the entire duration of the connection. This may be too invasive for some environments, and it's possible that a later SAF function may modify the IP address again. You can use the 'iprewrite-func' SAF function to change the IP address for a single NSAPI function. For example, BEA's NSAPI WebLogic connector ('wl_proxy') is normally configured as follows:

<Object name="weblogic" ppath="/weblogic/">
Service fn=wl_proxy WebLogicHost=localhost WebLogicPort=7001 PathTrim="/weblogic"
</Object>

You can change the IP address just for that function call, using the iprewrite-func SAF function as follows:

<Object name="weblogic" ppath="/weblogic/">
Service fn=iprewrite-func func=wl_proxy WebLogicHost=localhost WebLogicPort=7001 PathTrim="/weblogic"
</Object>
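The module's spoofing protection boils down to a membership test on the connection's source address. A shell sketch of that decision (the function name is ours; the addresses are the demo values from the magnus.conf example above):

```shell
# honour_header <peer-ip> <trusted-ips...>
# Succeeds (exit 0) only when the connection's source address is in
# the trusted list, i.e. when X-Cluster-Client-Ip should be believed.
# A trusted entry of '*' trusts everyone, as the TrustedIPs option does.
honour_header() {
    peer="$1"; shift
    for t in "$@"; do
        [ "$t" = "*" ] && return 0
        [ "$t" = "$peer" ] && return 0
    done
    return 1
}

if honour_header 10.100.1.68 10.100.1.68 10.100.1.69; then
    echo "trusted"
fi
# prints: trusted
```

A connection from any other address, such as the 10.100.1.31 in the log message above, fails the test, so its X-Cluster-Client-Ip header is ignored.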
View full article
This document provides step-by-step instructions on migrating a Cisco ACE configuration to Stingray Traffic Manager.
View full article
Imagine you're running a popular image hosting site, and you're concerned that some users are downloading images too rapidly.  Or perhaps your site publishes airfares, or gaming odds, or auction prices, or real estate details, and screen-scraping software is spidering your site and overloading your application servers.  Wouldn't it be great if you could identify the users who are abusing your web services and then apply preventive measures - for example, a bandwidth limit - for a period of time to limit those users' activity?

In this example, we'll look at how you can drive the control plane (the traffic manager configuration) from the data plane (a TrafficScript rule):

Identify a user by some id, for example, the remote IP address or a cookie value
Measure the activity of each user using a rate class
If a user exceeds the desired rate (their terms of service), add a resource file identifying the user and their 'last sinned' time
Check the resource file's timestamp to see if we should apply a short-term limit to that user's activity

Basic rule

# We want to monitor image downloads only
if( !string.wildMatch( http.getPath(), "*.jpg" ) ) break;

# Identify each user by their remote IP.
# Could use a cookie value here, although that is vulnerable to spoofing
# Note that we'll use $uid as a filename, so it needs to be secured
$uid = request.getRemoteIP();

if( !rate.use.noQueue( "10 per minute", $uid ) ) {
   # They have exceeded the desired rate and broken the terms of use
   # Let's create a config file named $uid, containing the current time
   http.request.put( "http://localhost:9070/api/tm/1.0/config/active/extra/".$uid,
      sys.time(),
      "Content-type: application/octet-stream\r\n".
      "Authorization: Basic ".string.base64encode( "admin:admin" ) );
}

# Now test - did the user $uid break their terms of use recently?
$lastbreach = resource.get( $uid );
if( !$lastbreach ) break; # config file does not exist

if( sys.time() - $lastbreach < 60 ) {
   # They last breached the limits less than 60 seconds ago
   response.setBandwidthClass( "Very slow" );
} else {
   # They have been forgiven their sins. Clean up the config file
   http.request.delete( "http://localhost:9070/api/tm/1.0/config/active/extra/".$uid,
      "Authorization: Basic ".string.base64encode( "admin:admin" ) );
}

This example uses a rate class named '10 per minute' to monitor the request rate for each user, and a bandwidth class named 'Very slow' to apply an appropriate bandwidth limit.  You could potentially implement a similar solution using client-side cookies to identify users who should be bandwidth-limited, but this solution has the advantage that the state is stored locally and is not dependent on trusting the user to honor cookies.

There's scope to improve this rule.  The biggest danger is that if a user exceeds the limit consistently, this will result in a flurry of http.request.put() calls to the local REST daemon.  We can solve this problem quite easily with a rate class that will limit how frequently we update the configuration.  If that slows down a user who has just exceeded their terms of service, that's not really a problem for us!

rate.use( "10 per minute" ); # stall the user if necessary to avoid overload
http.request.put( ... );

Note that we can safely use the rate class in two different contexts in one rule.  The first usage ( rate.use( "name", $uid ) ) will rate-limit each individual value of $uid; the second ( rate.use( "name" ) ) is a global rate limit that will limit all calls to the REST API.

Read more

Check out the other prioritization and rate shaping suggestions on splash, including:

Dynamic rate shaping slow applications
The "Contact Us" attack against mail servers
Stingray Spider Catcher
Evaluating and Prioritizing Traffic with Stingray Traffic Manager
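Stripped of the TrafficScript and REST specifics, the penalty logic amounts to: record the time of the last breach, limit the user while that timestamp is less than 60 seconds old, and clean up the record afterwards. A Python sketch of that state machine — the function names and in-memory dict are illustrative stand-ins for the resource files the rule uses:

```python
import time

PENALTY_SECONDS = 60
last_breach = {}   # uid -> time of the last terms-of-service breach

def record_breach(uid, now=None):
    """Equivalent of the http.request.put() that stores sys.time()."""
    last_breach[uid] = now if now is not None else time.time()

def is_limited(uid, now=None):
    """Should this user get the 'Very slow' bandwidth class?"""
    now = now if now is not None else time.time()
    t = last_breach.get(uid)
    if t is None:
        return False               # no config file: no penalty
    if now - t < PENALTY_SECONDS:
        return True                # breached recently: keep limiting
    del last_breach[uid]           # forgiven: clean up, as the DELETE does
    return False

record_breach('203.0.113.9', now=1000)
print(is_limited('203.0.113.9', now=1030))   # True  (30s into the penalty)
print(is_limited('203.0.113.9', now=1100))   # False (expired and cleaned up)
```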
View full article
Following is a library that I am working on that has one simple design goal: make it easier to do authentication overlay with Stingray.

I want to have the ability to deploy a configuration that uses a single line to input an authentication element (basic auth or forms based), that takes the name of an authenticator, and uses a simple list to define what resources are protected and which groups can access them.

Below is the beginning of this library.  Once we have better code revision handling in splash (hint hint Owen Garrett!!) I will move it to something more re-usable.  Until then, here it is.

As always, comments, suggestions, flames or gifts of mutton and mead are most welcome...

The way I want to call it is like this:

import lib_auth_overlay as aaa;

# Here we challenge for user/pass
$userpasswd = aaa.promptAuth401();

# extract the entered username / password into variables for clarity
$username = $userpasswd[0];
$password = $userpasswd[1];

# Here we check that the user is a member of the listed group.
# We are using the "user_ldap" authenticator that I set up against my
# laptop.snrkl.org AD domain controller.
$authResult = aaa.doAuthAndCheckGroup( "user_ldap", $username, $password,
   "CN=staff,CN=Users,DC=laptop,DC=snrkl,DC=org" );

# for convenience we will tell the user the result of their Auth in an http response
aaa.doHtmlResponse.200( "Auth Result: " . $authResult );

Here is the lib_auth_overlay library that is referenced in the above element.  Please note the promptAuthHttpForm() subroutine is not yet finished...
sub doHtmlResponse.200( $message ){
   http.sendResponse(
      "200 OK",
      "text/html",
      $message,
      ""
   );
}

sub challengeBasicAuth( $errorMessage, $realm ){
   http.sendResponse(
      "401 Access Denied",
      "text/html",
      $errorMessage,
      "WWW-Authenticate: Basic realm=\"" . $realm . "\"" );
}

sub basicAuthExtractUserPass( $ah ){ #// $ah is $authHeader
   $enc = string.skip( $ah, 6 );
   $up = string.split( string.base64decode( $enc ), ":" );
   return $up;
}

sub doAuthAndGetGroups( $authenticator, $u, $p ){
   $auth = auth.query( $authenticator, $u, $p );
   if( $auth['Error'] ) {
      log.error( "Error with authenticator " . $authenticator . ": " . $auth['Error'] );
      return "Authentication Error";
   } else if( !$auth['OK'] ) { #// Auth is not OK
      # Unauthorised
      log.warn( "Access Denied - invalid username or password for user: \"" . $u . "\"" );
      return "Access Denied - invalid username or password";
   } else if( $auth['OK'] ){
      log.info( "Authenticated \"" . $u . "\" successfully at " . sys.localtime.format( "%a, %d %b %Y %T EST" ));
      return $auth['memberOf'];
   }
}

sub doAuthAndCheckGroup( $authenticator, $u, $p, $g ){
   $auth = auth.query( $authenticator, $u, $p );
   if( $auth['Error'] ) {
      log.error( "Error with authenticator \"" . $authenticator . "\": " . $auth['Error'] );
      return "Authentication Error";
   } else if( !$auth['OK'] ) { #// Auth is not OK
      # Unauthorised
      log.warn( "Access Denied - invalid username or password for user: \"" . $u . "\"" );
      return "Access Denied - invalid username or password";
   } else if( $auth['OK'] ){
      log.info( "Authenticated \"" . $u . "\" successfully at " . sys.localtime.format( "%a, %d %b %Y %T EST" ));
      if( lang.isArray( $auth['memberOf'] )){ #// More than one group returned
         foreach( $group in $auth['memberOf'] ){
            if( $group == $g ) {
               log.info( "User \"" . $u . "\" permitted access at " . sys.localtime.format( "%a, %d %b %Y %T EST" ));
               return "PASS";
            } else {
               log.warn( "User \"" . $u . "\" denied access - not a member of \"" . $g . "\" at " . sys.localtime.format( "%a, %d %b %Y %T EST" ));
            }
         }
         #// If we get to here, we have exhausted the list of groups with no match
         return "FAIL";
      } else { #// This means that only one group is returned
         $group = $auth['memberOf'];
         if( $group == $g ) {
            log.info( "User \"" . $u . "\" permitted access at " . sys.localtime.format( "%a, %d %b %Y %T EST" ));
            return "PASS";
         } else {
            log.warn( "User \"" . $u . "\" denied access - not a member of \"" . $g . "\" at " . sys.localtime.format( "%a, %d %b %Y %T EST" ));
            return "FAIL";
         }
      }
   }
}

sub promptAuth401(){
   if( !http.getHeader( "Authorization" )) { #// no Authorization header present, let's challenge for credentials
      challengeBasicAuth( "Error Message", "Realm" );
   } else {
      $authHeader = http.getHeader( "Authorization" );
      $up = basicAuthExtractUserPass( $authHeader );
      return $up;
   }
}

sub promptAuthHttpForm(){
   $response = "<html> <head>Authenticate me...</head> <form action=/login method=POST> <input name=user required> <input name=realm type=hidden value=stingray> <input name=pass type=password required> <button>Log In</button> </form> </html>";
   doHtmlResponse.200( $response );
}
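For reference, the basicAuthExtractUserPass() routine simply skips the six-character 'Basic ' prefix and base64-decodes the rest of the Authorization header. The same decode in plain Python looks like this — a sketch, with parse_basic_auth as an illustrative name, not part of the library:

```python
import base64

def parse_basic_auth(header):
    """Split an 'Authorization: Basic <b64>' value into (user, password)."""
    encoded = header[6:]                               # skip 'Basic '
    decoded = base64.b64decode(encoded).decode('utf-8')
    # Split on the FIRST colon only: passwords may themselves contain ':'
    user, _, password = decoded.partition(':')
    return user, password

print(parse_basic_auth('Basic ' + base64.b64encode(b'alice:s3cret').decode()))
# ('alice', 's3cret')
```

Note that string.split() in the TrafficScript version splits on every colon, so a password containing ':' would be truncated; splitting on the first colon only, as above, avoids that.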
View full article
In Stingray, each virtual server is configured to manage traffic of a particular protocol.  For example, the HTTP virtual server type expects to see HTTP traffic, and automatically applies a number of optimizations - keepalive pooling, HTTP upgrades, pipelining - and offers a set of HTTP-specific functionality (caching, compression etc).

A virtual server is bound to a specific port number (e.g. 80 for HTTP, 443 for HTTPS) and a set of IP addresses.  Although you can configure several virtual servers to listen on the same port, they must be bound to different IP addresses; you cannot have two virtual servers bound to the same IP:port pair, as Stingray would not know which virtual server to route traffic to.

"But I need to use one port for several different applications!"

Sometimes, perhaps due to firewall restrictions, you can't publish services on arbitrary ports.  Perhaps you can only publish services on ports 80 and 443; all other ports are judged unsafe and are firewalled off.  Furthermore, it may not be possible to publish several external IP addresses.

You need to accept traffic for several different protocols on the same IP:port pair, and each protocol needs a particular virtual server to manage it.  How can you achieve this?

The scenario

Let's imagine you are hosting several very different services:

A plain-text web application that needs an HTTP virtual server listening on port 80
A second web application listening for HTTPS traffic on port 443
An XML-based service load-balanced across several servers, listening on port 180
SSH login to a back-end server (this is a 'server-first' protocol), listening on port 22

Clearly, you'll need four different virtual servers (one for each service), but due to firewall limitations, all traffic must be tunnelled to port 80 on a single IP address.  How can you resolve this?

The solution - version 1

The solution is relatively straightforward for the first three protocols.
They are all 'client-first' protocols (see Feature Brief: Server First, Client First and Generic Streaming Protocols), so Stingray can read the initial data written from the client.

Virtual servers to handle individual protocols

First, create three internal virtual servers, listening on unused private ports (I've added 7000 to the public ports).  Each virtual server should be configured to manage its protocol appropriately, and to forward traffic to the correct target pool of servers.  You can test each virtual server by directing your client application to the correct port (e.g. http://stingray-ip-address:7080/ ), provided that you can access the relevant port (e.g. you are behind the firewall):

For security, you can bind these virtual servers to localhost so that they can only be accessed from the Stingray device.

A public 'demultiplexing' virtual server

Create three 'loopback' pools (one for each protocol), directing traffic to localhost:7080, localhost:7180 and localhost:7443.

Create a 'public' virtual server listening on port 80 that interrogates traffic using the following rule, and then selects the appropriate pool based on the data the clients send.  The virtual server should be 'client first', meaning that it will wait for data from the client connection before triggering any rules:

# Get what data we have...
$data = request.get();

# SSL/TLS record layer:
# handshake(22), ProtocolVersion.major(3), ProtocolVersion.minor(0-3)
if( string.regexmatch( $data, '^\026\003[\000-\003]' )) {
   # Looks like SSLv3 or TLS v1/2/3
   pool.use( "Internal HTTPS loopback" );
}

if( string.startsWithI( $data, "<xml" )) {
   # Looks like our XML-based protocol
   pool.use( "Internal XML loopback" );
}

if( string.regexmatch( $data, "^(GET |POST |PUT |DELETE |OPTIONS |HEAD )" )) {
   # Looks like HTTP
   pool.use( "Internal HTTP loopback" );
}

log.info( "Request: '".$data."' unrecognised!" );
connection.discard();

The Detect protocol rule is triggered once we receive client data

Now you can target all your client applications at port 80, tunnel through the firewall and demultiplex the traffic on the Stingray device.

The solution - version 2

You may have noticed that we omitted SSH from the first version of the solution.

SSH is a challenging protocol to manage in this way because it is 'server first' - the client connects and waits for the server to respond with a banner (greeting) before writing any data on the connection.  This means that we cannot use the approach above to identify the protocol type before we select a pool.

However, there's a good workaround.  We can modify the solution presented above so that it waits for client data.  If it does not receive any data within (for example) 5 seconds, then assume that the connection is the server-first SSH type.

First, create an "SSH" virtual server and pool listening on (for example) 7022 and directing traffic to your target SSH server (for example, localhost:22 - the local SSH on the Stingray host):

Note that this is a 'Generic server first' virtual server type, because that's the appropriate type for SSH.

Second, create an additional 'loopback' pool named 'Internal SSH loopback' that forwards traffic to localhost:7022 (the SSH virtual server).

Thirdly, reconfigure the Port 80 listener public virtual server to be 'Generic streaming' rather than 'Generic client first'.  This means that it will run the request rule immediately on a client connection, rather than waiting for client data.

Finally, update the request rule to read the client data.  Because request.get() returns whatever is in the network buffer for client data, we spin and poll this buffer every 10 ms until we either get some data, or we time out after 5 seconds.

# Get what data we have...
$data = request.get();
$count = 500;
while( $data == "" && $count-- > 0 ) {
   connection.sleep( 10 ); # milliseconds
   $data = request.get();
}

if( $data == "" ) {
   # We've waited long enough... this must be a server-first protocol
   pool.use( "Internal SSH loopback" );
}

# SSL/TLS record layer:
# handshake(22), ProtocolVersion.major(3), ProtocolVersion.minor(0-3)
if( string.regexmatch( $data, '^\026\003[\000-\003]' )) {
   # Looks like SSLv3 or TLS v1/2/3
   pool.use( "Internal HTTPS loopback" );
}

if( string.startsWithI( $data, "<xml" )) {
   # Looks like our XML-based protocol
   pool.use( "Internal XML loopback" );
}

if( string.regexmatch( $data, "^(GET |POST |PUT |DELETE |OPTIONS |HEAD )" )) {
   # Looks like HTTP
   pool.use( "Internal HTTP loopback" );
}

log.info( "Request: '".$data."' unrecognised!" );
connection.discard();

This solution isn't perfect (the spin and poll may incur a hit for a busy service over a slow network connection) but it's an effective solution for the single-port firewall problem, and it explains how to tunnel SSH over port 80 (not that you'd ever do such a thing, would you?)

Read more

Check out Feature Brief: Server First, Client First and Generic Streaming Protocols for background
The WebSockets example (libWebSockets.rts: Managing WebSockets traffic with Stingray Traffic Manager) uses a similar approach to demultiplex websockets and HTTP traffic
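Outside of TrafficScript, the same first-bytes classification is easy to prototype and unit-test offline. A Python sketch of the decision the two rules make (the classify function and its labels are illustrative, not part of Stingray):

```python
import re

def classify(data: bytes) -> str:
    """Guess the protocol from the first bytes a client sends."""
    if data == b'':
        return 'ssh'      # no client data after the timeout: server-first
    # TLS record layer: handshake(22), major version 3, minor 0-3
    if re.match(rb'^\x16\x03[\x00-\x03]', data):
        return 'https'
    # Case-insensitive, matching string.startsWithI() in the rule
    if data.lower().startswith(b'<xml'):
        return 'xml'
    if re.match(rb'^(GET|POST|PUT|DELETE|OPTIONS|HEAD) ', data):
        return 'http'
    return 'unknown'

print(classify(b'GET / HTTP/1.1\r\n'))       # http
print(classify(b'\x16\x03\x01\x00\x05'))     # https
print(classify(b''))                         # ssh
```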
View full article
There are many reasons why you may want to serve web content directly from Stingray Traffic Manager - simplification, performance, ease of administration and, perhaps most importantly, to host a 'Sorry Page' if your entire web infrastructure has failed and Stingray is all that is left.

The article Using Stingray Traffic Manager as a Webserver describes the rationale in more detail and presents a simple TrafficScript-based webserver.  However, we can do a lot more with a more complete programming language - mime types, index pages, and more control over the location of the document root are all simple to implement with Python.

Get started with PyRunner.jar

Start with the procedure described in the article PyRunner.jar: Running Python code in Stingray Traffic Manager.  The PyRunner extension lets you run Python code in Stingray, using the local JVM and the Jython implementation.

Note that this example does not work reliably with versions of Jython prior to 2.7 beta1 - I hit problems when a library attempted to import the Jython errno module (possibly related to https://github.com/int3/doppio/issues/177).

webserver.py

Once you've installed PyRunner (using an appropriate version of Jython), upload the following Python script to your Extra Files catalog.
Make sure to call the script 'webserver.py', and edit the location of the docroot to an appropriate value:

from javax.servlet.http import HttpServlet
from urllib import url2pathname
from os import listdir
from os.path import normpath, isdir, isfile, getmtime, getsize
import mimetypes
import datetime

docroot = '/tmp'

def dirlist( uri, path, response ):
    files = ''
    for f in listdir( path ):
        if (f[0] == '.') and (f[1] != '.'): continue # hidden files
        if isdir( path+'/'+f ):
            size = '&lt;DIR&gt;'
            f += '/' # Add trailing slash for directories
        else:
            size = '{:,d} bytes'.format( getsize( path+'/'+f ))
        mtime = datetime.datetime.fromtimestamp( getmtime( path+'/'+f ))
        files += '<a href="{f}">{f:<30}</a> {t:14} {s:>17}\n'.format( f=f, t=mtime, s=size )
    html = '''
<html><head><title>{uri}</title></head><body>
<h1>Directory listing for {uri}</h1>
<pre>{files}<a href="../">Up</a></pre>
</body></html>'''.format( uri=uri, files=files )
    response.setContentType( 'text/html' )
    toClient = response.getWriter()
    toClient.write( html )

class webserver(HttpServlet):
    def doGet(self, request, response):
        uri = request.getRequestURI()
        print "Processing "+uri
        path = normpath( docroot + url2pathname( uri ) )
        # Never access files outside the docroot
        if path.startswith( docroot ) == False:
            path = docroot
            uri = '/'
        if isdir( path ):
            if uri.endswith( '/' ) == False:
                response.sendRedirect( uri+'/' )
            if isfile( path + '/index.html' ):
                response.sendRedirect( uri + 'index.html' )
            else:
                dirlist( uri, path, response )
            return
        try:
            c = open( path, 'r' ).read()
        except Exception, e:
            response.sendError( 400, "Could not read "+path+": "+str( e ) )
            return
        mtype = mimetypes.guess_type( path )[0]
        if mtype == None:
            mtype = 'application/octet-stream'
        response.setContentType( mtype )
        toClient = response.getWriter()
        toClient.write( c )

If you want to edit the rule and try some changes, then you'll find the publish.py script in Deploying Python code to Stingray Traffic Manager useful, and you should follow the Stingray Event log ( tail -f /opt/zeus/zxtm/log/errors ) to catch any problems.

Performance Optimization

The article Using Stingray Traffic Manager as a Webserver describes how to front the web server virtual server with a second virtual server that caches responses and responds directly to web requests.

It also presents an alternative webserver implementation written in Java - take your pick!
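One detail of webserver.py worth calling out is the normpath() docroot check, which stops requests such as /../etc/passwd from escaping the document root. Its behaviour, isolated as a plain-Python sketch (safe_path is an illustrative name, not a function in the script):

```python
from os.path import normpath

DOCROOT = '/tmp'

def safe_path(uri):
    """Map a request URI to a filesystem path confined to DOCROOT."""
    # normpath collapses any '..' components before we test the prefix
    path = normpath(DOCROOT + uri)
    if not path.startswith(DOCROOT):
        return DOCROOT            # escape attempt: fall back to the docroot
    return path

print(safe_path('/images/logo.png'))   # /tmp/images/logo.png
print(safe_path('/../etc/passwd'))     # /tmp  (escape attempt blocked)
```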
View full article
Stingray Traffic Manager has a host of great, capable features to improve the performance and reliability of your web servers.  However, is there sometimes a case for putting the web content on Stingray directly, and using it as a webserver?

The role of a webserver in a modern application has shrunk over the years.  Now it's often just a front-end for a variety of application servers, performing authentication and serving simple static web content... not unlike the role of Stingray.  Often, its position in the application changes when Stingray is added:

Now, if you could move the static content on to Stingray Traffic Manager, wouldn't that help to simplify your application architecture still further? This article presents three such ways:

A simple webserver in TrafficScript

TrafficScript can send back web pages without difficulty, using the http.sendResponse() function.  It can load web content directly from Stingray's Resource Directory (the Extra Files catalog).

Here's a simple TrafficScript webserver that intercepts all requests for content under '/static' and attempts to satisfy them with files from the Resource directory:

# We will serve static web pages for all content under this directory
$static = "/static/";

$page = http.getPath();
if( !string.startsWith( $page, $static )) break;

# Look for the file in the resource directory
$file = string.skip( $page, string.length( $static ));

if( resource.exists( $file )) {
   # Page found!
   http.sendResponse( 200, "text/html", resource.get( $file ), "" );
} else {
   # Page not found, send an error back
   http.sendResponse( 404, "text/html", "Not found", "" );
}

Add this file (as a request rule) to your virtual server and upload some files to Catalog > Extra Files.  You can then browse them from Stingray, using URLs beginning /static.

This is a very basic example.
For example, it does not support mime types (it assumes everything is text/html) - you can check out the Sending custom error pages article for a more sophisticated TrafficScript example that shows you how to host an entire web page (images, css and all) that you can use as an error page if your webservers are down.

However, the example also does not support directory indices... in fact, because the docroot is the Extra Files catalog, there's no (easy) way to manage a hierarchy of web content.  Wouldn't it be better if you could serve your web content directly from a directory on disk?

A more sophisticated Web Server in Java

The article Serving Web Content from Stingray using Java presents a more sophisticated web server written in Java.  It runs as a Java Extension and can access files in a nominated docroot outside of the Stingray configuration.  It supports mime types and directory indices.

Another webserver - in Python

Stingray can also run application code in Python, by way of the PyRunner.jar: Running Python code in Stingray Traffic Manager implementation that runs Python code on Stingray's local JVM.  The article Serving Web Content from Stingray using Python presents an alternative webserver written in Python.

Optimizing for Performance

Perhaps the most widely used feature in Stingray Traffic Manager, after basic load balancing, health monitoring and the like, is Content Caching.  Content Caching is a very easy and effective way to reduce the overhead of generating web content, whether the content is read from disk or generated by an application.  The content generated by our webserver implementations is fairly 'static' (it does not change), so it's ripe for caching to reduce the load on our webservers.

There's one complication - you can't just turn content caching on and expect it to work in this situation.
That's because content caching hooks into two stages in the transaction lifecycle in Stingray:

Once all response rules have completed, Stingray will inspect the final response and decide whether it can be cached for future reuse
Once all request rules have completed, Stingray will examine the current request and determine if there's a suitable response in the cache

In our webserver implementations, the content was generated during the request processing step and written back to the client using http.sendResponse (or equivalent).  We never run any response rules (so the content cannot be cached) and we never get to the end of the request rules (so we would not check the cache anyway).

The elegant solution is to create a virtual server in Stingray specifically to run the webserver extension.  The primary virtual server can forward traffic to your application servers, or to the internal 'webserver' virtual server as appropriate.  It can then cache the responses (and respond directly to future requests) without any difficulty:

The primary Virtual Server decides whether to direct traffic to the back-end servers, or to the internal web server, using an appropriate TrafficScript rule:

Create a new HTTP virtual server - this will be used to run the webserver extension.  Configure the virtual server to listen on localhost only, as it need not be reachable from anywhere else.  You can choose an unused high port, such as 8080 perhaps.

Add the appropriate TrafficScript rule to this virtual server to make it run the webserver extension.

Create a pool called 'Internal Web Server' that will direct traffic to this new virtual server.  The pool should contain the node localhost:[port].

Extend the TrafficScript rule on the original virtual server to:

# Use the internal web server for static content
if( string.startsWith( http.getPath(), "/static" )) {
   pool.use( "Internal Web Server" );
}

Enable the web cache on the original virtual server.
Now, all traffic goes to the original virtual server as before. Static pages are directed to the internal web server, and the content from that will be cached. With this configuration, Stingray should be able to serve web pages as quickly as needed.   Don't forget that all the other Stingray features like content compression, logging, rate-shaping, SSL encryption and so on can all be used with the new internal web server. You can even use response rules to alter the static pages as they are sent out.   Now, time to throw your web servers away?  
View full article
The article Managing consistent caches across a Stingray Cluster describes in detail how to configure a pair of Stingray devices to operate together as a fully fault-tolerant cache.

The beauty of the configuration was that it minimized the load on the origin servers - content would only be requested from the origin servers when it had expired on both peers, and a maximum of one request per 15 seconds (configurable) per item of content would be sent to the origin servers:

The solution uses two Stingray Traffic Managers, and all incoming traffic is distributed to one single front-end traffic manager.

How could we extend this solution to support more than two traffic managers (for very high-availability requirements) with multiple active traffic managers?

Overview

The basic architecture of the solution is as follows:

We begin with a cluster of 3 Stingray Traffic Managers, named stm-1, stm-2 and stm-3, with a multi-hosted IP address distributing traffic across the three traffic managers
Incoming traffic is looped through all three traffic managers before being forwarded to the origin servers; the return traffic can then be cached by each traffic manager
If any of the traffic managers have a cached version of the response, they respond directly

Configuration

Start with a working cluster.  In this example, the names 'stm-1', 'stm-2' and 'stm-3' resolve to the permanent IP addresses of each traffic manager; replace these with the hostnames of the machines in your cluster.  The origin servers are webserver1, webserver2 and webserver3.

Step 1: Create the basic pool and virtual server

Create a pool named 'website0', containing the addresses of the origin servers.  Create a virtual server that uses the 'discard' pool as its default pool.  Add a request rule to select 'website0':

pool.use( "website0" );

... and verify that you can browse your website through this virtual server.
Step 2: Create the additional pools

You will need to create N * (N-1) additional pools if you have N traffic managers in your cluster.

Pools website10, website20 and website30 contain the origin servers and either node stm-1:80, stm-2:80 or stm-3:80.  Edit each pool and enable priority lists so that the stm node is used in preference to the origin servers:

Configuration for Pools website10 (left), website20 (middle) and website30 (right)

Pools website230, website310 and website120 contain the origin servers and two of the nodes stm-1:80, stm-2:80 and stm-3:80.  Edit each pool and enable priority lists so that the stm nodes are each used in preference to the origin servers.

For example, pool website310 will contain nodes stm-3:80 and stm-1:80, and have the following priority list configuration:

Step 3: Add the TrafficScript rule to route traffic through the three Stingrays

Enable trafficscript!variable_pool_use (Global Settings > Other Settings), then add the following TrafficScript request rule:

# Consistent cache with multiple active traffic managers
$tm = [
   'stm-1' => [ 'id' => '1', 'chain' => '123' ],
   'stm-2' => [ 'id' => '2', 'chain' => '231' ],
   'stm-3' => [ 'id' => '3', 'chain' => '312' ]
];

$me = sys.hostname();
$id = $tm[$me]['id'];

$chain = http.getHeader( 'X-Chain' );
if( !$chain ) $chain = $tm[$me]['chain'];

log.info( "Request " . http.getPath() . ": ".$me.", id ".$id.": chain: ".$chain );

do {
   $i = string.left( $chain, 1 );
   $chain = string.skip( $chain, 1 );
} while( $chain && $i != $id );

log.info( "Request " . http.getPath() . ": New chain is ".$chain.", selecting pool 'website".$chain."0'");

http.setHeader( 'X-Chain', $chain );
pool.use( 'website'.$chain.'0' );

Leave the debugging 'log.info' statements in for the moment; you should comment them out when you deploy in production.

How does the rule work?
When traffic is received by a Traffic Manager (for example, the traffic manager with hostname stm-2), the rule selects the chain of traffic managers to process that request - traffic managers 2, 3 and 1.   It updates the chain by removing '2' from the start, and then selects pool 'website310'.   This pool selects stm-3 in preference, then stm-1 (if stm-3 has failed), and finally the origin servers if both devices have failed.   stm-3 will process the request, check the chain (which is now '31'), remove itself from the start of the chain and select pool 'website10'.   stm-1 will then select the origin servers.   This way, a route for the traffic is threaded through all of the working traffic managers in the cluster.   Testing the rule   You should test the configuration with a single request.  It can be very difficult to unravel multiple requests at the same time with this configuration.   Note that each traffic manager in the cluster will log its activity, but the merging of these logs is done at a per-second accuracy, so they will likely be misordered.  You could add a 'connection.sleep( 2000 )' in the rule for the purposes of testing to avoid this problem.   Enable caching   Once you are satisfied that the configuration is forwarding each request through every traffic manager, and that failures are appropriately handled, then you can configure caching.  The details of the configuration are explained in the Managing consistent caches across a Stingray Cluster article:     Test the configuration using a simple, repeated GET for a cacheable object:   $ while sleep 1 ; do wget http://192.168.35.41/zeus/kh/logo.png ; done   Just as in the Consistent Caches article, you'll see that all Stingrays have the content in their cache, and it's refreshed from one of the origin servers once every 15 seconds:   Notes   This configuration used a Multi-Hosted IP address to distribute traffic across the cluster.  
It works just as well with single hosted addresses, and this can make testing somewhat easier as you can control which traffic manager receives the initial request.

You could construct a similar configuration using Failpools rather than priority lists.  The disadvantage of using failpools is that Stingray would treat the failure of a Stingray node as a serious error (because an entire pool has failed), whereas with priority lists, the failure of a node is reported as a warning.  A warning is more appropriate because the configuration can easily accommodate the failure of one or two Stingray nodes.

Performance should not be unduly affected by the need to thread requests through multiple traffic managers.  All cacheable requests are served directly by the traffic manager that received the request.  The only requests that traverse multiple traffic managers are those that are not in the cache, either because the response is not cacheable or because it has expired according to the 'one check every 15 seconds' policy.
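To summarise the 'How does the rule work?' walkthrough above, here is the chain-trimming logic re-expressed as a small, standalone Python sketch. This is an illustration only - the real logic runs as TrafficScript on the traffic managers - and the pool name 'website0', produced when the chain is exhausted, assumes a final pool containing only the origin servers:

```python
def next_hop(chain, my_id):
    """Mimic the TrafficScript do/while loop: pop ids off the front of
    the chain until we have popped our own id, then use whatever is
    left of the chain to name the next pool."""
    while chain:
        head, chain = chain[0], chain[1:]
        if head == my_id:
            break
    return chain, "website%s0" % chain

# stm-2 receives a fresh request, so its default chain '231' applies
assert next_hop("231", "2") == ("31", "website310")
# stm-3 sees X-Chain: 31 and forwards to stm-1 via pool website10
assert next_hop("31", "3") == ("1", "website10")
# stm-1 exhausts the chain and falls through to the origin servers
assert next_hop("1", "1") == ("", "website0")
```

Tracing the three assertions reproduces the stm-2 > stm-3 > stm-1 path described in the walkthrough.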
To get started quickly with Python on Stingray Traffic Manager, go straight to PyRunner.jar: Running Python code in Stingray Traffic Manager.  This article dives deep into how that extension was developed.

As is well documented, we support the use of Java extensions to manipulate traffic. One of the great things about supporting "Java" is that this really means supporting the JVM platform... which, in turn, means we support any language that will run on the JVM and can access the Java libraries.

Java isn't my first choice of languages, especially when it comes to web development. My language of choice in this sphere is Python, and thanks to the Jython project we can write our extensions in Python!

Jython comes with a servlet named PyServlet which will work with Stingray out-of-the-box, so this is a good place to start. First, let us quickly set up our Jython environment; working Java support is a prerequisite, of course. (If you're not already familiar with Stingray Java extensions I recommend reviewing the Java Development Guide available in the Stingray Product Documentation and the Splash guide Writing Java Extensions - an introduction.)

Grab Jython 2.5 (if there is now a more recent version I expect it will work fine as well).  Jython comes in the form of a .jar installer; on Linux I recommend using the CLI install variant:

java -jar jython_installer-2.5.0.jar -c

Install to a path of your choosing; I've used /space/jython.  Then upload /space/jython/jython.jar to your Java Extensions Catalog.

In the Java Extensions Catalog you can now see a couple of extensions provided by jython.jar, including org.python.util.PyServlet. By default this servlet maps URLs ending in .py to local Python files which it will compile and cache.
Set up a test HTTP Virtual Server (I've created a test one called "Jython" tied to the "discard" pool), and add the following request rule to it:

if (string.endsWith(http.getPath(), ".py")) {
   java.run( "org.python.util.PyServlet" );
}

The rule avoids errors by only invoking the extension for .py files. (Though if you invoke PyServlet for a .py file that doesn't exist you will get a nasty NullPointerException stack trace in your event log.)

Next, create a file called Hello.py containing the following code:

from javax.servlet.http import HttpServlet

class Hello(HttpServlet):
    def doGet(self, request, response):
        toClient = response.getWriter()
        response.setContentType("text/html")
        toClient.println("<html><head><title>Hello from Python</title>" +
                         "<body><h1 style='color:green;'>Hello from Python!</h1></body></html>")

Upload Hello.py to your Java Extensions Catalog. Now if you visit the file Hello.py at the URL of your VirtualServer you should see the message "Hello from Python!" Hey, we didn't have to compile anything! Your event log will have some messages about processing .jar files; this only happens on first invocation. You will also get a warning in your event log every time you visit Hello.py:

WARN java/* servleterror Servlet org.python.util.PyServlet: Unknown attribute (pyservlet)

This is just because Stingray doesn't set up or use this particular attribute. We'll ignore the warning for now, and get rid of it when we tailor PyServlet to be more convenient for Stingray use. The servlet will also have created some extra files in your Java Libraries & Data Catalog under a top-level directory WEB-INF. It is possible to change the location used for these files; we'll get back to that in a moment.

All quite neat so far, but the icing on the cake is yet to come. If you open up $ZEUSHOME/conf/jars/Hello.py in your favourite text editor and change the message to "Hello <em>again</em> from Python!"
and refresh your browser, you'll notice that the new message comes straight through. This is because the PyServlet code checks the .py file: if it is newer than the cached bytecode, it will re-interpret and re-cache it. We now have somewhat-more-rapid application development for Stingray extensions, bringing together the excellence of Python and the extensiveness of the Java libraries.

However, what I really want to do is use TrafficScript at the top level to tell PyServlet which Python servlet to run. This will require a little tweaking of the code. We want to get rid of the code that resolves the .py file from the request URI and replace it with code that uses an argument passed in by TrafficScript. While we're at it, we'll make it non-HTTP-specific and add some other small tweaks. The changes are documented in comments in the code, which is attached to this document.

To compile the Stingray version of the servlet and pop the two classes into a single convenient .jar file, execute the following commands:

$ javac -cp $ZEUSHOME/zxtm/lib/servlet.jar:/space/jython/jython.jar ZeusPyServlet.java
$ jar -cvf ZeusPyServlet.jar ZeusPyServlet.class ZeusPyServletCacheEntry.class

(Adjusting paths to suit your environment if necessary.)

Upload the ZeusPyServlet.jar file to your Java Extensions Catalog. You should now have a ZeusPyServlet extension available. Change your TrafficScript rule to load this new extension and provide an appropriate argument:

if (string.endsWith(http.getPath(), ".py")) {
   java.run( "ZeusPyServlet", "Hello.py" );
}

Now visiting Hello.py works just as before. In fact, visiting any URL that ends in .py will now generate the same result as visiting Hello.py. We have complete control over what Python code is executed from our TrafficScript rule - much more convenient.

If you continue hacking from this point you'll soon find that we're missing core parts of Python with the setup described so far.
For example, adding import md5 to your servlet code will break the servlet; you'd see this in your Stingray Event Log:

WARN servlets/ZeusPyServlet Servlet threw exception javax.servlet.ServletException:
           Exception during init of /opt/zeus/zws/zxtm/conf/jars/ServerStatus.py
WARN  Java: Traceback (most recent call last):
WARN  Java:    File "/opt/zeus/zws/zxtm/conf/jars/ServerStatus.py", line 2, in <module>
WARN  Java:      import md5
WARN  Java: ImportError: No module named md5

This is because the class files for the core Python libraries are not included in jython.jar. To get a fully functioning Jython we need to tell ZeusPyServlet where Jython is installed. To do this you must have Jython installed on the same machine as the Stingray software, and then you just have to set a configuration parameter for the servlet. In summary:

Install Jython on your Stingray machine; I've installed mine to /space/jython
In Catalogs > Java > ZeusPyServlet add some parameters:
   Parameter: python_home, Value: /space/jython (or wherever you have installed Jython)
   Parameter: debug, Value: none required (this is optional; it will turn on some potentially useful debug messages)
Back in Catalogs > Java you can now delete all the WEB-INF files; now that Jython knows where it is installed it doesn't need them
Go to System > Traffic Managers and click the 'Restart Java Runner ...' button, then confirm the restart (this ensures no bad state is cached)

Now your Jython should be fully functional. Here's a script for you to try that uses MD5 functionality from both Python and Java. Just replace the content of Hello.py with the following code.
from javax.servlet.http import HttpServlet
from java.security import MessageDigest
from md5 import md5

class Hello(HttpServlet):
    def doGet(self, request, response):
        toClient = response.getWriter()
        response.setContentType("text/html")
        htmlOut = "<html><head><title>Hello from Python</title><body>"
        htmlOut += "<h1>Hello from Python!</h1>"
        # try a Python md5
        htmlOut += "<h2>Python MD5 of 'foo': %s</h2>" % md5("foo").hexdigest()
        # try a Java md5
        htmlOut += "<h2>Java MD5 of 'foo': "
        jmd5 = MessageDigest.getInstance("MD5")
        digest = jmd5.digest("foo")
        for byte in digest:
            htmlOut += "%02x" % (byte & 0xFF)
        htmlOut += "</h2>"
        # yes, the Stingray attributes are available
        htmlOut += "<h2>VS: %s</h2>" % request.getAttribute("virtualserver")
        htmlOut += "</body></html>"
        toClient.println(htmlOut)

An important point to realise about Jython is that beyond the usual core Python APIs you cannot expect all the 3rd party Python libraries out there to "just work". Non-core Python modules compiled from C (and any modules that depend on such modules) are the main issue here. For example, the popular Numeric package will not work with Jython. Not to worry though: there are often pure-Python alternatives. Don't forget that you have all the Java libraries available too, and even special Java libraries designed to extend Python-like APIs to Jython, such as JNumeric, a Jython equivalent to Numeric. There's more information on the Jython wiki; I recommend reading through all the FAQs as a starting point. It is perhaps best to think of Jython as a language which gives you the neatness of Python syntax and the Python core with the utility of the massive collection of Java APIs out there.
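The `& 0xFF` mask in the Java md5 loop above matters because Java's MessageDigest returns signed bytes (-128..127), while hex formatting expects values in 0..255. A quick CPython check of that conversion (java_bytes_to_hex is just an illustrative helper name, not part of the servlet):

```python
import hashlib

def java_bytes_to_hex(signed_bytes):
    # Mask each signed Java-style byte back into 0..255 before
    # formatting, exactly as "%02x" % (byte & 0xFF) does in the servlet.
    return "".join("%02x" % (b & 0xFF) for b in signed_bytes)

# -44 is the two's-complement view of 0xd4; it must print as 'd4'
assert java_bytes_to_hex([-44]) == "d4"

# Simulate Java's signed bytes for md5('foo') and cross-check against
# Python's own hexdigest - both views must agree
signed = [b - 256 if b > 127 else b for b in hashlib.md5(b"foo").digest()]
assert java_bytes_to_hex(signed) == hashlib.md5(b"foo").hexdigest()
```

This is why the servlet's Python and Java MD5 outputs match character for character.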
For a comprehensive description of how this Stingray Java Extension operates, check out Yvan Seth's excellent article Making Stingray more RAD with Jython!

Overview

Stingray can invoke TrafficScript rules (see Feature Brief: TrafficScript) against requests and responses, and these rules run directly in the traffic manager kernel as high-performance bytecode.

A TrafficScript rule can also reach out to the local JVM to run servlets (Feature Brief: Java Extensions in Stingray Traffic Manager), and the PyRunner.jar library uses the JVM to run Python code against network traffic.  This is a great solution if you need to deploy complex traffic management policies and your development expertise lies with Python.

Requirements

Download and install Jython (http://www.jython.org/downloads.html).  This code was developed against Jython 2.5.3, but should run against other Jython versions.  For best compatibility across platforms, use the Jython installer from www.jython.org rather than the jython packages distributed by your OS vendor:

$ java -jar jython_installer-2.5.2.jar --console

Select installation option 1 (all components) or explicitly include the src part - this installs additional modules in extlibs that we will use later.

Locate the jython.jar file included in the install and upload this file to your Stingray Java Extensions catalog.

Download the PyRunner.jar file attached to this document and upload that to your Java Extensions catalog.  Alternatively, you can compile the Jar file from source:

$ javac -cp servlet.jar:zxtm-servlet.jar:jython.jar PyRunner.java
$ jar -cvf PyRunner.jar PyRunner*.class

You can now run simple Python applications directly from TrafficScript!
A simple 'HelloWorld' example

Save the following Python code as Hello.py and upload the file to your Catalog > Extra Files catalog:

from javax.servlet.http import HttpServlet
import time

class Hello(HttpServlet):
    def doGet(self, request, response):
        toClient = response.getWriter()
        response.setContentType("text/html")
        toClient.println("<html><head><title>Hello World</title>" +
                         "<body><h1 style='color:red;'>Hello World</h1>" +
                         "The current time is " + time.strftime('%X %x %Z') +
                         "</body></html>")

Assign the following TrafficScript request rule to your Virtual Server:

java.run( "PyRunner", "Hello.py" );

Now, whenever the TrafficScript rule is called, it will run the Hello.py code.  The PyRunner extension loads and compiles the Python code, and caches the compiled bytecode to optimize performance.

More sophisticated Python examples

The PyRunner.jar/jython.jar combination is capable of running simple Python examples, but it does not have access to the full set of Python core libraries.  These are to be found in additional jar files in the extlibs part of the Jython installation.

If you install Jython on the same machine you are running the Stingray software on, then you can point PyRunner.jar at that location:

Install Jython in a known location, such as /usr/local/jython - make sure to install all components (option 1 in the installation types) or explicitly add the src part
Navigate to Catalogs > Java > PyRunner and add a parameter named python_home, set to /usr/local/jython (or another location as appropriate)
In Catalogs > Java, delete the WEB-INF files generated previously - they won't be required any more
From the System > Traffic Managers page, restart your Java runner.

You can install Jython in this way on the Stingray Virtual Appliance, but please be aware that the installation will not be preserved during a major upgrade, and it will not form part of the supported configuration of the virtual appliance.
Here's an updated version of Hello.py that uses the Python and Java md5 implementations to compare md5s for the string 'foo' (they should give the same result!):

from javax.servlet.http import HttpServlet
from java.security import MessageDigest
from md5 import md5
import time

class Hello(HttpServlet):
    def doGet(self, request, response):
        toClient = response.getWriter()
        response.setContentType("text/html")
        htmlOut = "<html><head><title>Hello World</title><body>"
        htmlOut += "<h1>Hello World</h1>"
        htmlOut += "The current time is " + time.strftime('%X %x %Z') + "<br/>"
        # try a Python md5
        htmlOut += "Python MD5 of 'foo': %s<br/>" % md5("foo").hexdigest()
        # try a Java md5
        htmlOut += "Java MD5 of 'foo': "
        jmd5 = MessageDigest.getInstance("MD5")
        digest = jmd5.digest("foo")
        for byte in digest:
            htmlOut += "%02x" % (byte & 0xFF)
        htmlOut += "<br/>"
        # yes, the Stingray attributes are available
        htmlOut += "Virtual Server: %s<br/>" % request.getAttribute("virtualserver")
        # 'args' is the parameter list for java.run(), beginning with the script name
        htmlOut += "Args: %s<br/>" % ", ".join(request.getAttribute("args"))
        htmlOut += "</body></html>"
        toClient.println(htmlOut)

Upload this file to your Extra Files catalog to replace the existing Hello.py script and try it out.

Rapid test and development

Check out publish.py - a simple python script that automates the task of uploading your python code to the Extra Files catalog: Deploying Python code to Stingray Traffic Manager
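For reference, the upload that a publish script automates amounts to a single authenticated HTTP PUT against the traffic manager's REST interface. The sketch below only builds the request pieces and makes no network call; the endpoint path, API version string and port 9070 are assumptions based on the Stingray REST API, not details taken from publish.py itself:

```python
import base64

def build_upload_request(host, filename, username, password,
                         api_version="2.0", port=9070):
    # Assumed Stingray REST endpoint for the Extra Files catalog; the
    # file body would be PUT with this URL and these headers.
    url = "https://%s:%d/api/tm/%s/config/active/extra_files/%s" % (
        host, port, api_version, filename)
    token = base64.b64encode(("%s:%s" % (username, password)).encode()).decode()
    headers = {
        "Content-Type": "application/octet-stream",
        "Authorization": "Basic " + token,
    }
    return url, headers
```

An actual publish step would pass the returned url and headers, along with the contents of Hello.py, to an HTTP client such as urllib, requests or curl.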