Pulse Secure vADC

by Aidan Clarke

Traditional IT applications were simple: they lived in one place, in your data center. If you wanted more capacity, you added more servers, storage and networks. If you wanted to make an application more reliable, you doubled it up to make it highly available: one system ran “active” while the other waited on “standby.” This concept of “redundancy” was simple, so long as you could buy two of everything and accept that only half of the infrastructure was active at any one time - not an efficient solution.

But modern applications need a modern approach to performance, security and reliability, which is why Pulse vADC approaches things differently: a software solution for a software world, where distributed applications need an “always-active” architecture.

We often hear from IT professionals that they used to avoid Active/Active architectures, for fear that performance would be compromised under failure. Our customers routinely deploy Pulse vADC in Active/Active, or even Active/Active/Active/Active, solutions: they can choose the right balance between node and cluster size to optimize availability while reducing the size of the fault domain.

Similarly, high-availability architectures used to require that HA peers were installed as Layer 2 adjacent (i.e. on the same network). These architectures simply don't work in today's clouds; for example, AWS Availability Zones, by their very design, are on different Layer 3 networks. To run a Layer 2 HA pair in Amazon AWS, you would need to put your whole solution in a single AWS Availability Zone - a practice that Amazon architects strongly discourage.

With Pulse vADC, if your nodes can connect to each other over a network, then you can cluster your application. This means you can choose an availability architecture to suit your application - whether it lives in your data center, in a cloud, or both.
Get started with Pulse vADC today: the Community Edition is free to download and try out in your test and development environment.

This article is part of a series, beginning with: Staying Afloat in the Application Economy

More to Explore: Prev: One ADC Platform, Any Environment Next: Intelligent N+M Clustering
View full article
Introduction

Many DDoS attacks work by exhausting the resources available to a website for handling new connections. In most cases, the tool used to generate this traffic can make HTTP requests and follow HTTP redirect messages, but lacks the sophistication to store cookies. As such, one of the most effective ways of combatting DDoS attacks is to drop connections from clients that don't store cookies during a redirect.

Before you Proceed

It's important to point out that using the solution herein may prevent at least the following legitimate uses of your website (and possibly others):

Visits by user-agents that do not support cookies, or where cookies are disabled for any reason (such as privacy); some people may think that your website has gone down!
Visits by internet search engine web-crawlers; this will prevent new content on your website from appearing in search results!

If either of the above items concerns you, I would suggest seeking advice (either from the community, or through your technical support channels).

Solution Planning

Implementing a solution in pure TrafficScript will prevent traffic from reaching the web servers, but attackers are still free to consume connection-handling resources on the traffic manager. To make the solution more robust, we can use iptables to block traffic a bit earlier in the network stack. This solution presents us with a couple of challenges:

TrafficScript cannot execute shell commands, so how do we add rules to iptables?
Assuming we don't want to permanently block all IP addresses that are involved in a DDoS attack, how can we expire the rules?

Even though TrafficScript cannot directly run shell commands, the Event Handling system can. We can use the event.emit() TrafficScript function to send jobs to a custom event handler shell script that adds an iptables rule blocking the offending IP address. To expire each rule, we can use the at command to schedule a job that removes it.
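Before building the traffic manager integration, the block-then-expire pattern itself can be sketched in shell. The sketch below only prints the commands it would run (the IP address and duration are illustrative, and echoing avoids needing root privileges or a running at daemon):

```shell
#!/bin/bash
# Sketch: block an offending client with iptables, then let the OS expire
# the rule via at(1). Commands are printed rather than executed.
IP="203.0.113.10"   # illustrative offending client address
DURATION=10         # minutes the block should last

# Rule the event handler would append to the INPUT chain:
BLOCK_CMD="iptables -A INPUT -s ${IP} -j DROP"

# Cleanup job handed over to the OS scheduler:
EXPIRE_CMD="echo 'iptables -D INPUT -s ${IP} -j DROP' | at -M now + ${DURATION} minutes"

echo "$BLOCK_CMD"
echo "$EXPIRE_CMD"
```

Run for real (as root), the first command inserts the DROP rule and the second queues its removal, which is exactly what the full event-handler script later in this article does.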
This means that we hand the scheduling and running of that job over to the control of the OS (which is something that it was designed to do). The overall plan looks like this:

Write a TrafficScript rule that emits a custom event when it detects a client that doesn't support cookies and redirects
Write a shell script that takes as its input: an --eventtype argument (the event handler includes this automatically); a --duration argument (to define the length of time that an IP address stays blocked for); and a string of information that includes the IP address that is to be blocked
Create an event handler for the events that our TrafficScript is going to emit

TrafficScript

$cookie = http.getCookie( "DDoS-Test" );

if( ! $cookie ) {

   # Either it's the visitor's first time to the site, or they don't support cookies
   $test = http.getFormParam( "cookie-test" );

   if( $test != "1" ) {
      # It's their first time.  Set the cookie, redirect to the same page
      # and add a query parameter so we know they have been redirected.
      # Note: if they supplied a query string or used a POST,
      # we'll respond with a bare redirect
      $path = http.getPath();

      http.sendResponse( "302 Found", "text/plain", "",
         "Location: " . string.escape( $path ) .
         "?cookie-test=1\r\nSet-Cookie: DDoS-Test=1" );

   } else {

      # We've redirected them and attempted to set the cookie, but they have not
      # accepted.  Either they don't support cookies, or (more likely) they are a bot.

      # Emit the custom event that will trigger the firewall script.
      event.emit( "firewall", request.getRemoteIP() );

      # Pause the connection for 100 ms to give the firewall time to catch up.
      # Note: This may need tuning.
      connection.sleep( 100 );

      # Close the connection.
      connection.close( "HTTP/1.1 200 OK\n" );
   }
}

Installation

This code will need to be applied to the virtual server as a request rule. To do that, take the following steps:

In the traffic manager GUI, navigate to Catalogs → Rule
Enter ts-firewaller in the Name field
Click the Use TrafficScript radio button
Click the Create Rule button
Paste the code from the attached ts-firewaller.rts file
Click the Save button
Navigate to the Virtual Server that you want to protect (Services → <Service Name>)
Click the Rules link
In the Request Rules section, select ts-firewaller from the drop-down box
Click the Add Rule button

Your virtual server should now be configured to execute the rule.

Shell Script

#!/bin/bash

# Use getopt to collect parameters.
params=`getopt -o e:,d: -l eventtype:,duration: -- "$@"`

# Evaluate the set of parameters.
eval set -- "$params"
while true; do
    case "$1" in
    --duration ) DURATION="$2"; shift 2 ;;
    --eventtype ) EVENTTYPE="$2"; shift 2 ;;
    -- ) shift; break ;;
    * ) break ;;
    esac
done

# Awk the IP address out of ARGV
IP=$(echo "${BASH_ARGV}" | awk '{ print ( $(NF) ) }')

# Add a new rule to the INPUT chain.
iptables -A INPUT -s ${IP} -j DROP &&

# Queue a new job to delete the rule after DURATION minutes.
# Prevents warning about executing the command using /bin/sh from
# going in the traffic manager event log.
echo "iptables -D INPUT -s ${IP} -j DROP" |
at -M now + ${DURATION} minutes &> /dev/null

Installation

To use this script as an action program, you'll need to upload it via the GUI.
To do that, take the following steps:

Open a new file with the editor of your choice (depends on what OS you're using)
Copy and paste the script code into the editor
Save the file as ts-firewaller.sh
In the traffic manager UI, navigate to Catalogs → Extra Files → Action Programs
Click the Choose File button
Select the ts-firewaller.sh file that you just created
Click the Upload Program button

Event Handler

Now that we have a rule that emits a custom event, and a script that we can use as an action program, we can configure the event handler that will tie the two together. First, we need to create a new event type:

In the traffic manager's UI, navigate to System → Alerting
Click the Manage Event Types button
Enter Firewall in the Name field
Click the Add Event Type button
Click the + next to the Custom Events item in the event tree
Click the Some custom events... radio button
Enter firewall in the empty field
Click the Update button

Now that we have an event type, we need to create a new action:

In the traffic manager UI, navigate to System → Alerting
Click on the Manage Actions button
In the Create New Action section, enter firewall in the Name field
Click the Program radio button
Click the Add Action button
In the Program Arguments section, enter duration in the Name field
Enter "Determines the length of time in minutes that an IP will be blocked for" in the Description field
Click the Update button
Enter 10 in the newly-appeared arg!duration field
Click the Update button

Now that we have an action configured, the only thing that we have left to do is to connect the custom event to the new action:

In the traffic manager UI, navigate to System → Alerting
In the Event Type column, select firewall from the drop-down box
In the Actions column, select firewall from the drop-down box
Click the Update button

That concludes the installation steps; this solution should now be live!

Testing

Testing the functionality is pretty simple for this solution.
Basically, you can monitor the state of iptables while you run specific commands from a command line. To do this, ssh into your traffic manager and execute iptables -L as root. You should check this after each of the upcoming tests.

Since I'm using a Linux machine for testing, I'm going to use the curl command to send crafted requests to my traffic manager. The three scenarios that I want to test are:

Initial visit: the user-agent has no query string and no cookie
Successful second visit: the user-agent has a query string and has provided the correct cookie
Failed second visit: the user-agent has a query string (indicating that it was redirected), but hasn't provided a cookie

The respective curl commands that need to be run are:

curl -v http:///
curl -v http:///?cookie-test=1 -b "DDoS-Test=1"
curl -v http:///?cookie-test=1

Note: If you run these commands from your workstation, you will be unable to connect to the traffic manager in any way for a period of 10 minutes!
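If you want to sanity-check the event handler's argument handling without generating real events, the getopt and awk logic from the shell script above can be exercised with a hand-crafted invocation. The event string below is illustrative (the real one is produced by the traffic manager's alerting system), and for clarity the remaining positional argument is used in place of BASH_ARGV:

```shell
#!/bin/bash
# Simulate the invocation the event handler makes, roughly:
#   ts-firewaller.sh --eventtype firewall --duration 10 '<event text> <ip>'
set -- --eventtype firewall --duration 10 "firewall INFO some event text 203.0.113.10"

# Same option parsing as the event-handler script above.
params=`getopt -o e:,d: -l eventtype:,duration: -- "$@"`
eval set -- "$params"
while true; do
    case "$1" in
    --duration ) DURATION="$2"; shift 2 ;;
    --eventtype ) EVENTTYPE="$2"; shift 2 ;;
    -- ) shift; break ;;
    * ) break ;;
    esac
done

# The IP address is the last whitespace-separated field of the event string.
IP=$(echo "$1" | awk '{ print $(NF) }')

echo "eventtype=$EVENTTYPE duration=$DURATION ip=$IP"
```

If the parsing is correct, this prints eventtype=firewall, duration=10 and the trailing IP address, confirming the values the real script would feed to iptables and at.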
View full article
We’re really excited to present a preview of our next big development in content-aware application delivery. Our Web Accelerator technology prepares your content for optimal delivery over high-latency networks; our soon-to-be-announced Latitude-aware Content Optimization will further optimize it for correct rendering in the client device, no matter where the observer is relative to the content origin.

Roadmap disclaimer: This forward-looking statement is for information purposes only and is not a commitment, promise or legal obligation to deliver any new products, features or functionality. Any announcements are conditional on successful in-the-field tests of this technology.

"Here comes the science bit"

Individual binary digits have rotational symmetry and can survive transmission across equatorial boundaries intact. Layer 1 encoding schemes such as Differential Manchester Encoding are similarly immune to polarity changes and protect on-the-wire data against these effects as far as layer 4, ensuring TCP connections operate correctly. However, layer 7 content suffers from an inversion transformation when generated in one hemisphere and observed in the other.

Our solution has been tested against a number of websites, including our own (https://splash.riverbed.com - see attachment below) with a good degree of success. In its current beta state, you can try it against other sites (YMMV).

Getting started

If you haven’t got a Traffic Manager handy, download and install the Community Edition.

Proxying a website to test the optimization

The following instructions explain how to proxy splash.riverbed.com. For a more general overview, check out Getting Started - Load-balancing to a website using Traffic Manager.

Create a pool named splash pool, containing the node splash.riverbed.com:443. Ensure that SSL decryption is turned on.

Create a virtual server named splash server, listening on an available port (e.g. 8088), HTTP protocol (no SSL).
Configure the virtual server to use the pool splash pool, and make sure that Connection Management -> Location Header Settings -> location!rewrite is set to ‘Rewrite the hostname…’.

Verify that you can access and browse Splash through the IP of your Traffic Manager: http://stingray-ip:8088/

Applying the optimization

Now we’ll apply our content optimization. This optimization is implemented by way of a response rule:

$ct = http.getResponseHeader( "Content-Type" );

# We only need to embed client-side TrafficScript in HTML content
if( !string.startsWith( $ct, "text/html" ) ) break;

# Will this data cross the equatorial boundary?
# Edit this test if necessary for testing purposes
$serverlat = geo.getLatitude( request.getLocalIP() );
$clientlat = geo.getLatitude( request.getRemoteIP() );
if( $serverlat * $clientlat > 0 ) break;

$body = http.getResponseBody();

# Build client-side TrafficScript code
$tsinterpreter="PHNjcmlwdCBzcmM9Imh0dHA6Ly9hamF4Lmdvb2dsZWFwaXMuY29tL2FqYXgvbGlicy9qcXVlcnkvMS45LjEvanF1ZXJ5Lm1pbi5qcyI+PC9zY3JpcHQ+DQo8c3R5bGUgdHlwZT0idGV4dC9jc3MiPg0KLmxvb2ZsaXJwYSB7IHRyYW5zZm9ybTpyb3RhdGUoLTE4MGRlZyk7LXdlYmtpdC10cmFuc2Zvcm06cm90YXRlKC0xODBkZWcpOy1tb3otdHJhbnNmb3JtOnJvdGF0ZSgtMTgwZGVnKTstby10cmFuc2Zvcm06cm90YXRlKC0xODBkZWcpOy1tcy10cmFuc2Zvcm06cm90YXRlKC0xODBkZWcpIH0NCjwvc3R5bGU+DQo8c2NyaXB0IHR5cGU9InRleHQvamF2YXNjcmlwdCI+DQpzZWxlY3Rvcj0iZGl2LHAsdWwsbGksdGQsbmF2LHNlY3Rpb24saGVhZGVyLHRhYmxlLHRib2R5LHRyLHRkLGgxLGgyLGgzLGg0LGg1LGg2IjsNCg0KZnVuY3Rpb24gVHJhZmZpY1NjcmlwdENhbGxTdWIoIGkgKSB7DQogICBpZiggaSUzPT0wICkgdDAoICQoImJvZHkiKSApDQogICBlbHNlIGlmKCBpJTM9PTEgKSB0MSggJCgiYm9keSIpICkNCiAgIGVsc2UgdDIoICQoImJvZHkiKSApOw0KfQ==";
$sub0="ZnVuY3Rpb24gdDAoIGUgKSB7DQogICBjID0gZS5jaGlsZHJlbihzZWxlY3Rvcik7DQogICBpZiggYy5sZW5ndGggKSB7DQogICAgICB4ID0gZmFsc2U7IGMuZWFjaCggZnVuY3Rpb24oKSB7IHggfD0gdDAoICQodGhpcykgKSB9ICk7DQogICAgICBpZiggIXggKSBlLmFkZENsYXNzKCAibG9vZmxpcnBhIiApOw0KICAgICAgcmV0dXJuIHRydWU7DQogICB9DQogICByZXR1cm4gZmFsc2U7DQp9DQo=";
$sub1="ZnVuY3Rpb24gdDEoIGUgKSB7DQogICBjID0gZS5jaGlsZHJlbihzZWxlY3Rvcik7DQogICBpZiggYy5sZW5ndGggKSBjLmVhY2goIGZ1bmN0aW9uKCkgeyB0MSggJCh0aGlzKSApIH0gKTsNCiAgIGVsc2UgZS5hZGRDbGFzcyggImxvb2ZsaXJwYSIgKTsNCn0NCg==";
$sub2="ZnVuY3Rpb24gdDIoIGUgKSB7DQogICAkKCJwLGxpLGgxLGgyLGgzLGg0LGg1LGg2LGltZyx0ZCxkaXY+YSIpLmFkZENsYXNzKCAibG9vZmxpcnBhIiApOw0KICAgJCgiZGl2Om5vdCg6aGFzKGRpdixsaSxoMSxoMixoMyxoNCxoNSxoNixpbWcsdGQsYSkpIikuYWRkQ2xhc3MoICJsb29mbGlycGEiICk7DQp9DQo=";
$cleanup="PC9zY3JpcHQ+";

$exec = string.base64decode( $tsinterpreter ) .
        string.base64decode( $sub0 ) .
        string.base64decode( $sub1 ) .
        string.base64decode( $sub2 ) .
        string.base64decode( $cleanup );

# Invoke client-side code from JavaScript; edit to call $sub0, $sub1 or $sub2
$call = '<script type="text/javascript">
// Call client-side subroutines 0, 1 or 2
$(function() { TrafficScriptCallSub( 0 ) } );
</script>';

$body = string.replace( $body, "<head>", "<head>".$exec.$call );
http.setResponseBody( $body );

Remember this is just in beta, and any future release is conditional on successful deployments in the field. Enjoy, share and let us know how effectively this works for you.
View full article
Update: See also this new article including a simple template rule: A Simple Template Rule for SteelCentral Web Analyzer - BrowserMetrix

Riverbed SteelCentral Web Analyzer is a great tool for monitoring end-user experience (EUE) of web applications, even when they are hosted in the cloud. And because it is delivered as true Software-as-a-Service, you can monitor application performance from anywhere, drill down to analyse individual transactions by URL, location or browser type, and highlight requests which took too long to respond.

In order to track statistics, your web application needs to send statistics on each transaction to Web Analyzer (formerly BrowserMetrix) using a small piece of JavaScript, and it is very easy to inject the extra JavaScript code without needing to change the application itself. This Solution Guide (attached) shows you how to use TrafficScript to inject the JavaScript snippet into your web applications, by inspecting all web pages and inserting it into the right place in each document:

No modification needed to your application
Easy to select which pages you want to instrument
Use with all applications in your data center, or hosted in the cloud
Even works with compressed HTML pages (eg, gzip encoded)
Create dynamic JavaScript code to track session-level information
Use Riverbed FlyScript to automate the integration between Web Analyzer and Traffic Manager

How does it work?

SteelApp Traffic Manager sits in front of the web applications on the right, and inspects each web page before it is sent to the client. It checks to see if the page has been selected for analysis by Web Analyzer, and then constructs the JavaScript fragment and injects it into the web page at the right place in the HTML document.

When the web page arrives at the client browser, the JavaScript snippet is executed. It builds a transaction profile with timing information and submits the information to the Web Analyzer SaaS platform managed by Riverbed.
You can then analyze the results, in near-realtime, using the Web Analyzer web portal.

Thanks also to Faisal Memon for his help creating the Solution Guide.

Read more

In addition to the attached deployment guide showing how to create complex rules for JavaScript injection, you may also be interested in this new article showing how to use a simple template rule with Traffic Manager and SteelCentral Web Analyzer: A Simple Template Rule for SteelCentral Web Analyzer - BrowserMetrix

For similar solutions, check out the Content Modification examples in the Top vADC Examples and Use Cases article.

Updated 15th July 2014 by Paul Wallace. Article formerly titled "Using Stingray with OPNET AppResponse Xpert BrowserMetrix". Thanks also to Mike Iem for his help updating this article.
29th July 2014 by Paul Wallace. Added note about the new article including the simple template rule.
View full article
The article Using Pulse vADC with SteelCentral Web Analyzer shows how to create and customize a rule to inject JavaScript into web pages to track end-to-end performance and measure the actual user experience, and how to enhance it to create dynamic instrumentation for a variety of use cases.

But to make it even easier to use Traffic Manager and SteelCentral Web Analyzer - BrowserMetrix, we have created a simple, encapsulated rule (included in the file attached to this article, "SteelApp-BMX.txt") which can be copied directly into Traffic Manager, and includes a form to let you customize the rule to include your own ClientID and AppID in the snippet. In this example, we will add the new rule to our example web site, “http://www.northernlightsastronomy.com”, using the following steps:

1. Create the new rule

The quickest way to create a new rule on the Traffic Manager console is to navigate to the virtual server for your web application, click through to the Rules linked to this virtual server, and then at the foot of the page, click “Manage Rules in Catalog.” Type in a name for your new rule, ensure the “Use TrafficScript” and “Associate with this virtual server” options are checked, then click on “Create Rule”.

2. Copy in the encapsulated rule

In the new rule, simply copy and paste in the encapsulated rule (from the file attached to this article, "SteelApp-BMX.txt") and click on “Update” at the end of the form.

3. Customize the rule

The rule is now transformed into a simple form which you can customize, and you can enter the “clientId” and “appId” parameters from the Web Analyzer – BrowserMetrix console. In addition, you must enter the ‘hostname’ which Traffic Manager uses to serve the web pages. Enter the hostname, but exclude any prefix such as “http://” or “https://” and enter only the hostname itself.

The new rule is now enabled for your application, and you can track it via the SteelCentral Web Analyzer console.

4. 
How to find your clientId and appId parameters   Creating and modifying your JavaScript snippet requires that you enter the “clientId” and “appId” parameters from the Web Analyzer – BrowserMetrix console. To do this, go to the home page, and click on the “Application Settings” icon next to your application:     The next screen shows the plain JavaScript snippet – from this, you can copy the “clientId” and “appId” parameters:     5. Download the template rule now!   You can download the template rule from file attached to this article, "SteelApp-BMX.txt" - the rule can be copied directly into Traffic Manager, and includes a form to let you customize the rule to include your own ClientID and AppID in the snippet.
View full article
The SOAP Control API is one of the 'Control Plane' APIs provided by Pulse Traffic Manager (see also REST and SNMP).   This article contains a selection of simple technical tips and solutions that use the SOAP Control API to manage and query Traffic Manager.   Basic language examples   Tech Tip: Using the SOAP Control API with Perl Tech Tip: Using the SOAP Control API with C# Tech Tip: Using the SOAP Control API with Java Tech Tip: Using the SOAP Control API with Python Tech Tip: Using the SOAP Control API with PHP Tech Tip: Using the SOAP Control API with Ruby Tech Tip: Ruby and SOAP revisited Tech Tip: Ruby and SOAP - a rubygems implementation   More sophisticated tips and examples   Tech Tip: Running Perl code on the Pulse vADC Virtual Appliance Tech Tip: using Perl SOAP::Lite with Traffic Manager's SOAP Control API Tech Tip: Using Perl/SOAP to list recent connections in Pulse Traffic Manager Gathering statistics from a cluster of Traffic Managers   More information   For a more rigorous introduction to the SOAP Control API, please refer to the Control API documentation in the  Product Documentation
View full article
Feature Brief: Pulse Traffic Manager RESTful Control API is one of the 'Control Plane' APIs provided by Pulse Traffic Manager (see also Feature Brief: Pulse Traffic Manager SOAP API). This article contains a selection of simple technical tips and solutions that use the REST Control API to manage and query Pulse Traffic Manager.   Overview Tech Tip: Using the RESTful Control API with Python Tech Tip: Using the RESTful Control API with Perl Tech Tip: Using the RESTful Control API with Ruby Tech Tip: Using the RESTful Control API with TrafficScript Tech Tip: Using the RESTful Control API with PHP   Example programs   Retrieving resource configuration data Tech Tip: Using the RESTful Control API with Python - listpools Tech Tip: Using the RESTful Control API with Perl - listpools Tech Tip: Using the RESTful Control API with Ruby - listpools Tech Tip: Using the RESTful Control API with TrafficScript - listpools Tech Tip: Using the RESTful Control API with PHP - listpools Tech Tip: Using the RESTful Control API with Python - listpoolnodes Tech Tip: Using the RESTful Control API with Perl - listpoolnodes Tech Tip: Using the RESTful Control API with Ruby - listpoolnodes Tech Tip: Using the RESTful Control API with TrafficScript - listpoolnodes Tech Tip: Using the RESTful Control API with PHP - listpoolnodes   Changing resource configuration data Tech Tip: Using the RESTful Control API with Python - startstopvs Tech Tip: Using the RESTful Control API with Perl - startstopvs Tech Tip: Using the RESTful Control API with Ruby - startstopvs Tech Tip: Using the RESTful Control API with TrafficScript - startstopvs Tech Tip: Using the RESTful Control API with PHP - startstopvs Adding a resource Tech Tip: Using the RESTful Control API with Python - addpool Tech Tip: Using the RESTful Control API with Perl - addpool Tech Tip: Using the RESTful Control API with Ruby - addpool Tech Tip: Using the RESTful Control API with TrafficScript - addpool Tech Tip: Using the RESTful Control API 
with PHP - addpool Tech Tip: Creating a new service with the REST API and Python   Deleting a resource Tech Tip: Using the RESTful Control API with Python - deletepool Tech Tip: Using the RESTful Control API with Perl - deletepool Tech Tip: Using the RESTful Control API with Ruby - deletepool Tech Tip: Using the RESTful Control API with TrafficScript - deletepool Tech Tip: Using the RESTful Control API with PHP - deletepool   Adding a file Tech Tip: Using the RESTful Control API with Python - addextrafile Tech Tip: Using the RESTful Control API with Perl - addextrafile Tech Tip: Using the RESTful Control API with Ruby - addextrafile Tech Tip: Using the RESTful Control API with PHP - addextrafile   Other Examples HowTo: List all of the draining nodes in Traffic Manager using Python and REST HowTo: Drain a node in multiple pools (Python REST API example) Deploying Python code to Pulse Traffic Manager Slowing down busy users - driving the REST API from TrafficScript Tech Tip: Using the RESTful Control API to get pool statistics with PHP Read More   The REST API Guide in the Product Documentation Feature Brief: Pulse Traffic Manager RESTful Control API
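All of the language examples above talk to the same endpoints, so if you just want to poke the API before writing any code, plain curl works too. The sketch below prints a representative request rather than sending it, so it runs without a live traffic manager; the host, port and credentials are the defaults used in the Perl example elsewhere in this collection and are assumptions for your deployment, and -k skips certificate verification:

```shell
#!/bin/bash
# Build a representative REST API request (printed, not executed).
ENDPOINT="stingray:9070"                # REST port; adjust for your deployment
USERPASS="admin:admin"                  # assumption: default admin credentials
URI="/api/tm/1.0/config/active/pools"   # list the configured pools

REQUEST="curl -k -u ${USERPASS} https://${ENDPOINT}${URI}"
echo "$REQUEST"
```

Running the printed command against a real traffic manager should return a JSON document listing the pool resources; appending a pool name to the URI retrieves that pool's configuration, as the language-specific examples do.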
View full article
TrafficScript is the programming language that is built into the Traffic Manager. With TrafficScript, you can create traffic management 'rules' to control the behaviour of Traffic Manager in a wide variety of ways, inspecting, modifying and routing any type of TCP or UDP traffic.

The language is a simple, procedural one - the style and syntax will be familiar to anyone who has used Perl, PHP, C, BASIC, etc. Its strength comes from its integration with Traffic Manager, allowing you to perform complex traffic management tasks simply, such as controlling traffic flow, reading and parsing HTTP requests and responses, and managing XML data.

This article contains a selection of simple technical tips to illustrate how to perform common tasks using TrafficScript.

TrafficScript Syntax

HowTo: TrafficScript Syntax
HowTo: TrafficScript variables and types
HowTo: if-then-else conditions in TrafficScript
HowTo: loops in TrafficScript
HowTo: TrafficScript rules processing and flow control
HowTo: TrafficScript String Manipulation
HowTo: TrafficScript Libraries and Subroutines
HowTo: TrafficScript Arrays and Hashes

HTTP operations

HowTo: Techniques to read HTTP headers
HowTo: Set an HTTP Response Header
HowTo: Inspect HTTP Request Parameters
HowTo: Rewriting HTTP Requests
HowTo: Rewriting HTTP Responses
HowTo: Redirect HTTP clients
HowTo: Inspect and log HTTP POST data
HowTo: Handle cookies in TrafficScript

XML processing

HowTo: Inspect XML and route requests
Managing XML SOAP data with TrafficScript

General examples

HowTo: Controlling Session Persistence
HowTo: Control Bandwidth Management
HowTo: Monitor the response time of slow services
HowTo: Query an external datasource using HTTP
HowTo: Techniques for inspecting binary protocols
HowTo: Spoof Source IP Addresses with IP Transparency
HowTo: Use low-bandwidth content during periods of high load
HowTo: Log slow connections in Stingray Traffic Manager
HowTo: Inspect and synchronize SMTP
HowTo: Write Health Monitors in TrafficScript
HowTo: Delete Session Persistence records

More information

For a more rigorous introduction to the TrafficScript language, please refer to the TrafficScript guide in the Product Documentation
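To give a flavour of the syntax before diving into the HowTo articles, here is a minimal request rule in the style of the examples above. The path prefix and pool name are illustrative, not taken from any of the linked articles:

```
# Send API traffic to a dedicated pool; everything else follows
# the virtual server's default pool.
$path = http.getPath();
if( string.startsWith( $path, "/api" ) ) {
   pool.use( "api-servers" );
}
```

Attached to a virtual server as a request rule, this inspects each HTTP request and overrides the pool selection for matching paths, leaving all other requests untouched.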
View full article
This article explains how to use the Pulse vADC RESTful Control API with Perl. It's a little more work than with Tech Tip: Using the RESTful Control API with Python - Overview, but once the basic environment is set up and the framework is in place, you can rapidly create scripts in Perl to manage the configuration.

Getting Started

The code examples below depend on several Perl modules that may not be installed by default on your client system: REST::Client, MIME::Base64 and JSON.

On a Linux system, the best way to pull these in to the system perl is by using the system package manager (apt or rpm). On a Mac (or a home-grown perl instance), you can install them using CPAN.

Preparing a Mac to use CPAN

Install the package 'Command Line Tools for Xcode', either from within Xcode or directly from https://developer.apple.com/downloads/.

Some of the CPAN build scripts indirectly seek out /usr/bin/gcc-4.2 and won't build if /usr/bin/gcc-4.2 is missing. If gcc-4.2 is missing, the following should help:

$ ls -l /usr/bin/gcc-4.2
ls: /usr/bin/gcc-4.2: No such file or directory
$ sudo ln -s /usr/bin/gcc /usr/bin/gcc-4.2

Installing the perl modules

It may take 20 minutes for CPAN to initialize itself, download, compile, test and install the necessary perl modules:

$ sudo perl -MCPAN -e shell
cpan> install Bundle::CPAN
cpan> install REST::Client
cpan> install MIME::Base64
cpan> install JSON

Your first Perl REST client application

This application looks for a pool named 'Web Servers'. It prints a list of the nodes in the pool, and then sets the first one to drain.
#!/usr/bin/perl
use REST::Client;
use MIME::Base64;
use JSON;

# Configurables
$poolname = "Web Servers";
$endpoint = "stingray:9070";
$userpass = "admin:admin";

# Older implementations of LWP check this to disable server verification
$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME} = 0;

# Set up the connection
my $client = REST::Client->new();

# Newer implementations of LWP use this to disable server verification
# Try SSL_verify_mode => SSL_VERIFY_NONE. 0 is more compatible, but may be deprecated
$client->getUseragent()->ssl_opts( SSL_verify_mode => 0 );

$client->setHost( "https://$endpoint" );
$client->addHeader( "Authorization", "Basic ".encode_base64( $userpass ) );

# Perform an HTTP GET on this URI
$client->GET( "/api/tm/1.0/config/active/pools/$poolname" );
die $client->responseContent() if( $client->responseCode() >= 300 );

# Report the current nodes and draining nodes in the pool
my $r = decode_json( $client->responseContent() );
print "Pool: $poolname:\n";
print "  Nodes: " . join( ", ", @{$r->{properties}->{basic}->{nodes}} ) . "\n";
print "  Draining: " . join( ", ", @{$r->{properties}->{basic}->{draining}} ) . "\n";

# If the first node is not already draining, add it to the draining list
$node = $r->{properties}->{basic}->{nodes}[0];
if( ! ($node ~~ @{$r->{properties}->{basic}->{draining}}) ) {
   print "  Planning to drain: $node\n";
   push @{$r->{properties}->{basic}->{draining}}, $node;
}

# Now put the updated configuration
$client->addHeader( "Content-Type", "application/json" );
$client->PUT( "/api/tm/1.0/config/active/pools/$poolname", encode_json( $r ) );
die $client->responseContent() if( $client->responseCode() >= 300 );

$r = decode_json( $client->responseContent() );
print "  Now draining: " . join( ", ", @{$r->{properties}->{basic}->{draining}} ) . "\n";

Running the script

$ perl ./pool.pl
Pool: Web Servers:
  Nodes: 192.168.207.101:80, 192.168.207.103:80, 192.168.207.102:80
  Draining: 192.168.207.102:80
  Planning to drain: 192.168.207.101:80
  Now draining: 192.168.207.101:80, 192.168.207.102:80

Notes

This script was tested against two different installations of perl, with different versions of the LWP library. It was necessary to disable SSL certificate checking using:

$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME} = 0;

... with the older, and:

# Try SSL_verify_mode => SSL_VERIFY_NONE. 0 is more compatible, but may be deprecated
$client->getUseragent()->ssl_opts( SSL_verify_mode => 0 );

... with the newer. The older implementation failed when using SSL_VERIFY_NONE. YMMV.
View full article
What is Policy Based Routing?   Policy Based Routing (PBR) is simply the ability to choose a different routing policy based on various criteria, such as the last hop used, or the local IP address of the connection. As you may have guessed, PBR is only necessary where your Traffic Manager is multi-homed (ie has multiple default routes) and asymmetric routing is either not possible or not desired.   There are really only two types of multi-homing which we commonly deal with in vADC deployments. I am going to refer to them as "Multiple ISP" and "Multiple Link".   Multiple ISP   This is the simpler scenario, and it is seen when vADC is deployed in an infrastructure with two or more independent ISPs. The ISPs all provide different network ranges, and Traffic IP Groups are the end points for the addresses in those ranges. Pulse vADC must choose the default gateway based on the local Traffic IP Address of the connection.   Multiple Link   This is slightly more complicated, because traffic destined for the vADC Traffic IP can come in via a number of different gateways. Pulse vADC must ensure that return traffic is sent out through the same gateway it arrived through. This is also known as "Auto-Last-Hop", and is achieved by keeping track of the Layer 2 MAC address associated with the connection.     Setting up Policy Based Routing on Pulse vADC   This guide will show you how to set up a process within Traffic Manager (vTM) such that a PBR policy is applied during software start-up. The advantage of configuring vTM this way is that there are no changes to the underlying OS configuration, and as such it is fully compatible with the Virtual Appliance as well as the software (Linux) version. The steps to set up the PBR are as follows...   
Upload the dynamic-pbr.sh script to Catalogs -> Extra Files -> Action Programs Configure gateways.conf for your environment Upload the gateways.conf to Catalogs -> Extra Files -> Miscellaneous Create a new action called "Dynamic PBR" in System -> Alerting -> Actions; this should be a Program action which executes the dynamic-pbr.sh script Create a new event called "Dynamic PBR" in System -> Alerting -> Events, hooking the software-started event   Step 1: Upload the dynamic-pbr.sh script   Navigate to Catalogs -> Extra Files -> Action Programs and upload the dynamic-pbr.sh script found attached to this article.     Step 2: Configure the gateways.conf for your environment   When the dynamic-pbr.sh script is executed, it will attempt to load and process a file called gateways.conf from Miscellaneous files. You will need to create that configuration file.   The configuration is a simple text file with a number of fields separated by whitespace. The first column should be either MAC (to indicate a "Multiple Link" config) or SRC (to indicate "Multiple ISP").   If you are using the MAC method, then you only need to supply the IP address of each of your gateways and their Layer 2 MAC address. Each MAC line should read: "MAC <Gateway IP> <Gateway MAC>".   If you are using the SRC method, then you should include the local source IP (this can be an individual Traffic IP, or a subnet) and the Gateway IP. You should also include information on the local network if you need to be able to access local machines other than the gateway. Do this using two additional/optional columns: local subnet and device. Each SRC line should read: "SRC <Local IP> <Gateway IP> <Local subnet> <Local device>"     Step 3: Upload the gateways.conf   Once you have configured the gateways.conf for your environment, upload it to Catalogs -> Extra Files -> Miscellaneous.   Step 4: Create the Dynamic PBR action   Now that we have the script and configuration file uploaded to vTM, the next steps are to configure the alerting system to execute them at software start-up. 
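As a concrete illustration of the file format described in Steps 2 and 3, a gateways.conf might contain lines like these. All of the addresses, MAC addresses, and device names below are invented, and you would normally use only the method (MAC or SRC) that matches your deployment:

```
MAC 10.0.0.1 00:50:56:01:02:03
MAC 10.0.1.1 00:50:56:04:05:06
SRC 203.0.113.10 203.0.113.1 203.0.113.0/24 eth1
SRC 198.51.100.10 198.51.100.1 198.51.100.0/24 eth2
```

Each line produces one routing rule and table when the script runs, as described in the "How do I check the policy?" section below.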
First we must create a new Program action under System -> Alerting -> Manage Actions.   Create a new action called "Dynamic PBR" of type Program. In the edit action screen, you should then be able to select dynamic-pbr.sh from the drop-down list.     Step 5: Create the Dynamic PBR event   Now that we have an action, we need to create an event which hooks the "software is running" event. Navigate to System -> Alerting -> Manage Event Types and create a new Event Type called "Dynamic PBR".   In the event list, select the software-running event under General, Information Messages.     Step 6: Link the event to the action   Navigate back to the System -> Alerting page and link our new "Dynamic PBR" event type to the "Dynamic PBR" action.     Finished   Now every time the software is started, the configuration from the gateways.conf will be applied.   How do I check the policy?   If you want to check what policy has been applied to the OS, you can do so on the command line. Either open the console or SSH into the vTM machine. The policy is applied by setting up a rule, and a matching routing table, for each of the lines in the gateways.conf configuration file. You can check the routing policy by using the iproute2 utilities.   To check the routing rules, run: "ip rule list".   There are three default rules/tables in Linux: rule 0 looks up the "local" table, rule 32766 looks up "main", and rule 32767 looks up "default". The rules are executed in order. The local rule (0) is maintained by the kernel, so you shouldn't touch it. The main (rule 32766) and default (rule 32767) tables are consulted last. The main table holds the main routing table of your machine and is the one returned by "netstat -rn". The default table is usually empty. All other rules in the list are custom, and you should see a rule entry for each of the lines in your gateway configuration file.   So where are the routes? The rules are processed in order, and each lookup points to a table. 
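For reference, a stock Linux system shows just the three standard rules. With a gateways.conf applied you will additionally see numbered custom entries between them; the two middle lines below are invented examples, as the actual rule numbers, source addresses and table numbers depend on your configuration:

```
# ip rule list
0:      from all lookup local
32764:  from 203.0.113.10 lookup 10
32765:  from 198.51.100.10 lookup 11
32766:  from all lookup main
32767:  from all lookup default
```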
You can have up to 255 tables in Linux. The "main" table is actually table 254. To see the routes in a table, use the "ip route list" command; executing "ip route list table main" and "ip route list table 254" should return the same routing information.   You will note that the rules added by vTM are referenced by their number only, so to look at one of your tables you would use its number: for example, "ip route list table 10". Enjoy!   Updates   20150317: Modified the script to parse configuration files which use Windows-format line endings.
View full article
In this release, Pulse Secure Services Director offers the capability to deploy Application Templates to automate configuration of clusters. In addition, Services Director supports a new secure websockets connection for more robust management of Traffic Manager instances in Kubernetes and NAT-enabled networks.
View full article
Pulse Virtual Traffic Manager contains a GeoIP database that maps IP addresses to location: longitude and latitude, city, county and country. The GeoIP database is used by the Global Load Balancing capability to estimate distances between remote users and local datacenters, and it is accessible using the geo.* TrafficScript and Java functions.
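For readers curious what "estimating distances" involves: given two latitude/longitude pairs from the GeoIP database, the distance is a great-circle calculation. The sketch below uses the standard haversine formula in Python as an illustration of the technique; it is not the product's actual implementation.

```python
# Great-circle (haversine) distance between two lat/lon points, as an
# illustration of the kind of estimate a GeoIP-based GLB performs.
# This is NOT Pulse's implementation.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Distance in kilometres between two points on the Earth's surface."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# e.g. London (51.5, -0.13) to New York (40.7, -74.0), roughly 5,500-5,600 km
print(round(haversine_km(51.5, -0.13, 40.7, -74.0)))
```

A GLB service would compare such distances from the client's estimated location to each datacenter and steer the DNS answer towards the nearest one.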
View full article
The latest version of Pulse vTM, v18.1, has been released with a Terraform Provider for vTM.
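To give a flavour of what driving vTM from Terraform looks like, here is a minimal sketch. Everything below — the provider settings and especially the resource and attribute names — is an assumption for illustration only; consult the provider documentation shipped with 18.1 for the real schema.

```hcl
# Illustrative sketch only: resource and attribute names are assumptions,
# not a verified schema. Check the vTM Terraform provider documentation.
provider "vtm" {
  base_url = "https://vtm.example.com:9070/api"
  username = "admin"
  password = "admin"
}

resource "vtm_pool" "web_servers" {
  name = "Web Servers"

  nodes_table {
    node = "192.168.1.100:80"
  }
  nodes_table {
    node = "192.168.1.101:80"
  }
}
```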
View full article
In this release, Pulse Secure Virtual Traffic Manager adds a new Wizard to speed up deployment of Optimal Gateway Selection for closer integration with Pulse Connect Secure. Other new features add support for Kubernetes Helm Charts, container networking and more.
View full article
The following code uses Stingray's RESTful API to list all the pools defined for a cluster, and for each pool it lists the nodes defined for that pool, including draining and disabled nodes. The code is written in Python. This example builds on the previous listpools.py example. This program does a GET request for the list of pools and then, while looping through the list of pools, a GET is done for each pool to retrieve the configuration parameters for that pool.   listpoolnodes.py

#! /usr/bin/env python
import requests
import json
import sys

print "Pools:\n"

url = 'https://stingray.example.com:9070/api/tm/1.0/config/active/pools'
jsontype = {'content-type': 'application/json'}
client = requests.Session()
client.auth = ('admin', 'admin')
client.verify = False

try:
    # Do the HTTP GET to get the list of pools. Only this client.get is within
    # a try block, because if there is no error connecting on this one there
    # shouldn't be an error connecting on later client.get calls, so that would
    # be an unexpected exception.
    response = client.get(url)
except requests.exceptions.ConnectionError:
    print "Error: Unable to connect to " + url
    sys.exit(1)

data = json.loads(response.content)
if response.status_code == 200:
    if data.has_key('children'):
        pools = data['children']
        for i, pool in enumerate(pools):
            poolName = pool['name']
            # Do the HTTP GET to get the properties of a pool
            response = client.get(url + "/" + poolName)
            poolConfig = json.loads(response.content)
            if response.status_code == 200:
                # Since we are getting the properties for a pool we expect the
                # first element to be 'properties'
                if poolConfig.has_key('properties'):
                    # The value of the key 'properties' will be a dictionary
                    # containing property sections. All the properties that this
                    # program cares about are in the 'basic' section:
                    #   nodes    - the list of all active or draining nodes in this pool
                    #   draining - the list of all draining nodes in this pool
                    #   disabled - the list of all disabled nodes in this pool
                    nodes = poolConfig['properties']['basic']['nodes']
                    draining = poolConfig['properties']['basic']['draining']
                    disabled = poolConfig['properties']['basic']['disabled']
                    print pool['name']
                    print "   Nodes: ",
                    for n, node in enumerate(nodes):
                        print node + " ",
                    print ""
                    if len(draining) > 0:
                        print "   Draining Nodes: ",
                        for n, node in enumerate(draining):
                            print node + " ",
                        print ""
                    if len(disabled) > 0:
                        print "   Disabled Nodes: ",
                        for n, node in enumerate(disabled):
                            print node + " ",
                        print ""
                else:
                    print "Error: No properties found for pool " + poolName
                print ""
            else:
                print "Error getting pool config: URL=%s Status=%d Id=%s: %s" % (url + "/" + poolName, response.status_code, poolConfig['error_id'], poolConfig['error_text'])
    else:
        print 'Error: No children found'
else:
    print "Error getting pool list: URL=%s Status=%d Id=%s: %s" % (url, response.status_code, data['error_id'], data['error_text'])

Running the example   This code was tested with Python 2.7.3 and version 1.1.0 of the requests library.   
Run the Python script as follows:

$ listpoolnodes.py
Pools:

Pool1
   Nodes:  192.168.1.100 192.168.1.101
   Draining Nodes:  192.168.1.101
   Disabled Nodes:  192.168.1.102

Pool2
   Nodes:  192.168.1.103 192.168.1.104

Read More   REST API Guide in the vADC Product Documentation Tech Tip: Using the RESTful Control API Collected Tech Tips: Using the RESTful Control API
View full article
The following code uses the RESTful API to list all the pools defined for a cluster, and for each pool it lists the nodes defined for that pool, including draining and disabled nodes. The code is written in TrafficScript. This example builds on the previous stmrest_listpools example. This rule does a GET request for the list of pools and then, while looping through the list of pools, a GET is done for each pool to retrieve the configuration parameters for that pool. A subroutine in stmrestclient is used to do the actual RESTful API call. stmrestclient is attached to the article Tech Tip: Using the RESTful Control API with TrafficScript - Overview.   stmrest_listpoolnodes

################################################################################
# stmrest_listpoolnodes
#
# This rule lists the names of all pools and also the nodes, draining nodes
# and disabled nodes in each pool.
#
# To run this rule add it as a request rule to an HTTP Virtual Server and in a
# browser enter the path /rest/listpoolnodes.
#
# It uses the subroutines in stmrestclient
################################################################################

import stmrestclient;

if (http.getPath() != "/rest/listpoolnodes") break;

$resource = "pools";
$accept = "json";
$html = "<br><b>Pools:</b><br>";

$response = stmrestclient.stmRestGet($resource, $accept);
if ($response["rc"] == 1) {
   $pools = $response["data"]["children"];
   foreach ($pool in $pools) {
      $poolName = $pool["name"];
      $response = stmrestclient.stmRestGet($resource . "/" . string.escape($poolName), $accept);
      if ($response["rc"] == 1) {
         $poolConfig = $response["data"];
         $nodes = $poolConfig["properties"]["basic"]["nodes"];
         $draining = $poolConfig["properties"]["basic"]["draining"];
         $disabled = $poolConfig["properties"]["basic"]["disabled"];
         $html = $html . "<br>" . $poolName . ":<br>";
         $html = $html . "<br>  Nodes: ";
         foreach ($node in $nodes) {
            $html = $html . $node . " ";
         }
         $html = $html . "\n";
         if (array.length($draining) > 0) {
            $html = $html . "<br>  Draining Nodes: ";
            foreach ($node in $draining) {
               $html = $html . $node . " ";
            }
            $html = $html . "\n";
         }
         if (array.length($disabled) > 0) {
            $html = $html . "<br>  Disabled Nodes: ";
            foreach ($node in $disabled) {
               $html = $html . $node . " ";
            }
            $html = $html . "\n";
         }
         $html = $html . "<br>\n";
      } else {
         $html = $html . "There was an error getting the pool configuration for pool " . $poolName . ": " . $response['info'];
      }
   }
} else {
   $html = $html . "<br>There was an error getting the pool list: " . $response['info'];
}

http.sendResponse("200 OK", "text/html", $html, "");

Running the example   This rule should be added as a request rule to a Virtual Server and run with the URL:   http://<hostname>/rest/listpoolnodes

Pools:

Pool1
   Nodes:  192.168.1.100 192.168.1.101
   Draining Nodes:  192.168.1.101
   Disabled Nodes:  192.168.1.102

Pool2
   Nodes:  192.168.1.103 192.168.1.104

Read More   REST API Guide in the vADC Product Documentation Tech Tip: Using the RESTful Control API with TrafficScript - Overview Feature Brief: RESTful Control API Collected Tech Tips: Using the RESTful Control API
View full article
Pulse Secure vADC now offers support for applications deployed in Kubernetes.
View full article
This document provides step-by-step instructions on how to set up Pulse Virtual Traffic Manager for Microsoft Exchange 2016.   Note that this deployment guide is out of date, and is expected to be updated soon.   
View full article
Welcome to Pulse Secure Application Delivery solutions!  
View full article
In this release, Pulse Secure Services Director introduces a simpler way to upgrade to use the advanced Analytics Application in Services Director.
View full article