Pulse Secure vADC

Looking for installation and user guides for Pulse vADC? User documentation is no longer included in the Pulse vTM software download package; it can now be found on the Pulse Techpubs pages.
View full article
Traffic Manager logs activity (hits per minute and data transferred) for Virtual Servers, Pools and Nodes. You can view the logged data on the Activity -> Historical Activity page in the Admin interface. However, sometimes it is useful to extract these graphs and archive them. The Admin server doesn't provide a facility to do this, so this article presents a simple Perl script that generates these graphs.

Using the Script
Download the script attached to this article and copy it on to one of your Traffic Managers (it must be run from a Traffic Manager host or virtual appliance). Ensure that the script is executable.

Running it with no arguments will produce the help output:

Usage: ./getHistoricalActivity.pl --output=image.png [OPTIONS]
Generates a graph of historical activity. If given no arguments, will default to graphing all virtual servers for 24 hours
Required arguments:
  --output=FILE        File to output image to ('-' for stdout)
Optional arguments:
  --time=HOURS         Number of hours to graph (default 24)
  --width=WIDTH        Width of output graph (default 640)
  --height=HEIGHT      Height of output graph (default 480)
  --allvs              Graph all Virtual Servers (default)
  --allpools           Graph all pools
  --allnodes           Graph all nodes
  --vs=VS1,VS2         Graph a subset of virtual servers
  --pool=POOL1,POOL2   Graph a subset of pools
  --node=NODE1,NODE2   Graph a subset of nodes
  --linear             Graph using a linear axis (default)
  --logarithmic        Graph using a logarithmic axis
  --hits               Graph hits per minute (default)
  --bytes              Graph bytes per second
  --help               Show this help message

Running it with just the --output argument will generate an image like the following. The line colours and the virtual server names are output to STDOUT:

# ./foo.pl --output=foo.png
Line colours:
00cfff: apache
ffe626: Zeus AFM Integration
9a41d8: Web Site
00d840: SSL Passthrough
ff4ca8: soundex
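For archiving, the documented options can be combined into a cron-friendly invocation. The sketch below is only an example - the pool names and output path are placeholders, but every flag is taken from the help output above:

```bash
# Graph two pools over the last 48 hours, bytes/sec on a logarithmic axis,
# writing a dated PNG suitable for an archive directory
./getHistoricalActivity.pl --output=/var/archive/pools-$(date +%Y%m%d).png \
    --pool=WebPool,APIPool --time=48 --bytes --logarithmic \
    --width=1024 --height=768
```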
View full article
Top examples of Pulse vADC in action

Examples of how SteelApp can be deployed to address a range of application delivery challenges.

Modifying Content
- Simple web page changes - updating a copyright date
- Adding meta-tags to a website with Traffic Manager
- Tracking user activity with Google Analytics and Google Analytics revisited
- Embedding RSS data into web content using Traffic Manager
- Add a Countdown Timer
- Using TrafficScript to add a Twitter feed to your web site
- Embedded Twitter Timeline
- Embedded Google Maps
- Watermarking PDF documents with Traffic Manager and Java Extensions
- Watermarking Images with Traffic Manager and Java Extensions
- Watermarking web content with Pulse vADC and TrafficScript

Prioritizing Traffic
- Evaluating and Prioritizing Traffic with Traffic Manager
- HowTo: Control Bandwidth Management
- Detecting and Managing Abusive Referers
- Using Pulse vADC to Catch Spiders
- Dynamic rate shaping slow applications
- Stop hot-linking and bandwidth theft!
- Slowing down busy users - driving the REST API from TrafficScript

Performance Optimization
- Cache your website - just for one second?
- HowTo: Monitor the response time of slow services
- HowTo: Use low-bandwidth content during periods of high load

Fixing Application Problems
- No more 404 Not Found...?
- Hiding Application Errors
- Sending custom error pages

Compliance Problems
- Satisfying EU cookie regulations using The cookiesDirective.js and TrafficScript

Security problems
- The "Contact Us" attack against mail servers
- Protecting against Java and PHP floating point bugs
- Managing DDoS attacks with Traffic Manager
- Enhanced anti-DDoS using TrafficScript, Event Handlers and iptables
- How to stop 'login abuse', using TrafficScript
- Bind9 Exploit in the Wild...
- Protecting against the range header denial-of-service in Apache HTTPD
- Checking IP addresses against a DNS blacklist with Traffic Manager
- Heartbleed: Using TrafficScript to detect TLS heartbeat records
- TrafficScript rule to protect against "Shellshock" bash vulnerability (CVE-2014-6271)
- SAML 2.0 Protocol Validation with TrafficScript
- Disabling SSL v3.0 for SteelApp

Infrastructure
- Transparent Load Balancing with Traffic Manager
- HowTo: Launch a website at 5am
- Using Stingray Traffic Manager as a Forward Proxy
- Tunnelling multiple protocols through the same port
- AutoScaling Docker applications with Traffic Manager
- Elastic Application Delivery - Demo
- How to deploy Traffic Manager Cluster in AWS VPC

Other solutions
- Building a load-balancing MySQL proxy with TrafficScript
- Serving Web Content from Traffic Manager using Python and Serving Web Content from Traffic Manager using Java
- Virtual Hosting FTP services
- Managing WebSockets traffic with Traffic Manager
- TrafficScript can Tweet Too
- Instrument web content with Traffic Manager
- Antivirus Protection for Web Applications
- Generating Mandelbrot sets using TrafficScript
- Content Optimization across Equatorial Boundaries
View full article
With the evolution of social media as a tool for marketing and current events, we commonly see the Twitter feed updated long before the website. It’s not surprising for people to rely on these outlets for information.

Fortunately, Twitter provides a suite of widgets and scripting tools to integrate Twitter information into your application. The tools available can be implemented with few code changes and support many applications. Unfortunately, the very reason a website is not as fresh as social media is often the code changes required. The code could be owned by different people in the organization, or you may have limited access to the code due to security restrictions or the CMS environment. Traffic Manager provides the ability to insert the required code into your site with no changes to the application.

Twitter Overview
"Embeddable timelines make it easy to syndicate any public Twitter timeline to your website with one line of code. Create an embedded timeline from your widgets settings page on twitter.com, or choose “Embed this…” from the options menu on profile, search and collection pages.

Just like timelines on twitter.com, embeddable timelines are interactive and enable your visitors to reply, Retweet, and favorite Tweets directly from your pages. Users can expand Tweets to see Cards inline, as well as Retweet and favorite counts. An integrated Tweet box encourages users to respond or start new conversations, and the option to auto-expand media brings photos front and center.

These new timeline tools are built specifically for the web, mobile web, and touch devices. They load fast, scale with your traffic, and update in real-time." -twitter.com

Thank you Faisal Memon for the original article Using TrafficScript to add a Twitter feed to your web site.

As happens more often than not, platform access changes; this time Twitter is our prime example. When loading the Twitter JavaScript, http://widgets.twimg.com/j/2/widget.js, you can see the following notice:

The Twitter API v1.0 is deprecated, and this widget has ceased functioning.","You can replace it with a new, upgraded widget from <https://twitter.com/settings/widgets/new/"+H+">","For more information on alternative Twitter tools, see <https://dev.twitter.com/docs/twitter-for-websites>

To save you some time: Twitter really does mean deprecated, and the information link is broken. For more information on alternative Twitter tools, see Twitter for Websites | Home. For information related to this article, please see Embedded Timelines | Home.

One of the biggest changes in the current Twitter platform is the requirement for a "data-widget-id". The data-widget-id is unique, and is used by the Twitter platform to provide the information needed to generate the timeline. Before getting started with Traffic Manager and the web application, you will have to create a new widget using your Twitter account at https://twitter.com/settings/widgets/new/. Once you create your widget, you will see the "Copy and paste the code into the HTML of your site." section on the Twitter website. Along with other information, this code contains your "data-widget-id". See the Create widget image.

Create widget

This example uses a TrafficScript response rule to rewrite the HTTP body from the application. Specifically, I know the body for my application includes an HTML comment <!--SIDEBAR-->. This rule will insert the required client-side code into the HTTP body and send the updated body to the client to complete the request.
The $inserttag variable can be just about anything in the body itself, e.g. the "MORE LIKE THIS" text on the side of this page. Simply change the code below to:

$inserttag = "MORE LIKE THIS";

Some of the values used in the example (width, data-theme, data-link-color, data-tweet-limit) are not required; they have been included to demonstrate customization. When you create/save the widget on the Twitter website, the configuration options (see the Create widget image above) are associated with the "data-widget-id". Take "data-theme" as an example: if you saved the widget with the light theme and you want the light theme, it can be excluded. Alternatively, if you saved the widget with light, you can use "data-theme=dark" to override the value saved with the widget. In the example timeline picture, the data-link-color value is used to override the value provided with the saved "data-widget-id".

Example response rule, line-spaced for readability and using variables for easy customization:

# Only modify text/html pages
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" )) break;

$inserttag = "<!--SIDEBAR-->";

# create a widget ID @ https://twitter.com/settings/widgets/new
# This is the id used by riverbed.com
$ttimelinedataid = "261517019072040960";
$ttimelinewidth = "520";        # max could be limited by ID config.
$ttimelineheight = "420";
$ttimelinelinkcolor = "#0080ff"; # 0 for default or ID config, #0080ff & #0099cc are nice
$ttimelinetheme = "dark";        # "light" or "dark"
$ttimelinelimit = "0";           # 0 = unlimited with scroll. >=1 will ignore height.
# See https://dev.twitter.com/web/embedded-timelines#customization for other options.

$ttimelinehtml = "<a class=\"twitter-timeline\" " .
                 "width=\"" . $ttimelinewidth . "" .
                 "\" height=\"" . $ttimelineheight . "" .
                 "\" data-theme=\"" . $ttimelinetheme . "" .
                 "\" data-link-color=\"" . $ttimelinelinkcolor . "" .
                 "\" data-tweet-limit=\"" . $ttimelinelimit . "" .
                 "\" data-widget-id=\"" . $ttimelinedataid . "" .
                 "\"></a><script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)" .
                 "[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id))" .
                 "{js=d.createElement(s);js.id=id;js.src=p+" .
                 "\"://platform.twitter.com/widgets.js\";fjs.parentNode.insertBefore(js," .
                 "fjs);}}(document,\"script\",\"twitter-wjs\");" .
                 "</script><br>" . $inserttag . "";

$body = http.getResponseBody();
$body = string.replace( $body, $inserttag, $ttimelinehtml );
http.setResponseBody( $body );

A short version of the rule above, still with line breaks for readability:

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" )) break;

http.setResponseBody( string.replace( http.getResponseBody(), "<!--SIDEBAR-->",
  "<a class=\"twitter-timeline\" width=\"520\" height=\"420\" data-theme=\"dark\" " .
  "data-link-color=\"#0080ff\" data-tweet-limit=\"0\" data-widget-id=\"261517019072040960\">" .
  "</a><script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test" .
  "(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;" .
  "js.src=p+\"://platform.twitter.com/widgets.js\";fjs.parentNode.insertBefore(js,fjs);}}" .
  "(document,\"script\",\"twitter-wjs\");</script><br><!--SIDEBAR-->" ));

Result from either rule: the Twitter timeline is rendered in place of the <!--SIDEBAR--> comment.
View full article
In October 2014, Google published details of a vulnerability in the SSL 3.0 protocol - named "POODLE" - which makes it possible for an attacker to decrypt messages between client and server in some circumstances. Because this is a problem with the protocol itself, rather than with a specific implementation of the protocol, any client-server transaction which supports SSL 3.0 is at risk. Even if the client and server support higher levels of security (such as TLS 1.2), it is possible for an attacker to force a downgrade to SSL 3.0 using a man-in-the-middle attack - which means that systems should disable SSL 3.0 to protect against this kind of attack, and use more recent handshake protocols such as TLS.

How to Disable SSL 3.0 Completely

With Traffic Manager, it is easy to disable SSL 3.0 completely from the system console. Navigate to System->Global settings->SSL Configuration, where you can control how Traffic Manager manages SSL transactions.

How to Trap SSL Requests

So we can disable SSL 3.0 completely, but some browsers will show an unhelpful error message: ideally, we would provide some extra feedback to the user to show what the problem is, and how to resolve it. Attach this TrafficScript rule to your virtual server: if you leave SSL 3.0 enabled, this rule permits any transaction using TLS, but traps SSL requests and returns a custom error message to the user:

$cipher = ssl.clientCipher();
if (string.len($cipher) > 0) {
   if (string.contains($cipher, "version=TLS")) {
      # this is the good case, incrementing the user SNMP counter
      counter64.increment(1,1);
      break;
   } else {
      # logic for the SSL (insecure) cases
      counter64.increment(2,1); # increment a counter for bad cases
      event.emit("ssl request", "IP: ".request.getRemoteIP()." User-agent: ".http.getHeader("User-Agent"));
      http.sendResponse( "400 Bad request", "text/plain",
         "This service requires TLS security, and is using SSL security. \
          Please verify your SSL/TLS settings and try again", "" );
   }
}

This TrafficScript rule will write an event message to the Traffic Manager log file, identifying the client IP and User-Agent, and we also increment a user-defined counter to help track how often attempts are made to open an SSL transaction. These counters can be graphed on the Traffic Manager Activity Monitor, or retrieved remotely as user-defined SNMP variables (use index 1 for good TLS requests, and index 2 for SSL requests that were rejected). The rule also raises a custom event named "ssl request" which can be used to trigger external actions if needed.

To test the script using Firefox, go to the "about:config" page and change the value "security.tls.version.max" from the default of "3" to "0". This will force SSL 3.0 to be used instead of TLS. In newer versions of Firefox, you may also need to set "security.tls.version.min" to "0" - but don't forget to set these values back to a secure setting after testing.

Poodle icon designed by Edward (http://www.thenounproject.com/edward) from The Noun Project (http://www.thenounproject.com).
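As an alternative to the Firefox test above, openssl s_client can confirm the behaviour from the command line. This is a sketch: the hostname is a placeholder, and it assumes your local OpenSSL build still includes SSLv3 support (many newer builds have removed the -ssl3 option):

```bash
# Should fail the handshake (or hit the trap rule) once SSL 3.0 is disabled
openssl s_client -connect www.example.com:443 -ssl3 < /dev/null

# Should complete a normal TLS handshake
openssl s_client -connect www.example.com:443 -tls1_2 < /dev/null
```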
View full article
This guide will walk you through the setup to deploy Global Server Load Balancing on Traffic Manager using the Global Load Balancing feature. In this guide, we will be using the "company.com" domain.

DNS Primer and Concept of Operations
This document is designed to be used in conjunction with the Traffic Manager User Guide. Specifically, this guide assumes that the reader: is familiar with load balancing concepts; has configured local load balancing for the resources requiring Global Load Balancing on their existing Traffic Managers; and has read the section "Global Load Balancing" of the Traffic Manager User Guide, in particular the "DNS Primer" and "About Global Server Load Balancing" sections.

Pre-requisites:
- You have a DNS sub-domain to use for GLB. In this example we will be using "glb.company.com" - a sub-domain of "company.com";
- You have access to create A records in the glb.company.com (or equivalent) domain; and
- You have access to create CNAME records in the company.com (or equivalent) domain.

Design: Our goal in this exercise will be to configure GLB to send users to their geographically closest DC, as pictured in the following diagram: Design Goal. We will be using an STM setup that looks like this to achieve this goal: Detailed STM Design.

Traffic Manager will present a DNS virtual server in each data center. This DNS virtual server will take DNS requests for resources in the "glb.company.com" domain from external DNS servers, will forward the requests to an internal DNS server, and will intelligently filter the records based on the GLB load balancing logic.

In this design, we will use the zone "glb.company.com". The zone "glb.company.com" will have NS records set to the two Traffic IP addresses presented by vTM for DNS load balancing in each data centre (172.16.10.101 and 172.16.20.101). This set up is done in the "company.com" domain zone setup. You will need to set this up yourself, or get your DNS Administrator to do it.

DNS Zone File Overview
On the DNS server that hosts the "glb.company.com" zone file, we will create two Address (A) records - one for each Web virtual server that the vTMs are hosting in their respective data centre.

Step 0: DNS Zone file set up
Before we can set up GLB on Traffic Manager, we need to set up our DNS zone files so that we can intelligently filter the results.

Create the GLB zone: In our example, we will be using the zone "glb.company.com". We will configure the "glb.company.com" zone to have two NameServer (NS) records. Each NS record will be pointed at the Traffic IP address of the DNS Virtual Server as it is configured on vTM. See the Design section above for details of the IP addresses used in this sample setup.

You will need an A record for each data centre resource you want Traffic Manager to GLB. In this example, we will have two A records for the DNS host "www.glb.company.com". On ISC BIND name servers, the zone file will look something like this:

Sample Zone File
;
; BIND data file for glb.company.com
;
$TTL 604800
@ IN SOA stm1.glb.company.com. info.glb.company.com. (
        201303211322 ; Serial
        7200         ; Refresh
        120          ; Retry
        2419200      ; Expire
        604800       ; Default TTL
)
@ IN NS stm1.glb.company.com.
@ IN NS stm2.glb.company.com.
;
stm1 IN A 172.16.10.101
stm2 IN A 172.16.20.101
;
www IN A 172.16.10.100
www IN A 172.16.20.100

Pre-Deployment testing:
Using DNS tools such as DiG or nslookup (do not use ping as a DNS testing tool), make sure that you can query your "glb.company.com" zone and get both A records returned. This means the DNS zone file is ready for you to apply your GLB logic. In the following example, we are using the DiG tool on a Linux client to *directly* query the name servers that the vTM is load balancing, to check that we are being served back two A records for "www.glb.company.com". We have added comments to the below section marked with <--(i)--|:

Test Output from DiG
user@localhost$ dig @172.16.10.40 www.glb.company.com A
; <<>> DiG 9.8.1-P1 <<>> @172.16.10.40 www.glb.company.com A
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19013
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 2
;; QUESTION SECTION:
;www.glb.company.com. IN A
;; ANSWER SECTION:
www.glb.company.com. 604800 IN A 172.16.20.100 <--(i)--| HERE ARE THE A RECORDS WE ARE TESTING
www.glb.company.com. 604800 IN A 172.16.10.100 <--(i)--|
;; AUTHORITY SECTION:
glb.company.com. 604800 IN NS stm1.glb.company.com.
glb.company.com. 604800 IN NS stm2.glb.company.com.
;; ADDITIONAL SECTION:
stm1.glb.company.com. 604800 IN A 172.16.10.101
stm2.glb.company.com. 604800 IN A 172.16.20.101
;; Query time: 0 msec
;; SERVER: 172.16.10.40#53(172.16.10.40)
;; WHEN: Wed Mar 20 16:39:52 2013
;; MSG SIZE rcvd: 139

Step 1: GLB Locations
GLB uses locations to help STM understand where things are located. First we need to create a GLB location for every data centre you need to provide GLB between. In our example, we will be using two locations, Data Centre 1 and Data Centre 2, named DataCentre-1 and DataCentre-2 respectively:

Creating GLB Locations
- Navigate to "Catalogs > Locations > GLB Locations > Create new Location"
- Create a GLB location called DataCentre-1
- Select the appropriate Geographic Location from the options provided
- Click Update Location
- Repeat this process for "DataCentre-2" and any other locations you need to set up.

Step 2: Set up GLB service
First we create a GLB service so that vTM knows how to distribute traffic using the GLB system:

Create GLB Service
Navigate to "Catalogs > GLB Services > Create a new GLB service" and create your GLB service. In this example we will be creating a GLB service with the following settings; you should use settings that match your environment:
Service Name: GLB_glb.company.com
Domains: *.glb.company.com
Add Locations: Select "DataCentre-1" and "DataCentre-2"

Then we enable the GLB service:

Enable the GLB Service
Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Basic Settings" and set "Enabled" to "Yes".

Next we tell the GLB service which resources are in which location:

Locations and Monitoring
Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Locations and Monitoring". Add the IP addresses of the resources you will be doing GSLB between into the relevant location. In my example I have allocated them as follows:
DataCentre-1: 172.16.10.100
DataCentre-2: 172.16.20.100
Don't worry about the "Monitors" section just yet; we will come back to it.
Next we will configure the GLB load balancing mechanism:

Load Balancing Method
Navigate to "GLB Services > GLB_glb.company.com > Load Balancing". By default the load balancing "algorithm" will be set to "Adaptive" with a "Geo Effect" of 50%. For this set up we will set the "algorithm" to "Round Robin" while we are testing.

Set GLB Load Balancing Algorithm
Set the "load balancing algorithm" to "Round Robin".

The last step is to bind the GLB service "GLB_glb.company.com" to our DNS virtual server.

Binding GLB Service Profile
Navigate to "Services > Virtual Servers > vs_GLB_DNS > GLB Services > Add new GLB Service". Select "GLB_glb.company.com" from the list and click "Add Service".

Step 3 - Testing Round Robin
Now that we have GLB applied to the "glb.company.com" zone, we can test GLB in action. Using DNS tools such as DiG or nslookup (again, do not use ping as a DNS testing tool), make sure that you can query against your STM DNS virtual servers and see what happens to requests for "www.glb.company.com". Following is test output from the Linux DiG command. We have added comments to the below section marked with the <--(i)--|:

Testing
user@localhost $ dig @172.16.10.101 www.glb.company.com
; <<>> DiG 9.8.1-P1 <<>> @172.16.10.101 www.glb.company.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17761
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2
;; QUESTION SECTION:
;www.glb.company.com. IN A
;; ANSWER SECTION:
www.glb.company.com. 60 IN A 172.16.20.100 <--(i)--| DataCentre-2 response
;; AUTHORITY SECTION:
glb.company.com. 604800 IN NS stm1.glb.company.com.
glb.company.com. 604800 IN NS stm2.glb.company.com.
;; ADDITIONAL SECTION:
stm1.glb.company.com. 604800 IN A 172.16.10.101
stm2.glb.company.com. 604800 IN A 172.16.20.101
;; Query time: 1 msec
;; SERVER: 172.16.10.101#53(172.16.10.101)
;; WHEN: Thu Mar 21 13:32:27 2013
;; MSG SIZE rcvd: 123

user@localhost $ dig @172.16.10.101 www.glb.company.com
; <<>> DiG 9.8.1-P1 <<>> @172.16.10.101 www.glb.company.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9098
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2
;; QUESTION SECTION:
;www.glb.company.com. IN A
;; ANSWER SECTION:
www.glb.company.com. 60 IN A 172.16.10.100 <--(i)--| DataCentre-1 response
;; AUTHORITY SECTION:
glb.company.com. 604800 IN NS stm2.glb.company.com.
glb.company.com. 604800 IN NS stm1.glb.company.com.
;; ADDITIONAL SECTION:
stm1.glb.company.com. 604800 IN A 172.16.10.101
stm2.glb.company.com. 604800 IN A 172.16.20.101
;; Query time: 8 msec
;; SERVER: 172.16.10.101#53(172.16.10.101)
;; WHEN: Thu Mar 21 13:32:27 2013
;; MSG SIZE rcvd: 123

Step 4: GLB Health Monitors
Now that we have GLB running in round robin mode, the next thing to do is to set up HTTP health monitors so that GLB can know whether the application in each DC is available before we send customers to that data centre for access to the website.

Create GLB Health Monitors
Navigate to "Catalogs > Monitors > Monitors Catalog > Create new monitor" and fill out the form with the following values:
Name: GLB_mon_www_AU
Type: HTTP monitor
Scope: GLB/Pool
IP or Hostname to monitor: 172.16.10.100:80

Repeat for the other data centre:
Name: GLB_mon_www_US
Type: HTTP monitor
Scope: GLB/Pool
IP or Hostname to monitor: 172.16.20.100:80

Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Locations and Monitoring". In DataCentre-1, in the field labeled "Add new monitor to the list", select "GLB_mon_www_AU" and click Update. In DataCentre-2, in the field labeled "Add new monitor to the list", select "GLB_mon_www_US" and click Update.

Step 5: Activate your preferred GLB load balancing logic
Now that you have GLB set up and you can detect application failures in each data centre, you can turn on the GLB load balancing algorithm that is right for your application. You can choose between the following GLB Load Balancing Methods:
- Load
- Geo
- Round Robin
- Adaptive
- Weighted Random
- Active-Passive
The online help has a good description of each of these load balancing methods. You should take care to read it and select the one most appropriate for your business requirements and environment.

Step 6: Test everything
Once you have your GLB up and running, it is important to test it for all the failure scenarios you want it to cover. Remember: failover that has not been tested is not failover...

Following is a test matrix that you can use to check the essentials:

Test # | Condition | Failure Detected By / Logic Implemented By | GLB Responded as Designed
1 | All pool members in DataCentre-1 not available | GLB Health Monitor | Yes / No
2 | All pool members in DataCentre-2 not available | GLB Health Monitor | Yes / No
3 | Failure of STM1 | GLB Health Monitor on STM2 | Yes / No
4 | Failure of STM2 | GLB Health Monitor on STM1 | Yes / No
5 | Customers are sent to the geographically correct DataCentre | GLB Load Balancing Mechanism | Yes / No

Notes on testing GLB: The reason we instruct you to use DiG or nslookup in this guide for testing your DNS, rather than a tool that also performs a DNS resolution such as ping, is that dig and nslookup bypass your local host's DNS cache. Obviously cached DNS records will prevent you from seeing changes in the status of your GLB while the cache entries are valid.

The Final Step - Create your CNAME:
Now that you have a working GLB entry for "www.glb.company.com", all that is left to do is to create or change the record for the real site "www.company.com" to be a CNAME for "www.glb.company.com".

Sample Zone File
;
; BIND data file for company.com
;
$TTL 604800
@ IN SOA ns1.company.com. info.company.com. (
        201303211312 ; Serial
        7200         ; Refresh
        120          ; Retry
        2419200      ; Expire
        604800       ; Default TTL
)
;
@ IN NS ns1.company.com.
; Here is our CNAME
www IN CNAME www.glb.company.com.
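Once the CNAME is in place, a final dig check against the authoritative name server should show "www.company.com" resolving via the GLB record. A sketch using the host names and IPs from this example setup:

```bash
# Expect a CNAME to www.glb.company.com, followed by an A record chosen by GLB
dig @ns1.company.com www.company.com A +noall +answer
```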
View full article
You've deployed the PyRunner.jar extension to Traffic Manager so that you can run Python code (see PyRunner.jar: Running Python code in Traffic Manager).  What's the easiest way to deploy the Python code on Traffic Manager?   The following script makes this easy.  It runs a quick syntax check against the Python code, then uses the REST API (Tech Tip: Using the RESTful Control API with Python - Overview) to PUT the python file in the conf/extra part of the Traffic Manager configuration.   publish.py   The source of the publish script:   #!/usr/bin/python import requests import sys import py_compile # FIXME: these need to be correct for your deployment url = 'https://stingray:9070/api/tm/1.0/config/active/' auth = ( 'admin', 'admin' ) file = sys.argv[1] # Syntax-check script before uploading try: py_compile.compile( file, '/dev/null', file, True ) except Exception, e: print "Compilation check failed:", e sys.exit( 1 ) src=open( file, 'r' ).read() # Deploy to Stingray client = requests.Session() client.auth = auth client.verify = 0 try: response = client.put( url+'extra/'+file, data = src ) except Exception, e: print "Error: Unable to connect to " + url + ": ", e sys.exit( 1 ) print "Uploaded " + file   Save this script on your development environment, check the URL and auth parameters, and make the script executable ( chmod +x ).   Deploying Python code to Traffic Manager   Upload the PyRunner.jar file to Traffic Manager, as per the instructions in PyRunner.jar: Running Python code in Traffic Manager.   Create a simple TrafficScript rule and associate it with your virtual server:   java.run( "PyRunner", "vars.py" );   Create a python file named vars.py   from javax.servlet.http import HttpServlet class vars(HttpServlet): def doGet(self, request, response): reqsText = ' %30s: %s\n' % ( "URL", request.getRequestURL() ) reqsText += ' %30s: %s\n' % ( "URI", request.getRequestURI() ) reqsText += ' %30s: %s\n' % ( "Query String", request.getQueryString() ) headText = '' names = request.getHeaderNames() for n in names: headText += ' %30s: %s\n' % ( n, request.getHeader( n ) ) paramText = '' names = request.getParameterNames() for n in names: paramText += ' %30s: %s\n' % ( n, request.getParameter( n ) ) attrText = '' names = request.getAttributeNames() for n in names: attrText += ' %30s: %s\n' % ( n, request.getAttribute( n ) ) attrText += ' %30s: %s\n' % ( "args", request.getAttribute( "args" ) ) toClient = response.getWriter() response.setContentType ("text/html") htmlOut = ''' <html><head><title>vars.py</title><body> <h3>Request</h3><pre>%s</pre> <h3>Headers</h3><pre>%s</pre> <h3>Parameters</h3><pre>%s</pre> <h3>Attributes</h3><pre>%s</pre> ''' % ( reqsText, headText, paramText, attrText ) toClient.println(htmlOut)   Upload it using publish.py:   $ ./publish.py vars.py Uploaded vars.py   Then test it out!   The key advantage of this technique is that it makes publishing Python/Jython code very quick and easy.  Traffic Manager will notice that the Python code has changed and re-load it immediately, giving you a quick test cycle.
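If you want to double-check what was uploaded, the same REST endpoint the script PUTs to can be read back. A sketch using curl with the example credentials and hostname from the script above (adjust both for your deployment):

```bash
# Fetch the deployed file from the 'extra' configuration section of the REST API
curl -k -u admin:admin https://stingray:9070/api/tm/1.0/config/active/extra/vars.py
```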
View full article
Overview
This article illustrates how you can create a custom 'program' action that is triggered when a selection of events is raised. The action will seek to take the appropriate debugging or remedial action to address the problem associated with each event.

Candidate actions:
- If a node fails, we will capture network traffic to and from that node
- If a node begins to underperform, we will obtain process information from that node
- If the number of file descriptors gets too low, we will generate a report of file descriptor usage
- If the Traffic Manager encounters a FATAL problem, we will generate a technical support report

Creating the Action
To help us create and debug the event handler, we'll first create a very simple debugging action. Go to the System -> Alerting -> Manage Actions page. Create a Program Action named 'Debug Problem', and configure it to call /bin/echo. The program (/bin/echo) is passed two parameters by default: the name of the event type that triggered the action and information about the specific event reported within that event type. This will suffice for now - we will add more arguments later when we have finished writing the program.

Creating an event type
Next, we create a set of events (an 'event type') that will trigger the action. Go to System -> Alerting -> Manage Event Types and create a new event type called 'Problems to Debug'. You will be presented with a list of all the events that Traffic Manager can catch in a tree structure. Select the following events:
- Nodes -> General Events -> Serious Errors -> Node has failed
- SLM Classes -> Information Messages -> Node information when SLM is non-conforming
- General -> Warnings -> Running out of free file descriptors
- General -> ZXTM Software -> Internal software error
Save the event type by clicking 'Update'.

Linking the event type to the action
The next step is to configure Traffic Manager to trigger the action when one of the events in our event type occurs. Go to the System -> Alerting page and select the 'Problems to Debug' event type from the drop-down box at the bottom of the page. The event type will appear in the list of mappings alongside a drop-down box containing a list of all the actions that have been configured. Select the 'Debug Problem' action from the list. It would also be useful to receive a notification that some debug output has been produced, so select 'E-Mail' from the list of actions as well. Click 'Update' to save the changes and then, if you haven't already done so, configure the E-Mail action to use your mail server and e-mail address.

Writing the Program
Currently the 'Debug Problem' action will not do anything useful when it is triggered, so we need to write a program for it to run. The code for this program is attached to this article. The program examines the event information it receives and, for certain events, performs some debugging actions. The program determines which event it is handling by matching the primary tag (as presented in the 'Event Type' configuration list).

When a node fails...
The Perl program looks for the 'nodefail' tag, then extracts the name of the node and its port from the message.
if( $message =~ /\tnodefail\t/ ) { my( $node, $port ) = ( $1, $2 ) if $message =~ /\tnodes\/(\S+):(\d+)\t/; }   It then starts capturing traffic going between Traffic Manager and that node to see if there are any clues as to what is causing the failure. The node might, for example, be ignoring invalid requests from a particular client, thus causing the passive monitoring feature of Traffic Manager to mark it as failed. `tcpdump -c 1000 -n -s 0 -i any -w $diagnostic_file host $node`;   The captured traffic is then sent to a different machine so it can be analysed.   `scp $diagnostic_file $scpuser\@$scpdest`;   The program uses scp to send the information, which usually requires a password to be entered to access the remote machine. Because scp is being invoked by the program there is no opportunity to enter a password. To get around this problem, you can configure scp to contact a particular remote machine without requiring a password. Alternatively, if no location is passed to the program, it will just write the files to a specific location on the Traffic Manager machine so you can access them manually.   When Traffic Manager encounters a problem...   If there is a problem with Traffic Manager, the program will create a technical support report that you can send to the support team should you need further assistance with the problem. Information about the specific problem that occurred in the software will be sent in the notification e-mail that we configured earlier.   `$ENV{ZEUSHOME}/zxtm/bin/support-report $diagnostic_file`; When the number of free file descriptors is running low... If Traffic Manager detects that it is running low on free file descriptors, the program will obtain information about current memory usage, disk usage, active connections and file descriptor settings.   `ulimit -a >> $diagnostic_file`; `vmstat -s >> $diagnostic_file`; `df -h >> $diagnostic_file`; `netstat -an >> $diagnostic_file`;   By examining this information, you should be able to determine why the system is running low on file descriptors. Often it is because the maximum number of file descriptors (as reported by ulimit) is too low, though it could also be caused by the system running out of memory or disk space or there simply being an abnormally high number of active connections. When a Service Level Monitoring class fails... Finally, if SLM fails the program is triggered with the ' slmnodeinfo ' event that identifies which nodes contributed to the SLM failure. In this case, the program will log on to the nodes in question and obtain information about the running processes to see what is going wrong. To do this it uses rsh, which means that you need to have the appropriate permissions configured in the '.rhosts' files on each node to allow the machine running Traffic Manager to access them without a password.   `rsh -l $rshuser $node "ps -eo pid,ppid,rss,vsize,pcpu,pmem,cmd -ww --sort=-pcpu" >> $diagnostic_file`; `rsh -l $rshuser $node "vmstat -s" >> $diagnostic_file`; Testing the program The program also looks out for a 'testaction' event, which is reported when you use the 'Update and Test' button on the action page. We will use this later to make sure the program is working correctly and copies the debug output to the correct location.   Adding the Program to Traffic Manager   We can now configure the 'Debug Problem' action to use the correct program. Upload the program to Traffic Manager's Action Programs catalog (in the 'Extra Files' section.)     
Go to System -> Alerting -> Manage Actions, and edit the Debug Problem action; change the program from 'Custom...' to the program you just uploaded.   You will have noticed that the program takes several arguments beyond just the event information. These arguments include the location to which files should be sent and the scp and rsh usernames to use when connecting to remote machines. You can use the 'Argument Descriptions' section of the page to configure the action to supply these arguments. After expanding the Argument Descriptions section, enter 'rshuser' into the name box and 'Username used to log on to failing nodes' in the description box. Click update and then add the remaining arguments - scpuser and scpdest - in the same way.   The arguments will appear in the 'Additional Settings' section where you can configure them with the appropriate values for your system. Click 'Update' to save the configuration and scroll down to the Additional Settings section again. The command that will be executed when the action is triggered is shown at the bottom of this section:       It would also be helpful to enable 'Verbose' mode on the action at this point so any problems that occur are reported in the Event Log.   If you want to test the program out, click 'Update and Test' from the Debug Problem action's page and you should find a file called 'test-event.txt' in the location you put in the 'scpdest' parameter. If not then double check that you can use scp to copy files from the Traffic Manager machine to that location without requiring any user interaction.   If you did get the file then when any of the events in the 'Debug Problems' event type occur you will receive some additional debugging information!
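The passwordless scp that the program relies on is typically arranged with SSH public-key authentication. A minimal sketch, with placeholder user and host names (your destination machine and account will differ):

```bash
# On the Traffic Manager, generate a key pair with no passphrase (if one doesn't already exist)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Copy the public key to the machine that will receive the diagnostic files
ssh-copy-id scpuser@diagnostics.example.com

# Verify that a copy now works without prompting for a password
scp /tmp/test-event.txt scpuser@diagnostics.example.com:/var/tmp/
```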
View full article
  Traffic Manager does not provide a ‘connection mirroring’ or ‘transparent failover’ capability.  This article describes contemporary connection mirroring techniques and their strengths and limitations, and explains how Traffic Manager may be used with VMware Fault Tolerance to create an effective solution that preserves all connections in the event of a hardware failure, while processing them fully at layer 7. What is connection mirroring?   A fault tolerant load balancer cluster eliminates single points of failure:  When load balancers are deployed in a fault tolerant cluster, they present a reliable endpoint for the services they manage.  If one load balancer device fails, its peers are able to step in and accept traffic so the service can continue to operate.   …but a failover event will drop all established TCP connections: However, if a load balancer fails, any TCP connections that are established to that load balancer will be dropped.  Clients will either receive a RST or FIN close message, or they may just experience a timeout.  The clients will need to re-establish the TCP connection.  This is an inconvenience for long-lived protocols that do not support automatic reconnects, such as FTP.     Connection Mirroring offers a solution: If the load balancers are operating in a basic layer-4 packet forwarding mode, the only actions they perform is to NAT the packets to the correct target node, and to apply sequence number translation.  They can share this connection information with their peer.  If a load balancer fails, the TCP client will retransmit its packets after an appropriate timeout.  The packets will be received by the peer who can then apply the correct NAT and sequence number operations.   When is it appropriate to use connection mirroring?   Connection mirroring is best used when only very basic packet-based load balancing is in use.  For example, F5 recommend that you "enable connection mirroring on Performance (Layer 4) virtual servers only" and comment "mirroring short-term connections such as HTTP and UDP is not recommended, as this will cause a decrease in system performance... is typically not necessary, as those protocols allow for failure of individual requests without loss of the entire session".   Cisco also support layer 4 connection mirroring (referring to it as ‘Stateful Failover’) and note that it is only possible for layer 4 connections.  When using a Cisco ACE device, it is not possible to failover connections that are proxied, including connections that employ SSL decryption or HTTP compression.   Layer 7 connection mirroring imposes a significant network and CPU overhead   Layer 7 connection mirroring puts a very high load on the dedicated heartbeat link (all incoming packets are replicated to the standby peer) and is CPU intensive (both traffic managers must process the same transactions at layer 7). It may add latency or interfere with normal system operation, and not all ADC features are supported in a mirrored configuration.  Because of these limitations, F5 advise "the overhead incurred by mirroring HTTP connections is excessive given the minimal advantages."   Does connection mirroring guarantee seamless failover?   Due to timing and implementation details, connection mirroring does not guarantee seamless failover.  State data must be shared to the peer once the TCP connection is established, and this must be done asynchronously to avoid delaying every TCP connection.  
If a load balancer fails before it has shared the state information, the TCP session cannot be resumed.

Typical duration of a TCP transaction (not including lingering keepalives): 500 ms
Typical window before which state information is synchronized (implementation dependent): 200 ms (state exchanged 5 times per second)
On failure, percentage of connections that cannot be re-established: 200/500 = 40%

Connection mirroring does not guarantee seamless failover because connections must proceed while state is being shared.

What is the effect of connection mirroring on uptime?

Connection mirroring carries a cost: increased internal traffic for state sharing, and severe limitations on the functionality that may be used at the load balancing tier. What effect does it have on a service’s uptime?

Typical duration of a TCP transaction (not including lingering keepalives): 500 ms
Typical number of individual load balancer failures in a 12 month period: 5
Percentage of transactions that would be dropped if a load balancer failed: 50% (assuming an active-active pair of load balancers)
Percentage of transactions that would be recovered on a failure: 60% (analysis above: 40% would not be recovered)
Probability that an individual connection would be impacted by a load balancer failure: 500/(365.25*24*3600*1000) * 50% * 5 = 0.000000040
Probability that such a connection could be 'rescued' with connection mirroring: 60% = 0.6
Proportion of transactions that would be impacted by a failure, and then recovered by connection mirroring: 0.000000040 * 0.6 = 0.000000024 (i.e. 0.0000024%)

Connection mirroring improves uptime by an infinitesimal amount.

General advice

Consider using connection mirroring when:
- Operating in L2-4 NAT load balancing modes
- Performing NAT load balancing with no content inspection (no delayed binding)
- No content processing (e.g. SSL, compression, caching, cookie injection) is required
- The base protocol does not support automatic reconnects - e.g. FTP
- Connections are long-lived and a dropped connection would inconvenience the user, e.g. SSH
- Your load balancer is unreliable and failures are sufficiently frequent that the overhead of mirroring is worthwhile
- You are running a fault-tolerant pair of load balancers

Don't use connection mirroring when:
- Operating in full proxy modes
- Performing NAT or full proxy load balancing with content inspection
- Compressing content, SSL decrypting, caching, using session persistence methods that inject cookies, or running an application firewall
- The base protocol supports reconnects - e.g. RDP
- Connections are short-lived and easily re-established, e.g. HTTP
- Your load balancers are reliable and you can accommodate instantaneous loss of connections in the event that one does fail
- You plan to run a cluster of three or more load balancers (this configuration is not supported by the major vendors who offer connection mirroring)

Benefits of using connection mirroring: improves uptime by 0.0000024% (typical) - 2.4 millionths of a percent.

Costs of using connection mirroring: limits traffic inspection or manipulation in the load balancer; increases internal traffic and increases load on the load balancer.

Balance the benefits of connection mirroring against the additional risk and complexity of enabling it and the potential loss in performance and functionality that will result.
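For reference, the headline probability above can be reproduced with a one-line calculation (this sketch assumes 365.25 days per year, 5 failures per year, and 50% of transactions on the failed unit, as in the table):

```bash
# Probability that a given 500 ms transaction is in flight on the failed load balancer
echo "scale=12; 500 / (365.25*24*3600*1000) * 0.5 * 5" | bc -l
# ~0.0000000396, i.e. the 0.000000040 figure quoted above
```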
Be aware that, based on the preceding analysis, unless your goal is to achieve more than 7-9’s uptime (99.99999%), connection mirroring will not measurably contribute to the reliability of your service.   When connections are too valuable to lose…   Pulse customers include emergency and first-response services around the world, NGO services publishing disaster-response information and even major political fund-raising concerns. In each case, extremely high availability and consistent performance in the face of large spikes of traffic are paramount to the organizations who selected Traffic Manager.   A number of customers use VMware Fault Tolerance with Traffic Manager to achieve enhanced uptime without compromising the any of the functionality that Traffic Manager offers. VMware Fault Tolerance maintains a perfect shadow of a running virtual machine, running on a separate host.  If the primary virtual machine fails due to a catastrophic hardware failure, the shadow seamlessly takes over all traffic, including established connections, with a typical latency of less than 1 ms. All application-level workloads, such as SSL decryption, TrafficScript processing and Authentication are maintained without any interruption in service:   VMware Fault Tolerance runs a secondary virtual machine in ‘lock step’ with the primary. Network traffic and other non-determinstic events are replicated to the secondary, ensuring that it maintains an identical execution state to the primary. If the primary fails, the secondary takes over seamlessly and a new secondary is started.   Such configurations leverage standard VMware technology and are fully supported. They have been proven in production and offer enhanced connection mirroring functionality compared to proprietary ADC solutions
View full article
Java Extensions are one of the 'data plane' APIs provided by Traffic Manager to process network transactions.  Java Extensions are invoked from TrafficScript using the java.run() function.   This article contains a selection of technical tips and solutions to illustrate the use of Java Extensions.   Basic Language Examples   Writing Java Extensions - an introduction (presenting a template and 'Hello World' application) Writing TrafficScript functions in Java (illustrating how to use the GenericServlet interface) Tech Tip: Prompting for Authentication in a Java Extension Tech Tip: Reading HTTP responses in a Java Extension   Advanced Language Examples   Apache Commons Logging (TODO) Authenticating users with Active Directory and Stingray Java Extensions Watermarking Images with Traffic Manager and Java Extensions Watermarking PDF documents with Traffic Manager and Java Extensions Being Lazy with Java Extensions XML, TrafficScript and Java Extensions Merging RSS feeds using Java Extensions (12/17/2008) Serving Web Content from Traffic Manager using Java Stingray-API.jar: A Java Interface Library for Traffic Manager's SOAP Control API TrafficManager Status - Using the Control API from a Java Extension   Java Extensions in other languages   PyRunner.jar: Running Python code in Traffic Manager Making Traffic Manager more RAD with Jython! Scala, Traffic Manager and Java Extensions (06/30/2009)   More information   Feature Brief: Java Extensions in Traffic Manager Java Development Guide documentation in the Product Documentation
View full article
With more services being delivered through a browser, it's safe to say web applications are here to stay. The rapid growth of web enabled applications and an increasing number of client devices mean that organizations are dealing with more document transfer methods than ever before. Providing easy access to these applications (web mail, intranet portals, document storage, etc.) can expose vulnerable points in the network.   When it comes to security and protection, application owners typically cover the common threats and vulnerabilities. What is often overlooked happens to be one of the first things we learned about the internet, virus protection. Some application owners consider the response “We have virus scanners running on the servers” sufficient. These same owners implement security plans that involve extending protection as far as possible, but surprisingly allow a virus sent several layers within the architecture.   Pulse vADC can extend protection for your applications with unmatched software flexibility and scale. Utilize existing investments by installing Pulse vADC on your infrastructure (Linux, Solaris, VMWare, Hyper-V, etc.) and integrate with existing antivirus scanners. Deploy Pulse vADC (available with many providers: Amazon, Azure, CoSentry, Datapipe, Firehost, GoGrid, Joyent, Layered Tech, Liquidweb, Logicworks, Rackspace, Sungard, Xerox, and many others) and externally proxy your applications to remove threats before they are in your infrastructure. Additionally, when serving as a forward proxy for clients, Pulse vADC can be used to mitigate virus propagation by scanning outbound content.   The Pulse Web Application Firewall ICAP Client Handler provides the possibility to integrate with an ICAP server. ICAP (Internet Content Adaption Protocol) is a protocol aimed at providing simple object-based content vectoring for HTTP services. The Web Application Firewall acts as an ICAP client and passes requests to a specified ICAP server. This enables you to integrate with third party products, based on the ICAP protocol. In particular, you can use the ICAP Client Handler as a virus scanner interface for scanning uploads to your web application.   Example Deployment   This deployment uses version 9.7 of the Pulse Traffic Manager with open source applications ClamAV and c-icap installed locally. If utilizing a cluster of Traffic Managers, this deployment should be performed on all nodes of the cluster. Additionally, Traffic Manager could be utilized as an ADC to extend availability and performance across multiple external ICAP application servers. I would also like to credit Thomas Masso, Jim Young, and Brian Gautreau - Thank you for your assistance!   "ClamAV is an open source (GPL) antivirus engine designed for detecting Trojans, viruses, malware and other malicious threats." - http://www.clamav.net/   "c-icap is an implementation of an ICAP server. It can be used with HTTP proxies that support the ICAP protocol to implement content adaptation and filtering services." - The c-icap project   Installation of ClamAV, c-icap, and libc-icap-mod-clamav   For this example, public repositories are used to install the packages on version 9.7 of the Traffic Manager virtual appliance with the default configuration. To install in a different manner or operating system, consult the ClamAV and c-icap documentation.   
Run the following command (copy and paste) to back up the sources.list file:

cp /etc/apt/sources.list /etc/apt/sources.list.rvbdbackup

Run the following commands to update the sources.list file. *Tested with Traffic Manager virtual appliance version 9.7. For other Ubuntu releases, replace 'precise' with the release currently installed; run "lsb_release -sc" to find out your release.

cat <<EOF >> /etc/apt/sources.list
deb http://ch.archive.ubuntu.com/ubuntu/ precise main restricted
deb-src http://ch.archive.ubuntu.com/ubuntu/ precise main restricted
deb http://us.archive.ubuntu.com/ubuntu/ precise universe
deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe
deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
EOF

Run the following command to retrieve the updated package lists:

apt-get update

Run the following command to install ClamAV, c-icap, and libc-icap-mod-clamav:

apt-get install clamav c-icap libc-icap-mod-clamav

Run the following command to restore your sources.list:

cp /etc/apt/sources.list.rvbdbackup /etc/apt/sources.list

Configure the c-icap ClamAV service

Run the following commands to add lines to /etc/c-icap/c-icap.conf:

cat <<EOF >> /etc/c-icap/c-icap.conf
Service clamav srv_clamav.so
ServiceAlias avscan srv_clamav?allow204=on&sizelimit=off&mode=simple
srv_clamav.ScanFileTypes DATA EXECUTABLE ARCHIVE GIF JPEG MSOFFICE
srv_clamav.MaxObjectSize 100M
EOF

*Consult the ClamAV and c-icap documentation and customize the configuration and settings for ClamAV and c-icap (i.e. definition updates, ScanFileTypes, restricting c-icap access, etc.) for your deployment.

Just for fun, run the following command to manually update the ClamAV database:

/usr/bin/freshclam

Configure the ICAP Server to Start

This process can be completed a few different ways; for this example we are going to use the Event Alerting functionality of Traffic Manager to start the c-icap server when the Web Application Firewall is started.

Save the following bash script (for this example, start_icap.sh) on your computer:

#!/bin/bash
/usr/bin/c-icap
#END

Upload the script via the Traffic Manager UI under Catalogs > Extra Files > Action Programs. (See Figure 1)
Figure 1

Create a new event type (for this example named "Firewall Started") under System > Alerting > Manage Event Types. Select "appfirewallcontrolstarted: Application firewall started" and click Update to save. (See Figure 2)
Figure 2

Create a new action (for this example named "Start ICAP") under System > Alerting > Manage Actions. Select the "Program" radio button and click "Add Action" to save. (See Figure 3)
Figure 3

Configure the "Start ICAP" Action Program to use the "start_icap.sh" script; for this example we will adjust the timeout setting to 300. Click Update to save. (See Figure 4)
Figure 4

Configure the Alert Mapping under System > Alerting to use the event type and action previously created. Click Update to save your changes. (See Figure 5)
Figure 5

Restart the Application Firewall or reboot to automatically start the c-icap server. Alternatively, you can run the /usr/bin/c-icap command from the console or select "Update and Test" under the "Start ICAP" alert configuration page of the UI to manually start c-icap.

Configure the Web Application Firewall

Within the Web Application Firewall UI, add and configure the ICAPClientHandler using the following attributes and values:
icap_server_location - 127.0.0.1
icap_server_resource - /avscan

Testing Notes

Check the WAF application logs. Use Full logging for the Application configuration and enable_logging for the ICAPClientHandler. As with any system, use full logging with caution; the logs can fill up fast!

Check the c-icap logs (/var/log/c-icap/access.log and /var/log/c-icap/server.log). Note: changing the /etc/c-icap/c-icap.conf "DebugLevel" value to 9 is useful for testing and recording to /var/log/c-icap/server.log. *You may want to change this back to 1 when you are done testing.

The Action Settings page in the Traffic Manager UI (for this example, Alerting > Actions > Start ICAP) also provides an "Update and Test" button that allows you to trigger the action and start the c-icap server.

Enable verbose logging for the "Start ICAP" action in the Traffic Manager for more information from the event mechanism. *You may want to change this setting back to disabled when you are done testing.

Additional Information

Pulse Secure Virtual Traffic Manager
Pulse Secure Virtual Web Application Firewall
Product Documentation
RFC 3507 - Internet Content Adaptation Protocol (ICAP)
The c-icap project
Clam AntiVirus
View full article
When deploying applications using content management systems, application owners are typically limited to the functionality of the CMS application in use or the third-party add-ons available. Unfortunately, these components alone may not deliver the application requirements, leaving the application owner to dedicate resources to developing a solution that usually takes longer than it should, or never works at all. This article addresses some hypothetical production use cases where the application does not provide administrators with an easy way to add a timer to the website.

This solution builds upon the previous articles (Embedded Google Maps - Augmenting Web Applications with Traffic Manager and Embedded Twitter Timeline - Augmenting Web Applications with Traffic Manager). Reusing a technique from Owen Garrett (see Instrument web content with Traffic Manager), this example uses a simple CSS overlay to display the added information.

Basic Rule

Use this as a starting point to understand the minimum requirements and to customize for your own use (for example, most people will want to add "text-align:center"). Values may need to be added to the $style or $html for your application; see the examples below.

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$timer = ( "366" - ( sys.gmtime.format( "%j" ) ) );

$html = '<div class="Countdown">' . $timer . ' DAYS UNTIL THE END OF THE YEAR</div>';

$style = '<style type="text/css">.Countdown{z-index:100;background:white}</style>';

$body = http.getResponseBody();
$body = string.regexsub( $body, "(<body[^>]*>)", $style . "$1\n" . $html . "\n", "i" );
http.setResponseBody( $body );

Example 1 - Simple Day Countdown Timer

This example covers a common use case popular with retailers: a countdown for the holiday shopping season. It also adds font formatting and additional text with a link.

#Only process text/html content
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

#Countdown target
#Julian day of the year "001" to "366"
$targetday = "359";
$bgcolor = "#D71920";
$labelday = "DAYS";
$title = "UNTIL CHRISTMAS";
$titlecolor = "white";
$link = "/dept.jump?id=dept20020200034";
$linkcolor = "yellow";
$linktext = "VISIT YOUR ONE-STOP GIFT SHOP";

#Calculate days between today and targetday
$timer = ( $targetday - ( sys.gmtime.format( "%j" ) ) );

#Remove the S from "DAYS" if only 1 day left
if( $timer == 1 ){
   $labelday = string.drop( $labelday, 1 );
};

$html = '
<div class="TrafficScriptCountdown">
   <h3>
      <font color="'.$titlecolor.'">
         '.$timer.' '.$labelday.' '.$title.'
      </font>
      <a href="'.$link.'">
         <font color="'.$linkcolor.'">
            '.$linktext.'
         </font>
      </a>
   </h3>
</div>
';

$style = '
<style type="text/css">
.TrafficScriptCountdown {
   position:relative;
   top:0;
   width:100%;
   text-align:center;
   background:'.$bgcolor.';
   opacity:100%;
   z-index:1000;
   padding:0
}
</style>
';

$body = http.getResponseBody();

$body = string.regexsub( $body, "(<body[^>]*>)", $style . "$1\n" . $html . "\n", "i" );

http.setResponseBody( $body );
Example 1 in Action

Example 2 - Ticking countdown timer with second detail

This example covers how to dynamically display the time down to the second. Rather than sending data to the client every second, I chose to use a client-side JavaScript found at HTML Countdown to Date v3 (Javascript Timer) | ricocheting.com

Example 2 Response Rule

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

#Countdown target
$year = "2014";
$month = "11";
$day = "3";
$hr = "8";
$min = "0";
$sec = "0";
#number of hours offset from UTC
$utc = "-8";

$labeldays = "DAYS";
$labelhrs = "HRS";
$labelmins = "MINS";
$labelsecs = "SECS";
$separator = ", ";

$timer = '<script type="text/javascript">
var CDown=function(){this.state=0,this.counts=[],this.interval=null};CDown. prototype =\
{init:function(){this.state=1;var t=this;this.interval=window.setInterval(function()\
{t.tick()},1e3)},add:function(t,s){tzOffset= '.$utc.' ,dx=t.toGMTString(),dx=dx. substr \
(0,dx. length -3),tzCurrent=t.getTimezoneOffset()/60*-2,t.setTime(Date.parse(dx)),\
t.setHours(t.getHours()+tzCurrent-tzOffset),this.counts. push ({d:t,id:s}),this.tick(),\
0==this.state&&this.init()},expire:function(t){ for (var s in t)this.display\
(this.counts[t[s]], "Now!" ),this.counts. splice (t[s],1)}, format :function(t){var s= "" ;\
return 0!=t.d&&(s+=t.d+ " " +(1==t.d? "'.string.drop( $labeldays, 1 ).'" :" '.$labeldays.' \
")+" '.$separator.' "),0!=t.h&&(s+=t.h+" "+(1==t.h?" '.string.drop( $labelhrs, 1 ).' ":\
"'.$labelhrs.'" )+ "'.$separator.'" ),s+=t.m+ " " +(1==t.m?"\
'.string.drop( $labelmins, 1 ).' ":" '.$labelmins.' ")+" '.$separator.' ",s+=t.s+" "\
+(1==t.s? "'.string.drop( $labelsecs, 1 ).'" : "'.$labelsecs.'" )+ "'.$separator.'" \
,s. substr (0,s. length -2)},math:function(t){var i=w=d=h=m=s=ms=0; return ms=( "" +\
(t %1e3 +1e3)). substr (1,3),t=Math.floor(t/1e3),i=Math.floor(t/31536e3),w=Math.floor\
(t/604800),d=Math.floor(t/86400),t%=86400,h=Math.floor(t/3600),t%=3600,m=Math.floor\
(t/60),t%=60,s=Math.floor(t),{y:i,w:w,d:d,h:h,m:m,s:s,ms:ms}},tick:function()\
{var t=(new Date).getTime(),s=[],i=0,n=0; if (this.counts) for (var e=0,\
o=this.counts. length ;o>e;++e)i=this.counts[e],n=i.d.getTime()-t,0>n?s. push (e):\
this.display(i,this. format (this.math(n)));s. length >0&&this.expire(s),\
0==this.counts. length &&window.clearTimeout(this.interval)},display:function(t,s)\
{document.getElementById(t.id).innerHTML=s}},window.onload=function()\
{var t=new CDown;t.add(new Date\
( '.$year.' , '.--$month.' , '.$day.' , '.$hr.' , '.$min.' , '.$sec.' ), "countbox1" )};
</script><span id="countbox1"></span>';

$html = '<div class="TrafficScriptCountdown"><center><h3><font color="white">\
COUNTDOWN TO RIVERBED FORCE '.$timer.' </font>\
<a href="https://secure3.aetherquest.com/riverbedforce2014/"><font color="yellow">\
REGISTER NOW</a></h3></font></center></div>';

$style = '<style type="text/css">.TrafficScriptCountdown{position:relative;top:0;\
width:100%;background:#E9681D;opacity:100%;z-index:1000;padding:0}</style>';

http.setResponseBody( string.regexsub( http.getResponseBody(),
   "(<body[^>]*>)", $style . "$1\n" . $html . "\n", "i" ) );

Example 2 in action

Notes

Example 1 results in faster page load time than Example 2.
Example 1 can be easily extended so that TrafficScript sets $timer with detail down to the second, as in Example 2 (a sketch follows these notes).
Be aware of any trailing space(s) after the " \ " line breaks when copy and paste is used to import the rule. Incorrect spacing can stop the JS and the HTML from functioning.
You may have to adjust the elements for your web application (e.g. z-index, the regex sub match, div class, etc.).

This is a great example of using Traffic Manager to deliver a solution in minutes that could otherwise take hours.
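As a hedged illustration of the first note above, the following sketch computes a more detailed $timer entirely server-side. The target epoch value is a hypothetical placeholder and math.floor() is assumed to be available for truncating the divisions (check the TrafficScript reference for your version); the countdown only updates when the page is re-requested, so it does not tick in the browser like Example 2.

# A minimal sketch, not part of the original rules: derive day/hour/minute/second
# detail from a Unix epoch target.
$target = 1735689600;               # hypothetical target moment, UTC epoch seconds
$remaining = $target - sys.time();  # seconds left until the target
if( $remaining < 0 ) { $remaining = 0; }

# math.floor() assumed here to truncate the division results
$days = math.floor( $remaining / 86400 );
$hrs  = math.floor( ( $remaining % 86400 ) / 3600 );
$mins = math.floor( ( $remaining % 3600 ) / 60 );
$secs = $remaining % 60;

$timer = $days . " DAYS, " . $hrs . " HRS, " . $mins . " MINS, " . $secs . " SECS";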
View full article
The SOAP Control API is one of the 'Control Plane' APIs provided by Pulse Traffic Manager (see also REST and SNMP).

This article contains a selection of simple technical tips and solutions that use the SOAP Control API to manage and query Traffic Manager.

Basic language examples

Tech Tip: Using the SOAP Control API with Perl
Tech Tip: Using the SOAP Control API with C#
Tech Tip: Using the SOAP Control API with Java
Tech Tip: Using the SOAP Control API with Python
Tech Tip: Using the SOAP Control API with PHP
Tech Tip: Using the SOAP Control API with Ruby
Tech Tip: Ruby and SOAP revisited
Tech Tip: Ruby and SOAP - a rubygems implementation

More sophisticated tips and examples

Tech Tip: Running Perl code on the Pulse vADC Virtual Appliance
Tech Tip: using Perl SOAP::Lite with Traffic Manager's SOAP Control API
Tech Tip: Using Perl/SOAP to list recent connections in Pulse Traffic Manager
Gathering statistics from a cluster of Traffic Managers

More information

For a more rigorous introduction to the SOAP Control API, please refer to the Control API documentation in the Product Documentation
View full article
Feature Brief: Pulse Traffic Manager RESTful Control API is one of the 'Control Plane' APIs provided by Pulse Traffic Manager (see also Feature Brief: Pulse Traffic Manager SOAP API). This article contains a selection of simple technical tips and solutions that use the REST Control API to manage and query Pulse Traffic Manager.

Overview

Tech Tip: Using the RESTful Control API with Python
Tech Tip: Using the RESTful Control API with Perl
Tech Tip: Using the RESTful Control API with Ruby
Tech Tip: Using the RESTful Control API with TrafficScript
Tech Tip: Using the RESTful Control API with PHP

Example programs

Retrieving resource configuration data

Tech Tip: Using the RESTful Control API with Python - listpools
Tech Tip: Using the RESTful Control API with Perl - listpools
Tech Tip: Using the RESTful Control API with Ruby - listpools
Tech Tip: Using the RESTful Control API with TrafficScript - listpools
Tech Tip: Using the RESTful Control API with PHP - listpools
Tech Tip: Using the RESTful Control API with Python - listpoolnodes
Tech Tip: Using the RESTful Control API with Perl - listpoolnodes
Tech Tip: Using the RESTful Control API with Ruby - listpoolnodes
Tech Tip: Using the RESTful Control API with TrafficScript - listpoolnodes
Tech Tip: Using the RESTful Control API with PHP - listpoolnodes

Changing resource configuration data

Tech Tip: Using the RESTful Control API with Python - startstopvs
Tech Tip: Using the RESTful Control API with Perl - startstopvs
Tech Tip: Using the RESTful Control API with Ruby - startstopvs
Tech Tip: Using the RESTful Control API with TrafficScript - startstopvs
Tech Tip: Using the RESTful Control API with PHP - startstopvs

Adding a resource

Tech Tip: Using the RESTful Control API with Python - addpool
Tech Tip: Using the RESTful Control API with Perl - addpool
Tech Tip: Using the RESTful Control API with Ruby - addpool
Tech Tip: Using the RESTful Control API with TrafficScript - addpool
Tech Tip: Using the RESTful Control API with PHP - addpool
Tech Tip: Creating a new service with the REST API and Python

Deleting a resource

Tech Tip: Using the RESTful Control API with Python - deletepool
Tech Tip: Using the RESTful Control API with Perl - deletepool
Tech Tip: Using the RESTful Control API with Ruby - deletepool
Tech Tip: Using the RESTful Control API with TrafficScript - deletepool
Tech Tip: Using the RESTful Control API with PHP - deletepool

Adding a file

Tech Tip: Using the RESTful Control API with Python - addextrafile
Tech Tip: Using the RESTful Control API with Perl - addextrafile
Tech Tip: Using the RESTful Control API with Ruby - addextrafile
Tech Tip: Using the RESTful Control API with PHP - addextrafile

Other Examples

HowTo: List all of the draining nodes in Traffic Manager using Python and REST
HowTo: Drain a node in multiple pools (Python REST API example)
Deploying Python code to Pulse Traffic Manager
Slowing down busy users - driving the REST API from TrafficScript
Tech Tip: Using the RESTful Control API to get pool statistics with PHP

Read More

The REST API Guide in the Product Documentation
Feature Brief: Pulse Traffic Manager RESTful Control API
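As a quick taste of the TrafficScript variant of these tech tips, here is a minimal sketch that lists pool names. It assumes the stmrestclient library from the "Using the RESTful Control API with TrafficScript - Overview" article is already in your rules catalog; the /rest/quicklist path is just an illustrative choice.

# Minimal sketch: list pool names via the REST API using stmrestclient.
# Attach as a request rule to an HTTP virtual server.
import stmrestclient;

if( http.getPath() != "/rest/quicklist" ) break;

$response = stmrestclient.stmRestGet( "pools", "json" );
$html = "<b>Pools:</b><br>";

if( $response["rc"] == 1 ) {
   foreach( $pool in $response["data"]["children"] ) {
      $html = $html . $pool["name"] . "<br>";
   }
} else {
   $html = $html . "Error getting the pool list: " . $response["info"];
}

http.sendResponse( "200 OK", "text/html", $html, "" );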
View full article
TrafficScript is the programming language that is built into the Traffic Manager.  With TrafficScript, you can create traffic management 'rules' to control the behaviour of Traffic Manager in a wide variety of ways, inspecting, modifying and routing any type of TCP or UDP traffic.

The language is a simple, procedural one - the style and syntax will be familiar to anyone who has used Perl, PHP, C, BASIC, etc. Its strength comes from its integration with Traffic Manager, allowing you to perform complex traffic management tasks simply, such as controlling traffic flow, reading and parsing HTTP requests and responses, and managing XML data.

This article contains a selection of simple technical tips to illustrate how to perform common tasks using TrafficScript.

TrafficScript Syntax

HowTo: TrafficScript Syntax
HowTo: TrafficScript variables and types
HowTo: if-then-else conditions in TrafficScript
HowTo: loops in TrafficScript
HowTo: TrafficScript rules processing and flow control
HowTo: TrafficScript String Manipulation
HowTo: TrafficScript Libraries and Subroutines
HowTo: TrafficScript Arrays and Hashes

HTTP operations

HowTo: Techniques to read HTTP headers
HowTo: Set an HTTP Response Header
HowTo: Inspect HTTP Request Parameters
HowTo: Rewriting HTTP Requests
HowTo: Rewriting HTTP Responses
HowTo: Redirect HTTP clients
HowTo: Inspect and log HTTP POST data
HowTo: Handle cookies in TrafficScript

XML processing

HowTo: Inspect XML and route requests
Managing XML SOAP data with TrafficScript

General examples

HowTo: Controlling Session Persistence
HowTo: Control Bandwidth Management
HowTo: Monitor the response time of slow services
HowTo: Query an external datasource using HTTP
HowTo: Techniques for inspecting binary protocols
HowTo: Spoof Source IP Addresses with IP Transparency
HowTo: Use low-bandwidth content during periods of high load
HowTo: Log slow connections in Stingray Traffic Manager
HowTo: Inspect and synchronize SMTP
HowTo: Write Health Monitors in TrafficScript
HowTo: Delete Session Persistence records

More information

For a more rigorous introduction to the TrafficScript language, please refer to the TrafficScript guide in the Product Documentation
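As a quick illustration of that style, here is a small request-rule sketch; the pool name and paths are hypothetical placeholders rather than part of any of the articles above.

# Route API calls to a dedicated pool and tag image requests.
$path = http.getPath();

if( string.startsWith( $path, "/api/" ) ) {
   # Send API traffic to the (hypothetical) "API Servers" pool
   pool.use( "API Servers" );
}

if( string.endsWith( $path, ".jpg" ) || string.endsWith( $path, ".png" ) ) {
   # Tag the request so later rules or the backend can identify it
   http.setHeader( "X-Request-Class", "static-image" );
}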
View full article
This article explains how to use the Pulse vADC RESTful Control API with Perl.  It's a little more work than with Tech Tip: Using the RESTful Control API with Python - Overview, but once the basic environment is set up and the framework is in place, you can rapidly create scripts in Perl to manage the configuration.

Getting Started

The code examples below depend on several Perl modules that may not be installed by default on your client system: REST::Client, MIME::Base64 and JSON.

On a Linux system, the best way to pull these into the system perl is by using the system package manager (apt or rpm). On a Mac (or a home-grown perl instance), you can install them using CPAN.

Preparing a Mac to use CPAN

Install the package 'Command Line Tools for Xcode' either from within Xcode or directly from https://developer.apple.com/downloads/.

Some of the CPAN build scripts indirectly seek out /usr/bin/gcc-4.2 and won't build if /usr/bin/gcc-4.2 is missing.  If gcc-4.2 is missing, the following should help:

$ ls -l /usr/bin/gcc-4.2
ls: /usr/bin/gcc-4.2: No such file or directory
$ sudo ln -s /usr/bin/gcc /usr/bin/gcc-4.2

Installing the perl modules

It may take 20 minutes for CPAN to initialize itself, download, compile, test and install the necessary perl modules:

$ sudo perl -MCPAN -e shell
cpan> install Bundle::CPAN
cpan> install REST::Client
cpan> install MIME::Base64
cpan> install JSON

Your first Perl REST client application

This application looks for a pool named 'Web Servers'.  It prints a list of the nodes in the pool, and then sets the first one to drain.

#!/usr/bin/perl
use REST::Client;
use MIME::Base64;
use JSON;

# Configurables
$poolname = "Web Servers";
$endpoint = "stingray:9070";
$userpass = "admin:admin";

# Older implementations of LWP check this to disable server verification
$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME}=0;

# Set up the connection
my $client = REST::Client->new( );

# Newer implementations of LWP use this to disable server verification
# Try SSL_verify_mode => SSL_VERIFY_NONE. 0 is more compatible, but may be deprecated
$client->getUseragent()->ssl_opts( SSL_verify_mode => 0 );

$client->setHost( "https://$endpoint" );
$client->addHeader( "Authorization", "Basic ".encode_base64( $userpass ) );

# Perform a HTTP GET on this URI
$client->GET( "/api/tm/1.0/config/active/pools/$poolname" );
die $client->responseContent() if( $client->responseCode() >= 300 );

# Add the node to the list of draining nodes
my $r = decode_json( $client->responseContent() );

print "Pool: $poolname:\n";
print " Nodes: " . join( ", ", @{$r->{properties}->{basic}->{nodes}} ) . "\n";
print " Draining: " . join( ", ", @{$r->{properties}->{basic}->{draining}} ) . "\n";

# If the first node is not already draining, add it to the draining list
$node = $r->{properties}->{basic}->{nodes}[0];
if( ! ($node ~~ @{$r->{properties}->{basic}->{draining}}) ) {
   print " Planning to drain: $node\n";
   push @{$r->{properties}->{basic}->{draining}}, $node;
}

# Now put the updated configuration
$client->addHeader( "Content-Type", "application/json" );
$client->PUT( "/api/tm/1.0/config/active/pools/$poolname", encode_json( $r ) );
die $client->responseContent() if( $client->responseCode() >= 300 );

my $r = decode_json( $client->responseContent() );
print " Now draining: " . join( ", ", @{$r->{properties}->{basic}->{draining}} ) . "\n";

Running the script

$ perl ./pool.pl
Pool: Web Servers:
 Nodes: 192.168.207.101:80, 192.168.207.103:80, 192.168.207.102:80
 Draining: 192.168.207.102:80
 Planning to drain: 192.168.207.101:80
 Now draining: 192.168.207.101:80, 192.168.207.102:80

Notes

This script was tested against two different installations of perl, with different versions of the LWP library.  It was necessary to disable SSL certificate checking using:

$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME}=0;

... with the older, and:

# Try SSL_verify_mode => SSL_VERIFY_NONE. 0 is more compatible, but may be deprecated
$client->getUseragent()->ssl_opts( SSL_verify_mode => 0 );

with the newer.  The older implementation failed when using SSL_VERIFY_NONE.  YMMV.
View full article
This document covers updating the built-in GeoIP database. See TechTip: Extending the Pulse vTM GeoIP database for instructions on adding custom entries to the database.  
View full article
Pulse Virtual Traffic Manager contains a GeoIP database that maps IP addresses to location - longitude and latitude, city, county and country.  The GeoIP database is used by the Global Load Balancing capability to estimate distances between remote users and local datacenters, and it is accessible using the  geo.*  TrafficScript and Java functions.  
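For example, a minimal TrafficScript sketch of a lookup might look like the following (a hedged illustration; check the geo.* section of the TrafficScript reference for the exact functions and return values in your version):

# Look up the client's location in the built-in GeoIP database
# and record it in the event log.
$ip = request.getRemoteIP();
$country = geo.getCountry( $ip );
$city = geo.getCity( $ip );

log.info( "Client " . $ip . " appears to be in " . $city . ", " . $country );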
View full article
The following code uses Stingray's RESTful API to list all the pools defined for a cluster, and for each pool it lists the nodes defined for that pool, including draining and disabled nodes. The code is written in Python. This example builds on the previous listpools.py example.  This program does a GET request for the list of pools and then, while looping through the list of pools, a GET is done for each pool to retrieve the configuration parameters for that pool.

listpoolnodes.py

#! /usr/bin/env python
import requests
import json
import sys

print "Pools:\n"

url = 'https://stingray.example.com:9070/api/tm/1.0/config/active/pools'
jsontype = {'content-type': 'application/json'}
client = requests.Session()
client.auth = ('admin', 'admin')
client.verify = False

try:
    # Do the HTTP GET to get the list of pools. We are only putting this client.get within a try
    # because if there is no error connecting on this one there shouldn't be an error connecting
    # on later client.get calls, so that would be an unexpected exception.
    response = client.get(url)
except requests.exceptions.ConnectionError:
    print "Error: Unable to connect to " + url
    sys.exit(1)

data = json.loads(response.content)
if response.status_code == 200:
    if data.has_key('children'):
        pools = data['children']
        for i, pool in enumerate(pools):
            poolName = pool['name']
            # Do the HTTP GET to get the properties of a pool
            response = client.get(url + "/" + poolName)
            poolConfig = json.loads(response.content)
            if response.status_code == 200:
                # Since we are getting the properties for a pool we expect the first element to be 'properties'
                if poolConfig.has_key('properties'):
                    # The value of the key 'properties' will be a dictionary containing property sections.
                    # All the properties that this program cares about are in the 'basic' section:
                    # nodes is the list of all active or draining nodes in this pool
                    # draining is the list of all draining nodes in this pool
                    # disabled is the list of all disabled nodes in this pool
                    nodes = poolConfig['properties']['basic']['nodes']
                    draining = poolConfig['properties']['basic']['draining']
                    disabled = poolConfig['properties']['basic']['disabled']
                    print pool['name']
                    print "    Nodes: ",
                    for n, node in enumerate(nodes):
                        print node + " ",
                    print ""
                    if len(draining) > 0:
                        print "    Draining Nodes: ",
                        for n, node in enumerate(draining):
                            print node + " ",
                        print ""
                    if len(disabled) > 0:
                        print "    Disabled Nodes: ",
                        for n, node in enumerate(disabled):
                            print node + " ",
                        print ""
                else:
                    print "Error: No properties found for pool " + poolName
                print ""
            else:
                print "Error getting pool config: URL=%s Status=%d Id=%s: %s" %(url + "/" + poolName, response.status_code, poolConfig['error_id'], poolConfig['error_text'])
    else:
        print 'Error: No children found'
else:
    print "Error getting pool list: URL=%s Status=%d Id=%s: %s" %(url, response.status_code, data['error_id'], data['error_text'])

Running the example

This code was tested with Python 2.7.3 and version 1.1.0 of the requests library.

Run the Python script as follows:

$ listpoolnodes.py
Pools:

Pool1
    Nodes:  192.168.1.100 192.168.1.101
    Draining:  192.168.1.101
    Disabled:  192.168.1.102

Pool2
    Nodes:  192.168.1.103 192.168.1.104

Read More

REST API Guide in the vADC Product Documentation
Tech Tip: Using the RESTful Control API
Collected Tech Tips: Using the RESTful Control API
View full article
The following code uses the RESTful API to list all the pools defined for a cluster, and for each pool it lists the nodes defined for that pool, including draining and disabled nodes. The code is written in TrafficScript. This example builds on the previous stmrest_listpools example.  This rule does a GET request for the list of pools and then, while looping through the list of pools, a GET is done for each pool to retrieve the configuration parameters for that pool.  A subroutine in stmrestclient is used to do the actual RESTful API call.  stmrestclient is attached to the article Tech Tip: Using the RESTful Control API with TrafficScript - Overview.

stmrest_listpoolnodes

################################################################################
# stmrest_listpoolnodes
#
# This rule lists the names of all pools and also the nodes, draining nodes
# and disabled nodes in each pool.
#
# To run this rule add it as a request rule to an HTTP Virtual Server and in a
# browser enter the path /rest/listpoolnodes.
#
# It uses the subroutines in stmrestclient
################################################################################

import stmrestclient;

if (http.getPath() != "/rest/listpoolnodes") break;

$resource = "pools";
$accept = "json";

$html = "<br><b>Pools:</b><br>";

$response = stmrestclient.stmRestGet($resource, $accept);
if ($response["rc"] == 1) {
   $pools = $response["data"]["children"];
   foreach ($pool in $pools) {
      $poolName = $pool["name"];
      $response = stmrestclient.stmRestGet($resource . "/" . string.escape($poolName), $accept);
      if ($response["rc"] == 1) {
         $poolConfig = $response["data"];
         $nodes = $poolConfig["properties"]["basic"]["nodes"];
         $draining = $poolConfig["properties"]["basic"]["draining"];
         $disabled = $poolConfig["properties"]["basic"]["disabled"];
         $html = $html . "<br>" . $poolName . ":<br>";
         $html = $html . "<br> Nodes: ";
         foreach ($node in $nodes) {
            $html = $html . $node . " ";
         }
         $html = $html . "\n";
         if (array.length($draining) > 0) {
            $html = $html . "<br> Draining Nodes: ";
            foreach ($node in $draining) {
               $html = $html . $node . " ";
            }
            $html = $html . "\n";
         }
         if (array.length($disabled) > 0) {
            $html = $html . "<br> Disabled Nodes: ";
            foreach ($node in $disabled) {
               $html = $html . $node . " ";
            }
            $html = $html . "\n";
         }
         $html = $html . "<br>\n";
      } else {
         $html = $html . "There was an error getting the pool configuration for pool " . $poolName . ": " . $response['info'];
      }
   }
} else {
   $html = $html . "<br>There was an error getting the pool list: " . $response['info'];
}

http.sendResponse("200 OK", "text/html", $html, "");

Running the example

This rule should be added as a request rule to a Virtual Server and run with the URL:

http://<hostname>/rest/listpoolnodes

Pools:

Pool1
    Nodes:  192.168.1.100 192.168.1.101
    Draining:  192.168.1.101
    Disabled:  192.168.1.102

Pool2
    Nodes:  192.168.1.103 192.168.1.104

Read More

REST API Guide in the vADC Product Documentation
Tech Tip: Using the RESTful Control API with TrafficScript - Overview
Feature Brief: RESTful Control API
Collected Tech Tips: Using the RESTful Control API
View full article