Pulse Secure vADC

This guide will walk you through deploying Global Server Load Balancing on Traffic Manager using the Global Load Balancing feature. In this guide, we will be using the "company.com" domain.

DNS Primer and Concept of Operations

This document is designed to be used in conjunction with the Traffic Manager User Guide. Specifically, this guide assumes that the reader: is familiar with load balancing concepts; has configured local load balancing for the resources requiring Global Load Balancing on their existing Traffic Managers; and has read the "Global Load Balancing" section of the Traffic Manager User Guide, in particular the "DNS Primer" and "About Global Server Load Balancing" sections.

Prerequisites:

- You have a DNS sub-domain to use for GLB. In this example we will be using "glb.company.com", a sub-domain of "company.com";
- You have access to create A records in the glb.company.com (or equivalent) domain; and
- You have access to create CNAME records in the company.com (or equivalent) domain.

Design:

Our goal in this exercise is to configure GLB to send users to their geographically closest data centre, as pictured in the following diagram:

Design Goal

We will be using an STM setup that looks like this to achieve that goal:

Detailed STM Design

Traffic Manager will present a DNS virtual server in each data centre. This DNS virtual server will take DNS requests for resources in the "glb.company.com" domain from external DNS servers, will forward the requests to an internal DNS server, and will intelligently filter the records based on the GLB load-balancing logic.

In this design, we will use the zone "glb.company.com". The zone "glb.company.com" will have NS records set to the two Traffic IP addresses presented by vTM for DNS load balancing in each data centre (172.16.10.101 and 172.16.20.101). This delegation is done in the "company.com" domain zone setup. You will need to set this up yourself, or have your DNS administrator do it.
DNS Zone File Overview

On the DNS server that hosts the "glb.company.com" zone file, we will create two Address (A) records - one for each Web virtual server that the vTMs are hosting in their respective data centres.

Step 0: DNS Zone file set up

Before we can set up GLB on Traffic Manager, we need to set up our DNS zone files so that we can intelligently filter the results.

Create the GLB zone: In our example, we will be using the zone "glb.company.com". We will configure the "glb.company.com" zone to have two NameServer (NS) records. Each NS record will be pointed at the Traffic IP address of the DNS virtual server as it is configured on vTM. See the Design section above for details of the IP addresses used in this sample setup.

You will need an A record for each data centre resource you want Traffic Manager to GLB. In this example, we will have two A records for the DNS host "www.glb.company.com". On ISC BIND name servers, the zone file will look something like this:

Sample Zone File

;
; BIND data file for glb.company.com
;
$TTL 604800
@ IN SOA stm1.glb.company.com. info.glb.company.com. (
        201303211322 ; Serial
        7200         ; Refresh
        120          ; Retry
        2419200      ; Expire
        604800 )     ; Default TTL
@ IN NS stm1.glb.company.com.
@ IN NS stm2.glb.company.com.
;
stm1 IN A 172.16.10.101
stm2 IN A 172.16.20.101
;
www IN A 172.16.10.100
www IN A 172.16.20.100

Pre-Deployment testing:

Using DNS tools such as dig or nslookup (do not use ping as a DNS testing tool), make sure that you can query your "glb.company.com" zone and get both A records returned. This means the DNS zone file is ready for you to apply your GLB logic. In the following example, we are using the dig tool on a Linux client to *directly* query the name servers that the vTM is load balancing, to check that we are served back two A records for "www.glb.company.com".
We have added comments to the output below, marked with <--(i)--| :

Test Output from DiG

[email protected]$ dig @172.16.10.40 www.glb.company.com A

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.40 www.glb.company.com A
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19013
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com. IN A

;; ANSWER SECTION:
www.glb.company.com. 604800 IN A 172.16.20.100 <--(i)--| HERE ARE THE A RECORDS WE ARE TESTING
www.glb.company.com. 604800 IN A 172.16.10.100 <--(i)--|

;; AUTHORITY SECTION:
glb.company.com. 604800 IN NS stm1.glb.company.com.
glb.company.com. 604800 IN NS stm2.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com. 604800 IN A 172.16.10.101
stm2.glb.company.com. 604800 IN A 172.16.20.101

;; Query time: 0 msec
;; SERVER: 172.16.10.40#53(172.16.10.40)
;; WHEN: Wed Mar 20 16:39:52 2013
;; MSG SIZE rcvd: 139

Step 1: GLB Locations

GLB uses locations to help STM understand where things are located. First we need to create a GLB location for every data centre you need to provide GLB between. In our example, we will be using two locations, Data Centre 1 and Data Centre 2, named DataCentre-1 and DataCentre-2 respectively:

Creating GLB Locations

Navigate to "Catalogs > Locations > GLB Locations > Create new Location"
Create a GLB location called DataCentre-1
Select the appropriate Geographic Location from the options provided
Click Update Location
Repeat this process for "DataCentre-2" and any other locations you need to set up.

Step 2: Set up the GLB service

First we create a GLB service so that vTM knows how to distribute traffic using the GLB system:

Create GLB Service

Navigate to "Catalogs > GLB Services > Create a new GLB service"
Create your GLB service.
In this example we will be creating a GLB service with the following settings; you should use settings that match your environment:

Service Name: GLB_glb.company.com
Domains: *.glb.company.com
Add Locations: Select "DataCentre-1" and "DataCentre-2"

Then we enable the GLB service:

Enable the GLB Service

Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Basic Settings"
Set "Enabled" to "Yes"

Next we tell the GLB service which resources are in which location:

Locations and Monitoring

Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Locations and Monitoring"
Add the IP addresses of the resources you will be doing GSLB between into the relevant location. In this example they are allocated as follows:

DataCentre-1: 172.16.10.100
DataCentre-2: 172.16.20.100

Don't worry about the "Monitors" section just yet; we will come back to it.

Next we will configure the GLB load-balancing mechanism:

Load Balancing Method

Navigate to "GLB Services > GLB_glb.company.com > Load Balancing"

By default the load-balancing algorithm will be set to "Adaptive" with a "Geo Effect" of 50%. For this setup we will set the algorithm to "Round Robin" while we are testing.

Set GLB Load Balancing Algorithm

Set the "load balancing algorithm" to "Round Robin"

The last step is to bind the GLB service "GLB_glb.company.com" to our DNS virtual server.

Binding GLB Service Profile

Navigate to "Services > Virtual Servers > vs_GLB_DNS > GLB Services > Add new GLB Service"
Select "GLB_glb.company.com" from the list and click "Add Service"
Step 3: Testing Round Robin

Now that we have GLB applied to the "glb.company.com" zone, we can test GLB in action. Using DNS tools such as dig or nslookup (again, do not use ping as a DNS testing tool), make sure that you can query your STM DNS virtual servers and see what happens to requests for "www.glb.company.com". Following is test output from the Linux dig command, with our comments marked with <--(i)--| :

Testing

[email protected] $ dig @172.16.10.101 www.glb.company.com

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.101 www.glb.company.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17761
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com. IN A

;; ANSWER SECTION:
www.glb.company.com. 60 IN A 172.16.20.100 <--(i)--| DataCentre-2 response

;; AUTHORITY SECTION:
glb.company.com. 604800 IN NS stm1.glb.company.com.
glb.company.com. 604800 IN NS stm2.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com. 604800 IN A 172.16.10.101
stm2.glb.company.com. 604800 IN A 172.16.20.101

;; Query time: 1 msec
;; SERVER: 172.16.10.101#53(172.16.10.101)
;; WHEN: Thu Mar 21 13:32:27 2013
;; MSG SIZE rcvd: 123

[email protected] $ dig @172.16.10.101 www.glb.company.com

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.101 www.glb.company.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9098
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com. IN A

;; ANSWER SECTION:
www.glb.company.com. 60 IN A 172.16.10.100 <--(i)--| DataCentre-1 response

;; AUTHORITY SECTION:
glb.company.com. 604800 IN NS stm2.glb.company.com.
glb.company.com. 604800 IN NS stm1.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com. 604800 IN A 172.16.10.101
stm2.glb.company.com. 604800 IN A 172.16.20.101

;; Query time: 8 msec
;; SERVER: 172.16.10.101#53(172.16.10.101)
;; WHEN: Thu Mar 21 13:32:27 2013
;; MSG SIZE rcvd: 123

Step 4: GLB Health Monitors

Now that we have GLB running in round-robin mode, the next thing to do is to set up HTTP health monitors, so that GLB can tell whether the application in each data centre is available before we send customers to that data centre:

Create GLB Health Monitors

Navigate to "Catalogs > Monitors > Monitors Catalog > Create new monitor"
Fill out the form with the following values:

Name: GLB_mon_www_AU
Type: HTTP monitor
Scope: GLB/Pool
IP or Hostname to monitor: 172.16.10.100:80

Repeat for the other data centre:

Name: GLB_mon_www_US
Type: HTTP monitor
Scope: GLB/Pool
IP or Hostname to monitor: 172.16.20.100:80

Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Locations and Monitoring"
In DataCentre-1, in the field labelled "Add new monitor to the list", select "GLB_mon_www_AU" and click Update.
In DataCentre-2, in the field labelled "Add new monitor to the list", select "GLB_mon_www_US" and click Update.

Step 5: Activate your preferred GLB load-balancing logic

Now that you have GLB set up and can detect application failures in each data centre, you can turn on the GLB load-balancing algorithm that is right for your application. You can choose between:

GLB Load Balancing Methods

Load
Geo
Round Robin
Adaptive
Weighted Random
Active-Passive

The online help has a good description of each of these load-balancing methods. You should take care to read it and select the one most appropriate for your business requirements and environment.

Step 6: Test everything

Once you have your GLB up and running, it is important to test it for all the failure scenarios you want it to cover. Remember: failover that has not been tested is not failover...
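As a mental model of what Steps 3 and 4 combine to do, the sketch below filters the zone's A records down to healthy data centres and rotates through them round-robin. This is only an illustration of the behaviour, not vTM's implementation; the record data comes from the zone file above, while the function names are our own.

```python
from itertools import cycle

# A records for www.glb.company.com, per data centre (from the sample zone file)
RECORDS = {"DataCentre-1": "172.16.10.100", "DataCentre-2": "172.16.20.100"}

def make_responder(health):
    """Return a callable that answers each query with one healthy record,
    rotating round-robin. 'health' maps location name -> monitor state (bool)."""
    healthy = [ip for loc, ip in sorted(RECORDS.items()) if health.get(loc)]
    if not healthy:
        return lambda: None  # no healthy location: nothing left to serve
    rotation = cycle(healthy)
    return lambda: next(rotation)
```

Successive queries alternate between data centres while both monitors pass; when a monitor marks a location down, its record simply drops out of the answers, which is exactly the failover behaviour the test matrix in Step 6 checks.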
Following is a test matrix that you can use to check the essentials:

Test # | Condition                                               | Failure Detected By / Logic Implemented By | GLB Responded as Designed
1      | All pool members in DataCentre-1 not available          | GLB Health Monitor                         | Yes / No
2      | All pool members in DataCentre-2 not available          | GLB Health Monitor                         | Yes / No
3      | Failure of STM1                                         | GLB Health Monitor on STM2                 | Yes / No
4      | Failure of STM2                                         | GLB Health Monitor on STM1                 | Yes / No
5      | Customers are sent to the geographically correct DataCentre | GLB Load Balancing Mechanism           | Yes / No

Notes on testing GLB:

The reason we instruct you to use dig or nslookup for testing your DNS in this guide, rather than a tool that also does a DNS resolution such as ping, is that dig and nslookup bypass your local host's DNS cache. Cached DNS records will prevent you from seeing changes in the status of your GLB while the cache entries are valid.

The Final Step - Create your CNAME:

Now that you have a working GLB entry for "www.glb.company.com", all that is left to do is to create or change the record for the real site "www.company.com" to be a CNAME for "www.glb.company.com".

Sample Zone File

;
; BIND data file for company.com
;
$TTL 604800
@ IN SOA ns1.company.com. info.company.com. (
        201303211312 ; Serial
        7200         ; Refresh
        120          ; Retry
        2419200      ; Expire
        604800 )     ; Default TTL
;
@ IN NS ns1.company.com.
; Here is our CNAME
www IN CNAME www.glb.company.com.
This short article explains how you can match the IP addresses of remote clients against a DNS blacklist. In this example, we'll use the Spamhaus XBL blacklist service (http://www.spamhaus.org/xbl/).

This article was updated following discussion and feedback from Ulrich Babiak - thanks!

Basic principles

The basic principle of a DNS-based blacklist such as Spamhaus' is as follows:

Perform a reverse DNS lookup of the IP address in question, using xbl.spamhaus.org rather than the traditional in-addr.arpa domain.
Entries that are not in the blacklist return no result (NXDOMAIN); entries that are in the blacklist resolve to a particular IP/domain response indicating their status.

Important note: some public DNS servers don't respond to spamhaus.org lookups (see http://www.spamhaus.org/faq/section/DNSBL%20Usage#261). Ensure that Traffic Manager is configured to use a working DNS server.

Simple implementation

A simple implementation is as follows:

$ip = request.getRemoteIP();

# Reverse the IP, and append ".xbl.spamhaus.org"
$bytes = string.dottedToBytes( $ip );
$bytes = string.reverse( $bytes );
$query = string.bytesToDotted( $bytes ) . ".xbl.spamhaus.org";

if( $res = net.dns.resolveHost( $query ) ) {
   log.warn( "Connection from IP " . $ip . " should be blocked - status: " . $res );
   # Refer to Zen return codes at http://www.spamhaus.org/zen/
}

This implementation will issue a DNS request on every request, but Traffic Manager caches DNS responses internally, so there's little risk that you will overload the target DNS server with duplicate requests:

Traffic Manager DNS settings in the Global configuration

You may wish to increase the dns!negative_expiry setting, because DNS lookups against non-blacklisted IP addresses will 'fail'.
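If you want to sanity-check the reverse-lookup name construction outside TrafficScript, here is a minimal Python sketch of the same octet-reversal logic. The zone suffix and example behaviour follow the article; the function name is our own.

```python
def dnsbl_query_name(ip, zone="xbl.spamhaus.org"):
    """Reverse the octets of a dotted-quad IPv4 address and append the
    DNSBL zone, mirroring the dottedToBytes/reverse/bytesToDotted steps."""
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    return ".".join(reversed(octets)) + "." + zone

# The actual lookup would then be socket.gethostbyname(dnsbl_query_name(ip)),
# where a resolution failure (NXDOMAIN) means "not listed".
```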
A more sophisticated implementation may interpret the response codes and decide to block requests from proxies (the Spamhaus XBL list), while ignoring requests from known spam sources.   What if my DNS server is slow, or fails?  What if I want to use a different resolver for the blacklist lookups?   One undesired consequence of this configuration is that it makes the DNS server a single point of failure and a performance bottleneck.  Each unrecognised (or expired) IP address needs to be matched against the DNS server, and the connection is blocked while this happens.    In normal usage, a single delay of 100ms or so against the very first request is acceptable, but a DNS failure (Stingray times out after 12 seconds by default) or slowdown is more serious.   In addition, Traffic Manager uses a single system-wide resolver for all DNS operations.  If you are hosting a local cache of the blacklist, you'd want to separate DNS traffic accordingly.   Use Traffic Manager to manage the DNS traffic?   A potential solution would be to configure Traffic Manager to use itself (127.0.0.1) as a DNS resolver, and create a virtual server/pool listening on UDP:53.  All locally-generated DNS requests would be delivered to that virtual server, which would then forward them to the real DNS server.  The virtual server could inspect the DNS traffic and route blacklist lookups to the local cache, and other requests to a real DNS server.   You could then use a health monitor (such as the included dns.pl) to check the operation of the real DNS server and mark it as down if it has failed or times out after a short period.  In that event, the virtual server can determine that the pool is down ( pool.activenodes() == 0 ) and respond directly to the DNS request using a response generated by HowTo: Respond directly to DNS requests using libDNS.rts.   
Re-implement the resolver

An alternative is to re-implement the TrafficScript resolver using Matthew Geldert's libDNS.rts (Interrogating and managing DNS traffic in Traffic Manager) TrafficScript library to construct the queries and analyse the responses. Then you can use the TrafficScript function tcp.send() to submit your DNS lookups to the local cache (unfortunately, we've not got a udp.send function yet!):

sub resolveHost( $host, $resolver ) {
   import libDNS.rts as dns;

   $packet = dns.newDnsObject();
   $packet = dns.setQuestion( $packet, $host, "A", "IN" );
   $data = dns.convertObjectToRawData( $packet, "tcp" );

   $sock = tcp.connect( $resolver, 53, 1000 );
   tcp.write( $sock, $data, 1000 );
   $rdata = tcp.read( $sock, 1024, 1000 );
   tcp.close( $sock );

   $resp = dns.convertRawDatatoObject( $rdata, "tcp" );

   if( $resp["answercount"] >= 1 ) return $resp["answer"][0]["host"];
}

Note that we're applying 1000ms timeouts to each network operation.

Let's try this, and compare the responses from OpenDNS and from Google's DNS servers. Our 'bad guy' is 201.116.241.246, so we're going to resolve 246.241.116.201.xbl.spamhaus.org:

$badguy = "246.241.116.201.xbl.spamhaus.org";
$text = "";

$text .= "Trying OpenDNS...\n";
$host = resolveHost( $badguy, "208.67.222.222" );
if( $host ) {
   $text .= $badguy . " resolved to " . $host . "\n";
} else {
   $text .= $badguy . " did not resolve\n";
}

$text .= "Trying Google...\n";
$host = resolveHost( $badguy, "8.8.8.8" );
if( $host ) {
   $text .= $badguy . " resolved to " . $host . "\n";
} else {
   $text .= $badguy . " did not resolve\n";
}

http.sendResponse( 200, "text/plain", $text, "" );

(This is just a snippet - remember to paste the resolveHost() implementation, and anything else you need, in here.)

This illustrates that OpenDNS resolves the spamhaus.org domain fine, and Google does not issue a response.

Caching the responses

This approach has one disadvantage; because it does not use Traffic Manager's resolver, it does not cache the responses, so you'll hit the resolver on every request unless you cache the responses yourself.

Here's a function that calls the resolveHost function above and caches the result locally for 3600 seconds. It returns 'B' for a bad guy and 'G' for a good guy:

sub getStatus( $ip, $resolver ) {
   $key = "xbl-spamhaus-org-" . $resolver . "-" . $ip;  # Any key prefix will do

   $cache = data.get( $key );
   if( $cache ) {
      $status = string.left( $cache, 1 );
      $expiry = string.skip( $cache, 1 );

      if( $expiry < sys.time() ) {
         data.remove( $key );
         $status = "";
      }
   }

   if( !$status ) {
      # We don't have a (valid) entry in our cache, so look the IP up

      # Reverse the IP, and append ".xbl.spamhaus.org"
      $bytes = string.dottedToBytes( $ip );
      $bytes = string.reverse( $bytes );
      $query = string.bytesToDotted( $bytes ) . ".xbl.spamhaus.org";

      $host = resolveHost( $query, $resolver );

      if( $host ) {
         $status = "B";
      } else {
         $status = "G";
      }
      data.set( $key, $status . (sys.time()+3600) );
   }
   return $status;
}
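The "status byte plus expiry timestamp" caching pattern in getStatus() is easy to prototype in Python. This is a hedged sketch of the same idea (names and the in-memory dict are ours; the real rule keys the cache by resolver as well as IP, and uses Traffic Manager's shared data.get/data.set store):

```python
import time

_cache = {}  # ip -> (status, expiry), standing in for data.get/data.set

def get_status(ip, lookup, ttl=3600):
    """Return cached 'B' (bad) / 'G' (good) status for ip, calling
    lookup(ip) only when the entry is missing or has expired."""
    entry = _cache.get(ip)
    if entry and entry[1] > time.time():
        return entry[0]              # still-valid cached verdict
    status = "B" if lookup(ip) else "G"
    _cache[ip] = (status, time.time() + ttl)
    return status
```

As in the TrafficScript version, a listed address resolves to something (so the lookup returns a truthy value) and is cached as 'B'; an unlisted address fails to resolve and is cached as 'G' until the TTL passes.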
We spend a great deal of time focusing on how to speed up customers' web services. We constantly research new techniques to load balance traffic, optimise network connections and improve the performance of overloaded application servers. The techniques and options available from us (and yes, from our competitors too!) may seem bewildering at times. So I would like to spend a short time singing the praises of one specific feature, which I can confidently say will improve your website's performance above all others - caching your website.   "But my website is uncacheable! It's full of dynamic, changing pages. Caching is useless to me!"   We'll answer that objection soon, but first, it is worth a quick explanation of the two main styles of caching:     Client-side caching   Most people's experience of a web cache is on their web browser. Internet Explorer or Firefox will store copies of web pages on your hard drive, so if you visit a site again, it can load the content from disk instead of over the Internet.   There's another layer of caching going on though. Your ISP may also be doing some caching. The ISP wants to save money on their bandwidth, and so puts a big web cache in front of everyone's Internet access. The cache keeps copies of the most-visited web pages, storing bits of many different websites. A popular and widely used open-source web cache is Squid.   However, not all web pages are cacheable near the client. Websites have dynamic content, so for example any web page containing personalized or changing information will not be stored in your ISP's cache. Generally the cache will fill up with "static" content such as images, movies, etc. These get stored for hours or days. For your ISP, this is great, as these big files take up the most of their precious bandwidth.   For someone running their own website, the browser caching or ISP caching does not do much. 
They might save a little bandwidth from the ISP image caching if they have lots of visitors from the same ISP, but the bulk of the website, including most of the content generated by their application servers, will not be cached and their servers will still have lots of work to do.   Server-side caching (with Traffic Manager)   Here, the main aim is not to save bandwidth, but to accelerate your website. The traffic manager sits in your datacenter (or your cloud provider), in front of your web and application servers. Access to your website is through the Traffic Manager software, so it sees both the requests and responses. Traffic Manager can then start to answer these requests itself, delivering cached responses. Your servers then have less work to do. Less work = faster responses = fewer servers needed = saves money!   "But I told you - my website isn't cacheable!"   There's a reason why your website is marked uncacheable. Remember the ISP caches...? They mustn't store your changing, constantly updating web pages. To enforce this, application servers send back instructions with every web page, the Cache-Control HTTP header, saying "Don't cache this". Traffic Manager obeys these cache instructions too, because it's well-behaved.   But, think - how often does your website really change? Take a very busy site, for example a popular news site. Its front page may be labelled as uncacheable so that visitors always see the latest news, since it changes as new stories are added. But new additions aren't happening every second of the day. What if the page was marked as cacheable - for just one second? Visitors would still see the most up-to-date news, but the load on the site servers would plummet. Even if the website had as few as ten views in a second, this simple change would reduce the load on the app servers ten-fold.   This isn't an isolated example - there are plenty of others: Think twitter searches, auction listings, "live" graphing, and so on. 
All such content can be cached briefly without any noticeable change to the "liveness" of the site. Traffic Manager can deliver a cached version of your web page much faster than your application servers - not just because it is highly optimized, but because sending a cached copy of a page is so much less work than generating it from scratch.   So if this simple cache change is so great, why don't people use this technique more - surely app servers can mark their web pages as cacheable for one or two seconds without Traffic Manager's help, and those browser/ISP caches can then do the magic? Well, the browser caches aren't going to be any use - an individual isn't going to be viewing the same page on your website multiple times a second (and if they keep hitting the reload button, their page requests are not cacheable anyway). So how about those big ISP caches? Unfortunately, they aren't always clever enough either. Some see a web page marked as cacheable for a short time and will either:   Not cache it at all (it's going to expire soon, what's the point in keeping it?) or will cache it for much longer (if it is cacheable for 3 seconds, why not cache it for 300, right?)   Also, by leaving the caching to the client side, the cache hit rate gets worse. A user in France isn't going to be able to make use of a cached copy of your site stored in a US ISP's cache, for instance.   If you use Traffic Manager to do the caching, these issues can be solved. First, the cache is held in one place - your datacenter, so it is available to all visitors. Second, Traffic Manager can tweak the cache instructions for the page, so it caches the page while forcing other people not to. Here is what's going on:     Request arrives at Traffic Manager, which sends it on to your application server. App server sends web page response back to the traffic manager. The page has a Cache-Control: no-cache header, since the app server thinks the page can't be cached. 
TrafficScript response rule identifies the page as one that can be cached, for a short time. It changes the cache instructions to Cache-Control: max-age=3, meaning that the page can now be cached for three seconds. Traffic Manager's web cache stores the page. Traffic Manager sends out the response to the user (and to anyone else for the next three seconds), but changes the cache instructions to Cache-Control: no-cache, to ensure downstream caches, ISP caches and web browsers do not try to cache the page further.   Result: a much faster web site, yet it still serves dynamic and constantly updating pages to viewers. Give it a try - you will be amazed at the performance improvements possible, even when caching for just a second. Remember, almost anything can be cached if you configure your servers correctly!   How to set up Traffic Manager   On the admin server, edit the virtual server that you want to cache, and click on the "Content Caching" link. Enable the cache. There are options here for the default cache time for pages. These can be changed as desired, but are primarily for the "ordinary" content that is cacheable normally, such as images, etc. The "webcache!control_out" setting allows you to change the Cache-Control header for your pages after they have been cached by the Traffic Manager software, so you can put "no-cache" here to stop others from caching your pages.   The "webcache!refresh_time" setting is a useful extra here. Set this to one second. This will smooth out the load on your app servers. When a cached page is about to expire (i.e. it's too old to stay cached) and a new request arrives, Traffic Manager will hand over a single request to your app servers, to see if there is a newer page available. Other requests continue to get served from the cache. This can prevent 'waves' of requests hitting your app servers when a page is about to expire from the cache.   
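The two header rewrites described above (cache internally with a short max-age, then send no-cache downstream) can be sketched as plain functions. This is an illustration of the logic only, under our own hypothetical names; in Traffic Manager the first rewrite is the TrafficScript/RuleBuilder response rule and the second is the webcache!control_out setting.

```python
def rewrite_for_cache(headers, cacheable_paths, path, max_age=3):
    """Response-rule side: for pages we allow brief caching of, replace the
    app server's 'no-cache' with a short max-age so the front-end cache
    will store the page."""
    if path in cacheable_paths:
        return dict(headers, **{"Cache-Control": "max-age=%d" % max_age})
    return dict(headers)

def rewrite_for_client(headers):
    """control_out side: whatever was cached internally, tell downstream
    ISP caches and browsers not to cache the page any further."""
    return dict(headers, **{"Cache-Control": "no-cache"})
```

The net effect is the one the article describes: the page is served from the front-end cache for up to three seconds, while every copy sent to clients still carries "no-cache".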
Now, we need to make Traffic Manager cache the specific pages of your site that the app server claims are uncacheable. We do this using the RuleBuilder system for defining rules, so click on the "Catalogs" icon and then select the "Rules" tab. Now create a new RuleBuilder rule.   This rule needs to run for the specific web pages that you wish to make cacheable for short amounts of time. For an example, we'll make "/news" cacheable. Add a condition of "HTTP:URL Path" to match "/news", then add an action to set a HTTP response header. The rule should look like this:     Finally, add this rule as a response rule to your virtual server. That's it! Your site should now start to be cached. Just a final few words of caution:   Be selective in the pages that you mark as cacheable; remember that personalized pages (e.g. showing a username) cannot be cached otherwise other people will see those pages too! If necessary, some page redesign might be called for to split the content into "generic" and "user-specific" iframes or AJAX requests. Server-side caching saves you CPU time, not bandwidth. If your website is slow because you are hitting your site throughput limits, then other techniques are needed.
When you need to scale out your MySQL database, replication is a good way to proceed. Database writes (UPDATEs) go to a 'master' server and are replicated across a set of 'slave' servers. Reads (SELECTs) are load-balanced across the slaves.   Overview   MySQL's replication documentation describes how to configure replication:   MySQL Replication   A quick solution...   If you can modify your MySQL client application to direct 'Write' (i.e. 'UPDATE') connections to one IP address/port and 'Read' (i.e. 'SELECT') connections to another, then this problem is trivial to solve. This generally needs a code update (Using Replication for Scale-Out).   You will need to direct the 'Update' connections to the master database (or through a dedicated Traffic Manager virtual server), and direct the 'Read' connections to a Traffic Manager virtual server (in 'generic server first' mode) and load-balance the connections across the pool of MySQL slave servers using the 'least connections' load-balancing method: Routing connections from the application   However, in most cases, you probably don't have that degree of control over how your client application issues MySQL connections; all connections are directed to a single IP:port. A load balancer will need to discriminate between different connection types and route them accordingly.   Routing MySQL traffic   A MySQL database connection is authenticated by a username and password. In most database designs, multiple users with different access rights are used; less privileged user accounts can only read data (issuing 'SELECT' statements), and more privileged users can also perform updates (issuing 'UPDATE' statements). A well architected application with sound security boundaries will take advantage of these multiple user accounts, using the account with least privilege to perform each operation. This reduces the opportunities for attacks like SQL injection to subvert database transactions and perform undesired updates.   
This article describes how to use Traffic Manager to inspect and manage MySQL connections, routing connections authenticated with privileged users to the master database and load-balancing other connections to the slaves:

Load-balancing MySQL connections

Designing a MySQL proxy

Stingray Traffic Manager functions as an application-level (layer-7) proxy. Most protocols are relatively easy for layer-7 proxies like Traffic Manager to inspect and load-balance, and work 'out of the box' or with relatively little configuration.

For more information, refer to the article Server First, Client First and Generic Streaming Protocols.

Proxying MySQL connections

MySQL is much more complicated to proxy and load-balance.

When a MySQL client connects, the server immediately responds with a randomly generated challenge string (the 'salt'). The client then authenticates itself by responding with the username for the connection and a copy of the 'salt' encrypted using the corresponding password:

Connect and Authenticate in MySQL

If the proxy is to route and load-balance based on the username in the connection, it needs to correctly authenticate the client connection first. When it finally connects to the chosen MySQL server, it will then have to re-authenticate the connection with the back-end server using a different salt.

Implementing a MySQL proxy in TrafficScript

In this example, we're going to proxy MySQL connections from two users - 'mysqlmaster' and 'mysqlslave' - directing connections to the 'SQL Master' and 'SQL Slaves' pools as appropriate.

The proxy is implemented using two TrafficScript rules ('mysql-request' and 'mysql-response') on a 'server-first' virtual server listening on port 3306 for MySQL client connections.
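The challenge-response arithmetic the proxy must reproduce is MySQL's native-password scheme: the client sends SHA1(password) XOR SHA1(salt + SHA1(SHA1(password))), and the server (or proxy) only stores the double-SHA1 of the password. This Python sketch shows both sides of that exchange; the helper names are ours, but the hashing steps match the TrafficScript rule's $passwd1/$passwd2 logic.

```python
import hashlib

def sha1(data):
    return hashlib.sha1(data).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def client_scramble(password, salt):
    """What the MySQL client sends back:
    SHA1(pw) XOR SHA1(salt + SHA1(SHA1(pw)))."""
    stage1 = sha1(password)
    return xor(stage1, sha1(salt + sha1(stage1)))

def proxy_verifies(scramble, salt, stored_double_sha1):
    """What the proxy does with only the stored double-SHA1: recover
    SHA1(pw) by XOR, hash it once more, and compare."""
    stage1 = xor(scramble, sha1(salt + stored_double_sha1))
    return sha1(stage1) == stored_double_sha1
```

This is why the rule only needs the 40-character hash copied from the mysql users table, never the clear-text password, and why it can later re-encrypt the recovered SHA1(pw) against the back-end server's different salt.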
Together, the rules implement a simple state machine that mediates between the client and server:   Implementing a MySQL proxy in TrafficScript   The state machine authenticates and inspects the client connection before deciding which pool to direct the connection to. The rule needs to know the encrypted password and desired pool for each user. The virtual server should be configured to send traffic to the built-in 'discard' pool by default.   The request rule:   Configure the following request rule on a 'server first' virtual server. Edit the values at the top to reflect the encrypted passwords (copied from the MySQL users table) and desired pools:   sub encpassword( $user ) { # From the mysql users table - double-SHA1 of the password # Do not include the leading '*' in the long 40-byte encoded password if( $user == "mysqlmaster" ) return "B17453F89631AE57EFC1B401AD1C7A59EFD547E5"; if( $user == "mysqlslave" ) return "14521EA7B4C66AE94E6CFF753453F89631AE57EF"; } sub pool( $user ) { if( $user == "mysqlmaster" ) return "SQL Master"; if( $user == "mysqlslave" ) return "SQL Slaves"; } $state = connection.data.get( "state" ); if( !$state ) { # First time in; we've just received a fresh connection $salt1 = randomBytes( 8 ); $salt2 = randomBytes( 12 ); connection.data.set( "salt", $salt1.$salt2 ); $server_hs = "\0\0\0\0" . # length - fill in below "\012" . # protocol version "Stingray Proxy v0.9\0" . # server version "\01\0\0\0" . # thread 1 $salt1."\0" . # salt(1) "\054\242" . # Capabilities "\010\02\0" . # Lang and status "\0\0\0\0\0\0\0\0\0\0\0\0\0" . # Unused $salt2."\0"; # salt(2) $l = string.length( $server_hs )-4; # Will be <= 255 $server_hs = string.replaceBytes( $server_hs, string.intToBytes( $l, 1 ), 0 ); connection.data.set( "state", "wait for clienths" ); request.sendResponse( $server_hs ); break; } if( $state == "wait for clienths" ) { # We've received the client handshake. 
$chs = request.get( 1 ); $chs_len = string.bytesToInt( $chs ); $chs = request.get( $chs_len + 4 ); # user starts at byte 36; password follows after $i = string.find( $chs, "\0", 36 ); $user = string.subString( $chs, 36, $i-1 ); $encpasswd = string.subString( $chs, $i+2, $i+21 ); $passwd2 = string.hexDecode( encpassword( $user ) ); $salt = connection.data.get( "salt" ); $passwd1 = string_xor( $encpasswd, string.hashSHA1( $salt.$passwd2 ) ); if( string.hashSHA1( $passwd1 ) != $passwd2 ) { log.warn( "User '" . $user . "': authentication failure" ); connection.data.set( "state", "authentication failed" ); connection.discard(); } connection.data.set( "user", $user ); connection.data.set( "passwd1", $passwd1 ); connection.data.set( "clienths", $chs ); connection.data.set( "state", "wait for serverhs" ); request.set( "" ); # Select pool based on user pool.select( pool( $user ) ); break; } if( $state == "wait for client data" ) { # Write the client handshake we remembered from earlier to the server, # and piggyback the request we've just received on the end $req = request.get(); $chs = connection.data.get( "clienths" ); $passwd1 = connection.data.get( "passwd1" ); $salt = connection.data.get( "salt" ); $encpasswd = string_xor( $passwd1, string.hashSHA1( $salt . string.hashSHA1( $passwd1 ) ) ); $i = string.find( $chs, "\0", 36 ); $chs = string.replaceBytes( $chs, $encpasswd, $i+2 ); connection.data.set( "state", "do authentication" ); request.set( $chs.$req ); break; } # Helper function sub string_xor( $a, $b ) { $r = ""; while( string.length( $a ) ) { $a1 = string.left( $a, 1 ); $a = string.skip( $a, 1 ); $b1 = string.left( $b, 1 ); $b = string.skip( $b, 1 ); $r = $r . chr( ord( $a1 ) ^ ord ( $b1 ) ); } return $r; }   The response rule   Configure the following as a response rule, set to run every time, for the MySQL virtual server.   
$state = connection.data.get( "state" ); $authok = "\07\0\0\2\0\0\0\02\0\0\0"; if( $state == "wait for serverhs" ) { # Read server handshake, remember the salt $shs = response.get( 1 ); $shs_len = string.bytesToInt( $shs )+4; $shs = response.get( $shs_len ); $salt1 = string.substring( $shs, $shs_len-40, $shs_len-33 ); $salt2 = string.substring( $shs, $shs_len-13, $shs_len-2 ); connection.data.set( "salt", $salt1.$salt2 ); # Write an authentication confirmation now to provoke the client # to send us more data (the first query). This will prepare the # state machine to write the authentication to the server connection.data.set( "state", "wait for client data" ); response.set( $authok ); break; } if( $state == "do authentication" ) { # We're expecting two responses. # The first is the authentication confirmation which we discard. $res = response.get(); $res1 = string.left( $res, 11 ); $res2 = string.skip( $res, 11 ); if( $res1 != $authok ) { $user = connection.data.get( "user" ); log.info( "Unexpected authentication failure for " . $user ); connection.discard(); } connection.data.set( "state", "complete" ); response.set( $res2 ); break; }   Testing your configuration   If you have several MySQL databases to test against, testing this configuration is straightforward. Edit the request rule to add the correct passwords and pools, and use the mysql command-line client to make connections:   $ mysql -h zeus -u username -p Enter password: *******   Check the 'current connections' list in the Traffic Manager UI to see how it has connected each session to a back-end database server.   If you encounter problems, try the following steps:   Ensure that trafficscript!variable_pool_use is set to 'Yes' in the Global Settings page on the UI. This setting allows you to use non-literal values in pool.use() and pool.select() TrafficScript functions. 
Turn on the log!client_connection_failures and log!server_connection_failures settings in the Virtual Server > Connection Management configuration page; these settings will configure the traffic manager to write detailed debug messages to the Event Log whenever a connection fails.   Then review your Traffic Manager Event Log and your mysql logs in the event of an error.   Traffic Manager's access logging can be used to record every connection. You can use the special %{name}d log macro to record information stored using connection.data.set(), such as the username used in each connection.   Conclusion   This article has demonstrated how to build a fairly sophisticated protocol parser where the Traffic Manager-based proxy performs full authentication and inspection before making a load-balancing decision. The protocol parser then performs the authentication again against the chosen back-end server.   Once the client-side and server-side handshakes are complete, Traffic Manager will simply forward data back and forth between the client and the server.   This example addresses the problem of scaling out your MySQL database, giving load-balancing and redundancy for database reads ('SELECTs'). It does not address the problem of scaling out your master 'write' server - you need to address that by investing in a sufficiently powerful server, architecting your database and application to minimise the number and impact of write operations, or by selecting a full clustering solution.     The solution leaves a single point of failure, in the form of the master database. This problem could be effectively dealt with by creating a monitor that tests the master database for correct operation. If it detects a failure, the monitor could promote one of the slave databases to master status and reconfigure the 'SQLMaster' pool to direct write (UPDATE) traffic to the new MySQL master server.   
Acknowledgements   Ian Redfern's MySQL protocol description was invaluable in developing the proxy code.     Appendix - Password Problems? This example assumes that you are using MySQL 4.1.x or later (it was tested with MySQL 5 clients and servers), and that your database has passwords in the 'long' 41-byte MySQL 4.1 (and later) format (see http://dev.mysql.com/doc/refman/5.0/en/password-hashing.html)   If you upgrade a pre-4.1 MySQL database to 4.1 or later, your passwords will remain in the pre-4.1 'short' format.   You can verify what password format your MySQL database is using as follows:   mysql> select password from mysql.user where user='username'; +------------------+ | password         | +------------------+ | 6a4ba5f42d7d4f51 | +------------------+ 1 rows in set (0.00 sec)   mysql> update mysql.user set password=PASSWORD('password') where user='username'; Query OK, 1 rows affected (0.00 sec) Rows matched: 1  Changed: 1  Warnings: 0   mysql> select password from mysql.user where user='username'; +-------------------------------------------+ | password                                  | +-------------------------------------------+ | *14521EA7B4C66AE94E6CFF753453F89631AE57EF | +-------------------------------------------+ 1 rows in set (0.00 sec)   If you can't create 'long' passwords, your database may be stuck in 'short' password mode. Run the following command to resize the password table if necessary:   $ mysql_fix_privilege_tables --password=admin password   Check that 'old_passwords' is not set to '1' (see here) in your my.cnf configuration file.   Check that the mysqld process isn't running with the --old-passwords option.   Finally, ensure that the privileges you have configured apply to connections from the Stingray proxy. You may need to GRANT... TO 'user'@'%' for example.
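For illustration, the 'long' 4.1+ format is simply '*' followed by the uppercase hex of a double SHA-1 of the password. A hedged Python sketch of that relationship (the function name is mine; use MySQL's own PASSWORD() function for real values):

```python
import hashlib

def mysql41_password(password: str) -> str:
    # 'Long' 4.1+ format: '*' + 40 hex characters of SHA1(SHA1(password))
    stage2 = hashlib.sha1(hashlib.sha1(password.encode("utf-8")).digest())
    return "*" + stage2.hexdigest().upper()
```

The 40 hex characters after the leading '*' are exactly the double-SHA1 value the request rule's encpassword() subroutine expects.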
Top Deployment Guides   The following is a list of tested and validated deployment guides for common enterprise applications. Ask your sales team for the latest information.    Microsoft   Virtual Traffic Manager and Microsoft Lync 2013 Virtual Traffic Manager and Microsoft Lync 2010 Virtual Traffic Manager and Microsoft Skype for Business Virtual Traffic Manager and Microsoft Exchange 2010 Virtual Traffic Manager and Microsoft Exchange 2013 Virtual Traffic Manager and Microsoft Exchange 2016 Virtual Traffic Manager and Microsoft SharePoint 2013 Virtual Traffic Manager and Microsoft SharePoint 2010 Virtual Traffic Manager and Microsoft Outlook Web Access Virtual Traffic Manager and Microsoft Intelligent Application Gateway Virtual Traffic Manager and Microsoft IIS  Oracle   Virtual Traffic Manager and Oracle EBS 12.1 Virtual Traffic Manager and Oracle Enterprise Manager 12c Virtual Traffic Manager and Oracle Application Server 10G Virtual Traffic Manager and Oracle WebLogic Applications (Ex: PeopleSoft and Blackboard) Virtual Traffic Manager and Glassfish Application Server   VMware   Virtual Traffic Manager and VMware Horizon View Servers Virtual Traffic Manager Plugin for VMware vRealize Orchestrator   Other Applications   Virtual Traffic Manager and SAP NetWeaver Virtual Traffic Manager and Magento  
This article illustrates how to write data to a MySQL database from a Java Extension, and how to use a background thread to minimize latency and control the load on the database.   Being Lazy with Java Extensions   With a Java Extension, you can log data in real time to an external database. The example in this article describes how to log the ‘referring’ source that each visitor comes in from when they enter a website. Logging is done to a MySQL database, and it maintains a count of how many times each key has been logged, so that you can determine which sites are sending you the most traffic.   The article then presents a modification that illustrates how to lazily perform operations such as database writes in the background (i.e. asynchronously) so that the performance the end user observes is not impaired.   Overview - let's count referers!   It’s often very revealing to find out which web sites are referring the most traffic to the sites that you are hosting. Tools like Google Analytics and web log analysis applications are one way of doing this, but in this example we’ll show an alternative method where we log the frequency of referring sites to a local database for easy access.   When a web browser submits an HTTP request for a resource, it commonly includes a header called "Referer" which identifies the page that linked to that resource. We’re not interested in internal referrers – where one page in the site links to another. We’re only interested in external referrers. We're going to log these 'external referrers' to a MySQL database, counting the frequency of each so that we can easily determine which occur most commonly. 
Create the database   Create a suitable MySQL database, with limited write access for a remote user: % mysql -h dbhost -u root -p Enter password: ******** mysql> CREATE DATABASE website; mysql> CREATE TABLE website.referers ( data VARCHAR(256) PRIMARY KEY, count INTEGER ); mysql> GRANT SELECT,INSERT,UPDATE ON website.referers TO 'web'@'%' IDENTIFIED BY 'W38_U5er'; mysql> GRANT SELECT,INSERT,UPDATE ON website.referers TO 'web'@'localhost' IDENTIFIED BY 'W38_U5er'; mysql> QUIT;   Verify that the table was correctly created and the 'web' user can access it:   % mysql -h dbhost -u web -p Enter password: W38_U5er mysql> DESCRIBE website.referers; +-------+--------------+------+-----+---------+-------+ | Field | Type         | Null | Key | Default | Extra | +-------+--------------+------+-----+---------+-------+ | data  | varchar(256) | NO   | PRI |         |       | | count | int(11)      | YES  |     | NULL    |       | +-------+--------------+------+-----+---------+-------+ 2 rows in set (0.00 sec)   mysql> SELECT * FROM website.referers; Empty set (0.00 sec)   The database looks good...   Create the Java Extension   We'll create a Java Extension that writes to the database, adding rows with the provided 'data' value, and setting the 'count' value to '1', or incrementing it if the row already exists.   
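The behaviour we want from the extension is a 'counting upsert': insert a row with count 1, or increment the count if the key already exists. Sketched here in Python with sqlite3 standing in for MySQL (illustrative only; the Java extension below uses MySQL's native INSERT ... ON DUPLICATE KEY UPDATE instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE referers (data TEXT PRIMARY KEY, count INTEGER)")

def count_hit(conn, data):
    # Equivalent of MySQL's INSERT ... ON DUPLICATE KEY UPDATE count=count+1:
    # try the increment first; insert the row if it did not exist yet.
    cur = conn.execute("UPDATE referers SET count = count + 1 WHERE data = ?", (data,))
    if cur.rowcount == 0:
        conn.execute("INSERT INTO referers (data, count) VALUES (?, 1)", (data,))

count_hit(conn, "http://www.example.com/")
count_hit(conn, "http://www.example.com/")   # same key again: count becomes 2
```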
CountThis.java   Compile up the following 'CountThis' Java Extension:   import java.io.IOException; import java.io.PrintWriter; import java.sql.Connection; import java.sql.DriverManager; import java.sql.PreparedStatement; import javax.servlet.ServletConfig; import javax.servlet.ServletException; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; public class CountThis extends HttpServlet { private static final long serialVersionUID = 1L; private Connection conn = null; private String userName = null; private String password = null; private String database = null; private String table = null; private String dbserver = null; public void init( ServletConfig config) throws ServletException { super.init( config ); userName = config.getInitParameter( "username" ); password = config.getInitParameter( "password" ); table = config.getInitParameter( "table" ); dbserver = config.getInitParameter( "dbserver" ); if( userName == null || password == null || table == null || dbserver == null ) throw new ServletException( "Missing username, password, table or dbserver config value" ); try { Class.forName("com.mysql.jdbc.Driver").newInstance(); } catch( Exception e ) { throw new ServletException( "Could not initialize mysql: "+e.toString() ); } } public void doGet( HttpServletRequest req, HttpServletResponse res ) throws ServletException, IOException { try { String[] args = (String[])req.getAttribute( "args" ); String data = args[0]; if( data == null ) return; if( conn == null ) { conn = DriverManager.getConnection( "jdbc:mysql://"+dbserver+"/", userName, password); } PreparedStatement s = conn.prepareStatement( "INSERT INTO " + table + " ( data, count ) VALUES( ?, 1 ) " + "ON DUPLICATE KEY UPDATE count=count+1" ); s.setString(1, data); s.executeUpdate(); } catch( Exception e ) { conn = null; log( "Could not log data to database table '" + table + "': " + e.toString() ); } } public void doPost( 
HttpServletRequest req, HttpServletResponse res ) throws ServletException, IOException { doGet( req, res ); } }   Upload the resulting CountThis.class file to Traffic Manager's Java Catalog. Click on the class name to configure the following initialization properties:   You must also upload the mysql connector (I used mysql-connector-java-5.1.24-bin.jar) from dev.mysql.com to your Traffic Manager Java Catalog.   Add the TrafficScript rule   You can test the Extension very quickly using the following TrafficScript rule to log each request:   java.run( "CountThis", http.getPath() );   Check the Traffic Manager event log for any error messages, and query the table to verify that it is getting populated by the extension:   mysql> SELECT * FROM website.referers ORDER BY count DESC LIMIT 5; +--------------------------+-------+ | data                     | count | +--------------------------+-------+ | /media/riverbed.png      |     5 | | /articles                |     3 | | /media/puppies.jpg       |     2 | | /media/ponies.png        |     2 | | /media/cats_and_mice.png |     2 | +--------------------------+-------+ 5 rows in set (0.00 sec)   mysql> TRUNCATE website.referers; Query OK, 0 rows affected (0.00 sec)   Use 'Truncate' to delete all of the rows in a table.   Log and count referer headers   We only want to log referrers from remote sites, so use the following TrafficScript rule to call the Extension only when it is required:   # This site $host = http.getHeader( "Host" ); # The referring site $referer = http.getHeader( "Referer" ); # Only log the Referer if it is an absolute URI and it comes from a different site if( string.contains( $referer, "://" ) && !string.contains( $referer, "://".$host."/" ) ) { java.run( "CountThis", $referer ); }   Add this rule as a request rule to a virtual server that processes HTTP traffic.   As users access the site, the referer header will be pushed into the database. 
A quick database query will tell you what's there: % mysql -h dbhost -u web -p Enter password: W38_U5er mysql> SELECT * FROM website.referers ORDER BY count DESC LIMIT 4; +--------------------------------------------------+-------+ | data                                             | count | +--------------------------------------------------+-------+ | http://www.google.com/search?q=stingray        |    92 | | http://www.riverbed.com/products/stingray      |    45 | | http://www.vmware.com/appliances               |    26 | | http://www.riverbed.com/                       |     5 | +--------------------------------------------------+-------+ 4 rows in set (0.00 sec)   Lazy writes to the database   This is a useful application of Java Extensions, but it has one big drawback. Every time a visitor arrives from a remote site, his first transaction is stalled while the Java Extension writes to the database. This breaks one of the key rules of website performance architecture – do everything you can asynchronously (i.e. in the background) so that your users are not impeded (see "Lazy Websites run Faster").   
Instead, a better solution would be to maintain a separate, background thread that wrote the data in bulk to the database, while the foreground threads in the Java Extension simply appended the Referer data to a table:     CountThisAsync.java   The following Java Extension (CountThisAsync.java) is a modified version of CountThis.java that illustrates this technique:   import java.io.IOException; import java.sql.Connection; import java.sql.DriverManager; import java.sql.PreparedStatement; import java.sql.SQLException; import java.util.LinkedList; import javax.servlet.ServletConfig; import javax.servlet.ServletException; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; public class CountThisAsync extends HttpServlet { private static final long serialVersionUID = 1L; private Writer writer = null; protected static LinkedList theData = new LinkedList(); protected class Writer extends Thread { private Connection conn = null; private String table; private int syncRate = 20; public void init( String username, String password, String url, String table ) throws Exception { Class.forName("com.mysql.jdbc.Driver").newInstance(); conn = DriverManager.getConnection( url, username, password); this.table = table; start(); } public void run() { boolean running = true; while( running ) { try { sleep( syncRate*1000 ); } catch( InterruptedException e ) { running = false; }; try { PreparedStatement s = conn.prepareStatement( "INSERT INTO " + table + " ( data, count ) VALUES( ?, 1 )" + "ON DUPLICATE KEY UPDATE count=count+1" ); conn.setAutoCommit( false ); synchronized( theData ) { while( !theData.isEmpty() ) { String data = theData.removeFirst(); s.setString(1, data); s.addBatch(); } } s.executeBatch(); } catch ( Exception e ) { log( e.toString() ); running = false; } } } } public void init( ServletConfig config ) throws ServletException { super.init( config ); String userName = config.getInitParameter( 
"username" ); String password = config.getInitParameter( "password" ); String table = config.getInitParameter( "table" ); String dbserver = config.getInitParameter( "dbserver" ); if( userName == null || password == null || table == null || dbserver == null ) throw new ServletException( "Missing username, password, table or dbserver config value" ); try { writer = new Writer(); writer.init( userName, password, "jdbc:mysql://"+dbserver+"/", table ); } catch( Exception e ) { throw new ServletException( e.toString() ); } } public void doGet( HttpServletRequest req, HttpServletResponse res ) throws ServletException, IOException { String[] args = (String[])req.getAttribute( "args" ); String data = args[0]; if( data != null && writer.isAlive() ) { synchronized( theData ) { theData.add( data ); } } } public void doPost( HttpServletRequest req, HttpServletResponse res ) throws ServletException, IOException { doGet( req, res ); } public void destroy() { writer.interrupt(); try { writer.join( 1000L ); } catch( InterruptedException e ) {}; super.destroy(); } }   When the Extension is invoked by Traffic Manager, it simply stores the value of the Referer header in a local list and returns immediately. This minimizes any latency that the end user may observe.   The Extension creates a separate thread (embodied by the Writer class) that runs in the background. Every syncRate seconds, it removes all of the values from the list and writes them to the database.   Compile the extension: $ javac -cp servlet.jar:zxtm-servlet.jar CountThisAsync.java $ jar -cvf CountThisAsync.jar CountThisAsync*.class   ... and upload the resulting CountThisAsync.jar Jar file to your Java catalog. Remember to apply the four configuration parameters to the CountThisAsync.jar Java Extension so that it can access the database, and modify the TrafficScript rule so that it calls the CountThisAsync Java Extension.   
You’ll observe that database updates may be delayed by up to 20 seconds (you can tune that delay in the code), but the level of service that end users experience will no longer be affected by the speed of the database.
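The pattern CountThisAsync.java implements - enqueue on the fast path, flush in batches from a background thread - is language-neutral. A minimal Python sketch of the same idea (class and parameter names are mine, not part of the extension):

```python
import queue
import threading

class LazyWriter:
    """Collect items in memory and flush them in batches from a background
    thread, so callers never block on the (slow) datastore write."""

    def __init__(self, flush, sync_rate=1.0):
        self._q = queue.Queue()
        self._flush = flush              # callable that takes a list of items
        self._sync_rate = sync_rate      # seconds between background flushes
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def add(self, item):
        # Fast path: enqueue and return immediately.
        self._q.put(item)

    def _drain(self):
        batch = []
        while True:
            try:
                batch.append(self._q.get_nowait())
            except queue.Empty:
                break
        if batch:
            self._flush(batch)

    def _run(self):
        # wait() returns False on timeout (periodic flush), True when stopped.
        while not self._stop.wait(self._sync_rate):
            self._drain()
        self._drain()                    # final flush on shutdown

    def close(self):
        self._stop.set()
        self._thread.join()
```

As in the Java version, the trade-off is bounded staleness: items sit in memory for up to sync_rate seconds (and are lost if the process dies before a flush), in exchange for constant-time request handling.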
With more services being delivered through a browser, it's safe to say web applications are here to stay. The rapid growth of web-enabled applications and an increasing number of client devices mean that organizations are dealing with more document transfer methods than ever before. Providing easy access to these applications (web mail, intranet portals, document storage, etc.) can expose vulnerable points in the network.   When it comes to security and protection, application owners typically cover the common threats and vulnerabilities. What is often overlooked happens to be one of the first things we learned about the internet: virus protection. Some application owners consider the response "We have virus scanners running on the servers" sufficient. These same owners implement security plans that involve extending protection as far as possible, but surprisingly allow a virus to travel several layers into the architecture.   Pulse vADC can extend protection for your applications with unmatched software flexibility and scale. Utilize existing investments by installing Pulse vADC on your infrastructure (Linux, Solaris, VMware, Hyper-V, etc.) and integrate with existing antivirus scanners. Deploy Pulse vADC (available with many providers: Amazon, Azure, CoSentry, Datapipe, Firehost, GoGrid, Joyent, Layered Tech, Liquidweb, Logicworks, Rackspace, Sungard, Xerox, and many others) and externally proxy your applications to remove threats before they are in your infrastructure. Additionally, when serving as a forward proxy for clients, Pulse vADC can be used to mitigate virus propagation by scanning outbound content.   The Pulse Web Application Firewall ICAP Client Handler provides the possibility to integrate with an ICAP server. ICAP (Internet Content Adaption Protocol) is a protocol aimed at providing simple object-based content vectoring for HTTP services. The Web Application Firewall acts as an ICAP client and passes requests to a specified ICAP server. 
This enables you to integrate with third-party products based on the ICAP protocol. In particular, you can use the ICAP Client Handler as a virus scanner interface for scanning uploads to your web application.   Example Deployment   This deployment uses version 9.7 of the Pulse Traffic Manager with open source applications ClamAV and c-icap installed locally. If utilizing a cluster of Traffic Managers, this deployment should be performed on all nodes of the cluster. Additionally, Traffic Manager could be utilized as an ADC to extend availability and performance across multiple external ICAP application servers. I would also like to credit Thomas Masso, Jim Young, and Brian Gautreau - Thank you for your assistance!   "ClamAV is an open source (GPL) antivirus engine designed for detecting Trojans, viruses, malware and other malicious threats." - http://www.clamav.net/   "c-icap is an implementation of an ICAP server. It can be used with HTTP proxies that support the ICAP protocol to implement content adaptation and filtering services." - The c-icap project   Installation of ClamAV, c-icap, and libc-icap-mod-clamav   For this example, public repositories are used to install the packages on version 9.7 of the Traffic Manager virtual appliance with the default configuration. To install in a different manner or operating system, consult the ClamAV and c-icap documentation.   Run the following command (copy and paste) to back up the sources.list file: cp /etc/apt/sources.list /etc/apt/sources.list.rvbdbackup   Run the following commands to update the sources.list file. *Tested with Traffic Manager virtual appliance version 9.7. For other Ubuntu releases replace 'precise' with the release installed. Run "lsb_release -sc" to find out your release. 
cat <<EOF >> /etc/apt/sources.list deb http://ch.archive.ubuntu.com/ubuntu/ precise main restricted deb-src http://ch.archive.ubuntu.com/ubuntu/ precise main restricted deb http://us.archive.ubuntu.com/ubuntu/ precise universe deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe EOF   Run the following command to retrieve the updated package lists   apt-get update   Run the following command to install ClamAV, c-icap, and libc-icap-mod-clamav.   apt-get install clamav c-icap libc-icap-mod-clamav   Run the following command to restore your sources.list.   cp /etc/apt/sources.list.rvbdbackup /etc/apt/sources.list   Configure the c-icap ClamAV service   Run the following commands to add lines to the /etc/c-icap/c-icap.conf   cat <<EOF >> /etc/c-icap/c-icap.conf Service clamav srv_clamav.so ServiceAlias avscan srv_clamav?allow204=on&sizelimit=off&mode=simple srv_clamav.ScanFileTypes DATA EXECUTABLE ARCHIVE GIF JPEG MSOFFICE srv_clamav.MaxObjectSize 100M EOF   *Consult the ClamAV and c-icap documentation and customize the configuration and settings for ClamAV and c-icap (e.g. definition updates, ScanFileTypes, restricting c-icap access, etc.) for your deployment.   Just for fun run the following command to manually update the clamav database. /usr/bin/freshclam   Configure the ICAP Server to Start   This process can be completed a few different ways; for this example we are going to use the Event Alerting functionality of Traffic Manager to start the c-icap server when the Web Application Firewall is started.   Save the following bash script (for this example start_icap.sh) on your computer. #!/bin/bash /usr/bin/c-icap #END   Upload the script via the Traffic Manager UI under Catalogs > Extra Files > Action Programs. 
(see Figure 1) Figure 1      Create a new event type (for this example named "Firewall Started") under System > Alerting > Manage Event Types. Select "appfirewallcontrolstarted: Application firewall started" and click update to save. (See Figure 2) Figure 2      Create a new action (for this example named "Start ICAP") under System > Alerting > Manage Actions. Select the "Program" radio button and click "Add Action" to save. (See Figure 3) Figure 3     Configure the "Start ICAP" Action Program to use the "start_icap.sh" script, and for this example we will adjust the timeout setting to 300. Click Update to save. (See Figure 4) Figure 4      Configure the Alert Mapping under System > Alerting to use the Event type and Action previously created. Click Update to save your changes. (See Figure 5) Figure 5      Restart the Application Firewall or reboot to automatically start the c-icap server. Alternatively you can run the /usr/bin/c-icap command from the console or select "Update and Test" under the "Start ICAP" alert configuration page of the UI to manually start c-icap.   Configure the Web Application Firewall   Within the Web Application Firewall UI, add and configure the ICAPClientHandler using the following attribute and values.   icap_server_location - 127.0.0.1 icap_server_resource - /avscan   Testing Notes   Check the WAF application logs. Use Full logging for the Application configuration and enable_logging for the ICAPClientHandler. As with any system, use full logging with caution - logs can fill fast! Check the c-icap logs ( cat /var/log/c-icap/access.log & server.log). Note: Changing the /etc/c-icap/c-icap.conf "DebugLevel" value to 9 is useful for testing and recording to the /var/log/c-icap/server.log. *You may want to change this back to 1 when you are done testing. 
The Action Settings page in the Traffic Manager UI (for this example  Alerting > Actions > Start ICAP) also provides an "Update and Test" that allows you to trigger the action and start the c-icap server. Enable verbose logging for the "Start ICAP" action in the Traffic Manager for more information from the event mechanism. *You may want to change this setting back to disable when you are done testing.   Additional Information Pulse Secure Virtual Traffic Manager Pulse Secure Virtual Web Application Firewall Product Documentation RFC 3507 - Internet Content Adaptation Protocol (ICAP) The c-icap project Clam AntiVirus  
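As an additional protocol-level check, you can probe the ICAP service directly with an OPTIONS request (RFC 3507). The following Python sketch assumes c-icap's default port 1344 and the 'avscan' service name configured above; adjust both for your deployment:

```python
import socket

def build_icap_options(host: str, service: str) -> bytes:
    # Minimal ICAP OPTIONS request per RFC 3507.
    return ("OPTIONS icap://%s/%s ICAP/1.0\r\n"
            "Host: %s\r\n"
            "Encapsulated: null-body=0\r\n"
            "\r\n" % (host, service, host)).encode("ascii")

def icap_ping(host="127.0.0.1", port=1344, service="avscan") -> str:
    # Send the OPTIONS request and return the ICAP status line,
    # e.g. 'ICAP/1.0 200 OK' when the service is up.
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(build_icap_options(host, service))
        return sock.recv(4096).split(b"\r\n", 1)[0].decode("ascii", "replace")
```

A 200 status from icap_ping() confirms the c-icap service is reachable before you involve the Web Application Firewall in testing.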
Meta-tags and the meta-description are used by search engines and other tools to infer more information about a website, and their judicious and responsible use can have a positive effect on a page's ranking in search engine results. Suffice it to say, a page without any 'meta' information is likely to score lower than the same page with some appropriate information.   This article (originally published December 2006) describes how you can automatically infer and insert meta tags into a web page, on the fly.   Rewriting a response   First, decide what to use to generate a list of related keywords.   It would have been nice to have been able to slurp up all the text on the page and calculate the most commonly occurring unusual words. Surely that would have been the über-geek thing to do? Well, not really: unless I was careful I could end up slowing down each response, and there would be the danger that I produced a strange list of keywords that didn't accurately represent what the page is trying to say (and could also be widely "off-message").   So I instead turned to three sources of on-message page summaries - the title tag, the contents of the big h1 tag and the elements of the page path.   The script   First I had to get the response body:   $body = http.getResponseBody();   This will be grepped for keywords, mangled to add the meta-tags and then returned by setting the response body:   http.setResponseBody( $body );   Next I had to make a list of keywords. As I mentioned before, my first plan was to look at the path: by converting slashes to commas I should be able to generate some correct keywords, something like this:   $path = http.getPath(); $path = string.regexsub( $path, "/+", "; ","g" );   After adding a few lines to first tidy up the path: removing slashes at the beginning and end; and replacing underscores with spaces, it worked pretty well.    
And, for solely aesthetic reasons, I added   $path = string.uppercase( $path );   Then, I took a look at the title tag. Something like this did the trick:   if( string.regexmatch( $body, "<title>\\s*(.*?)\\s*</title>", "i" ) ) { $title_tag_text = $1; }   (the "i" flag here makes the search case-insensitive, just in case).   This, indeed, worked fine. With a little cleaning up, I was able to generate a meta-description similarly: I just stuck the pieces together after adding some punctuation (solely to make it nicer when read: search engines often return the meta-description in the search result).   After playing with this for a while I wasn't completely satisfied with the results: the meta-keywords looked great, but the meta-description was a little lacking in the real English department.   So instead I turned my attention to the h1 tag on each page: it should already be a mini-description of each page. I grepped it in a similar fashion to the title tag, and the generated description looked vastly improved.   Lastly, I added some code to check whether a page already has a meta-description or meta-keywords, to prevent the automatic tags being inserted in that case. This allows us to gradually add meta-tags by hand to our pages, and it means we always have a backup should we forget to add metas to a new page in the future.   The finished script looked like this:

# Only process HTML responses
$ct = http.getResponseHeader( "Content-Type" );
if( !string.startsWith( $ct, "text/html" ) ) break;

$body = http.getResponseBody();
$path = http.getPath();

# Remove the first and last slashes; convert remaining slashes and underscores
$path = string.regexsub( $path, "^/?(.*?)/?$", "$1" );
$path = string.replaceAll( $path, "_", " " );
$path = string.replaceAll( $path, "/", ", " );

if( string.regexmatch( $body, "<h1.*?>\\s*(.*?)\\s*</h1>", "i" ) ) {
   $h1 = $1;
   $h1 = string.regexsub( $h1, "<.*?>", "", "g" );
}

if( string.regexmatch( $body, "<title>\\s*(.*?)\\s*</title>", "i" ) ) {
   $title = $1;
   $title = string.regexsub( $title, "<.*?>", "", "g" );
}

if( $h1 ) {
   $description = "Riverbed - " . $h1 . ": " . $title;
   $keywords = "Riverbed, " . $path . ", " . $h1 . ", " . $title;
} else {
   $description = "Riverbed - " . $path . ": " . $title;
   $keywords = "Riverbed, " . $path . ", " . $title;
}

# Only add meta-keywords if the page doesn't already have some
if( !string.regexmatch( $body, "<meta\\s+name='keywords'", "i" ) ) {
   $meta_keywords = " <meta name='keywords' content='" . $keywords . "'/>\n";
}

# Only add a meta-description if the page doesn't already have one
if( !string.regexmatch( $body, "<meta\\s+name='description'", "i" ) ) {
   $meta_description = " <meta name='description' content='" . $description . "'/>";
}

# Find the title and insert the new meta tags afterwards
if( $meta_keywords || $meta_description ) {
   $body = string.regexsub( $body, "(<title>.*</title>)",
                            "$1\n" . $meta_keywords . $meta_description );
   http.setResponseBody( $body );
}

It should be fairly easy to adapt this to another site, assuming the pages are built consistently.   This article was originally written by Sam Phillips in December 2006, and was modified and tested in February 2013
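If you want to prototype the same extract-and-inject logic outside the Traffic Manager, here is a rough Python equivalent (a standalone sketch, not TrafficScript; the helper name and the "Riverbed" prefix are purely illustrative):

```python
import re

def add_meta_tags(body: str, site: str = "Riverbed") -> str:
    """Sketch of the TrafficScript rule above: extract <title> and <h1>,
    build meta keywords/description, and insert them after the title tag
    unless the page already carries its own meta tags."""
    title = h1 = ""
    m = re.search(r"<title>\s*(.*?)\s*</title>", body, re.I | re.S)
    if m:
        title = re.sub(r"<.*?>", "", m.group(1))
    m = re.search(r"<h1.*?>\s*(.*?)\s*</h1>", body, re.I | re.S)
    if m:
        h1 = re.sub(r"<.*?>", "", m.group(1))

    description = f"{site} - {h1}: {title}" if h1 else f"{site} - {title}"
    keywords = ", ".join(filter(None, [site, h1, title]))

    extra = ""
    if not re.search(r"<meta\s+name='keywords'", body, re.I):
        extra += f"<meta name='keywords' content='{keywords}'/>\n"
    if not re.search(r"<meta\s+name='description'", body, re.I):
        extra += f"<meta name='description' content='{description}'/>"
    if extra:
        # Insert the new tags immediately after the title element
        body = re.sub(r"(<title>.*?</title>)",
                      lambda t: t.group(1) + "\n" + extra, body, count=1)
    return body
```

As in the TrafficScript version, running it twice is harmless: pages that already carry meta tags are left untouched.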
When deploying applications using content management systems, application owners are typically limited to the functionality of the CMS application in use or the third-party add-ons available. Unfortunately, these components alone may not meet the application's requirements, leaving the application owner to dedicate resources to developing a solution that usually takes longer than it should, or never works at all. This article addresses some hypothetical production use cases where the application does not give the administrators an easy way to add a timer to the website.   This solution builds upon the previous articles (Embedded Google Maps - Augmenting Web Applications with Traffic Manager and Embedded Twitter Timeline - Augmenting Web Applications with Traffic Manager). Borrowing a technique from Owen Garrett (see Instrument web content with Traffic Manager), this example uses a simple CSS overlay to display the added information.   Basic Rule   Use this as a starting point to understand the minimum requirements, and customize it for your own use. For example, most people will want "text-align:center", and values may need to be added to the $style or $html for your application; see the examples.

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$timer = ( "366" - ( sys.gmtime.format( "%j" ) ) );

$html = '<div class="Countdown">' . $timer . ' DAYS UNTIL THE END OF THE YEAR</div>';

$style = '<style type="text/css">.Countdown{z-index:100;background:white}</style>';

$body = http.getResponseBody();
$body = string.regexsub( $body, "(<body[^>]*>)", $style . "$1\n" . $html . "\n", "i" );
http.setResponseBody( $body );

Example 1 - Simple Day Countdown Timer   This example covers a common use case popular with retailers: a countdown for the holiday shopping season. This example also adds font formatting and additional text with a link.   
#Only process text/html content
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

#Countdown target
#Julian day of the year "001" to "366"
$targetday = "359";
$bgcolor = "#D71920";
$labelday = "DAYS";
$title = "UNTIL CHRISTMAS";
$titlecolor = "white";
$link = "/dept.jump?id=dept20020200034";
$linkcolor = "yellow";
$linktext = "VISIT YOUR ONE-STOP GIFT SHOP";

#Calculate days between today and targetday
$timer = ( $targetday - ( sys.gmtime.format( "%j" ) ) );

#Remove the S from "DAYS" if only 1 day left
if( $timer == 1 ){
   $labelday = string.drop( $labelday, 1 );
};

$html = '
<div class="TrafficScriptCountdown">
   <h3>
      <font color="'.$titlecolor.'">
         '.$timer.' '.$labelday.' '.$title.'
      </font>
      <a href="'.$link.'">
         <font color="'.$linkcolor.'">
            '.$linktext.'
         </font>
      </a>
   </h3>
</div>
';

$style = '
<style type="text/css">
.TrafficScriptCountdown {
   position:relative;
   top:0;
   width:100%;
   text-align:center;
   background:'.$bgcolor.';
   opacity:100%;
   z-index:1000;
   padding:0
}
</style>
';

$body = http.getResponseBody();

$body = string.regexsub( $body, "(<body[^>]*>)", $style . "$1\n" . $html . "\n", "i" );

http.setResponseBody( $body );

Example 1 in Action     Example 2 - Ticking countdown timer with second detail   This example covers how to dynamically display the time down to seconds. 
Rather than sending data to the client every second, I chose to use a client-side JavaScript timer found at HTML Countdown to Date v3 (Javascript Timer) | ricocheting.com   Example 2 Response Rule

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

#Countdown target
$year = "2014";
$month = "11";
$day = "3";
$hr = "8";
$min = "0";
$sec = "0";
#Number of hours offset from UTC
$utc = "-8";

$labeldays = "DAYS";
$labelhrs = "HRS";
$labelmins = "MINS";
$labelsecs = "SECS";
$separator = ", ";

$timer = '<script type="text/javascript">
var CDown=function(){this.state=0,this.counts=[],this.interval=null};CDown.prototype=\
{init:function(){this.state=1;var t=this;this.interval=window.setInterval(function()\
{t.tick()},1e3)},add:function(t,s){tzOffset='.$utc.',dx=t.toGMTString(),dx=dx.substr\
(0,dx.length-3),tzCurrent=t.getTimezoneOffset()/60*-2,t.setTime(Date.parse(dx)),\
t.setHours(t.getHours()+tzCurrent-tzOffset),this.counts.push({d:t,id:s}),this.tick(),\
0==this.state&&this.init()},expire:function(t){for(var s in t)this.display\
(this.counts[t[s]],"Now!"),this.counts.splice(t[s],1)},format:function(t){var s="";\
return 0!=t.d&&(s+=t.d+" "+(1==t.d?"'.string.drop( $labeldays, 1 ).'":"'.$labeldays.'\
")+"'.$separator.'"),0!=t.h&&(s+=t.h+" "+(1==t.h?"'.string.drop( $labelhrs, 1 ).'":\
"'.$labelhrs.'")+"'.$separator.'"),s+=t.m+" "+(1==t.m?"\
'.string.drop( $labelmins, 1 ).'":"'.$labelmins.'")+"'.$separator.'",s+=t.s+" "\
+(1==t.s?"'.string.drop( $labelsecs, 1 ).'":"'.$labelsecs.'")+"'.$separator.'"\
,s.substr(0,s.length-2)},math:function(t){var i=w=d=h=m=s=ms=0;return ms=(""+\
(t%1e3+1e3)).
substr(1,3),t=Math.floor(t/1e3),i=Math.floor(t/31536e3),w=Math.floor\
(t/604800),d=Math.floor(t/86400),t%=86400,h=Math.floor(t/3600),t%=3600,m=Math.floor\
(t/60),t%=60,s=Math.floor(t),{y:i,w:w,d:d,h:h,m:m,s:s,ms:ms}},tick:function()\
{var t=(new Date).getTime(),s=[],i=0,n=0;if(this.counts)for(var e=0,\
o=this.counts.length;o>e;++e)i=this.counts[e],n=i.d.getTime()-t,0>n?s.push(e):\
this.display(i,this.format(this.math(n)));s.length>0&&this.expire(s),\
0==this.counts.length&&window.clearTimeout(this.interval)},display:function(t,s)\
{document.getElementById(t.id).innerHTML=s}},window.onload=function()\
{var t=new CDown;t.add(new Date\
('.$year.','.--$month.','.$day.','.$hr.','.$min.','.$sec.'),"countbox1")};
</script><span id="countbox1"></span>';

$html = '<div class="TrafficScriptCountdown"><center><h3><font color="white">\
COUNTDOWN TO RIVERBED FORCE '.$timer.'</font>\
<a href="https://secure3.aetherquest.com/riverbedforce2014/"><font color="yellow">\
REGISTER NOW</font></a></h3></center></div>';

$style = '<style type="text/css">.TrafficScriptCountdown{position:relative;top:0;\
width:100%;background:#E9681D;opacity:100%;z-index:1000;padding:0}</style>';

http.setResponseBody( string.regexsub( http.getResponseBody(),
   "(<body[^>]*>)", $style . "$1\n" . $html . "\n", "i" ) );

Example 2 in action     Notes   Example 1 results in a faster page load time than Example 2. Example 1 can easily be extended so that TrafficScript sets $timer with detail down to the second, as in Example 2. Be aware of any trailing spaces after the " \ " line breaks when copy and paste is used to import the rule; incorrect spacing can stop the JavaScript and the HTML from functioning. You may have to adjust the elements for your web application (i.e. z-index, the regex sub match, div class, etc.).   
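The day-countdown arithmetic in Example 1 ($targetday minus the current Julian day from sys.gmtime.format("%j")) is easy to sanity-check. Here is an illustrative Python equivalent of that calculation (not part of the rule itself):

```python
from datetime import datetime, timezone

def days_until(target_julian_day: int, now: datetime) -> int:
    # Mirrors the TrafficScript: $timer = ( $targetday - sys.gmtime.format( "%j" ) )
    # %j is the zero-padded day of the year, "001" to "366".
    return target_julian_day - int(now.strftime("%j"))

# 25th December is Julian day 359 in a non-leap year such as 2023.
print(days_until(359, datetime(2023, 12, 1, tzinfo=timezone.utc)))   # → 24
```

Note that, like the TrafficScript rule, this counts whole calendar days in UTC; for a ticking timer with hour/minute/second detail you need the client-side approach of Example 2.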
This is a great example of using Traffic Manager to deliver a solution in minutes that could otherwise take hours.
Update: See also this new article including a simple template rule: A Simple Template Rule SteelCentral Web Analyzer - BrowserMetrix   Riverbed SteelCentral Web Analyzer is a great tool for monitoring end-user experience (EUE) of web applications, even when they are hosted in the cloud. And because it is delivered as true Software-as-a-Service, you can monitor application performance from anywhere, and drill down to analyse individual transactions by URL, location or browser type, and highlight requests which took too long to respond.   In order to track statistics, your web application needs to send statistics on each transaction to Web Analyzer (formerly BrowserMetrix) using a small piece of JavaScript, and it is very easy to inject the extra JavaScript code without needing to change the application itself. This Solution Guide (attached) shows you how to use TrafficScript to inject the JavaScript snippet into your web applications, by inspecting all web pages and inserting into the right place in each document:   No modification needed to your application Easy to select which pages you want to instrument Use with all applications in your data center, or hosted in the cloud Even works with compressed HTML pages (e.g. gzip encoded) Create dynamic JavaScript code to track session-level information Use Riverbed FlyScript to automate the integration between Web Analyzer and Traffic Manager   How does it work? SteelApp Traffic Manager sits in front of the web applications on the right, and inspects each web page before it is sent to the client. Stingray checks to see if the page has been selected for analysis by Web Analyzer, and then constructs the JavaScript fragment and injects it into the web page at the right place in the HTML document.   When the web page arrives at the client browser, the JavaScript snippet is executed.  It builds a transaction profile with timing information and submits the information to the Web Analyzer SaaS platform managed by Riverbed.  
You can then analyze the results, in near-realtime, using the Web Analyzer web portal.   Thanks also to Faisal Memon for his help creating the Solution Guide.   Read more In addition to the attached deployment guide showing how to create complex rules for JavaScript injection, you may also be interested in this new article showing how to use a simple template rule with Traffic Manager and SteelCentral Web Analyzer: A Simple Template Rule for SteelCentral Web Analyzer - BrowserMetrix   For similar solutions, check out the Content Modification examples in the Top vADC Examples and Use Cases article.   Updated 15th July 2014 by Paul Wallace. Article formerly titled "Using Stingray with OPNET AppResponse Xpert BrowserMetrix" Thanks also to Mike Iem for his help updating this article. 29th July 2014 by Paul Wallace. Added note about the new article including the simple template rule
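The guide notes that injection works even on gzip-encoded pages: conceptually, the traffic manager decompresses the response, inserts the snippet, and recompresses. A standalone Python sketch of that round trip (the snippet URL here is a placeholder, not the real Web Analyzer snippet):

```python
import gzip
import re

# Placeholder for the real Web Analyzer JavaScript snippet.
SNIPPET = b'<script src="https://example.invalid/bmx.js"></script>'

def inject_into_gzipped(page_gz: bytes, snippet: bytes = SNIPPET) -> bytes:
    """Decompress a gzip-encoded HTML page, insert the snippet just
    before </head>, and recompress the result."""
    html = gzip.decompress(page_gz)
    html = re.sub(rb"</head>", snippet + b"</head>", html, count=1)
    return gzip.compress(html)
```

In the real deployment none of this plumbing is written by hand: the TrafficScript rule in the attached guide performs the equivalent steps inside the Traffic Manager.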
The article Using Pulse vADC with SteelCentral Web Analyzer shows how to create and customize a rule to inject JavaScript into web pages to track the end-to-end performance and measure the actual user experience, and how to enhance it to create dynamic instrumentation for a variety of use cases.   But to make it even easier to use Traffic Manager and SteelCentral Web Analyzer - BrowserMetrix, we have created a simple, encapsulated rule (included in the file attached to this article, "SteelApp-BMX.txt") which can be copied directly into Traffic Manager, and includes a form to let you customize the rule to include your own ClientID and AppID in the snippet. In this example, we will add the new rule to our example web site, “http://www.northernlightsastronomy.com” using the following steps:   1. Create the new rule   The quickest way to create a new rule on the Traffic Manager console is to navigate to the virtual server for your web application, click through to the Rules linked to this virtual server, and then at the foot of the page, click “Manage Rules in Catalog.” Type in a name for your new rule, ensure the “Use TrafficScript” and “Associate with this virtual server” options are checked, then click on “Create Rule”     2. Copy in the encapsulated rule   In the new rule, simply copy and paste in the encapsulated rule (from the file attached to this article, "SteelApp-BMX.txt") and click on  “Update” at the end of the form:     3. Customize the rule   The rule is now transformed into a simple form which you can customize, and you can enter in the “clientId” and “appId” parameters from the Web Analyzer – BrowserMetrix console. In addition, you must enter the ‘hostname’ which Traffic Manager uses to serve the web pages. Enter the hostname, but exclude any prefix such as “http://”or https:// and enter only the hostname itself.     The new rule is now enabled for your application, and you can track via the SteelCentral Web Analyzer console.   4.  
How to find your clientId and appId parameters   Creating and modifying your JavaScript snippet requires that you enter the “clientId” and “appId” parameters from the Web Analyzer – BrowserMetrix console. To do this, go to the home page, and click on the “Application Settings” icon next to your application:     The next screen shows the plain JavaScript snippet – from this, you can copy the “clientId” and “appId” parameters:     5. Download the template rule now!   You can download the template rule from file attached to this article, "SteelApp-BMX.txt" - the rule can be copied directly into Traffic Manager, and includes a form to let you customize the rule to include your own ClientID and AppID in the snippet.
Dynamic information is more abundant now than ever, but we still see web applications serving static content. Unfortunately, many websites still use a static picture for a location map because of the application code changes that would otherwise be required. Traffic Manager provides the ability to insert the required code into your site with no changes to the application, making it simple to give users dynamic, interactive content tailored for them.  Fortunately, Google provides an API for embedding Google Maps in your application. These maps can be implemented with few code changes and support many applications. This document focuses on using the Traffic Manager to provide embedded Google Maps without configuration or code changes to the application.   "The Google Maps Embed API uses a simple HTTP request to return a dynamic, interactive map. The map can be easily embedded in your web page by setting the Embed API URL as the src attribute of an iframe...   Google Maps Embed API maps are easy to add to your webpage—just set the URL you build as the value of an iframe's src attribute. Control the size of the map with the iframe's height and width attributes. No JavaScript required. "... -- Google Maps Embed API — Google Developers   Google Maps Embedded API Notes   Please reference the Google documentation at Google Maps Embed API — Google Developers for additional information and options not covered in this document.   Google API Key   Before you get started with the TrafficScript, you need to get a Google API key. Requests to the Google Embed API must include a free API key as the value of the URL key parameter. Your key enables you to monitor your application's Maps API usage, and ensures that Google can contact you about your website/application if necessary. Visit Google Maps Embed API — Google Developers for directions to obtain an API key.   By default, a key can be used on any site. 
We strongly recommend that you restrict the use of your key to domains that you administer, to prevent use on unauthorized sites. You can specify which domains are allowed to use your API key by clicking the Edit allowed referrers... link for your key. -- Google Maps Embed API — Google Developers   The API key is included in clear text to the client (search nerdydata for "https://www.google.com/maps/embed/v1/place?key="). I also recommend you restrict use of your key to your domains.   Map Modes   Google provides four map modes, and the mode is specified in the request URL.   Place mode displays a map pin at a particular place or address, such as a landmark, business, geographic feature, or town. Directions mode displays the path between two or more specified points on the map, as well as the distance and travel time. Search mode displays results for a search across the visible map region. It's recommended that a location for the search be defined, either by including a location in the search term (record+stores+in+Seattle) or by including a center and zoom parameter to bound the search. View mode returns a map with no markers or directions.   A few use cases:   Display a map of a specific location with labels using Place mode (covered in this document). Display parking and transit information for a location with Search mode (covered in this document). Provide directions (between locations or from the airport to a location) using Directions mode. Display nearby hotels or tourist information with Search mode using keywords such as "lodging" or "landmarks". Use geolocation and TrafficScript to provide a dynamic Search map of gyms local to each visitor for your fitness blog. 
My personal favorites for intranets: save time figuring out where to eat lunch around the office by using Search mode with the keyword "restaurant", or improve my TrafficScript productivity by using Search mode with the keyword "coffee+shops".   Traffic Script Examples   Example 1: Place Map (Replace a string)   This example covers a basic method to replace a string in the HTML code. This rule will replace a string within the existing HTML with Google Place map iframe HTML, and has been formatted for easy customization and readability.

#Only process text/html content
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$nearaddress = "680+Folsom+St.+San+Francisco,+CA+94107";
$googleapikey = "YOUR_KEY_HERE";
$googlemapurl = "https://www.google.com/maps/embed/v1/place";
#Map height and width
$mapheight = "420";
$mapwidth = "420";

#String of HTML to be replaced
$insertstring = "<!-- TAB 2 Content (Office Locations) -->";

#Replacement HTML
$googlemaphtml = "<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" " .
"frameborder=\"0\" style=\"border:0\" src=\"" . $googlemapurl . "?q=" .
$nearaddress . "&key=" . $googleapikey . "\"></iframe>";

#Get the existing HTTP body for modification
$body = http.getResponseBody();

#Replace the defined string in the body
$body = string.replaceall( $body, $insertstring, $googlemaphtml );
http.setResponseBody( $body );

Example 2: Search Map (Replace a string) This example is the same as Example 1, apart from a change in the map type (note the change in $googlemapurl and the "parking+near" query). This rule will replace a string within the existing HTML with Google Search map iframe HTML, and has been formatted for easy customization and readability.   
#Only process text/html content
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$nearaddress = "680+Folsom+St.+San+Francisco,+CA+94107";
$googleapikey = "YOUR_KEY_HERE";
$googlemapurl = "https://www.google.com/maps/embed/v1/search";
#Map height and width
$mapheight = "420";
$mapwidth = "420";

#String of HTML to be replaced
$insertstring = "<!-- TAB 2 Content (Office Locations) -->";

#Replacement HTML
$googlemaphtml = "<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" " .
"frameborder=\"0\" style=\"border:0\" src=\"" . $googlemapurl . "?q=parking+near+" .
$nearaddress . "&key=" . $googleapikey . "\"></iframe>";

#Get the existing HTTP body for modification
$body = http.getResponseBody();

#Replace the defined string in the body
$body = string.replaceall( $body, $insertstring, $googlemaphtml );
http.setResponseBody( $body );

Example 3: Search Map (Replace a section)   This example provides a different method to insert code into the existing HTML. This rule uses a regex to replace a section of the existing HTML with Google map iframe HTML, and has also been formatted for easy customization and readability. Note the changes from Example 2 (see $insertstring and string.regexsub).   
#Only process text/html content
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$nearaddress = "680+Folsom+St.+San+Francisco,+CA+94107";
$googleapikey = "YOUR_KEY_HERE";
$googlemapurl = "https://www.google.com/maps/embed/v1/search";
#Map height and width
$mapheight = "420";
$mapwidth = "420";

#Regex match for the HTML section to be replaced
$insertstring = "</a>Parking</h4>(?s)(.*)<!-- TAB 2 Content \\(Office Locations\\) -->";

#Replacement HTML
$googlemaphtml = "<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" " .
"frameborder=\"0\" style=\"border:0\" src=\"" . $googlemapurl . "?q=parking+near+" .
$nearaddress . "&key=" . $googleapikey . "\"></iframe>";

#Get the existing HTTP body for modification
$body = http.getResponseBody();

#Regex sub against the body looking for the defined string
$body = string.regexsub( $body, $insertstring, $googlemaphtml );
http.setResponseBody( $body );

Example 3.1 (Shortened)   For reference, a shortened version of the Example 3 rule above (with line breaks for readability):

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

http.setResponseBody( string.regexsub( http.getResponseBody(),
   "</a>Parking</h4>(?s)(.*)<!-- TAB 2 Content \\(Office Locations\\) -->",
   "<iframe width=\"420\" height=\"420\" frameborder=\"0\" style=\"border:0\" " .
   "src=\"https://www.google.com/maps/embed/v1/search?" .
   "q=parking+near+680+Folsom+St.+San+Francisco,+CA+94107" .
   "&key=YOUR_KEY_HERE\"></iframe>" ) );

Example 4: Search Map (Replace a section with formatting, select URL, & additional map)   This example is closer to a production use case. Specifically, this was created with www.riverbed.com as my pool nodes. 
This rule has the following changes from Example 3: it uses HTML formatting to visually integrate with the existing application (<div class=\"six columns\">), only processes responses for the desired URL path of /contact, and provides an additional Transit Stop map.

#Only process text/html content in the contact path
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" )
    || http.getPath() != "/contact" ) break;

$nearaddress = "680+Folsom+St.+San+Francisco,+CA+94107";
$mapcenter = string.urlencode( "37.784465,-122.398570" );
$mapzoom = "14";
#Google API key
$googleapikey = "YOUR_KEY_HERE";
$googlemapurl = "https://www.google.com/maps/embed/v1/search";
#Map height and width
$mapheight = "420";
$mapwidth = "420";

#Regex match for the HTML section to be replaced
$insertstring = "</a>Parking</h4>(?s)(.*)<!-- TAB 2 Content \\(Office Locations\\) -->";

#Replacement HTML
$googlemapshtml =
#HTML cleanup (2x "</div>") and new section title
"</div></div></a><h4>Parking and Transit Information</h4>" .
#BEGIN Parking Map. Using existing css for layout
"<div class=\"six columns\"><h5>Parking Map</h5>" .
"<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" frameborder=\"0\" " .
"style=\"border:0\" src=\"" . $googlemapurl . "?q=parking+near+" . $nearaddress .
"&key=" . $googleapikey . "\"></iframe></div>" .
#BEGIN Transit Map. Using existing css for layout
"<div class=\"six columns\"><h5>Transit Stops</h5>" .
"<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" frameborder=\"0\" " .
"style=\"border:0\" src=\"" . $googlemapurl . "?q=Transit+Stop+near+" . $nearaddress .
"&center=" . $mapcenter . "&zoom=" . $mapzoom . "&key=" . $googleapikey . "\"></iframe></div>" .
#Include the removed HTML comment
"<!-- TAB 2 Content (Office Locations) -->";

#Get the existing HTTP body for modification
$body = http.getResponseBody();

#Regex sub against the body looking for the defined string
$body = string.regexsub( $body, $insertstring, $googlemapshtml );
http.setResponseBody( $body );

Example 4.1 (Shortened)   For reference, a shortened version of the Example 4 rule above (with line breaks for readability):

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" )
    || http.getPath() != "/contact" ) break;

http.setResponseBody( string.regexsub( http.getResponseBody(),
   "</a>Parking</h4>(?s)(.*)<!-- TAB 2 Content \\(Office Locations\\) -->",
   "</div></div></a><h4>Parking and Transit Information</h4><div class=\"six columns\">" .
   "<h5>Parking Map</h5><iframe width=\"420\" height=\"420\" frameborder=\"0\" " .
   "style=\"border:0\" src=\"https://www.google.com/maps/embed/v1/search" .
   "?q=parking+near+680+Folsom+St.+San+Francisco,+CA+94107&key=YOUR_KEY_HERE\"></iframe>" .
   "</div><div class=\"six columns\"><h5>Transit Stops</h5><iframe width=\"420\" " .
   "height=\"420\" frameborder=\"0\" style=\"border:0\" " .
   "src=\"https://www.google.com/maps/embed/v1/search?q=Transit+Stop+near+" .
   "680+Folsom+St.+San+Francisco,+CA+94107&center=37.784465%2C-122.398570&zoom=14" .
   "&key=YOUR_KEY_HERE\"></iframe></div><!-- TAB 2 Content (Office Locations) -->" ) );
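The iframe HTML built by string concatenation in the examples above can also be prototyped outside TrafficScript. Here is an illustrative Python helper (the function name is mine, and "YOUR_KEY_HERE" is the same placeholder key used throughout) that builds the same Embed API iframe with proper URL encoding:

```python
from urllib.parse import urlencode

def embed_map_iframe(query: str, api_key: str, mode: str = "search",
                     width: int = 420, height: int = 420) -> str:
    """Build a Google Maps Embed API iframe like the TrafficScript examples.

    `mode` is one of place, directions, search, view (per the Embed API docs).
    urlencode() takes care of escaping spaces and commas in the query.
    """
    base = "https://www.google.com/maps/embed/v1/" + mode
    src = base + "?" + urlencode({"q": query, "key": api_key})
    return ('<iframe width="%d" height="%d" frameborder="0" '
            'style="border:0" src="%s"></iframe>' % (width, height, src))
```

Using a proper URL encoder is the design point here: the TrafficScript examples pre-encode the address by hand ("680+Folsom+St..."), which works but is easy to get wrong for addresses containing other reserved characters.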
Riverbed SteelApp™ Traffic Manager from Riverbed Technology is a high-performance software-based application delivery controller (ADC), designed to deliver faster and more reliable access to Microsoft Azure applications as well as private applications. As a software-based ADC, it provides unprecedented scale and flexibility to deliver advanced application services.
The VMware Horizon Mirage Load Balancing Solution Guide describes how to configure Riverbed SteelApp to load balance VMware Horizon Mirage servers.   VMware® Horizon Mirage™ provides unified image management for physical desktops, virtual desktops and BYOD.
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Magento.
1. The Issue   When using perpetual licensing on a Traffic Manager, it is restricted to a throughput limitation as per the license.  If this limitation is reached, traffic will be queued and, in extreme situations, if the throughput reaches much higher than expected levels, some traffic could be dropped because of the limitation.   2. The Solution   Automatically increase the allocated bandwidth for the Traffic Manager!   3. A Brief Overview of the Solution   An SSC holds the licensed bandwidth configuration for the Traffic Manager instance.   The Traffic Manager is configured to execute a script when an event is raised: the bwlimited event.   The script makes REST calls to the SSC in order to obtain, and then increment if necessary, the Traffic Manager's bandwidth allocation.   I have written the script used here to only increment if the resulting bandwidth allocation is 5Mbps or under, but this restriction could be removed if it's not required.  The idea behind this was to allow the Traffic Manager to increment its allocation, but to only let it have a certain maximum amount of bandwidth from the SSC bandwidth "bucket".   4. The Solution in a Little More Detail   4.1. Move to an SSC Licensing Model   If you're currently running Traffic Managers with perpetual licenses, then you'll need to move from the perpetual licensing model to the SSC licensing model.  This effectively allows you to allocate bandwidth and features across multiple Traffic Managers within your estate.  The SSC has a "bucket" of bandwidth along with configured feature sets which can be allocated and distributed across the estate as required, allowing for right-sizing of instances and features, and also allowing multi-tenant access to various instances as required throughout the organisation.   
Instance Hosts and Instance resources are configured on the SSC, after which a Flexible License is uploaded on each of the Traffic Manager instances which you wish to be licensed by the SSC, and those instances "call home" to the SSC regularly in order to assess their licensing state and to obtain their feature set.   For more information on SSC, visit the Riverbed website pages covering this product, here - SteelCentral Services Controller for SteelApp Software.   There's also a brochure attached to this article which covers the basics of the SSC.   4.2. Traffic Manager Configuration and a Bit of Bash Scripting!   The SSC has a REST API that can be accessed from external platforms able to send and receive REST calls.  This includes the Traffic Manager itself.   To carry out the automated bandwidth allocation increase on the Traffic Manager, we'll need to carry out the following;   a. Create a script which can be executed on the Traffic Manager, which will issue REST calls in order to change the SSC configuration for the instance in the event of a bandwidth limitation event firing. b. Upload the script to be used, on to the Traffic Manager. c. Create a new event and action on the Traffic Manager which will be initiated when the bandwidth limitation is hit, calling the script mentioned in point a above.   4.2.a. The Script to Increment the Traffic Manager Bandwidth Allocation   This script, called SSC_Bandwidth_Increment and attached, is shown below.   Script Function:   Obtain the Traffic Manager instance configuration from the SSC. Extract the current bandwidth allocation for the Traffic Manager instance from the information obtained. If the current bandwidth is less than 5Mbps, then increment the allocation by 1Mbps and issue the REST call to the SSC to make the changes to the instance configuration as required.  If the bandwidth is currently 5Mbps, then do nothing, as we've hit the limit for this particular Traffic Manager instance.   
#!/bin/bash
#
# SSC_Bandwidth_Increment
# -----------------------
# Called on event: bwlimited

# Request the current instance information
requested_instance_info=$(curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" \
    -X GET -u adminuser:adminpassword https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com-00002)

# Extract the current bandwidth figure for the instance
current_instance_bandwidth=$(echo $requested_instance_info | sed -e 's/.*"bandwidth": \(\S*\).*/\1/g' | tr -d \,)

# Add 1 to the original bandwidth figure, imposing a 5Mbps ceiling on this instance's bandwidth
if [ $current_instance_bandwidth -lt 5 ]
then
    new_instance_bandwidth=$(expr $current_instance_bandwidth + 1)

    # Set the instance bandwidth figure to the new bandwidth figure (original + 1)
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":'"${new_instance_bandwidth}"'}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com-00002
fi

There are some obvious parts of the script that will need to be changed to fit your own environment: the admin username and password in the REST calls, and the SSC name, port and path used in the curl statements.  Hopefully this shows just how easy the process is, and how the SSC can be manipulated to hold the configuration that you require.

This script can be considered a skeleton which you can use to carry out whatever configuration is required on the SSC for a particular Traffic Manager.  Events and actions can be set up on the Traffic Manager which can then be used to execute scripts which access the SSC and make the changes necessary based on any logic you see fit.

4.2.b. Upload the Bash Script to be Used

On the Traffic Manager, upload the bash script that will be needed for the solution to work.
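The extract-and-increment logic can be exercised offline before wiring it up to the event system.  The following is a minimal sketch using a hypothetical sample of the JSON the SSC returns (the field layout is an assumption, inferred from the sed expression in the script above):

```shell
#!/bin/bash
# Hypothetical sample of the instance JSON returned by the SSC;
# only the "bandwidth" field matters for this sketch.
requested_instance_info='{"name": "demo-1.example.com-00002", "bandwidth": 4, "status": "Active"}'

# Same extraction technique as the script above: isolate the value
# after "bandwidth": and strip the trailing comma
current_instance_bandwidth=$(echo $requested_instance_info | sed -e 's/.*"bandwidth": \(\S*\).*/\1/g' | tr -d \,)

# Apply the 5Mbps ceiling before incrementing
if [ $current_instance_bandwidth -lt 5 ]
then
    new_instance_bandwidth=$(expr $current_instance_bandwidth + 1)
else
    new_instance_bandwidth=$current_instance_bandwidth
fi

echo "current=${current_instance_bandwidth} new=${new_instance_bandwidth}"
```

Running this prints the current figure (4) and the incremented figure (5); swap the sample variable for the live curl call once the parsing behaves as expected.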
The script is uploaded in the Catalogs > Extra Files > Action Programs section of the Traffic Manager, and can then be referenced from the Actions when they are created later.

4.2.c. Create a New Event and Action for the Bandwidth Limitation Hit

On the Traffic Manager, create a new event type as shown in the screenshot below - I've created Bandwidth_Increment, but this event could be called anything relevant.  The important factor here is that the event is raised from the bwlimited event.

Once this event has been created, an action must be associated with it.  Create a new external program action as shown in the screenshot below - I've created one called Bandwidth_Increment, but again this could be called anything relevant.  The important factor for the action is that it's an external program action and that it calls the correct bash script, in my case called SSC_Bandwidth_Increment.

5. Testing

To test the solution, set the initial bandwidth for the Traffic Manager instance to 1Mbps on the SSC.  Generate traffic through a service on the Traffic Manager that forces it to hit its 1Mbps limit for a sustained period.  This will cause the bwlimited event to fire and the Bandwidth_Increment action to execute, running the SSC_Bandwidth_Increment script.  The script will increment the Traffic Manager bandwidth by 1Mbps.  Check and confirm this on the SSC.  Once confirmed, stop the traffic generation.

Note: As the Flexible License on the Traffic Manager polls the SSC every 3 minutes for an update on its licensed state, you may not see an immediate change to the bandwidth allocation of the Traffic Manager.
You can force the Traffic Manager to poll the SSC by removing the Flexible License and re-adding it - the re-configuration of the Flexible License will force the Traffic Manager to re-poll the SSC, and you should then see the updated bandwidth in the System > Licenses page of the Traffic Manager (after expanding the license information), as shown in the screenshot below.

6. Summary

Please feel free to use the information contained within this post to experiment!

If you do not yet have an SSC deployment, then an evaluation can be arranged by contacting your Partner or Riverbed sales representative.  They will be able to arrange the evaluation, and will be there to support you if required.
1. The Issue

When using perpetual licensing on clustered Traffic Manager instances, the failure of one of the instances results in licensed throughput capability being lost until that instance is recovered.

2. The Solution

Automatically adjust the bandwidth allocation across cluster members so that otherwise wasted or unused bandwidth is used effectively.

3. A Brief Overview of the Solution

An SSC holds the configuration for the Traffic Manager cluster members.  The Traffic Managers are configured to execute scripts when either of two events is raised: the machinetimeout event and the allmachinesok event.  Those scripts make REST calls to the SSC in order to dynamically and automatically amend the Traffic Manager instance configuration held for the two cluster members.

4. The Solution in a Little More Detail

4.1. Move to an SSC Licensing Model

If you're currently running Traffic Managers with perpetual licenses, then you'll need to move from the perpetual licensing model to the SSC licensing model.  This allows you to allocate bandwidth and features across multiple Traffic Managers within your estate.  The SSC has a "bucket" of bandwidth along with configured feature sets, which can be allocated and distributed across the estate as required, allowing for right-sizing of instances and features, and also allowing multi-tenant access to various instances as required throughout the organisation.

Instance Hosts and Instance resources are configured on the SSC, after which a Flexible License is uploaded to each of the Traffic Manager instances which you wish to be licensed by the SSC.  Those instances "call home" to the SSC regularly in order to assess their licensing state and to obtain their feature set.  For more information on SSC, visit the Riverbed website pages covering this product, here - SteelCentral Services Controller for SteelApp Software.  There's also a brochure attached to this article which covers the basics of the SSC.

4.2.
Traffic Manager Configuration and a Bit of Bash Scripting!

The SSC has a REST API that can be accessed from any external platform able to send and receive REST calls.  This includes the Traffic Manager itself.  To carry out automated bandwidth allocation on cluster members, we'll need to carry out the following:

a. Create a script, executed on the Traffic Manager, which will issue REST calls to change the SSC configuration for the cluster members in the event of a cluster member failure.
b. Create another script, also executed on the Traffic Manager, which will issue REST calls to reset the SSC configuration for the cluster members when all of the cluster members are up and operational.
c. Upload the two scripts to the Traffic Manager cluster.
d. Create a new event and action on the Traffic Manager cluster, initiated when a cluster member fails, calling the script mentioned in point a above.
e. Create a new event and action on the Traffic Manager cluster, initiated when all of the cluster members are up and operational, calling the script mentioned in point b above.

4.2.a. The Script to Re-allocate Bandwidth After a Cluster Member Failure

This script, called Cluster_Member_Fail_Bandwidth_Allocation and attached, is shown below.

Script Function:

1. Determine which cluster member has executed the script.
2. Make REST calls to the SSC to allocate bandwidth according to which cluster member is up and which is down.
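The hostname-dispatch decision at the heart of the failure script can be sketched and checked in isolation.  This is a minimal sketch, not the attached script itself; the hostnames and bandwidth figures are the article's example values:

```shell
#!/bin/bash
# Decide the bandwidth split for a member failure: the surviving host
# (the one executing the script) takes 999Mbps, its peer drops to 1Mbps.
# Hostnames are the article's examples - adapt them to your own cluster.
allocate_for() {
    case "$1" in
        demo-1.example.com) echo "demo-1.example.com=999 demo-2.example.com=1" ;;
        demo-2.example.com) echo "demo-1.example.com=1 demo-2.example.com=999" ;;
        *)                  echo "unknown host: $1" >&2; return 1 ;;
    esac
}

# In the real script the argument would be $(hostname -f)
allocate_for demo-1.example.com
```

Each branch of the case statement would map to the pair of curl calls shown in the script below; keeping the decision in one place makes it easy to extend to clusters of more than two members.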
#!/bin/bash
#
# Cluster_Member_Fail_Bandwidth_Allocation
# ----------------------------------------
# Called on event: machinetimeout
#
# Checks which host calls this script and assigns bandwidth in SSC accordingly
# If demo-1 makes the call, then demo-1 gets 999 and demo-2 gets 1
# If demo-2 makes the call, then demo-2 gets 999 and demo-1 gets 1
#

# Grab the hostname of the executing host
Calling_Hostname=$(hostname -f)

# If demo-1.example.com is executing then issue REST calls accordingly
if [ $Calling_Hostname == "demo-1.example.com" ]
then
    # Set the demo-1.example.com instance bandwidth figure to 999 and
    # demo-2.example.com instance bandwidth figure to 1
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":999}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":1}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com
fi

# If demo-2.example.com is executing then issue REST calls accordingly
if [ $Calling_Hostname == "demo-2.example.com" ]
then
    # Set the demo-2.example.com instance bandwidth figure to 999 and
    # demo-1.example.com instance bandwidth figure to 1
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":999}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":1}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com
fi

There are some obvious parts of the script that will need to be changed to fit your own environment: the hostname validation, the admin username and password in the REST calls, and the SSC name, port and path used in the curl statements.  Hopefully this shows just how easy the process is, and how the SSC can be manipulated to hold the configuration that you require.

This script can be considered a skeleton, as can the other script for resetting the bandwidth, shown next.

4.2.b. The Script to Reset the Bandwidth

This script, called Cluster_Member_All_Machines_OK and attached, is shown below.

#!/bin/bash
#
# Cluster_Member_All_Machines_OK
# ------------------------------
# Called on event: allmachinesok
#
# Resets bandwidth for demo-1.example.com and demo-2.example.com - both get 500
#

# Set both demo-1.example.com and demo-2.example.com bandwidth figure to 500
curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
    '{"bandwidth":500}' \
    https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com-00002
curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
    '{"bandwidth":500}' \
    https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com-00002

Again, there are some parts of the script that will need to be changed to fit your own environment: the admin username and password in the REST calls, and the SSC name, port and path used in the curl statements.

4.2.c.
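Since all four curl calls in the two scripts differ only in the instance name and the bandwidth figure, they can be condensed around a single helper.  This is a sketch under the article's example URL, credentials and instance names; the DRY_RUN switch is my addition, so the constructed call can be inspected without a live SSC:

```shell
#!/bin/bash
# Helper that issues (or, with DRY_RUN=1, just prints) the SSC REST
# call to set one instance's bandwidth.  The URL, credentials and
# instance names are the article's examples - change them for your
# own environment.
SSC_URL="https://ssc.example.com:8000/api/tmcm/1.1"
SSC_CREDS="adminuser:adminpassword"

set_bandwidth() {
    local instance="$1" mbps="$2"
    local cmd=(curl -k --basic -H "Content-Type: application/json" \
               -H "Accept: application/json" -u "$SSC_CREDS" \
               -d "{\"bandwidth\":${mbps}}" "${SSC_URL}/instance/${instance}")
    if [ "${DRY_RUN:-0}" = "1" ]
    then
        echo "${cmd[*]}"
    else
        "${cmd[@]}"
    fi
}

# e.g. the allmachinesok reset becomes two one-liners
DRY_RUN=1
set_bandwidth demo-1.example.com-00002 500
set_bandwidth demo-2.example.com-00002 500
```

With DRY_RUN unset, the helper executes the same curl call as the scripts above; either failure or reset behaviour then reduces to a couple of set_bandwidth lines per event.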
Upload the Bash Scripts to be Used

On one of the Traffic Managers, upload the two bash scripts that will be needed for the solution to work.  The scripts are uploaded in the Catalogs > Extra Files > Action Programs section of the Traffic Manager, and can then be referenced from the Actions when they are created later.

4.2.d. Create a New Event and Action for a Cluster Member Failure

On any one of the cluster members, create a new event type as shown in the screenshot below - I've created Cluster_Member_Down, but this event could be called anything relevant.  The important factor here is that the event is raised from the machinetimeout event.

Once this event has been created, an action must be associated with it.  Create a new external program action as shown in the screenshot below - I've created one called Cluster_Member_Down, but again this could be called anything relevant.  The important factor for the action is that it's an external program action and that it calls the correct bash script, in my case called Cluster_Member_Fail_Bandwidth_Allocation.

4.2.e. Create a New Event and Action for All Cluster Members OK

On any one of the cluster members, create a new event type as shown in the screenshot below - I've created All_Cluster_Members_OK, but this event could be called anything relevant.  The important factor here is that the event is raised from the allmachinesok event.

Once this event has been created, an action must be associated with it.  Create a new external program action as shown in the screenshot below - I've created one called All_Cluster_Members_OK, but again this could be called anything relevant.  The important factor for the action is that it's an external program action and that it calls the correct bash script, in my case called Cluster_Member_All_Machines_OK.

5. Testing

To test the solution, simply DOWN Traffic Manager A from an A/B cluster.
Traffic Manager B should raise the machinetimeout event, which will in turn execute the Cluster_Member_Down event and its associated action and script, Cluster_Member_Fail_Bandwidth_Allocation.  The script should allocate 999Mbps to Traffic Manager B, and 1Mbps to Traffic Manager A, within the SSC configuration.

As the Flexible License on each Traffic Manager polls the SSC every 3 minutes for an update on its licensed state, you may not see an immediate change to the bandwidth allocation of the Traffic Managers in question.  You can force a Traffic Manager to poll the SSC by removing the Flexible License and re-adding it - the re-configuration of the Flexible License will force the Traffic Manager to re-poll the SSC, and you should then see the updated bandwidth in the System > Licenses page of the Traffic Manager (after expanding the license information), as shown in the screenshot below.

To test the resetting of the bandwidth allocation for the cluster, simply UP Traffic Manager A again.  Once Traffic Manager A re-joins cluster communications, the allmachinesok event will be raised, which will execute the All_Cluster_Members_OK event and its associated action and script, Cluster_Member_All_Machines_OK.  The script should allocate 500Mbps to Traffic Manager B, and 500Mbps to Traffic Manager A, within the SSC configuration.

Just as with the failure event, the Flexible License polls the SSC every 3 minutes for an update on its licensed state, so you may not see an immediate change to the bandwidth allocation of the Traffic Managers in question.
You can force the Traffic Manager to poll the SSC once again by removing the Flexible License and re-adding it - the re-configuration of the Flexible License will force the Traffic Manager to re-poll the SSC, and you should then see the updated bandwidth in the System > Licenses page of the Traffic Manager as before (and shown above).

6. Summary

Please feel free to use the information contained within this post to experiment!

If you do not yet have an SSC deployment, then an evaluation can be arranged by contacting your Partner or Brocade sales representative.  They will be able to arrange the evaluation, and will be there to support you if required.
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for SAP NetWeaver.   This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.
Installation

1. Unzip the download (Stingray Traffic Manager Cacti Templates.zip).
2. Via the Cacti UI, use "Import Templates" to import the Data, Host, and Graph templates.  (The included graph templates are not required for functionality.)
3. Copy the files from the Cacti folder in the zip file to their corresponding directories in your Cacti install:
   Stingray Global Values script query - /cacti/site/scripts/stingray_globals.pl
   Stingray Virtual Server Table SNMP query - cacti/resource/snmp_queries/stingray_vservers.xml
4. Assign the host template to your Traffic Manager(s) and create new graphs.

Note: Due to the method used by Cacti for creating graphs and the related RRD files, my recommendation is NOT to create all graphs via the Device page.  If you create all the graphs via the "Create Graphs for this Host" link on the device page, Cacti will create an individual data source (an RRD file and SNMP query for each graph), resulting in a significant amount of wasted Cacti and device resources.  Test this yourself with the Stingray SNMP graph.  My recommendation is to create a single initial graph for each Data Query or Data Input method (i.e. one for Virtual Servers and one for Global Values) and add any additional graphs via Cacti's Graph Management using the existing Data Source drop-downs.
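The copy step above amounts to two file copies: the Perl script query into Cacti's scripts directory and the SNMP query XML into its resource directory.  The sketch below stages stand-in files under a temp directory so it is runnable as-is; on a real system you would point CACTI_ROOT at your actual Cacti install and copy the files extracted from the zip:

```shell
#!/bin/bash
# Runnable sketch of the copy step; everything under $work is a stand-in.
set -e
work=$(mktemp -d)
CACTI_ROOT="$work/cacti/site"   # real installs: your Cacti root directory
ZIP_DIR="$work/zip"             # real installs: the unzipped template folder

# stand-ins for the two helper files shipped in the zip
mkdir -p "$ZIP_DIR"
printf '#!/usr/bin/perl\n' > "$ZIP_DIR/stingray_globals.pl"
printf '<interface/>\n'    > "$ZIP_DIR/stingray_vservers.xml"

# script query into scripts/, SNMP query into resource/snmp_queries/
install -D -m 0755 "$ZIP_DIR/stingray_globals.pl" \
    "$CACTI_ROOT/scripts/stingray_globals.pl"
install -D -m 0644 "$ZIP_DIR/stingray_vservers.xml" \
    "$CACTI_ROOT/resource/snmp_queries/stingray_vservers.xml"
```

GNU install's -D flag creates the destination directories as needed, which keeps the sketch to two commands; plain mkdir -p plus cp works just as well.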
Data Queries

Stingray Global Values script query - /cacti/site/scripts/stingray_globals.pl
* Perl script to query the STM for most of the sys.globals values

Stingray Virtual Server Table SNMP query - cacti/resource/snmp_queries/stingray_vservers.xml
* Cacti XML SNMP query for the Virtual Servers Table MIB

Graph Templates

Stingray_-_global_-_cpu.xml
Stingray_-_global_-_dns_lookups.xml
Stingray_-_global_-_dns_traffic.xml
Stingray_-_global_-_memory.xml
Stingray_-_global_-_snmp.xml
Stingray_-_global_-_ssl_-_client_cert.xml
Stingray_-_global_-_ssl_-_decryption_cipher.xml
Stingray_-_global_-_ssl_-_handshakes.xml
Stingray_-_global_-_ssl_-_session_id.xml
Stingray_-_global_-_ssl_-_throughput.xml
Stingray_-_global_-_swap_memory.xml
Stingray_-_global_-_system_-_misc.xml
Stingray_-_global_-_traffic_-_misc.xml
Stingray_-_global_-_traffic_-_tcp.xml
Stingray_-_global_-_traffic_-_throughput.xml
Stingray_-_global_-_traffic_script_data_usage.xml
Stingray_-_virtual_server_-_total_timeouts.xml
Stingray_-_virtual_server_-_connections.xml
Stingray_-_virtual_server_-_timeouts.xml
Stingray_-_virtual_server_-_traffic.xml

Sample Graphs (click image for full size)

Compatibility

This template has been tested with STM 9.4 and Cacti 0.8.8a.

Known Issues

Cacti will create unnecessary queries and data files if the "Create Graphs for this Host" link on the device page is used.  See the install notes for a workaround.

Conclusion

Cacti is sufficient for providing SNMP-based RRD graphs, but is limited in the information available, analytics, correlation, scale, stability and support.  This is not just a shameless plug: Brocade offers a MUCH more robust set of monitoring and performance tools.
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft SharePoint 2013.