Pulse Secure vADC

This article describes how to gather activity statistics across a cluster of traffic managers using Perl, SOAP::Lite and Stingray's SOAP Control API.

Overview

Each local Stingray Traffic Manager tracks a very wide range of activity statistics. These may be exported using SNMP or retrieved using the System/Stats interface in Stingray's SOAP Control API.

When you use the Activity monitoring in Stingray's Administration Interface, a collector process communicates with each of the Traffic Managers in your cluster, gathering the local statistics from each and merging them before plotting them on the activity chart ('Aggregate data across all traffic managers').

However, when you use the SNMP or Control API interfaces directly, you will only receive the statistics from the Traffic Manager machine you have connected to. If you want a cluster-wide view of activity over SNMP or the Control API, you will need to poll each machine and merge the results yourself.

Using Perl and SOAP::Lite to query the traffic managers and merge activity statistics

The following code sample determines the total TCP connection rate across the cluster as follows:

1. Connect to the named traffic manager and use the getAllClusterMachines() method to retrieve a list of all of the machines in the cluster;
2. Poll each machine in the cluster for its current value of TotalConn (the total number of TCP connections processed since startup);
3. Sleep for 10 seconds, then poll each machine again;
4. Calculate the number of connections processed by each traffic manager in the 10-second window, and calculate the per-second rate accurately using high-resolution time.
The code:

#!/usr/bin/perl -w

use SOAP::Lite 0.6;
use Time::HiRes qw( time sleep );

$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME} = 0;

my $userpass    = "admin:admin";    # SOAP-capable authentication credentials
my $adminserver = "stingray:9090";  # Details of an admin server in the cluster
my $sampletime  = 10;               # Sample time (seconds)

sub getAllClusterMembers( $$ );
sub makeConnections( $$$ );
sub makeRequest( $$ );

my $machines = getAllClusterMembers( $adminserver, $userpass );
print "Discovered cluster members " . ( join ", ", @$machines ) . "\n";

my $connections = makeConnections( $machines, $userpass,
   "http://soap.zeus.com/zxtm/1.0/System/Stats/" );

# sample the value of getTotalConn
my $start = time();
my $res1 = makeRequest( $connections, "getTotalConn" );
sleep( $sampletime - ( time() - $start ) );
my $res2 = makeRequest( $connections, "getTotalConn" );

# Determine connection rate per traffic manager
my $totalrate = 0;
foreach my $z ( keys %{$res1} ) {
   my $conns   = $res2->{$z}->result - $res1->{$z}->result;
   my $elapsed = $res2->{$z}->{time} - $res1->{$z}->{time};
   my $rate    = $conns / $elapsed;
   $totalrate += $rate;
}

print "Total connection rate across all machines: " .
      sprintf( '%.2f', $totalrate ) . "\n";

sub getAllClusterMembers( $$ ) {
    my( $adminserver, $userpass ) = @_;

    # Discover cluster members
    my $mconn = SOAP::Lite
        -> ns( 'http://soap.zeus.com/zxtm/1.0/System/MachineInfo/' )
        -> proxy( "https://$userpass\@$adminserver/soap" )
        -> on_fault( sub {
             my( $conn, $res ) = @_;
             die ref $res ? $res->faultstring : $conn->transport->status;
           } );
    $mconn->proxy->ssl_opts( SSL_verify_mode => 0 );

    my $res = $mconn->getAllClusterMachines();

    # $res->result is a reference to an array of System.MachineInfo.Machine objects.
    # Pull out the name:port of the traffic managers in our cluster.
    my @machines = grep s@https://(.*?)/@$1@,
       map { $_->{admin_server}; } @{$res->result};
    return \@machines;
}

sub makeConnections( $$$ ) {
    my( $machines, $userpass, $ns ) = @_;

    my %conns;
    foreach my $z ( @$machines ) {
       $conns{ $z } = SOAP::Lite
         -> ns( $ns )
         -> proxy( "https://$userpass\@$z/soap" )
         -> on_fault( sub {
              my( $conn, $res ) = @_;
              die ref $res ? $res->faultstring : $conn->transport->status;
            } );
       $conns{ $z }->proxy->ssl_opts( SSL_verify_mode => 0 );
    }
    return \%conns;
}

sub makeRequest( $$ ) {
    my( $conns, $req ) = @_;

    my %res;
    foreach my $z ( keys %$conns ) {
       my $r = $conns->{$z}->$req();
       $r->{time} = time();
       $res{$z} = $r;
    }
    return \%res;
}

Running the script:

$ ./getConnections.pl
Discovered cluster members stingray1-ny:9090, stingray1-sf:9090
Total connection rate across all machines: 5.02
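The merging step above (counter delta divided by elapsed time, summed across machines) can be sketched in Python. This is just an illustration of the arithmetic; the machine names and sample values are hypothetical.

```python
def cluster_connection_rate(sample1, sample2):
    """Combine two per-machine counter samples into a cluster-wide rate.

    Each sample maps a machine name to a (total_connections, timestamp)
    pair, mirroring the result/time pairs collected by makeRequest().
    """
    total = 0.0
    for machine, (conns1, t1) in sample1.items():
        conns2, t2 = sample2[machine]
        total += (conns2 - conns1) / (t2 - t1)  # per-machine rate, conns/sec
    return total

# Hypothetical samples taken 10 seconds apart:
s1 = {"stingray1-ny:9090": (1000, 100.0), "stingray1-sf:9090": (2000, 100.0)}
s2 = {"stingray1-ny:9090": (1030, 110.0), "stingray1-sf:9090": (2020, 110.0)}
print("%.2f" % cluster_connection_rate(s1, s2))  # 3.00 + 2.00 = 5.00
```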
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft Exchange 2013.
This article describes how to inspect and load-balance WebSockets traffic using Stingray Traffic Manager, and, when necessary, how to manage WebSockets and HTTP traffic that is received on the same IP address and port.

Overview

WebSockets is an emerging protocol that is used by many web developers to provide responsive and interactive applications. It's commonly used for talk and email applications, real-time games, and stock market and other monitoring applications.

By design, WebSockets is intended to resemble HTTP. It is transported over tcp/80, and the initial handshake resembles an HTTP transaction, but the underlying protocol is a simple bidirectional TCP connection. For more information on the protocol, refer to the Wikipedia summary and RFC 6455.

Basic WebSockets load balancing

Basic WebSockets load balancing is straightforward. You must use the 'Generic Streaming' protocol type to ensure that Stingray correctly handles the asynchronous nature of WebSockets traffic.

Inspecting and modifying the WebSocket handshake

A WebSocket handshake message resembles an HTTP request, but you cannot use the built-in http.* TrafficScript functions to manage it, because these are only available in HTTP-type virtual servers. The libWebSockets.rts library (attached below) implements analogous functions that you can use instead. Paste the libWebSockets.txt library into your Rules catalog and reference it from your TrafficScript rule as follows:

import libWebSockets.rts as ws;

You can then use the ws.* functions to inspect and modify WebSockets handshakes. Common operations include fixing up Host headers and URLs in the request, and selecting the target servers (the 'pool') based on the attributes of the request.
import libWebSockets.rts as ws;

if( ws.getHeader( "Host" ) == "echo.example.com" ) {
   ws.setHeader( "Host", "www.example.com" );
   ws.setPath( "/echo" );
   pool.use( "WebSockets servers" );
}

Ensure that the rules associated with the WebSockets virtual server are configured to run at the Request stage, and to run 'Once', not 'Every'. The rule only needs to read and process the initial client handshake; it does not need to run against subsequent messages in the WebSocket connection. In other words, code to handle the WebSocket handshake should be configured as a Request Rule, with 'Run Once'.

SSL-encrypted WebSockets

Stingray can SSL-decrypt TCP connections, and this works fully with the SSL-encrypted wss:// protocol:

- Configure your virtual server to listen on port 443 (or another port if necessary)
- Enable SSL decryption on the virtual server, using a suitable certificate

Note that when testing this capability, we found that Chrome refused to connect to WebSocket services with untrusted or invalid certificates, and did not issue a warning or prompt to trust the certificate. Other web browsers may behave similarly. In Chrome's case, it was necessary to access the virtual server directly ( https:// ), save the certificate and then import it into the certificate store.

Stingray can also SSL-encrypt downstream TCP connections (enable SSL encryption in the pool containing the real WebSocket servers), and this works fully with SSL-enabled origin WebSockets servers.

Handling HTTP and WebSockets traffic

HTTP traffic should be handled by an HTTP-type virtual server rather than a Generic Streaming one. HTTP virtual servers can employ HTTP optimizations (keepalive handling, HTTP upgrades, compression, caching, HTTP session persistence) and can access the http.* TrafficScript functions in their rules.

If possible, you should run two public-facing virtual servers, listening on two separate IP addresses.
For example, HTTP traffic should be directed to www.site.com (which resolves to the public IP for the HTTP virtual server) and WebSockets traffic should be directed to ws.site.com (resolving to the other public IP): configure two virtual servers, each listening on the appropriate IP address.

Sometimes this is not possible – the WebSockets code is hardwired to the main www domain, or it's not possible to obtain a second public IP address. In that case, all traffic can be directed to the WebSockets virtual server, and HTTP traffic can then be demultiplexed and forwarded internally to an HTTP virtual server: listen on a single IP address, and split off the HTTP traffic to a second HTTP virtual server.

The following TrafficScript code, attached to the WebSockets virtual server, detects whether the request is an HTTP request (rather than a WebSockets one) and hands it off internally to an HTTP virtual server by way of a special 'Loopback Pool':

import libWebSockets.rts as ws;

if( !ws.isWS() ) pool.use( "Loopback Pool" );

Notes: Testing WebSockets

The implementation described in this article was developed using the attached browser-based client (testclient.html), load-balancing traffic to public 'echo' servers (ws://echo.websocket.org/, wss://echo.websocket.org, ws://ajf.me:8080/).

At the time of testing:

- echo.websocket.org did not respond to ping tests, so the default ping health monitor needed to be removed
- Chrome 24 refused to connect to SSL-enabled wss resources unless they had a trusted certificate, and did not warn otherwise

If you find this solution useful, please let us know in the comments below.
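The demultiplexing decision above boils down to inspecting the Upgrade and Connection headers of the initial request, per RFC 6455. Here is a minimal Python sketch of that check; it is not the libWebSockets.rts implementation, just an illustration of the logic it applies.

```python
def is_websocket_handshake(raw_request: bytes) -> bool:
    """Return True if the request head looks like a WebSocket upgrade."""
    head = raw_request.split(b"\r\n\r\n", 1)[0].decode("latin-1")
    headers = {}
    for line in head.split("\r\n")[1:]:  # skip the request line
        name, sep, value = line.partition(":")
        if sep:
            headers[name.strip().lower()] = value.strip().lower()
    # RFC 6455 requires "Upgrade: websocket" plus "Connection: Upgrade"
    # (the Connection header may carry several comma-separated tokens).
    return (headers.get("upgrade") == "websocket"
            and "upgrade" in headers.get("connection", ""))

ws_req = (b"GET /chat HTTP/1.1\r\nHost: ws.site.com\r\n"
          b"Upgrade: websocket\r\nConnection: keep-alive, Upgrade\r\n\r\n")
http_req = b"GET / HTTP/1.1\r\nHost: www.site.com\r\n\r\n"
print(is_websocket_handshake(ws_req), is_websocket_handshake(http_req))  # True False
```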
Following up on this earlier article, try the TrafficScript code snippet below to automatically insert the Google Analytics code on all your web pages. To use it:

1. Copy the rule onto your Stingray Traffic Manager by first navigating to Catalogs -> Rules.
2. Scroll down to Create new rule, give the rule a name, and select Use TrafficScript Language. Click Create Rule to create the rule.
3. Copy and paste the rule below.
4. Change $account to your Google Analytics account number.
5. If you are using multiple domains as described here, set $multiple_domains to TRUE and set $tld to your Top Level Domain as specified in your Google Analytics account.
6. Set the rule as a Response Rule on your Virtual Server by navigating to Services -> Virtual Servers -> <your virtual server> -> Rules -> Response Rules and clicking Add rule.

After that you should be good to go. There is no need to modify your web pages individually; TrafficScript will take care of it all.

#
# Replace UA-XXXXXXXX-X with your Google Analytics Account Number
#
$account = 'UA-XXXXXXXX-X';

#
# If you are tracking multiple domains, ie yourdomain.com,
# yourdomain.net, etc. then set $multiple_domains to TRUE and
# replace yourdomain.com with your Top Level Domain as specified
# in your Google Analytics account
#
$multiple_domains = FALSE;
$tld = 'yourdomain.com';

#
# Only modify text/html pages
#
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ))
   break;

#
# This variable contains the code to be inserted in the web page. Do not modify.
#
$html = "\n<script type=\"text/javascript\"> \n \
  var _gaq = _gaq || []; \n \
  _gaq.push(['_setAccount', '" . $account . "']); \n";

if( $multiple_domains == TRUE ) {
   $html .= " _gaq.push(['_setDomainName', '" . $tld . "']); \n \
  _gaq.push(['_setAllowLinker', true]); \n";
}

$html .= " _gaq.push(['_trackPageview']); \n \
  (function() { \n \
  var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; \n \
  ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; \n \
  var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); \n \
  })(); \n \
</script>\n";

#
# Insert the code right before the </head> tag in the page
#
$body = http.getResponseBody();
$body = string.replace( $body, "</head>", $html . "</head>" );
http.setResponseBody( $body );
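The insertion step itself is just a string substitution on the response body. The same idea can be sketched in Python (the page and snippet here are placeholders):

```python
def insert_before_head_close(body: str, snippet: str) -> str:
    """Insert a snippet immediately before the first closing </head> tag."""
    # count=1 guards against malformed pages with multiple </head> tags.
    return body.replace("</head>", snippet + "</head>", 1)

page = "<html><head><title>Home</title></head><body>Hi</body></html>"
tracker = "<script>/* analytics snippet */</script>"
print(insert_before_head_close(page, tracker))
```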
This short article explains how you can match the IP addresses of remote clients against a DNS blacklist. In this example, we'll use the Spamhaus XBL blacklist service (http://www.spamhaus.org/xbl/).

This article was updated following discussion and feedback from Ulrich Babiak - thanks!

Basic principles

The basic principle of a DNS-based blacklist such as Spamhaus' is as follows:

- Perform a reverse DNS lookup of the IP address in question, using xbl.spamhaus.org rather than the traditional in-addr.arpa domain
- Entries that are not in the blacklist don't return a response (NXDOMAIN); entries that are in the blacklist return a particular IP/domain response indicating their status

Important note: some public DNS servers don't respond to spamhaus.org lookups (see http://www.spamhaus.org/faq/section/DNSBL%20Usage#261). Ensure that Traffic Manager is configured to use a working DNS server.

Simple implementation

A simple implementation is as follows:

$ip = request.getRemoteIP();

# Reverse the IP, and append ".xbl.spamhaus.org"
$bytes = string.dottedToBytes( $ip );
$bytes = string.reverse( $bytes );
$query = string.bytesToDotted( $bytes ) . ".xbl.spamhaus.org";

if( $res = net.dns.resolveHost( $query ) ) {
   log.warn( "Connection from IP " . $ip . " should be blocked - status: " . $res );
   # Refer to Zen return codes at http://www.spamhaus.org/zen/
}

This implementation issues a DNS request on every request, but Traffic Manager caches DNS responses internally (see the Traffic Manager DNS settings in the Global configuration), so there is little risk of overloading the target DNS server with duplicate requests. You may wish to increase the dns!negative_expiry setting because DNS lookups against non-blacklisted IP addresses will 'fail'.
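The query construction (reverse the octets of the client IP, then append the blacklist zone) can be illustrated in Python:

```python
def dnsbl_query_name(ip: str, zone: str = "xbl.spamhaus.org") -> str:
    """Build the DNSBL lookup name for a dotted-quad IPv4 address."""
    octets = ip.split(".")
    assert len(octets) == 4, "IPv4 dotted-quad expected"
    return ".".join(reversed(octets)) + "." + zone

print(dnsbl_query_name("201.116.241.246"))  # 246.241.116.201.xbl.spamhaus.org
```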
A more sophisticated implementation may interpret the response codes and decide to block requests from proxies (the Spamhaus XBL list), while ignoring requests from known spam sources.

What if my DNS server is slow, or fails? What if I want to use a different resolver for the blacklist lookups?

One undesired consequence of this configuration is that it makes the DNS server a single point of failure and a performance bottleneck. Each unrecognised (or expired) IP address must be matched against the DNS server, and the connection is blocked while this happens. In normal usage, a single delay of 100ms or so against the very first request is acceptable, but a DNS failure (Stingray times out after 12 seconds by default) or slowdown is more serious.

In addition, Traffic Manager uses a single system-wide resolver for all DNS operations. If you are hosting a local cache of the blacklist, you'd want to separate DNS traffic accordingly.

Use Traffic Manager to manage the DNS traffic?

A potential solution is to configure Traffic Manager to use itself (127.0.0.1) as a DNS resolver, and to create a virtual server/pool listening on UDP:53. All locally-generated DNS requests would be delivered to that virtual server, which would then forward them to the real DNS server. The virtual server could inspect the DNS traffic and route blacklist lookups to the local cache, and other requests to a real DNS server.

You could then use a health monitor (such as the included dns.pl) to check the operation of the real DNS server and mark it as down if it has failed or timed out after a short period. In that event, the virtual server can determine that the pool is down ( pool.activenodes() == 0 ) and respond directly to the DNS request using a response generated as described in HowTo: Respond directly to DNS requests using libDNS.rts.
Re-implement the resolver

An alternative is to re-implement the TrafficScript resolver using Matthew Geldert's libDNS.rts (Interrogating and managing DNS traffic in Traffic Manager) TrafficScript library to construct the queries and analyse the responses. You can then use the TrafficScript function tcp.send() to submit your DNS lookups to the local cache (unfortunately, we've not got a udp.send() function yet!):

sub resolveHost( $host, $resolver ) {
   import libDNS.rts as dns;

   $packet = dns.newDnsObject();
   $packet = dns.setQuestion( $packet, $host, "A", "IN" );
   $data = dns.convertObjectToRawData( $packet, "tcp" );

   $sock = tcp.connect( $resolver, 53, 1000 );
   tcp.write( $sock, $data, 1000 );
   $rdata = tcp.read( $sock, 1024, 1000 );
   tcp.close( $sock );

   $resp = dns.convertRawDatatoObject( $rdata, "tcp" );

   if( $resp["answercount"] >= 1 ) return $resp["answer"][0]["host"];
}

Note that we're applying 1000ms timeouts to each network operation.

Let's try this, and compare the responses from OpenDNS and from Google's DNS servers. Our 'bad guy' is 201.116.241.246, so we're going to resolve 246.241.116.201.xbl.spamhaus.org:

$badguy = "246.241.116.201.xbl.spamhaus.org";

$text .= "Trying OpenDNS...\n";
$host = resolveHost( $badguy, "208.67.222.222" );
if( $host ) {
   $text .= $badguy . " resolved to " . $host . "\n";
} else {
   $text .= $badguy . " did not resolve\n";
}

$text .= "Trying Google...\n";
$host = resolveHost( $badguy, "8.8.8.8" );
if( $host ) {
   $text .= $badguy . " resolved to " . $host . "\n";
} else {
   $text .= $badguy . " did not resolve\n";
}

http.sendResponse( 200, "text/plain", $text, "" );

(this is just a snippet - remember to paste the resolveHost() implementation, and anything else you need, in here)

This illustrates that OpenDNS resolves the spamhaus.org domain fine, whereas Google does not issue a response.

Caching the responses

This approach has one disadvantage: because it does not use Traffic Manager's resolver, it does not cache the responses, so you'll hit the resolver on every request unless you cache the responses yourself.

Here's a function that calls the resolveHost() function above and caches the result locally for 3600 seconds. It returns 'B' for a bad guy and 'G' for a good guy:

sub getStatus( $ip, $resolver ) {
   $key = "xbl-spamhaus-org-" . $resolver . "-" . $ip; # Any key prefix will do

   $cache = data.get( $key );
   if( $cache ) {
      $status = string.left( $cache, 1 );
      $expiry = string.skip( $cache, 1 );

      if( $expiry < sys.time() ) {
         data.remove( $key );
         $status = "";
      }
   }

   if( !$status ) {
      # We don't have a (valid) entry in our cache, so look the IP up

      # Reverse the IP, and append ".xbl.spamhaus.org"
      $bytes = string.dottedToBytes( $ip );
      $bytes = string.reverse( $bytes );
      $query = string.bytesToDotted( $bytes ) . ".xbl.spamhaus.org";

      $host = resolveHost( $query, $resolver );

      if( $host ) {
         $status = "B";
      } else {
         $status = "G";
      }
      data.set( $key, $status . (sys.time()+3600) );
   }
   return $status;
}
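The caching pattern in getStatus() (a status byte plus an expiry timestamp, re-resolving only after the entry expires) can be mirrored in Python. This is a sketch, not the TrafficScript implementation: resolve_host is a caller-supplied stand-in for the resolver above, and the cache is an in-process dict standing in for data.get()/data.set().

```python
import time

_cache = {}  # key -> (status, expiry); stands in for data.get()/data.set()

def get_status(ip, resolver, resolve_host, ttl=3600):
    """Return 'B' (listed) or 'G' (not listed), caching the verdict for ttl seconds."""
    key = ("xbl-spamhaus-org", resolver, ip)
    entry = _cache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]  # still-valid cached verdict
    # Reverse the IP and append the blacklist zone, as in the rule above.
    query = ".".join(reversed(ip.split("."))) + ".xbl.spamhaus.org"
    status = "B" if resolve_host(query, resolver) else "G"
    _cache[key] = (status, time.time() + ttl)
    return status
```

A lookup that returns any address marks the IP as bad; an NXDOMAIN (here, a falsy return) marks it good, and either verdict is served from the cache until it expires.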
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft Exchange 2010. "This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software"
UPDATE January 2015: libDNS version 2.1 released! The new release is backward-compatible with previous releases and fixes multiple bugs in the parsing of DNS records.

The libDNS.rts library (written by Matthew Geldert and enhanced by Matthias Scheler), attached below, provides a means to interrogate and modify DNS traffic from a TrafficScript rule, and to respond directly to DNS requests when desired.

You can use it to:

- Modify DNS requests on the fly, rewriting a lookup for one address so that it becomes a lookup for a different one: HowTo: Modify DNS requests using libDNS.rts
- Intercept and respond directly to DNS requests, informed by TrafficScript logic: HowTo: Respond directly to DNS requests using libDNS.rts
- Implement a simple DNS responder (avoiding the need for a back-end DNS server) - particularly useful in a Global Server Load Balancing scenario: HowTo: Implement a simple DNS resolver using libDNS.rts

Note: this library allows you to inspect and modify DNS traffic as it is balanced by Stingray. If you want to issue DNS requests from Stingray, use the net.dns.resolveHost() and net.dns.resolveIP() TrafficScript functions instead.

Overview

This rule can be used when Stingray receives DNS lookups (UDP or TCP) using a virtual server, and optionally uses a pool to forward these requests to a real DNS server. Typically, a listen IP address of the Stingray will be published as the NS record for a subdomain so that clients explicitly direct DNS lookups to the Stingray device, although you can use Stingray software in a transparent mode and intercept DNS lookups that are routed through Stingray using the iptables configuration described in Transparent Load Balancing with Stingray Traffic Manager.
Installation

Step 1: Configure Stingray with a virtual server listening on port 53; you can use the 'discard' pool to get started. If you use 'dig' to send a DNS request to that virtual server (listening on 192.168.35.10 in this case), you won't get a response, because the virtual server will discard all requests:

$ dig @192.168.35.10 www.zeus.com

; <<>> DiG 9.8.3-P1 <<>> @192.168.35.10 www.zeus.com
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

Step 2: Create a new rule named libDNS.rts using the attached TrafficScript source. This rule is used as a library, so you don't need to associate it with a virtual server. Then associate the following request rule with your virtual server:

import libDNS.rts as dns;

$request = request.get();
$packet = dns.convertRawDataToObject( $request, "udp" );

# Ignore unparsable packets and query responses to avoid
# attacks like the one described in CVE-2004-0789.
if( hash.count( $packet ) == 0 || $packet["qr"] == "1" ) {
   break;
}

log.info( "DNS request: " . lang.dump( $packet ) );

$host = dns.getQuestion( $packet )["host"];
$packet = dns.addResponse( $packet, "answer", $host, "www.site.com.",
                           "CNAME", "IN", "60", [] );
$packet["qr"] = 1;

log.info( "DNS response: " . lang.dump( $packet ) );
request.sendResponse( dns.convertObjectToRawData( $packet, "udp" ) );

This request rule will catch all DNS requests and respond directly with a CNAME directing the client to look up 'www.site.com' instead:

$ dig @192.168.35.10 www.zeus.com

; <<>> DiG 9.8.3-P1 <<>> @192.168.35.10 www.zeus.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<

Documentation

This DNS library provides a set of helper functions to query and modify DNS packets.
Internally, a decoded DNS packet is represented as a TrafficScript data structure, and when you are writing and debugging rules it may be useful to use log.info( lang.dump( $packet ) ), as in the rule above, to help understand the internal structures.

Create a new DNS packet

$packet = dns.newDnsObject();
log.info( lang.dump( $packet ) );

Creates a data structure model of a DNS request/response packet. The default flags are set for an A record lookup. Returns the internal hash data structure used for the DNS packet.

Set the question in a DNS packet

$packet = dns.setQuestion( $packet, $host, $type, $class );

Sets the question in a data structure representation of a DNS packet.

$packet - the packet data structure to manipulate.
$host - the host to look up.
$type - the type of RR to request.
$class - the class of RR to request (e.g. "IN", as in the rule above).

Returns the modified packet data structure, or -1 if the type is unknown.

Get the question in the DNS packet

$q = dns.getQuestion( $packet );
$host = $q["host"];
$type = $q["type"];
$class = $q["class"];

Gets the question from a data structure representing a DNS request/response. Returns a hash of the host and type in the data structure's question section.

Add an answer to the DNS packet

$packet = dns.addResponse( $packet, $section, $name, $host, $type, $class, $ttl, $additional );

Adds an answer to the specified RR section:

$packet - the packet data structure to manipulate.
$section - name of the section ("answer", "authority" or "additional") to add the answer to.
$name, $host, $type, $class, $ttl, $additional - the answer data.

Returns the modified packet data structure.

Remove an answer from the DNS packet

$packet = dns.removeResponse( $packet, $section, $answer );

while( $packet["additionalcount"] > 0 ) {
   $packet = dns.removeResponse( $packet, "additional", 0 );
}

Removes an answer from the specified RR section.

$packet - the packet data structure to manipulate.
$section - name of the section ("answer", "authority" or "additional") to remove the entry from.
$answer - the array position of the answer to remove (0 removes the first).

Returns the modified packet data structure, or -1 if the specified array key is out of range.

Convert the data structure packet object to a raw packet

$data = dns.convertObjectToRawData( $packet, $protocol );
request.sendResponse( $data );

Converts a data structure into the raw data suitable to send in a DNS request or response.

$packet - data structure to convert.
$protocol - transport protocol of data ("udp" or "tcp").

Returns the raw packet data.

Convert a raw packet to an internal data structure object

$data = request.get();
$packet = dns.convertRawDatatoObject( $data, $protocol );

Converts a raw DNS request/response into a manipulable data structure.

$data - raw DNS packet.
$protocol - transport protocol of data ("udp" or "tcp").

Returns a TrafficScript data structure (hash).

Kudos

Kudos to Matthew Geldert, original author of this library, and Matthias Scheler, author of version 2.0.
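To make the 'object to raw data' step concrete, here is a Python sketch of the DNS wire format for a simple A-record question: an RFC 1035 header followed by a length-prefixed QNAME. This is illustrative only, not the library's code.

```python
import struct

def build_a_query(host: str, txid: int = 0x1234) -> bytes:
    """Build a raw DNS query (UDP wire format) asking for an A record."""
    # Header: id, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in host.rstrip(".").split("."))
    # QTYPE=1 (A), QCLASS=1 (IN)
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)

pkt = build_a_query("www.zeus.com")
print(len(pkt), pkt[:2].hex())  # 30 1234
```

For TCP transport the same message would be prefixed with a two-byte length field, which is why the library's conversion functions take a "udp" or "tcp" protocol argument.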
This guide will walk you through the setup to deploy Global Server Load Balancing on Traffic Manager using the Global Load Balancing feature. In this guide, we will be using the "company.com" domain.

DNS Primer and Concept of Operations

This document is designed to be used in conjunction with the Traffic Manager User Guide. Specifically, this guide assumes that the reader: is familiar with load balancing concepts; has configured local load balancing for the resources requiring Global Load Balancing on their existing Traffic Managers; and has read the "Global Load Balancing" section of the Traffic Manager User Guide, in particular the "DNS Primer" and "About Global Server Load Balancing" sections.

Prerequisites

- You have a DNS sub-domain to use for GLB. In this example we will be using "glb.company.com" - a sub-domain of "company.com";
- You have access to create A records in the glb.company.com (or equivalent) domain; and
- You have access to create CNAME records in the company.com (or equivalent) domain.

Design

Our goal in this exercise is to configure GLB to send users to their geographically closest data centre, as pictured in the Design Goal diagram. We will be using the STM setup shown in the Detailed STM Design diagram to achieve this goal.

Traffic Manager will present a DNS virtual server in each data centre. This DNS virtual server will take DNS requests for resources in the "glb.company.com" domain from external DNS servers, forward the requests to an internal DNS server, and intelligently filter the records based on the GLB load balancing logic.

In this design, we will use the zone "glb.company.com". The zone "glb.company.com" will have NS records set to the two Traffic IP addresses presented by vTM for DNS load balancing in each data centre (172.16.10.101 and 172.16.20.101). This set-up is done in the "company.com" domain zone setup. You will need to set this up yourself, or get your DNS administrator to do it.
DNS Zone File Overview

On the DNS server that hosts the "glb.company.com" zone file, we will create two Address (A) records - one for each Web virtual server that the vTMs are hosting in their respective data centre.

Step 0: DNS zone file set-up

Before we can set up GLB on Traffic Manager, we need to set up our DNS zone files so that we can intelligently filter the results.

Create the GLB zone: in our example, we will be using the zone "glb.company.com". We will configure the "glb.company.com" zone to have two NameServer (NS) records. Each NS record will be pointed at the Traffic IP address of the DNS virtual server as it is configured on vTM. See the Design section above for details of the IP addresses used in this sample setup.

You will need an A record for each data centre resource you want Traffic Manager to GLB. In this example, we will have two A records for the DNS host "www.glb.company.com". On ISC BIND name servers, the zone file will look something like this:

;
; BIND data file for glb.company.com
;
$TTL 604800
@ IN SOA stm1.glb.company.com. info.glb.company.com. (
        201303211322 ; Serial
        7200         ; Refresh
        120          ; Retry
        2419200      ; Expire
        604800       ; Default TTL
)
@ IN NS stm1.glb.company.com.
@ IN NS stm2.glb.company.com.
;
stm1 IN A 172.16.10.101
stm2 IN A 172.16.20.101
;
www IN A 172.16.10.100
www IN A 172.16.20.100

Pre-deployment testing

Using DNS tools such as dig or nslookup (do not use ping as a DNS testing tool), make sure that you can query your "glb.company.com" zone and get both A records returned. This means the DNS zone file is ready for you to apply your GLB logic. In the following example, we are using the dig tool on a Linux client to *directly* query the name servers that the vTM is load balancing, to check that we are served back two A records for "www.glb.company.com".
We have added comments to the section below, marked with <--(i)--| :

user@localhost$ dig @172.16.10.40 www.glb.company.com A

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.40 www.glb.company.com A
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19013
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com. IN A

;; ANSWER SECTION:
www.glb.company.com. 604800 IN A 172.16.20.100 <--(i)--| HERE ARE THE A RECORDS WE ARE TESTING
www.glb.company.com. 604800 IN A 172.16.10.100 <--(i)--|

;; AUTHORITY SECTION:
glb.company.com. 604800 IN NS stm1.glb.company.com.
glb.company.com. 604800 IN NS stm2.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com. 604800 IN A 172.16.10.101
stm2.glb.company.com. 604800 IN A 172.16.20.101

;; Query time: 0 msec
;; SERVER: 172.16.10.40#53(172.16.10.40)
;; WHEN: Wed Mar 20 16:39:52 2013
;; MSG SIZE rcvd: 139

Step 1: GLB locations

GLB uses locations to help STM understand where things are located. First we need to create a GLB location for every data centre you need to provide GLB between. In our example, we will be using two locations, Data Centre 1 and Data Centre 2, named DataCentre-1 and DataCentre-2 respectively:

Creating GLB locations

1. Navigate to "Catalogs > Locations > GLB Locations > Create new Location"
2. Create a GLB location called DataCentre-1
3. Select the appropriate geographic location from the options provided
4. Click Update Location
5. Repeat this process for "DataCentre-2" and any other locations you need to set up.

Step 2: Set up the GLB service

First we create a GLB service so that vTM knows how to distribute traffic using the GLB system:

Create GLB Service

1. Navigate to "Catalogs > GLB Services > Create a new GLB service"
2. Create your GLB service.
In this example we will be creating a GLB service with the following settings; you should adjust these to match your environment:

Service Name: GLB_glb.company.com
Domains: *.glb.company.com
Add Locations: Select "DataCentre-1" and "DataCentre-2"

Then we enable the GLB service:

Enable the GLB Service

Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Basic Settings"
Set "Enabled" to "Yes"

Next we tell the GLB service which resources are in which location:

Locations and Monitoring

Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Locations and Monitoring"
Add the IP addresses of the resources you will be doing GLB between into the relevant location. In my example I have allocated them as follows:

DataCentre-1: 172.16.10.100
DataCentre-2: 172.16.20.100

Don't worry about the "Monitors" section just yet; we will come back to it.

Next we will configure the GLB load balancing mechanism:

Load Balancing Method

Navigate to "GLB Services > GLB_glb.company.com > Load Balancing"

By default the load balancing "algorithm" will be set to "Adaptive" with a "Geo Effect" of 50%. For this set up we will set the "algorithm" to "Round Robin" while we are testing.

Set GLB Load Balancing Algorithm

Set the "load balancing algorithm" to "Round Robin"

The last step is to bind the GLB service "GLB_glb.company.com" to our DNS virtual server.

Binding GLB Service Profile

Navigate to "Services > Virtual Servers > vs_GLB_DNS > GLB Services > Add new GLB Service"
Select "GLB_glb.company.com" from the list and click "Add Service"
Step 3 - Testing Round Robin

Now that we have GLB applied to the "glb.company.com" zone, we can test GLB in action. Using DNS tools such as DiG or nslookup (again, do not use ping as a DNS testing tool), make sure that you can query against your STM DNS virtual servers and see what happens to requests for "www.glb.company.com". Following is test output from the Linux DiG command. We have added comments to the below section, marked with the <--(i)--|:

Testing

user@localhost $ dig @172.16.10.101 www.glb.company.com

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.101 www.glb.company.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17761
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com.    IN  A

;; ANSWER SECTION:
www.glb.company.com. 60 IN A 172.16.20.100  <--(i)--| DataCentre-2 response

;; AUTHORITY SECTION:
glb.company.com. 604800 IN NS stm1.glb.company.com.
glb.company.com. 604800 IN NS stm2.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com. 604800 IN A 172.16.10.101
stm2.glb.company.com. 604800 IN A 172.16.20.101

;; Query time: 1 msec
;; SERVER: 172.16.10.101#53(172.16.10.101)
;; WHEN: Thu Mar 21 13:32:27 2013
;; MSG SIZE rcvd: 123

user@localhost $ dig @172.16.10.101 www.glb.company.com

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.101 www.glb.company.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9098
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com.    IN  A

;; ANSWER SECTION:
www.glb.company.com. 60 IN A 172.16.10.100  <--(i)--| DataCentre-1 response

;; AUTHORITY SECTION:
glb.company.com. 604800 IN NS stm2.glb.company.com.
glb.company.com. 604800 IN NS stm1.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com. 604800 IN A 172.16.10.101
stm2.glb.company.com. 604800 IN A 172.16.20.101

;; Query time: 8 msec
;; SERVER: 172.16.10.101#53(172.16.10.101)
;; WHEN: Thu Mar 21 13:32:27 2013
;; MSG SIZE rcvd: 123

Step 4: GLB Health Monitors

Now that we have GLB running in round robin mode, the next thing to do is to set up HTTP health monitors so that GLB can know whether the application in each DC is available before we send customers to that data centre for access to the website:

Create GLB Health Monitors

Navigate to "Catalogs > Monitors > Monitors Catalog > Create new monitor"
Fill out the form with the following variables:
Name: GLB_mon_www_AU
Type: HTTP monitor
Scope: GLB/Pool
IP or Hostname to monitor: 172.16.10.100:80

Repeat for the other data centre:
Name: GLB_mon_www_US
Type: HTTP monitor
Scope: GLB/Pool
IP or Hostname to monitor: 172.16.20.100:80

Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Locations and Monitoring"
In DataCentre-1, in the field labelled "Add new monitor to the list", select "GLB_mon_www_AU" and click Update.
In DataCentre-2, in the field labelled "Add new monitor to the list", select "GLB_mon_www_US" and click Update.

Step 5: Activate your preferred GLB load balancing logic

Now that you have GLB set up and you can detect application failures in each data centre, you can turn on the GLB load balancing algorithm that is right for your application. You can choose between:

GLB Load Balancing Methods
Load
Geo
Round Robin
Adaptive
Weighted Random
Active-Passive

The online help has a good description of each of these load balancing methods. You should take care to read it and select the one most appropriate for your business requirements and environment.

Step 6: Test everything

Once you have your GLB up and running, it is important to test it for all the failure scenarios you want it to cover. Remember: failover that has not been tested is not failover...
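The direct, cache-bypassing queries that DiG performs in these tests can also be scripted. The sketch below, using only the Python standard library, builds a raw DNS A-record query and reads the answer count from the response header; server and zone names are the sample values used in this guide, and the helper names are our own:

```python
import socket
import struct

def build_query(name, qtype=1, query_id=0x1234):
    """Build a minimal DNS query packet for an A record (qtype=1)."""
    # Header: id, flags (RD=1), 1 question, 0 answer/authority/additional records
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed; the name is terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split("."))
    # QTYPE and QCLASS (IN = 1) follow the name
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)

def count_answers(response):
    """Read the ANCOUNT field (bytes 6-7) from a DNS response header."""
    return struct.unpack(">H", response[6:8])[0]

def query_a_records(server, name):
    # Send the query directly to the chosen name server (as 'dig @server' does),
    # bypassing the local resolver cache entirely.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2)
    sock.sendto(build_query(name), (server, 53))
    response, _ = sock.recvfrom(512)
    return count_answers(response)

# Example (requires network access to the name server):
#   query_a_records("172.16.10.40", "www.glb.company.com")
```

For the round-robin tests, running such a query in a loop should show the answer alternating between the DataCentre-1 and DataCentre-2 addresses.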
Following is a test matrix that you can use to check the essentials:

Test 1 - Condition: All pool members in DataCentre-1 not available. Failure detected by: GLB Health Monitor. GLB responded as designed: Yes / No
Test 2 - Condition: All pool members in DataCentre-2 not available. Failure detected by: GLB Health Monitor. GLB responded as designed: Yes / No
Test 3 - Condition: Failure of STM1. Failure detected by: GLB Health Monitor on STM2. GLB responded as designed: Yes / No
Test 4 - Condition: Failure of STM2. Failure detected by: GLB Health Monitor on STM1. GLB responded as designed: Yes / No
Test 5 - Condition: Customers are sent to the geographically correct DataCentre. Logic implemented by: GLB Load Balancing Mechanism. GLB responded as designed: Yes / No

Notes on testing GLB: The reason we instruct you to use DiG or nslookup in this guide for testing your DNS, rather than a tool that also performs its own DNS resolution (like ping), is that DiG and nslookup bypass your local host's DNS cache. Obviously, cached DNS records will prevent you from seeing changes in the status of your GLB while the cache entries are valid.

The Final Step - Create your CNAME:

Now that you have a working GLB entry for "www.glb.company.com", all that is left to do is to create or change the record for the real site "www.company.com" to be a CNAME for "www.glb.company.com".

Sample Zone File

;
; BIND data file for company.com
;
$TTL 604800
@   IN  SOA  ns1.company.com. info.company.com. (
        201303211312   ; Serial
        7200           ; Refresh
        120            ; Retry
        2419200        ; Expire
        604800 )       ; Default TTL
;
@   IN  NS   ns1.company.com.
; Here is our CNAME
www IN  CNAME www.glb.company.com.
Stingray Traffic Manager can run as either a forward or a reverse proxy. But what is a Proxy? A reverse proxy? A forward proxy? And what can you do with such a feature?   Let's try and clarify what all these proxies are. In computing, a Proxy is a service that accepts network connections from clients and then forwards them on to a server. So in essence, any Load Balancer or Traffic Manager is a kind of proxy. Web caches are another example of proxy servers. These keep a copy of frequently requested web pages and will deliver these pages themselves, rather than having to forward the request on to the 'real' server.   Forward and Reverse Proxies   The difference between a 'forward' and 'reverse' proxy is determined by where the proxy is running.   Forward Proxies:  Your ISP probably uses a web cache to reduce its bandwidth costs. In this case, the proxy is sitting between your computer and the whole Internet. This is a 'forward proxy'. The proxy has a limited set of users (the ISP's customers), and can forward requests on to any machine on the Internet (i.e. the web sites that the customers are browsing).   Reverse Proxies: Alternatively, a company can put a web cache in the same data center as their web servers, and use it to reduce the load on their systems. This is a 'reverse proxy'. The proxy has an unlimited set of users (anyone who wants to view the web site), but proxies requests on to a specific set of machines (the web servers running the company's web site). This is a typical role for Traffic Managers - they are traditionally used as a reverse proxy.   Using Stingray Traffic Manager as a Forward Proxy   You may use Stingray Traffic Manager to forward requests on to any other computer, not just to a pre-configured set of machines in a pool. 
TrafficScript is used to select the exact address to forward the request on to:   pool.use( "Pool name", $ipaddress, $port );   The pool.use() function is used, in the same way as you would normally pick a pool of servers to let Stingray Traffic Manager load balance to. The extra parameters specify the exact machine to use. This machine does not have to belong to the pool that is mentioned; the pool name is there just so Stingray Traffic Manager can use its settings for the connection (e.g. timeout settings, SSL encryption, and so on).   We refer to this technique as 'Forward Proxy mode', or 'Forward Proxy' for short.   What use is a Forward Proxy?   Combined with TrafficScript, the Forward Proxy feature gives you complete control over the load balancing of requests. For example, you could use Stingray Traffic Manager to load balance RDP (Remote Desktop Protocol), using TrafficScript to pick out the user name of a new connection, look the name up in a database and find the hostname of a desktop to allocate for that user.   Forward Proxying also allows Stingray Traffic Manager to be used nearer the clients on a network. With some TrafficScript, Stingray Traffic Manager can operate as a caching web proxy, speeding up local Internet usage. You can then tie in other Stingray Traffic Manager features like bandwidth shaping, service level monitoring and so on. TrafficScript response rules could then filter the incoming data if needed.   Example: A web caching proxy using Stingray Traffic Manager and TrafficScript   You will need to set up Stingray Traffic Manager with a virtual server listening for HTTP proxy traffic. Set HTTP as the protocol, and enable web caching. Also, be sure to disable Stingray's "Location Header rewriting", on the connection management page. Then you will need to add a TrafficScript rule to examine the incoming connections and pick a suitable machine. 
Here's how you would build such a rule:

# Put a sanity check in the rule, to ensure that only proxy traffic is being received:
$host = http.getHostHeader();
if( http.headerExists( "X-Forwarded-For" ) || $host == "" ) {
   http.sendResponse( "400 Bad request", "text/plain",
      "This is a proxy service, you must send proxy requests", "" );
}

# Trim the leading http://host from the URL if necessary
$url = http.getRawUrl();
if ( string.startswith( $url, "http://" ) ) {
   $slash = string.find( $url, "/", 8 );
   $url = string.substring( $url, $slash, -1 );
}
http.setPath( string.unescape( $url ) );

# Extract the port out of the Host: header, if it is there
$pos = string.find( $host, ":" );
if( $pos >= 0 ) {
   $port = string.skip( $host, $pos + 1 );
   $host = string.substring( $host, 0, $pos - 1 );
} else {
   $port = 80;
}

# We need to alter the HTTP request to supply the true IP address of the client
# requesting the page, and we need to tweak the request to remove any
# proxy-specific headers.
http.setHeader( "X-Forwarded-For", request.getRemoteIP() );
http.removeHeader( "Range" ); # Removing this header will make the request more cacheable
http.removeHeader( "Proxy-Connection" );

# The user might have requested a page that is unresolvable, e.g.
# http://fakehostname.nowhere/. Let's resolve the IP and check
$ip = net.dns.resolveHost( $host );
if( $ip == "" ) {
   http.sendResponse( "404 Unknown host", "text/plain",
      "Failed to resolve " . $host . " to an IP address", "" );
}

# The last task is to forward the request on to the target website
pool.use( "Forward Proxy Pool", $ip, $port );

Done! Now try using the proxy: Go to your web browser's settings page or your operating system's network configuration (as appropriate) and configure an HTTP proxy. Fill in the hostname of your Stingray Traffic Manager and the port number of the virtual server running this TrafficScript rule. Now try browsing to a few different web sites.
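The URL and Host-header handling performed by the rule can be sketched in plain Python; the function name here is our own, and it only mirrors the string manipulation, not the header rewriting or pool selection:

```python
def normalize_proxy_request(raw_url, host_header):
    """Mirror the rule's logic: strip a leading 'http://host' from the
    request URL, and split an optional ':port' off the Host header."""
    path = raw_url
    # Trim the leading http://host from the URL if necessary; the path
    # starts at the first '/' after the 'http://' scheme prefix (7 chars).
    if path.startswith("http://"):
        slash = path.find("/", 7)
        path = path[slash:] if slash != -1 else "/"
    # Extract the port out of the Host header, defaulting to 80
    host, _, port = host_header.partition(":")
    return path, host, int(port) if port else 80
```

For example, `normalize_proxy_request("http://example.com:8080/x", "example.com:8080")` yields the path `/x`, host `example.com` and port `8080`, which is what the rule passes on to `pool.use()`.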
You will be able to see the URLs on the Current Activity page in the UI, and the Web Cache page will show you details of the content that has been cached by Stingray:

The 'Recent Connections' report lists connections proxied to remote sites

The Content Cache report lists the resources that Stingray has cached locally

This is just one use of the forward proxy. You could easily apply the feature to other uses - email delivery, SSL-encrypted proxies, and so on. Try it and see!
Google Analytics is a great tool for monitoring and tracking visitors to your web sites. Perhaps best of all, it's entirely web based - you only need a web browser to access the analysis services it provides.

To enable tracking for your web sites, you need to embed a small fragment of JavaScript code in every web page. This extension makes this easy, by inspecting all outgoing content and inserting the code into each HTML page, while honoring the user's 'Do Not Track' preferences.

Installing the Extension

Requirements

This extension has been tested against Stingray Traffic Manager 9.1, and should function with all versions from 7.0 onwards.

Installation

Copy the contents of the User Analytics rule below, and paste them into a new response rule.

Verify that the extension is functioning correctly by accessing a page through the traffic manager and using 'View Source' to check that the Google Analytics code has been added near the top of the document, just before the closing </head> tag.

User Analytics rule

# Edit the following to set your profile ID
$defaultProfile = "UA-123456-1";

# You may override the profile ID on a site-by-site basis here
$overrideProfile = [
   "support.mysite.com" => "UA-123456-2",
   "secure.mysite.com"  => "UA-123456-3"
];

# End of configuration settings

# Only process text/html responses
$contentType = http.getResponseHeader( "Content-Type" );
if( !string.startsWith( $contentType, "text/html" )) break;

# Honor any Do-Not-Track preference
$dnt = http.getHeader( "DNT" );
if ( $dnt == "1" ) break;

# Determine the correct $uacct profile ID
$uacct = $overrideProfile[ http.getHostHeader() ];
if( !$uacct ) $uacct = $defaultProfile;

# See http://www.google.com/support/googleanalytics/bin/answer.py?answer=174090
$script = '
<script type="text/javascript">
 var _gaq = _gaq || [];
 _gaq.push(["_setAccount", "' . $uacct .
'"]);
 _gaq.push(["_trackPageview"]);

 (function() {
   var ga = document.createElement("script");
   ga.type = "text/javascript";
   ga.async = true;
   ga.src = ("https:" == document.location.protocol ? "https://ssl" : "http://www") + ".google-analytics.com/ga.js";
   var s = document.getElementsByTagName("script")[0];
   s.parentNode.insertBefore(ga, s);
 })();
</script>';

$body = http.getResponseBody();

# Find the location of the closing '</head>' tag
$i = string.find( $body, "</head>" );
if( $i == -1 ) $i = string.findI( $body, "</head>" );
if( $i == -1 ) break; # Give up

http.setResponseBody( string.left( $body, $i ) . $script . string.skip( $body, $i ));

For some extensions to this rule, check out Faisal Memon's article Google Analytics revisited
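The body-splicing logic of the rule - skip non-HTML and Do-Not-Track responses, then insert the snippet just before the closing </head> tag, with a case-insensitive fallback search - can be sketched in Python (function and parameter names are ours):

```python
def insert_analytics(body, content_type, dnt, snippet):
    """Sketch of the rule's splice logic: add a tracking snippet just
    before the closing </head> tag of an HTML response body."""
    # Only process text/html responses
    if not content_type.startswith("text/html"):
        return body
    # Honor any Do-Not-Track preference
    if dnt == "1":
        return body
    # Find the closing </head> tag, falling back to a case-insensitive search
    i = body.find("</head>")
    if i == -1:
        i = body.lower().find("</head>")
    if i == -1:
        return body  # give up - no </head> tag to anchor on
    return body[:i] + snippet + body[i:]
```

Note the same ordering as the rule: the cheap header checks run first, so most responses are passed through untouched without scanning the body.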
When you need to scale out your MySQL database, replication is a good way to proceed. Database writes (UPDATEs) go to a 'master' server and are replicated across a set of 'slave' servers. Reads (SELECTs) are load-balanced across the slaves.   Overview   MySQL's replication documentation describes how to configure replication:   MySQL Replication   A quick solution...   If you can modify your MySQL client application to direct 'Write' (i.e. 'UPDATE') connections to one IP address/port and 'Read' (i.e. 'SELECT') connections to another, then this problem is trivial to solve. This generally needs a code update (Using Replication for Scale-Out).   You will need to direct the 'Update' connections to the master database (or through a dedicated Traffic Manager virtual server), and direct the 'Read' connections to a Traffic Manager virtual server (in 'generic server first' mode) and load-balance the connections across the pool of MySQL slave servers using the 'least connections' load-balancing method: Routing connections from the application   However, in most cases, you probably don't have that degree of control over how your client application issues MySQL connections; all connections are directed to a single IP:port. A load balancer will need to discriminate between different connection types and route them accordingly.   Routing MySQL traffic   A MySQL database connection is authenticated by a username and password. In most database designs, multiple users with different access rights are used; less privileged user accounts can only read data (issuing 'SELECT' statements), and more privileged users can also perform updates (issuing 'UPDATE' statements). A well architected application with sound security boundaries will take advantage of these multiple user accounts, using the account with least privilege to perform each operation. This reduces the opportunities for attacks like SQL injection to subvert database transactions and perform undesired updates.   
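The 'quick solution' described above - modifying the client application to send writes and reads to different endpoints - might look like the following sketch. The hostnames are illustrative, and the routing function is our own simplification (a real application would route by operation, not by parsing SQL):

```python
# Hypothetical endpoints: writes go to the master (or a dedicated Traffic
# Manager virtual server), reads to the load-balanced slave virtual server.
WRITE_ENDPOINT = ("mysql-master.example.com", 3306)
READ_ENDPOINT = ("mysql-read.example.com", 3306)

def endpoint_for(statement):
    """Route statements that modify data to the master; everything else
    (SELECTs) goes to the read-only, load-balanced endpoint."""
    verb = statement.lstrip().split(None, 1)[0].upper()
    write_verbs = {"UPDATE", "INSERT", "DELETE", "REPLACE"}
    return WRITE_ENDPOINT if verb in write_verbs else READ_ENDPOINT
```

When this degree of control over the client is not available, the proxy approach described next is needed instead.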
This article describes how to use Traffic Manager to inspect and manage MySQL connections, routing connections authenticated with privileged users to the master database and load-balancing other connections to the slaves:

Load-balancing MySQL connections

Designing a MySQL proxy

Stingray Traffic Manager functions as an application-level (layer-7) proxy. Most protocols are relatively easy for layer-7 proxies like Traffic Manager to inspect and load-balance, and work 'out-of-the-box' or with relatively little configuration.

For more information, refer to the article Server First, Client First and Generic Streaming Protocols.

Proxying MySQL connections

MySQL is much more complicated to proxy and load-balance.

When a MySQL client connects, the server immediately responds with a randomly generated challenge string (the 'salt'). The client then authenticates itself by responding with the username for the connection and a copy of the 'salt' encrypted using the corresponding password:

Connect and Authenticate in MySQL

If the proxy is to route and load-balance based on the username in the connection, it needs to correctly authenticate the client connection first. When it finally connects to the chosen MySQL server, it will then have to re-authenticate the connection with the back-end server using a different salt.

Implementing a MySQL proxy in TrafficScript

In this example, we're going to proxy MySQL connections from two users - 'mysqlmaster' and 'mysqlslave' - directing connections to the 'SQL Master' and 'SQL Slaves' pools as appropriate.

The proxy is implemented using two TrafficScript rules ('mysql-request' and 'mysql-response') on a 'server-first' virtual server listening on port 3306 for MySQL client connections.
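The challenge-response arithmetic that the rules below implement can be sketched in Python. In the MySQL 4.1+ scheme, the server stores the double-SHA1 of the password; the client sends SHA1(password) XORed with SHA1(salt + stored hash), so a proxy holding only the stored hash can recover and verify SHA1(password). The helper names here are ours:

```python
import hashlib

def sha1(data):
    return hashlib.sha1(data).digest()

def bxor(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def client_scramble(password, salt):
    """What a MySQL 4.1+ client sends: SHA1(pw) XOR SHA1(salt + SHA1(SHA1(pw)))."""
    stage1 = sha1(password.encode())
    return bxor(stage1, sha1(salt + sha1(stage1)))

def proxy_verify(scramble, stored_hash, salt):
    """What the request rule does: recover SHA1(pw) ('passwd1' in the rule)
    from the scramble using the stored double-SHA1 hash, then check it.
    Returns SHA1(pw) on success, None on authentication failure."""
    passwd1 = bxor(scramble, sha1(salt + stored_hash))
    return passwd1 if sha1(passwd1) == stored_hash else None
```

The recovered SHA1(password) value is exactly what the proxy needs later to compute a fresh scramble against the back-end server's different salt, which is why the rule stores it in `connection.data`.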
Together, the rules implement a simple state machine that mediates between the client and server:

Implementing a MySQL proxy in TrafficScript

The state machine authenticates and inspects the client connection before deciding which pool to direct the connection to. The rule needs to know the encrypted password and desired pool for each user. The virtual server should be configured to send traffic to the built-in 'discard' pool by default.

The request rule:

Configure the following request rule on a 'server first' virtual server. Edit the values at the top to reflect the encrypted passwords (copied from the MySQL users table) and desired pools:

sub encpassword( $user ) {
   # From the mysql users table - double-SHA1 of the password
   # Do not include the leading '*' in the long 40-byte encoded password
   if( $user == "mysqlmaster" ) return "B17453F89631AE57EFC1B401AD1C7A59EFD547E5";
   if( $user == "mysqlslave" )  return "14521EA7B4C66AE94E6CFF753453F89631AE57EF";
}

sub pool( $user ) {
   if( $user == "mysqlmaster" ) return "SQL Master";
   if( $user == "mysqlslave" )  return "SQL Slaves";
}

$state = connection.data.get( "state" );

if( !$state ) {
   # First time in; we've just received a fresh connection
   $salt1 = randomBytes( 8 );
   $salt2 = randomBytes( 12 );
   connection.data.set( "salt", $salt1.$salt2 );

   $server_hs = "\0\0\0\0" .            # length - fill in below
      "\012" .                          # protocol version
      "Stingray Proxy v0.9\0" .         # server version
      "\01\0\0\0" .                     # thread 1
      $salt1."\0" .                     # salt(1)
      "\054\242" .                      # Capabilities
      "\010\02\0" .                     # Lang and status
      "\0\0\0\0\0\0\0\0\0\0\0\0\0" .    # Unused
      $salt2."\0";                      # salt(2)

   $l = string.length( $server_hs )-4;  # Will be <= 255
   $server_hs = string.replaceBytes( $server_hs, string.intToBytes( $l, 1 ), 0 );

   connection.data.set( "state", "wait for clienths" );
   request.sendResponse( $server_hs );
   break;
}

if( $state == "wait for clienths" ) {
   # We've received the client handshake.
   $chs = request.get( 1 );
   $chs_len = string.bytesToInt( $chs );
   $chs = request.get( $chs_len + 4 );

   # user starts at byte 36; password follows after
   $i = string.find( $chs, "\0", 36 );
   $user = string.subString( $chs, 36, $i-1 );
   $encpasswd = string.subString( $chs, $i+2, $i+21 );

   $passwd2 = string.hexDecode( encpassword( $user ) );
   $salt = connection.data.get( "salt" );
   $passwd1 = string_xor( $encpasswd, string.hashSHA1( $salt.$passwd2 ) );

   if( string.hashSHA1( $passwd1 ) != $passwd2 ) {
      log.warn( "User '" . $user . "': authentication failure" );
      connection.data.set( "state", "authentication failed" );
      connection.discard();
   }

   connection.data.set( "user", $user );
   connection.data.set( "passwd1", $passwd1 );
   connection.data.set( "clienths", $chs );
   connection.data.set( "state", "wait for serverhs" );
   request.set( "" );

   # Select pool based on user
   pool.select( pool( $user ) );
   break;
}

if( $state == "wait for client data" ) {
   # Write the client handshake we remembered from earlier to the server,
   # and piggyback the request we've just received on the end
   $req = request.get();
   $chs = connection.data.get( "clienths" );
   $passwd1 = connection.data.get( "passwd1" );
   $salt = connection.data.get( "salt" );

   $encpasswd = string_xor( $passwd1, string.hashSHA1( $salt . string.hashSHA1( $passwd1 ) ) );
   $i = string.find( $chs, "\0", 36 );
   $chs = string.replaceBytes( $chs, $encpasswd, $i+2 );

   connection.data.set( "state", "do authentication" );
   request.set( $chs.$req );
   break;
}

# Helper function
sub string_xor( $a, $b ) {
   $r = "";
   while( string.length( $a ) ) {
      $a1 = string.left( $a, 1 ); $a = string.skip( $a, 1 );
      $b1 = string.left( $b, 1 ); $b = string.skip( $b, 1 );
      $r = $r . chr( ord( $a1 ) ^ ord( $b1 ) );
   }
   return $r;
}

The response rule

Configure the following as a response rule, set to run every time, for the MySQL virtual server.
$state = connection.data.get( "state" );
$authok = "\07\0\0\2\0\0\0\02\0\0\0";

if( $state == "wait for serverhs" ) {
   # Read server handshake, remember the salt
   $shs = response.get( 1 );
   $shs_len = string.bytesToInt( $shs )+4;
   $shs = response.get( $shs_len );

   $salt1 = string.substring( $shs, $shs_len-40, $shs_len-33 );
   $salt2 = string.substring( $shs, $shs_len-13, $shs_len-2 );
   connection.data.set( "salt", $salt1.$salt2 );

   # Write an authentication confirmation now to provoke the client
   # to send us more data (the first query). This will prepare the
   # state machine to write the authentication to the server
   connection.data.set( "state", "wait for client data" );
   response.set( $authok );
   break;
}

if( $state == "do authentication" ) {
   # We're expecting two responses.
   # The first is the authentication confirmation which we discard.
   $res = response.get();
   $res1 = string.left( $res, 11 );
   $res2 = string.skip( $res, 11 );

   if( $res1 != $authok ) {
      $user = connection.data.get( "user" );
      log.info( "Unexpected authentication failure for " . $user );
      connection.discard();
   }

   connection.data.set( "state", "complete" );
   response.set( $res2 );
   break;
}

Testing your configuration

If you have several MySQL databases to test against, testing this configuration is straightforward. Edit the request rule to add the correct passwords and pools, and use the mysql command-line client to make connections:

$ mysql -h zeus -u username -p
Enter password: *******

Check the 'current connections' list in the Traffic Manager UI to see how it has connected each session to a back-end database server.

If you encounter problems, try the following steps:

Ensure that trafficscript!variable_pool_use is set to 'Yes' in the Global Settings page of the UI. This setting allows you to use non-literal values in pool.use() and pool.select() TrafficScript functions.
Turn on the log!client_connection_failures and log!server_connection_failures settings in the Virtual Server > Connection Management configuration page; these settings will configure the traffic manager to write detailed debug messages to the Event Log whenever a connection fails. Then review your Traffic Manager Event Log and your MySQL logs in the event of an error.

Traffic Manager's access logging can be used to record every connection. You can use the special *{name}d log macro to record information stored using connection.data.set(), such as the username used in each connection.

Conclusion

This article has demonstrated how to build a fairly sophisticated protocol parser, where the Traffic Manager-based proxy performs full authentication and inspection before making a load-balancing decision. The proxy then performs the authentication again against the chosen back-end server.

Once the client-side and server-side handshakes are complete, Traffic Manager will simply forward data back and forth between the client and the server.

This example addresses the problem of scaling out your MySQL database, giving load-balancing and redundancy for database reads ('SELECTs'). It does not address the problem of scaling out your master 'write' server - you need to address that by investing in a sufficiently powerful server, architecting your database and application to minimise the number and impact of write operations, or by selecting a full clustering solution.

The solution leaves a single point of failure, in the form of the master database. This problem could be effectively dealt with by creating a monitor that tests the master database for correct operation. If it detects a failure, the monitor could promote one of the slave databases to master status and reconfigure the 'SQL Master' pool to direct write (UPDATE) traffic to the new MySQL master server.
Acknowledgements

Ian Redfern's MySQL protocol description was invaluable in developing the proxy code.

Appendix - Password Problems?

This example assumes that you are using MySQL 4.1.x or later (it was tested with MySQL 5 clients and servers), and that your database has passwords in the 'long' 41-byte MySQL 4.1 (and later) format (see http://dev.mysql.com/doc/refman/5.0/en/password-hashing.html).

If you upgrade a pre-4.1 MySQL database to 4.1 or later, your passwords will remain in the pre-4.1 'short' format.

You can verify what password format your MySQL database is using as follows:

mysql> select password from mysql.user where user='username';
+------------------+
| password         |
+------------------+
| 6a4ba5f42d7d4f51 |
+------------------+
1 rows in set (0.00 sec)

mysql> update mysql.user set password=PASSWORD('password') where user='username';
Query OK, 1 rows affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> select password from mysql.user where user='username';
+-------------------------------------------+
| password                                  |
+-------------------------------------------+
| *14521EA7B4C66AE94E6CFF753453F89631AE57EF |
+-------------------------------------------+
1 rows in set (0.00 sec)

If you can't create 'long' passwords, your database may be stuck in 'short' password mode. Run the following command to resize the password table if necessary:

$ mysql_fix_privilege_tables --password=admin password

Check that 'old_passwords' is not set to '1' (see here) in your my.cnf configuration file.

Check that the mysqld process isn't running with the --old-passwords option.

Finally, ensure that the privileges you have configured apply to connections from the Stingray proxy. You may need to GRANT ... TO 'user'@'%', for example.
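The 'long' 41-byte format shown above is simply a '*' followed by the uppercase hex of a double SHA-1. You can reproduce it outside MySQL to sanity-check the values you paste into the request rule's encpassword() table (the function name below is ours):

```python
import hashlib

def mysql41_password_hash(password):
    """Compute MySQL's 4.1+ 'long' password hash: '*' followed by the
    uppercase hex of SHA1(SHA1(password)) - 41 characters in total."""
    stage1 = hashlib.sha1(password.encode()).digest()
    return "*" + hashlib.sha1(stage1).hexdigest().upper()
```

Remember that the TrafficScript rule expects the 40 hex characters without the leading '*'.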
1. The Issue

When using perpetual licensing on a Traffic Manager, it is restricted to the throughput limitation specified in its license. If this limitation is reached, traffic will be queued, and in extreme situations - if the throughput reaches much higher levels than expected - some traffic could be dropped because of the limitation.

2. The Solution

Automatically increase the allocated bandwidth for the Traffic Manager!!

3. A Brief Overview of the Solution

An SSC holds the licensed bandwidth configuration for the Traffic Manager instance.

The Traffic Manager is configured to execute a script when an event is raised - the bwlimited event.

The script makes REST calls to the SSC in order to obtain, and then increment if necessary, the Traffic Manager's bandwidth allocation.

I have written the script used here to increment the allocation only if the resulting bandwidth allocation is 5Mbps or under, but this restriction could be removed if it's not required. The idea behind this was to allow the Traffic Manager to increment its allocation, but to only let it have a certain maximum amount of bandwidth from the SSC bandwidth "bucket".

4. The Solution in a Little More Detail

4.1. Move to an SSC Licensing Model

If you're currently running Traffic Managers with perpetual licenses, then you'll need to move from the perpetual licensing model to the SSC licensing model. This effectively allows you to allocate bandwidth and features across multiple Traffic Managers within your estate. The SSC has a "bucket" of bandwidth along with configured feature sets which can be allocated and distributed across the estate as required, allowing for right-sizing of instances and features, and also allowing multi-tenant access to various instances as required throughout the organisation.
Instance Hosts and Instance resources are configured on the SSC, after which a Flexible License is uploaded on each of the Traffic Manager instances which you wish to be licensed by the SSC; those instances then "call home" to the SSC regularly in order to assess their licensing state and to obtain their feature set.

For more information on SSC, visit the Riverbed website pages covering this product, here - SteelCentral Services Controller for SteelApp Software.

There's also a Brochure attached to this article which covers the basics of the SSC.

4.2. Traffic Manager Configuration and a Bit of Bash Scripting!

The SSC has a REST API that can be accessed from external platforms able to send and receive REST calls. This includes the Traffic Manager itself.

To carry out the automated bandwidth allocation increase on the Traffic Manager, we'll need to do the following:

a. Create a script which can be executed on the Traffic Manager, and which will issue REST calls in order to change the SSC configuration for the instance when a bandwidth limitation event fires.
b. Upload the script on to the Traffic Manager.
c. Create a new event and action on the Traffic Manager which will be initiated when the bandwidth limitation is hit, calling the script mentioned in point a above.

4.2.a. The Script to Increment the Traffic Manager Bandwidth Allocation

This script, called SSC_Bandwidth_Increment and attached, is shown below.

Script Function:

Obtain the Traffic Manager instance configuration from the SSC.
Extract the current bandwidth allocation for the Traffic Manager instance from the information obtained.
If the current bandwidth is less than 5Mbps, increment the allocation by 1Mbps and issue the REST call to the SSC to make the changes to the instance configuration as required. If the bandwidth is currently 5Mbps, then do nothing, as we've hit the limit for this particular Traffic Manager instance.
#!/bin/bash
#
# Bandwidth_Increment
# -------------------
# Called on event: bwlimited
#

# Request the current instance information
requested_instance_info=$(curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" \
-X GET -u admin:password https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com-00002)

# Extract the current bandwidth figure for the instance
current_instance_bandwidth=$(echo $requested_instance_info | sed -e 's/.*"bandwidth": \(\S*\).*/\1/g' | tr -d \,)

# Add 1 to the original bandwidth figure, imposing a 5Mbps limitation on this instance bandwidth entry
if [ $current_instance_bandwidth -lt 5 ]
then
    new_instance_bandwidth=$(expr $current_instance_bandwidth + 1)

    # Set the instance bandwidth figure to the new bandwidth figure (original + 1)
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":'"${new_instance_bandwidth}"'}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com-00002
fi

There are some obvious parts of the script that will need to be changed to fit your own environment: the admin username and password in the REST calls, and the SSC name, port and path used in the curl statements. Hopefully from this you will be able to see just how easy the process is, and how the SSC can be manipulated to contain the configuration that you require.

This script can be considered a skeleton which you can use to carry out whatever configuration is required on the SSC for a particular Traffic Manager. Events and actions can be set up on the Traffic Manager which can then be used to execute scripts which access the SSC and make the changes necessary based on any logic you see fit.

4.2.b. Upload the Bash Script to be Used

On the Traffic Manager, upload the bash script that will be needed for the solution to work.
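The extraction step above depends on the SSC returning a JSON document containing a "bandwidth" field. The sed pipeline and the increment logic can be sanity-checked locally against a made-up response body (the JSON below is an assumed, simplified stand-in for a real SSC response; GNU sed is assumed for the \S escape):

```shell
#!/bin/bash
# Fake response body standing in for the SSC's instance document
requested_instance_info='{"status": "Active", "bandwidth": 3, "cpu_usage": 10}'

# Same extraction pipeline as the event script: isolate the value
# following "bandwidth": and strip any trailing comma
current_instance_bandwidth=$(echo $requested_instance_info | sed -e 's/.*"bandwidth": \(\S*\).*/\1/g' | tr -d \,)
echo "current: $current_instance_bandwidth"

# Apply the same 5Mbps cap before incrementing
if [ $current_instance_bandwidth -lt 5 ]
then
    new_instance_bandwidth=$(expr $current_instance_bandwidth + 1)
    echo "new: $new_instance_bandwidth"
fi
```

Running this prints the extracted figure (3) and the incremented figure (4); swap in a captured response from your own SSC to confirm the pattern still matches its exact output formatting.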
The script is uploaded in the Catalogs > Extra Files > Action Programs section of the Traffic Manager, and can then be referenced from the Actions when they are created later.

4.2.c. Create a New Event and Action for the Bandwidth Limitation Hit

On the Traffic Manager, create a new event type as shown in the screenshot below - I've created Bandwidth_Increment, but this event could be called anything relevant. The important factor here is that the event is raised from the bwlimited event.

Once this event has been created, an action must be associated with it.

Create a new external program action as shown in the screenshot below - I've created one called Bandwidth_Increment, but again this could be called anything relevant. The important factor for the action is that it's an external program action and that it calls the correct bash script, in my case called SSC_Bandwidth_Increment.

5. Testing

In order to test the solution, on the SSC, set the initial bandwidth for the Traffic Manager instance to 1Mbps.

Generate some traffic through to a service on the Traffic Manager that forces it to hit its 1Mbps limitation for a sustained period. This will cause the bwlimited event to fire and the Bandwidth_Increment action to be executed, running the SSC_Bandwidth_Increment script.

The script will increment the Traffic Manager bandwidth by 1Mbps. Check and confirm this on the SSC. Once confirmed, stop the traffic generation.

Note: As the Flexible License on the Traffic Manager polls the SSC every 3 minutes for an update on its licensed state, you may not see an immediate change to the bandwidth allocation of the Traffic Manager.
You can force the Traffic Manager to poll the SSC by removing the Flexible License and re-adding it - the re-configuration of the Flexible License will force the Traffic Manager to re-poll the SSC, and you should then see the updated bandwidth in the System > Licenses page of the Traffic Manager (after expanding the license information) as shown in the screenshot below.

6. Summary

Please feel free to use the information contained within this post to experiment!

If you do not yet have an SSC deployment, then an Evaluation can be arranged by contacting your Partner or Riverbed Salesman. They will be able to arrange for the Evaluation, and will be there to support you if required.
1. The Issue

When using perpetual licensing on Traffic Manager instances which are clustered, the failure of one of the instances results in licensed throughput capability being lost until that instance is recovered.

2. The Solution

Automatically adjust the bandwidth allocation across cluster members so that wasted or unused bandwidth is used effectively.

3. A Brief Overview of the Solution

An SSC holds the configuration for the Traffic Manager cluster members. The Traffic Managers are configured to execute scripts on two events being raised: the machinetimeout event and the allmachinesok event. Those scripts make REST calls to the SSC in order to dynamically and automatically amend the Traffic Manager instance configuration held for the two cluster members.

4. The Solution in a Little More Detail

4.1. Move to an SSC Licensing Model

If you're currently running Traffic Managers with perpetual licenses, then you'll need to move from the perpetual licensing model to the SSC licensing model. This allows you to allocate bandwidth and features across multiple Traffic Managers within your estate. The SSC has a "bucket" of bandwidth along with configured feature sets which can be allocated and distributed across the estate as required, allowing for right-sizing of instances and features, and also allowing multi-tenant access to various instances as required throughout the organisation.

Instance Hosts and Instance resources are configured on the SSC, after which a Flexible License is uploaded on each of the Traffic Manager instances you wish to be licensed by the SSC, and those instances "call home" to the SSC regularly in order to assess their licensing state and to obtain their feature set. For more information on SSC, visit the Riverbed website pages covering this product, here - SteelCentral Services Controller for SteelApp Software.

There's also a Brochure attached to this article which covers the basics of the SSC.

4.2.
Traffic Manager Configuration and a Bit of Bash Scripting!

The SSC has a REST API that can be accessed from external platforms able to send and receive REST calls. This includes the Traffic Manager itself.

To carry out automated bandwidth allocation on cluster members, we'll need to carry out the following:

a. Create a script which can be executed on the Traffic Manager, which will issue REST calls in order to change the SSC configuration for the cluster members in the event of a cluster member failure.
b. Create another script which can be executed on the Traffic Manager, which will issue REST calls to reset the SSC configuration for the cluster members when all of the cluster members are up and operational.
c. Upload the two scripts to be used on to the Traffic Manager cluster.
d. Create a new event and action on the Traffic Manager cluster which will be initiated when a cluster member fails, calling the script mentioned in point a above.
e. Create a new event and action on the Traffic Manager cluster which will be initiated when all of the cluster members are up and operational, calling the script mentioned in point b above.

4.2.a. The Script to Re-allocate Bandwidth After a Cluster Member Failure

This script, called Cluster_Member_Fail_Bandwidth_Allocation and attached, is shown below.

Script Function:

Determine which cluster member has executed the script.
Make REST calls to the SSC to allocate bandwidth according to which cluster member is up and which is down.
#!/bin/bash
#
# Cluster_Member_Fail_Bandwidth_Allocation
# ----------------------------------------
# Called on event: machinetimeout
#
# Checks which host calls this script and assigns bandwidth in SSC accordingly
# If demo-1 makes the call, then demo-1 gets 999 and demo-2 gets 1
# If demo-2 makes the call, then demo-2 gets 999 and demo-1 gets 1
#

# Grab the hostname of the executing host
Calling_Hostname=$(hostname -f)

# If demo-1.example.com is executing then issue REST calls accordingly
if [ "$Calling_Hostname" == "demo-1.example.com" ]
then
    # Set the demo-1.example.com instance bandwidth figure to 999 and
    # demo-2.example.com instance bandwidth figure to 1
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":999}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":1}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com
fi

# If demo-2.example.com is executing then issue REST calls accordingly
if [ "$Calling_Hostname" == "demo-2.example.com" ]
then
    # Set the demo-2.example.com instance bandwidth figure to 999 and
    # demo-1.example.com instance bandwidth figure to 1
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":999}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com
    curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
        '{"bandwidth":1}' \
        https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com
fi

There are some obvious parts of the script that will need to be changed to fit your own environment: the hostname validation, the admin username and password in the REST calls, and the SSC name, port and path used in the curl statements. Hopefully from this you will be able to see just how easy the process is, and how the SSC can be manipulated to contain the configuration that you require.

This script can be considered a skeleton, as can the other script for resetting the bandwidth, shown later.

4.2.b. The Script to Reset the Bandwidth

This script, called Cluster_Member_All_Machines_OK and attached, is shown below.

#!/bin/bash
#
# Cluster_Member_All_Machines_OK
# ------------------------------
# Called on event: allmachinesok
#
# Resets bandwidth for demo-1.example.com and demo-2.example.com - both get 500
#

# Set both demo-1.example.com and demo-2.example.com bandwidth figure to 500
curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
    '{"bandwidth":500}' \
    https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com-00002
curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \
    '{"bandwidth":500}' \
    https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com-00002

Again, there are some parts of the script that will need to be changed to fit your own environment: the admin username and password in the REST calls, and the SSC name, port and path used in the curl statements.

4.2.c.
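The branching in the failover script boils down to a small dispatch table: whichever member is still running grants itself the bulk of the bandwidth and drops its failed peer to the minimum. A parameterised sketch of that logic (hostnames and figures are the article's examples, not fixed values) makes it easier to extend to more than two members:

```shell
#!/bin/bash
# Given the calling host's FQDN, print one "<instance> <bandwidth>" pair
# per line: the surviving member takes 999Mbps, its peer drops to 1Mbps.
allocate_bandwidth() {
    local calling_hostname="$1"
    case "$calling_hostname" in
        demo-1.example.com)
            echo "demo-1.example.com 999"
            echo "demo-2.example.com 1"
            ;;
        demo-2.example.com)
            echo "demo-2.example.com 999"
            echo "demo-1.example.com 1"
            ;;
    esac
}

# Each output line would feed one curl call against
# https://ssc.example.com:8000/api/tmcm/1.1/instance/<instance>
allocate_bandwidth "$(hostname -f)"
```

Keeping the allocation decision in one function, separate from the curl plumbing, means the REST call only has to be written once and the per-host policy stays easy to audit.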
Upload the Bash Scripts to be Used

On one of the Traffic Managers, upload the two bash scripts that will be needed for the solution to work. The scripts are uploaded in the Catalogs > Extra Files > Action Programs section of the Traffic Manager, and can then be referenced from the Actions when they are created later.

4.2.d. Create a New Event and Action for a Cluster Member Failure

On the Traffic Manager (any one of the cluster members), create a new event type as shown in the screenshot below - I've created Cluster_Member_Down, but this event could be called anything relevant. The important factor here is that the event is raised from the machinetimeout event.

Once this event has been created, an action must be associated with it. Create a new external program action as shown in the screenshot below - I've created one called Cluster_Member_Down, but again this could be called anything relevant. The important factor for the action is that it's an external program action and that it calls the correct bash script, in my case called Cluster_Member_Fail_Bandwidth_Allocation.

4.2.e. Create a New Event and Action for All Cluster Members OK

On the Traffic Manager (any one of the cluster members), create a new event type as shown in the screenshot below - I've created All_Cluster_Members_OK, but this event could be called anything relevant. The important factor here is that the event is raised from the allmachinesok event.

Once this event has been created, an action must be associated with it. Create a new external program action as shown in the screenshot below - I've created one called All_Cluster_Members_OK, but again this could be called anything relevant. The important factor for the action is that it's an external program action and that it calls the correct bash script, in my case called Cluster_Member_All_Machines_OK.

5. Testing

In order to test the solution, simply DOWN Traffic Manager A from an A/B cluster.
Traffic Manager B should raise the machinetimeout event, which will in turn execute the Cluster_Member_Down event and its associated action and script, Cluster_Member_Fail_Bandwidth_Allocation. The script should allocate 999Mbps to Traffic Manager B, and 1Mbps to Traffic Manager A, within the SSC configuration.

As the Flexible License on the Traffic Manager polls the SSC every 3 minutes for an update on its licensed state, you may not see an immediate change to the bandwidth allocation of the Traffic Managers in question. You can force the Traffic Manager to poll the SSC by removing the Flexible License and re-adding it - the re-configuration of the Flexible License will force the Traffic Manager to re-poll the SSC, and you should then see the updated bandwidth in the System > Licenses page of the Traffic Manager (after expanding the license information) as shown in the screenshot below.

To test the resetting of the bandwidth allocation for the cluster, simply UP Traffic Manager A. Once Traffic Manager A re-joins cluster communications, the allmachinesok event will be raised, which will execute the All_Cluster_Members_OK event and its associated action and script, Cluster_Member_All_Machines_OK. The script should allocate 500Mbps to Traffic Manager B, and 500Mbps to Traffic Manager A, within the SSC configuration.

Just as before, the Flexible License polls the SSC every 3 minutes for an update on its licensed state, so you may not see an immediate change to the bandwidth allocation of the Traffic Managers in question.
You can force the Traffic Manager to poll the SSC once again by removing the Flexible License and re-adding it - the re-configuration of the Flexible License will force the Traffic Manager to re-poll the SSC, and you should then see the updated bandwidth in the System > Licenses page of the Traffic Manager (after expanding the license information) as before (and shown above).

6. Summary

Please feel free to use the information contained within this post to experiment!

If you do not yet have an SSC deployment, then an Evaluation can be arranged by contacting your Partner or Brocade Salesman. They will be able to arrange for the Evaluation, and will be there to support you if required.
Installation

1. Unzip the download (Stingray Traffic Manager Cacti Templates.zip).
2. Via the Cacti UI, use "Import Templates" to import the Data, Host, and Graph templates. (The included graph templates are not required for functionality.)
3. Copy the files from the Cacti folder in the zip file to their corresponding directory in your Cacti install:
   Stingray Global Values script query - /cacti/site/scripts/stingray_globals.pl
   Stingray Virtual Server Table snmp query - cacti/resource/snmp_queries/stingray_vservers.xml
4. Assign the host template to your Traffic Manager(s) and create new graphs.

* Due to the method used by Cacti for creating graphs and the related RRD files, my recommendation is NOT to create all graphs via the Device page. If you create all the graphs via the "*Create Graphs for this Host" link on the device page, Cacti will create an individual data source (an RRD file and SNMP query for each graph), resulting in a significant amount of wasted Cacti and device resources. Test this yourself with the Stingray SNMP graph.

My recommendation is to create a single initial graph for each Data Query or Data Input method (i.e. one for Virtual Servers and one for Global values) and add any additional graphs via Cacti's Graph Management, using the existing Data Source drop-downs.
Data Queries

Stingray Global Values script query - /cacti/site/scripts/stingray_globals.pl (Perl script to query the STM for most of the sys.globals values)
Stingray Virtual Server Table snmp query - cacti/resource/snmp_queries/stingray_vservers.xml (Cacti XML snmp query for the Virtual Servers Table MIB)

Graph Templates

Stingray_-_global_-_cpu.xml
Stingray_-_global_-_dns_lookups.xml
Stingray_-_global_-_dns_traffic.xml
Stingray_-_global_-_memory.xml
Stingray_-_global_-_snmp.xml
Stingray_-_global_-_ssl_-_client_cert.xml
Stingray_-_global_-_ssl_-_decryption_cipher.xml
Stingray_-_global_-_ssl_-_handshakes.xml
Stingray_-_global_-_ssl_-_session_id.xml
Stingray_-_global_-_ssl_-_throughput.xml
Stingray_-_global_-_swap_memory.xml
Stingray_-_global_-_system_-_misc.xml
Stingray_-_global_-_traffic_-_misc.xml
Stingray_-_global_-_traffic_-_tcp.xml
Stingray_-_global_-_traffic_-_throughput.xml
Stingray_-_global_-_traffic_script_data_usage.xml
Stingray_-_virtual_server_-_total_timeouts.xml
Stingray_-_virtual_server_-_connections.xml
Stingray_-_virtual_server_-_timeouts.xml
Stingray_-_virtual_server_-_traffic.xml

Sample Graphs (click image for full size)

Compatibility

This template has been tested with STM 9.4 and Cacti 0.8.8a.

Known Issues

Cacti will create unnecessary queries and data files if the "*Create Graphs for this Host" link on the device page is used. See the install notes for a workaround.

Conclusion

Cacti is sufficient for providing SNMP-based RRD graphs, but is limited in the information available, analytics, correlation, scale, stability and support. This is not just a shameless plug: Brocade offers a much more robust set of monitoring and performance tools.
When deploying applications using content management systems, application owners are typically limited to the functionality of the CMS application in use or the third-party add-ons available. Unfortunately, these components alone may not deliver the application requirements, leaving the application owner to dedicate resources to develop a solution that usually ends up taking longer than it should, or not working at all. This article addresses some hypothetical production use cases where the application does not provide the administrators an easy method to add a timer to the website.

This solution builds upon the previous articles (Embedded Google Maps - Augmenting Web Applications with Traffic Manager and Embedded Twitter Timeline - Augmenting Web Applications with Traffic Manager). Using a solution from Owen Garrett (see Instrument web content with Traffic Manager), this example will use a simple CSS overlay to display the added information.

Basic Rule

A starting point to understand the minimum requirements, to be customized for your own use (e.g. most people will want "text-align:center"). Values may need to be added to the $style or $html for your application; see the examples.

if (!string.startsWith(http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$timer = ( "366" - ( sys.gmtime.format( "%j" ) ) );

$html = '<div class="Countdown">' . $timer . ' DAYS UNTIL THE END OF THE YEAR</div>';

$style = '<style type="text/css">.Countdown{z-index:100;background:white}</style>';

$body = http.getResponseBody();
$body = string.regexsub( $body, "(<body[^>]*>)", $style . "$1\n" . $html . "\n", "i" );
http.setResponseBody( $body );

Example 1 - Simple Day Countdown Timer

This example covers a common use case popular with retailers: a countdown for the holiday shopping season. This example also adds font formatting and additional text with a link.
#Only process text/html content
if ( !string.startsWith (http.getResponseHeader ( "Content-Type" ), "text/html" )) break;

#Countdown target
#Julian day of the year "001" to "366"
$targetday = "359";
$bgcolor = "#D71920";
$labelday = "DAYS";
$title = "UNTIL CHRISTMAS";
$titlecolor = "white";
$link = "/dept.jump?id=dept20020200034";
$linkcolor = "yellow";
$linktext = "VISIT YOUR ONE-STOP GIFT SHOP";

#Calculate days between today and targetday
$timer = ( $targetday - ( sys.gmtime.format( "%j" ) ) );

#Remove the S from "DAYS" if only 1 day left
if ( $timer == 1 ){
   $labelday = string.drop( $labelday, 1 );
};

$html = '
<div class="TrafficScriptCountdown">
   <h3>
      <font color="'.$titlecolor.'">
         '.$timer.' '.$labelday.' '.$title.'
      </font>
      <a href="'.$link.'">
         <font color="'.$linkcolor.'">
            '.$linktext.'
         </font>
      </a>
   </h3>
</div>
';

$style = '
<style type="text/css">
.TrafficScriptCountdown {
   position:relative;
   top:0;
   width:100%;
   text-align:center;
   background:'.$bgcolor.';
   opacity:100%;
   z-index:1000;
   padding:0
}
</style>
';

$body = http.getResponseBody();

$body = string.regexsub( $body, "(<body[^>]*>)", $style . "$1\n" . $html . "\n", "i" );

http.setResponseBody( $body );

Example 1 in Action

Example 2 - Ticking countdown timer with second detail

This example covers how to dynamically display the time down to seconds.
As opposed to sending data to the client every second, I chose to use a client-side JavaScript found at HTML Countdown to Date v3 (Javascript Timer) | ricocheting.com

Example 2 Response Rule

if (!string.startsWith(http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

#Countdown target
$year = "2014";
$month = "11";
$day = "3";
$hr = "8";
$min = "0";
$sec = "0";
#number of hours offset from UTC
$utc = "-8";

$labeldays = "DAYS";
$labelhrs = "HRS";
$labelmins = "MINS";
$labelsecs = "SECS";
$separator = ", ";

$timer = '<script type= "text/javascript" >
var CDown=function(){this.state=0,this.counts=[],this.interval=null};CDown. prototype =\
{init:function(){this.state=1;var t=this;this.interval=window.setInterval(function()\
{t.tick()},1e3)},add:function(t,s){tzOffset= '.$utc.' ,dx=t.toGMTString(),dx=dx. substr \
(0,dx. length -3),tzCurrent=t.getTimezoneOffset()/60*-2,t.setTime(Date.parse(dx)),\
t.setHours(t.getHours()+tzCurrent-tzOffset),this.counts. push ({d:t,id:s}),this.tick(),\
0==this.state&&this.init()},expire:function(t){ for (var s in t)this.display\
(this.counts[t[s]], "Now!" ),this.counts. splice (t[s],1)}, format :function(t){var s= "" ;\
return 0!=t.d&&(s+=t.d+ " " +(1==t.d? "'.string.drop( $labeldays, 1 ).'" :" '.$labeldays.' \
")+" '.$separator.' "),0!=t.h&&(s+=t.h+" "+(1==t.h?" '.string.drop( $labelhrs, 1 ).' ":\
"'.$labelhrs.'" )+ "'.$separator.'" ),s+=t.m+ " " +(1==t.m?"\
'.string.drop( $labelmins, 1 ).' ":" '.$labelmins.' ")+" '.$separator.' ",s+=t.s+" "\
+(1==t.s? "'.string.drop( $labelsecs, 1 ).'" : "'.$labelsecs.'" )+ "'.$separator.'" \
,s. substr (0,s. length -2)},math:function(t){var i=w=d=h=m=s=ms=0; return ms=( "" +\
(t %1e3 +1e3)). 
substr (1,3),t=Math.floor(t/1e3),i=Math.floor(t/31536e3),w=Math.floor\
(t/604800),d=Math.floor(t/86400),t%=86400,h=Math.floor(t/3600),t%=3600,m=Math.floor\
(t/60),t%=60,s=Math.floor(t),{y:i,w:w,d:d,h:h,m:m,s:s,ms:ms}},tick:function()\
{var t=(new Date).getTime(),s=[],i=0,n=0; if (this.counts) for (var e=0,\
o=this.counts. length ;o>e;++e)i=this.counts[e],n=i.d.getTime()-t,0>n?s. push (e):\
this.display(i,this. format (this.math(n)));s. length >0&&this.expire(s),\
0==this.counts. length &&window.clearTimeout(this.interval)},display:function(t,s)\
{document.getElementById(t.id).innerHTML=s}},window.onload=function()\
{var t=new CDown;t.add(new Date\
( '.$year.' , '.--$month.' , '.$day.' , '.$hr.' , '.$min.' , '.$sec.' ), "countbox1" )};
</script><span id= "countbox1" ></span>';

$html = '<div class= "TrafficScriptCountdown" ><center><h3><font color= "white" >\
COUNTDOWN TO RIVERBED FORCE '.$timer.' </font>\
<a href= "https://secure3.aetherquest.com/riverbedforce2014/" ><font color= "yellow" >\
REGISTER NOW</a></h3></font></center></div>';

$style = '<style type= "text/css" >.TrafficScriptCountdown{position:relative;top:0;\
width:100%;background: #E9681D;opacity:100%;z-index:1000;padding:0}</style>';

http.setResponseBody( string.regexsub( http.getResponseBody(),
"(<body[^>]*>)" , $style . "$1\n" . $html . "\n" , "i" ) );

Example 2 in action

Notes

Example 1 results in faster page load time than Example 2.
Example 1 can easily be extended to have TrafficScript set $timer with detail down to the second, as in Example 2.
Be aware of any trailing space(s) after the " \ " line breaks when copy and paste is used to import the rule. Incorrect spacing can stop the JS and the HTML from functioning.
You may have to adjust the elements for your web application (i.e. z-index, the regex sub match, div class, etc.).
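The day arithmetic in Example 1 ($targetday minus the current Julian day from sys.gmtime.format("%j")) is easy to sanity-check outside TrafficScript. A rough shell equivalent, assuming GNU date (the 10# prefix guards against %j values such as "059" being read as octal):

```shell
#!/bin/bash
# Days remaining until a target Julian day of the year (359 ~ Dec 25)
targetday=359

# Current day of the year, 001-366, forced to base 10
today=$(date -u +%j)
timer=$(( targetday - 10#$today ))

echo "$timer days until day $targetday"
```

Note that the basic rule's fixed "366" overstates the days left in a non-leap year by one; pinning the target to a specific Julian day, as Example 1 does, avoids that off-by-one.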
This is a great example of using Traffic Manager to deliver in minutes a solution that could otherwise take hours.
Dynamic information is more abundant now than ever, but we still see web applications serving static content. Unfortunately, many websites still use a static picture for a location map because of the application code changes required. Traffic Manager provides the ability to insert the required code into your site with no changes to the application, simplifying the delivery of dynamic, interactive content tailored to your users. Fortunately, Google provides an API for embedding Google maps in your application. These maps can be implemented with few code changes and support many applications. This document will focus on using Traffic Manager to provide embedded Google Maps without configuration or code changes to the application.

"The Google Maps Embed API uses a simple HTTP request to return a dynamic, interactive map. The map can be easily embedded in your web page by setting the Embed API URL as the src attribute of an iframe... Google Maps Embed API maps are easy to add to your webpage—just set the URL you build as the value of an iframe's src attribute. Control the size of the map with the iframe's height and width attributes. No JavaScript required." -- Google Maps Embed API — Google Developers

Google Maps Embed API Notes

Please reference the Google documentation at Google Maps Embed API — Google Developers for additional information and options not covered in this document.

Google API Key

Before you get started with the TrafficScript, you need to get a Google API key. Requests to the Google Embed API must include a free API key as the value of the URL key parameter. Your key enables you to monitor your application's Maps API usage, and ensures that Google can contact you about your website/application if necessary. Visit Google Maps Embed API — Google Developers for directions to obtain an API key.

By default, a key can be used on any site.
We strongly recommend that you restrict the use of your key to domains that you administer, to prevent use on unauthorized sites. You can specify which domains are allowed to use your API key by clicking the Edit allowed referrers... link for your key. -- Google Maps Embed API — Google Developers

The API key is included in clear text to the client (search nerdydata for "https://www.google.com/maps/embed/v1/place?key="). I also recommend you restrict use of your key to your domains.

Map Modes

Google provides four map modes, and the mode is specified in the request URL:

Place mode displays a map pin at a particular place or address, such as a landmark, business, geographic feature, or town.
Directions mode displays the path between two or more specified points on the map, as well as the distance and travel time.
Search mode displays results for a search across the visible map region. It's recommended that a location for the search be defined, either by including a location in the search term (record+stores+in+Seattle) or by including a center and zoom parameter to bound the search.
View mode returns a map with no markers or directions.

A few use cases:

Display a map of a specific location with labels using Place mode (covered in this document).
Display parking and transit information for a location with Search mode (covered in this document).
Provide directions (between locations, or from the airport to a location) using Directions mode.
Display nearby hotels or tourist information with Search mode using keywords such as "lodging" or "landmarks".
Use geolocation and TrafficScript to provide a dynamic Search map of gyms local to each visitor for your fitness blog.
My personal favorites for intranets: save time figuring out where to eat lunch around the office by using Search mode with the keyword "restaurant", and improve my TrafficScript productivity by using Search mode with "coffee+shops".

Traffic Script Examples

Example 1: Place Map (Replace a string)

This example covers a basic method to replace a string in the HTML code. This rule will replace a string within the existing HTML with the Google Place map iframe HTML, and has been formatted for easy customization and readability.

#Only process text/html content
if (!string.startsWith(http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$nearaddress = "680+Folsom+St.+San+Francisco,+CA+94107";
$googleapikey = "YOUR_KEY_HERE";
$googlemapurl = "https://www.google.com/maps/embed/v1/place";
#Map height and width
$mapheight = "420";
$mapwidth = "420";

#String of HTML to be replaced
$insertstring = "<!-- TAB 2 Content (Office Locations) -->";

#Replacement HTML
$googlemaphtml = "<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" " .
"frameborder=\"0\" style=\"border:0\" src=\"" . $googlemapurl . "?q=" .
"" . $nearaddress . "&key=" . $googleapikey . "\"></iframe>";

#Get the existing HTTP Body for modification
$body = http.getResponseBody();

#Replace the defined string in the body
$body = string.replaceall( $body, $insertstring, $googlemaphtml );
http.setResponseBody( $body );

Example 2: Search Map (Replace a string)

This example is the same as Example 1, but with a change in the map type (note the change to $googlemapurl?q=parking+near). This rule will replace a string within the existing HTML with the Google Search map iframe HTML, and has been formatted for easy customization and readability.
#Only process text/html content
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$nearaddress = "680+Folsom+St.+San+Francisco,+CA+94107";
$googleapikey = "YOUR_KEY_HERE";
$googlemapurl = "https://www.google.com/maps/embed/v1/search";
#Map height and width
$mapheight = "420";
$mapwidth = "420";

#String of HTML to be replaced
$insertstring = "<!-- TAB 2 Content (Office Locations) -->";

#Replacement HTML
$googlemaphtml = "<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" " .
"frameborder=\"0\" style=\"border:0\" src=\"" . $googlemapurl . "?q=parking+near+" .
"" . $nearaddress . "&key=" . $googleapikey . "\"></iframe>";

#Get the existing HTTP body for modification
$body = http.getResponseBody();

#Replace the defined string in the body
$body = string.replaceall( $body, $insertstring, $googlemaphtml );
http.setResponseBody( $body );

Example 3: Search Map (Replace a section)

This example provides a different method to insert code into the existing HTML. This rule uses a regex to replace a section of the existing HTML with the Google map iframe HTML, and has also been formatted for easy customization and readability. Note the changes from Example 2 (see $insertstring and string.regexsub).   
#Only process text/html content
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$nearaddress = "680+Folsom+St.+San+Francisco,+CA+94107";
$googleapikey = "YOUR_KEY_HERE";
$googlemapurl = "https://www.google.com/maps/embed/v1/search";
#Map height and width
$mapheight = "420";
$mapwidth = "420";

#Regex match for the HTML section to be replaced
$insertstring = "</a>Parking</h4>(?s)(.*)<!-- TAB 2 Content \\(Office Locations\\) -->";

#Replacement HTML
$googlemaphtml = "<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" " .
"frameborder=\"0\" style=\"border:0\" src=\"" . $googlemapurl . "?q=parking+near+" .
"" . $nearaddress . "&key=" . $googleapikey . "\"></iframe>";

#Get the existing HTTP body for modification
$body = http.getResponseBody();

#Regex sub against the body looking for the defined section
$body = string.regexsub( $body, $insertstring, $googlemaphtml );
http.setResponseBody( $body );

Example 3.1 (Shortened)

For reference, a shortened version of the Example 3 rule above (with line breaks for readability):

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

http.setResponseBody( string.regexsub( http.getResponseBody(),
   "</a>Parking</h4>(?s)(.*)<!-- TAB 2 Content \\(Office Locations\\) -->",
   "<iframe width=\"420\" height=\"420\" frameborder=\"0\" style=\"border:0\" " .
   "src=\"https://www.google.com/maps/embed/v1/search?" .
   "q=parking+near+680+Folsom+St.+San+Francisco,+CA+94107" .
   "&key=YOUR_KEY_HERE\"></iframe>" ) );

Example 4: Search Map (Replace a section with formatting, select URL, & additional map)

This example is closer to a production use case. Specifically, this was created with www.riverbed.com as my pool nodes. 
This rule has the following changes from Example 3: it uses HTML formatting to visually integrate with the existing application (<div class=\"six columns\">), it only processes responses for the desired URL path (the http.getPath() test for /contact), and it provides an additional Transit Stop map.

#Only process text/html content in the contact path
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" )
   || http.getPath() != "/contact" ) break;

$nearaddress = "680+Folsom+St.+San+Francisco,+CA+94107";
$mapcenter = string.urlencode( "37.784465,-122.398570" );
$mapzoom = "14";
#Google API key
$googleapikey = "YOUR_KEY_HERE";
$googlemapurl = "https://www.google.com/maps/embed/v1/search";
#Map height and width
$mapheight = "420";
$mapwidth = "420";

#Regex match for the HTML section to be replaced
$insertstring = "</a>Parking</h4>(?s)(.*)<!-- TAB 2 Content \\(Office Locations\\) -->";

#Replacement HTML
$googlemapshtml =
#HTML cleanup (2x "</div>") and new section title
"</div></div></a><h4>Parking and Transit Information</h4>" .
#BEGIN Parking map, using existing CSS for layout
"<div class=\"six columns\"><h5>Parking Map</h5>" .
"<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" frameborder=\"0\" " .
"style=\"border:0\" src=\"" . $googlemapurl . "?q=parking+near+" . $nearaddress . "" .
"&key=" . $googleapikey . "\"></iframe></div>" .
#BEGIN Transit map, using existing CSS for layout
"<div class=\"six columns\"><h5>Transit Stops</h5>" .
"<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" frameborder=\"0\" " .
"style=\"border:0\" src=\"" . $googlemapurl . "?q=Transit+Stop+near+" . $nearaddress . "" .
"&center=" . $mapcenter . "&zoom=" . $mapzoom . "&key=" . $googleapikey . "\"></iframe></div>" .
#Include the removed HTML comment
"<!-- TAB 2 Content (Office Locations) -->";

#Get the existing HTTP body for modification
$body = http.getResponseBody();

#Regex sub against the body looking for the defined section
$body = string.regexsub( $body, $insertstring, $googlemapshtml );
http.setResponseBody( $body );

Example 4.1 (Shortened)

For reference, a shortened version of the Example 4 rule above (with line breaks for readability):

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" )
   || http.getPath() != "/contact" ) break;

http.setResponseBody( string.regexsub( http.getResponseBody(),
 "</a>Parking</h4>(?s)(.*)<!-- TAB 2 Content \\(Office Locations\\) -->",
 "</div></div></a><h4>Parking and Transit Information</h4><div class=\"six columns\">" .
 "<h5>Parking Map</h5><iframe width=\"420\" height=\"420\" frameborder=\"0\" " .
 "style=\"border:0\" src=\"https://www.google.com/maps/embed/v1/search" .
 "?q=parking+near+680+Folsom+St.+San+Francisco,+CA+94107&key=YOUR_KEY_HERE\"></iframe>" .
 "</div><div class=\"six columns\"><h5>Transit Stops</h5><iframe width=\"420\" " .
 "height=\"420\" frameborder=\"0\" style=\"border:0\" " .
 "src=\"https://www.google.com/maps/embed/v1/search?q=Transit+Stop+near+" .
 "680+Folsom+St.+San+Francisco,+CA+94107&center=37.784465%2C-122.398570&zoom=14" .
 "&key=YOUR_KEY_HERE\"></iframe></div><!-- TAB 2 Content (Office Locations) -->" ) );
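The section-replace technique used in Examples 3 and 4 can be sketched outside TrafficScript as well. The following is a minimal Python analog of string.regexsub with simplified placeholder HTML (the body and iframe snippets here are illustrative, not the real page). One caveat: Python requires the dot-matches-newline flag at the start of the pattern or via re.DOTALL, whereas the TrafficScript rule embeds (?s) mid-pattern.

```python
import re

# Sample response body standing in for the page HTML (placeholder content).
body = (
    '<a href="#">x</a>Parking</h4>\n'
    '<p>old parking details\nspanning lines</p>\n'
    '<!-- TAB 2 Content (Office Locations) -->'
)

# Same idea as $insertstring: match from the Parking heading to the marker.
# re.DOTALL lets ".*" cross newlines, like the inline (?s) in the rule.
pattern = re.compile(
    r'</a>Parking</h4>(.*)<!-- TAB 2 Content \(Office Locations\) -->',
    re.DOTALL)

# Replacement analogous to $googlemaphtml (iframe markup simplified).
replacement = '<iframe src="https://www.google.com/maps/embed/v1/search?q=parking"></iframe>'

new_body = pattern.sub(replacement, body)
print(new_body)
```

As in the TrafficScript version, the match consumes everything from the heading through the marker, which is why Example 4 re-emits the heading and the marker comment in its replacement HTML.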
With the evolution of social media as a tool for marketing and current events, we commonly see the Twitter feed updated long before the website. It’s not surprising for people to rely on these outlets for information.   Fortunately, Twitter provides a suite of widgets and scripting tools to integrate Twitter information into your application. The tools available can be implemented with few code changes and support many applications. Unfortunately, the very reason a website is not as fresh as social media is the code changes required. The code could be owned by different people in the organization, or you may have limited access to the code due to security or the CMS environment. Traffic Manager provides the ability to insert the required code into your site with no changes to the application.      Twitter Overview "Embeddable timelines make it easy to syndicate any public Twitter timeline to your website with one line of code. Create an embedded timeline from your widgets settings page on twitter.com, or choose “Embed this…” from the options menu on profile, search and collection pages.   Just like timelines on twitter.com, embeddable timelines are interactive and enable your visitors to reply, Retweet, and favorite Tweets directly from your pages. Users can expand Tweets to see Cards inline, as well as Retweet and favorite counts. An integrated Tweet box encourages users to respond or start new conversations, and the option to auto-expand media brings photos front and center.   These new timeline tools are built specifically for the web, mobile web, and touch devices. They load fast, scale with your traffic, and update in real-time." -twitter.com   Thank you Faisal Memon for the original article Using TrafficScript to add a Twitter feed to your web site   As happens more often than not, platform access changes. This time Twitter is our prime example. 
When loading the Twitter js, http://widgets.twimg.com/j/2/widget.js, you can see the following notice:   The Twitter API v1.0 is deprecated, and this widget has ceased functioning.","You can replace it with a new, upgraded widget from <https://twitter.com/settings/widgets/new/"+H+">","For more information on alternative Twitter tools, see <https://dev.twitter.com/docs/twitter-for-websites>   To save you some time, Twitter really means deprecated, and the information link is broken. For more information on alternative Twitter tools, see Twitter for Websites | Home. For information related to this article, please see Embedded Timelines | Home.   One of the biggest changes in the current Twitter platform is the requirement for a "data-widget-id". The data-widget-id is unique, and is used by the Twitter platform to provide the information needed to generate the timeline. Before getting started with the Traffic Manager and web application, you will have to create a new widget using your Twitter account at https://twitter.com/settings/widgets/new/. Once you create your widget, you will see the "Copy and paste the code into the HTML of your site." section on the Twitter website. Along with other information, this code contains your "data-widget-id". See the Create widget image.   Create widget (click to zoom)   This example uses a TrafficScript response rule to rewrite the HTTP body from the application. Specifically, I know the body for my application includes an HTML comment   <!--SIDEBAR-->.    This rule will insert the required client-side code into the HTTP body and return the updated body to complete the request.  The $inserttag variable can be just about anything in the body itself, e.g. the "MORE LIKE THIS" text on the side of this page. Simply change the code below to:     $inserttag = "MORE LIKE THIS";   Some of the values used in the example (i.e. width, data-theme, data-link-color, data-tweet-limit) are not required. They have been included to demonstrate customization. 
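To make the string assembly in the response rule easier to follow, here is a hedged Python sketch that builds the same embeddable-timeline anchor tag from the customization values. The widget ID used here is a placeholder, not a real ID; substitute the one from your own widgets settings page.

```python
# Build the embeddable-timeline anchor tag from customization values.
# "0000000000000000000" is a placeholder widget ID -- use your own.
def timeline_html(widget_id, width="520", height="420",
                  theme="dark", link_color="#0080ff", tweet_limit="0"):
    return (
        '<a class="twitter-timeline" '
        'width="%s" height="%s" '
        'data-theme="%s" data-link-color="%s" '
        'data-tweet-limit="%s" data-widget-id="%s"></a>'
        % (width, height, theme, link_color, tweet_limit, widget_id)
    )

html = timeline_html("0000000000000000000")
print(html)
```

The widgets.js loader script that Twitter supplies is then appended after this anchor, exactly as in the rule below the Create widget step.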
When you create/save the widget on the Twitter website, the configuration options (see the Create widget image above) are associated with the "data-widget-id". For example "data-theme": if you saved the widget with light and you want the light theme, it can be excluded. Alternatively, if you saved the widget with light, you can use "data-theme=dark" to override the value saved with the widget. In the example timeline picture, the data-link-color value is used to override the value provided with the saved "data-widget-id".   Example response rule, *line spaced for readability and using variables for easy customization.

# Only modify text/html pages
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" )) break;

$inserttag = "<!--SIDEBAR-->";

# Create a widget ID @ https://twitter.com/settings/widgets/new
# This is the ID used by riverbed.com
$ttimelinedataid = "261517019072040960";
$ttimelinewidth = "520"; # max could be limited by ID config.
$ttimelineheight = "420";
$ttimelinelinkcolor = "#0080ff"; # 0 for default or ID config; #0080ff & #0099cc are nice
$ttimelinetheme = "dark"; # "light" or "dark"
$ttimelinelimit = "0"; # 0 = unlimited with scroll. >=1 will ignore height.
# See https://dev.twitter.com/web/embedded-timelines#customization for other options.

$ttimelinehtml = "<a class=\"twitter-timeline\" " .
                 "width=\"" . $ttimelinewidth . "" .
                 "\" height=\"" . $ttimelineheight . "" .
                 "\" data-theme=\"" . $ttimelinetheme . "" .
                 "\" data-link-color=\"" . $ttimelinelinkcolor . "" .
                 "\" data-tweet-limit=\"" . $ttimelinelimit . "" .
                 "\" data-widget-id=\"" . $ttimelinedataid . "" .
                 "\"></a><script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)" .
                 "[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id))" .
                 "{js=d.createElement(s);js.id=id;js.src=p+" .
                 "\"://platform.twitter.com/widgets.js\";fjs.parentNode.insertBefore(js," .
                 "fjs);}}(document,\"script\",\"twitter-wjs\");" .
                 "</script><br>" . $inserttag . "";

$body = http.getResponseBody();
$body = string.replace( $body, $inserttag, $ttimelinehtml );
http.setResponseBody( $body );

A short version of the rule above, still with line breaks for readability:

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" )) break;

http.setResponseBody( string.replace( http.getResponseBody(), "<!--SIDEBAR-->",
"<a class=\"twitter-timeline\" width=\"520\" height=\"420\" data-theme=\"dark\" " .
"data-link-color=\"#0080ff\" data-tweet-limit=\"0\" data-widget-id=\"261517019072040960\">" .
"</a><script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test" .
"(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;" .
"js.src=p+\"://platform.twitter.com/widgets.js\";fjs.parentNode.insertBefore(js,fjs);}}" .
"(document,\"script\",\"twitter-wjs\");</script><br><!--SIDEBAR-->" ));

Result from either rule:  
Riverbed SteelApp™ Traffic Manager from Riverbed Technology is a high-performance software-based application delivery controller (ADC), designed to deliver faster and more reliable access to Microsoft Azure applications as well as private applications. As a software-based ADC, it provides unprecedented scale and flexibility to deliver advanced application services.
With more services being delivered through a browser, it's safe to say web applications are here to stay. The rapid growth of web-enabled applications and an increasing number of client devices mean that organizations are dealing with more document transfer methods than ever before. Providing easy access to these applications (web mail, intranet portals, document storage, etc.) can expose vulnerable points in the network.   When it comes to security and protection, application owners typically cover the common threats and vulnerabilities. What is often overlooked happens to be one of the first things we learned about the internet: virus protection. Some application owners consider the response “We have virus scanners running on the servers” sufficient. These same owners implement security plans that involve extending protection as far out as possible, yet surprisingly allow a virus to travel several layers into the architecture before it is caught.   Pulse vADC can extend protection for your applications with unmatched software flexibility and scale. Utilize existing investments by installing Pulse vADC on your infrastructure (Linux, Solaris, VMware, Hyper-V, etc.) and integrating with existing antivirus scanners. Deploy Pulse vADC (available with many providers: Amazon, Azure, CoSentry, Datapipe, Firehost, GoGrid, Joyent, Layered Tech, Liquidweb, Logicworks, Rackspace, Sungard, Xerox, and many others) and externally proxy your applications to remove threats before they reach your infrastructure. Additionally, when serving as a forward proxy for clients, Pulse vADC can be used to mitigate virus propagation by scanning outbound content.   The Pulse Web Application Firewall ICAP Client Handler provides the ability to integrate with an ICAP server. ICAP (Internet Content Adaptation Protocol) is a protocol aimed at providing simple object-based content vectoring for HTTP services. The Web Application Firewall acts as an ICAP client and passes requests to a specified ICAP server. 
This enables you to integrate with third-party products based on the ICAP protocol. In particular, you can use the ICAP Client Handler as a virus scanner interface for scanning uploads to your web application.   Example Deployment   This deployment uses version 9.7 of the Pulse Traffic Manager with the open source applications ClamAV and c-icap installed locally. If utilizing a cluster of Traffic Managers, this deployment should be performed on all nodes of the cluster. Additionally, Traffic Manager could be utilized as an ADC to extend availability and performance across multiple external ICAP application servers. I would also like to credit Thomas Masso, Jim Young, and Brian Gautreau - thank you for your assistance!   "ClamAV is an open source (GPL) antivirus engine designed for detecting Trojans, viruses, malware and other malicious threats." - http://www.clamav.net/   "c-icap is an implementation of an ICAP server. It can be used with HTTP proxies that support the ICAP protocol to implement content adaptation and filtering services." - The c-icap project   Installation of ClamAV, c-icap, and libc-icap-mod-clamav   For this example, public repositories are used to install the packages on version 9.7 of the Traffic Manager virtual appliance with the default configuration. To install in a different manner or on another operating system, consult the ClamAV and c-icap documentation.   Run the following command to back up the sources.list file:

cp /etc/apt/sources.list /etc/apt/sources.list.rvbdbackup

Run the following commands to update the sources.list file. *Tested with Traffic Manager virtual appliance version 9.7. For other Ubuntu releases, replace 'precise' with the installed release codename. Run "lsb_release -sc" to find out your release. 
cat <<EOF >> /etc/apt/sources.list
deb http://ch.archive.ubuntu.com/ubuntu/ precise main restricted
deb-src http://ch.archive.ubuntu.com/ubuntu/ precise main restricted
deb http://us.archive.ubuntu.com/ubuntu/ precise universe
deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe
deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
EOF

Run the following command to retrieve the updated package lists:

apt-get update

Run the following command to install ClamAV, c-icap, and libc-icap-mod-clamav:

apt-get install clamav c-icap libc-icap-mod-clamav

Run the following command to restore your sources.list:

cp /etc/apt/sources.list.rvbdbackup /etc/apt/sources.list

Configure the c-icap ClamAV service

Run the following commands to add lines to /etc/c-icap/c-icap.conf:

cat <<EOF >> /etc/c-icap/c-icap.conf
Service clamav srv_clamav.so
ServiceAlias avscan srv_clamav?allow204=on&sizelimit=off&mode=simple
srv_clamav.ScanFileTypes DATA EXECUTABLE ARCHIVE GIF JPEG MSOFFICE
srv_clamav.MaxObjectSize 100M
EOF

*Consult the ClamAV and c-icap documentation and customize the configuration and settings for ClamAV and c-icap (i.e. definition updates, ScanFileTypes, restricting c-icap access, etc.) for your deployment.

Just for fun, run the following command to manually update the ClamAV database:

/usr/bin/freshclam

Configure the ICAP Server to Start

This can be accomplished a few different ways; for this example we are going to use the Event Alerting functionality of Traffic Manager to start the c-icap server when the Web Application Firewall is started.

Save the following bash script (for this example, start_icap.sh) on your computer:

#!/bin/bash
/usr/bin/c-icap
#END

Upload the script via the Traffic Manager UI under Catalogs > Extra Files > Action Programs. 
(see Figure 1) Figure 1      Create a new event type (for this example named "Firewall Started") under System > Alerting > Manage Event Types. Select "appfirewallcontrolstarted: Application firewall started" and click Update to save. (See Figure 2) Figure 2      Create a new action (for this example named "Start ICAP") under System > Alerting > Manage Actions. Select the "Program" radio button and click "Add Action" to save. (See Figure 3) Figure 3     Configure the "Start ICAP" action program to use the "start_icap.sh" script; for this example we will adjust the timeout setting to 300. Click Update to save. (See Figure 4) Figure 4      Configure the alert mapping under System > Alerting to use the event type and action previously created. Click Update to save your changes. (See Figure 5) Figure 5      Restart the Application Firewall or reboot to automatically start the c-icap server. Alternatively, you can run the /usr/bin/c-icap command from the console, or select "Update and Test" under the "Start ICAP" alert configuration page of the UI to manually start c-icap.   Configure the Web Application Firewall   Within the Web Application Firewall UI, add and configure the ICAPClientHandler using the following attributes and values:   icap_server_location - 127.0.0.1 icap_server_resource - /avscan   Testing Notes   Check the WAF application logs. Use Full logging for the application configuration and enable_logging for the ICAPClientHandler. As with any system, use full logging with caution; the logs could fill fast! Check the c-icap logs (/var/log/c-icap/access.log and /var/log/c-icap/server.log). Note: changing the /etc/c-icap/c-icap.conf "DebugLevel" value to 9 is useful for testing and recording to /var/log/c-icap/server.log. *You may want to change this back to 1 when you are done testing. 
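To exercise the scanner during testing, the industry-standard EICAR test string is handy: antivirus engines, ClamAV included, are expected to flag it even though it is completely harmless. Here is a minimal Python sketch (the output filename is arbitrary) that writes a test file you can then upload through the protected application to confirm the ICAP detection path:

```python
# The 68-byte EICAR test string; AV engines (ClamAV included) flag it by design.
EICAR = (
    "X5O!P%@AP[4\\PZX54(P^)7CC)7}$"
    "EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)

# Write it as a plain file; uploading this through the WAF-protected
# application should trigger an ICAP/ClamAV detection.
with open("eicar-test.txt", "w") as f:
    f.write(EICAR)

print(len(EICAR))  # 68
```

A detection for this upload should then be visible in both the WAF application logs and /var/log/c-icap/access.log described above.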
The Action Settings page in the Traffic Manager UI (for this example, Alerting > Actions > Start ICAP) also provides an "Update and Test" button that allows you to trigger the action and start the c-icap server. Enable verbose logging for the "Start ICAP" action in the Traffic Manager for more information from the event mechanism. *You may want to change this setting back to disabled when you are done testing.   Additional Information Pulse Secure Virtual Traffic Manager Pulse Secure Virtual Web Application Firewall Product Documentation RFC 3507 - Internet Content Adaptation Protocol (ICAP) The c-icap project Clam AntiVirus  
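As a final sanity check that the c-icap service is answering at all, you can send it a bare ICAP OPTIONS request. The Python sketch below only builds and prints the request; the host, port 1344 (the usual c-icap default), and the /avscan resource are assumptions based on the configuration above, so adjust them to your deployment before actually sending the request over a socket.

```python
# Build an ICAP OPTIONS request (RFC 3507) for the avscan service.
# Assumptions: c-icap on 127.0.0.1:1344 and the /avscan alias configured above.
ICAP_HOST = "127.0.0.1"
ICAP_PORT = 1344
ICAP_SERVICE = "/avscan"

def build_options_request(host, port, service):
    # ICAP header lines end in CRLF; a blank line terminates the headers.
    return (
        "OPTIONS icap://%s:%d%s ICAP/1.0\r\n"
        "Host: %s:%d\r\n"
        "User-Agent: icap-sanity-check\r\n"
        "\r\n" % (host, port, service, host, port)
    ).encode("ascii")

request = build_options_request(ICAP_HOST, ICAP_PORT, ICAP_SERVICE)
print(request.decode("ascii"))
# To send it: open a TCP socket to ICAP_HOST:ICAP_PORT, write `request`,
# and read the response; a healthy service replies with an ICAP/1.0 200 status.
```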
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for SAP NetWeaver.   This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software.