Pulse Secure vADC

Traffic Manager does not provide a 'connection mirroring' or 'transparent failover' capability. This article describes contemporary connection mirroring techniques and their strengths and limitations, and explains how Traffic Manager may be used with VMware Fault Tolerance to create an effective solution that preserves all connections in the event of a hardware failure, while processing them fully at layer 7.

What is connection mirroring?

A fault-tolerant load balancer cluster eliminates single points of failure: when load balancers are deployed in a fault-tolerant cluster, they present a reliable endpoint for the services they manage. If one load balancer device fails, its peers can step in and accept traffic so the service continues to operate.

...but a failover event will drop all established TCP connections. If a load balancer fails, any TCP connections established to that load balancer are dropped. Clients will either receive an RST or FIN close message, or they may simply experience a timeout, and they will need to re-establish the TCP connection. This is an inconvenience for long-lived protocols that do not support automatic reconnects, such as FTP.

Connection mirroring offers a solution. If the load balancers are operating in a basic layer-4 packet-forwarding mode, the only actions they perform are to NAT the packets to the correct target node and to apply sequence number translation. They can share this connection information with their peer. If a load balancer fails, the TCP client will retransmit its packets after an appropriate timeout; the packets will be received by the peer, which can then apply the correct NAT and sequence number operations.

When is it appropriate to use connection mirroring?

Connection mirroring is best used when only very basic packet-based load balancing is in use. For example, F5 recommend that you "enable connection mirroring on Performance (Layer 4) virtual servers only" and comment that "mirroring short-term connections such as HTTP and UDP is not recommended, as this will cause a decrease in system performance... is typically not necessary, as those protocols allow for failure of individual requests without loss of the entire session".

Cisco also support layer-4 connection mirroring (referring to it as 'Stateful Failover') and note that it is only possible for layer-4 connections. When using a Cisco ACE device, it is not possible to fail over connections that are proxied, including connections that employ SSL decryption or HTTP compression.

Layer 7 connection mirroring imposes a significant network and CPU overhead

Layer 7 connection mirroring puts a very high load on the dedicated heartbeat link (all incoming packets are replicated to the standby peer) and is CPU-intensive (both traffic managers must process the same transactions at layer 7). It may add latency or interfere with normal system operation, and not all ADC features are supported in a mirrored configuration. Because of these limitations, F5 advise that "the overhead incurred by mirroring HTTP connections is excessive given the minimal advantages."

Does connection mirroring guarantee seamless failover?

Due to timing and implementation details, connection mirroring does not guarantee seamless failover. State data must be shared with the peer once the TCP connection is established, and this must be done asynchronously to avoid delaying every TCP connection.
If a load balancer fails before it has shared the state information, the TCP session cannot be resumed.

- Typical duration of a TCP transaction (not including lingering keepalives): 500 ms
- Typical window before state information is synchronized (implementation dependent): 200 ms (state exchanged 5 times per second)
- On failure, percentage of connections that cannot be re-established: 200/500 = 40%

Connection mirroring does not guarantee seamless failover, because connections must proceed while state is being shared.

What is the effect of connection mirroring on uptime?

Connection mirroring carries a cost: increased internal traffic for state sharing, and severe limitations on the functionality that may be used at the load-balancing tier. What effect does it have on a service's uptime?

- Typical duration of a TCP transaction (not including lingering keepalives): 500 ms
- Typical number of individual load balancer failures in a 12-month period: 5
- Percentage of transactions that would be dropped if a load balancer failed: 50% (assuming an active-active pair of load balancers)
- Percentage of transactions that would be recovered on a failure: 60% (from the analysis above: 40% would not be recovered)
- Probability that an individual connection is impacted by a load balancer failure: 500/(365*24*3600*1000 ms, i.e. roughly 31,536,000,000 ms in a year) * 50% * 5 = 0.000000040
- Probability that the connection could be 'rescued' with connection mirroring: 60% = 0.6
- Proportion of transactions impacted by a failure and then recovered by connection mirroring: 0.000000040 * 0.6 = 0.000000024 (i.e. 0.0000024%)

Connection mirroring improves uptime by an infinitesimal amount.

General advice

Consider using connection mirroring when:

- Operating in L2-4 NAT load-balancing modes
- Performing NAT load balancing with no content inspection (no delayed binding)
- No content processing (e.g. SSL, compression, caching, cookie injection) is required
- The base protocol does not support automatic reconnects, e.g. FTP
- Connections are long-lived and a dropped connection would inconvenience the user, e.g. SSH
- Your load balancer is unreliable and failures are sufficiently frequent that the overhead of mirroring is worthwhile
- You are running a fault-tolerant pair of load balancers

Don't use connection mirroring when:

- Operating in full proxy modes
- Performing NAT or full-proxy load balancing with content inspection
- Compressing content, decrypting SSL, caching, using session persistence methods that inject cookies, or running an application firewall
- The base protocol supports reconnects, e.g. RDP
- Connections are short-lived and easily re-established, e.g. HTTP
- Your load balancers are reliable and you can accommodate instantaneous loss of connections in the event that one does fail
- You plan to run a cluster of three or more load balancers (this configuration is not supported by the major vendors who offer connection mirroring)

Benefits of using connection mirroring: improves uptime by 0.0000024% (typical), i.e. 2.4 millionths of a percent.

Costs of using connection mirroring: limits traffic inspection and manipulation in the load balancer; increases internal traffic and load on the load balancer.

Balance the benefits of connection mirroring against the additional risk and complexity of enabling it, and the potential loss in performance and functionality that will result.
Be aware that, based on the preceding analysis, unless your goal is to achieve more than seven nines of uptime (99.99999%), connection mirroring will not measurably contribute to the reliability of your service.

When connections are too valuable to lose...

Pulse customers include emergency and first-response services around the world, NGO services publishing disaster-response information, and even major political fund-raising concerns. In each case, extremely high availability and consistent performance in the face of large spikes of traffic are paramount to the organizations who selected Traffic Manager.

A number of customers use VMware Fault Tolerance with Traffic Manager to achieve enhanced uptime without compromising any of the functionality that Traffic Manager offers. VMware Fault Tolerance maintains a perfect shadow of a running virtual machine, running on a separate host. If the primary virtual machine fails due to a catastrophic hardware failure, the shadow seamlessly takes over all traffic, including established connections, with a typical latency of less than 1 ms. All application-level workloads, such as SSL decryption, TrafficScript processing and authentication, are maintained without any interruption in service.

VMware Fault Tolerance runs a secondary virtual machine in 'lock step' with the primary. Network traffic and other non-deterministic events are replicated to the secondary, ensuring that it maintains an identical execution state to the primary. If the primary fails, the secondary takes over seamlessly and a new secondary is started.

Such configurations leverage standard VMware technology and are fully supported. They have been proven in production, and they offer enhanced connection mirroring functionality compared to proprietary ADC solutions.
Java Extensions are one of the 'data plane' APIs provided by Traffic Manager to process network transactions. Java Extensions are invoked from TrafficScript using the java.run() function.

This article contains a selection of technical tips and solutions to illustrate the use of Java Extensions.

Basic Language Examples

- Writing Java Extensions - an introduction (presenting a template and 'Hello World' application)
- Writing TrafficScript functions in Java (illustrating how to use the GenericServlet interface)
- Tech Tip: Prompting for Authentication in a Java Extension
- Tech Tip: Reading HTTP responses in a Java Extension

Advanced Language Examples

- Apache Commons Logging (TODO)
- Authenticating users with Active Directory and Stingray Java Extensions
- Watermarking Images with Traffic Manager and Java Extensions
- Watermarking PDF documents with Traffic Manager and Java Extensions
- Being Lazy with Java Extensions
- XML, TrafficScript and Java Extensions
- Merging RSS feeds using Java Extensions (12/17/2008)
- Serving Web Content from Traffic Manager using Java
- Stingray-API.jar: A Java Interface Library for Traffic Manager's SOAP Control API
- TrafficManager Status - Using the Control API from a Java Extension

Java Extensions in other languages

- PyRunner.jar: Running Python code in Traffic Manager
- Making Traffic Manager more RAD with Jython!
- Scala, Traffic Manager and Java Extensions (06/30/2009)

More information

- Feature Brief: Java Extensions in Traffic Manager
- Java Development Guide in the Product Documentation
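For reference, the basic invocation pattern from TrafficScript is shown below. This is a minimal sketch only; 'MyExtension' stands in for the name of any class you have uploaded to the Java Catalog:

# Run a Java Extension against the current request
java.run( "MyExtension" );

# Extra arguments are passed to the extension, which reads them
# from the 'args' attribute of the servlet request
java.run( "MyExtension", http.getPath() );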
(Originally posted by Owen Garrett on Sept 9, 2006. Updated by Paul Wallace on 31st January 2014)

The following Perl example illustrates how to invoke the System.Cache.clearWebCache() method to clear the content cache on the local Traffic Manager or load balancer.

#!/usr/bin/perl -w

use SOAP::Lite 0.60;

# This is the URL of the ZXTM admin server - CHANGE THIS
my $admin_server = 'https://admin:password@zxtmhost:9090';

my $conn = SOAP::Lite
   -> ns('http://soap.zeus.com/zxtm/1.0/System/Cache/')
   -> proxy("$admin_server/soap")
   -> on_fault( sub {
         my( $conn, $res ) = @_;
         die ref $res ? $res->faultstring : $conn->transport->status;
      });

$conn->clearWebCache();
This short article explains how you can match the IP addresses of remote clients against a DNS blacklist. In this example, we'll use the Spamhaus XBL blacklist service (http://www.spamhaus.org/xbl/).

This article was updated following discussion and feedback from Ulrich Babiak - thanks!

Basic principles

The basic principle of a DNS-based blacklist such as Spamhaus' is as follows:

- Perform a reverse DNS lookup of the IP address in question, using xbl.spamhaus.org rather than the traditional in-addr.arpa domain
- Entries that are not in the blacklist don't return a response (NXDOMAIN); entries that are in the blacklist return a particular IP/domain response indicating their status

Important note: some public DNS servers don't respond to spamhaus.org lookups (see http://www.spamhaus.org/faq/section/DNSBL%20Usage#261). Ensure that Traffic Manager is configured to use a working DNS server.

Simple implementation

A simple implementation is as follows:

$ip = request.getRemoteIP();

# Reverse the IP, and append ".xbl.spamhaus.org".
$bytes = string.dottedToBytes( $ip );
$bytes = string.reverse( $bytes );
$query = string.bytesToDotted( $bytes ) . ".xbl.spamhaus.org";

if( $res = net.dns.resolveHost( $query ) ) {
   log.warn( "Connection from IP " . $ip . " should be blocked - status: " . $res );
   # Refer to the return codes at http://www.spamhaus.org/zen/
}

This implementation will issue a DNS request on every request, but Traffic Manager caches DNS responses internally, so there's little risk that you will overload the target DNS server with duplicate requests.

[Image: Traffic Manager DNS settings in the Global configuration]

You may wish to increase the dns!negative_expiry setting, because DNS lookups against non-blacklisted IP addresses will 'fail'.

A more sophisticated implementation may interpret the response codes and decide to block requests from proxies (the Spamhaus XBL list), while ignoring requests from known spam sources.

What if my DNS server is slow, or fails? What if I want to use a different resolver for the blacklist lookups?

One undesired consequence of this configuration is that it makes the DNS server a single point of failure and a performance bottleneck. Each unrecognised (or expired) IP address needs to be matched against the DNS server, and the connection is blocked while this happens. In normal usage, a single delay of 100 ms or so against the very first request is acceptable, but a DNS failure (Stingray times out after 12 seconds by default) or slowdown is more serious.

In addition, Traffic Manager uses a single system-wide resolver for all DNS operations. If you are hosting a local cache of the blacklist, you'd want to separate DNS traffic accordingly.

Use Traffic Manager to manage the DNS traffic?

A potential solution would be to configure Traffic Manager to use itself (127.0.0.1) as a DNS resolver, and create a virtual server/pool listening on UDP:53. All locally generated DNS requests would be delivered to that virtual server, which would then forward them to the real DNS server. You could use a health monitor (such as the included dns.pl) to check the operation of the real DNS server and mark it as down if it has failed or times out after a short period. The virtual server could then inspect the DNS traffic and route blacklist lookups to the local cache, and all other requests to the real DNS server, as in the sketch below.
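Here is a minimal sketch of such a routing rule for the UDP:53 virtual server. The pool names 'Blacklist Cache' and 'Real DNS' are examples, not part of any standard configuration:

# Request rule on the UDP:53 virtual server (sketch only).
# DNS encodes names as length-prefixed labels, so the label "spamhaus"
# appears verbatim in the raw query bytes.
$data = request.get();

if( string.contains( $data, "spamhaus" ) ) {
   pool.select( "Blacklist Cache" );
} else {
   pool.select( "Real DNS" );
}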
If the real DNS server is marked as down, the virtual server can determine that the pool is down ( pool.activenodes() == 0 ) and respond directly to the DNS request using a response generated by HowTo: Respond directly to DNS requests using libDNS.rts.

Re-implement the resolver

An alternative is to re-implement the TrafficScript resolver using Matthew Geldert's libDNS.rts (Interrogating and managing DNS traffic in Traffic Manager) TrafficScript library to construct the queries and analyse the responses. Then you can use the TrafficScript function tcp.send() to submit your DNS lookups to the local cache (unfortunately, we've not got a udp.send function yet!):

sub resolveHost( $host, $resolver ) {
   import libDNS.rts as dns;

   $packet = dns.newDnsObject();
   $packet = dns.setQuestion( $packet, $host, "A", "IN" );
   $data = dns.convertObjectToRawData( $packet, "tcp" );

   $sock = tcp.connect( $resolver, 53, 1000 );
   tcp.write( $sock, $data, 1000 );
   $rdata = tcp.read( $sock, 1024, 1000 );
   tcp.close( $sock );

   $resp = dns.convertRawDatatoObject( $rdata, "tcp" );

   if( $resp["answercount"] >= 1 ) return $resp["answer"][0]["host"];
}

Note that we're applying 1000 ms timeouts to each network operation.

Let's try this, and compare the responses from OpenDNS and from Google's DNS servers. Our 'bad guy' is 201.116.241.246, so we're going to resolve 246.241.116.201.xbl.spamhaus.org:

$badguy = "246.241.116.201.xbl.spamhaus.org";

$text .= "Trying OpenDNS...\n";
$host = resolveHost( $badguy, "208.67.222.222" );
if( $host ) {
   $text .= $badguy . " resolved to " . $host . "\n";
} else {
   $text .= $badguy . " did not resolve\n";
}

$text .= "Trying Google...\n";
$host = resolveHost( $badguy, "8.8.8.8" );
if( $host ) {
   $text .= $badguy . " resolved to " . $host . "\n";
} else {
   $text .= $badguy . " did not resolve\n";
}

http.sendResponse( 200, "text/plain", $text, "" );

(This is just a snippet - remember to paste the resolveHost() implementation, and anything else you need, in here.)

This illustrates that OpenDNS resolves the spamhaus.org domain fine, and Google does not issue a response.

Caching the responses

This approach has one disadvantage: because it does not use Traffic Manager's resolver, it does not cache the responses, so you'll hit the resolver on every request unless you cache the responses yourself.

Here's a function that calls the resolveHost() function above and caches the result locally for 3600 seconds. It returns 'B' for a bad guy and 'G' for a good guy:

sub getStatus( $ip, $resolver ) {
   $key = "xbl-spamhaus-org-" . $resolver . "-" . $ip; # Any key prefix will do

   $cache = data.get( $key );
   if( $cache ) {
      $status = string.left( $cache, 1 );
      $expiry = string.skip( $cache, 1 );

      if( $expiry < sys.time() ) {
         data.remove( $key );
         $status = "";
      }
   }

   if( !$status ) {
      # We don't have a (valid) entry in our cache, so look the IP up

      # Reverse the IP, and append ".xbl.spamhaus.org".
      $bytes = string.dottedToBytes( $ip );
      $bytes = string.reverse( $bytes );
      $query = string.bytesToDotted( $bytes ) . ".xbl.spamhaus.org";

      $host = resolveHost( $query, $resolver );

      if( $host ) {
         $status = "B";
      } else {
         $status = "G";
      }
      data.set( $key, $status . (sys.time()+3600) );
   }
   return $status;
}
We spend a great deal of time focusing on how to speed up customers' web services. We constantly research new techniques to load balance traffic, optimise network connections and improve the performance of overloaded application servers. The techniques and options available from us (and yes, from our competitors too!) may seem bewildering at times. So I would like to spend a short time singing the praises of one specific feature, which I can confidently say will improve your website's performance above all others - caching your website.

"But my website is uncacheable! It's full of dynamic, changing pages. Caching is useless to me!"

We'll answer that objection soon, but first, it is worth a quick explanation of the two main styles of caching:

Client-side caching

Most people's experience of a web cache is on their web browser. Internet Explorer or Firefox will store copies of web pages on your hard drive, so if you visit a site again, it can load the content from disk instead of over the Internet.

There's another layer of caching going on, though. Your ISP may also be doing some caching. The ISP wants to save money on its bandwidth, and so puts a big web cache in front of everyone's Internet access. The cache keeps copies of the most-visited web pages, storing bits of many different websites. A popular and widely used open-source web cache is Squid.

However, not all web pages are cacheable near the client. Websites have dynamic content, so for example any web page containing personalized or changing information will not be stored in your ISP's cache. Generally the cache will fill up with "static" content such as images, movies, etc. These get stored for hours or days. For your ISP, this is great, as these big files take up the most of their precious bandwidth.

For someone running their own website, browser caching or ISP caching does not do much. They might save a little bandwidth from the ISP image caching if they have lots of visitors from the same ISP, but the bulk of the website, including most of the content generated by their application servers, will not be cached, and their servers will still have lots of work to do.

Server-side caching (with Traffic Manager)

Here, the main aim is not to save bandwidth, but to accelerate your website. The traffic manager sits in your datacenter (or your cloud provider), in front of your web and application servers. Access to your website is through the Traffic Manager software, so it sees both the requests and responses. Traffic Manager can then start to answer these requests itself, delivering cached responses. Your servers then have less work to do. Less work = faster responses = fewer servers needed = saves money!

"But I told you - my website isn't cacheable!"

There's a reason why your website is marked uncacheable. Remember the ISP caches? They mustn't store your changing, constantly updating web pages. To enforce this, application servers send back instructions with every web page, the Cache-Control HTTP header, saying "Don't cache this". Traffic Manager obeys these cache instructions too, because it's well-behaved.

But think - how often does your website really change? Take a very busy site, for example a popular news site. Its front page may be labelled as uncacheable so that visitors always see the latest news, since it changes as new stories are added. But new additions aren't happening every second of the day. What if the page was marked as cacheable - for just one second?
Visitors would still see the most up-to-date news, but the load on the site's servers would plummet. Even if the website had as few as ten views in a second, this simple change would reduce the load on the app servers ten-fold.

This isn't an isolated example - there are plenty of others: think Twitter searches, auction listings, "live" graphing, and so on. All such content can be cached briefly without any noticeable change to the "liveness" of the site. Traffic Manager can deliver a cached version of your web page much faster than your application servers - not just because it is highly optimized, but because sending a cached copy of a page is so much less work than generating it from scratch.

So if this simple cache change is so great, why don't people use this technique more - surely app servers can mark their web pages as cacheable for one or two seconds without Traffic Manager's help, and those browser/ISP caches can then do the magic? Well, the browser caches aren't going to be any use - an individual isn't going to be viewing the same page on your website multiple times a second (and if they keep hitting the reload button, their page requests are not cacheable anyway). So how about those big ISP caches? Unfortunately, they aren't always clever enough either. Some see a web page marked as cacheable for a short time and will either:

- Not cache it at all (it's going to expire soon, what's the point in keeping it?), or
- Cache it for much longer (if it is cacheable for 3 seconds, why not cache it for 300, right?)

Also, by leaving the caching to the client side, the cache hit rate gets worse. A user in France isn't going to be able to make use of a cached copy of your site stored in a US ISP's cache, for instance.

If you use Traffic Manager to do the caching, these issues can be solved. First, the cache is held in one place - your datacenter - so it is available to all visitors. Second, Traffic Manager can tweak the cache instructions for the page, so it caches the page while forcing other people not to. Here is what's going on:

1. A request arrives at Traffic Manager, which sends it on to your application server.
2. The app server sends the web page response back to the traffic manager. The page has a Cache-Control: no-cache header, since the app server thinks the page can't be cached.
3. A TrafficScript response rule identifies the page as one that can be cached, for a short time. It changes the cache instructions to Cache-Control: max-age=3, meaning that the page can now be cached for three seconds.
4. Traffic Manager's web cache stores the page.
5. Traffic Manager sends out the response to the user (and to anyone else for the next three seconds), but changes the cache instructions to Cache-Control: no-cache, to ensure downstream caches, ISP caches and web browsers do not try to cache the page further.

Result: a much faster web site, yet it still serves dynamic and constantly updating pages to viewers. Give it a try - you will be amazed at the performance improvements possible, even when caching for just a second. Remember, almost anything can be cached if you configure your servers correctly!

How to set up Traffic Manager

On the Admin Server, edit the virtual server that you want to cache, and click on the "Content Caching" link. Enable the cache. There are options here for the default cache time for pages. These can be changed as desired, but are primarily for the "ordinary" content that is cacheable normally, such as images, etc.
The "webcache!control_out" setting allows you to change the Cache-Control header for your pages after they have been cached by the Traffic Manager software, so you can put "no-cache" here to stop others from caching your pages.   The "webcache!refresh_time" setting is a useful extra here. Set this to one second. This will smooth out the load on your app servers. When a cached page is about to expire (i.e. it's too old to stay cached) and a new request arrives, Traffic Manager will hand over a single request to your app servers, to see if there is a newer page available. Other requests continue to get served from the cache. This can prevent 'waves' of requests hitting your app servers when a page is about to expire from the cache.   Now, we need to make Traffic Manager cache the specific pages of your site that the app server claims are uncacheable. We do this using the RuleBuilder system for defining rules, so click on the "Catalogs" icon and then select the "Rules" tab. Now create a new RuleBuilder rule.   This rule needs to run for the specific web pages that you wish to make cacheable for short amounts of time. For an example, we'll make "/news" cacheable. Add a condition of "HTTP:URL Path" to match "/news", then add an action to set a HTTP response header. The rule should look like this:     Finally, add this rule as a response rule to your virtual server. That's it! Your site should now start to be cached. Just a final few words of caution:   Be selective in the pages that you mark as cacheable; remember that personalized pages (e.g. showing a username) cannot be cached otherwise other people will see those pages too! If necessary, some page redesign might be called for to split the content into "generic" and "user-specific" iframes or AJAX requests. Server-side caching saves you CPU time, not bandwidth. If your website is slow because you are hitting your site throughput limits, then other techniques are needed.
When you need to scale out your MySQL database, replication is a good way to proceed. Database writes (UPDATEs) go to a 'master' server and are replicated across a set of 'slave' servers. Reads (SELECTs) are load-balanced across the slaves.

Overview

MySQL's replication documentation describes how to configure replication: MySQL Replication.

A quick solution...

If you can modify your MySQL client application to direct 'write' (i.e. UPDATE) connections to one IP address/port and 'read' (i.e. SELECT) connections to another, then this problem is trivial to solve. This generally needs a code update (see Using Replication for Scale-Out).

You will need to direct the 'write' connections to the master database (or through a dedicated Traffic Manager virtual server), and direct the 'read' connections to a Traffic Manager virtual server (in 'generic server first' mode), load-balancing the connections across the pool of MySQL slave servers using the 'least connections' load-balancing method.

[Image: Routing connections from the application]

However, in most cases, you probably don't have that degree of control over how your client application issues MySQL connections; all connections are directed to a single IP:port. A load balancer will need to discriminate between different connection types and route them accordingly.

Routing MySQL traffic

A MySQL database connection is authenticated by a username and password. In most database designs, multiple users with different access rights are used; less privileged user accounts can only read data (issuing SELECT statements), and more privileged users can also perform updates (issuing UPDATE statements). A well-architected application with sound security boundaries will take advantage of these multiple user accounts, using the account with least privilege to perform each operation. This reduces the opportunities for attacks like SQL injection to subvert database transactions and perform undesired updates.

This article describes how to use Traffic Manager to inspect and manage MySQL connections, routing connections authenticated with privileged users to the master database and load-balancing other connections to the slaves.

[Image: Load-balancing MySQL connections]

Designing a MySQL proxy

Stingray Traffic Manager functions as an application-level (layer-7) proxy. Most protocols are relatively easy for layer-7 proxies like Traffic Manager to inspect and load-balance, and work out of the box or with relatively little configuration. For more information, refer to the article Server First, Client First and Generic Streaming Protocols.

Proxying MySQL connections

MySQL is much more complicated to proxy and load-balance. When a MySQL client connects, the server immediately responds with a randomly generated challenge string (the 'salt'). The client then authenticates itself by responding with the username for the connection and a copy of the salt encrypted using the corresponding password.

[Image: Connect and Authenticate in MySQL]

If the proxy is to route and load-balance based on the username in the connection, it needs to correctly authenticate the client connection first. When it finally connects to the chosen MySQL server, it will then have to re-authenticate the connection with the back-end server, using a different salt.
Implementing a MySQL proxy in TrafficScript

In this example, we're going to proxy MySQL connections from two users - 'mysqlmaster' and 'mysqlslave' - directing connections to the 'SQL Master' and 'SQL Slaves' pools as appropriate.

The proxy is implemented using two TrafficScript rules ('mysql-request' and 'mysql-response') on a 'server-first' virtual server listening on port 3306 for MySQL client connections. Together, the rules implement a simple state machine that mediates between the client and server:

[Image: Implementing a MySQL proxy in TrafficScript]

The state machine authenticates and inspects the client connection before deciding which pool to direct the connection to. The rule needs to know the encrypted password and desired pool for each user. The virtual server should be configured to send traffic to the built-in 'discard' pool by default.

The request rule

Configure the following request rule on a 'server-first' virtual server. Edit the values at the top to reflect the encrypted passwords (copied from the mysql.user table) and desired pools:

sub encpassword( $user ) {
   # From the mysql users table - double-SHA1 of the password
   # Do not include the leading '*' in the long 40-byte encoded password
   if( $user == "mysqlmaster" ) return "B17453F89631AE57EFC1B401AD1C7A59EFD547E5";
   if( $user == "mysqlslave" )  return "14521EA7B4C66AE94E6CFF753453F89631AE57EF";
}

sub pool( $user ) {
   if( $user == "mysqlmaster" ) return "SQL Master";
   if( $user == "mysqlslave" )  return "SQL Slaves";
}

$state = connection.data.get( "state" );

if( !$state ) {
   # First time in; we've just received a fresh connection
   $salt1 = randomBytes( 8 );
   $salt2 = randomBytes( 12 );
   connection.data.set( "salt", $salt1.$salt2 );

   $server_hs = "\0\0\0\0" .              # length - fill in below
      "\012" .                            # protocol version
      "Stingray Proxy v0.9\0" .           # server version
      "\01\0\0\0" .                       # thread 1
      $salt1."\0" .                       # salt(1)
      "\054\242" .                        # capabilities
      "\010\02\0" .                       # lang and status
      "\0\0\0\0\0\0\0\0\0\0\0\0\0" .      # unused
      $salt2."\0";                        # salt(2)

   $l = string.length( $server_hs )-4;    # will be <= 255
   $server_hs = string.replaceBytes( $server_hs, string.intToBytes( $l, 1 ), 0 );

   connection.data.set( "state", "wait for clienths" );
   request.sendResponse( $server_hs );
   break;
}

if( $state == "wait for clienths" ) {
   # We've received the client handshake.
   $chs = request.get( 1 );
   $chs_len = string.bytesToInt( $chs );
   $chs = request.get( $chs_len + 4 );

   # user starts at byte 36; password follows after
   $i = string.find( $chs, "\0", 36 );
   $user = string.subString( $chs, 36, $i-1 );
   $encpasswd = string.subString( $chs, $i+2, $i+21 );

   $passwd2 = string.hexDecode( encpassword( $user ) );
   $salt = connection.data.get( "salt" );
   $passwd1 = string_xor( $encpasswd, string.hashSHA1( $salt.$passwd2 ) );

   if( string.hashSHA1( $passwd1 ) != $passwd2 ) {
      log.warn( "User '" . $user . "': authentication failure" );
      connection.data.set( "state", "authentication failed" );
      connection.discard();
   }

   connection.data.set( "user", $user );
   connection.data.set( "passwd1", $passwd1 );
   connection.data.set( "clienths", $chs );
   connection.data.set( "state", "wait for serverhs" );
   request.set( "" );

   # Select pool based on user
   pool.select( pool( $user ) );
   break;
}

if( $state == "wait for client data" ) {
   # Write the client handshake we remembered from earlier to the server,
   # and piggyback the request we've just received on the end
   $req = request.get();
   $chs = connection.data.get( "clienths" );
   $passwd1 = connection.data.get( "passwd1" );
   $salt = connection.data.get( "salt" );

   $encpasswd = string_xor( $passwd1, string.hashSHA1( $salt . string.hashSHA1( $passwd1 ) ) );
   $i = string.find( $chs, "\0", 36 );
   $chs = string.replaceBytes( $chs, $encpasswd, $i+2 );

   connection.data.set( "state", "do authentication" );
   request.set( $chs.$req );
   break;
}

# Helper function
sub string_xor( $a, $b ) {
   $r = "";
   while( string.length( $a ) ) {
      $a1 = string.left( $a, 1 ); $a = string.skip( $a, 1 );
      $b1 = string.left( $b, 1 ); $b = string.skip( $b, 1 );
      $r = $r . chr( ord( $a1 ) ^ ord( $b1 ) );
   }
   return $r;
}

The response rule

Configure the following as a response rule, set to run every time, for the MySQL virtual server:

$state = connection.data.get( "state" );
$authok = "\07\0\0\2\0\0\0\02\0\0\0";

if( $state == "wait for serverhs" ) {
   # Read the server handshake, remember the salt
   $shs = response.get( 1 );
   $shs_len = string.bytesToInt( $shs )+4;
   $shs = response.get( $shs_len );

   $salt1 = string.substring( $shs, $shs_len-40, $shs_len-33 );
   $salt2 = string.substring( $shs, $shs_len-13, $shs_len-2 );
   connection.data.set( "salt", $salt1.$salt2 );

   # Write an authentication confirmation now to provoke the client
   # to send us more data (the first query). This will prepare the
   # state machine to write the authentication to the server
   connection.data.set( "state", "wait for client data" );
   response.set( $authok );
   break;
}

if( $state == "do authentication" ) {
   # We're expecting two responses.
   # The first is the authentication confirmation, which we discard.
   $res = response.get();
   $res1 = string.left( $res, 11 );
   $res2 = string.skip( $res, 11 );

   if( $res1 != $authok ) {
      $user = connection.data.get( "user" );
      log.info( "Unexpected authentication failure for " . $user );
      connection.discard();
   }

   connection.data.set( "state", "complete" );
   response.set( $res2 );
   break;
}

Testing your configuration

If you have several MySQL databases to test against, testing this configuration is straightforward. Edit the request rule to add the correct passwords and pools, and use the mysql command-line client to make connections:

$ mysql -h zeus -u username -p
Enter password: *******

Check the 'current connections' list in the Traffic Manager UI to see how it has connected each session to a back-end database server.

If you encounter problems, try the following steps:

- Ensure that trafficscript!variable_pool_use is set to 'Yes' in the Global Settings page of the UI. This setting allows you to use non-literal values in the pool.use() and pool.select() TrafficScript functions.
- Turn on the log!client_connection_failures and log!server_connection_failures settings in the Virtual Server > Connection Management configuration page; these settings configure the traffic manager to write detailed debug messages to the Event Log whenever a connection fails.
Then review your Traffic Manager event log and your mysql logs in the event of an error.

Traffic Manager's access logging can be used to record every connection. You can use the special %{name}d log macro to record information stored using connection.data.set(), such as the username used in each connection.

Conclusion

This article has demonstrated how to build a fairly sophisticated protocol parser, where the Traffic Manager-based proxy performs full authentication and inspection before making a load-balancing decision. The protocol parser then performs the authentication again against the chosen back-end server. Once the client-side and server-side handshakes are complete, Traffic Manager simply forwards data back and forth between the client and the server.

This example addresses the problem of scaling out your MySQL database, giving load balancing and redundancy for database reads (SELECTs). It does not address the problem of scaling out your master 'write' server - you need to address that by investing in a sufficiently powerful server, architecting your database and application to minimise the number and impact of write operations, or by selecting a full clustering solution.

The solution leaves a single point of failure, in the form of the master database. This problem could be effectively dealt with by creating a monitor that tests the master database for correct operation. If it detects a failure, the monitor could promote one of the slave databases to master status and reconfigure the 'SQL Master' pool to direct write (UPDATE) traffic to the new MySQL master server.

Acknowledgements

Ian Redfern's MySQL protocol description was invaluable in developing the proxy code.

Appendix - Password Problems?

This example assumes that you are using MySQL 4.1.x or later (it was tested with MySQL 5 clients and servers), and that your database has passwords in the 'long' 41-byte MySQL 4.1 (and later) format (see http://dev.mysql.com/doc/refman/5.0/en/password-hashing.html).

If you upgrade a pre-4.1 MySQL database to 4.1 or later, your passwords will remain in the pre-4.1 'short' format.

You can verify what password format your MySQL database is using as follows:

mysql> select password from mysql.user where user='username';
+------------------+
| password         |
+------------------+
| 6a4ba5f42d7d4f51 |
+------------------+
1 rows in set (0.00 sec)

mysql> update mysql.user set password=PASSWORD('password') where user='username';
Query OK, 1 rows affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> select password from mysql.user where user='username';
+-------------------------------------------+
| password                                  |
+-------------------------------------------+
| *14521EA7B4C66AE94E6CFF753453F89631AE57EF |
+-------------------------------------------+
1 rows in set (0.00 sec)

If you can't create 'long' passwords, your database may be stuck in 'short' password mode. Run the following command to resize the password table if necessary:

$ mysql_fix_privilege_tables --password=admin password

Check that 'old_passwords' is not set to '1' (see here) in your my.cnf configuration file.

Check that the mysqld process isn't running with the --old-passwords option.

Finally, ensure that the privileges you have configured apply to connections from the Stingray proxy. You may need to GRANT ... TO 'user'@'%', for example.
Top Deployment Guides

The following is a list of tested and validated deployment guides for common enterprise applications. Ask your sales team for the latest information.

Microsoft

- Virtual Traffic Manager and Microsoft Lync 2013
- Virtual Traffic Manager and Microsoft Lync 2010
- Virtual Traffic Manager and Microsoft Skype for Business
- Virtual Traffic Manager and Microsoft Exchange 2010
- Virtual Traffic Manager and Microsoft Exchange 2013
- Virtual Traffic Manager and Microsoft Exchange 2016
- Virtual Traffic Manager and Microsoft SharePoint 2013
- Virtual Traffic Manager and Microsoft SharePoint 2010
- Virtual Traffic Manager and Microsoft Outlook Web Access
- Virtual Traffic Manager and Microsoft Intelligent Application Gateway
- Virtual Traffic Manager and Microsoft IIS

Oracle

- Virtual Traffic Manager and Oracle EBS 12.1
- Virtual Traffic Manager and Oracle Enterprise Manager 12c
- Virtual Traffic Manager and Oracle Application Server 10G
- Virtual Traffic Manager and Oracle WebLogic Applications (e.g. PeopleSoft and Blackboard)
- Virtual Traffic Manager and Glassfish Application Server

VMware

- Virtual Traffic Manager and VMware Horizon View Servers
- Virtual Traffic Manager Plugin for VMware vRealize Orchestrator

Other Applications

- Virtual Traffic Manager and SAP NetWeaver
- Virtual Traffic Manager and Magento
This article illustrates how to write data to a MySQL database from a Java Extension, and how to use a background thread to minimize latency and control the load on the database.

Being Lazy with Java Extensions

With a Java Extension, you can log data in real time to an external database. The example in this article describes how to log the 'referring' source that each visitor comes in from when they enter a website. Logging is done to a MySQL database, and it maintains a count of how many times each key has been logged, so that you can determine which sites are sending you the most traffic.

The article then presents a modification that illustrates how to lazily perform operations such as database writes in the background (i.e. asynchronously) so that the performance the end user observes is not impaired.

Overview - let's count referers!

It's often very revealing to find out which web sites are referring the most traffic to the sites that you are hosting. Tools like Google Analytics and web log analysis applications are one way of doing this, but in this example we'll show an alternative method where we log the frequency of referring sites to a local database for easy access.

When a web browser submits an HTTP request for a resource, it commonly includes a header called "Referer" which identifies the page that linked to that resource. We're not interested in internal referrers, where one page in the site links to another; we're only interested in external referrers. We're going to log these external referrers to a MySQL database, counting the frequency of each so that we can easily determine which occur most commonly.

Create the database

Create a suitable MySQL database, with limited write access for a remote user:

% mysql -h dbhost -u root -p
Enter password: ********
mysql> CREATE DATABASE website;
mysql> CREATE TABLE website.referers ( data VARCHAR(256) PRIMARY KEY, count INTEGER );
mysql> GRANT SELECT,INSERT,UPDATE ON website.referers TO 'web'@'%' IDENTIFIED BY 'W38_U5er';
mysql> GRANT SELECT,INSERT,UPDATE ON website.referers TO 'web'@'localhost' IDENTIFIED BY 'W38_U5er';
mysql> QUIT;

Verify that the table was correctly created and the 'web' user can access it:

% mysql -h dbhost -u web -p
Enter password: W38_U5er
mysql> DESCRIBE website.referers;
+-------+--------------+------+-----+---------+-------+
| Field | Type         | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+-------+
| data  | varchar(256) | NO   | PRI |         |       |
| count | int(11)      | YES  |     | NULL    |       |
+-------+--------------+------+-----+---------+-------+
2 rows in set (0.00 sec)

mysql> SELECT * FROM website.referers;
Empty set (0.00 sec)

The database looks good.

Create the Java Extension

We'll create a Java Extension that writes to the database, adding rows with the provided 'data' value and setting the 'count' value to '1', or incrementing the count if the row already exists.
CountThis.java

Compile up the following 'CountThis' Java Extension:

import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CountThis extends HttpServlet {
   private static final long serialVersionUID = 1L;

   private Connection conn = null;
   private String userName = null;
   private String password = null;
   private String database = null;
   private String table    = null;
   private String dbserver = null;

   public void init( ServletConfig config ) throws ServletException {
      super.init( config );

      userName = config.getInitParameter( "username" );
      password = config.getInitParameter( "password" );
      table    = config.getInitParameter( "table" );
      dbserver = config.getInitParameter( "dbserver" );

      if( userName == null || password == null || table == null || dbserver == null )
         throw new ServletException( "Missing username, password, table or dbserver config value" );

      try {
         Class.forName("com.mysql.jdbc.Driver").newInstance();
      } catch( Exception e ) {
         throw new ServletException( "Could not initialize mysql: "+e.toString() );
      }
   }

   public void doGet( HttpServletRequest req, HttpServletResponse res )
      throws ServletException, IOException
   {
      try {
         String[] args = (String[])req.getAttribute( "args" );
         String data = args[0];
         if( data == null ) return;

         if( conn == null ) {
            conn = DriverManager.getConnection( "jdbc:mysql://"+dbserver+"/", userName, password );
         }

         PreparedStatement s = conn.prepareStatement(
            "INSERT INTO " + table + " ( data, count ) VALUES( ?, 1 ) " +
            "ON DUPLICATE KEY UPDATE count=count+1" );
         s.setString( 1, data );
         s.executeUpdate();
      } catch( Exception e ) {
         conn = null;
         log( "Could not log data to database table '" + table + "': " + e.toString() );
      }
   }

   public void doPost( HttpServletRequest req, HttpServletResponse res )
      throws ServletException, IOException
   {
      doGet( req, res );
   }
}

Upload the resulting CountThis.class file to Traffic Manager's Java Catalog. Click on the class name to configure the initialization properties read by init() above: username, password, table and dbserver.

You must also upload the mysql connector (I used mysql-connector-java-5.1.24-bin.jar) from dev.mysql.com to your Traffic Manager Java Catalog.

Add the TrafficScript rule

You can test the extension very quickly using the following TrafficScript rule to log each request:

java.run( "CountThis", http.getPath() );

Check the Traffic Manager event log for any error messages, and query the table to verify that it is getting populated by the extension:

mysql> SELECT * FROM website.referers ORDER BY count DESC LIMIT 5;
+--------------------------+-------+
| data                     | count |
+--------------------------+-------+
| /media/riverbed.png      |     5 |
| /articles                |     3 |
| /media/puppies.jpg       |     2 |
| /media/ponies.png        |     2 |
| /media/cats_and_mice.png |     2 |
+--------------------------+-------+
5 rows in set (0.00 sec)

mysql> TRUNCATE website.referers;
Query OK, 0 rows affected (0.00 sec)

(Use TRUNCATE to delete all of the rows in a table.)
Log and count Referer headers

We only want to log referrers from remote sites, so use the following TrafficScript rule to call the extension only when it is required:

# This site
$host = http.getHeader( "Host" );

# The referring site
$referer = http.getHeader( "Referer" );

# Only log the Referer if it is an absolute URI and it comes from a different site
if( string.contains( $referer, "://" )
    && !string.contains( $referer, "://".$host."/" ) ) {
   java.run( "CountThis", $referer );
}

Add this rule as a request rule to a virtual server that processes HTTP traffic.

As users access the site, the Referer headers will be pushed into the database. A quick database query will tell you what's there:

% mysql -h dbhost -u web -p
Enter password: W38_U5er
mysql> SELECT * FROM website.referers ORDER BY count DESC LIMIT 4;
+--------------------------------------------+-------+
| referer                                    | count |
+--------------------------------------------+-------+
| http://www.google.com/search?q=stingray    |    92 |
| http://www.riverbed.com/products/stingray  |    45 |
| http://www.vmware.com/appliances           |    26 |
| http://www.riverbed.com/                   |     5 |
+--------------------------------------------+-------+
4 rows in set (0.00 sec)

Lazy writes to the database

This is a useful application of Java Extensions, but it has one big drawback. Every time a visitor arrives from a remote site, his first transaction is stalled while the Java Extension writes to the database. This breaks one of the key rules of website performance architecture - do everything you can asynchronously (i.e. in the background) so that your users are not impeded (see "Lazy Websites run Faster").

Instead, a better solution would be to maintain a separate background thread that writes the data in bulk to the database, while the foreground threads in the Java Extension simply append the Referer data to a list:

CountThisAsync.java

The following Java Extension (CountThisAsync.java) is a modified version of CountThis.java that illustrates this technique:

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.LinkedList;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CountThisAsync extends HttpServlet {
   private static final long serialVersionUID = 1L;
   private Writer writer = null;
   protected static LinkedList theData = new LinkedList();

   protected class Writer extends Thread {
      private Connection conn = null;
      private String table;
      private int syncRate = 20; // seconds between batched database writes

      public void init( String username, String password, String url, String table )
         throws Exception
      {
         Class.forName("com.mysql.jdbc.Driver").newInstance();
         conn = DriverManager.getConnection( url, username, password );
         this.table = table;
         start();
      }

      public void run() {
         boolean running = true;
         while( running ) {
            try { sleep( syncRate*1000 ); }
            catch( InterruptedException e ) { running = false; };

            try {
               PreparedStatement s = conn.prepareStatement(
                  "INSERT INTO " + table + " ( data, count ) VALUES( ?, 1 )" +
                  "ON DUPLICATE KEY UPDATE count=count+1" );
               conn.setAutoCommit( false );

               synchronized( theData ) {
                  while( !theData.isEmpty() ) {
                     String data = theData.removeFirst();
                     s.setString( 1, data );
                     s.addBatch();
                  }
               }
               s.executeBatch();
            } catch ( Exception e ) {
               log( e.toString() );
               running = false;
            }
         }
      }
   }

   public void init( ServletConfig config ) throws ServletException {
      super.init( config );

      String userName = config.getInitParameter( "username" );
      String password = config.getInitParameter( "password" );
      String table    = config.getInitParameter( "table" );
      String dbserver = config.getInitParameter( "dbserver" );

      if( userName == null || password == null || table == null || dbserver == null )
         throw new ServletException( "Missing username, password, table or dbserver config value" );

      try {
         writer = new Writer();
         writer.init( userName, password, "jdbc:mysql://"+dbserver+"/", table );
      } catch( Exception e ) {
         throw new ServletException( e.toString() );
      }
   }

   public void doGet( HttpServletRequest req, HttpServletResponse res )
      throws ServletException, IOException
   {
      String[] args = (String[])req.getAttribute( "args" );
      String data = args[0];

      if( data != null && writer.isAlive() ) {
         synchronized( theData ) { theData.add( data ); }
      }
   }

   public void doPost( HttpServletRequest req, HttpServletResponse res )
      throws ServletException, IOException
   {
      doGet( req, res );
   }

   public void destroy() {
      writer.interrupt();
      try { writer.join( 1000L ); }
      catch( InterruptedException e ) {};
      super.destroy();
   }
}

When the extension is invoked by Traffic Manager, it simply stores the value of the Referer header in a local list and returns immediately. This minimizes any latency that the end user may observe.

The extension creates a separate thread (embodied by the Writer class) that runs in the background. Every syncRate seconds, it removes all of the values from the list and writes them to the database.

Compile the extension:

$ javac -cp servlet.jar:zxtm-servlet.jar CountThisAsync.java
$ jar -cvf CountThisAsync.jar CountThisAsync*.class

... and upload the resulting CountThisAsync.jar file to your Java Catalog. Remember to apply the four configuration parameters to the CountThisAsync.jar Java Extension so that it can access the database, and modify the TrafficScript rule so that it calls the CountThisAsync Java Extension (see the sketch below).

You'll observe that database updates may be delayed by up to 20 seconds (you can tune that delay in the code), but the level of service that end users experience will no longer be affected by the speed of the database.
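For completeness, here is the modified request rule - identical to the earlier rule, with only the extension name changed:

# This site
$host = http.getHeader( "Host" );

# The referring site
$referer = http.getHeader( "Referer" );

# Only log the Referer if it is an absolute URI from a different site
if( string.contains( $referer, "://" )
    && !string.contains( $referer, "://".$host."/" ) ) {
   java.run( "CountThisAsync", $referer );
}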
AutoScaling in SteelApp

AutoScaling enables Traffic Manager to scale a pool of back-end nodes up or down based on application response time. An obvious use case for AutoScaling is a website that has different flows of traffic during the course of a day. When the server load increases and application response times degrade, the number of web or application servers is increased to handle the additional load; conversely, when the load drops, the number of servers is reduced as they are no longer needed. This can be especially useful in environments where customers pay for each of their compute resources (i.e. chargeback). The AutoScaling functionality is enabled through a Cloud API in Traffic Manager, and this previous article covered some of the basics of creating a custom Cloud API.

Docker

Docker is a framework for applications to be built, packaged and deployed. It works closely with Linux Containers and stores everything a container needs in a re-usable form. Docker, along with Linux Containers, has many use cases, but returning to the web application already mentioned, a Docker container would have all of a web application's needs built in: things like Apache libraries and binaries, HTML files, database connectors and more. When the Docker application is deployed, all the information about it, like memory, image file location, networking, etc., is given to it and is reported back in Docker. This lets Traffic Manager hook back into it to find the information that is needed in a pool.

Gluing everything together

A few things need to be enabled and configured to make Traffic Manager and Docker play well together:

- Docker must be listening for REST API calls
- A Docker image that is deployable and networkable
- A Cloud API plugin for Traffic Manager

When the two APIs are put together, they form the framework for building the scalable application pool. The flow starts when Traffic Manager needs a node for a pool: Traffic Manager tells Docker it needs a new container, Docker creates the container, Traffic Manager tells Docker to start the container, Traffic Manager waits, and after the container has started, Traffic Manager finds its IP address from Docker and adds it to the pool. When Traffic Manager decides that it no longer needs a node, it asks Docker about the containers, finds the oldest container, and tells Docker to destroy it. There can be more or fewer steps in the process, like a "holding period" for a VM before it is destroyed, but it is otherwise fairly straightforward.

When it is boiled down, the Traffic Manager Cloud API only has a few tasks to execute: create instances, destroy instances, and get the status of instances. Each of these tasks has a few additional things to do, but that is generally what it needs to do, and the Docker API provides a way to access each of these functions.
These few tasks translate to some specific Docker API calls: all container statuses, individual container details, create a container, start a container, and destroy a container. Examples of these calls are:

GET    http://192.168.122.10:2375/containers/json
GET    http://192.168.122.10:2375/containers/<CONTAINER_ID>/json
POST   http://192.168.122.10:2375/containers/create
POST   http://192.168.122.10:2375/containers/<CONTAINER_ID>/start
DELETE http://192.168.122.10:2375/containers/<CONTAINER_ID>

The Python script that is attached provides some basic AutoScaling functionality and only needs to be modified to set the appropriate Docker host (a variable on line 239) to get started. Image ID and Name Prefix can be specified during the pool setup. Additional parameters can be added using a separate options file that is not covered in this post, but can be understood from the previous article.

Additional Reading

- Docker API Docs
- Product Documentation
- Pulse vADC - Application Delivery Controller
- Feature Brief: Traffic Manager's Autoscaling capability
With more services being delivered through a browser, it's safe to say web applications are here to stay. The rapid growth of web-enabled applications and an increasing number of client devices mean that organizations are dealing with more document transfer methods than ever before. Providing easy access to these applications (web mail, intranet portals, document storage, etc.) can expose vulnerable points in the network.

When it comes to security and protection, application owners typically cover the common threats and vulnerabilities. What is often overlooked happens to be one of the first things we learned about the internet: virus protection. Some application owners consider the response "We have virus scanners running on the servers" sufficient. These same owners implement security plans that involve extending protection as far out as possible, yet surprisingly allow a virus to travel several layers into the architecture.

Pulse vADC can extend protection for your applications with unmatched software flexibility and scale. Utilize existing investments by installing Pulse vADC on your infrastructure (Linux, Solaris, VMware, Hyper-V, etc.) and integrating with existing antivirus scanners. Deploy Pulse vADC (available with many providers: Amazon, Azure, CoSentry, Datapipe, Firehost, GoGrid, Joyent, Layered Tech, Liquidweb, Logicworks, Rackspace, Sungard, Xerox, and many others) and externally proxy your applications to remove threats before they reach your infrastructure. Additionally, when serving as a forward proxy for clients, Pulse vADC can be used to mitigate virus propagation by scanning outbound content.

The Pulse Web Application Firewall ICAP Client Handler provides the ability to integrate with an ICAP server. ICAP (Internet Content Adaptation Protocol) is a protocol aimed at providing simple object-based content vectoring for HTTP services. The Web Application Firewall acts as an ICAP client and passes requests to a specified ICAP server. This enables you to integrate with third-party products based on the ICAP protocol. In particular, you can use the ICAP Client Handler as a virus scanner interface for scanning uploads to your web application.

Example Deployment

This deployment uses version 9.7 of the Pulse Traffic Manager with the open source applications ClamAV and c-icap installed locally. If utilizing a cluster of Traffic Managers, this deployment should be performed on all nodes of the cluster. Additionally, Traffic Manager could be utilized as an ADC to extend availability and performance across multiple external ICAP application servers. I would also like to credit Thomas Masso, Jim Young, and Brian Gautreau - thank you for your assistance!

"ClamAV is an open source (GPL) antivirus engine designed for detecting Trojans, viruses, malware and other malicious threats." - http://www.clamav.net/

"c-icap is an implementation of an ICAP server. It can be used with HTTP proxies that support the ICAP protocol to implement content adaptation and filtering services." - The c-icap project

Installation of ClamAV, c-icap, and libc-icap-mod-clamav

For this example, public repositories are used to install the packages on version 9.7 of the Traffic Manager virtual appliance with the default configuration. To install in a different manner or operating system, consult the ClamAV and c-icap documentation.
Run the following command to back up the sources.list file:

cp /etc/apt/sources.list /etc/apt/sources.list.rvbdbackup

Run the following commands to update the sources.list file. *Tested with Traffic Manager virtual appliance version 9.7. For other Ubuntu releases, replace 'precise' with the installed release codename; run "lsb_release -sc" to find out your release.

cat <<EOF >> /etc/apt/sources.list
deb http://ch.archive.ubuntu.com/ubuntu/ precise main restricted
deb-src http://ch.archive.ubuntu.com/ubuntu/ precise main restricted
deb http://us.archive.ubuntu.com/ubuntu/ precise universe
deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe
deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
EOF

Run the following command to retrieve the updated package lists:

apt-get update

Run the following command to install ClamAV, c-icap, and libc-icap-mod-clamav:

apt-get install clamav c-icap libc-icap-mod-clamav

Run the following command to restore your sources.list:

cp /etc/apt/sources.list.rvbdbackup /etc/apt/sources.list

Configure the c-icap ClamAV service

Run the following commands to add these lines to /etc/c-icap/c-icap.conf:

cat <<EOF >> /etc/c-icap/c-icap.conf
Service clamav srv_clamav.so
ServiceAlias avscan srv_clamav?allow204=on&sizelimit=off&mode=simple
srv_clamav.ScanFileTypes DATA EXECUTABLE ARCHIVE GIF JPEG MSOFFICE
srv_clamav.MaxObjectSize 100M
EOF

*Consult the ClamAV and c-icap documentation and customize the configuration and settings for ClamAV and c-icap (i.e. definition updates, ScanFileTypes, restricting c-icap access, etc.) for your deployment.

Just for fun, run the following command to manually update the ClamAV database:

/usr/bin/freshclam

Configure the ICAP Server to Start

This process can be completed a few different ways; for this example we are going to use the Event Alerting functionality of Traffic Manager to start the c-icap server when the Web Application Firewall is started.

Save the following bash script (for this example, start_icap.sh) on your computer:

#!/bin/bash
/usr/bin/c-icap
#END

Upload the script via the Traffic Manager UI under Catalogs > Extra Files > Action Programs. (See Figure 1.)

Create a new event type (for this example named "Firewall Started") under System > Alerting > Manage Event Types. Select "appfirewallcontrolstarted: Application firewall started" and click Update to save. (See Figure 2.)

Create a new action (for this example named "Start ICAP") under System > Alerting > Manage Actions. Select the "Program" radio button and click "Add Action" to save. (See Figure 3.)

Configure the "Start ICAP" Action Program to use the "start_icap.sh" script, and for this example adjust the timeout setting to 300. Click Update to save. (See Figure 4.)

Configure the Alert Mapping under System > Alerting to use the event type and action previously created. Click Update to save your changes. (See Figure 5.)

Restart the Application Firewall, or reboot, to automatically start the c-icap server. Alternatively, you can run the /usr/bin/c-icap command from the console, or select "Update and Test" under the "Start ICAP" alert configuration page of the UI, to manually start c-icap.

Configure the Web Application Firewall

Within the Web Application Firewall UI, add and configure the ICAPClientHandler using the following attributes and values.
icap_server_location - 127.0.0.1
icap_server_resource - /avscan

Testing Notes

Check the WAF application logs. Use Full logging for the Application configuration and enable_logging for the ICAPClientHandler. As with any system, use full logging with caution - the logs can fill up fast!
Check the c-icap logs (cat /var/log/c-icap/access.log and /var/log/c-icap/server.log). Note: changing the /etc/c-icap/c-icap.conf "DebugLevel" value to 9 is useful for testing and records much more detail to /var/log/c-icap/server.log. *You may want to change this back to 1 when you are done testing.
The Action Settings page in the Traffic Manager UI (for this example, Alerting > Actions > Start ICAP) also provides an "Update and Test" button that lets you trigger the action and start the c-icap server.
Enable verbose logging for the "Start ICAP" action in the Traffic Manager for more information from the event mechanism. *You may want to disable this setting when you are done testing.
A command-line smoke test of the ICAP service itself is sketched after the reading list below.

Additional Information

Pulse Secure Virtual Traffic Manager
Pulse Secure Virtual Web Application Firewall
Product Documentation
RFC 3507 - Internet Content Adaptation Protocol (ICAP)
The c-icap project
Clam AntiVirus
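Before testing through the WAF, it is worth verifying the scanning chain end to end from a shell. A short sketch, assuming the default ICAP port of 1344 and the avscan service alias configured earlier; the EICAR string below is the industry-standard, harmless antivirus test file:

# Confirm the c-icap daemon is running and listening on the ICAP port
ps aux | grep '[c]-icap'
netstat -lnt | grep 1344

# Create the standard EICAR antivirus test file (harmless by design)
printf '%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > /tmp/eicar.com

# Push it through the local ICAP service with the c-icap-client utility
# that ships with c-icap; the response should flag the file as infected
c-icap-client -i 127.0.0.1 -p 1344 -s avscan -f /tmp/eicar.com -v

Uploading the same file through a form on your web application should then be blocked by the Web Application Firewall.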
Fixed-size licensing works for fixed-size applications. If your application rarely changes and sees a steady workload, then you can optimize the costs of the platform to match the resources you need.
Meta-tags and the meta-description are used by search engines and other tools to infer more information about a website, and their judicious and responsible use can have a positive effect on a page's ranking in search engine results. Suffice to say, a page without any 'meta' information is likely to score lower than the same page with some appropriate information.

This article (originally published December 2006) describes how you can automatically infer and insert meta tags into a web page, on the fly.

Rewriting a response

First, decide what to use to generate a list of related keywords.

It would have been nice to slurp up all the text on the page and calculate the most commonly occurring unusual words. Surely that would have been the über-geek thing to do? Well, not really: unless I was careful I could end up slowing down each response, and there would be the danger of producing a strange list of keywords that didn't accurately represent what the page was trying to say (and could also be wildly "off-message").

So I instead turned to three sources of on-message page summaries: the title tag, the contents of the big h1 tag, and the elements of the page path.

The script

First I had to get the response body:

$body = http.getResponseBody();

This will be grepped for keywords, mangled to add the meta-tags and then returned by setting the response body:

http.setResponseBody( $body );

Next I had to make a list of keywords. As I mentioned before, my first plan was to look at the path: by converting the slashes into separators, I should be able to generate some correct keywords, something like this:

$path = http.getPath();
$path = string.regexsub( $path, "/+", "; ", "g" );

After adding a few lines to first tidy up the path - removing slashes at the beginning and end, and replacing underscores with spaces - it worked pretty well.

And, for solely aesthetic reasons, I added:

$path = string.uppercase( $path );

Then, I took a look at the title tag. Something like this did the trick:

if( string.regexmatch( $body, "<title>\\s*(.*?)\\s*</title>", "i" ) ) {
   $title_tag_text = $1;
}

(The "i" flag here makes the search case-insensitive, just in case.)

This, indeed, worked fine. With a little cleaning up, I was able to generate a meta-description similarly: I just stuck the pieces together after adding some punctuation (solely to make it nicer when read: search engines often return the meta-description in the search result).

After playing with this for a while I wasn't completely satisfied with the results: the meta-keywords looked great, but the meta-description was a little lacking in the real-English department.

So, instead I turned my attention to the h1 tag on each page: it should already be a mini-description of each page. I grepped it in a similar fashion to the title tag, and the generated description looked vastly improved.

Lastly, I added some code to check if a page already has a meta-description or meta-keywords, to prevent the automatic tags being inserted in that case. This allows us to gradually add meta-tags by hand to our pages - and it means we always have a backup should we forget to add metas to a new page in the future.

The finished script looks like this:

# Only process HTML responses
$ct = http.getResponseHeader( "Content-Type" );
if( !string.startsWith( $ct, "text/html" ) ) break;

$body = http.getResponseBody();

$path = http.getPath();
# remove the first and last slashes; convert remaining slashes
$path = string.regexsub( $path, "^/?(.*?)/?$", "$1" );
$path = string.replaceAll( $path, "_", " " );
$path = string.replaceAll( $path, "/", ", " );

if( string.regexmatch( $body, "<h1.*?>\\s*(.*?)\\s*</h1>", "i" ) ) {
   $h1 = $1;
   $h1 = string.regexsub( $h1, "<.*?>", "", "g" );
}

if( string.regexmatch( $body, "<title>\\s*(.*?)\\s*</title>", "i" ) ) {
   $title = $1;
   $title = string.regexsub( $title, "<.*?>", "", "g" );
}

if( $h1 ) {
   $description = "Riverbed - " . $h1 . ": " . $title;
   $keywords = "Riverbed, " . $path . ", " . $h1 . ", " . $title;
} else {
   $description = "Riverbed - " . $path . ": " . $title;
   $keywords = "Riverbed, " . $path . ", " . $title;
}

# only rewrite the meta-keywords if we don't already have some
if( !string.regexmatch( $body, "<meta\\s+name='keywords'", "i" ) ) {
   $meta_keywords = " <meta name='keywords' content='" . $keywords . "'/>\n";
}

# only rewrite the meta-description if we don't already have one
if( !string.regexmatch( $body, "<meta\\s+name='description'", "i" ) ) {
   $meta_description = " <meta name='description' content='" . $description . "'/>";
}

# find the title and stick the new meta tags in afterwards
if( $meta_keywords || $meta_description ) {
   $body = string.regexsub( $body, "(<title>.*</title>)", "$1\n" . $meta_keywords . $meta_description );
   http.setResponseBody( $body );
}

It should be fairly easy to adapt this to another site, assuming the pages are built consistently.

This article was originally written by Sam Phillips in December 2006, and was modified and tested in February 2013.
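Once the rule is attached to a virtual server, a quick way to check it from a shell is to fetch a page and look for the injected tags. A minimal sketch; the hostname and path are placeholders for your own virtual server:

curl -s http://<virtual-server-hostname>/some/page.html | grep -i "<meta name='"

Pages that already carry hand-written meta tags should come back unchanged; everything else should show the generated keywords and description.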
# Subroutines that allow debug logging to be easily toggled on or off in a TrafficScript rule.
# The logging subroutines only fire if connection.data.get("debug") returns "1".
# I usually let the user toggle debugging by putting something like this
# at the top of the rule in a section marked "User editable section":

#######################################
# User editable section:
$debug = 1; # set to 0 to disable
#######################################

# To arm the debug logging, set the connection.data value for "debug" to "1".
# I usually do this in a section of the rule marked "Don't edit past here":

#######################################
# Don't edit past this line:
#######################################
if ( $debug == 1 ) {
   connection.data.set( "debug", "1" );
}

# Subroutine to debug log at the "info" level
sub debuglog.info( $message ) {
   if ( connection.data.get( "debug" ) == "1" ) {
      log.info( "debuglog.info: __" . $message . "__ " );
   }
}

# Subroutine to debug log at the "warn" level
sub debuglog.warn( $message ) {
   if ( connection.data.get( "debug" ) == "1" ) {
      log.warn( "debuglog.warn: __" . $message . "__ " );
   }
}

# Subroutine to debug log at the "error" level
sub debuglog.error( $message ) {
   if ( connection.data.get( "debug" ) == "1" ) {
      log.error( "debuglog.error: __" . $message . "__ " );
   }
}

# When you want to log a debug line, all you need to do is call:
#    debuglog.info("This is my message");
# If debug is set to 1 in the rule, the line is logged; otherwise it is ignored.

debuglog.info( "If debugging is turned on, log this line of text to the event log with the severity level of: Info" );
debuglog.warn( "If debugging is turned on, log this line of text to the event log with the severity level of: Warning" );
debuglog.error( "If debugging is turned on, log this line of text to the event log with the severity level of: Error" );
When deploying applications using content management systems, application owners are typically limited to the functionality of the CMS application in use, or to the third-party add-ons available. Unfortunately, these components alone may not deliver the application requirements, leaving the application owner to dedicate resources to developing a solution that usually ends up taking longer than it should, or not working at all. This article addresses some hypothetical production use cases where the application does not provide the administrators an easy method to add a timer to the website.

This solution builds upon the previous articles (Embedded Google Maps - Augmenting Web Applications with Traffic Manager and Embedded Twitter Timeline - Augmenting Web Applications with Traffic Manager). Building on a solution from Owen Garrett (see Instrument web content with Traffic Manager), this example uses a simple CSS overlay to display the added information.

Basic Rule

As a starting point, here is the minimum required, ready to customize for your own use. For example, most people will want to use "text-align:center", and values may need to be added to the $style or $html for your application; see the examples below.

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$timer = ( "366" - ( sys.gmtime.format( "%j" ) ) );

$html = '<div class="Countdown">' . $timer . ' DAYS UNTIL THE END OF THE YEAR</div>';

$style = '<style type="text/css">.Countdown{z-index:100;background:white}</style>';

$body = http.getResponseBody();
$body = string.regexsub( $body, "(<body[^>]*>)", $style . "$1\n" . $html . "\n", "i" );
http.setResponseBody( $body );

Example 1 - Simple Day Countdown Timer

This example covers a common use case popular with retailers: a countdown for the holiday shopping season. It also adds font formatting and additional text with a link.

# Only process text/html content
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

# Countdown target
# Julian day of the year, "001" to "366"
$targetday = "359";
$bgcolor = "#D71920";
$labelday = "DAYS";
$title = "UNTIL CHRISTMAS";
$titlecolor = "white";
$link = "/dept.jump?id=dept20020200034";
$linkcolor = "yellow";
$linktext = "VISIT YOUR ONE-STOP GIFT SHOP";

# Calculate days between today and targetday
$timer = ( $targetday - ( sys.gmtime.format( "%j" ) ) );

# Remove the S from "DAYS" if only 1 day is left
if( $timer == 1 ){
   $labelday = string.drop( $labelday, 1 );
}

$html = '
<div class="TrafficScriptCountdown">
   <h3>
      <font color="'.$titlecolor.'">
         '.$timer.' '.$labelday.' '.$title.'
      </font>
      <a href="'.$link.'">
         <font color="'.$linkcolor.'">
            '.$linktext.'
         </font>
      </a>
   </h3>
</div>
';

$style = '
<style type="text/css">
.TrafficScriptCountdown {
   position:relative;
   top:0;
   width:100%;
   text-align:center;
   background:'.$bgcolor.';
   opacity:100%;
   z-index:1000;
   padding:0
}
</style>
';

$body = http.getResponseBody();

$body = string.regexsub( $body, "(<body[^>]*>)", $style . "$1\n" . $html . "\n", "i" );

http.setResponseBody( $body );
Example 1 in Action

Example 2 - Ticking Countdown Timer with Second Detail

This example covers how to dynamically display the time remaining, down to the second. As opposed to sending data to the client every second, it uses a client-side JavaScript timer found at HTML Countdown to Date v3 (Javascript Timer) | ricocheting.com

Example 2 Response Rule

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

# Countdown target
$year = "2014";
$month = "11";
$day = "3";
$hr = "8";
$min = "0";
$sec = "0";
# number of hours offset from UTC
$utc = "-8";

$labeldays = "DAYS";
$labelhrs = "HRS";
$labelmins = "MINS";
$labelsecs = "SECS";
$separator = ", ";

$timer = '<script type="text/javascript">
var CDown=function(){this.state=0,this.counts=[],this.interval=null};CDown.prototype=\
{init:function(){this.state=1;var t=this;this.interval=window.setInterval(function()\
{t.tick()},1e3)},add:function(t,s){tzOffset='.$utc.',dx=t.toGMTString(),dx=dx.substr\
(0,dx.length-3),tzCurrent=t.getTimezoneOffset()/60*-2,t.setTime(Date.parse(dx)),\
t.setHours(t.getHours()+tzCurrent-tzOffset),this.counts.push({d:t,id:s}),this.tick(),\
0==this.state&&this.init()},expire:function(t){for(var s in t)this.display\
(this.counts[t[s]],"Now!"),this.counts.splice(t[s],1)},format:function(t){var s="";\
return 0!=t.d&&(s+=t.d+" "+(1==t.d?"'.string.drop( $labeldays, 1 ).'":"'.$labeldays.'\
")+"'.$separator.'"),0!=t.h&&(s+=t.h+" "+(1==t.h?"'.string.drop( $labelhrs, 1 ).'":\
"'.$labelhrs.'")+"'.$separator.'"),s+=t.m+" "+(1==t.m?"\
'.string.drop( $labelmins, 1 ).'":"'.$labelmins.'")+"'.$separator.'",s+=t.s+" "\
+(1==t.s?"'.string.drop( $labelsecs, 1 ).'":"'.$labelsecs.'")+"'.$separator.'"\
,s.substr(0,s.length-2)},math:function(t){var i=w=d=h=m=s=ms=0;return ms=(""+\
(t%1e3+1e3)).substr(1,3),t=Math.floor(t/1e3),i=Math.floor(t/31536e3),w=Math.floor\
(t/604800),d=Math.floor(t/86400),t%=86400,h=Math.floor(t/3600),t%=3600,m=Math.floor\
(t/60),t%=60,s=Math.floor(t),{y:i,w:w,d:d,h:h,m:m,s:s,ms:ms}},tick:function()\
{var t=(new Date).getTime(),s=[],i=0,n=0;if(this.counts)for(var e=0,\
o=this.counts.length;o>e;++e)i=this.counts[e],n=i.d.getTime()-t,0>n?s.push(e):\
this.display(i,this.format(this.math(n)));s.length>0&&this.expire(s),\
0==this.counts.length&&window.clearTimeout(this.interval)},display:function(t,s)\
{document.getElementById(t.id).innerHTML=s}},window.onload=function()\
{var t=new CDown;t.add(new Date\
('.$year.','.--$month.','.$day.','.$hr.','.$min.','.$sec.'),"countbox1")};
</script><span id="countbox1"></span>';

$html = '<div class="TrafficScriptCountdown"><center><h3><font color="white">\
COUNTDOWN TO RIVERBED FORCE '.$timer.' </font>\
<a href="https://secure3.aetherquest.com/riverbedforce2014/"><font color="yellow">\
REGISTER NOW</font></a></h3></center></div>';

$style = '<style type="text/css">.TrafficScriptCountdown{position:relative;top:0;\
width:100%;background:#E9681D;opacity:100%;z-index:1000;padding:0}</style>';

http.setResponseBody( string.regexsub( http.getResponseBody(), "(<body[^>]*>)", $style . "$1\n" . $html . "\n", "i" ) );

Example 2 in Action

Notes

Example 1 results in a faster page load time than Example 2.
Example 1 can be easily extended to enable TrafficScript to set $timer with detail down to the second, as in Example 2.
Be aware of any trailing space(s) after the " \ " line breaks when copy and paste is used to import the rule; incorrect spacing can stop the JS and the HTML from functioning (a one-line check for this is sketched below).
You may have to adjust the elements for your web application (i.e. z-index, the regex sub match, div class, etc.).

This is a great example of using Traffic Manager to deliver in minutes a solution that could otherwise take hours.
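One way to catch the trailing-space problem before pasting the rule in is to scan the saved rule text; the filename here is a placeholder for wherever you saved it:

# Print any line where a backslash continuation is followed by stray spaces
grep -nE '\\ +$' countdown-rule.txt

Any line it reports has whitespace after the "\", which will break the string continuation when the rule runs.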
by Aidan Clarke

Traditional IT applications were simple: they lived in one place, in your data center. If you wanted more capacity, you added more servers, storage and networks. If you wanted to make the application more reliable, you doubled it to make it highly available: one system ran "active" while the other waited on "standby". This concept of "redundancy" was simple, so long as you could buy two of everything and were happy that only half of the infrastructure was active at any one time - not an efficient solution.

But modern applications need a modern approach to performance, security and reliability, which is why Pulse vADC approaches things differently: a software solution for a software world, where distributed applications need an "always-active" architecture.

We often hear from IT professionals that they used to avoid Active/Active architectures, for fear that performance would be compromised under failure. Yet our customers routinely deploy Pulse vADC in Active/Active, or even Active/Active/Active/Active, configurations: they can choose the right balance between node and cluster size to optimize availability while reducing the size of the fault domain.

Similarly, high-availability architectures used to require that HA peers were installed Layer 2 adjacent (i.e. on the same network). These architectures simply don't work in today's clouds; for example, AWS availability zones, by their very design, are on different Layer 3 networks. In order to run a Layer 2 HA pair in Amazon AWS, you need to put your whole solution in a single AWS availability zone - a practice that Amazon architects strongly discourage.

With Pulse vADC, if the nodes can connect to each other via a network, then you can cluster your application. This means you can choose an availability architecture to suit your application - whether it lives in your data center, in a cloud, or both.

Get started with Pulse vADC today: our Community Edition is free to download and try out in your test and development environment.

This article is part of a series, beginning with: Staying Afloat in the Application Economy
More to Explore:
Prev: One ADC Platform, Any Environment
Next: Intelligent N+M Clustering
Introduction

Many DDoS attacks work by exhausting the resources available to a website for handling new connections. In most cases, the tool used to generate this traffic has the ability to make HTTP requests and follow HTTP redirect messages, but lacks the sophistication to store cookies. As such, one of the most effective ways of combatting DDoS attacks is to drop connections from clients that don't store cookies during a redirect.

Before you Proceed

It's important to point out that using the solution herein may prevent at least the following legitimate uses of your website (and possibly others):

Visits by user-agents that do not support cookies, or where cookies are disabled for any reason (such as privacy); some people may think that your website has gone down!
Visits by internet search engine web-crawlers; this will prevent new content on your website from appearing in search results!

If either of the above items concerns you, I would suggest seeking advice (either from the community, or through your technical support channels).

Solution Planning

Implementing a solution in pure TrafficScript will prevent traffic from reaching the web servers, but attackers are still free to consume connection-handling resources on the traffic manager. To make the solution more robust, we can use iptables to block traffic a bit earlier in the network stack. This presents us with a couple of challenges:

TrafficScript cannot execute shell commands, so how do we add rules to iptables?
Assuming we don't want to permanently block all IP addresses that are involved in a DDoS attack, how can we expire the rules?

Even though TrafficScript cannot directly run shell commands, the Event Handling system can. We can use the event.emit() TrafficScript function to send jobs to a custom event handler shell script that adds an iptables rule to block the offending IP address. To expire each rule, we can use the at command to schedule a job that removes it. This hands the scheduling and running of that job over to the control of the OS (which is something it was designed to do).

The overall plan looks like this:

Write a TrafficScript rule that emits a custom event when it detects a client that doesn't support cookies and redirects
Write a shell script that takes as its input: an --eventtype argument (the event handler includes this automatically); a --duration argument (to define the length of time that an IP address stays blocked for); and a string of information that includes the IP address that is to be blocked
Create an event handler for the events that our TrafficScript is going to emit

TrafficScript

Code

$cookie = http.getCookie( "DDoS-Test" );

if( !$cookie ) {

   # Either it's the visitor's first time to the site, or they don't support cookies
   $test = http.getFormParam( "cookie-test" );

   if( $test != "1" ) {
      # It's their first time.  Set the cookie, redirect to the same page
      # and add a query parameter so we know they have been redirected.
      # Note: if they supplied a query string or used a POST,
      # we'll respond with a bare redirect
      $path = http.getPath();

      http.sendResponse( "302 Found", "text/plain", "",
         "Location: " . string.escape( $path ) .
         "?cookie-test=1\r\nSet-Cookie: DDoS-Test=1" );

   } else {

      # We've redirected them and attempted to set the cookie, but they have not
      # accepted.  Either they don't support cookies, or (more likely) they are a bot.

      # Emit the custom event that will trigger the firewall script.
      event.emit( "firewall", request.getRemoteIP() );

      # Pause the connection for 100 ms to give the firewall time to catch up.
      # Note: This may need tuning.
      connection.sleep( 100 );

      # Close the connection.
      connection.close( "HTTP/1.1 200 OK\n" );
   }
}

Installation

This code will need to be applied to the virtual server as a request rule. To do that, take the following steps:

In the traffic manager GUI, navigate to Catalogs → Rule
Enter ts-firewaller in the Name field
Click the Use TrafficScript radio button
Click the Create Rule button
Paste the code from the attached ts-firewaller.rts file
Click the Save button
Navigate to the Virtual Server that you want to protect ( Services → <Service Name> )
Click the Rules link
In the Request Rules section, select ts-firewaller from the drop-down box
Click the Add Rule button

Your virtual server should now be configured to execute the rule.

Shell Script

Code

#!/bin/bash

# Use getopt to collect parameters.
params=`getopt -o e:,d: -l eventtype:,duration: -- "$@"`

# Evaluate the set of parameters.
eval set -- "$params"

while true; do
   case "$1" in
   --duration ) DURATION="$2"; shift 2 ;;
   --eventtype ) EVENTTYPE="$2"; shift 2 ;;
   -- ) shift; break ;;
   * ) break ;;
   esac
done

# Awk the IP address out of ARGV
IP=$(echo "${BASH_ARGV}" | awk ' { print ( $(NF) ) }')

# Add a new rule to the INPUT chain.
iptables -A INPUT -s ${IP} -j DROP &&

# Queue a new job to delete the rule after DURATION minutes.
# Prevents warning about executing the command using /bin/sh from
# going in the traffic manager event log.
echo "iptables -D INPUT -s ${IP} -j DROP" | at -M now + ${DURATION} minutes &> /dev/null

Installation

To use this script as an action program, you'll need to upload it via the GUI. To do that, take the following steps:

Open a new file with the editor of your choice (depends on what OS you're using)
Copy and paste the script code into the editor
Save the file as ts-firewaller.sh
In the traffic manager UI, navigate to Catalogs → Extra Files → Action Programs
Click the Choose File button
Select the ts-firewaller.sh file that you just created
Click the Upload Program button

Event Handler

Now that we have a rule that emits a custom event, and a script that we can use as an action program, we can configure the event handler that ties the two together. First, we need to create a new event type:

In the traffic manager's UI, navigate to System → Alerting
Click the Manage Event Types button
Enter Firewall in the Name field
Click the Add Event Type button
Click the + next to the Custom Events item in the event tree
Click the Some custom events...
radio button
Enter firewall in the empty field
Click the Update button

Now that we have an event type, we need to create a new action:

In the traffic manager UI, navigate to System → Alerting
Click on the Manage Actions button
In the Create New Action section, enter firewall in the Name field
Click the Program radio button
Click the Add Action button
In the Program Arguments section, enter duration in the Name field
Enter "Determines the length of time in minutes that an IP will be blocked for" in the Description field
Click the Update button
Enter 10 in the newly-appeared arg!duration field
Click the Update button

Now that we have an action configured, the only thing left to do is connect the custom event to the new action:

In the traffic manager UI, navigate to System → Alerting
In the Event Type column, select firewall from the drop-down box
In the Actions column, select firewall from the drop-down box
Click the Update button

That concludes the installation steps; this solution should now be live!

Testing

Testing the functionality is pretty simple for this solution. Basically, you can monitor the state of iptables while you run specific commands from a command line. To do this, ssh into your traffic manager and execute iptables -L as root. You should check this after each of the upcoming tests.

Since I'm using a Linux machine for testing, I'm going to use the curl command to send crafted requests to my traffic manager. The three scenarios that I want to test are:

Initial visit: the user-agent has no query string and no cookie
Successful second visit: the user-agent has a query string and has provided the correct cookie
Failed second visit: the user-agent has a query string (indicating that it was redirected), but hasn't provided a cookie

The respective curl commands that need to be run are:

curl -v http://<traffic-manager-ip>/
curl -v http://<traffic-manager-ip>/?cookie-test=1 -b "DDoS-Test=1"
curl -v http://<traffic-manager-ip>/?cookie-test=1

Note: if you run these commands from your workstation, the third command will leave you unable to connect to the traffic manager in any way for a period of 10 minutes!
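To watch the solution work in real time, it can help to keep an eye on both the firewall rules and the scheduled clean-up jobs while running the tests above. A small sketch, run as root on the traffic manager:

# Refresh the INPUT chain every couple of seconds while you run the curl tests
watch -n 2 'iptables -nL INPUT'

# List the queued at jobs; each blocked IP should have a matching removal job
atq

After the configured duration (10 minutes in this example), the at job fires and the DROP rule for your workstation's IP disappears again.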
We're really excited to present a preview of our next big development in content-aware application delivery. Our Web Accelerator technology prepares your content for optimal delivery over high-latency networks; our soon-to-be-announced Latitude-aware Content Optimization will further optimize it for correct rendering on the client device, no matter where the observer is relative to the content origin.

Roadmap disclaimer: This forward-looking statement is for information purposes only and is not a commitment, promise or legal obligation to deliver any new products, features or functionality. Any announcements are conditional on successful in-the-field tests of this technology.

"Here comes the science bit"

Individual binary digits have rotational symmetry and can survive transmission across equatorial boundaries intact. Layer 1 encoding schemes such as Differential Manchester Encoding are similarly immune to polarity changes and protect on-the-wire data against these effects as far as layer 4, ensuring TCP connections operate correctly. However, layer 7 content suffers from an inversion transformation when generated in one hemisphere and observed in the other.

Our solution has been tested against a number of websites, including our own (https://splash.riverbed.com - see attachment below), with a good degree of success. In its current beta state, you can try it against other sites (YMMV).

Getting started

If you haven't got a Traffic Manager handy, download and install the Community Edition.

Proxying a website to test the optimization

The following instructions explain how to proxy splash.riverbed.com. For a more general overview, check out Getting Started - Load-balancing to a website using Traffic Manager.

Create a pool named splash pool, containing the node splash.riverbed.com:443. Ensure that SSL decryption is turned on.

Create a virtual server named splash server, listening on an available port (e.g. 8088), HTTP protocol (no SSL). Configure the virtual server to use the pool splash pool, and make sure that Connection Management -> Location Header Settings -> location!rewrite is set to 'Rewrite the hostname…'.

Verify that you can access and browse Splash through the IP of your Traffic Manager: http://stingray-ip:8088/

Applying the optimization

Now we'll apply our content optimization, implemented by way of a response rule:

$ct = http.getResponseHeader( "Content-Type" );

# We only need to embed client-side TrafficScript in HTML content
if( !string.startsWith( $ct, "text/html" ) ) break;

# Will this data cross the equatorial boundary?
# Edit this test if necessary for testing purposes
$serverlat = geo.getLatitude( request.getLocalIP() );
$clientlat = geo.getLatitude( request.getRemoteIP() );
if( $serverlat * $clientlat > 0 ) break;

$body = http.getResponseBody();

# Build client-side TrafficScript code
$tsinterpreter="PHNjcmlwdCBzcmM9Imh0dHA6Ly9hamF4Lmdvb2dsZWFwaXMuY29tL2FqYXgvbGlicy9qcXVlcnkvMS45LjEvanF1ZXJ5Lm1pbi5qcyI+PC9zY3JpcHQ+DQo8c3R5bGUgdHlwZT0idGV4dC9jc3MiPg0KLmxvb2ZsaXJwYSB7IHRyYW5zZm9ybTpyb3RhdGUoLTE4MGRlZyk7LXdlYmtpdC10cmFuc2Zvcm06cm90YXRlKC0xODBkZWcpOy1tb3otdHJhbnNmb3JtOnJvdGF0ZSgtMTgwZGVnKTstby10cmFuc2Zvcm06cm90YXRlKC0xODBkZWcpOy1tcy10cmFuc2Zvcm06cm90YXRlKC0xODBkZWcpIH0NCjwvc3R5bGU+DQo8c2NyaXB0IHR5cGU9InRleHQvamF2YXNjcmlwdCI+DQpzZWxlY3Rvcj0iZGl2LHAsdWwsbGksdGQsbmF2LHNlY3Rpb24saGVhZGVyLHRhYmxlLHRib2R5LHRyLHRkLGgxLGgyLGgzLGg0LGg1LGg2IjsNCg0KZnVuY3Rpb24gVHJhZmZpY1NjcmlwdENhbGxTdWIoIGkgKSB7DQogICBpZiggaSUzPT0wICkgdDAoICQoImJvZHkiKSApDQogICBlbHNlIGlmKCBpJTM9PTEgKSB0MSggJCgiYm9keSIpICkNCiAgIGVsc2UgdDIoICQoImJvZHkiKSApOw0KfQ==";
$sub0="ZnVuY3Rpb24gdDAoIGUgKSB7DQogICBjID0gZS5jaGlsZHJlbihzZWxlY3Rvcik7DQogICBpZiggYy5sZW5ndGggKSB7DQogICAgICB4ID0gZmFsc2U7IGMuZWFjaCggZnVuY3Rpb24oKSB7IHggfD0gdDAoICQodGhpcykgKSB9ICk7DQogICAgICBpZiggIXggKSBlLmFkZENsYXNzKCAibG9vZmxpcnBhIiApOw0KICAgICAgcmV0dXJuIHRydWU7DQogICB9DQogICByZXR1cm4gZmFsc2U7DQp9DQo=";
$sub1="ZnVuY3Rpb24gdDEoIGUgKSB7DQogICBjID0gZS5jaGlsZHJlbihzZWxlY3Rvcik7DQogICBpZiggYy5sZW5ndGggKSBjLmVhY2goIGZ1bmN0aW9uKCkgeyB0MSggJCh0aGlzKSApIH0gKTsNCiAgIGVsc2UgZS5hZGRDbGFzcyggImxvb2ZsaXJwYSIgKTsNCn0NCg==";
$sub2="ZnVuY3Rpb24gdDIoIGUgKSB7DQogICAkKCJwLGxpLGgxLGgyLGgzLGg0LGg1LGg2LGltZyx0ZCxkaXY+YSIpLmFkZENsYXNzKCAibG9vZmxpcnBhIiApOw0KICAgJCgiZGl2Om5vdCg6aGFzKGRpdixsaSxoMSxoMixoMyxoNCxoNSxoNixpbWcsdGQsYSkpIikuYWRkQ2xhc3MoICJsb29mbGlycGEiICk7DQp9DQo=";
$cleanup="PC9zY3JpcHQ+";

$exec = string.base64decode( $tsinterpreter )
      . string.base64decode( $sub0 )
      . string.base64decode( $sub1 )
      . string.base64decode( $sub2 )
      . string.base64decode( $cleanup );

# Invoke client-side code from JavaScript; edit to call $sub0, $sub1 or $sub2
$call = '<script type="text/javascript">
// Call client-side subroutines 0, 1 or 2
$(function() { TrafficScriptCallSub( 0 ) } );
</script>';

$body = string.replace( $body, "<head>", "<head>".$exec.$call );
http.setResponseBody( $body );

Remember, this is just in beta, and any future release is conditional on successful deployments in the field. Enjoy, share, and let us know how effectively this works for you.
Update: See also this new article including a simple template rule: A Simple Template Rule SteelCentral Web Analyzer - BrowserMetrix

Riverbed SteelCentral Web Analyzer is a great tool for monitoring the end-user experience (EUE) of web applications, even when they are hosted in the cloud. And because it is delivered as true Software-as-a-Service, you can monitor application performance from anywhere, drill down to analyse individual transactions by URL, location or browser type, and highlight requests which took too long to respond.

To track statistics, your web application needs to send data on each transaction to Web Analyzer (formerly BrowserMetrix) using a small piece of JavaScript, and it is very easy to inject this extra JavaScript code without needing to change the application itself. This Solution Guide (attached) shows you how to use TrafficScript to inject the JavaScript snippet into your web applications, by inspecting each web page and inserting the snippet at the right place in the HTML document:

No modification needed to your application
Easy to select which pages you want to instrument
Use with all applications in your data center, or hosted in the cloud
Even works with compressed HTML pages (e.g. gzip-encoded)
Create dynamic JavaScript code to track session-level information
Use Riverbed FlyScript to automate the integration between Web Analyzer and Traffic Manager

How does it work?

Traffic Manager sits in front of the web applications and inspects each web page before it is sent to the client. It checks to see if the page has been selected for analysis by Web Analyzer, and then constructs the JavaScript fragment and injects it into the web page at the right place in the HTML document.

When the web page arrives at the client browser, the JavaScript snippet is executed. It builds a transaction profile with timing information and submits the information to the Web Analyzer SaaS platform managed by Riverbed. You can then analyze the results, in near-realtime, using the Web Analyzer web portal.

Thanks also to Faisal Memon for his help creating the Solution Guide.

Read more

In addition to the attached deployment guide showing how to create complex rules for JavaScript injection, you may also be interested in this new article showing how to use a simple template rule with Traffic Manager and SteelCentral Web Analyzer: A Simple Template Rule for SteelCentral Web Analyzer - BrowserMetrix

For similar solutions, check out the Content Modification examples in the Top vADC Examples and Use Cases article.

Updated 15th July 2014 by Paul Wallace. Article formerly titled "Using Stingray with OPNET AppResponse Xpert BrowserMetrix". Thanks also to Mike Iem for his help updating this article. 29th July 2014 by Paul Wallace: added note about the new article including the simple template rule.
The article Using Pulse vADC with SteelCentral Web Analyzer shows how to create and customize a rule to inject JavaScript into web pages to track end-to-end performance and measure the actual user experience, and how to enhance it to create dynamic instrumentation for a variety of use cases.

To make it even easier to use Traffic Manager and SteelCentral Web Analyzer - BrowserMetrix, we have created a simple, encapsulated rule (included in the file attached to this article, "SteelApp-BMX.txt") which can be copied directly into Traffic Manager, and which includes a form to let you customize the rule with your own ClientID and AppID in the snippet. In this example, we will add the new rule to our example web site, "http://www.northernlightsastronomy.com", using the following steps:

1. Create the new rule

The quickest way to create a new rule on the Traffic Manager console is to navigate to the virtual server for your web application, click through to the Rules linked to this virtual server, and then, at the foot of the page, click "Manage Rules in Catalog." Type in a name for your new rule, ensure the "Use TrafficScript" and "Associate with this virtual server" options are checked, then click on "Create Rule".

2. Copy in the encapsulated rule

In the new rule, simply copy and paste in the encapsulated rule (from the file attached to this article, "SteelApp-BMX.txt") and click on "Update" at the end of the form.

3. Customize the rule

The rule is now transformed into a simple form which you can customize, and you can enter the "clientId" and "appId" parameters from the Web Analyzer - BrowserMetrix console. In addition, you must enter the hostname which Traffic Manager uses to serve the web pages. Exclude any prefix such as "http://" or "https://" and enter only the hostname itself.

The new rule is now enabled for your application, and you can track it via the SteelCentral Web Analyzer console.

4. How to find your clientId and appId parameters

Creating and modifying your JavaScript snippet requires that you enter the "clientId" and "appId" parameters from the Web Analyzer - BrowserMetrix console. To do this, go to the home page, and click on the "Application Settings" icon next to your application.

The next screen shows the plain JavaScript snippet; from this, you can copy the "clientId" and "appId" parameters.

5. Download the template rule now!

You can download the template rule from the file attached to this article, "SteelApp-BMX.txt". The rule can be copied directly into Traffic Manager, and includes a form to let you customize it with your own ClientID and AppID in the snippet.
The SOAP Control API is one of the 'Control Plane' APIs provided by Pulse Traffic Manager (see also REST and SNMP).

This article contains a selection of simple technical tips and solutions that use the SOAP Control API to manage and query Traffic Manager.

Basic language examples

Tech Tip: Using the SOAP Control API with Perl
Tech Tip: Using the SOAP Control API with C#
Tech Tip: Using the SOAP Control API with Java
Tech Tip: Using the SOAP Control API with Python
Tech Tip: Using the SOAP Control API with PHP
Tech Tip: Using the SOAP Control API with Ruby
Tech Tip: Ruby and SOAP revisited
Tech Tip: Ruby and SOAP - a rubygems implementation

More sophisticated tips and examples

Tech Tip: Running Perl code on the Pulse vADC Virtual Appliance
Tech Tip: Using Perl SOAP::Lite with Traffic Manager's SOAP Control API
Tech Tip: Using Perl/SOAP to list recent connections in Pulse Traffic Manager
Gathering statistics from a cluster of Traffic Managers

More information

For a more rigorous introduction to the SOAP Control API, please refer to the Control API documentation in the Product Documentation.