Pulse Secure vADC
In a recent conversation, a user wished to use the Traffic Manager's rate shaping capability to throttle back requests to one part of his web site that was particularly sensitive to high traffic volumes (think of a CGI, JSP servlet, or other type of dynamic application). This article describes how you might go about doing this, testing and implementing a suitable limit using Service Level Monitoring, Rate Shaping and some TrafficScript magic.

The problem

Imagine that part of your website is particularly sensitive to traffic load and is prone to overloading when a crowd of visitors arrives. Connections queue up, response time becomes unacceptable and it looks like your site has failed.

If your website were a tourist attraction or a club, you'd employ a gatekeeper to manage entry rates. As the attraction began to fill up, you'd employ a queue to limit entry, and if the queue got too long, you'd want to encourage new arrivals to leave and return later rather than join the queue.

This is more-or-less the solution we can implement for a web site. In this worked example, we're going to single out a particular application (named search.cgi) whose traffic we want to control, and let all other traffic (typically for static content, etc.) through without any shaping.

The approach

We'll first measure the maximum rate at which the application can process transactions, and use this value to determine the rate limit we want to impose when the application begins to run slowly.

Using Traffic Manager's Service Level Monitoring classes, we'll monitor the performance (response time) of the search.cgi application. If the application begins to run slower than normal, we'll deploy a queuing policy that rate-limits new requests to the application. We'll monitor the queue and send a 'please try later' message when the rate limit is met, rather than admitting users to the queue and forcing them to wait.

Our goal is to maximize utilization (supporting as many transactions as possible) but minimize response time, returning a 'please wait' message rather than queueing a user.

Measuring performance

We first use zeusbench to determine the optimal performance that the application can achieve. We perform several runs, increasing the concurrency until the performance (responses per second) stabilizes at a consistent level:

zeusbench -c 5 -t 20 http://host/search.cgi
zeusbench -c 10 -t 20 http://host/search.cgi
zeusbench -c 20 -t 20 http://host/search.cgi
... etc

From this, we conclude that the maximum number of transactions per second that the application can comfortably sustain is 100.

We then use zeusbench to send transactions at that rate (100 per second) and verify that performance and response times are stable:

zeusbench -r 100 -t 20 http://host/search.cgi

Our desired response time can be deduced to be approximately 20ms.

Now we perform the 'destructive' test, to elicit precisely the behaviour we want to avoid. Use zeusbench again to send requests to the application at higher than the sustainable transaction rate:

zeusbench -r 110 -t 20 http://host/search.cgi

Observe how the response time for the transactions steadily climbs as requests begin to be queued, while the successful transaction rate falls steeply. Eventually, when the response time climbs past acceptable limits, transactions are timed out and the service appears to have failed.
This illustrates how sensitive a typical application can be to floods of traffic that overwhelm it, even for just a few seconds. The effects of the flood can last for tens of seconds afterwards as the connections complete or time out.

Defining the policy

We wish to implement the following policy:

- If all transactions complete within 50 ms, do not attempt to shape traffic.
- If some transactions take more than 50 ms, assume that we are in danger of overload. Rate-limit traffic to 100 requests per second, and if requests exceed that rate limit, send back a '503 Too Busy' message rather than queuing them.
- Once transaction time comes down to less than 50 ms, remove the rate limit.

Our goal is to repeat the previous zeusbench test, showing that the maximum transaction rate can be sustained within the desired response time, and that any extra requests receive an error message quickly rather than being queued.

Implementing the policy

The Rate Class

Create a rate shaping class named "Search limit" with a limit of 100 requests per second.

The Service Level Monitoring class

Create a Service Level Monitoring class named "Search timer" with a target response time of 50 ms.

If desired, you can use the Activity monitor to chart the percentage of requests that conform (i.e. complete within 50 ms) while you conduct your zeusbench runs. You'll notice a strong correlation between these figures and the increase in response time figures reported by zeusbench.

The TrafficScript rule

Now use these two classes with the following TrafficScript request rule:

# We're only concerned with requests for /search.cgi
$url = http.getPath();
if( $url != "/search.cgi" ) break;

# Time this request using the Service Level Monitoring class
connection.setServiceLevelClass( "Search timer" );

# Test if any of the recent requests fell outside the desired SLM threshold
if( slm.conforming( "Search timer" ) < 100 ) {
   if( rate.getBacklog( "Search limit" ) > 0 ) {
      # To minimize response time, always send a 503 Too Busy response if the
      # request exceeds the configured rate of 100/second.
      # You could also use http.redirect() to a more pleasant 'sorry' page, but
      # 503 errors are easier to monitor when testing with ZeusBench
      http.sendResponse( "503 Too busy", "text/html",
         "<h1>We're too busy!!!</h1>",
         "Pragma: no-cache" );
   } else {
      # Shape the traffic to 100/second
      rate.use( "Search limit" );
   }
}

Testing the policy

Rerun the 'destructive' zeusbench run that produced the undesired behaviour previously:

zeusbench -r 110 -t 20 http://host/search.cgi

Observe that:

- Traffic Manager processes all of the requests without excessive queuing; the response time stays within desired limits.
- Traffic Manager typically processes 110 requests per second. There are approximately 10 'Bad' responses per second (these are the 503 Too Busy responses generated by the rule), so we can deduce that the remaining 100 (approx.) requests were served correctly.

These tests were conducted in a controlled environment, on an otherwise-idle machine that was not processing any other traffic. You could reasonably expect much more variation in performance in a real-world situation, and would be advised to set the rate class to a lower value than the experimentally-proven maximum.
In a real-world situation, you would probably choose to redirect a user to a 'sorry' page rather than returning a '503 Too Busy' error; a sketch of that variant appears at the end of this article. However, because ZeusBench counts 4xx and 5xx responses as 'Bad', it is easy to determine how many requests complete successfully, and how many return the 'sorry' response.

For more information on using ZeusBench, take a look at the Introducing Zeusbench article.
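If you do deploy the redirect approach, the 503 branch of the rule above becomes a one-line change; a minimal sketch, where the 'sorry' page URL is an illustrative assumption:

if( rate.getBacklog( "Search limit" ) > 0 ) {
   # Hypothetical variant: redirect instead of sending a 503 response
   http.redirect( "http://host/sorry.html" );
}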
Introduction

Do you ever face any of these requirements?

- "I want to best-effort provide certain levels of service for certain users."
- "I want to prioritize some transactions over others."
- "I want to restrict the activities of certain types of users."

This article explains that to address these problems, you must consider the following questions:

- "Under what circumstances do you want the policy to take effect?"
- "How do you wish to categorise your users?"
- "How do you wish to apply the differentiation?"

It then describes some of the measures you can take to monitor performance more deeply and apply prioritization to nominated traffic:

- Service Level Monitoring - Measure system performance, and apply policies only when they are needed.
- Custom Logging - Log and analyse activity to record and validate policy decisions.
- Application traffic inspection - Determine source, user, content, value; XML processing with XPath searches and calculations.
- Request Rate Shaping - Apply fine-grained rate limits for transactions.
- Bandwidth Control - Allocate and reserve bandwidth.
- Traffic Routing and Termination - Route high and low priority traffic differently; terminate undesired requests early.
- Selective Traffic Optimization - Selective caching and compression.

Whether you are running an eCommerce web site, online corporate services or an internal intranet, there's always the need to squeeze more performance from limited resources and to ensure that your most valuable users get the best possible levels of service from the services you are hosting.

An example

Imagine that you are running a successful gaming service in a glamorous location. The usage of your service is growing daily, and many of your long-term users are becoming very valuable.

Unfortunately, much of your bandwidth and server hits are taken up by competitors' robots that screen-scrape your betting statistics, and by poorly-written bots that spam your gaming tables and occasionally place low-value bets. At certain times of the day, this activity is so great that it impacts the quality of the service you deliver, and your most valuable customers are affected.

Using Traffic Manager to measure, classify and prioritize traffic, you can construct a service policy that comes into effect when your web site begins to run slowly, enforcing different levels of service (a TrafficScript sketch of this policy appears below):

- Competitors' screen-scraping robots are tightly restricted to one request per second each. A ten-second delay reduces the value of the information they screen-scrape.
- Users who have not yet logged in are limited to a small proportion of your available bandwidth and directed to a pair of basic web servers, thus reserving capacity for users who are logged in.
- Users who have made large transactions in the past are tagged with a cookie and the performance they receive is measured. If they are receiving poor levels of service (over 100ms response time), then some of the transaction servers are reserved for these high-value users and the activity of other users is shaped by a system-wide queue.

Whether you are operating a gaming service, a content portal, a B2B or B2C eCommerce site or an internal intranet, this kind of service policy can help ensure that key customers get the best possible service, minimize the churn of valuable users and prevent undesirable visitors from harming the service to the detriment of others.
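The following request rule sketches how those three measures might fit together. It is illustrative only: the class names, pool names, robot test and 'LargeSpender' cookie are all assumptions, not part of a shipped configuration:

# Illustrative sketch: class, pool and cookie names are hypothetical
$ua = http.getHeader( "User-Agent" );
$ip = request.getRemoteIP();

# Screen-scraping robots: limit each source IP to 1 request per second
if( string.contains( $ua, "bot" ) ) {
   rate.use( "1 per second", $ip );
   break;
}

# Visitors who have not logged in: cap their bandwidth and send them
# to the pair of basic web servers (pool.use ends the rule)
if( ! http.getCookie( "Login" ) ) {
   response.setBandwidthClass( "anonymous visitors" );
   pool.use( "basic web servers" );
}

# Measure the service that logged-in users receive against a 100ms target
connection.setServiceLevelClass( "100ms target" );

# If service is degrading, reserve servers for high-value users and
# shape everyone else through a system-wide queue
if( slm.conforming( "100ms target" ) < 80 ) {
   if( http.getCookie( "LargeSpender" ) ) {
      pool.use( "reserved transaction servers" );
   } else {
      rate.use( "system-wide queue" );
   }
}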
Designing a service policy

- "I want to best-effort guarantee certain levels of service for certain users."
- "I want to prioritize some transactions over others."
- "I want to restrict the activities of certain users."

To address these problems, you must consider the following questions:

- Under what circumstances do you want the policy to take effect?
- How do you wish to categorise your users?
- How do you wish to apply the differentiation?

One or more TrafficScript rules can be used to apply the policy. They take advantage of the following features:

When does the policy take effect?

- Service Level Monitoring - Measure system performance, and apply policies only when they are needed.
- Custom Logging - Log and analyse activity to record and validate policy decisions.

How are users categorized?

- Application traffic inspection - Determine source, user, content, value; XML processing with XPath searches and calculations.

How are they given different levels of service?

- Request Rate Shaping - Apply fine-grained rate limits for transactions.
- Bandwidth Control - Allocate and reserve bandwidth.
- Traffic Routing and Termination - Route high and low priority traffic differently; terminate undesired requests early.
- Selective Traffic Optimization - Selective caching and compression.

TrafficScript

Feature Brief: TrafficScript is the key to defining traffic management policies that implement these prioritization rules. TrafficScript brings together functionality to monitor and classify behavior, and then applies functionality to impose the appropriate prioritization rules.

For example, the following TrafficScript request rule inspects HTTP requests. If the request is for a .jsp page, the rule looks at the client's 'Priority' cookie and routes the request to the 'high-priority' or 'low-priority' server pool as appropriate:

$url = http.getPath();
if( string.endsWith( $url, ".jsp" ) ) {
   $cookie = http.getCookie( "Priority" );
   if( $cookie == "high" ) {
      pool.use( "high-priority" );
   } else {
      pool.use( "low-priority" );
   }
}

Generally, if you can describe the traffic management logic that you require, it is possible to implement it using TrafficScript.

Capability 1: Service Level Monitoring

Using Feature Brief: Service Level Monitoring, Traffic Manager can measure and react to changes in response times for your hosted services, by comparing response times to a desired time.

You configure Service Level Monitoring by creating a Service Level Monitoring class (SLM class). The SLM class is configured with the desired response time (for example, 100ms), and some thresholds that define actions to take. For example, if fewer than 80% of requests meet the desired response time, Traffic Manager can log a warning; if fewer than 50% meet the desired time, Traffic Manager can raise a system alert.

Suppose that we were concerned about the performance of our Java servlets. We can configure an SLM class with the desired performance, and use it to monitor all requests for Java servlets:

$url = http.getPath();
if( string.startsWith( $url, "/servlet/" ) ) {
   connection.setServiceLevelClass( "Java servlets" );
}

You can then monitor the performance figures generated by the 'Java servlets' SLM class to discover the response times, and the proportion of requests that fall outside the desired response time.

Once requests are monitored by an SLM class, you can discover the proportion of requests that meet (or fail to meet) the desired response time within a TrafficScript rule.
This makes it possible to implement TrafficScript logic that is only called when services are underperforming.

Example: Simple Differentiation

Suppose we had a TrafficScript rule that tested to see if a request came from a 'high value' customer.

When our service is running slowly, high-value customers should be sent to one server pool ('gold') and other customers sent to a lower-performing server pool ('bronze'). However, when the service is running at normal speed, we want to send all customers to all servers (the server pool named 'all servers').

The following TrafficScript rule shows how this logic can be implemented:

# Monitor all traffic with the 'response time' SLM class, which is
# configured with a desired response time of 200ms
connection.setServiceLevelClass( "response time" );

# Now, check the historical activity (last 10 seconds) to see if it's
# been acceptable (more than 90% of requests served within 200ms)
if( slm.conforming( "response time" ) > 90 ) {
   # select the 'all servers' pool; pool.use also terminates the rule
   pool.use( "all servers" );
}

# If we get here, things are running slowly.
# Here, we decide a customer is 'high value' if they have a login cookie,
# so we penalize customers who are not logged in. You can put your own
# test here instead
$logincookie = http.getCookie( "Login" );
if( $logincookie ) {
   pool.use( "gold" );
} else {
   pool.use( "bronze" );
}

For a more sophisticated example of this technique, check out the article Dynamic rate shaping slow applications.

Capability 2: Application Traffic Inspection

There's no limit to how you can inspect and evaluate your traffic. Traffic Manager lets you look at any aspect of a client's request, so that you can then categorize them as you need. For example:

# What is the client asking for?
$url = http.getPath();
# ... and the QueryString
$qs = http.getQueryString();

# Where has the client come from?
$referrer = http.getHeader( "Referer" );
$country = geo.getCountryCode( request.getRemoteIP() );

# What sort of browser is the client using?
$ua = http.getHeader( "User-Agent" );

# Is the client trying to spend more than $49.99?
if( http.getPath() == "/checkout.cgi" &&
    http.getFormParam( "total" ) > 4999 ) ...

# What's the value of the CustomerName field in the XML purchase order
# in the SOAP request?
$body = http.getBody();
$name = xml.xpath.matchNodeSet( $body, "", "//Info/CustomerName/text()" );

# Take the name, post it to a database server with a web interface and
# inspect the response. Does the response contain the value 'Premium'?
$response = http.request.post( "http://my.database.server/query",
   "name=" . string.htmlEncode( $name ) );
if( string.contains( $response, "Premium" ) ) { ... }

Remembering the Classification with a Cookie

Often, it only takes one request to identify the status of a user, but you want to remember this decision for all subsequent requests. For example, if a user places an item in his shopping cart by accessing the URL '/cart.php', then you want to remember this information for all of his subsequent requests.

Adding a response cookie is the way to do this.
You can do this in either a request or response rule with the http.setResponseCookie() function:

if( http.getPath() == "/cart.php" ) {
   http.setResponseCookie( "GotItems", "Yes" );
}

This cookie will be sent by the client on every subsequent request, so to test if the user has placed items in his shopping cart, you just need to test for the presence of the 'GotItems' cookie in each request rule:

if( http.getCookie( "GotItems" ) ) { ... }

If necessary, you can encrypt and sign the cookie so that it cannot be spoofed or reused:

# Setting the cookie.
# Create an encryption key using the client's IP address and user agent.
# Encrypt the current time using the encryption key; it can only be
# decrypted using the same key
$key = http.getHeader( "User-Agent" ) . ":" . request.getRemoteIP();
$encrypted = string.encrypt( sys.time(), $key );
$encoded = string.hexencode( $encrypted );
http.setResponseHeader( "Set-Cookie", "GotItems=" . $encoded );

# Validating the cookie
$isValid = 0;
if( $cookie = http.getCookie( "GotItems" ) ) {
   $encrypted = string.hexdecode( $cookie );
   $key = http.getHeader( "User-Agent" ) . ":" . request.getRemoteIP();
   $secret = string.decrypt( $encrypted, $key );
   # If the cookie has been tampered with, or the IP address or user
   # agent differ, string.decrypt will return an empty string.
   # If it worked and the data was less than 1 hour old, it's valid:
   if( $secret && sys.time() - $secret < 3600 ) {
      $isValid = 1;
   }
}

Capability 3: Request Rate Shaping

Having decided when to apply your service policy (using Service Level Monitoring), and classified your users (using Application Traffic Inspection), you now need to decide how to prioritize valuable users and penalize undesirable ones.

The Feature Brief: Bandwidth and Rate Shaping in Traffic Manager capability is used to apply maximum request rates:

- On a global basis ("no more than 100 requests per second to my application servers");
- On a very fine-grained per-user or per-class basis ("no user can make more than 10 requests per minute to any of my statistics pages").

You can construct a service policy that places limits on a wide range of events, with very fine-grained control over how events are identified. You can impose per-second and per-minute rates on these events.

For example:

- You can rate-shape individual web spiders, to stop them overwhelming your web site. Each web spider, from each remote IP address, can be given a maximum request rate.
- You can throttle individual SMTP connections, or groups of connections from the same client, so that each connection is limited to a maximum number of sent emails per minute. You may also rate-shape new SMTP connections, so that a remote client can only establish new connections at a particular rate.
- You can apply a global rate shape to the number of connections per second that are forwarded to an application.
- You can identify individual users' attempts to log in to a service, and then impede any dictionary-based login attacks by restricting each user to a limited number of attempts per minute.

Request rate limits are imposed using the TrafficScript rate.use() function, and you can configure per-second and per-minute limits in the rate class. Both limits are applied (note that if the per-minute limit is more than 60 times the per-second limit, it has no effect).

Using a Rate Class

Rate classes function as queues. When the TrafficScript rate.use() function is called, the connection is suspended and added to the queue that the rate class manages.
Connections are then released from the queue according to the per-second and per-minute limits.

There is no limit to the size of the backlog of queued connections. For example, if 1000 requests arrived in quick succession to a rate class that admitted 10 per second, 990 of them would be immediately queued. Each second, 10 more requests would be released from the front of the queue.

While they are queued, connections may time out or be closed by the remote client. If this happens, they are immediately discarded.

You can use the rate.getBacklog() function to discover how many requests are currently queued. If the backlog is too large, you may decide to return an error page to the user rather than risk their connection timing out. For example, to rate-shape .jsp requests, but defer requests when the backlog gets too large:

$url = http.getPath();
if( string.endsWith( $url, ".jsp" ) ) {
   if( rate.getBacklog( "shape requests" ) > 100 ) {
      http.redirect( "http://mysite/too_busy.html" );
   } else {
      rate.use( "shape requests" );
   }
}

Rate Classes with Keys

In many circumstances, you may need to apply more fine-grained rate-shaping limits. For example, imagine a login page: we wish to limit how frequently each individual user can attempt to log in, to just 2 attempts per minute.

The rate.use() function can take an optional 'key' which identifies a specific instance of the rate class. This key can be used to create multiple, independent rate classes that share the same limits, but enforce them independently for each individual key.

For example, the 'login limit' class is restricted to 2 requests per minute, to limit how often each user can attempt to log in:

$url = http.getPath();
if( string.endsWith( $url, "login.cgi" ) ) {
   $user = http.getFormParam( "username" );
   rate.use( "login limit", $user );
}

This rule can help to defeat dictionary attacks where attackers try to brute-force crack a user's password. The rate-shaping limits are applied independently to each different value of $user. As each new user accesses the system, they are limited to 2 requests per minute, independently of all other users who share the "login limit" rate-shaping class.

For another example, check out The "Contact Us" attack against mail servers.

Applying service policies with rate shaping

Of course, once you've classified your users, you can apply different rate settings to different categories of users:

# If they have an odd-looking user agent, or if there's no Host header,
# the client is probably a web spider. Limit it to 1 request per second.
$ua = http.getHeader( "User-Agent" );
if( ( ! string.startsWith( $ua, "Mozilla/" ) &&
      ! string.startsWith( $ua, "Opera/" ) ) ||
    ! http.getHeader( "Host" ) ) {
   rate.use( "spiders", request.getRemoteIP() );
}

If the service is running slowly, rate-shape users who have not placed items into their shopping cart with a global limit, and rate-shape other users to 8 requests per second each:

if( slm.conforming( "timer" ) < 80 ) {
   $cookie = http.getCookie( "Cart" );
   if( ! $cookie ) {
      rate.use( "casual users" );
   } else {
      # Get a unique id for the user
      $cookie = http.getCookie( "JSPSESSIONID" );
      rate.use( "8 per second", $cookie );
   }
}

Capability 4: Bandwidth Shaping

Feature Brief: Bandwidth and Rate Shaping in Traffic Manager allows Traffic Manager to limit the number of bytes per second used by inbound or outbound traffic, for an entire service, or by the type of request.
Bandwidth limits are automatically shared and enforced across all the Traffic Managers in a cluster. Individual Traffic Managers take different proportions of the total limit, depending on the load on each, and unused bandwidth is equitably allocated across the cluster depending on the need of each machine.

Like request rate shaping, you can use bandwidth shaping to limit the activities of subsets of your users. For example, you may have a 1 Gbit/s network connection which is being over-utilized by a certain type of client, affecting the responsiveness of the service. You may therefore wish to limit the bandwidth available to those clients to 20 Mbit/s.

Using Bandwidth Shaping

Like request rate shaping, you configure a bandwidth class with a maximum bandwidth limit. Connections are allocated to a class as follows:

response.setBandwidthClass( "class name" );

All of the connections allocated to the class share the same bandwidth limit.

Example: Managing Flash Floods

The following example helps to mitigate the 'Slashdot effect', a common example of a flash flood problem. In this situation, a web site is overwhelmed by traffic as a result of a high-profile link (for example, from the Slashdot news site), and the level of service that regular users experience suffers as a result.

The example looks at the 'Referer' header, which identifies where a user has come from to access a web site. If the user has come from 'slashdot.org', he is tagged with a cookie so that all of his subsequent requests can be identified, and he is allocated to a low-bandwidth class:

$referrer = http.getHeader( "Referer" );
if( string.contains( $referrer, "slashdot.org" ) ) {
   http.addResponseHeader( "Set-Cookie", "slashdot=1" );
   connection.setBandwidthClass( "slashdot" );
}
if( http.getCookie( "slashdot" ) ) {
   connection.setBandwidthClass( "slashdot" );
}

For a more in-depth discussion, check out Detecting and Managing Abusive Referers.

Capability 5: Traffic Routing and Termination

Different levels of service can be provided by different traffic routing, or in extreme events, by dropping some requests.

For example, some large media sites provide different levels of content: high-bandwidth rich media versions of news stories are served during normal usage, and low-bandwidth versions are served when traffic levels are extremely high. Many websites provide Flash-enabled and simple HTML versions of their home page and navigation.

This is also commonplace when presenting content to a range of browsing devices with different capabilities and bandwidth.

The switch between high- and low-bandwidth versions could take place as part of a service policy: as the service begins to under-perform, some (or all) users could be forced onto the low-bandwidth versions so that a better level of service is maintained.

# Forcibly change requests that begin /high/ to /low/
$url = http.getPath();
if( string.startsWith( $url, "/high" ) ) {
   $url = string.replace( $url, "/high", "/low" );
   http.setPath( $url );
}

Example: Ticket Booking Systems

Ticket booking systems for major events often suffer enormous floods of demand when tickets become available.

You can use Stingray's request rate shaping system to limit how many visitors are admitted to the service, and if the service becomes overwhelmed, you can send back a 'please try again' message rather than keeping the user 'on hold' in the queue indefinitely.
Suppose the 'booking' rate shaping class is configured to admit 10 users per second, and that users enter the booking process by accessing the URL /bookevent?eventID=<id>. This rule ensures that no user is queued for more than 30 seconds, by keeping the queue length to no more than 300 users (10 users/second * 30 seconds):

# limit how users can book events
$url = http.getPath();
if( $url == "/bookevent" ) {
   # How many users are already queued?
   if( rate.getBacklog( "booking" ) > 300 ) {
      http.redirect( "http://www.mysite.com/too_busy.html" );
   } else {
      rate.use( "booking" );
   }
}

Example: Prioritizing Resource Usage

In many cases, the resources are limited and when a site is overwhelmed, users' requests still need to be served.

Consider the following scenario:

- The site runs a cluster of 4 identical application servers (servers '1' to '4');
- Users are categorized into casual visitors and customers; customers have a 'Cart' cookie, and casual visitors do not.

Our goal is to give all users the best possible level of service, but if customers begin to get a poor level of service, we want to prioritize them over casual visitors. We desire that more than 80% of customers get responses within 100ms.

This can be achieved by splitting the 4 servers into 2 pools: the 'allservers' pool contains servers 1 to 4, and the 'someservers' pool contains servers 1 and 2 only.

When the service is poor for the customers, we will restrict the casual visitors to just the 'someservers' pool. This effectively reserves the additional servers 3 and 4 for the customers' exclusive use.

The following code uses the 'response' SLM class to measure the level of service that customers receive:

$customer = http.getCookie( "Cart" );
if( $customer ) {
   connection.setServiceLevelClass( "response" );
   pool.use( "allservers" );
} else {
   if( slm.conforming( "response" ) < 80 ) {
      pool.use( "someservers" );
   } else {
      pool.use( "allservers" );
   }
}

Capability 6: Selective Traffic Optimization

Some of Traffic Manager's features can be used to improve the end user's experience, but they take up resources on the system:

- Pulse Web Accelerator (Aptimizer) rewrites page content for faster download and rendering, but is very CPU-intensive.
- Content Compression reduces the bandwidth used in responses and gives better response times, but it takes considerable CPU resources and can degrade performance.
- Feature Brief: Traffic Manager Content Caching can give much faster responses, and it is possible to cache multiple versions of content for each user. However, this consumes memory on the system.

All of these features can be enabled and disabled on a per-user basis, as part of a service policy.

Pulse Web Accelerator (Stingray Aptimizer)

Use the http.aptimizer.bypass() and http.aptimizer.use() TrafficScript functions to control whether Traffic Manager will employ the Aptimizer optimization module for web content.

Note that these functions only refer to optimizations to the base HTML document (e.g. index.html, or other content of type text/html); all other resources will be served as appropriate. For example, if a client receives an aptimized version of the base content and then requests the image sprites, Traffic Manager will always serve up the sprites.
# Optimize web content for clients based in Australia
$ip = request.getRemoteIP();
if( geo.getCountry( $ip ) == "Australia" ) {
   http.aptimizer.use( "All", "Remote Users" );
}

Content Compression

Use the http.compress.enable() and http.compress.disable() TrafficScript functions to control whether or not Traffic Manager will compress response content to the remote client.

Note that Traffic Manager will only compress content if the remote browser has indicated that it supports compression.

On a lightly loaded system, it's appropriate to compress all response content whenever possible:

http.compress.enable();

On a system where the CPU usage is becoming too high, you can selectively compress content:

# Don't compress by default
http.compress.disable();
if( $isvaluable ) {
   # do compress in this case
   http.compress.enable();
}

Content Caching

Traffic Manager can cache multiple different versions of an HTTP response. For example, if your home page is generated by an application that customizes it for each user, Traffic Manager can cache each version separately, and return the correct version from the cache for each user who accesses the page.

Traffic Manager's cache has a limited size so that it does not consume too much memory and cause performance to degrade. You may wish to prioritize which pages you put in the cache, using the http.cache.disable() and http.cache.enable() TrafficScript functions.

Note: you also need to enable Content Caching in your Virtual Server configuration; otherwise the TrafficScript cache control functions will have no effect.

# Get the user name
$user = http.getCookie( "UserName" );

# Don't cache any pages by default:
http.cache.disable();

if( $isvaluable ) {
   # Do cache these pages for better performance.
   # Each user gets a different version of the page, so we need to cache
   # the page indexed by the user name.
   http.cache.setkey( $user );
   http.cache.enable();
}

Custom Logging

A service policy can be complicated to construct and implement.

The TrafficScript functions log.info(), log.warn() and log.error() are used to write messages to the event log, and so are very useful debugging tools to assist in developing complex TrafficScript rules.

For example, the following code:

if( $isvaluable && slm.conforming( "timer" ) < 70 ) {
   log.info( "User " . $user . " needs priority" );
}

... will append the following message to your error log file:

$ tail $ZEUSHOME/zxtm/log/errors
[20/Jan/2013:10:24:46 +0000] INFO rulename rulelogmsginfo vsname User Jack needs priority

You can also inspect your error log file by viewing the 'Event Log' on the Admin Server.

When you are debugging a rule, you can use log.info() to print out progress messages as the rule executes. The log.info() function takes a string parameter; you can construct complex strings by appending variables and literals together using the '.' operator:

$msg = "Received " . connection.getDataLen() . " bytes.";
log.info( $msg );

The functions log.warn() and log.error() are similar to log.info(). They prefix error log messages with a higher priority, either "WARN" or "ERROR", and you can filter and act on these using the Event Handling system.

You should be careful when printing out connection data verbatim, because the connection data may contain control characters or other non-printable characters. You can encode data using either string.hexEncode() or string.escape(); use string.hexEncode() if the data is binary, and string.escape() if the data contains readable text with a small number of non-printable characters.
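For instance, a minimal sketch (assuming an HTTP virtual server where the request body may contain binary or control characters):

# Log request data safely: hex-encode binary data, escape readable text
$body = http.getBody();
log.info( "Body (hex): " . string.hexencode( $body ) );
log.info( "Body (escaped): " . string.escape( $body ) );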
Conclusion

Traffic Manager is a powerful toolkit for network and application administrators. This white paper describes a number of techniques that use tools in the kit to solve a range of traffic valuation and prioritization tasks.

For more examples of how Traffic Manager and TrafficScript can manipulate and prioritize traffic, check out the Top Examples of Traffic Manager in action on the Pulse Community.
Top examples of Pulse vADC in action

Examples of how Pulse vADC (formerly SteelApp) can be deployed to address a range of application delivery challenges.

Modifying Content

- Simple web page changes - updating a copyright date
- Adding meta-tags to a website with Traffic Manager
- Tracking user activity with Google Analytics and Google Analytics revisited
- Embedding RSS data into web content using Traffic Manager
- Add a Countdown Timer
- Using TrafficScript to add a Twitter feed to your web site
- Embedded Twitter Timeline
- Embedded Google Maps
- Watermarking PDF documents with Traffic Manager and Java Extensions
- Watermarking Images with Traffic Manager and Java Extensions
- Watermarking web content with Pulse vADC and TrafficScript

Prioritizing Traffic

- Evaluating and Prioritizing Traffic with Traffic Manager
- HowTo: Control Bandwidth Management
- Detecting and Managing Abusive Referers
- Using Pulse vADC to Catch Spiders
- Dynamic rate shaping slow applications
- Stop hot-linking and bandwidth theft!
- Slowing down busy users - driving the REST API from TrafficScript

Performance Optimization

- Cache your website - just for one second?
- HowTo: Monitor the response time of slow services
- HowTo: Use low-bandwidth content during periods of high load

Fixing Application Problems

- No more 404 Not Found...?
- Hiding Application Errors
- Sending custom error pages

Compliance Problems

- Satisfying EU cookie regulations using The cookiesDirective.js and TrafficScript

Security Problems

- The "Contact Us" attack against mail servers
- Protecting against Java and PHP floating point bugs
- Managing DDoS attacks with Traffic Manager
- Enhanced anti-DDoS using TrafficScript, Event Handlers and iptables
- How to stop 'login abuse', using TrafficScript
- Bind9 Exploit in the Wild...
- Protecting against the range header denial-of-service in Apache HTTPD
- Checking IP addresses against a DNS blacklist with Traffic Manager
- Heartbleed: Using TrafficScript to detect TLS heartbeat records
- TrafficScript rule to protect against "Shellshock" bash vulnerability (CVE-2014-6271)
- SAML 2.0 Protocol Validation with TrafficScript
- Disabling SSL v3.0 for SteelApp

Infrastructure

- Transparent Load Balancing with Traffic Manager
- HowTo: Launch a website at 5am
- Using Stingray Traffic Manager as a Forward Proxy
- Tunnelling multiple protocols through the same port
- AutoScaling Docker applications with Traffic Manager
- Elastic Application Delivery - Demo
- How to deploy Traffic Manager Cluster in AWS VPC

Other Solutions

- Building a load-balancing MySQL proxy with TrafficScript
- Serving Web Content from Traffic Manager using Python and Serving Web Content from Traffic Manager using Java
- Virtual Hosting FTP services
- Managing WebSockets traffic with Traffic Manager
- TrafficScript can Tweet Too
- Instrument web content with Traffic Manager
- Antivirus Protection for Web Applications
- Generating Mandelbrot sets using TrafficScript
- Content Optimization across Equatorial Boundaries
Many services now use RSS feeds to distribute frequently updated information like news stories and status reports. Traffic Manager's powerful TrafficScript language lets you process RSS XML data, and this article describes how you can embed several RSS feeds into a web document.

It illustrates Traffic Manager's response rewriting capabilities, XML processing and its ability to query several external datasources while processing a web request.

In this example, we'll show how you can embed special RSS tags within a static web document. Traffic Manager will intercept these tags in the document and replace them with the appropriate RSS feed data:

<!RSS http://community.brocade.com/community/product-lines/stingray/view-browse-feed.jspa?browseSite=place-content&browseViewID=placeContent&userID=9503&containerType=14&containerID=2005&filterID=contentstatus%5Bpublished%5D~objecttype~objecttype%5Bthread%5D !>

We'll use a TrafficScript rule to process web responses, seek out the RSS tag and retrieve, format and insert the appropriate RSS data.

Check the response

First, the TrafficScript rule needs to obtain the response data, and verify that the response is a simple HTML document. We don't want to process images or other document types!

# Check the response type
$contentType = http.getResponseHeader( "Content-Type" );
if( ! string.startsWith( $contentType, "text/html" ) ) break;

# Get the response data
$body = http.getResponseBody();

Find the embedded RSS tags

Next, we can use a regular expression to search through the response data and find any RSS tags in it:

(.*?)<!RSS\s+(.*?)\s+!>(.*)

Traffic Manager supports Perl-compatible regular expressions. This regex will find the first RSS tag in the document, and will assign text to the internal variables $1, $2 and $3:

- $1: the text before the tag
- $2: the RSS URL within the tag
- $3: the text after the tag

The following code searches for RSS tags:

while( string.regexmatch( $body, '(.*?)<!RSS\s+(.*?)\s*!>(.*)' ) ) {
   $start = $1;
   $url = $2;
   $end = $3;
}

Retrieve the RSS data

A single HTTP subrequest is sufficient to retrieve the RSS XML data:

$rss = http.request.get( $url );

Transform the RSS data using an XSLT transform

The following XSLT transform can be used to extract the first 4 RSS items and format them as an HTML <ul> list:

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
   <xsl:template match="/">
      <ul>
         <xsl:apply-templates select="//item[position()&lt;5]"/>
      </ul>
   </xsl:template>
   <xsl:template match="item">
      <xsl:param name="URL" select="link/text()"/>
      <xsl:param name="TITLE" select="title/text()"/>
      <li><a href="{$URL}"><xsl:value-of select="$TITLE"/></a></li>
   </xsl:template>
</xsl:stylesheet>

Store the XSLT file in the Traffic Manager conf/extra directory, naming it 'rss.xslt', so that the rule can look it up using resource.get().

You can apply the XSLT transform to the XML data using the xml.xslt.transform() function. The function returns the result with HTML entity encoding; use string.htmldecode() to remove these:

$xsl = resource.get( "rss.xslt" );
$html = string.htmldecode( xml.xslt.transform( $rss, $xsl ) );

The entire rule

The entire response rule, with a little additional error checking, looks like this:

$contentType = http.getResponseHeader( "Content-Type" );
if( ! string.startsWith( $contentType, "text/html" ) ) break;

$body = http.getResponseBody();
$new = "";
$changed = 0;

while( string.regexmatch( $body, '(.*?)<!RSS\s+(.*?)\s*!>(.*)' ) ) {
   $start = $1;
   $url = $2;
   $end = $3;

   $rss = http.request.get( $url );
   # http.request.get() places the HTTP response code in $1
   if( $1 != 200 ) {
      $html = "<ul><li><b>Failed to retrieve RSS feed</b></li></ul>";
   } else {
      $xsl = resource.get( "rss.xslt" );
      $html = string.htmldecode( xml.xslt.transform( $rss, $xsl ) );
      if( $html == -1 ) {
         $html = "<ul><li><b>Failed to parse RSS feed</b></li></ul>";
      }
   }

   $new = $new . $start . $html;
   $body = $end;
   $changed = 1;
}

if( $changed )
   http.setResponseBody( $new . $body );
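To try the rule out, you could serve a static page containing the marker through the virtual server; a minimal sketch, where the feed URL is purely illustrative:

<html>
  <body>
    <h2>Latest news</h2>
    <!-- The rule replaces the marker below with the formatted feed items -->
    <!RSS http://example.com/news/feed.xml !>
  </body>
</html>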
With the evolution of social media as a tool for marketing and current events, we commonly see the Twitter feed updated long before the website. It's not surprising for people to rely on these outlets for information.

Fortunately, Twitter provides a suite of widgets and scripting tools to integrate Twitter information into your application. The tools available can be implemented with few code changes and support many applications. Unfortunately, the very reason a website is not as fresh as social media is the code changes required. The code could be owned by different people in the organization, or you may have limited access to the code due to security or the CMS environment. Traffic Manager provides the ability to insert the required code into your site with no changes to the application.

Twitter Overview

"Embeddable timelines make it easy to syndicate any public Twitter timeline to your website with one line of code. Create an embedded timeline from your widgets settings page on twitter.com, or choose "Embed this..." from the options menu on profile, search and collection pages.

Just like timelines on twitter.com, embeddable timelines are interactive and enable your visitors to reply, Retweet, and favorite Tweets directly from your pages. Users can expand Tweets to see Cards inline, as well as Retweet and favorite counts. An integrated Tweet box encourages users to respond or start new conversations, and the option to auto-expand media brings photos front and center.

These new timeline tools are built specifically for the web, mobile web, and touch devices. They load fast, scale with your traffic, and update in real-time." - twitter.com

Thank you Faisal Memon for the original article Using TrafficScript to add a Twitter feed to your web site.

As happens more often than not, platform access changes. This time Twitter is our prime example. When loading the Twitter JavaScript, http://widgets.twimg.com/j/2/widget.js, you can see the following notice:

The Twitter API v1.0 is deprecated, and this widget has ceased functioning.","You can replace it with a new, upgraded widget from <https://twitter.com/settings/widgets/new/"+H+">","For more information on alternative Twitter tools, see <https://dev.twitter.com/docs/twitter-for-websites>

To save you some time: Twitter really means deprecated, and the information link is broken. For more information on alternative Twitter tools, see Twitter for Websites; for information related to this article, see Embedded Timelines.

One of the biggest changes in the current Twitter platform is the requirement for a "data-widget-id". The data-widget-id is unique, and is used by the Twitter platform to provide the information needed to generate the timeline. Before getting started with Traffic Manager and the web application, you will have to create a new widget using your Twitter account at https://twitter.com/settings/widgets/new/. Once you create your widget, you will see the "Copy and paste the code into the HTML of your site." section on the Twitter website. Along with other information, this code contains your "data-widget-id" (see the "Create widget" page on twitter.com).

This example uses a TrafficScript response rule to rewrite the HTTP body from the application. Specifically, I know the body for my application includes an HTML comment <!--SIDEBAR-->. This rule will insert the required client-side code into the HTTP body and send the updated body on to complete the request.
The $inserttag variable can be just about anything in the body itself, e.g. the "MORE LIKE THIS" text on the side of this page. Simply change the code below to:

$inserttag = "MORE LIKE THIS";

Some of the values used in the example (i.e. width, data-theme, data-link-color, data-tweet-limit) are not required; they have been included to demonstrate customization. When you create/save the widget on the Twitter website, the configuration options are associated with the "data-widget-id". For example "data-theme": if you saved the widget with "light" and you want the light theme, it can be excluded. Alternatively, if you saved the widget with "light", you can use "data-theme=dark" to override the value saved with the widget. In the example timeline, the data-link-color value is used to override the value provided with the saved "data-widget-id".

Example response rule, line-spaced for readability and using variables for easy customization:

# Only modify text/html pages
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$inserttag = "<!--SIDEBAR-->";

# Create a widget ID @ https://twitter.com/settings/widgets/new
# This is the id used by riverbed.com
$ttimelinedataid = "261517019072040960";
$ttimelinewidth = "520";          # max could be limited by ID config
$ttimelineheight = "420";
$ttimelinelinkcolor = "#0080ff";  # 0 for default or ID config; #0080ff and #0099cc are nice
$ttimelinetheme = "dark";         # "light" or "dark"
$ttimelinelimit = "0";            # 0 = unlimited with scroll. >=1 will ignore height.
# See https://dev.twitter.com/web/embedded-timelines#customization for other options.

$ttimelinehtml = "<a class=\"twitter-timeline\"" .
                 " width=\"" . $ttimelinewidth . "\"" .
                 " height=\"" . $ttimelineheight . "\"" .
                 " data-theme=\"" . $ttimelinetheme . "\"" .
                 " data-link-color=\"" . $ttimelinelinkcolor . "\"" .
                 " data-tweet-limit=\"" . $ttimelinelimit . "\"" .
                 " data-widget-id=\"" . $ttimelinedataid . "\"" .
                 "></a><script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)" .
                 "[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id))" .
                 "{js=d.createElement(s);js.id=id;js.src=p+" .
                 "\"://platform.twitter.com/widgets.js\";fjs.parentNode.insertBefore(js," .
                 "fjs);}}(document,\"script\",\"twitter-wjs\");" .
                 "</script><br>" . $inserttag;

$body = http.getResponseBody();
$body = string.replace( $body, $inserttag, $ttimelinehtml );
http.setResponseBody( $body );

A shorter version of the rule above, still with line breaks for readability:

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

http.setResponseBody( string.replace( http.getResponseBody(), "<!--SIDEBAR-->",
   "<a class=\"twitter-timeline\" width=\"520\" height=\"420\" data-theme=\"dark\" " .
   "data-link-color=\"#0080ff\" data-tweet-limit=\"0\" data-widget-id=\"261517019072040960\">" .
   "</a><script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test" .
   "(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;" .
   "js.src=p+\"://platform.twitter.com/widgets.js\";fjs.parentNode.insertBefore(js,fjs);}}" .
   "(document,\"script\",\"twitter-wjs\");</script><br><!--SIDEBAR-->" ) );

Either rule replaces the <!--SIDEBAR--> marker with the embedded Twitter timeline.
You may be familiar with the security concept of a 'honeypot': a sandboxed, sacrificial computer system that sits safely away from the primary systems. Any attempt to access that computer is a strong indicator that an attacker is at work, probing for weak points in a network.

A recent Slashdot article raised an interesting idea: 'honeywords' are fake accounts in a password database that don't correspond to real users. Any attempt to log in with one of these accounts is a strong indicator that the password database has been stolen.

In a similar vein, attempts to log in with common, predictable admin accounts are a strong indicator that an attacker is scanning your system and looking for weaknesses. This article describes how you can detect these attacks with ease, and then considers different methods you could use to block the attacker.

Detecting Attack Attempts

Attackers look for common account names and passwords (see [1], [2] and [3]).

Traffic Manager is in an ideal position to detect attack attempts. It can inspect the username and password in each login attempt, and flag an alert if a user appears to be scanning for familiar usernames.

Step 1: Determine how the login process functions

Credentials are usually presented to the server as HTTP form parameters, typically in an HTTP POST to an SSL-protected endpoint. Web inspection tools such as the Chrome Developer tools help you understand how the authentication credentials are presented to the login service.

You can use the TrafficScript function http.getFormParam() to look up the submitted HTTP form parameters. This function extracts parameters from both the query string (GET and POST requests) and the HTTP request body (POST requests), handles any unusual body transfer encoding, and %-decodes the values:

$userid = http.getFormParam( "Email" );
$pass = http.getFormParam( "Password" );

Step 2: Does this constitute an attack?

You'll need to make a judgement as to what constitutes an attack attempt against your service. A single attempt to log in with 'admin:admin' is probably sufficient to block a user, but multiple attempts in a short period of time certainly indicate a concerted attack.

An easy way to count user/password combinations is to use a rate shaping class to count events. Stingray's rate classes are usually used to implement queues (delaying requests that exceed the per-second or per-minute limit), but you can also use the rate.use.noQueue() function to determine whether an event has exceeded the rate limit, without queuing it.

Let's construct a policy that detects if a particular source IP address is trying to log in to one of our fake 'admin' accounts too frequently:

$path = http.getPath();
if( $path != "/cgi-bin/login.cgi" ) break;

$ip = request.getRemoteIP();
$user = http.getFormParam( "user" );

if( string.regexmatch( $user,
      "^(admin|root|phpadmin|test|guest|user|administrator|mysql|adm|oracle)$" ) ) {
   if( rate.use.noQueue( "5 per minute", $ip ) == 0 ) {
      # User has exceeded the limits ....
   }
}

An aside: if you would like to maintain a large list of honeyword names (making sure that none of them correspond to real accounts), then you may find it easier to store them in an external table using libTable.rts: Interrogating tables of data in TrafficScript.

Responding to Attack Attempts

If you determine that a particular IP address is generating attack attempts and you want to block it, there are a number of ways that you can do so.
They vary in complexity, accuracy and the ability to 'time out' the period that an IP address is blocked for:

- Store data locally in the global data segment: straightforward to code, timeouts possible, not cluster-aware.
- Store data in the resource directory: straightforward to code, timeouts possible, is cluster-aware.
- Update configuration in a service protection policy: straightforward to code, difficult to avoid race conditions, not possible to time out the configuration, is cluster-aware.
- Provision iptables rules from an event: complex to code accurately but very effective, not possible to time out, is cluster-aware.

Updating the configuration in a service protection policy could be achieved by calling the REST API from TrafficScript: perform a GET on the configuration (/api/tm/1.0/config/active/protection/name), update the banned array, and PUT the configuration back again. However, there is no natural way to remove (time out) a block on an IP address after a period of inactivity.

Provisioning iptables rules would be possible with a specific event handler that responded to the TrafficScript function event.emit( "block", $ip ), but once again, there's no easy way to time out a block rule.

Storing data locally in the resource directory is a good approach, and is described in detail in the article Slowing down busy users - driving the REST API from TrafficScript. The basic premise is that you can use the REST API to 'touch' a file (named after an IP address) in the resource directory, and you block a user if their IP address corresponds to a file in the resource directory that is not too old. However, if the user does not return, you will build up a large number of files in the resource directory that must be manually pruned.

Storing data in the global data segment (How is memory managed in TrafficScript?) is perhaps the best solution. The following code sample illustrates the basic premise:

$prefix = "blocked-ip-address:";

# Record that an IP address is blocked
data.set( $prefix . $ip, 1 );

# Check if an IP address is blocked
if( data.get( $prefix . $ip ) ) {
   connection.discard();
}

# Delete all records
data.reset( $prefix );

You could implement timeouts in a simple fashion, for example by calling data.reset() on the first transaction after the top of every hour:

$hour = sys.time.hour();
$last = data.get( $prefix . "hour" );
if( $last != $hour ) {
   data.reset( $prefix );
   data.set( $prefix . "hour", $hour );
}

An aside: there is a very slight risk of a race condition here (if two cores run the rule simultaneously), but the effects are not significant.

This approach gives a simple and effective solution to the problem of detecting logins to fake admin accounts, and then blocking the IP address for up to an hour.

What if I want to block IP addresses for longer?

One weakness of the approach above is that if an IP address is added to the block table at 59 minutes past the hour, it will be removed a minute later. This may not be a serious fault; if the user continues trying to force admin accounts, the rule will detect this and block the IP address again shortly after.
An alternative solution is to store two tables: one for odd-numbered hours, and one for even-numbered hours:

- When you add an IP address, place it in the odd or even table according to the current hour.
- When you test for the presence of an IP address, check both tables.
- When the hour rolls over and you switch to the even-numbered table (for example), delete all of its entries (using data.reset) before proceeding; they will be between one and two hours old.

$prefix = "blocked-ip-address:";

# Check if an IP address is blocked
if( data.get( $prefix . "0:" . $ip ) || data.get( $prefix . "1:" . $ip ) ) {
   connection.discard();
}

# Add an IP address (this is an infrequent operation, we hope!)
$hour = sys.time.hour();
$pp = ( $hour % 2 ) . ":";   # $pp is either 0: or 1:

$last = data.get( $prefix . $pp . "hour" );
if( $last != $hour ) {
   data.reset( $prefix . $pp );
   data.set( $prefix . $pp . "hour", $hour );
}

data.set( $prefix . $pp . $ip, 1 );

This extension to the rule could be generalized to any number of tables and any time interval, though that is almost certainly overkill for this solution.

Read More

- Interested in knowing which usernames are most commonly used? Check out the article Being Lazy with Java Extensions and the 'CountThis' extension.
- For other security and denial-of-service related articles, check out the Security section of the Top Stingray Examples and Use Cases article.
Popular news and blogging sites such as Slashdot and Digg have huge readerships. They are community driven and allow their members to post articles on various topics ranging from hazelnut chocolate bars to global warming. These sites, due to their massive readership, have the power to generate huge spikes in the web traffic of those (un)fortunate enough to get mentioned in their articles. Fortunately, Traffic Manager and TrafficScript can help.

If the referenced site happens to be yours, you are faced with dealing with this sudden and unpredictable spike in bandwidth and request rate, causing:

- a large proportion or all of your available bandwidth to be consumed by visitors referred to you by this popular site; and
- in extreme cases, a cascade failure across your web servers as each one becomes overloaded, fails and, in doing so, adds further load onto the remaining web servers.

Bandwidth Management and Rate Shaping

Traffic Manager can shape traffic in two important ways. Firstly, you can restrict the amount of bandwidth any client or group of clients is allowed to consume. This is commonly known as "Bandwidth Management", and in Traffic Manager it is configured using a bandwidth class. Bandwidth classes specify the maximum bits per second to make available. The alternative method is to limit the number of requests that those clients or groups of clients can make per second and/or per minute. This is commonly known as "Rate Shaping" and is configured within a rate class.

Both Rate Shaping and Bandwidth Management classes are configured and stored within the Catalog section of Traffic Manager. Once you have created a class it is ready for use and can be applied to one or more of your virtual servers. However, the true power of these traffic shaping features really becomes apparent when you use them with TrafficScript.

What is an Abusive Referer?

I would class an abusive referer as any site on the Internet that refers enough traffic to your server to overwhelm it and effectively deny service to other users. This abuse is usually unintentional; the problem lies in the sheer number of people wanting to visit your site at that one time. This 'slashdot effect' can be detected and dealt with by a TrafficScript rule and either a bandwidth or a rate class.

Detecting and Managing Abusive Referers

Example One

Take a look at the TrafficScript below for an example of how you could stop a site (in this instance Slashdot) from using a large proportion or all of your available bandwidth.

$referrer = http.getHeader( "Referer" );

if( string.contains( $referrer, "slashdot" ) ) {
   http.addResponseHeader( "Set-Cookie", "slashdot=1" );
   response.setBandwidthClass( "slashdot" );
}

if( http.getCookie( "slashdot" ) ) {
   response.setBandwidthClass( "slashdot" );
}

In this example we are specifically targeting Slashdot users and preventing them from using more bandwidth than we have allotted them in our "slashdot" bandwidth class. This rule requires you to know the name of the site you want protection from, but it could be modified to defend against other high-traffic sites.

Example Two

The next example is a little more complicated, but will automatically limit requests from any referer. I've chosen to use two rate classes here: BusyReferer for those sites I allow to send a large amount of traffic, and StandardReferer for those I don't.
At the top, I specify a $whitelist, which contains sites I never want to rate-shape, and $highTraffic, which is a list of sites I'm going to shape with my BusyReferer class. By default, all traffic not in the white list is sent through one of my rate classes, but only on entry to the site; subsequent requests will have your own site as the referer and so will be whitelisted. In times of high load, when a referer is sending more traffic than the rate class allows, a backlog will build up; at that point we also start issuing cookies to put the offending referers into a bandwidth class.

# Referer whitelist. These referers are never rate limited.
$whitelist = "localhost 172.16.121.100";

# Referers that are allowed to pass a higher number of clients.
$highTraffic = "google mypartner.com";

# How many queued requests are allowed before we track users.
$shapeQueue = 2;

# Retrieve the referer and strip out the domain name part.
$referer = http.getheader("Referer");
$referer = String.regexsub($referer, ".*?://(.*?)/.*", "$1", "i" );

# Check to see if this user has already been given an abuse cookie.
# If they have, we'll force them into a bandwidth class.
if ( $cookie = http.getCookie("AbusiveReferer") ) {
   response.setBandwidthClass("AbusiveReferer");
}

# If the referer is whitelisted then exit.
if ( String.contains( $whitelist, $referer ) ) {
   break;
}

# Put the incoming users through the busy or standard rate classes
# and check the queue length for their referer.
if ( String.contains( $highTraffic, $referer ) ) {
   $backlog = rate.getbacklog("BusyReferer", $referer);
   rate.use("BusyReferer", $referer);
} else {
   $backlog = rate.getbacklog("StandardReferer", $referer);
   rate.use("StandardReferer", $referer);
}

# If we have exceeded our backlog limit, then give them a cookie.
# This will enforce bandwidth shaping for subsequent requests.
if ( $backlog > $shapeQueue ) {
   http.setResponseCookie("AbusiveReferer", $referer);
   response.setBandwidthClass("AbusiveReferer");
}

In order for the TrafficScript to function optimally, you must enter your server's own domain name(s) into the white list. If you do not, the script will perform rate shaping on everyone surfing your website!

You also need to set appropriate values for the BusyReferer and StandardReferer shaping classes. Remember, we're only counting each client's entry to the site, so perhaps you would set 10 per minute as the maximum standard rate and 20 per minute for the BusyReferer rate.

In this script we also use a bandwidth class for when things get busy. You will need to create this class, called "AbusiveReferer", and assign it an appropriate amount of bandwidth. Users are only put into this class when their referer is exceeding the rate of referrals set by the relevant rate class.

Shaping with Context

Rate shaping classes can be given a context, so you can apply the class to a subset of users based on a piece of key data. The second script uses context to create an instance of the rate shaping class for each referer. If you do not use context, all referers will share the same instance of the rate class.

Conclusion

Traffic Manager can use bandwidth and rate shaping classes to control the number of requests that can be made by any group of clients. In this article, we have covered choosing the class based on the referer, which has allowed us to restrict the rate at which any one site can refer visitors to us.
These examples could be modified to base the restrictions on other data, such as cookies, or even extended to work with other protocols. A good example would be FTP, where you could extract the username from the FTP logon data and apply a bandwidth class based on the username.
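To make the FTP idea concrete, here is a minimal TrafficScript sketch. It assumes a virtual server in a server-first mode handling the FTP control channel, and a pre-created bandwidth class; the class name "FTPHeavyUsers" and the username test are illustrative assumptions, not part of the original example, and a complete solution would also need to manage FTP's separate data connections:

# Examine the client data received so far on the control connection
$req = request.get();

# Look for the FTP login command and extract the username
if( string.regexmatch( $req, "USER\\s+(\\S+)", "i" ) ) {
   $user = $1;

   # Apply a pre-created bandwidth class to heavy users (illustrative)
   if( $user == "bulkdownload" ) {
      response.setBandwidthClass( "FTPHeavyUsers" );
   }
}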
This guide will walk you through the setup to deploy Global Server Load Balancing on Traffic Manager using the Global Load Balancing feature. In this guide, we will be using the "company.com" domain.

DNS Primer and Concept of Operations

This document is designed to be used in conjunction with the Traffic Manager User Guide. Specifically, this guide assumes that the reader:

- is familiar with load balancing concepts;
- has configured local load balancing for the resources requiring Global Load Balancing on their existing Traffic Managers; and
- has read the "Global Load Balancing" section of the Traffic Manager User Guide, in particular the "DNS Primer" and "About Global Server Load Balancing" sections.

Pre-requisites:

- You have a DNS sub-domain to use for GLB. In this example we will be using "glb.company.com" - a sub-domain of "company.com";
- You have access to create A records in the glb.company.com (or equivalent) domain; and
- You have access to create CNAME records in the company.com (or equivalent) domain.

Design

Our goal in this exercise will be to configure GLB to send users to their geographically closest DC, as pictured in the following diagram:

Design Goal

We will be using an STM setup that looks like this to achieve this goal:

Detailed STM Design

Traffic Manager will present a DNS virtual server in each data center. This DNS virtual server will take DNS requests for resources in the "glb.company.com" domain from external DNS servers, will forward the requests to an internal DNS server, and will intelligently filter the records based on the GLB load balancing logic.

In this design, we will use the zone "glb.company.com". The zone "glb.company.com" will have NS records set to the two Traffic IP addresses presented by vTM for DNS load balancing in each data centre (172.16.10.101 and 172.16.20.101). This set up is done in the "company.com" domain zone setup. You will need to set this up yourself, or get your DNS Administrator to do it.

DNS Zone File Overview

On the DNS server that hosts the "glb.company.com" zone file, we will create two Address (A) records - one for each Web virtual server that the vTMs are hosting in their respective data centre.

Step 0: DNS Zone file set up

Before we can set up GLB on Traffic Manager, we need to set up our DNS zone files so that we can intelligently filter the results.

Create the GLB zone: in our example, we will be using the zone "glb.company.com". We will configure the "glb.company.com" zone to have two NameServer (NS) records. Each NS record will be pointed at the Traffic IP address of the DNS virtual server as it is configured on vTM. See the Design section above for details of the IP addresses used in this sample setup.

You will need an A record for each data centre resource you want Traffic Manager to GLB. In this example, we will have two A records for the DNS host "www.glb.company.com". On ISC BIND name servers, the zone file will look something like this:

Sample Zone File

;
; BIND data file for glb.company.com
;
$TTL 604800
@ IN SOA stm1.glb.company.com. info.glb.company.com. (
        201303211322 ; Serial
        7200         ; Refresh
        120          ; Retry
        2419200      ; Expire
        604800 )     ; Default TTL
;
@ IN NS stm1.glb.company.com.
@ IN NS stm2.glb.company.com.
;
stm1 IN A 172.16.10.101
stm2 IN A 172.16.20.101
;
www IN A 172.16.10.100
www IN A 172.16.20.100

Pre-Deployment testing:

Using DNS tools such as DiG or nslookup (do not use ping as a DNS testing tool), make sure that you can query your "glb.company.com" zone and get both A records returned. This means the DNS zone file is ready for you to apply your GLB logic. In the following example, we are using the DiG tool on a Linux client to *directly* query the name servers that the vTM is load balancing, to check that we are being served back two A records for "www.glb.company.com". We have added comments to the section below, marked with <--(i)--|:

Test Output from DiG

user@localhost$ dig @172.16.10.40 www.glb.company.com A

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.40 www.glb.company.com A
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19013
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com. IN A

;; ANSWER SECTION:
www.glb.company.com. 604800 IN A 172.16.20.100 <--(i)--| HERE ARE THE A RECORDS WE ARE TESTING
www.glb.company.com. 604800 IN A 172.16.10.100 <--(i)--|

;; AUTHORITY SECTION:
glb.company.com. 604800 IN NS stm1.glb.company.com.
glb.company.com. 604800 IN NS stm2.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com. 604800 IN A 172.16.10.101
stm2.glb.company.com. 604800 IN A 172.16.20.101

;; Query time: 0 msec
;; SERVER: 172.16.10.40#53(172.16.10.40)
;; WHEN: Wed Mar 20 16:39:52 2013
;; MSG SIZE rcvd: 139

Step 1: GLB Locations

GLB uses locations to help STM understand where things are located. First we need to create a GLB location for every data centre you need to provide GLB between. In our example, we will be using two locations, Data Centre 1 and Data Centre 2, named DataCentre-1 and DataCentre-2 respectively:

Creating GLB Locations

- Navigate to "Catalogs > Locations > GLB Locations > Create new Location"
- Create a GLB location called DataCentre-1
- Select the appropriate Geographic Location from the options provided
- Click Update Location
- Repeat this process for "DataCentre-2" and any other locations you need to set up.

Step 2: Set up GLB service

First we create a GLB service so that vTM knows how to distribute traffic using the GLB system:

Create GLB Service

Navigate to "Catalogs > GLB Services > Create a new GLB service" and create your GLB service. In this example we will be creating a GLB service with the following settings; you should use settings that match your environment:

- Service Name: GLB_glb.company.com
- Domains: *.glb.company.com
- Add Locations: Select "DataCentre-1" and "DataCentre-2"

Then we enable the GLB service:

Enable the GLB Service

Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Basic Settings" and set "Enabled" to "Yes".

Next we tell the GLB service which resources are in which location:

Locations and Monitoring

Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Locations and Monitoring" and add the IP addresses of the resources you will be doing GSLB between into the relevant location. In my example I have allocated them as follows:

- DataCentre-1: 172.16.10.100
- DataCentre-2: 172.16.20.100

Don't worry about the "Monitors" section just yet; we will come back to it.
Next we will configure the GLB load balancing mechanism:

Load Balancing Method

Navigate to "GLB Services > GLB_glb.company.com > Load Balancing".

By default, the load balancing "algorithm" will be set to "Adaptive" with a "Geo Effect" of 50%. For this set up we will set the "algorithm" to "Round Robin" while we are testing.

Set GLB Load Balancing Algorithm

Set the "load balancing algorithm" to "Round Robin".

The last step is to bind the GLB service "GLB_glb.company.com" to our DNS virtual server.

Binding GLB Service Profile

Navigate to "Services > Virtual Servers > vs_GLB_DNS > GLB Services > Add new GLB Service", select "GLB_glb.company.com" from the list and click "Add Service".

Step 3 - Testing Round Robin

Now that we have GLB applied to the "glb.company.com" zone, we can test GLB in action. Using DNS tools such as DiG or nslookup (again, do not use ping as a DNS testing tool), make sure that you can query your STM DNS virtual servers and see what happens to requests for "www.glb.company.com". Following is test output from the Linux DiG command. We have added comments to the section below, marked with <--(i)--|:

Testing

user@localhost $ dig @172.16.10.101 www.glb.company.com

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.101 www.glb.company.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17761
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com. IN A

;; ANSWER SECTION:
www.glb.company.com. 60 IN A 172.16.20.100 <--(i)--| DataCentre-2 response

;; AUTHORITY SECTION:
glb.company.com. 604800 IN NS stm1.glb.company.com.
glb.company.com. 604800 IN NS stm2.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com. 604800 IN A 172.16.10.101
stm2.glb.company.com. 604800 IN A 172.16.20.101

;; Query time: 1 msec
;; SERVER: 172.16.10.101#53(172.16.10.101)
;; WHEN: Thu Mar 21 13:32:27 2013
;; MSG SIZE rcvd: 123

user@localhost $ dig @172.16.10.101 www.glb.company.com

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.101 www.glb.company.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9098
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com. IN A

;; ANSWER SECTION:
www.glb.company.com. 60 IN A 172.16.10.100 <--(i)--| DataCentre-1 response

;; AUTHORITY SECTION:
glb.company.com. 604800 IN NS stm2.glb.company.com.
glb.company.com. 604800 IN NS stm1.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com. 604800 IN A 172.16.10.101
stm2.glb.company.com. 604800 IN A 172.16.20.101

;; Query time: 8 msec
;; SERVER: 172.16.10.101#53(172.16.10.101)
;; WHEN: Thu Mar 21 13:32:27 2013
;; MSG SIZE rcvd: 123

Step 4: GLB Health Monitors

Now that we have GLB running in round robin mode, the next thing to do is to set up HTTP health monitors so that GLB can know whether the application in each DC is available before we send customers to that data centre:

Create GLB Health Monitors

Navigate to "Catalogs > Monitors > Monitors Catalog > Create new monitor" and fill out the form with the following values:

- Name: GLB_mon_www_AU
- Type: HTTP monitor
- Scope: GLB/Pool
- IP or Hostname to monitor: 172.16.10.100:80

Repeat for the other data centre:

- Name: GLB_mon_www_US
- Type: HTTP monitor
- Scope: GLB/Pool
- IP or Hostname to monitor: 172.16.20.100:80

Then navigate to "Catalogs > GLB Services > GLB_glb.company.com > Locations and Monitoring". In DataCentre-1, in the field labelled "Add new monitor to the list", select "GLB_mon_www_AU" and click Update. In DataCentre-2, in the field labelled "Add new monitor to the list", select "GLB_mon_www_US" and click Update.

Step 5: Activate your preferred GLB load balancing logic

Now that you have GLB set up and you can detect application failures in each data centre, you can turn on the GLB load balancing algorithm that is right for your application. You can choose between:

GLB Load Balancing Methods

- Load
- Geo
- Round Robin
- Adaptive
- Weighted Random
- Active-Passive

The online help has a good description of each of these load balancing methods. You should take care to read it and select the one most appropriate for your business requirements and environment.

Step 6: Test everything

Once you have your GLB up and running, it is important to test it for all the failure scenarios you want it to cover. Remember: failover that has not been tested is not failover...

Following is a test matrix that you can use to check the essentials:

Test # | Condition                                                    | Failure detected by / logic implemented by | GLB responded as designed
1      | All pool members in DataCentre-1 not available               | GLB health monitor                         | Yes / No
2      | All pool members in DataCentre-2 not available               | GLB health monitor                         | Yes / No
3      | Failure of STM1                                              | GLB health monitor on STM2                 | Yes / No
4      | Failure of STM2                                              | GLB health monitor on STM1                 | Yes / No
5      | Customers are sent to the geographically correct DataCentre  | GLB load balancing mechanism               | Yes / No

Notes on testing GLB: the reason we instruct you to use DiG or nslookup in this guide, rather than a tool that performs its own DNS resolution (like ping), is that DiG and nslookup bypass your local host's DNS cache. Obviously, cached DNS records will prevent you from seeing changes in the status of your GLB while the cache entries are valid.

The Final Step - Create your CNAME

Now that you have a working GLB entry for "www.glb.company.com", all that is left to do is to create or change the record for the real site "www.company.com" to be a CNAME for "www.glb.company.com".

Sample Zone File

;
; BIND data file for company.com
;
$TTL 604800
@ IN SOA ns1.company.com. info.company.com. (
        201303211312 ; Serial
        7200         ; Refresh
        120          ; Retry
        2419200      ; Expire
        604800 )     ; Default TTL
;
@ IN NS ns1.company.com.
;
; Here is our CNAME
www IN CNAME www.glb.company.com.
This short article explains how you can match the IP addresses of remote clients with a DNS blacklist. In this example, we'll use the Spamhaus XBL blacklist service (http://www.spamhaus.org/xbl/).

This article was updated following discussion and feedback from Ulrich Babiak - thanks!

Basic principles

The basic principle of a DNS-based blacklist such as Spamhaus' is as follows:

- Perform a reverse DNS lookup of the IP address in question, using xbl.spamhaus.org rather than the traditional in-addr.arpa domain
- Entries that are not in the blacklist don't return a response (NXDOMAIN); entries that are in the blacklist return a particular IP/domain response indicating their status

Important note: some public DNS servers don't respond to spamhaus.org lookups (see http://www.spamhaus.org/faq/section/DNSBL%20Usage#261). Ensure that Traffic Manager is configured to use a working DNS server.

Simple implementation

A simple implementation is as follows:

$ip = request.getRemoteIP();

# Reverse the IP, and append ".xbl.spamhaus.org".
$bytes = string.dottedToBytes( $ip );
$bytes = string.reverse( $bytes );
$query = string.bytesToDotted( $bytes ).".xbl.spamhaus.org";

if( $res = net.dns.resolveHost( $query ) ) {
   log.warn( "Connection from IP ".$ip." should be blocked - status: ".$res );
   # Refer to Zen return codes at http://www.spamhaus.org/zen/
}

This implementation will issue a DNS request on every request, but Traffic Manager caches DNS responses internally so there's little risk that you will overload the target DNS server with duplicate requests:

Traffic Manager DNS settings in the Global configuration

You may wish to increase the dns!negative_expiry setting because DNS lookups against non-blacklisted IP addresses will 'fail'.

A more sophisticated implementation may interpret the response codes and decide to block requests from proxies (the Spamhaus XBL list), while ignoring requests from known spam sources.

What if my DNS server is slow, or fails? What if I want to use a different resolver for the blacklist lookups?

One undesired consequence of this configuration is that it makes the DNS server a single point of failure and a performance bottleneck. Each unrecognised (or expired) IP address needs to be matched against the DNS server, and the connection is blocked while this happens. In normal usage, a single delay of 100ms or so against the very first request is acceptable, but a DNS failure (Stingray times out after 12 seconds by default) or slowdown is more serious.

In addition, Traffic Manager uses a single system-wide resolver for all DNS operations. If you are hosting a local cache of the blacklist, you'd want to separate DNS traffic accordingly.

Use Traffic Manager to manage the DNS traffic?

A potential solution would be to configure Traffic Manager to use itself (127.0.0.1) as a DNS resolver, and create a virtual server/pool listening on UDP:53. All locally-generated DNS requests would be delivered to that virtual server, which would then forward them to the real DNS server. The virtual server could inspect the DNS traffic and route blacklist lookups to the local cache, and other requests to a real DNS server.

You could then use a health monitor (such as the included dns.pl) to check the operation of the real DNS server and mark it as down if it has failed or times out after a short period.
In that event, the virtual server can determine that the pool is down ( pool.activenodes() == 0 ) and respond directly to the DNS request using a response generated by HowTo: Respond directly to DNS requests using libDNS.rts.

Re-implement the resolver

An alternative is to re-implement the TrafficScript resolver using Matthew Geldert's libDNS.rts (Interrogating and managing DNS traffic in Traffic Manager) TrafficScript library to construct the queries and analyse the responses. Then you can use the TrafficScript function tcp.send() to submit your DNS lookups to the local cache (unfortunately, we've not got a udp.send function yet!):

sub resolveHost( $host, $resolver ) {
   import libDNS.rts as dns;

   $packet = dns.newDnsObject();
   $packet = dns.setQuestion( $packet, $host, "A", "IN" );
   $data = dns.convertObjectToRawData( $packet, "tcp" );

   $sock = tcp.connect( $resolver, 53, 1000 );
   tcp.write( $sock, $data, 1000 );
   $rdata = tcp.read( $sock, 1024, 1000 );
   tcp.close( $sock );

   $resp = dns.convertRawDatatoObject( $rdata, "tcp" );

   if( $resp["answercount"] >= 1 ) return $resp["answer"][0]["host"];
}

Note that we're applying 1000ms timeouts to each network operation.

Let's try this, and compare the responses from OpenDNS and from Google's DNS servers. Our 'bad guy' is 201.116.241.246, so we're going to resolve 246.241.116.201.xbl.spamhaus.org:

$badguy = "246.241.116.201.xbl.spamhaus.org";

$text .= "Trying OpenDNS...\n";
$host = resolveHost( $badguy, "208.67.222.222" );
if( $host ) {
   $text .= $badguy." resolved to ".$host."\n";
} else {
   $text .= $badguy." did not resolve\n";
}

$text .= "Trying Google...\n";
$host = resolveHost( $badguy, "8.8.8.8" );
if( $host ) {
   $text .= $badguy." resolved to ".$host."\n";
} else {
   $text .= $badguy." did not resolve\n";
}

http.sendResponse( 200, "text/plain", $text, "" );

(This is just a snippet - remember to paste the resolveHost() implementation, and anything else you need, in here.)

This illustrates that OpenDNS resolves the spamhaus.org domain fine, and Google does not issue a response.

Caching the responses

This approach has one disadvantage: because it does not use Traffic Manager's resolver, it does not cache the responses, so you'll hit the resolver on every request unless you cache the responses yourself.

Here's a function that calls the resolveHost() function above, and caches the result locally for 3600 seconds. It returns 'B' for a bad guy, and 'G' for a good guy:

sub getStatus( $ip, $resolver ) {
   $key = "xbl-spamhaus-org-".$resolver."-".$ip;   # Any key prefix will do

   $cache = data.get( $key );
   if( $cache ) {
      $status = string.left( $cache, 1 );
      $expiry = string.skip( $cache, 1 );

      if( $expiry < sys.time() ) {
         data.remove( $key );
         $status = "";
      }
   }

   if( !$status ) {
      # We don't have a (valid) entry in our cache, so look the IP up

      # Reverse the IP, and append ".xbl.spamhaus.org".
      $bytes = string.dottedToBytes( $ip );
      $bytes = string.reverse( $bytes );
      $query = string.bytesToDotted( $bytes ).".xbl.spamhaus.org";

      $host = resolveHost( $query, $resolver );

      if( $host ) {
         $status = "B";
      } else {
         $status = "G";
      }

      data.set( $key, $status.(sys.time()+3600) );
   }

   return $status;
}
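To put getStatus() to work, a request rule could then refuse blacklisted clients outright. This is a minimal sketch: it assumes the resolveHost() and getStatus() functions above are pasted into the same rule (or a library), and that a resolver - such as the local cache described earlier - is listening on 127.0.0.1:

if( getStatus( request.getRemoteIP(), "127.0.0.1" ) == "B" ) {
   # Refuse to process traffic from blacklisted addresses
   connection.discard();
}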
We spend a great deal of time focusing on how to speed up customers' web services. We constantly research new techniques to load balance traffic, optimise network connections and improve the performance of overloaded application servers. The techniques and options available from us (and yes, from our competitors too!) may seem bewildering at times. So I would like to spend a short time singing the praises of one specific feature, which I can confidently say will improve your website's performance above all others - caching your website.

"But my website is uncacheable! It's full of dynamic, changing pages. Caching is useless to me!"

We'll answer that objection soon, but first, it is worth a quick explanation of the two main styles of caching:

Client-side caching

Most people's experience of a web cache is on their web browser. Internet Explorer or Firefox will store copies of web pages on your hard drive, so if you visit a site again, it can load the content from disk instead of over the Internet.

There's another layer of caching going on, though. Your ISP may also be doing some caching. The ISP wants to save money on their bandwidth, and so puts a big web cache in front of everyone's Internet access. The cache keeps copies of the most-visited web pages, storing bits of many different websites. A popular and widely used open-source web cache is Squid.

However, not all web pages are cacheable near the client. Websites have dynamic content, so for example any web page containing personalized or changing information will not be stored in your ISP's cache. Generally, the cache will fill up with "static" content such as images, movies, etc. These get stored for hours or days. For your ISP, this is great, as these big files take up most of their precious bandwidth.

For someone running their own website, browser caching or ISP caching does not do much. They might save a little bandwidth from the ISP image caching if they have lots of visitors from the same ISP, but the bulk of the website, including most of the content generated by their application servers, will not be cached, and their servers will still have lots of work to do.

Server-side caching (with Traffic Manager)

Here, the main aim is not to save bandwidth, but to accelerate your website. The traffic manager sits in your datacenter (or your cloud provider), in front of your web and application servers. Access to your website is through the Traffic Manager software, so it sees both the requests and responses. Traffic Manager can then start to answer these requests itself, delivering cached responses. Your servers then have less work to do. Less work = faster responses = fewer servers needed = saves money!

"But I told you - my website isn't cacheable!"

There's a reason why your website is marked uncacheable. Remember the ISP caches...? They mustn't store your changing, constantly updating web pages. To enforce this, application servers send back instructions with every web page, in the Cache-Control HTTP header, saying "Don't cache this". Traffic Manager obeys these cache instructions too, because it's well-behaved.

But think - how often does your website really change? Take a very busy site, for example a popular news site. Its front page may be labelled as uncacheable so that visitors always see the latest news, since it changes as new stories are added. But new additions aren't happening every second of the day. What if the page was marked as cacheable - for just one second?
Visitors would still see the most up-to-date news, but the load on the site servers would plummet. Even if the website had as few as ten views in a second, this simple change would reduce the load on the app servers ten-fold.

This isn't an isolated example - there are plenty of others: think Twitter searches, auction listings, "live" graphing, and so on. All such content can be cached briefly without any noticeable change to the "liveness" of the site. Traffic Manager can deliver a cached version of your web page much faster than your application servers - not just because it is highly optimized, but because sending a cached copy of a page is so much less work than generating it from scratch.

So if this simple cache change is so great, why don't people use this technique more? Surely app servers can mark their web pages as cacheable for one or two seconds without Traffic Manager's help, and those browser/ISP caches can then do the magic? Well, the browser caches aren't going to be any use - an individual isn't going to be viewing the same page on your website multiple times a second (and if they keep hitting the reload button, their page requests are not cacheable anyway). So how about those big ISP caches? Unfortunately, they aren't always clever enough either. Some see a web page marked as cacheable for a short time and will either:

- not cache it at all (it's going to expire soon, what's the point in keeping it?); or
- cache it for much longer (if it is cacheable for 3 seconds, why not cache it for 300, right?).

Also, by leaving the caching to the client side, the cache hit rate gets worse. A user in France isn't going to be able to make use of a cached copy of your site stored in a US ISP's cache, for instance.

If you use Traffic Manager to do the caching, these issues can be solved. First, the cache is held in one place - your datacenter - so it is available to all visitors. Second, Traffic Manager can tweak the cache instructions for the page, so it caches the page while forcing other people not to. Here is what's going on:

1. A request arrives at Traffic Manager, which sends it on to your application server.
2. The app server sends the web page response back to the traffic manager. The page has a Cache-Control: no-cache header, since the app server thinks the page can't be cached.
3. A TrafficScript response rule identifies the page as one that can be cached, for a short time. It changes the cache instructions to Cache-Control: max-age=3, meaning that the page can now be cached for three seconds.
4. Traffic Manager's web cache stores the page.
5. Traffic Manager sends out the response to the user (and to anyone else for the next three seconds), but changes the cache instructions to Cache-Control: no-cache, to ensure downstream caches, ISP caches and web browsers do not try to cache the page further.

Result: a much faster web site, yet it still serves dynamic and constantly updating pages to viewers. Give it a try - you will be amazed at the performance improvements possible, even when caching for just a second. Remember, almost anything can be cached if you configure your servers correctly!

How to set up Traffic Manager

On the Admin Server, edit the virtual server that you want to cache, and click on the "Content Caching" link. Enable the cache. There are options here for the default cache time for pages. These can be changed as desired, but are primarily for the "ordinary" content that is cacheable normally, such as images.
The "webcache!control_out" setting allows you to change the Cache-Control header for your pages after they have been cached by the Traffic Manager software, so you can put "no-cache" here to stop others from caching your pages.

The "webcache!refresh_time" setting is a useful extra here. Set this to one second. This will smooth out the load on your app servers. When a cached page is about to expire (i.e. it's too old to stay cached) and a new request arrives, Traffic Manager will hand over a single request to your app servers, to see if there is a newer page available. Other requests continue to be served from the cache. This can prevent 'waves' of requests hitting your app servers when a page is about to expire from the cache.

Now, we need to make Traffic Manager cache the specific pages of your site that the app server claims are uncacheable. We do this using the RuleBuilder system for defining rules, so click on the "Catalogs" icon and then select the "Rules" tab. Now create a new RuleBuilder rule.

This rule needs to run for the specific web pages that you wish to make cacheable for short amounts of time. As an example, we'll make "/news" cacheable. Add a condition of "HTTP:URL Path" to match "/news", then add an action to set an HTTP response header.

Finally, add this rule as a response rule to your virtual server. That's it! Your site should now start to be cached. Just a final few words of caution:

- Be selective in the pages that you mark as cacheable; remember that personalized pages (e.g. showing a username) cannot be cached, otherwise other people will see those pages too! If necessary, some page redesign might be called for to split the content into "generic" and "user-specific" iframes or AJAX requests.
- Server-side caching saves you CPU time, not bandwidth. If your website is slow because you are hitting your site throughput limits, then other techniques are needed.
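If you prefer TrafficScript to RuleBuilder, the equivalent response rule is short. This is a sketch assuming the same "/news" path and a three-second lifetime described above:

if( http.getPath() == "/news" ) {
   # Override the app server's 'no-cache' instruction so that
   # Traffic Manager's web cache stores this page for three seconds
   http.setResponseHeader( "Cache-Control", "max-age=3" );
}

The webcache!control_out setting described above then rewrites the header to "no-cache" on the way out, so downstream caches and browsers still treat the page as uncacheable.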
When you need to scale out your MySQL database, replication is a good way to proceed. Database writes (UPDATEs) go to a 'master' server and are replicated across a set of 'slave' servers. Reads (SELECTs) are load-balanced across the slaves.

Overview

MySQL's replication documentation describes how to configure replication: MySQL Replication

A quick solution...

If you can modify your MySQL client application to direct 'write' (i.e. UPDATE) connections to one IP address/port and 'read' (i.e. SELECT) connections to another, then this problem is trivial to solve. This generally needs a code update (see Using Replication for Scale-Out).

You will need to direct the 'update' connections to the master database (or through a dedicated Traffic Manager virtual server), and direct the 'read' connections to a Traffic Manager virtual server (in 'generic server first' mode), load-balancing the connections across the pool of MySQL slave servers using the 'least connections' load-balancing method:

Routing connections from the application

However, in most cases, you probably don't have that degree of control over how your client application issues MySQL connections; all connections are directed to a single IP:port. A load balancer will need to discriminate between different connection types and route them accordingly.

Routing MySQL traffic

A MySQL database connection is authenticated by a username and password. In most database designs, multiple users with different access rights are used; less privileged user accounts can only read data (issuing SELECT statements), and more privileged users can also perform updates (issuing UPDATE statements). A well-architected application with sound security boundaries will take advantage of these multiple user accounts, using the account with least privilege to perform each operation. This reduces the opportunities for attacks like SQL injection to subvert database transactions and perform undesired updates.

This article describes how to use Traffic Manager to inspect and manage MySQL connections, routing connections authenticated with privileged users to the master database and load-balancing other connections to the slaves:

Load-balancing MySQL connections

Designing a MySQL proxy

Stingray Traffic Manager functions as an application-level (layer-7) proxy. Most protocols are relatively easy for layer-7 proxies like Traffic Manager to inspect and load-balance, and work 'out of the box' or with relatively little configuration. For more information, refer to the article Server First, Client First and Generic Streaming Protocols.

Proxying MySQL connections

MySQL is much more complicated to proxy and load-balance. When a MySQL client connects, the server immediately responds with a randomly generated challenge string (the 'salt'). The client then authenticates itself by responding with the username for the connection and a copy of the 'salt' encrypted using the corresponding password:

Connect and Authenticate in MySQL

If the proxy is to route and load-balance based on the username in the connection, it needs to correctly authenticate the client connection first. When it finally connects to the chosen MySQL server, it will then have to re-authenticate the connection with the back-end server using a different salt.
Implementing a MySQL proxy in TrafficScript

In this example, we're going to proxy MySQL connections from two users - 'mysqlmaster' and 'mysqlslave' - directing connections to the 'SQL Master' and 'SQL Slaves' pools as appropriate.

The proxy is implemented using two TrafficScript rules ('mysql-request' and 'mysql-response') on a 'server-first' virtual server listening on port 3306 for MySQL client connections. Together, the rules implement a simple state machine that mediates between the client and server:

Implementing a MySQL proxy in TrafficScript

The state machine authenticates and inspects the client connection before deciding which pool to direct the connection to. The rule needs to know the encrypted password and desired pool for each user. The virtual server should be configured to send traffic to the built-in 'discard' pool by default.

The request rule:

Configure the following request rule on a 'server first' virtual server. Edit the values at the top to reflect the encrypted passwords (copied from the MySQL users table) and desired pools:

sub encpassword( $user ) {
   # From the mysql users table - double-SHA1 of the password
   # Do not include the leading '*' in the long 40-byte encoded password
   if( $user == "mysqlmaster" ) return "B17453F89631AE57EFC1B401AD1C7A59EFD547E5";
   if( $user == "mysqlslave" ) return "14521EA7B4C66AE94E6CFF753453F89631AE57EF";
}

sub pool( $user ) {
   if( $user == "mysqlmaster" ) return "SQL Master";
   if( $user == "mysqlslave" ) return "SQL Slaves";
}

$state = connection.data.get( "state" );

if( !$state ) {
   # First time in; we've just received a fresh connection
   $salt1 = randomBytes( 8 );
   $salt2 = randomBytes( 12 );
   connection.data.set( "salt", $salt1.$salt2 );

   $server_hs = "\0\0\0\0" .                 # length - fill in below
      "\012" .                               # protocol version
      "Stingray Proxy v0.9\0" .              # server version
      "\01\0\0\0" .                          # thread 1
      $salt1."\0" .                          # salt(1)
      "\054\242" .                           # capabilities
      "\010\02\0" .                          # lang and status
      "\0\0\0\0\0\0\0\0\0\0\0\0\0" .         # unused
      $salt2."\0";                           # salt(2)

   $l = string.length( $server_hs )-4;       # will be <= 255
   $server_hs = string.replaceBytes( $server_hs, string.intToBytes( $l, 1 ), 0 );

   connection.data.set( "state", "wait for clienths" );
   request.sendResponse( $server_hs );
   break;
}

if( $state == "wait for clienths" ) {
   # We've received the client handshake.
   $chs = request.get( 1 );
   $chs_len = string.bytesToInt( $chs );
   $chs = request.get( $chs_len + 4 );

   # user starts at byte 36; password follows after
   $i = string.find( $chs, "\0", 36 );
   $user = string.subString( $chs, 36, $i-1 );
   $encpasswd = string.subString( $chs, $i+2, $i+21 );

   $passwd2 = string.hexDecode( encpassword( $user ) );
   $salt = connection.data.get( "salt" );
   $passwd1 = string_xor( $encpasswd, string.hashSHA1( $salt.$passwd2 ) );

   if( string.hashSHA1( $passwd1 ) != $passwd2 ) {
      log.warn( "User '".$user."': authentication failure" );
      connection.data.set( "state", "authentication failed" );
      connection.discard();
   }

   connection.data.set( "user", $user );
   connection.data.set( "passwd1", $passwd1 );
   connection.data.set( "clienths", $chs );
   connection.data.set( "state", "wait for serverhs" );
   request.set( "" );

   # Select pool based on user
   pool.select( pool( $user ) );
   break;
}

if( $state == "wait for client data" ) {
   # Write the client handshake we remembered from earlier to the server,
   # and piggyback the request we've just received on the end
   $req = request.get();
   $chs = connection.data.get( "clienths" );
   $passwd1 = connection.data.get( "passwd1" );
   $salt = connection.data.get( "salt" );

   $encpasswd = string_xor( $passwd1, string.hashSHA1( $salt . string.hashSHA1( $passwd1 ) ) );
   $i = string.find( $chs, "\0", 36 );
   $chs = string.replaceBytes( $chs, $encpasswd, $i+2 );

   connection.data.set( "state", "do authentication" );
   request.set( $chs.$req );
   break;
}

# Helper function
sub string_xor( $a, $b ) {
   $r = "";
   while( string.length( $a ) ) {
      $a1 = string.left( $a, 1 ); $a = string.skip( $a, 1 );
      $b1 = string.left( $b, 1 ); $b = string.skip( $b, 1 );
      $r = $r . chr( ord( $a1 ) ^ ord( $b1 ) );
   }
   return $r;
}

The response rule

Configure the following as a response rule, set to run every time, for the MySQL virtual server.

$state = connection.data.get( "state" );
$authok = "\07\0\0\2\0\0\0\02\0\0\0";

if( $state == "wait for serverhs" ) {
   # Read server handshake, remember the salt
   $shs = response.get( 1 );
   $shs_len = string.bytesToInt( $shs )+4;
   $shs = response.get( $shs_len );

   $salt1 = string.substring( $shs, $shs_len-40, $shs_len-33 );
   $salt2 = string.substring( $shs, $shs_len-13, $shs_len-2 );
   connection.data.set( "salt", $salt1.$salt2 );

   # Write an authentication confirmation now to provoke the client
   # to send us more data (the first query). This will prepare the
   # state machine to write the authentication to the server
   connection.data.set( "state", "wait for client data" );
   response.set( $authok );
   break;
}

if( $state == "do authentication" ) {
   # We're expecting two responses.
   # The first is the authentication confirmation which we discard.
   $res = response.get();
   $res1 = string.left( $res, 11 );
   $res2 = string.skip( $res, 11 );

   if( $res1 != $authok ) {
      $user = connection.data.get( "user" );
      log.info( "Unexpected authentication failure for ".$user );
      connection.discard();
   }

   connection.data.set( "state", "complete" );
   response.set( $res2 );
   break;
}

Testing your configuration

If you have several MySQL databases to test against, testing this configuration is straightforward. Edit the request rule to add the correct passwords and pools, and use the mysql command-line client to make connections:

$ mysql -h zeus -u username -p
Enter password: *******

Check the 'current connections' list in the Traffic Manager UI to see how it has connected each session to a back-end database server.

If you encounter problems, try the following steps:

- Ensure that trafficscript!variable_pool_use is set to 'Yes' on the Global Settings page in the UI. This setting allows you to use non-literal values in the pool.use() and pool.select() TrafficScript functions.
- Turn on the log!client_connection_failures and log!server_connection_failures settings on the Virtual Server > Connection Management configuration page; these settings configure the traffic manager to write detailed debug messages to the Event Log whenever a connection fails.
Then review your Traffic Manager Event Log and your mysql logs in the event of an error.

Traffic Manager's access logging can be used to record every connection. You can use the special %{name}d log macro to record information stored using connection.data.set(), such as the username used in each connection.

Conclusion

This article has demonstrated how to build a fairly sophisticated protocol parser, where the Traffic Manager-based proxy performs full authentication and inspection before making a load-balancing decision. The protocol parser then performs the authentication again against the chosen back-end server. Once the client-side and server-side handshakes are complete, Traffic Manager simply forwards data back and forth between the client and the server.

This example addresses the problem of scaling out your MySQL database, giving load balancing and redundancy for database reads (SELECTs). It does not address the problem of scaling out your master 'write' server - you need to address that by investing in a sufficiently powerful server, architecting your database and application to minimise the number and impact of write operations, or by selecting a full clustering solution.

The solution leaves a single point of failure, in the form of the master database. This problem could be effectively dealt with by creating a monitor that tests the master database for correct operation. If it detects a failure, the monitor could promote one of the slave databases to master status and reconfigure the 'SQL Master' pool to direct write (UPDATE) traffic to the new MySQL master server.

Acknowledgements

Ian Redfern's MySQL protocol description was invaluable in developing the proxy code.

Appendix - Password Problems?

This example assumes that you are using MySQL 4.1.x or later (it was tested with MySQL 5 clients and servers), and that your database has passwords in the 'long' 41-byte MySQL 4.1 (and later) format (see http://dev.mysql.com/doc/refman/5.0/en/password-hashing.html).

If you upgrade a pre-4.1 MySQL database to 4.1 or later, your passwords will remain in the pre-4.1 'short' format. You can verify what password format your MySQL database is using as follows:

mysql> select password from mysql.user where user='username';
+------------------+
| password         |
+------------------+
| 6a4ba5f42d7d4f51 |
+------------------+
1 rows in set (0.00 sec)

mysql> update mysql.user set password=PASSWORD('password') where user='username';
Query OK, 1 rows affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> select password from mysql.user where user='username';
+-------------------------------------------+
| password                                  |
+-------------------------------------------+
| *14521EA7B4C66AE94E6CFF753453F89631AE57EF |
+-------------------------------------------+
1 rows in set (0.00 sec)

If you can't create 'long' passwords, your database may be stuck in 'short' password mode. Run the following command to resize the password table if necessary:

$ mysql_fix_privilege_tables --password=admin password

Check that 'old_passwords' is not set to '1' (see here) in your my.cnf configuration file. Check that the mysqld process isn't running with the --old-passwords option. Finally, ensure that the privileges you have configured apply to connections from the Stingray proxy. You may need to GRANT ... TO 'user'@'%', for example.
Top Deployment Guides

The following is a list of tested and validated deployment guides for common enterprise applications. Ask your sales team for the latest information.

Microsoft

- Virtual Traffic Manager and Microsoft Lync 2013
- Virtual Traffic Manager and Microsoft Lync 2010
- Virtual Traffic Manager and Microsoft Skype for Business
- Virtual Traffic Manager and Microsoft Exchange 2010
- Virtual Traffic Manager and Microsoft Exchange 2013
- Virtual Traffic Manager and Microsoft Exchange 2016
- Virtual Traffic Manager and Microsoft SharePoint 2013
- Virtual Traffic Manager and Microsoft SharePoint 2010
- Virtual Traffic Manager and Microsoft Outlook Web Access
- Virtual Traffic Manager and Microsoft Intelligent Application Gateway
- Virtual Traffic Manager and Microsoft IIS

Oracle

- Virtual Traffic Manager and Oracle EBS 12.1
- Virtual Traffic Manager and Oracle Enterprise Manager 12c
- Virtual Traffic Manager and Oracle Application Server 10G
- Virtual Traffic Manager and Oracle WebLogic Applications (e.g. PeopleSoft and Blackboard)
- Virtual Traffic Manager and Glassfish Application Server

VMware

- Virtual Traffic Manager and VMware Horizon View Servers
- Virtual Traffic Manager Plugin for VMware vRealize Orchestrator

Other Applications

- Virtual Traffic Manager and SAP NetWeaver
- Virtual Traffic Manager and Magento
This article illustrates how to write data to a MySQL database from a Java Extension, and how to use a background thread to minimize latency and control the load on the database.

Being Lazy with Java Extensions

With a Java Extension, you can log data in real time to an external database. The example in this article describes how to log the 'referring' source that each visitor comes in from when they enter a website. Logging is done to a MySQL database, and it maintains a count of how many times each key has been logged, so that you can determine which sites are sending you the most traffic.

The article then presents a modification that illustrates how to lazily perform operations such as database writes in the background (i.e. asynchronously) so that the performance the end user observes is not impaired.

Overview - let's count referers!

It's often very revealing to find out which web sites are referring the most traffic to the sites that you are hosting. Tools like Google Analytics and web log analysis applications are one way of doing this, but in this example we'll show an alternative method where we log the frequency of referring sites to a local database for easy access.

When a web browser submits an HTTP request for a resource, it commonly includes a header called "Referer" which identifies the page that linked to that resource. We're not interested in internal referrers - where one page in the site links to another. We're only interested in external referrers. We're going to log these 'external referrers' to a MySQL database, counting the frequency of each so that we can easily determine which occur most commonly.

Create the database

Create a suitable MySQL database, with limited write access for a remote user:

% mysql -h dbhost -u root -p
Enter password: ********

mysql> CREATE DATABASE website;
mysql> CREATE TABLE website.referers ( data VARCHAR(256) PRIMARY KEY, count INTEGER );
mysql> GRANT SELECT,INSERT,UPDATE ON website.referers TO 'web'@'%' IDENTIFIED BY 'W38_U5er';
mysql> GRANT SELECT,INSERT,UPDATE ON website.referers TO 'web'@'localhost' IDENTIFIED BY 'W38_U5er';
mysql> QUIT;

Verify that the table was correctly created and that the 'web' user can access it:

% mysql -h dbhost -u web -p
Enter password: W38_U5er

mysql> DESCRIBE website.referers;
+-------+--------------+------+-----+---------+-------+
| Field | Type         | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+-------+
| data  | varchar(256) | NO   | PRI |         |       |
| count | int(11)      | YES  |     | NULL    |       |
+-------+--------------+------+-----+---------+-------+
2 rows in set (0.00 sec)

mysql> SELECT * FROM website.referers;
Empty set (0.00 sec)

The database looks good...

Create the Java Extension

We'll create a Java Extension that writes to the database, adding rows with the provided 'data' value, and setting the 'count' value to '1', or incrementing it if the row already exists.
CountThis.java

Compile the following 'CountThis' Java Extension:

import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CountThis extends HttpServlet {
   private static final long serialVersionUID = 1L;

   private Connection conn = null;
   private String userName = null;
   private String password = null;
   private String database = null;
   private String table = null;
   private String dbserver = null;

   public void init( ServletConfig config ) throws ServletException {
      super.init( config );

      // Read the connection settings from the extension's configuration
      userName = config.getInitParameter( "username" );
      password = config.getInitParameter( "password" );
      table = config.getInitParameter( "table" );
      dbserver = config.getInitParameter( "dbserver" );

      if( userName == null || password == null || table == null || dbserver == null )
         throw new ServletException( "Missing username, password, table or dbserver config value" );

      try {
         Class.forName( "com.mysql.jdbc.Driver" ).newInstance();
      } catch( Exception e ) {
         throw new ServletException( "Could not initialize mysql: " + e.toString() );
      }
   }

   public void doGet( HttpServletRequest req, HttpServletResponse res )
      throws ServletException, IOException
   {
      try {
         String[] args = (String[])req.getAttribute( "args" );
         String data = args[0];
         if( data == null ) return;

         if( conn == null ) {
            conn = DriverManager.getConnection( "jdbc:mysql://"+dbserver+"/", userName, password );
         }

         // Insert the key, or increment its count if it is already present
         PreparedStatement s = conn.prepareStatement(
            "INSERT INTO " + table + " ( data, count ) VALUES( ?, 1 ) " +
            "ON DUPLICATE KEY UPDATE count=count+1" );
         s.setString( 1, data );
         s.executeUpdate();
      } catch( Exception e ) {
         conn = null;
         log( "Could not log data to database table '" + table + "': " + e.toString() );
      }
   }

   public void doPost( HttpServletRequest req, HttpServletResponse res )
      throws ServletException, IOException
   {
      doGet( req, res );
   }
}

Upload the resulting CountThis.class file to Traffic Manager's Java Catalog. Click on the class name to configure the initialization properties that the extension reads in init(): username, password, table and dbserver.

You must also upload the MySQL connector (I used mysql-connector-java-5.1.24-bin.jar, from dev.mysql.com) to your Traffic Manager Java Catalog.

Add the TrafficScript rule

You can test the Extension very quickly using the following TrafficScript rule to log each request:

java.run( "CountThis", http.getPath() );

Check the Traffic Manager event log for any error messages, and query the table to verify that it is getting populated by the extension:

mysql> SELECT * FROM website.referers ORDER BY count DESC LIMIT 5;
+--------------------------+-------+
| data                     | count |
+--------------------------+-------+
| /media/riverbed.png      |     5 |
| /articles                |     3 |
| /media/puppies.jpg       |     2 |
| /media/ponies.png        |     2 |
| /media/cats_and_mice.png |     2 |
+--------------------------+-------+
5 rows in set (0.00 sec)

mysql> TRUNCATE website.referers;
Query OK, 0 rows affected (0.00 sec)

Use TRUNCATE to delete all of the rows in a table.
Log and count referer headers

We only want to log referrers from remote sites, so use the following TrafficScript rule to call the Extension only when it is required:

# This site
$host = http.getHeader( "Host" );

# The referring site
$referer = http.getHeader( "Referer" );

# Only log the Referer if it is an absolute URI and it comes from a different site
if( string.contains( $referer, "://" )
    && !string.contains( $referer, "://" . $host . "/" ) ) {
   java.run( "CountThis", $referer );
}

Add this rule as a request rule to a virtual server that processes HTTP traffic.

As users access the site, the Referer headers will be pushed into the database. A quick database query will tell you what's there:

% mysql -h dbhost -u web -p
Enter password: W38_U5er
mysql> SELECT * FROM website.referers ORDER BY count DESC LIMIT 4;
+--------------------------------------------+-------+
| data                                       | count |
+--------------------------------------------+-------+
| http://www.google.com/search?q=stingray    |    92 |
| http://www.riverbed.com/products/stingray  |    45 |
| http://www.vmware.com/appliances           |    26 |
| http://www.riverbed.com/                   |     5 |
+--------------------------------------------+-------+
4 rows in set (0.00 sec)

Lazy writes to the database

This is a useful application of Java Extensions, but it has one big drawback. Every time a visitor arrives from a remote site, their first transaction is stalled while the Java Extension writes to the database. This breaks one of the key rules of website performance architecture: do everything you can asynchronously (i.e. in the background) so that your users are not impeded (see "Lazy Websites run Faster").

A better solution is to maintain a separate background thread that writes the data to the database in bulk, while the foreground threads in the Java Extension simply append the Referer data to an in-memory list.

CountThisAsync.java

The following Java Extension (CountThisAsync.java) is a modified version of CountThis.java that illustrates this technique:

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.LinkedList;

import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CountThisAsync extends HttpServlet {
   private static final long serialVersionUID = 1L;

   private Writer writer = null;

   // Shared queue of values waiting to be written to the database
   protected static LinkedList<String> theData = new LinkedList<String>();

   protected class Writer extends Thread {
      private Connection conn = null;
      private String table;
      private int syncRate = 20;   // seconds between database writes

      public void init( String username, String password, String url, String table )
         throws Exception
      {
         Class.forName( "com.mysql.jdbc.Driver" ).newInstance();
         conn = DriverManager.getConnection( url, username, password );
         this.table = table;
         start();
      }

      public void run() {
         boolean running = true;
         while( running ) {
            try { sleep( syncRate*1000 ); }
            catch( InterruptedException e ) { running = false; }

            try {
               PreparedStatement s = conn.prepareStatement(
                  "INSERT INTO " + table + " ( data, count ) VALUES( ?, 1 ) " +
                  "ON DUPLICATE KEY UPDATE count=count+1" );
               conn.setAutoCommit( false );

               // Drain the queue into a single batched statement
               synchronized( theData ) {
                  while( !theData.isEmpty() ) {
                     String data = theData.removeFirst();
                     s.setString( 1, data );
                     s.addBatch();
                  }
               }
               s.executeBatch();
               conn.commit();   // autocommit is off, so commit the batch explicitly
            } catch( Exception e ) {
               log( e.toString() );
               running = false;
            }
         }
      }
   }

   public void init( ServletConfig config ) throws ServletException {
      super.init( config );

      String userName = config.getInitParameter( "username" );
      String password = config.getInitParameter( "password" );
      String table    = config.getInitParameter( "table" );
      String dbserver = config.getInitParameter( "dbserver" );

      if( userName == null || password == null || table == null || dbserver == null )
         throw new ServletException(
            "Missing username, password, table or dbserver config value" );

      try {
         writer = new Writer();
         writer.init( userName, password, "jdbc:mysql://" + dbserver + "/", table );
      } catch( Exception e ) {
         throw new ServletException( e.toString() );
      }
   }

   public void doGet( HttpServletRequest req, HttpServletResponse res )
      throws ServletException, IOException
   {
      String[] args = (String[])req.getAttribute( "args" );
      String data = args[0];

      // Just queue the value and return immediately; the Writer thread
      // flushes it to the database in the background
      if( data != null && writer.isAlive() ) {
         synchronized( theData ) {
            theData.add( data );
         }
      }
   }

   public void doPost( HttpServletRequest req, HttpServletResponse res )
      throws ServletException, IOException
   {
      doGet( req, res );
   }

   public void destroy() {
      writer.interrupt();
      try { writer.join( 1000L ); } catch( InterruptedException e ) {}
      super.destroy();
   }
}

When the Extension is invoked by Traffic Manager, it simply stores the value of the Referer header in a local list and returns immediately. This minimizes any latency that the end user may observe.

The Extension creates a separate thread (embodied by the Writer class) that runs in the background. Every syncRate seconds, it removes all of the values from the list and writes them to the database in a single batch.

Compile the extension:

$ javac -cp servlet.jar:zxtm-servlet.jar CountThisAsync.java
$ jar -cvf CountThisAsync.jar CountThisAsync*.class

... and upload the resulting CountThisAsync.jar file to your Java Catalog. Remember to apply the four configuration parameters (username, password, table and dbserver) to the CountThisAsync Java Extension so that it can access the database, and modify the TrafficScript rule so that it calls the CountThisAsync Java Extension.

You'll observe that database updates may be delayed by up to 20 seconds (you can tune that delay in the code), but the level of service that end users experience will no longer be affected by the speed of the database.
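If full referring URLs make the counts too fine-grained, a variation of the request rule (my own sketch, not from the original article) could log just the referring hostname, so that all pages from one site aggregate into a single row:

# Sketch: count referring sites by hostname only
$referer = http.getHeader( "Referer" );
$host = http.getHeader( "Host" );

# Extract the hostname portion of an absolute Referer URI
if( string.regexmatch( $referer, "^[A-Za-z]+://([^/:]+)" ) ) {
   $refhost = $1;
   if( $refhost != $host ) {
      java.run( "CountThisAsync", $refhost );
   }
}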
View full article
With more services being delivered through a browser, it's safe to say web applications are here to stay. The rapid growth of web-enabled applications and an increasing number of client devices mean that organizations are dealing with more document transfer methods than ever before. Providing easy access to these applications (web mail, intranet portals, document storage, etc.) can expose vulnerable points in the network.

When it comes to security and protection, application owners typically cover the common threats and vulnerabilities. What is often overlooked is one of the first things we learned about the internet: virus protection. Some application owners consider the response "We have virus scanners running on the servers" sufficient. These same owners implement security plans that involve extending protection as far as possible, but surprisingly allow a virus to travel several layers deep into the architecture.

Pulse vADC can extend protection for your applications with unmatched software flexibility and scale. Utilize existing investments by installing Pulse vADC on your infrastructure (Linux, Solaris, VMware, Hyper-V, etc.) and integrate with existing antivirus scanners. Deploy Pulse vADC (available with many providers: Amazon, Azure, CoSentry, Datapipe, Firehost, GoGrid, Joyent, Layered Tech, Liquidweb, Logicworks, Rackspace, Sungard, Xerox, and many others) and externally proxy your applications to remove threats before they are in your infrastructure. Additionally, when serving as a forward proxy for clients, Pulse vADC can be used to mitigate virus propagation by scanning outbound content.

The Pulse Web Application Firewall ICAP Client Handler lets you integrate with an ICAP server. ICAP (Internet Content Adaptation Protocol) is a protocol aimed at providing simple object-based content vectoring for HTTP services. The Web Application Firewall acts as an ICAP client and passes requests to a specified ICAP server, enabling you to integrate with third-party products based on the ICAP protocol. In particular, you can use the ICAP Client Handler as a virus scanner interface for scanning uploads to your web application.

Example Deployment

This deployment uses version 9.7 of the Pulse Traffic Manager with the open source applications ClamAV and c-icap installed locally. If utilizing a cluster of Traffic Managers, this deployment should be performed on all nodes of the cluster. Additionally, Traffic Manager could be utilized as an ADC to extend availability and performance across multiple external ICAP application servers. I would also like to credit Thomas Masso, Jim Young, and Brian Gautreau - thank you for your assistance!

"ClamAV is an open source (GPL) antivirus engine designed for detecting Trojans, viruses, malware and other malicious threats." - http://www.clamav.net/

"c-icap is an implementation of an ICAP server. It can be used with HTTP proxies that support the ICAP protocol to implement content adaptation and filtering services." - The c-icap project

Installation of ClamAV, c-icap, and libc-icap-mod-clamav

For this example, public repositories are used to install the packages on version 9.7 of the Traffic Manager virtual appliance with the default configuration. To install in a different manner or operating system, consult the ClamAV and c-icap documentation.
Run the following commands (copy and paste) to back up the sources.list file:

cp /etc/apt/sources.list /etc/apt/sources.list.rvbdbackup

Run the following commands to update the sources.list file. *Tested with Traffic Manager virtual appliance version 9.7. For other Ubuntu releases, replace 'precise' with the installed release codename; run "lsb_release -sc" to find out your release.

cat <<EOF >> /etc/apt/sources.list
deb http://ch.archive.ubuntu.com/ubuntu/ precise main restricted
deb-src http://ch.archive.ubuntu.com/ubuntu/ precise main restricted
deb http://us.archive.ubuntu.com/ubuntu/ precise universe
deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe
deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
EOF

Run the following command to retrieve the updated package lists:

apt-get update

Run the following command to install ClamAV, c-icap, and libc-icap-mod-clamav:

apt-get install clamav c-icap libc-icap-mod-clamav

Run the following command to restore your sources.list:

cp /etc/apt/sources.list.rvbdbackup /etc/apt/sources.list

Configure the c-icap ClamAV service

Run the following commands to add lines to /etc/c-icap/c-icap.conf:

cat <<EOF >> /etc/c-icap/c-icap.conf
Service clamav srv_clamav.so
ServiceAlias avscan srv_clamav?allow204=on&sizelimit=off&mode=simple
srv_clamav.ScanFileTypes DATA EXECUTABLE ARCHIVE GIF JPEG MSOFFICE
srv_clamav.MaxObjectSize 100M
EOF

*Consult the ClamAV and c-icap documentation and customize the configuration and settings for ClamAV and c-icap (i.e. definition updates, ScanFileTypes, restricting c-icap access, etc.) for your deployment.

Just for fun, run the following command to manually update the ClamAV database:

/usr/bin/freshclam

Configure the ICAP Server to Start

This can be completed a few different ways; for this example we use the Event Alerting functionality of Traffic Manager to start the c-icap server when the Web Application Firewall is started.

Save the following bash script (for this example, start_icap.sh) on your computer:

#!/bin/bash
/usr/bin/c-icap
#END

Upload the script via the Traffic Manager UI under Catalogs > Extra Files > Action Programs. (See Figure 1)

Create a new event type (for this example, named "Firewall Started") under System > Alerting > Manage Event Types. Select "appfirewallcontrolstarted: Application firewall started" and click Update to save. (See Figure 2)

Create a new action (for this example, named "Start ICAP") under System > Alerting > Manage Actions. Select the "Program" radio button and click "Add Action" to save. (See Figure 3)

Configure the "Start ICAP" action program to use the "start_icap.sh" script, and for this example adjust the timeout setting to 300. Click Update to save. (See Figure 4)

Configure the Alert Mapping under System > Alerting to use the event type and action previously created. Click Update to save your changes. (See Figure 5)

Restart the Application Firewall or reboot to automatically start the c-icap server. Alternatively, you can run the /usr/bin/c-icap command from the console, or select "Update and Test" under the "Start ICAP" alert configuration page of the UI to manually start c-icap.

Configure the Web Application Firewall

Within the Web Application Firewall UI, add and configure the ICAPClientHandler using the following attributes and values:
icap_server_location - 127.0.0.1
icap_server_resource - /avscan

Testing Notes

Check the WAF application logs. Use Full logging for the Application configuration and enable_logging for the ICAPClientHandler. As with any system, use full logging with caution; the logs can fill up fast!

Check the c-icap logs (/var/log/c-icap/access.log and /var/log/c-icap/server.log). Note: changing the "DebugLevel" value in /etc/c-icap/c-icap.conf to 9 is useful for testing, recording more detail to /var/log/c-icap/server.log. *You may want to change this back to 1 when you are done testing.

The Action Settings page in the Traffic Manager UI (for this example, Alerting > Actions > Start ICAP) also provides an "Update and Test" button that allows you to trigger the action and start the c-icap server.

Enable verbose logging for the "Start ICAP" action in Traffic Manager for more information from the event mechanism. *You may want to disable this setting when you are done testing.

Additional Information
Pulse Secure Virtual Traffic Manager
Pulse Secure Virtual Web Application Firewall
Product Documentation
RFC 3507 - Internet Content Adaptation Protocol (ICAP)
The c-icap project
Clam AntiVirus
View full article
Meta-tags and the meta-description are used by search engines and other tools to infer more information about a website, and their judicious and responsible use can have a positive effect on a page's ranking in search engine results. Suffice to say, a page without any 'meta' information is likely to score lower than the same page with some appropriate information.

This article (originally published December 2006) describes how you can automatically infer and insert meta tags into a web page, on the fly.

Rewriting a response

First, decide what to use to generate a list of related keywords.

It would have been nice to have been able to slurp up all the text on the page and calculate the most commonly occurring unusual words. Surely that would have been the über-geek thing to do? Well, not really: unless I was careful I could end up slowing down each response, and there would be the danger that I produced a strange list of keywords that didn't accurately represent what the page is trying to say (and could also be wildly "off-message").

So I instead turned to three sources of on-message page summaries: the title tag, the contents of the big h1 tag, and the elements of the page path.

The script

First I had to get the response body:

$body = http.getResponseBody();

This will be grepped for keywords, mangled to add the meta-tags and then returned by setting the response body:

http.setResponseBody( $body );

Next I had to make a list of keywords. As I mentioned before, my first plan was to look at the path: by converting slashes to separators I should be able to generate some correct keywords, something like this:

$path = http.getPath();
$path = string.regexsub( $path, "/+", "; ", "g" );

After adding a few lines to first tidy up the path (removing slashes at the beginning and end, and replacing underscores with spaces), it worked pretty well.

And, for solely aesthetic reasons, I added:

$path = string.uppercase( $path );

Then, I took a look at the title tag. Something like this did the trick:

if( string.regexmatch( $body, "<title>\\s*(.*?)\\s*</title>", "i" ) ) {
   $title_tag_text = $1;
}

(the "i" flag here makes the search case-insensitive, just in case).

This, indeed, worked fine. With a little cleaning up, I was able to generate a meta-description similarly: I just stuck them together after adding some punctuation (solely to make it nicer when read: search engines often return the meta-description in the search result).

After playing with this for a while I wasn't completely satisfied with the results: the meta-keywords looked great, but the meta-description was a little lacking in the real English department.

So, instead I turned my attention to the h1 tag on each page: it should already be a mini-description of each page. I grepped it in a similar fashion to the title tag, and the generated description looked vastly improved.

Lastly, I added some code to check if a page already has a meta-description or meta-keywords, to prevent the automatic tags being inserted in this case. This allows us to gradually add meta-tags by hand to our pages, and it means we always have a backup should we forget to add metas to a new page in the future.

The finished script looked like this:

# Only process HTML responses
$ct = http.getResponseHeader( "Content-Type" );
if( ! string.startsWith( $ct, "text/html" ) ) break;

$body = http.getResponseBody();

$path = http.getPath();
# remove the first and last slashes; convert remaining slashes
$path = string.regexsub( $path, "^/?(.*?)/?$", "$1" );
$path = string.replaceAll( $path, "_", " " );
$path = string.replaceAll( $path, "/", ", " );

if( string.regexmatch( $body, "<h1.*?>\\s*(.*?)\\s*</h1>", "i" ) ) {
   $h1 = $1;
   $h1 = string.regexsub( $h1, "<.*?>", "", "g" );
}

if( string.regexmatch( $body, "<title>\\s*(.*?)\\s*</title>", "i" ) ) {
   $title = $1;
   $title = string.regexsub( $title, "<.*?>", "", "g" );
}

if( $h1 ) {
   $description = "Riverbed - " . $h1 . ": " . $title;
   $keywords = "Riverbed, " . $path . ", " . $h1 . ", " . $title;
} else {
   $description = "Riverbed - " . $path . ": " . $title;
   $keywords = "Riverbed, " . $path . ", " . $title;
}

# only rewrite the meta-keywords if we don't already have some
if( ! string.regexmatch( $body, "<meta\\s+name='keywords'", "i" ) ) {
   $meta_keywords = " <meta name='keywords' content='" . $keywords . "'/>\n";
}

# only rewrite the meta-description if we don't already have one
if( ! string.regexmatch( $body, "<meta\\s+name='description'", "i" ) ) {
   $meta_description = " <meta name='description' content='" . $description . "'/>";
}

# find the title and stick the new meta tags in afterwards
if( $meta_keywords || $meta_description ) {
   $body = string.regexsub( $body, "(<title>.*</title>)",
                            "$1\n" . $meta_keywords . $meta_description );
   http.setResponseBody( $body );
}

It should be fairly easy to adapt it to another site, assuming the pages are built consistently.

This article was originally written by Sam Phillips in December 2006, and was modified and tested in February 2013.
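One extra hedge worth considering (my addition, not part of the original script): if an extracted title or h1 contains a single quote, it would break out of the content='...' attributes the script builds. Escaping the quotes before the tags are assembled avoids that:

# Hypothetical hardening: escape single quotes in the extracted text, so a
# title like "O'Reilly reviews" cannot break out of the content='...' attribute
$title = string.replaceAll( $title, "'", "&#39;" );
$h1 = string.replaceAll( $h1, "'", "&#39;" );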
View full article
When deploying applications using content management systems, application owners are typically limited to the functionality of the CMS application in use or the third-party add-ons available. Unfortunately, these components alone may not deliver the application requirements, leaving the application owner to dedicate resources to develop a solution that usually ends up taking longer than it should, or not working at all. This article addresses some hypothetical production use cases, where the application does not provide the administrators an easy method to add a timer to the website.

This solution builds upon the previous articles (Embedded Google Maps - Augmenting Web Applications with Traffic Manager and Embedded Twitter Timeline - Augmenting Web Applications with Traffic Manager). Using a solution from Owen Garrett (see Instrument web content with Traffic Manager), this example will use a simple CSS overlay to display the added information.

Basic Rule

As a starting point, this shows the minimum requirements, and can be customized for your own use (i.e. most people will want to add "text-align:center"). Values may need to be added to the $style or $html for your application; see the examples.

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$timer = ( "366" - ( sys.gmtime.format( "%j" ) ) );

$html = '<div class="Countdown">' . $timer . ' DAYS UNTIL THE END OF THE YEAR</div>';

$style = '<style type="text/css">.Countdown{z-index:100;background:white}</style>';

$body = http.getResponseBody();
$body = string.regexsub( $body, "(<body[^>]*>)", $style . "$1\n" . $html . "\n", "i" );
http.setResponseBody( $body );

Example 1 - Simple Day Countdown Timer

This example covers a common use case popular with retailers: a countdown for the holiday shopping season. This example also adds font formatting and additional text with a link.

#Only process text/html content
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

#Countdown target
#Julian day of the year "001" to "366"
$targetday = "359";
$bgcolor = "#D71920";
$labelday = "DAYS";
$title = "UNTIL CHRISTMAS";
$titlecolor = "white";
$link = "/dept.jump?id=dept20020200034";
$linkcolor = "yellow";
$linktext = "VISIT YOUR ONE-STOP GIFT SHOP";

#Calculate days between today and targetday
$timer = ( $targetday - ( sys.gmtime.format( "%j" ) ) );

#Remove the S from "DAYS" if only 1 day is left
if( $timer == 1 ) {
   $labelday = string.drop( $labelday, 1 );
}

$html = '
<div class="TrafficScriptCountdown">
   <h3>
      <font color="' . $titlecolor . '">
         ' . $timer . ' ' . $labelday . ' ' . $title . '
      </font>
      <a href="' . $link . '">
         <font color="' . $linkcolor . '">
            ' . $linktext . '
         </font>
      </a>
   </h3>
</div>
';

$style = '
<style type="text/css">
.TrafficScriptCountdown {
   position:relative;
   top:0;
   width:100%;
   text-align:center;
   background:' . $bgcolor . ';
   opacity:100%;
   z-index:1000;
   padding:0
}
</style>
';

$body = http.getResponseBody();

$body = string.regexsub( $body, "(<body[^>]*>)", $style . "$1\n" . $html . "\n", "i" );

http.setResponseBody( $body );
Example 1 in Action

Example 2 - Ticking countdown timer with second detail

This example covers how to dynamically display the time down to seconds. Rather than sending data to the client every second, I chose to use a client-side JavaScript countdown found at HTML Countdown to Date v3 (Javascript Timer) | ricocheting.com

Example 2 Response Rule

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

#Countdown target
$year = "2014";
$month = "11";
$day = "3";
$hr = "8";
$min = "0";
$sec = "0";
#number of hours offset from UTC
$utc = "-8";

$labeldays = "DAYS";
$labelhrs = "HRS";
$labelmins = "MINS";
$labelsecs = "SECS";
$separator = ", ";

$timer = '<script type="text/javascript">
var CDown=function(){this.state=0,this.counts=[],this.interval=null};CDown.prototype=\
{init:function(){this.state=1;var t=this;this.interval=window.setInterval(function()\
{t.tick()},1e3)},add:function(t,s){tzOffset='.$utc.',dx=t.toGMTString(),dx=dx.substr\
(0,dx.length-3),tzCurrent=t.getTimezoneOffset()/60*-2,t.setTime(Date.parse(dx)),\
t.setHours(t.getHours()+tzCurrent-tzOffset),this.counts.push({d:t,id:s}),this.tick(),\
0==this.state&&this.init()},expire:function(t){for(var s in t)this.display\
(this.counts[t[s]],"Now!"),this.counts.splice(t[s],1)},format:function(t){var s="";\
return 0!=t.d&&(s+=t.d+" "+(1==t.d?"'.string.drop( $labeldays, 1 ).'":"'.$labeldays.'\
")+"'.$separator.'"),0!=t.h&&(s+=t.h+" "+(1==t.h?"'.string.drop( $labelhrs, 1 ).'":\
"'.$labelhrs.'")+"'.$separator.'"),s+=t.m+" "+(1==t.m?"\
'.string.drop( $labelmins, 1 ).'":"'.$labelmins.'")+"'.$separator.'",s+=t.s+" "\
+(1==t.s?"'.string.drop( $labelsecs, 1 ).'":"'.$labelsecs.'")+"'.$separator.'"\
,s.substr(0,s.length-2)},math:function(t){var i=w=d=h=m=s=ms=0;return ms=(""+\
(t%1e3+1e3)).substr(1,3),t=Math.floor(t/1e3),i=Math.floor(t/31536e3),w=Math.floor\
(t/604800),d=Math.floor(t/86400),t%=86400,h=Math.floor(t/3600),t%=3600,m=Math.floor\
(t/60),t%=60,s=Math.floor(t),{y:i,w:w,d:d,h:h,m:m,s:s,ms:ms}},tick:function()\
{var t=(new Date).getTime(),s=[],i=0,n=0;if(this.counts)for(var e=0,\
o=this.counts.length;o>e;++e)i=this.counts[e],n=i.d.getTime()-t,0>n?s.push(e):\
this.display(i,this.format(this.math(n)));s.length>0&&this.expire(s),\
0==this.counts.length&&window.clearTimeout(this.interval)},display:function(t,s)\
{document.getElementById(t.id).innerHTML=s}},window.onload=function()\
{var t=new CDown;t.add(new Date\
('.$year.','.--$month.','.$day.','.$hr.','.$min.','.$sec.'),"countbox1")};
</script><span id="countbox1"></span>';

$html = '<div class="TrafficScriptCountdown"><center><h3><font color="white">\
COUNTDOWN TO RIVERBED FORCE '.$timer.'</font>\
<a href="https://secure3.aetherquest.com/riverbedforce2014/"><font color="yellow">\
REGISTER NOW</font></a></h3></center></div>';

$style = '<style type="text/css">.TrafficScriptCountdown{position:relative;top:0;\
width:100%;background:#E9681D;opacity:100%;z-index:1000;padding:0}</style>';

http.setResponseBody( string.regexsub( http.getResponseBody(),
   "(<body[^>]*>)", $style . "$1\n" . $html . "\n", "i" ) );

Example 2 in action

Notes

Example 1 results in faster page load time than Example 2.
Example 1 can be easily extended to have TrafficScript set $timer with detail down to the second, as in Example 2; a server-side sketch of this follows these notes.
Be aware of any trailing space(s) after the " \ " line breaks when copy and paste is used to import the rule; incorrect spacing can stop the JS and the HTML from functioning.
You may have to adjust the elements for your web application (i.e. z-index, the regex sub match, div class, etc.).

This is a great example of using Traffic Manager to deliver a solution in minutes that could otherwise take hours.
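Following on from the first note above, here is a minimal server-side sketch (my own untested illustration; the target timestamp and label text are assumptions) that computes day/hour/minute detail in TrafficScript against an absolute target time. Unlike Example 2, the display only updates when the page is re-requested:

# Sketch: server-side countdown with day/hour/minute detail
# (hypothetical target time, expressed in UNIX epoch seconds)
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$target = 1419984000;   # assumed target date/time
$remaining = $target - sys.time();
if( $remaining < 0 ) { $remaining = 0; }

# Integer arithmetic: subtract the remainder before dividing
$days = ( $remaining - ( $remaining % 86400 ) ) / 86400;
$hrs  = ( ( $remaining % 86400 ) - ( $remaining % 3600 ) ) / 3600;
$mins = ( ( $remaining % 3600 ) - ( $remaining % 60 ) ) / 60;

$html = '<div class="Countdown">' . $days . ' DAYS, ' . $hrs . ' HRS, '
      . $mins . ' MINS REMAINING</div>';
$style = '<style type="text/css">.Countdown{z-index:100;background:white}</style>';

$body = string.regexsub( http.getResponseBody(), "(<body[^>]*>)",
                         $style . "$1\n" . $html . "\n", "i" );
http.setResponseBody( $body );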
View full article
Update: See also this new article including a simple template rule: A Simple Template Rule SteelCentral Web Analyzer - BrowserMetrix

Riverbed SteelCentral Web Analyzer is a great tool for monitoring end-user experience (EUE) of web applications, even when they are hosted in the cloud. And because it is delivered as true Software-as-a-Service, you can monitor application performance from anywhere, drill down to analyse individual transactions by URL, location or browser type, and highlight requests which took too long to respond.

In order to track statistics, your web application needs to send statistics on each transaction to Web Analyzer (formerly BrowserMetrix) using a small piece of JavaScript, and it is very easy to inject the extra JavaScript code without needing to change the application itself. This Solution Guide (attached) shows you how to use TrafficScript to inject the JavaScript snippet into your web applications, by inspecting all web pages and inserting it at the right place in each document:

No modification needed to your application
Easy to select which pages you want to instrument
Use with all applications in your data center, or hosted in the cloud
Even works with compressed HTML pages (e.g. gzip encoded)
Create dynamic JavaScript code to track session-level information
Use Riverbed FlyScript to automate the integration between Web Analyzer and Traffic Manager

How does it work?

SteelApp Traffic Manager sits in front of the web applications and inspects each web page before it is sent to the client. It checks to see if the page has been selected for analysis by Web Analyzer, then constructs the JavaScript fragment and injects it into the web page at the right place in the HTML document.

When the web page arrives at the client browser, the JavaScript snippet is executed. It builds a transaction profile with timing information and submits the information to the Web Analyzer SaaS platform managed by Riverbed. You can then analyze the results, in near-realtime, using the Web Analyzer web portal.

Thanks also to Faisal Memon for his help creating the Solution Guide.

Read more

In addition to the attached deployment guide showing how to create complex rules for JavaScript injection, you may also be interested in this new article showing how to use a simple template rule with Traffic Manager and SteelCentral Web Analyzer: A Simple Template Rule for SteelCentral Web Analyzer - BrowserMetrix

For similar solutions, check out the Content Modification examples in the Top vADC Examples and Use Cases article.

Updated 15th July 2014 by Paul Wallace. Article formerly titled "Using Stingray with OPNET AppResponse Xpert BrowserMetrix". Thanks also to Mike Iem for his help updating this article. 29th July 2014 by Paul Wallace: added note about the new article including the simple template rule.
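As a footnote to the "How does it work?" section above, a stripped-down illustration of the injection step in TrafficScript looks something like this. The script URL is a placeholder, not the real Web Analyzer snippet, and this sketch assumes an uncompressed text/html response (the full rule in the attached guide also handles gzip-encoded pages and dynamic snippet contents):

# Minimal sketch of JavaScript snippet injection
# (placeholder URL, not the real Web Analyzer snippet)
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$snippet = '<script src="//cdn.example.com/bmx-snippet.js" async></script>';

$body = http.getResponseBody();

# Inject once only, just before the closing </head> tag
if( !string.contains( $body, "bmx-snippet.js" ) ) {
   $body = string.regexsub( $body, "(</head>)", $snippet . "\n$1", "i" );
   http.setResponseBody( $body );
}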
View full article
The article Using Pulse vADC with SteelCentral Web Analyzer shows how to create and customize a rule to inject JavaScript into web pages to track end-to-end performance and measure the actual user experience, and how to enhance it to create dynamic instrumentation for a variety of use cases.

But to make it even easier to use Traffic Manager and SteelCentral Web Analyzer - BrowserMetrix together, we have created a simple, encapsulated rule (included in the file attached to this article, "SteelApp-BMX.txt") which can be copied directly into Traffic Manager, and which includes a form to let you customize the rule with your own ClientID and AppID in the snippet. In this example, we will add the new rule to our example web site, "http://www.northernlightsastronomy.com", using the following steps:

1. Create the new rule

The quickest way to create a new rule on the Traffic Manager console is to navigate to the virtual server for your web application, click through to the Rules linked to this virtual server, and then, at the foot of the page, click "Manage Rules in Catalog." Type in a name for your new rule, ensure the "Use TrafficScript" and "Associate with this virtual server" options are checked, then click on "Create Rule".

2. Copy in the encapsulated rule

In the new rule, simply copy and paste in the encapsulated rule (from the file attached to this article, "SteelApp-BMX.txt") and click on "Update" at the end of the form.

3. Customize the rule

The rule is now transformed into a simple form which you can customize, and you can enter the "clientId" and "appId" parameters from the Web Analyzer - BrowserMetrix console. In addition, you must enter the hostname which Traffic Manager uses to serve the web pages. Enter only the hostname itself, excluding any prefix such as "http://" or "https://".

The new rule is now enabled for your application, and you can track it via the SteelCentral Web Analyzer console.

4. How to find your clientId and appId parameters

Creating and modifying your JavaScript snippet requires the "clientId" and "appId" parameters from the Web Analyzer - BrowserMetrix console. To find them, go to the home page and click on the "Application Settings" icon next to your application. The next screen shows the plain JavaScript snippet; from this, you can copy the "clientId" and "appId" parameters.

5. Download the template rule now!

You can download the template rule from the file attached to this article, "SteelApp-BMX.txt". The rule can be copied directly into Traffic Manager, and includes a form to let you customize the rule to include your own ClientID and AppID in the snippet.
View full article
Dynamic information is more abundant now than ever, but we still see web applications providing static content. Unfortunately, many websites still use a static picture for a location map because of the application code changes required. Traffic Manager provides the ability to insert the required code into your site with no changes to the application, simplifying the ability to provide users dynamic and interactive content tailored for them. Fortunately, Google provides an API to use embedded Google Maps in your application. These maps can be implemented with few code changes and support many applications. This document will focus on using Traffic Manager to provide embedded Google Maps without configuration or code changes to the application.

"The Google Maps Embed API uses a simple HTTP request to return a dynamic, interactive map. The map can be easily embedded in your web page by setting the Embed API URL as the src attribute of an iframe... Google Maps Embed API maps are easy to add to your webpage—just set the URL you build as the value of an iframe's src attribute. Control the size of the map with the iframe's height and width attributes. No JavaScript required." -- Google Maps Embed API — Google Developers

Google Maps Embedded API Notes

Please reference the Google documentation at Google Maps Embed API — Google Developers for additional information and options not covered in this document.

Google API Key

Before you get started with the TrafficScript, you need to get a Google API key. Requests to the Google Embed API must include a free API key as the value of the URL key parameter. Your key enables you to monitor your application's Maps API usage, and ensures that Google can contact you about your website/application if necessary. Visit Google Maps Embed API — Google Developers for directions to obtain an API key.

"By default, a key can be used on any site. We strongly recommend that you restrict the use of your key to domains that you administer, to prevent use on unauthorized sites. You can specify which domains are allowed to use your API key by clicking the Edit allowed referrers... link for your key." -- Google Maps Embed API — Google Developers

The API key is included in clear text to the client (search nerdydata for "https://www.google.com/maps/embed/v1/place?key="). I also recommend you restrict use of your key to your domains.

Map Modes

Google provides four map modes available for use, and the mode is specified in the request URL.

Place mode displays a map pin at a particular place or address, such as a landmark, business, geographic feature, or town.
Directions mode displays the path between two or more specified points on the map, as well as the distance and travel time.
Search mode displays results for a search across the visible map region. It's recommended that a location for the search be defined, either by including a location in the search term (record+stores+in+Seattle) or by including a center and zoom parameter to bound the search.
View mode returns a map with no markers or directions.

A few use cases:

Display a map of a specific location with labels using Place mode (covered in this document).
Display parking and transit information for a location with Search mode (covered in this document).
Provide directions (between locations or from the airport to a location) using Directions mode.
Display nearby hotels or tourist information with Search mode using keywords such as "lodging" or "landmarks".
Use geolocation with TrafficScript to provide a dynamic Search map of gyms local to each visitor for your fitness blog.
My personal favorite for intranets: save time figuring out where to eat lunch around the office and use Search mode with keyword "restaurant".
Improve my TrafficScript productivity and use Search mode with keyword "coffee+shops".

TrafficScript Examples

Example 1: Place Map (Replace a string)

This example covers a basic method to replace a string in the HTML code. The rule replaces a string within the existing HTML with Google Place map iframe HTML, and has been formatted for easy customization and readability.

#Only process text/html content
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$nearaddress = "680+Folsom+St.+San+Francisco,+CA+94107";
$googleapikey = "YOUR_KEY_HERE";
$googlemapurl = "https://www.google.com/maps/embed/v1/place";
#Map height and width
$mapheight = "420";
$mapwidth = "420";

#String of HTML to be replaced
$insertstring = "<!-- TAB 2 Content (Office Locations) -->";

#Replacement HTML
$googlemaphtml = "<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" " .
"frameborder=\"0\" style=\"border:0\" src=\"" . $googlemapurl . "?q=" .
$nearaddress . "&key=" . $googleapikey . "\"></iframe>";

#Get the existing HTTP body for modification
$body = http.getResponseBody();

#Replace the defined string in the body
$body = string.replaceAll( $body, $insertstring, $googlemaphtml );
http.setResponseBody( $body );

Example 2: Search Map (Replace a string)

This example is the same as Example 1, but with a change in the map type (note the change in $googlemapurl, which now uses search mode with "?q=parking+near"). The rule replaces a string within the existing HTML with Google Search map iframe HTML, and has been formatted for easy customization and readability.

#Only process text/html content
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$nearaddress = "680+Folsom+St.+San+Francisco,+CA+94107";
$googleapikey = "YOUR_KEY_HERE";
$googlemapurl = "https://www.google.com/maps/embed/v1/search";
#Map height and width
$mapheight = "420";
$mapwidth = "420";

#String of HTML to be replaced
$insertstring = "<!-- TAB 2 Content (Office Locations) -->";

#Replacement HTML
$googlemaphtml = "<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" " .
"frameborder=\"0\" style=\"border:0\" src=\"" . $googlemapurl . "?q=parking+near+" .
$nearaddress . "&key=" . $googleapikey . "\"></iframe>";

#Get the existing HTTP body for modification
$body = http.getResponseBody();

#Replace the defined string in the body
$body = string.replaceAll( $body, $insertstring, $googlemaphtml );
http.setResponseBody( $body );

Example 3: Search Map (Replace a section)

This example provides a different method to insert code into the existing HTML.
This rule uses a regex to replace a section of the existing HTML with Google map iframe HTML, and has also been formatted for easy customization and readability. The changes from Example 2 can be noted (see $insertstring and string.regexsub).

#Only process text/html content
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

$nearaddress = "680+Folsom+St.+San+Francisco,+CA+94107";
$googleapikey = "YOUR_KEY_HERE";
$googlemapurl = "https://www.google.com/maps/embed/v1/search";
#Map height and width
$mapheight = "420";
$mapwidth = "420";

#Regex match for the HTML section to be replaced
$insertstring = "</a>Parking</h4>(?s)(.*)<!-- TAB 2 Content \\(Office Locations\\) -->";

#Replacement HTML
$googlemaphtml = "<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" " .
"frameborder=\"0\" style=\"border:0\" src=\"" . $googlemapurl . "?q=parking+near+" .
$nearaddress . "&key=" . $googleapikey . "\"></iframe>";

#Get the existing HTTP body for modification
$body = http.getResponseBody();

#Regex sub against the body looking for the defined section
$body = string.regexsub( $body, $insertstring, $googlemaphtml );
http.setResponseBody( $body );

Example 3.1 (Shortened)

For reference, a shortened version of the Example 3 rule above (with line breaks for readability):

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ) ) break;

http.setResponseBody( string.regexsub( http.getResponseBody(),
   "</a>Parking</h4>(?s)(.*)<!-- TAB 2 Content \\(Office Locations\\) -->",
   "<iframe width=\"420\" height=\"420\" frameborder=\"0\" style=\"border:0\" " .
   "src=\"https://www.google.com/maps/embed/v1/search?" .
   "q=parking+near+680+Folsom+St.+San+Francisco,+CA+94107" .
   "&key=YOUR_KEY_HERE\"></iframe>" ) );

Example 4: Search Map (Replace a section with formatting, select URL, & additional map)

This example is closer to a production use case. Specifically, this was created with www.riverbed.com as my pool nodes. This rule has the following changes from Example 3: it uses HTML formatting to visually integrate with an existing application (<div class="six columns">), only processes the desired URL path (see the path check at the top of the rule; note that the original published rule inverted this test), and provides an additional Transit Stops map (the second iframe in the replacement HTML).

#Only process text/html content in the contact path
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" )
    || http.getPath() != "/contact" ) break;

$nearaddress = "680+Folsom+St.+San+Francisco,+CA+94107";
$mapcenter = string.urlencode( "37.784465,-122.398570" );
$mapzoom = "14";
#Google API key
$googleapikey = "YOUR_KEY_HERE";
$googlemapurl = "https://www.google.com/maps/embed/v1/search";
#Map height and width
$mapheight = "420";
$mapwidth = "420";

#Regex match for the HTML section to be replaced
$insertstring = "</a>Parking</h4>(?s)(.*)<!-- TAB 2 Content \\(Office Locations\\) -->";

#Replacement HTML
$googlemapshtml =
#HTML cleanup (2x "</div>") and new section title
"</div></div></a><h4>Parking and Transit Information</h4>" .
#BEGIN Parking Map. Using existing css for layout
"<div class=\"six columns\"><h5>Parking Map</h5>" .
"<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" frameborder=\"0\" " .
"style=\"border:0\" src=\"" . $googlemapurl . "?q=parking+near+" . $nearaddress .
"&key=" . $googleapikey . "\"></iframe></div>" .
#BEGIN Transit Map. Using existing css for layout
"<div class=\"six columns\"><h5>Transit Stops</h5>" .
"<iframe width=\"" . $mapwidth . "\" height=\"" . $mapheight . "\" frameborder=\"0\" " .
"style=\"border:0\" src=\"" . $googlemapurl . "?q=Transit+Stop+near+" . $nearaddress .
"&center=" . $mapcenter . "&zoom=" . $mapzoom . "&key=" . $googleapikey . "\"></iframe></div>" .
#Include the removed HTML comment
"<!-- TAB 2 Content (Office Locations) -->";

#Get the existing HTTP body for modification
$body = http.getResponseBody();

#Regex sub against the body looking for the defined section
$body = string.regexsub( $body, $insertstring, $googlemapshtml );
http.setResponseBody( $body );

Example 4.1 (Shortened)

For reference, a shortened version of the Example 4 rule above (with line breaks for readability):

if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" )
    || http.getPath() != "/contact" ) break;

http.setResponseBody( string.regexsub( http.getResponseBody(),
   "</a>Parking</h4>(?s)(.*)<!-- TAB 2 Content \\(Office Locations\\) -->",
   "</div></div></a><h4>Parking and Transit Information</h4><div class=\"six columns\">" .
   "<h5>Parking Map</h5><iframe width=\"420\" height=\"420\" frameborder=\"0\" " .
   "style=\"border:0\" src=\"https://www.google.com/maps/embed/v1/search" .
   "?q=parking+near+680+Folsom+St.+San+Francisco,+CA+94107&key=YOUR_KEY_HERE\"></iframe>" .
   "</div><div class=\"six columns\"><h5>Transit Stops</h5><iframe width=\"420\" " .
   "height=\"420\" frameborder=\"0\" style=\"border:0\" " .
   "src=\"https://www.google.com/maps/embed/v1/search?q=Transit+Stop+near+" .
   "680+Folsom+St.+San+Francisco,+CA+94107&center=37.784465%2C-122.398570&zoom=14" .
   "&key=YOUR_KEY_HERE\"></iframe></div><!-- TAB 2 Content (Office Locations) -->" ) );
View full article
Riverbed SteelApp™ Traffic Manager from Riverbed Technology is a high-performance, software-based application delivery controller (ADC), designed to deliver faster and more reliable access to Microsoft Azure applications as well as private applications. As a software-based ADC, it provides unprecedented scale and flexibility to deliver advanced application services.
View full article