Pulse Secure vADC
A great feature of the Stingray Traffic Manager is the ability to upload External Program Monitors. An External Program Monitor is a custom health monitor that can be written to monitor any service; an External Program Monitor for LDAP is available here.

To use it, first install ldapsearch on the Stingray Traffic Manager:

apt-get install ldap-utils (for Ubuntu-based distros)

The key is to install ldap-utils. Once that is installed, upload and install the monitor:

1. In the Stingray web interface, navigate to Catalogs -> Extra Files -> Monitor Programs. Upload ldap.pl (in the ldap.zip file).
2. Navigate to Catalogs -> Monitors. Scroll down to Create new monitor. Give it a name and select External program monitor as the type.
3. Select ldap.pl from the drop-down menu that appears.
4. Scroll down to program arguments and create four arguments: base, filter, pass, user. It should look like the screenshot below.
5. Fill in the fields appropriately: base is your LDAP search base, user and pass are your LDAP login credentials, and filter should be set to the CN associated with user. Note that Stingray does not mask the pass field with asterisks, so be aware of that.
6. Attach the monitor to the appropriate pool.

That completes the configuration of the LDAP Health Monitor for the Stingray Traffic Manager. Note: If you are using the virtual appliance, then follow the instructions in this KB article instead.
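For reference, the check performed by an LDAP monitor of this kind is conceptually similar to running ldapsearch by hand against a pool node. The values below are placeholders, and the exact flags used by ldap.pl may differ:

# Hypothetical example: bind as 'user'/'pass' and search 'base' for 'filter'
ldapsearch -x \
   -H ldap://10.0.0.10:389 \
   -D "cn=monitor,dc=example,dc=com" -w "secret" \
   -b "dc=example,dc=com" \
   "(cn=monitor)"

A successful search (and therefore a healthy node) returns at least one entry; a bind failure or timeout indicates a problem with the node.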
Request rule

The request rule below captures the start time for each request and sets a connection data value called "start":

$tm = sys.time.highres();
# Don't store $tm directly; use sprintf to preserve precision
connection.data.set( "start", string.sprintf( "%f", $tm ) );

Response rule

The following response rule then tests each response against a threshold, which is currently set to 6 seconds. A log entry is written to the event log for each response that takes longer than the threshold to complete. Each log entry shows the response time in seconds, the back-end node used and the full URI of the request:

$THRESHOLD = 6; # Response time in (integer) seconds above
                # which requests are logged.

$start = connection.data.get( "start" );
$now = sys.time.highres();
$diff = ( $now - $start );

if( $diff > $THRESHOLD ) {
   $uri = http.getRawURL();
   $node = connection.getNode();
   log.info( "SLOW REQUEST (" . $diff . "s) " . $node . ":" . $uri );
}

The information in the event log will be useful to identify patterns in slow connections. For example, it might be that all log entries relate to RSS connections, indicating that there might be a problem with the RSS content.

Read more: Collected Tech Tips: TrafficScript examples
This article describes how to gather activity statistics across a cluster of traffic managers using Perl, SOAP::Lite and Stingray's SOAP Control API.

Overview

Each local Stingray Traffic Manager tracks a very wide range of activity statistics. These may be exported using SNMP or retrieved using the System/Stats interface in Stingray's SOAP Control API. When you use the Activity monitoring in Stingray's Administration Interface, a collector process communicates with each of the Traffic Managers in your cluster, gathering the local statistics from each and merging them before plotting them on the activity chart ('Aggregate data across all traffic managers').

However, when you use the SNMP or Control API interfaces directly, you will only receive the statistics from the Traffic Manager machine you have connected to. If you want to get a cluster-wide view of activity using SNMP or the Control API, you will need to poll each machine and merge the results yourself.

Using Perl and SOAP::Lite to query the traffic managers and merge activity statistics

The following code sample determines the total TCP connection rate across the cluster as follows:

1. Connect to the named traffic manager and use the getAllClusterMachines() method to retrieve a list of all of the machines in the cluster;
2. Poll each machine in the cluster for its current value of TotalConn (the total number of TCP connections processed since startup);
3. Sleep for 10 seconds, then poll each machine again;
4. Calculate the number of connections processed by each traffic manager in the 10-second window, and calculate the per-second rate accurately using high-resolution time.

The code:

#!/usr/bin/perl -w

use SOAP::Lite 0.6;
use Time::HiRes qw( time sleep );

$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME} = 0;

my $userpass    = "admin:admin";   # SOAP-capable authentication credentials
my $adminserver = "stingray:9090"; # Details of an admin server in the cluster
my $sampletime  = 10;              # Sample time (seconds)

sub getAllClusterMembers( $$ );
sub makeConnections( $$$ );
sub makeRequest( $$ );

my $machines = getAllClusterMembers( $adminserver, $userpass );
print "Discovered cluster members " . ( join ", ", @$machines ) . "\n";

my $connections = makeConnections( $machines, $userpass,
   "http://soap.zeus.com/zxtm/1.0/System/Stats/" );

# Sample the value of getTotalConn
my $start = time();
my $res1 = makeRequest( $connections, "getTotalConn" );
sleep( $sampletime - ( time() - $start ) );
my $res2 = makeRequest( $connections, "getTotalConn" );

# Determine connection rate per traffic manager
my $totalrate = 0;
foreach my $z ( keys %{$res1} ) {
   my $conns   = $res2->{$z}->result - $res1->{$z}->result;
   my $elapsed = $res2->{$z}->{time} - $res1->{$z}->{time};
   my $rate    = $conns / $elapsed;
   $totalrate += $rate;
}

print "Total connection rate across all machines: " .
      sprintf( '%.2f', $totalrate ) . "\n";

sub getAllClusterMembers( $$ ) {
    my( $adminserver, $userpass ) = @_;

    # Discover cluster members
    my $mconn = SOAP::Lite
         -> ns('http://soap.zeus.com/zxtm/1.0/System/MachineInfo/')
         -> proxy("https://$userpass\@$adminserver/soap")
         -> on_fault( sub {
              my( $conn, $res ) = @_;
              die ref $res ? $res->faultstring : $conn->transport->status; } );
    $mconn->proxy->ssl_opts( SSL_verify_mode => 0 );

    my $res = $mconn->getAllClusterMachines();

    # $res->result is a reference to an array of System.MachineInfo.Machine objects.
    # Pull out the name:port of the traffic managers in our cluster.
    my @machines = grep s@https://(.*?)/@$1@,
       map { $_->{admin_server}; } @{$res->result};

    return \@machines;
}

sub makeConnections( $$$ ) {
    my( $machines, $userpass, $ns ) = @_;

    my %conns;
    foreach my $z ( @$machines ) {
       $conns{ $z } = SOAP::Lite
         -> ns( $ns )
         -> proxy("https://$userpass\@$z/soap")
         -> on_fault( sub {
              my( $conn, $res ) = @_;
              die ref $res ? $res->faultstring : $conn->transport->status; } );
       $conns{ $z }->proxy->ssl_opts( SSL_verify_mode => 0 );
    }
    return \%conns;
}

sub makeRequest( $$ ) {
    my( $conns, $req ) = @_;

    my %res;
    foreach my $z ( keys %$conns ) {
       my $r = $conns->{$z}->$req();
       $r->{time} = time();
       $res{$z} = $r;
    }
    return \%res;
}

Running the script

$ ./getConnections.pl
Discovered cluster members stingray1-ny:9090, stingray1-sf:9090
Total connection rate across all machines: 5.02
I have several hundred websites that all use host headers in IIS. I would like to use a single virtual/public IP address and have the traffic manager select the appropriate pool based on the host header passed in. I've been using a traffic script similar to the code snippet below. Is there a more efficient way to code this, as there will be several hundred pools and if statements? Can you do case statements in TrafficScript?

$HostHeader = http.getHostHeader();

if( string.contains( $HostHeader, "site1.test.com" ) ) {
   pool.use( "Pool_site1.test.com_HTTP" );
} else if( string.contains( $HostHeader, "site2.test.com" ) ) {
   pool.use( "Pool_site2.test.com_HTTP" );
} else if( string.contains( $HostHeader, "site3.test.com" ) ) {
   pool.use( "Pool_site3.test.com_HTTP" );
} else if( string.contains( $HostHeader, "site4.test.com" ) ) {
   pool.use( "Pool_site4.test.com_HTTP" );
} else if( string.contains( $HostHeader, "site5.test.com" ) ) {
   pool.use( "Pool_site5.test.com_HTTP" );
} else if( string.contains( $HostHeader, "site6.test.com" ) ) {
   pool.use( "Pool_site6.test.com_HTTP" );
} else {
   http.changeSite( "http://www.test.com" );
}
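One data-driven alternative: TrafficScript does not offer a switch/case construct, but you can derive the pool name from the host header instead of enumerating it in if-statements. The sketch below assumes every pool follows the "Pool_<host>_HTTP" naming convention shown above, and uses a hash of known sites as a guard, since selecting a pool that does not exist is an error:

$HostHeader = http.getHostHeader();

# One hash entry per site, instead of one if-statement per site
$sites = [ "site1.test.com" => 1,
           "site2.test.com" => 1,
           "site3.test.com" => 1 ];

if( hash.contains( $sites, $HostHeader ) ) {
   # Build the pool name directly from the host header
   pool.use( "Pool_" . $HostHeader . "_HTTP" );
} else {
   http.changeSite( "http://www.test.com" );
}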
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft Exchange 2013.
What is Load Balancing?

Load Balancing is one of the many capabilities of Traffic Manager. It distributes network traffic across a 'pool' of servers ('nodes'), selecting the most appropriate server for each individual request based on the current load balancing policy, session persistence considerations, node priorities and cache optimization hints. Under certain circumstances, if a request fails to elicit a response from a server, the request may be tried against multiple nodes until a successful response is received.

Load Balancing meets several primary goals:

Scalability: The capability to transparently increase or decrease the capacity of a service (by adding or removing nodes) without changing the public access point (IP address, domain name or URL) for the service;
Availability: The capability to route traffic to working nodes and avoid nodes that have failed or are under-performing, so that a site remains available and accessible even during the failure of one or more systems;
Manageability: By abstracting the server infrastructure from the end user, load balancing makes it easy to remove nodes for maintenance (software or hardware upgrades or scheduled reboots) without interrupting the user experience.

Load Balancing also addresses performance optimization, supporting Traffic Manager's ability to deliver the best possible service level from your server infrastructure.

How does load balancing work?

For each request, Traffic Manager will select a 'pool' of servers to handle that request. A pool represents a collection of servers ('nodes') that each perform the same function (such as hosting a web application). The pool specifies a load-balancing algorithm that determines which of the nodes in that pool should be selected to service that request.

In some cases, Traffic Manager will then make a new connection to the selected node and forward the request across that connection. In the case of HTTP, Traffic Manager maintains a collection of idle 'keepalive' connections to the nodes, and will use one of these established connections in preference to creating a new connection. This reduces latency and reduces the connection-handling overhead on each server node.

What Load Balancing methods are available?

Traffic Manager offers several load balancing algorithms:

Round Robin and Weighted Round Robin: With these simple algorithms, the traffic manager cycles through the list of server nodes, picking the next one in turn for each request that it load-balances. If nodes are assigned specific weights, then they are selected more or less frequently in proportion to their weights.
Random: The traffic manager selects a node from the pool at random each time it performs a load-balancing decision.
Least Connections and Weighted Least Connections: The traffic manager maintains a count of the number of ongoing transactions against each node. On each load balancing decision, the traffic manager selects the node with the fewest ongoing connections. Weights may be applied to each node to indicate that the node is capable of handling more or fewer concurrent transactions than its peers.
Fastest Response Time: The traffic manager maintains a rolling average of the response time of each node. When it makes a load-balancing decision, it selects the node with the lowest average response time.
Perceptive: The Perceptive method addresses undesired behaviors of the Least Connections and Fastest Response Time algorithms, blending their information to predict the optimal node based on past performance and current load.

A TrafficScript rule can override the load balancing decision, using either the 'named node' session persistence method to specify which node in the pool should be used, or the 'forward proxy' capability to ignore the list of nodes in the pool entirely and explicitly specify the target node (IP address and port) for the request.

What factors influence the load balancing decision?

Other than the method chosen and the weights, a number of other factors influence the load-balancing decision:

Health Monitoring: Traffic Manager monitors the health and correct operation of each node, using both synthetic transactions (built-in and user-defined) and passive monitoring of real transactions. If a node consistently fails to meet the health and operation parameters, Traffic Manager will temporarily remove it from future load-balancing decisions until health checks indicate that it is operating correctly again.
Session Persistence: Session Persistence policies override the load balancing decision and may be used to easily pin transactions within the same session to the same server node. This behavior is mandatory for stateful HTTP applications, and is useful for HTTP applications that share state but gain performance improvements if local state caches are used effectively.
Locality-aware Request Distribution (LARD): LARD is automatically used to influence the least-connections, fastest-response-time and perceptive load balancing decisions for HTTP traffic. If the metrics used for load-balancing decisions are finely balanced (for example, several nodes have very similar response times or current connection counts), then Traffic Manager will also consider the specific URL being requested and will favor nodes that have served that URL recently. These nodes are more likely to have the requested content in memory or in cache, and are likely to respond more quickly than nodes that have not serviced that request recently.
Past History: The perceptive algorithm builds a past history of node performance and uses this in its load balancing decision. If a new node is introduced into the cluster, or a failed node recovers, no history exists for that node. The Perceptive algorithm performs a 'gradual start' of that node, slowly ramping up the amount of traffic to it until its performance stabilizes. The gradual start avoids the problem of a node with unknown performance being immediately overloaded with more traffic than it can cope with, and the duration of the ramp-up adapts to how quickly and reliably the node responds.

What is connection draining?

To assist administrators who need to take a node out of service, Traffic Manager provides a 'connection draining' capability. If a node is marked as 'draining', Stingray will not consider it during the load balancing decision and no new connections will be made to that node. Existing connections can run to completion, and established, idle HTTP connections will be shut down.

However, session persistence classes override load balancing decisions. If any sessions have been established to the draining node, then requests in those sessions will continue to use that node.
There is no automatic way to determine when a client session has completed, but Traffic Manager provides a 'most recently used' report that indicates when a node was last used. For example, if you are prepared to time sessions out after 20 minutes, then you can safely remove the node from the pool once the 'most recently used' measure exceeds 20 minutes.

Administrators may also mark nodes as 'disabled'. This has the same effect as 'draining', except that existing sessions are not honored and health monitors are not invoked against 'disabled' nodes. Once a node is 'disabled', it can be safely shut down and reintroduced later.

What Load Balancing method is best?

Least Connections is generally the best load-balancing algorithm for homogeneous traffic, where every request puts the same load on the back-end server and where every back-end server has the same performance. The majority of HTTP services fall into this situation. Even if some requests generate more load than others (for example, a database lookup compared to an image retrieval), the Least Connections method will evenly distribute requests across the machines, and if there are sufficient requests of each type, the load will be very effectively shared. However, Least Connections is not appropriate when infrequent high-load requests cause significant slowdowns.

The Fastest Response Time algorithm will send requests to the server that is performing best (responding most quickly), but it is a reactive algorithm (it only notices slowdowns after the event), so it can often overload a fast server and create a choppy performance profile.

Perceptive is designed to take the best features of both Least Connections and Fastest Response Time. It adapts according to the nature of the traffic and the performance of the servers; it will lean towards 'least connections' when traffic is homogeneous, and 'fastest response time' when the loads are very variable. It uses a combination of the number of current connections and recent response times to trend and predict the performance of each server.

Under this algorithm, traffic is introduced to a new server (or a server that has returned from a failed state) gently, and is progressively ramped up to full operability. When a new server is added to a pool, the algorithm tries it with a single request, and if it receives a reply, it gradually increases the number of requests it sends to the new server until it is receiving the same proportion of the load as other equivalent nodes in the pool. This ramping is done in an adaptive way, dependent on the responsiveness of the server. So, for example, a new web server serving a small quantity of static content will very quickly be ramped up to full speed, whereas a Java application server that compiles JSPs the first time they are used (and so is slow to respond to begin with) will be ramped up more slowly.

Least Connections is simpler and more deterministic than Perceptive, so should be used in preference when possible.

When are requests retried?

Traffic Manager monitors the response from each node when it forwards a request to it. Timeouts quickly detect failures of various types, and simple checks on the response body detect server failures. Under certain, controlled circumstances, Traffic Manager will retry the request against another node in the pool.
Traffic Manager will only retry requests that are judged to be 'idempotent' (based on guidelines in the HTTP specification - this includes requests that use the GET and HEAD methods), or requests that failed completely against the server (no request data was written before the failure was detected). This goes a long way towards avoiding undesired side effects, such as processing a financial transaction twice.

In rare cases, the guidelines may not apply. An administrator can easily indicate that all requests processed by a virtual server are non-idempotent (so should never be retried), or can selectively specify the status of each request to override the default decision.

Detecting and retrying when an application generates an error

Traffic Manager rules can also force requests to be retried. For example, a response rule might inspect a response, judge that it is not appropriate, and then instruct the traffic manager to retry the request against a different node (see the sketch at the end of this article): Hiding Application Errors

Rules can also transparently prompt a client device to retry a request with a different URL. For example, a rule could detect 404 Not Found errors and prompt the client to try requesting the parent URL, working up the URL hierarchy until the client receives a valid response or cannot proceed any further (i.e. past the root page at '/'): No more 404 Not Found...?

Global Load Balancing

Traffic Manager also provides a 'Global Server Load Balancing' capability that manages DNS lookups to load-balance users across multiple datacenters. This capability functions in a different fashion to the server load balancing described in this brief.

Conclusion

ADCs today provide much more granular control over all areas that affect application performance. The ability to deliver advanced layer 7 services and enhanced application performance with ADCs is built on the foundation of basic load balancing technology. Traffic Manager (vTM) is a full software and virtual ADC that has been designed as a full-proxy, layer 7 load balancer. Traffic Manager's load balancing fabric enables applications to be delivered from any combination of physical, virtual or cloud-based datacenters.
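As an illustration of the response-rule retry technique described above, here is a minimal sketch. It assumes the request is safe to retry; the functions request.avoidNode() and request.retry() are taken from the TrafficScript reference, but check their availability in your Traffic Manager version:

# Response rule: retry a server error once against a different node
if( http.getResponseCode() >= 500 ) {
   if( connection.data.get( "retried" ) != "1" ) {
      connection.data.set( "retried", "1" );     # Only retry once
      request.avoidNode( connection.getNode() ); # Don't pick the failing node again
      request.retry();                           # Replay the request against the pool
   }
}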
A document to hold useful regular expressions that I have pulled together for things. RegExr is a great and very handy online tool for checking regular expression matches: RegExr

A regex to validate a password string, ensuring it does not contain dangerous punctuation characters and is less than 20 characters long. Useful for Stingray Application Firewall form-field protection in login pages:

^[^;,{}\[\]\$\%\*\(\)<>:?\\/'"`]{0,20}$

A regex to check that a password has at least one uppercase letter, one lowercase letter, one digit and one punctuation character from the approved list, and is at least 8 but less than 20 characters:

^(?=.*[A-Z])(?=.*[a-z])(?=.*[\\@^!\.,~-])(?=.*\d)(.{8,20})$

A regex to check that a field contains a valid email address:

^[^@]+@[^@]+\.[^@]+$
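As an illustration, the email pattern above could be applied to a submitted form field in a TrafficScript request rule along these lines (a sketch; the field name and the error response are hypothetical):

$email = http.getFormParam( "email" );

# Reject the request if the field does not contain a valid email address
if( !string.regexmatch( $email, "^[^@]+@[^@]+\\.[^@]+$" ) ) {
   http.sendResponse( 400, "text/plain", "Please supply a valid email address", "" );
}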
This article describes how to inspect and load-balance WebSockets traffic using Stingray Traffic Manager, and, when necessary, how to manage WebSockets and HTTP traffic that is received on the same IP address and port.

Overview

WebSockets is an emerging protocol that is used by many web developers to provide responsive and interactive applications. It is commonly used for chat and email applications, real-time games, and stock market and other monitoring applications.

By design, WebSockets is intended to resemble HTTP. It is transported over tcp/80, and the initial handshake resembles an HTTP transaction, but the underlying protocol is a simple bidirectional TCP connection. For more information on the protocol, refer to the Wikipedia summary and RFC 6455.

Basic WebSockets load balancing

Basic WebSockets load balancing is straightforward. You must use the 'Generic Streaming' protocol type to ensure that Stingray correctly handles the asynchronous nature of WebSockets traffic.

Inspecting and modifying the WebSocket handshake

A WebSocket handshake message resembles an HTTP request, but you cannot use the built-in http.* TrafficScript functions to manage it, because these are only available in HTTP-type virtual servers. The libWebSockets.rts library (see below) implements analogous functions that you can use instead.

Paste the libWebSockets.txt library to your Rules catalog and reference it from your TrafficScript rule as follows:

import libWebSockets.rts as ws;

You can then use the ws.* functions to inspect and modify WebSockets handshakes. Common operations include fixing up host headers and URLs in the request, and selecting the target servers (the 'pool') based on the attributes of the request:

import libWebSockets.rts as ws;

if( ws.getHeader( "Host" ) == "echo.example.com" ) {
   ws.setHeader( "Host", "www.example.com" );
   ws.setPath( "/echo" );
   pool.use( "WebSockets servers" );
}

Ensure that the rules associated with the WebSockets virtual server are configured to run at the Request stage, and to run 'Once', not 'Every'. The rule should be triggered just once, to read and process the initial client handshake; it does not need to run against subsequent messages in the WebSocket connection (code to handle the WebSocket handshake should be configured as a Request Rule, with 'Run Once').

SSL-encrypted WebSockets

Stingray can SSL-decrypt TCP connections, and this operates fully with the SSL-encrypted wss:// protocol:

- Configure your virtual server to listen on port 443 (or another port if necessary)
- Enable SSL decryption on the virtual server, using a suitable certificate

Note that when testing this capability, we found that Chrome refused to connect to WebSocket services with untrusted or invalid certificates, and did not issue a warning or prompt to trust the certificate. Other web browsers may behave similarly. In Chrome's case, it was necessary to access the virtual server directly (https://), save the certificate and then import it into the certificate store.

Stingray can also SSL-encrypt downstream TCP connections (enable SSL encryption in the pool containing the real WebSocket servers), and this operates fully with SSL-enabled origin WebSockets servers.

Handling HTTP and WebSockets traffic

HTTP traffic should be handled by an HTTP-type virtual server rather than a Generic Streaming one.
HTTP virtual servers can employ HTTP optimizations (keepalive handling, HTTP upgrades, compression, caching, HTTP session persistence) and can access the http.* TrafficScript functions in their rules.

If possible, you should run two public-facing virtual servers, listening on two separate IP addresses. For example, HTTP traffic should be directed to www.site.com (which resolves to the public IP for the HTTP virtual server) and WebSockets traffic should be directed to ws.site.com (resolving to the other public IP): configure two virtual servers, each listening on the appropriate IP address.

Sometimes this is not possible: the WebSockets code is hardwired to the main www domain, or it is not possible to obtain a second public IP address. In that case, all traffic can be directed to the WebSockets virtual server, and HTTP traffic can then be demultiplexed and forwarded internally to an HTTP virtual server (listen on a single IP address, and split off the HTTP traffic to a second HTTP virtual server).

The following TrafficScript code, attached to the 'WS Virtual Server', will detect whether the request is an HTTP request (rather than a WebSockets one) and hand the request off internally to an HTTP virtual server by way of a special 'Loopback Pool':

import libWebSockets.rts as ws;

if( !ws.isWS() ) pool.use( "Loopback Pool" );

Notes: Testing WebSockets

The implementation described in this article was developed using the following browser-based client, load-balancing traffic to public 'echo' servers (ws://echo.websocket.org/, wss://echo.websocket.org, ws://ajf.me:8080/): testclient.html

At the time of testing:
- echo.websocket.org did not respond to ping tests, so the default ping health monitor needed to be removed
- Chrome 24 refused to connect to SSL-enabled wss resources unless they had a trusted certificate, and did not warn otherwise

If you find this solution useful, please let us know in the comments below.
Following up on this earlier article, try using the TrafficScript code snippet below to automatically insert the Google Analytics code on all your web pages. To use it:

1. Copy the rule onto your Stingray Traffic Manager by first navigating to Catalogs -> Rules.
2. Scroll down to Create new rule, give the rule a name, and select Use TrafficScript Language. Click Create Rule to create the rule.
3. Copy and paste the rule below.
4. Change $account to your Google Analytics account number.
5. If you are using multiple domains as described here, set $multiple_domains to TRUE and set $tld to your Top Level Domain as specified in your Google Analytics account.
6. Set the rule as a Response Rule in your Virtual Server by navigating to Services -> Virtual Servers -> <your virtual server> -> Rules -> Response Rules and Add rule.

After that you should be good to go. There is no need to individually modify your web pages; TrafficScript will take care of it all.

#
# Replace UA-XXXXXXXX-X with your Google Analytics Account Number
#
$account = 'UA-XXXXXXXX-X';

#
# If you are tracking multiple domains, i.e. yourdomain.com,
# yourdomain.net, etc., then set $multiple_domains to TRUE and
# replace yourdomain.com with your Top Level Domain as specified
# in your Google Analytics account
#
$multiple_domains = FALSE;
$tld = 'yourdomain.com';

#
# Only modify text/html pages
#
if( !string.startsWith( http.getResponseHeader( "Content-Type" ), "text/html" ))
   break;

#
# This variable contains the code to be inserted in the web page. Do not modify.
#
$html = "\n<script type=\"text/javascript\"> \n \
  var _gaq = _gaq || []; \n \
  _gaq.push(['_setAccount', '" . $account . "']); \n";

if( $multiple_domains == TRUE ) {
  $html .= " _gaq.push(['_setDomainName', '" . $tld . "']); \n \
  _gaq.push(['_setAllowLinker', true]); \n";
}

$html .= " _gaq.push(['_trackPageview']); \n \
  (function() { \n \
  var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; \n \
  ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; \n \
  var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); \n \
  })(); \n \
</script>\n";

#
# Insert the code right before the </head> tag in the page
#
$body = http.getResponseBody();
$body = string.replace( $body, "</head>", $html . "</head>");
http.setResponseBody( $body );
In many cases, it is desirable to upgrade a virtual appliance by deploying a virtual appliance at the newer version and importing the old configuration. For example, the size of the Traffic Manager disk image was increased in version 9.7, and deploying a new virtual appliance lets a customer take advantage of this larger disk. This article documents the procedure for deploying a new virtual appliance with the old configuration in common scenarios.

These instructions describe how to upgrade and reinstall Traffic Manager appliance instances (either in a cluster or standalone appliances). For instructions on upgrading on other platforms, please refer to Upgrading Traffic Manager.

Upgrading a standalone Virtual Appliance

This process will replace a standalone virtual appliance with another virtual appliance with the same configuration (including migrating network configuration). Note that the Traffic Manager Cloud Getting Started Guide contains instructions for upgrading a standalone EC2 instance from version 9.7 onwards; if upgrading from a version prior to 9.7 and using the Web Application Firewall, these instructions must be followed to correctly back up and restore any firewall configuration.

1. Make a backup of the traffic manager configuration (see section "System > Backups" in the Traffic Manager User Manual), and export it.
2. If you are upgrading from a version prior to 9.7 and are using the Web Application Firewall, back up the Web Application Firewall configuration: log on to a command line, run /opt/zeus/stop-zeus, and copy /opt/zeus/zeusafm/current/var/lib/config.db off the appliance.
3. Shut down the original appliance.
4. Deploy a new appliance with the same network interfaces as the original.
5. If you backed up the application firewall configuration earlier, restore it onto the new appliance before you restore the traffic manager configuration: copy the config.db file to /opt/zeus/stingrayafm/current/var/lib/config.db (overwriting the original), and check that the owner of the config.db file is root and the mode is 0644.
6. Import and restore the traffic manager configuration via the UI.
7. If you have application firewall errors, use the Diagnose page to automatically fix any configuration errors, then reset the Traffic Manager software.

Upgrading a cluster of Virtual Appliances (except Amazon EC2)

This process will replace the appliances in the cluster, one at a time, maintaining the same IP addresses. As the cluster will be reduced by one at points in the upgrade process, you should ensure that this is carried out at a time when the cluster is otherwise healthy, and that, of the n appliances in the cluster, the load can be handled by (n-1) appliances.

1. Before beginning the process, ensure that any cluster errors have been resolved.
2. Nominate the appliance which will be the last to be upgraded (call it the final appliance). When any of the other machines needs to be removed from the cluster, it should be done using the UI on this appliance, and when a hostname and port are required to join the cluster, this appliance's hostname should be used.
3. If you are using the Web Application Firewall, first ensure that vWAF on the final appliance in the cluster is upgraded to the most recent version, using the vWAF updater.
4. Choose an appliance to be upgraded, and remove the machine from the cluster: if it is not the final appliance (nominated in step 2), this should be done via the UI on the final appliance; if it is the final appliance, the UI on any other machine may be used.
5. Make a backup of the traffic manager configuration (System > Backups) on the appliance being upgraded, and export the backup. This backup only contains the machine-specific information for that appliance (networking config etc).
6. Shut down the appliance, and deploy a new appliance at the new version. When deploying, it needs to be given the identical hostname to the machine it is replacing.
7. Log on to the admin UI of the new appliance, and import and restore the backup from step 5.
8. If you are using the Web Application Firewall, accessing the Application Firewall tab in the UI will fail, and there will be an error on the Diagnose page and an 'Update Configuration' button. Click the Update Configuration button once, then wait for the error to clear. The configuration is now correct, but the admin server still needs to be restarted to pick up the configuration:

# $ZEUSHOME/admin/rc restart

Now upgrade the application firewall on the new appliance to the latest version.
9. Join into the cluster. For all appliances except the final appliance, you must not select any of the auto-detected existing clusters; instead, manually specify the hostname and port of the final appliance. When you are upgrading the final appliance, you should select the auto-detected existing cluster entry, which should now list all the other cluster peers.

If you are using the Web Application Firewall, there may be an issue where the config on the new machine has not synced the vWAF config from the old machine, and clicking the 'Update Application Firewall Cluster Status' button on the Diagnose page does not fix the problem. If this happens, first get the clusterPwd from the final appliance:

# grep clusterPwd /opt/zeus/zxtm/conf/zeusafm.conf
clusterPwd = <your cluster pwd>

On the new appliance, edit /opt/zeus/zxtm/conf/zeusafm.conf (with e.g. nano or vi), and replace the clusterPwd with the final appliance's clusterPwd. The moment that file is saved, vWAF should be restarted, and the config should be synced to the new machine correctly.

Once a cluster contains multiple versions, configuration changes must not be made until the upgrade has been completed, and 'Cluster conflict' errors are expected until the end of the process.

10. Repeat steps 4-9 until all appliances have been upgraded.

Upgrading a cluster of STM EC2 appliances

Because EC2 licenses are not tied to the IP address, it is recommended that new EC2 instances are deployed into a cluster before removing old instances. This ensures that the capacity of the cluster is not reduced during the upgrade process. This process is documented in the "Creating Traffic Manager Instances on Amazon EC2" chapter in the Traffic Manager Cloud Getting Started Guide. The clusterPwd may also need to be fixed as above.
This short article explains how you can match the IP addresses of remote clients against a DNS blacklist. In this example, we'll use the Spamhaus XBL blacklist service (http://www.spamhaus.org/xbl/). This article was updated following discussion and feedback from Ulrich Babiak - thanks!

Basic principles

The basic principle of a DNS-based blacklist such as Spamhaus' is as follows:

- Perform a reverse DNS lookup of the IP address in question, using xbl.spamhaus.org rather than the traditional in-addr.arpa domain
- Entries that are not in the blacklist don't return a response (NXDOMAIN); entries that are in the blacklist return a particular IP/domain response indicating their status

Important note: some public DNS servers don't respond to spamhaus.org lookups (see http://www.spamhaus.org/faq/section/DNSBL%20Usage#261). Ensure that Traffic Manager is configured to use a working DNS server.

Simple implementation

A simple implementation is as follows:

$ip = request.getRemoteIP();

# Reverse the IP, and append ".xbl.spamhaus.org".
$bytes = string.dottedToBytes( $ip );
$bytes = string.reverse( $bytes );
$query = string.bytesToDotted( $bytes ) . ".xbl.spamhaus.org";

if( $res = net.dns.resolveHost( $query ) ) {
   log.warn( "Connection from IP " . $ip . " should be blocked - status: " . $res );
   # Refer to Zen return codes at http://www.spamhaus.org/zen/
}

This implementation will issue a DNS request on every request, but Traffic Manager caches DNS responses internally, so there is little risk that you will overload the target DNS server with duplicate requests (see the Traffic Manager DNS settings in the Global configuration). You may wish to increase the dns!negative_expiry setting, because DNS lookups against non-blacklisted IP addresses will 'fail'.

A more sophisticated implementation may interpret the response codes and decide to block requests from proxies (the Spamhaus XBL list), while ignoring requests from known spam sources.

What if my DNS server is slow, or fails? What if I want to use a different resolver for the blacklist lookups?

One undesired consequence of this configuration is that it makes the DNS server a single point of failure and a performance bottleneck. Each unrecognised (or expired) IP address needs to be matched against the DNS server, and the connection is blocked while this happens. In normal usage, a single delay of 100ms or so against the very first request is acceptable, but a DNS failure (Stingray times out after 12 seconds by default) or slowdown is more serious.

In addition, Traffic Manager uses a single system-wide resolver for all DNS operations. If you are hosting a local cache of the blacklist, you'd want to separate DNS traffic accordingly.

Use Traffic Manager to manage the DNS traffic?

A potential solution would be to configure Traffic Manager to use itself (127.0.0.1) as a DNS resolver, and create a virtual server/pool listening on UDP:53. All locally-generated DNS requests would be delivered to that virtual server, which would then forward them to the real DNS server. The virtual server could inspect the DNS traffic and route blacklist lookups to the local cache, and other requests to a real DNS server.

You could then use a health monitor (such as the included dns.pl) to check the operation of the real DNS server and mark it as down if it has failed or times out after a short period.
In that event, the virtual server can determine that the pool is down ( pool.activenodes() == 0 ) and respond directly to the DNS request using a response generated by HowTo: Respond directly to DNS requests using libDNS.rts.

Re-implement the resolver

An alternative is to re-implement the TrafficScript resolver using Matthew Geldert's libDNS.rts: Interrogating and managing DNS traffic in Traffic Manager TrafficScript library to construct the queries and analyse the responses. Then you can use the TrafficScript function tcp.send() to submit your DNS lookups to the local cache (unfortunately, we've not got a udp.send function yet!):

sub resolveHost( $host, $resolver ) {
   import libDNS.rts as dns;

   $packet = dns.newDnsObject();
   $packet = dns.setQuestion( $packet, $host, "A", "IN" );
   $data = dns.convertObjectToRawData( $packet, "tcp" );

   $sock = tcp.connect( $resolver, 53, 1000 );
   tcp.write( $sock, $data, 1000 );
   $rdata = tcp.read( $sock, 1024, 1000 );
   tcp.close( $sock );

   $resp = dns.convertRawDatatoObject( $rdata, "tcp" );

   if( $resp["answercount"] >= 1 ) return $resp["answer"][0]["host"];
}

Note that we're applying 1000ms timeouts to each network operation.

Let's try this, and compare the responses from OpenDNS and from Google's DNS servers. Our 'bad guy' is 201.116.241.246, so we're going to resolve 246.241.116.201.xbl.spamhaus.org:

$badguy = "246.241.116.201.xbl.spamhaus.org";

$text .= "Trying OpenDNS...\n";
$host = resolveHost( $badguy, "208.67.222.222" );
if( $host ) {
   $text .= $badguy . " resolved to " . $host . "\n";
} else {
   $text .= $badguy . " did not resolve\n";
}

$text .= "Trying Google...\n";
$host = resolveHost( $badguy, "8.8.8.8" );
if( $host ) {
   $text .= $badguy . " resolved to " . $host . "\n";
} else {
   $text .= $badguy . " did not resolve\n";
}

http.sendResponse( 200, "text/plain", $text, "" );

(This is just a snippet - remember to paste the resolveHost() implementation, and anything else you need, in here.)

This illustrates that OpenDNS resolves the spamhaus.org domain fine, and Google does not issue a response.

Caching the responses

This approach has one disadvantage: because it does not use Traffic Manager's resolver, it does not cache the responses, so you'll hit the resolver on every request unless you cache the responses yourself. Here's a function that calls the resolveHost function above, and caches the result locally for 3600 seconds. It returns 'B' for a bad guy, and 'G' for a good guy:

sub getStatus( $ip, $resolver ) {
   $key = "xbl-spamhaus-org-" . $resolver . "-" . $ip; # Any key prefix will do

   $cache = data.get( $key );
   if( $cache ) {
      $status = string.left( $cache, 1 );
      $expiry = string.skip( $cache, 1 );

      if( $expiry < sys.time() ) {
         data.remove( $key );
         $status = "";
      }
   }

   if( !$status ) {
      # We don't have a (valid) entry in our cache, so look the IP up.

      # Reverse the IP, and append ".xbl.spamhaus.org".
      $bytes = string.dottedToBytes( $ip );
      $bytes = string.reverse( $bytes );
      $query = string.bytesToDotted( $bytes ) . ".xbl.spamhaus.org";

      $host = resolveHost( $query, $resolver );

      if( $host ) {
         $status = "B";
      } else {
         $status = "G";
      }
      data.set( $key, $status . (sys.time()+3600) );
   }
   return $status;
}
This article uses the libDNS.rts trafficscript library as described in libDNS.rts: Interrogating and managing DNS traffic in Stingray. In this example, we intercept DNS requests and respond directly for known A records.

The request rule

import libDNS.rts as dns;

# Map domain names to lists of IP addresses they should resolve to
$ipAddresses = [
   "dev1.ha.company.internal." => [ "10.1.1.1", "10.2.1.1" ],
   "dev2.ha.company.internal." => [ "10.1.1.2", "10.2.1.2" ]
];

$packet = dns.convertRawDataToObject( request.get(), "udp" );

# Ignore unparsable packets and query responses to avoid
# attacks like the one described in CVE-2004-0789.
if( hash.count( $packet ) == 0 || $packet["qr"] == "1" ) {
   break;
}

$host = $packet["question"]["host"];

if( hash.contains( $ipAddresses, $host )) {
   foreach( $ip in $ipAddresses[$host] ) {
      $packet = dns.addResponse( $packet, "answer", $host, $ip, "A", "IN", "60", [] );
   }
   $packet["aa"] = "1"; # Make the answer authoritative
} else {
   $packet["rcode"] = "0011"; # Set NXDOMAIN error
}

$packet["qr"] = "1"; # Changes the packet to a response
$packet["ra"] = "1"; # Pretend that we support recursion

request.sendResponse( dns.convertObjectToRawData( $packet, "udp" ));
The Pulse Virtual Traffic Manager Kernel Modules may be installed on a supported Linux system to enable advanced networking functionality: Multi-Hosted Traffic IP Addresses.

Notes: Earlier versions of this package contained two modules: ztrans (for IP Transparency) and zcluster (for Multi-Hosted Traffic IP Addresses). The Pulse Virtual Traffic Manager software has supported IP Transparency without requiring the ztrans kernel module since version 10.1, and the attached version of the Kernel Modules package only contains the zcluster module. The Kernel Module is pre-installed in Pulse Secure Virtual Traffic Manager Appliances, and in Cloud images where applicable. The Kernel Modules are not available for Solaris.

The Multi-hosted IP Module (zcluster)

The Multi-hosted IP Module allows a set of clustered Traffic Managers to share the same IP address. The module manipulates ARP requests to deliver connections to a multicast group that the machines in the cluster subscribe to. Responsibility for processing data is distributed across the cluster so that all machines process an equal share of the load. Refer to the User Manual (Pulse Virtual Traffic Manager Product Documentation) for details of how to configure multi-hosted Traffic IP addresses. zcluster is supported for kernel versions up to and including version 5.2.

Installation

Prerequisites

Your build machine must have the kernel header files and appropriate build tools to build kernel modules. You may build the modules on one machine and copy them to an identical machine if you wish to avoid installing build tools and kernel headers on your production traffic manager.

Installation

Unpack the kernel modules tarball, and cd into the directory created:

# tar -xzf pulse_vtm_modules_installer-2.14.tgz
# cd pulse_vtm_modules_installer-2.14

Review the README within for late-breaking news and to confirm kernel version compatibility.

As root, run the installation script install_modules.pl to install the zcluster module:

# ./install_modules.pl

If installation is successful, restart the vTM software:

# $ZEUSHOME/restart-zeus

If the installation fails, please refer to the error message given, and to the distribution-specific guidelines you will find in the README file inside the pulse_vtm_modules_installer package.

Kernel Upgrades

If you upgrade your kernel, you will need to re-run the install_modules.pl script to re-install the modules after the kernel upgrade is completed.

Latest Packages

Packages for the kernel modules are now available via the normal Pulse Virtual Traffic Manager download service.
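After installation, you can check that the module was built and loaded (a sketch; the module name is assumed from the package description, and output varies by kernel):

# Confirm the zcluster module is present
lsmod | grep zcluster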
A TrafficScript rule for load balancing MS Terminal Services when the Session Broker service is being used, as discussed...
We're really excited to present a preview of our next big development in content-aware application delivery. Our Web Accelerator technology prepares your content for optimal delivery over high-latency networks; our soon-to-be-announced Latitude-aware Content Optimization will further optimize it for correct rendering in the client device, no matter where the observer is relative to the content origin.

Roadmap disclaimer: This forward-looking statement is for information purposes only and is not a commitment, promise or legal obligation to deliver any new products, features or functionality. Any announcements are conditional on successful in-the-field tests of this technology.

"Here comes the science bit"

Individual binary digits have rotational symmetry and can survive transmission across equatorial boundaries intact. Layer 1 encoding schemes such as Differential Manchester Encoding are similarly immune to polarity changes and protect on-the-wire data against these effects as far as layer 4, ensuring TCP connections operate correctly. However, layer 7 content suffers from an inversion transformation when generated in one hemisphere and observed in the other.

Our solution has been tested against a number of websites, including our own (https://splash.riverbed.com - see attachment below) with a good degree of success. In its current beta state, you can try it against other sites (YMMV).

Getting started

If you haven't got a Traffic Manager handy, download and install the Community Edition.

Proxying a website to test the optimization

The following instructions explain how to proxy splash.riverbed.com. For a more general overview, check out Getting Started - Load-balancing to a website using Traffic Manager.

Create a pool named 'splash pool', containing the node splash.riverbed.com:443. Ensure that SSL decryption is turned on.

Create a virtual server named 'splash server', listening on an available port (e.g. 8088), HTTP protocol (no SSL). Configure the virtual server to use the pool 'splash pool', and make sure that Connection Management -> Location Header Settings -> location!rewrite is set to 'Rewrite the hostname…'.

Verify that you can access and browse Splash through the IP of your Traffic Manager: http://stingray-ip:8088/

Applying the optimization

Now we'll apply our content optimization. This optimization is implemented by way of a response rule:

$ct = http.getResponseHeader( "Content-Type" );

# We only need to embed client-side TrafficScript in HTML content
if( !string.startsWith( $ct, "text/html" ) ) break;

# Will this data cross the equatorial boundary?
# Edit this test if necessary for testing purposes
$serverlat = geo.getLatitude( request.getLocalIP() );
$clientlat = geo.getLatitude( request.getRemoteIP() );
if( $serverlat * $clientlat > 0 ) break;

$body = http.getResponseBody();

# Build client-side TrafficScript code
$tsinterpreter="PHNjcmlwdCBzcmM9Imh0dHA6Ly9hamF4Lmdvb2dsZWFwaXMuY29tL2FqYXgvbGlicy9qcXVlcnkvMS45LjEvanF1ZXJ5Lm1pbi5qcyI+PC9zY3JpcHQ+DQo8c3R5bGUgdHlwZT0idGV4dC9jc3MiPg0KLmxvb2ZsaXJwYSB7IHRyYW5zZm9ybTpyb3RhdGUoLTE4MGRlZyk7LXdlYmtpdC10cmFuc2Zvcm06cm90YXRlKC0xODBkZWcpOy1tb3otdHJhbnNmb3JtOnJvdGF0ZSgtMTgwZGVnKTstby10cmFuc2Zvcm06cm90YXRlKC0xODBkZWcpOy1tcy10cmFuc2Zvcm06cm90YXRlKC0xODBkZWcpIH0NCjwvc3R5bGU+DQo8c2NyaXB0IHR5cGU9InRleHQvamF2YXNjcmlwdCI+DQpzZWxlY3Rvcj0iZGl2LHAsdWwsbGksdGQsbmF2LHNlY3Rpb24saGVhZGVyLHRhYmxlLHRib2R5LHRyLHRkLGgxLGgyLGgzLGg0LGg1LGg2IjsNCg0KZnVuY3Rpb24gVHJhZmZpY1NjcmlwdENhbGxTdWIoIGkgKSB7DQogICBpZiggaSUzPT0wICkgdDAoICQoImJvZHkiKSApDQogICBlbHNlIGlmKCBpJTM9PTEgKSB0MSggJCgiYm9keSIpICkNCiAgIGVsc2UgdDIoICQoImJvZHkiKSApOw0KfQ==";

$sub0="ZnVuY3Rpb24gdDAoIGUgKSB7DQogICBjID0gZS5jaGlsZHJlbihzZWxlY3Rvcik7DQogICBpZiggYy5sZW5ndGggKSB7DQogICAgICB4ID0gZmFsc2U7IGMuZWFjaCggZnVuY3Rpb24oKSB7IHggfD0gdDAoICQodGhpcykgKSB9ICk7DQogICAgICBpZiggIXggKSBlLmFkZENsYXNzKCAibG9vZmxpcnBhIiApOw0KICAgICAgcmV0dXJuIHRydWU7DQogICB9DQogICByZXR1cm4gZmFsc2U7DQp9DQo=";

$sub1="ZnVuY3Rpb24gdDEoIGUgKSB7DQogICBjID0gZS5jaGlsZHJlbihzZWxlY3Rvcik7DQogICBpZiggYy5sZW5ndGggKSBjLmVhY2goIGZ1bmN0aW9uKCkgeyB0MSggJCh0aGlzKSApIH0gKTsNCiAgIGVsc2UgZS5hZGRDbGFzcyggImxvb2ZsaXJwYSIgKTsNCn0NCg==";

$sub2="ZnVuY3Rpb24gdDIoIGUgKSB7DQogICAkKCJwLGxpLGgxLGgyLGgzLGg0LGg1LGg2LGltZyx0ZCxkaXY+YSIpLmFkZENsYXNzKCAibG9vZmxpcnBhIiApOw0KICAgJCgiZGl2Om5vdCg6aGFzKGRpdixsaSxoMSxoMixoMyxoNCxoNSxoNixpbWcsdGQsYSkpIikuYWRkQ2xhc3MoICJsb29mbGlycGEiICk7DQp9DQo=";

$cleanup="PC9zY3JpcHQ+";

$exec = string.base64decode( $tsinterpreter ) .
        string.base64decode( $sub0 ) .
        string.base64decode( $sub1 ) .
        string.base64decode( $sub2 ) .
        string.base64decode( $cleanup );

# Invoke client-side code from JavaScript; edit to call $sub0, $sub1 or $sub2
$call = '<script type="text/javascript">
// Call client-side subroutines 0, 1 or 2
$(function() { TrafficScriptCallSub( 0 ) } );
</script>';

$body = string.replace( $body, "<head>", "<head>" . $exec . $call );
http.setResponseBody( $body );

Remember this is just in beta, and any future release is conditional on successful deployments in the field. Enjoy, share and let us know how effectively this works for you.
This document provides step-by-step instructions on how to set up Brocade Virtual Traffic Manager for Microsoft Exchange 2010. "This document has been updated from the original deployment guides written for Riverbed Stingray and SteelApp software."
A user commented that Stingray Traffic Manager sometimes adds a cookie named 'X-Mapping-SOMERANDOMDATA' to an HTTP response, and wondered what the purpose of this cookie was, and whether it constituted a privacy or security risk.

Transparent Session Affinity

The cookie is used by Stingray's 'Transparent Session Affinity' persistence class. Transparent session affinity inserts cookies into the HTTP response to track sessions. This is generally the most appropriate method for HTTP and SSL-decrypted HTTPS traffic, because it does not require the nodes to set any cookies in their responses.

The persistence class adds a cookie to the HTTP response that identifies the name of the session persistence class and the chosen back-end node:

Set-Cookie: X-Mapping-hglpomgk=4A3A3083379D97CE4177670FEED6E830; path=/

When subsequent requests in that session are processed and the same session persistence class is invoked, it inspects the requests to determine whether the named cookie exists. If it does, the persistence class inspects the value of the cookie to determine the node to use.

The unique identifier in the cookie name is a hashed version of the name of the session persistence class (there may be multiple independent session persistence rules in use). When the traffic manager processes a request, it can then identify the correct cookie for the active session persistence class.

The value of the cookie is a hashed version of the name of the selected node in the cluster. It is non-reversible by an external party. The value identifies which server the session should be persisted to. There is no personally-identifiable information in the cookie. Two independent users who access the service, are managed by the same session persistence class and are routed to the same back-end server will be assigned the same cookie name and value.
This document describes some operating system tunables you may wish to apply to a production Stingray Traffic Manager instance. Note that the kernel tunables only apply to Stingray Traffic Manager software installed on a customer-provided Linux instance; they do not apply to the Stingray Traffic Manager Virtual Appliance or Cloud instances.

Consider the tuning techniques in this document when:

- Running Stingray on a severely-constrained hardware platform, or where Stingray should not seek to use all available resources;
- Running in a performance-critical environment;
- The Stingray host appears to be overloaded (excessive CPU or memory usage);
- Running with very specific traffic types, for example, large video downloads or heavy use of UDP;
- Any time you see unexpected errors in the Stingray event log or the operating system syslog that relate to resource starvation, dropped connections or performance problems.

For more information on performance tuning, start with the Tuning Stingray Traffic Manager article.

Basic Kernel and Operating System tuning

Most modern Linux distributions have sufficiently large defaults, and many tables are autosized and growable, so it is often not necessary to change tunings. The values below are recommended for typical deployments on a medium-to-large server (8 cores, 4 GB RAM). (Tech tip: How to apply kernel tunings on Linux.)

File descriptors

# echo 2097152 > /proc/sys/fs/file-max

Set a minimum of one million file descriptors unless resources are seriously constrained. See also the Stingray setting maxfds below.

Ephemeral port range

# echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range
# echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout

Each TCP and UDP connection from Stingray to a back-end server consumes an ephemeral port, and that port is retained for the 'fin_timeout' period once the connection is closed. If back-end connections are frequently created and closed, it's possible to exhaust the supply of ephemeral ports. Increase the port range to the maximum (as above) and reduce the fin_timeout to 30 seconds if necessary.

SYN cookies

# echo 1 > /proc/sys/net/ipv4/tcp_syncookies

SYN cookies should be enabled on a production system. The Linux kernel will process connections normally until the backlog grows, at which point it will use SYN cookies rather than storing local state. SYN cookies are an effective protection against SYN floods, one of the most common DoS attacks against a server. If you are seeking a stable test configuration as a basis for other tuning, you should disable SYN cookies. Increase the size of net/ipv4/tcp_max_syn_backlog if you encounter dropped connection attempts.

Request backlog

# echo 1024 > /proc/sys/net/core/somaxconn

The request backlog contains TCP connections that are established (the 3-way handshake is complete) but have not been accepted by the listening socket (Stingray). See also the Stingray tunable 'listen_queue_size'. Restart the Stingray software after changing this value. If the listen queue fills up because Stingray does not accept connections sufficiently quickly, the kernel will quietly ignore additional connection attempts. Clients will then back off (they assume packet loss has occurred) before retrying the connection.
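Values written with echo under /proc do not survive a reboot. To persist the settings above, the equivalent sysctl keys (a sketch; confirm the key names against your distribution) can be added to /etc/sysctl.conf:

# /etc/sysctl.conf - persist the basic tunings across reboots
fs.file-max = 2097152
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_syncookies = 1
net.core.somaxconn = 1024

# Apply immediately without rebooting:
# sysctl -p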
Advanced kernel and operating system tuning

In general, it's rarely necessary to further tune Linux kernel internals because the default values that are selected on a normal-to-high-memory system are sufficient for the vast majority of Stingray deployments, and most kernel tables will automatically resize if necessary. Any problems will be reported in the kernel logs; dmesg is the quickest and most reliable way to check the logs on a live system.

Packet queues

In 10 GbE environments, you should consider increasing the size of the input queue:

# echo 5000 > /proc/sys/net/core/netdev_max_backlog

TCP TIME_WAIT tuning

TCP connections reside in the TIME_WAIT state in the kernel once they are closed. TIME_WAIT allows the server to time out connections it has closed in a clean fashion.

If you see the error "TCP: time wait bucket table overflow", consider increasing the size of the table used to store TIME_WAIT connections:

# echo 7200000 > /proc/sys/net/ipv4/tcp_max_tw_buckets

TCP slow start and window sizes

In earlier Linux kernels (pre-2.6.39), the initial TCP window size was very small. The impact of a small initial window size is that peers communicating over a high-latency network will take a long time (several seconds or more) to scale the window to utilize the full bandwidth available; often the connection will complete (albeit slowly) before an efficient window size has been negotiated.

The 2.6.39 kernel increases the default initial window size from 2 to 10. If necessary, you can tune it manually:

# ip route change default via 192.168.1.1 dev eth0 proto static initcwnd 10

If a TCP connection stalls, even briefly, the kernel may reduce the TCP window size significantly in an attempt to respond to congestion. Many commentators have suggested that this behavior is not necessary, and that this "slow start" behavior should be disabled:

# echo 0 > /proc/sys/net/ipv4/tcp_slow_start_after_idle

TCP options for Spirent load generators

If you are using older Spirent test kit, you may need to set the following tunables to work around optimizations in their TCP stack:

# echo 0 > /proc/sys/net/ipv4/tcp_timestamps
# echo 0 > /proc/sys/net/ipv4/tcp_window_scaling

[Note: See attachments for the above changes in an easy-to-run shell script]

irqbalance

Interrupts (IRQs) are wake-up calls to the CPU when new network traffic arrives. The CPU is interrupted and diverted to handle the new network data. Most NIC drivers will buffer interrupts and distribute them as efficiently as possible. When running on a machine with multiple CPUs/cores, interrupts should be distributed across cores roughly evenly; otherwise, one CPU can become the bottleneck in high network traffic.

The general-purpose approach in Linux is to deploy irqbalance, which is a standard package on most major Linux distributions. Under extremely high interrupt load, you may see one or more ksoftirqd processes exhibiting high CPU usage. In this case, you should configure your network driver to use multiple interrupt queues (if supported) and then manually map those queues to one or more CPUs using SMP affinity.

Receive-Side Scaling (RSS)

Modern network cards can maintain multiple receive queues. Packets within a particular TCP connection can be pinned to a single receive queue, and each queue has its own interrupt. You can map interrupts to CPU cores to control which core each packet is delivered to.
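Before raising tcp_max_tw_buckets, it can be useful to measure how many TIME_WAIT sockets the system is actually holding and to confirm whether the overflow message has been logged. A quick sketch, assuming the iproute2 'ss' utility is available:

# Count sockets currently in TIME_WAIT (subtract one for the header line)
ss -tan state time-wait | wc -l

# Check the kernel log for the overflow message
dmesg | grep -i 'time wait bucket'

If the count stays well below the current tcp_max_tw_buckets value and the log message never appears, this tunable can be left at its default.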
This affinity delivers better performance by distributing traffic evenly across cores and by improving connection locality (a TCP connection is processed by a single core, improving CPU affinity).

For optimal performance, you should:

Allow the Stingray software to auto-size itself to run one process per CPU core (two when using hyperthreading), i.e. do not modify the num_children configurable.
Configure the network driver to create as many queues as you have cores, and verify the IRQs that the driver will raise per queue by checking /proc/interrupts.
Map each queue interrupt to one core using /proc/irq/<irq-number>/smp_affinity, as sketched below.

The precise steps are specific to the network card and drivers you have selected. This document from the Linux Kernel Source Tree gives a good overview, and you should refer to the technical documentation provided by your network card vendor.

[Updated by Aidan Clarke to include a shell script to make it easier to deploy the changes above]
[Updated by Aidan Clarke to update the link from the old Google Code page to the new repository in the Linux Kernel Source Tree after feedback of an outdated link from Rick Henderson]
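As a concrete illustration of the queue-to-core mapping described above, here is a minimal sketch assuming a NIC driver that exposes two receive queues named eth0-rx-0 and eth0-rx-1 (the queue names and IRQ numbers are hypothetical; check /proc/interrupts on your own system):

# Find the IRQ numbers assigned to each receive queue
grep eth0-rx /proc/interrupts

# Suppose the queues were reported as IRQs 50 and 51; pin one queue per core.
# smp_affinity takes a hexadecimal CPU bitmask: 1 = core 0, 2 = core 1, 4 = core 2, ...
echo 1 > /proc/irq/50/smp_affinity   # eth0-rx-0 -> core 0
echo 2 > /proc/irq/51/smp_affinity   # eth0-rx-1 -> core 1

Note that irqbalance may rewrite these masks; if you pin interrupts manually, consider disabling irqbalance or excluding those IRQs from its management.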
View full article
Using Stingray Traffic Manager to load balance a pool of LDAP servers for high availability is a fairly simple process. Here are the steps:

Start up the Manage a new service wizard. This is located in the top right corner of the Stingray Traffic Manager web interface, under the Wizards drop-down.
In step 2 of the wizard, set the Protocol to LDAP. The Port will automatically be set to 389, the default LDAP port. Give the service a Name.
In step 3, add the hostnames or IP addresses of each of your LDAP servers.

At this point a virtual server and pool will be created. Before the service is put into use, a few additional changes should be made:

Change the Load Balancing algorithm of the pool to Least Connections.
Create a new Session Persistence class of type IP-based persistence (Catalogs -> Persistence) and assign it to the pool.
Create a Traffic IP Group (Services -> Traffic IP Groups) and assign it to the virtual server. The Traffic IP Group is the IP address that LDAP clients will connect to.

The final step is to install the LDAP Health Monitor. The LDAP Health Monitor is an External Program Monitor that binds to the LDAP server, submits an LDAP query, and checks for a response; an example of the kind of query it performs is shown below. Instructions to install the monitor are in the linked page.
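To see the kind of check the monitor performs, you can run an equivalent query by hand with ldapsearch (from the ldap-utils package). A minimal sketch, assuming a traffic IP of 10.0.0.10 and illustrative credentials, search base, and filter (substitute your own values):

ldapsearch -x -H ldap://10.0.0.10:389 \
    -D "cn=monitor,dc=example,dc=com" -w secret \
    -b "dc=example,dc=com" "(cn=monitor)"

If the service is healthy, the query returns the matching entry from whichever back-end node the traffic manager selects; a timeout or bind failure indicates a node or configuration problem.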
View full article
(Originally posted by Owen Garrett on Sept 9, 2006. Updated by Paul Wallace on 31st January 2014)   The following Perl example illustrates how to invoke the System.Cache.clearWebCache() method to clear the content cache on the local Traffic Manager or Load Balancer.   #!/usr/bin/perl -w use SOAP::Lite 0.60; # This is the URL of the ZXTM admin server - CHANGE THIS my $admin_server = 'https://admin:password@zxtmhost:9090'; my $conn = SOAP::Lite -> ns('http://soap.zeus.com/zxtm/1.0/System/Cache/') -> proxy("$admin_server/soap") -> on_fault( sub { my( $conn, $res ) = @_; die ref $res ? $res->faultstring : $conn->transport->status; } ); $conn->clearWebCache();
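A brief usage sketch, assuming the script is saved as clearcache.pl (the file name is arbitrary) and the SOAP::Lite module is installed:

# Install SOAP::Lite if necessary (e.g. via CPAN), then run:
perl clearcache.pl

Note that this clears the content cache only on the traffic manager whose admin address the script connects to; to clear the cache across a cluster, run the script against the admin address of each machine.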
View full article