Pulse Secure vADC

The Pulse Services Director makes it easy to manage a fleet of virtual ADC services, with each application supported by dedicated vADC instances, such as Pulse Virtual Traffic Manager. This table summarises the compatibility between supported versions of Services Director and Virtual Traffic Manager.
View full article
This document covers updating the built-in GeoIP database. See TechTip: Extending the Pulse vTM GeoIP database for instructions on adding custom entries to the database.  
View full article
We have created dedicated installation and configuration guides for each type of deployment option, as part of the complete documentation set for Pulse vTM.
View full article
Looking for Installation and User Guides for Pulse vADC? User documentation is no longer included in the software download package for Pulse vTM; the documentation can now be found on the Pulse Techpubs pages.
View full article
In this release, Pulse Secure Traffic Manager offers increased UDP performance, as well as additional functions to help with IPv6 geolocation and GLB workloads. Highlights include:

UDP Performance Improvements - Traffic Manager is now able to take advantage of the Linux kernel socket option SO_REUSEPORT to improve performance when load balancing UDP traffic. In addition, new configuration options are available to customize UDP behavior. See the release notes for more details.

TrafficScript support for IPv6 Geolocation APIs - Traffic Manager now includes both IPv4 and IPv6 geolocation data, and applications can access both in TrafficScript with a single call. Previous releases included only the IPv4 data, and required IPv6 data to be loaded separately. Example usage is the same for both IPv4 and IPv6:

$ip = request.getRemoteIP();
$country = geo.getCountry($ip);

Access to TimeZone information - From this release, Traffic Manager has an additional geolocation API function, geo.getTimeZone(IP), which uses the built-in geolocation database to return the IANA text format of the timezone corresponding to the given IP. In addition, a new system function, sys.tztime.format(format, timezone, unixtime), can be used to render the time in that timezone, for example:

$ip = request.getRemoteIP();
$str = sys.tztime.format(format, geo.getTimeZone($ip));

Setting GLB workloads via Monitor Scripts - Traffic Manager uses the TrafficScript function glb.service.getLocationLoad() to inspect the workload at a given location, but this value must be set by an external monitor. In this release, Traffic Manager supports a simplified method to set the GLB workload by reading from stdout: a monitor script can emit a workload value on stdout, which will be read directly by Traffic Manager and used for GLB weighting.
The monitor script sets the workload by printing the numeric workload value to stdout, for example:

vTM-set-node-load: 1500

For more information, please refer to the release notes, available on the download portal. A complete set of user documentation is also available at http://pulsesecure.net/vadc-docs, including getting started guides and installation, configuration and API reference documentation.
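As an illustration of this mechanism, a minimal monitor script might look like the following sketch. The workload metric used here (the 1-minute load average, scaled to an integer) is an assumption for the example; any numeric measure of node workload can be emitted.

```shell
#!/bin/sh
# Minimal sketch of a GLB monitor script (illustrative only).
# Traffic Manager reads the "vTM-set-node-load:" marker from stdout
# and uses the value for GLB weighting.

# Example workload metric: 1-minute load average scaled to an integer
# (assumes Linux /proc; substitute any numeric workload measure).
RAW=$(awk '{print $1}' /proc/loadavg 2>/dev/null)
LOAD=$(awk -v r="${RAW:-0}" 'BEGIN { printf "%d", r * 1000 }')

# Emit the workload marker for Traffic Manager to parse.
echo "vTM-set-node-load: $LOAD"
```

The script should also perform its normal health-check duties; the workload line is simply additional output on stdout.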
View full article
Upgrading Traffic Manager Software on EC2 The Traffic Manager user documentation contains a "Getting Started Guide" specific to each type of installation, including sections on upgrading to the latest version, depending on the platform type you are running on - refer to Installing and Upgrading your Pulse vADC for more information.   (this article has been updated to refer to a central location for Installation and Upgrades)  
View full article
Upgrading Traffic Manager Software The Traffic Manager user documentation contains a "Getting Started Guide" specific to each type of installation, including sections on upgrading to the latest version, depending on the platform type you are running on - refer to Installing and Upgrading your Pulse vADC for more information.   (this article has been updated to refer to a central location for Installation and Upgrades)  
View full article
We release major and minor updates to Traffic Manager on a periodic basis, and you are strongly advised to maintain production instances of Traffic Manager on recent releases for support, performance, stability and security reasons.

How to Upgrade your Traffic Manager
The Traffic Manager user documentation contains a "Getting Started Guide" specific to each type of installation, including sections on upgrading to the latest version, depending on the platform type you are running on - refer to Installing and Upgrading your Pulse vADC for more information.

Where to find updates
Software and Virtual Appliance updates are posted on the http://my.pulsesecure.net site, and updates are announced on the community pages, such as this article.

The update process is designed to be straightforward and to minimize disruption, although a brief interruption to service may be unavoidable. The process depends on the form factor of your Traffic Manager device: Upgrading Traffic Manager Software, or Upgrading Traffic Manager Virtual Appliance.

Am I running software or a virtual appliance?
You can easily verify whether you're running the software-only install (installed on your Linux/Solaris host) or a Virtual Appliance (running on VMware, Xen or another platform) by checking the header of an admin server page:

Software install - identifies itself as "Traffic Manager 4000 VH"
Virtual Appliance - identifies itself as "Traffic Manager Virtual Appliance 4000 VH"

Updating Cloud Instances of Traffic Manager
Public Cloud instances of Traffic Manager are provided and supported directly by Pulse - see the "Cloud Services Getting Started" document in the article. For third-party instances of Traffic Manager, please refer to your cloud provider.

More information
For more detailed information on the installation and upgrade process, please refer to the relevant Getting Started guide in the Product Documentation.
View full article
In order to support our new Certified Technical Expert training course for Pulse vADC, we have created a demonstration package which contains files to support the training course.
View full article
Services Director 20.1 supports secure authentication for administrators, and is also made available as a Long-Term Support release for extended operations.
View full article
You've just downloaded and installed Traffic Manager, and you're wondering "where next?". This article talks through the process of load-balancing traffic to a public website, and explains the potential pitfalls and how to overcome them. We'll look at how to set up a basic load balanced service using the Manage a New Service wizard, then consider four potential problems:

Problem: When you access the web site through Traffic Manager, it responds with a 404 Not Found or other error, or redirects you directly to the website.
Solution: You need to correct the Host header in the request your web browser has sent. Use a simple Request Rule in Traffic Manager to patch the request up correctly.

Problem: The web site stops working when you access it through Traffic Manager.
Solution: Traffic Manager is running Ping health checks against the web servers, and these are failing because the public servers won't respond to pings. Remove the health checks, or replace them with HTTP checks.

Problem: Links in the web content refer directly to the fully-qualified domain of the website, rather than to the website delivered through Traffic Manager.
Solution: You need to rewrite the web content to correct the fully-qualified links. Use a Response Rule in Traffic Manager to make this change.

Problem: HTTP Location redirects and cookies issued by the website refer to the fully-qualified domain of the website, rather than to the website delivered through Traffic Manager.
Solution: Use the connection management settings to transparently rewrite the Location and Set-Cookie headers as appropriate.

Basic Load Balancing
Let's start with a simple example. Select a target website, such as www.w3.org, and fire up the Manage a New Service wizard. This will pop up a new window to step you through the process of creating a new service.

Warning: If you don't see the pop-up window, your web browser may be configured to prevent popups. Check and fix this problem.

Step through the wizard.
Create a service named "web site", listening on port 80, and specify the servers ("nodes") that will host the website. In this example, enter www.w3.org, port 80.

Note: If you get an "ERROR: Cannot resolve" message, then most likely your Traffic Manager is not configured with a correct nameserver address, or it cannot route to the outside world. You'll need to fix these problems before continuing:

You can use 8.8.8.8 as the nameserver
Ensure that the networking is configured so that the Traffic Manager has external connectivity

Review your settings and commit the result.

Note: If you get a "Cannot bind" error when you commit the changes, then another service on the Traffic Manager is listening on port 80:

If it's a pre-existing Traffic Manager Virtual Server, you should stop this virtual server
If it's another service on the same host (for example, a webserver), you should stop this service
Alternatively, select another port (instead of port 80)

The wizard will create a virtual server object listening on port 80, and a pool object containing the www.w3.org nodes (or whatever you chose). The Virtual Server will receive traffic and then pass it on to the pool for load balancing.

Try it out
Go to http://your-traffic-manager-ip-address/ with your web browser. If you are lucky, it will work first time, but most likely you'll get an error message, or possibly a redirect to http://www.yourwebsite.com/.

Problem #1: The Host Header
Most webservers host many websites on the same IP address and port. They determine which website a particular request is destined for by inspecting a parameter in the request called the Host header. The Host header is constructed from the URL that you typed into your web browser (for example: http://192.168.35.10/).
This will cause the web browser to include the following header in the request:

Host: 192.168.35.10

The web server will reject this request, returning either an error message, 404 Not Found, or a forceful redirect to the correct page. You can use a TrafficScript rule to change the Host header in the request to a value that the web site will recognise.

How to create the TrafficScript Rule
Edit the 'web site' virtual server you created and navigate to the 'Rules' page. In the 'Request Rules' section, click the 'Manage Rules in Catalog' link to create a new rule that is associated with that virtual server. Create a TrafficScript rule called 'w3.org host header' with the following text:

http.setHeader( "Host", "www.w3.org" );

and save your changes. Now Traffic Manager will fix up the Host header in every request, and the site should render correctly.

Problem #2: Health Monitors
If your Traffic Manager service works for a short time, then starts returning a "Service Unavailable" error message, you've most likely hit a health monitoring problem. When Traffic Manager creates a new pool, it assigns a 'ping' health monitor to the nodes. Many public webservers, and websites that are delivered over a CDN, do not respond to ping healthchecks, so this monitor will quickly fail and mark the nodes as unavailable.

Edit the 'web site pool' Pool object, locate the Health Monitoring section, and remove the Ping health monitor. This will clear the error. You could replace the Ping health check with an HTTP health check (for example) if you wished.

Problem #3: Embedded Links
As you click around the website that is delivered through Traffic Manager, you may find that you jump off the http://your-traffic-manager-IP-address/ version of the site and start going directly to the http://www.site.com/ URLs.
This may happen because the website content contains absolute, fully-qualified links:

<a href="http://www.nytimes.com/headlines">Headlines</a>

rather than unqualified links:

<a href="/headlines">Headlines</a>

This is a problem if you load-balance to www.nytimes.com, for example. You can fix those links up by rewriting the HTML responses from the webservers using a Response Rule in Traffic Manager:

$contentType = http.getResponseHeader( "Content-Type" );
if( string.startsWith( $contentType, "text/html" ) ) {
   $body = http.getResponseBody();
   $body = string.replaceAll( $body, "http://www.nytimes.com/", "/" );
   http.setResponseBody( $body );
}

Problem #4: Cookies and Location redirects
The origin webserver may issue cookies and Location redirects that reference the fully-qualified domain of the website, rather than the IP address of the Traffic Manager device. Your web browser will not submit those cookies, and it will jump to the origin website if it follows a Location redirect.

Traffic Manager can automatically patch up these parts of the HTTP response, using the Cookie and Location Header settings in the Connection Management page of your Virtual Server's configuration:

Rewrite domains, paths and other cookie parameters when proxying a website using a different URL
Intelligently rewrite Location redirects so that users are not bounced to the origin server

Use these settings to address any inconsistencies and problems related to cookies or Location redirects.

Conclusion
These four problems (host header, health monitors, embedded links, cookies and redirects) can occur when you load-balance public websites; they are a lot less likely to happen in production, because there won't be a firewall blocking pings between Traffic Manager and the local webservers, and the DNS for www.site.com will resolve to a traffic IP address on the Traffic Manager, so the host header and embedded links will be correct.
Watch out for situations where the web server sends HTTP redirects to another domain.  For example, when I tested by load balancing to www.yahoo.com, the website immediately tried to redirect me to www.uk.yahoo.com (I was based in the UK).  You have no control over this behavior by a public website; I configured my Traffic Manager to forward traffic to www.uk.yahoo.com instead. Now that you have a working load-balancing configuration, you can: Check out the Activity Monitors and Connections Reports to observe the traffic that Traffic Manager is managing Start experimenting with some of the examples and use cases from the list in Top Pulse vADC Examples and Use Cases Read some of the Product Briefs for Traffic Manager to understand how it can manage and control traffic  
View full article
Welcome to Pulse Secure Application Delivery solutions!  
View full article
In this release, Pulse Secure Traffic Manager offers some additional capabilities to support secure authentication for administrators.
View full article
This technical brief describes recommended techniques for installing, configuring and tuning Traffic Manager. You should also refer to the Product Documentation for detailed instructions on the installation process of Traffic Manager software.

Getting started
Hardware and Software requirements for Traffic Manager
Pulse Virtual Traffic Manager Kernel Modules for Linux

Software Tuning
Tuning Stingray Traffic Manager
Tuning Traffic Manager for best performance
Tech Tip: Where to find a master list of the Traffic Manager configuration keys

Tuning the operating system kernel
The following instructions only apply to Traffic Manager software running on a customer-supplied Linux or Solaris kernel:
Tuning the Linux operating system for Traffic Manager
Routing and Performance tuning for Traffic Manager on Linux
Tuning the Solaris operating system for Traffic Manager

Debugging procedures for Performance Problems
Tech Tip: Debugging Techniques for Performance Investigation

Load Testing
Load Testing recommendations for Traffic Manager

Conclusion
The Traffic Manager software and the operating system kernels both seek to optimize the use of the resources available to them, and there is generally little additional tuning necessary except when running in heavily-loaded or performance-critical environments. When tuning is required, the majority of tunings relate to the kernel and TCP stack and are common to all networked applications. Experience and knowledge you have of tuning webservers and other applications on Linux or Solaris can be applied directly to Traffic Manager tuning, and skills that you gain working with Traffic Manager can be transferred to other situations.

The importance of good application design
TCP and kernel performance tuning will only help to a small degree if the application running over HTTP is poorly designed.
Heavy-weight web pages with large quantities of referenced content and scripts will tend to deliver a poorer user experience and will limit the capacity of the network to support large numbers of users. Traffic Manager's Web Content Optimization capability ("Aptimizer") applies best-practice rules for content optimization dynamically, as the content is delivered by Traffic Manager.  It applies browser-aware techniques to reduce bandwidth and TCP round-trips (image, CSS, JavaScript and HTML minification, image resampling, CSS merging, image spriting) and it automatically applies URL versioning and far-future expires to ensure that clients cache all content and never needlessly request an update for a resource which has not changed. Traffic Manager's Aptimizer is a general purpose solution that complements TCP tuning to give better performance and a better service level.  If you’re serious about optimizing web performance, you should apply a range of techniques from layer 2-4 (network) up to layer 7 and beyond to deliver the best possible end-user experience while maximizing the capacity of your infrastructure.
View full article
Linux kernel settings can be set and read using entries in the /proc filesystem or using sysctl. Permanent settings that should be applied on boot are defined in /etc/sysctl.conf.

Example: to set the maximum number of file descriptors from the command line:

# echo 2097152 > /proc/sys/fs/file-max

or:

# sysctl -w fs.file-max=2097152

Example: to set the maximum number of file descriptors using sysctl.conf, add the following to /etc/sysctl.conf:

fs.file-max = 2097152

/etc/sysctl.conf is applied at boot, or can be applied manually using sysctl -p.
View full article
This document describes performance-related tuning you may wish to apply to a production Stingray Traffic Manager software install, virtual appliance or cloud instance. For related documents (e.g. operating system tuning), start with the Tuning Pulse Virtual Traffic Manager article.

Tuning Pulse Traffic Manager
Traffic Manager will auto-size the majority of internal tables based on available memory, CPU cores and operating system configuration. The default behavior is appropriate for typical deployments and it is rarely necessary to tune it. Several changes can be made to the default configuration to improve peak capacity if necessary. Collectively, they may give a 5-20% capacity increase, depending on the specific test.

Basic performance tuning

Global settings
Global settings are defined in the 'System' part of the configuration.
Recent Connections table: set recent_conns to 0 to prevent Stingray from archiving recent connection data for debugging purposes.
Verbose logging: disable flipper!verbose, webcache!verbose and gslb!verbose to disable verbose logging.

Virtual Server settings
Most Virtual Server settings relating to performance tuning are to be found in the Connection Management section of the configuration.
X-Cluster-Client-IP: for HTTP traffic, Traffic Manager adds an 'X-Cluster-Client-IP' header containing the remote client's IP address by default. You should disable this feature if your back-end applications do not inspect this header.
HTTP Keepalives: enable support for keepalives; this will reduce the rate at which TCP connections must be established and torn down. Not only do TCP handshakes incur latency and additional network traffic, but closed TCP connections consume operating system resources until TCP timeouts are hit.
UDP Port SMP: set this to 'yes' if you are managing simple UDP protocols such as DNS.
Otherwise, all UDP traffic is handled by a single Traffic Manager process (so that connections can be effectively tracked).

Pool settings
HTTP Keepalives: enable support for keepalives (Pool: Connection Management; see the Virtual Server note above). This will reduce the load on your back-end servers and the Traffic Manager system.
Session Persistence: session persistence overrides load balancing and can prevent the traffic manager from selecting the optimal node and applying optimizations such as LARD. Use session persistence selectively, and only apply it to requests that must be pinned to a node.

Advanced Performance Tuning

General Global Settings
Maximum File Descriptors (maxfds): file descriptors are the basic operating system resource that Traffic Manager consumes. Typically, Traffic Manager will require two file descriptors per active connection (client and server side) and one file descriptor for each idle keepalive connection and for each client connection that is pending or completing. Traffic Manager will attempt to bypass any soft per-process limits (e.g. those defined by ulimit) and gain the maximum number of file descriptors (per child process). Doing this has no performance impact and minimal memory impact. You can tune the maximum number of file descriptors in the OS using fs.file-max; the default value of 1048576 should be sufficient. Traffic Manager will warn if it is running out of file descriptors, and will proactively close idle keepalives and slow down the rate at which new connections are accepted.
Listen queue size (listen_queue_size): this should be left at the default system value, and tuned using somaxconn (see the operating system tuning notes).
Number of child processes (num_children): this is auto-sized to the number of cores in the host system.
You can force the number of child processes to a particular number (for example, when running Traffic Manager on a shared server) using the tunable 'num_children', which should be added manually to the global.cfg configuration file.

Tuning Accept behavior
The default accept behavior is tuned so that child processes greedily accept connections as quickly as possible. With very large numbers of child processes, if you see uneven CPU usage, you may need to tune the multiple_accept, max_accepting and accepting_delay values in the Global Settings to limit the rate at which child processes take work.

Tuning network read/write behavior
The Global Settings values so_rbuff_size and so_wbuff_size are used to tune the size of the operating system (kernel-space) read and write buffers, as restricted by the operating system limits /proc/sys/net/core/rmem_max and /proc/sys/net/core/wmem_max. These buffer sizes determine how much network data the kernel will buffer before refusing additional data (from the client in the case of the read buffer, and from the application in the case of the write buffer). If these values are increased, kernel memory usage per socket will increase. In normal operation, Traffic Manager will move data from the kernel buffers to its user-space buffers sufficiently quickly that the kernel buffers do not fill up. You may want to increase these buffer sizes when running under high connection load on a fast network.

The Virtual Server settings max_client_buffer and max_server_buffer define the size of the Traffic Manager (user-space) read and write buffers, used when Traffic Manager is streaming data between the client and the server. The buffers are temporary stores for the data read from the network buffers. Larger values will increase memory usage per connection, to the benefit of more efficient flow control; this will improve performance for clients or servers accessing over high-latency networks.
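For illustration only, the line added to global.cfg might look like the following sketch; the value 4 is an assumption for a shared four-core host, so pick a count appropriate to your system:

```
num_children   4
```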
The value chunk_size controls how much data Traffic Manager reads and writes from the network buffers when processing traffic, and internal application buffers are allocated in units of chunk_size.  To limit fragmentation and assist scalability, the default value is quite low (4096 bytes); if you have plenty of free memory, consider setting it to 8192 or 16384. Doing so will increase Traffic Manager's memory footprint but may reduce the number of system calls, slightly reducing CPU usage (system time). You may wish to tune the buffer size parameters if you are handling very large file transfers or video downloads over congested networks, and the chunk_size parameter if you have large amounts of free memory that is not reserved for caching and other purposes. Tuning SSL performance Some modern ciphers such as TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 are faster than older ciphers in Traffic Manager.  SSL uses a private/public key pair during the initial client handshake.  1024-bit keys are approximately 5 times faster than 2048-bit keys (due to the computational complexity of the key operation), and are sufficiently secure for applications that require a moderate degree of protection. SSL sessions are cached locally, and shared between all traffic manager child processes using a fixed-size (allocated at start-up) cache.  On a busy site, you should check the size, age and miss-rate of the SSL Session ID cache (using the Activity monitor) and increase the size of the cache (ssl!cache!size) if there is a significant number of cache misses. 
Tuning from-Client connections
Timeouts are the key tool for controlling client-initiated connections to the traffic manager: connect_timeout discards newly-established connections if no data is received within the timeout; keepalive_timeout holds client-side keepalive connections open for a short time before discarding them if they are not reused; timeout is a general-purpose timeout that discards an active connection if no data is received within the timeout period. If you suspect that connections are dropped prematurely due to timeouts, you can temporarily enable the Virtual Server setting log!client_connection_failures to record the details of dropped client connections.

Tuning to-Server connections
When processing HTTP traffic, Traffic Manager uses a pool of keepalive connections to reuse TCP connections and reduce the rate at which TCP connections must be established and torn down. If you use a webserver with a fixed concurrency limit (for example, Apache with its MaxClients and ServerLimit settings), then you should tune the connection limits carefully to avoid overloading the webserver and creating TCP connections that it cannot service.

Pool: max_connections_pernode: this setting limits the total number of TCP connections that this pool will make to each node; keepalive connections are included in that count. Traffic Manager will queue excess requests and schedule them to the next available server. The current count of established connections to a node is shared by all Traffic Manager processes.

Pool: max_idle_connections_pernode: when an HTTP request to a node completes, Traffic Manager will generally hold the TCP connection open and reuse it for a subsequent HTTP request (as a keepalive connection), avoiding the overhead of tearing down and setting up new TCP connections. In general, you should set this to the same value as max_connections_pernode, ensuring that neither setting exceeds the concurrency limit of the webserver.
Global Setting: max_idle_connections: use this setting to fine-tune the total number of keepalive connections Traffic Manager will maintain to each node. The idle_connection_timeout setting controls how quickly keepalive connections are closed. You should only consider limiting the two max_idle_connections settings if you have a very large number of webservers that can sustain very high degrees of concurrency, and you find that the traffic manager routinely maintains too many idle keepalive connections as a result of very uneven traffic.

When running with very slow servers, or when connections to servers have high latency or packet loss, it may be necessary to increase the Pool timeouts: max_connect_time discards connections that fail to connect within the timeout period (the requests will be retried against a different server node); max_reply_time discards connections that fail to respond to the request within the desired timeout (requests will be retried against a different node if they are idempotent). When streaming data between server and client, the general-purpose Virtual Server 'timeout' setting will apply. If the client connection times out or is closed for any other reason, the server connection is immediately discarded. If you suspect that connections are dropped prematurely due to timeouts, you can enable the Virtual Server setting log!server_connection_failures to record the details of dropped server connections.

Nagle's Algorithm
You should disable Nagle's Algorithm for traffic to the back-end servers, unless you are operating in an environment where the servers have been explicitly configured not to use delayed acknowledgements. Set the node_so_nagle setting to 'off' in the Pool Connection Management configuration. If you notice significant delays when communicating with the back-end servers, Nagle's Algorithm is a likely candidate.
Other settings Ensure that you disable or de-configure any Traffic Manager features that you do not need to use, such as health monitors, session persistence, TrafficScript rules, logging and activity monitors.  Disable debug logging in service protection classes, autoscaling settings, health monitors, actions (used by the eventing system) and GLB services. For more information, start with the Tuning Pulse Virtual Traffic Manager article.  
View full article
This document describes some operating system tunables you may wish to apply to a production Traffic Manager instance. Note that the kernel tunables only apply to Traffic Manager software installed on a customer-provided Linux instance; they do not apply to the Traffic Manager Virtual Appliance or Cloud instances.

Consider the tuning techniques in this document when:
Running Traffic Manager on a severely-constrained hardware platform, or where Traffic Manager should not seek to use all available resources;
Running in a performance-critical environment;
The Traffic Manager host appears to be overloaded (excessive CPU or memory usage);
Running with very specific traffic types, for example, large video downloads or heavy use of UDP;
Any time you see unexpected errors in the Traffic Manager event log or the operating system syslog that relate to resource starvation, dropped connections or performance problems.

For more information on performance tuning, start with the Tuning Pulse Virtual Traffic Manager article.

Basic Kernel and Operating System tuning
Most modern Linux distributions have sufficiently large defaults, and many tables are auto-sized and growable, so it is often not necessary to change tunings. The values below are recommended for typical deployments on a medium-to-large server (8 cores, 4 GB RAM).

Note: Tech tip: How to apply kernel tunings on Linux

File descriptors
# echo 2097152 > /proc/sys/fs/file-max
Set a minimum of one million file descriptors unless resources are seriously constrained. See also the maxfds setting below.

Ephemeral port range
# echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range
# echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout
Each TCP and UDP connection from Traffic Manager to a back-end server consumes an ephemeral port, and that port is retained for the 'fin_timeout' period once the connection is closed.
If back-end connections are frequently created and closed, it is possible to exhaust the supply of ephemeral ports. Increase the port range to the maximum (as above) and reduce the fin_timeout to 30 seconds if necessary.

SYN Cookies

# echo 1 > /proc/sys/net/ipv4/tcp_syncookies

SYN cookies should be enabled on a production system. The Linux kernel will process connections normally until the SYN backlog fills, at which point it will use SYN cookies rather than storing local state. SYN cookies are an effective protection against SYN floods, one of the most common DoS attacks against a server.

If you are seeking a stable test configuration as a basis for other tuning, you should disable SYN cookies. Increase the size of net/ipv4/tcp_max_syn_backlog if you encounter dropped connection attempts.

Request backlog

# echo 1024 > /proc/sys/net/core/somaxconn

The request backlog contains TCP connections that are established (the 3-way handshake is complete) but have not yet been accepted by the listening socket (on Traffic Manager). See also the tunable parameter listen_queue_size. Restart the Traffic Manager software after changing this value.

If the listen queue fills up because the Traffic Manager does not accept connections sufficiently quickly, the kernel will quietly ignore additional connection attempts. Clients will then back off (they assume packet loss has occurred) before retrying the connection.

Advanced kernel and operating system tuning

In general, it is rarely necessary to tune Linux kernel internals further, because the default values selected on a normal-to-high-memory system are sufficient for the vast majority of deployments, and most kernel tables will automatically resize if necessary. Any problems will be reported in the kernel logs; dmesg is the quickest and most reliable way to check the logs on a live system.
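The echo commands above take effect immediately but are lost at reboot. A minimal sketch of persisting the same values in sysctl.conf format (the drop-in file name is an example; confirm the correct location, such as /etc/sysctl.d/, for your distribution):

```shell
# Emit the tunables from this article in sysctl.conf format so they
# survive a reboot. Values mirror the echo commands shown above.
emit_sysctl_conf() {
  cat <<'EOF'
fs.file-max = 2097152
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_syncookies = 1
net.core.somaxconn = 1024
EOF
}

# As root you might install this as /etc/sysctl.d/90-traffic-manager.conf
# and load it with 'sysctl -p <file>'. Here we just write a copy to /tmp:
emit_sysctl_conf > /tmp/90-traffic-manager.conf
```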
Packet queues

In 10 GbE environments, you should consider increasing the size of the input queue:

# echo 5000 > /proc/sys/net/core/netdev_max_backlog

TCP TIME_WAIT tuning

TCP connections reside in the TIME_WAIT state in the kernel once they are closed. TIME_WAIT allows the server to time out connections it has closed in a clean fashion.

If you see the error "TCP: time wait bucket table overflow", consider increasing the size of the table used to store TIME_WAIT connections:

# echo 7200000 > /proc/sys/net/ipv4/tcp_max_tw_buckets

TCP slow start and window sizes

In earlier Linux kernels (pre-2.6.39), the initial TCP window size was very small. The impact of a small initial window size is that peers communicating over a high-latency network take a long time (several seconds or more) to scale the window to utilize the full available bandwidth; often the connection completes (albeit slowly) before an efficient window size has been negotiated.

The 2.6.39 kernel increases the default initial window size from 2 to 10. If necessary, you can tune it manually:

# ip route change default via 192.168.1.1 dev eth0 proto static initcwnd 10

If a TCP connection stalls, even briefly, the kernel may reduce the TCP window size significantly in an attempt to respond to congestion. Many commentators have suggested that this behavior is not necessary, and this "slow start" behavior should be disabled:

# echo 0 > /proc/sys/net/ipv4/tcp_slow_start_after_idle

TCP options for Spirent load generators

If you are using older Spirent test kit, you may need to set the following tunables to work around optimizations in their TCP stack:

# echo 0 > /proc/sys/net/ipv4/tcp_timestamps
# echo 0 > /proc/sys/net/ipv4/tcp_window_scaling

[Note: See attachments for the above changes in an easy-to-run shell script]

irqbalance

Interrupts (IRQs) are wake-up calls to the CPU when new network traffic arrives. The CPU is interrupted and diverted to handle the new network data.
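Before changing any of these TCP tunables, it can be useful to record how your system is currently configured. A read-only sketch (it assumes a Linux /proc filesystem; on other platforms it simply reports each tunable as unavailable):

```shell
# Print the current value of each TCP tunable discussed above,
# so you can compare them with the recommendations before editing.
report_tunables() {
  for t in net/core/netdev_max_backlog \
           net/ipv4/tcp_max_tw_buckets \
           net/ipv4/tcp_slow_start_after_idle; do
    if [ -r "/proc/sys/$t" ]; then
      printf '%s = %s\n' "$t" "$(cat "/proc/sys/$t")"
    else
      printf '%s = (not available)\n' "$t"
    fi
  done
}

report_tunables
```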
Most NIC drivers will buffer interrupts and distribute them as efficiently as possible. When running on a machine with multiple CPUs/cores, interrupts should be distributed across cores roughly evenly; otherwise, one CPU can become the bottleneck under high network traffic.

The general-purpose approach in Linux is to deploy irqbalance, which is a standard package on most major Linux distributions. Under extremely high interrupt load, you may see one or more ksoftirqd processes exhibiting high CPU usage. In this case, you should configure your network driver to use multiple interrupt queues (if supported) and then manually map those queues to one or more CPUs using SMP affinity.

Receive-Side Scaling (RSS)

Modern network cards can maintain multiple receive queues. Packets within a particular TCP connection can be pinned to a single receive queue, and each queue has its own interrupt. You can map interrupts to CPU cores to control which core each packet is delivered to. This affinity delivers better performance by distributing traffic evenly across cores and by improving connection locality (a TCP connection is processed by a single core, improving CPU affinity).

For optimal performance, you should:

- Allow the Traffic Manager software to auto-size itself to run one process per CPU core (two when using hyperthreading), i.e. do not modify the num_children configurable.
- Configure the network driver to create as many queues as you have cores, and verify the IRQs that the driver will raise per queue by checking /proc/interrupts.
- Map each queue interrupt to one core using /proc/irq/<irq-number>/smp_affinity.

You should also refer to the technical documentation provided by your network card vendor.

[Updates by Aidan Clarke and Rick Henderson]
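The queue-to-core mapping can be scripted. A dry-run sketch, assuming an interface named eth0 whose driver exposes one /proc/interrupts line per receive queue (queue naming varies by driver, so check /proc/interrupts first; this version only prints the writes it would perform):

```shell
# Convert a zero-based CPU index into the hex bitmask format that
# /proc/irq/<n>/smp_affinity expects (CPU 0 -> 1, CPU 1 -> 2, CPU 4 -> 10).
cpu_mask() {
  printf '%x' $((1 << $1))
}

# Dry run: list the smp_affinity writes that would pin each queue IRQ
# of the given interface to its own core, one core per queue.
pin_queue_irqs() {
  iface=${1:-eth0}
  cpu=0
  grep "$iface" /proc/interrupts 2>/dev/null | while read -r line; do
    irq=$(printf '%s' "${line%%:*}" | tr -d ' ')
    echo "echo $(cpu_mask $cpu) > /proc/irq/$irq/smp_affinity"
    cpu=$((cpu + 1))
  done
}

pin_queue_irqs eth0
```

Remove the dry-run echo (and run as root) once the output matches the queues you expect; note that irqbalance may override manual affinity unless it is stopped or configured to ignore those IRQs.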
View full article
You can create monitors, event action scripts and other utilities using Perl, but if you install them on a system that does not have a suitable Perl interpreter, they will not function correctly. For example, the Traffic Manager Virtual Appliance does not have a system-wide Perl interpreter.

The Traffic Manager includes a slightly cut-down version of Perl that is used to run many parts of the Administration Server. You can modify an existing Perl script to use the Traffic Manager distribution if necessary.

Replace the standard Perl preamble:

#!/usr/bin/perl -w

... with the following:

#!/bin/sh
exec $ZEUSHOME/perl/miniperl -wx $0 ${1+"$@"}
    if 0;

#!/usr/bin/perl
#line 7
BEGIN{
       # The Stingray-provided perl uses its own libraries
       @INC=("$ENV{ZEUSHOME}/zxtmadmin/lib/perl","$ENV{ZEUSHOME}/perl");
}

Note that Traffic Manager's Perl distribution contains a limited set of libraries, and it is not possible to add further libraries to it. Nevertheless, it is complete enough for many of the common administration tasks that you may wish to perform on a Traffic Manager Virtual Appliance, including using the Control API (SOAP::Lite).
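Before deploying a script that relies on the preamble above, it can be worth confirming that the bundled interpreter is where the preamble expects it. A small sketch (the fallback ZEUSHOME path below is an assumption for a default software install; adjust it for your installation):

```shell
# Verify that the Traffic Manager bundled miniperl exists under ZEUSHOME.
check_miniperl() {
  home=${1:-${ZEUSHOME:-/usr/local/zeus}}
  if [ -x "$home/perl/miniperl" ]; then
    echo "miniperl found: $home/perl/miniperl"
  else
    echo "miniperl not found under $home"
  fi
}

check_miniperl
```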
View full article
Why write a health monitor in TrafficScript?

The Health Monitoring capabilities (as described in Feature Brief: Health Monitoring in Traffic Manager) are very comprehensive, and the built-in templates allow you to conduct sophisticated custom dialogues, but sometimes you might wish to resort to a full programming language to implement the tests you need.

Particularly on the Traffic Manager Virtual Appliance, your options can be limited. There's a minimal Perl interpreter included (see Tech Tip: Running Perl code on the Traffic Manager Virtual Appliance), and you can upload compiled binaries (Writing a custom Health Monitor in C) and shell scripts. This article explains how you can use TrafficScript to implement health monitors, and of course, with Java Extensions, TrafficScript can 'call out' to a range of third-party libraries as well.

Overview

We'll implement the solution using a custom 'script' health monitor. This health monitor will probe a virtual server running on the local Traffic Manager (using an HTTP request), and pass it all of the parameters relevant to the health request. A TrafficScript rule running on the Traffic Manager can perform the appropriate health check and respond with a 'PASS' (200 OK) or 'FAIL' (500 Error) response.

The health monitor script

The health monitor script is straightforward and should not need any customization. It takes its input from the health monitor configuration.
#!/bin/sh
exec $ZEUSHOME/perl/miniperl -wx $0 ${1+"$@"}
    if 0;

#!/usr/bin/perl
#line 7

BEGIN{
    # Pull in the Traffic Manager libraries for HTTP requests
    unshift @INC, "$ENV{ZEUSHOME}/zxtmadmin/lib/perl", "$ENV{ZEUSHOME}/zxtm/lib/perl";
}

use Zeus::ZXTM::Monitor qw( ParseArguments MonitorWorked MonitorFailed Log );
use Zeus::HTMLUtils qw( make_query_string );
use Zeus::HTTP;

my %args = ParseArguments();

my $url = "http://localhost:$args{vsport}$args{path}?" . make_query_string( %args );
my $http = new Zeus::HTTP( GET => $url );
$http->load();

Log( "HTTP GET for $url returned status: " . $http->code() );

if ( $http->code() == 200 ) {
    MonitorWorked();
} else {
    MonitorFailed( "Monitor failed: " . $http->code() . " " . $http->body() );
}

Upload this to the Monitor Programs area of the Extra Files section of the catalog, and then create an "External Program Monitor" based on that script. You will need to add two more configuration parameters to this health monitor configuration:

- vsport: the port of the virtual server that will host the TrafficScript rule
- path: optional - use it if you want to run several different health tests from the TrafficScript rule

Your configuration should look something like this:

The virtual server

Create an HTTP virtual server listening on the appropriate port number (vsport). You can bind this virtual server to localhost if you want to prevent external clients from accessing it.

The virtual server should use the 'discard' pool - we're going to add a request rule that always sends a response, so there's no need for any back-end nodes.

The TrafficScript rule

The 'business end' of your TrafficScript health monitor resides in the TrafficScript rule.
This rule is invoked every time the health monitor script runs, and it is given the details of the node that is to be checked. The rule should return a 200 OK HTTP response if the node is OK, and a different response (such as 500 Error) if the node has failed the test.

$path = http.getPath();   # Use 'path' if you would like to publish
                          # several different tests from this rule

$ip       = http.getFormParam( "ipaddr" );
$port     = http.getFormParam( "port" );
$nodename = http.getFormParam( "node" );

# We're going to test the node $nodename on $ip:$port
#
# Useful functions include:
#   http.request.get/put/post/delete()
#   tcp.connect/read/write/close()
#   auth.query()
#   java.run()

sub Failed( $msg ) {
   http.sendResponse( 500, "text/plain", $msg, "" );
}

# Let's run a simple GET
$req = 'GET / HTTP/1.0
Host: www.riverbed.com

';
$timeout = 1000; # ms
$sock = tcp.connect( $ip, $port, $timeout );
tcp.write( $sock, $req, $timeout );
$resp = tcp.read( $sock, 102400, $timeout );
tcp.close( $sock );

# Perform whatever tests we want on the response data.
# For example, the status line should indicate a 200 OK response.

if( !string.startsWith( $resp, "HTTP/1.1 200 OK" ) ) {
   Failed( "Didn't get expected response status" );
}

# All good
http.sendResponse( 200, "text/plain", "", "" );
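To exercise the monitor endpoint by hand before wiring up the external program monitor, you can reproduce the kind of query string the monitor script sends. A sketch (the port 8081 and node details are example values only; the real script also appends every monitor argument to the query string):

```shell
# Build a probe URL of the same shape the health monitor script requests,
# so you can test the virtual server manually with curl or wget.
build_probe_url() {
  vsport=$1; ip=$2; port=$3; node=$4
  echo "http://localhost:${vsport}/?ipaddr=${ip}&port=${port}&node=${node}"
}

build_probe_url 8081 10.0.0.1 80 web1
```

A 200 response from this URL means the TrafficScript rule judged the node healthy; a 500 carries the failure message passed to Failed().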
View full article