Pulse Secure vADC
Stingray Traffic Manager ships with SNMP MIBs which facilitate the process of querying the SNMP agent.  You can download these two MIB files from the Admin Interface.

Step 1: Determine the appropriate location for MIB files on your client system:

# net-snmp-config --default-mibdirs
/home/owen/.snmp/mibs:/usr/share/snmp/mibs

Copy the MIB files into one of these locations.

Step 2: Determine the appropriate location for your snmp.conf:

# net-snmp-config --snmpconfpath
/etc/snmp:/usr/share/snmp:/usr/lib/snmp:/home/owen/.snmp:/var/lib/snmp

Add the following two lines to your snmp.conf (creating a new snmp.conf if necessary):

mibs +ZXTM-MIB
mibs +ZXTM-MIB-SMIv2

Step 3: Test that you get the SNMP 'friendly names' when you query the Stingray SNMP agent:

# snmpwalk -v2c -c public localhost 1.3 | head
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (239872900) 27 days, 18:18:49.00
SNMPv2-MIB::sysName.0 = STRING: athene.cam.zeus.com
ZXTM-MIB::version.0 = STRING: "9.1"
ZXTM-MIB::numberChildProcesses.0 = INTEGER: 4
ZXTM-MIB::upTime.0 = Timeticks: (239872900) 27 days, 18:18:49.00
ZXTM-MIB::timeLastConfigUpdate.0 = Timeticks: (267400) 0:44:34.00
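Once the MIBs are installed, you can also query individual counters by their friendly names rather than walking the whole tree.  The following is just a sketch, using the same example host and 'public' community string as above; substitute your own values:

# snmpget -v2c -c public localhost ZXTM-MIB::version.0
# snmpget -v2c -c public localhost ZXTM-MIB::numberChildProcesses.0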
View full article
You can create monitors, event action scripts and other utilities using Perl, but if you install them on a system that does not have a suitable Perl interpreter, they will not function correctly.  For example, the Stingray Virtual Appliance does not have a system-wide Perl interpreter.

The Stingray product includes a slightly cut-down version of Perl that is used to run many parts of the Stingray Administration Server.  You can modify an existing Perl script to use the Stingray distribution if necessary.  Replace the standard Perl preamble:

#!/usr/bin/perl -w

... with the following:

#!/bin/sh
exec $ZEUSHOME/perl/miniperl -wx $0 ${1+"$@"}
    if 0;
#!/usr/bin/perl
#line 7
BEGIN{
       # The Stingray-provided perl uses its own libraries
       @INC=("$ENV{ZEUSHOME}/zxtmadmin/lib/perl","$ENV{ZEUSHOME}/perl");
}

Note that Stingray's Perl distribution contains a limited set of libraries, and it is not possible to add further libraries to it.  Nevertheless, it is complete enough for many of the common administration tasks that you may wish to perform on a Stingray Virtual Appliance, including using the Control API (SOAP::Lite).
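To sanity-check a script converted in this way, you can run it by hand on the Stingray host before installing it as a monitor or action.  The BEGIN block picks up its library paths from the ZEUSHOME environment variable, so set that first; the installation path and script name below are only examples, so adjust them to match your system:

# export ZEUSHOME=/usr/local/zeus    # example path - use your actual installation directory
# chmod +x my-monitor.pl
# ./my-monitor.pl                    # the shell preamble re-executes the script under miniperl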
View full article
This technical brief describes recommended techniques for installing, configuring and tuning Stingray Traffic Manager.  You should also refer to the Stingray Product Documentation for detailed instructions on the installation process of Stingray software.

Getting started

Hardware and Software requirements for Stingray Traffic Manager
Stingray Kernel Modules for Linux Software

Tuning Stingray Traffic Manager

Tuning Stingray Traffic Manager for best performance
Tech Tip: Where to find a master list of the Stingray configuration keys

Tuning the operating system kernel

The following instructions only apply to Stingray software running on a customer-supplied Linux or Solaris kernel:

Tuning the Linux operating system for Stingray Traffic Manager
Routing and Performance tuning for Stingray Traffic Manager on Linux
Tuning the Solaris operating system for Stingray Traffic Manager

Debugging procedures for Performance Problems

Tech Tip: Debugging Techniques for Performance Investigation

Load Testing

Load Testing recommendations for Stingray Traffic Manager

Conclusion

The Stingray software and the operating system kernels both seek to optimize the use of the resources available to them, and there is generally little additional tuning necessary except when running in heavily-loaded or performance-critical environments.

When tuning is required, the majority of tunings relate to the kernel and TCP stack and are common to all networked applications.  Experience and knowledge you have of tuning webservers and other applications on Linux or Solaris can be applied directly to Stingray tuning, and skills that you gain working with Stingray can be transferred to other situations.

Good background references include:

scaling.txt (Linux kernel networking documentation): a guide to network stack scaling with multiqueue in the Linux kernel
Intel's design reference for networking on multicore servers
VMware's recommendations on Receive-Side Scaling with vmxnet3
IBM paper: Tuning 10GbE devices in Linux

The importance of good application design

TCP and kernel performance tuning will only help to a small degree if the application running over HTTP is poorly designed.  Heavyweight web pages with large quantities of referenced content and scripts will tend to deliver a poorer user experience and will limit the capacity of the network to support large numbers of users.

Initiatives such as Google's PageSpeed and Yahoo's YSlow seek to promote good practice in web page design in order to optimize performance and capacity.

The Stingray Aptimizer Web Content Optimization capability applies best-practice rules for content optimization dynamically, as the content is delivered by the Stingray ADC.  It applies browser-aware techniques to reduce bandwidth and TCP round trips (image, CSS, JavaScript and HTML minification, image resampling, CSS merging, image spriting), and it automatically applies URL versioning and far-future expires headers to ensure that clients cache all content and never needlessly request an update for a resource which has not changed.

Stingray Aptimizer is a general-purpose solution that complements TCP tuning to give better performance and a better service level.  If you're serious about optimizing web performance, you should apply a range of techniques from layers 2-4 (network) up to layer 7 and beyond to deliver the best possible end-user experience while maximizing the capacity of your infrastructure.
View full article
dmesg: use dmesg to quickly display recent kernel messages.  Any warnings about resource starvation, overfilled kernel tables and the like should be addressed by appropriate kernel tuning.

vmstat: vmstat 3 is a quick and easy way to monitor CPU utilization.  On a well-utilized Stingray system, the user (us) and system (sy) CPU times will give a rough indication of the utilization, and the idle (id) time a rough indication of the spare capacity.  Stingray workload is shared between user and system time; user time will predominate for a complex configuration or one that uses CPU-intensive operations (e.g. SSL, compression), and system time will predominate for a simple configuration with minimal traffic inspection.  The wait time (wa) should always be low.  User, system and idle time is not a precise indication of spare capacity because Stingray uses system resources as eagerly as possible, and it will operate more efficiently the more highly loaded it is.  For example, even if a Stingray system is 25% utilized, it will likely be at less than 25% of its total capacity.

/proc: use /proc/<pid> for a quick investigation of the state of Stingray processes: memory usage, number of open file descriptors, process limits.

ethtool: use ethtool to query and configure the network interface hardware: ethtool eth0 to determine interface speed and negotiated options; ethtool -S eth0 to dump statistics for a network interface (large numbers of retransmits, errors or collisions may indicate a faulty NIC, poor cabling, or a congested network); ethtool -i eth0 to confirm the driver in use; ethtool -k eth0 to list the offload features employed by the card.

tcpdump: use tcpdump to capture raw packets from named interfaces.  A tcpdump analysis can uncover unexpected problems such as slow closes and inappropriate use of TCP optimizations, as well as application-level problems (an example capture is sketched after this list).

trace: Stingray includes a wrapper (ZEUSHOME/zxtm/bin/trace) around the standard operating system trace tools.  trace will output all system calls performed by the traced Stingray processes.
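As an illustration of the tcpdump suggestion above, a capture of front-end HTTP traffic might be taken and reviewed as follows; the interface name, port and output path are examples only:

# tcpdump -i eth0 -s 0 -w /tmp/vs-http.pcap port 80
# tcpdump -nn -r /tmp/vs-http.pcap | head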
View full article
The Stingray Configuration Guide (see the Stingray Product Documentation) lists all of the tunables that are used to configure Stingray.  Take care if you modify any of these tunables directly, because doing so bypasses the extensive validation stages in the UI; refer to Riverbed support if you have any questions.

You can also use the undocumented UI page 'KeyInfo' to list all of the tunables that are used to configure Stingray:

https://stingray-host:9090/apps/zxtm/index.fcgi?section=KeyInfo
View full article
Load testing is a useful activity to stress test a deployment to find weaknesses or instabilities, and it's a useful way to compare alternative configurations to determine which is more efficient.  It should not be used for sizing calculations unless you can take great care to ensure that the synthetic load generated by the test framework is an accurate representation of real-world traffic.

One useful application of load testing is to verify whether a configuration change makes a measurable difference to the performance of the system under test.  You can usually infer that a similar effect will apply to a production system.

Introducing zeusbench

The zeusbench load testing tool is a load generator that Brocade vADC engineering uses for our own internal performance testing.  zeusbench can be found in $ZEUSHOME/admin/bin.  Use the --help option to display comprehensive help documentation.  Typical uses include:

Test the target using 100 users who each repeatedly request the named URL; each user will use a single dedicated keepalive connection.  Run for 30 seconds and report the result:

# zeusbench -t 30 -c 100 -k http://host:port/path

Test the target, starting with a request rate of 200 requests per second and stepping up by 50 requests per second every 30 seconds, to a maximum of 10 steps up.  Run forever (until Ctrl-C), using keepalive connections; only use each keepalive connection 3 times, then discard it.  Print verbose (per-second) progress reports:

# zeusbench -f -r 200,50,10,30 -k -K 3 -v http://host:port/path

For more information, please refer to Introducing Zeusbench.

Load testing checklist

If you conduct a load-testing exercise, bear the following points in mind:

Understand your tests

Ensure that you plan and understand your test fully, and use two or more independent methods to verify that it is behaving the way that you intend.  Common problems to watch out for include:

Servers returning error messages rather than correct content; the test will only measure how quickly a server can error!
Incorrect keepalive behavior; verify that connections are kept alive and reused as you intended.
Connection rate limits and concurrency control, which will limit the rate at which the traffic manager forwards requests to the servers.
SSL handshakes; most simple load tests will perform an SSL handshake for each request, and reusing SSL session data will significantly alter the result.

Verify that you have disabled or de-configured features that you do not want to skew the test results.  You want to reduce the configuration to the simplest possible so that you can focus on the specific configuration options you intend to test.  Candidates to simplify include:

Access and debug logging;
IP Transparency (and any other configuration that requires iptables and conntrack);
Optimization techniques like compression or other web content optimization;
Security policies such as service protection policies or application firewall rules;
Unnecessary request and response rules;
Advanced load balancing methods (for simplicity, use round robin or least connections).

It's not strictly necessary to create a production-identical environment if the goal of your test is simply to compare various configuration alternatives (for example, which rule is quicker).  A simple environment, even if suboptimal, will give you more reliable test results.
Run a baseline test and find the bottleneck

Perform end-to-end tests directly from client to server to determine the maximum capacity of the system and where the bottleneck resides.  The bottleneck is commonly either CPU utilization on the server or client, or the capacity of the network between the two.

Re-run the tests through the traffic manager, with a basic configuration, to determine where the bottleneck is now.  This will help you to interpret the results and focus your tuning efforts.  Measure your performance data using at least two independent methods (benchmark tool output, activity monitor, server logs, etc.) to verify that your chosen measurement method is accurate and consistent.  Investigate any discrepancies and ensure that you understand their cause, and disable the additional instrumentation before you run the final tests.

Important: tests that do not overload the system can be heavily skewed by latency effects.  For example, a test that repeats the same fast request down a small number of concurrent connections will not overload the client, server or traffic manager, but the introduction of an additional hop (adding in the traffic manager, for example) may double the latency and halve the performance result.  In reality, you will never see such an effect because the additional latency added by the traffic manager hop is not noticeable, particularly in the light of the latency of the client over a slow network.

Understand the difference between concurrency and rate tests

zeusbench and other load testing tools can often operate in two different modes: concurrent connections tests (-c) and connection rate tests (-r).  The charts referenced below illustrate two zeusbench tests against the same service, one where the concurrency is varied and one where the rate is varied:

[Chart: transactions per second (left-hand axis, blue) and response times (right-hand axis, red) in concurrency- and rate-based tests]

The concurrency-based tests apply load in a stable manner, so they are effective at measuring the maximum achievable transactions per second.  However, they can create a backlog of requests at high concurrencies, so the response time will grow accordingly.

The rate-based tests are less prone to creating a backlog of requests so long as the request rate is lower than the maximum transactions per second.  For lower request rates, they give a good estimate of the best achievable response time, but they quickly overload the service when the request rate nears or exceeds the maximum sustainable transaction rate.

Concurrency-based tests are often quicker to conduct (there is no binary chop to find the optimal request rate) and give more stable results.  For example, if you want to determine whether a configuration change affects the capacity of the system (by altering the CPU demands of the traffic manager or kernel), it's generally sufficient to find a concurrency value that gives a good, near-maximum result and repeat the tests with the two configurations (a simple sweep along these lines is sketched at the end of this article).

Always check dmesg and other OS logs

Resource starvation (file descriptors, sockets, internal tables) will affect load testing results and may not be immediately obvious.  Make a habit of following the system log and dmesg regularly.

Remember to tune and monitor your clients and servers as well as the traffic manager; many of the kernel tunables described above are also relevant to the clients and servers.
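One simple way to run the concurrency sweep described above is a small shell loop over zeusbench; the concurrency values, test duration and URL below are placeholders to adjust for your own environment:

for c in 50 100 200 400 800; do
    echo "== concurrency $c =="
    $ZEUSHOME/admin/bin/zeusbench -t 30 -c $c -k http://host:port/path
done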
View full article
Before reading this document, please refer to the documents Basic performance tuning for Stingray Traffic Manager on Linux and Advanced performance tuning for Stingray Traffic Manager on Linux.

This document summarizes some routing-related kernel tunables you may wish to apply to a production Stingray Traffic Manager instance.  It only applies to Stingray Traffic Manager software installed on a customer-provided Linux instance; it does not apply to the Stingray Traffic Manager Virtual Appliance or Cloud instances.

Note: Tech tip: How to apply kernel tunings on Linux

Using Netfilter conntrack

Note: Only use Netfilter conntrack in a performance-critical environment when absolutely necessary, as it adds significant load.

If you're getting the error message "ip_conntrack: table full, dropping packet" in dmesg or your system logs, you can check the number of entries in the table by reading from /proc/sys/net/ipv4/ip_conntrack_count, and the size of the table using ip_conntrack_max.  On most kernels, you can dynamically raise the maximum number of entries:

# echo 131072 > /proc/sys/net/ipv4/ip_conntrack_max

If the 'ip_conntrack_max' file is missing, you most likely have not loaded the ip_conntrack module.

The best way to permanently set the conntrack table sizes is by adding the following options to /etc/modules.conf (or /etc/modprobe.d/<filename>):

options ip_conntrack hashsize=1310719
options nf_conntrack hashsize=1310719

Note that Netfilter conntrack (used by Stingray's IP transparency and other NAT use cases) adds significant load to the kernel and should only be used if necessary.  When you enable NAT or other features that use conntrack, the conntrack kernel modules are loaded; they are not always unloaded once these features are disabled.  Search for and unload the unused modules ip_conntrack, iptable_filter, ip_tables and anything else with iptables in its name.

Packet forwarding and NAT

Stingray Traffic Manager is typically deployed in a two-armed fashion, spanning a front-end public network and a back-end private network.  In this case, IP forwarding should be disabled because the back-end private IP addresses are not routable from the front-end network:

# echo 0 > /proc/sys/net/ipv4/ip_forward

Stingray will not forward any IP packets.  Only traffic that is directed to a listening service on the traffic manager will be relayed between networks.  Although Stingray should not be regarded as a replacement for a network firewall, this configuration provides a strong security boundary which only allows known, identified traffic to reach the back-end servers from the front-end network, and all traffic is automatically 'scrubbed' at L2-L4.

In some cases, you may wish to allow the back-end servers to initiate connections to external addresses, for example, to call out to a public API or service.
The Stingray host can be configured to forward traffic and NAT outgoing connections to the external IP of the host so that return traffic is routable; the following example assumes that eth0 is the external interface:

# echo 1 > /proc/sys/net/ipv4/ip_forward

Flush existing iptables rules if required:

# iptables --flush

Masquerade traffic from the external interface:

# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Optionally, drop all forwarded connections except those initiated through the external interface:

# iptables -A FORWARD -m state --state NEW -i eth0 -j ACCEPT
# iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# iptables -A FORWARD -j DROP

Unless you are using asymmetric routing, you should disable source routing and enable reverse-path filtering as follows:

# echo 0 > /proc/sys/net/ipv4/conf/default/accept_source_route
# echo 1 > /proc/sys/net/ipv4/conf/default/rp_filter

Duplicate Address Detection

The Duplicate Address Detection (DAD) feature seeks to ensure that two machines don't raise the same address simultaneously.  This feature can conflict with the traffic manager's fault tolerance; when an IP is transferred from one traffic manager system to another, timing conditions may trigger DAD on the traffic manager that is raising the address.

# echo 0 > /proc/sys/net/ipv6/conf/default/dad_transmits
# echo 0 > /proc/sys/net/ipv6/conf/all/dad_transmits
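To make settings like these survive a reboot, they can also be expressed as /etc/sysctl.conf entries (see the tech tip on applying kernel tunings referenced above).  The following sketch shows the sysctl equivalents of the values used in this article; keep only the lines that match the configuration you have chosen for your deployment:

# Example /etc/sysctl.conf entries - retain only those appropriate to your configuration
net.ipv4.ip_forward = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv6.conf.default.dad_transmits = 0
net.ipv6.conf.all.dad_transmits = 0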
View full article
This document describes some operating system tunables you may wish to apply to a production Stingray Traffic Manager instance.  Note that the kernel tunables only apply to Stingray Traffic Manager software installed on a customer-provided Linux instance; they do not apply to the Stingray Traffic Manager Virtual Appliance or Cloud instances.

Consider the tuning techniques in this document when:

Running Stingray on a severely-constrained hardware platform, or where Stingray should not seek to use all available resources;
Running in a performance-critical environment;
The Stingray host appears to be overloaded (excessive CPU or memory usage);
Running with very specific traffic types, for example, large video downloads or heavy use of UDP;
Any time you see unexpected errors in the Stingray event log or the operating system syslog that relate to resource starvation, dropped connections or performance problems.

For more information on performance tuning, start with the Tuning Stingray Traffic Manager article.

Basic Kernel and Operating System tuning

Most modern Linux distributions have sufficiently large defaults and many tables are autosized and growable, so it is often not necessary to change tunings.  The values below are recommended for typical deployments on a medium-to-large server (8 cores, 4 GB RAM).

Note: Tech tip: How to apply kernel tunings on Linux

File descriptors

# echo 2097152 > /proc/sys/fs/file-max

Set a minimum of one million file descriptors unless resources are seriously constrained.  See also the Stingray setting maxfds below.

Ephemeral port range

# echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range
# echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout

Each TCP and UDP connection from Stingray to a back-end server consumes an ephemeral port, and that port is retained for the 'fin_timeout' period once the connection is closed.  If back-end connections are frequently created and closed, it's possible to exhaust the supply of ephemeral ports.  Increase the port range to the maximum (as above) and reduce the fin_timeout to 30 seconds if necessary.

SYN Cookies

# echo 1 > /proc/sys/net/ipv4/tcp_syncookies

SYN cookies should be enabled on a production system.  The Linux kernel will process connections normally until the backlog grows, at which point it will use SYN cookies rather than storing local state.  SYN cookies are an effective protection against SYN floods, one of the most common DoS attacks against a server.  If you are seeking a stable test configuration as a basis for other tuning, you should disable SYN cookies.  Increase the size of net/ipv4/tcp_max_syn_backlog if you encounter dropped connection attempts.

Request backlog

# echo 1024 > /proc/sys/net/core/somaxconn

The request backlog contains TCP connections that are established (the 3-way handshake is complete) but have not been accepted by the listening socket (Stingray).  See also the Stingray tunable 'listen_queue_size'.  Restart the Stingray software after changing this value.

If the listen queue fills up because Stingray does not accept connections sufficiently quickly, the kernel will quietly ignore additional connection attempts.  Clients will then back off (they assume packet loss has occurred) before retrying the connection.
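If you suspect the listen queue is overflowing, the kernel's cumulative TCP statistics give a quick indication; this is only a spot check, and the exact counter wording varies between kernel versions:

# netstat -s | grep -i -E 'listen|overflow'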
Advanced kernel and operating system tuning

In general, it's rarely necessary to further tune Linux kernel internals because the default values that are selected on a normal-to-high-memory system are sufficient for the vast majority of Stingray deployments, and most kernel tables will automatically resize if necessary.  Any problems will be reported in the kernel logs; dmesg is the quickest and most reliable way to check the logs on a live system.

Packet queues

In 10 GbE environments, you should consider increasing the size of the input queue:

# echo 5000 > /proc/sys/net/core/netdev_max_backlog

TCP TIME_WAIT tuning

TCP connections reside in the TIME_WAIT state in the kernel once they are closed.  TIME_WAIT allows the server to time out connections it has closed in a clean fashion.  If you see the error "TCP: time wait bucket table overflow", consider increasing the size of the table used to store TIME_WAIT connections:

# echo 7200000 > /proc/sys/net/ipv4/tcp_max_tw_buckets

TCP slow start and window sizes

In earlier Linux kernels (pre-2.6.39), the initial TCP window size was very small.  The impact of a small initial window size is that peers communicating over a high-latency network will take a long time (several seconds or more) to scale the window to utilize the full bandwidth available; often the connection will complete (albeit slowly) before an efficient window size has been negotiated.  The 2.6.39 kernel increases the default initial window size from 2 to 10.  If necessary, you can tune it manually:

# ip route change default via 192.168.1.1 dev eth0 proto static initcwnd 10

If a TCP connection stalls, even briefly, the kernel may reduce the TCP window size significantly in an attempt to respond to congestion.  Many commentators have suggested that this behavior is not necessary, and that this "slow start" behavior should be disabled:

# echo 0 > /proc/sys/net/ipv4/tcp_slow_start_after_idle

TCP options for Spirent load generators

If you are using older Spirent test kit, you may need to set the following tunables to work around optimizations in their TCP stack:

# echo 0 > /proc/sys/net/ipv4/tcp_timestamps
# echo 0 > /proc/sys/net/ipv4/tcp_window_scaling

[Note: See the attachments for the above changes in an easy-to-run shell script, contributed by Aidan Clarke]

irqbalance

Interrupts (IRQs) are wake-up calls to the CPU when new network traffic arrives.  The CPU is interrupted and diverted to handle the new network data.  Most NIC drivers will buffer interrupts and distribute them as efficiently as possible.  When running on a machine with multiple CPUs/cores, interrupts should be distributed across cores roughly evenly; otherwise, one CPU can become the bottleneck in high network traffic.

The general-purpose approach in Linux is to deploy irqbalance, which is a standard package on most major Linux distributions.  Under extremely high interrupt load, you may see one or more ksoftirqd processes exhibiting high CPU usage.  In this case, you should configure your network driver to use multiple interrupt queues (if supported) and then manually map those queues to one or more CPUs using SMP affinity.

Receive-Side Scaling (RSS)

Modern network cards can maintain multiple receive queues.  Packets within a particular TCP connection can be pinned to a single receive queue, and each queue has its own interrupt.  You can map interrupts to CPU cores to control which core each packet is delivered to.
This affinity delivers better performance by distributing traffic evenly across cores and by improving connection locality (a TCP connection is processed by a single core, improving CPU affinity).  For optimal performance, you should:

Allow the Stingray software to auto-size itself to run one process per CPU core (two when using hyperthreading), i.e. do not modify the num_children configurable.
Configure the network driver to create as many queues as you have cores, and verify the IRQs that the driver will raise per queue by checking /proc/interrupts.
Map each queue interrupt to one core using /proc/irq/<irq-number>/smp_affinity (a short sketch follows at the end of this article).

The precise steps are specific to the network card and drivers you have selected.  This document from the Linux Kernel Source Tree gives a good overview, and you should refer to the technical documentation provided by your network card vendor.

[Updated by Aidan Clarke to include a shell script to make it easier to deploy the changes above]
[Updated by Aidan Clarke to update the link from the old Google Code page to the new repository in the Linux Kernel Source Tree, after feedback about an outdated link from Rick Henderson]
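As a sketch of the final mapping step, suppose /proc/interrupts shows the driver raising IRQs 45 and 46 for two receive queues on eth0 (the interface name and IRQ numbers here are purely illustrative).  Each queue could then be pinned to its own core by writing a CPU bitmask to its smp_affinity file:

# grep eth0 /proc/interrupts
# echo 1 > /proc/irq/45/smp_affinity    # bitmask 0x1 = CPU core 0
# echo 2 > /proc/irq/46/smp_affinity    # bitmask 0x2 = CPU core 1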
View full article
This document describes performance-related tuning you may wish to apply to a production Stingray Traffic Manager software, virtual appliance or cloud instance.  For related documents (e.g. operating system tuning), start with the Tuning Stingray Traffic Manager article.

Tuning Stingray Traffic Manager

Stingray will auto-size the majority of internal tables based on available memory, CPU cores and operating system configuration.  The default behavior is appropriate for typical deployments and it is rarely necessary to tune it.  Several changes can be made to the default configuration to improve peak capacity if necessary.  Collectively, they may give a 5-20% capacity increase, depending on the specific test.

Basic performance tuning

Global settings

Global settings are defined in the 'System' part of the configuration.

Recent Connections table: set recent_conns to 0 to prevent Stingray from archiving recent connection data for debugging purposes.
Verbose logging: disable flipper!verbose, webcache!verbose and gslb!verbose to disable verbose logging.

Virtual Server settings

Most Virtual Server settings relating to performance tuning are to be found in the Connection Management section of the configuration.

X-Cluster-Client-IP: for HTTP traffic, the traffic manager adds an 'X-Cluster-Client-IP' header containing the remote client's IP address by default.  You should disable this feature if your back-end applications do not inspect this header.
HTTP Keepalives: enable support for keepalives; this will reduce the rate at which TCP connections must be established and torn down.  Not only do TCP handshakes incur latency and additional network traffic, but closed TCP connections consume operating system resources until TCP timeouts are hit.
UDP Port SMP: set this to 'yes' if you are managing simple UDP protocols such as DNS.  Otherwise, all UDP traffic is handled by a single Stingray process (so that connections can be effectively tracked).

Pool settings

HTTP Keepalives: enable support for keepalives (Pool: Connection Management; see the Virtual Server note above).  This will reduce the load on your back-end servers and the Stingray system.
Session Persistence: session persistence overrides load balancing and can prevent the traffic manager from selecting the optimal node and applying optimizations such as LARD.  Use session persistence selectively and only apply it to requests that must be pinned to a node.

Advanced Performance Tuning

General Global Settings

Maximum File Descriptors (maxfds): file descriptors are the basic operating system resource that Stingray consumes.  Typically, Stingray will require two file descriptors per active connection (client and server side) and one file descriptor for each idle keepalive connection and for each client connection that is pending or completing.  Stingray will attempt to bypass any soft per-process limits (e.g. those defined by ulimit) and gain the maximum number of file descriptors (per child process).  There are no performance impacts, and minimal memory impact, in doing this.  You can tune the maximum number of file descriptors in the OS using fs.file-max.  The default value of 1048576 should be sufficient.  Stingray will warn if it is running out of file descriptors, and will proactively close idle keepalives and slow down the rate at which new connections are accepted.
Listen queue size (listen_queue_size): this should be left at the default system value, and tuned using somaxconn (see above).

Number of child processes (num_children): this is auto-sized to the number of cores in the host system.  You can force the number of child processes to a particular number (for example, when running Stingray on a shared server) using the tunable 'num_children', which should be added manually to the global.cfg configuration file.

Tuning Accept behavior

The default accept behavior is tuned so that child processes greedily accept connections as quickly as possible.  With very large numbers of child processes, if you see uneven CPU usage, you may need to tune the multiple_accept, max_accepting and accepting_delay values in the Global Settings to limit the rate at which child processes take work.

Tuning network read/write behaviour

The Global Settings values so_rbuff_size and so_wbuff_size are used to tune the size of the operating system (kernel-space) read and write buffers, as restricted by the operating system limits /proc/sys/net/core/rmem_max and /proc/sys/net/core/wmem_max.

These buffer sizes determine how much network data the kernel will buffer before refusing additional data (from the client in the case of the read buffer, and from the application in the case of the write buffer).  If these values are increased, kernel memory usage per socket will increase.  In normal operation, Stingray will move data from the kernel buffers to its user-space buffers sufficiently quickly that the kernel buffers do not fill up.  You may want to increase these buffer sizes when running under high connection load on a fast network.

The Virtual Server settings max_client_buffer and max_server_buffer define the size of the Stingray (user-space) read and write buffers, used when Stingray is streaming data between the client and the server.  The buffers are temporary stores for the data read from the network buffers.  Larger values will increase memory usage per connection, to the benefit of more efficient flow control; this will improve performance for clients or servers accessing over high-latency networks.

The value chunk_size controls how much data Stingray reads and writes from the network buffers when processing traffic, and internal application buffers are allocated in units of chunk_size.  To limit fragmentation and assist scalability, the default value is quite low (4096 bytes); if you have plenty of free memory, consider setting it to 8192 or 16384.  Doing so will increase Stingray's memory footprint but may reduce the number of system calls, slightly reducing CPU usage (system time).

You may wish to tune the buffer size parameters if you are handling very large file transfers or video downloads over congested networks, and the chunk_size parameter if you have large amounts of free memory that is not reserved for caching and other purposes.

Tuning SSL performance

In general, the fastest secure ciphers that Stingray supports are SSL_RSA_WITH_RC4_128_SHA and SSL_RSA_WITH_RC4_128_MD5.  These are enabled by default.

SSL uses a private/public key pair during the initial client handshake.  1024-bit keys are approximately 5 times faster than 2048-bit keys (due to the computational complexity of the key operation), and are sufficiently secure for applications that require a moderate degree of protection.

SSL sessions are cached locally, and shared between all traffic manager child processes using a fixed-size (allocated at start-up) cache.
On a busy site, you should check the size, age and miss rate of the SSL Session ID cache (using the Activity monitor) and increase the size of the cache (ssl!cache!size) if there is a significant number of cache misses.

Tuning from-client connections

Timeouts are the key tool for controlling client-initiated connections to the traffic manager:

connect_timeout discards newly-established connections if no data is received within the timeout;
keepalive_timeout holds client-side keepalive connections open for a short time before discarding them if they are not reused;
timeout is a general-purpose timeout that discards an active connection if no data is received within the timeout period.

If you suspect that connections are dropped prematurely due to timeouts, you can temporarily enable the Virtual Server setting log!client_connection_failures to record the details of dropped client connections.

Tuning to-server connections

When processing HTTP traffic, Stingray uses a pool of keepalive connections to reuse TCP connections and reduce the rate at which TCP connections must be established and torn down.  If you use a webserver with a fixed concurrency limit (for example, Apache with its MaxClients and ServerLimit settings), then you should tune the connection limits carefully to avoid overloading the webserver and creating TCP connections that it cannot service.

Pool: max_connections_pernode: this setting limits the total number of TCP connections that this pool will make to each node; keepalive connections are included in that count.  Stingray will queue excess requests and schedule them to the next available server.  The current count of established connections to a node is shared by all Stingray processes.

Pool: max_idle_connections_pernode: when an HTTP request to a node completes, Stingray will generally hold the TCP connection open and reuse it for a subsequent HTTP request (as a keepalive connection), avoiding the overhead of tearing down and setting up new TCP connections.  In general, you should set this to the same value as max_connections_pernode, ensuring that neither setting exceeds the concurrency limit of the webserver.

Global Setting: max_idle_connections: use this setting to fine-tune the total number of keepalive connections Stingray will maintain to each node.  The idle_connection_timeout setting controls how quickly keepalive connections are closed.  You should only consider limiting the two max_idle_connections settings if you have a very large number of webservers that can sustain very high degrees of concurrency, and you find that the traffic manager routinely maintains too many idle keepalive connections as a result of very uneven traffic.

When running with very slow servers, or when connections to servers have high latency or packet loss, it may be necessary to increase the Pool timeouts:

max_connect_time discards connections that fail to connect within the timeout period; the requests will be retried against a different server node;
max_reply_time discards connections that fail to respond to the request within the desired timeout; requests will be retried against a different node if they are idempotent.

When streaming data between server and client, the general-purpose Virtual Server 'timeout' setting will apply.  If the client connection times out or is closed for any other reason, the server connection is immediately discarded.
If you suspect that connections are dropped prematurely due to timeouts, you can enable the Virtual Server setting log!server_connection_failures to record the details of dropped server connections.

Nagle's Algorithm

You should disable Nagle's Algorithm for traffic to the back-end servers, unless you are operating in an environment where the servers have been explicitly configured not to use delayed acknowledgements.  Set the node_so_nagle setting to 'off' in the Pool Connection Management configuration.  If you notice significant delays when communicating with the back-end servers, Nagle's Algorithm is a likely candidate.

Other settings

Ensure that you disable or de-configure any Stingray features that you do not need to use, such as health monitors, session persistence, TrafficScript rules, logging and activity monitors.  Disable debug logging in service protection classes, autoscaling settings, health monitors, actions (used by the eventing system) and GLB services.

For more information, start with the Tuning Stingray Traffic Manager article.
View full article
Linux kernel settings can be set and read using entries in the /proc filesystem or using sysctl.  Permanent settings that should be applied on boot are defined in sysctl.conf.

Example: to set the maximum number of file descriptors from the command line:

# echo 2097152 > /proc/sys/fs/file-max

…or…

# sysctl -w fs.file-max=2097152

Example: to set the maximum number of file descriptors using sysctl.conf, add the following to /etc/sysctl.conf:

fs.file-max = 2097152

sysctl.conf is applied at boot, or manually using sysctl -p.
View full article
Stingray Traffic Manager is supported on any modern Linux operating system, running on standard x86 (32 and 64 bit) platforms.  Riverbed develops and routinely tests the Stingray software on various systems, including Red Hat Linux, CentOS, Debian, Ubuntu and SuSE, and on a range of hardware and virtualized platforms (VMware, Xen, OracleVM, KVM and Hyper-V).

Stingray's requirements on the operating system (OS) are light and there are no non-standard dependencies between the software and the OS.  We recommend a modern kernel (2.6.39 or later, or 3.0 or later), and very strongly recommend a 64-bit variant of that kernel for performance and scalability (memory size) reasons.  You should select the OS based on your preferred supplier and your internal expertise.

Stingray will operate on any industry-standard x86 server hardware that is supported by the operating system vendor.  If you intend to use any non-standard hardware (third-party network cards, for example), you should verify that it is adequately supported by your chosen OS vendor.  Riverbed does not publish a preferred hardware list, and we test with a range of components on HP, Dell, IBM, Sun/Oracle and other hardware.

Minimum hardware requirements

CPU: CPU-bound operations such as SSL decryption depend on available CPU resource, and they typically scale linearly with the number of cores available and the clock speed.  A minimum of 4 cores is recommended for a moderate workload, depending on configuration.  Stingray scales comfortably on 12-16 core systems.

Memory: a minimum of 2 GB memory is recommended, although Stingray will function comfortably in 512 MB or less with low traffic levels.  Additional memory will increase the number of concurrent connections that can be sustained (approximately 50,000 concurrent connections per GB of memory), and additional memory may also be used for content caching and other internal traffic manager caches.

NICs: Stingray is typically deployed in a two-armed fashion (a front-end and a back-end NIC), sometimes with an additional management interface.  There are no software limits on the numbers or types of physical interfaces that can be supported.  Routing, tagging and interface bonding are performed by operating system configuration and do not affect the operation of the Stingray software.

Security configuration

Stingray has a strong security model.  The software is installed and run as the root user, and the processes that handle network traffic explicitly drop privileges and run in a local chroot jail.  You may use additional security measures such as SELinux and iptables if desired.

Expected performance

Riverbed publishes performance data that is based on benchmark testing of Stingray software on a range of hardware platforms.  This data will give an indication of what is possible, but real-world throughput and requests-per-second data is very dependent on latency, packet loss and traffic types and will deviate from what was achieved in ideal conditions.  If you have firm performance requirements, you should validate that Stingray can meet them with real-world traffic (just as you would with any ADC device, whether software, virtual appliance or hardware).

Note that Stingray software is licensed on real performance, not on theoretical performance.  You are free to select the hardware that best meets your needs, and upgrade at any point.

Installing the Stingray software on the target host

Stingray software should be installed and run as root.  Root privileges allow the software to bind to low ports (e.g. port 80) and to allocate additional operating system resources (e.g. file descriptors).  For detailed installation instructions, refer to the Software Getting Started guide.
Installing additional software components on the target host

Stingray software can take advantage of two Riverbed-supplied kernel modules that extend the packet-handling capability of the Linux kernel (see Stingray Kernel Modules for Linux Software):

ztrans

The ztrans kernel module exposes a hook into the IP stack's NAT capability, allowing Stingray to control source NAT for outgoing traffic.  This capability is used by Stingray's IP Transparency functionality to force the source IP address of traffic to the back-end servers, so that the connection appears to originate from the remote client's IP address (or another non-default address if desired).  ztrans depends on standard kernel modules (nat, conntrack, ip_tables) which are loaded automatically if required.

NAT and connection tracking add a significant load to the kernel, as all ingress traffic that is not addressed to a local interface must be matched against the kernel NAT table, and entries in that table must be managed.  You can safely compile and register the kernel module; it is only loaded if you enable IP Transparency on one or more pools, but once loaded, the performance hit is incurred against all traffic processed by the kernel.

zcluster

The zcluster kernel module applies a low-level filter to the IP stack.  This filter is used by Stingray's multi-hosted IP address capability; a multi-hosted Traffic IP address is raised by several Stingray devices using a common multicast MAC address.  Traffic destined for that IP address is multicast to all the Stingray devices, and the zcluster module filters the packets so that each UDP datagram or TCP connection is handled by a unique traffic manager in the cluster.

The zcluster module does not add a significant load to the kernel, but the use of a multicast address means that ingress network traffic is replicated across two or more Stingray devices, increasing the traffic volume that each Stingray must process.  In practice, the effect is generally low.  The total volume of ingress traffic to each Stingray is capped by the available upstream bandwidth, and in the majority of cases, ingress traffic is significantly lower than the egress traffic (protocols like HTTP are generally very asymmetric).  The zcluster kernel module can be safely compiled and registered; it is only loaded and activated if multi-hosted IP addresses are in use.

You can download the source for these kernel modules here: Stingray Kernel Modules for Linux Software.  Note that these modules are pre-installed in Stingray Virtual Appliances and they are not available for Solaris.  Stingray Traffic Manager does not require any other specialized kernel modules.
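A quick way to confirm whether either module is currently loaded on a host, using the module names described above:

# lsmod | grep -E 'ztrans|zcluster'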
View full article
Stingray can load balance servers in a few different ways, and a Pool's Load Balancing configuration page shows the different options.  They're all pretty straightforward except for Perceptive; how does that one work?

Perceptive can be thought of as Least Connections skewed to favor the servers with the fastest response time.  Perceptive factors both connection counts and response times into the load balancing decision to ensure that traffic is distributed evenly amongst the servers in a farm.  It is best understood in the context of a few examples:

Heterogeneous server farm

A great scenario in which to use Perceptive is when your server farm is heterogeneous, where some servers are more powerful than others.  The challenge is to ensure that the more powerful servers get a greater share of the traffic, but that the weaker servers are not starved.

Perceptive will begin by distributing traffic based on connection counts, like Least Connections.  This ensures that the weaker servers are getting traffic and not sitting idle.  As traffic increases, the powerful servers will naturally be able to handle it better, leading to a disparity in response times.  This will trigger Perceptive to begin favoring those more powerful servers, as they are responding quicker, by giving them a greater share of the traffic.

Heterogeneous workloads

Another great scenario in which to use Perceptive is when your workload is heterogeneous, where some requests generate a lot more load on your servers than others.  As in the heterogeneous server farm case, Perceptive will begin by distributing traffic like Least Connections.  When the workload becomes more heterogeneous, some servers will get bogged down with the more CPU-intensive requests and begin to respond more slowly.  This will trigger Perceptive to send traffic away from those servers, to the other servers that are not bogged down and are responding quicker.

Ramping up traffic to a new server

The Perceptive algorithm introduces traffic to a new server (or a server that has returned from a failed state) gently.  When a new server is added to a pool, the algorithm tries it with a single request, and if it receives a reply, gradually increases the number of requests it sends the new server until it is receiving the same proportion of the load as other equivalent nodes in the pool.  The algorithm used to ramp up the load is adaptive, so it isn't possible to make statements of the sort "the load will be increased from 0 to 100% of its fair share over 2 minutes"; the rate at which the load is increased is dependent on the responsiveness of the server.  So, for example, a new web server serving a small quantity of static content will very quickly be ramped up to full speed, whereas a Java application server that compiles JSPs the first time they are used (and so is slow to respond to begin with) will be ramped up more slowly.

Summary

The Perceptive load balancing algorithm factors both connection counts and response times into a two-step load balancing decision.  When there is little disparity in response times, traffic will be distributed like Least Connections.  When there is a larger disparity in response times, Perceptive will factor this in and favor the servers that are responding quicker, like Fastest Response Time.  Perceptive is great for handling heterogeneity in both the server farm and the workload, ensuring efficient load balancing across your server farm in either case.
Read more

For a more detailed discussion of the load balancing capabilities of Stingray, check out Feature Brief: Load Balancing in Stingray Traffic Manager, and take a look at the video introduction: Video: Introduction to Stingray Load Balancing
View full article
Using Stingray Traffic Manager to load balance a pool of LDAP servers for High Availability is a fairly simple process.  Here are the steps:

Start up the Manage a new service wizard.  This is located in the top right corner of the Stingray Traffic Manager web interface, under the Wizards drop-down.
In step 2 of the wizard, set the Protocol to LDAP.  The Port will automatically be set to 389, the default LDAP port.  Give the service a Name.
In step 3, add in the hostnames or IP addresses of each of your LDAP servers.

At this point a virtual server and pool will be created.  Before it is usable, a few additional changes may be made:

Change the Load Balancing algorithm of the pool to Least Connections.
Create a new Session Persistence class of type IP-based persistence (Catalogs -> Persistence) and assign it to the pool.
Create a Traffic IP Group (Services -> Traffic IP Groups) and assign it to the virtual server.  The Traffic IP Group is the IP address LDAP clients will connect to.

The final step is to install the LDAP Health Monitor.  The LDAP Health Monitor is an External Program Monitor that binds to the LDAP server, submits an LDAP query, and checks for a response.  Instructions to install the monitor are in the linked page.
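The check performed by the LDAP Health Monitor is conceptually similar to the following ldapsearch command, which you can also run by hand to confirm that a node answers queries.  The hostname is an example, and an anonymous bind against the root DSE is assumed; adjust the bind options and base DN for your directory:

# ldapsearch -x -H ldap://ldap1.example.com:389 -b "" -s base "(objectclass=*)"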
View full article
It's that time of the year, when the boss reminds you that you've got to change every single 'Copyright 2006' in the footer of your web pages to 'Copyright 2007'... at midnight, New Year's Eve.

Fear not! With a little TrafficScript, you can celebrate with everyone else and lay the guilt on the boss when you return in the New Year.  The trick is to add a TrafficScript response rule that rewrites all of your outgoing web pages, but only after midnight on January 1st:

# First, check the date
if( sys.time.year() < 2007 ) break;

# Now, check it's a web page
$contenttype = http.getResponseHeader( "Content-Type" );
if( ! string.startsWith( $contenttype, "text/html" ) ) break;

The difficult bit is working out which bits of content to rewrite.  You can't just change every 2006 to the new date, because there may be lots of dates in the web content that you don't want to change.

For my site, the footer of every page says:

Copyright MySite.com 1995-2006

... and it's in the last 250 bytes of the page (at least, before we insert our Tracking user activity with Google Analytics code).

The following code reads the entire response and rewrites it on the fly.  Note that 'http.getResponseBody()' deals with all of the awkward HTTP protocol parsing for you, decompressing compressed responses and reassembling chunked transfers for dynamic applications, so you don't need to downgrade the request to HTTP/1.0, disable keepalives, remove Accept-Encoding headers or anything else:

$body = http.getResponseBody();
$start = string.drop( $body, 250 );
$end = string.skip( $body, string.len( $body )-250 );
$end = string.replace( $end, " 1995-2006", " 1995-" . sys.time.year() );
http.setResponseBody( $start . $end );

That's it - Happy New Year!

For reference, here's the entire rule:

# First, check the date
if( sys.time.year() < 2007 ) break;

# Now, check it's a web page
$contenttype = http.getResponseHeader( "Content-Type" );
if( ! string.startsWith( $contenttype, "text/html" ) ) break;

$body = http.getResponseBody();
$start = string.drop( $body, 250 );
$end = string.skip( $body, string.len( $body )-250 );
$end = string.replace( $end, " 1995-2006", " 1995-" . sys.time.year() );
http.setResponseBody( $start . $end );

This article was originally published 22 December 2006
View full article
A document to hold useful regular expressions that I have pulled together for various tasks.  RegExr is a great and very handy online tool for checking regular expression matches: RegExr

A regex to validate a password string, to ensure it does not contain dangerous punctuation characters and is less than 20 characters long.  Useful for Stingray Application Firewall form field protection in login pages:

^[^;,{}\[\]\$\%\*\(\)<>:?\\/'"`]{0,20}$

A regex to check that a password has at least one uppercase letter, one lowercase letter, one digit and one punctuation character from the approved list, and is at least 8 but less than 20 characters long:

^(?=.*[A-Z])(?=.*[a-z])(?=.*[\\@^!\.,~-])(?=.*\d)(.{8,20})$

A regex to check that a field contains a valid email address:

^[^@]+@[^@]+\.[^@]+$
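A quick way to exercise these expressions from the command line is with grep; for example, the email pattern can be tried as follows.  Use grep -E for the plain extended patterns and grep -P for the patterns above that use look-ahead assertions (grep -P requires GNU grep built with PCRE support); the test strings are arbitrary examples:

# echo 'user@example.com' | grep -E '^[^@]+@[^@]+\.[^@]+$'
# echo 'Passw0rd!' | grep -P '^(?=.*[A-Z])(?=.*[a-z])(?=.*[\\@^!\.,~-])(?=.*\d)(.{8,20})$'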
View full article
Update 2013-06-18: I had to do 50 conversions today, so I have attached a shell script to automate this process.

==

Assumptions:

You have a pkcs12 bundle with a private key and certificate in it - in this example we will use a file called www.website.com.p12.  I use SimpleAuthority as it is cross-platform and the free edition lets you create up to 5 keypairs, which is plenty for the lab...
You don't have a password on the private key (passwords on machine-loaded keys are a waste of time IMHO).
You have a Linux / Mac OS X / Unix system with openssl installed (Mac OS X does by default, as do most Linux installs...).

3 commands you need:

First we take the p12 and export just the private key (-nocerts) and export it in RSA format with no encryption (-nodes):

openssl pkcs12 -in www.website.com.p12 -nocerts -out www.website.com.key.pem -nodes

Second we take the p12 and export just the certificate (-nokeys) and export it in RSA format with no encryption (-nodes):

openssl pkcs12 -in www.website.com.p12 -nokeys -out www.website.com.cert.pem -nodes

Third, we convert the private key into the format Stingray wants it in (-text):

openssl rsa -in www.website.com.key.pem -out www.website.com.key.txt.pem -text

You are left with a list of files, but only two of them are needed to import into the Stingray:

www.website.com.key.txt.pem is the private key you need
www.website.com.cert.pem is the certificate you need

These can then be imported into the STM under Catalogues > SSL > Server Certs (a quick way to sanity-check that the pair match is sketched at the end of this article).  Hope this helps.

~ $ ./p12_convert.sh -h
./p12_convert.sh written by Aidan Clarke <aidan.clarke at riverbed.com>
Copyright Riverbed Technologies 2013

usage: ./p12_convert.sh -i inputfile -o outputfile

This script converts a p12 bundle to PEM formatted key and certificate ready for import into Stingray Traffic Manager

OPTIONS:
   -h      Show this message
   -i      Input file name
   -o      Output file name
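After conversion, it is worth confirming that the key and certificate actually belong together before importing them.  A common check is to compare the RSA modulus of each (file names as in the example above); if the two digests match, the pair belong together:

# openssl x509 -noout -modulus -in www.website.com.cert.pem | openssl md5
# openssl rsa -noout -modulus -in www.website.com.key.txt.pem | openssl md5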
View full article