Pulse Secure vADC

TrafficScript provides 'if' and 'if/else' statements for conditional execution. The condition is an expression that is evaluated; its return value determines whether the condition is 'true' or 'false': a non-zero number or non-empty string is 'true', and a zero number or empty string is 'false'.

For example:

if( $url == "/" ) {
   log.info( "Access to home page" );
}

... or:

if( $isAuthenticated ) {
   pool.use( "local pool" );
} else {
   log.info( "Unauthenticated user accessed the service" );
   connection.close();
}

Read more: Collected Tech Tips: TrafficScript examples
A client-first virtual server accepts traffic, waits for the first data from the client and then makes a load balancing decision (selects a pool and node). It connects to the selected node (server) and shuttles data back and forth between the client and the server. For more details, take a look at the document: Server First, Client First and Generic Streaming Protocols.

The SSL passthrough virtual server type is a slightly specialized version of the client-first virtual server. It forwards SSL-encrypted data without any modification, but with two key differences:

Difference 1: Session Persistence

You can use an SSL Session ID persistence class with an SSL passthrough virtual server. This persistence class identifies when a new SSL session is established by a server, and pins future connections that use that session to the same server.

This technique is used to improve performance. An SSL session identifies the encryption key and connection state; using persistence with the session ID allows clients to re-use previously-negotiated SSL credentials, so the compute-intensive RSA operation does not need to be repeated. Not using this technique will decrease the capacity of your server farm and increase latency.

Note: many clients (browsers) routinely renegotiate their SSL session every few minutes to reduce the opportunity to sniff large quantities of data that use the same key, so SSL session ID persistence is not appropriate for pinning client sessions to the same server. Because you cannot inspect the encrypted application data (to accurately identify user sessions), you would need to use IP address persistence for this purpose.

Difference 2: SSL transparency

SSL Passthrough allows you to use a Stingray-specific hack to the SSL protocol that prepends the connection with data that identifies the client IP address and port. This is an alternative to the 'X-Cluster-Client-Ip' header that Stingray adds to plaintext HTTP connections. This capability is disabled by default, and only functions if the destination node is another Stingray Traffic Manager; check the Stingray Product Documentation (keys 'ssl_enhance' and 'ssl_trust_magic') for more details.

Processing SSL traffic

None of the ssl TrafficScript functions operate with an SSL Passthrough virtual server. If you need to inspect, persist or modify the data in an SSL connection, or you want to centralize the SSL decryption, then you should terminate and decrypt the SSL connection on Stingray Traffic Manager. This is easy to do; start with an existing SSL Passthrough service and apply the 'SSL Decrypt a Service' wizard to apply the correct configuration.
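If you use SSL Session ID persistence, you may want to confirm that clients really are re-using SSL sessions. One quick way to check from the command line is openssl s_client's -reconnect option (192.0.2.10 below is just a placeholder for the Traffic IP of your passthrough service); it performs a full handshake and then reconnects several times offering the cached session, reporting each attempt as 'New' or 'Reused':

# openssl s_client -connect 192.0.2.10:443 -reconnect < /dev/null 2>/dev/null | grep -E '^(New|Reused),'

If every line says 'New', sessions are not being re-used and the SSL Session ID persistence class will treat each connection as a fresh session.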
A user commented that Stingray Traffic Manager sometimes adds a cookie named 'X-Mapping-SOMERANDOMDATA' to an HTTP response, and wondered what the purpose of this cookie was, and whether it constituted a privacy or security risk.

Transparent Session Affinity

The cookie is used by Stingray's 'Transparent Session Affinity' persistence class.

Transparent session affinity inserts cookies into the HTTP response to track sessions. This is generally the most appropriate method for HTTP and SSL-decrypted HTTPS traffic, because it does not require the nodes to set any cookies in their responses.

The persistence class adds a cookie to the HTTP response that identifies the name of the session persistence class and the chosen back-end node:

Set-Cookie: X-Mapping-hglpomgk=4A3A3083379D97CE4177670FEED6E830; path=/

When subsequent requests in that session are processed and the same session persistence class is invoked, it inspects the requests to determine if the named cookie exists. If it does, the persistence class inspects the value of the cookie to determine the node to use.

The unique identifier in the cookie name is a hashed version of the name of the session persistence class (there may be multiple independent session persistence rules in use). When the traffic manager processes a request, it can then identify the correct cookie for the active session persistence class.

The value of the cookie is a hashed version of the name of the selected node in the cluster. It is non-reversible by an external party. The value identifies which server the session should be persisted to; there is no personally-identifiable information in the cookie. Two independent users who access the service, are managed by the same session persistence class and are routed to the same back-end server will be assigned the same cookie name and value.
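If you want to see the cookie for yourself, a simple check (192.0.2.10 below is a placeholder for the address of an HTTP virtual server that uses transparent session affinity) is to make a request with curl and look for the Set-Cookie header in the response:

# curl -s -D - -o /dev/null http://192.0.2.10/ | grep -i '^Set-Cookie: X-Mapping'

The -D - option writes the response headers to standard output and -o /dev/null discards the body, so only the persistence cookie (if present) is printed.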
A complex topic that relates to the many techniques that Stingray uses to accelerate services, offload work so they function more efficiently, and rate-limit excessive transactions to maintain acceptable levels of performance.

Question 1. Are client connections terminated on Stingray itself?

Yes. Stingray handles slow WAN-side connections very efficiently, and terminates them completely. A separate TCP connection is established with the server, with TCP options chosen to optimize the local link between Stingray and the server, which (generally) has minimal packet loss, latency and jitter. This places the server into the optimal, benchmark-like environment.

Question 2. Is there any type of request multiplexing done at all to the pool servicing a given virtual server? Or are client connections simply passed through?

Yes. For HTTP, Stingray carefully manages a pool of connections to each node in a pool. When a request to a pool completes, provided the server does not close the connection, we keep the connection open (it is idle). For subsequent requests, Stingray prefers to reuse an idle connection rather than creating a new one.

Stingray holds a maximum of max_idle_connections_pernode connections in the idle state (so that we don't tie up too many resources, such as threads or processes, on the server), up to a limit of max_idle_connections in total (so that we don't use too many resources on the traffic manager), and we will only ever open a total of max_connections_pernode connections simultaneously (default: no limit) so that if the server has a concurrency limit (e.g. mpm_common - Apache HTTP Server: maxclients) we won't overload it. If the incoming request rate cannot be serviced within the max_connections_pernode limit, requests are queued internally in the traffic manager and released when a concurrency slot becomes available.

Question 3. Is any request buffering done, in both directions?

Full request buffering, up to the memory limits defined in max_client_buffer and max_server_buffer. We override these limits if you read a request or response using a TrafficScript rule. The implication is that if the client is slow, then we:

1. Accept the client connection
2. Read the entire request (slowly, over the slow, lossy WAN)
3. Process it internally
4. Choose a pool and node
5. Select an idle connection to the node, or open a new connection
6. Write the request to the node (fast, over the local LAN)
7. Node processes the request and generates the response
8. Read the response from the node (fast, over the local LAN)
9. Release the connection to the node (either close it or hold it as idle)
10. Process the response internally
11. Write the response back to the client (slowly, over the WAN)
12. Either close the client connection, or (more typically) keep it open as a KeepAlive connection

The connection to the node only lasts for steps 5-9, i.e. it is very quick. This lets the nodes process connections as quickly as possible, offloading the slow TCP connection on the WAN side; this is one aspect of the acceleration we deliver (putting the node in the optimal environment so that you can get benchmark performance from the node). If we do not read the entire response, and it exceeds max_server_buffer, then we will read as much as we can, write it to the client and refill the buffer as fast as possible.

Finally, don't forget the potential to use caching on the Load Balancer / Traffic Manager to reduce the number of transactions the servers must handle.
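To see the connection reuse described in Question 2 in action, one rough but simple check is to watch the established connections from the traffic manager host to a back-end node while traffic is flowing (10.0.1.10:80 below is a placeholder for the node's address and port):

# ss -tn state established | grep 10.0.1.10:80

Idle keepalive connections appear here as established connections that remain open between client requests; once traffic subsides, their number is bounded by max_idle_connections_pernode.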
Stingray Virtual Appliances automatically export lots of standard information about the running system, such as traffic through the network cards, memory usage and CPU usage. This information is obtained from the operating system-level SNMP service.

A standard Stingray software installation contains an SNMP agent that provides a wealth of data about the performance and activity of the Stingray software. With a little bit of configuration, you can integrate the Stingray SNMP agent with your local OS agent to give a single point for both OS and Stingray SNMP data.

Step 1: Verify that the Stingray SNMP agent is running on the public SNMP port (161):

# snmpget -v1 -c public localhost SNMPv2-SMI::enterprises.7146.1.2.1.1.0
SNMPv2-SMI::enterprises.7146.1.2.1.1.0 = STRING: "9.1"

SNMPv2-SMI::enterprises.7146.1.2.1.1.0 is the SNMP OID corresponding to the version of the running Stingray software. You can use the 'friendly names' if you follow these instructions: Installing the Stingray SNMP MIBs:

# snmpget -v1 -c public localhost ZXTM-MIB::version.0
ZXTM-MIB::version.0 = STRING: "9.1"

Step 2: Reconfigure the Stingray SNMP agent to run on an internal port

Go to the Stingray Admin Server and reconfigure the SNMP settings as follows:

snmp!enabled    Yes
snmp!bindip     127.0.0.1
snmp!allow      localhost
snmp!port       1161
snmp!community  private

Step 3: Install Net-SNMP and configure it to proxy to the Stingray SNMP agent

Install Net-SNMP on the host server (the package is often named 'snmpd' by package managers such as apt). Either edit the Net-SNMP configuration file (often /etc/snmp/snmpd.conf) or create a new /etc/snmp/snmpd.local.conf; add the following to the end:

# Proxy Stingray oids to the Stingray server
proxy -v 1 -c private localhost:1161 .1.3.6.1.4.1.7146

Step 4: Restart the SNMP server

You should now be able to see both system and Stingray OIDs through the SNMP server running on the standard port (161). If you can only see a limited subset of the system OIDs (and none of the Stingray ones), ensure that the access control in the snmpd.conf file is not too restrictive.
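Once snmpd has restarted, a quick way to confirm that the proxy is working is to walk the Stingray enterprise subtree (.1.3.6.1.4.1.7146) through the standard port, for example:

# snmpwalk -v1 -c public localhost .1.3.6.1.4.1.7146 | head

If the proxy is configured correctly, this returns Stingray OIDs (or ZXTM-MIB:: names, if the MIBs are installed) alongside the usual system OIDs.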
Stingray Traffic Manager ships with SNMP MIBs which make it easier to query the SNMP agent. You can download these two MIB files (ZXTM-MIB and ZXTM-MIB-SMIv2) from the Admin Interface.

Step 1: Determine the appropriate location for MIB files on your client system:

# net-snmp-config --default-mibdirs
/home/owen/.snmp/mibs:/usr/share/snmp/mibs

Copy the MIB files into one of these locations.

Step 2: Determine the appropriate location for your snmp.conf:

# net-snmp-config --snmpconfpath
/etc/snmp:/usr/share/snmp:/usr/lib/snmp:/home/owen/.snmp:/var/lib/snmp

Add the following two lines to your snmp.conf (creating a new snmp.conf if necessary):

mibs +ZXTM-MIB
mibs +ZXTM-MIB-SMIv2

Step 3: Test that you get the SNMP 'friendly names' when you query the Stingray SNMP agent:

# snmpwalk -v2c -c public localhost 1.3 | head
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (239872900) 27 days, 18:18:49.00
SNMPv2-MIB::sysName.0 = STRING: athene.cam.zeus.com
ZXTM-MIB::version.0 = STRING: "9.1"
ZXTM-MIB::numberChildProcesses.0 = INTEGER: 4
ZXTM-MIB::upTime.0 = Timeticks: (239872900) 27 days, 18:18:49.00
ZXTM-MIB::timeLastConfigUpdate.0 = Timeticks: (267400) 0:44:34.00
dmesg: use dmesg to quickly display recent kernel messages. Any warnings about resource starvation, overfilled kernel tables and the like should be addressed by appropriate kernel tuning.

vmstat: vmstat 3 is a quick and easy way to monitor CPU utilization. On a well-utilized Stingray system, the user (us) and system (sy) CPU times will give a rough indication of the utilization, and the idle (id) time a rough indication of the spare capacity. Stingray workload is shared between user and system time; user time will predominate for a complex configuration or one that uses CPU-intensive operations (e.g. SSL, compression), and system time will predominate for a simple configuration with minimal traffic inspection. The wait time (wa) should always be low. Note that user, system and idle time are not a good indication of spare capacity, because Stingray uses system resources as eagerly as possible and operates more efficiently the more highly loaded it is. For example, even if a Stingray system is 25% utilized, it is likely to be at less than 25% of its total capacity.

/proc: use /proc/<pid> for a quick investigation of the state of Stingray processes - memory usage, number of open file descriptors, process limits.

ethtool: use ethtool to query and configure the network interface hardware:

ethtool eth0 - determine interface speed and negotiated options
ethtool -S eth0 - dump statistics for a network interface; large numbers of retransmits, errors or collisions may indicate a faulty NIC, poor cabling, or a congested network
ethtool -i eth0 - confirm the driver in use
ethtool -k eth0 - list the offload features employed by the card

tcpdump: use tcpdump to capture raw packets from named interfaces. A tcpdump analysis can uncover unexpected problems such as slow closes and inappropriate use of TCP optimizations, as well as application-level problems.

trace: Stingray includes a wrapper (ZEUSHOME/zxtm/bin/trace) around the standard operating system trace tools. trace will output all system calls performed by the traced Stingray processes.
The Stingray Configuration Guide document (see Stingray Product Documentation) lists all of the tunables that are used to configure Stingray. Take care if you modify any of these tunables directly, because doing so bypasses the extensive validation stages in the UI; refer to Riverbed support if you have any questions.

You can also use the undocumented UI page 'KeyInfo' to list all of the tunables that are used to configure Stingray:

https://stingray-host:9090/apps/zxtm/index.fcgi?section=KeyInfo
Load testing is a useful activity to stress-test a deployment to find weaknesses or instabilities, and it's a useful way to compare alternative configurations to determine which is more efficient. It should not be used for sizing calculations unless you take great care to ensure that the synthetic load generated by the test framework is an accurate representation of real-world traffic.

One useful application of load testing is to verify whether a configuration change makes a measurable difference to the performance of the system under test. You can usually infer that a similar effect will apply to a production system.

Introducing zeusbench

The zeusbench load testing tool is a load generator that Brocade vADC engineering uses for our own internal performance testing. zeusbench can be found in $ZEUSHOME/admin/bin. Use the --help option to display comprehensive help documentation.

Typical uses include:

Test the target using 100 users who each repeatedly request the named URL; each user will use a single dedicated keepalive connection. Run for 30 seconds and report the result:

# zeusbench -t 30 -c 100 -k http://host:port/path

Test the target, starting with a request rate of 200 requests per second and stepping up by 50 requests per second every 30 seconds, to a maximum of 10 steps. Run forever (until Ctrl-C), using keepalive connections; only use each keepalive connection 3 times, then discard it. Print verbose (per-second) progress reports:

# zeusbench -f -r 200,50,10,30 -k -K 3 -v http://host:port/path

For more information, please refer to Introducing Zeusbench.

Load testing checklist

If you conduct a load-testing exercise, bear the following points in mind.

Understand your tests

Ensure that you plan and understand your test fully, and use two or more independent methods to verify that it is behaving the way that you intend. Common problems to watch out for include:

- Servers returning error messages rather than correct content; the test will only measure how quickly a server can generate errors!
- Incorrect keepalive behavior; verify that connections are kept alive and reused as you intended.
- Connection rate limits and concurrency control, which will limit the rate at which the traffic manager forwards requests to the servers.
- SSL handshakes; most simple load tests will perform an SSL handshake for each request, and reusing SSL session data will significantly alter the result.

Verify that you have disabled or de-configured features that you do not want to skew the test results. You want to reduce the configuration to the simplest possible so that you can focus on the specific configuration options you intend to test. Candidates to simplify include:

- Access and debug logging
- IP Transparency (and any other configuration that requires iptables and conntrack)
- Optimization techniques like compression or other web content optimization
- Security policies such as service protection policies or application firewall rules
- Unnecessary request and response rules
- Advanced load balancing methods (for simplicity, use round robin or least connections)

It's not strictly necessary to create a production-identical environment if the goal of your test is simply to compare various configuration alternatives - for example, which rule is quicker. A simple environment, even if suboptimal, will give you more reliable test results.
Run a baseline test and find the bottleneck

Perform end-to-end tests directly from client to server to determine the maximum capacity of the system and where the bottleneck resides. The bottleneck is commonly either CPU utilization on the server or client, or the capacity of the network between the two.

Re-run the tests through the traffic manager, with a basic configuration, to determine where the bottleneck is now (see the example commands at the end of this article). This will help you to interpret the results and focus your tuning efforts. Measure your performance data using at least two independent methods - benchmark tool output, activity monitor, server logs, etc. - to verify that your chosen measurement method is accurate and consistent. Investigate any discrepancies and ensure that you understand their cause, and disable the additional instrumentation before you run the final tests.

Important: tests that do not overload the system can be heavily skewed by latency effects. For example, a test that repeats the same fast request down a small number of concurrent connections will not overload the client, server or traffic manager, but the introduction of an additional hop (adding in the traffic manager, for example) may double the latency and halve the performance result. In reality, you will never see such an effect, because the additional latency added by the traffic manager hop is not noticeable, particularly in the light of the latency of the client over a slow network.

Understand the difference between concurrency and rate tests

zeusbench and other load testing tools can often operate in two different modes - concurrent connections tests (-c) and connection rate tests (-r). The charts below illustrate two zeusbench tests against the same service, one where the concurrency is varied and one where the rate is varied, measuring transactions-per-second (left-hand axis, blue) and response times (right-hand axis, red).

The concurrency-based tests apply load in a stable manner, so they are effective at measuring the maximum achievable transactions-per-second. However, they can create a backlog of requests at high concurrencies, so the response time will grow accordingly.

The rate-based tests are less prone to creating a backlog of requests as long as the request rate is lower than the maximum transactions-per-second. For lower request rates, they give a good estimate of the best achievable response time, but they quickly overload the service when the request rate nears or exceeds the maximum sustainable transaction rate.

Concurrency-based tests are often quicker to conduct (no binary chop to find the optimal request rate) and give more stable results. For example, if you want to determine whether a configuration change affects the capacity of the system (by altering the CPU demands of the traffic manager or kernel), it's generally sufficient to find a concurrency value that gives a good, near-maximum result and repeat the tests with the two configurations.

Always check dmesg and other OS logs

Resource starvation (file descriptors, sockets, internal tables) will affect load testing results and may not be immediately obvious. Make a habit of following the system log and dmesg regularly.

Remember to tune and monitor your clients and servers as well as the Traffic Manager; many of the kernel tunables described above are also relevant to the clients and servers.
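As a concrete sketch of the baseline comparison described above (webserver1 and stingray below are placeholder hostnames for a back-end node and the traffic manager's virtual server, and /index.html is just an example path), you might run the same concurrency test directly against a node and then through the traffic manager, and compare the transactions-per-second reported for each:

# zeusbench -t 30 -c 100 -k http://webserver1:80/index.html
# zeusbench -t 30 -c 100 -k http://stingray:80/index.html

Use a representative URL from your own application, and run each test more than once to confirm the results are stable before drawing conclusions.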
Before reading this document, please refer to the documents Basic performance tuning for Stingray Traffic Manager on Linux and Advanced performance tuning for Stingray Traffic Manager on Linux.

This document summarizes some routing-related kernel tunables you may wish to apply to a production Stingray Traffic Manager instance. It only applies to Stingray Traffic Manager software installed on a customer-provided Linux instance; it does not apply to the Stingray Traffic Manager Virtual Appliance or Cloud instances.

Note: see Tech tip: How to apply kernel tunings on Linux.

Using Netfilter conntrack

Note: Only use Netfilter conntrack in a performance-critical environment when absolutely necessary, as it adds significant load.

If you're getting the error message "ip_conntrack: table full, dropping packet" in dmesg or your system logs, you can check the number of entries in the table by reading from /proc/sys/net/ipv4/ip_conntrack_count, and the size of the table using ip_conntrack_max. On most kernels, you can dynamically raise the maximum number of entries:

# echo 131072 > /proc/sys/net/ipv4/ip_conntrack_max

If the 'ip_conntrack_max' file is missing, you most likely have not loaded the ip_conntrack module.

The best way to permanently set the conntrack table sizes is by adding the following options to /etc/modules.conf (or /etc/modprobe.d/<filename>):

options ip_conntrack hashsize=1310719
options nf_conntrack hashsize=1310719

Note that Netfilter conntrack (used by Stingray's IP transparency and other NAT use cases) adds significant load to the kernel and should only be used if necessary. When you enable NAT or other features that use conntrack, the conntrack kernel modules are loaded; they are not always unloaded once these features are disabled. Search for and unload the unused modules ip_conntrack, iptable_filter, ip_tables and anything else with iptables in its name.

Packet forwarding and NAT

Stingray Traffic Manager is typically deployed in a two-armed fashion, spanning a front-end public network and a back-end private network. In this case, IP forwarding should be disabled because the back-end private IP addresses are not routable from the front-end network:

# echo 0 > /proc/sys/net/ipv4/ip_forward

Stingray will not forward any IP packets. Only traffic that is directed to a listening service on the traffic manager will be relayed between networks. Although Stingray should not be regarded as a replacement for a network firewall, this configuration provides a strong security boundary which only allows known, identified traffic to reach the back-end servers from the front-end network, and all traffic is automatically 'scrubbed' at L2-L4.

In some cases, you may wish to allow the back-end servers to initiate connections to external addresses, for example, to call out to a public API or service.
The Stingray host can be configured to forward traffic and NAT outgoing connections to the external IP of the host so that return traffic is routable; the following example assumes that eth0 is the external interface:

# echo 1 > /proc/sys/net/ipv4/ip_forward

Flush existing iptables rules if required:

# iptables --flush

Masquerade traffic from the external interface:

# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Optionally, drop all forwarded connections except those initiated through the external interface:

# iptables -A FORWARD -m state --state NEW -i eth0 -j ACCEPT
# iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# iptables -A FORWARD -j DROP

Unless you are using asymmetric routing, you should disable source routing and enable reverse-path filtering as follows:

# echo 0 > /proc/sys/net/ipv4/conf/default/accept_source_route
# echo 1 > /proc/sys/net/ipv4/conf/default/rp_filter

Duplicate Address Detection

The Duplicate Address Detection (DAD) feature seeks to ensure that two machines don't raise the same address simultaneously. This feature can conflict with the traffic manager's fault tolerance; when an IP is transferred from one traffic manager system to another, timing conditions may trigger DAD on the traffic manager that is raising the address.

# echo 0 > /proc/sys/net/ipv6/conf/default/dad_transmits
# echo 0 > /proc/sys/net/ipv6/conf/all/dad_transmits
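Settings written to /proc/sys with echo, as above, do not survive a reboot. If you want these tunables to persist, one common approach is to add the equivalent keys to /etc/sysctl.conf or a file under /etc/sysctl.d/ and reload them with sysctl. The example below uses the two-armed (no forwarding) values from this article and a file name chosen purely for illustration; adjust both to your own deployment:

# cat >> /etc/sysctl.d/90-stingray-routing.conf <<'EOF'
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv6.conf.default.dad_transmits = 0
net.ipv6.conf.all.dad_transmits = 0
EOF
# sysctl -p /etc/sysctl.d/90-stingray-routing.conf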
Stingray Traffic Manager is supported on any modern Linux operating system, running on standard x86 (32 and 64 bit) platforms. Riverbed develops and routinely tests the Stingray software on various systems, including RedHat Linux, CentOS, Debian, Ubuntu and SuSE, and on a range of hardware and virtualized platforms (VMware, Xen, OracleVM, KVM and HyperV).

Stingray's requirements on the operating system (OS) are light and there are no non-standard dependencies between the software and the OS. We recommend a modern kernel (2.6.39 or later, or 3.0 or later), and very strongly recommend a 64-bit variant of that kernel for performance and scalability (memory size) reasons. You should select the OS based on your preferred supplier and your internal expertise.

Stingray will operate on any industry-standard x86 server hardware that is supported by the operating system vendor. If you intend to use any non-standard hardware (third-party network cards, for example), you should verify that it is adequately supported by your chosen OS vendor. Riverbed does not publish a preferred hardware list, and we test with a range of components on HP, Dell, IBM, Sun/Oracle and other hardware.

Minimum hardware requirements

CPU: CPU-bound operations such as SSL decryption depend on available CPU resource and typically scale linearly with the number of cores available and the clock speed. A minimum of 4 cores is recommended for a moderate workload, depending on configuration. Stingray scales comfortably on 12-16 core systems.

Memory: A minimum of 2 GB of memory is recommended, although Stingray will function comfortably in 512 MB or less with low traffic levels. Additional memory will increase the number of concurrent connections that can be sustained (approximately 50,000 concurrent connections per GB of memory), and additional memory may also be used for content caching and other internal traffic manager caches.

NICs: Stingray is typically deployed in a two-armed fashion (a front-end and a back-end NIC), sometimes with an additional management interface. There are no software limits on the numbers or types of physical interfaces that can be supported. Routing, tagging and interface bonding are performed by operating system configuration and do not affect the operation of the Stingray software.

Security configuration

Stingray has a strong security model. The software is installed and run as the root user, and the processes that handle network traffic explicitly drop privileges and run in a local chroot jail. You may use additional security measures such as SELinux and iptables if desired.

Expected performance

Riverbed publishes performance data that is based on benchmark testing of Stingray software on a range of hardware platforms. This data gives an indication of what is possible, but real-world throughput and requests-per-second figures are very dependent on latency, packet loss and traffic types, and will deviate from what was achieved in ideal conditions. If you have firm performance requirements, you should validate that Stingray can meet them with real-world traffic (just as you would with any ADC device - software, virtual appliance or hardware).

Note that Stingray software is licensed on real performance, not on theoretical performance. You are free to select the hardware that best meets your needs, and upgrade at any point.

Installing the Stingray software on the target host

Stingray software should be installed and run as root. Root privileges allow the software to bind to low ports (e.g. port 80) and to allocate additional operating system resources (e.g. file descriptors). For detailed installation instructions, refer to the Software Getting Started guide.

Installing additional software components on the target host

Stingray software can take advantage of two Riverbed-supplied kernel modules that extend the packet-handling capability of the Linux kernel (see Stingray Kernel Modules for Linux Software):

ztrans

The ztrans kernel module exposes a hook into the IP stack's NAT capability, allowing Stingray to control source NAT for outgoing traffic. This capability is used by Stingray's IP Transparency functionality to force the source IP address of traffic to the back-end servers so that the connection appears to originate from the remote client's IP address (or another non-default address if desired). ztrans depends on standard kernel modules (nat, conntrack, ip_tables) which are loaded automatically if required.

NAT and connection tracking add a significant load to the kernel, because all ingress traffic that is not addressed to a local interface must be matched against the kernel NAT table, and entries in that table must be managed. You can safely compile and register the kernel module; it is only loaded if you enable IP Transparency on one or more pools, but once it is in use the performance hit is incurred against all traffic processed by the kernel.

zcluster

The zcluster kernel module applies a low-level filter to the IP stack. This filter is used by Stingray's multi-hosted IP address capability; a multi-hosted Traffic IP address is raised by several Stingray devices using a common multicast MAC address. Traffic destined for that IP address is multicast to all the Stingray devices, and the zcluster module filters the packets so that each UDP datagram or TCP connection is handled by a unique traffic manager in the cluster.

The zcluster module does not add a significant load to the kernel, but the use of a multicast address means that ingress network traffic is replicated across two or more Stingray devices, increasing the traffic volume that each Stingray must process. In practice, the effect is generally low. The total volume of ingress traffic to each Stingray is capped by the available upstream bandwidth, and in the majority of cases, ingress traffic is significantly lower than the egress traffic (protocols like HTTP are generally very asymmetric). The zcluster kernel module can be safely compiled and registered; it is only loaded and activated if multi-hosted IP addresses are in use.

You can download the source for these kernel modules here: Stingray Kernel Modules for Linux Software

Note that these modules are pre-installed in Stingray Virtual Appliances and they are not available for Solaris. Stingray Traffic Manager does not require any other specialized kernel modules.
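If you want to check which of these modules (and the conntrack modules that ztrans depends on) are currently loaded on a host, a simple check is to look at the kernel's module list, for example:

# lsmod | grep -E 'ztrans|zcluster|conntrack'

No output simply means none of them are loaded at the moment; registered modules are only loaded when the corresponding feature (IP Transparency or multi-hosted Traffic IP addresses) is enabled.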
Stingray can load balance servers in a few different ways; looking at a Pool's Load Balancing configuration page shows the different options. They're all pretty straightforward except for Perceptive; how does that one work?

Perceptive can be thought of as Least Connections skewed to favor the servers with the fastest response time. Perceptive factors both connection counts and response times into the load balancing decision to ensure that traffic is distributed evenly amongst the servers in a farm. It is best understood in the context of a few examples.

Heterogeneous Server Farm

A great scenario in which to use Perceptive is when your server farm is heterogeneous, where some servers are more powerful than others. The challenge is to ensure that the more powerful servers get a greater share of the traffic, but that the weaker servers are not starved.

Perceptive will begin by distributing traffic based on connection counts, like Least Connections. This ensures that the weaker servers are getting traffic and not sitting idle. As traffic increases, the more powerful servers will naturally be able to handle it better, leading to a disparity in response times. This will trigger Perceptive to begin favoring those more powerful servers, as they are responding quicker, by giving them a greater share of the traffic.

Heterogeneous workloads

Another great scenario in which to use Perceptive is when your workload is heterogeneous, where some requests generate a lot more load on your servers than others. As in the Heterogeneous Server Farm case, Perceptive will begin by distributing traffic like Least Connections. When the workload becomes more heterogeneous, some servers will get bogged down with the more CPU-intensive requests and begin to respond more slowly. This will trigger Perceptive to send traffic away from those servers, to the other servers that are not bogged down and are responding quicker.

Ramping up traffic to a new server

The Perceptive algorithm introduces traffic to a new server (or a server that has returned from a failed state) gently. When a new server is added to a pool, the algorithm tries it with a single request, and if it receives a reply, gradually increases the number of requests it sends the new server until it is receiving the same proportion of the load as other equivalent nodes in the pool.

The algorithm used to ramp up the load is adaptive, so it isn't possible to make statements of the sort "the load will be increased from 0 to 100% of its fair share over 2 minutes"; the rate at which the load is increased depends on the responsiveness of the server. So, for example, a new web server serving a small quantity of static content will very quickly be ramped up to full speed, whereas a Java application server that compiles JSPs the first time they are used (and so is slow to respond to begin with) will be ramped up more slowly.

Summary

The Perceptive load balancing algorithm factors both connection counts and response times into a two-step load balancing decision. When there is little disparity in response times, traffic will be distributed like Least Connections. When there is a larger disparity in response times, Perceptive will factor this in and favor the servers that are responding quicker, like Fastest Response Time. Perceptive is great for handling heterogeneity in both the server farm and the workload, ensuring efficient load balancing across your server farm in either case.
Read more

For a more detailed discussion of the load balancing capabilities of Stingray, check out Feature Brief: Load Balancing in Stingray Traffic Manager, and take a look at the video introduction: Video: Introduction to Stingray Load Balancing.
Using Stingray Traffic Manager to load balance a pool of LDAP servers for High Availability is a fairly simple process. Here are the steps:

1. Start up the Manage a new service wizard. This is located in the top right corner of the Stingray Traffic Manager web interface, under the Wizards drop-down.
2. In step 2 of the wizard, set the Protocol to LDAP. The Port will automatically be set to 389, the default LDAP port. Give the service a Name.
3. In step 3, add the hostnames or IP addresses of each of your LDAP servers.

At this point a virtual server and pool will be created. Before it is usable, a few additional changes may be made:

- Change the Load Balancing algorithm of the pool to Least Connections.
- Create a new Session Persistence class of type IP-based persistence (Catalogs -> Persistence) and assign it to the pool.
- Create a Traffic IP Group (Services -> Traffic IP Groups) and assign it to the virtual server. The Traffic IP Group is the IP address that LDAP clients will connect to.

The final step is to install the LDAP Health Monitor. The LDAP Health Monitor is an External Program Monitor that binds to the LDAP server, submits an LDAP query, and checks for a response. Instructions to install the monitor are in the linked page.
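Once the Traffic IP Group is in place, it is worth checking the service end-to-end from a client machine. One simple way (assuming the directory permits anonymous binds; 192.0.2.20 below is a placeholder for your Traffic IP address) is to query the root DSE through the traffic manager with ldapsearch:

# ldapsearch -x -H ldap://192.0.2.20:389 -b "" -s base "(objectclass=*)"

A successful response shows that the virtual server, pool and at least one LDAP node are working together; repeating the query from the same client should hit the same node thanks to the IP-based persistence class (which you can confirm from the LDAP servers' logs).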
It's that time of the year, when the boss reminds you that you've got to change every single 'Copyright 2006' in the footer of your web pages to 'Copyright 2007'... at midnight, New Year's Eve.

Fear not! With a little TrafficScript, you can celebrate with everyone else and lay the guilt on the boss when you return in the New Year. The trick is to add a TrafficScript response rule that rewrites all of your outgoing web pages, but only after midnight on January 1st:

# First, check the date
if( sys.time.year() < 2007 ) break;

# Now, check it's a web page
$contenttype = http.getResponseHeader( "Content-Type" );
if( ! string.startsWith( $contenttype, "text/html" ) ) break;

The difficult bit is working out which bits of content to rewrite. You can't just change every 2006 to the new date, because there may be lots of dates in the web content that you don't want to change.

For my site, the footer of every page says:

Copyright MySite.com 1995-2006

... and it's in the last 250 bytes of the page (at least, before we insert our Tracking user activity with Google Analytics code).

The following code reads the entire response and rewrites it on the fly. Note that 'http.getResponseBody()' deals with all of the awkward HTTP protocol parsing for you, decompressing compressed responses and reassembling chunked transfers for dynamic applications, so you don't need to downgrade the request to HTTP/1.0, disable keepalives, remove Accept-Encoding headers or anything else:

$body = http.getResponseBody();
$start = string.drop( $body, 250 );
$end = string.skip( $body, string.len( $body )-250 );
$end = string.replace( $end, " 1995-2006", " 1995-" . sys.time.year() );
http.setResponseBody( $start . $end );

That's it - Happy New Year!

For reference, here's the entire rule:

# First, check the date
if( sys.time.year() < 2007 ) break;

# Now, check it's a web page
$contenttype = http.getResponseHeader( "Content-Type" );
if( ! string.startsWith( $contenttype, "text/html" ) ) break;

$body = http.getResponseBody();
$start = string.drop( $body, 250 );
$end = string.skip( $body, string.len( $body )-250 );
$end = string.replace( $end, " 1995-2006", " 1995-" . sys.time.year() );
http.setResponseBody( $start . $end );

This article was originally published 22 December 2006.
A document to hold useful regular expressions that I have pulled together for various tasks. RegExr is a great and very handy online tool for checking regular expression matches: RegExr

A regex to validate a password string, to ensure it does not contain dangerous punctuation characters and is no more than 20 characters long. Useful for Stingray Application Firewall form field protection on login pages:

^[^;,{}\[\]\$\%\*\(\)<>:?\\/'"`]{0,20}$

A regex to check that a password has at least one uppercase letter, one lowercase letter, one digit and one punctuation character from the approved list, and is between 8 and 20 characters long:

^(?=.*[A-Z])(?=.*[a-z])(?=.*[\\@^!\.,~-])(?=.*\d)(.{8,20})$

A regex to check that a field contains a valid email address:

^[^@]+@[^@]+\.[^@]+$
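A quick way to sanity-check patterns like these from a shell is to pipe test strings through grep (the -P flag selects Perl-compatible regular expressions, which the lookahead-based password pattern requires); for example, to test the email pattern:

# echo 'user@example.com' | grep -P '^[^@]+@[^@]+\.[^@]+$'
# echo 'not-an-email' | grep -P '^[^@]+@[^@]+\.[^@]+$' || echo 'no match'

The first command prints the address because it matches; the second prints 'no match'. The same approach works for the password patterns, and is a handy way to build up a set of known-good and known-bad test inputs before deploying a rule.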
Update: 2013-06-18 - I had to do 50 conversions today, so I have attached a shell script to automate this process.

==

Assumptions:

- You have a pkcs12 bundle with a private key and certificate in it - in this example we will use a file called www.website.com.p12. I use SimpleAuthority as it is cross platform and the free edition lets you create up to 5 keypairs, which is plenty for the lab...
- You don't have a password on the private key (passwords on machine-loaded keys are a waste of time IMHO).
- You have a Linux / Mac OS X / Unix system with openssl installed (Mac OS X does by default, as do most Linux installs...).

3 commands you need:

First, we take the p12 and export just the private key (-nocerts), in PEM format with no encryption (-nodes):

openssl pkcs12 -in www.website.com.p12 -nocerts -out www.website.com.key.pem -nodes

Second, we take the p12 and export just the certificate (-nokeys), again with no encryption (-nodes):

openssl pkcs12 -in www.website.com.p12 -nokeys -out www.website.com.cert.pem -nodes

Third, we convert the private key into the format Stingray wants it in (-text):

openssl rsa -in www.website.com.key.pem -out www.website.com.key.txt.pem -text

You are left with a list of files, and only two of them are needed to import into the Stingray:

- www.website.com.key.txt.pem is the private key you need
- www.website.com.cert.pem is the certificate you need

These can then be imported into the STM under Catalogues > SSL > Server Certs.

Hope this helps...

1 ~ $ ./p12_convert.sh -h
./p12_convert.sh written by Aidan Clarke <aidan.clarke at riverbed.com>
Copyright Riverbed Technologies 2013

usage: ./p12_convert.sh -i inputfile -o outputfile

This script converts a p12 bundle to PEM formatted key and certificate ready for import into Stingray Traffic Manager

OPTIONS:
   -h      Show this message
   -i      Input file name
   -o      Output file name stub
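Before importing, it is worth checking that the key and certificate you exported actually belong together. One simple way (using the file names from the example above) is to compare the RSA modulus of each; the two digests should be identical:

openssl rsa -in www.website.com.key.txt.pem -noout -modulus | openssl md5
openssl x509 -in www.website.com.cert.pem -noout -modulus | openssl md5

If the digests differ, the certificate was not issued for that private key and the pair should not be imported.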