Pulse Secure vADC

Session persistence ties the requests from one client (i.e., in one 'session') to the same back-end server node. It defeats the intelligence of the load-balancing algorithm, which tries to select the fastest, most available node for each request.

In a web session, it is often only necessary to tie some requests to the same server node. For example, you may want to tie requests that begin "/servlet" to a server node, but leave Stingray free to load-balance all other requests (images, static content) as appropriate.

Configure a session persistence class with the desired configuration for your /servlet requests, then use the following request rule:

if( string.startswith( http.getPath(), "/servlet" ) ) {
   connection.setPersistenceClass( "servlet persistence" );
}

Read more: Collected Tech Tips: TrafficScript examples
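The same approach works with other request attributes. As a sketch (assuming the same 'servlet persistence' class exists, and that your application issues a JSESSIONID cookie), you could instead pin only requests that already carry a session cookie:

```
# Sketch: only requests presenting a JSESSIONID cookie are pinned to a
# node; everything else is load-balanced normally.
if( http.getCookie( "JSESSIONID" ) != "" ) {
   connection.setPersistenceClass( "servlet persistence" );
}
```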
Looking for Installation and User Guides for Pulse vADC? User documentation is no longer included in the software download package for Pulse vTM; the documentation can now be found on the Pulse Techpubs pages.
TrafficScript is the programming language that is built into the Traffic Manager. With TrafficScript, you can create traffic management 'rules' to control the behaviour of Traffic Manager in a wide variety of ways, inspecting, modifying and routing any type of TCP or UDP traffic.

The language is a simple, procedural one - the style and syntax will be familiar to anyone who has used Perl, PHP, C, BASIC, etc. Its strength comes from its integration with Traffic Manager, allowing you to perform complex traffic management tasks simply, such as controlling traffic flow, reading and parsing HTTP requests and responses, and managing XML data.

This article contains a selection of simple technical tips to illustrate how to perform common tasks using TrafficScript.

TrafficScript Syntax

HowTo: TrafficScript Syntax
HowTo: TrafficScript variables and types
HowTo: if-then-else conditions in TrafficScript
HowTo: loops in TrafficScript
HowTo: TrafficScript rules processing and flow control
HowTo: TrafficScript String Manipulation
HowTo: TrafficScript Libraries and Subroutines
HowTo: TrafficScript Arrays and Hashes

HTTP operations

HowTo: Techniques to read HTTP headers
HowTo: Set an HTTP Response Header
HowTo: Inspect HTTP Request Parameters
HowTo: Rewriting HTTP Requests
HowTo: Rewriting HTTP Responses
HowTo: Redirect HTTP clients
HowTo: Inspect and log HTTP POST data
HowTo: Handle cookies in TrafficScript

XML processing

HowTo: Inspect XML and route requests
Managing XML SOAP data with TrafficScript

General examples

HowTo: Controlling Session Persistence
HowTo: Control Bandwidth Management
HowTo: Monitor the response time of slow services
HowTo: Query an external datasource using HTTP
HowTo: Techniques for inspecting binary protocols
HowTo: Spoof Source IP Addresses with IP Transparency
HowTo: Use low-bandwidth content during periods of high load
HowTo: Log slow connections in Stingray Traffic Manager
HowTo: Inspect and synchronize SMTP
HowTo: Write Health Monitors in TrafficScript
HowTo: Delete Session Persistence records

More information

For a more rigorous introduction to the TrafficScript language, please refer to the TrafficScript guide in the Product Documentation.
This document describes some operating system tunables you may wish to apply to a production Traffic Manager instance. Note that the kernel tunables only apply to Traffic Manager software installed on a customer-provided Linux instance; they do not apply to the Traffic Manager Virtual Appliance or Cloud instances.

Consider the tuning techniques in this document when:

- Running Traffic Manager on a severely-constrained hardware platform, or where Traffic Manager should not seek to use all available resources;
- Running in a performance-critical environment;
- The Traffic Manager host appears to be overloaded (excessive CPU or memory usage);
- Running with very specific traffic types, for example, large video downloads or heavy use of UDP;
- Any time you see unexpected errors in the Traffic Manager event log or the operating system syslog that relate to resource starvation, dropped connections or performance problems.

For more information on performance tuning, start with the Tuning Pulse Virtual Traffic Manager article.

Basic Kernel and Operating System tuning

Most modern Linux distributions have sufficiently large defaults, and many tables are autosized and growable, so it is often not necessary to change tunings. The values below are recommended for typical deployments on a medium-to-large server (8 cores, 4 GB RAM).

Note: Tech tip: How to apply kernel tunings on Linux

File descriptors

# echo 2097152 > /proc/sys/fs/file-max

Set a minimum of one million file descriptors unless resources are seriously constrained. See also the setting maxfds below.

Ephemeral port range

# echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range
# echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout

Each TCP and UDP connection from Traffic Manager to a back-end server consumes an ephemeral port, and that port is retained for the 'fin_timeout' period once the connection is closed.
If back-end connections are frequently created and closed, it's possible to exhaust the supply of ephemeral ports. Increase the port range to the maximum (as above) and reduce the fin_timeout to 30 seconds if necessary.

SYN Cookies

# echo 1 > /proc/sys/net/ipv4/tcp_syncookies

SYN cookies should be enabled on a production system. The Linux kernel will process connections normally until the backlog grows, at which point it will use SYN cookies rather than storing local state. SYN cookies are an effective protection against SYN floods, one of the most common DoS attacks against a server. If you are seeking a stable test configuration as a basis for other tuning, you should disable SYN cookies. Increase the size of net/ipv4/tcp_max_syn_backlog if you encounter dropped connection attempts.

Request backlog

# echo 1024 > /proc/sys/net/core/somaxconn

The request backlog contains TCP connections that are established (the 3-way handshake is complete) but have not been accepted by the listening socket (on Traffic Manager). See also the tunable parameter 'listen_queue_size'. Restart the Traffic Manager software after changing this value.

If the listen queue fills up because the Traffic Manager does not accept connections sufficiently quickly, the kernel will quietly ignore additional connection attempts. Clients will then back off (they assume packet loss has occurred) before retrying the connection.

Advanced kernel and operating system tuning

In general, it's rarely necessary to further tune Linux kernel internals, because the default values that are selected on a normal-to-high-memory system are sufficient for the vast majority of deployments, and most kernel tables will automatically resize if necessary. Any problems will be reported in the kernel logs; dmesg is the quickest and most reliable way to check the logs on a live system.
Packet queues

In 10 GbE environments, you should consider increasing the size of the input queue:

# echo 5000 > /proc/sys/net/core/netdev_max_backlog

TCP TIME_WAIT tuning

TCP connections reside in the TIME_WAIT state in the kernel once they are closed. TIME_WAIT allows the server to time-out connections it has closed in a clean fashion.

If you see the error "TCP: time wait bucket table overflow", consider increasing the size of the table used to store TIME_WAIT connections:

# echo 7200000 > /proc/sys/net/ipv4/tcp_max_tw_buckets

TCP slow start and window sizes

In earlier Linux kernels (pre-2.6.39), the initial TCP window size was very small. The impact of a small initial window size is that peers communicating over a high-latency network will take a long time (several seconds or more) to scale the window to utilize the full bandwidth available - often the connection will complete (albeit slowly) before an efficient window size has been negotiated.

The 2.6.39 kernel increases the default initial window size from 2 to 10. If necessary, you can tune it manually:

# ip route change default via 192.168.1.1 dev eth0 proto static initcwnd 10

If a TCP connection stalls, even briefly, the kernel may reduce the TCP window size significantly in an attempt to respond to congestion. Many commentators have suggested that this behavior is not necessary, and this "slow start" behavior should be disabled:

# echo 0 > /proc/sys/net/ipv4/tcp_slow_start_after_idle

TCP options for Spirent load generators

If you are using older Spirent test kit, you may need to set the following tunables to work around optimizations in their TCP stack:

# echo 0 > /proc/sys/net/ipv4/tcp_timestamps
# echo 0 > /proc/sys/net/ipv4/tcp_window_scaling

[Note: See attachments for the above changes in an easy-to-run shell script]

irqbalance

Interrupts (IRQs) are wake-up calls to the CPU when new network traffic arrives. The CPU is interrupted and diverted to handle the new network data.
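The echo commands above write to /proc and take effect immediately, but they are lost on reboot. A minimal sketch of a persistent approach (not the actual attachment; the fragment file name is an assumption) collects the same values into a sysctl.d fragment:

```shell
#!/bin/sh
# Collect the tunables from this article into one sysctl.d fragment.
# Values are the article's examples -- review them for your own site.
SYSCTL_CONF='fs.file-max = 2097152
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_syncookies = 1
net.core.somaxconn = 1024
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_max_tw_buckets = 7200000
net.ipv4.tcp_slow_start_after_idle = 0'

# Print the fragment; to install it, run as root:
#   printf '%s\n' "$SYSCTL_CONF" > /etc/sysctl.d/90-vtm-tuning.conf && sysctl --system
printf '%s\n' "$SYSCTL_CONF"
```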
Most NIC drivers will buffer interrupts and distribute them as efficiently as possible. When running on a machine with multiple CPUs/cores, interrupts should be distributed across cores roughly evenly; otherwise, one CPU can become the bottleneck under high network traffic. The general-purpose approach in Linux is to deploy irqbalance, which is a standard package on most major Linux distributions. Under extremely high interrupt load, you may see one or more ksoftirqd processes exhibiting high CPU usage. In this case, you should configure your network driver to use multiple interrupt queues (if supported) and then manually map those queues to one or more CPUs using SMP affinity.

Receive-Side Scaling (RSS)

Modern network cards can maintain multiple receive queues. Packets within a particular TCP connection can be pinned to a single receive queue, and each queue has its own interrupt. You can map interrupts to CPU cores to control which core each packet is delivered to. This affinity delivers better performance by distributing traffic evenly across cores and by improving connection locality (a TCP connection is processed by a single core, improving CPU affinity).

For optimal performance, you should:

- Allow the Traffic Manager software to auto-size itself to run one process per CPU core (two when using hyperthreading), i.e. do not modify the num_children configurable.
- Configure the network driver to create as many queues as you have cores, and verify the IRQs that the driver will raise per queue by checking /proc/interrupts.
- Map each queue interrupt to one core using /proc/irq/<irq-number>/smp_affinity.

You should also refer to the technical documentation provided by your network card vendor.

[Updates by Aidan Clarke and Rick Henderson]
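As an illustration of the last step, the following dry-run sketch prints the commands that would pin one queue interrupt per core. The IRQ numbers below are hypothetical; read the real ones for your driver's queues from /proc/interrupts before applying anything.

```shell
#!/bin/sh
# Dry-run sketch: emit the commands that would pin one NIC queue IRQ
# per CPU core. IRQ numbers are hypothetical examples.
IRQS="101 102 103 104"
core=0
CMDS=""
for irq in $IRQS; do
  mask=$(printf '%x' $((1 << core)))   # one-hot CPU bitmask, in hex
  CMDS="$CMDS
echo $mask > /proc/irq/$irq/smp_affinity"
  core=$((core + 1))
done
# Review the output, then run the commands as root to apply them.
printf '%s\n' "$CMDS"
```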
This document describes performance-related tuning you may wish to apply to a production Stingray Traffic Manager software install, virtual appliance or cloud instance. For related documents (e.g. operating system tuning), start with the Tuning Pulse Virtual Traffic Manager article.

Tuning Pulse Traffic Manager

Traffic Manager will auto-size the majority of internal tables based on available memory, CPU cores and operating system configuration. The default behavior is appropriate for typical deployments and it is rarely necessary to tune it. Several changes can be made to the default configuration to improve peak capacity if necessary. Collectively, they may give a 5-20% capacity increase, depending on the specific test.

Basic performance tuning

Global settings

Global settings are defined in the 'System' part of the configuration.

Recent Connections table: Set recent_conns to 0 to prevent Stingray from archiving recent connection data for debugging purposes.

Verbose logging: Disable flipper!verbose, webcache!verbose and gslb!verbose to disable verbose logging.

Virtual Server settings

Most Virtual Server settings relating to performance tuning are to be found in the Connection Management section of the configuration.

X-Cluster-Client-IP: For HTTP traffic, Traffic Manager adds an 'X-Cluster-Client-IP' header containing the remote client's IP address by default. You should disable this feature if your back-end applications do not inspect this header.

HTTP Keepalives: Enable support for keepalives; this will reduce the rate at which TCP connections must be established and torn down. Not only do TCP handshakes incur latency and additional network traffic, but closed TCP connections consume operating system resources until TCP timeouts are hit.

UDP Port SMP: Set this to 'yes' if you are managing simple UDP protocols such as DNS.
Otherwise, all UDP traffic is handled by a single Traffic Manager process (so that connections can be effectively tracked).

Pool settings

HTTP Keepalives: Enable support for keepalives (Pool: Connection Management; see the Virtual Server note above). This will reduce the load on your back-end servers and on the Traffic Manager system.

Session Persistence: Session persistence overrides load balancing and can prevent the traffic manager from selecting the optimal node and applying optimizations such as LARD. Use session persistence selectively, and apply it only to requests that must be pinned to a node.

Advanced Performance Tuning

General Global Settings

Maximum File Descriptors (maxfds): File descriptors are the basic operating system resource that Traffic Manager consumes. Typically, Traffic Manager will require two file descriptors per active connection (client and server side) and one file descriptor for each idle keepalive connection and for each client connection that is pending or completing. Traffic Manager will attempt to bypass any soft per-process limits (e.g. those defined by ulimit) and gain the maximum number of file descriptors (per child process). There is no performance impact, and minimal memory impact, in doing this. You can tune the maximum number of file descriptors in the OS using fs.file-max; the default value of 1048576 should be sufficient. Traffic Manager will warn if it is running out of file descriptors, and will proactively close idle keepalives and slow down the rate at which new connections are accepted.

Listen queue size (listen_queue_size): This should be left at the default system value, and tuned using somaxconn (see above).

Number of child processes (num_children): This is auto-sized to the number of cores in the host system.
You can force the number of child processes to a particular number (for example, when running Traffic Manager on a shared server) using the tunable 'num_children', which should be added manually to the global.cfg configuration file.

Tuning accept behavior

The default accept behavior is tuned so that child processes greedily accept connections as quickly as possible. With very large numbers of child processes, if you see uneven CPU usage, you may need to tune the multiple_accept, max_accepting and accepting_delay values in the Global Settings to limit the rate at which child processes take work.

Tuning network read/write behavior

The Global Settings values so_rbuff_size and so_wbuff_size are used to tune the size of the operating system (kernel-space) read and write buffers, as restricted by the operating system limits /proc/sys/net/core/rmem_max and /proc/sys/net/core/wmem_max.

These buffer sizes determine how much network data the kernel will buffer before refusing additional data (from the client in the case of the read buffer, and from the application in the case of the write buffer). If these values are increased, kernel memory usage per socket will increase. In normal operation, Traffic Manager will move data from the kernel buffers to its user-space buffers sufficiently quickly that the kernel buffers do not fill up. You may want to increase these buffer sizes when running under high connection load on a fast network.

The Virtual Server settings max_client_buffer and max_server_buffer define the size of the Traffic Manager (user-space) read and write buffers, used when Traffic Manager is streaming data between the client and the server. The buffers are temporary stores for the data read from the network buffers. Larger values will increase memory usage per connection, to the benefit of more efficient flow control; this will improve performance for clients or servers accessing over high-latency networks.
The value chunk_size controls how much data Traffic Manager reads and writes from the network buffers when processing traffic, and internal application buffers are allocated in units of chunk_size. To limit fragmentation and assist scalability, the default value is quite low (4096 bytes); if you have plenty of free memory, consider setting it to 8192 or 16384. Doing so will increase Traffic Manager's memory footprint but may reduce the number of system calls, slightly reducing CPU usage (system time).

You may wish to tune the buffer size parameters if you are handling very large file transfers or video downloads over congested networks, and the chunk_size parameter if you have large amounts of free memory that is not reserved for caching and other purposes.

Tuning SSL performance

Some modern ciphers such as TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 are faster than older ciphers in Traffic Manager. SSL uses a private/public key pair during the initial client handshake. 1024-bit keys are approximately 5 times faster than 2048-bit keys (due to the computational complexity of the key operation), but note that 1024-bit RSA keys are now widely regarded as too weak for production use and are no longer issued by public certificate authorities.

SSL sessions are cached locally, and shared between all traffic manager child processes using a fixed-size cache (allocated at start-up). On a busy site, you should check the size, age and miss-rate of the SSL Session ID cache (using the Activity monitor) and increase the size of the cache (ssl!cache!size) if there is a significant number of cache misses.
Tuning from-client connections

Timeouts are the key tool for controlling client-initiated connections to the traffic manager:

- connect_timeout discards newly-established connections if no data is received within the timeout;
- keepalive_timeout holds client-side keepalive connections open for a short time before discarding them if they are not reused;
- timeout is a general-purpose timeout that discards an active connection if no data is received within the timeout period.

If you suspect that connections are dropped prematurely due to timeouts, you can temporarily enable the Virtual Server setting log!client_connection_failures to record the details of dropped client connections.

Tuning to-server connections

When processing HTTP traffic, Traffic Manager uses a pool of keepalive connections to reuse TCP connections and reduce the rate at which TCP connections must be established and torn down. If you use a webserver with a fixed concurrency limit (for example, Apache with its MaxClients and ServerLimit settings), then you should tune the connection limits carefully to avoid overloading the webserver and creating TCP connections that it cannot service.

Pool: max_connections_pernode: This setting limits the total number of TCP connections that this pool will make to each node; keepalive connections are included in that count. Traffic Manager will queue excess requests and schedule them to the next available server. The current count of established connections to a node is shared by all Traffic Manager processes.

Pool: max_idle_connections_pernode: When an HTTP request to a node completes, Traffic Manager will generally hold the TCP connection open and reuse it for a subsequent HTTP request (as a keepalive connection), avoiding the overhead of tearing down and setting up new TCP connections. In general, you should set this to the same value as max_connections_pernode, ensuring that neither setting exceeds the concurrency limit of the webserver.
Global Setting: max_idle_connections: Use this setting to fine-tune the total number of keepalive connections Traffic Manager will maintain to each node. The idle_connection_timeout setting controls how quickly keepalive connections are closed. You should only consider limiting the two max_idle_connections settings if you have a very large number of webservers that can sustain very high degrees of concurrency, and you find that the traffic manager routinely maintains too many idle keepalive connections as a result of very uneven traffic.

When running with very slow servers, or when connections to servers have high latency or packet loss, it may be necessary to increase the Pool timeouts:

- max_connect_time discards connections that fail to connect within the timeout period; the requests will be retried against a different server node;
- max_reply_time discards connections that fail to respond to the request within the desired timeout; requests will be retried against a different node if they are idempotent.

When streaming data between server and client, the general-purpose Virtual Server 'timeout' setting will apply. If the client connection times out or is closed for any other reason, the server connection is immediately discarded.

If you suspect that connections are dropped prematurely due to timeouts, you can enable the Virtual Server setting log!server_connection_failures to record the details of dropped server connections.

Nagle's Algorithm

You should disable Nagle's Algorithm for traffic to the back-end servers, unless you are operating in an environment where the servers have been explicitly configured not to use delayed acknowledgements. Set the node_so_nagle setting to 'off' in the Pool Connection Management configuration. If you notice significant delays when communicating with the back-end servers, Nagle's Algorithm is a likely candidate.
Other settings

Ensure that you disable or de-configure any Traffic Manager features that you do not need, such as health monitors, session persistence, TrafficScript rules, logging and activity monitors. Disable debug logging in service protection classes, autoscaling settings, health monitors, actions (used by the eventing system) and GLB services.

For more information, start with the Tuning Pulse Virtual Traffic Manager article.
In many cases, it is desirable to upgrade a virtual appliance by deploying a virtual appliance at the newer version and importing the old configuration. For example, the size of the Traffic Manager disk image was increased in version 9.7, and deploying a new virtual appliance lets a customer take advantage of this larger disk. This article documents the procedure for deploying a new virtual appliance with the old configuration in common scenarios.

These instructions describe how to upgrade and reinstall Traffic Manager appliance instances (either in a cluster or standalone appliances). For instructions on upgrading on other platforms, please refer to Upgrading Traffic Manager.

Upgrading a standalone Virtual Appliance

This process will replace a standalone virtual appliance with another virtual appliance with the same configuration (including migrating network configuration). Note that the Traffic Manager Cloud Getting Started Guide contains instructions for upgrading a standalone EC2 instance from version 9.7 onwards; if upgrading from a version prior to 9.7 and using the Web Application Firewall, those instructions must be followed to correctly back up and restore any firewall configuration.

- Make a backup of the traffic manager configuration (see section "System > Backups" in the Traffic Manager User Manual), and export it.
- If you are upgrading from a version prior to 9.7 and are using the Web Application Firewall, back up the Web Application Firewall configuration:
  - Log on to a command line
  - Run /opt/zeus/stop-zeus
  - Copy /opt/zeus/zeusafm/current/var/lib/config.db off the appliance.
- Shut down the original appliance.
- Deploy a new appliance with the same network interfaces as the original.
- If you backed up the application firewall configuration earlier, restore it onto the new appliance here, before you restore the traffic manager configuration:
  - Copy the config.db file to /opt/zeus/stingrayafm/current/var/lib/config.db (overwriting the original)
  - Check that the owner of the config.db file is root, and the mode is 0644.
- Import and restore the traffic manager configuration via the UI.
- If you have application firewall errors, use the Diagnose page to automatically fix any configuration errors.
- Reset the Traffic Manager software.

Upgrading a cluster of Virtual Appliances (except Amazon EC2)

This process will replace the appliances in the cluster, one at a time, maintaining the same IP addresses. As the cluster will be reduced by one at points in the upgrade process, you should ensure that this is carried out at a time when the cluster is otherwise healthy, and that of the n appliances in the cluster, the load can be handled by (n-1) appliances.

1. Before beginning the process, ensure that any cluster errors have been resolved.
2. Nominate the appliance which will be the last to be upgraded (call it the final appliance). When any of the other machines needs to be removed from the cluster, it should be done using the UI on this appliance, and when a hostname and port are required to join the cluster, this appliance's hostname should be used.
3. If you are using the Web Application Firewall, first ensure that vWAF on the final appliance in the cluster is upgraded to the most recent version, using the vWAF updater.
4. Choose an appliance to be upgraded, and remove the machine from the cluster:
   - If it is not the final appliance (nominated in step 2), this should be done via the UI on the final appliance;
   - If it is the final appliance, the UI on any other machine may be used.
5. Make a backup of the traffic manager configuration (System > Backups) on the appliance being upgraded, and export the backup.
This backup only contains the machine-specific information for that appliance (networking configuration etc.).

6. Shut down the appliance, and deploy a new appliance at the new version. When deploying, it must be given the identical hostname to the machine it is replacing.
7. Log on to the admin UI of the new appliance, and import and restore the backup from step 5. If you are using the Web Application Firewall, accessing the Application Firewall tab in the UI will fail, and there will be an error on the Diagnose page and an 'Update Configuration' button. Click the Update Configuration button once, then wait for the error to clear. The configuration is now correct, but the admin server still needs to be restarted to pick up the configuration:

# $ZEUSHOME/admin/rc restart

8. Now upgrade the application firewall on the new appliance to the latest version.
9. Join the appliance into the cluster. For all appliances except the final appliance, you must not select any of the auto-detected existing clusters; instead, manually specify the hostname and port of the final appliance.

If you are using the Web Application Firewall, there may be an issue where the configuration on the new machine has not synced the vWAF configuration from the old machine, and clicking the 'Update Application Firewall Cluster Status' button on the Diagnose page does not fix the problem. If this happens, first get the clusterPwd from the final appliance:

# grep clusterPwd /opt/zeus/zxtm/conf/zeusafm.conf
clusterPwd = <your cluster pwd>

On the new appliance, edit /opt/zeus/zxtm/conf/zeusafm.conf (with e.g. nano or vi), and replace the clusterPwd with the final appliance's clusterPwd. The moment that file is saved, vWAF will be restarted, and the configuration will be synced to the new machine correctly.

When you are upgrading the final appliance, you should select the auto-detected existing cluster entry, which should now list all the other cluster peers.
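The clusterPwd substitution described above can be scripted. Here is a dry-run sketch that exercises the edit on a throwaway copy of the file (the password value is a placeholder; on a real appliance the file is /opt/zeus/zxtm/conf/zeusafm.conf, and vWAF restarts when it is saved):

```shell
#!/bin/sh
# Dry run on a temporary copy; review before pointing at a live
# appliance. GNU sed is assumed for 'sed -i'.
CONF=$(mktemp)
printf 'clusterPwd = oldvalue\n' > "$CONF"   # stand-in for zeusafm.conf
FINAL_PWD='examplepwd'                       # value read from the final appliance with grep
sed -i "s/^clusterPwd = .*/clusterPwd = $FINAL_PWD/" "$CONF"
RESULT=$(cat "$CONF")
rm -f "$CONF"
printf '%s\n' "$RESULT"
```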
Once a cluster contains multiple versions, configuration changes must not be made until the upgrade has been completed, and 'Cluster conflict' errors are expected until the end of the process. Repeat steps 4-9 until all appliances have been upgraded.

Upgrading a cluster of STM EC2 appliances

Because EC2 licenses are not tied to the IP address, it is recommended that new EC2 instances are deployed into a cluster before removing old instances. This ensures that the capacity of the cluster is not reduced during the upgrade process. This process is documented in the "Creating a Traffic Manager Instance on Amazon EC2" chapter in the Traffic Manager Cloud Getting Started Guide. The clusterPwd may also need to be fixed as above.
A user commented that Stingray Traffic Manager sometimes adds a cookie named 'X-Mapping-SOMERANDOMDATA' to an HTTP response, and wondered what the purpose of this cookie was, and whether it constituted a privacy or security risk.

Transparent Session Affinity

The cookie is used by Stingray's 'Transparent Session Affinity' persistence class.

Transparent session affinity inserts cookies into the HTTP response to track sessions. This is generally the most appropriate method for HTTP and SSL-decrypted HTTPS traffic, because it does not require the nodes to set any cookies in their responses.

The persistence class adds a cookie to the HTTP response that identifies the name of the session persistence class and the chosen back-end node:

Set-Cookie: X-Mapping-hglpomgk=4A3A3083379D97CE4177670FEED6E830; path=/

When subsequent requests in that session are processed and the same session persistence class is invoked, it inspects the requests to determine whether the named cookie exists. If it does, the persistence class inspects the value of the cookie to determine the node to use.

The unique identifier in the cookie name is a hashed version of the name of the session persistence class (there may be multiple independent session persistence rules in use). When the traffic manager processes a request, it can then identify the correct cookie for the active session persistence class.

The value of the cookie is a hashed version of the name of the selected node in the cluster. It is non-reversible by an external party. The value identifies which server the session should be persisted to. There is no personally-identifiable information in the cookie. Two independent users who access the service, are managed by the same session persistence class and routed to the same back-end server will be assigned the same cookie name and value.
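As an illustration of the scheme (a model only: the traffic manager's actual hash function is not documented here, so md5sum stands in for "a non-reversible hash"), a cookie of this shape can be derived like so:

```shell
#!/bin/sh
# Illustrative model of an X-Mapping-style cookie. The class and node
# names are examples; md5sum is an assumption, not the real algorithm.
CLASS='servlet persistence'
NODE='web1.example.com:80'
# Cookie name embeds a hash of the persistence class name...
class_hash=$(printf '%s' "$CLASS" | md5sum | cut -c1-8)
# ...and the value is a non-reversible hash of the chosen node.
node_hash=$(printf '%s' "$NODE" | md5sum | cut -c1-32 | tr 'a-f' 'A-F')
COOKIE="X-Mapping-$class_hash=$node_hash; path=/"
printf '%s\n' "$COOKIE"
```

The model shows the two properties discussed above: the same class and node always produce the same cookie, and the value does not reveal the node name to an external party.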
This guide will walk you through the setup to deploy Global Server Load Balancing on Traffic Manager using the Global Load Balancing feature. In this guide, we will be using the "company.com" domain.

DNS Primer and Concept of Operations

This document is designed to be used in conjunction with the Traffic Manager User Guide. Specifically, this guide assumes that the reader:

- is familiar with load balancing concepts;
- has configured local load balancing for the resources requiring Global Load Balancing on their existing Traffic Managers; and
- has read the section "Global Load Balancing" of the Traffic Manager User Guide, in particular the "DNS Primer" and "About Global Server Load Balancing" sections.

Prerequisites

- You have a DNS sub-domain to use for GLB. In this example we will be using "glb.company.com" - a sub-domain of "company.com";
- You have access to create A records in the glb.company.com (or equivalent) domain; and
- You have access to create CNAME records in the company.com (or equivalent) domain.

Design

Our goal in this exercise will be to configure GLB to send users to their geographically closest DC, as pictured in the following diagram:

Design Goal

We will be using an STM setup that looks like this to achieve this goal:

Detailed STM Design

Traffic Manager will present a DNS virtual server in each data center. This DNS virtual server will take DNS requests for resources in the "glb.company.com" domain from external DNS servers, will forward the requests to an internal DNS server, and will intelligently filter the records based on the GLB load balancing logic.

In this design, we will use the zone "glb.company.com". The zone "glb.company.com" will have NS records set to the two Traffic IP addresses presented by vTM for DNS load balancing in each data centre (172.16.10.101 and 172.16.20.101). This setup is done in the "company.com" domain zone setup. You will need to set this up yourself, or get your DNS Administrator to do it.
DNS Zone File Overview

On the DNS server that hosts the "glb.company.com" zone file, we will create two Address (A) records - one for each Web virtual server that the vTMs are hosting in their respective data centres.

Step 0: DNS Zone file set up

Before we can set up GLB on Traffic Manager, we need to set up our DNS zone files so that we can intelligently filter the results.

Create the GLB zone: In our example, we will be using the zone "glb.company.com". We will configure the "glb.company.com" zone to have two NameServer (NS) records. Each NS record will be pointed at the Traffic IP address of the DNS Virtual Server as it is configured on vTM. See the Design section above for details of the IP addresses used in this sample setup.

You will need an A record for each data centre resource you want Traffic Manager to GLB. In this example, we will have two A records for the DNS host "www.glb.company.com". On ISC BIND name servers, the zone file will look something like this:

Sample Zone File

;
; BIND data file for glb.company.com
;
$TTL 604800
@       IN      SOA     stm1.glb.company.com. info.glb.company.com. (
                        201303211322    ; Serial
                        7200            ; Refresh
                        120             ; Retry
                        2419200         ; Expire
                        604800 )        ; Default TTL
@       IN      NS      stm1.glb.company.com.
@       IN      NS      stm2.glb.company.com.
;
stm1    IN      A       172.16.10.101
stm2    IN      A       172.16.20.101
;
www     IN      A       172.16.10.100
www     IN      A       172.16.20.100

Pre-Deployment testing:

Using DNS tools such as DiG or nslookup (do not use ping as a DNS testing tool), make sure that you can query your "glb.company.com" zone and get both A records returned. This means the DNS zone file is ready for you to apply your GLB logic. In the following example, we are using the DiG tool on a Linux client to *directly* query the name servers that the vTM is load balancing, to check that we are being served back two A records for "www.glb.company.com".
We have added comments to the output below, marked with <--(i)--|:

Test Output from DiG

user@localhost$ dig @172.16.10.40 www.glb.company.com A

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.40 www.glb.company.com A
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19013
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com.            IN    A

;; ANSWER SECTION:
www.glb.company.com.   604800   IN    A    172.16.20.100   <--(i)--| HERE ARE THE A RECORDS WE ARE TESTING
www.glb.company.com.   604800   IN    A    172.16.10.100   <--(i)--|

;; AUTHORITY SECTION:
glb.company.com.       604800   IN    NS   stm1.glb.company.com.
glb.company.com.       604800   IN    NS   stm2.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com.  604800   IN    A    172.16.10.101
stm2.glb.company.com.  604800   IN    A    172.16.20.101

;; Query time: 0 msec
;; SERVER: 172.16.10.40#53(172.16.10.40)
;; WHEN: Wed Mar 20 16:39:52 2013
;; MSG SIZE rcvd: 139

Step 1: GLB Locations

GLB uses locations to help STM understand where things are located. First we need to create a GLB location for every data centre you need to provide GLB between. In our example, we will be using two locations, Data Centre 1 and Data Centre 2, named DataCentre-1 and DataCentre-2 respectively:

Creating GLB Locations

Navigate to "Catalogs > Locations > GLB Locations > Create new Location"
Create a GLB location called DataCentre-1
Select the appropriate Geographic Location from the options provided
Click Update Location
Repeat this process for "DataCentre-2" and any other locations you need to set up.

Step 2: Set up GLB service

First we create a GLB service so that vTM knows how to distribute traffic using the GLB system:

Create GLB Service

Navigate to "Catalogs > GLB Services > Create a new GLB service"
Create your GLB Service.
In this example we will be creating a GLB service with the following settings; you should use settings to match your environment:

Service Name: GLB_glb.company.com
Domains: *.glb.company.com
Add Locations: Select "DataCentre-1" and "DataCentre-2"

Then we enable the GLB service:

Enable the GLB Service

Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Basic Settings"
Set "Enabled" to "Yes"

Next we tell the GLB service which resources are in which location:

Locations and Monitoring

Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Locations and Monitoring"
Add the IP addresses of the resources you will be doing GSLB between into the relevant location. In my example I have allocated them as follows:
DataCentre-1: 172.16.10.100
DataCentre-2: 172.16.20.100
Don't worry about the "Monitors" section just yet; we will come back to it.

Next we will configure the GLB load balancing mechanism:

Load Balancing Method

Navigate to "GLB Services > GLB_glb.company.com > Load Balancing"

By default the load balancing algorithm will be set to "Adaptive" with a "Geo Effect" of 50%. For this set up we will set the algorithm to "Round Robin" while we are testing.

Set GLB Load Balancing Algorithm

Set the "load balancing algorithm" to "Round Robin"

The last step is to bind the GLB service "GLB_glb.company.com" to our DNS virtual server:

Binding GLB Service Profile

Navigate to "Services > Virtual Servers > vs_GLB_DNS > GLB Services > Add new GLB Service"
Select "GLB_glb.company.com" from the list and click "Add Service"
Step 3: Testing Round Robin

Now that we have GLB applied to the "glb.company.com" zone, we can test GLB in action. Using DNS tools such as DiG or nslookup (again, do not use ping as a DNS testing tool), make sure that you can query against your STM DNS virtual servers and see what happens to requests for "www.glb.company.com". Following is test output from the Linux DiG command. We have added comments to the output below, marked with <--(i)--|:

Testing

user@localhost $ dig @172.16.10.101 www.glb.company.com

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.101 www.glb.company.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17761
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com.            IN    A

;; ANSWER SECTION:
www.glb.company.com.   60       IN    A    172.16.20.100   <--(i)--| DataCentre-2 response

;; AUTHORITY SECTION:
glb.company.com.       604800   IN    NS   stm1.glb.company.com.
glb.company.com.       604800   IN    NS   stm2.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com.  604800   IN    A    172.16.10.101
stm2.glb.company.com.  604800   IN    A    172.16.20.101

;; Query time: 1 msec
;; SERVER: 172.16.10.101#53(172.16.10.101)
;; WHEN: Thu Mar 21 13:32:27 2013
;; MSG SIZE rcvd: 123

user@localhost $ dig @172.16.10.101 www.glb.company.com

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.101 www.glb.company.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9098
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com.            IN    A

;; ANSWER SECTION:
www.glb.company.com.   60       IN    A    172.16.10.100   <--(i)--| DataCentre-1 response

;; AUTHORITY SECTION:
glb.company.com.       604800   IN    NS   stm2.glb.company.com.
glb.company.com.       604800   IN    NS   stm1.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com.  604800   IN    A    172.16.10.101
stm2.glb.company.com.  604800   IN    A    172.16.20.101

;; Query time: 8 msec
;; SERVER: 172.16.10.101#53(172.16.10.101)
;; WHEN: Thu Mar 21 13:32:27 2013
;; MSG SIZE rcvd: 123

Step 4: GLB Health Monitors

Now that we have GLB running in round robin mode, the next thing to do is to set up HTTP health monitors, so that GLB knows whether the application in each DC is available before we send customers to that data centre for access to the website:

Create GLB Health Monitors

Navigate to "Catalogs > Monitors > Monitors Catalog > Create new monitor"
Fill out the form with the following values:
Name: GLB_mon_www_AU
Type: HTTP monitor
Scope: GLB/Pool
IP or Hostname to monitor: 172.16.10.100:80

Repeat for the other data centre:
Name: GLB_mon_www_US
Type: HTTP monitor
Scope: GLB/Pool
IP or Hostname to monitor: 172.16.20.100:80

Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Locations and Monitoring"
In DataCentre-1, in the field labelled "Add new monitor to the list", select "GLB_mon_www_AU" and click Update.
In DataCentre-2, in the field labelled "Add new monitor to the list", select "GLB_mon_www_US" and click Update.

Step 5: Activate your preferred GLB load balancing logic

Now that you have GLB set up and can detect application failures in each data centre, you can turn on the GLB load balancing algorithm that is right for your application. You can choose between:

GLB Load Balancing Methods

Load
Geo
Round Robin
Adaptive
Weighted Random
Active-Passive

The online help has a good description of each of these load balancing methods. You should take care to read it and select the one most appropriate for your business requirements and environment.

Step 6: Test everything

Once you have your GLB up and running, it is important to test it for all the failure scenarios you want it to cover. Remember: failover that has not been tested is not failover...
Following is a test matrix that you can use to check the essentials:

Test #  Condition                                                    Failure detected by / logic implemented by   GLB responded as designed?
1       All pool members in DataCentre-1 not available               GLB Health Monitor                           Yes / No
2       All pool members in DataCentre-2 not available               GLB Health Monitor                           Yes / No
3       Failure of STM1                                              GLB Health Monitor on STM2                   Yes / No
4       Failure of STM2                                              GLB Health Monitor on STM1                   Yes / No
5       Customers are sent to the geographically correct DataCentre  GLB Load Balancing Mechanism                 Yes / No

Notes on testing GLB:

The reason we instruct you to use DiG or nslookup in this guide for testing your DNS, rather than a tool that also does a DNS resolution such as ping, is that DiG and nslookup bypass your local host's DNS cache. Obviously, cached DNS records will prevent you from seeing changes in the status of your GLB while the cache entries are valid.

The Final Step - Create your CNAME:

Now that you have a working GLB entry for "www.glb.company.com", all that is left to do is to create or change the record for the real site "www.company.com" to be a CNAME for "www.glb.company.com".

Sample Zone File

;
; BIND data file for company.com
;
$TTL 604800
@       IN      SOA     ns1.company.com. info.company.com. (
                        201303211312    ; Serial
                        7200            ; Refresh
                        120             ; Retry
                        2419200         ; Expire
                        604800 )        ; Default TTL
;
@       IN      NS      ns1.company.com.
; Here is our CNAME
www     IN      CNAME   www.glb.company.com.
This article explains how to use the Pulse vADC RESTful Control API with Perl. It's a little more work than with Tech Tip: Using the RESTful Control API with Python - Overview, but once the basic environment is set up and the framework is in place, you can rapidly create scripts in Perl to manage the configuration.

Getting Started

The code examples below depend on several Perl modules that may not be installed by default on your client system: REST::Client, MIME::Base64 and JSON.

On a Linux system, the best way to pull these into the system perl is by using the system package manager (apt or rpm). On a Mac (or a home-grown perl instance), you can install them using CPAN.

Preparing a Mac to use CPAN

Install the package 'Command Line Tools for Xcode', either from within Xcode or directly from https://developer.apple.com/downloads/.

Some of the CPAN build scripts indirectly seek out /usr/bin/gcc-4.2 and won't build if it is missing. If gcc-4.2 is missing, the following should help:

$ ls -l /usr/bin/gcc-4.2
ls: /usr/bin/gcc-4.2: No such file or directory
$ sudo ln -s /usr/bin/gcc /usr/bin/gcc-4.2

Installing the perl modules

It may take 20 minutes for CPAN to initialize itself, download, compile, test and install the necessary perl modules:

$ sudo perl -MCPAN -e shell
cpan> install Bundle::CPAN
cpan> install REST::Client
cpan> install MIME::Base64
cpan> install JSON

Your first Perl REST client application

This application looks for a pool named 'Web Servers'. It prints a list of the nodes in the pool, and then sets the first one to drain.
#!/usr/bin/perl

use REST::Client;
use MIME::Base64;
use JSON;

# Configurables
$poolname = "Web Servers";
$endpoint = "stingray:9070";
$userpass = "admin:admin";

# Older implementations of LWP check this to disable server certificate verification
$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME} = 0;

# Set up the connection
my $client = REST::Client->new( );

# Newer implementations of LWP use this to disable server certificate verification
# Try SSL_verify_mode => SSL_VERIFY_NONE. 0 is more compatible, but may be deprecated
$client->getUseragent()->ssl_opts( SSL_verify_mode => 0 );

$client->setHost( "https://$endpoint" );
$client->addHeader( "Authorization", "Basic ".encode_base64( $userpass ) );

# Perform an HTTP GET on this URI
$client->GET( "/api/tm/1.0/config/active/pools/$poolname" );
die $client->responseContent() if( $client->responseCode() >= 300 );

my $r = decode_json( $client->responseContent() );

print "Pool: $poolname:\n";
print "  Nodes: " . join( ", ", @{$r->{properties}->{basic}->{nodes}} ) . "\n";
print "  Draining: " . join( ", ", @{$r->{properties}->{basic}->{draining}} ) . "\n";

# If the first node is not already draining, add it to the draining list.
# Note: the smartmatch operator '~~' is flagged as experimental by recent perls
$node = $r->{properties}->{basic}->{nodes}[0];
if( ! ($node ~~ @{$r->{properties}->{basic}->{draining}}) ) {
   print "  Planning to drain: $node\n";
   push @{$r->{properties}->{basic}->{draining}}, $node;
}

# Now PUT the updated configuration back
$client->addHeader( "Content-Type", "application/json" );
$client->PUT( "/api/tm/1.0/config/active/pools/$poolname", encode_json( $r ) );
die $client->responseContent() if( $client->responseCode() >= 300 );

$r = decode_json( $client->responseContent() );
print "  Now draining: " . join( ", ", @{$r->{properties}->{basic}->{draining}} ) . "\n";

Running the script

$ perl ./pool.pl
Pool: Web Servers:
  Nodes: 192.168.207.101:80, 192.168.207.103:80, 192.168.207.102:80
  Draining: 192.168.207.102:80
  Planning to drain: 192.168.207.101:80
  Now draining: 192.168.207.101:80, 192.168.207.102:80

Notes

This script was tested against two different installations of perl, with different versions of the LWP library. It was necessary to disable SSL certificate checking using:

$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME} = 0;

... with the older, and:

# Try SSL_verify_mode => SSL_VERIFY_NONE. 0 is more compatible, but may be deprecated
$client->getUseragent()->ssl_opts( SSL_verify_mode => 0 );

... with the newer. The older implementation failed when using SSL_VERIFY_NONE. YMMV.
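The heart of the Perl script above is a simple manipulation of the decoded pool resource: add the first node to the draining list if it is not already present. For readers following the companion Python article, here is an illustrative sketch of just that data manipulation (the pool layout is the one used in the example output above; the helper name is invented, and no network calls are made):

```python
import json

def drain_first_node(pool_config):
    """Add the pool's first node to the draining list if it is not already there.

    pool_config is the decoded JSON resource for a pool, as returned by
    GET /api/tm/1.0/config/active/pools/<name>.
    """
    basic = pool_config["properties"]["basic"]
    node = basic["nodes"][0]
    if node not in basic["draining"]:
        basic["draining"].append(node)
    return pool_config

# The pool layout from the example output above
doc = json.loads('{"properties": {"basic": {'
                 '"nodes": ["192.168.207.101:80", "192.168.207.103:80", "192.168.207.102:80"], '
                 '"draining": ["192.168.207.102:80"]}}}')
drain_first_node(doc)
print(doc["properties"]["basic"]["draining"])
# → ['192.168.207.102:80', '192.168.207.101:80']
```

Because the check is membership-based, running the function twice leaves the draining list unchanged, mirroring the guard in the Perl script.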
The following code uses Stingray's RESTful API to enable or disable a specific Virtual Server. The code is written in Perl. This program checks to see whether the Virtual Server "test vs" is enabled; if it is, it disables it, and if it is disabled, it enables it. A GET is done to retrieve the configuration data for the Virtual Server, and the "enabled" value in the "basic" properties section is checked. This is a boolean value, so if it is true it is set to false, and if it is false it is set to true. The changed data is then sent back to the server using a PUT.

startstopvs.pl

#!/usr/bin/perl

use REST::Client;
use MIME::Base64;
use JSON;
use URI::Escape;

# Since Stingray is using a self-signed certificate we don't need to verify it
$ENV{'PERL_LWP_SSL_VERIFY_HOSTNAME'} = 0;

my $vs = "test vs";

# Because there is a space in the virtual server name it must be escaped
my $url = "/api/tm/1.0/config/active/vservers/" . uri_escape($vs);

# Set up the connection
my $client = REST::Client->new();
$client->setHost("https://stingray.example.com:9070");
$client->addHeader("Authorization", "Basic " . encode_base64("admin:admin"));

# Get configuration data for the virtual server
$client->GET($url);

# Decode the json response. The result will be a hash
my $vsConfig = decode_json $client->responseContent();

if ($client->responseCode() == 200) {
    if ($vsConfig->{properties}->{basic}->{enabled}) {
        # The virtual server is enabled, disable it. We only need to send the data
        # that we are changing, so create a new hash with just this data.
        %newVSConfig = (properties => { basic => { enabled => JSON::false}});
        print "$vs is Enabled. Disable it.\n";
    } else {
        # The virtual server is disabled, enable it.
        %newVSConfig = (properties => { basic => { enabled => JSON::true}});
        print "$vs is Disabled. Enable it.\n";
    }
    $client->addHeader("Content-Type", "application/json");
    $client->PUT($url, encode_json(\%newVSConfig));
    $vsConfig = decode_json $client->responseContent();
    if ($client->responseCode() != 200) {
        print "Error putting virtual server config. status=" . $client->responseCode() .
              " Id=" . $vsConfig->{error_id} . ": " . $vsConfig->{error_text} . "\n";
    }
} else {
    print "Error getting virtual server config. status=" . $client->responseCode() .
          " Id=" . $vsConfig->{error_id} . ": " . $vsConfig->{error_text} . "\n";
}

Running the example

This code was tested with Perl 5.14.2 and version 249 of the REST::Client module. Run the Perl script as follows:

$ startstopvs.pl
test vs is Enabled. Disable it.

Notes

This program sends only the 'enabled' value to the server, by creating a new hash with just this value in the 'basic' properties section. Alternatively, the entire Virtual Server configuration could have been returned to the server with just the enabled value changed. Sending just the data that has changed reduces the chances of overwriting another user's changes if multiple programs are concurrently accessing the RESTful API.

Read More

Stingray REST API Guide in the Stingray Product Documentation
Feature Brief: Stingray's RESTful Control API
Tech Tip: Using Stingray's RESTful Control API
Tech Tip: Using the RESTful Control API with Perl
Collected Tech Tips: Using the RESTful Control API
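The partial-update pattern described in the Notes is language independent. As an illustration (the helper name is mine, not part of any product API), here is how the minimal toggle payload can be built in Python from a fetched configuration:

```python
import json

def build_toggle_payload(vs_config):
    """Given the decoded configuration of a virtual server, return the minimal
    JSON body that flips its 'enabled' flag, leaving everything else untouched."""
    enabled = vs_config["properties"]["basic"]["enabled"]
    return json.dumps({"properties": {"basic": {"enabled": not enabled}}})

# Simulated GET response for an enabled virtual server
current = {"properties": {"basic": {"enabled": True, "pool": "Web Servers"}}}
print(build_toggle_payload(current))
# → {"properties": {"basic": {"enabled": false}}}
```

Note that the payload contains only the one key being changed; everything else in the fetched configuration is deliberately left out of the PUT body.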
This user-contributed article describes how to parse and decode credentials in NTLM authentication. You can then log these credentials for audit reasons. It was a requirement that we needed to log all usernames against incoming requests, so that should there be a case of misuse, we would know which user generated the request, and which workstation was used.

This rule extracts the NTLM authentication details and logs them to the Stingray Event Log.

NTLM background

NTLM uses three different NTLM message types to complete a handshake for a given request. These are:

NTLM Type-1 Message: This contains the hostname, the domain name, and the fact that it is an NTLM request of type 1, to initiate the correct stage in the handshake.
NTLM Type-2 Message: This contains an NTLM challenge from the server.
NTLM Type-3 Message: This contains the DOMAIN, USERNAME, and Workstation\Hostname.

For further information on the NTLM handshake, go here: http://davenport.sourceforge.net/ntlm.html

The Plan

To fulfil the requirements, and to ease the load on the servers, we will ignore Type-1/Type-2 messages and instead just decode and process Type-3 messages. We want to make the data available for the traffic manager to log, but still keep the request secure, i.e. not transmit the decoded information in headers. We will use the connection.data.set($key, $value) function; to access the values, use the %{key}d macro in the Virtual Server -> Edit -> Request Logging -> log!format parameter.

The Code

Please note that Basic Authentication handling is included, just in case you need it:

$h = http.getHeader( "Authorization" );

# Although none of our websites use this, I have still included Basic Authentication.
if( string.startsWith( $h, "Basic " ) ) {
   $enc = string.skip( $h, 6 );
   $userpasswd = string.base64decode( $enc );
   log.info( "Basic: User used: ".$userpasswd );
}

# Test to see if the Authorization token begins with NTLM
if( string.startsWith( $h, "NTLM " ) ) {

   # Skip the Authorization token header 'NTLM ', so we can decode just the token.
   $enc = string.skip( $h, 5 );
   $ntlmpacket = string.base64decode( $enc ); # Decode the token

   # Username, DOMAIN, and Workstation are only in the third NTLM handshake message,
   # so we ignore the initial handshakes and test for the third, '3'.
   if( (string.bytesToInt(string.substring($ntlmpacket, 8, 8))) == 3 ) {

      # Extract the header fields.
      $B = string.substring($ntlmpacket, 28, 51);

      # Select and decode the Domain offset, Domain length, Username length and Workstation length.
      # Numbers are little-endian, so we can't just use string.bytesToInt() on the entire substring.
      $dLen = ((string.bytesToInt(string.substring($B, 0, 0)))
         +((string.bytesToInt(string.substring($B, 1, 1)))*256));
      $dOff = ((string.bytesToInt(string.substring($B, 4, 4)))
         +((string.bytesToInt(string.substring($B, 5, 5)))*256)
         +((string.bytesToInt(string.substring($B, 6, 6)))*(256*256))
         +((string.bytesToInt(string.substring($B, 7, 7)))*(256*256*256)));
      $uLen = ((string.bytesToInt(string.substring($B, 8, 8)))
         +((string.bytesToInt(string.substring($B, 9, 9)))*256));
      $wLen = ((string.bytesToInt(string.substring($B, 16, 16)))
         +((string.bytesToInt(string.substring($B, 17, 17)))*256));

      # The data we are after is back to back, so we only need the initial offset;
      # we can decode the rest with the lengths of the previous keys.
      connection.data.set( "NTLM_Domain",
         string.substring($ntlmpacket, $dOff, $dOff+$dLen-1 ) );
      connection.data.set( "NTLM_User",
         string.substring($ntlmpacket, $dOff + $dLen, $dOff + $dLen + $uLen - 1 ) );
      connection.data.set( "NTLM_Workstation",
         string.substring($ntlmpacket, $dOff + $dLen + $uLen, $dOff + $dLen + $uLen + $wLen - 1 ) );
   }
}

You can log the NTLM authentication parameters in the Request log using the appropriate log format macros: %{NTLM_Domain}d etc.
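To see what the byte arithmetic in the rule is doing, the following Python sketch builds a tiny synthetic Type-3 message and decodes it with the same little-endian, back-to-back logic. It is illustrative only: a real Type-3 message normally carries UTF-16LE strings and additional fields, and the helper names here are invented.

```python
import struct

def parse_type3(msg):
    """Extract domain, user and workstation from an NTLM Type-3 message.

    Mirrors the TrafficScript above: each security buffer at offsets 28-51
    is a little-endian (length, max-length, offset) triple, and the three
    payloads are assumed to sit back to back, as the rule assumes.
    """
    if msg[:8] != b"NTLMSSP\x00" or struct.unpack_from("<I", msg, 8)[0] != 3:
        return None  # not a Type-3 message
    d_len, _, d_off = struct.unpack_from("<HHI", msg, 28)   # domain buffer
    u_len, _, _ = struct.unpack_from("<HHI", msg, 36)       # user buffer
    w_len, _, _ = struct.unpack_from("<HHI", msg, 44)       # workstation buffer
    domain = msg[d_off : d_off + d_len]
    user = msg[d_off + d_len : d_off + d_len + u_len]
    workstation = msg[d_off + d_len + u_len : d_off + d_len + u_len + w_len]
    return domain, user, workstation

# Build a minimal synthetic Type-3 message to exercise the parser
payload = b"EXAMPLE" + b"alice" + b"WS01"     # domain, user, workstation, back to back
head = b"NTLMSSP\x00" + struct.pack("<I", 3)
head += struct.pack("<HHI", 0, 0, 0) * 2      # empty LM / NTLM response buffers
off = 52                                      # payload starts right after the buffers
head += struct.pack("<HHI", 7, 7, off)        # domain buffer
head += struct.pack("<HHI", 5, 5, off + 7)    # user buffer
head += struct.pack("<HHI", 4, 4, off + 12)   # workstation buffer
msg = head + payload
print(parse_type3(msg))
# → (b'EXAMPLE', b'alice', b'WS01')
```

The `<HHI>`-style unpacking is exactly the manual `*256` arithmetic in the rule: two-byte little-endian lengths followed by a four-byte little-endian offset.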
Top examples of Pulse vADC in action

Examples of how Pulse vADC can be deployed to address a range of application delivery challenges.

Modifying Content

Simple web page changes - updating a copyright date
Adding meta-tags to a website with Traffic Manager
Tracking user activity with Google Analytics and Google Analytics revisited
Embedding RSS data into web content using Traffic Manager
Add a Countdown Timer
Using TrafficScript to add a Twitter feed to your web site
Embedded Twitter Timeline
Embedded Google Maps
Watermarking PDF documents with Traffic Manager and Java Extensions
Watermarking Images with Traffic Manager and Java Extensions
Watermarking web content with Pulse vADC and TrafficScript

Prioritizing Traffic

Evaluating and Prioritizing Traffic with Traffic Manager
HowTo: Control Bandwidth Management
Detecting and Managing Abusive Referers
Using Pulse vADC to Catch Spiders
Dynamic rate shaping slow applications
Stop hot-linking and bandwidth theft!
Slowing down busy users - driving the REST API from TrafficScript

Performance Optimization

Cache your website - just for one second?
HowTo: Monitor the response time of slow services
HowTo: Use low-bandwidth content during periods of high load

Fixing Application Problems

No more 404 Not Found...?
Hiding Application Errors
Sending custom error pages

Compliance Problems

Satisfying EU cookie regulations using The cookiesDirective.js and TrafficScript

Security problems

The "Contact Us" attack against mail servers
Protecting against Java and PHP floating point bugs
Managing DDoS attacks with Traffic Manager
Enhanced anti-DDoS using TrafficScript, Event Handlers and iptables
How to stop 'login abuse', using TrafficScript
Bind9 Exploit in the Wild...
Protecting against the range header denial-of-service in Apache HTTPD
Checking IP addresses against a DNS blacklist with Traffic Manager
Heartbleed: Using TrafficScript to detect TLS heartbeat records
TrafficScript rule to protect against "Shellshock" bash vulnerability (CVE-2014-6271)
SAML 2.0 Protocol Validation with TrafficScript
Disabling SSL v3.0 for SteelApp

Infrastructure

Transparent Load Balancing with Traffic Manager
HowTo: Launch a website at 5am
Using Stingray Traffic Manager as a Forward Proxy
Tunnelling multiple protocols through the same port
AutoScaling Docker applications with Traffic Manager
Elastic Application Delivery - Demo
How to deploy Traffic Manager Cluster in AWS VPC

Other solutions

Building a load-balancing MySQL proxy with TrafficScript
Serving Web Content from Traffic Manager using Python and Serving Web Content from Traffic Manager using Java
Virtual Hosting FTP services
Managing WebSockets traffic with Traffic Manager
TrafficScript can Tweet Too
Instrument web content with Traffic Manager
Antivirus Protection for Web Applications
Generating Mandelbrot sets using TrafficScript
Content Optimization across Equatorial Boundaries
With more services being delivered through a browser, it's safe to say web applications are here to stay. The rapid growth of web-enabled applications and an increasing number of client devices mean that organizations are dealing with more document transfer methods than ever before. Providing easy access to these applications (web mail, intranet portals, document storage, etc.) can expose vulnerable points in the network.

When it comes to security and protection, application owners typically cover the common threats and vulnerabilities. What is often overlooked happens to be one of the first things we learned about on the internet: virus protection. Some application owners consider the response "We have virus scanners running on the servers" sufficient. These same owners implement security plans that involve extending protection as far out as possible, yet surprisingly allow a virus to travel several layers into the architecture before it is caught.

Pulse vADC can extend protection for your applications with unmatched software flexibility and scale. Utilize existing investments by installing Pulse vADC on your infrastructure (Linux, Solaris, VMware, Hyper-V, etc.) and integrating with existing antivirus scanners. Deploy Pulse vADC (available with many providers: Amazon, Azure, CoSentry, Datapipe, Firehost, GoGrid, Joyent, Layered Tech, Liquidweb, Logicworks, Rackspace, Sungard, Xerox, and many others) and externally proxy your applications to remove threats before they are in your infrastructure. Additionally, when serving as a forward proxy for clients, Pulse vADC can be used to mitigate virus propagation by scanning outbound content.

The Pulse Web Application Firewall ICAP Client Handler provides the ability to integrate with an ICAP server. ICAP (Internet Content Adaptation Protocol) is a protocol aimed at providing simple object-based content vectoring for HTTP services. The Web Application Firewall acts as an ICAP client and passes requests to a specified ICAP server.
This enables you to integrate with third-party products based on the ICAP protocol. In particular, you can use the ICAP Client Handler as a virus scanner interface for scanning uploads to your web application.

Example Deployment

This deployment uses version 9.7 of the Pulse Traffic Manager with the open source applications ClamAV and c-icap installed locally. If utilizing a cluster of Traffic Managers, this deployment should be performed on all nodes of the cluster. Additionally, Traffic Manager could be utilized as an ADC to extend availability and performance across multiple external ICAP application servers. I would also like to credit Thomas Masso, Jim Young, and Brian Gautreau - thank you for your assistance!

"ClamAV is an open source (GPL) antivirus engine designed for detecting Trojans, viruses, malware and other malicious threats." - http://www.clamav.net/

"c-icap is an implementation of an ICAP server. It can be used with HTTP proxies that support the ICAP protocol to implement content adaptation and filtering services." - The c-icap project

Installation of ClamAV, c-icap, and libc-icap-mod-clamav

For this example, public repositories are used to install the packages on version 9.7 of the Traffic Manager virtual appliance with the default configuration. To install in a different manner or operating system, consult the ClamAV and c-icap documentation.

Run the following command to back up the sources.list file:

cp /etc/apt/sources.list /etc/apt/sources.list.rvbdbackup

Run the following commands to update the sources.list file. *Tested with Traffic Manager virtual appliance version 9.7. For other Ubuntu releases, replace 'precise' with the current version installed. Run "lsb_release -sc" to find out your release.
cat <<EOF >> /etc/apt/sources.list
deb http://ch.archive.ubuntu.com/ubuntu/ precise main restricted
deb-src http://ch.archive.ubuntu.com/ubuntu/ precise main restricted
deb http://us.archive.ubuntu.com/ubuntu/ precise universe
deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe
deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
EOF

Run the following command to retrieve the updated package lists:

apt-get update

Run the following command to install ClamAV, c-icap, and libc-icap-mod-clamav:

apt-get install clamav c-icap libc-icap-mod-clamav

Run the following command to restore your sources.list:

cp /etc/apt/sources.list.rvbdbackup /etc/apt/sources.list

Configure the c-icap ClamAV service

Run the following commands to add lines to /etc/c-icap/c-icap.conf:

cat <<EOF >> /etc/c-icap/c-icap.conf
Service clamav srv_clamav.so
ServiceAlias avscan srv_clamav?allow204=on&sizelimit=off&mode=simple
srv_clamav.ScanFileTypes DATA EXECUTABLE ARCHIVE GIF JPEG MSOFFICE
srv_clamav.MaxObjectSize 100M
EOF

*Consult the ClamAV and c-icap documentation and customize the configuration and settings for ClamAV and c-icap (i.e. definition updates, ScanFileTypes, restricting c-icap access, etc.) for your deployment.

Just for fun, run the following command to manually update the ClamAV database:

/usr/bin/freshclam

Configure the ICAP Server to Start

This process can be completed a few different ways; for this example we are going to use the Event Alerting functionality of Traffic Manager to start the c-icap server when the Web Application Firewall is started.

Save the following bash script (for this example, start_icap.sh) on your computer:

#!/bin/bash
/usr/bin/c-icap
#END

Upload the script via the Traffic Manager UI under Catalogs > Extra Files > Action Programs.
(See Figure 1)

Figure 1

Create a new event type (for this example, named "Firewall Started") under System > Alerting > Manage Event Types. Select "appfirewallcontrolstarted: Application firewall started" and click Update to save. (See Figure 2)

Figure 2

Create a new action (for this example, named "Start ICAP") under System > Alerting > Manage Actions. Select the "Program" radio button and click "Add Action" to save. (See Figure 3)

Figure 3

Configure the "Start ICAP" Action Program to use the "start_icap.sh" script; for this example we will adjust the timeout setting to 300. Click Update to save. (See Figure 4)

Figure 4

Configure the Alert Mapping under System > Alerting to use the Event Type and Action previously created. Click Update to save your changes. (See Figure 5)

Figure 5

Restart the Application Firewall, or reboot, to automatically start the c-icap server. Alternatively, you can run the /usr/bin/c-icap command from the console, or select "Update and Test" under the "Start ICAP" alert configuration page of the UI to manually start c-icap.

Configure the Web Application Firewall

Within the Web Application Firewall UI, add and configure the ICAPClientHandler using the following attributes and values:

icap_server_location - 127.0.0.1
icap_server_resource - /avscan

Testing Notes

Check the WAF application logs. Use Full logging for the Application configuration and enable_logging for the ICAPClientHandler. As with any system, use full logging with caution, as the logs could fill fast!

Check the c-icap logs (cat /var/log/c-icap/access.log and /var/log/c-icap/server.log). Note: changing the /etc/c-icap/c-icap.conf "DebugLevel" value to 9 is useful for testing and recording to /var/log/c-icap/server.log. *You may want to change this back to 1 when you are done testing.
The Action Settings page in the Traffic Manager UI (for this example, Alerting > Actions > Start ICAP) also provides an "Update and Test" button that allows you to trigger the action and start the c-icap server. Enable verbose logging for the "Start ICAP" action in Traffic Manager for more information from the event mechanism. *You may want to disable this setting again when you are done testing.

Additional Information

Pulse Secure Virtual Traffic Manager
Pulse Secure Virtual Web Application Firewall
Product Documentation
RFC 3507 - Internet Content Adaptation Protocol (ICAP)
The c-icap project
Clam AntiVirus
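As a final smoke test, the following Python sketch checks that the c-icap service is accepting ICAP connections by sending an OPTIONS request (RFC 3507) to the avscan service. This is an illustrative helper, not part of the c-icap tooling: it assumes c-icap is listening on its default ICAP port 1344 on 127.0.0.1; adjust the host, port and service name to match your c-icap.conf.

```python
import socket

def build_options_request(host="127.0.0.1", port=1344, service="avscan"):
    """Build a minimal ICAP OPTIONS request (see RFC 3507)."""
    return (
        "OPTIONS icap://{h}:{p}/{s} ICAP/1.0\r\n"
        "Host: {h}:{p}\r\n"
        "User-Agent: icap-smoke-test\r\n"
        "\r\n"
    ).format(h=host, p=port, s=service).encode("ascii")

def check_icap(host="127.0.0.1", port=1344, service="avscan", timeout=5):
    """Connect to the ICAP server, send OPTIONS, and return the first
    line of the reply - 'ICAP/1.0 200 OK' indicates a healthy service."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(build_options_request(host, port, service))
        reply = sock.recv(4096).decode("ascii", errors="replace")
    return reply.splitlines()[0] if reply else ""
```

check_icap() needs a running c-icap server to return anything; build_options_request() can be inspected on its own to see the wire format.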
View full article
Request rule

The request rule below captures the start time for each request and stores it in a connection data value called "start":

$tm = sys.time.highres();
# Don't store $tm directly; use sprintf to preserve precision
connection.data.set( "start", string.sprintf( "%f", $tm ) );

Response rule

The following response rule then tests each response against a threshold, which is currently set to 6 seconds. A log entry is written to the event log for each response that takes longer than the 6-second threshold to complete. Each log entry will show the response time in seconds, the back-end node used, and the full URI of the request:

$THRESHOLD = 6;   # Response time in (integer) seconds above
                  # which requests are logged.

$start = connection.data.get( "start" );
$now = sys.time.highres();
$diff = ( $now - $start );

if( $diff > $THRESHOLD ) {
   $uri = http.getRawURL();
   $node = connection.getNode();
   log.info( "SLOW REQUEST (" . $diff . "s) " . $node . ":" . $uri );
}

The information in the event log will be useful to identify patterns in slow connections. For example, it might be that all log entries relate to RSS connections, indicating that there might be a problem with the RSS content.

Read more

Collected Tech Tips: TrafficScript examples
View full article
This article explains how to use Traffic Manager's REST Control API using the excellent requests Python library.

There are many ways to install the requests library. On my test client (MacOSX), the following was sufficient:

$ sudo easy_install pip
$ sudo pip install requests

Resources

The REST API gives you access to the Traffic Manager configuration, presented in the form of resources. The format of the data exchanged using the RESTful API will depend on the type of resource being accessed:

Data for configuration resources, such as Virtual Servers and Pools, are exchanged in JSON format using the MIME type "application/json". When getting data on a resource with a GET request, the data is returned in JSON format and must be deserialized (decoded) into a Python data structure. When adding or changing a resource with a PUT request, the data must be serialized (encoded) from a Python data structure into JSON format.
Files, such as rules and those in the extra directory, are exchanged in raw format using the MIME type "application/octet-stream".

Working with JSON and Python

The json module provides functions for JSON serializing and deserializing. To take a Python data structure and serialize it into JSON format, use json.dumps(); to deserialize a JSON-formatted string into a Python data structure, use json.loads().

Working with a RESTful API and Python

To make the programming easier, the program examples that follow utilize the requests library as the REST client. To use the requests library, you first set up a requests session as follows, replacing <userid> and <password> with the appropriate values:

client = requests.Session()
client.auth = ('<userid>', '<password>')
client.verify = False

The last line prevents requests from verifying that the certificate used by Traffic Manager is from a certificate authority, so that the self-signed certificate used by Traffic Manager will be allowed.  
Once the session is set up, you can make GET, PUT and DELETE calls as follows:

response = client.get(<URL>)
response = client.put(<URL>, data=<JSON data>, headers=<headers>)
response = client.delete(<URL>)

The URL for the RESTful API will be of the form:

https://<STM hostname or IP>:9070/api/tm/1.0/config/active/

followed by a resource type, or a resource type and resource. For example, to get a list of all the pools from the Traffic Manager instance stingray.example.com, it would be:

https://stingray.example.com:9070/api/tm/1.0/config/active/pools

And to get the configuration information for the pool "testpool", the URL would be:

https://stingray.example.com:9070/api/tm/1.0/config/active/pools/testpool

For most Python environments, it will probably be necessary to install the requests library. For some Python environments it may also be necessary to install the httplib2 module.

Data Structures

JSON responses from a GET or PUT are deserialized into a Python dictionary that always contains one element. The key to this element will be:

'children' for lists of resources. The value will be a Python list, with each element in the list being a dictionary with the key 'name' set to the name of the resource and the key 'href' set to the URI of the resource.
'properties' for configuration resources. The value will be a dictionary, with each key/value pair being a section of properties: the key is the name of the section, and the value is a dictionary containing the configuration values as key/value pairs. Configuration values can be scalars, lists or dictionaries.

Please see Feature Brief: Traffic Manager's RESTful Control API for examples of these data structures; something like the Chrome REST Console can be used to see what the actual data looks like.

Read More

The REST API Guide in the Product Documentation
Feature Brief: Traffic Manager's RESTful Control API
Collected Tech Tips: Using the RESTful Control API
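To make the 'children' structure above concrete, here is a small self-contained sketch. It deserializes a sample listing of the kind returned by a GET on .../config/active/pools and extracts the pool names. The JSON payload is illustrative; against a live Traffic Manager you would instead pass response.text from client.get(url) to json.loads():

```python
import json

# Illustrative JSON of the form returned by
# GET https://<STM hostname or IP>:9070/api/tm/1.0/config/active/pools
listing = '''
{ "children": [
    { "name": "testpool", "href": "/api/tm/1.0/config/active/pools/testpool" },
    { "name": "webpool",  "href": "/api/tm/1.0/config/active/pools/webpool" }
] }
'''

def pool_names(json_text):
    """Deserialize a resource listing and return the resource names."""
    data = json.loads(json_text)                     # JSON -> Python dict
    return [child["name"] for child in data["children"]]

print(pool_names(listing))   # -> ['testpool', 'webpool']
```

The same pattern applies to any resource type listing (virtual_servers, rules, and so on), since every listing uses the single 'children' key.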
View full article
This technical brief describes recommended techniques for installing, configuring and tuning Traffic Manager. You should also refer to the Product Documentation for detailed instructions on the installation process of Traffic Manager software.

Getting started

Hardware and Software requirements for Traffic Manager
Pulse Virtual Traffic Manager Kernel Modules for Linux Software

Tuning Stingray Traffic Manager

Tuning Traffic Manager for best performance
Tech Tip: Where to find a master list of the Traffic Manager configuration keys

Tuning the operating system kernel

The following instructions only apply to Traffic Manager software running on a customer-supplied Linux or Solaris kernel:

Tuning the Linux operating system for Traffic Manager
Routing and Performance tuning for Traffic Manager on Linux
Tuning the Solaris operating system for Traffic Manager

Debugging procedures for Performance Problems

Tech Tip: Debugging Techniques for Performance Investigation

Load Testing

Load Testing recommendations for Traffic Manager

Conclusion

The Traffic Manager software and the operating system kernels both seek to optimize the use of the resources available to them, and there is generally little additional tuning necessary except when running in heavily-loaded or performance-critical environments. When tuning is required, the majority of tunings relate to the kernel and TCP stack and are common to all networked applications. Experience and knowledge you have of tuning webservers and other applications on Linux or Solaris can be applied directly to Traffic Manager tuning, and skills that you gain working with Traffic Manager can be transferred to other situations.

The importance of good application design

TCP and kernel performance tuning will only help to a small degree if the application running over HTTP is poorly designed.  
Heavy-weight web pages with large quantities of referenced content and scripts will tend to deliver a poorer user experience and will limit the capacity of the network to support large numbers of users. Traffic Manager's Web Content Optimization capability ("Aptimizer") applies best-practice rules for content optimization dynamically, as the content is delivered by Traffic Manager.  It applies browser-aware techniques to reduce bandwidth and TCP round-trips (image, CSS, JavaScript and HTML minification, image resampling, CSS merging, image spriting) and it automatically applies URL versioning and far-future expires to ensure that clients cache all content and never needlessly request an update for a resource which has not changed. Traffic Manager's Aptimizer is a general purpose solution that complements TCP tuning to give better performance and a better service level.  If you’re serious about optimizing web performance, you should apply a range of techniques from layer 2-4 (network) up to layer 7 and beyond to deliver the best possible end-user experience while maximizing the capacity of your infrastructure.
View full article
Full Proxy load balancing

Stingray Traffic Manager functions as a proxy. It terminates TCP (and UDP) connections locally, and makes new connections to the target (back-end) servers. This is a consequence of the architecture of the Stingray software (user-level software running on a general-purpose kernel), and most modern traffic management devices use a similar architecture.

Previous-generation load balancers (aka layer 3-4 load balancers) are based on NAT-capable routers; their mode of operation is simply to make intelligent, load-based destination-NAT decisions on incoming traffic, rather than relying on a static routing table. The proxy mode of operation allows Stingray to perform a range of network optimizations (including TCP offload and HTTP multiplexing) that are not possible with NAT-based L3/4 load balancers. However, the proxy mode is not 'transparent' to clients and servers in the way that a layer 3/4 load balancer is:

Clients must be directed to connect to an IP address and port that the load balancer is listening on. This is generally achieved by mapping the DNS name of a service to a traffic IP address that the load balancer listens on, but some legacy or inflexible network architectures may not make this possible.
Servers will observe that the connections originate from the load balancer, not the remote client. This can be a problem if the server needs to perform logging or access control based on the client's IP address.

Transparent Proxy Load Balancing

It is possible to run Stingray so that it appears transparent to clients and servers, like an L3/4 load balancer. There are two independent steps to this:

Step 1. Transparent capturing of incoming connections

Put Stingray inline in your network, i.e. as an intermediate gateway. 
Use iptables to capture selected packets that would otherwise be forwarded and raise them up to Stingray:   # iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 80   This iptables rule intercepts all incoming TCP packets that are destined to port 80, any destination IP address, and rewrites the destination IP address to a local one (the primary IP address on the interface the packet was received on). It can optionally also rewrite the port (--to-port). The Linux kernel then makes a routing decision, observes that the packet is targeted for a local IP and passes it up to the application listening on the destination IP and port (i.e. Stingray). You can manually enter the iptables rule from the Linux command line on the Stingray system.   Step 2. Transparent forwarding of data   Because Stingray acts as a proxy, it makes a new connection to the destination server. This connection will originate from an IP address on the Stingray system; the back-end server will observe that the connection comes from Stingray. This is not transparent.   The IP Transparency capability in a pool can spoof the source address of a connection to a back-end server - HowTo: Spoof source IP addresses with IP Transparency.  By default, it will set the source address to be the remote IP address of the client-side connection. From the back-end server’s perspective, the TCP or UDP packets it receives appear to originate from the remote client, so the Stingray system is transparent.   
This capability is enabled by the IP Transparency setting in the connection management properties of a pool.

There are two caveats to this technique:

It requires that the Stingray box lies on the back-end server’s default route (typically, the Stingray box is on the same local network, so it is configured to be the default gateway for the server).
It has a couple of performance impacts: Stingray cannot use the HTTP keepalive optimization, and the IP transparency module imposes a performance hit on the Stingray kernel.

Nevertheless, it's a common deployment method when the traffic manager should appear transparent to the back-end servers.

Additional Notes

If you use the iptables technique to capture and rewrite incoming packets, the TrafficScript function request.getLocalIP() will return a local IP address. You can use the function request.getDestIP() to determine the original destination IP address.

You can control how the IP address spoofing capability functions in two ways:

To avoid using IP transparency (for example, when managing protocols such as HTTP where transparency is often not necessary), select a pool that does not use IP transparency. You can employ very fine-grained selection; for example, only using IP Transparency for HTTP transactions where it is absolutely necessary.
To explicitly control the source IP address, use the TrafficScript function request.setRemoteIP() as described in the article HowTo: Spoof source IP addresses with IP Transparency.

This method does not enable you to use legacy layer 2/3 load balancing methods such as Techniques for Direct Server Return with Stingray Traffic Manager; Stingray still functions as a full proxy, giving you the ability to apply the full suite of layer 7 optimizations and traffic manipulation that Stingray makes available.
View full article
Why write a health monitor in TrafficScript?   The Health Monitoring capabilities (as described in Feature Brief: Health Monitoring in Traffic Manager) are very comprehensive, and the built-in templates allow you to conduct sophisticated custom dialogues, but sometimes you might wish to resort to a full programming language to implement the tests you need.   Particularly on the Traffic Manager Virtual Appliance, your options can be limited. There's a minimal Perl interpreter included (see Tech Tip: Running Perl code on the Traffic Manager Virtual Appliance), and you can upload compiled binaries (Writing a custom Health Monitor in C) and shell scripts. This article explains how you can use TrafficScript to implement health monitors, and of course with Java Extensions, TrafficScript can 'call out' to a range of third-party libraries as well.   Overview   We'll implement the solution using a custom 'script' health monitor.  This health monitor will probe a virtual server running on the local Traffic Manager (using an HTTP request), and pass it all of the parameters relevant to the health request.   A TrafficScript rule running on the Traffic Manager can perform the appropriate health check and respond with a 'PASS' (200 OK) or 'FAIL' (500 Error) response.   The health monitor script   The health monitor script is straightforward and should not need any customization.  It will take its input from the health monitor configuration.   
#!/bin/sh
exec $ZEUSHOME/perl/miniperl -wx $0 ${1+"$@"}
    if 0;

#!/usr/bin/perl
#line 7

BEGIN {
    # Pull in the Traffic Manager libraries for HTTP requests
    unshift @INC, "$ENV{ZEUSHOME}/zxtmadmin/lib/perl", "$ENV{ZEUSHOME}/zxtm/lib/perl";
}

use Zeus::ZXTM::Monitor qw( ParseArguments MonitorWorked MonitorFailed Log );
use Zeus::HTMLUtils qw( make_query_string );
use Zeus::HTTP;

my %args = ParseArguments();

my $url = "http://localhost:$args{vsport}$args{path}?" . make_query_string( %args );
my $http = new Zeus::HTTP( GET => $url );
$http->load();

Log( "HTTP GET for $url returned status: " . $http->code() );

if ( $http->code() == 200 ) {
    MonitorWorked();
} else {
    MonitorFailed( "Monitor failed: " . $http->code() . " " . $http->body() );
}

Upload this to the Monitor Programs of the Extra Files section of the catalog, and then create an "External Program Monitor" based on that script. You will need to add two more configuration parameters to this health monitor configuration:

vsport: This should be set to the port of the virtual server that will host the TrafficScript test
path: This is optional - you can use it if you want to run several different health tests from the TrafficScript rule

Your configuration should look something like this:

The virtual server

Create an HTTP virtual server listening on the appropriate port number (vsport). You can bind this virtual server to localhost if you want to prevent external clients from accessing it.

The virtual server should use the 'discard' pool - we're going to add a request rule that always sends a response, so there's no need for any backend nodes.

The TrafficScript Rule

The 'business end' of your TrafficScript health monitor resides in the TrafficScript rule.  
This rule is invoked every time the health monitor script is run, and it is given the details of the node which is to be checked.

The rule should return a 200 OK HTTP response if the node is OK, and a different response (such as 500 Error) if the node has failed the test.

$path = http.getPath();   # Use 'path' if you would like to publish
                          # several different tests from this rule

$ip = http.getFormParam( "ipaddr" );
$port = http.getFormParam( "port" );
$nodename = http.getFormParam( "node" );

# We're going to test the node $nodename on $ip:$port
#
# Useful functions include:
#   http.request.get/put/post/delete()
#   tcp.connect/read/write/close()
#   auth.query()
#   java.run()

sub Failed( $msg ) {
   http.sendResponse( 500, "text/plain", $msg, "" );
}

# Let's run a simple GET
$req = "GET / HTTP/1.0\r\nHost: www.riverbed.com\r\n\r\n";
$timeout = 1000; # ms
$sock = tcp.connect( $ip, $port, $timeout );
tcp.write( $sock, $req, $timeout );
$resp = tcp.read( $sock, 102400, $timeout );

# Perform whatever tests we want on the response data.
# For example, it should begin with '200 OK'

if( !string.startsWith( $resp, "HTTP/1.1 200 OK" ) ) {
   Failed( "Didn't get expected response status" );
}

# All good
http.sendResponse( 200, "text/plain", "", "" );
View full article
Most of the data manipulation you'll do with TrafficScript will involve manipulating strings. Here is a quick account of the most useful string functions.

Strings are managed efficiently within the TrafficScript runtime engine, and memory copies are kept to a minimum.

Creating and concatenating strings

Assign a string to a variable:

$password = "Secret";

Many functions take strings as arguments, and return strings as values:

$host = http.getHeader( "Host" );

Concatenate strings with the '.' operator:

$greeting = "Hello, " . $name;
$link = "<a href=\"" . $url . "\">Click here</a>";

Basic string functions

The length of a string:

$len = string.len( $mystring );

Skip or drop bytes off the start or end of a string:

$interpreter = "#!/bin/sh";
$str = string.skip( $interpreter, 2 ); # returns "/bin/sh"

$greeting = "Hello, world!...";
$msg = string.drop( $greeting, 4 ); # returns "Hello, world"

Remove whitespace from the start and end of a string:

$email = "  foo@example.com  ";
$email = string.trim( $email );  # returns "foo@example.com"

The string.substring() function returns a substring from a string:

$time = "09:15:45";
$hr  = string.substring( $time, 0, 2 );  # returns "09"
$min = string.substring( $time, 3, 2 );  # returns "15"
$sec = string.substring( $time, 6, 2 );  # returns "45"

String tests

The following functions test the contents of a string:

if( string.startsWith( $url, "http://" ) ) {
   ... # $url begins "http://"
}

if( string.endsWith( $url, ".php" ) ) {
   ... # $url ends ".php"
}

if( string.contains( $url, "/.." ) ) {
   ... # $url contains "/.."
}

The string.find() function searches for a substring and returns its location:

$greeting = "Hello, world!";
$i = string.find( $greeting, "llo" );   # returns 2
$j = string.find( $greeting, "Hello" ); # returns 0
$k = string.find( $greeting, "name" );  # returns -1 (not found)

String compares can be performed with the '==', '!=', '<', '>', '<=' and '>=' operators:

if( $protocol <= "1.0" ) {
   ...
}

Note that TrafficScript's Type Casting Rules can affect the behaviour of a compare operation:

$count = "100";

# Do a string compare
if( $count < "99" ) {
   ...
}

# Do an integer compare
if( $count < 99 ) {
   ...
}

The string.cmp() and string.icmp() functions perform case-sensitive and case-insensitive string compares, returning negative, zero or positive values. Both arguments are converted to strings if necessary:

if( string.cmp( $protocol, "1.0" ) < 0 ) {
   ...
}

# This is identical
if( string.icmp( $protocol, 1.0 ) < 0 ) {
   ...
}

String matching and substitution

The string.wildmatch() function performs a shell-like wildcard match on a string. The character '?' matches any single character, and '*' matches any substring:

$url = http.getPath();
if( string.wildmatch( $url, "/cgi-bin/*.cgi" ) ) {
   ... # is a request for a CGI script, without pathinfo
}

Regular expression matching can be performed with the string.regexMatch() function. Stingray uses the standard PCRE regular expression syntax, and matches are placed in the magic $1 to $9 variables:

$id = "user=Joe, password=Secret";
if( string.regexmatch( $id, "^user=(.*), password=(.*)$" ) ) {
   log.info( "Got UserID: " . $1 . ", Password: " . $2 );
}

Perform case-insensitive regular expression matches using the optional third function argument:

string.regexmatch( $username, "joe", "i" );

Regular expression substitutions are the easiest and most powerful way to perform complex string manipulation. 
Text in a string which matches the regular expression is replaced by the substitution:

# Rewrite requests for "/secure/something" to "/private/something"
$url = string.regexsub( $url, "^/secure", "/private" );

Normally, only the first match is replaced; the optional "g" flag indicates that a 'global' replace should be performed, where every match is replaced.

# The document contains references to "oldsite.example.com";
# replace these with "newsite.example.com"
$response = http.getResponseBody();
$response = string.regexsub( $response, "oldsite.example.com", "newsite.example.com", "g" );
http.setResponseBody( $response );

String encoding and decoding

Convert between case using string.toUpper() and string.toLower():

$string = "AbCdEfG";
$upper = string.toUpper( $string ); # returns "ABCDEFG"
$lower = string.toLower( $string ); # returns "abcdefg"

HTML-encode a string for safe rendering in a browser using string.htmlEncode(); use string.htmlDecode() to reverse the operation:

$xss = "<script>alert( 'Hello!' );</script>";
$safe = string.htmlencode( $xss );
# returns "&lt;script&gt;alert( 'Hello!' );&lt;/script&gt;"

%-encode control characters, spaces and '%' in a string using string.escape(); use string.unescape() to reverse the operation:

# returns "Hello%20World!%0D%0A"
$str = string.escape( "Hello World!\r\n" );

You may want to manually replace incidences of "&", "?" and "=" with their %-encoded counterparts if you want to use the result in a URL.

Use Base64 encoding for a more universal encoding scheme: string.base64encode() and string.base64decode() encode and decode strings. Base64 is used for MIME-encoded messages, and in the HTTP Basic Authorization header.

# Encodes a username and password for HTTP BASIC authentication
$enc = string.base64encode( "user:passwd" );
http.setHeader( "Authorization", "Basic " . $enc );

An alternative pair of functions would be string.hexEncode() and string.hexDecode().

Finally, you can encrypt and decrypt strings using a passphrase and the AES cipher:

$encrypted = string.encrypt( $plaintext, $passphrase );
$plaintext = string.decrypt( $encrypted, $passphrase );

Read more: Collected Tech Tips: TrafficScript examples
View full article
The TrafficScript function http.changeSite() makes it easy to redirect clients from one domain to another. You can also use it to reliably redirect clients from http to https (or https to http), or from one document tree on a website (e.g. /products) to another (e.g. /sales).

# Example: Redirect client from www.site.com to www.site.co.uk
if( geo.getCountryCode( request.getRemoteIP() ) == "GB" ) {
   http.changeSite( "www.site.co.uk" );
}

# Example: Force client to https (assuming this rule is attached to an HTTP virtual server)
http.changeSite( "https://" . http.getHostHeader() );

# Example: Move client from one tree to another
$path = http.getPath();
if( string.startsWith( $path, "/products" ) )
   http.changeSite( http.getHostHeader() . "/sales" );

For more fine-grained control of HTTP redirects, you can also use the http.redirect() function.

Read more

Collected Tech Tips: TrafficScript examples
View full article