Pulse Secure vADC

Request rule

The request rule below captures the start time for each request and stores it in a connection data value called "start":

$tm = sys.time.highres();
# Don't store $tm directly; use sprintf to preserve precision
connection.data.set( "start", string.sprintf( "%f", $tm ) );

Response rule

The following response rule then tests each response against a threshold, currently set to 6 seconds. A log entry is written to the event log for each response that takes longer than the threshold to complete. Each log entry shows the response time in seconds, the back-end node used and the full URI of the request:

$THRESHOLD = 6;   # Response time in (integer) seconds above
                  # which requests are logged.

$start = connection.data.get( "start" );
$now = sys.time.highres();
$diff = ( $now - $start );

if ( $diff > $THRESHOLD ) {
   $uri = http.getRawURL();
   $node = connection.getNode();
   log.info( "SLOW REQUEST (" . $diff . "s) " . $node . ":" . $uri );
}

The information in the event log is useful for identifying patterns in slow connections. For example, if all the log entries relate to RSS connections, that would point to a problem with the RSS content.

Read more: Collected Tech Tips: TrafficScript examples
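If you want to analyse these entries offline, you can pull them out of the event log from a shell. This is a rough sketch; it assumes the event log is written to $ZEUSHOME/zxtm/log/errors (a common location for software installs - check where your installation writes its event log) and simply tallies the node:URI combinations that appear most often:

# Count the most frequently logged slow node:URI combinations
$ grep 'SLOW REQUEST' $ZEUSHOME/zxtm/log/errors | awk '{print $NF}' | sort | uniq -c | sort -rn | head

This works because the rule above writes the node and URI as the final field of each log line.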
A document to hold useful regular expressions that I have pulled together for various tasks.

RegExr is a great and very handy online tool for checking regular expression matches: RegExr

A regex to validate a password string, ensuring it does not contain dangerous punctuation characters and is no more than 20 characters long. Useful for Stingray Application Firewall form field protection in login pages:

^[^;,{}\[\]\$\%\*\(\)<>:?\\/'"`]{0,20}$

A regex to check that a password has at least one uppercase letter, one lowercase letter, one digit and one punctuation character from the approved list, and is between 8 and 20 characters long:

^(?=.*[A-Z])(?=.*[a-z])(?=.*[\\@^!\.,~-])(?=.*\d)(.{8,20})$

A regex to check that a field contains a valid email address:

^[^@]+@[^@]+\.[^@]+$
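A quick way to sanity-check candidate values against these patterns is GNU grep in PCRE mode (a rough sketch, assuming your grep supports the -P option; the sample password and email address are made up for illustration). Each command prints the input when it matches and nothing when it does not:

# Password complexity check - 'Str0ng.Pass' has upper case, lower case, a digit and an approved punctuation character
$ echo 'Str0ng.Pass' | grep -P '^(?=.*[A-Z])(?=.*[a-z])(?=.*[\\@^!\.,~-])(?=.*\d)(.{8,20})$'

# Email format check
$ echo 'user@example.com' | grep -P '^[^@]+@[^@]+\.[^@]+$'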
In many cases, it is desirable to upgrade a virtual appliance by deploying a virtual appliance at the newer version and importing the old configuration.  For example, the size of the Traffic Manager disk image was increased in version 9.7, and deploying a new virtual appliance lets a customer take advantage of this larger disk.  This article documents the procedure for deploying a new virtual appliance with the old configuration in common scenarios.

These instructions describe how to upgrade and reinstall Traffic Manager appliance instances (either in a cluster or standalone appliances). For instructions on upgrading on other platforms, please refer to Upgrading Traffic Manager.

Upgrading a standalone Virtual Appliance

This process will replace a standalone virtual appliance with another virtual appliance with the same configuration (including migrating network configuration). Note that the Traffic Manager Cloud Getting Started Guide contains instructions for upgrading a standalone EC2 instance from version 9.7 onwards; if upgrading from a version prior to 9.7 and using the Web Application Firewall, those instructions must be followed to correctly back up and restore any firewall configuration.

1. Make a backup of the traffic manager configuration (see section "System > Backups" in the Traffic Manager User Manual), and export it.
2. If you are upgrading from a version prior to 9.7 and are using the Web Application Firewall, back up the Web Application Firewall configuration: log on to a command line, run /opt/zeus/stop-zeus, and copy /opt/zeus/zeusafm/current/var/lib/config.db off the appliance (a consolidated command sketch appears at the end of this article).
3. Shut down the original appliance.
4. Deploy a new appliance with the same network interfaces as the original.
5. If you backed up the application firewall configuration earlier, restore it onto the new appliance before you restore the traffic manager configuration: copy the config.db file to /opt/zeus/stingrayafm/current/var/lib/config.db (overwriting the original), and check that the owner of config.db is root and the mode is 0644.
6. Import and restore the traffic manager configuration via the UI.
7. If you see application firewall errors, use the Diagnose page to automatically fix any configuration errors, then reset the Traffic Manager software.

Upgrading a cluster of Virtual Appliances (except Amazon EC2)

This process will replace the appliances in the cluster, one at a time, maintaining the same IP addresses. As the cluster will be reduced by one at points in the upgrade process, you should ensure that this is carried out at a time when the cluster is otherwise healthy, and that of the n appliances in the cluster, the load can be handled by (n-1) appliances.

1. Before beginning the process, ensure that any cluster errors have been resolved.
2. Nominate the appliance which will be the last to be upgraded (call it the final appliance).  When any of the other machines needs to be removed from the cluster, it should be done using the UI on this appliance, and when a hostname and port are required to join the cluster, this appliance's hostname should be used.
3. If you are using the Web Application Firewall, first ensure that vWAF on the final appliance in the cluster is upgraded to the most recent version, using the vWAF updater.
4. Choose an appliance to be upgraded, and remove the machine from the cluster:
   - If it is not the final appliance (nominated in step 2), this should be done via the UI on the final appliance.
   - If it is the final appliance, the UI on any other machine may be used.
5. Make a backup of the traffic manager configuration (System > Backups) on the appliance being upgraded, and export the backup.  This backup only contains the machine-specific information for that appliance (networking configuration, etc.).
6. Shut down the appliance, and deploy a new appliance at the new version.  When deploying, it needs to be given the identical hostname to the machine it is replacing.
7. Log on to the admin UI of the new appliance, and import and restore the backup from step 5.  If you are using the Web Application Firewall, accessing the Application Firewall tab in the UI will fail, and there will be an error on the Diagnose page and an 'Update Configuration' button. Click the Update Configuration button once, then wait for the error to clear.  The configuration is now correct, but the admin server still needs to be restarted to pick it up:

   # $ZEUSHOME/admin/rc restart

8. Now upgrade the application firewall on the new appliance to the latest version.
9. Join the appliance into the cluster.  For all appliances except the final appliance, you must not select any of the auto-detected existing clusters; instead, manually specify the hostname and port of the final appliance.  When you are upgrading the final appliance, select the auto-detected existing cluster entry, which should now list all the other cluster peers.

   If you are using the Web Application Firewall, there may be an issue where the configuration on the new machine has not synced the vWAF configuration from the old machine, and clicking the 'Update Application Firewall Cluster Status' button on the Diagnose page does not fix the problem. If this happens, first get the clusterPwd from the final appliance:

   # grep clusterPwd /opt/zeus/zxtm/conf/zeusafm.conf
   clusterPwd = <your cluster pwd>

   On the new appliance, edit /opt/zeus/zxtm/conf/zeusafm.conf (with e.g. nano or vi), and replace the clusterPwd with the final appliance's clusterPwd. The moment that file is saved, vWAF should be restarted, and the configuration should sync to the new machine correctly.

   Once a cluster contains multiple versions, configuration changes must not be made until the upgrade has been completed, and 'Cluster conflict' errors are expected until the end of the process.
10. Repeat steps 4-9 until all appliances have been upgraded.

Upgrading a cluster of STM EC2 appliances

Because EC2 licenses are not tied to the IP address, it is recommended that new EC2 instances are deployed into a cluster before removing old instances.  This ensures that the capacity of the cluster is not reduced during the upgrade process.  This process is documented in the "Creating Traffic Manager Instances on Amazon EC2" chapter in the Traffic Manager Cloud Getting Started Guide.  The clusterPwd may also need to be fixed as above.
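As a consolidated view of the application firewall backup and restore commands referenced in the procedures above, here is a rough shell sketch. The paths are the ones quoted above; backup-host and the admin user are placeholders for wherever you choose to keep the copy:

# On the old appliance, before shutting it down
/opt/zeus/stop-zeus
scp /opt/zeus/zeusafm/current/var/lib/config.db admin@backup-host:/tmp/config.db

# On the new appliance, before restoring the traffic manager configuration
scp admin@backup-host:/tmp/config.db /opt/zeus/stingrayafm/current/var/lib/config.db
chown root:root /opt/zeus/stingrayafm/current/var/lib/config.db
chmod 0644 /opt/zeus/stingrayafm/current/var/lib/config.db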
This article uses the libDNS.rts trafficscript library as described in libDNS.rts: Interrogating and managing DNS traffic in Stingray.

In this example, we intercept DNS requests and respond directly for known A records.

The request rule

import libDNS.rts as dns;

# Map domain names to lists of IP addresses they should resolve to
$ipAddresses = [
   "dev1.ha.company.internal." => [ "10.1.1.1", "10.2.1.1" ],
   "dev2.ha.company.internal." => [ "10.1.1.2", "10.2.1.2" ]
];

$packet = dns.convertRawDataToObject( request.get(), "udp" );

# Ignore unparsable packets and query responses to avoid
# attacks like the one described in CVE-2004-0789.
if( hash.count( $packet ) == 0 || $packet["qr"] == "1" ) {
   break;
}

$host = $packet["question"]["host"];

if( hash.contains( $ipAddresses, $host )) {
   foreach( $ip in $ipAddresses[$host] ) {
      $packet = dns.addResponse( $packet, "answer", $host, $ip, "A", "IN", "60", [] );
   }
   $packet["aa"] = "1";       # Make the answer authoritative
} else {
   $packet["rcode"] = "0011"; # Set NXDOMAIN error
}

$packet["qr"] = "1";          # Change the packet to a response
$packet["ra"] = "1";          # Pretend that we support recursion

request.sendResponse( dns.convertObjectToRawData( $packet, "udp" ));
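To check the rule from a client, query the DNS virtual server directly with dig (a sketch; replace 192.0.2.53 with the IP address your DNS virtual server is listening on):

# A name in the map should return both configured A records with the 'aa' flag set
$ dig @192.0.2.53 dev1.ha.company.internal A

# Any other name should come back with status NXDOMAIN
$ dig @192.0.2.53 unknown.ha.company.internal A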
A user commented that Stingray Traffic Manager sometimes adds a cookie named 'X-Mapping-SOMERANDOMDATA' to an HTTP response, and wondered what the purpose of this cookie was, and whether it constituted a privacy or security risk.

Transparent Session Affinity

This is the cookie used by Stingray's 'Transparent Session Affinity' persistence class.

Transparent session affinity inserts cookies into the HTTP response to track sessions. This is generally the most appropriate method for HTTP and SSL-decrypted HTTPS traffic, because it does not require the nodes to set any cookies in their responses.

The persistence class adds a cookie to the HTTP response that identifies the name of the session persistence class and the chosen back-end node:

Set-Cookie: X-Mapping-hglpomgk=4A3A3083379D97CE4177670FEED6E830; path=/

When subsequent requests in that session are processed and the same session persistence class is invoked, it inspects the requests to determine if the named cookie exists. If it does, the persistence class inspects the value of the cookie to determine the node to use.

The unique identifier in the cookie name is a hashed version of the name of the session persistence class (there may be multiple independent session persistence rules in use). When the traffic manager processes a request, it can then identify the correct cookie for the active session persistence class.

The value of the cookie is a hashed version of the name of the selected node in the cluster. It is non-reversible by an external party. The value identifies which server the session should be persisted to. There is no personally-identifiable information in the cookie. Two independent users who access the service, are managed by the same session persistence class and are routed to the same back-end server will be assigned the same cookie name and value.
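You can observe the cookie from the command line with curl (a sketch; replace www.example.com with the hostname of a virtual server that uses a transparent session affinity persistence class):

# Dump the response headers and show only the mapping cookie
$ curl -s -D - -o /dev/null http://www.example.com/ | grep -i '^Set-Cookie: X-Mapping'

Repeating the request while presenting the returned cookie should keep the session pinned to the same back-end node.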
This document describes some operating system tunables you may wish to apply to a production Stingray Traffic Manager instance.  Note that the kernel tunables only apply to Stingray Traffic Manager software installed on a customer-provided Linux instance; they do not apply to the Stingray Traffic Manager Virtual Appliance or Cloud instances.

Consider the tuning techniques in this document when:

Running Stingray on a severely-constrained hardware platform, or where Stingray should not seek to use all available resources;
Running in a performance-critical environment;
The Stingray host appears to be overloaded (excessive CPU or memory usage);
Running with very specific traffic types, for example, large video downloads or heavy use of UDP;
Any time you see unexpected errors in the Stingray event log or the operating system syslog that relate to resource starvation, dropped connections or performance problems.

For more information on performance tuning, start with the Tuning Stingray Traffic Manager article.

Basic Kernel and Operating System tuning

Most modern Linux distributions have sufficiently large defaults, and many tables are autosized and growable, so it is often not necessary to change tunings.  The values below are recommended for typical deployments on a medium-to-large server (8 cores, 4 GB RAM).

Note: Tech tip: How to apply kernel tunings on Linux

File descriptors

# echo 2097152 > /proc/sys/fs/file-max

Set a minimum of one million file descriptors unless resources are seriously constrained.  See also the Stingray setting maxfds below.

Ephemeral port range

# echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range
# echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout

Each TCP and UDP connection from Stingray to a back-end server consumes an ephemeral port, and that port is retained for the 'fin_timeout' period once the connection is closed.  If back-end connections are frequently created and closed, it's possible to exhaust the supply of ephemeral ports. Increase the port range to the maximum (as above) and reduce the fin_timeout to 30 seconds if necessary.

SYN Cookies

# echo 1 > /proc/sys/net/ipv4/tcp_syncookies

SYN cookies should be enabled on a production system.  The Linux kernel will process connections normally until the backlog grows, at which point it will use SYN cookies rather than storing local state.  SYN cookies are an effective protection against SYN floods, one of the most common DoS attacks against a server.

If you are seeking a stable test configuration as a basis for other tuning, you should disable SYN cookies. Increase the size of net/ipv4/tcp_max_syn_backlog if you encounter dropped connection attempts.

Request backlog

# echo 1024 > /proc/sys/net/core/somaxconn

The request backlog contains TCP connections that are established (the 3-way handshake is complete) but have not been accepted by the listening socket (Stingray).  See also the Stingray tunable 'listen_queue_size'.  Restart the Stingray software after changing this value.

If the listen queue fills up because Stingray does not accept connections sufficiently quickly, the kernel will quietly ignore additional connection attempts.  Clients will then back off (they assume packet loss has occurred) before retrying the connection.
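The echo commands above take effect immediately but are lost on reboot. One common way to make them persistent (a sketch, assuming a distribution that reads /etc/sysctl.conf at boot) is to add equivalent entries there and reload them with sysctl:

# /etc/sysctl.conf (excerpt)
fs.file-max = 2097152
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_syncookies = 1
net.core.somaxconn = 1024

Then apply the file without rebooting:

# sysctl -p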
Advanced kernel and operating system tuning

In general, it's rarely necessary to further tune Linux kernel internals because the default values that are selected on a normal-to-high-memory system are sufficient for the vast majority of Stingray deployments, and most kernel tables will automatically resize if necessary.  Any problems will be reported in the kernel logs; dmesg is the quickest and most reliable way to check the logs on a live system.

Packet queues

In 10 GbE environments, you should consider increasing the size of the input queue:

# echo 5000 > /proc/sys/net/core/netdev_max_backlog

TCP TIME_WAIT tuning

TCP connections reside in the TIME_WAIT state in the kernel once they are closed.  TIME_WAIT allows the server to time-out connections it has closed in a clean fashion.

If you see the error "TCP: time wait bucket table overflow", consider increasing the size of the table used to store TIME_WAIT connections:

# echo 7200000 > /proc/sys/net/ipv4/tcp_max_tw_buckets

TCP slow start and window sizes

In earlier Linux kernels (pre-2.6.39), the initial TCP window size was very small.  The impact of a small initial window size is that peers communicating over a high-latency network will take a long time (several seconds or more) to scale the window to utilize the full bandwidth available - often the connection will complete (albeit slowly) before an efficient window size has been negotiated.

The 2.6.39 kernel increases the default initial window size from 2 to 10.  If necessary, you can tune it manually:

# ip route change default via 192.168.1.1 dev eth0 proto static initcwnd 10

If a TCP connection stalls, even briefly, the kernel may reduce the TCP window size significantly in an attempt to respond to congestion.  Many commentators have suggested that this behavior is not necessary, and this "slow start" behavior should be disabled:

# echo 0 > /proc/sys/net/ipv4/tcp_slow_start_after_idle

TCP options for Spirent load generators

If you are using older Spirent test kit, you may need to set the following tunables to work around optimizations in their TCP stack:

# echo 0 > /proc/sys/net/ipv4/tcp_timestamps
# echo 0 > /proc/sys/net/ipv4/tcp_window_scaling

[Note: See attachments for the above changes in an easy-to-run shell script]

irqbalance

Interrupts (IRQs) are wake-up calls to the CPU when new network traffic arrives. The CPU is interrupted and diverted to handle the new network data. Most NIC drivers will buffer interrupts and distribute them as efficiently as possible.  When running on a machine with multiple CPUs/cores, interrupts should be distributed across cores roughly evenly; otherwise, one CPU can become the bottleneck under high network traffic.

The general-purpose approach in Linux is to deploy irqbalance, which is a standard package on most major Linux distributions.  Under extremely high interrupt load, you may see one or more ksoftirqd processes exhibiting high CPU usage.  In this case, you should configure your network driver to use multiple interrupt queues (if supported) and then manually map those queues to one or more CPUs using SMP affinity.

Receive-Side Scaling (RSS)

Modern network cards can maintain multiple receive queues. Packets within a particular TCP connection can be pinned to a single receive queue, and each queue has its own interrupt.  You can map interrupts to CPU cores to control which core each packet is delivered to.
This affinity delivers better performance by distributing traffic evenly across cores and by improving connection locality (a TCP connection is processed by a single core, improving CPU affinity).

For optimal performance, you should:

Allow the Stingray software to auto-size itself to run one process per CPU core (two when using hyperthreading), i.e. do not modify the num_children configurable.
Configure the network driver to create as many queues as you have cores, and verify the IRQs that the driver will raise per queue by checking /proc/interrupts.
Map each queue interrupt to one core using /proc/irq/<irq-number>/smp_affinity (see the sketch below).

The precise steps are specific to the network card and drivers you have selected. This document from the Linux Kernel Source Tree gives a good overview, and you should refer to the technical documentation provided by your network card vendor.

[ Updated by Aidan Clarke to include a shell script to make it easier to deploy the changes above ]
[ Updated by Aidan Clarke to update the link from the old Google Code page to the new repository in the Linux Kernel Source Tree, after feedback from Rick Henderson about an outdated link ]
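As a minimal sketch of that last step, suppose /proc/interrupts shows the driver raising IRQ 40 for receive queue 0 and IRQ 41 for receive queue 1 (these IRQ numbers are examples only; check your own system). The value written to smp_affinity is a hexadecimal CPU bitmask (1 = core 0, 2 = core 1, 4 = core 2, and so on), so the following pins each queue's interrupt to its own core:

# echo 1 > /proc/irq/40/smp_affinity
# echo 2 > /proc/irq/41/smp_affinity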
Using Stingray Traffic Manager to load balance a pool of LDAP servers for High Availability is a fairly simple process.  Here are the steps:

Start up the Manage a new service wizard.  This is located in the top right corner of the Stingray Traffic Manager web interface, under the Wizards drop-down.
In step 2 of the wizard, set the Protocol to LDAP.  The Port will automatically be set to 389, the default LDAP port.  Give the service a Name.
In step 3, add in the hostnames or IP addresses of each of your LDAP servers.

At this point a virtual server and pool will be created.  Before it is usable, a few additional changes may be made:

Change the Load Balancing algorithm of the pool to Least Connections.
Create a new Session Persistence class of type IP-based persistence (Catalogs -> Persistence) and assign it to the pool.
Create a Traffic IP Group (Services -> Traffic IP Groups) and assign it to the virtual server.  The Traffic IP Group is the IP address LDAP clients will connect to.

The final step is to install the LDAP Health Monitor.  The LDAP Health Monitor is an External Program Monitor that binds to the LDAP server, submits an LDAP query, and checks for a response.  Instructions to install the monitor are in the linked page.
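Once the Traffic IP Group is in place, you can verify the full client path with an anonymous search against the traffic IP (a sketch, assuming the OpenLDAP client tools are installed; 192.0.2.10 and dc=example,dc=com are placeholders for your own traffic IP address and base DN):

$ ldapsearch -x -H ldap://192.0.2.10:389 -s base -b "dc=example,dc=com"

A successful result confirms that the virtual server, pool and persistence class are passing LDAP traffic end to end.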
This is an update to the "Simply WURFL" article written in 2009.  Since that time, some of the underlying libraries have changed, especially with regard to logging.

What is WURFL?  WURFL stands for Wireless Universal Resource FiLe.  From the WURFL webpage: "WURFL is a Device Description Repository (DDR), i.e. a framework that enables applications to map HTTP requests to a description of the capability of the mobile device that requests the page."  WURFL is licensed and maintained by ScientiaMobile, Inc. and includes a Java API, so it can be used with the Stingray Traffic Manager.  It is up to the user to make sure they comply with the ScientiaMobile WURFL license.  You do not need to know Java to get these examples working, since all the necessary class and jar files are either attached to this article or available at the links below, but if you do want to modify the source code, the source files are also attached.  To get started with WURFL you first need to ensure your Stingray Traffic Manager has working Java support and then download the following items:

wurfl Java API: This code has been tested with version 1.4.4.3.  You will need wurfl-1.4.4.3.jar.
wurfl: The WURFL repository, wurfl.zip, which can be used as is or extracted as wurfl.xml.  This code has been tested with version 2.3.4.
commons-collections: Java interfaces, implementations and utilities.  This code has been tested with version 3.2.1.  You will need commons-collections-3.2.1.jar.
commons-lang: Helper utilities.  This code has been tested with version 3.1.  You will need commons-lang3-3.1.jar.
slf4j: Logging framework.  This code has been tested with version 1.7.5.  You will need slf4j-api-1.7.5.jar and either slf4j-noop-1.7.5.jar or slf4j-simple-1.7.5.jar, depending on what you want to see with regards to logging messages.  slf4j-noop-1.7.5.jar will cause all messages to be suppressed, while slf4j-simple-1.7.5.jar will cause all messages to appear in the Stingray event log as warnings.

Upload the files specified using the Catalogs > Java UI page on your Stingray Traffic Manager. Now you're all set to experiment with the following examples.

The first sample servlet is a very simple use-case of WURFL, useful as a base for your own servlet or for debugging. The intention is for it to introduce the WURFL API as it fits within the framework of Stingray Java Extensions. WURFL is typically configured using the ConfigListener model, but Stingray doesn't go as far as implementing all the nuts and bolts required by full web applications. Our WURFL servlet must perform the required initialization itself, so an init method has been implemented that sets up a WURFLManager. As much work as possible is done at servlet initialization time. Then all the doGet method needs to do is check the request against the pre-initialised WURFLManager.

The source code and the compiled class for this example, StingrayWURFLInfoServlet, are attached to this article.  To compile the code yourself on the Stingray instance, after uploading the specified jar files, you can upload the source file as well and then, from the $ZEUSHOME/zxtm/conf/jars directory, execute the following command:

javac -cp "$ZEUSHOME/zxtm/lib/*:$ZEUSHOME/zxtm/conf/jars/*" StingrayWURFLInfoServlet.java

To compile it on a different machine, you will need the following jar files that were uploaded to Stingray: commons-collections-3.2.1.jar, commons-lang3-3.1.jar and wurfl-1.4.4.3.jar, as well as servlet.jar and zxtm-servlet.jar from Stingray.
These are available via links in the Stingray UI under Catalogs > Java.  Please see the Stingray Java Development Guide for more information.

To get the servlet working, follow these steps:

Upload the StingrayWURFLInfoServlet.class file using the Stingray Catalogs > Java UI page, leaving the "Automatically create TrafficScript rule" checkbox ticked.
If you want devices that are not included in the standard WURFL repository, like desktop browsers, to be detected properly, a patches file can be created and uploaded to Stingray using the Catalogs > Java page.  If this file is uploaded, click on the StingrayWURFLInfoServlet link on the UI page and add the parameter "wurfl_patches" with the name of the patches file as its value, e.g. "web_browsers_patch.xml".  For details on creating a patches file see the ScientiaMobile website.
Set up a Virtual Server for testing.  The pool can be set to "discard" since StingrayWURFLInfoServlet will create a response.
Associate the auto-generated StingrayWURFLInfoServlet rule with the test Virtual Server as a request rule.
Visit the virtual server from different browsers on different devices.

The result should be a page showing some general information about your browser at the top, followed by the full table of WURFL capabilities and their values. The following screenshots show the top part of the output using an iPhone and a desktop browser:

Safari on iPhone:

Firefox on Windows:

This is fine as a demo but not particularly useful for a real-world application. What we really want is a module that can export capabilities so that they can be used by TrafficScript and other extensions. We also don't want to specifically be a doGet processor, so we'll modify the servlet along the lines described in the article Writing TrafficScript functions in Java.

There are a lot of capabilities covered by WURFL; the tables in the above screenshots go on for several pages - more than 500 capabilities in all. So we'll make it possible for TrafficScript to specify which capability fields it wants.

This gives us a servlet that could be used from TrafficScript using a rule such as the following example.  This rule extracts a few values and caches them in the Global Associative Array using the user agent as the key, so that subsequent requests for the same user agent don't require another lookup.

sub checkWURFL() {
   $ua = http.getHeader( "User-Agent" );
   if (!string.length($ua)) return;

   $markup = data.get("WURFL" . $ua . "preferred_markup");
   $datarate = data.get("WURFL" . $ua . "max_data_rate");
   $brand = data.get("WURFL" . $ua . "brand_name");
   $cookie = data.get("WURFL" . $ua . "cookie_support");
   $os = data.get("WURFL" . $ua . "device_os");
   $osVersion = data.get("WURFL" . $ua . "device_os_version");
   $isWireless = data.get("WURFL" . $ua . "is_wireless_device");

   if (string.length($markup)) {
      log.info("Returning cached values for User-Agent: " . $ua .
         ", datarate: " . $datarate . ", markup: " . $markup .
         ", brand: " . $brand . ", cookies: " . $cookie .
         ", os: " . $os . ", version: " . $osVersion .
         ", iswireless: " . $isWireless);
      $1 = $markup; $2 = $datarate; $3 = $brand; $4 = $cookie;
      $5 = $os; $6 = $osVersion; $7 = $isWireless;
      return;
   }

   # no cached values for the UA, so run it through WURFL
   java.run("StingrayWURFLServlet", "max_data_rate", "preferred_markup",
      "brand_name", "cookie_support", "device_os", "device_os_version",
      "is_wireless_device");

   $markup = connection.data.get("preferred_markup");
   $datarate = connection.data.get("max_data_rate");
   $brand = connection.data.get("brand_name");
   $cookie = connection.data.get("cookie_support");
   $os = connection.data.get("device_os");
   $osVersion = connection.data.get("device_os_version");
   $isWireless = connection.data.get("is_wireless_device");

   data.set("WURFL" . $ua . "preferred_markup", $markup);
   data.set("WURFL" . $ua . "max_data_rate", $datarate);
   data.set("WURFL" . $ua . "brand_name", $brand);
   data.set("WURFL" . $ua . "cookie_support", $cookie);
   data.set("WURFL" . $ua . "device_os", $os);
   data.set("WURFL" . $ua . "device_os_version", $osVersion);
   data.set("WURFL" . $ua . "is_wireless_device", $isWireless);

   log.info("Returning fresh WURFL values for User-Agent: " . $ua .
      ", datarate: " . $datarate . ", markup: " . $markup .
      ", brand: " . $brand . ", cookies: " . $cookie .
      ", os: " . $os . ", version: " . $osVersion .
      ", iswireless: " . $isWireless);

   $1 = $markup; $2 = $datarate; $3 = $brand; $4 = $cookie;
   $5 = $os; $6 = $osVersion; $7 = $isWireless;
   return;
}

# simple case to test the checkWURFL function
checkWURFL();
$html = "Max Data Rate: " . $2 . "kbps Preferred Markup: " . $1 .
   " Brand: " . $3 . " OS: " . $5 . " Version: " . $6 .
   " Cookie Support: " . $4 . " Is Wireless Device: " . $7;
http.sendResponse("200", "text/html", $html, "");

The corresponding StingrayWURFLServlet source, compiled class file and slf4j-stingray.jar file are attached to this article.  The slf4j-stingray.jar file includes an implementation that directs slf4j messages to the Stingray Event Log; the logging code is documented here: slf4j Logging and Stingray Java Extensions. The slf4j-noop-1.7.5.jar or slf4j-simple-1.7.5.jar file uploaded for the previous example should be deleted.  To control what level of logging is output to the Stingray Event Log, a parameter "log_level" can be set to a value (case insensitive) of "debug", "info", "warn" or "error".  If no value is set, the default is "warn".

Rather than the doGet method implemented in StingrayWURFLInfoServlet, we now have a simpler service method:

public void service(HttpServletRequest request, HttpServletResponse response)
   throws ServletException, IOException
{
   String[] args = (String[])request.getAttribute( "args" );
   if (args == null || args.length == 0) {
      throw new ServletException("error: no arguments supplied");
   }

   Device device = manager.getDeviceForRequest(request);
   Map capabilities = device.getCapabilities();

   for (int i = 0; i < args.length; ++i) {
      String cap = (String)capabilities.get(args[i]);
      if (cap == null) {
         Logger.warn("Java Servlet StingrayWURFLServlet: No capability found matching: " + args[i]);
      } else {
         ((ZXTMServletRequest)request).setConnectionData( args[i], cap );
      }
   }
}

There is much more you could do with the StingrayWURFLServlet Java Extension provided in this article. Think of it as a starting point for developing your own solutions to improve the web browsing experience of your mobile users.
A few examples:

The max_data_rate value retrieved above could be used to reduce image quality or size for people with low-bandwidth devices. This would result in a snappier web browsing experience for these people, as there would be less data for them to retrieve over their slow links.
The preferred_markup value can be used to direct clients to different backend pools based on whether they can handle XHTML, or should be served WML.
The streaming_flv value can be checked to see if the device has Flash video support and can thus be sent to your full bells-and-whistles website. A scaled-down version could be made for those that only have Flash Lite, which is specified by the value of flash_lite_version. Devices that don't support Flash at all (such as the iPhone) can be sent to a plain HTML version of the site, or WML as in the previous bullet point.

Speaking of the iPhone, it doesn't have Flash but its browser does have excellent AJAX support. You know your site is being visited by an iPhone user using the normal iPhone web browser when model_name is "iPhone" and mobile_browser is "Safari". If there are important differences between iPhone OS releases, you can also check model_extra_info or device_os_version for this detail. For AJAX in general there is a whole set of specific properties: ajax_manipulate_css, ajax_manipulate_dom, ajax_support_event_listener and ajax_support_events.

Up-to-date documentation on all the WURFL capabilities can be found on the WURFL website.

Any of these values can also be passed on to your backend nodes, of course. You could add special headers containing the values, a cookie, or a URL argument. You could also cache browser capabilities uniquely to each device with cookies or another method of session tracking, rather than cache the capabilities based solely on the user agent. Then you could offer users the ability to override special mobile device modes.

We would love to hear your ideas and learn how we can help you in this exciting area - the opportunities are practically limitless.
Why write a health monitor in TrafficScript?

The Health Monitoring capabilities (as described in Feature Brief: Health Monitoring in Stingray Traffic Manager) are very comprehensive, and the built-in templates allow you to conduct sophisticated custom dialogues, but sometimes you might wish to resort to a full programming language to implement the tests you need.

Particularly on the Stingray Virtual Appliance, your options can be limited.  There's a minimal Perl interpreter included (see Tech Tip: Running Perl code on the Stingray Virtual Appliance), and you can upload compiled binaries (Writing a custom Stingray Health Monitor in C) and shell scripts.  This article explains how you can use TrafficScript to implement health monitors, and of course with Java Extensions, TrafficScript can 'call out' to a range of third-party libraries as well.

Overview

We'll implement the solution using a custom 'script' health monitor.  This health monitor will probe a virtual server running on the local Stingray (using an HTTP request), and pass it all of the parameters relevant to the health request.  A TrafficScript rule running on the Stingray can perform the appropriate health check and respond with a 'PASS' (200 OK) or 'FAIL' (500 Error) response.

The health monitor script

The health monitor script is straightforward and should not need any customization.  It will take its input from the health monitor configuration.

#!/bin/sh
exec $ZEUSHOME/perl/miniperl -wx $0 ${1+"$@"}
    if 0;

#!/usr/bin/perl
#line 7

BEGIN {
   # Pull in the Stingray (Zeus) libraries for HTTP requests
   unshift @INC, "$ENV{ZEUSHOME}/zxtmadmin/lib/perl", "$ENV{ZEUSHOME}/zxtm/lib/perl";
}

use Zeus::ZXTM::Monitor qw( ParseArguments MonitorWorked MonitorFailed Log );
use Zeus::HTMLUtils qw( make_query_string );
use Zeus::HTTP;

my %args = ParseArguments();

my $url = "http://localhost:$args{vsport}$args{path}?" . make_query_string( %args );
my $http = new Zeus::HTTP( GET => $url );
$http->load();

Log( "HTTP GET for $url returned status: " . $http->code() );

if ( $http->code() == 200 ) {
   MonitorWorked();
} else {
   MonitorFailed( "Monitor failed: " . $http->code() . " " . $http->body() );
}

Upload this to the Monitor Programs of the Extra Files section of the catalog, and then create an "External Program Monitor" based on that script.  You will need to add two more configuration parameters to this health monitor configuration:

vsport: This should be set to the port of the virtual server that will host the TrafficScript test.
path: This is optional - you can use it if you want to run several different health tests from the TrafficScript rule.

Your configuration should look something like this:

The virtual server

Create an HTTP virtual server listening on the appropriate port number (vsport).  You can bind this virtual server to localhost if you want to prevent external clients from accessing it.  The virtual server should use the 'discard' pool - we're going to add a request rule that always sends a response, so there's no need for any backend nodes.

The TrafficScript Rule

The 'business end' of your TrafficScript health monitor resides in the TrafficScript rule.  This rule is invoked every time the health monitor script is run, and it is given the details of the node which is to be checked.
The rule should return a 200 OK HTTP response if the node is OK, and a different response (such as 500 Error) if the node has failed the test.

$path = http.getPath();   # Use 'path' if you would like to publish
                          # several different tests from this rule

$ip = http.getFormParam( "ipaddr" );
$port = http.getFormParam( "port" );
$nodename = http.getFormParam( "node" );

# We're going to test the node $nodename on $ip:$port
#
# Useful functions include:
#   http.request.get/put/post/delete()
#   tcp.connect/read/write/close()
#   auth.query()
#   java.run()

sub Failed( $msg ) {
   http.sendResponse( 500, "text/plain", $msg, "" );
}

# Let's run a simple GET
$req = 'GET / HTTP/1.0
Host: www.riverbed.com

';
$timeout = 1000; # ms
$sock = tcp.connect( $ip, $port, $timeout );
tcp.write( $sock, $req, $timeout );
$resp = tcp.read( $sock, 102400, $timeout );

# Perform whatever tests we want on the response data.
# For example, it should begin with '200 OK'

if ( ! string.startsWith( $resp, "HTTP/1.1 200 OK" ) ) {
   Failed( "Didn't get expected response status" );
}

# All good
http.sendResponse( 200, "text/plain", "", "" );
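Once the rule is associated with the health-check virtual server, you can exercise it by hand with curl, passing the same form parameters the monitor script sends (a sketch; the port, IP address and node name below are placeholders for your own values):

$ curl -i 'http://localhost:8081/?ipaddr=192.168.0.10&port=80&node=webserver1'

A 200 response means the check passed; a 500 response carries the failure message produced by the rule.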
This guide will walk you through the setup to deploy Global Server Load Balancing on Traffic Manager using the Global Load Balancing feature. In this guide, we will be using the "company.com" domain.

DNS Primer and Concept of Operations

This document is designed to be used in conjunction with the Traffic Manager User Guide.  Specifically, this guide assumes that the reader: is familiar with load balancing concepts; has configured local load balancing for the resources requiring Global Load Balancing on their existing Traffic Managers; and has read the "Global Load Balancing" section of the Traffic Manager User Guide, in particular the "DNS Primer" and "About Global Server Load Balancing" sections.

Pre-requisites

You have a DNS sub-domain to use for GLB.  In this example we will be using "glb.company.com" - a sub-domain of "company.com";
You have access to create A records in the glb.company.com (or equivalent) domain; and
You have access to create CNAME records in the company.com (or equivalent) domain.

Design

Our goal in this exercise will be to configure GLB to send users to their geographically closest DC, as pictured in the following diagram:

Design Goal

We will be using an STM setup that looks like this to achieve this goal:

Detailed STM Design

Traffic Manager will present a DNS virtual server in each data centre.  This DNS virtual server will take DNS requests for resources in the "glb.company.com" domain from external DNS servers, will forward the requests to an internal DNS server, and will intelligently filter the records based on the GLB load balancing logic.

In this design, we will use the zone "glb.company.com".  The zone "glb.company.com" will have NS records set to the two Traffic IP addresses presented by vTM for DNS load balancing in each data centre (172.16.10.101 and 172.16.20.101).  This set-up is done in the "company.com" domain zone setup.  You will need to set this up yourself, or get your DNS Administrator to do it.

DNS Zone File Overview

On the DNS server that hosts the "glb.company.com" zone file, we will create two Address (A) records - one for each Web virtual server that the vTMs are hosting in their respective data centres.

Step 0: DNS Zone file set up

Before we can set up GLB on Traffic Manager, we need to set up our DNS zone files so that we can intelligently filter the results.

Create the GLB zone: in our example, we will be using the zone "glb.company.com".  We will configure the "glb.company.com" zone to have two NameServer (NS) records.  Each NS record will be pointed at the Traffic IP address of the DNS Virtual Server as it is configured on vTM.  See the Design section above for details of the IP addresses used in this sample setup.

You will need an A record for each data centre resource you want Traffic Manager to GLB.  In this example, we will have two A records for the DNS host "www.glb.company.com".  On ISC BIND name servers, the zone file will look something like this:

Sample Zone File

;
; BIND data file for glb.company.com
;
$TTL 604800
@       IN      SOA     stm1.glb.company.com. info.glb.company.com. (
                        201303211322    ; Serial
                        7200            ; Refresh
                        120             ; Retry
                        2419200         ; Expire
                        604800 )        ; Default TTL
;
@       IN      NS      stm1.glb.company.com.
@       IN      NS      stm2.glb.company.com.
;
stm1    IN      A       172.16.10.101
stm2    IN      A       172.16.20.101
;
www     IN      A       172.16.10.100
www     IN      A       172.16.20.100

Pre-Deployment testing

Using DNS tools such as DiG or nslookup (do not use ping as a DNS testing tool), make sure that you can query your "glb.company.com" zone and get both the A records returned.  This means the DNS zone file is ready for you to apply your GLB logic.  In the following example, we are using the DiG tool on a Linux client to *directly* query the name servers that the vTM is load balancing, to check that we are being served back two A records for "www.glb.company.com".  We have added comments to the output below, marked with <--(i)--| :

Test Output from DiG

user@localhost$ dig @172.16.10.40 www.glb.company.com A

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.40 www.glb.company.com A
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19013
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com.              IN      A

;; ANSWER SECTION:
www.glb.company.com.    604800     IN      A       172.16.20.100   <--(i)--| HERE ARE THE A RECORDS WE ARE TESTING
www.glb.company.com.    604800     IN      A       172.16.10.100   <--(i)--|

;; AUTHORITY SECTION:
glb.company.com.        604800     IN      NS      stm1.glb.company.com.
glb.company.com.        604800     IN      NS      stm2.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com.   604800     IN      A       172.16.10.101
stm2.glb.company.com.   604800     IN      A       172.16.20.101

;; Query time: 0 msec
;; SERVER: 172.16.10.40#53(172.16.10.40)
;; WHEN: Wed Mar 20 16:39:52 2013
;; MSG SIZE  rcvd: 139

Step 1: GLB Locations

GLB uses locations to help STM understand where things are located.  First we need to create a GLB location for every data centre you need to provide GLB between.  In our example, we will be using two locations, Data Centre 1 and Data Centre 2, named DataCentre-1 and DataCentre-2 respectively.

Creating GLB Locations

Navigate to "Catalogs > Locations > GLB Locations > Create new Location"
Create a GLB location called DataCentre-1
Select the appropriate Geographic Location from the options provided
Click Update Location
Repeat this process for "DataCentre-2" and any other locations you need to set up.

Step 2: Set up the GLB service

First we create a GLB service so that vTM knows how to distribute traffic using the GLB system:

Create GLB Service

Navigate to "Catalogs > GLB Services > Create a new GLB service"
Create your GLB Service.  In this example we will be creating a GLB service with the following settings; you should use settings that match your environment:
   Service Name: GLB_glb.company.com
   Domains: *.glb.company.com
   Add Locations: Select "DataCentre-1" and "DataCentre-2"

Then we enable the GLB service:

Enable the GLB Service

Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Basic Settings"
Set "Enabled" to "Yes"

Next we tell the GLB service which resources are in which location:

Locations and Monitoring

Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Locations and Monitoring"
Add the IP addresses of the resources you will be doing GSLB between into the relevant location.  In my example I have allocated them as follows:
   DataCentre-1: 172.16.10.100
   DataCentre-2: 172.16.20.100
Don't worry about the "Monitors" section just yet; we will come back to it.
Next we will configure the GLB load balancing mechanism:

Load Balancing Method

Navigate to "GLB Services > GLB_glb.company.com > Load Balancing"

By default the load balancing "algorithm" will be set to "Adaptive" with a "Geo Effect" of 50%.  For this set-up we will set the "algorithm" to "Round Robin" while we are testing.

Set GLB Load Balancing Algorithm

Set the "load balancing algorithm" to "Round Robin"

The last step is to bind the GLB service "GLB_glb.company.com" to our DNS virtual server.

Binding the GLB Service

Navigate to "Services > Virtual Servers > vs_GLB_DNS > GLB Services > Add new GLB Service"
Select "GLB_glb.company.com" from the list and click "Add Service"

Step 3: Testing Round Robin

Now that we have GLB applied to the "glb.company.com" zone, we can test GLB in action. Using DNS tools such as DiG or nslookup (again, do not use ping as a DNS testing tool), make sure that you can query against your STM DNS virtual servers and see what happens to requests for "www.glb.company.com". Following is test output from the Linux DiG command; we have added comments marked with <--(i)--| :

Testing

user@localhost$ dig @172.16.10.101 www.glb.company.com

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.101 www.glb.company.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17761
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com.              IN      A

;; ANSWER SECTION:
www.glb.company.com.    60         IN      A       172.16.20.100   <--(i)--| DataCentre-2 response

;; AUTHORITY SECTION:
glb.company.com.        604800     IN      NS      stm1.glb.company.com.
glb.company.com.        604800     IN      NS      stm2.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com.   604800     IN      A       172.16.10.101
stm2.glb.company.com.   604800     IN      A       172.16.20.101

;; Query time: 1 msec
;; SERVER: 172.16.10.101#53(172.16.10.101)
;; WHEN: Thu Mar 21 13:32:27 2013
;; MSG SIZE  rcvd: 123

user@localhost$ dig @172.16.10.101 www.glb.company.com

; <<>> DiG 9.8.1-P1 <<>> @172.16.10.101 www.glb.company.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9098
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.glb.company.com.              IN      A

;; ANSWER SECTION:
www.glb.company.com.    60         IN      A       172.16.10.100   <--(i)--| DataCentre-1 response

;; AUTHORITY SECTION:
glb.company.com.        604800     IN      NS      stm2.glb.company.com.
glb.company.com.        604800     IN      NS      stm1.glb.company.com.

;; ADDITIONAL SECTION:
stm1.glb.company.com.   604800     IN      A       172.16.10.101
stm2.glb.company.com.   604800     IN      A       172.16.20.101

;; Query time: 8 msec
;; SERVER: 172.16.10.101#53(172.16.10.101)
;; WHEN: Thu Mar 21 13:32:27 2013
;; MSG SIZE  rcvd: 123

Step 4: GLB Health Monitors

Now that we have GLB running in round robin mode, the next thing to do is to set up HTTP health monitors, so that GLB can know whether the application in each DC is available before we send customers to that data centre for access to the website:

Create GLB Health Monitors

Navigate to "Catalogs > Monitors > Monitors Catalog > Create new monitor"
Fill out the form with the following variables:
   Name:   GLB_mon_www_AU
   Type:   HTTP monitor
   Scope:  GLB/Pool
   IP or Hostname to monitor: 172.16.10.100:80
Repeat for the other data centre:
   Name:   GLB_mon_www_US
   Type:   HTTP monitor
   Scope:  GLB/Pool
   IP or Hostname to monitor: 172.16.20.100:80

Navigate to "Catalogs > GLB Services > GLB_glb.company.com > Locations and Monitoring"
In DataCentre-1, in the field labelled "Add new monitor to the list", select "GLB_mon_www_AU" and click Update.
In DataCentre-2, in the field labelled "Add new monitor to the list", select "GLB_mon_www_US" and click Update.

Step 5: Activate your preferred GLB load balancing logic

Now that you have GLB set up and you can detect application failures in each data centre, you can turn on the GLB load balancing algorithm that is right for your application.  You can choose between:

GLB Load Balancing Methods

Load
Geo
Round Robin
Adaptive
Weighted Random
Active-Passive

The online help has a good description of each of these load balancing methods.  You should take care to read it and select the one most appropriate for your business requirements and environment.

Step 6: Test everything

Once you have your GLB up and running, it is important to test it for all the failure scenarios you want it to cover. Remember: failover that has not been tested is not failover...

Following is a test matrix that you can use to check the essentials:

Test #   Condition                                                      Failure detected by / logic implemented by   GLB responded as designed
1        All pool members in DataCentre-1 not available                 GLB Health Monitor                           Yes / No
2        All pool members in DataCentre-2 not available                 GLB Health Monitor                           Yes / No
3        Failure of STM1                                                GLB Health Monitor on STM2                   Yes / No
4        Failure of STM2                                                GLB Health Monitor on STM1                   Yes / No
5        Customers are sent to the geographically correct DataCentre    GLB Load Balancing Mechanism                 Yes / No

Notes on testing GLB: the reason we instruct you to use DiG or nslookup in this guide for testing your DNS, rather than a tool that also does DNS resolution such as ping, is that DiG and nslookup bypass your local host's DNS cache.  Obviously, cached DNS records will prevent you from seeing changes in the status of your GLB while the cache entries are valid.

The Final Step - Create your CNAME

Now that you have a working GLB entry for "www.glb.company.com", all that is left to do is to create or change the record for the real site "www.company.com" to be a CNAME for "www.glb.company.com".

Sample Zone File

;
; BIND data file for company.com
;
$TTL 604800
@       IN      SOA     ns1.company.com. info.company.com. (
                        201303211312    ; Serial
                        7200            ; Refresh
                        120             ; Retry
                        2419200         ; Expire
                        604800 )        ; Default TTL
;
@       IN      NS      ns1.company.com.
;
; Here is our CNAME
www     IN      CNAME   www.glb.company.com.
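Once the CNAME is in place, you can confirm the full resolution chain from a client (a sketch; this queries your default resolver, so cached records may take up to a TTL to expire):

$ dig www.company.com +noall +answer

The answer section should show www.company.com resolving via the CNAME to www.glb.company.com, followed by the A record selected by the GLB service.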
Stingray provides a module to help you write custom monitors in Perl. However, Perl is not the only way to write a custom monitor - you can use any programming or scripting language that is supported on the system hosting the Stingray software. And if you want to avoid installing a large number of libraries, CPAN modules (or other scripting languages), you can write your custom monitors in C.

In this article I walk through a few examples of simple custom monitors written in C that query a MySQL database.  Hopefully you will be able to use the example code as the basis of your own monitors.

MySQL

The purpose of this article is to document using C to write a custom monitor. I chose to write monitors for MySQL because it is a commonly used server with a well-known (and simple) C API. Hopefully the concepts (and some of the code here) will be transferable to other custom monitors. And, of course, someone who is looking to provide a more thorough monitor for their MySQL pool should have a good idea of where to start after reading this. I am using MySQL v4.1 (ver 14.7). Depending on backwards compatibility, the code samples should work with more recent releases of MySQL.

A ping monitor

Let's start with a MySQL ping monitor. mysql_ping checks whether the connection to the server is working. If the connection has gone down, an automatic reconnection is attempted. It should be a little more thorough than the standard built-in ping monitor, plus it has the advantage of keeping the connection to the server open.

mysql_ping.c

#include <stdlib.h>
#include <stdio.h>
#include <mysql/mysql.h>

MYSQL mysql;

int main(void)
{
   mysql_init(&mysql);
   mysql_options(&mysql, MYSQL_READ_DEFAULT_GROUP, MONITORNAME);

   /* connect */
   if (!mysql_real_connect(&mysql, HOST, USER, PASSWD, DATABASE, 0, NULL, 0)) {
      fprintf(stderr, "Failed to connect to database: Error: %s\n",
              mysql_error(&mysql));
      exit(EXIT_FAILURE);
   }

   /* try a ping */
   if (mysql_ping(&mysql)) {
      fprintf(stderr, "Cannot ping database: Error: %s\n",
              mysql_error(&mysql));
      exit(EXIT_FAILURE);
   }

   exit(EXIT_SUCCESS);
}

Compiling the monitor

A zip archive of all the C code and Makefiles to build these monitors is included at the end of this article. The database details are hard-coded into this monitor, so you will need to change them before you build it. Edit the Makefile and enter your hostname, username, password and the name of the database you want to ping. Type make. If all is well you should now have a monitor called mysql_ping.

Problems? If it didn't build you will probably need to install the mysqlclient-dev libraries. On my Debian/Ubuntu system, this was sufficient:

$ sudo apt-get install libmysqlclient-dev

Of course, your mileage may differ. Once you've installed them, try typing mysql_config --cflags. This should return the include directory for the MySQL client library.

Testing the monitor

Test the monitor by running it from the shell:

$ ./mysql_ping

If all is well you should see nothing! If there is any problem connecting with the database (or if any of your settings are wrong), you should get an appropriate error message.

Installing the monitor in Stingray

Copy the built monitor to your Stingray and place it in your monitors directory $ZEUSHOME/zxtm/monitors. Ensure it has the correct user, group and executable permissions.  You can also upload it via the Extra Files section of the catalog. In the Stingray admin interface, create a virtual server and pool to manage MySQL traffic.
MySQL is a generic server-first protocol that uses port 3306. Now you can create your custom monitor. In the Monitors Catalog create a new external program monitor. Enter mysql_ping in the Program box. Go to your MySQL pool and add the monitor to it. (It's easy to forget this step.) If all is well nothing should happen. To see something more interesting, try stopping the MySQL server (or recompiling the monitor with some bogus settings). You can also edit the settings and turn verbose mode on. You should then see some output from the monitor.

A more sophisticated monitor

A MySQL ping monitor is only mildly more useful than a generic ping monitor. It would be much more useful if we could connect to the server, run a query and check the result. This monitor does exactly that.

To demonstrate this I have created a database with a simple table that stores name-value pairs. I have a table called vars that looks like this:

+-------+-------------+------+-----+---------+-------+
| Field | Type        | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| name  | varchar(32) |      |     |         |       |
| value | tinytext    | YES  |     | NULL    |       |
+-------+-------------+------+-----+---------+-------+

... and I have an entry in vars like this:

+-------------------+-------+
| name              | value |
+-------------------+-------+
| database_is_happy | yes   |
+-------------------+-------+

The monitor queries this database table to check that this value is correct, i.e.

SELECT value FROM vars where name='database_is_happy'

If it can't connect, ping or query the database, or if the value returned by the query isn't 'yes', the monitor fails.

#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <mysql/mysql.h>

MYSQL mysql;
MYSQL_RES *res;
MYSQL_ROW row;

int main(void)
{
   mysql_init(&mysql);
   mysql_options(&mysql, MYSQL_READ_DEFAULT_GROUP, MONITORNAME);

   /* connect */
   if (!mysql_real_connect(&mysql, HOST, USER, PASSWD, DATABASE, 0, NULL, 0)) {
      fprintf(stderr, "Cannot connect to database: Error: %s.\n",
              mysql_error(&mysql));
      exit(EXIT_FAILURE);
   }

   /* try a ping */
   if (mysql_ping(&mysql)) {
      fprintf(stderr, "Cannot ping database: Error: %s.\n",
              mysql_error(&mysql));
      exit(EXIT_FAILURE);
   }

   /* try a query */
   if (mysql_query(&mysql, "SELECT value FROM vars where name='database_is_happy'")) {
      fprintf(stderr, "Cannot query database: Error: %s.\n",
              mysql_error(&mysql));
      exit(EXIT_FAILURE);
   }

   /* get the result */
   if (!(res = mysql_store_result(&mysql))) {
      fprintf(stderr, "Cannot store database result: Error: %s.\n",
              mysql_error(&mysql));
      exit(EXIT_FAILURE);
   }

   /* check result */
   row = mysql_fetch_row(res);
   if (strcmp(row[0], "yes")) {
      fprintf(stderr, "Unexpected data (\'%s\') returned by database.\n",
              row[0]);
      exit(EXIT_FAILURE);
   }

   exit(EXIT_SUCCESS);
}

Now if you change the entry in the table to 'no' you should see your pool fail. And then if you change it back, everything should go green once again.

Full example with optional parameters

It would be even more useful if we didn't have to compile our settings into the monitor. Stingray will pass arguments to your monitor, plus it will provide it with the IP address and port number to test (along with other options). In this example I have used getopt() to parse the options and use them to decide which MySQL server to connect to.
Plus the query and expected result can be passed too. #include <stdlib.h> #include <stdio.h> #include <string.h> #include <getopt.h> #include <mysql/mysql.h> /* options sent by Stingray */ char* ipaddr = ""; /* load defaults from Makefile */ /* can be overrided by options */ char* host = HOST; char* user = USER; char* passwd = PASSWD; char* database = DATABASE; char* query = QUERY; char* result = RESULT; void parseArguments(int argc, char **argv) {   int c;     while (1) {         static struct option long_options[] =     {        {"verbose",       no_argument,        0,  'v'},        {"ipaddr",        required_argument,  0,  'i'},        {"port",          required_argument,  0,  'o'},        {"failures_left", required_argument,  0,  'f'},        {"host",          required_argument,  0,  'h'},        {"user",          required_argument,  0,  'u'},        {"passwd",        required_argument,  0,  'p'},        {"database",      required_argument,  0,  'd'},        {"query",         required_argument,  0,  'q'},        {"result",        required_argument,  0,  'r'},        {0, 0, 0, 0}     };         /* getopt_long stores the option index here. */     int option_index = 0;         c = getopt_long (argc, argv, "h:u:p:d:q:r:",       long_options, &option_index);         /* Detect the end of the options. */     if (c == -1)       break;         switch (c) {         case 'o':     case 'f':     case 'v':       /* ignore */       break;     case 'i':       ipaddr = optarg;       break;     case 'h':       host = optarg;       break;           case 'u':       user = optarg;       break;           case 'p':       passwd = optarg;       break;           case 'd':       database = optarg;       break;           case 'q':       query = optarg;       break;           case 'r':       result = optarg;       break;           default:            exit (EXIT_FAILURE);     }   } } MYSQL mysql; MYSQL_RES *res; MYSQL_ROW row; int main(int argc, char **argv) {   parseArguments(argc,argv);     if (*ipaddr) /* ipaddr overrides host when live */     host = ipaddr;   mysql_init(&mysql);     mysql_options(&mysql,MYSQL_READ_DEFAULT_GROUP,MONITORNAME);     /* connect */   if (!mysql_real_connect(&mysql,host,user,passwd,database,0,NULL,0)) {     fprintf(stderr, "Cannot connect to database: Error: %s.\n",      mysql_error(&mysql));     exit(EXIT_FAILURE);   }     /* try a ping -- ping zero if ok */   if (mysql_ping(&mysql)) {     fprintf(stderr, "Cannot ping database: Error: %s.\n",      mysql_error(&mysql));     exit(EXIT_FAILURE);   }     /* try a query */   if (mysql_query(&mysql,query))  {     fprintf(stderr, "Cannot query database: Error: %s.\n",      mysql_error(&mysql));     exit(EXIT_FAILURE);   }     /* get the result */   if (!(res = mysql_store_result(&mysql))) {     fprintf(stderr, "Cannot store database result: Error: %s.\n",      mysql_error(&mysql));     exit(EXIT_FAILURE);   }     /* check result */     row = mysql_fetch_row(res);     if(strcmp(row[0],result)) {     fprintf(stderr, "Unexpected data (\'%s\') returned by database.\n",      row[0]);     exit(EXIT_FAILURE);   }     exit(EXIT_SUCCESS); } To test it, create an external program monitor with these arguments:- ... and these settings (obviously you should enter the full query: select value from vars where name='database_is_happy') You can, of course, pass: username, password, hostname and database this way too. 
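Before creating the external program monitor in Stingray, you can exercise the parameterized build from the shell in much the same way Stingray will invoke it. The sketch below is only illustrative: the binary name mysql_query and the connection details are placeholders, while the long option names are simply those declared in the getopt_long() table above (Stingray itself supplies --ipaddr, --port, --failures_left and --verbose; the remaining options come from the arguments you configure on the monitor):
$ ./mysql_query --ipaddr 192.0.2.10 --port 3306 --user monitor --passwd secret \
    --database mydb --query "SELECT value FROM vars where name='database_is_happy'" --result yes
$ echo $?   # 0 means the check passed; any non-zero exit (plus a message on stderr) means it failed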
Static Builds To avoid installing the MySQL client libraries onto your Stingray (or if you are using a Stingray Virtual Appliance) you will need to make static builds of these monitors. The Makefiles included have a make static target. However, please note that this just blindly builds everything statically, including libc, libz, etc., which are already present on your Stingray. This may not be the best idea for your setup - especially if you can build on an identical architecture - and may produce linker warnings as recent versions of libc complain, e.g.: /usr/lib/libmysqlclient.a(libmysql.o)(.text+0xb2): In function `mysql_server_init':: warning: Using 'getservbyname' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking For a production system it is better to statically link only the libraries that are not present on the Stingray (in this case just libmysqlclient) and link the others dynamically; a sketch of such a link command is shown below. However, if you have to compile for a 32-bit architecture and run on a 64-bit one (which is what I had to do to test this monitor) you will need to statically link everything (and the linker may complain that you still need to copy the 32-bit libc to the Stingray). If you are using Stingray software on your own machine then you are, of course, free to install any library you wish, which does away with all this faff. This article was originally written by Sam Phillips in February 2006
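As an illustration of the partial static linking approach described above, the sketch below links only libmysqlclient statically and everything else dynamically. Treat it as a starting point rather than a recipe: the library path, the -D defines (which stand in for the values normally set in the Makefile) and the exact list of trailing libraries are placeholders, and mysql_config --libs will tell you which libraries your particular MySQL build actually needs:
$ gcc -o mysql_ping mysql_ping.c $(mysql_config --cflags) \
    -DMONITORNAME='"mysql_ping"' -DHOST='"dbhost"' -DUSER='"monitor"' \
    -DPASSWD='"secret"' -DDATABASE='"mydb"' \
    -L/usr/lib/mysql -Wl,-Bstatic -lmysqlclient -Wl,-Bdynamic -lz -lm -lpthread -ldl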
View full article
# change this to control debug messages: # 0 -> debug off # 1 -> only log what happens when mappings are reloaded # 2 -> also log what goes on in each request $debug = 0; # The file that we parse is sourced from the "Extra Files" section # of the Traffic Manager and we expect it to be named the same as # the TrafficScript rule (so if your rule is named MyRewrite, the rule will # expect a file called "MyRewrite" in the "extra" directory). # The file we read has 3 elements that are space separated: # Element one is the string RD or RW depending if we are redirecting #   the connection or rewriting it # Element two is the old URL # Element three is the new URL # example: # RD /oldurl http://newsite.com/newurl   # We cannot store the hash-table of mappings in data.get/set because # that is inefficient due to the constant (de-)serialization. # Instead, we employ the following 'flattening' strategy: all direct # mappings (no wildcard or regex) are stored in data.get/set as is. # Wildcards and regexes are sorted by how specific they are (most # specific first) and then stored under the key $file . $idx, where # $idx is their position in the pecking order.  When we do a lookup, # we check the direct mappings first.  If we don't find anything, we # go through the wc/regex keys starting with index 0, counting up. # As long as we keep finding the key, we check for a match.  When we # fail to find the key, we know we've checked all entries and give # up. sub sortAndInsertMappings( $prefix,$mappings,$debug ) {    # the '1' indicates reverse order: we want big numbers first    $sorted = array.sort( hash.keys( $mappings ), 1 );    $upper = array.length( $sorted );    for( $i = 0; $i < $upper; ++$i ) {       $k = $sorted[$i];       $path = string.skip( $k, 2 ); # strip off number of slashes       $key = $prefix . $i;       $value = $path . " " . $mappings[$k];       if( $debug ) { log.info( "Mapping from " . $key . " to '" . $value . "'" ); }       data.set( $key, $value );    } } sub reloadRedirects( $file, $fileTime,$debug ) {    $nonflat_rd = []; # empty hash    $nonflat_rw = [];    # data.reset is really expensive, but could be avoided by actually    # storing an array of 'our' keys under a dedicated entry.  That    # would only be worth it if the number of elements in data that    # are not ours is significant, since then the deserialization    # followed by an iteration over the relevant elements would    # be cheaper than the full scan.    data.reset( $file );    $paths = resource.getlines( $file );    foreach ( $path in $paths ) {       $data = string.split( $path );       if( 3 != array.length( $data ) ) {          log.warn( "Invalid line: " . $line );          continue;       }       $prefix =  $data[0];       $path = $data[1];       $mapping = $data[2];       if( !string.contains( $mapping, "$1" ) && !string.contains( $path, "*" ) ) {          # simple mapping          $k = $file . $prefix . $path;          if( $debug ) { log.info( "Direct mapping from " . $k . " to " . $mapping ); }          data.set( $k, $mapping );       } else {          # wc or regex          # Create a 2-byte binary representation of the number in network          # byte order.  This means we can use alphabetical sorting and still          # end up with an array sorted numerically          $num_slashes = string.intToBytes( string.count( $path, "/" ), 2 );          if( $prefix == "RD" ) {             $nonflat_rd[ $num_slashes . 
$path ] = $mapping;          } else if ( $prefix == "RW" ) {             $nonflat_rw[ $num_slashes . $path ] = $mapping;          } else {             log.warn( "Invalid prefix: " . $prefix );          }       }    }    sortAndInsertMappings( $file . "RD", $nonflat_rd, $debug );    sortAndInsertMappings( $file . "RW", $nonflat_rw, $debug );    data.set( $file . "-MTIME", $fileTime ); } sub checkPaths( $prefix, $path, $debug ) {    $k = rule.getname() . $prefix . $path;    $mapping = data.get( $k );    if ( string.length( $mapping ) ) {       if( $debug > 1 ) { log.info( "Straight swap from " . $path . " to " . $mapping ); }       return $mapping;    }    for( $i = 0; 1; ++$i ) {       $k = rule.getname() . $prefix . $i;       $data = data.get( $k );       if( $debug > 1 ) { log.info( "Checking key " . $k ); }       if( 0 == string.length( $data ) ) {          if( $debug > 1 ) {             log.info( "No more entries, no match found after " . $i . " entries" );          }          return "";       }       if( $debug > 1 ) { log.info( "Found data " . $data ); }       $arr = string.split( $data );       if( 2 != array.length( $arr ) ) {          log.warn( "Invalid data entry: " . $data );          continue;       }       $match = $arr[0];       $mapping = $arr[1];       if( string.contains( $mapping, "$1" ) ) {          # User needs a regex match /foo/(.*) /bar/$1          if( string.regexmatch( $path, $match ) ) {             if( $debug > 1 ) { log.info( "Regex matched" ); }             return string.regexsub( $path, $match, $mapping );          }       } else if( string.endswith( $match, "*" ) ) {          # Redirect "/foo/*" to /bar          $p = string.drop( $match, 1 );          if( string.startswith( $path, $p ) ) {             if( $debug > 1 ) { log.info( "Wildcard matched" ); }             return string.replace( $path, $p, $mapping );          }       }    }    return ""; } if( $debug > 1 ) { $start_time = sys.time.highres(); } checkMappingsUpToDate(); $path = http.getPath(); $rdPath = checkPaths( "RD", $path, $debug ); if ( string.length( $rdPath ) ) {    if( $debug > 1 ) {       log.info( "Redirecting: " . $path . " to " . $rdPath . "; Elapsed: "          . (sys.time.highres() - $start_time) );     }    http.redirect( $rdPath ); } $rwPath = checkPaths( "RW", $path, $debug ); if ( string.length( $rwPath ) ) {    if( $debug > 1 ) {       log.info( "Rewriting: " . $path . " to " . $rwPath . "; Elapsed: "          . (sys.time.highres() - $start_time) );    }    http.setPath( $rwPath ); } if( $debug > 1 ) {    log.info( "No match, request goes through unchanged; Elapsed: "          . (sys.time.highres() - $start_time) ); } # Since we store the mappings as multiple values, when the file has # changed, we have to delete all mappings from the file and then step # by step populate the map again with the new mappings.  This means # that while we're re-populating, a lookup for a particular value might # incorrectly find neither the old nor the new value.  TrafficScript doesn't # have real locks that guarantee access by only one process.  We can # emulate them closely, however, by guarding write access to the # mappings with a single 0/1 entry. sub checkMappingsUpToDate() {    $pid = sys.getpid();    if( !string.length( data.get( $pid ) ) ) { data.set( $pid, 0 ); }    $file = rule.getname();    if ( resource.exists( $file ) ) {       # we could use 'mtime' as the 'lock' key as well by setting it to       # a 'magic' value to indicate we're updating       $lock_key = $file . 
"-LOCK";       $mod_key = $file . "-MTIME";       while( 1 ) {          $mtime = data.get( $mod_key );          $fileTime = resource.getMTime( $file );          if ( $mtime == $fileTime ) {             break;          }          if ( !data.get( $lock_name ) ) {             data.set( $lock_name, "1" ); # 'lock'             log.info( $pid . " parsing file" );             reloadRedirects( $file, $fileTime, $debug );             data.remove( $lock_name ); # 'unlock'             break;          } else {             # wait for the other process to reload the file             $waits = data.get( $pid );             data.set( $pid, $waits+1 );             connection.sleep( 2 );          }       }    } }
View full article
Java Extensions are one of the 'data plane' APIs provided by Traffic Manager to process network transactions.  Java Extensions are invoked from TrafficScript using the java.run() function.   This article contains a selection of technical tips and solutions to illustrate the use of Java Extensions.   Basic Language Examples   Writing Java Extensions - an introduction (presenting a template and 'Hello World' application) Writing TrafficScript functions in Java (illustrating how to use the GenericServlet interface) Tech Tip: Prompting for Authentication in a Java Extension Tech Tip: Reading HTTP responses in a Java Extension   Advanced Language Examples   Apache Commons Logging (TODO) Authenticating users with Active Directory and Stingray Java Extensions Watermarking Images with Traffic Manager and Java Extensions Watermarking PDF documents with Traffic Manager and Java Extensions Being Lazy with Java Extensions XML, TrafficScript and Java Extensions Merging RSS feeds using Java Extensions (12/17/2008) Serving Web Content from Traffic Manager using Java Stingray-API.jar: A Java Interface Library for Traffic Manager's SOAP Control API TrafficManager Status - Using the Control API from a Java Extension   Java Extensions in other languages   PyRunner.jar: Running Python code in Traffic Manager Making Traffic Manager more RAD with Jython! Scala, Traffic Manager and Java Extensions (06/30/2009)   More information   Feature Brief: Java Extensions in Traffic Manager Java Development Guide documentation in the Product Documentation
View full article
The following code uses Stingray's Control API to list all the running virtual servers on a cluster. The code is written in Java and uses the Stingray-API.jar library described in this article: Using Stingray's SOAP Control API with Java. listVS.java Make sure to edit the endpoint address (https://username:password@host:9090/soap) so that the username, password and host match the admin interface for your Stingray. import com.zeus.soap.zxtm._1_0.*; import java.security.Security; import java.security.KeyStore; import java.security.Provider; import java.security.cert.X509Certificate; import javax.net.ssl.ManagerFactoryParameters; import javax.net.ssl.TrustManager; import javax.net.ssl.TrustManagerFactorySpi; import javax.net.ssl.X509TrustManager; public class listVS {    public static void main( String[] args ) {       // Install the all-trusting trust manager       Security.addProvider( new MyProvider() );       Security.setProperty( "ssl.TrustManagerFactory.algorithm", "TrustAllCertificates");       try {          VirtualServerLocator vsl = new VirtualServerLocator();          vsl.setVirtualServerPortEndpointAddress(             "https://username:password@host:9090/soap" );          VirtualServerPort vsp = vsl.getVirtualServerPort();          String[] vsnames = vsp.getVirtualServerNames();          boolean[] vsenabled = vsp.getEnabled( vsnames );          // Print the name of each virtual server that is enabled          for( int i = 0; i < vsnames.length; i++ ){             if( vsenabled[i] ){                System.out.println( vsnames[i] );             }          }       } catch (Exception e) {          System.out.println( e.toString() );       }    }    /* The following code disables certificate checking.    * Use the Security.addProvider and Security.setProperty    * calls to enable it */    public static class MyProvider extends Provider {       public MyProvider() {          super( "MyProvider", 1.0, "Trust certificates" );          put( "TrustManagerFactory.TrustAllCertificates", MyTrustManagerFactory.class.getName() );       }       protected static class MyTrustManagerFactory extends TrustManagerFactorySpi {          public MyTrustManagerFactory() {}          protected void engineInit( KeyStore keystore ) {}          protected void engineInit(             ManagerFactoryParameters mgrparams ) {}          protected TrustManager[] engineGetTrustManagers() {             return new TrustManager[] { new MyX509TrustManager() };          }       }       protected static class MyX509TrustManager implements X509TrustManager {          public void checkClientTrusted( X509Certificate[] chain, String authType) {}          public void checkServerTrusted( X509Certificate[] chain, String authType) {}          public X509Certificate[] getAcceptedIssuers() { return null; }       }    } } Running the example To build and run the code, you'll first need to do the following: Download Apache Axis 1.4 from the Apache archive directory /dist/ws/axis/1_4. You can unzip the axis-bin package in your working directory, or you can install the jar files permanently (e.g. in the JAVA_HOME/jre/lib/ext/ directory). Download the JavaMail library; either unzip the package in your working directory, or install the mail.jar file to the JAVA_HOME/jre/lib/ext/ directory.  
This package provides class implementations that, though not required, will avoid warnings about missing classes. Compile and run the example as follows: $ javac -cp Stingray-API.jar:axis-1_4/lib/* listVS.java $ jar -cvfe listVS.jar listVS listVS*.class $ java -cp Stingray-API.jar:axis-1_4/lib/*:javamail-1.4.7/lib/*:listVS.jar listVS Main website Mail servers Test site If you install the Stingray-API, Apache Axis 1.4 and JavaMail libraries in your system classpath, then you don't need to reference them explicitly when you build and run this example. Notes The bulk of this code disables client certificate checking. Details of the code and surrounding infrastructure are at http://java.sun.com/j2se/1.5.0/docs/guide/security/jsse/JSSERefGuide.html. Read more Collected Tech Tips: SOAP Control API examples
View full article
The TrafficScript function http.changeSite() makes it easy to redirect clients from one domain to another.  You can also use it to reliably redirect clients from http to https (or https to http), or from one document tree on a website (e.g. /products) to another (e.g. /sales). # Example: Redirect client from www.site.com to www.site.co.uk if( geo.getCountryCode( request.getRemoteIP() ) == "GB" ) {   http.changeSite( "www.site.co.uk" ); } # Example: Force client to https (assuming this rule is attached to an HTTP virtual server) http.changeSite( "https://" . http.getHostHeader() ); # Example: move client from one tree to another $path = http.getPath(); if( string.startsWith( $path, "/products" ) ) http.changeSite( http.getHostHeader() . "/sales" ); For more fine-grained control of HTTP redirects, you can also use the http.redirect() function. Read more Collected Tech Tips: TrafficScript examples
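A quick way to confirm what rules like those above actually send back to the client is to make a request through the virtual server from the command line and inspect the response headers. The hostname and path here are placeholders; you should see a 3xx status line and a Location header pointing at the site or path chosen by the rule:
$ curl -sI http://www.site.com/products/widget | egrep -i '^(HTTP|Location)'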
View full article
Stingray Traffic Manager version 9.5 includes some important enhancements to the RESTful API.  These enhancements include the following:   A new API version The API version has moved to 2.0.  Versions 1.0 and 1.1 are still available but have been deprecated.   Statistics and Version Information A new resource, "status", is available that contains the child resources "information" and "statistics", under the host name.  Data can only be retrieved for these resources; no updates are allowed.  The URI for "information" is: http(s)://<host>:<port>/api/tm/2.0/status/<host>/information   and the URI for "statistics" is:   http(s)://<host>:<port>/api/tm/2.0/status/<host>/statistics   <host> can also be "local_tm", which is an alias for the Traffic Manager processing the REST request.  For this release, only statistics for the local Traffic Manager are available.   The "information" resource contains the version of the Stingray Traffic Manager, so for example the request:   http(s)://<host>:<port>/api/tm/2.0/status/local_tm/information   for version 9.5 would return:   { "information": { "tm_version": "9.5" } }   The "statistics" resource contains the Stingray statistics that are also available with SNMP or the SOAP API.  The following child resources are available under "statistics":   actions, bandwidth, cache, cloud_api_credentials, connection_rate_limit, events, glb_services, globals, listen_ips, locations, network_interface, nodes, per_location_service, per_node_slm, pools, rule_authenticators, rules, service_level_monitors, service_protection, ssl_ocsp_stapling, traffic_ips, virtual_servers   The statistics that are available vary by resource.   Example:   To get the statistics for the pool "demo" on the Stingray Traffic Manager "stingray.example.com": https://stingray.example.com:9070/api/tm/2.0/status/local_tm/statistics/pools/demo { "statistics": { "algorithm": "roundrobin", "bytes_in": 20476976, "bytes_out": 53323, "conns_queued": 0, "disabled": 0, "draining": 0, "max_queue_time": 0, "mean_queue_time": 0, "min_queue_time": 0, "nodes": 1, "persistence": "none", "queue_timeouts": 0, "session_migrated": 0, "state": "active", "total_conn": 772 } } Resource Name Changes Some resources have been renamed to be clearer:   actionprogs -> action_programs auth -> user_authenticators authenticators -> rule_authenticators cloudcredentials -> cloud_api_credentials events -> event_types extra -> extra_files flipper -> traffic_ip_groups groups -> user_groups scripts -> monitor_scripts services -> glb_services settings.cfg -> global_settings slm -> service_level_monitors vservers -> virtual_servers zxtms -> traffic_managers   New Resource   One new resource, "custom", has been added to support the new Custom Configuration Sets feature.  This allows arbitrary name:value configuration pairs to be stored in the Traffic Manager configuration system. As part of the Traffic Manager configuration, this data is replicated across a cluster and is accessible using the REST API, SOAP API and ZCLI.  All data structures supported by the Stingray REST API are also supported for Custom Configuration Sets.  Please see the REST API Guide for more information.
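The resources described above can also be fetched with a command-line HTTP client, which is a convenient way to explore the API before writing any code. In the sketch below the credentials are placeholders for an admin user, and -k is only needed if the REST port is still using a self-signed certificate:
$ curl -k -u admin:password https://stingray.example.com:9070/api/tm/2.0/status/local_tm/information
$ curl -k -u admin:password https://stingray.example.com:9070/api/tm/2.0/status/local_tm/statistics/pools/demo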
View full article
  1. The Issue   When using perpetual licensing on a Traffic Manager, it is restricted to a throughput licensing limitation as per the license.  If this limitation is reached, traffic will be queued and in extreme situations, if the throughput reaches much higher than expected levels, some traffic could be dropped because of the limitation.   2. The Solution   Automatically increase the allocated bandwidth for the Traffic Manager!!   3. A Brief Overview of the Solution   An SSC holds the licensed bandwidth configuration for the Traffic Manager instance.   The Traffic Manager is configured to execute a script on an event being raised, the bwlimited event.   The script makes REST calls to the SSC in order to obtain and then increment if necessary, the Traffic Manager's bandwidth allocation.   I have written the script used here, to only increment if the resulting bandwidth allocation is 5Mbps or under, but this restriction could be removed if it's not required.  The idea behind this was to allow the Traffic Manager to increment it's allocation, but to only let it have a certain maximum amount of bandwidth from the SSC bandwidth "bucket".   4. The Solution in a Little More Detail   4.1. Move to an SSC Licensing Model   If you're currently running Traffic Managers with perpetual licenses, then you'll need to move from the perpetual licensing model to the SSC licensing model.  This effectively allows you to allocate bandwidth and features across multiple Traffic Managers within your estate.  The SSC has a "bucket" of bandwidth along with configured feature sets which can be allocated and distributed across the estate as required, allowing for right-sizing of instances, features and also allowing multi-tenant access to various instances as required throughout the organisation.   Instance Hosts and Instance resources are configured on the SSC, after which a Flexible License is uploaded on each of the Traffic Manager instances which you wish to be licensed by the SSC, and those instances "call home" to the SSC regularly in order to assess their licensing state and to obtain their feature set.   For more information on SSC, visit the Riverbed website pages covering this product, here - SteelCentral Services Controller for SteelApp Software.   There's also a Brochure attached to this article which covers the basics of the SSC.   4.2. Traffic Manager Configuration and a Bit of Bash Scripting!   The SSC has a REST API that can be accessed from external platforms able to send and receive REST calls.  This includes the Traffic Manager itself.   To carry out the automated bandwidth allocation increase on the Traffic Manager, we'll need to carry out the following;   a. Create a script which can be executed on the Traffic Manager, which will issue REST calls in order to change the SSC configuration for the instance in the event of a bandwidth limitation event firing. b. Upload the script to be used, on to the Traffic Manager. c. Create a new event and action on the Traffic Manager which will be initiated when the bandwidth limitation is hit, calling the script mentioned in point a above.   4.2.a. The Script to increment the Traffic Manager Bandwidth Allocation   This script, called  and attached, is shown below.   Script Function:   Obtain the Traffic Manager instance configuration from the SSC. Extract the current bandwidth allocation for the Traffic Manager instance from the information obtained. 
If the current bandwidth is less then 5Mbps, then increment the allocation by 1Mbps and issue the REST call to the SSC to make the changes to the instance configuration as required.  If the bandwidth is currently 5Mbps, then do nothing, as we've hit the limit for this particular Traffic Manager instance.   #!/bin/bash # # Bandwidth_Increment # ------------------- # Called on event: bwlimited # # Request the current instance information requested_instance_info=$(curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" \ -X GET -u admin:password https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com-00002) # Extract the current bandwidth figure for the instance current_instance_bandwidth=$(echo $requested_instance_info | sed -e 's/.*"bandwidth": \(\S*\).*/\1/g' | tr -d \,) # Add 1 to the original bandwidth figure, imposing a 5Mbps limitation on this instance bandwidth entry if [ $current_instance_bandwidth -lt 5 ] then new_instance_bandwidth=$(expr $current_instance_bandwidth + 1) # Set the instance bandwidth figure to the new bandwidth figure (original + 1) curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \ '{"bandwidth":'"${new_instance_bandwidth}"'}' \ https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com-00002 fi   There are some obvious parts to the script that will need to be changed to fit your own environment.  The admin username and password in the REST calls and the SSC name, port and path used in the curl statements.  Hopefully from this you will be able to see just how easy the process is, and how the SSC can be manipulated to contain the configuration that you require.   This script can be considered a skeleton which you can use to carry out whatever configuration is required on the SSC for a particular Traffic Manager.  Events and actions can be set up on the Traffic Manager which can then be used to execute scripts which can access the SSC and make the changes necessary based on any logic you see fit.   4.2.b. Upload the Bash Scripts to be Used   On the Traffic Manager, upload the bash script that will be needed for the solution to work.  The scripts are uploaded in the Catalogs > Extra Files > Action Programs section of the Traffic Manager, and can then be referenced from the Actions when they are created later.   4.2.c. Create a New Event and Action for the Bandwidth Limitation Hit   On the Traffic Manager, create a new event type as shown in the screenshot below - I've created Bandwidth_Increment, but this event could be called anything relevant.  The important factor here is that the event is raised from the bwlimited event.     Once this event has been created, an action must be associated with it.   Create a new external program action as shown in the screenshot below - I've created one called Bandwidth_Increment, but again this could be called anything relevant.  The important factor for the action is that it's an external program action and that it calls the correct bash script, in my case called SSC_Bandwidth_Increment.     5. Testing   In order to test the solution, on the SSC, set the initial bandwidth for the Traffic Manager instance to 1Mbps.   Generate some traffic through to a service on the Traffic Manager that will force the Traffic Manager to hit it's 1Mbps limitation for a succession of time.  This will cause the bwlimited event to fire and for the Bandwidth_Increment action to be executed, running the SSC_Bandwidth_Increment script.   
The script will increment the Traffic Manager bandwidth by 1Mbps.   Check and confirm this on the SSC.   Once confirmed, stop the traffic generation.   Note: As the Flexible License on the Traffic Manager polls the SSC every 3 minutes for an update on it's licensed state, you may not see an immediate change to the bandwidth allocation of the Traffic Manager.   You can force the Traffic Manager to poll the SSC by removing the Flexible License and re-adding the license again - the re-configuration of the Flexible License will then force the Traffic Manager to re-poll the SSC and you should then see the updated bandwidth in the System > Licenses (after expanding the license information) page of the Traffic Manager as shown in the screenshot below;     6. Summary   Please feel free to use the information contained within this post to experiment!!!   If you do not yet have an SSC deployment, then an Evaluation can be arranged by contacting your Partner or Riverbed Salesman.  They will be able to arrange for the Evaluation, and will be there to support you if required.
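If you want to confirm the new allocation without waiting for the Traffic Manager to re-poll, the same REST resource that the script updates can be read back directly. This is only a sketch - the credentials, SSC hostname and instance name are the placeholders used in the example script above:
$ curl -k --basic -u admin:password -H "Accept: application/json" \
    https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com-00002 | grep -o '"bandwidth": *[0-9]*'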
View full article
  1. The Issue   When using perpetual licensing on Traffic Manager instances which are clustered, the failure of one of the instances results in licensed throughput capability being lost until that instance is recovered.   2. The Solution   Automatically adjust the bandwidth allocation across cluster members so that wasted or unused bandwidth is used effectively.   3. A Brief Overview of the Solution   An SSC holds the configuration for the Traffic Manager cluster members. The Traffic Managers are configured to execute scripts on two events being raised, the machinetimeout event and the allmachinesok event.   Those scripts make REST calls to the SSC in order to dynamically and automatically amend the Traffic Manager instance configuration held for the two cluster members.   4. The Solution in a Little More Detail   4.1. Move to an SSC Licensing Model   If you're currently running Traffic Managers with perpetual licenses, then you'll need to move from the perpetual licensing model to the SSC licensing model.  This effectively allows you to allocate bandwidth and features across multiple Traffic Managers within your estate.  The SSC has a "bucket" of bandwidth along with configured feature sets which can be allocated and distributed across the estate as required, allowing for right-sizing of instances, features and also allowing multi-tenant access to various instances as required throughout the organisation.   Instance Hosts and Instance resources are configured on the SSC, after which a Flexible License is uploaded on each of the Traffic Manager instances which you wish to be licensed by the SSC, and those instances "call home" to the SSC regularly in order to assess their licensing state and to obtain their feature set. For more information on SSC, visit the Riverbed website pages covering this product, here - SteelCentral Services Controller for SteelApp Software.   There's also a Brochure attached to this article which covers the basics of the SSC.   4.2. Traffic Manager Configuration and a Bit of Bash Scripting!   The SSC has a REST API that can be accessed from external platforms able to send and receive REST calls.  This includes the Traffic Manager itself.   To carry out automated bandwidth allocation on cluster members, we'll need to carry out the following;   a. Create a script which can be executed on the Traffic Manager, which will issue REST calls in order to change the SSC configuration for the cluster members in the event of a cluster member failure. b. Create another script which can be executed on the Traffic Manager, which will issue REST calls to reset the SSC configuration for the cluster members when all of the cluster members are up and operational. c. Upload the two scripts to be used, on to the Traffic Manager cluster. d. Create a new event and action on the Traffic Manager cluster which will be initiated when a cluster member fails, calling the script mentioned in point a above. e. Create a new event and action on the Traffic Manager cluster which will be initiated when all of the cluster members are up and operational, calling the script mentioned in point b above.   4.2.a. The Script to Re-allocate Bandwidth After a Cluster Member Failure This script, called Cluster_Member_Fail_Bandwidth_Allocation and attached, is shown below.   Script Function:   Determine which cluster member has executed the script. Make REST calls to the SSC to allocate bandwidth according to which cluster member is up and which is down.   
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 #!/bin/bash  #  # Cluster_Member_Fail_Bandwidth_Allocation  # ----------------------------------------  # Called on event: machinetimeout  #  # Checks which host calls this script and assigns bandwidth in SSC accordingly  # If demo-1 makes the call, then demo-1 gets 999 and demo-2 gets 1  # If demo-2 makes the call, then demo-2 gets 999 and demo-1 gets 1  #       # Grab the hostname of the executing host  Calling_Hostname=$(hostname -f)       # If demo-1.example.com is executing then issue REST calls accordingly  if [ $Calling_Hostname == "demo-1.example.com" ]  then           # Set the demo-1.example.com instance bandwidth figure to 999 and           # demo-2.example.com instance bandwidth figure to 1           curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \                              '{"bandwidth":999}' \                              https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com           curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \                              '{"bandwidth":1}' \                              https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com  fi       # If demo-2.example.com is executing then issue REST calls accordingly  if [ $Calling_Hostname == "demo-2.example.com" ]  then           # Set the demo-2.example.com instance bandwidth figure to 999 and           # demo-1.example.com instance bandwidth figure to 1           curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \                              '{"bandwidth":999}' \                              https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com           curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \                              '{"bandwidth":1}' \                              https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com  fi    There are some obvious parts to the script that will need to be changed to fit your own environment.  The hostname validation, the admin username and password in the REST calls and the SSC name, port and path used in the curl statements.  Hopefully from this you will be able to see just how easy the process is, and how the SSC can be manipulated to contain the configuration that you require.   This script can be considered a skeleton, as can the other script for resetting the bandwidth, shown later.   4.2.b. The Script to Reset the Bandwidth   This script, called Cluster_Member_All_Machines_OK and attached, is shown below.   
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 #!/bin/bash  #  # Cluster_Member_All_Machines_OK  # ------------------------------  # Called on event: allmachinesok  #  # Resets bandwidth for demo-1.example.com and demo-2.example.com - both get 500  #       # Set both demo-1.example.com and demo-2.example.com bandwidth figure to 500  curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \                      '{"bandwidth":500}' \                      https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-1.example.com-00002  curl -k --basic -H "Content-Type: application/json" -H "Accept: application/json" -u adminuser:adminpassword -d \                      '{"bandwidth":500}' \                      https://ssc.example.com:8000/api/tmcm/1.1/instance/demo-2.example.com-00002    Again, there are some parts to the script that will need to be changed to fit your own environment.  The admin username and password in the REST calls and the SSC name, port and path used in the curl statements.   4.2.c. Upload the Bash Scripts to be Used   On one of the Traffic Managers, upload the two bash scripts that will be needed for the solution to work.  The scripts are uploaded in the Catalogs > Extra Files > Action Programs section of the Traffic Manager, and can then be referenced from the Actions when they are created later.     4.2.d. Create a New Event and Action for a Cluster Member Failure   On the Traffic Manager (any one of the cluster members), create a new event type as shown in the screenshot below - I've created Cluster_Member_Down, but this event could be called anything relevant.  The important factor here is that the event is raised from the machinetimeout event.   Once this event has been created, an action must be associated with it. Create a new external program action as shown in the screenshot below - I've created one called Cluster_Member_Down, but again this could be called anything relevant.  The important factor for the action is that it's an external program action and that it calls the correct bash script, in my case called Cluster_Member_Fail_Bandwidth_Allocation.   4.2.e. Create a New Event and Action for All Cluster Members OK   On the Traffic Manager (any one of the cluster members), create a new event type as shown in the screenshot below - I've created All_Cluster_Members_OK, but this event could be called anything relevant.  The important factor here is that the event is raised from the allmachinesok event.   Once this event has been created, an action must be associated with it. Create a new external program action as shown in the screenshot below - I've created one called All_Cluster_Members_OK, but again this could be called anything relevant.  The important factor for the action is that it's an external program action and that it calls the correct bash script, in my case called Cluster_Member_All_Machines_OK.   5. Testing   In order to test the solution, simply DOWN Traffic Manager A from an A/B cluster.  Traffic Manager B should raise the machinetimeout event which will in turn execute the Cluster_Member_Down event and associated action and script, Cluster_Member_Fail_Bandwidth_Allocation.   The script should allocate 999Mbps to Traffic Manager B, and 1Mbps to Traffic Manager A within the SSC configuration.   
As the Flexible License on the Traffic Manager polls the SSC every 3 minutes for an update on it's licensed state, you may not see an immediate change to the bandwidth allocation of the Traffic Managers in questions. You can force the Traffic Manager to poll the SSC by removing the Flexible License and re-adding the license again - the re-configuration of the Flexible License will then force the Traffic Manager to re-poll the SSC and you should then see the updated bandwidth in the System > Licenses (after expanding the license information) page of the Traffic Manager as shown in the screenshot below;     To test the resetting of the bandwidth allocation for the cluster, simply UP Traffic Manager B.  Once Traffic Manager B re-joins the cluster communications, the allmachinesok event will be raised which will execute the All_Cluster_Members_OK event and associated action and script, Cluster_Member_All_Machines_OK. The script should allocate 500Mbps to Traffic Manager B, and 500Mbps to Traffic Manager A within the SSC configuration.   Just as before for the failure event and changes, the Flexible License on the Traffic Manager polls the SSC every 3 minutes for an update on it's licensed state so you may not see an immediate change to the bandwidth allocation of the Traffic Managers in questions.   You can force the Traffic Manager to poll the SSC once again, by removing the Flexible License and re-adding the license again - the re-configuration of the Flexible License will then force the Traffic Manager to re-poll the SSC and you should then see the updated bandwidth in the System > Licenses (after expanding the license information) page of the Traffic Manager as before (and shown above).   6. Summary   Please feel free to use the information contained within this post to experiment!!!   If you do not yet have an SSC deployment, then an Evaluation can be arranged by contacting your Partner or Brocade Salesman.  They will be able to arrange for the Evaluation, and will be there to support you if required.
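As with the single-instance case, the SSC configuration can be read back directly to confirm what the event scripts have done. A small sketch (using the same placeholder hostnames and credentials as the scripts above) that prints the current allocation for both cluster members is:
$ for inst in demo-1.example.com demo-2.example.com; do
    echo -n "$inst: "
    curl -sk --basic -u adminuser:adminpassword https://ssc.example.com:8000/api/tmcm/1.1/instance/$inst | grep -o '"bandwidth": *[0-9]*'
  done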
View full article
This article discusses how to use the Stingray Traffic Manager's RESTful Control API with PHP.  There are different options for accessing the RESTful API with PHP, but I decided to create my own PHP REST Client, STMRESTClient, to make working with the RESTful API easier, and this has been used in the PHP examples. Instructions for using STMRESTClient and the code can be found here: Tech Tip: A Stingray Traffic Manager REST Client for PHP   Resources   The RESTful API gives you access to the Stingray Configuration and statistics, presented in the form of resources.  The format of the data exchanged using the Stingray RESTful API will depend on the type of resource being accessed:   Data for Configuration and Status Resources, such as Virtual Servers and Pools, are exchanged in JSON format using the MIME type “application/json”, so when getting data for a resource with a GET request, the data will be returned in JSON format and must be deserialized or decoded into a PHP data structure.  When adding or changing a resource with a PUT request, the data must be serialized or encoded from a PHP data structure into JSON format.  Files, such as rules and those in the extra directory, are exchanged in raw format using the MIME type “application/octet-stream”.   Working with JSON and PHP   PHP provides functions for JSON encoding and decoding. To take a PHP data structure and encode it into JSON format, use json_encode() and to decode a JSON formatted string into a PHP structure, use json_decode(). If using STMRestClient, JSON encoding and decoding will be done for you.   Working with the RESTful API and PHP   The base form of the URI for the Stingray RESTful API is:   https://<host>:<port>/api/tm/<version>   followed by paths to the different resource types:   For configuration resources: /config/active/ For statistics resources: /status/<host>/statistics/ "local_tm" can be used in place of <host> For information resources: /status/<host>/information/ "local_tm" can be used in place of <host>   followed by an actual resource, so for example to get a list of all the pools from the Stingray instance, stingray.example.com, it would be:   https://stingray.example.com:9070/api/tm/2.0/config/active/pools   and to get the configuration information for the pool “testpool”, it would be:   https://stingray.example.com:9070/api/tm/2.0/config/active/pools/testpool   and to get statistics for the pool "testpool", it would be:   https://stingray.example.com:9070/api/tm/2.0/status/local_tm/statistics/pools/testpool   Prerequisites   If your PHP environment does not have cURL installed, you will need to install it, even if you are using STMRestClient.   If using apt (assuming apache is the web server):   sudo apt-get install php5-curl sudo service apache2 restart   Data Structures   The PHP functions json_decode and json_encode convert between JSON strings and PHP data structures.  The PHP data structure can be either an associative array or an object.  In either case, the array or object will have one element.   The key to this element will be:   'children' for lists of configuration resources.  The value will be a PHP array with each element in the array being an associative array with the key, 'name', set to the name of the resource and the key, 'href', set to the URI of the resource. 'properties' for configuration resources.
The value will be an associative array with each key value pair being a section of properties with the key being set to the name of the section and the value being an associative array containing the configuration values as key/value pairs.  Configuration values can be scalars, arrays or associative arrays. 'statistics' for statistics resources.  The value will be an associative array. 'information' for information resources.  The value will be an associative array.   Please see Feature Brief: Stingray's RESTful Control API for examples of these data structures and something like the Chrome REST Console can be used to see what the actual data looks like.   Read More   The REST API Guide in the Stingray Product Documentation Feature Brief: Stingray's RESTful Control API Collected Tech Tips: Using the RESTful Control API Tech Tip: A Stingray Traffic Manager REST Client for PHP
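Whichever client library you end up using, it can help to look at the raw JSON before writing PHP code against it. A curl request like the sketch below (hostname and credentials are placeholders) retrieves the pool list; the response is a 'children' list of the kind described above, roughly of the form { "children": [ { "name": "testpool", "href": "/api/tm/2.0/config/active/pools/testpool" } ] }:
$ curl -k -u admin:password https://stingray.example.com:9070/api/tm/2.0/config/active/pools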
View full article
The following code uses Stingray's RESTful API to delete a pool.  The code is written in TrafficScript.  This rule deletes the "tspool" pool created by the stmrest_addpool example.  To delete a resource, you do an HTTP DELETE on the URI for the resource.  If the delete is successful, a 204 HTTP status code will be returned.  Subroutines in stmrestclient are used to do the actual RESTful API calls.  stmrestclient is attached to the article Tech Tip: Using the RESTful Control API with TrafficScript - Overview. stmrest_deletepool ################################################################################ # stmrest_deletepool # # This rule deletes the pool "tspool". # # To run this rule add it as a request rule to an HTTP Virtual Server and in a # browser enter the path /rest/deletepool. # # It uses the subroutines in stmrestclient ################################################################################ import stmrestclient; if (http.getPath() != "/rest/deletepool") break; $pool = "tspool"; $resource = "pools/" . string.escape($pool); $accept = "json"; $html = "<br><b>Delete Pool " . $pool . "</b><br><br>"; # Check to make sure that the Pool exists $response = stmrestclient.stmRestGet($resource, $accept); if ($response["rc"] == 1) {    $response = stmrestclient.stmRestDelete($resource);    if ($response["rc"] == 1) {       $html = $html . "Pool " . $pool . " deleted";    } else {       $html = $html . "There was an error deleting pool " . $pool . ": " . $response['info'];    } } else {    if ($response['status'] == 404) {       $html = $html . "Pool " . $pool . " not found";    } else {       $html = $html . "There was an error getting the configuration for pool " . $pool . ": " . $response['info'];    } } http.sendResponse("200 OK", "text/html", $html, ""); Running the example This rule should be added as a request rule to a Virtual Server and run with the URL: http://<hostname>/rest/deletepool Pool tspool deleted Read More Stingray REST API Guide in the Stingray Product Documentation Tech Tip: Using the RESTful Control API with TrafficScript - Overview Feature Brief: Stingray's RESTful Control API Collected Tech Tips: Using the RESTful Control API
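The same operation can also be issued directly with curl, which is a handy way to check what the TrafficScript client is doing. The hostname and credentials below are placeholders; as noted above, a successful delete returns a 204 status code, and a 404 indicates the pool does not exist:
$ curl -k -u admin:password -X DELETE -o /dev/null -w '%{http_code}\n' \
    https://stingray.example.com:9070/api/tm/2.0/config/active/pools/tspool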
View full article