Pulse Secure vADC
The AWS Marketplace provides a "launch with 1-click" button to simplify deployment of the Pulse Secure Virtual Traffic Manager in EC2. If you accept the Marketplace defaults, a new security group is created for you automatically. This security group is fine for single instances of vTM, but attempts to cluster traffic managers associated with it will fail. Before clustering traffic managers, modify your security group to open ports 9080 and 9090 for UDP traffic. For more information, read the Before You Begin section in the Cloud Services Installation and Getting Started Guide - see vADC Product Documentation
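If you prefer to script this change rather than use the AWS console, the security group can be updated programmatically. The sketch below uses boto3 (the AWS SDK for Python); the security group ID, region and CIDR range are hypothetical placeholders, not values from this article:

```python
def cluster_port_permissions(cidr="0.0.0.0/0"):
    """Build the IpPermissions entries for vTM cluster traffic:
    UDP ports 9080 and 9090, as required before clustering."""
    return [
        {"IpProtocol": "udp", "FromPort": port, "ToPort": port,
         "IpRanges": [{"CidrIp": cidr}]}
        for port in (9080, 9090)
    ]

if __name__ == "__main__":
    import boto3  # third-party AWS SDK; pip install boto3
    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
        IpPermissions=cluster_port_permissions("10.0.0.0/16"),
    )
```

In practice you would restrict the CIDR range to the subnet your traffic managers occupy rather than opening these ports to the world.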
In a recent conversation, a user wished to use the Traffic Manager's rate shaping capability to throttle back requests to one part of their web site that was particularly sensitive to high traffic volumes (think of a CGI, JSP servlet, or other type of dynamic application). This article describes how you might go about doing this, testing and implementing a suitable limit using Service Level Monitoring, Rate Shaping and some TrafficScript magic.

The problem

Imagine that part of your website is particularly sensitive to traffic load and is prone to overloading when a crowd of visitors arrives. Connections queue up, response time becomes unacceptable and it looks like your site has failed.

If your website were a tourist attraction or a club, you'd employ a gatekeeper to manage entry rates. As the attraction began to fill up, you'd use a queue to limit entry, and if the queue got too long, you'd want to encourage new arrivals to leave and return later rather than join the queue.

This is more or less the solution we can implement for a web site. In this worked example, we're going to single out a particular application (named search.cgi) whose traffic we want to control, and let all other traffic (typically static content) through without any shaping.

The approach

We'll first measure the maximum rate at which the application can process transactions, and use this value to determine the rate limit we want to impose when the application begins to run slowly.

Using Traffic Manager's Service Level Monitoring classes, we'll monitor the performance (response time) of the search.cgi application. If the application begins to run slower than normal, we'll deploy a queuing policy that rate-limits new requests to the application. We'll monitor the queue and send a 'please try later' message when the rate limit is met, rather than admitting users to the queue and forcing them to wait.
Our goal is to maximize utilization (supporting as many transactions as possible) while minimizing response time, returning a 'please wait' message rather than queueing a user.

Measuring performance

We first use zeusbench to determine the optimal performance that the application can achieve. We perform several runs, increasing the concurrency until the performance (responses per second) stabilizes at a consistent level:

zeusbench -c 5 -t 20 http://host/search.cgi
zeusbench -c 10 -t 20 http://host/search.cgi
zeusbench -c 20 -t 20 http://host/search.cgi
... etc

Run:

zeusbench -c 20 -t 20 http://host/search.cgi

From this, we conclude that the maximum number of transactions per second that the application can comfortably sustain is 100.

We then use zeusbench to send transactions at that rate (100 per second) and verify that performance and response times are stable. Run:

zeusbench -r 100 -t 20 http://host/search.cgi

Our desired response time can be deduced to be approximately 20 ms.

Now we perform the 'destructive' test, to elicit precisely the behaviour we want to avoid. Use zeusbench again to send requests to the application at higher than the sustainable transaction rate:

zeusbench -r 110 -t 20 http://host/search.cgi

Observe how the response time for the transactions steadily climbs as requests begin to be queued, and the successful transaction rate falls steeply. Eventually, when the response time rises beyond acceptable limits, transactions are timed out and the service appears to have failed.

This illustrates how sensitive a typical application can be to floods of traffic that overwhelm it, even for just a few seconds. The effects of the flood can last for tens of seconds afterwards as the connections complete or time out.

Defining the policy

We wish to implement the following policy:

If all transactions complete within 50 ms, do not attempt to shape traffic.
If some transactions take more than 50 ms, assume that we are in danger of overload. Rate-limit traffic to 100 requests per second, and if requests exceed that rate limit, send back a '503 Too Busy' message rather than queueing them. Once transaction time comes down to less than 50 ms, remove the rate limit.

Our goal is to repeat the previous zeusbench test, showing that the maximum transaction rate can be sustained within the desired response time, and any extra requests receive an error message quickly rather than being queued.

Implementing the policy

The Rate Class

Create a rate shaping class named Search limit with a limit of 100 requests per second.

The Service Level Monitoring class

Create a Service Level Monitoring class named Search timer with a target response time of 50 ms.

If desired, you can use the Activity monitor to chart the percentage of requests that conform, i.e. complete within 50 ms, while you conduct your zeusbench runs. You'll notice a strong correlation between these figures and the increase in response time figures reported by zeusbench.

The TrafficScript rule

Now use these two classes with the following TrafficScript request rule:

# We're only concerned with requests for /search.cgi
$url = http.getPath();
if( $url != "/search.cgi" ) break;

# Time this request using the Service Level Monitoring class
connection.setServiceLevelClass( "Search timer" );

# Test if any of the recent requests fell outside the desired SLM threshold
if( slm.conforming( "Search timer" ) < 100 ) {
   if( rate.getBacklog( "Search limit" ) > 0 ) {
      # To minimize response time, always send a 503 Too Busy response if the
      # request exceeds the configured rate of 100/second.
      # You could also use http.redirect() to a more pleasant 'sorry' page, but
      # 503 errors are easier to monitor when testing with ZeusBench
      http.sendResponse( "503 Too busy", "text/html",
         "<h1>We're too busy!!!</h1>",
         "Pragma: no-cache" );
   } else {
      # Shape the traffic to 100/second
      rate.use( "Search limit" );
   }
}

Testing the policy

Rerun the 'destructive' zeusbench run that produced the undesired behaviour previously:

zeusbench -r 110 -t 20 http://host/search.cgi

Observe that:

Traffic Manager processes all of the requests without excessive queuing; the response time stays within desired limits.
Traffic Manager typically processes 110 requests per second. There are approximately 10 'Bad' responses per second (these are the 503 Too Busy responses generated by the rule), so we can deduce that the remaining 100 (approximately) requests were served correctly.

These tests were conducted in a controlled environment, on an otherwise idle machine that was not processing any other traffic. You could reasonably expect much more variation in performance in a real-world situation, and would be advised to set the rate class to a lower value than the experimentally proven maximum.

In a real-world situation, you would probably choose to redirect a user to a 'sorry' page rather than returning a '503 Too Busy' error. However, because ZeusBench counts 4xx and 5xx responses as 'Bad', it is easy to determine how many requests complete successfully, and how many return the 'sorry' response.

For more information on using ZeusBench, take a look at the Introducing Zeusbench article.
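The reject-rather-than-queue behaviour at the heart of this policy can be illustrated outside TrafficScript. The Python sketch below is not how Traffic Manager's rate classes are actually implemented; it is just a minimal fixed-window model showing why capping admissions at 100 per second turns the extra ~10 requests into immediate rejections instead of a growing queue:

```python
class FixedWindowLimiter:
    """Minimal model of a 'reject over queue' policy: admit at most
    max_rate requests per one-second window, and reject the rest
    immediately (the analogue of the rule's 503 Too Busy response)."""

    def __init__(self, max_rate):
        self.max_rate = max_rate
        self.window = None   # integer second of the current window
        self.count = 0       # admissions seen in the current window

    def admit(self, now):
        window = int(now)
        if window != self.window:
            # A new one-second window has started; reset the counter.
            self.window, self.count = window, 0
        if self.count < self.max_rate:
            self.count += 1
            return True      # serve the request
        return False         # send 503 Too Busy immediately
```

Feeding 110 requests spread across one second through FixedWindowLimiter(100) admits 100 and rejects 10, mirroring the zeusbench result above: roughly 100 'Good' and 10 'Bad' responses per second.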
Services Director 19.1 introduced a new communications channel to connect to Traffic Manager behind a NAT or Firewall.
Services Director certificates can be updated with a script, to ensure continued access on the secure authenticated communications channel.
When managing the Traffic Manager Virtual Appliance you might need to free some space to perform certain tasks. This is usually most visible when performing upgrades, where you might be presented with the following message:

Analyzing system: failed
ERROR: Not enough space available, need 300000KB, however only xxxxxxKB is available

You can perform the following steps:

If you've previously uploaded a zeus upgrade package (.zpkg file) for a major upgrade, this can now be safely removed.
Remove unnecessary files left over from a previous upgrade: /opt/zeus/.upgrade and /opt/zeus.extract.*
Remove any cached charts from the activity graphs: /opt/zeus/zxtmadmin/docroot/cache/*
If you've uploaded anything to /root that isn't needed any more, delete those files.
Remove old versions of the traffic manager software as described below.
Back up some old log files and remove them from /logs.
Remove any temporary files from the /logs partition: /logs/.tmp/*
Extend the logs partition as described in the Product Documentation - Virtual Appliance Getting Started Guide

Generally, the /logs partition has more free space than the root partition. You should move the upgrade package (.zpkg) and other temporary files to that partition rather than to the root partition.

If all else fails, you can generate a disk space report that lists the largest folders and files on each partition as follows:

# du -ax / | sort -rn | head -50
# du -ax /logs | sort -rn | head -50

Provide this information to Riverbed Technical Support.

Removing Previous Software Versions

When you upgrade the Virtual Appliance, the previous version of the software is retained so that you can revert to it at any point. If you upgrade through a series of minor revisions you might have several older versions of the software still installed. In practice, it is unlikely that you need to retain all of these; a single version that is known to be good should suffice.
Use the following command to list the software versions that are installed in the currently running partition:

# /opt/zeus/zxtm/bin/rollback --delete

You can safely delete old software versions using this interactive command if you are certain that you will not need to roll back to them at some point in the future.

Web Application Firewall

In a manner similar to the Traffic Manager itself, previous versions of the Web Application Firewall are retained following an upgrade. As with the Traffic Manager, you can remove these old software versions to free disk space if you are certain that you will no longer need to roll back to them.

Version 4.5 or later of the Web Application Firewall provides an interface to manage installed versions, which can be accessed via the Traffic Manager administration interface:

Application Firewall > Administration > Cluster Management > Updater > Open Update Center

After logging into the Update Center, use the "Undeploy" action to remove any inactive versions you no longer require.
In many cases, it is desirable to upgrade a virtual appliance by deploying a virtual appliance at the newer version and importing the old configuration. For example, the size of the Traffic Manager disk image was increased in version 9.7, and deploying a new virtual appliance lets a customer take advantage of this larger disk. This article documents the procedure for deploying a new virtual appliance with the old configuration in common scenarios.

These instructions describe how to upgrade and reinstall Traffic Manager appliance instances (either in a cluster or standalone appliances). For instructions on upgrading on other platforms, please refer to Upgrading Traffic Manager.

Upgrading a standalone Virtual Appliance

This process will replace a standalone virtual appliance with another virtual appliance with the same configuration (including migrating network configuration). Note that the Traffic Manager Cloud Getting Started Guide contains instructions for upgrading a standalone EC2 instance from version 9.7 onwards; if upgrading from a version prior to 9.7 and using the Web Application Firewall, these instructions must be followed to correctly back up and restore any firewall configuration.

Make a backup of the traffic manager configuration (see section "System > Backups" in the Traffic Manager User Manual), and export it.
If you are upgrading from a version prior to 9.7 and are using the Web Application Firewall, back up the Web Application Firewall configuration:
- Log on to a command line
- Run /opt/zeus/stop-zeus
- Copy /opt/zeus/zeusafm/current/var/lib/config.db off the appliance.
Shut down the original appliance.
Deploy a new appliance with the same network interfaces as the original.
If you backed up the application firewall configuration earlier, restore it here onto the new appliance, before you restore the traffic manager configuration:
- Copy the config.db file to /opt/zeus/stingrayafm/current/var/lib/config.db (overwriting the original)
- Check that the owner on the config.db file is root, and the mode is 0644.
Import and restore the traffic manager configuration via the UI.
If you have application firewall errors, use the Diagnose page to automatically fix any configuration errors.
Reset the Traffic Manager software.

Upgrading a cluster of Virtual Appliances (except Amazon EC2)

This process will replace the appliances in the cluster, one at a time, maintaining the same IP addresses. As the cluster will be reduced by one at points in the upgrade process, you should ensure that this is carried out at a time when the cluster is otherwise healthy, and that of the n appliances in the cluster, the load can be handled by (n-1) appliances.

Before beginning the process, ensure that any cluster errors have been resolved.
Nominate the appliance which will be the last to be upgraded (call it the final appliance). When any of the other machines needs to be removed from the cluster, it should be done using the UI on this appliance, and when a hostname and port are required to join the cluster, this appliance's hostname should be used.
If you are using the Web Application Firewall, first ensure that vWAF on the final appliance in the cluster is upgraded to the most recent version, using the vWAF updater.
Choose an appliance to be upgraded, and remove the machine from the cluster:
- If it is not the final appliance (nominated in step 2), this should be done via the UI on the final appliance
- If it is the final appliance, the UI on any other machine may be used.
Make a backup of the traffic manager configuration (System > Backups) on the appliance being upgraded, and export the backup.
This backup only contains the machine-specific information for that appliance (networking configuration etc.).
Shut down the appliance, and deploy a new appliance at the new version. When deploying, it needs to be given the identical hostname to the machine it is replacing.
Log on to the admin UI of the new appliance, and import and restore the backup from step 5.
If you are using the Web Application Firewall, accessing the Application Firewall tab in the UI will fail, and there will be an error on the Diagnose page and an 'Update Configuration' button. Click the Update Configuration button once, then wait for the error to clear. The configuration is now correct, but the admin server still needs to be restarted to pick up the configuration:

# $ZEUSHOME/admin/rc restart

Now, upgrade the application firewall on the new appliance to the latest version.
Join into the cluster: for all appliances except the final appliance, you must not select any of the auto-detected existing clusters. Instead, manually specify the hostname and port of the final appliance.
If you are using the Web Application Firewall, there may be an issue where the config on the new machine hasn't synced the vWAF config from the old machine, and clicking the 'Update Application Firewall Cluster Status' button on the Diagnose page doesn't fix the problem. If this happens, first get the clusterPwd from the final appliance:

# grep clusterPwd /opt/zeus/zxtm/conf/zeusafm.conf
clusterPwd = <your cluster pwd>

On the new appliance, edit /opt/zeus/zxtm/conf/zeusafm.conf (with e.g. nano or vi), and replace the clusterPwd with the final appliance's clusterPwd. The moment that file is saved, vWAF should be restarted, and the config should be synced to the new machine correctly.
When you are upgrading the final appliance, you should select the auto-detected existing cluster entry, which should now list all the other cluster peers.
Once a cluster contains multiple versions, configuration changes must not be made until the upgrade has been completed, and 'Cluster conflict' errors are expected until the end of the process. Repeat steps 4-9 until all appliances have been upgraded.

Upgrading a cluster of STM EC2 appliances

Because EC2 licenses are not tied to the IP address, it is recommended that new EC2 instances are deployed into a cluster before removing old instances. This ensures that the capacity of the cluster is not reduced during the upgrade process. This process is documented in the "Creating Traffic Manager Instances on Amazon EC2" chapter in the Traffic Manager Cloud Getting Started Guide. The clusterPwd may also need to be fixed as above.
These instructions describe how to upgrade Traffic Manager AMIs on Amazon EC2. For instructions on upgrading on other platforms, please refer to Upgrading Traffic Manager.

Upgrade strategy

Unlike physical machines or regular virtual appliances, which tend to be very long-lived, EC2 instances are intended to be transient. If an EC2 instance develops a fault, it is easier to terminate it and replace it with a new one than to try to repair it.

Upgrades can be handled in a similar way: instead of upgrading a running instance when a new version of the software is released, it is easier to start an instance of the newer software, migrate the configuration over from the old one, and then terminate the old instance. Using clustering and fault-tolerant Traffic IP addresses, it is possible to upgrade a cluster in place, replacing each traffic manager with one running a newer version of the software, while continuing to serve application traffic.

Upgrade howto

Important: when the cluster is in a mixed state (i.e. the Traffic Managers are using different software versions) do not make any configuration changes. This should remain the case until all traffic manager instances in the cluster are running the upgraded version.

For each Traffic Manager in your cluster, perform the following steps:

Start an instance of the new AMI.
Using the Admin Server, or the user-data pre-configuration parameters, join the new instance to your cluster. If using user-data pre-configuration, you can set the new instance to join the Traffic IP Groups by setting join_tips=y, but do not use this option if there are multiple Traffic IP groups configured in the cluster. Also, for Amazon VPC instances, it will be ignored if the instance doesn't have a secondary IP address assigned whilst launching. Note that per-node hostname mappings (configured in the System > Networking page) will not be migrated automatically - you must set these manually on each new instance.
Terminate one of the old instances in your cluster.   Repeat these steps until all the traffic managers in your cluster have been replaced. Replace instances one by one - do not terminate an old instance until its replacement has successfully joined the cluster.   Upgrading to new product functionality   The instructions described in the previous section can also be used to change the product version you are running. For example, you can use this method to upgrade from one Traffic Manager to another that offers a higher bandwidth capacity or more functionality.  You can also use this method to upgrade from one Amazon instance size to another.   Note that you cannot use this method to downgrade the product you are currently running to one with fewer features, because your current configuration will not be applicable to a lower-featured instance. In this case, you must create a new cluster of simpler Traffic Manager instances and migrate the relevant configuration from the old cluster manually.
These instructions describe how to upgrade Traffic Manager Virtual Appliance instances. For instructions on upgrading on other platforms, please refer to Upgrading Traffic Manager.

Before you start

There are a few things that have to be checked before an upgrade is attempted to make sure it goes smoothly:

Memory requirements: make sure the machine has enough memory. Traffic Manager requires at the very least 1GB of RAM; 2GB or more are recommended. If the traffic manager to be upgraded has less memory, please assign more memory to the virtual machine.
Disk space requirements: ensure there is enough free disk space. For the upgrade to succeed, at least 500MB must be free on the root partition, and at least 300MB on the /logs partition. The unix command df shows how much space is available, for example:

root@stingray-1/ # df -k
Filesystem 1K-blocks    Used Available Use% Mounted on
/dev/sda5     1426384  839732    514764  62% /
varrun         517680      44    517636   1% /var/run
varlock        517680       0    517680   0% /var/lock
udev           517680      48    517632   1% /dev
devshm         517680       0    517680   0% /dev/shm
/dev/sda1      139985    8633    124125   7% /boot
/dev/sda8      621536   17516    604020   3% /logs

If the disks are too full, you have to free up some space. Please follow the suggestions in the topic Freeing disk space on the Virtual Appliance.

Upgrading the Virtual Appliance

Traffic Manager software is stored on one of two primary partitions, and log files are stored on a separate disk partition.

Full upgrades are required when you upgrade to a new major or minor version number, such as from 18.2 to 19.1, or 19.1 to 19.2. Full upgrades include a new operating system installation. A full upgrade is installed in the unused primary partition, configuration (including the /root directory) is migrated across, and the bootloader is updated to point to the new partition.
You can edit the bootloader configuration to fall back to the other primary partition if you need to roll back to the previous instance.
Incremental upgrades are required when you install a release with a new revision number, such as from 9.1 to 9.1r1. The new software is added to the currently active primary partition. You can use the 'rollback' script to make a previous revision active.

Important note

If you wish to upgrade from one major.minor version to a later major.minor version with a later revision, you will need to upgrade in two steps: the full upgrade, and the subsequent incremental upgrade.

For example, suppose that you are running version 17.3r2 and you wish to upgrade to version 18.2r1. You must perform the following two steps:

Perform a full upgrade from your current version to the closest major.minor version, i.e. a full upgrade to 18.2.
Perform a subsequent incremental upgrade from 18.2 to 18.2r1.

Performing a Full Upgrade

Upgrades between major and minor versions (e.g. 18.2 to 19.1, or 19.1 to 19.2) can either be performed via the Administration Server (when upgrading from version 9.0 or later) or using a command-line script (z-upgrade-appliance) to install the new version into a spare section of the hard disk. This process involves one reboot and the downtime associated with that reboot.

Any configuration changes made in the existing version after the upgrade has been run won't be preserved when the new version is started; you should reboot the appliance as soon as possible (using the System -> Reboot button in the UI or using the 'reboot' command).

Before upgrading, it is prudent to have a backup of your configuration.

Command line method

Download the installation zpkg package from the download site. This will be a file called something like ZeusTM_91_Appliance-x86_64.zpkg.
Copy the file onto the appliance to the /logs partition using an scp or sftp client (e.g. psftp).
Log in to the appliance using an ssh client (PuTTY is a good choice) or the console; you can log in using any username that is in the admin group.
Check that the disk space requirements explained above are still fulfilled after you've uploaded the package.
Once connected to the console of the appliance, run: z-upgrade-appliance <filename>
Confirm that you want to upgrade the appliance.

Administration Server (upgrading from 9.0 or later)

Download the tgz upgrade package from the download site, go to the System -> Upgrade page, upload the upgrade tgz package, and follow the instructions.

Once complete, the current configuration will be migrated to the newly installed version, and this version will be automatically selected on the next reboot.

Performing an Incremental Upgrade

Upgrades between revisions of the same product version (e.g. 18.2 to 18.2r2) are performed using the Administration Server. Download the tgz upgrade package from the download site, go to the System -> Upgrade page, upload the upgrade tgz package, and follow the instructions.

You will need to complete this process for each appliance in your cluster.

Expected downtime for an upgrade will be a couple of seconds while the Traffic Manager software is restarted. On very rare occasions, it will be necessary to reboot the appliance to complete the upgrade. The user interface will inform you if this is necessary when the upgrade is complete. You should ensure that the appliance is rebooted at the most appropriate time.
These instructions describe how to upgrade Traffic Manager Software instances. For instructions on upgrading on other platforms, please refer to Upgrading Traffic Manager.

You can upgrade a running copy of the Traffic Manager software with very little downtime. There are two ways to perform the upgrade: using the Administration Interface, or using the command line.

The upgrade procedure:

Installs the new software into a different, version-controlled directory;
Copies the configuration from the running version of the software;
Stops the running software, swaps some symlinks and then starts the new software.

The downtime is rarely more than one or two seconds. If necessary, you can then install a new license key file using the Traffic Manager Admin Server, for example, to enable new product features.

Upgrading via the Administration Interface

Obtain the new installation package (ZeusTM_ProductVersion_OS.tgz);
Navigate to the System > Traffic Managers page on the Admin Interface of the Traffic Manager you intend to upgrade;
Click the Upgrade button and upload the new installation package;
Follow the instructions to apply the software upgrade.

Upgrading via the command line

Unpack your new software distribution file (ZeusTM_ProductVersion_OS.tgz) on the server.
Become root (assuming your existing installation is as root), and move into the directory that has just been created by extracting the distribution file.
Run the ./zinstall command.

The upgrade program automatically stops your existing version of Traffic Manager, upgrades it, and restarts it, keeping all your existing configuration.

Upgrading a Cluster

If you have a cluster of Traffic Managers, you can upgrade each one in turn. Do not make any configuration changes until you have upgraded the entire cluster to the same software version.
To roll back to a previous software version

You can use the 'rollback' script to revert to a previously installed version of Traffic Manager software, or to roll forward to a later version that you installed previously:

Become 'root' and run $ZEUSHOME/zxtm/bin/rollback.

You will then be shown a list of the different software versions that you have installed, and asked to choose which one you would like to roll back or forward to. When you roll back (or forward) to a different version, the software will use the configuration that was last active with that version. For example, if you upgraded from version X.1 to version X.2 two weeks ago, then roll back to version X.1, it will use the configuration that version X.1 last used two weeks ago.
We release major and minor updates to Traffic Manager on a periodic basis, and you are strongly advised to maintain production instances of Traffic Manager on recent releases for support, performance, stability and security reasons.

Where to find updates

Software and Virtual Appliance updates are posted on the http://my.pulsesecure.net site, and updates are announced on the community pages, such as this article.

The update process is designed to be straightforward and to minimize disruption, although a brief period of downtime is inevitable. The process depends on the form factor of your Traffic Manager device:

Upgrading Traffic Manager Software
Upgrading Traffic Manager Virtual Appliance

Am I running software or virtual appliance?

You can easily verify whether you're running the software-only install (installed on your Linux/Solaris host) or a Virtual Appliance (running on VMware, Xen or another platform) by checking the header of an admin server page:

Software install - identifies itself as "Traffic Manager 4000 VH"
Virtual Appliance - identifies itself as "Traffic Manager Virtual Appliance 4000 VH"

Updating Cloud Instances of Traffic Manager

The Amazon EC2 instances of Traffic Manager are provided and supported directly by Pulse:

Upgrading Traffic Manager on Amazon EC2

For third-party instances of Traffic Manager, please refer to your cloud provider.

More information

For more detailed information on the installation and upgrade process, please refer to the relevant Getting Started guide in the Product Documentation
This article explains how to use Traffic Manager's REST Control API using the excellent requests Python library.

There are many ways to install the requests library. On my test client (Mac OS X), the following was sufficient:

$ sudo easy_install pip
$ sudo pip install requests

Resources

The REST API gives you access to the Traffic Manager configuration, presented in the form of resources. The format of the data exchanged using the RESTful API depends on the type of resource being accessed:

Data for configuration resources, such as virtual servers and pools, is exchanged in JSON format using the MIME type "application/json". When getting data on a resource with a GET request, the data is returned in JSON format and must be deserialized or decoded into a Python data structure. When adding or changing a resource with a PUT request, the data must be serialized or encoded from a Python data structure into JSON format.
Files, such as rules and those in the extra directory, are exchanged in raw format using the MIME type "application/octet-stream".

Working with JSON and Python

The json module provides functions for JSON serializing and deserializing. To take a Python data structure and serialize it into JSON format, use json.dumps(); to deserialize a JSON formatted string into a Python data structure, use json.loads().

Working with a RESTful API and Python

To make the programming easier, the program examples that follow utilize the requests library as the REST client. To use the requests library, you first set up a requests session as follows, replacing <userid> and <password> with the appropriate values:

client = requests.Session()
client.auth = ('<userid>', '<password>')
client.verify = False

The last line prevents the client from verifying that the certificate used by Traffic Manager is from a certificate authority, so that the self-signed certificate used by Traffic Manager will be allowed.
Once the session is set up, you can make GET, PUT and DELETE calls as follows:

response = client.get(url)
response = client.put(url, data = data, headers = headers)
response = client.delete(url)

The URL for the RESTful API will be of the form:

https://<STM hostname or IP>:9070/api/tm/1.0/config/active/

followed by a resource type, or a resource type and resource. For example, to get a list of all the pools from the Traffic Manager instance stingray.example.com, it would be:

https://stingray.example.com:9070/api/tm/1.0/config/active/pools

And to get the configuration information for the pool "testpool", the URL would be:

https://stingray.example.com:9070/api/tm/1.0/config/active/pools/testpool

For most Python environments, it will probably be necessary to install the requests library. For some Python environments it may also be necessary to install the httplib2 module.

Data Structures

JSON responses from a GET or PUT are deserialized into a Python dictionary that always contains one element. The key of this element will be:

'children' for lists of resources. The value will be a Python list, with each element in the list being a dictionary with the key 'name' set to the name of the resource and the key 'href' set to the URI of the resource.
'properties' for configuration resources. The value will be a dictionary in which each key/value pair is a section of properties, with the key set to the name of the section and the value being a dictionary containing the configuration values as key/value pairs. Configuration values can be scalars, lists or dictionaries.

Please see Feature Brief: Traffic Manager's RESTful Control API for examples of these data structures; a tool such as the Chrome REST Console can be used to see what the actual data looks like.

Read More

The REST API Guide in the Product Documentation
Feature Brief: Traffic Manager's RESTful Control API
Collected Tech Tips: Using the RESTful Control API
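Putting the pieces together, here is a short sketch of listing pool names from the 'children' document described above. The fetch itself needs a live Traffic Manager, so it is shown only as a comment; the parsing step is factored into a helper that can be exercised offline, and the response body built below is an illustrative example, not captured from a real instance:

```python
import json

# Example base URL; substitute your own Traffic Manager host.
BASE = 'https://stingray.example.com:9070/api/tm/1.0/config/active/'

def pool_names(children_doc):
    """Extract resource names from a deserialized 'children' document."""
    return [child['name'] for child in children_doc['children']]

# Against a live Traffic Manager you would fetch and decode the document
# like this (client is the requests.Session set up earlier):
#   response = client.get(BASE + 'pools')
#   names = pool_names(response.json())

# Offline, the same helper works on a hand-built example response:
listing = json.loads(
    '{"children": ['
    '{"name": "testpool", "href": "/api/tm/1.0/config/active/pools/testpool"},'
    '{"name": "webpool", "href": "/api/tm/1.0/config/active/pools/webpool"}'
    ']}'
)
print(pool_names(listing))  # → ['testpool', 'webpool']
```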
Looking for Installation and User Guides for Pulse vADC? User documentation is no longer included in the software download package for Pulse vTM; the documentation can now be found on the Pulse Techpubs pages.
We have created dedicated installation and configuration guides for each type of deployment option, as part of the complete documentation set for Pulse vTM.
In this release, Pulse Secure Virtual Traffic Manager has additional tools to help with intelligent load balancing of Pulse Connect Secure (PCS) and Pulse Policy Secure (PPS). In addition, new global settings for Session Persistence allow for simpler workload management, with timeout of unused session entries in the persistence cache table.

Intelligent LB for PCS/PPS - Traffic Manager now supports intelligent load-balancing for Pulse Connect Secure VPN gateways and Pulse Policy Secure network access control. This capability uses a new built-in service discovery plugin to discover PCS/PPS cluster nodes, and can optimize license usage across cluster nodes by directing new sessions based on available license capacity.

Session Persistence Timeouts - Closer control over the persistence cache in Traffic Manager makes it easier to redistribute workload following node reconfiguration or failure, by providing all session persistence entries with an optional lifetime. After an entry expires, it is deleted from the persistence cache. A global timeout value can be set for each of the three persistence methods: Source IP, J2EE and Universal persistence. Note that the timeout is measured from last use, rather than first use. New SNMP monitors are also available to help track session expiry.

Long-Term Support release - For customers who prefer longer support cycles to support their operational model, Pulse Secure is identifying Pulse vTM 19.2 as an LTS (Long Term Support) release. As a result, support for Pulse vTM 19.2 will be available for three years after the release date.

For more information, please refer to the release notes, available on the download portal. A complete set of user documentation is also available on http://pulsesecure.net/vadc-docs, including getting started guides, installation, configuration and API reference documentation.
Overview list of SteelApp Videos
This video provides an overview on how you can use FlyScript and Stingray to automatically add the JavaScript snippet required by Riverbed OPNET BrowserMetrix to web pages.
Vinay Reddy demonstrates Riverbed's Stingray Traffic Manager virtual application delivery controller in a VMware vFabric Application Director environment.
This video gives a general overview of Load Balancing with Stingray as well as recommendations on what Load Balancing algorithms to use depending on the situation.
Video: Introduction to TrafficScript
This video discusses what SSL decryption with Stingray is, why to use it, and how to configure it.