Pulse Secure vADC

The following code uses Stingray's RESTful API to list all the pools defined for a cluster and, for each pool, the nodes defined for that pool, including draining and disabled nodes. The code is written in Python. This example builds on the previous listpools.py example. The program does a GET request for the list of pools and then, while looping through the list of pools, a GET is done for each pool to retrieve the configuration parameters for that pool.

listpoolnodes.py

```python
#!/usr/bin/env python

import requests
import json
import sys

print "Pools:\n"

url = 'https://stingray.example.com:9070/api/tm/1.0/config/active/pools'
jsontype = {'content-type': 'application/json'}
client = requests.Session()
client.auth = ('admin', 'admin')
client.verify = False

try:
    # Do the HTTP GET to get the list of pools. Only this client.get() is
    # wrapped in a try block: if this one connects without error, a connection
    # error on a later client.get() would be an unexpected exception.
    response = client.get(url)
except requests.exceptions.ConnectionError:
    print "Error: Unable to connect to " + url
    sys.exit(1)

data = json.loads(response.content)

if response.status_code == 200:
    if data.has_key('children'):
        pools = data['children']
        for i, pool in enumerate(pools):
            poolName = pool['name']
            # Do the HTTP GET to get the properties of this pool
            response = client.get(url + "/" + poolName)
            poolConfig = json.loads(response.content)
            if response.status_code == 200:
                # Since we are getting the properties of a pool, we expect the
                # first element to be 'properties'
                if poolConfig.has_key('properties'):
                    # The value of the key 'properties' is a dictionary of
                    # property sections; everything this program cares about
                    # is in the 'basic' section:
                    #   nodes    - all active or draining nodes in this pool
                    #   draining - all draining nodes in this pool
                    #   disabled - all disabled nodes in this pool
                    nodes = poolConfig['properties']['basic']['nodes']
                    draining = poolConfig['properties']['basic']['draining']
                    disabled = poolConfig['properties']['basic']['disabled']
                    print pool['name']
                    print "    Nodes: ",
                    for n, node in enumerate(nodes):
                        print node + " ",
                    print ""
                    if len(draining) > 0:
                        print "    Draining Nodes: ",
                        for n, node in enumerate(draining):
                            print node + " ",
                        print ""
                    if len(disabled) > 0:
                        print "    Disabled Nodes: ",
                        for n, node in enumerate(disabled):
                            print node + " ",
                        print ""
                else:
                    print "Error: No properties found for pool " + poolName
                print ""
            else:
                print "Error getting pool config: URL=%s Status=%d Id=%s: %s" % \
                      (url + "/" + poolName, response.status_code,
                       poolConfig['error_id'], poolConfig['error_text'])
    else:
        print 'Error: No children found'
else:
    print "Error getting pool list: URL=%s Status=%d Id=%s: %s" % \
          (url, response.status_code, data['error_id'], data['error_text'])
```

Running the example

This code was tested with Python 2.7.3 and version 1.1.0 of the requests library.

Run the Python script as follows:

```
$ listpoolnodes.py
Pools:

Pool1
    Nodes:  192.168.1.100 192.168.1.101
    Draining:  192.168.1.101
    Disabled:  192.168.1.102

Pool2
    Nodes:  192.168.1.103 192.168.1.104
```

Read More

REST API Guide in the vADC Product Documentation
Tech Tip: Using the RESTful Control API
Collected Tech Tips: Using the RESTful Control API
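For reference, the JSON shapes the script consumes can be sketched with illustrative data. The pool names and node addresses below are made up (real data comes from the pools resource and from pools/&lt;name&gt;), and the sketch uses Python 3 syntax even though the script above targets Python 2:

```python
import json

# Illustrative shape of the response to GET .../config/active/pools
pool_list = json.loads('''{"children": [
    {"name": "Pool1", "href": "/api/tm/1.0/config/active/pools/Pool1"},
    {"name": "Pool2", "href": "/api/tm/1.0/config/active/pools/Pool2"}]}''')

# Illustrative shape of the response to GET .../config/active/pools/Pool1
pool_config = {
    "properties": {
        "basic": {
            "nodes": ["192.168.1.100:80", "192.168.1.101:80"],
            "draining": ["192.168.1.101:80"],
            "disabled": []
        }
    }
}

def summarise(name, config):
    # Pull out the same three node lists the script prints for each pool.
    basic = config["properties"]["basic"]
    return {"pool": name, "nodes": basic["nodes"],
            "draining": basic["draining"], "disabled": basic["disabled"]}

names = [p["name"] for p in pool_list["children"]]
print(names)  # → ['Pool1', 'Pool2']
print(summarise("Pool1", pool_config))
```

The top-level list resource only carries names and hrefs; the per-pool GET is what returns the properties/basic section the script reads.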
The following code uses the RESTful API to list all the pools defined for a cluster and, for each pool, the nodes defined for that pool, including draining and disabled nodes. The code is written in TrafficScript. This example builds on the previous stmrest_listpools example. The rule does a GET request for the list of pools and then, while looping through the list of pools, a GET is done for each pool to retrieve the configuration parameters for that pool. A subroutine in stmrestclient is used to do the actual RESTful API call. stmrestclient is attached to the article Tech Tip: Using the RESTful Control API with TrafficScript - Overview.

stmrest_listpoolnodes

```
################################################################################
# stmrest_listpoolnodes
#
# This rule lists the names of all pools and also the nodes, draining nodes
# and disabled nodes in each pool.
#
# To run this rule add it as a request rule to an HTTP Virtual Server and in a
# browser enter the path /rest/listpoolnodes.
#
# It uses the subroutines in stmrestclient
################################################################################

import stmrestclient;

if (http.getPath() != "/rest/listpoolnodes") break;

$resource = "pools";
$accept = "json";
$html = "<br><b>Pools:</b><br>";

$response = stmrestclient.stmRestGet($resource, $accept);
if ($response["rc"] == 1) {
   $pools = $response["data"]["children"];
   foreach ($pool in $pools) {
      $poolName = $pool["name"];
      $response = stmrestclient.stmRestGet($resource . "/" . string.escape($poolName), $accept);
      if ($response["rc"] == 1) {
         $poolConfig = $response["data"];
         $nodes = $poolConfig["properties"]["basic"]["nodes"];
         $draining = $poolConfig["properties"]["basic"]["draining"];
         $disabled = $poolConfig["properties"]["basic"]["disabled"];
         $html = $html . "<br>" . $poolName . ":<br>";
         $html = $html . "<br> Nodes: ";
         foreach ($node in $nodes) {
            $html = $html . $node . " ";
         }
         $html = $html . "\n";
         if (array.length($draining) > 0) {
            $html = $html . "<br> Draining Nodes: ";
            foreach ($node in $draining) {
               $html = $html . $node . " ";
            }
            $html = $html . "\n";
         }
         if (array.length($disabled) > 0) {
            $html = $html . "<br> Disabled Nodes: ";
            foreach ($node in $disabled) {
               $html = $html . $node . " ";
            }
            $html = $html . "\n";
         }
         $html = $html . "<br>\n";
      } else {
         $html = $html . "There was an error getting the pool configuration for pool " . $poolName . ": " . $response['info'];
      }
   }
} else {
   $html = $html . "<br>There was an error getting the pool list: " . $response['info'];
}

http.sendResponse("200 OK", "text/html", $html, "");
```

Running the example

This rule should be added as a request rule to a Virtual Server and run with the URL:

http://<hostname>/rest/listpoolnodes

The output will look like:

```
Pools:

Pool1
    Nodes:  192.168.1.100 192.168.1.101
    Draining:  192.168.1.101
    Disabled:  192.168.1.102

Pool2
    Nodes:  192.168.1.103 192.168.1.104
```

Read More

REST API Guide in the vADC Product Documentation
Tech Tip: Using the RESTful Control API with TrafficScript - Overview
Feature Brief: RESTful Control API
Collected Tech Tips: Using the RESTful Control API
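The HTML fragment the rule assembles can be sketched in Python with illustrative pool data standing in for the stmrestclient REST calls (the pool names and addresses below are made up):

```python
# Made-up pool data in the same shape as the per-pool 'basic' properties.
pools = {
    "Pool1": {"nodes": ["192.168.1.100", "192.168.1.101"],
              "draining": ["192.168.1.101"],
              "disabled": ["192.168.1.102"]},
    "Pool2": {"nodes": ["192.168.1.103", "192.168.1.104"],
              "draining": [],
              "disabled": []},
}

def render(pools):
    # Mirror the rule: always print Nodes, but only print the Draining and
    # Disabled sections when those lists are non-empty.
    html = "<br><b>Pools:</b><br>"
    for name, cfg in sorted(pools.items()):
        html += "<br>" + name + ":<br>"
        html += "<br> Nodes: " + " ".join(cfg["nodes"]) + "\n"
        if cfg["draining"]:
            html += "<br> Draining Nodes: " + " ".join(cfg["draining"]) + "\n"
        if cfg["disabled"]:
            html += "<br> Disabled Nodes: " + " ".join(cfg["disabled"]) + "\n"
        html += "<br>\n"
    return html

fragment = render(pools)
print(fragment)
```

With this data, only Pool1 produces Draining and Disabled sections, matching the conditional blocks in the rule.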
Pulse Secure vADC now offers support for applications deployed in Kubernetes  
This document provides step-by-step instructions on how to set up Pulse Virtual Traffic Manager for Microsoft Exchange 2016. Note that this deployment guide is out of date and is expected to be updated soon.
This document covers updating the built-in GeoIP database. See TechTip: Extending the Brocade vTM GeoIP database for instructions on adding custom entries to the database.  
Welcome to Pulse Secure Application Delivery solutions!  
In this release, Pulse Secure Services Director introduces a simpler upgrade path for using the advanced Analytics Application in Services Director.
The Pulse Services Director makes it easy to manage a fleet of virtual ADC services, with each application supported by dedicated vADC instances, such as Pulse Virtual Traffic Manager. This table summarises the compatibility between supported versions of Services Director and Virtual Traffic Manager.
We have made it easier to see which features are offered in each model of Pulse Virtual Traffic Manager: there are two feature groups, common to both the fixed-size licenses for Pulse vTM and the capacity-based licensing scheme using the Pulse Services Director.
In this release, Pulse vTM focuses on platform enhancements for cloud, open-source and hardware platforms, together with new security features.
The Pulse vADC Community Edition is a free-to-download, free-to-use, full-featured virtual application delivery controller (ADC) solution, which you can use immediately to build smarter applications.  
Looking for Installation and User Guides for Pulse vADC? User documentation is no longer included in the Pulse vTM software download package; it can now be found on the Pulse Techpubs pages.
We have created dedicated installation and configuration guides for each type of deployment option, as part of the complete documentation set for Pulse vTM.
Need more capacity for your applications? Technical support options? It’s easy to upgrade Pulse vADC!  
In a recent conversation, a user wished to use Stingray's rate shaping capability to throttle back the requests to one part of his web site that was particularly sensitive to high traffic volumes (think a CGI, JSP Servlet, or other type of dynamic application). This article describes how you might go about doing this, testing and implementing a suitable limit using Service Level Monitoring, Rate Shaping and some TrafficScript magic.

The problem

Imagine that part of your website is particularly sensitive to traffic load and is prone to overloading when a crowd of visitors arrives. Connections queue up, response time becomes unacceptable and it looks like your site has failed.

If your website were a tourist attraction or a club, you'd employ a gatekeeper to manage entry rates. As the attraction began to fill up, you'd employ a queue to limit entry, and if the queue got too long, you'd want to encourage new arrivals to leave and return later rather than join the queue.

This is more or less the solution we can implement for a web site. In this worked example, we're going to single out a particular application (named search.cgi) whose traffic we want to control, and let all other traffic (typically for static content, etc.) through without any shaping.

The approach

We'll first measure the maximum rate at which the application can process transactions, and use this value to determine the rate limit we want to impose when the application begins to run slowly.

Using Stingray's Service Level Monitoring classes, we'll monitor the performance (response time) of the search.cgi application. If the application begins to run slower than normal, we'll deploy a queuing policy that rate-limits new requests to the application. We'll monitor the queue and send a 'please try later' message when the rate limit is met, rather than admitting users to the queue and forcing them to wait.
Our goal is to maximize utilization (supporting as many transactions as possible) but minimize response time, returning a 'please wait' message rather than queueing a user.

Measuring performance

We first use zeusbench to determine the optimal performance that the application can achieve. We perform several runs, increasing the concurrency until the performance (responses per second) stabilizes at a consistent level:

zeusbench -c 5 -t 20 http://host/search.cgi
zeusbench -c 10 -t 20 http://host/search.cgi
zeusbench -c 20 -t 20 http://host/search.cgi
... etc

From this, we conclude that the maximum number of transactions per second that the application can comfortably sustain is 100.

We then use zeusbench to send transactions at that rate (100 per second) and verify that performance and response times are stable:

zeusbench -r 100 -t 20 http://host/search.cgi

Our desired response time can be deduced to be approximately 20 ms.

Now we perform the 'destructive' test, to elicit precisely the behaviour we want to avoid. Use zeusbench again to send requests to the application at higher than the sustainable transaction rate:

zeusbench -r 110 -t 20 http://host/search.cgi

Observe how the response time for the transactions steadily climbs as requests begin to be queued, while the successful transaction rate falls steeply. Eventually, when the response time falls past acceptable limits, transactions are timed out and the service appears to have failed.

This illustrates how sensitive a typical application can be to floods of traffic that overwhelm it, even for just a few seconds. The effects of the flood can last for tens of seconds afterwards as the connections complete or time out.

Defining the policy

We wish to implement the following policy:

If all transactions complete within 50 ms, do not attempt to shape traffic.
If some transactions take more than 50 ms, assume that we are in danger of overload. Rate-limit traffic to 100 requests per second, and if requests exceed that rate limit, send back a '503 Too Busy' message rather than queuing them.
Once transaction time comes down to less than 50 ms, remove the rate limit.

Our goal is to repeat the previous zeusbench test, showing that the maximum transaction rate can be sustained within the desired response time, and that any extra requests receive an error message quickly rather than being queued.

Implementing the policy

The Rate Class

Create a rate shaping class named "Search limit" with a limit of 100 requests per second.

The Service Level Monitoring class

Create a Service Level Monitoring class named "Search timer" with a target response time of 50 ms.

If desired, you can use the Activity monitor to chart the percentage of requests that conform (i.e. complete within 50 ms) while you conduct your zeusbench runs. You'll notice a strong correlation between these figures and the increase in response time figures reported by zeusbench.

The TrafficScript rule

Now use these two classes with the following TrafficScript request rule:

```
# We're only concerned with requests for /search.cgi
$url = http.getPath();
if ( $url != "/search.cgi" ) break;

# Time this request using the Service Level Monitoring class
connection.setServiceLevelClass( "Search timer" );

# Test if any of the recent requests fell outside the desired SLM threshold
if ( slm.conforming( "Search timer" ) < 100 ) {
   if ( rate.getBacklog( "Search limit" ) > 0 ) {
      # To minimize response time, always send a 503 Too Busy response if the
      # request exceeds the configured rate of 100/second.
      # You could also use http.redirect() to a more pleasant 'sorry' page, but
      # 503 errors are easier to monitor when testing with ZeusBench
      http.sendResponse( "503 Too busy", "text/html",
         "<h1>We're too busy!!!</h1>",
         "Pragma: no-cache" );
   } else {
      # Shape the traffic to 100/second
      rate.use( "Search limit" );
   }
}
```

Testing the policy

Rerun the 'destructive' zeusbench run that produced the undesired behaviour previously:

zeusbench -r 110 -t 20 http://host/search.cgi

Observe that:

Stingray processes all of the requests without excessive queuing; the response time stays within desired limits.
Stingray typically processes 110 requests per second.
There are approximately 10 'Bad' responses per second (these are the 503 Too Busy responses generated by the rule), so we can deduce that the remaining (approximately) 100 requests were served correctly.

These tests were conducted in a controlled environment, on an otherwise-idle machine that was not processing any other traffic. You could reasonably expect much more variation in performance in a real-world situation, and would be advised to set the rate class to a lower value than the experimentally proven maximum.

In a real-world situation, you would probably choose to redirect a user to a 'sorry' page rather than returning a '503 Too Busy' error. However, because ZeusBench counts 4xx and 5xx responses as 'Bad', it is easy to determine how many requests complete successfully and how many return the 'sorry' response.

For more information on using ZeusBench, take a look at the Introducing Zeusbench article.
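The shedding behaviour the policy produces can be sketched outside Stingray with a simple sliding-window model. This is not the product's rate class implementation, just an illustration of "admit up to 100 per second, reject the rest with a 503"; the class and constants are made up:

```python
import collections

RATE_LIMIT = 100   # requests per second, mirroring the "Search limit" class
WINDOW = 1.0       # window length in seconds

class RateShaper:
    """Sliding-window sketch of 'shed with a 503 once the rate is exceeded'."""
    def __init__(self, limit, window=WINDOW):
        self.limit = limit
        self.window = window
        self.stamps = collections.deque()  # admission times inside the window

    def admit(self, now):
        # Discard admissions that have aged out of the window.
        while self.stamps and now - self.stamps[0] >= self.window:
            self.stamps.popleft()
        if len(self.stamps) < self.limit:
            self.stamps.append(now)
            return "200 OK"
        return "503 Too busy"

shaper = RateShaper(RATE_LIMIT)
# Offer 110 requests spread across one second, as in the zeusbench -r 110 test.
results = [shaper.admit(i / 110.0) for i in range(110)]
print(results.count("200 OK"), results.count("503 Too busy"))  # → 100 10
```

The counts match the observation above: roughly 100 requests served and 10 rejected per second, with rejections returned immediately instead of queuing.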
For versions of the Traffic Manager Appliance before 9.7, we support customers installing software only via our standard APIs/interfaces (using extra files, custom action scripts).

This constraint has been relaxed at version 9.7. We still do not support customers modifying the tested software shipped with the appliance, but we do allow installation of additional software.

Examples of where this might be useful include:

Installing monitoring agents that customers use to monitor the rest of their infrastructure (e.g. Nagios)
Installing software such as BIND to avoid having to deploy an extra host when setting up GLB.

Operating system

Traffic Manager virtual appliances use a customized build of Ubuntu, with an optimized kernel from which some unused features have been removed - check the latest release notes for details of the build included in your version.

What you may change

You may install additional software not shipped with the appliance, but note that some Ubuntu packages may rely on kernel features not available on the appliance.

You may modify configuration not managed by the appliance.

What you may not change

You may not install a different kernel.

You may not install different versions of any Debian packages that were installed on the appliance as shipped, nor remove any of these packages (see the licence acknowledgements doc for a list).

You may not directly modify configuration that is managed from the traffic manager (e.g. sysctl values, network configuration).

You may not change configuration explicitly set by the appliance (usually marked with a comment containing ZOLD or BEGIN_STINGRAY_BLOCK).

What happens when you need support

You should mention any additional software you have installed when requesting support; the Technical Support Report will also contain information about it.
If the issue is found to be caused by interaction with the additional software we will ask you to remove it, or to seek advice or a remedy from its supplier.   What happens on reset or upgrade   z-reset-to-factory-defaults will not remove additional software but may rewrite some system configuration files.   An incremental upgrade may upgrade some installed packages, and may rewrite system configuration files.   A full upgrade will install a fresh appliance image on a separate disk partition, and will not copy additional software or configuration changes across. The /logs partition will be preserved.   Note that future appliance versions may change the set of installed packages, or even the underlying operating system.
Pulse Secure vADC solutions are supported on Google Cloud Platform, with hourly billing options for applications that need to scale on demand to match varying workloads. A range of Pulse Secure Virtual Traffic Manager (Pulse vTM) editions are available, including options for the Pulse vTM Developer Edition and Pulse Secure Virtual Web Application Firewall (Pulse vWAF), available as both a virtual machine and as a software installation on a Linux virtual machine.

This article describes how to quickly create a new Pulse vTM instance through the Google Cloud Launcher. For additional information about the use and configuration of your Pulse vTM instance, see the product documentation available at www.pulsesecure.net/vadc-docs.

Launching a Pulse vTM Virtual Machine Instance

To launch a new instance of the Pulse vTM virtual machine, use the GCE Cloud Launcher Web site. Type the following URL into your Web browser:

https://cloud.google.com/launcher

Browse or use the search tool to locate the Pulse Secure package applicable to your requirements, then click the package icon to see the package detail screen.

To deploy a new Pulse vTM instance:

1. To start the process of deploying a new instance, click Launch on Compute Engine.
2. Type an identifying name for the instance, select the image version, then select the desired geographic zone and machine type. Individual zones might have differing computing resources available and specific access restrictions. Contact your support provider for further details.
3. Ensure the boot disk corresponds to your computing resource requirements. Pulse Secure recommends not changing the default disk size, as this might affect the performance of your Pulse vTM.
4. By default, GCE creates firewall rules to allow HTTP and HTTPS traffic, and to allow access to the Web-based Pulse vTM Admin UI on TCP port 9090. To instead restrict access to these services, untick the corresponding firewall checkboxes.

Note: If you disable access to TCP port 9090, you cannot access the Pulse vTM Admin UI to configure the instance.

5. If you want to use IP Forwarding with this instance, click More and set IP forwarding to "On".
6. Pulse vTM needs access to the Google Cloud Compute API, as indicated in the API Access section. Keep this option enabled to ensure your instance can function correctly.
7. Click Deploy to launch the Pulse vTM instance.

The Google Developer Console confirms that your Pulse vTM instance is being deployed.

Next Steps

After your new instance has been created, you can proceed to configure your Pulse vTM software through its Admin UI.

To access the Admin UI for a successfully deployed instance, click Log into the admin panel.

When you connect to the Admin UI for the first time, Pulse vTM presents the Initial Configuration wizard. This wizard captures the networking, date/time, and basic system settings needed by your Pulse vTM software to operate normally.

For full details of the configuration process, and for instructions on performing various other administrative tasks, see the Cloud Services Installation and Getting Started Guide.
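For scripted deployments, a roughly equivalent setup can be sketched with the gcloud CLI. The instance name, zone, machine type and image placeholders below are illustrative; the actual image comes from the Pulse Secure listing in the Cloud Launcher, so treat this as an outline rather than a copy-paste recipe:

```shell
# Placeholder values - substitute your own project, zone and the image
# published in the Pulse Secure Cloud Launcher listing.
gcloud compute instances create my-vtm-instance \
    --zone us-central1-a \
    --machine-type n1-standard-2 \
    --image-family YOUR_VTM_IMAGE_FAMILY \
    --image-project YOUR_IMAGE_PROJECT \
    --can-ip-forward \
    --scopes compute-rw \
    --tags vtm

# Allow HTTP/HTTPS traffic and the Web-based Admin UI on TCP port 9090
# (mirroring step 4 above), restricted to instances carrying the "vtm" tag.
gcloud compute firewall-rules create allow-vtm-admin \
    --allow tcp:80,tcp:443,tcp:9090 \
    --target-tags vtm
```

The `--can-ip-forward` flag corresponds to step 5 and the `compute-rw` scope to the API access requirement in step 6.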
In this release, Pulse vTM offers enhanced support for DevOps application teams looking for closer integration and automation in customized cloud deployments.
In this release, Pulse Services Director offers enhanced Analytics support. Main highlights include enhanced chart formats and telemetry capability.  
In this first article, Dmitri covers the basics of setting up Terraform for Pulse vADC.