
Pulse Secure vADC

Looking for Installation and User Guides for Brocade vADC? User documentation is no longer included in the software download package for Brocade vTM; the documentation can now be found on the Brocade.com web pages.
View full article
The Pulse Services Director makes it easy to manage a fleet of virtual ADC services, with each application supported by dedicated vADC instances, such as Pulse Virtual Traffic Manager. This table summarises the compatibility between supported versions of Services Director and Virtual Traffic Manager.
View full article
In this release, Pulse vTM offers enhanced support for DevOps application teams looking for closer integration and automation in customized cloud deployments.
View full article
In this release, Pulse Services Director offers enhanced Analytics support. Main highlights include enhanced chart formats and telemetry capability.  
View full article
In this first article, Dmitri covers the basics of setting up Terraform for Pulse vADC.
View full article
The Pulse Services Director vADC Analytics Application is intended to be both accessible and intuitive to use, with powerful graphic visualizations and insights into the traffic flows around your application. 
View full article
The Analytics Application included in Services Director can apply a Sample Filter to reduce query times.  
View full article
The Analytics Application included in Services Director can apply a Component Filter to refine queries.
View full article
Report on the most common events using the Top Events tab.
View full article
In the final article, Dmitri completes a full Terraform project, including TrafficScript templates for dynamic services.
View full article
In part 3, Dmitri shows how to use conditional logic to control resource creation.
View full article
In part 2, we learn about Data Sources and Resources, and set up a vTM with a variable number of nodes.  
View full article
Pulse vTM v18.1 has been released, including a Terraform Provider for vTM.
View full article
Comparative Analysis views include Horseshoe and Timing Charts.
View full article
The Analytics Application included with Pulse Services Director offers a unique Table View in the Explore functions.
View full article
The Pulse Secure Virtual Traffic Manager includes a useful benchmarking tool named 'zeusbench' that we use for internal performance testing and as a load generation tool for training courses. The first incarnation of ZeusBench was donated to the Apache project a long time ago, and is now well known as ApacheBench. The new incarnation starts from a completely fresh codebase; it has a new 'rate' mode of operation, is less resource-intensive and is often more accurate in its reporting.

You'll find zeusbench in the admin/bin directory of your Traffic Manager installation ($ZEUSHOME). Run zeusbench with the --help option to display the comprehensive help documentation:

$ $ZEUSHOME/admin/bin/zeusbench --help

Using zeusbench

zeusbench generates HTTP load by sending requests to the desired server. It can run in two different modes:

Concurrency

When run with the -c N option, zeusbench simulates N concurrent HTTP clients. Each client makes a request, reads the response and then repeats, as quickly as possible.

Concurrency mode is very stable and will quickly push a web server to its maximum capacity if sufficient concurrent connections are used. It's very difficult to overpower a web server unless you select far too many concurrent connections, so it's a good way to get stable, repeatable transactions-per-second results. This makes it suitable for experimenting with different performance tunings, or for looking at the effect of adding TrafficScript rules.

Rate

When run with the -r N option, zeusbench sends new HTTP requests at the specified rate (requests per second) and reads responses as they are returned.

Rate mode is suitable for testing whether a web server can cope with a desired transaction rate, but it is very easy to overwhelm a server with requests. It's great for testing how a service copes with a flash crowd: try running one zeusbench instance for background traffic, then fire off a second instance to simulate a short flash-crowd effect. Rate-based tests are very variable; it's difficult to get repeatable results when the server is overloaded, and it's difficult to determine the maximum capacity of the server (use a concurrency test for this).

Comparing concurrency and rate

The charts below illustrate two zeusbench tests against the same service; one where the concurrency is varied, and one where the rate is varied:

[Chart: transactions-per-second (left-hand axis, blue) and response times (right-hand axis, red) in concurrency- and rate-based tests]

The concurrency-based tests apply load in a stable manner, so they are effective at measuring the maximum achievable transactions-per-second. However, they can create a backlog of requests at high concurrencies, so the response time will grow accordingly.

The rate-based tests are less prone to creating a backlog of requests so long as the request rate is lower than the maximum transactions-per-second. At lower request rates they give a good estimate of the best achievable response time, but they quickly overload the service when the request rate nears or exceeds the maximum sustainable transaction rate.

Controlling the tests

The -n N option instructs zeusbench to run until it has sent N requests, then stop and report the results. The -t N option instructs zeusbench to run for N seconds, then stop and report the results. The -f option instructs zeusbench to run forever (or until you hit Ctrl-C, at which point zeusbench stops and reports the results).
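For example, a fixed-length concurrency test and a time-limited rate test might look like the following (the target URL and the numbers here are illustrative; check the --help output for the exact argument syntax of your version):

$ $ZEUSHOME/admin/bin/zeusbench -c 50 -n 100000 http://192.0.2.10:80/
$ $ZEUSHOME/admin/bin/zeusbench -r 2000 -t 30 http://192.0.2.10:80/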
The -v option instructs zeusbench to report each second on the progress of the tests, including the number of connections started and completed, and the number of timeouts or unexpected error responses.

Keepalives

Keepalives can make a very large difference to the nature of a test. By default, zeusbench will open a TCP connection and use it for one request and response before closing it. If zeusbench is instructed to use keepalives (with the -k flag), it will reuse TCP connections indefinitely; the -k N option can specify the number of times a connection is reused before it is closed.

Other zeusbench options

We've just scratched the surface of the options that zeusbench offers. zeusbench gives you great control over timeouts, the HTTP requests that it issues, the ability to ramp concurrency or rate values up over time, the SSL parameters and more. Run

$ $ZEUSHOME/admin/bin/zeusbench --help

for the detailed help documentation.

Running benchmarks

When running a benchmark, it is always wise to sanity-check the results you obtain by comparing several different measurements. For example, you can compare the results reported by zeusbench with the connection counts charted in the Traffic Manager Activity Monitor. Some discrepancies are inevitable, due to differences in per-second counting, or differences in data transfer and network traffic counts.

Benchmarking requires a careful, scientific approach, and you need to understand what load you are generating and what you are measuring before drawing any detailed conclusions. Furthermore, the performance you measure over a low-latency local network may be very different from the performance you can achieve when you put a service online at the edge of a high-latency, lossy WAN.

For more information, check out the document: Tuning Virtual Traffic Manager.

The article Dynamic Rate Shaping of Slow Applications gives a worked example of using zeusbench to determine how to rate-shape traffic to a slow application.

Also note that many Traffic Manager licenses impose a maximum bandwidth limit (development licenses are typically limited to 1Mbit/s); this will obviously impede any benchmarking attempts. You can commence a managed evaluation and obtain a license key that gives unrestricted performance if required.
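As a closing sketch, the flash-crowd scenario described under 'Rate' above could be approximated with two zeusbench instances: one providing steady background traffic, and a second firing a short burst (the host, rates and durations shown are illustrative):

$ $ZEUSHOME/admin/bin/zeusbench -r 500 -f -k http://192.0.2.10/
$ $ZEUSHOME/admin/bin/zeusbench -r 3000 -t 10 -k http://192.0.2.10/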
View full article
The Analytics Application included with Services Director displays a number of different types of metrics. The process by which metrics are generated and displayed varies between different types of graph, but the metric definition itself remains constant.
View full article
Pulse Virtual Traffic Manager v18.1 introduced plugin-based Service Discovery, which ships bundled with two plugins: one for Google Cloud, and one for DNS.

The included DNS plugin was designed to help with a specific use case, where an authoritative DNS server returns a *subset* of the records every time it is queried, instead of the full set of records.

An example of this is AWS Route53 serving A-records for a large Elastic Load Balancer (ELB). Route53 will return up to 8 healthy records. This means that for ELBs with more than 8 nodes a DNS query will only ever return 8 records, which a non-authoritative DNS server will cache and return for all subsequent queries.

If a regular DNS resolver is used to populate vTM pool nodes, this Route53 behaviour may lead to excessive vTM pool node churn. Additionally, traffic will only ever be sent to a maximum of 8 nodes.

To work around this issue, the bundled DNS plugin implements the following behaviour (the first two steps can be reproduced by hand, as sketched at the end of this article):

- For the hostname specified, find the authoritative DNS server(s)
- Send a query for the hostname's A-records directly to the discovered authoritative DNS servers
- Cache the received results, along with each record's TTL
- Check the cache for any existing records whose TTL hasn't expired
- Combine the new records with the cached records, and return that superset as the result for vTM to use

This behaviour has a side effect for domains whose publicly registered name servers can't resolve the records within the domain, such as internally used domains. In this case the DNS resolver plugin will fail to work, because in the process of discovering authoritative DNS servers the upper-level domain servers will respond with nameservers that can't resolve the names in question.

To work around this situation, the DNS plugin can be run with the "--nameservers" option, giving the IP address(es) of the internal authoritative DNS servers for the internal domain. This bypasses the logic for authoritative nameserver discovery.
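The first two steps of that behaviour can be reproduced by hand with a standard DNS tool such as dig, which is useful for checking what the plugin will see (the hostnames shown are illustrative):

$ dig +short NS example.com
$ dig +short @ns-123.awsdns-00.com elb.example.com A

Run the second query a few times against a large ELB and each response contains a different subset of up to 8 A-records; this churn is exactly what the plugin's TTL-based cache is designed to smooth over.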
View full article
The Virtual Traffic Manager is often used as an application delivery controller which passes HTTP traffic to backend nodes running the Apache HTTP Server. This guide aims to provide some general advice on tuning such a deployment.

Virtual Server Configuration

A virtual server passing traffic to Apache backends can use a number of protocols, including HTTP, Generic client first and SSL (if the website uses HTTPS). Of these protocols, HTTP (with SSL Decryption enabled if HTTPS is used) will provide the best experience:

- Backend connections can be reused for different frontend connections.
- The traffic manager's built-in web cache can be used to cache static content, decreasing latency and reducing backend load.
- The traffic manager can offer the HTTP/2 protocol on frontend connections even if the Apache version on the backends doesn't support it.
- TrafficScript rules can inspect and change HTTP headers in both directions.

Backend Connection Management

Traffic Manager

The Traffic Manager offers two important per-pool settings for controlling the connections to backend nodes:

The configuration key max_connections_pernode controls how many concurrent connections will be created by a single traffic manager to a single backend node. Please note that the limit is enforced per single traffic manager, not for the cluster as a whole: in a clustered, active-active deployment, the number of concurrent connections scales up with the number of traffic managers using a backend node at the same time. Depending on the maximum number of concurrent connections that a single Apache instance can handle, it can be advisable to adjust this setting to prevent overloading the backend nodes. Please refer to Apache Performance Tuning for more information on tuning Apache's connection limits.

The configuration key max_idle_connections_pernode controls how many idle connections to a single backend node the traffic manager will keep open. Keeping idle connections open speeds up the handling of further requests but consumes slightly more resources on the backends. For websites which handle a large number of concurrent, short-lived HTTP requests, increasing the number of allowed idle connections can reduce request latency and thereby improve performance.

Apache

By default Apache closes idle keep-alive connections after 5 seconds, while the Traffic Manager will try to reuse them for up to 10 seconds (controlled by the global configuration key idle_connection_timeout). This can cause retries or even request failures when the Traffic Manager tries to reuse a connection at the exact moment when Apache decides to close it. To avoid this, it is advisable to raise Apache's timeout via the KeepAliveTimeout configuration directive to at least five seconds higher than the traffic manager's timeout, as in the sketch below.
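With idle_connection_timeout left at its default of 10 seconds, an Apache configuration along these lines satisfies that rule (the values shown are illustrative):

# Allow keep-alive, and hold idle connections longer than the
# traffic manager's idle_connection_timeout (10 seconds by default)
KeepAlive On
KeepAliveTimeout 15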
SSL Configuration

The Apache HTTP Server offers multiple loadable modules to add support for HTTPS. The most popular is mod_ssl, which uses the OpenSSL TLS stack.

Using SSL Encryption on a pool configured on the traffic manager should work with the combination of Apache and mod_ssl without any specific tuning. The following adjustments might, however, help to improve performance.

Traffic Manager

Ensure that the client-side SSL cache of the traffic manager is enabled. This cache makes sure that most SSL connections to the backend nodes use SSL session resumption, which is considerably faster and consumes less CPU power on both the traffic manager and the backend node. The client cache is controlled via the ssl!client_cache!enabled config key and is enabled by default.

Ensure that the client-side SSL cache of the traffic manager is sized appropriately. It should be at least as large as the total number of distinct nodes that the traffic manager connects to via SSL, across all pools. The size can be adjusted using the config key ssl!client_cache!size, which defaults to 1024.

Ensure that TLS 1.2 is enabled and that ciphers based on AES_128_GCM_SHA256 (for example SSL_ECDHE_RSA_WITH_AES_128_GCM_SHA256) are enabled and preferred. These ciphers combine security with excellent performance, in particular on CPUs which support the AES-NI and AVX instruction sets.

Apache

In the unlikely case that the Traffic Manager SSL settings suggested above cause connection failures, changing the mod_ssl configuration as suggested below should fix the problems:

SSLCipherSuite HIGH
SSLProtocol all -SSLv3

The SSL session cache is also disabled in the Apache HTTP Server by default, although Linux distributions often enable it in the distributed configuration file. Please refer to the documentation of the SSLSessionCache configuration directive for information on how to enable and size the session cache; a sketch follows at the end of this article.

For optimal performance it is also advisable to use a Linux distribution which ships version 2.4 or newer of the Apache HTTP Server and OpenSSL version 1.0.2 or newer (for example Debian 9 or newer, or Ubuntu 16.04 or newer).
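Returning to the SSLSessionCache directive mentioned above, a typical way to enable and size the session cache in mod_ssl looks like this (the path and sizes are illustrative and vary between distributions):

# 512 KB shared-memory cache for SSL sessions, resumable for 5 minutes
SSLSessionCache shmcb:/var/run/apache2/ssl_scache(512000)
SSLSessionCacheTimeout 300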
View full article
The Analytics Application included with Pulse Services Director operates on a dataset which is made up of individual records, each of which describes a single transaction.
View full article