Pulse Secure vADC

Welcome to Pulse Secure Application Delivery solutions!  
View full article
In this release, Pulse Secure Traffic Manager offers some additional capabilities to support secure authentication for administrators.
View full article
This technical brief describes recommended techniques for installing, configuring and tuning Traffic Manager. You should also refer to the Product Documentation for detailed instructions on installing the Traffic Manager software.

Getting started
- Hardware and Software requirements for Traffic Manager
- Pulse Virtual Traffic Manager Kernel Modules for Linux

Software Tuning
- Stingray Traffic Manager Tuning
- Tuning Traffic Manager for best performance
- Tech Tip: Where to find a master list of the Traffic Manager configuration keys

Tuning the operating system kernel
The following instructions only apply to Traffic Manager software running on a customer-supplied Linux or Solaris kernel:
- Tuning the Linux operating system for Traffic Manager
- Routing and Performance tuning for Traffic Manager on Linux
- Tuning the Solaris operating system for Traffic Manager

Debugging procedures for Performance Problems
- Tech Tip: Debugging Techniques for Performance Investigation

Load Testing
- Load Testing recommendations for Traffic Manager

Conclusion
The Traffic Manager software and the operating system kernels both seek to optimize the use of the resources available to them, and little additional tuning is generally necessary except in heavily-loaded or performance-critical environments. When tuning is required, the majority of the tunings relate to the kernel and TCP stack and are common to all networked applications. Experience you have tuning webservers and other applications on Linux or Solaris can be applied directly to Traffic Manager tuning, and skills you gain working with Traffic Manager can be transferred to other situations.

The importance of good application design
TCP and kernel performance tuning will only help to a small degree if the application running over HTTP is poorly designed. Heavyweight web pages with large quantities of referenced content and scripts tend to deliver a poorer user experience and limit the capacity of the network to support large numbers of users. Traffic Manager's Web Content Optimization capability ("Aptimizer") applies best-practice content-optimization rules dynamically, as the content is delivered by Traffic Manager. It applies browser-aware techniques to reduce bandwidth and TCP round-trips (image, CSS, JavaScript and HTML minification, image resampling, CSS merging, image spriting), and it automatically applies URL versioning and far-future expiry headers to ensure that clients cache all content and never needlessly re-request a resource that has not changed.

Traffic Manager's Aptimizer is a general-purpose solution that complements TCP tuning to give better performance and a better service level. If you're serious about optimizing web performance, you should apply a range of techniques from layer 2-4 (network) up to layer 7 and beyond to deliver the best possible end-user experience while maximizing the capacity of your infrastructure.
View full article
Linux kernel settings can be set and read using entries in the /proc filesystem or using sysctl. Permanent settings that should be applied on boot are defined in sysctl.conf.

Example: to set the maximum number of file descriptors from the command line:

# echo 2097152 > /proc/sys/fs/file-max

...or...

# sysctl -w fs.file-max=2097152

Example: to set the maximum number of file descriptors using sysctl.conf, add the following to /etc/sysctl.conf:

fs.file-max = 2097152

sysctl.conf is applied at boot, or manually using sysctl -p
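To confirm that a change has taken effect, you can read the value back by either method; a small sketch using the same fs.file-max example, together with the per-process limits that ulimit reports (these complement, rather than replace, the system-wide value):

# sysctl fs.file-max
# cat /proc/sys/fs/file-max
# ulimit -Sn    (soft per-process limit for the current shell)
# ulimit -Hn    (hard per-process limit)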
View full article
This document describes performance-related tuning you may wish to apply to a production Traffic Manager software instance, virtual appliance or cloud instance. For related documents (e.g. operating system tuning), start with the Tuning Pulse Virtual Traffic Manager article.

Tuning Pulse Traffic Manager

Traffic Manager will auto-size the majority of its internal tables based on available memory, CPU cores and operating system configuration. The default behavior is appropriate for typical deployments and it is rarely necessary to tune it. Several changes can be made to the default configuration to improve peak capacity if necessary. Collectively, they may give a 5-20% capacity increase, depending on the specific test.

Basic performance tuning

Global settings
Global settings are defined in the 'System' part of the configuration.
- Recent Connections table: set recent_conns to 0 to prevent Traffic Manager from archiving recent connection data for debugging purposes.
- Verbose logging: disable flipper!verbose, webcache!verbose and gslb!verbose to disable verbose logging.

Virtual Server settings
Most Virtual Server settings relating to performance tuning are found in the Connection Management section of the configuration.
- X-Cluster-Client-IP: for HTTP traffic, Traffic Manager adds an 'X-Cluster-Client-IP' header containing the remote client's IP address by default. You should disable this feature if your back-end applications do not inspect this header.
- HTTP Keepalives: enable support for keepalives; this will reduce the rate at which TCP connections must be established and torn down. Not only do TCP handshakes incur latency and additional network traffic, but closed TCP connections consume operating system resources until TCP timeouts are hit.
- UDP Port SMP: set this to 'yes' if you are managing simple UDP protocols such as DNS. Otherwise, all UDP traffic is handled by a single Traffic Manager process (so that connections can be effectively tracked).

Pool settings
- HTTP Keepalives: enable support for keepalives (Pool: Connection Management; see the Virtual Server note above). This will reduce the load on your back-end servers and the Traffic Manager system.
- Session Persistence: session persistence overrides load balancing and can prevent the traffic manager from selecting the optimal node and applying optimizations such as LARD. Use session persistence selectively, and only apply it to requests that must be pinned to a node.

Advanced Performance Tuning

General Global Settings
- Maximum File Descriptors (maxfds): file descriptors are the basic operating system resource that Traffic Manager consumes. Typically, Traffic Manager will require two file descriptors per active connection (client and server side), and one file descriptor for each idle keepalive connection and for each client connection that is pending or completing. Traffic Manager will attempt to bypass any soft per-process limits (e.g. those defined by ulimit) and gain the maximum number of file descriptors (per child process). There is no performance impact, and minimal memory impact, in doing this. You can tune the maximum number of file descriptors in the OS using fs.file-max; the default value of 1048576 should be sufficient. Traffic Manager will warn if it is running out of file descriptors, and will proactively close idle keepalives and slow down the rate at which new connections are accepted.
- Listen queue size (listen_queue_size): this should be left at the default system value, and tuned using somaxconn (see the operating system tuning article).
- Number of child processes (num_children): this is auto-sized to the number of cores in the host system. You can force the number of child processes to a particular number (for example, when running Traffic Manager on a shared server) using the tunable 'num_children', which should be added manually to the global.cfg configuration file.

Tuning accept behavior
The default accept behavior is tuned so that child processes greedily accept connections as quickly as possible. With very large numbers of child processes, if you see uneven CPU usage, you may need to tune the multiple_accept, max_accepting and accepting_delay values in the Global Settings to limit the rate at which child processes take work.

Tuning network read/write behavior
The Global Settings values so_rbuff_size and so_wbuff_size are used to tune the size of the operating system (kernel-space) read and write buffers, as restricted by the operating system limits /proc/sys/net/core/rmem_max and /proc/sys/net/core/wmem_max.

These buffer sizes determine how much network data the kernel will buffer before refusing additional data (from the client in the case of the read buffer, and from the application in the case of the write buffer). If these values are increased, kernel memory usage per socket will increase. In normal operation, Traffic Manager will move data from the kernel buffers to its user-space buffers sufficiently quickly that the kernel buffers do not fill up. You may want to increase these buffer sizes when running under high connection load on a fast network.

The Virtual Server settings max_client_buffer and max_server_buffer define the size of the Traffic Manager (user-space) read and write buffers, used when Traffic Manager is streaming data between the client and the server. The buffers are temporary stores for the data read from the network buffers. Larger values will increase memory usage per connection, to the benefit of more efficient flow control; this will improve performance for clients or servers accessing over high-latency networks.

The value chunk_size controls how much data Traffic Manager reads and writes from the network buffers when processing traffic, and internal application buffers are allocated in units of chunk_size. To limit fragmentation and assist scalability, the default value is quite low (4096 bytes); if you have plenty of free memory, consider setting it to 8192 or 16384. Doing so will increase Traffic Manager's memory footprint but may reduce the number of system calls, slightly reducing CPU usage (system time).

You may wish to tune the buffer size parameters if you are handling very large file transfers or video downloads over congested networks, and the chunk_size parameter if you have large amounts of free memory that is not reserved for caching and other purposes.

Tuning SSL performance
Some modern ciphers such as TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 are faster than older ciphers in Traffic Manager. SSL uses a private/public key pair during the initial client handshake. 1024-bit keys are approximately 5 times faster than 2048-bit keys (due to the computational complexity of the key operation), and are sufficiently secure for applications that require a moderate degree of protection.

SSL sessions are cached locally, and shared between all traffic manager child processes using a fixed-size cache (allocated at start-up).
On a busy site, you should check the size, age and miss rate of the SSL Session ID cache (using the Activity monitor) and increase the size of the cache (ssl!cache!size) if there is a significant number of cache misses.

Tuning from-client connections
Timeouts are the key tool for controlling client-initiated connections to the traffic manager:
- connect_timeout discards newly-established connections if no data is received within the timeout;
- keepalive_timeout holds client-side keepalive connections open for a short time before discarding them if they are not reused;
- timeout is a general-purpose timeout that discards an active connection if no data is received within the timeout period.

If you suspect that connections are dropped prematurely due to timeouts, you can temporarily enable the Virtual Server setting log!client_connection_failures to record the details of dropped client connections.

Tuning to-server connections
When processing HTTP traffic, Traffic Manager uses a pool of keepalive connections to reuse TCP connections and reduce the rate at which TCP connections must be established and torn down. If you use a webserver with a fixed concurrency limit (for example, Apache with its MaxClients and ServerLimit settings), then you should tune the connection limits carefully to avoid overloading the webserver and creating TCP connections that it cannot service.

- Pool: max_connections_pernode: this setting limits the total number of TCP connections that this pool will make to each node; keepalive connections are included in that count. Traffic Manager will queue excess requests and schedule them to the next available server. The current count of established connections to a node is shared by all Traffic Manager processes.
- Pool: max_idle_connections_pernode: when an HTTP request to a node completes, Traffic Manager will generally hold the TCP connection open and reuse it for a subsequent HTTP request (as a keepalive connection), avoiding the overhead of tearing down and setting up new TCP connections. In general, you should set this to the same value as max_connections_pernode, ensuring that neither setting exceeds the concurrency limit of the webserver.
- Global Setting: max_idle_connections: use this setting to fine-tune the total number of keepalive connections Traffic Manager will maintain to each node. The idle_connection_timeout setting controls how quickly keepalive connections are closed. You should only consider limiting the two max_idle_connections settings if you have a very large number of webservers that can sustain very high degrees of concurrency, and you find that the traffic manager routinely maintains too many idle keepalive connections as a result of very uneven traffic.

When running with very slow servers, or when connections to servers have high latency or packet loss, it may be necessary to increase the Pool timeouts:
- max_connect_time discards connections that fail to connect within the timeout period; the requests will be retried against a different server node;
- max_reply_time discards connections that fail to respond to the request within the desired timeout; requests will be retried against a different node if they are idempotent.

When streaming data between server and client, the general-purpose Virtual Server 'timeout' setting applies. If the client connection times out or is closed for any other reason, the server connection is immediately discarded.
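When sizing max_connections_pernode and the keepalive limits, it can be useful to see how many TCP connections the traffic manager host actually holds towards a back-end node. A minimal sketch using ss (10.0.0.10 is a placeholder node address; substitute one of your own pool nodes):

# Established connections to the node (active requests plus idle keepalives)
ss -tn state established dst 10.0.0.10 | tail -n +2 | wc -l

# Connections to the node still lingering in TIME_WAIT
ss -tn state time-wait dst 10.0.0.10 | tail -n +2 | wc -l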
If you suspect that connections are dropped prematurely due to timeouts, you can enable the Virtual Server setting log!server_connection_failures to record the details of dropped server connections.

Nagle's Algorithm
You should disable Nagle's Algorithm for traffic to the back-end servers, unless you are operating in an environment where the servers have been explicitly configured not to use delayed acknowledgements. Set the node_so_nagle setting to 'off' in the Pool Connection Management configuration. If you notice significant delays when communicating with the back-end servers, Nagle's Algorithm is a likely candidate.

Other settings
Ensure that you disable or de-configure any Traffic Manager features that you do not need to use, such as health monitors, session persistence, TrafficScript rules, logging and activity monitors. Disable debug logging in service protection classes, autoscaling settings, health monitors, actions (used by the eventing system) and GLB services.

For more information, start with the Tuning Pulse Virtual Traffic Manager article.
View full article
This document describes some operating system tunables you may wish to apply to a production Traffic Manager instance. Note that the kernel tunables only apply to Traffic Manager software installed on a customer-provided Linux instance; they do not apply to the Traffic Manager Virtual Appliance or Cloud instances.

Consider the tuning techniques in this document when:
- running Traffic Manager on a severely-constrained hardware platform, or where Traffic Manager should not seek to use all available resources;
- running in a performance-critical environment;
- the Traffic Manager host appears to be overloaded (excessive CPU or memory usage);
- running with very specific traffic types, for example, large video downloads or heavy use of UDP;
- any time you see unexpected errors in the Traffic Manager event log or the operating system syslog that relate to resource starvation, dropped connections or performance problems.

For more information on performance tuning, start with the Tuning Pulse Virtual Traffic Manager article.

Basic Kernel and Operating System tuning

Most modern Linux distributions have sufficiently large defaults, and many tables are auto-sized and growable, so it is often not necessary to change these tunings. The values below are recommended for typical deployments on a medium-to-large server (8 cores, 4 GB RAM).

Note: see Tech Tip: How to apply kernel tunings on Linux.

File descriptors

# echo 2097152 > /proc/sys/fs/file-max

Set a minimum of one million file descriptors unless resources are seriously constrained. See also the Traffic Manager setting maxfds.

Ephemeral port range

# echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range
# echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout

Each TCP and UDP connection from Traffic Manager to a back-end server consumes an ephemeral port, and that port is retained for the 'fin_timeout' period once the connection is closed. If back-end connections are frequently created and closed, it's possible to exhaust the supply of ephemeral ports. Increase the port range to the maximum (as above) and reduce the fin_timeout to 30 seconds if necessary.

SYN Cookies

# echo 1 > /proc/sys/net/ipv4/tcp_syncookies

SYN cookies should be enabled on a production system. The Linux kernel will process connections normally until the backlog grows, at which point it will use SYN cookies rather than storing local state. SYN cookies are an effective protection against SYN floods, one of the most common DoS attacks against a server. If you are seeking a stable test configuration as a basis for other tuning, you should disable SYN cookies. Increase the size of net/ipv4/tcp_max_syn_backlog if you encounter dropped connection attempts.

Request backlog

# echo 1024 > /proc/sys/net/core/somaxconn

The request backlog contains TCP connections that are established (the 3-way handshake is complete) but have not been accepted by the listening socket (on Traffic Manager). See also the tunable parameter 'listen_queue_size'. Restart the Traffic Manager software after changing this value.

If the listen queue fills up because the Traffic Manager does not accept connections sufficiently quickly, the kernel will quietly ignore additional connection attempts. Clients will then back off (they assume packet loss has occurred) before retrying the connection.
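To make these basic settings persistent across reboots, they can be collected in /etc/sysctl.conf (or a file under /etc/sysctl.d/); a sketch of the values discussed above, which you should adjust to your own environment before use:

fs.file-max = 2097152
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_syncookies = 1
net.core.somaxconn = 1024

Apply them without a reboot using 'sysctl -p' (or 'sysctl --system' on distributions that support it, which also loads files under /etc/sysctl.d/).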
Advanced kernel and operating system tuning

In general, it's rarely necessary to tune Linux kernel internals any further, because the default values selected on a normal-to-high-memory system are sufficient for the vast majority of deployments, and most kernel tables will automatically resize if necessary. Any problems will be reported in the kernel logs; dmesg is the quickest and most reliable way to check the logs on a live system.

Packet queues

In 10 GbE environments, you should consider increasing the size of the input queue:

# echo 5000 > /proc/sys/net/core/netdev_max_backlog

TCP TIME_WAIT tuning

TCP connections reside in the TIME_WAIT state in the kernel once they are closed. TIME_WAIT allows the server to time out connections it has closed in a clean fashion. If you see the error "TCP: time wait bucket table overflow", consider increasing the size of the table used to store TIME_WAIT connections:

# echo 7200000 > /proc/sys/net/ipv4/tcp_max_tw_buckets

TCP slow start and window sizes

In earlier Linux kernels (pre-2.6.39), the initial TCP window size was very small. The impact of a small initial window size is that peers communicating over a high-latency network will take a long time (several seconds or more) to scale the window to utilize the full bandwidth available; often the connection will complete (albeit slowly) before an efficient window size has been negotiated. The 2.6.39 kernel increases the default initial window size from 2 to 10. If necessary, you can tune it manually:

# ip route change default via 192.168.1.1 dev eth0 proto static initcwnd 10

If a TCP connection stalls, even briefly, the kernel may reduce the TCP window size significantly in an attempt to respond to congestion. Many commentators have suggested that this behavior is not necessary, and this "slow start" behavior should be disabled:

# echo 0 > /proc/sys/net/ipv4/tcp_slow_start_after_idle

TCP options for Spirent load generators

If you are using older Spirent test kit, you may need to set the following tunables to work around optimizations in their TCP stack:

# echo 0 > /proc/sys/net/ipv4/tcp_timestamps
# echo 0 > /proc/sys/net/ipv4/tcp_window_scaling

[Note: the original article attached these changes as an easy-to-run shell script; a sketch of such a script appears at the end of this article.]

irqbalance

Interrupts (IRQs) are wake-up calls to the CPU when new network traffic arrives. The CPU is interrupted and diverted to handle the new network data. Most NIC drivers will buffer interrupts and distribute them as efficiently as possible. When running on a machine with multiple CPUs/cores, interrupts should be distributed across cores roughly evenly; otherwise, one CPU can become the bottleneck in high network traffic. The general-purpose approach in Linux is to deploy irqbalance, which is a standard package on most major Linux distributions. Under extremely high interrupt load, you may see one or more ksoftirqd processes exhibiting high CPU usage. In this case, you should configure your network driver to use multiple interrupt queues (if supported) and then manually map those queues to one or more CPUs using SMP affinity.

Receive-Side Scaling (RSS)

Modern network cards can maintain multiple receive queues. Packets within a particular TCP connection can be pinned to a single receive queue, and each queue has its own interrupt. You can map interrupts to CPU cores to control which core each packet is delivered to.
This affinity delivers better performance by distributing traffic evenly across cores and by improving connection locality (a TCP connection is processed by a single core, improving CPU affinity). For optimal performance, you should:
- allow the Traffic Manager software to auto-size itself to run one process per CPU core (two when using hyperthreading), i.e. do not modify the num_children setting;
- configure the network driver to create as many queues as you have cores, and verify the IRQs that the driver will raise per queue by checking /proc/interrupts;
- map each queue interrupt to one core using /proc/irq/<irq-number>/smp_affinity (a short sketch follows below).

You should also refer to the technical documentation provided by your network card vendor.

[Updates by Aidan Clarke and Rick Henderson]
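As referenced above, here is a sketch of a small script that collects the advanced tunings from this article and illustrates the queue-to-core mapping. It is a starting point only: the IRQ numbers (45 and 46), the interface name eth0 and the gateway 192.168.1.1 are placeholders, so check /proc/interrupts for the numbers your driver actually raises, and review every value before running this as root.

#!/bin/sh
# Advanced tunings collected from this article - review and adjust before use

# Larger input queue for 10 GbE environments
echo 5000 > /proc/sys/net/core/netdev_max_backlog

# More room for connections in TIME_WAIT
echo 7200000 > /proc/sys/net/ipv4/tcp_max_tw_buckets

# Do not shrink the congestion window after an idle period
echo 0 > /proc/sys/net/ipv4/tcp_slow_start_after_idle

# Older (pre-2.6.39) kernels only: raise the initial congestion window
# ip route change default via 192.168.1.1 dev eth0 proto static initcwnd 10

# Only when testing with older Spirent load generators:
# echo 0 > /proc/sys/net/ipv4/tcp_timestamps
# echo 0 > /proc/sys/net/ipv4/tcp_window_scaling

# Example of pinning two NIC queue interrupts to two cores
# (smp_affinity takes a hex CPU bitmask: 1 = core 0, 2 = core 1, 4 = core 2, ...)
grep eth0 /proc/interrupts
echo 1 > /proc/irq/45/smp_affinity
echo 2 > /proc/irq/46/smp_affinity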
View full article
To support our new Certified Technical Expert training course for Pulse vADC, we have created a demonstration package containing the files used in the course.
View full article
We have created dedicated installation and configuration guides for each type of deployment option, as part of the complete documentation set for Pulse vTM.
View full article
Looking for Installation and User Guides for Pulse vADC? User documentation is no longer included in the software download package for Pulse vTM; it can now be found on the Pulse Techpubs pages.
View full article
You can create monitors, event action scripts and other utilities using Perl, but if you install them on a system that does not have a suitable Perl interpreter, they will not function correctly. For example, the Traffic Manager Virtual Appliance does not have a system-wide Perl interpreter.

The Traffic Manager includes a slightly cut-down version of Perl that is used to run many parts of the Administration Server. You can modify an existing Perl script to use the Traffic Manager distribution if necessary.

Replace the standard Perl preamble:

#!/usr/bin/perl -w
...

with the following:

#!/bin/sh
exec $ZEUSHOME/perl/miniperl -wx $0 ${1+"$@"}
    if 0;

#!/usr/bin/perl
#line 7

BEGIN{
       # The Stingray-provided perl uses its own libraries
       @INC=("$ENV{ZEUSHOME}/zxtmadmin/lib/perl","$ENV{ZEUSHOME}/perl");
}

Note that Traffic Manager's Perl distribution contains a limited set of libraries, and it is not possible to add further libraries to it. Nevertheless, it is complete enough for many of the common administration tasks that you may wish to perform on a Traffic Manager Virtual Appliance, including using the Control API (SOAP::Lite).
View full article
Why write a health monitor in TrafficScript?

The Health Monitoring capabilities (as described in Feature Brief: Health Monitoring in Traffic Manager) are very comprehensive, and the built-in templates allow you to conduct sophisticated custom dialogues, but sometimes you might wish to resort to a full programming language to implement the tests you need.

Particularly on the Traffic Manager Virtual Appliance, your options can be limited. There's a minimal Perl interpreter included (see Tech Tip: Running Perl code on the Traffic Manager Virtual Appliance), and you can upload compiled binaries (Writing a custom Health Monitor in C) and shell scripts. This article explains how you can use TrafficScript to implement health monitors, and of course with Java Extensions, TrafficScript can 'call out' to a range of third-party libraries as well.

Overview

We'll implement the solution using a custom 'script' health monitor. This health monitor will probe a virtual server running on the local Traffic Manager (using an HTTP request), and pass it all of the parameters relevant to the health request. A TrafficScript rule running on the Traffic Manager can perform the appropriate health check and respond with a 'PASS' (200 OK) or 'FAIL' (500 Error) response.

The health monitor script

The health monitor script is straightforward and should not need any customization. It takes its input from the health monitor configuration.

#!/bin/sh
exec $ZEUSHOME/perl/miniperl -wx $0 ${1+"$@"}
    if 0;

#!/usr/bin/perl
#line 7

BEGIN{
    # Pull in the Traffic Manager libraries for HTTP requests
    unshift @INC, "$ENV{ZEUSHOME}/zxtmadmin/lib/perl", "$ENV{ZEUSHOME}/zxtm/lib/perl";
}

use Zeus::ZXTM::Monitor qw( ParseArguments MonitorWorked MonitorFailed Log );
use Zeus::HTMLUtils qw( make_query_string );
use Zeus::HTTP;

my %args = ParseArguments();

my $url = "http://localhost:$args{vsport}$args{path}?" . make_query_string( %args );
my $http = new Zeus::HTTP( GET => $url );
$http->load();

Log( "HTTP GET for $url returned status: " . $http->code() );

if ( $http->code() == 200 ) {
    MonitorWorked();
} else {
    MonitorFailed( "Monitor failed: " . $http->code() . " " . $http->body() );
}

Upload this to the Monitor Programs section of the Extra Files catalog, and then create an "External Program Monitor" based on that script. You will need to add two more configuration parameters to this health monitor configuration:

- vsport: this should be set to the port of the virtual server that will host the TrafficScript test
- path: this is optional - you can use it if you want to run several different health tests from the TrafficScript rule

The virtual server

Create an HTTP virtual server listening on the appropriate port number (vsport). You can bind this virtual server to localhost if you want to prevent external clients from accessing it. The virtual server should use the 'discard' pool - we're going to add a request rule that always sends a response, so there's no need for any back-end nodes.

The TrafficScript Rule

The 'business end' of your TrafficScript health monitor resides in the TrafficScript rule. This rule is invoked every time the health monitor script is run, and it is given the details of the node which is to be checked.
The rule should return a 200 OK HTTP response if the node is OK, and a different response (such as 500 Error) if the node has failed the test.

$path = http.getPath();   # Use 'path' if you would like to publish
                          # several different tests from this rule

$ip = http.getFormParam( "ipaddr" );
$port = http.getFormParam( "port" );
$nodename = http.getFormParam( "node" );

# We're going to test the node $nodename on $ip:$port
#
# Useful functions include:
#   http.request.get/put/post/delete()
#   tcp.connect/read/write/close()
#   auth.query()
#   java.run()

sub Failed( $msg ) {
   http.sendResponse( 500, "text/plain", $msg, "" );
}

# Let's run a simple GET
$req = 'GET / HTTP/1.0
Host: www.riverbed.com

';
$timeout = 1000; # ms
$sock = tcp.connect( $ip, $port, $timeout );
tcp.write( $sock, $req, $timeout );
$resp = tcp.read( $sock, 102400, $timeout );

# Perform whatever tests we want on the response data.
# For example, it should begin with '200 OK'

if( ! string.startsWith( $resp, "HTTP/1.1 200 OK" ) ) {
   Failed( "Didn't get expected response status" );
}

# All good
http.sendResponse( 200, "text/plain", "", "" );
View full article
In this article I walk through a few examples of simple custom monitors written in C that query a MySQL database. Hopefully you will be able to use the example code as the basis of your own monitors.
View full article
The famous TrafficScript Mandelbrot generator!
View full article
The Pulse vADC Community Edition is a free-to-download, free-to-use, full-featured virtual application delivery controller (ADC) solution, which you can use immediately to build smarter applications.  
View full article
Need more capacity for your applications? Technical support options? It’s easy to upgrade Pulse vADC!  
View full article
This document covers updating the built-in GeoIP database. See TechTip: Extending the Pulse vTM GeoIP database for instructions on adding custom entries to the database.  
View full article
In this release, Pulse Secure Virtual Traffic Manager has more enhancements for closer integration with Pulse Connect Secure (PCS) and Pulse Policy Secure (PPS), including support for simpler RADIUS session persistence.
View full article
The Pulse Virtual Traffic Manager Kernel Modules may be installed on a supported Linux system to enable advanced networking functionality: Multi-Hosted Traffic IP Addresses.

Notes:
- Earlier versions of this package contained two modules: ztrans (for IP Transparency) and zcluster (for Multi-Hosted Traffic IP Addresses). The Pulse Virtual Traffic Manager software has supported IP Transparency without requiring the ztrans kernel module since version 10.1, and the attached version of the Kernel Modules package only contains the zcluster module.
- The Kernel Module is pre-installed in Pulse Secure Virtual Traffic Manager Appliances, and in Cloud images where applicable.
- The Kernel Modules are not available for Solaris.

The Multi-hosted IP Module (zcluster)

The Multi-hosted IP Module allows a set of clustered Traffic Managers to share the same IP address. The module manipulates ARP requests to deliver connections to a multicast group that the machines in the cluster subscribe to. Responsibility for processing data is distributed across the cluster so that all machines process an equal share of the load. Refer to the User Manual (Pulse Virtual Traffic Manager Product Documentation) for details of how to configure multi-hosted Traffic IP addresses. zcluster is supported for kernel versions up to and including version 5.2.

Installation

Prerequisites: your build machine must have the kernel header files and appropriate build tools to build kernel modules. You may build the modules on one machine and copy them to an identical machine if you wish to avoid installing build tools and kernel headers on your production traffic manager.

Unpack the kernel modules tarball, and cd into the directory created:

# tar -xzf pulse_vtm_modules_installer-2.14.tgz
# cd pulse_vtm_modules_installer-2.14

Review the README within for late-breaking news and to confirm kernel version compatibility.

As root, run the installation script install_modules.pl to install the zcluster module:

# ./install_modules.pl

If installation is successful, restart the vTM software:

# $ZEUSHOME/restart-zeus

If the installation fails, please refer to the error message given, and to the distribution-specific guidelines you will find in the README file inside the pulse_vtm_modules_installer package. (A short sketch for verifying that the module loaded follows at the end of this article.)

Kernel Upgrades

If you upgrade your kernel, you will need to re-run the install_modules.pl script to re-install the modules after the kernel upgrade is completed.

Latest Packages

Packages for the kernel modules are now available via the normal Pulse Virtual Traffic Manager download service.
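After restarting the software, you can confirm that the zcluster module is actually loaded; a small sketch using standard Linux tools (the module name is taken from this article):

# lsmod | grep zcluster
# dmesg | tail

If lsmod shows no zcluster entry, check dmesg and the installer output for build or load errors.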
View full article
Customers may occasionally need to install additional software on a Virtual Appliance, and this document shows how you can install the software in a way which will be supported. Examples of where this might be useful include:

- installing monitoring agents that customers use to monitor the rest of their infrastructure (e.g. Nagios)
- installing other data collection tools (e.g. for Splunk or ELK)

Note that for earlier versions of the Traffic Manager Virtual Appliance (before 9.7) we support customers installing software only via our standard APIs/interfaces (using extra files and custom action scripts). This "open access virtual appliance" support policy was introduced at version 9.7, to allow installation of additional software. However, we still do not support customers modifying the tested software shipped with the appliance.

Operating system

Traffic Manager virtual appliances use a customized build of Ubuntu, with an optimized kernel from which some unused features have been removed - check the latest release notes for details of the build included in your version.

What you may change

You may install additional software not shipped with the appliance, but note that some Ubuntu packages may rely on kernel features not available on the appliance. You may modify configuration not managed by the appliance.

What you may not change

You may not install a different kernel. You may not install different versions of any Debian packages that were installed on the appliance as shipped, nor remove any of these packages (see the licence acknowledgements doc for a list). You may not directly modify configuration that is managed from the traffic manager (e.g. sysctl values, network configuration). You may not change configuration explicitly set by the appliance (usually marked with a comment containing ZOLD or BEGIN_STINGRAY_BLOCK).

What happens when you need support

You should mention any additional software you have installed when requesting support; the Technical Support Report will also contain information about it. If the issue is found to be caused by interaction with the additional software, we will ask you to remove it, or to seek advice or a remedy from its supplier.

What happens on reset or upgrade

z-reset-to-factory-defaults will not remove additional software but may rewrite some system configuration files. An upgrade will install a fresh appliance image on a separate disk partition, and will not copy additional software or configuration changes across. The /logs partition will be preserved. Note that future appliance versions may change the set of installed packages, or even the underlying operating system.
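As an illustration only, a sketch of how an installation might look, assuming the appliance has network access to the Ubuntu package repositories and that the package you want exists there (nagios-nrpe-server is used purely as an example name):

# Record the shipped package set first, so you can show later that none of it changed
# (the /logs partition survives upgrades and factory resets)
dpkg -l > /logs/package-baseline.txt

# Install the additional agent
apt-get update
apt-get install nagios-nrpe-server

Keep a note of anything you add, so you can mention it when raising a support request.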
View full article
TrafficScript is a simple, command-based language. A command is called a 'statement', and each statement is terminated with a ';'. Comments begin with a '#' symbol, and finish at the end of the line:

# Store the value '2' in the variable named $a
$a = 2;
# call the 'connection.close' function
connection.close();

Variables are indicated by the '$' symbol. There is no typing in TrafficScript, and you do not need to pre-declare a variable before you use it. Variables are not persistent - they go out of scope when a rule completes.

You can use variables in expressions to calculate new values (numbers and strings). Common mathematical, comparison and boolean operators are available, and '.' is used to concatenate strings.

# Set the value of $a to 1.75
$a = 1 + 1/2 + 0.25;
# create a new string with the '.' operator
$fullname = $firstname . " " . $lastname;

Functions are called using the normal bracket-and-argument-list syntax, and many functions can take different numbers of arguments. Function names often contain two or three parts, separated by '.'; this conveniently groups functions into different families.

$path = http.getPath();
$cookie = http.getCookie( "ASPSESSIONID" );
$browser = http.getHeader( "User-Agent" );

TrafficScript also provides data structures in the form of arrays and hashes. Arrays and hashes allow you to store multiple values in one TrafficScript structure. For more information, see the HowTo: TrafficScript Arrays and Hashes article.

Read more

Collected Tech Tips: TrafficScript examples
View full article