Stingray Traffic Manager functions as a proxy: it terminates TCP connections (and receives UDP datagrams) locally, and makes new connections to the target (back-end) servers. This is a consequence of the architecture of the Stingray software (user-space software running on a general-purpose kernel), and most modern traffic management devices use a similar architecture.
Previous-generation load balancers (also known as layer 3-4 load balancers) are based on NAT-capable routers; their mode of operation is simply to make intelligent, load-based destination-NAT decisions on incoming traffic, rather than relying on a static routing table. The proxy mode of operation allows Stingray to perform a range of network optimizations (including TCP offload and HTTP multiplexing) that are not possible with NAT-based L3/4 balancers. However, the proxy mode is not 'transparent' to clients and servers in the fashion that a layer 3/4 load balancer would be:
Clients must be directed to connect to an IP address and port that the load balancer is listening on. This is generally achieved by mapping the DNS name of a service to a traffic IP address that the load balancer listens on, but some legacy or inflexible network architectures may not make this possible.
Servers will observe that the connections originate from the load balancer, not the remote client. This can be a problem if the server needs to perform logging or access control based on the client's IP address.
It is possible to run Stingray in a fashion that is transparent to clients and servers, so that it appears to operate like an L3/4 load balancer. There are two independent steps to this:
Put Stingray inline in your network, i.e. as an intermediate gateway, and use iptables to capture selected packets that would otherwise be forwarded and deliver them up the local stack to Stingray:
# iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 80
This iptables rule intercepts all incoming TCP packets destined for port 80 (whatever their destination IP address) and rewrites the destination IP address to a local one (the primary IP address of the interface the packet was received on). It can optionally also rewrite the port (--to-ports). The Linux kernel then makes a routing decision, observes that the packet is targeted at a local IP, and passes it up to the application listening on that IP and port (i.e. Stingray). You can enter the iptables rule manually from the Linux command line on the Stingray system.
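When REDIRECT rewrites the destination address, Linux records the original destination on the connection, and a user-space proxy can recover it with the SO_ORIGINAL_DST socket option (which returns a raw sockaddr_in structure). The sketch below illustrates this general mechanism for any proxy sitting behind a REDIRECT rule; it is not Stingray code, and the helper names are hypothetical.

```python
import socket
import struct

# Value of SO_ORIGINAL_DST from <linux/netfilter_ipv4.h>; the Python
# standard library does not export it.
SO_ORIGINAL_DST = 80

def parse_sockaddr_in(raw):
    """Decode a sockaddr_in buffer as returned by SO_ORIGINAL_DST.

    Layout: sin_family (2 bytes, host order), sin_port (2 bytes,
    network order), sin_addr (4 bytes), 8 bytes of zero padding.
    """
    port, = struct.unpack_from("!H", raw, 2)
    ip = socket.inet_ntoa(raw[4:8])
    return ip, port

def original_destination(conn):
    """Return the (ip, port) the client originally connected to,
    before iptables REDIRECT rewrote it (Linux, IPv4 only)."""
    raw = conn.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
    return parse_sockaddr_in(raw)
```

A proxy accepting redirected connections would call original_destination() on each accepted socket to decide where the client was really trying to go.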
Because Stingray acts as a proxy, it makes a new connection to the destination server. This connection will originate from an IP address on the Stingray system; the back-end server will observe that the connection comes from Stingray. This is not transparent.
The IP Transparency capability in a pool can spoof the source address of a connection to a back-end server - HowTo: Spoof source IP addresses with IP Transparency. By default, it will set the source address to be the remote IP address of the client-side connection. From the back-end server’s perspective, the TCP or UDP packets it receives appear to originate from the remote client, so the Stingray system is transparent.
This capability is enabled by the IP Transparency setting in the connection management properties of a pool.
There are caveats to this technique: because the packets the back-end server sends in reply are addressed to the remote client's IP, the back-end servers must route their return traffic back through the traffic manager (typically by using it as their default gateway). Nevertheless, it's a common deployment method when the traffic manager should appear transparent to the back-end servers.
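Stingray implements IP Transparency internally, but the underlying idea can be illustrated with Linux's IP_TRANSPARENT socket option, which lets a suitably privileged process bind an outgoing connection to a non-local (spoofed) source address. The sketch below shows the mechanism only; it is not Stingray's actual implementation, and the function name and parameters are hypothetical.

```python
import socket

# IP_TRANSPARENT from <linux/in.h>; exported by the socket module on
# modern Linux builds of Python, hard-coded as a fallback otherwise.
IP_TRANSPARENT = getattr(socket, "IP_TRANSPARENT", 19)

def connect_as_client(client_ip, backend_ip, backend_port):
    """Open a TCP connection to the back-end that appears to originate
    from client_ip.

    Requires CAP_NET_ADMIN, and the back-end's return traffic must be
    routed back through this host, or the handshake will never complete.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Allow binding to an address not configured on this machine.
    s.setsockopt(socket.SOL_IP, IP_TRANSPARENT, 1)
    s.bind((client_ip, 0))          # spoofed source address, ephemeral port
    s.connect((backend_ip, backend_port))
    return s
```

Because the spoofed source address is not local, the operating system must be told (via policy routing) to accept the back-end's replies and deliver them to this process; that routing requirement is exactly the caveat described above.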
If you use the iptables technique to capture and rewrite incoming packets, the TrafficScript function request.getLocalIP() will return the rewritten, local IP address. Use request.getDestIP() to determine the original destination IP address.
You can further control how the IP address spoofing capability functions through the pool's configuration.
This method does not enable you to use legacy layer 2/3 load-balancing methods such as those described in Techniques for Direct Server Return with Stingray Traffic Manager; Stingray still functions as a full proxy, giving you the ability to apply the full suite of layer 7 optimizations and traffic manipulations that Stingray makes available.