It would be great if someone could help me out on this. We are building a high availability cluster of Pulse Secure vADC/WAF on Azure. We would like to make this active/active with configuration sync, with an Azure load balancer handling traffic distribution to these hosts. Is this a good approach?
In this case the configs are synced across and both devices enforce the same policy. One query here: will the connection table on the devices in the cluster be synced? Also, if we apply policies like rate limiting and bot detection, will decisions be made based on the traffic patterns/hits coming in across both devices in the cluster? For example, if one host accesses a rate-limited resource through both WAFs, will the traffic intelligence from the session table across the cluster be used to make the decision?
Q: Azure load balancer will take care of traffic distribution to these hosts. Is this a good method?
A: This is as good a method as any other. Some customers use multiple public IP addresses with round-robin DNS. An advantage of the Azure LB is that you can scale the vTM cluster size behind it pretty seamlessly. Just make sure you are health-checking your application from the Azure LB.
Q: Will the connection table on the devices in the cluster be synced?
A: Important distinction here: with vTM and an HTTP virtual server there is no "connection table", as each HTTP request is atomic and the protocol supports retry. Load balancers that DO offer state sync tell you NEVER to turn it on for HTTP, as it more often than not kills your ADC. Net-net: don't worry about connection-table sync for HTTP.
Q: Also, if we apply policies like rate limiting and bot detection, will decisions be made based on the traffic patterns/hits coming in across both devices in the cluster?
A: Here is the real question, and the answer is: it depends on how you set it up. In more detail:
1) Bandwidth Classes can be applied per connection, per vTM, or per cluster; this is selected in the Bandwidth Class configuration.
2) Rate Classes are deployed using TrafficScript and give you ultimate control over how they are applied. If you want to leverage them between cluster members, you can use the data.set() function in TrafficScript to store a key/value pair that other vTMs can read (e.g. recording an IP address that has triggered a condition you want to rate limit, so that other vTMs can apply the same restriction).
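As a rough sketch of that idea (the Rate Class name "per-ip-limit" and the key prefix are assumptions for illustration, not names from your config), a request rule could look something like this:

```
# Hypothetical request rule: rate-limit per client IP and record
# offenders in the shared data store so other rules can see them.
$ip = request.getRemoteIP();

# If this IP has already been flagged, reject immediately.
if( data.get( "ratelimited-" . $ip ) != "" ) {
   http.sendResponse( "429 Too Many Requests", "text/plain",
                      "Rate limit exceeded", "" );
}

# Check the request against a per-IP Rate Class without queueing;
# "per-ip-limit" is an assumed Rate Class from the catalog.
if( rate.use.noQueue( "per-ip-limit", $ip ) == 0 ) {
   # Over the limit: flag the IP so subsequent requests are denied.
   data.set( "ratelimited-" . $ip, "1" );
   http.sendResponse( "429 Too Many Requests", "text/plain",
                      "Rate limit exceeded", "" );
}
```

Note a real rule would also need a way to age out the flags (e.g. clearing them with data.remove() from a maintenance rule), otherwise an IP stays blocked forever.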
3) Service Protection Classes (SPCs, which I think is what you were actually referring to in your question) apply each of their individual settings a little differently:
a) Network Access Restrictions are a configuration element, so the IP white/black lists are synchronised across the cluster whenever changes are made.
b) Concurrent connection limits are applied per vTM process by default (a vTM instance typically runs several processes, one per CPU core). Concurrent limits are handled this way because they need to be FAST above all else. Within the SPC Concurrent Connections setup, you can change this default so the limit applies per vTM instance rather than per vTM process.
c) SPC Connection Rates are applied per client IP address on a per-vTM basis. In an active/active scenario, some form of session persistence typically glues all requests from a particular client to a specific vTM, so rate enforcement doesn't usually need to be tracked across vTMs in a cluster. For more granular control, use specific Rate Classes, which can be synchronised using TrafficScript functions as mentioned earlier.
d) SPC HTTP-specific restrictions are essentially protocol-level enforcement, so no tracking between vTMs is needed: the configuration is enforced no matter which vTM instance you are talking to.
e) SPC Service Protection Rules are where you get full flexibility: a TrafficScript rule runs as part of the SPC, giving you total control, and you can set/get shared data to build business logic across the cluster to do pretty much whatever you want.
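To make that concrete, here is a minimal sketch of such a rule (the key prefix and the threshold of 100 are made-up values for illustration): it counts requests per client IP via the shared data store and discards connections from noisy clients.

```
# Hypothetical service protection rule: count requests per client
# IP in the shared data store and block clients over a threshold.
$ip = request.getRemoteIP();
$key = "spc-count-" . $ip;

# Unset keys come back as the empty string, which coerces to 0.
$count = data.get( $key ) + 1;
data.set( $key, $count );

# 100 is an assumed threshold, purely for illustration.
if( $count > 100 ) {
   connection.discard();
}
```

As with the earlier example, a production version would need to expire or reset the counters (e.g. via data.remove()) so the limit applies over a window rather than forever.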
I hope this helps you understand how this all works. Let us know if this fits the bill or if you need more info...