Here is the KB article on load-balancing with SA's - http://kb.pulsesecure.net/InfoCenter/index?page=content&id=KB17848
The key item about load-balancing between two members of an active/active cluster is that you want to ensure that - under normal circumstances - the same SA is accessed for the duration of a user session. If you are doing round-robin DNS load-balancing, this may not be true, as the user workstation may resolve the name of the server more than once during a session.
If your users are using NC, an entry is placed in the HOSTS file of the user's workstation which maps the server DNS name to the address of the server at the time the NC tunnel was established. This helps mitigate the flip-flopping that could occur with a subsequent DNS lookup.
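For illustration, the entry NC adds looks something like the following (the address is a placeholder for the external address of whichever SA the tunnel was established to; on Windows the file is typically C:\Windows\System32\drivers\etc\hosts):

```
# HOSTS entry added by Network Connect for the life of the tunnel
203.0.113.10    vpn.company.com
```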
Unless you absolutely need load-balancing, I would recommend that you configure your GTM(s) in failover mode. This would normally send all users to data center "A", and only route them to data center "B" if the device in "A" is no longer responding to the Internet. I like the deterministic nature of this - you know that a user logging on will go to data center "A" unless it is down.
As far as the process for breaking the cluster / moving the server goes, I think you have it about right. One thing to note - an active/active cluster has no VIP; each server has its own unique internal and external addresses. This means your users will access the primary machine on a different external address after the active/passive cluster is broken. My recommended process -
First step -
Second step -
Third step -
Your service will be down for a short period in steps one and three. If this is unacceptable, you can minimize the impact at the cost of complicating the process. Even under the best circumstances, user sessions will be interrupted.
Hope this is helpful.
Thank you for your detailed response and this is very helpful. Obviously I have follow-up questions.
1) I don't know why the admin guide (version 7.0, page 887) and the KB3179 article have a diagram showing as if an A/A cluster requires a VIP. So you are saying the DNS name, https://vpn.company.com, will point to the external IP addresses of both devices. In this case, SA1 will be contacted first and SA2 will be in failover mode.
2) In the first step, you stated 'remove both the active (SA1) and passive (SA2) from the A/P cluster'. Instead of removing both, can I remove just the SA2 so that the SA1 continues to provide service? While the SA1 provides continued service, I can physically move the SA2 to the new data center; the location is about a 40-minute drive from where the SA2 is currently located. So far, there is no outage.
3) Once the SA2 is ready at the new data center, can I use the external IP address to test the login to the SA instead of creating a DNS entry as you mentioned? Same for the SA1 - can I use the IP address to test?
4) Once the test is successful, I can go ahead with configuring A/A clustering on the SA1 and add the SA2 to the cluster configuration. The DNS name, https://vpn.company.com, points to the cluster VIP, say 18.104.22.168, in the A/S configuration. Can I assign the IP address 22.214.171.124 to the external interface of the SA1 so that I don't have to change the DNS name to point to a different IP address? If I assign 126.96.36.199 to the external interface of the SA2, then I will have to update the DNS name to point to the new IP address, 188.8.131.52.
1) The external VIP in those diagrams would be provided by an external load balancer, like an F5 BIG-IP LTM. This does not apply when the two nodes of the active/active cluster are in different locations.
2) Removing SA1 from the cluster does not cause it to stop functioning. It just deletes the cluster and, in doing so, eliminates the VIP address. You could leave it as part of the cluster and keep the VIP active until you are ready to build the active/active cluster.
3) Yes, though you will get certificate errors when you access the SA by IP address.
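As a rough sketch, a command like the following (hypothetical address) exercises the login page by IP while still presenting the expected host name; `-k` suppresses the certificate-mismatch error you would otherwise hit:

```
curl -vk --resolve vpn.company.com:443:203.0.113.10 https://vpn.company.com/
```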
4) (a) You can't test SA2 without adding it to the cluster; it needs to get its configuration from SA1. Also, a device with only cluster licenses cannot run independently.
4) (b) You could re-address the external interface of SA1 to be the old VIP address to avoid DNS changes. If you have static routes for your NC subnets, you may also need to change the address of the internal interface to avoid changing static routes (they currently should point to the VIP address). If you change the internal address, watch out for authentication problems if you authenticate by any method which is sensitive to client address (e.g., RADIUS).
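For example, a Cisco-style static route for an NC client pool (all values hypothetical) would today point at the internal VIP, and would have to be updated if SA1's internal interface ends up with a different address:

```
! NC client pool 10.20.30.0/24 reached via the SA's internal VIP
ip route 10.20.30.0 255.255.255.0 192.0.2.5
```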
4) (c) Remember that you are going from an architecture which expects little of DNS (it always resolves vpn.company.com to the external VIP address) to one where DNS (the GTM) is playing a significant role. I'm not a DNS expert, so I'm not much help with that. Here is what we do where I am -
Thanks again for the information. I have been assigned to other tasks for the last two weeks, so I could not post. Questions:
(1) You would either need to (1) change the native address of SA1 to the VIP address or (2) change the DNS name to point to the native address of SA1.
(2) I think the answer to (1) addresses this. A GTM is a DNS server with logic, so it could be part of the answer.
(3) I'm not a GTM (or DNS) expert, but I think you have it about right. The one thing missing is the rules for handing out the address of SA1 or the address of SA2 when a request to resolve vpn.mycompany.com is made, something like -
"Resolve vpn.mycompany.com to 184.108.40.206 unless 220.127.116.11 does not respond to https. If 18.104.22.168 does not respond to https, resolve vpn.mycompany.com to 22.214.171.124."
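The rule above can be sketched as simple logic. This is only an illustration of the failover decision, not GTM configuration; the addresses are placeholders, and `responds_to_https` stands in for whatever HTTPS health monitor the GTM uses:

```python
SA1_IP = "203.0.113.10"   # placeholder external address of SA1
SA2_IP = "198.51.100.10"  # placeholder external address of SA2

def resolve(name, responds_to_https):
    """Return the address a failover-mode GTM would hand out,
    given a health-check callable that takes an IP address."""
    if name != "vpn.mycompany.com":
        return None
    if responds_to_https(SA1_IP):
        return SA1_IP    # normal case: everyone goes to SA1
    if responds_to_https(SA2_IP):
        return SA2_IP    # SA1 down: fail over to SA2
    return None          # both down: nothing healthy to hand out
```

The point is that the decision is deterministic: every resolution goes to SA1's address unless its health check fails.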
(4) If they use the same name, they can use the same certificate. I use a wildcard certificate, so I don't think about this too much. My sense is that all members of a cluster share the same certificates, but my memory on this is weak.
I think I am getting to the point where I feel comfortable to go ahead with the changes.
I was going to move the SA2 to a new data center after upgrading the IVEs to the latest version while they are active/passive. However, I was asked to move the SA2 as soon as I can, before the IVE upgrade. I suppose I can upgrade the IVEs after removing them from the A/P cluster.
Here are the steps I will take and I hope I got it all correct.
1. Remove the SA2 from the A/P cluster and configure it for the new data center, i.e., change the IP addresses of both interfaces, configure the DHCP pool for NC clients, the default gateway, etc. I understand that I can't connect to it until I create an A/A cluster on the SA1 and make the SA2 a member.
2. Remove the SA1 from the A/P cluster, which will eliminate the cluster. Change the IP addresses of the inside and outside interfaces to the VIP addresses so that it continues to provide services to clients. I suppose a reboot is not required. At this point, the SA1 is a standalone device.
Will changes made to the SA1 while it is standalone, e.g., user accounts and NC access policies, be copied to the SA2 when I create the A/A cluster and make the SA2 a member?
3. Once the required information is entered on the GTM and the necessary DNS changes are made, I can create the A/A cluster. Would creating an A/A cluster cause an outage?
Some additional questions.
4. Would the device certificate remain installed on both SA1 and SA2 when I remove them from the cluster?
5. I would have to recreate User Record Synchronization since there are IP address changes to both devices. Should I remove it before removing them from the A/P cluster?
Before starting anything, take backups of the user and system configuration on both systems.
(1) This sounds correct.
(2) I'd reboot any time I change addresses on a device. Maybe not necessary, but it gives me peace of mind at very little cost. Changes made to SA1 while it is standalone will be copied onto SA2 when it enters the cluster.
(3) I can't think of any reason creating the A/A cluster would cause an outage.
(4) I think so. It should definitely stay on SA1. Just in case, you can always import it from the system configuration you backed up at the start of the process.
(5) I have no experience with user record synchronization, so I can't answer this question.
A couple more questions.
1. When the SA2 joins the cluster, the configuration will be copied from the SA1. Will this erase the new IP addresses of the network interfaces? On page 881 of the admin guide, it says, 'existing node-specific settings are erased when an IVE joins a cluster'. My concern is that I will not be able to access the SA2 remotely after it joins the cluster, since joining will erase the inside interface IP address. Does this mean I have to be physically next to the SA2 in order to change the IP address back?
2. In active/active, when the SA1 goes down, will connected users statefully fail over to the SA2? Or will they need to reconnect?
(1) The addresses of SA2 will not be changed when it is added to the cluster.
(2) There is no stateful failover for NC users, as they cannot keep the same IP addresses. I think there is stateful failover for other users, but I have never tested this.
I have removed SA2 and it is at a new location. SA1 is still active/passive cluster and I will remove it from the cluster soon and assign the VIP addresses to internal and external interfaces.
Can changing the IP address of the interfaces be done remotely? Or do I need to have access to the console port?
My concern is that I will lose remote access and I will have to be near the device physically.
I am assuming that after changing the internal IP address and clicking 'Save', I will lose the connection.
Can I reconnect using the new IP address?