We have a MAG-SM160 cluster running 7.1R1.
I'm having trouble getting a VPN to work, and I suspect it's something embarrassingly stupid.
Network Connect Access Policies
1 initial allow *:* applies to joe
2 local allow 192.168.0.0/16:* applies to joe
3 vlan1 allow 192.168.4.0/24 applies to joe, users
connection profile - test-profile
ip addresses 192.168.92.125-192.168.92.126 applies to role joe
User Realms
joe - when username is joe assign role joe
users - when username is "*" assign role users
Roles - joe
overview - network connect enabled, junos pulse selected
restrictions - allow from any ip address
vlan source ip - vlan = internal port ip (only choice), source ip = interface ip (192.168.92.132) (only choice)
network connect - client = pulse, split tunnel disable, route override yes
junos pulse - connections
default - allow user connections
connection name SA - allow user override, this server, automatic connect
junos pulse - components
default component set (2.0.0.8491), minimal components
network settings - network connect
ip address filter "*"
network connect server ip address 192.168.92.135
Our network is basically on the Internet with some filtering (names and numbers changed here to protect the guilty). So there's no NAT or routing, and the VPN internal and external addresses are in different subnets of our internet address space. The MAG is supposed to be giving remote clients a corporate IP address and bypassing the SMB/NFS filtering, with some limits on what they can connect to.
If I log on as joe with role joe, I can run ssh sessions, access NFS shares, etc. So the basic networking is OK - the MAG can talk to our local network. If I run either network connect or Junos Pulse (at least on a Vista client), it connects and modifies the local routing table as expected, giving the client end of the VPN tunnel an address out of the small pool (e.g. 192.168.92.126). I can then continue to talk to the MAG over HTTP, and ping its various IP addresses (the internal and external addresses of the cluster members as well as the cluster addresses). But I can't ping anything else, either on our corporate network or outside (at least with split tunneling disabled). It looks like an ACL issue, but I have a wildcard "allow" rule on network connect.
Apart from this basic problem, I'm also wondering what the "network connect server ip address" does, and what it should be set to.
If I try a traceroute from the client, that address shows up as the first hop, but subsequent hops time out.
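To make that concrete, here is roughly what I am checking from the Vista client (192.168.4.10 is a made-up example of a backend host in our allowed 192.168.4.0/24 range, not a real address):

  route print            - shows the NC tunnel routes were added as expected
  ping 192.168.92.124    - the MAG internal/external/cluster addresses all reply
  tracert 192.168.4.10   - the first hop answers, every later hop times out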
Hello,
Can you please change the following:
network settings - network connect
ip address filter "*"
network connect server ip address 192.168.92.135
Change the IP to 10.200.200.200, which is the default IP, and check whether that resolves your issue.
The network connect server IP address is the server-side IP address used to establish a socket with your NC clients.
Please mark this post as 'accepted solution' if it answers your question - that way it might help others as well. A kudo would be a bonus, thanks!
If you check the caption for "Network Connect Server IP Address", you will see the following:
Specify the base (server) IP for the IVE to apply to Network Connect IP pools. This server-side IP will be common to all nodes (if clustered).
Be careful to choose an IP other than your IVE external/internal IPs.
To get more detail as to why you need the "network connect server ip address", please search the Pulse Secure knowledge base.
That makes no difference. It doesn't make a lot of sense, either, as we don't use 10/8.
I can see ICMP packets arriving at the target, and replies being sent, but they do not make it back to the client - or, as far as I can see with tcpdump on the MAG, even to the MAG chassis. I tried turning off Windows Firewall, to no avail.
If I ping from the target, I get replies from the MAG ports on 192.168.92.124 and 192.168.92.129, which makes me think I have a good route to 192.168.92/24, but not from the VPN address 192.168.126.
What is your actual IP pool defined in the connection profile, and what IP address is assigned to the VPN adapter on the Vista machine once NC/Pulse is connected? From your description I'm seeing two different IPs.
Initially you mentioned the pool is:
"connection profile - test-profile
ip addresses 192.168.92.125-192.168.92.126 applies to role joe"
And in your next post you mentioned that the VPN address is:
"but not from the VPN address 192.168.126."
*IF* your actual VPN adapter IP address is in a subnet that is different from the MAG SM-160 internal interface, then the MAG will not proxy-ARP for the VPN adapter IP, which means that when the target/backend server responds, those packets will never reach the MAG's internal interface. To work around this, you have to add static routes on your network so that any traffic for the NC IP pool is sent to the internal interface of the MAG SM-160, which can then send it back to the client machine over the tunnel. This issue and its solution are described in detail in http://kb.pulsesecure.net/InfoCenter/index?page=content&id=KB23048
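As a sketch only - the pool subnet and next hop below are illustrative assumptions, not values taken from your config: if the NC pool were carved out of, say, 192.168.79.0/24 and the MAG internal port IP were 192.168.92.132, the static route on the upstream router would look something like

  ip route 192.168.79.0 255.255.255.0 192.168.92.132

(exact syntax varies by vendor; on a Linux host the equivalent would be "ip route add 192.168.79.0/24 via 192.168.92.132"). The point is simply that the return path for the entire NC pool must point at the MAG's internal interface.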
Note: The "network connect server ip address" parameter is internal to the MAG and is used to terminate the client tunnels. You will never see it being used when routing packets to the backend; it is only used internally within the MAG/SA device. The reason it is configurable is that this IP has to be unique on your network (it does not have to be on any specific subnet), so if 10.200.200.200 already exists on your network you need to change the parameter; otherwise there is no need to change it.
Thank you - although I missed your reply before solving it.
"VPN address 192.168.126" was a typo - it was supposed to be 192.168.92.126. Looking again at this, I think we had the pool addresses in the same subnet as the external cluster interface not the internal interface.
We have separate narrow VLANs for the connections from our BigIron router to the MAG, so the interfaces are not seeing our entire network.
In any case, we changed to a bigger address pool like 192.168.79/24 and added a static route to the internal interface. This now seems to be working as expected.
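For anyone who hits the same thing, the sanity check we used (192.168.79.50 is just an arbitrary address out of the new pool): from a backend Linux host, run

  ip route get 192.168.79.50

and confirm the next hop shown is the MAG's internal interface (192.168.92.132 in our sanitized numbering) rather than the default gateway. Once that was true, pings from the VPN client got replies again.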