Hello, on Friday 19th, I implemented split tunnelling for O365 as per the Pulse documents and the latest MS docs. All seems to be going well and we have seen a noticeable improvement in the throughput and CPU usage of our 5K appliance. However, I have noticed that despite the rule being in place, I still see quite a large amount of Teams traffic trying to traverse the VPN, where it is then proxied by the on-prem proxy. We are having a few authentication issues with our proxy today and as a result this has impacted Teams. The destination I noticed in Wireshark that was still traversing the VPN was
Your post caught my eye, as I've been involved in split-tunneling O365's "optimized" category on Pulse as well. Unfortunately I don't really have an answer for you (sorry), but I will share what I'm seeing as there is some weirdness (similar to what you describe).
Both 52.112.0.0/14 and 52.120.0.0/14 are set to be excluded from the tunnel in our policy.
Out of curiosity, I did some investigation and noticed that for users on Pulse with split-tunneling set up for O365, nbname traffic (udp 137) is still using the tunnel back to the data center [no idea if that traffic is important or used], but tcp traffic (essentially just 443) is being split-tunneled as desired (based on Wireshark captures). I also see some of the udp 3478-3481 traffic coming back to the data center, when I expect that to be split-tunneled based on configuration. Kinda strange. I wouldn't expect protocol (tcp, udp, icmp, etc.) to play any role in how the traffic is routed...it should all just get routed the same! Especially since the split-tunneling policy is set via IPv4 for this, so no FQDN bugs at play.
I would say the bulk of the traffic to the 52.112.0.0/14 and 52.120.0.0/14 subnets is in fact being split-tunneled and going straight to O365, and we have performance gains because of this -- however there's still some traffic coming back via the tunnel, as you mentioned. Not sure why that is! Certainly would like to hear others' comments or ideas as to why.
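A quick way to sanity-check a capture against the exclusion policy is to test each destination IP for membership in the excluded subnets. A minimal sketch using Python's standard `ipaddress` module (the subnets are the Teams "Optimize" /14 ranges discussed above; the sample addresses below are illustrative, not from a real capture):

```python
import ipaddress

# Subnets excluded from the tunnel in the split-tunneling policy
EXCLUDED = [
    ipaddress.ip_network("52.112.0.0/14"),
    ipaddress.ip_network("52.120.0.0/14"),
]

def should_bypass_tunnel(dst_ip: str) -> bool:
    """Return True if the destination falls in an excluded (split-tunneled) subnet."""
    addr = ipaddress.ip_address(dst_ip)
    return any(addr in net for net in EXCLUDED)

# Illustrative destinations (hypothetical, not taken from the captures above)
for ip in ["52.113.10.5", "52.121.1.1", "10.0.0.4"]:
    print(ip, should_bypass_tunnel(ip))
```

Exporting destination IPs from Wireshark and running them through a check like this makes it easy to count exactly how much "excluded" traffic is still riding the tunnel.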
Cheers @elevator4 , It is the exact same scenario. It is clear that, for the most part, the split tunnelling IS working, because the performance gains have been phenomenal (not sure I've ever typed that word...). But what made me query this was the fact that we had an authentication issue with our FortiGate gateway / web filter, which was not automatically picking up authenticated users from AD/FSSO, so everyone was being asked to re-authenticate. This seemed to have an adverse effect on our users' ability to use Teams: we'd either get a message saying that we weren't connected, or the presence side of things wasn't working (available, busy etc.), or we couldn't send chat. (Video and audio seemed unaffected.) I had also played with persistent routes in the routing table to ensure that the traffic was destined for the correct gateway, but launching Teams seemed to overwrite some of the addresses that fall within the excluded /14 networks and send them out over the VPN... I agree, hearing from others would be great. Thanks for your message.
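For anyone trying the persistent-route approach mentioned above: a /14 prefix corresponds to a 255.252.0.0 netmask, which is what Windows `route -p add` expects. A small sketch of deriving those parameters (the gateway in the comment is hypothetical):

```python
import ipaddress

def route_parameters(cidr: str) -> tuple:
    """Return (network, netmask) strings suitable for e.g. Windows 'route -p add'."""
    net = ipaddress.ip_network(cidr)
    return str(net.network_address), str(net.netmask)

network, mask = route_parameters("52.112.0.0/14")
# The persistent route would then look something like (gateway is hypothetical):
#   route -p add 52.112.0.0 mask 255.252.0.0 192.168.1.1
print(network, mask)
```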
If you're seeing O365 traffic sent through the VPN tunnel, I'd say that behavior is expected: traffic that is not latency-sensitive will still use the tunnel, while the latency-sensitive traffic uses the physical interface.
When using applications like MS Teams, I have seen issues where split-tunnel DENY resources pass through both interfaces (a known issue with the Teams app). MS's recommendation is to block access to certain TCP/UDP ports to fix that issue.
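To make the port-blocking idea concrete, here's a rough sketch of the kind of predicate a tunnel-side firewall rule would implement so that Teams media falls back to the physical interface. The exact ports Microsoft recommends blocking are not quoted in this thread, so the UDP 3478-3481 range here (the standard Teams media ports mentioned earlier) is an assumption:

```python
# Assumption: blocking the standard Teams media ports (UDP 3478-3481) on the
# tunnel interface; this is NOT a confirmed quote of Microsoft's guidance.
TEAMS_MEDIA_PORTS = set(range(3478, 3482))

def drop_on_tunnel(protocol: str, dst_port: int) -> bool:
    """True if a packet seen on the tunnel interface should be dropped under this policy."""
    return protocol == "udp" and dst_port in TEAMS_MEDIA_PORTS
```

The idea being that once the tunnel refuses this traffic, the Teams client retries via the split-tunneled path instead of sending media through both interfaces.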