Persistence feature causing Terminal Server problem ?

Occasional Contributor




I’m having a confusing issue with IP-based persistence on my load-balanced Windows Terminal Server farm:


IP-based persistence ON:

When persistence is turned on, drain-stopping a node from the SteelApp has no effect at all: clients are still directed to the drain-stopped node, which causes problems during node maintenance.


Expected result: a drain-stopped node should no longer accept any client/user traffic at all.



IP-based persistence OFF:

When persistence is turned off, the nodes drop off the load balancer one by one with this error, shown in red with an exclamation mark on almost all nodes:

"Passive monitoring detected an error: Read error, Failed to read data from this node."

SERIOUS Pool RDP-Farm1:3389, Node <one of the Windows TS IP>:3389: Node <one of the Windows TS IP> has failed - Read error: Connection reset by peer


Expected result: with persistence turned off, the nodes should not drop off the load balancer; they should stay operational with no errors.


Note: I'm using 2x Stingray Traffic Manager Virtual Appliance 1000 M 9.6r1.


Is this the default behaviour, or has something gone wrong?


Thanks in advance.




Re: Persistence feature causing Terminal Server problem ?

Thanks for the information - you should raise a support request to look at the passive monitoring errors you are seeing, but I think you will need session persistence to manage RDP successfully.
Meanwhile, when you are using session persistence, traffic will continue to be sent to nodes that are being drained. Draining does not actively purge sessions from a node; it only ensures that no new sessions are associated with the draining node. You need to wait for existing sessions to close, either ended gracefully by the application or timed out due to inactivity.
In general, RDP sessions are long-lived, so draining is better suited to HTTP sessions, which have shorter lifetimes.
You may already have seen the article on RDP load balancing: there are a few, including this one:
Occasional Contributor

Re: Persistence feature causing Terminal Server problem ?

Thanks for the reply Paul,


So what should I do, from the SteelApp's perspective, to make sure I can drain-stop the client RDP connections to a single RDSH (TS) VM?

The clients complain that when I perform a drain stop, they cannot get into any terminal server at all.



Do I just kill all of the RDP sessions from the Remote Desktop Services Manager console?

Occasional Contributor

Re: Persistence feature causing Terminal Server problem ?

Hi Paul,

Can I just use the Brocade Virtual Traffic Manager Appliance (VMware OVF) Version 10.1 (64-bit) when I'm currently on version 9.6r1?

Lastly, where should I import the script from the link you suggested above?

Re: Persistence feature causing Terminal Server problem ?

It is really easy to create and import TrafficScript rules: there is an intuitive user interface and an application-aware script and function library. However, if you have not created TrafficScript rules before, the best approach is to set up a test application with the Developer Edition and try some simple examples first.


Upgrading from 9.6 to 10.1 is a major version change and will need additional steps; the Installation guides also include sections on minor/major upgrades:




Occasional Contributor

Re: Persistence feature causing Terminal Server problem ?

Many thanks @PaulWallace. So I guess in this case I will have to create or import a TrafficScript rule for the load balancer to work properly with Terminal Server.


I was under the impression that the Riverbed SteelApp could do this straight out of the box with no issues, but it turns out that is not quite the case.

Frequent Contributor

Re: Persistence feature causing Terminal Server problem ?


So just to clarify a few things:


1) The persistence behaviour you are seeing is working as designed. "Drain Stop" allows any existing connection to continue to the node while preventing new connections. The way we know a connection is pre-existing is that it has a persistence record, so any persistence mechanism will allow traffic to be maintained to a server that is in "Drain Stop". This is designed to let customers finish their transactions and disconnect once they are done.


2) There is a difference between "Drain Stop", which behaves as described in (1), and "Disabled", which prevents any traffic from being sent to the server. Perhaps the latter is the behaviour you are looking for. A normal graceful takedown of a production system would place the server into "Drain Stop" for an appropriate amount of time, until the user count is low enough to ensure minimal disruption, and only then disable it and take it down for maintenance.
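Points (1) and (2) can be sketched as a toy node-selection routine. This is plain Python, not TrafficScript, and the node names, dictionary shapes, and "pick the first active node" choice are all illustrative rather than how the traffic manager is implemented:

```python
from enum import Enum

class NodeState(Enum):
    ACTIVE = "active"      # takes new and existing sessions
    DRAINING = "draining"  # "Drain Stop": existing (persisted) sessions only
    DISABLED = "disabled"  # no traffic at all

def select_node(client_ip, nodes, persistence):
    """Pick a node for a client, honouring drain/disable semantics."""
    # A client with a persistence record stays pinned to its node,
    # unless that node has been disabled outright.
    pinned = persistence.get(client_ip)
    if pinned is not None and nodes[pinned] is not NodeState.DISABLED:
        return pinned
    # New sessions only go to fully active nodes (round-robin omitted).
    active = sorted(n for n, s in nodes.items() if s is NodeState.ACTIVE)
    if not active:
        return None
    persistence[client_ip] = active[0]  # record the new pin
    return active[0]

nodes = {"ts1": NodeState.DRAINING, "ts2": NodeState.ACTIVE}
persistence = {"10.0.0.5": "ts1"}   # existing session pinned to the draining node
kept = select_node("10.0.0.5", nodes, persistence)   # "ts1": draining still serves it
fresh = select_node("10.0.0.9", nodes, persistence)  # "ts2": new sessions avoid it
nodes["ts1"] = NodeState.DISABLED
moved = select_node("10.0.0.5", nodes, persistence)  # "ts2": disabling evicts it
```

This mirrors the observed behaviour: a drain-stopped node keeps receiving persisted clients, and only "Disabled" cuts them off.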


3) Be careful with source-IP-based persistence. I have seen many customers over the years come unstuck when NAT was in use for inbound traffic (i.e. where all inbound connections are NATted from behind a firewall, for example). This makes all traffic appear to the ADC as coming from a single source IP address, so all clients' sessions are treated as the same session from a persistence perspective.
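The NAT pitfall in (3) is easy to demonstrate with a toy source-IP hash. The hash choice here is illustrative only, not how the traffic manager actually maps IPs to nodes:

```python
import hashlib

def pick_node_by_source_ip(client_ip, nodes):
    """Toy source-IP persistence: hash the client IP onto the node list."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

nodes = ["ts1", "ts2", "ts3"]

# Behind a NAT, every client arrives from the firewall's single public
# address, so every "different" client is persisted to the same node.
natted = {pick_node_by_source_ip("203.0.113.7", nodes) for _ in range(1000)}
assert len(natted) == 1  # the whole farm collapses onto one server
```

Any deterministic per-IP mapping has this property, which is why source-IP persistence and inbound NAT combine so badly.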


4) The solution suggested by @PaulWallace has merit; it is a specific solution for when RDP is used with a dedicated Session Broker. The broker tracks which users are logged in where, and allows a user to be reconnected to the correct server when they next log in. This is done using an x224 header in the RDP server's response (post-authentication, after it checks with the session broker to see where the user is currently logged in); the TrafficScript rule in that solution parses this x224 header and redirects the user to the correct node.
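For reference, the broker's redirection is commonly carried as an "msts=" routing token in the x224 payload. Below is a minimal parser sketch in Python, assuming the widely documented token layout (`Cookie: msts=<ip>.<port>.<reserved>`, with the IP and port rendered as byte-swapped decimal integers); the function name is ours, and you should verify the layout against the tokens your broker actually emits:

```python
import socket
import struct

def parse_msts_routing_token(cookie_line):
    """Decode 'Cookie: msts=<ip>.<port>.<reserved>' into (ip, port).

    In this token format the IPv4 address is a little-endian uint32
    and the port a little-endian uint16, both printed as decimal.
    """
    token = cookie_line.strip().split("msts=", 1)[1]
    ip_num, port_num, _reserved = token.split(".")
    # Repack the little-endian integer and read it back as dotted quad.
    ip = socket.inet_ntoa(struct.pack("<I", int(ip_num)))
    # Swap the two port bytes back into host order.
    port = struct.unpack(">H", struct.pack("<H", int(port_num)))[0]
    return ip, port

# 15629 is 3389 (the RDP port, 0x0D3D) with its two bytes swapped.
ip, port = parse_msts_routing_token("Cookie: msts=3640205228.15629.0000")
# ip == "172.31.249.216", port == 3389
```

A TrafficScript rule doing the real job would extract this token from the x224 data and select the matching pool node instead of returning a tuple.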



5) If you are not using any persistence mechanism, you run the risk of sessions timing out, users being re-load-balanced regularly, and the passive-monitoring errors you mentioned.

Aidan Clarke
Pulse Secure vADC Product Manager