We're using ZXTM to monitor our application service and have configured active health monitors to check for availability.
The example is:
Pool 1 (HTTP):
Pool 2 (HTTP):
Currently, if either node fails to respond on its port, it is taken out of the pool in which the health monitor exists.
However, my application on port 2345 (for example) also requires the node to be operational on port 1234, so if a server fails the connection on 1234 it should be removed from both pools.
I was initially thinking that I'd like to set up the health monitor for Pool 2 to check both ports, but this doesn't appear to be possible.
Any brilliant ideas?
You would have to use an External Program Monitor. The Stingray Traffic Manager 8.1 User Manual shows how this is configured on page 174. I have pasted the manual section below for your reference:
External Program Monitors
An external program monitor can be written in any language. The traffic manager passes command-line arguments to the executable:
domonitor --ipaddr=<IP address> --node=<node name> --port=<port>
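Since the manual says an external monitor can be written in any language, a minimal sketch in Python of a dual-port check might look like the following. The second port (1234) is taken from the question above, and the convention that a non-zero exit status signals failure is an assumption; check the manual for the exact argument and exit-code conventions your version expects.

```python
#!/usr/bin/env python3
# Sketch of an external program monitor that only reports success if
# the node answers on BOTH its pool port and a companion port.
import argparse
import socket
import sys

REQUIRED_EXTRA_PORT = 1234  # companion port from the question (assumption)


def check_port(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def main():
    # The traffic manager passes --ipaddr, --node and --port (see above);
    # parse_known_args tolerates any extra arguments it may also pass.
    parser = argparse.ArgumentParser()
    parser.add_argument("--ipaddr", required=True)
    parser.add_argument("--node")
    parser.add_argument("--port", type=int, required=True)
    args, _ = parser.parse_known_args()

    for port in (args.port, REQUIRED_EXTRA_PORT):
        if not check_port(args.ipaddr, port):
            # Assumed convention: non-zero exit marks the node as failed.
            sys.stderr.write("no response on port %d\n" % port)
            sys.exit(1)
    sys.exit(0)


if __name__ == "__main__" and len(sys.argv) > 1:
    main()
```

Attached to Pool 2, a monitor like this would mark a node as failed whenever either port stops responding.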
Thanks for the note - we'd pretty much come to the same conclusion, but very useful to hear it from elsewhere as well - thanks a million.
Is there a setting which allows a server to be removed from all pools if it fails a health check? That would also solve our problem.
I took a look through the advanced settings, but didn't see anything that struck me as appropriate.
I.e. if we've got two pools looking at a server, a failure of a health monitor in one pool removes the node from ALL applicable pools which contain that server.
You can attach the monitor to all of the pools that need to conduct the test. If the monitor fails, it will drop the node from all these pools.
A couple of notes:
1. If a monitor fails against a node in poolA, we don't automatically drop that node from other pools.
For example, you may have two identical pools, one used for images and the other for Java servlets. You might assign a monitor that tests whether your servlet engine is working and not overloaded to the second pool. If that monitor failed, you would not want to drop the node for all of the other purposes it is used for.
2. If a monitor is assigned to multiple pools, we don't overload the node by running the monitor multiple times. zeus.monitord gathers together all of the monitors it needs to use and rate-limits them according to the monitor configuration; the results are then applied to all of the pools that reference each monitor.
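The sharing behaviour described in these two notes can be pictured with a toy model (illustrative Python only; none of these names are Stingray APIs): each monitor run happens once per node, and the result is fanned out to every pool that references that monitor, while pools using other monitors are untouched.

```python
# Toy model of zeus.monitord's fan-out: one probe result per node,
# applied to every pool that shares the monitor.
class Monitord:
    def __init__(self):
        self.pools = {}          # pool name -> set of active nodes
        self.monitor_pools = {}  # monitor name -> pools referencing it

    def attach(self, monitor, pool, nodes):
        self.pools.setdefault(pool, set()).update(nodes)
        self.monitor_pools.setdefault(monitor, []).append(pool)

    def report_failure(self, monitor, node):
        # A single failed probe drops the node from every pool that
        # shares this monitor -- but not from unrelated pools.
        for pool in self.monitor_pools.get(monitor, []):
            self.pools[pool].discard(node)


d = Monitord()
d.attach("both-ports", "Pool 1", {"10.0.0.1", "10.0.0.2"})
d.attach("both-ports", "Pool 2", {"10.0.0.1", "10.0.0.2"})
d.report_failure("both-ports", "10.0.0.1")
# 10.0.0.1 is now out of both pools; 10.0.0.2 remains in both.
```

This is why attaching the same monitor to both pools solves the original problem: the node is dropped everywhere the monitor is referenced, without the probe running twice.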