Analytics: Interpreting Horseshoe and Timeline Charts

This article is part of a series, beginning with Analytics Application - Concepts and Metrics Explained


Comparative Analysis

The Analytics Application included with Services Director offers a Comparative Analysis view, which lets you plot up to two further metrics (or a single metric with a split applied) against the same timeline represented on the primary timechart. These metrics are derived in the same way as for the primary timechart.


The Alternative Views tab displayed under the line view provides a means to visualise the average timeline of requests passing through a single vServer, including "Horseshoe" and "Timeline" chart options.


Deriving Timestamp Metrics

The visualisation is based on timestamps relating to the start and end of phases of processing that vTM undertakes to handle a transaction. These timestamps are measured in seconds relative to the start of the connection (or, in the case of a request protocol on a keepalive connection, the end of the previous request on the connection).

The timestamps are defined as follows:


Request Handling Timestamps

Client Timestamps
"crqs" - Client Request Start: first read from the client after connection (or the end of the previous request on a keepalive connection)
"crqe" - Client Request End: last read from the client; that is, the end of the request body for HTTP

vTM Timestamps
"trqs" - Traffic Manager Request Start: the point at which vTM has enough information to make a load balancing decision (i.e. the end of headers for HTTP)
"rrqs" - Rules Request Start: about to run the first TrafficScript request rule
"rrqe" - Rules Request End: finished running TrafficScript request rules
"trqe" - Traffic Manager Request End: the vTM has initiated the connection to the server, in preparation for transmitting the request and receiving a response

Server Timestamps
"srqs" - Server Request Start: first write from vTM to the backend server node
"srqe" - Server Request End: last write from vTM to the backend server node



Response Handling Timestamps

Server Timestamps
"srss" - Server Response Start: first read from the backend server node by the vTM
"srse" - Server Response End: last read from the backend server node by the vTM

vTM Timestamps
"trss" - Traffic Manager Response Start: the point at which vTM has enough information to process the response; that is, the end of HTTP headers
"rrss" - Rules Response Start: about to run the first TrafficScript response rule
"rrse" - Rules Response End: finished running TrafficScript response rules
"trse" - Traffic Manager Response End: the point at which vTM has completed processing the response and is simply forwarding it to the client

Client Timestamps
"crss" - Client Response Start: first write to the client
"crse" - Client Response End: last write to the client



From the timestamps, we can derive the durations of a number of processing tasks within the transaction handling:

Metric | Calculated As | Description
"crq" | "crqe" - "crqs" | Duration of vTM reception of the client request
"trq" | "trqe" - "trqs" | Duration of vTM processing of the client request
"srq" | "srqe" - "srqs" | Duration of vTM transmission of the processed client request to the server
"spr" | "srss" - "srqe" | Duration of server processing
"srs" | "srse" - "srss" | Duration of vTM reception of the server response
"trs" | "trse" - "trss" | Duration of vTM processing of the server response
"crs" | "crse" - "crss" | Duration of vTM transmission of the server response
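Each derived metric is a simple subtraction of one timestamp from another. As a rough sketch (the record fields follow the abbreviations above, but the values are illustrative and not taken from a real vTM transaction record):

```python
# Hypothetical transaction record: timestamps in seconds, relative to the
# start of the connection. Field names follow the vTM abbreviations above;
# the values are purely illustrative.
record = {
    "crqs": 0.000, "crqe": 0.002,
    "trqs": 0.001, "trqe": 0.003,
    "srqs": 0.004, "srqe": 0.005,
    "srss": 0.120, "srse": 0.150,
    "trss": 0.121, "trse": 0.151,
    "crss": 0.122, "crse": 0.152,
}

# (metric, end timestamp, start timestamp), per the table above.
DERIVED = [
    ("crq", "crqe", "crqs"),  # reception of client request
    ("trq", "trqe", "trqs"),  # vTM processing of client request
    ("srq", "srqe", "srqs"),  # transmission of request to server
    ("spr", "srss", "srqe"),  # server processing
    ("srs", "srse", "srss"),  # reception of server response
    ("trs", "trse", "trss"),  # vTM processing of server response
    ("crs", "crse", "crss"),  # transmission of response to client
]

durations = {name: record[end] - record[start] for name, end, start in DERIVED}
```

In this sketch the "spr" (server processing) duration dominates, which would appear as a long Server Processing segment on the charts described below.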


These timestamps are illustrated on the following timeline of a typical vTM transaction, showing how the segments of the transaction map to the metrics calculated above:

[Image: an4-timestamp.png]


The "Horseshoe" and "Timeline" charts in the analytics application are based on averages of the timings and derived durations from the transaction records that fit within the selected timescale and filters.

[Image: an4-horseshoe.png]


The color of each segment reflects the duration it represents: values close to zero are shown in green, values of 1000ms or more in red, with a gradient of colors in between (as shown in the key/legend). This provides a handy visual cue as to where processing is taking human-discernible periods of time, and which phases of processing take longest to complete.
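The green-to-red mapping can be thought of as a linear gradient clamped at 1000ms. A minimal sketch of that idea (the exact gradient the application uses is not documented here, so treat this as an approximation):

```python
def duration_color(ms):
    """Map a duration in milliseconds to an (R, G, B) color: 0ms maps to
    green, 1000ms or more maps to red, with a linear blend in between.
    This is an assumed approximation of the chart's key/legend."""
    t = max(0.0, min(ms / 1000.0, 1.0))  # clamp fraction to [0, 1]
    return (int(255 * t), int(255 * (1 - t)), 0)
```

For example, a 0ms phase maps to pure green and a 1500ms phase to pure red, with mid-range durations blending through yellow.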


Timeline Charts

The timeline chart represents the same durations as the horseshoe chart, but combines these durations with the average start points for each phase of processing to present an aggregate timeline:

[Image: an4-timebar1.png]


Note - The start time shown for each phase is the start timestamp of the phase minus crqs; this is because crqs represents the wait time from client connection establishment to the first request byte (or the wait between the end of one request and the first byte of the next). These wait times vary greatly in length, and can distort the timeline chart without adding much information about how the request processing time breaks down between the vTM and the back-end server. Hence, the left edge of the bars in the timeline view can be considered to represent the point in time at which the first request byte is received by vTM (in other words, crqs).
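Rebasing the phase start times in this way amounts to subtracting crqs from every start timestamp, so the first request byte sits at the left edge of the chart. For example (the phase start values here are illustrative assumptions):

```python
# Illustrative phase start timestamps in seconds, relative to connection
# establishment. "crqs" includes the client's pre-request wait time.
phase_starts = {"crqs": 0.030, "trqs": 0.032, "srqs": 0.035, "srss": 0.150}

# Rebase so the timeline begins at the first request byte, as the note
# above describes: subtract crqs from every start timestamp.
rebased = {name: t - phase_starts["crqs"] for name, t in phase_starts.items()}
```

After rebasing, crqs itself sits at 0 and the remaining phases keep their relative spacing.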


The timeline will often require careful interpretation, for a number of reasons:


1. Phases of processing can be too small to visualize

The phases may be so fast that they are almost unnoticeable when plotted on the timeline. For example, see the Request from Client, vTM Req Processing and Request To Server phases on the chart above. This indicates that the client request handling on vTM is trivial for this vServer.


2. Phases of processing can and often do overlap

For example, note from the timeline of a typical vTM transaction diagram that a vTM can commence sending a request to the server before having fully received the client request. Also note in the example timeline chart above how the start of the Response from Server bar is almost immediately followed by a 0ms vTM Response Processing phase and a Response to Client bar. This is a reasonable indicator that for this request, the vTM is simply forwarding the response to the client. Where a later phase of processing commences before the end of an earlier phase, the bar representing the earlier phase is split into two differently-colored sections:


  • A non-overlapping section, where only the earlier phase is in progress. This may represent a 'critical path' of processing before which the next phase cannot commence. However, it may also indicate that the next phase cannot commence for some other reason, for example while waiting for a server connection. The color of this section of the bar reflects the duration it represents, using the same scale as the horseshoe segments: values close to zero are shown in green, values of 1000ms or more in red, with a gradient of colors in between.

  • An overlapping section, where both phases are in progress. The color of this bar is a darker shade of the color used for the non-overlapping section.

Note that where two processing phases share long overlapping sections, it is the bar of the 'later' phase whose colors reflect the duration of the processing (for example, showing red for phases of 1000ms or more). While this is helpful for identifying processing phases that are taking longer than desirable, it should not be taken to automatically mean that the later phase is the cause of the delay. For example, when a server slowly streams a response to a vTM and the vTM streams it straight back to the client, the Response to Client bar will be shown red, while the Response from Server bar will be shown green.
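Splitting a bar into its non-overlapping and overlapping sections is ordinary interval arithmetic. A minimal sketch (the interval values are hypothetical; this is not the application's rendering code):

```python
def split_overlap(earlier, later):
    """Given (start, end) intervals for an earlier and a later phase,
    return (non_overlap, overlap): the portion of the earlier phase
    during which it runs alone, and the portion during which both
    phases are in progress. Times are in seconds."""
    e_start, e_end = earlier
    l_start, l_end = later
    overlap = max(0.0, min(e_end, l_end) - max(e_start, l_start))
    return (e_end - e_start) - overlap, overlap

# A server response still streaming in (0s to 0.10s) while the response
# to the client has already begun (0.06s onwards):
non_overlap, overlap = split_overlap((0.0, 0.10), (0.06, 0.20))
```

Here the Response from Server bar would be drawn as a 0.06s solid section followed by a 0.04s darker-shaded section where both phases run concurrently.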


3. Similar traffic patterns may be processed differently

The traffic that passes through a vServer does not always follow a single homogeneous pattern of processing. The configuration of a vServer may produce a number of traffic processing patterns with very different timing profiles, some of which entirely skip certain phases of processing. For example, consider a vServer with caching enabled sitting in front of a relatively slow server. The traffic passing through that vServer will likely fall into two categories:


  • Traffic requiring server interaction. There will be definite Request to Server and Response from Server / Response to Client sections in the timing patterns, potentially separated by a Server Processing bar, during which the vServer is waiting for the server to start responding. For example, the following is a timeline showing averages over a five minute period for a vServer. The vServer is fronting a server that returns a large, static payload with caching enabled. In this case, there is an Extended Filter clause set to HTTP Response Cache Hit IS FALSE.


  • Traffic served from cache. The Request from Client can be very short (possibly registering as 0ms if the request itself is trivial). Also, Response to Client will start without any delay for Server Processing and is potentially much shorter in duration than the equivalent non-cached case. This is because the response can be served by vTM from memory without making any connection or request to the back-end server. As with Server Processing, the Response from Server and vTM Response Processing phases are not required in the cached case, and will show as 0ms in duration. The following is an example timeline from the same vServer. Caching is still enabled, and the same large static payload is being delivered over the same time period as above. However, the Extended Filter clause is now set to HTTP Response Cache Hit IS TRUE:

[Image: an4-timebar3.png]


  • The combination of heterogeneous traffic patterns and averaging can lead to timing charts that appear to depict "impossible" timelines. For example, a Response to Client occurring before Server Processing. This can be seen by combining the graphs above (by removing any filtering based on cache hits) to deliver the following timeline. Note that the average Response to Client begins before the average Response from Server:

[Image: an4-timebar4.png]

    As a result, when viewing such charts, it is important to remember that the chart depicts average start times and durations, and that apparent timeline anomalies are likely a sign that two or more traffic patterns are combined in the same dataset. These patterns can differ, potentially radically, in terms of average timings. The Dataset View can be used to investigate potential reasons for these differences, such as responses from cache, responses from TrafficScript, connection failures, and so on. Further filtering can then be applied to separate out these different timing pattern groups, producing a more standard "waterfall" timeline for each group.
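The averaging effect behind these anomalies can be illustrated with a little arithmetic. Using made-up numbers for a vServer where 80% of requests are cache hits (which skip Server Processing entirely):

```python
# Illustrative Server Processing durations in ms for the two traffic
# groups described above: cache misses wait on the server; cache hits
# skip the phase entirely (0ms). The traffic mix is an assumption.
miss_spr_ms, hit_spr_ms = 195.0, 0.0
miss_weight, hit_weight = 0.2, 0.8

avg_spr = miss_spr_ms * miss_weight + hit_spr_ms * hit_weight
# avg_spr comes out around 39ms, a duration observed in neither group;
# filtering on HTTP Response Cache Hit recovers two sensible, separate
# timelines instead of one misleading blend.
```

This is why the chart averaged over both groups describes no real transaction, and why filtering the dataset into homogeneous groups restores a meaningful "waterfall" timeline.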




Prev: Exploring Table Views

Next: Reporting on Top Events



Version history: Revision 3 of 3, last updated 06-01-2018 08:17 AM