There are certain circumstances where I want to cache a request even though the client terminates the connection.
This is due to the client application timing out and continually re-issuing the same request.
Is this possible to achieve in any way, for example by spawning off a new, duplicate request that is unaffected by the client connection?
Any feedback would be helpful. (I have the code in place that handles the caching, however it only does so if the request can be fulfilled in under the client application timeout.)
I've added an example transaction trace of a request where the client terminates the connection, causing the request to be fulfilled on our backend servers but not allowing a response to be sent.
| Time        | Event                                                         | Description                                  |
| 0.001 ms    | TCPReadClient (361 bytes)                                     | Read TCP data from client                    |
| 0.006 ms    | HTTPReqGot                                                    | HTTP headers read from client                |
| 0.011 ms    | HTTPReqParsed                                                 | HTTP headers parsed                          |
| 0.022 ms    | RuleRun (cache_disable)                                       | Rule started                                 |
| 0.030 ms    | RuleRun (gisservices_internal)                                | Rule started                                 |
| 0.039 ms    | RuleRun (private_pool_selection)                              | Rule started                                 |
| 0.044 ms    | RuleRun (https_redirects)                                     | Rule started                                 |
| 0.049 ms    | RuleRun (tilecache_rewrite)                                   | Rule started                                 |
| 0.051 ms    | RuleRun (streaming_rewrite)                                   | Rule started                                 |
| 0.055 ms    | RuleRun (navico_signup)                                       | Rule started                                 |
| 0.058 ms    | RuleRun (large_request)                                       | Rule started                                 |
| 0.091 ms    | RulesStopped                                                  | All rule processing stopped by a rule        |
| 0.096 ms    | HTTPCacheMiss                                                 | HTTP request not found in web cache          |
| 0.100 ms    | HTTPReqBuilt                                                  | HTTP request constructed for back-end server |
| 0.111 ms    | NodeReuseConn (redacted:80)                                   | Connection to back-end node re-used          |
| 0.130 ms    | TCPWroteServer (397 bytes)                                    | Wrote TCP data to back-end server            |
| 3008.866 ms | ConnFailedClient ("Read failure - Connection closed by peer") | Client connection failed                     |
| 3008.868 ms | TCPClientClosed                                               | Client closed TCP connection                 |
| 3008.871 ms | End                                                           | Request finished                             |
Stingray should still be caching the requests even though the client is timing out. It's possible that one of your rules is preventing that from happening. One of your rules is cache_disable, can you post that one?
I have the cache enabled on a number of virtual servers; however, I only want to cache certain "large requests".
I globally disable the cache at the beginning of each transaction and re-enable the cache when the request meets certain criteria.
The cache_disable rule is simply:
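The rule body didn't come through above; as a minimal sketch, assuming it is just the standard TrafficScript cache-disable call, it would be:

```
# cache_disable: runs first on every transaction so nothing is
# cached unless a later rule explicitly opts the request back in.
http.cache.disable();
```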
However, in the subsequent rule large_request I re-enable caching if the request meets specific criteria (i.e. the width and height query parameters both exceed 3000):
$urlPath = http.getRawUrl();
if( !string.regexmatch( $urlPath, 'width=(\d+)', "i" ) ) break;
if( !( $1 > 3000 ) ) break;
if( !string.regexmatch( $urlPath, 'height=(\d+)', "i" ) ) break;
if( !( $1 > 3000 ) ) break;
http.removeHeader( "Cache-Control" );
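Note that the snippet above only strips the Cache-Control header; to actually opt the request back into the cache after the global disable, the rule presumably also re-enables caching. A sketch, assuming the standard TrafficScript call:

```
# Undo the earlier http.cache.disable() from the cache_disable
# rule so this large request becomes cacheable again.
http.cache.enable();
```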
It appears that after the client closes the connection the Stingray Traffic Manager gives up on the request that was sent to the backend servers.
This leaves our backend servers doing the work without reaping the benefit of having the completed response cached and available for when the client application issues another request.
Thanks for the feedback, looking forward to additional information.
I'm sorry but I gave you incorrect information earlier. If the client request times out, the requested object will not be cached. I'm sorry for any inconvenience this caused you.
Is there a way to remediate this problem with TrafficScript?
I.e., spin up a secondary request that is independent of the client request?
I was unable to find anything capable of doing this.
Unfortunately the response rule won't be run if the client closes the connection, so I don't see a way to do that either. Why is the client connection closing?
The client application that our customer base uses has a default timeout of 30 to 60 seconds. After this timeout another request is sent, then another after the next timeout, and this cycle continues indefinitely until the client successfully receives the data.
Some of the requests for data can take anywhere from 5 to 10 minutes.
Previously we were using Squid to accomplish the caching of requests where the client application was closing the connection. We would accept the first request and then hold onto any duplicate requests until the backend servers successfully completed the original request.
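For comparison, that Squid behaviour (holding duplicate requests until the first backend fetch completes) is its request-collapsing feature; in our old setup it was enabled with something like the following. This is a sketch, and the directive's availability depends on the Squid version:

```
# Queue concurrent requests for the same URL behind the first
# backend fetch, then serve them all from the freshly cached copy.
collapsed_forwarding on
```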
I've been working through the possibility of migrating this functionality to Stingray using TrafficScript.
However, I believe I'll need some mechanism for spawning a connection independent of the client connection to successfully fulfill the initial request. Once the initial request is fulfilled, the client will undoubtedly get the data from the cache without any problems.
As always, any feedback is appreciated.