A correction to prevent misunderstandings: the timestamp shown for the
second request, "13:30:13", isn't accurate; please just read the second
request as being executed right after the first.
Nobody has answered my post yet. Henrik, as the author of the patch, could
you maybe help me?
Thanks,
Brendan
On Wed, Aug 27, 2003 at 01:45:36PM +0200, Brendan Keessen wrote:
> Hi,
>
> I patched squid-2.5.3 with the collapsed_forwarding-2_5.patch from Henrik
> Nordstrom. Now I see some behaviour I don't understand. I have set up the
> patched Squid as a reverse proxy (127.0.0.1:80) and turned on
> collapsed_forwarding in the Squid configuration file. The website it
> accelerates (127.0.0.1:81) has a small Perl script which gives _no_ output
> for 10 seconds and then returns a proper HTTP-conformant header with an
> Expires of now() + 120 seconds (a rough, purely illustrative sketch of such
> a backend is included just below). OK, now let me try to describe a
> situation I don't understand:
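> 
> For reference, before the walkthrough: the backend behaves roughly like
> the sketch below. My real script is Perl; this Python version is only an
> illustration of the 10-second delay and the Expires header, not the actual
> code, and the handler name is made up:
> 
>   from http.server import BaseHTTPRequestHandler, HTTPServer
>   from email.utils import formatdate
>   import time
> 
>   class SlowHandler(BaseHTTPRequestHandler):
>       def do_GET(self):
>           # Produce no output at all for 10 seconds...
>           time.sleep(10)
>           # ...then reply with headers that mark the object cachable
>           # for 120 seconds from now.
>           self.send_response(200)
>           self.send_header("Content-Type", "text/html")
>           self.send_header("Expires",
>                            formatdate(time.time() + 120, usegmt=True))
>           self.end_headers()
>           self.wfile.write(b"hello from the slow backend\n")
> 
>   if __name__ == "__main__":
>       HTTPServer(("127.0.0.1", 81), SlowHandler).serve_forever()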
>
> * The object index.pl was purged from the cache
>
> Client 1 requests http://127.0.0.1/index.pl (I turned on debug loglevel 33,7):
>
> 2003/08/27 13:18:58| clientProcessRequest2: storeGet() MISS
> 2003/08/27 13:18:58| clientProcessRequest: TCP_MISS for 'http://127.0.0.1:81/index.pl'
> 2003/08/27 13:18:58| clientProcessMiss: 'GET http://127.0.0.1:81/index.pl'
>
> I added some debug messages just to check whether the new object is made
> public immediately, which, as I understand from the documentation
> (http://devel.squid-cache.org/collapsed_forwarding/), is what allows new
> requests to attach to the pending request:
>
> 2003/08/27 13:18:58| storeSetPublicKey(http->entry)
>
> It does!
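> 
> Just to make my expectation explicit: the behaviour I think collapsed
> forwarding should give me is roughly what this little Python sketch does.
> It is only my mental model, not the actual Squid code, and the names
> (pending_fetches, collapsed_get, fetch_from_backend) are made up:
> 
>   import threading
> 
>   pending_fetches = {}            # URL -> state of the pending fetch
>   pending_lock = threading.Lock()
> 
>   def collapsed_get(url, fetch_from_backend):
>       # The first caller for a URL fetches it from the backend; callers
>       # that arrive while that fetch is still pending wait for its reply
>       # instead of contacting the backend themselves.
>       with pending_lock:
>           entry = pending_fetches.get(url)
>           owner = entry is None
>           if owner:
>               entry = {"done": threading.Event(), "reply": None}
>               pending_fetches[url] = entry
>       if owner:
>           entry["reply"] = fetch_from_backend(url)  # the ONE backend request
>           entry["done"].set()
>           with pending_lock:
>               del pending_fetches[url]
>       else:
>           entry["done"].wait()                      # attach to pending request
>       return entry["reply"]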
>
> During the 10 seconds it takes to get the content of index.pl, I send one
> or several further requests for the same object (index.pl). I expect these
> to be attached to the pending request, so that only one request in total
> arrives at the backend server. This doesn't happen: every request sent to
> Squid for the same object during the 10 seconds index.pl needs to produce
> its output is forwarded to the backend server. More detailed debug output
> for such a request:
>
> 2003/08/27 13:30:13| clientRedirectDone: 'http://127.0.0.1:81/index.pl' result=NULL
> 2003/08/27 13:30:13| clientInterpretRequestHeaders: REQ_NOCACHE = NOT SET
> 2003/08/27 13:30:13| clientInterpretRequestHeaders: REQ_CACHABLE = SET
> 2003/08/27 13:30:13| clientInterpretRequestHeaders: REQ_HIERARCHICAL = SET
> 2003/08/27 13:30:13| clientProcessRequest: GET 'http://127.0.0.1:81/index.pl'
> 2003/08/27 13:30:13| clientProcessRequest2: !storeEntryValidToSend MISS
> 2003/08/27 13:30:13| clientProcessRequest: TCP_MISS for 'http://127.0.0.1:81/index.pl'
> 2003/08/27 13:30:13| clientProcessMiss: 'GET http://127.0.0.1:81/index.pl'
> 2003/08/27 13:30:13| storeSetPublicKey(http->entry)
>
> To describe my problem again briefly: if I concurrently make, say, 50
> requests for the same URL (which is not in the cache) against a Squid with
> the collapsed_forwarding patch, and the request in my test setup hangs for
> 10 seconds, then all 50 concurrent requests are sent to the backend
> server. As I understand the documentation, this is exactly the thing the
> patch should prevent. (The sketch just below shows roughly how I fire the
> requests.)
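> 
> For completeness, the way I fire the concurrent requests is roughly the
> following sketch (the URL and the count of 50 match my test setup; the
> rest is just illustrative Python, any concurrent client shows the same):
> 
>   import threading
>   import urllib.request
> 
>   URL = "http://127.0.0.1/index.pl"   # the accelerator in front of :81
>   CONCURRENT = 50
> 
>   def fetch(i):
>       with urllib.request.urlopen(URL) as resp:
>           print(i, resp.status)
> 
>   # Start all requests well inside the backend's 10-second delay, then
>   # wait for them.  With collapsed forwarding working I would expect the
>   # backend to log a single request for index.pl, not 50.
>   threads = [threading.Thread(target=fetch, args=(i,))
>              for i in range(CONCURRENT)]
>   for t in threads:
>       t.start()
>   for t in threads:
>       t.join()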
>
> Could anyone help me understand why this behaviour occurs, or tell me what
> I should investigate further to solve the problem?
>
> Thanks,
> Brendan