I have a strong opinion that the store should not know about HTTP, or at
most have very limited knowledge of it (some abstract knowledge will be
required for ranges, ETag support and similar things).
This is why I put the store on a T connection:
Cache miss:

   server >---.---> client
               \
                \-> store

Cache hit:

   store >---.---> client

Partial cache hit (including revalidations which update headers):

   server >---.---> client
             / \
   store >--/   \-> store
Yes, there is a need for an intermediary layer that collects what is
available, constructs the appropriate upstream query, and splits the
data flow. This layer is the . in the graphs above. But in theory not
even this layer needs to be very specific to HTTP. Calling this layer
the store would be quite inappropriate, however, as it stores nothing.
It is only logic.
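The splitting layer (the . in the graphs above) could be sketched as a
simple tee that forwards each data chunk to both the client side and the
store, without knowing anything about HTTP. This is only an illustration
of the idea; the names (tee_t, tee_append, sink_fn) are hypothetical and
not actual Squid interfaces.

```c
#include <stddef.h>

/* Hypothetical sketch of the "." layer: a tee that forwards each data
 * chunk to two sinks (client side and store), knowing nothing about
 * HTTP itself.  All names here are illustrative only. */

typedef void (*sink_fn)(void *ctx, const char *buf, size_t len);

typedef struct {
    sink_fn client_sink;  /* delivers entity data to the client side */
    void   *client_ctx;
    sink_fn store_sink;   /* persists the same bytes in the store */
    void   *store_ctx;
} tee_t;

/* On a cache miss, every chunk arriving from the server passes through
 * here and is handed to both sinks unchanged. */
static void tee_append(tee_t *t, const char *buf, size_t len)
{
    if (t->client_sink)
        t->client_sink(t->client_ctx, buf, len);
    if (t->store_sink)
        t->store_sink(t->store_ctx, buf, len);
}
```

On a cache hit the same layer would simply wire the store's output to the
client sink and leave the store sink unset.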
/Henrik
Robert Collins wrote:
>
> ----- Original Message -----
> From: "Adrian Chadd" <adrian@creative.net.au>
> To: "Robert Collins" <robert.collins@itdomain.com.au>
> Cc: <squid-dev@squid-cache.org>
> Sent: Wednesday, February 14, 2001 9:24 AM
> Subject: Re: some new branches
>
> > On Wed, Feb 14, 2001, Robert Collins wrote:
> >
> <snip background stuff>
> > > Rob
> > >
> >
> > Uhm, perhaps. :-) The way I see it - i'd like it to be the store's
> > responsibility to parse say a chunked request or a range request
> > and spit out the correct reply back. This would mean that the
> > storeClientCopy() call becomes the "other side" of storeAppend().
>
> Yes, nearly what I had in mind. My idea was to ask the store for the exact byte ranges needed (via a parsed request_t ?) and then
> let the store make as many discrete upstream requests as needed. The client_side callback would only get called when the 1st byte
> range was ready, or an error condition occurred. (And it would be passed entity data only, not the range response with all its
> muck.)
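[A sketch of the interface described above, for illustration: the client
hands the store the exact byte ranges it needs plus a callback that fires
when the first range is ready or on error, receiving entity bytes only.
All names (byte_range_t, range_request_t, storeClientCopyRanges) are
hypothetical, not actual Squid APIs.]

```c
#include <stddef.h>

/* Hypothetical sketch: the client side asks for exact byte ranges and
 * supplies a callback; the store fires it with entity data only once
 * the first range is available, or with an error flag. */

typedef struct {
    size_t offset;
    size_t length;
} byte_range_t;

typedef void (*range_cb)(void *ctx, const char *entity_bytes,
                         size_t len, int error);

typedef struct {
    const byte_range_t *ranges;    /* exact ranges the client needs */
    size_t              nranges;
    range_cb            callback;
    void               *cb_ctx;
} range_request_t;

/* A real store would hold this request, issue whatever upstream fetches
 * are needed, and fire the callback as soon as ranges[0] is complete.
 * Here the object is assumed to be fully present already. */
static void storeClientCopyRanges(const range_request_t *req,
                                  const char *object, size_t object_len)
{
    const byte_range_t *r;
    if (req->nranges == 0) {
        req->callback(req->cb_ctx, NULL, 0, 1);   /* error condition */
        return;
    }
    r = &req->ranges[0];
    if (r->offset + r->length > object_len) {
        req->callback(req->cb_ctx, NULL, 0, 1);   /* error condition */
        return;
    }
    /* Entity data only: no Content-Range headers, no multipart muck. */
    req->callback(req->cb_ctx, object + r->offset, r->length, 0);
}
```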
>
> > This would mean that the existing range request handling code in
> > the client/server http modules would need ripping out and the
> > request was just passed straight through to storeLookup(), and
> > storeLookup() handles all the tricky stuff.
>
> I think that the server side needs to break open the range response, and feed offset data back into the store. This means that other
> server side protocols such as ftp can do ranges too in the future. So storeAppend gets an offset parameter, and in the store it can
> check to see where the data received actually goes.
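[A minimal sketch of what storeAppend with an offset parameter might look
like, assuming a toy fixed-size object; the names storeAppendOffset and
storeHasRange, and the validity bitmap, are illustrative only and not how
the real storage manager is laid out.]

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: the server side parses a range response and
 * feeds each part back with its absolute entity offset, so the store
 * can place the bytes where they belong regardless of arrival order. */

#define OBJ_MAX 4096

typedef struct {
    char data[OBJ_MAX];
    char valid[OBJ_MAX];     /* 1 where a byte has been filled in */
} store_entry_t;

static void storeAppendOffset(store_entry_t *e, size_t offset,
                              const char *buf, size_t len)
{
    if (offset + len > OBJ_MAX)
        return;              /* a real store would grow the object */
    memcpy(e->data + offset, buf, len);
    memset(e->valid + offset, 1, len);
}

/* Check whether the byte range [off, off+len) is fully cached, i.e.
 * whether a client range request can be answered without going
 * upstream. */
static int storeHasRange(const store_entry_t *e, size_t off, size_t len)
{
    size_t i;
    for (i = off; i < off + len; i++)
        if (!e->valid[i])
            return 0;
    return 1;
}
```

With this shape, ftp or any other server-side protocol that can fetch at
an offset could feed the same store, which is the point Robert makes
above.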
>
> > You then end up with a very simple request-reply flow, where
> > you have something in the middle (the storage manager, filters, etc)
> > which can change the request and do the tricky stuff.
>
> Exactly.
>
> > I've just hit a bit of a pinch in my time management, so I don't
> > mind if either of you want to go ahead and continue tidying up
> > the http code in modio. It'll make my job easier down the track.
> > :-)
>
> Umm. Love to, really. I'm not that flush with time myself, but I'll try and have a look see.
>
> The change I'm interested in getting stable, as a first step toward the solution, is getting the range response parsing code into
> http.c and the data flowing back through the store with this mythical offset. When that's done, the content processing framework
> will be mostly complete except for a few boundary cases. I am happy to do that work in modio and bring it from there to
> content_processing, but I'm a blind man when it comes to the store manager at the moment... guidance will be needed.
>
> Rob
Received on Wed Feb 14 2001 - 02:33:32 MST