----- Original Message -----
From: "Adrian Chadd" <adrian@creative.net.au>
To: "Robert Collins" <robert.collins@itdomain.com.au>
Cc: <squid-dev@squid-cache.org>
Sent: Saturday, February 03, 2001 12:44 AM
Subject: Re: storeAppend
> On Sat, Feb 03, 2001, Robert Collins wrote:
> >
> Ok, a chunk here is a part of an object, not a "te chunk".
> Right.
Oops. Forgot to mention that little terminology thing :]
> > Now, the store doesn't know when chunk m is reached until after the fact
> > (storeComplete gets called), so client_side *cannot* set that flag.
>
> Why does client_side need to set that flag?
Content processing filters may have buffered data that needs to be flushed. Using an input buffer of NULL, len 0 is very
inefficient (*).
> > What about checking content-length you say? Well transfer encoding for one, and potentially other filters, will invalidate that.
>
> Yup, I've already seen the content-length as being a problem.
Ranges are as well. We probably need to treat ranges like transfer encoding: un-range in http.c, and re-encode as appropriate for
the client in client_side. We'll _have_ to do this for range combining anyway. This then implies that data-passing functions need to
send (buf, len, body_offset), with body_offset of -1 meaning headers only...
So what I'd ideally like to do is call storeAppend(entry, buf, len, offset, flags) and have the store create a new blob if the
offsets aren't contiguous. For headers, either a special-case test (like offset = -1 as I described above) or a new function
storeAppendHeaders(), which will update &| replace the headers for the entry...
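To make that concrete, here's roughly the shape I'm picturing. StoreEntry and
storeAppend() are the existing names; the flags argument, the helpers and the
blob bookkeeping are all invented for the example:

#define STORE_HDR_OFFSET ((off_t) -1)  /* offset == -1: headers only */

void
storeAppendOffset(StoreEntry * e, const char *buf, int len,
    off_t offset, int flags)
{
    /* flags unused in this sketch */
    if (offset == STORE_HDR_OFFSET) {
        storeAppendHeaders(e, buf, len);	/* invented helper */
        return;
    }
    if (offset != e->current_blob_end)	/* invented field */
        storeStartBlob(e, offset);	/* invented: open a new blob here */
    storeAppend(e, buf, len);	/* the existing call */
    e->current_blob_end = offset + len;
}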
> > What about catching it in clientWriteComplete? Well yes, I can do that and it does work, but it pushes everything through
> > another loop of code for little reason. So I'm looking for ways of improving it.
> >
>
> I admit that I didn't pay much attention to the thread. I've since
> reread the thread, and I'm still partially confused.
see (*)
> how do you then deal with errors where something calls storeAbort() ?
> (for example)
I'm considering this. Currently filters don't call storeAbort(), on pain of SIGSEGV. The client_side can abort requests happily;
the filters get their instances removed, and the chance to clean up tidily. If a filter wants to abort, I think it either a) adds
FILTER_ABORT to the flags, and the terminating filter takes care of it, or b) returns an unsigned int with a set of
backwards-propagating flags.
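Option (b) might look roughly like this (a sketch only; every name here is
invented to show the shape, none of it is existing code):

#define FILTER_OK    0x0u
#define FILTER_ABORT 0x1u	/* a downstream filter wants out */

struct filter {
    unsigned int (*call) (struct filter * self, const char *buf, int len,
	unsigned int flags);
    struct filter *next;
};

static unsigned int
example_filter(struct filter *self, const char *buf, int len,
    unsigned int flags)
{
    unsigned int rv = FILTER_OK;
    /* ... transform buf into this filter's output here ... */
    if (self->next)
	rv = self->next->call(self->next, buf, len, flags);
    if (rv & FILTER_ABORT) {
	/* clean up this instance tidily before handing the
	 * abort back up toward client_side */
    }
    return rv;			/* flags propagate backwards up the chain */
}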
In a nutshell, content processing filters have some restrictions on what they can do. They do make RFC 2616 violations dead easy,
which isn't a great thing, but as much as possible the framework should make RFC 2616 violations deliberate actions rather than
mistakes. Or at least that's my goal.
> I dunno. I think you might actually benefit from sitting this thing behind
> modio and helping me use it as Yet Another Thing To Redesign The
> Storage Interface Again(tm).
Sure - I'm happy to make the branch track modio. It'll still need to track te, and when I have enough module code together I'll be
creating generic_modules which it will also track :-].
> Adrian
>
> --
> Adrian Chadd "Romance novel?"
> <adrian@creative.net.au> "Girl Porn."
> - http://www.sinfest.net/d/20010202.html
>
*: consider two filters in a row, both with buffers:
the first flushes its buffer when it gets called with (NULL, 0, ..) rather than (buf, len, ..).
It does this by calling the second filter with (local buffer, len, ..).
The second then treats this as business as usual, and so on down the chain.
The first filter then has to call second_filter(NULL, 0, ..).
The second filter now realises it's EOF time, and flushes its own buffer.
Now if the filters can signal EOF directly, the first filter can tell the second filter
(local buffer, len, EOF, ..), skipping the second iteration through the chain.
i.e. the number of function calls is either n (the number of filters) or sum(1 .. n).
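For the curious, here's roughly what the EOF flag buys, as a sketch (all names
invented for illustration, this isn't real filter code):

/*
 * Without an EOF flag, a filter that receives (NULL, 0) must make two
 * calls downstream (flush, then NULL/0), so filter k gets entered k
 * times: sum(1 .. n) calls in total. With the flag, each filter
 * flushes and forwards EOF in a single call: n calls total.
 */
#define FILTER_EOF 0x1u

struct filter {
    void (*call) (struct filter * self, const char *buf, int len,
	unsigned int flags);
    struct filter *next;
    char buffered[4096];	/* data this filter is holding back */
    int buflen;
};

static void
buffering_filter(struct filter *self, const char *buf, int len,
    unsigned int flags)
{
    /* ... transform buf, appending output into self->buffered ... */
    if (self->next)
	self->next->call(self->next, self->buffered, self->buflen, flags);
    self->buflen = 0;
    /* on FILTER_EOF the downstream filter flushes too -- no second pass */
}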