> > Another thing: I think that COSS might be a wonderful test of
> > Linux's O_DIRECT open flag. Only caveat, you have to read in PAGE_SIZE
> > blocks, at PAGE_SIZE boundaries. But it could increase performance, at
> > least this is what Andrea Arcangeli promises, since it will use zero-copy
> > DMA from disk to userspace (IIRC) and bypass all kernel caches.
>
> Yup. The BSDs let you get at the raw char devices too which can
> implement this kind of thing. FreeBSD can do this kind of tricks
> with files and page-aligned RAM.
BTW: that's n*PAGE_SIZE IIRC.
> > > * finish up async_io.c (add open/close wrappers which handle flushing
> > >   pending IO, just in case)
> > > * Fix up the read code to read back the entire object in one
> > > system read,
> > > rather than 4k chunks
> >
> > Uhm.. I'm not very convinced of this. It can increase memory pressure
> > considerably. Maybe you could make it 100k chunks or something, but
> > I'd prefer to keep the chunking.
>
> OH, perhaps I didn't explain it properly. Instead of the higher
> layer giving storeRead() a buffer to copy into, the storeRead()
> callback returns a pointer to a cbdata buffer containing the read
> results. The buffer should be variable size, but yeah there'll be
> an upper limit of say 128k rather than 4k. For large objects
> in a ufs-style FS, the data may come back in chunks since the object
> is big, but if the object is a small image it'll come back
> in one chunk.
Ok. I second this then.
It would be best if there was some kind of pressure on the chunk size:
when RAM becomes scarce, decrease the chunking. If we go out of core,
performance is going to be horrible anyway.
> There's some magic with the SM_PAGE_SIZE size in the code, and there
> was even before I got to it. The CLIENT_MEM_SOCK_BUF is also 4096
> bytes, and when I tried upping it a while back things fell over.
We can keep it as a basic block, shouldn't be a problem to temporarily waste
a few k's.
> > > - This involves changing storeRead() to return a
> > > cbdataFree'able buf,
> > > rather than taking a buffer to copy into
> > > * Up the COSS FS size to be >2gb
> > > * Look at adding the swaplog stuff to the beginning of each stripe
> > >
> > Can't we just use two files? That way we can fsync() only
> > metadata or something.
>
> A separate metadata file could work.
>
> > > So, I'll leave the last step until the rest of them are done, and then
> > > work on it. That might require the most squid internal reworking.
> > >
> > > I'm getting there. :-)
> > >
> > > So, here's a question: should we put the slab allocator into
> > > squid-HEAD
> > > now?
> >
> > Uhm... I'm supposed to deploy 2.5 today. Can't you wait a sec before
> > breaking
> > things?
> > Pretty please?
>
> Tsk. I don't like how people are using squid-HEAD as production code.
Worse than that. I'm using the sourceforge ntlm-tag (which is better than
what I'm running now in production, a 2.4-DEVEL3-NTLM-worse-than-hell).
Unfortunately it can't be avoided. I _HAVE_TO_ properly implement NTLM
authentication NOW.
On the plus side, the Aug 10 ntlm branch has faithfully served over 5.5
million hits over 10 days (before crashing, but I try not to think about
that).
> This is really starting to irk me somewhat. I'd suggest
> either creating
> a 2.5 branch for you guys to keep doing NTLM stuff on, or force you
> two to MFC the NTLM and auth code back to squid-2.4.
No way to do that. The auth changes are way too wide.
> I note that more and more people on squid-users are using squid-2.5
> in production, and its rather scary for the developers. Well, at
> least me. :-)
It is scary, and I am scared (but somewhat happy, because this means that we
get more meaningful bug reports) :)
Maybe it really is time to freeze 2.5 and short-schedule COSS and aio and
similar for a 2.6 to follow 2.5 by a couple of months?
-- /kinkie

Received on Mon Aug 20 2001 - 18:27:21 MDT
This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:14:13 MST