I'm trying to set up a Squid gateway in front of my mod_perl server. Each of
my httpd processes is 20MB, with only 10MB shared, so I'm keen to minimise
the amount of time they spend just transferring data. I want Squid to work
in HTTP accelerator mode: get the entire request from the user (who is often
only connected via modem), pass the whole request on to my httpd process via
a fast connection, get the entire result back via the fast connection, and
leave the httpd process free to handle the next request.
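For reference, here's the sort of accelerator setup I have in mind -- only a
sketch, assuming Squid 2.x with Apache/mod_perl moved to port 8080 on the
same box (the host and port are placeholders, and directive names vary
between Squid versions):

  # squid.conf: HTTP accelerator in front of a single mod_perl backend
  http_port 80                     # Squid takes over the public port
  httpd_accel_host 127.0.0.1       # forward everything to the backend...
  httpd_accel_port 8080            # ...Apache now listening on 8080
  httpd_accel_single_host on       # one origin server for all requests
  httpd_accel_with_proxy off       # pure accelerator, not a general proxy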
I've got two worries about using Squid for this:
- It looks like Squid buffers the data it sends back to the client in 16KB
chunks (by default). Can I make Squid collect the entire response before it
starts sending anything to the client? And does the buffering/chunking mean
that if the client can't accept data as fast as the backend produces it,
Squid stops reading from my httpd process until the client is ready for
more? (I've sketched a slow-client test for this after the list below.)
- How are large POSTs handled? My users often upload large files through a
POST request (I run a webmail system). Does Squid wait for the whole request
body to arrive before passing it on to the HTTP server (which would avoid
making my big httpd process sit and wait), or does it chunk or stream the
POST data to the HTTP server as it arrives? (A matching slow-POST test
follows the first sketch below.)
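To test the first worry empirically, I'm planning something like the
following slow-client script -- just a sketch, and the host, port, and
/bigpage URL are placeholders for my real setup:

  #!/usr/bin/perl -w
  # Slow-client test: fetch a large page through Squid while reading
  # only ~1KB/sec, then watch the Apache logs to see whether Squid has
  # already taken the whole response off the backend httpd process.
  use strict;
  use IO::Socket::INET;

  my $sock = IO::Socket::INET->new(
      PeerAddr => 'localhost',   # assumption: Squid listens here
      PeerPort => 80,
      Proto    => 'tcp',
  ) or die "connect: $!";

  print $sock "GET /bigpage HTTP/1.0\r\nHost: localhost\r\n\r\n";

  my $total = 0;
  while (my $n = sysread($sock, my $buf, 1024)) {
      $total += $n;              # took another modem-sized chunk...
      sleep 1;                   # ...then dawdle like a slow modem
  }
  print "read $total bytes\n";

If the httpd process shows up as idle in Apache's server-status while this
script is still dribbling data out of Squid, the buffering is doing what I
want.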
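And for the POST question, the mirror image: trickle a request body in
slowly and watch when the backend first sees the request. Again just a
sketch with placeholder names:

  #!/usr/bin/perl -w
  # Slow-POST test: upload a fake 100KB body through Squid at ~1KB/sec.
  # If an httpd process logs the request (or goes busy) before the
  # upload finishes, Squid is streaming the POST rather than spooling it.
  use strict;
  use IO::Socket::INET;

  my $size = 100 * 1024;         # pretend this is a 100KB attachment
  my $sock = IO::Socket::INET->new(
      PeerAddr => 'localhost',   # assumption: Squid listens here
      PeerPort => 80,
      Proto    => 'tcp',
  ) or die "connect: $!";

  print $sock "POST /upload HTTP/1.0\r\n",
              "Host: localhost\r\n",
              "Content-Type: application/octet-stream\r\n",
              "Content-Length: $size\r\n\r\n";

  for (my $sent = 0; $sent < $size; $sent += 1024) {
      print $sock 'x' x 1024;    # one modem-sized chunk...
      sleep 1;                   # ...per second
  }
  print while <$sock>;           # dump whatever the server replies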
Any additional advice on setting up this kind of gateway would be much
appreciated.
-- Jeremy Howard NOjhoward(at)fastmail(dot)fmSPAM