Hi, Adrian:
->Hm!
->
->What, it aborted even though others were downloading the file?
Yes. Large files are hard for squid to fetch and cache completely when
they are accessed simultaneously by many clients using multi-threaded
download tools.

For example, suppose the first client requests a cache-miss file (say 2
Megabytes) with two threads: the first thread requests the range
0-1 Megabyte, and the second thread requests the range 1-2 Megabytes.
If the first request is aborted before it has received more than
1 Megabyte, we often find that squid's back-side connection stops at the
1 Megabyte point, so the whole file can never be fetched and cached
normally.
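To make the scenario concrete, the two front-side requests look roughly
like this (the URL and byte offsets are only illustrative):

    Thread 1:  GET /big.iso HTTP/1.1
               Range: bytes=0-1048575

    Thread 2:  GET /big.iso HTTP/1.1
               Range: bytes=1048576-2097151

With range_offset_limit set to -1, the back-side request carries no
Range header at all, so it is this single whole-object fetch that we see
stall near the 1 Megabyte boundary when thread 1 aborts.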
Our purpose:
1. squid can cache large files normally.
2. squid can serve multi-threaded range requests once a large file is
   cached (squid responds with TCP_HIT/206 once the file is cached).
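For illustration (again with a hypothetical URL and offsets), the
desired behaviour once the object is cached is that each thread's range
request is answered straight from the cache:

    GET /big.iso HTTP/1.1
    Range: bytes=1048576-2097151

    HTTP/1.1 206 Partial Content
    Content-Range: bytes 1048576-2097151/2097152

and logged as TCP_HIT/206 rather than a miss.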
Our configuration:

    range_offset_limit -1 KB   # support byte ranges at the front side, and
                               # request without a Range header at the back side
    quick_abort_min -1 KB      # keep fetching the whole file, no matter when
                               # clients abort
Adrian, please help take a look. Thank you in advance.
Adam
->
->->
->->On Tue, Sep 05, 2006, adam.cheng wrote:
->->> Hi, Adrian
->->>
->->> We have a similar implementation to YouTube. But when we deploy
->->> squid for large files, we found quick_abort does not work well. The
->->> problem looks like this:
->->>
->->> On a cache miss, it is very hard for squid to fetch and cache the
->->> whole large file if many multi-threaded download tools, such as
->->> FlashGet or Net Transport, are accessing the file at the same time.
->->> Observing with a sniffer, we found that the back-side connection
->->> often stops at the point where the first thread stopped.
->->
->->Hm!
->->
->->What, it aborted even though others were downloading the file?
->->
->->
->->
->->
->->
->->Adrian
->->