Hello all,
Has anyone here come across problems when using Squid to cache larger
files? I know about the upper limit on object size in Squid 2.x, and I
have maximum_object_size set to 1048576 KB (1 GB).
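In squid.conf terms, that's a line of this form (the directive and
value are as I described; I'm just spelling out the syntax):

    maximum_object_size 1048576 KB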
So the object-size limit shouldn't be a concern. I'm coming across two
problems:
1) When I try to download larger files, from 100M to 700M or so, the
download fails a fair amount of the time. The weird thing is that the
download manager claims the download succeeded. For example, say I'm
downloading a 600M CD image. Part-way through the download, the download
manager tells me the file is done. When I look at the file, I see that
it's not: it might be 20 megs, it might be 500 megs, but it's not the
full 600 megs it should be. It doesn't matter what client I use, and it
only happens when I go through the proxy (if I go directly, the download
completes correctly).
2) On the occasions when a larger file (again, 600M or so) does
succeed, the object is cached, and I'm able to download the file again
from the cache rather than from the origin site. Shortly afterwards,
however, the object is overwritten by something else. I am using the
ufs cache type, and it's been given 10 gigs of space, plenty for the
tests I'm running. I've also upped the first- and second-level
subdirectories (the L1 and L2 values) to 256 and 256, so there should
be plenty of object "placeholders" available. In fact, when I look in
the cache directories, only the first few subdirectories within the
first 00 directory are being used.
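To be concrete, my cache_dir line has this shape (the path here is just
a stand-in, not my actual cache directory):

    cache_dir ufs /var/spool/squid 10240 256 256

That's 10240 MB (10 gigs) with 256 first-level and 256 second-level
subdirectories.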
I'm guessing that I have something configured wrong, but I can't see
what. Has anyone here had these problems? If so, could you point me to
what might be causing them?
I'm using Squid 2.5.
Thanks for any help.
--Brennon