Goal: maximise byte hit rate to alleviate a bandwidth problem across a limited
connection (until it is upgraded, or the systems are relocated).
I have Squid configured as a reverse proxy, and the fact that I now have some
free bandwidth shows it is doing something useful - much thanks!
However, revisiting my bandwidth statistics surprised me: I'm still shipping a
lot more duplicated content over our connection to the Squid proxy than I
expected. The biggest offender is a WMV file (it was the biggest offender at
the start of this week, when I realised I hadn't set the maximum object size
high enough for it to be cached - and it is still the biggest offender).
For various reasons we have a number of multimedia files on this end of the
connection, all large, and all with no explicit expiry information (which I
can adjust if it helps).
What I am hoping is that I can persuade Squid to do a "TCP_REFRESH_HIT" and
burn 350-odd bytes instead of 8 MB when serving our most popular WMV file
across this connection, and likewise for the other media files it has cached.
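Since none of these files carry expiry headers, my understanding is that a
refresh_pattern in squid.conf can give them a generous heuristic freshness, so
Squid can answer with a HIT or a cheap revalidation instead of refetching the
whole object. Something like the lines below is what I was planning to try -
the extensions and durations (in minutes) are just my guesses, not tested
advice:

  # treat large media as fresh for up to 30 days (43200 minutes)
  refresh_pattern -i \.(wmv|avi|mpe?g|mp3)$ 10080 90% 43200
  # the stock default pattern stays last
  refresh_pattern . 0 20% 4320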
Squid has 420 MB of RAM and 17 GB of cache (now fully populated).
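For context, that corresponds to directives roughly like these in squid.conf -
the path and the cache_mem value below are placeholders rather than my exact
settings:

  cache_mem 256 MB
  cache_dir aufs /var/spool/squid 17000 16 256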
On Tuesday, before the cache was full, it was behaving as I expected - I
explicitly tested the top WMV file after spotting the object-size mistake. Now
it only intermittently does what I want. I assume this is simply because the
cache is full and Squid is choosing to drop this object from the cache.
I have seen advice to try "heap LFUDA" as the cache replacement policy for
maximising byte hit rate - which I will try.
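If I've read the docs correctly, that amounts to the lines below; I gather the
replacement policy has to be set before the cache_dir line to take effect, and
that Squid must have been built with the heap policies enabled, so treat this
as my untested reading rather than known-good config:

  # disk policy tuned for byte hit rate, memory policy for request hit rate
  cache_replacement_policy heap LFUDA
  memory_replacement_policy heap GDSF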
However, are there other likely "gotchas" when handling larger files?
Are there other levers I can twiddle to persuade Squid to hang onto them?
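For what it's worth, the other large-object knobs I have spotted in squid.conf
so far are below; the values are only illustrative and I'd welcome corrections
if any of them are wrong or irrelevant here:

  # ceiling on cacheable object size (already raised in my case)
  maximum_object_size 512 MB
  # keep big files on disk rather than in the memory cache
  maximum_object_size_in_memory 64 KB
  # fetch the whole object even when a client issues a range request
  range_offset_limit -1
  # keep fetching after a client aborts, so the object still gets cached
  quick_abort_min -1 KB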
I'm seeing HITs or REFRESHes about 50% of the time; the other 50% are straight
TCP_MISS for my worst WMV file. The bandwidth figures suggest similar results
for the other files.
Received on Thu Aug 28 2008 - 17:01:12 MDT