Well, it's a bit hard to say that Squid is not caching anything based on
this alone.
It depends on what you expect, allow, and configure it to cache.
Since Squid uses a Store-ID to identify content, you may think that two
requests for the same "file", so to speak, will be treated as the same
object, but in many cases they are not, because the user or the client
adds a couple of extra parameters to the URL; see the example just below.
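For example (these URLs are made up just for illustration), the
following two requests may point to the exact same file, but by default
they get different Store-IDs because the query strings differ, so the
second one will not be served from the cached copy of the first:

   http://cdn.example.com/videos/clip.mp4?session=abc123
   http://cdn.example.com/videos/clip.mp4?session=def456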
There is also the possibility that you haven't configured Squid to cache
large files.
Start by making the cache_dir larger than 100 MB (unless that is already
enough for you), and also take a look at, or change:
http://www.squid-cache.org/Doc/config/maximum_object_size/
and the other object-size related config directives that you can find here:
http://www.squid-cache.org/Doc/config/
A small sketch of such a setup follows below.
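Just as a rough sketch (the path and sizes here are only examples,
adjust them to your environment), something like this in squid.conf
would give you a 2000 MB disk cache and allow objects of up to 512 MB
to be stored:

   # allow larger objects to be cached (the default limit is only a few MB)
   maximum_object_size 512 MB

   # 2000 MB disk cache under /var/spool/squid,
   # with 16 first-level and 256 second-level directories
   cache_dir ufs /var/spool/squid 2000 16 256

On some Squid versions the order matters, so it is safer to put
maximum_object_size before the cache_dir line.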
Note that it is possible that you do not yet appreciate the full
complexity of caching; if you want, I can point you to a couple of
articles or config directives that will help you understand more about it.
I have seen a nice site you can use to check that caching is working well:
http://www.djmaza.info/
I do not know of many examples of nice, cacheable websites these days,
due to the prevalence of dynamic content and SSL-encrypted sites out
there on the web.
If you have a specific site you want to analyze, you can try redbot at:
https://redbot.org/
You can enter the URL and the interface will tell you whether the
site/URL is cacheable.
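Basically it fetches the URL and inspects the response headers. If you
want a quick manual check you can do something similar with curl (the
URL here is just a placeholder) and look at headers such as
Cache-Control and Expires:

   curl -sI http://www.example.com/logo.png

A response with Cache-Control: no-store or Cache-Control: private will
generally not be stored by a shared cache like Squid, while something
like Cache-Control: max-age=86400 is a good sign that it can be cached.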
Eliezer
On 06/29/2014 09:54 AM, liam_at_kzz.se wrote:
> I have removed all comments from my config file:
> http://pastebin.com/kqvNszyp
>
> And here is a short excerpt from my access.log - don't think it will be too
> helpful though. I have removed some IP addresses and URLs.
>
> http://pastebin.com/bZiZ3tUN
>
> Note that I do get some TCP_MEM_HIT/200 sometimes.
>
> I have tried using Firefox 29, Internet Explorer 11 and the latest version
> of Chrome for Debian 7 stable.