Amos Jeffries wrote:
> Tom Williams wrote:
>> Amos Jeffries wrote:
>>>> So, I setup my first Squid 3.0STABLE9 proxy in HTTP accelerator mode
>>>> over the weekend. Squid 3 is running on the same machine as the web
>>>> server and here are my HTTP acceleration related config options:
>>>>
>>>> http_port 80 accel vhost
>>>> cache_peer 192.168.1.19 parent 8085 0 no-query originserver login=PASS
>>>>
>>>>
>>>> Here are the cache related options:
>>>>
>>>> cache_mem 64 MB
>>>> maximum_object_size_in_memory 50 KB
>>>> cache_replacement_policy heap LFUDA
>>>> cache_dir aufs /mnt/drive3/squid-cache 500 32 256
>>>>
>>>> As described in this mailing list thread:
>>>>
>>>> http://www2.gr.squid-cache.org/mail-archive/squid-users/199906/0756.html
>>>>
>>>>
>>>> all of the entries in my store.log have RELEASE as the action:
>>>>
>>>> 1223864638.986 RELEASE -1 FFFFFFFF A1FE29E96A44936155BB873BDC882B12 200 1223864638 -1 375007920 text/html 2197/2197 GET http://aaa.bbb.ccc.ddd/locations/
>>>>
>>>> Here is a snippet from the cache.log file:
>>>>
>>>> 2008/10/12 21:23:36| Done reading /mnt/drive3/squid-cache swaplog (0 entries)
>>>> 2008/10/12 21:23:36| Finished rebuilding storage from disk.
>>>> 2008/10/12 21:23:36| 0 Entries scanned
>>>> 2008/10/12 21:23:36| 0 Invalid entries.
>>>> 2008/10/12 21:23:36| 0 With invalid flags.
>>>> 2008/10/12 21:23:36| 0 Objects loaded.
>>>> 2008/10/12 21:23:36| 0 Objects expired.
>>>> 2008/10/12 21:23:36| 0 Objects cancelled.
>>>> 2008/10/12 21:23:36| 0 Duplicate URLs purged.
>>>> 2008/10/12 21:23:36| 0 Swapfile clashes avoided.
>>>> 2008/10/12 21:23:36| Took 0.01 seconds ( 0.00 objects/sec).
>>>> 2008/10/12 21:23:36| Beginning Validation Procedure
>>>> 2008/10/12 21:23:36| Completed Validation Procedure
>>>> 2008/10/12 21:23:36| Validated 25 Entries
>>>> 2008/10/12 21:23:36| store_swap_size = 0
>>>> 2008/10/12 21:23:37| storeLateRelease: released 0 objects
>>>> 2008/10/12 21:24:07| Preparing for shutdown after 2 requests
>>>> 2008/10/12 21:24:07| Waiting 30 seconds for active connections to finish
>>>> 2008/10/12 21:24:07| FD 14 Closing HTTP connection
>>>> 2008/10/12 21:24:38| Shutting down...
>>>> 2008/10/12 21:24:38| FD 15 Closing ICP connection
>>>> 2008/10/12 21:24:38| aioSync: flushing pending I/O operations
>>>> 2008/10/12 21:24:38| aioSync: done
>>>> 2008/10/12 21:24:38| Closing unlinkd pipe on FD 12
>>>> 2008/10/12 21:24:38| storeDirWriteCleanLogs: Starting...
>>>> 2008/10/12 21:24:38| Finished. Wrote 0 entries.
>>>> 2008/10/12 21:24:38| Took 0.00 seconds ( 0.00 entries/sec).
>>>> CPU Usage: 0.041 seconds = 0.031 user + 0.010 sys
>>>> Maximum Resident Size: 0 KB
>>>> Page faults with physical i/o: 0
>>>> Memory usage for squid via mallinfo():
>>>> total space in arena: 3644 KB
>>>> Ordinary blocks: 3511 KB 8 blks
>>>> Small blocks: 0 KB 1 blks
>>>> Holding blocks: 1784 KB 9 blks
>>>> Free Small blocks: 0 KB
>>>> Free Ordinary blocks: 132 KB
>>>> Total in use: 5295 KB 145%
>>>> Total free: 132 KB 4%
>>>> 2008/10/12 21:24:38| aioSync: flushing pending I/O operations
>>>> 2008/10/12 21:24:38| aioSync: done
>>>> 2008/10/12 21:24:38| aioSync: flushing pending I/O operations
>>>> 2008/10/12 21:24:38| aioSync: done
>>>> 2008/10/12 21:24:38| Squid Cache (Version 3.0.STABLE9): Exiting normally.
>>>>
>>>> I'm running on Red Hat EL 5. With Squid running, I can access the
>>>> website just fine and pages load without problems or issues. It's
>>>> just that nothing is being cached.
>>>>
>>>> This is my first time configuring Squid as an HTTP accelerator, so I
>>>> probably missed something when I set it up. Any ideas on what might
>>>> be wrong?
>>>>
>>>> Thanks in advance for your time and assistance! :)
>>>>
>>>>
>>>
>>> Q) Do you have any of the routing access controls (http_access,
>>> never_direct, cache_peer_access, cache_peer_domain) configured so that
>>> Squid passes the accelerated requests back to the web server properly?
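>>>
>>> For illustration only: with a hypothetical name=myAccel label added
>>> to your existing cache_peer line, explicit peer routing would look
>>> roughly like:
>>>
>>>   cache_peer 192.168.1.19 parent 8085 0 no-query originserver name=myAccel login=PASS
>>>   # send matching requests to the origin server peer
>>>   cache_peer_access myAccel allow all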
>>>
>>
>> I have the default http_access rules, except with "http_access allow
>> all" added at the end:
>
> Ouch. You have a semi-open proxy.
> If anyone identifies your public IP they can point a domain's DNS at
> your IP and have it accelerated. Or they can even configure your IP
> and port 80 as their browser proxy and browse through it. A firewall
> or NAT layer cannot prevent this from happening.
>
> You should at the very least be limiting requests to the domains you
> are serving.
>
> I prefer a config like the one listed:
> http://wiki.squid-cache.org/SquidFaq/ReverseProxy#head-7fa129a6528d9a5c914f8dd5671668173e39e341
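>
> The gist of that FAQ entry, as a sketch (www.example.com stands in
> for your real domain, and "myAccel" is just an arbitrary peer label):
>
>   http_port 80 accel defaultsite=www.example.com vhost
>   cache_peer 192.168.1.19 parent 8085 0 no-query originserver name=myAccel login=PASS
>   # only accelerate the sites we actually serve
>   acl our_sites dstdomain www.example.com
>   http_access allow our_sites
>   cache_peer_access myAccel allow our_sites
>   cache_peer_access myAccel deny all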
>
Thanks again for the tip. I've set up a configuration similar to the
one described in that article and it's working well. :) I still need
to set up the HTTP headers accordingly, but at least now I've got a
better test bed. :)
>
>>
>> #Recommended minimum configuration:
>> #
>> # Only allow cachemgr access from localhost
>> http_access allow manager localhost
>> http_access deny manager
>> # Deny requests to unknown ports
>> http_access deny !Safe_ports
>> # Deny CONNECT to other than SSL ports
>> http_access deny CONNECT !SSL_ports
>> #
>> # We strongly recommend the following be uncommented to protect innocent
>> # web applications running on the proxy server who think the only
>> # one who can access services on "localhost" is a local user
>> http_access deny to_localhost
>> #
>> # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
>>
>> # Example rule allowing access from your local networks.
>> # Adapt localnet in the ACL section to list your (internal) IP networks
>> # from where browsing should be allowed
>> http_access allow localnet
>>
>> # And finally deny all other access to this proxy
>> #http_access deny all
>> http_access allow all
>>
>> I did this mainly to get things working and I plan on refining these
>> options. I do not have never_direct, cache_peer_access,
>> cache_peer_domain explicitly set in the config file.
>>> DNS should be pointing the domain at Squid so its DIRECT access lookups
>>> will normally loop back inwards and fail. The resulting error pages may
>>> not be cachable.
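>>>
>>> A belt-and-braces guard against that loop is to forbid DIRECT fetches
>>> altogether, forcing every request through the origin peer, e.g.:
>>>
>>>   # never try to fetch directly; always use the cache_peer
>>>   never_direct allow all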
>>>
>>
>> I'll be sure to keep this in mind. For now, I'm using the server IP
>> address to access it.
>>> Q) What does your access.log say about the requests?
>>>
>>
>> I'm using Apache 2.2.3 as the web server and its logs do show I'm
>> accessing the pages without problems:
>>
>> 1.1.1.1 - tom [13/Oct/2008:10:15:39 -0500] "GET /topics/ HTTP/1.1"
>> 200 2316 "http://aaa.bbb.ccc.ddd/locations/" "Mozilla/5.0 (X11; U;
>> Linux x86_64; en-US; rv:1.9.0.3) Gecko/2008092510 Ubuntu/8.04 (hardy)
>> Firefox/3.0.3"
>>
>> as an example. The pages load fine in my browser.
>>
>>> Q) Are the test pages you are requesting cachable?
>>> This tester should tell you what gets cached and for how long:
>>> http://www.ircache.net/cgi-bin/cacheability.py
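>>>
>>> If the pages come back without explicit Expires/Cache-Control headers,
>>> Squid falls back on refresh_pattern heuristics. A conventional sketch
>>> (min minutes, percent of object age, max minutes) would be:
>>>
>>>   # heuristic freshness for static files, then a catch-all default
>>>   refresh_pattern -i \.(gif|jpg|png|css|js)$ 1440 20% 10080
>>>   refresh_pattern . 0 20% 4320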
>>>
>> Good question. I'll use this site to make sure.
>>
>> Thanks!
>>
>> Peace...
>>
>> Tom
>
> Amos
Peace...
Tom