Chris Woodfield wrote:
> Take a careful look at the stale-if-error Cache-control header, as
> described below:
>
> http://tools.ietf.org/html/draft-nottingham-http-stale-if-error-01
>
> In a nutshell, this allows Squid to keep serving objects when the
> origin is down, even if those objects are stale, for a configurable
> number of seconds after the object would normally have become stale.
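> For example, a response header along these lines (the values are purely
> illustrative) lets Squid keep serving the object for up to a day past
> its normal expiry whenever the origin cannot be reached:
>
>   Cache-Control: max-age=600, stale-if-error=86400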
>
> However, you'll still have the overhead of Squid attempting to reach the
> origin, failing, and then serving up the stale object on each request. As
> such, I'd highly recommend making sure that if you use this, you shut
> down the server in such a way that it generates an ICMP Destination
> Unreachable reply when Squid attempts to connect.
> If you take the server off the air completely, Squid will have to wait
> for network timeouts before returning the stale content, which your
> users will notice.
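> One rough way to arrange that (assuming a Linux origin host with
> iptables; the port and tooling are just an example) is to reject new
> connections to the backend port rather than dropping them or powering
> the box off entirely:
>
>   # illustrative only: answer connection attempts with an immediate
>   # "port unreachable" instead of letting them time out
>   iptables -I INPUT -p tcp --dport 81 -j REJECT --reject-with icmp-port-unreachable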
>
> Of course, you'll need to make sure that squid has your site cached in
> its entirety - it can't retrieve not-cached content from a dead server :)
>
> Amos, can you confirm that 3.x supports this? I'm using it in 2.7.
Currently only operational in 2.7.
There is an aging experimental patch for 3.x:
http://www.squid-cache.org/bugs/show_bug.cgi?id=2255
but the feature is still awaiting an upgrade to current 3.x code and
testing. I'm too bogged down with 3.1 stabilization to re-code and test
it myself.
Volunteers to do that small task are welcome; the 3.2 tree is already
open for additions.
Amos
>
> -C
>
> On Jun 22, 2009, at 9:50 PM, Amos Jeffries wrote:
>
>> On Mon, 22 Jun 2009 16:44:34 -0400, Myles Merrell <mmerrell_at_cleverex.com>
>> wrote:
>>> Is it possible to configure Squid to continue serving from the cache,
>>> even if the origin server has crashed?
>>>
>>> We have Squid set up as an accelerator through a virtual host. Squid
>>> listens on port 80, and our web server runs on another machine on port
>>> 81. Squid serves the majority of pages from its cache, and when it has
>>> to it fetches them from the server. We'd like to be able to take the
>>> server down periodically and have the Squid cache continue to serve the
>>> pages it holds.
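>>> For reference, a minimal accelerator setup of that shape might look
>>> roughly like this in squid.conf (the hostname, backend address and the
>>> "backend" name are made up for illustration):
>>>
>>>   http_port 80 accel defaultsite=www.example.com vhost
>>>   cache_peer 192.0.2.10 parent 81 0 no-query originserver name=backend
>>>   cache_peer_access backend allow all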
>>>
>>> Is this reasonable? If so, is it possible?
>>>
>>
>> Sort of. Squid does this routinely for all objects which it can cache.
>> The
>> state of the backend server is irrelevant for HIT traffic.
>>
>> I'm sure some of those who deal with high-uptime requirements have
>> more to
>> add on this. These are just the bits I can think of immediately.
>>
>> For regular usage, make sure that sufficiently long expiry and max-age
>> values are set so things stay cached for as long as possible. Also check
>> that the cache_peer monitor* settings are in use. These will greatly
>> reduce the impact of minor outages or load hiccups.
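>> For the expiry side, something like this in squid.conf would do it (the
>> values are placeholders, not recommendations); the monitor* peer options
>> are shown in the peer example just below:
>>
>>   # cache content for up to a week when the origin sends no explicit
>>   # expiry information (times are in minutes: min 1440, max 10080)
>>   refresh_pattern . 1440 80% 10080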
>>
>> For best effect, the monitor settings combined with several duplicate
>> parent peers are recommended, so that when one peer is detected as down
>> Squid simply sends requests to the next one. Only the requests already
>> in flight to the failed peer will experience any error.
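>> Roughly, with two identical backends that could look like the following
>> (the addresses, names and health-check URL are invented for the example,
>> and the monitor* options are 2.x features):
>>
>>   cache_peer 192.0.2.10 parent 81 0 no-query originserver round-robin name=backend1 monitorurl=http://www.example.com/health monitorinterval=30
>>   cache_peer 192.0.2.11 parent 81 0 no-query originserver round-robin name=backend2 monitorurl=http://www.example.com/health monitorinterval=30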
>>
>> The newer the Squid (up to the 2.HEAD snapshots), the better the tuning
>> and the more options available for this type of usage. Several sponsors
>> have spent a lot getting the 2.7 and 2.HEAD acceleration features added.
>>
>>
>>
>> For longer scheduled outages there are some other settings which can
>> further reduce the impact, but they take planning to use properly. When
>> an outage is being scheduled, make sure the max_stale config option is
>> set to a reasonable period that is longer than the planned downtime.
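>> For instance, to allow objects to be served for up to a week past their
>> normal expiry (pick something comfortably longer than the planned
>> outage):
>>
>>   max_stale 1 week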
>>
>> Give Squid some time to grab as much content as possible. You may want
>> to run a sequence of requests for not-so-popular pages that MUST stay
>> cached for the duration. Then set the inappropriately named offline_mode
>> option in Squid just before dropping the back-end. These combine to make
>> Squid cache as aggressively as possible and not seek external sources
>> unless absolutely required.
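>> A rough pre-outage sequence might look like this (the page list and
>> hostname are placeholders; the requests must go through Squid on port 80
>> so the objects land in its cache):
>>
>>   # warm the cache with the pages that must stay available
>>   for url in /index.html /contact.html /docs/manual.html; do
>>       wget -q -O /dev/null http://www.example.com$url
>>   done
>>
>>   # then, in squid.conf, just before taking the back-end down:
>>   offline_mode on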
>>
>>
>> Amos
>>
>
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE16
  Current Beta Squid 3.1.0.8