Kinkie wrote:
> On Thu, Feb 25, 2010 at 5:19 PM, Denys Fedorysychenko
> <nuclearcat_at_nuclearcat.com> wrote:
>> On Thursday 25 February 2010 13:42:52 Amos Jeffries wrote:
>>> My opinion of RAID behind Squid is very poor. Avoid if at all possible.
>>>
>>> HW RAID is claimed to be workable though, particularly as the price
>>> range and quality goes up.
What hardware RAID solution is being used?
I'd like to know details such as whether there is a battery-backed write
cache and, if so, its size.
For the RAID0 part, the stripe size is important and should be larger
than the average object size (usually around 13 KB). This avoids
touching 2 disks for 1 logical read. I would recommend a stripe size of
64 KB or more; when you have lots of large objects, an even larger
stripe size is recommended.
To reduce the time spent in the routine that removes old objects from
disk (it runs every hour), the difference between cache_swap_low and
cache_swap_high should be reduced (it is currently 5%).
I suggest using:
cache_swap_low 92
cache_swap_high 93
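
Taken together with a cache_dir line, that suggestion would look
something like this in squid.conf (the path and sizes below are only
placeholders, not details from the original setup):

    # Narrow the gap between the water marks so the hourly cleanup
    # pass removes a smaller batch of old objects each run.
    cache_swap_low 92
    cache_swap_high 93

    # Illustrative cache_dir: aufs store, 100 GB, default L1/L2 dirs.
    cache_dir aufs /var/spool/squid 100000 16 256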
>>> The RAID0 striping operations duplicate the object spreading that
>>> Squid itself performs between its cache_dirs. So all you really gain
>>> there is a larger total disk (Squid doesn't care about that) and the
>>> risk of losing the entire lot if any stripe's platters die.
>> What about the fact that a sequential read of a file on RAID0 is faster?
>> Sure, if the file is larger than double the stripe size.
>
> There's pros and cons. Squid reads data in small chunks; RAID0 has
> the pro of being able to parallelize reads across spindles, but it
> often has the disadvantage of reading in a whole stripe when accessing
> a single block. This most likely means that Squid's access patterns
> thrash the RAID controller's cache.
>
> In other words, if you can, please go JBOD and let Squid balance across
> disks. It's simply the safer choice for getting good performance. In
> case of HDD failure, cache data is by definition easily re-imported
> (it would be nice if Squid were more graceful in handling disk
> failures, but still..)
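
For reference, the JBOD approach suggested above is expressed in
squid.conf as one cache_dir per physical disk, letting Squid spread
objects itself (the paths and sizes here are only illustrative):

    # One cache_dir per spindle; Squid balances objects across them,
    # so losing one disk loses only that disk's share of the cache.
    cache_dir aufs /disk1/cache 50000 16 256
    cache_dir aufs /disk2/cache 50000 16 256
    cache_dir aufs /disk3/cache 50000 16 256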
Received on Sun Feb 28 2010 - 01:25:58 MST
This archive was generated by hypermail 2.2.0 : Sun Feb 28 2010 - 12:00:06 MST