Aside from the slight RAID5 performance drawback
and the RAID0 failure-case drawbacks, I thought the
main performance issue with any RAID under squid
was aufs only having a single writer thread,
as compared to giving squid multiple writer
threads if you mount the disks individually.
Of course, you could make multiple aufs cache_dirs
under one RAID mount... I haven't tried that yet,
but I assume that the OS writing to one disk while
reading from another is faster than the OS waiting
on the single (from its perspective) RAID "disk"
to finish an operation before issuing another.
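For illustration, a minimal sketch of the individual-mount layout
(the mount points and sizes below are hypothetical):

# one aufs cache_dir per separately mounted disk, so writes to
# one disk don't queue behind operations on another
cache_dir aufs /cache1/squid 70000 16 256
cache_dir aufs /cache2/squid 70000 16 256
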
As for Squid handling a single disk failure under JBOD,
not stacking up more reads on an (assumed) failed
disk would be great, but the process still needs to
be killed to clear the requests that blocked before it
noticed the failure, and to replace the disk, right?
You could run one squid per disk (which oddly
almost matches up on dual quad cores w/8 disks),
although you'd want a consistent hash to maximize
your hit rate, and if you had that, losing an entire
squid for a little bit to restart/replace one disk
isn't so bad anyway... definitely better than getting
no performance gain from half your disks, as with RAID1.
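If you went the squid-per-disk route, one way to get the consistent
hash is CARP peer selection on a small front-end squid; a rough sketch
(the back-end ports and names below are made up for the example):

# front-end squid.conf: CARP-hash requests across per-disk back-ends
cache_peer 127.0.0.1 parent 3129 0 carp no-query proxy-only name=disk1
cache_peer 127.0.0.1 parent 3130 0 carp no-query proxy-only name=disk2
# always forward to a peer; the back-ends do the caching
never_direct allow all
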
-neil
On Wed, Mar 26, 2008 at 6:59 PM, Marcus Kool
<marcus.kool@urlfilterdb.com> wrote:
> Richard,
>
> RAID0 is considered to give worse performance than JBOD with 2 disks
> and one cache directory per disk. Since you mentioned that you have
> to stick with RAID0, all you can do is optimize the RAID0 usage.
>
> Only one cache directory per disk is recommended, but you have 4 cache
> directories on one file system. Consider dropping 2 of the COSS cache
> directories so that you have 1 COSS and 1 AUFS.
>
> Kinkie and I rewrote the RAID for Squid section of the FAQ and
> it includes more details about price, performance and reliability trade-offs.
> You will find that Software RAID5 is the slowest option.
>
> -Marcus
>
>
> Richard Wall wrote:
> > On Tue, Mar 25, 2008 at 1:23 PM, Marcus Kool
>
> > <marcus.kool@urlfilterdb.com> wrote:
>
> >> I wish that the wiki section on RAID would be rewritten.
>
> >> Companies depend on internet access and a working Squid proxy,
> >> and therefore the advocated "no problem if a single disk fails"
> >> attitude does not reflect today's reality.
>
> >> One should also consider the difference between
> >> simple RAID and extremely advanced RAID disk systems
> >
> > Recently I've spent a fair bit of time benchmarking a Squid system
> > whose COSS and AUFS storage (10GB total) + access logging are on a
> > RAID0 array of two consumer grade SATA disks. For various reasons, I'm
> > stuck with RAID0 for now, but I thought you might be interested to
> > hear that the box performs pretty well.
> >
> > The box can handle a 600-700 req/sec Polygraph polymix-4 benchmark with a
> > ~40% document hit ratio.
> > Doubling the total storage to 20GB increased the doc hit ratio to
> > 55%, but hit response times began to increase noticeably during the top
> > phases.
> >
> > CPU was about 5% idle during the top phases. Logs were being rotated
> > and compressed every five minutes.
> >
> > Some initial experiments suggest that removing RAID doesn't
> > particularly improve performance, but I intend to do a more thorough
> > set of benchmarks soon.
> >
> > I'm not sure how relevant this is to your discussion. I don't know how
> > RAID0 performance is expected to compare to RAID5.
> >
> > I'll post here if and when I do more benchmarking without RAID.
> >
> > -RichardW.
> >
> > == Spec ==
> > CPU: Intel(R) Celeron(R) CPU 2.53GHz
> > RAM: 3GB
> > Disks: 2 x Seagate Barracuda 160GB
> > Squid: 2.6.STABLE17
> > Linux Kernel: 2.6.23.8
> > FS: reiserfs
> >
> > == Squid Conf (extract) ==
> > # NETWORK OPTIONS
> > http_port 800 transparent
> >
> > # MEMORY CACHE OPTIONS
> > cache_mem 152 MB
> > maximum_object_size_in_memory 50 KB
> >
> > # DISK CACHE OPTIONS
> > cache_replacement_policy lru
> > # TOTAL AVAILABLE STORAGE: 272445 MB
> > # MEMORY STORAGE LIMIT: 46694 MB
> > # CONFIGURED STORAGE LIMIT: 10000 MB
> > cache_dir coss /squid_data/squid/coss0 2000 max-size=16000
> > cache_swap_log /squid_data/squid/%s
> > cache_dir coss /squid_data/squid/coss1 2000 max-size=16000
> > cache_swap_log /squid_data/squid/%s
> > cache_dir coss /squid_data/squid/coss2 2000 max-size=16000
> > cache_swap_log /squid_data/squid/%s
> > cache_dir aufs /squid_data/squid 4000 16 256
> > max_open_disk_fds 0
> > maximum_object_size 20000 KB
> >
> > # LOGFILE OPTIONS
> > debug_options ALL,1
> > buffered_logs on
> > logfile_rotate 10
> >
> > # MISCELLANEOUS
> > memory_pools_limit 10 MB
> > memory_pools off
> > cachemgr_passwd none all
> > client_db off
> >
> >
>