Thanks for the great response!
> On Thursday 27 June 2002 02:28 pm, you wrote:
>> Back to the point, I am looking for a cheaper solution while trying to
>> add redundancy. My overall goal is to hit 600 req/sec. using a
>> minimum of two machines. My hardware budget is $7000. (The Inktomi
>> appliance is supposedly capable of 600 req/sec in reverse proxy
>> mode.)
>
> 2 servers gets you nothing for redundancy, since neither server can
> handle the load alone. You really need 3 servers, each able to handle
> 300 req/sec.
You are right about this. I am not actually pushing 600 req/sec yet; I am
really just using the Inktomi platform as a gauge for ROI. If either box
ever hits 50% capacity (I will have to benchmark them), I will add a
third.
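Brian's point about redundancy comes down to simple arithmetic: for the cluster to keep serving the target rate after one of n servers fails, each box must be able to handle target/(n-1) on its own. A quick sketch (the 600 req/sec target is from this thread; the function name is mine):

```python
def per_server_capacity(target_rps, n_servers):
    """Req/sec each server must sustain so the cluster still meets
    target_rps after losing one server (N+1 redundancy)."""
    if n_servers < 2:
        raise ValueError("need at least 2 servers for any redundancy")
    return target_rps / (n_servers - 1)

# With 2 servers, the lone survivor must absorb the full 600 req/sec;
# with 3 servers, each only needs to be good for 300.
print(per_server_capacity(600, 2))  # 600.0
print(per_server_capacity(600, 3))  # 300.0
```

This is why two boxes each benchmarked at 300 req/sec give capacity but not redundancy.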
>
>> So far, I am on this track for each server's hardware: Single 1.13Ghz
>> PIII, 1GB RAM, 2x18GB 10K Ultra 160 HDs (1 for OS/logs, 1 for cache).
>> I
>
> If disk becomes your bottleneck, you're toast. Disk cache is king at
> these speeds. Consider going with 1 or 2 IDE disks and sink the
> savings into more RAM.
I guess you are saying to go heavy on RAM no matter what, and buy more
disks if possible? Is there any performance hit to having the cache
partition on the same disk as the OS and swap?
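For what it's worth, squid.conf lets you put the disk cache on its own spindle so OS, log, and swap I/O never compete with cache seeks. A sketch of the relevant directives (the mount point and sizes here are illustrative, not a recommendation):

```
# Disk cache on its own dedicated spindle, away from OS/logs/swap
# (16 GB store, 16 first-level and 256 second-level directories)
cache_dir ufs /cache1 16384 16 256

# Keep as much of the hot object set in RAM as the box allows
cache_mem 768 MB

# Logs stay on the OS disk
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
```

If cache and OS do share a spindle, the cost shows up as seek contention under load, which is exactly the bottleneck Brian warned about.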
>
>> can get 2 1650s with this config from Dell for ~$4,300 which is well
>> under budget which is good. I am leaning towards FreeBSD. Based on
>> old
>
> Between Linux and FreeBSD, go with the one you know. The vfs and NIC
> performance should be fine on either one. Many around here like
> ReiserFS (meaning Linux), but I found it more practical to just wipe
> the cache partition on reboot.
I actually prefer Solaris, but it seems like most people are running
FreeBSD or Linux, and of those two I prefer FreeBSD. I like Solaris for
multitasking, NFS, and multi-CPU boxes, but FreeBSD will probably kick its
butt at single-tasking on a single CPU. Any arguments for Squid/Solaris
out there?
>
>> These will be caching primarily images (70% of which are over 40K).
>> Considering my low requirements for cache storage space and that I
>> will be using squid in reverse mode does 600 req/sec seem feasible
>> with the above scenario? If not, can hardware upgrades get me there
>> (15K drives, dual proc, more RAM, etc)?
>
> Faster drives and multiple procs will not help you here. More RAM and
> faster procs will help, but you can probably hit 300 req/sec each
> as-is. It depends on the size of the request of course -- I deal with
> a lot of zips and mp3s, so my req rate is probably lower than yours
> will be.
Even allowing for growth I can't see ever having more than 50,000 objects
(average 40KB, ~2GB total) in the cache. Does this make any difference?
Based on what I have read so far, I would probably stick with the 2x18GB
10K drives (5.2 ms seek time) but go with 2GB of RAM, instead of moving to
15K drives (3.9 ms seek time) and leaving the RAM at 1GB. I guess I could
always add RAM later, but it is damn cheap right now. It sounds like the
processor is not really an issue.
Another scenario might be to add a third disk and keep the RAM at 1GB. But
it seems ridiculous to have 54GB of storage for 2 to 3 GB of cacheable
objects -- unless seek time really is the limiting factor. What do you
think?
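The working-set arithmetic above suggests RAM, not spindle count, is the lever: 50,000 objects at ~40 KB is roughly 2 GB, which a 2 GB box can keep mostly memory-resident. A quick check using the numbers from this thread:

```python
objects = 50_000     # projected cache population
avg_size_kb = 40     # average object size from the thread
ram_gb = 2           # proposed RAM upgrade from 1 GB

working_set_gb = objects * avg_size_kb / (1024 * 1024)
print(f"working set: {working_set_gb:.2f} GB")  # ~1.91 GB

# If the whole hot set fits in RAM, faster spindles or a third disk
# buy little -- most hits never touch the disk at all.
print("fits in RAM" if working_set_gb <= ram_gb else "spills to disk")
```

On those numbers, extra RAM dominates 15K drives: a cache that fits in memory makes disk seek time nearly irrelevant for hits.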
>
> -- Brian
I am hoping to get more comments on this before I move forward. Anyone?
Thanks.
-Marshall
Received on Thu Jun 27 2002 - 17:13:11 MDT