> Found this test by Scott
>
> http://www.net.oregonstate.edu/~kveton/fs/
>
> Reiserfs (noatime, notail) seems to be superior... Any comments?
Yeah ... it's out-of-date ... :-)
A couple of things I've noticed since I ran those tests (most of which I
learned from pilfering Joe's comments on this list):
1. I increased the physical memory (RAM) to 2GB and lowered the size of
the cache_dir to about 50GB (although I still think that is too high).
I also upgraded the machine to a dual-processor Xeon on a Dell 2650
which helped a great deal. The machine now handles close to 4500 unique
clients a day at a sustained rate of 125+ requests/s (while keeping the
average response time for all HTTP requests at < 0.09s). The load rarely
gets above 2.00.
2. Don't ever get into a situation where you swap with the post-2.4.9
kernels. I'm convinced that something is still screwy with the new VM
model, and every time I have heavy swapping (i.e. using _all_ of your
swap) I run into all sorts of problems. With 2GB and an appropriate
cache_mem setting (see the sketch after this list) I don't have this
problem, as I barely even touch swap.
3. Hardware RAID, RAID, RAID. I'm not constantly writing out to disk,
but I do read at a pretty decent rate (with RAID5, reads are fast and
writes are slow). A large memory cache on the hardware RAID controller
makes life that much better.
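For the curious, the squid.conf settings behind points 1 and 2 look
roughly like the sketch below. The numbers are ballpark, the /cache path
is just an example, and it assumes the aufs storage scheme; substitute
whatever your build uses:

    # Memory object cache. This is NOT Squid's total footprint; the
    # process uses considerably more, so leave headroom to stay out of
    # swap (point 2 above). 512 MB is a ballpark figure for a 2GB box.
    cache_mem 512 MB

    # ~50GB cache_dir on the RAID volume: size in MB, then the number
    # of first- and second-level subdirectories.
    cache_dir aufs /cache 51200 16 256

A periodic glance at free or vmstat will confirm whether the box is
actually staying out of swap.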
As for ReiserFS v. ext3? I tested ftp.orst.edu with ext3 this summer
for about 3 days while I migrated to some new, permanent disks.
Normally this machine is a Dell 2550 with 500GB of attached RAID5
storage running ReiserFS on Debian. This machine routinely does 400GB
of traffic a day off campus.
I moved it over to a Dell 2650 with 800GB of attached storage, Debian
and ext3 while I upgraded the disks on the 2550 configuration. Needless to
say I couldn't wait to get it back to ReiserFS. rsyncs to the machine
running ext3 were merciless, and even proftpd was bogging down with only
about 100+ users attached; that load is routinely easy for the
ReiserFS machine.
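As an aside, the noatime/notail options mentioned at the top of this
thread go in /etc/fstab; the entry looks something like the line below
(device and mount point are placeholders):

    # noatime skips access-time updates on every read; notail disables
    # ReiserFS tail packing, trading a little disk space for faster I/O.
    /dev/sda5  /cache  reiserfs  noatime,notail  0 0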
Don't get me wrong; I like the idea of ext3 and will probably start to
use it as it matures. Until then, ReiserFS it is.
Scott :-)
> On Mon, 21 Oct 2002, Joe Cooper wrote:
>
> > Hi Wei Keong,
> >
> > Comments inline:
> >
> > Wei Keong wrote:
> > > Hi Joe,
> > >
> > > My squid is currently running ext2, which is very stable for 100-120 req/s.
> > > To achieve better performance, I am thinking of changing to ext3. But, from the
> > > forum, there seems to be some performance issue with ext3. I am thinking of
> > > testing both filesystems on kernel 2.4.19. Hope you can share what you have done...
> > >
> > > - When was the last time you tested the performance of ext3 & reiserfs?
> >
> > About 8 months ago. The kernel revision was the then-current Red Hat
> > 2.4.9-31 (I think 31 is right--it was some kernel package from Red Hat).
> > I'm planning another round of tests, because both ReiserFS and ext3
> > have had significant enhancements that might lead to performance
> > improvements for Squid. I'm waiting until those enhancements become
> > mainlined, however. (Specifically, indexes in ext3, and write barriers
> > in ReiserFS, among other general improvements.)
> >
> > > - How did you test the performance?
> >
> > Polygraph, of course.
> >
> > > - What kind of workload you use?
> >
> > Polymix-4 and Datacomm-1.
> >
> > > - What kind of performance did your box achieve (req/s & response time)?
> >
> > On modest hardware (450MHz K6-2/256MB/2x7200RPM IDE):
> >
> > ext3 maxed out at about 60 req/sec on polymix-4, and about 70 on
> > datacomm-1. Some modes performed worse than others, but I'd have to dig
> > up my notes to be more specific.
> >
> > ReiserFS remained stable at about 85 req/sec on polymix-4, and about 95
> > on datacomm-1.
> >
> > Response time always stays within what I consider 'good'. If a box
> > doesn't stay under 2000ms average latency, I don't consider the run
> > 'passed' (a machine performing extremely well on a polygraph workload
> > averages around 1500ms or less). Hit rates are expected to be
> > above 50%.
> >
> > I'd love to hear about your results. It would be nice to have some
> > additional data points from other configurations.
> > --
> > Joe Cooper <joe@swelltech.com>
> > Web caching appliances and support.
> > http://www.swelltech.com
> >