On Mon, Oct 16, 2000, Chemolli Francesco (USI) wrote:
> > On Wed, Oct 11, 2000, Chemolli Francesco (USI) wrote:
> > > Yesterday I started a cache double-check.
> > > After 14 hours, it was still crunching, the
> > > disk cache at the moment was 15 gigs big,
> > > distributed among 5 diskd-based dirs on different
> > > HDDs.
> > >
> > > CPU usage was very low, disk usage very high,
> > > response by squid was very sluggish.
> > >
> > > So.. how long should I expect it to run?
> > > 1 hour/gig seems like a pretty long time...
> >
> > It's synchronous. This is probably your problem.
>
> Actually my problem is that it's badly designed.
> With help from Robert Collins, I figured out the problem:
>
> When started with -S, squid stat()s every file it knows
> about, and compares its known size with the on-disk size.
> If they differ, it prints an error message to cache.log.
> IT DOESN'T PERFORM ANY OTHER OPERATION.
> Once it has finished going through all the files, it
> will abort, failing the assertion that store_errors == 0.
> It will then start over, with the same result, in
> an endless loop.
Wow. I wonder why it does that ..
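If I'm reading your description right, the behaviour boils down to
something like this standalone sketch (NOT the actual squid source;
check_entry(), the struct and the path are made up, it's just the
shape of the problem):

    #include <assert.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    struct known_entry {
        const char *path;   /* swap file on disk */
        off_t size;         /* size squid has recorded for it */
    };

    static int
    check_entry(const struct known_entry *e)
    {
        struct stat sb;
        if (stat(e->path, &sb) < 0) {
            fprintf(stderr, "DoubleCheck: MISSING SWAP FILE: %s\n",
                e->path);
            return 1;
        }
        if (sb.st_size != e->size) {
            fprintf(stderr, "DoubleCheck: SIZE MISMATCH: %s\n",
                e->path);
            return 1;
        }
        return 0;
    }

    int
    main(void)
    {
        /* one hypothetical cache object with a stale recorded size */
        struct known_entry entries[] = {
            { "/cache/00/00/00000000", 1024 },
        };
        int store_errors = 0;
        size_t i;
        for (i = 0; i < sizeof(entries) / sizeof(entries[0]); i++)
            store_errors += check_entry(&entries[i]);
        /* The mismatch is counted but never fixed, so this assert
         * fails, squid dies, restarts, and hits the very same bad
         * files again -- the endless loop you're seeing. */
        assert(store_errors == 0);
        return 0;
    }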
Well, how about in DoubleCheck() we forcibly rm the file and ++ the
store_errors, so that on the second pass there shouldn't be any
bad files? Make sure of course that you're not arbitrarily duplicating
StoreEntry's here ..
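In terms of the sketch above, I mean something like this (completely
untested, and the real thing would also have to release the
StoreEntry so the index doesn't keep a dangling reference):

    /* Proposed tweak: unlink the offending swap file before counting
     * the error.  The first pass still trips the assert and squid
     * restarts, but the restart no longer finds the bad files, so
     * the second pass comes up clean instead of looping forever. */
    #include <unistd.h>

    static int
    check_entry_and_remove(const struct known_entry *e)
    {
        struct stat sb;
        if (stat(e->path, &sb) < 0) {
            fprintf(stderr, "DoubleCheck: MISSING SWAP FILE: %s\n",
                e->path);
            return 1;           /* nothing on disk to remove */
        }
        if (sb.st_size != e->size) {
            fprintf(stderr, "DoubleCheck: SIZE MISMATCH, removing %s\n",
                e->path);
            unlink(e->path);    /* forcibly rm the bad file */
            return 1;           /* still ++ store_errors */
        }
        return 0;
    }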
Adrian
--
Adrian Chadd                    "It was then that I knew that I wouldn't
<adrian@creative.net.au>         die, as a doctor wouldn't fart in front
                                 of a dying boy."
                                    -- Angela's Ashes