On 3 Nov 2000, at 6:21, Joe Cooper <joe@swelltech.com> wrote:
> > > I'm not going to argue that it shouldn't go in because this is much more
> > > stable than the current 2.3 async code (which is probably unusable), but
> >
> > hmm, I'm using async on 2.3 and am pretty happy. no problems and it's faster
> > than without. Although I couldn't use it on Linux when I tested an alpha box.
> > Linux threads are no good.
>
> Haven't noticed severe degradation in hit ratio?
hard to say. I've got a 25-30% hit rate here, lots of local traffic (20% of
the total) that is not cached, and probably an underscaled box for our
userbase. So I can't blame squid.
> > > 2000/11/03 01:20:15| comm_poll: poll failure: (12) Cannot allocate
> > > 2000/11/03 01:20:15| Select loop Error. Retry 2
> > > 2000/11/03 04:14:49| comm_poll: poll failure: (12) Cannot allocate
> >
> Yeah...Confused me too. But here's my theory...the deadlock froze Squid
> in mid-action.
weird, then it must have been blocked in debug()
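fwiw, those retry messages look like the usual event-loop pattern: log
whatever errno poll() returned and try again a few times before bailing out.
just a sketch of that pattern (not the actual comm_poll code):

  #include <errno.h>
  #include <poll.h>
  #include <stdio.h>
  #include <string.h>

  /* sketch only: log the errno the way the messages above suggest,
   * then retry a bounded number of times before giving up */
  int do_poll_with_retry(struct pollfd *fds, nfds_t nfds, int timeout_ms)
  {
      int tries;
      for (tries = 0; tries < 10; tries++) {
          int n = poll(fds, nfds, timeout_ms);
          if (n >= 0)
              return n;              /* ready fds, or 0 on timeout */
          if (errno == EINTR)
              continue;              /* interrupted by a signal: retry silently */
          /* e.g. errno 12 is ENOMEM, "Cannot allocate memory" */
          fprintf(stderr, "comm_poll: poll failure: (%d) %s\n",
                  errno, strerror(errno));
          fprintf(stderr, "Select loop Error. Retry %d\n", tries + 1);
      }
      return -1;                     /* persistent failure */
  }

poll() hands back ENOMEM when the kernel can't allocate space for its own
tables, so it fits a box that is under heavy memory pressure.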
> After I consoled in and got the box out of deadlock
hmm, how do you get a box out of a deadlock? with a larger hammer? ;)
> > >  r b w swpd free   buff cache si so bi  bo  in  cs us  sy id
> > > 17 0 0  136 2796 115564 57664  0  0  8   0 508 283  1  99  0
> > > 18 0 1  136 2792 115564 57664  0  0  2 690 347 164  0 100  0
> >
> Don't have mpstat. Not running SMP; what does mpstat tell us that
> vmstat doesn't for a single CPU?
mpstat 1 (I'm on Solaris now, but Linux mpstat should be pretty similar)
CPU minf mjf xcal intr ithr  csw icsw migr smtx srw syscl usr sys wt idl
  0   44  15    0 1288 1101 1089  273    0   25   3  2682  23  26  5  47
  0   24  45    0 2337 2160 1853  736    0   72  11  4330  44  56  0   0
  0   10  40    0 2312 2151 1626  640    0   59   3  3966  42  57  1   0
  0   17  45    0 2221 2068 1358  515    0   58   7  3501  41  58  1   0
minor/major faults, interrupts, interrupts handled as threads, context
switches, involuntary context switches, thread migrations, spins on mutexes,
spins on reader/writer locks. the only difference for SMP is that mpstat
reports this for each CPU.
mpstat 1
CPU minf mjf xcal intr ithr  csw icsw migr smtx srw syscl usr sys wt idl
  0    4   1   34  586  448  362   10   14    9   6   803   2   2  5  91
  1    6   0   33   42    0  367   11   14    8   4  1167   3   2  4  90
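if you want a rough Linux equivalent, sysstat's mpstat takes an interval and
count much like the Solaris one (the columns differ a bit); something along
these lines, assuming a reasonably recent sysstat:

  mpstat -P ALL 1 5    # one line per CPU, 1-second interval, 5 samples
  vmstat 1 5           # for comparison: system-wide run queue, swap, io, cpu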
> Not running SCSI. It is running on two IDE disks, with the latest
> stable Linux kernel and IDE patchset. This combination of hardware and [...]
> I don't think it's a hardware issue, but I won't rule it out without
> further testing.
probably not. yet if IDE uses DMA, it needs contiguous physical RAM for the
transfers. kernel-space fragmentation could be an issue, although quite
unlikely.
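to illustrate the contiguity point, a toy kernel-module sketch (made-up names,
nothing to do with the real IDE driver): kmalloc() hands back physically
contiguous memory, so a biggish request can fail on a fragmented box even when
total free memory looks fine:

  /* hypothetical toy module, just to illustrate the point: kmalloc()
   * returns physically contiguous memory, so a large request can fail
   * with ENOMEM on a fragmented box even when plenty of RAM is free */
  #include <linux/init.h>
  #include <linux/module.h>
  #include <linux/slab.h>

  static void *buf;

  static int __init frag_demo_init(void)
  {
      buf = kmalloc(128 * 1024, GFP_KERNEL);  /* needs 32 contiguous pages */
      if (!buf) {
          pr_warn("frag_demo: contiguous 128 KB alloc failed\n");
          return -ENOMEM;
      }
      return 0;
  }

  static void __exit frag_demo_exit(void)
  {
      kfree(buf);
  }

  module_init(frag_demo_init);
  module_exit(frag_demo_exit);
  MODULE_LICENSE("GPL");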
> > brickwall effect. Seen such before on Linux with weird loads.
>
> [...] (in fact I strongly believe) that in the weeks of heavy stress testing
> I've done on the old Squid I would have run into any problems that still
> existed.
don't bet on it. workloads may change a lot. I've had a system that worked
flawlessly for 4 years, and with a slight software change and a different
workload it unmasked PCI problems that ended in heavy panics. eventually
I had to reflash the BIOS to get things stable again.
weird.
------------------------------------
Andres Kroonmaa <andre@online.ee>
Delfi Online
Tel: 6501 731, Fax: 6501 708
Pärnu mnt. 158, Tallinn,
11317 Estonia