Hi,
I am running Squid 2.0 PATCH 2 on an Ultra-Sparc-2 under Solaris 2.6 with heavy
loads (more than 100 HTTP requests/sec over ca. 6 hours/day) and it works
fine... but after some time (ca. 2 or 3 days) it crashes with:
cache.log:
1998/11/15 16:47:30| clientKeepaliveNextRequest: FD 833 Sending next
1998/11/15 16:47:30| clientKeepaliveNextRequest: entry->swap_status == STORE_ABORTED
1998/11/15 16:47:35| Starting Squid Cache version 2.0.PATCH2 for sparc-sun-solaris2.6...
1998/11/15 16:47:35| Process ID 9068
I think Squid has a corrupted swap.log after that, because I get thousands of:
1998/11/15 16:49:54| WARNING: newer swaplog entry for fileno 0100038E
and
1998/11/15 17:10:41| comm_write: fd_table[3259].rwstate != NULL
1998/11/15 17:10:41| comm_write: fd_table[3264].rwstate != NULL
1998/11/15 17:10:41| comm_open: socket failure: (24) Too many open files
....
1998/11/15 17:11:08| WARNING! Your cache is running out of filedescriptors
......
1998/11/15 17:15:12| commResetFD: socket: (24) Too many open files
1998/11/15 17:20:19| ipcacheEnqueue: WARNING: All dnsservers are busy.
1998/11/15 17:20:19| ipcacheEnqueue: WARNING: 16 DNS lookups queued
1998/11/15 17:20:19| ipcacheEnqueue: Consider increasing 'dns_children' in your config file.
1998/11/15 17:20:51| ipcache_nbgethostbyname: 'doc1.provo.novell.com' PENDING for 752 seconds, aborting
1998/11/15 17:20:51| ipcacheChangeKey: from 'doc1.provo.novell.com' to '1/doc1.provo.novell.com'
1998/11/15 17:20:51| commConnectDnsHandle: Bad dns_error_message
and then crashes again:
FATAL: Too many queued DNS lookups
Squid Cache (Version 2.0.PATCH2): Terminated abnormally.
and so on...
Any idea how I can avoid this?
bye
Zoltan Recean
--
zoltan.recean@germany.net
Callisto Germany.Net GmbH - Technik/NOC
Kennedyallee 89 - 60596 Frankfurt a.M. - Germany
Tel: +49-69-63397456  Fax: +49-69-63397444

Received on Mon Nov 16 1998 - 01:34:29 MST
This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:43:04 MST