Dear Prashant,
The reason Squid is dying is probably that the size you specified for cache_dir in squid.conf is bigger than the actual usable size of the filesystem. For example, if I have a 9 GB disk that comes out to 8.6 GB after formatting, I set my cache_dir size to 8 GB, not 8.6. That is not a standard rule, but it is safe. Alternatively, this could be a problem with the squid 2.3 STABLE release, which is in fact not really stable yet; please check the Squid home page for details.
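To illustrate (only a sketch of what I mean; the /cache1 path and the 16/256 directory levels are just the usual defaults, not your actual values), on a filesystem that formats out to 8.6 GB I would put something like this in squid.conf:

    # Squid 2.2 syntax: cache_dir <directory> <Mbytes> <Level-1 dirs> <Level-2 dirs>
    # (Squid 2.3 adds a storage-type field, e.g. "cache_dir ufs /cache1 8000 16 256")
    # 8000 MB leaves headroom below the 8.6 GB the filesystem actually provides
    cache_dir /cache1 8000 16 256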
If you are running into one or more problems with it, I recommend using 2.2 STABLE or applying the patches available from the Squid home page.
The swap.stat file is located in your cache_dir, and each cache_dir has its own copy.
If possible, I suggest removing the whole cache and recreating the filesystems: unmount them, recreate them with mkfs, mount them again, run squid -z, and then restart Squid.
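Roughly the commands I have in mind (only a sketch; /cache1, the /dev/sda5 device, and the "squid" user name are assumptions, so substitute your own cache partition, device, and cache_effective_user, and remember that mkfs destroys everything on the partition):

    squid -k shutdown        # stop squid first (or kill the running process)
    umount /cache1           # unmount the cache filesystem
    mke2fs /dev/sda5         # recreate the filesystem (ext2 on Red Hat 6.1)
    mount /cache1            # mount it again (assumes an /etc/fstab entry)
    chown -R squid /cache1   # hand ownership back to the cache_effective_user
    squid -z                 # recreate the swap (cache) directories
    squid                    # start squid again, or use your init script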
Also, use cachemgr.cgi to find out the current status of your disk and cache.
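If cachemgr.cgi is not set up on a web server, the same cache manager pages can also be fetched from the command line with the "client" utility that ships with Squid (a sketch; I am assuming Squid listens on the default port 3128 on the same machine):

    ./client -p 3128 mgr:info       # uptime, memory and file descriptor usage
    ./client -p 3128 mgr:storedir   # per-cache_dir disk usage and store status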
With Regards
Ahsan Khan
Sr. System Admin
Internet Division (OneNet)
Sun Communication Pvt. Ltd.
http://www.one.net.pk
----- Original Message -----
From: Prashant Desai
To: Ahsan Khan
Sent: Friday, March 24, 2000 11:33 AM
Subject: Re: very urgent help required !! please
Really sorry, Ahsan; I feel really bad about not being able to send the cache.log. I have just cleaned up the cache and started
Squid with an empty cache.
But when Squid died and I tried to start it again, the log file said everything was OK, but then "no space left can't write into swap.stat", "creating swap.stat.new", and it terminated abruptly.
Please tell me what the problem could be; there was enough disk space left on the device where swap.stat was.
It gives this same message in the log file again and again whenever I try to start the Squid process, and this has happened around
10 times.
When I run "./squid status", it reports "squid died : subsys locked , error no running copy".
Please suggest possible reasons for this to happen.
thanks & regards
Prashant desai
-----Original Message-----
From: Ahsan Khan <ahsank@one.net.pk>
To: prashant <reach_prashant@worldgatein.net>
Date: Friday, March 24, 2000 4:11 AM
Subject: Re: very urgent help required !! please
Hey brother, send me your cache.log file, as I need to know the exact error. I am sitting at my system right now, so you can send it to me right away.
With Regards
Ahsan Khan
Sr. System Admin
Internet Division (OneNet)
Sun Communication Pvt. Ltd.
http://www.one.net.pk
----- Original Message -----
From: prashant
To: Ahsan Khan
Sent: Thursday, March 23, 2000 12:06 AM
Subject: Re: very urgent help required !! please
Dear Ahsan,
Have you gone through it? Please let me know; I am desperately waiting for your reply.
Thanks and regards,
Prashant Desai
----- Original Message -----
From: Ahsan Khan
To: prashant ; squid-users@ircache.net
Sent: Friday, March 24, 2000 12:45 AM
Subject: Re: very urgent help required !! please
Please send me the cache.log output from when it dies so that I can diagnose the problem.
With Regards
Ahsan Khan
Sr. System Admin
Internet Division (OneNet)
Sun Communication Pvt. Ltd.
http://www.one.net.pk
----- Original Message -----
From: prashant
To: squid-users@ircache.net
Sent: Wednesday, March 22, 2000 10:03 PM
Subject: very urgent help required !! please
Dear friends,
Please help me. The problem is that I am running Squid 2.2 STABLE 4 on Red Hat 6.1.
My config is:
cache_dir 6 GB (SCSI disk)
cache_mem 64 MB
Hardware: IBM Netfinity 5000, RAM: 512 MB
Now the thing is that
Squid works great only if I restart it after it has served around 1500 K requests, with an average load of at most 600 req/min.
If I don't restart it, it dies after serving 3000 K requests (total) and the subsys gets locked (I get a "subsys locked" message when I run "./squid status"; what exactly is that?).
I have restarted the machine a few times and still get the same message.
The first thing I have to do is get Squid started again (as soon as possible); after that, please tell me what I have to do
to solve this problem.
If you want any other details, please write to me; I will certainly provide them.
Please help me as soon as possible.
Thanks in advance,
Prashant Desai
Received on Fri Mar 24 2000 - 00:33:43 MST