On 03/04/11 19:55, squid_at_sourcesystemsonline.com wrote:
> Good day,
>
> Is there any way I can stop the squid store rebuilding process?
> Why is squid rebuilding its store, and why does it take so long to complete?
>
> Why am I seeing some http log in cache.log instead of access.log?
"http log"? cache.log displays details about the problems seen by
Squid. Being an HTTP proxy almost all problems are somehow related to HTTP.
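For reference, the two logs are set by separate squid.conf directives; a minimal
sketch (the paths are only a guess at a typical Windows install, adjust to yours):

   cache_log   c:/squid/var/logs/cache.log          # startup/shutdown notices, warnings, errors
   access_log  c:/squid/var/logs/access.log squid   # one line per client request

Anything about individual client requests belongs in access.log; cache.log is
where Squid reports its own state and problems.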
>
> Why am I getting these errors (see below) in my squid cache.log?
>
> What can I do to prevent these errors from occurring?
>
> Regards,
> Yomi
>
>
>
> 2011/03/31 15:02:01| unlinkdUnlink: write FD 10 failed: (10035)
> WSAEWOULDBLOCK, Resource temporarily unavailable.
> 2011/03/31 15:02:01| unlinkdUnlink: write FD 10 failed: (10035)
> WSAEWOULDBLOCK, Resource temporarily unavailable.
> 2011/03/31 15:02:01| unlinkdUnlink: write FD 10 failed: (10035)
> WSAEWOULDBLOCK, Resource temporarily unavailable.
> 2011/03/31 15:02:01| unlinkdUnlink: write FD 10 failed: (10035)
>
... "Resource temporarily unavailable".
"unlinkd" is the cache file eraser. It looks like a file delete failed
due to something moving or double-deleting the file or locking the disk
access away from Squid's eraser.
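A common cause of that on Windows (a guess at your setup, not something visible
in the log) is antivirus real-time scanning or an indexing service holding files
open inside the cache directory. Whatever path is on your cache_dir line, for
example:

   # illustrative path only -- use whatever your squid.conf actually says
   cache_dir ufs c:/squid/var/cache 1000 16 256

should be excluded from any scanner/indexer, so that only Squid (and its unlinkd
helper) ever touches those files.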
> 2011/04/02 07:13:51| Store rebuilding is 13.9% complete
> 2011/04/02 07:14:06| Store rebuilding is 14.9% complete
> 2011/04/02 07:14:21| Store rebuilding is 16.0% complete
> 2011/04/02 07:14:26| WARNING: newer swaplog entry for dirno 0, fileno
> 002C3FA8
> 2011/04/02 07:14:26| WARNING: newer swaplog entry for dirno 0, fileno
> 002C3FA8
> 2011/04/02 07:14:26| WARNING: newer swaplog entry for dirno 0, fileno
> 002C3FA8
> 2011/04/02 07:14:26| WARNING: newer swaplog entry for dirno 0, fileno
> 002C3FA8
The Squid metadata about file 0/002C3FA8 is different from the file actually
in the cache. Either something has tampered with the file, or the disk is
becoming corrupted.
When store problems like this are found, Squid drops/erases the cache entry
for safety.
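If these warnings keep recurring you can start over from a known-clean cache.
A rough sketch of the procedure (assuming a ufs/aufs cache_dir and that you can
afford to lose the cached objects):

   squid -k shutdown      # stop Squid cleanly
   (delete the swap.state file inside the cache_dir, or empty the cache_dir entirely)
   squid -z               # recreate the cache directory structure
   (start Squid again; it begins with an empty cache and a fresh index)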
>
> 2011/04/01 17:14:44| Preparing for shutdown after 18038 requests
> 2011/04/01 17:14:44| Waiting 0 seconds for active connections to finish
> 2011/04/01 17:14:44| FD 21 Closing HTTP connection
> 2011/04/01 17:14:45| Shutting down...
> 2011/04/01 17:14:45| FD 22 Closing ICP connection
> 2011/04/01 17:14:45| FD 23 Closing HTCP socket
> 2011/04/01 17:14:45| FD 24 Closing SNMP socket
> 2011/04/01 17:14:45| WARNING: Closing client 192.168.137.111 connection
> due to lifetime timeout
> 2011/04/01 17:14:45|
> http://prod1.rest-notify.msg.yahoo.com/v1/pushchannel/michealomoniyi90?sid=oxsQqevs3BrUSPEmbgkZN8DUms6k5KS3kNONWA--&c=t3i2XkVUiKv&seq=5&cb=ca7u4bqs&format=json&idle=110&cache=1301674432962
> 2011/04/01 17:14:45| WARNING: Closing client 192.168.137.112 connection
> due to lifetime timeout
> 2011/04/01 17:14:45|
> http://0.145.channel.facebook.com/x/1534213288/1667359870/false/p_100001050471985=0
> 2011/04/01 17:14:45| WARNING: Closing client 192.168.137.110 connection
> due to lifetime timeout
> 2011/04/01 17:14:45|
> http://0.63.channel.facebook.com/x/2878475822/2523481944/false/p_100002078361547=44
> 2011/04/01 17:14:45| WARNING: Closing client 192.168.137.111 connection
> due to lifetime timeout
> 2011/04/01 17:14:45|
> http://0.192.channel.facebook.com/x/461011288/1190138722/false/p_100001927647801=1
> 2011/04/01 17:14:45| Closing unlinkd pipe on FD 10
So far this is a normal shutdown process. The WARNINGs are not errors, just
important information that some client connections are being killed
while still in use.
channel.facebook.com apparently uses an HTTP technique called long-polling
to keep connections open for hours or days. Those can be ignored. I'm not
sure about the Yahoo URL.
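The "Waiting 0 seconds for active connections to finish" line suggests
shutdown_lifetime is set to 0 in your squid.conf. If you want in-use connections
a chance to finish before being killed at shutdown, give them some time
(a sketch; 30 seconds is the built-in default):

   shutdown_lifetime 30 seconds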
> 2011/04/01 17:14:45| Not currently OK to rewrite swap log.
> 2011/04/01 17:14:45| storeDirWriteCleanLogs: Operation aborted.
Hmm, that may be a problem. Could be related to, or have the same cause as,
the unlinkd problems. Something is preventing Squid from saving its metadata
to disk.
This will result in your cache rebuilding slowly at the next startup.
> CPU Usage: 7147.781 seconds = 1196.078 user + 5951.703 sys
> Maximum Resident Size: 113604 KB
> Page faults with physical i/o: 28549
Here is some useful information about how much RAM your Squid needed to
run and how much swapping was done.
The "Maximum Resident Size" shows how much memory Squid needed since the
last restart (113604 KB here); use it to check that you have enough RAM
allocated for Squid.
The "Page faults with physical i/o" count shows that RAM was not always
available when needed, and swapping to disk happened 28,549 times. "0" is best.
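If you ever need to trim Squid's memory footprint, the main knob is cache_mem
(only a sketch; ~113 MB resident is not particularly large, so this may not be
needed at all). Note that cache_mem limits only the RAM used for in-transit and
"hot" objects; the total process size will always be larger:

   cache_mem 64 MB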
Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5