> The NONE/411 reply may be due to a Squid bug. But in this
> case it is working to your benefit. Squid sends the NONE/411
> reply very quickly and should close the connection. Thus, the
> worm requests shouldn't tie up very many resources in your
> Squid process.
I've said in previous posts that I was amazed that a single user with a 33.6
kbps connection was stopping Squid. Today I got a clue about the problem:
Squid can't send the error message (that "very quick reply") to the user fast
enough because there is no bandwidth available. All 33,600 bps are in use by
the worm (and whatever other traffic my customer is generating).
Today, while turning on transparent proxying for testing, one infected customer
came online. While Squid was crawling, I did a "netstat -n" and noticed 130
open connections to that customer's IP. That shouldn't be much, but it was the
only thing I noticed that could be bringing Squid to an almost complete stop.
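In case it helps anyone reproduce the check, something along these lines would
count open connections per remote IP (just a sketch: netstat's column layout
varies between systems, so the awk field number may need adjusting):

  # count established connections per remote address
  # (the foreign address is field 5 on my netstat output)
  netstat -n | grep ESTABLISHED | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head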
The requests themselves are not that many. If I "tail -f access.log" I can
hardly notice them among the valid requests, so they amount to, let's say, an
increase of less than 10 or 15% in traffic. I get that kind of increase every
Sunday night and it has never stopped Squid.
There is something else in the worm request - or in its consequences - that is
VERY different from regular, valid requests, because even when I have twice as
much traffic as I had this morning (while I was testing), Squid NEVER gives me
any trouble.
> If squid is dying/exiting, you need to try to find out why. Look
> in cache.log for error messages. Check your syslog for
> messages about Squid too.
It's not dying/exiting. It just sits there, like a dead man, until the switch
tests it and removes the transparent proxying (it's a switch feature). When the
switch removes the port 80 redirection, Squid gets time to recover, which takes
only a few seconds. Then the switch tests it again, and since Squid has
recovered, the test passes and transparent proxying comes back on... just long
enough for Squid to become unresponsive again.
It has never died or crashed. Never.
> Perhaps you are running out of file descriptors. The
> Squid FAQ has instructions for increasing your file descriptor
> limits.
I compiled and installed 2.4-STABLE1 today with 4096 file descriptors. It
didn't solve the problem, even though I had fewer than 150 active connections
due to Code Red (out of a total of about 450 valid active connections).
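If someone wants to double-check that the new limit really took effect, something
like this should do it (a sketch; the cache.log path assumes a default install,
and the exact startup message wording may differ between versions):

  # raise the per-process limit before rebuilding/starting Squid
  ulimit -HSn 4096
  # Squid logs the limit it actually got when it starts up
  grep 'file descriptors available' /usr/local/squid/logs/cache.log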
> Perhaps you are running out of TCP ports. The worm's
> requests may tie up ports in TIME_WAIT state. See
> http://www.ncftpd.com/ncftpd/doc/misc/ephemeral_ports.html
> for information on how to increase the ephemeral port range.
Did that too. I've set the ports from 32768 to 61000, as suggested in the
Squid docs.
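For reference, this is the kind of change I mean (a sketch assuming Linux; on
other systems the sysctl name and the default range will differ):

  # check the current ephemeral (local) port range
  cat /proc/sys/net/ipv4/ip_local_port_range
  # widen it to 32768-61000
  echo "32768 61000" > /proc/sys/net/ipv4/ip_local_port_range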
> Perhaps you're running out of disk space. If the worm makes
> a high rate of requests, your access.log file is growing quickly.
Nope. As I've said, it doesn't take THAT many requests to make Squid
unresponsive. I can serve thousands of valid requests per minute without any
problem, but if only a few hundred of them are Code Red, I'm dead.
Can I just ignore these requests and NOT TRY to send an error message?
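If there is no way to skip the reply entirely, would an early deny at least be
a step in the right direction? Something like this is what I have in mind (just
a sketch: the regex only covers the /default.ida probe that Code Red sends, and
Squid will still answer with a short TCP_DENIED error page):

  # squid.conf sketch: refuse the Code Red probe before any allow rules
  acl codered urlpath_regex -i ^/default\.ida
  http_access deny codered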
---
Luiz Lima
Image Link Internet
http://www.imagelink.com.br