I have squid running as a transparent proxy for a few thousand clients,
and I haven't had any problems like this. It seems the Code Red II HTTP
request is malformed: squid rejects it with a 411 error
(HTTP_LENGTH_REQUIRED) before it even gets to any acl checks. So squid
as a transparent proxy at least stops Code Red II from spreading
outward...
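If you want to check whether your own squid is doing the same, the
rejected requests should show up in the access log with a 411 status.
Something like this gives a rough count (the log path is an assumption,
adjust for your install):

    # count Code Red probes that squid refused with 411
    grep 'default.ida' /var/log/squid/access.log | grep -c '/411'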
On Mon, 6 Aug 2001, Roddy Strachan wrote:
> > Hi,
> >
> > This was posted to another one of my lists. Is anyone else being
> > affected by it?
> >
> > This is the cause of the problem I posted about earlier.
> >
> > > Code Red and its new (and quite different) derivative, Code Red II,
> > > have a nasty side effect for those of us running transparent proxy
> > > servers.
> > >
> > > It tends to bring them down (out of service).
> > >
> > > I've reported this to my own vendor concerned (Cisco), but it'll
> > > probably affect practically all transparent proxy servers.
> > >
> > > What's happening? Read on.
> > >
> > > Consider that your customer base, if you have a reasonably sized one,
> > > will have lots of compromised hosts inside it already (and more by
> > > the hour). At our place on Sunday alone I was seeing over 35,000
> > > outbound queries from compromised hosts in our customer base, going
> > > back out to the Internet at large.
> > >
> > > These attacks involve HTTP 'GET' requests sent to random IP
> > > addresses. Many (indeed most) of the targets won't respond at all.
> > >
> > > If you are running a transparent proxy, these requests will be
> > > captured and re-issued by your transparent proxy. For each of these
> > > queries, one connect table entry will be used up as your proxy tries
> > > to open a tcp connection to the destination host concerned.
> > > Additional buffer space and other resources will be consumed in
> > > buffering the pending request from the worm-compromised system while
> > > your server waits, in vain, for the request to complete, before
> > > eventually timing it out.
> > >
> > > While it's doing this, the compromised hosts are making other similar
> > > attempts - up to 600 in parallel, per compromised host, in the case
> > > of some variations of the latest worm.
> > >
> > > Think about what the concurrent connection handling capabilities of
> > > your proxy are, and imagine how quickly those resources will be
> > > chewed up and blocked by even half a dozen compromised hosts inside
> > > your downstream customer base: six hosts at up to 600 parallel
> > > attempts each is on the order of 3,600 half-open connections, all
> > > waiting to time out.
> > >
> > > Oops.
> > >
> > > So you can blame Microsoft's lax coding practices, again, this time
> > > for costing you the money you'll blow on extra downloads, because
> > > you'll need to turn your transparent proxy off until this particular
> > > worm blows over.
> > >
> > > You can spot this happening easily: just log into your proxy server
> > > and display the TCP connection table. If it's got lots of 'SYN_SENT'
> > > or similar entries, those are probably (in the main) worm-generated
> > > cruft that's eating your system resources for lunch.
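On a squid box running Linux, a quick way to count these is something
like the following (netstat flags vary by platform):

    # count outbound TCP connections stuck waiting for a SYN-ACK
    netstat -tn | grep -c SYN_SENT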
> > >
> > > On my transparent proxy platform (Cisco) I've been able to work
> > > around the problem by configuring blocking rules that notice and stop
> > > the GET requests from the worm. These work on the 2.x releases of the
> > > Cisco Cache Engine code, and while they also run on the latest (3.x)
> > > code train (madly blocking attack attempts), my observation today is
> > > that my Cache Engine 3.x system is still being killed by the worm
> > > attacks.
> > >
> > > I'm still trying to work out why (the blocking rule implementation
> > > is different in 3.x, and that is probably part of the issue). Each
> > > time I fire it up, my 3.x-based engine lasts about two minutes before
> > > all web I/O comes to a standstill. Oh well, back to my old system
> > > (glad I still have it!)
> > >
> > > [Yep, I'm already taking this up with Cisco, of course; dealing with
> > > it may not be a simple task. For interest, the 'fix' on a 2.51 system
> > > is:
> > >
> > > rule enable
> > > rule block url-regex ^http://.*www\.worm\.com/default\.ida$
> > > rule block url-regex ^http://.*/default\.ida$
> > >
> > > ]
> > >
> > > On (say) squid-based transproxy code, you should be able to set up a
> > > blocking rule in the squid configuration file. It's not hard to find
> > > a request to use as an example to block: just look for 'default.ida'
> > > in your proxy logs (and be prepared to be surprised at how many of
> > > them you find).
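For squid, something along these lines in squid.conf should do the same
job (an untested sketch; the acl name 'codered' is my own, and the
pattern may need tuning for the variants you see in your logs):

    # match the worm's probe for /default.ida, case-insensitively
    acl codered urlpath_regex -i ^/default\.ida
    # this deny must come before your http_access allow rules,
    # since squid acts on the first http_access line that matches
    http_access deny codered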
--
Madarasz Gergely    gorgo@sztaki.hu    gorgo@linux.rulez.org
It's practically impossible to look at a penguin and feel angry.
HuLUG: http://mlf.linux.rulez.org/