----------------------------------------
> Date: Sun, 12 Jun 2011 03:35:28 -0700
> From: david_at_lang.hm
> To: bodycare_5_at_live.com
> CC: squid-users_at_squid-cache.org
> Subject: RE: [squid-users] squid 3.2.0.5 smp scaling issues
>
> On Sun, 12 Jun 2011, Jenny Lee wrote:
>
> >> Date: Sun, 12 Jun 2011 03:02:23 -0700
> >> From: david_at_lang.hm
> >> To: bodycare_5_at_live.com
> >> CC: squid3_at_treenet.co.nz; squid-users_at_squid-cache.org
> >> Subject: RE: [squid-users] squid 3.2.0.5 smp scaling issues
> >>
> >> On Sun, 12 Jun 2011, Jenny Lee wrote:
> >>
> >>>> On 12/06/11 18:46, Jenny Lee wrote:
> >>>>>
> >>>>> On Sat, Jun 11, 2011 at 9:40 PM, Jenny Lee wrote:
> >>>>>
> >>>>> I'd like to know how you are able to do >13000 requests/sec.
> >>>>> tcp_fin_timeout defaults to 60 seconds on all *NIXes, and the available ephemeral port range is 64K.
> >>>>> I can't do more than 1K requests/sec with ab, even with tcp_tw_reuse/tcp_tw_recycle. I get commBind errors due to connections in TIME_WAIT.
> >>>>> Any tuning options suggested for RHEL6 x64?
> >>>>> Jenny
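(For anyone following along, the knobs being discussed look roughly like this; the values below are only illustrative assumptions for a RHEL6-style box, not recommendations:)

    # widen the ephemeral port range (RHEL6 defaults to 32768-61000)
    sysctl -w net.ipv4.ip_local_port_range="1024 65535"
    # allow sockets in TIME_WAIT to be reused for new outbound connections
    sysctl -w net.ipv4.tcp_tw_reuse=1
    # shorten FIN-WAIT-2; note this does not shorten TIME_WAIT itself
    sysctl -w net.ipv4.tcp_fin_timeout=30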
> >>>>>
> >>>>> I would have a concern about using both of those at the same time, reuse and recycle. I've seen issues when testing my own Linux distros with both of these settings enabled. Right or wrong, that was my experience.
> >>>>> As for fin_timeout: if you have a good connection, there should be no reason for a system to take 60 seconds to send out a FIN. Cut that in half, if not by two thirds.
> >>>>> And what is your limitation at 1K requests/sec: load (if so, look at I/O) or network saturation? Maybe I missed an earlier thread, but I too would tilt my head at 13K requests/sec!
> >>>>> Tory
> >>>>> ---
> >>>>>
> >>>>>
> >>>>> As I mentioned, my limitation is the ephemeral ports tied up in TIME_WAIT. The TIME_WAIT issue is a known factor when you are doing this kind of testing.
> >>>>>
> >>>>> When you are tuning, you apply options one at a time. tw_reuse/tw_recycle were not used together, and I had a 10 sec fin_timeout, which made no difference.
> >>>>>
> >>>>> Jenny
> >>>>>
> >>>>>
> >>>>> nb: I still don't know how to do indenting/quoting with this Hotmail... after 10 years.
> >>>>>
> >>>>
> >>>> A couple of things to note.
> >>>> Firstly, this was an ab (Apache Bench) reported figure. It
> >>>> calculates the software limit based on the speed of the transactions completed,
> >>>> not necessarily accounting for things like TIME_WAIT. Particularly if it
> >>>> was extrapolated from, say, 50K requests, which would not hit that OS limit.
> >>>
> >>> ab accounts for 200 OK responses, and TIME_WAITs cause squid to issue 500s. Of course, if you only send in 50K it would not be subject to this, but I usually send a couple of runs of 10+ million requests to simulate load for at least a while.
> >>>
> >>>
> >>>> He also mentioned using a "local IP address". If that was on the lo
> >>>> interface, it would not be subject to things like TIME_WAIT or RTT lag.
> >>>
> >>> When I was running my benches on loopback, I had tons of TIME_WAITs for 127.0.0.1 and squid would bail out with: "commBind: Cannot bind socket..."
> >>>
> >>> Of course, I might be doing things wrong.
> >>>
> >>> I am interested in what to optimize on RHEL6 OS level to achieve higher requests per second.
> >>>
> >>> Jenny
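(A quick way to watch this happening while a bench runs; just an illustrative snippet, either form should work on RHEL6:)

    # count sockets currently sitting in TIME_WAIT
    ss -tan state time-wait | tail -n +2 | wc -l
    # or, with netstat:
    netstat -ant | awk '$6 == "TIME_WAIT"' | wc -l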
> >>
> >> I'll post my configs when I get back to the office, but one thing is that
> >> if you send requests faster than they can be serviced, the pending requests
> >> build up until you start getting timeouts. So I have to tinker with the
> >> number of requests that can be sent in parallel to keep the request rate
> >> below this point.
> >>
> >> Note that when I removed the long list of ACLs I was able to get this 13K
> >> requests/sec rate going from machine A to squid on machine B to apache on
> >> machine C, so it's not a localhost thing.
> >>
> >> Getting up to the 13K rate on apache does require some tuning and
> >> tweaking of apache; stock configs that include dozens of dynamically
> >> loaded modules just can't achieve these speeds. These are also fairly
> >> beefy boxes: dual quad-core Opterons with 64G RAM and 1G Ethernet
> >> (multiple cards, but I haven't tried trunking them yet).
> >>
> >> David Lang
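(For comparison, this is roughly how I drive the same kind of test with ab; the host names and numbers here are made up, and -k is deliberately left off so there is no keep-alive:)

    # 1M requests through the proxy; concurrency tuned down until nothing times out
    ab -n 1000000 -c 200 -X proxy-box:3128 http://origin-box/test.html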
> >
> >
> > OK, I am assuming that persistent connections are on. This doesn't simulate any real-life scenario.
> >
> > I would like to know if anyone can do more than 500 reqs/sec with persistent connections off.
>
> I'm not using persistent connections. I do this same sort of testing to
> validate various proxies that don't support persistent connections.
>
> I'm remembering the theoretical max of the TCP stack (from one source IP
> to one destination IP) as being ~16K requests/sec, but I don't have
> references to point to at the moment.
>
> David Lang
With tcp_fin_timeout set at its theoretical minimum of 12 secs, we can do 5K req/s with 64K ports.
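(The back-of-the-envelope ceiling behind that number, as a rough sketch:)

    # max sustained rate from one source IP to one destination IP:port is roughly
    # (usable ephemeral ports) / (TIME_WAIT duration)
    echo $(( 64000 / 60 ))   # ~1066 req/s with the default 60 sec TIME_WAIT
    echo $(( 64000 / 12 ))   # ~5333 req/s if TIME_WAIT could be pushed down to 12 sec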
Setting tcp_fin_timeout had no effect for me. Apparently there is conflicting/outdated information everywhere, and I could not lower TIME_WAIT from its default of 60 secs, which is hardcoded into include/net/tcp.h. But I doubt this would have much effect when you are constantly loading the machine anyway.
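(The 60 sec value is this constant in the kernel source; the output below is roughly what current trees contain:)

    grep TCP_TIMEWAIT_LEN include/net/tcp.h
    # => #define TCP_TIMEWAIT_LEN (60*HZ)  /* how long to wait to destroy TIME-WAIT state */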
Making localhost to localhost connections didn't help either.
I am not a network guru, so of course I am probably doing things wrong. But no matter how wrong you do stuff, it cannot escape brute-forcing :) And I have tried everything!
I can't do more than 450-470 reqs/sec even with 200K in "/proc/sys/net/netfilter/nf_conntrack_max" and "/sys/module/nf_conntrack/parameters/hashsize". This allows me to bypass the "conntrack table full" issues, but my ports run out.
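(For completeness, this is how those two values were raised; the 200K figure is just what I used, not a recommendation:)

    # enlarge the connection-tracking table and its hash table
    sysctl -w net.netfilter.nf_conntrack_max=200000
    echo 200000 > /sys/module/nf_conntrack/parameters/hashsize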
Could you be kind enough to specify which OS you are using, and whether you are running the benches for extended periods of time?
Any TCP tuning options you are applying would also be very useful, once you are back in the office, of course.
As I mentioned, we find your work on ACLs and workers valuable.
Jenny