Hi,
Actually, the 503 code is the expected behavior. It is the response
from our URL Filtering engine because we are accessing a blocked URL.
Let me try to convey the problem more clearly.

My deployment is as follows: I have a Squid proxy installed on a
Unix machine which handles HTTP requests coming from a trusted
source. Squid then forwards these requests to a URL Filter which has
a list of whitelists and blacklists. This URL Filtering engine
allows or blocks each URL according to those rules.
From the client, I have run a script which does 500 wget requests to
www.naukri.com in a continuous loop. I have blocked this URL on the
URL Filtering engine.
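For reference, the loop is of roughly this shape (the wget flags are
my own choice for illustration; only the URL and the count of 500
come from the actual test):

  i=0
  while [ "$i" -lt 500 ]; do
      # fetch the blocked URL, discarding the body
      wget -q -O /dev/null http://www.naukri.com/
      i=$((i+1))
  done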
After some requests (around 120), wget hung for exactly 1 minute.
During this hung state I took a tcpdump on the server and found that
it shows "TCP port numbers reused" and starts sending SYN packets
with the same source port that was used earlier, showing "TCP
Retransmission". FIN, ACK and RST were also received for the earlier
request on that port.
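The capture was taken along these lines (the interface name and the
client address here are placeholders, not the exact command I used):

  # capture all TCP traffic to/from the client for later analysis
  tcpdump -i eth0 -n -w hang.pcap 'host 192.0.2.10 and tcp'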
I have also sent the packet capture in the last mail; you can refer
to it.
Can you please let me know why it is using the same port for the new
request and retransmitting the packet? Is Squid responsible for
reusing the same port, or is it an issue with the Unix networking
internals? Is there a way to avoid port reuse?
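On Linux at least, the relevant kernel settings can be inspected as
below; I am not sure whether tuning them is the right fix, so this
is only a sketch:

  # the ephemeral port range the kernel allocates from
  sysctl net.ipv4.ip_local_port_range
  # whether TIME_WAIT sockets may be reused for new outgoing connections
  sysctl net.ipv4.tcp_tw_reuse
  # widening the range lowers the chance of picking a port still in TIME_WAIT
  sysctl -w net.ipv4.ip_local_port_range="15000 65000"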
Also, please let us know how Squid handles HTTP requests. Does Squid
change the source port while establishing a connection for every
request?
Many thanks for your continuous help.
Bhagwat
On Thu, Nov 28, 2013 at 12:26 PM, Eliezer Croitoru <eliezer@ngtech.co.il> wrote:
> Hey Bhagwat,
>
> I am having some trouble understanding what the problem is, since
> squid.conf is not available to me.
>
> Most of the time, when one or more machines try to access a service
> on the internet 120 times every second, it is probably because a
> DoS-attack prevention mechanism kicks in, and this is why the
> connection is being refused.
>
> Sometimes the source of the problem is the origin service, which
> actually replies with the 503 code.
> This kind of thing happens every day in the world.
>
> I have tested Squid under far heavier load than this. If you want
> to "test", set up a test environment with 1-2 Apache/nginx/other
> servers and use Squid as a proxy towards them.
> I have done this in the past, and if you use one tiny file the
> service will serve it from MEMORY while writing the logs to disk.
>
> If the problem is really serious then Squid is probably not the
> direct cause, unless it's a really troubling bug.
> To make sure it is a bug that should be handled by the Squid
> development team, you will need to match the development cycle of
> the project.
>
> So try to do a couple of things in your test environment (see the
> sketch below):
> - an HTTP service which can handle more than 150 requests per
>   second (2 cores)
> - tiny/medium/big files as targets for tests through Squid
> - a Squid instance with the versions you are using, with default
>   settings.
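> A minimal sketch of that setup from a shell (file sizes, paths and
> the port are my choice here, not requirements; 3128 is just Squid's
> default):
>
>   # create test payloads of three sizes in the web root
>   dd if=/dev/urandom of=/var/www/tiny.bin bs=1K count=1
>   dd if=/dev/urandom of=/var/www/medium.bin bs=1M count=1
>   dd if=/dev/urandom of=/var/www/big.bin bs=1M count=100
>
>   # fetch one through the proxy instead of directly
>   http_proxy=http://127.0.0.1:3128 wget -q -O /dev/null \
>     http://testserver/tiny.bin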
>
> Test 10 requests one after the other and make sure they show up in
> the HTTP service logs.
> Then add more requests each time you request from this testing host
> through Squid.
> There is also the option of using ab (Apache Benchmark), which is a
> very nice tool for checking proxy loads.
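> For example (proxyhost and testserver are placeholders; -X tells ab
> to send the requests through a proxy):
>
>   # 1000 requests, 10 concurrent, routed via the proxy
>   ab -n 1000 -c 10 -X proxyhost:3128 http://testserver/tiny.bin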
>
> You can also try to disable all logs (Squid and HTTP) to see if
> that has any effect on the load the service can take.
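> For Squid that would be something along these lines in squid.conf
> (I am quoting the directive from memory, so verify it against your
> version's documentation):
>
>   # do not write a per-request access log entry
>   access_log none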
>
> To me it seems like the service is responding with the 503 and not
> Squid, but since we do not have squid.conf, you and I are at the
> wide open blue skies of the "unknown".
>
> All The Bests,
> Eliezer
>
>
> On 25/11/13 09:06, Bhagwat Yadav wrote:
>>
>> Hi,
>>
>> Upgraded Squid to 3.1.20-2.2 from debian.org. The issue still
>> persists.
>> Note: I have not disabled the stats collection as mentioned in
>> earlier mails.
>>
>> Please suggest how to resolve this.
>>
>> Thanks,
>> Bhagwat
>
>