Dear Henrik,
Could you please give me some advice?
Should I recompile the linux kernel and patch it with Netfilter?
Or should I try to configure Squid so that it will limit the bandwidth
rate of the users downloading files from my site?
The main problem I have is that when we release a popular file
(say an EXE file), there are so many people downloading it (mostly
with Internet Explorer on Windows XP) that the download breaks randomly,
and users have to start it over. Many times they end up with an
incomplete file, and when they try to install it, they receive an
"Invalid data" warning.
Regards
--------------------
S. A. Tech Department
Henrik Nordstrom wrote:
>On Tue, 27 Apr 2004, Xavier Baez wrote:
>
>>Please take a moment to read the lines I've added/changed to my
>>squid.conf file. I run squid on port 80 (http accelerator with proxy)
>>and apache at port 81
>
>Ok. Please note that the delay pools feature available in Squid is
>designed for proxies and limiting the Internet bandwidth used, not for
>shaping clients. Because of this it only applies to cache misses.
>
>>acl socceraccess url_regex -i 192.168
>
>This looks very odd for a url_regex.. what is it you want this acl
>to match?
>
>>acl badinternet url_regex -i ftp \.exe \.zip \.rar \.r01 \.r02 \.r03
>>\.r04 \.r05
>>acl day time 09:00-23:59
>>
>>#We have two different delay_pools
>>delay_pools 2
>>
>>#First delay pool
>>#We don't want to delay our local traffic.
>>#There are three pool classes; here we will deal only with the second.
>>#First delay class (1) of second type (2).
>>delay_class 1 2
>>
>>#-1/-1 mean that there are no limits.
>>delay_parameters 1 -1/-1 -1/-1
>
>This pool is equivalent to not assigning a pool to the request, but it
>wastes a lot of memory just to keep track of the fact that the clients
>are not limited. Why have you defined this pool?
>
>>#socceraccess: 192.168 we have set before
>>delay_access 1 allow socceraccess
>
>As per the url_regex comment above, I do not think this does what you
>want..
>
>>#Second delay pool.
>>#we want to delay downloading files mentioned in badinternet.
>>#Second delay class (2) of second type (2).
>>delay_class 2 1
>>
>>#The numbers here are values in bytes;
>>#we must remember that Squid doesn't consider start/stop bits
>>#5000/150000 are values for the whole network
>>#5000/120000 are values for the single IP
>>#after downloaded files exceed about 150000 bytes,
>>#(or even twice or three times as much)
>>#they will continue to download at about 5000 bytes/s
>>
>>delay_parameters 2 1250/1250 1250/1250
>
>There is no point in defining a higher class pool if the per-user limit is
>identical to the global limit. You would get the same effect using a class
>1 pool here, as you have defined the global limit as 1250 and each single
>user is allowed to use up to 1250...
>
>>#We have set day to 09:00-23:59 before.
>>delay_access 2 allow day
>>delay_access 2 deny !day
>>delay_access 2 allow badintern
>
>The last line will never be reached, as the first two lines match all
>requests.
>
>What is your goal with these lines?
>
>Regards
>Henrik
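
If I follow your points correctly, a corrected fragment would look roughly
like this (a sketch only — I kept the acl names from my config, and the
per-IP rate is just a guess):

# src acl matches the client address; url_regex matches the URL text
acl socceraccess src 192.168.0.0/16
acl badinternet url_regex -i ftp \.exe \.zip \.rar \.r01 \.r02 \.r03 \.r04 \.r05
acl day time 09:00-23:59

# A single class 2 pool: unlimited aggregate, per-IP limit (guessed value)
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 16000/16000

# Local clients get no pool at all, so they stay unlimited;
# acls on one delay_access line are ANDed together
delay_access 1 deny socceraccess
delay_access 1 allow badinternet day
delay_access 1 deny all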
Received on Tue Apr 27 2004 - 16:01:55 MDT
This archive was generated by hypermail pre-2.1.9 : Fri Apr 30 2004 - 12:00:03 MDT