I'm trying to understand the nature of squid bandwidth vs user/group delay
pools.
Is it that squid delays the rate at which a user can receive its data? Or
the rate at which squid itself will receive data? Or a combination of both,
with some meaningful guesstimate algorithm behind it all?
Reading some postings where administrators are concerned that squid is
taking considerable bandwidth, it seems that slowing the rate of squid's
own download to match that of the end user would be meaningful. That way
squid will take longer to complete the request, but won't consume all its
available bandwidth. I don't know whether it's possible to implement
something like this, or whether it would even prove to be a means of
managing squid's own share of bandwidth. Is there some way to set an upper
throttle for squid's fetches based on IP ranges? Is that even a meaningful
question?
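On the IP-range question: as far as I understand them, squid's delay pools
can already be keyed on source addresses via ACLs. A minimal squid.conf
sketch, assuming a class 1 (aggregate) pool and a made-up LAN range:

```
# Hypothetical squid.conf fragment -- addresses and rates are examples only.
acl lan src 192.168.0.0/24

delay_pools 1
delay_class 1 1
# aggregate limit: refill 32000 bytes/s, bucket holds at most 32000 bytes
delay_parameters 1 32000/32000
delay_access 1 allow lan
delay_access 1 deny all
```

Note that delay pools meter delivery to the client; squid's own fetch from
the origin slows only indirectly, once its buffers fill and TCP flow
control pushes back on the sender.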
Something along the lines of: if squid receives ICMP source quenches from
its client, it is sending too fast for that client. Squid then in turn
sends a source quench to its target host.
That said, unless squid is already consuming too much bandwidth, it could
simply pass the ICMP source quench along to reduce the inflow rate to
squid, matching it to that of the client, rather than merely delaying what
the user can receive.
Perhaps I'm missing something; I don't even know whether it's possible for
squid to determine that its clients are asking for a source quench, or
whether squid can selectively slow its own downloads.
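On selectively slowing its own downloads: a proxy can get much the same
effect without ICMP at all, by rate-limiting its own reads from the origin
server and letting TCP flow control push back on the sender. A minimal,
hypothetical token-bucket sketch in Python (not squid code; the class name
and rates are made up for illustration):

```python
import time

class TokenBucket:
    """Allow up to `rate` bytes/s of reads, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = float(rate)          # refill rate, bytes per second
        self.capacity = float(capacity)  # maximum burst size in bytes
        self.tokens = float(capacity)    # start with a full bucket
        self.last = time.monotonic()

    def consume(self, n):
        """Block until n tokens (bytes) are available, then consume them."""
        while True:
            now = time.monotonic()
            # refill proportionally to elapsed time, capped at capacity
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= n:
                self.tokens -= n
                return
            # sleep just long enough for the deficit to refill
            time.sleep((n - self.tokens) / self.rate)
```

A proxy would call `bucket.consume(len(chunk))` before each read from the
server socket; once the bucket runs dry, reads stall, the TCP receive
window fills, and the origin server's own stack slows down by itself,
giving the effect of a source quench without relying on ICMP (which many
hosts ignore anyway).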
Just some vague pondering; perhaps nothing, perhaps the start of another
squid idea.
Cheerio!
Richard Lim
Received on Thu Feb 25 1999 - 08:01:26 MST
This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:44:44 MST