On 12/02/2013 12:17 p.m., Eliezer Croitoru wrote:
> I gave you an option to install a BIND cache server on the Squid
> server; I wasn't talking about your main DNS server.
> Note that you can always use a secondary DNS instance to serve this
> purpose and filter AAAA responses.
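For anyone wanting to try that, a minimal named.conf sketch of such a filtering
instance is below. It assumes a BIND 9 build with AAAA filtering compiled in
(the filter-aaaa-on-v4 option; newer BIND releases ship the same feature as a
plugin), and 192.0.2.10 is just a placeholder for the proxy's address:

    options {
        // Recursive resolver dedicated to the proxy host(s).
        recursion yes;
        allow-recursion { 127.0.0.1; 192.0.2.10; };  // placeholder proxy IP

        // Strip AAAA records from answers delivered over IPv4,
        // so clients of this instance only ever see A records.
        filter-aaaa-on-v4 yes;
    };

Squid can then be pointed at that instance with the dns_nameservers directive
in squid.conf instead of the main resolvers.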
>
>
> On 2/11/2013 2:48 PM, Sandrini Christian (xsnd) wrote:
>> Hi
>>
>> Thanks for your reply.
>>
>> I can't really mess around with our main DNS servers.
>>
>> On our 3.1 Squids we just disabled the IPv6 module, which does not
>> sound right to me but works fine.
> I suggest not disabling v6; work with it if you can.
>
>>
>> What we see is
>>
>> 2013/01/30 09:52:00.296| idnsGrokReply: www2.zhlex.zh.ch AAAA query
>> failed. Trying A now instead.
>>
>> We do not need any IPv6 support. I'd rather have a way to tell Squid
>> to look first for an A record.
>
> Please take the time to file a bug report in Bugzilla:
> http://bugs.squid-cache.org
>
> Describe the problem and attach any logs you can to help the
> development team track down and fix it.
> It seems like a *big* issue to me, since it points to a dns_v4_first
> failure.
No. A bug report will not make any difference here. dns_v4_first is
about sorting the results found, not the lookup order. AAAA is faster
than A in most networks, so we perform that lookup first in 3.1. This
was altered in 3.2 to perform happy-eyeballs parallel lookups anyway, so
most bugs in the lookup code of 3.1 will be closed as irrelevant.
Note that the current supported release is now 3.3.1.
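For reference, the directive itself is a single line in squid.conf. A minimal
sketch (it only changes which address family Squid prefers once the results
are in, not which DNS query is sent first):

    # squid.conf
    # Prefer IPv4 results when a destination has both A and AAAA records.
    # The AAAA/A lookups themselves still happen as described above.
    dns_v4_first on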
>
> Try to use the BIND solution I am using.
>
> I have been logging my DNS server, and it seems that Squid 3.HEAD
> resolves the A record before the AAAA record.
>
> You can try manually removing the IPv6 addresses from lo and the other
> devices to make sure no v6 address is initialized by the CentOS scripts.
>
> On my test server the system starts with the lo adapter holding
>    inet6 addr: ::1/128 Scope:Host
> and other devices also come up with an auto-configured local v6 address.
> So remove them and try restarting the squid service to see what is going on.
This is VERY likely to be the problem. Squid tests for IPv6 ability
automatically by opening a socket on a private IP address; if that works,
the socket options are noted and used. There is no way for Squid to
identify, in advance of opening upstream connections, whether the NIC the
kernel chooses to use will be v6-enabled or not.
Notice that the method used to disable IPv6 was simply not assigning an
IPv6 address to the NIC; nothing at the sockets layer was actually
disabled. So every NIC needs to be checked and disabled individually,
and any sub-system loading IPv6 functionality into the kernel needs
disabling as well.
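As an example of what that involves on a CentOS-era box, the per-interface
state can be checked and cleared roughly like this (a sketch only; eth0 is a
placeholder interface name, and the distribution's own scripts may re-enable
v6 on reboot unless configured otherwise):

    # Show any IPv6 addresses still assigned.
    ip -6 addr show

    # Disable IPv6 on one interface and flush its addresses.
    sysctl -w net.ipv6.conf.eth0.disable_ipv6=1
    ip -6 addr flush dev eth0

    # Or disable it for every interface, including lo.
    sysctl -w net.ipv6.conf.all.disable_ipv6=1
    sysctl -w net.ipv6.conf.default.disable_ipv6=1

    # Restart Squid so it re-probes IPv6 support at startup.
    service squid restart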
(Warning: soapbox)
The big question is, why disable it in the first place? v6 is faster and
more efficient than v4 when you get it going properly, and one he*l of a
lot easier to administer. If any of your upstreams supply native
connections it is well worth taking the option up. If not, there is
always 6to4 or another tunnel type that can be built right to the proxy
box to get IPv6 at only a small initial latency on the SYN packet (ping
192.88.99.1 to see what 6to4 adds for you). Note that this is IPv6
connectivity initiated from the proxy to the Internet *only*, so the
firewall alterations needed to get Squid v6-enabled are minimal.
Amos