On Sat, 2022-03-19 at 11:33 +0100, Sumit Bose wrote:
> Hi,

Hi Sumit. Thanks for the response. Some comments below...

> there is the 'lookup_family_order' option for the [domain/...] section
> of sssd.conf. The default is 'ipv4_first' and it looks like you might
> want to change this on the given hosts to 'ipv6_first' or even
> 'ipv6_only'. Please see man sssd.conf for details.
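(For reference, I take it the suggested change would look something like this in sssd.conf — domain name here is just a placeholder:)

```
[domain/example.com]
lookup_family_order = ipv6_first
```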
Hrm. None of the choices seems ideal. In fact, the idea of having to
choose one address family and then writing only the addresses found in
that family to that file seems very rigid, particularly with the
*_first choices. The name of the default choice is not even accurate:
with ipv4_first, once the IPv4 addresses have been cached to that file,
IPv6 is never tried again (so there is no "first" aspect to it after
the caching), even if IPv4 connectivity no longer works.
I believe the general approach for dual-stack machines (cf. RFC 8305,
"Happy Eyeballs") is that IPv6 should always be tried first and the
client should automatically fall back to IPv4 if IPv6 fails, so the
default of ipv4_first seems to violate that approach.
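(To illustrate what I mean, a minimal sketch of that try-IPv6-then-fall-back-to-IPv4 behaviour, in plain Python — host/port are obviously placeholders, and real Happy Eyeballs races the attempts rather than trying them serially:)

```python
import socket

def ipv6_first(addrinfo):
    """Sort key: AF_INET6 results sort before AF_INET ones."""
    return 0 if addrinfo[0] == socket.AF_INET6 else 1

def connect_dual_stack(host, port, timeout=5):
    """Try every resolved address, IPv6 first, falling back to IPv4."""
    infos = sorted(
        socket.getaddrinfo(host, port, type=socket.SOCK_STREAM),
        key=ipv6_first,
    )
    last_err = OSError("no addresses for %r" % host)
    for family, socktype, proto, _canon, addr in infos:
        try:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(addr)
            return sock          # first address that works wins
        except OSError as err:
            last_err = err       # remember the failure, keep trying
    raise last_err
```

The point being: the fallback happens per connection attempt, not once at cache-population time.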
But even caching the addresses from a single family into that file
seems broken. Are those addresses cached in that file when they are
found, or only after they have been successfully used? Because in a
network supporting mixed-family machines, there is always going to be
addressing for both families.
Why are those addresses even cached like that? Why aren't they queried
on each use? Any DNS caching done for performance seems to belong in
the domain (no pun intended) of software specifically designed for it,
such as nscd, or unbound/dnsmasq/bind9 in caching-resolver mode.
Can this caching be turned off, given that it's so broken?
It looks like I am not the first person to find this entire mechanism
lacking:
https://pagure.io/SSSD/sssd/issue/2015
https://github.com/SSSD/sssd/issues/3057
https://bugzilla.redhat.com/show_bug.cgi?id=1849710
It's really disturbing to see the BZ closed as WONTFIX, as this
behaviour all seems quite broken, IMHO.
Cheers,
b.