Enumerate users from external group from AD trust
by Bolke de Bruin
Hello,
I have sssd 1.13.0 working against a FreeIPA 4.2 domain. This domain has a trust relationship with an Active Directory domain.
One of the systems we are using (Apache Ranger) requires, by (unfortunate) design, enumerating all users in groups. It does this with
"getent group". During this enumeration the full user list for a group that has a nested external member group* is not always returned, so we tried
"getent group mygroup" to get more detail. Unfortunately this does not work consistently: sometimes it returns the members, sometimes it does not:
[root@master centos]# getent group ad_users
ad_users:*:1950000004:
[root@master centos]# id bolke@ad.local
uid=1796201107(bolke@ad.local) gid=1796201107(bolke@ad.local) groups=1796201107(bolke@ad.local),1796200513(domain users@ad.local),1796201108(test@ad.local)
[root@master centos]# getent group ad_users
ad_users:*:1950000004:bolke@ad.local
If I clear the cache (sss_cache -E) the entry is gone again:
[root@master centos]# getent group ad_users
ad_users:*:1950000004:
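For reference, the pattern above can be reproduced step by step like this (just a sketch using the group and user from this mail, not a fix):

# immediately after clearing the cache the external members are missing
sss_cache -E
getent group ad_users

# resolve one of the AD members individually first...
id bolke@ad.local

# ...and only then does that member appear in the group listing
getent group ad_users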
My question is how do I get sssd to enumerate *all users* in a group consistently?
Thanks!
Bolke
* https://docs.fedoraproject.org/en-US/Fedora/18/html/FreeIPA_Guide/trust-g...
1 week, 2 days
SSSD strangeness
by simonc99@hotmail.com
Hi All
We've got SSSD 1.13.0 installed as part of a CentOS 7.2.1511 installation.
We've used realmd to join the host concerned to our 2008R2 AD system. This went really well, and consequently we've been using SSSD to provide login services and Kerberos integration for our fairly large Hadoop system.
The authconfig that's implicitly run as part of realmd produces the following sssd.conf:
[sssd]
domains = <joined domain>
config_file_version = 2
services = nss, pam
[pam]
debug_level = 0x0080
[nss]
timeout = 20
force_timeout = 600
debug_level = 0x0080
[domain/<joined domain>]
ad_domain = <joined domain>
krb5_realm = <JOINED DOMAIN>
realmd_tags = manages-system joined-with-samba
cache_credentials = true
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = False
fallback_homedir = /home/%u@%d
access_provider = simple
simple_allow_groups = <AD group allowing logins>
krb5_use_kdc_info = False
entry_cache_timeout = 300
debug_level = 0x0080
ad_server = <active directory server>
As I've said - this works really well. We did have some stability issues initially, but they've been fixed by defining the 'ad_server' rather than using autodiscovery.
Logins work fine, kerberos TGTs are issued on login, and password changes are honoured correctly.
However, in general day-to-day use we have noticed a few anomalies that we just can't track down.
Firstly (this has happened a few times), a user will change their AD password (via a Windows PC).
Subsequent logins - sometimes with specific client software - fail with
pam_sss(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=<remote PC name> user=<username>
pam_sss(sshd:auth): received for user <username>: 17 (failure setting user credentials)
So in this example, the person concerned has changed their AD password. Further attempts to access this system via SSH work fine. However, using SFTP doesn't work (the above is output into /var/log/secure).
There are no local controls on sftp logins, and the user concerned was working fine (using both sftp and ssh) until they updated their password.
There is no separate sftp daemon running, and it only affects one individual currently (but we have seen some very similar instances before).
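If it helps, this is the kind of logging I can enable around the next occurrence (a sketch; the log paths are the standard CentOS ones, and the sed simply bumps every debug_level already present in our sssd.conf):

# raise debug output for the pam responder and the domain, then reproduce the sftp login
sed -i 's/^debug_level = .*/debug_level = 9/' /etc/sssd/sssd.conf
systemctl restart sssd
tail -f /var/log/sssd/sssd_pam.log /var/log/sssd/krb5_child.log /var/log/secure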
The second issue we have is around phantom groups in AD.
Hadoop uses the id -Gn command to determine group membership for authorisation.
With some users - we've seen 6 currently - we see certain groups failing to be looked up:
id -Gn <username>
id: cannot find name for group ID xxxxyyyyy
<group name> <group name> <group name> <group name> <etc...>
The xxxxyyyyy indicates:
xxxx = hashed realm name
yyyyy = RID from group in AD
We can't find any group with that number on the AD side!
We can work around this by adding a local group (into /etc/group) for the GIDs affected. This means the id -Gn runs correctly, and the hadoop namenode can function correctly - but this is a workaround and we'd like to get to the bottom of the issue.
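A sketch of that workaround (the GID and group name below are made up; if I understand sssd's ID mapping correctly, the RID is the unknown GID minus the slice base sssd assigned to the domain, which is why the xxxxyyyyy values look the way they do):

# 1234500513 stands in for one of the unresolvable xxxxyyyyy GIDs;
# with default ldap_id_mapping, RID = GID - <slice base>, e.g. 1234500513 - 1234500000 = 513
groupadd -g 1234500513 missing-ad-group
id -Gn <username>    # no longer prints "cannot find name for group ID"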
Rather than flooding this post now with logfiles, just thought I'd see if this looked familiar to anyone. Happy to upload any logs, amend logging levels, etc.
Many thanks
Simon
1 month
sssd[be[1320]: Backend is offline
by Harald Dunkel
Hi folks,
sssd 1.16.3-1 (rebuilt for Debian 9), systemd
At boot time sssd_nss fails to initialize. systemctl status sssd
shows
root@srvl061:~# systemctl status sssd
* sssd.service - System Security Services Daemon
Loaded: loaded (/lib/systemd/system/sssd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-11-22 11:57:30 CET; 46s ago
Main PID: 1312 (sssd)
Tasks: 5 (limit: 7372)
CGroup: /system.slice/sssd.service
|-1312 /usr/sbin/sssd -i --logger=files
|-1345 /usr/lib/x86_64-linux-gnu/sssd/sssd_be --domain example.com --uid 0 --gid 0 --logger=files
|-1533 /usr/lib/x86_64-linux-gnu/sssd/sssd_nss --uid 0 --gid 0 --logger=files
|-1534 /usr/lib/x86_64-linux-gnu/sssd/sssd_pam --uid 0 --gid 0 --logger=files
`-1535 /usr/lib/x86_64-linux-gnu/sssd/sssd_pac --uid 0 --gid 0 --logger=files
Nov 22 11:57:25 srvl061.ac.example.com systemd[1]: Starting System Security Services Daemon...
Nov 22 11:57:25 srvl061.ac.example.com sssd[1312]: Starting up
Nov 22 11:57:25 srvl061.ac.example.com sssd[be[1345]: Starting up
Nov 22 11:57:30 srvl061.ac.example.com sssd[1533]: Starting up
Nov 22 11:57:30 srvl061.ac.example.com sssd[1534]: Starting up
Nov 22 11:57:30 srvl061.ac.example.com sssd[1535]: Starting up
Nov 22 11:57:30 srvl061.ac.example.com systemd[1]: Started System Security Services Daemon.
Nov 22 11:57:45 srvl061.ac.example.com sssd[be[1345]: Backend is offline
Apparently this is a problem of resolvconf generating /etc/resolv.conf at boot time. If I replace it with a static file, the problem is gone.
Question is, how can I tell systemd to wait for resolv.conf?
Is there some timeout in the backend I could adjust? Does it
wait for the network at all?
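For illustration, the kind of systemd override I have in mind looks like this (only a sketch; whether network-online.target is actually synchronized with resolvconf writing /etc/resolv.conf is exactly what I'm unsure about):

mkdir -p /etc/systemd/system/sssd.service.d
cat > /etc/systemd/system/sssd.service.d/wait-online.conf <<'EOF'
[Unit]
Wants=network-online.target
After=network-online.target
EOF
systemctl daemon-reload

This also assumes a wait-online service (e.g. systemd-networkd-wait-online or NetworkManager-wait-online) is enabled; otherwise network-online.target is reached immediately.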
Every helpful comment is highly appreciated
Regards
Harri
7 months
yubikey-based pkinit stopped working switching from sssd 1.15.2/Ubuntu 16.04 to sssd 1.16.1/Ubuntu 18.04
by tallinn1960@yahoo.de
My client has a working setup of sssd/Kerberos/LDAP using yubikeys and pkinit as the login mechanism, based on sssd 1.15.2 and Ubuntu 16.04.
My client wants to move from Ubuntu 16.04 LTS to Ubuntu 18.04 LTS. A test installation of the latter, with the corresponding sssd version 1.16.1, does not allow yubikey-based login, although both kinit and p11_child do see the yubikey and the certificate on it. kinit with the yubikey does work.
Analysis of the logs shows that the krb5_child behaviour has changed. The function answer_pkinit is called with kr->pd->cmd set to SSS_PAM_AUTHENTICATE and kr->pd->authtok set to SSS_AUTHTOK_TYPE_SC_PIN in 1.15.2, but with kr->pd->cmd set to SSS_PAM_PREAUTH and kr->pd->authtok set to 0 in 1.16.1, causing the function to skip all pkinit/smartcard-related prompting and processing.
Both installations use the same sssd.conf, krb5.conf, etc.
How shall we fix this?
11 months
Cannot authenticate user from parent domain in a child-domain joined server
by Chris J
Hi all,
I'm having problems getting sssd to authenticate a user from a parent domain in the same forest. In brief, it's an Ubuntu 18.04 box with sssd 1.16.1: the box was joined to the domain 'development.cseserve.com' with 'realm join'. Users in that domain can authenticate successfully, but users in the parent domain cseserve.com cannot.
After some reading, I found the sssctl command, and learned that the sssd.conf file needed a tweak to add 'ifp' to the list of services, which gives access to the user-checks. The configuration file and the output of the various sssctl checks are at the bottom of this email.
If I attempt to authenticate as a user in cseserv.com, I get:
root@hs-svn-02:/var/log/sssd# sssctl user-checks chris.johnson@cseserv.com -a auth
user: chris.johnson@cseserv.com
action: auth
service: system-auth
SSSD nss user lookup result:
- user name: chris.johnson@cseserv.com
- user id: 715601141
- group id: 715601141
- gecos: Chris Johnson
- home directory: /home/chris.johnson@cseserv.com
- shell: /bin/bash
SSSD InfoPipe user lookup result:
- name: chris.johnson@cseserv.com
- uidNumber: 715601141
- gidNumber: 715601141
- gecos: Chris Johnson
- homeDirectory:
- loginShell:
testing pam_authenticate
Password:
pam_authenticate for user [chris.johnson@cseserv.com]: Authentication failure
PAM Environment:
- no env -
root@hs-svn-02:/var/log/sssd#
Now in /var/log/syslog, when I tail -f during sssctl user-checks, I get
the error:
Dec 11 10:59:20 hs-svn-02 [sssd[krb5_child[20446]]]: Server not found in Kerberos database
Dec 11 10:59:20 hs-svn-02 [sssd[krb5_child[20446]]]: Server not found in Kerberos database
I can't see any other pertinent errors in log files, but I'm happy to
provide more
if I know what to send over :-)
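If it narrows down the 'Server not found in Kerberos database' error, this is the kind of check I can run and post (a sketch; the principal names are guesses based on the host and realms above):

# which service principal does the client actually ask for?
KRB5_TRACE=/dev/stderr kvno host/hs-svn-02.development.cseserv.com
# can a cross-realm ticket from the child realm to the parent realm be obtained?
KRB5_TRACE=/dev/stderr kvno krbtgt/CSESERV.COM@DEVELOPMENT.CSESERV.COM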
This error does not occur for a user in the development.cseserv.com
domain, which
completes successfully:
[...deleted the preamble...]
testing pam_authenticate
Password:
pam_authenticate for user [cjohnson@development.cseserve.com]: Success
PAM Environment:
- KRB5CCNAME=FILE:/tmp/krb5cc_376801009_vS8U1c
I've tried various things based on various searches, including creating an /etc/krb5.conf file to specify encryption types, and after a restart this did not change the behaviour:
[libdefaults]
allow_weak_crypto = true
default_tgs_enctypes = arcfour-hmac-md5 des-cbc-crc des-cbc-md5
default_tkt_enctypes = arcfour-hmac-md5 des-cbc-crc des-cbc-md5
rdns=false
dns_lookup_kdc = true
Additionally I've tried explicitly declaring the cseserv domain as a
trusted domain in sssd.conf (based on
https://docs.pagure.org/SSSD.sssd/users/ad_provider.html#etc-sssd-sssd-conf),
and this failed as well:
[sssd]
domains = development.cseserv.com, cseserv.com
{...rest unchanged...}
[domain/development.cseserve.com/cseserve.com]
ad_server = hs-dc-01.cseserve.com
What obvious thing am I missing? From what I'm reading, this should
work.
Regards,
Chris
====================================================================
Sanity checking the domain configuration:
realm list gives:
root@hs-svn-02:/var/log/sssd# realm list
development.cseserv.com
type: kerberos
realm-name: DEVELOPMENT.CSESERV.COM
domain-name: development.cseserv.com
configured: kerberos-member
server-software: active-directory
client-software: sssd
required-package: sssd-tools
required-package: sssd
required-package: libnss-sss
required-package: libpam-sss
required-package: adcli
required-package: samba-common-bin
login-formats: %U@development.cseserv.com
login-policy: allow-realm-logins
root@hs-svn-02:/var/log/sssd#
sssctl domain-list shows that the parent domain was auto-discovered:
root@hs-svn-02:/var/log/sssd# sssctl domain-list
development.cseserve.com
test.cseserve.com
hst.cseserve.com
cseserve.com
root@hs-svn-02:/var/log/sssd#
sssctl domain-status development.cseserv.com gives:
Online status: Online
Active servers:
AD Global Catalog: hs-dc-01.development.cseserv.com
AD Domain Controller: hs-dc-01.development.cseserv.com
Discovered AD Global Catalog servers:
- hs-dc-01.development.cseserv.com
- hs-dc-02.development.cseserv.com
- gsh-dc-04.cseserv.com
- gsh-dc-05.cseserv.com
- gsh-dc-01.cseserv.com
Discovered AD Domain Controller servers:
- hs-dc-01.development.cseserv.com
- hs-dc-02.development.cseserv.com
sssctl domain-status cseserv.com gives:
root@hs-svn-02:/var/log/sssd# sssctl domain-status cseserv.com
Online status: Online
Active servers:
AD Domain Controller: gsh-dc-04.cseserv.com
AD Global Catalog: hs-dc-01.development.cseserv.com
Discovered AD Domain Controller servers:
- gsh-dc-04.cseserv.com
- gsh-dc-01.cseserv.com
- gsh-dc-05.cseserv.com
- gln-dc-01.cseserv.com
Discovered AD Global Catalog servers:
- hs-dc-01.development.cseserv.com
- hs-dc-02.development.cseserv.com
- gsh-dc-04.cseserv.com
- gsh-dc-05.cseserv.com
- gsh-dc-01.cseserv.com
My sssd.conf file:
[sssd]
domains = development.cseserve.com
config_file_version = 2
services = nss, pam, ifp
debug_level = 9
[domain/development.cseserve.com]
ad_domain = development.cseserve.com
krb5_realm = DEVELOPMENT.CSESERVE.COM
realmd_tags = manages-system joined-with-adcli
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = True
fallback_homedir = /home/%u@%d
access_provider = ad
11 months, 4 weeks
Can I have sssd manage known_hosts with LDAP?
by George Diamantopoulos
Hello all,
I've been trying (and failing) to configure sssd to use LDAP to retrieve hosts' public SSH keys. I'd like to ask whether this is possible with LDAP at all, or whether this feature is only supported with FreeIPA.
If it is, what search filter does sssd use to look up keys in LDAP? I'm using the sshPublicKey attribute for both people and machines in my LDAP schema, but I can't figure out what attribute is checked to determine the hostname.
User SSH public key retrieval works fine in my configuration. I'm using sssd 1.15, which ships with Debian stretch.
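For context, this is how the two lookups compare on my client (a sketch; sss_ssh_authorizedkeys is what already works for users, and the commented ssh_config lines are the host-key plumbing from the sssd-ssh man page that I'm trying to back with LDAP):

# user keys: this works with my plain-LDAP setup
sss_ssh_authorizedkeys someuser

# host keys: as far as I understand, ssh is meant to fetch these through
#   ProxyCommand /usr/bin/sss_ssh_knownhostsproxy -p %p %h        (in ssh_config)
#   GlobalKnownHostsFile /var/lib/sss/pubconf/known_hosts
# and it is this lookup I cannot get to resolve against LDAP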
Thanks!
BR,
George
12 months
sssd AD authentication working; sssd autofs against LDAP / rfc2307bis not working...
by Spike White
Sssd experts,
This is all on RHEL7.
I have sssd properly authenticating against AD for my multi-domain forest.
All good – even cross-domain auth (as long as I don’t use tokengroups.)
Our company’s AD implementation is RFC2307bis schema-extended.
Now – for complicated reasons – I'm told I need to get NIS automount maps and NIS netgroups into AD and also working on the clients (via sssd).
As a first testing step, I've stood up an OpenLDAP server on another RHEL7 server and extended its schema with RFC 2307bis.
http://bubblesorted.raab.link/content/replace-nis-rfc2307-rfc2307bis-sche...
I added an initial automount map.
When I query via ldapsearch, all looks good:
[root@spikerealmd02 sssd]# ldapsearch -LLL -x -H ldap://austgcore17.us.example.com -b 'ou=automount,ou=admin,dc=itzgeek,dc=local' -s sub -D 'cn=ldapadm,dc=itzgeek,dc=local' -w ldppassword 'objectClass=automountMap'
dn: automountMapName=auto.master,ou=automount,ou=admin,dc=itzgeek,dc=local
objectClass: top
objectClass: automountMap
automountMapName: auto.master
dn: automountMapName=auto.home,ou=automount,ou=admin,dc=itzgeek,dc=local
objectClass: top
objectClass: automountMap
automountMapName: auto.home
[root@spikerealmd02 sssd]# ldapsearch -LLL -x -H ldap://austgcore17.us.example.com -b 'ou=automount,ou=admin,dc=itzgeek,dc=local' -s sub -D 'cn=ldapadm,dc=itzgeek,dc=local' -w ldppassword 'objectClass=automount'
dn: automountKey=/home2,automountMapName=auto.master,ou=automount,ou=admin,dc=itzgeek,dc=local
objectClass: top
objectClass: automount
automountKey: /home2
automountInformation: ldap:automountMapName=auto.home,ou=automount,ou=admin,dc=itzgeek,dc=local --timeout=60 --ghost
dn: automountKey=/,automountMapName=auto.home,ou=automount,ou=admin,dc=itzgeek,dc=local
objectClass: top
objectClass: automount
automountKey: /
automountInformation: -fstype=nfs,rw,hard,intr,nodev,exec,nosuid,rsize=8192,wsize=8192 austgcore17.us.example.com:/export/&
[root@spikerealmd02 sssd]#
Next, the sssd client configuration.
On my working sssd client I added "autofs" to the services line in sssd.conf and added an "[autofs]" section. That is, I changed /etc/sssd/sssd.conf as follows:
[sssd]
…
services = nss,pam,autofs
…
[autofs]
debug_level = 9
autofs_provider = ldap
ldap_uri = ldap://austgcore17.us.example.com
ldap_schema = rfc2307bis
ldap_default_bind_dn = cn=ldapadm,dc=itzgeek,dc=local
ldap_default_authtok = ldppassword
ldap_autofs_search_base = ou=automount,ou=admin,dc=itzgeek,dc=local
ldap_autofs_map_object_class = automountMap
ldap_autofs_map_name = automountMapName
ldap_autofs_entry_object_class = automount
ldap_autofs_entry_key = automountKey
ldap_autofs_entry_value = automountInformation
[nss]
debug_level = 9
I appended sss to the automount line in /etc/nsswitch.conf:
automount: files sss
Yet when I try to restart the autofs service, it (eventually) times out:
[root@spikerealmd02 sssd]# systemctl restart sssd
[root@spikerealmd02 sssd]# systemctl restart autofs
Job for autofs.service failed because a timeout was exceeded. See
"systemctl status autofs.service" and "journalctl -xe" for details.
journalctl -xe reports this:
Dec 03 11:14:09 spikerealmd02.us.example.com [sssd[ldap_child[9653]]][9653]:
Failed to initialize credentials using keytab [MEMORY:/etc/krb5.keytab]:
Preauthentication failed. Unable to create GSSAPI-encrypted LDAP connection.
…
Dec 03 11:14:15 spikerealmd02.us.example.com [sssd[ldap_child[9680]]][9680]:
Failed to initialize credentials using keytab [MEMORY:/etc/krb5.keytab]:
Preauthentication faile
Dec 03 11:14:22 spikerealmd02.us.example.com systemd[1]: autofs.service
start operation timed out. Terminating.
Dec 03 11:14:22 spikerealmd02.us.example.com systemd[1]: Failed to start
Automounts filesystems on demand.
-- Subject: Unit autofs.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit autofs.service has failed.
--
-- The result is failed.
Dec 03 11:14:22 spikerealmd02.us.example.com systemd[1]: Unit
autofs.service entered failed state.
Dec 03 11:14:22 spikerealmd02.us.example.com systemd[1]: autofs.service
failed.
Dec 03 11:14:22 spikerealmd02.us.example.com polkitd[897]: Unregistered
Authentication Agent for unix-process:9073:241010 (system bus :1.132,
object path /org/freedeskt
/var/log/sssd/sssd_nss.log looks like this:
(Mon Dec 3 11:11:13 2018) [sssd[autofs]] [sysdb_get_certmap] (0x0020): Failed to read certmap config, skipping.
(Mon Dec 3 11:11:13 2018) [sssd[autofs]] [ldb] (0x4000): Added timed event "ltdb_callback": 0x55f7263a1fc0
(Mon Dec 3 11:11:13 2018) [sssd[autofs]] [ldb] (0x4000): Added timed event "ltdb_timeout": 0x55f7263a2080
(Mon Dec 3 11:11:13 2018) [sssd[autofs]] [ldb] (0x4000): Running timer event 0x55f7263a1fc0 "ltdb_callback"
(Mon Dec 3 11:11:13 2018) [sssd[autofs]] [ldb] (0x4000): Destroying timer event 0x55f7263a2080 "ltdb_timeout"
(Mon Dec 3 11:11:13 2018) [sssd[autofs]] [ldb] (0x4000): Ending timer event 0x55f7263a1fc0 "ltdb_callback"
(Mon Dec 3 11:11:13 2018) [sssd[autofs]] [sysdb_get_certmap] (0x0400): No certificate maps found.
What is wrong? BTW, for now I don't care about a GSSAPI SASL LDAP bind; a simple bind is what I want.
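For completeness, once the bind works this is how I plan to check that autofs actually sees the maps through nsswitch (a sketch; automount -m dumps the maps it can resolve without starting the daemon, and sssd_autofs.log is the responder log enabled by the [autofs] section above):

automount -m
tail -f /var/log/sssd/sssd_autofs.log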
BTW, I have not modified the /etc/autofs.conf file. I considered it, but it seems that if I did, it would bypass nss / sssd. Also, I'd have another set of SASL credentials hanging out there that I'd have to rotate periodically on all clients, instead of relying on SSSD's machine account, which is auto-rotated every 30 days.
12 months
filter out disabled ipa user
by Stijn De Weirdt
hi all,
we are using ipa as id_provider/access_provider/auth_provider for a domain, and we want to completely hide users that are disabled in ipa. for now, disabled users are still known on the hosts (e.g. "getent passwd userxyz" works and gives the correct userid). we would like "getent passwd userxyz" to return nothing; in particular, we want that userid to be unable to start any new processes, and files belonging to the disabled user on nfs mounts to show up as owned by nobody, etc.
is there any way to filter these users? perhaps some config setting i overlooked, or some ldap filter i can use?
many thanks,
stijn
12 months
Failed gssapi-with-mic
by Sergei Gerasenko
Hi,
I've run into a dead end debugging a case of passwordless authentication between two IPA'd hosts. Running `sshd -p 5000 -d` on the receiving host (let's call it HOST_B), I see this:
```
Postponed gssapi-with-mic for postgres from x.x.x.x port 57607 ssh2 [preauth]
debug1: Received some client credentials
debug1: ssh_gssapi_k5login_exists: Checking existence of file /home/USER/.k5login
Failed gssapi-with-mic for postgres from x.x.x.x port 57607 ssh2
```
The client then gets an interactive password prompt. Here are some facts and things I've tried:
* If I put the user into `.k5login` on the receiving host, it works.
* The receiving host is correctly enrolled into IPA. I can ssh from it to other hosts using GSSAPI.
* I can issue `kvno host/HOST_B` on the connecting host and I get a service ticket.
* It looks like all this happens before any pam stuff kicks in (?). So I'm ruling PAM issues out.
* No errors in the logs of the KDCs.
* The ticket from the connecting host is not expired.
* The sssd version is 1.16.0.
* Turning up the debugging in sssd with `debug_level = 7` for the domain section doesn't reveal anything obvious.
What else could I check?
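Concretely, the extra data I can collect if it helps (a sketch; KRB5_TRACE is the MIT client-side tracing variable, and HOST_B is the placeholder used above):
```
# trace which service ticket the client requests and presents during GSSAPI auth
KRB5_TRACE=/dev/stderr ssh -vvv -o GSSAPIAuthentication=yes postgres@HOST_B true
# and check the enctypes of the tickets already in the cache
klist -e
```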
Thanks for any ideas,
SG
12 months
Re: Problem with resolving unqualified group names
by Jakub Hrozek
On Fri, Nov 23, 2018 at 10:16:26AM +0000, Ondrej Valousek wrote:
> Hi List,
>
>
> I have noticed that in my case both
>
> getent passwd <username>@<domain> and getent passwd <username>
>
> works, but
>
> getent group <groupname>@<domain>
>
> does not, only:
>
> getent group <groupname>
>
> works.
>
>
> Is that expected behavior?
No (but I don't know what else to add except worksforme..)
12 months