System is busy - mouse and keyboard not useable
by JOHE (John Hearns)
I have set up sssd authentication on an Ubuntu Xenial workstation, with the LightDM display manager.
When the sssd service starts, the sssd_be process uses 100% CPU. I am not that concerned about this.
However, I see that when I am using the windowing system the mouse 'goes away', and sometimes the keyboard too,
i.e. there is no mouse pointer and the keyboard does not respond. This suggests to me that the OS is so busy
that it does not have time to service interrupts from the keyboard/mouse.
Has anyone else seen this behaviour?
I increased enum_cache_timeout in the [nss] stanza to 1200.
Clearly this will not help with the first enumeration, but it does keep the data in the cache for longer.
Also when sssd first starts up it seems to look at every account in the local /etc/passwd file and request information about it.
We have several hundred locally defined users in the passwd file at the moment.
Is this expected behaviour? I would have thought that the information would
only be collected from AD/IPA/LDAP when an account actually makes a login
attempt or uses a service. I may be wrong, and I am sure I will learn
something here.
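For reference, a sketch of the relevant sssd.conf options (names as in sssd.conf(5); the domain name is a placeholder). Enumeration is what triggers bulk user/group lookups, and it defaults to off, so if it is happening it is worth checking this setting:

```ini
[nss]
# Keep enumeration results cached for 20 minutes (the default is 120 seconds)
enum_cache_timeout = 1200

[domain/example.com]
# Enumeration lists *all* users/groups up front; it defaults to false and
# is usually best left off for domains with many accounts
enumerate = false
```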
5 years, 11 months
Cache flushing after password change
by JOHE (John Hearns)
I know I could look this one up in the docs somewhere...
Suppose I have a Linux workstation that uses AD as the authentication provider.
If I change my password using a Windows machine, what happens when I log into
Linux, given that the Linux machine has cached my credentials?
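From what I understand of the default behaviour (hedging, since I have not re-checked the docs either): the cached hash is only refreshed by a successful *online* authentication, so after a password change on Windows the old password can keep working for offline logins until the cache entry is refreshed or invalidated, e.g. with sss_cache from sssd-tools ('jhearns' is a placeholder username):

```shell
# Invalidate the cached entry for one user; the next login then goes
# online and stores the new password hash in the cache.
sudo sss_cache -u jhearns
```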
Slow login and sudo
by Bastian Rosner
Hi,
We are running sssd-ad 1.15.0-3 (Debian Stretch) in a global AD
infrastructure consisting of a single forest with four (sub-)domains in
two-way trust. No FreeIPA, just Windows 2012 AD servers.
Users are typically members of up to 250 groups distributed across
multiple domains. Each domain has a local Domain-Controller on each site
to improve lookup times.
Time required for running sudo directly after login with a Kerberos
ticket is pretty long, usually around 20 seconds but it can also be up
to 40 seconds. Consecutive sudo commands will be fast.
$ date ; ssh server "sudo date"
Wed May 9 10:16:38 CEST 2018
Wed May 9 10:16:56 CEST 2018
We assume most of the time is spent looking up all the group
memberships, which we can see in the debug log. Is there a
configuration option or some other way to reduce the required lookups
and improve the time it takes for login + sudo?
Thanks and kind regards,
Bastian
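Not an authoritative answer, but two options from sssd-ldap(5)/sssd-ad(5) that are often used to cut group-membership lookups in large forests; whether they are safe depends on whether anything on the host needs full group member lists (the domain section name is a placeholder):

```ini
[domain/example.com]
# Resolve only the groups a user belongs to, not every member of those
# groups (a big saving with ~250 groups per user)
ignore_group_members = True
# Limit how deeply nested groups are resolved
ldap_group_nesting_level = 2
```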
Strangeness with groups returned using id user
by Max DiOrio
So we are having issues with a couple of servers where users suddenly
won't be able to log in. All our auth is done through AD and nothing has
changed.
On a working server, I can do 'id username' and get back the proper list of
groups the user is a member of.
On the non-working server, 'id username' returns *mostly* the same list.
However, the one group that the user needs to be a member of in order to
log in is missing.
There are some groups in both lists that have a group ID but no group
name, and the non-working server has a single group entry duplicated.
The results of 'id username' otherwise match, except for the areas noted
below and a few entries listed in a different order between the two.
Here are the differences, "non-working" on top and "working" on bottom
(gs-technology is the group I need on the non-working server). It doesn't
make sense that 1002201991 shows up twice in the list.

Non-working:
1002201991
1002201991(fs01-technology-all(rw))
1002201620(infrastructureteam)

Working:
1002201620
1002201991
1002204761(gs-technology)
Thanks!
Max
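To compare the two group lists mechanically, something like the following works; the file contents below are stubbed from the IDs quoted above, and in practice each file would come from `id -G username` on the respective server:

```shell
# On each server: id -G username | tr ' ' '\n' > <host>.txt
# Here the two lists are stubbed with the IDs from the post above.
printf '%s\n' 1002201991 1002201991 1002201620 | sort -u > nonworking.txt
printf '%s\n' 1002201620 1002201991 1002204761 | sort -u > working.txt

# sort -u collapses the duplicated 1002201991; comm -3 then prints only
# the groups present on one server but not the other
comm -3 nonworking.txt working.txt
```

Here that prints 1002204761 (indented, meaning it appears only in the working list), matching the missing gs-technology group.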
Is SSSD needed with samba winbind - centos 7 ?
by Edouard Guigné
Hello Dear SSSD Users,
I recently configured a Samba share on a CentOS 7 Linux server that is a
member of an Active Directory domain.
I installed Kerberos and SSSD, and added Winbind for Samba.
I use Winbind for mapping the POSIX attributes (RFC 2307) added in AD,
and I need SSSD to allow authentication for sFTP, to enable access to
file updates from another system...
Some people tell me Samba needs only Winbind, or only SSSD, to work with AD.
I noticed that SSSD was needed to retrieve secondary GIDs on my Samba share.
For example, to update a list of secondary GIDs (after adding GID 601 in
AD for a user), I run the following commands on the Linux server:
# sss_cache -E
# id -G username
513 600 627 615 617 580 584 626 629 595 564 601
Then it is updated on the Windows client.
Does anyone know whether SSSD is required alongside Winbind in this case?
Or have I not configured Winbind correctly to retrieve secondary GIDs?
My smb.conf:
winbind nss info = rfc2307
idmap config MYDOMAINAD : backend = ad
idmap config MYDOMAINAD : schema_mode = rfc2307
idmap config MYDOMAINAD : range = 1-14999
idmap config MYDOMAINAD : unix_nss_info = yes
idmap config MYDOMAINAD : unix_primary_group = yes
Best Regards,
Ed
credentials cache cleared at sssd restart pam_sss + krb5
by cedric hottier
Dear sssd users,
I observe that at each sssd start, the credentials cache is cleared. Is this
expected behavior?
If so, is there a parameter to make this caching permanent (or at least
not erased at each sssd restart)?
My issue is that if I reboot my laptop without a connection to my KDC, I am
not able to log in, due to [sysdb_cache_auth] cached credentials not available.
Here is my config: Debian testing / SSSD version 1.16.1
/etc/sssd/sssd.conf :
[sssd]
services = nss, pam, ifp
domains = ECCM.LAN
[pam]
pam_verbosity = 2
offline_credentials_expiration = 0
/etc/sssd/conf.d/01_ECCM_LAN.conf
[domain/ECCM.LAN]
debug_level = 10
id_provider = files
auth_provider = krb5
krb5_server = DebianCubox.eccm.lan
krb5_realm = ECCM.LAN
krb5_validate = true
krb5_ccachedir = /var/tmp
krb5_keytab = /etc/krb5.keytab
krb5_store_password_if_offline = true
cache_credentials = true
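For what it's worth, the workaround that later surfaced on this list for this files-provider issue (see the "Re: credentials cache cleared at sssd restart" thread below) is to load the files NSS module through the proxy provider instead. A sketch only, reusing the values from the config above:

```ini
[domain/ECCM.LAN]
# proxy provider wrapping libnss_files, instead of id_provider = files;
# the krb5 auth settings are unchanged from the original config
id_provider = proxy
proxy_lib_names = files
auth_provider = krb5
krb5_server = DebianCubox.eccm.lan
krb5_realm = ECCM.LAN
krb5_store_password_if_offline = true
cache_credentials = true
```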
After a fresh reboot, I am able to log in only if the krb5_server is
available.
As long as I do not restart the sssd daemon, I am able to log in.
The credentials caching seems to work properly, as I see
"Authenticated with cache credentials" at each TTY console just before
the usual login message.
But if I restart the sssd daemon while disconnected from the network, I am
no longer able to log in. The credentials cache seems to have been
cleared.
Here is the /var/log/sssd/krb5_child.log
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [main] (0x0400):
krb5_child started.
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [unpack_buffer]
(0x1000): total buffer size: [128]
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [unpack_buffer]
(0x0100): cmd [241] uid [1000] gid [1000] validate [true] enterprise
principal [false] offline [true] UPN [cedric(a)ECCM.LAN]
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [unpack_buffer]
(0x2000): No old ccache
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [unpack_buffer]
(0x0100): ccname: [FILE:/var/tmp/krb5cc_1000_XXXXXX] old_ccname: [not set]
keytab: [/etc/krb5.keytab]
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [check_use_fast]
(0x0100): Not using FAST.
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]]
[k5c_precreate_ccache] (0x4000): Recreating ccache
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]]
[privileged_krb5_setup] (0x0080): Cannot open the PAC responder socket
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [become_user]
(0x0200): Trying to become user [1000][1000].
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [main] (0x2000):
Running as [1000][1000].
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [become_user]
(0x0200): Trying to become user [1000][1000].
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [become_user]
(0x0200): Already user [1000].
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [k5c_setup] (0x2000):
Running as [1000][1000].
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]]
[set_lifetime_options] (0x0100): No specific renewable lifetime requested.
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]]
[set_lifetime_options] (0x0100): No specific lifetime requested.
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [main] (0x0400): Will
perform offline auth
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [create_empty_ccache]
(0x1000): Creating empty ccache
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [create_empty_cred]
(0x2000): Created empty krb5_creds.
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [create_ccache]
(0x4000): Initializing ccache of type [FILE]
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [create_ccache]
(0x4000): returning: 0
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [k5c_send_data]
(0x0200): Received error code 0
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]]
[pack_response_packet] (0x2000): response packet size: [56]
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [k5c_send_data]
(0x4000): Response sent.
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [main] (0x0400):
krb5_child completed successfully
On /var/log/sssd/sssd_ECCM_LAN.log :
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [sbus_dispatch] (0x4000):
dbus conn: 0x560ea04b4be0
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [sbus_dispatch] (0x4000):
Dispatching.
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [sbus_message_handler]
(0x2000): Received SBUS method
org.freedesktop.sssd.dataprovider.getAccountInfo on path
/org/freedesktop/sssd/dataprovider
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [sbus_get_sender_id_send]
(0x2000): Not a sysbus message, quit
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]]
[dp_get_account_info_handler] (0x0200): Got request for
[0x3][BE_REQ_INITGROUPS][name=files_initgr_request]
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [sss_domain_get_state]
(0x1000): Domain ECCM.LAN is Active
...
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [dp_attach_req] (0x0400):
DP Request [Initgroups #2]: New request. Flags [0x0001].
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [dp_attach_req] (0x0400):
Number of active DP request: 1
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [sss_domain_get_state]
(0x1000): Domain ECCM.LAN is Active
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]]
[files_account_info_handler_send] (0x1000): The files domain no longer
needs an update
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [dp_req_done] (0x0400): DP
Request [Initgroups #2]: Request handler finished [0]: Succès
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [_dp_req_recv] (0x0400): DP
Request [Initgroups #2]: Receiving request data.
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [dp_req_reply_list_success]
(0x0400): DP Request [Initgroups #2]: Finished. Success.
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [dp_req_reply_std]
(0x1000): DP Request [Initgroups #2]: Returning [Success]: 0,0,Success
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [dp_table_value_destructor]
(0x0400): Removing [0:1:0x0001:3::ECCM.LAN:name=files_initgr_request] from
reply table
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [dp_req_destructor]
(0x0400): DP Request [Initgroups #2]: Request removed.
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [dp_req_destructor]
(0x0400): Number of active DP request: 0
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [sbus_dispatch] (0x4000):
dbus conn: 0x560ea04b4be0
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [sbus_dispatch] (0x4000):
Dispatching.
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [sbus_message_handler]
(0x2000): Received SBUS method org.freedesktop.sssd.dataprovider.pamHandler
on path /org/freedesktop/sssd/dataprovider
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [sbus_get_sender_id_send]
(0x2000): Not a sysbus message, quit
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [dp_pam_handler] (0x0100):
Got request with the following data
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [pam_print_data] (0x0100):
command: SSS_PAM_AUTHENTICATE
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [pam_print_data] (0x0100):
domain: ECCM.LAN
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [pam_print_data] (0x0100):
user: cedric(a)eccm.lan
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [pam_print_data] (0x0100):
service: login
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [pam_print_data] (0x0100):
tty: /dev/tty2
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [pam_print_data] (0x0100):
ruser:
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [pam_print_data] (0x0100):
rhost:
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [pam_print_data] (0x0100):
authtok type: 1
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [pam_print_data] (0x0100):
newauthtok type: 0
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [pam_print_data] (0x0100):
priv: 1
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [pam_print_data] (0x0100):
cli_pid: 8940
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [pam_print_data] (0x0100):
logon name: not set
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [dp_attach_req] (0x0400):
DP Request [PAM Authenticate #3]: New request. Flags [0000].
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [dp_attach_req] (0x0400):
Number of active DP request: 1
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [sss_domain_get_state]
(0x1000): Domain ECCM.LAN is Active
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [krb5_auth_queue_send]
(0x1000): Wait queue of user [cedric(a)eccm.lan] is empty, running request
[0x560ea0455cc0] immediately.
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [krb5_setup] (0x4000): No
mapping for: cedric(a)eccm.lan
...
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [krb5_get_simple_upn]
(0x4000): Using simple UPN [cedric(a)ECCM.LAN].
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [check_ccache_re] (0x1000):
Ccache directory name [/var/tmp] does not contain illegal patterns.
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [check_ccache_re] (0x1000):
Ccache directory name [FILE:/var/tmp/krb5cc_1000_XXXXXX] does not contain
illegal patterns.
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]]
[krb5_auth_prepare_ccache_name] (0x1000): No ccache file for user
[cedric(a)eccm.lan] found.
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [fo_resolve_service_send]
(0x0100): Trying to resolve service 'KERBEROS'
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [get_server_status]
(0x1000): Status of server 'DebianCubox.eccm.lan' is 'name not resolved'
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [get_port_status] (0x1000):
Port status of port 0 for server 'DebianCubox.eccm.lan' is 'neutral'
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]]
[fo_resolve_service_activate_timeout] (0x2000): Resolve timeout set to 6
seconds
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [get_server_status]
(0x1000): Status of server 'DebianCubox.eccm.lan' is 'name not resolved'
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [resolv_is_address]
(0x4000): [DebianCubox.eccm.lan] does not look like an IP address
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [resolv_gethostbyname_step]
(0x2000): Querying files
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]]
[resolv_gethostbyname_files_send] (0x0100): Trying to resolve A record of
'DebianCubox.eccm.lan' in files
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [set_server_common_status]
(0x0100): Marking server 'DebianCubox.eccm.lan' as 'resolving name'
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [resolv_gethostbyname_step]
(0x2000): Querying files
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]]
[resolv_gethostbyname_files_send] (0x0100): Trying to resolve AAAA record
of 'DebianCubox.eccm.lan' in files
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [resolv_gethostbyname_next]
(0x0200): No more address families to retry
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [resolv_gethostbyname_step]
(0x2000): Querying DNS
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]]
[resolv_gethostbyname_dns_query] (0x0100): Trying to resolve A record of
'DebianCubox.eccm.lan' in DNS
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [schedule_request_timeout]
(0x2000): Scheduling a timeout of 6 seconds
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [schedule_timeout_watcher]
(0x2000): Scheduling DNS timeout watcher
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]]
[unschedule_timeout_watcher] (0x4000): Unscheduling DNS timeout watcher
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [request_watch_destructor]
(0x0400): Deleting request watch
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [resolv_gethostbyname_done]
(0x0040): querying hosts database failed [5]: Erreur d'entrée/sortie
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [fo_resolve_service_done]
(0x0020): Failed to resolve server 'DebianCubox.eccm.lan': Could not
contact DNS servers
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [set_server_common_status]
(0x0100): Marking server 'DebianCubox.eccm.lan' as 'not working'
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [be_resolve_server_process]
(0x0080): Couldn't resolve server (DebianCubox.eccm.lan), resolver returned
[5]: Erreur d'entrée/sortie
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [be_resolve_server_process]
(0x1000): Trying with the next one!
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [fo_resolve_service_send]
(0x0100): Trying to resolve service 'KERBEROS'
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [get_server_status]
(0x1000): Status of server 'DebianCubox.eccm.lan' is 'not working'
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [get_server_status]
(0x1000): Status of server 'DebianCubox.eccm.lan' is 'not working'
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [fo_resolve_service_send]
(0x0020): No available servers for service 'KERBEROS'
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [be_resolve_server_done]
(0x1000): Server resolution failed: [5]: Erreur d'entrée/sortie
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [be_mark_dom_offline]
(0x1000): Marking back end offline
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [be_mark_offline] (0x2000):
Going offline!
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [be_mark_offline] (0x2000):
Initialize check_if_online_ptask.
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [ldb] (0x4000): Added timed
event "ltdb_callback": 0x560ea048ceb0
...
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [be_ptask_create] (0x0400):
Periodic task [Check if online (periodic)] was created
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [be_ptask_schedule]
(0x0400): Task [Check if online (periodic)]: scheduling task 73 seconds
from now [1525171017]
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [be_run_offline_cb]
(0x0080): Going offline. Running callbacks.
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [child_handler_setup]
(0x2000): Setting up signal handler up for pid [8942]
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [child_handler_setup]
(0x2000): Signal handler set up for pid [8942]
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [write_pipe_handler]
(0x0400): All data has been sent!
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [child_sig_handler]
(0x1000): Waiting for child [8942].
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [child_sig_handler]
(0x0100): child [8942] finished successfully.
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [read_pipe_handler]
(0x0400): EOF received, client finished
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [parse_krb5_child_response]
(0x1000): child response [0][3][44].
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [krb5_mod_ccname] (0x4000):
Save ccname [FILE:/var/tmp/krb5cc_1000_IBmIM5] for user [cedric(a)eccm.lan].
....
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]]
[check_failed_login_attempts] (0x4000): Failed login attempts [0], allowed
failed login attempts [0], failed login delay [5].
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [sysdb_cache_auth]
(0x0100): Cached credentials not available.
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [ldb] (0x4000): cancel ldb
transaction (nesting: 0)
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [krb5_auth_cache_creds]
(0x0020): Offline authentication failed
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [check_wait_queue]
(0x1000): Wait queue for user [cedric(a)eccm.lan] is empty.
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [krb5_auth_queue_done]
(0x1000): krb5_auth_queue request [0x560ea0455cc0] done.
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [dp_req_done] (0x0400): DP
Request [PAM Authenticate #3]: Request handler finished [0]: Succès
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [_dp_req_recv] (0x0400): DP
Request [PAM Authenticate #3]: Receiving request data.
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [dp_req_destructor]
(0x0400): DP Request [PAM Authenticate #3]: Request removed.
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [dp_req_destructor]
(0x0400): Number of active DP request: 0
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [dp_method_enabled]
(0x0400): Target selinux is not configured
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [dp_pam_reply] (0x1000): DP
Request [PAM Authenticate #3]: Sending result [6][ECCM.LAN]
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [remove_krb5_info_files]
(0x0200): Could not remove [/var/lib/sss/pubconf/kdcinfo.ECCM.LAN],
[2][Aucun fichier ou dossier de ce type]
(Tue May 1 12:35:44 2018) [sssd[be[ECCM.LAN]]] [remove_krb5_info_files]
(0x0200): Could not remove [/var/lib/sss/pubconf/kpasswdinfo.ECCM.LAN],
[2][Aucun fichier ou dossier de ce type]
I am a bit confused about the credentials cache, which as far as I
understand is stored in /var/lib/sss/db/. But looking at the logfile, it
seems that offline authentication tries to find the Kerberos ticket in
/var/tmp (by the way, those tickets are still present after a reboot). If,
as I understand it, the password is cached as a hash in the sss/db files,
we should be able to authenticate even when we no longer have the Kerberos
ticket, shouldn't we?
Actually, it seems that clearing the credentials cache erases the
old_ccname, judging by the following log lines:
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [unpack_buffer]
(0x2000): No old ccache
(Tue May 1 12:35:44 2018) [[sssd[krb5_child[8942]]]] [unpack_buffer]
(0x0100): ccname: [FILE:/var/tmp/krb5cc_1000_XXXXXX] old_ccname: [not set]
keytab: [/etc/krb5.keytab]
When offline authentication succeeds, I observe the following in
krb5_child.log:
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]] [main] (0x0400):
krb5_child started.
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]] [unpack_buffer]
(0x1000): total buffer size: [160]
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]] [unpack_buffer]
(0x0100): cmd [241] uid [1000] gid [1000] validate [true] enterprise
principal [false] offline [true] UPN [cedric(a)ECCM.LAN]
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]] [unpack_buffer]
(0x0100): ccname: [FILE:/var/tmp/krb5cc_1000_XXXXXX] old_ccname:
[FILE:/var/tmp/krb5cc_1000_vbmc1v] keytab: [/etc/krb5.keytab]
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]] [check_use_fast]
(0x0100): Not using FAST.
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]] [switch_creds]
(0x0200): Switch user to [1000][1000].
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]] [switch_creds]
(0x0200): Switch user to [0][0].
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]]
[k5c_check_old_ccache] (0x4000): Ccache_file is
[FILE:/var/tmp/krb5cc_1000_vbmc1v] and is active and TGT is valid.
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]]
[privileged_krb5_setup] (0x0080): Cannot open the PAC responder socket
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]] [become_user]
(0x0200): Trying to become user [1000][1000].
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]] [main] (0x2000):
Running as [1000][1000].
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]] [become_user]
(0x0200): Trying to become user [1000][1000].
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]] [become_user]
(0x0200): Already user [1000].
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]] [k5c_setup]
(0x2000): Running as [1000][1000].
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]]
[set_lifetime_options] (0x0100): No specific renewable lifetime requested.
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]]
[set_lifetime_options] (0x0100): No specific lifetime requested.
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]] [main] (0x0400):
Will perform offline auth
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]]
[create_empty_ccache] (0x1000): Existing ccache still valid, reusing
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]] [k5c_send_data]
(0x0200): Received error code 0
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]]
[pack_response_packet] (0x2000): response packet size: [56]
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]] [k5c_send_data]
(0x4000): Response sent.
(Tue May 1 13:17:16 2018) [[sssd[krb5_child[10318]]]] [main] (0x0400):
krb5_child completed successfully
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [fo_resolve_service_send]
(0x0020): No available servers for service 'KERBEROS'
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [be_resolve_server_done]
(0x1000): Server resolution failed: [5]: Erreur d'entrée/sortie
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [be_mark_dom_offline]
(0x1000): Marking back end offline
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [be_mark_offline] (0x2000):
Going offline!
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [be_mark_offline] (0x2000):
Enable check_if_online_ptask.
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [be_ptask_enable] (0x0400):
Task [Check if online (periodic)]: enabling task
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [be_ptask_schedule]
(0x0400): Task [Check if online (periodic)]: scheduling task 80 seconds
from now [1525174023]
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [be_run_offline_cb]
(0x0080): Going offline. Running callbacks.
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [child_handler_setup]
(0x2000): Setting up signal handler up for pid [10471]
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [child_handler_setup]
(0x2000): Signal handler set up for pid [10471]
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [write_pipe_handler]
(0x0400): All data has been sent!
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [read_pipe_handler]
(0x0400): EOF received, client finished
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [parse_krb5_child_response]
(0x1000): child response [0][3][44].
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [krb5_mod_ccname] (0x4000):
Save ccname [FILE:/var/tmp/krb5cc_1000_vbmc1v] for user [cedric(a)eccm.lan].
....
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [sysdb_set_entry_attr]
(0x0200): Entry [name=cedric(a)eccm.lan,cn=users,cn=ECCM.LAN,cn=sysdb] has
set [ts_cache] attrs.
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]]
[check_failed_login_attempts] (0x4000): Failed login attempts [0], allowed
failed login attempts [0], failed login delay [5].
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [sysdb_cache_auth]
(0x0100): Hashes do match!
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [ldb] (0x4000): commit ldb
transaction (nesting: 0)
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]]
[add_user_to_delayed_online_authentication] (0x4000): Saved authtok of user
[cedric(a)eccm.lan] with serial [184462369].
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]]
[add_user_to_delayed_online_authentication] (0x4000): Added user
[cedric(a)eccm.lan] successfully to delayed online authentication.
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [check_wait_queue]
(0x1000): Wait queue for user [cedric(a)eccm.lan] is empty.
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [krb5_auth_queue_done]
(0x1000): krb5_auth_queue request [0x55ffb8dee140] done.
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [authenticate_user_done]
(0x0020): Failed to authenticate user [cedric(a)eccm.lan].
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [child_sig_handler]
(0x1000): Waiting for child [10471].
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [child_sig_handler]
(0x0100): child [10471] finished successfully.
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [remove_krb5_info_files]
(0x0200): Could not remove [/var/lib/sss/pubconf/kdcinfo.ECCM.LAN],
[2][Aucun fichier ou dossier de ce type]
(Tue May 1 13:25:43 2018) [sssd[be[ECCM.LAN]]] [remove_krb5_info_files]
(0x0200): Could not remove [/var/lib/sss/pubconf/kpasswdinfo.ECCM.LAN],
[2][Aucun fichier ou dossier de ce type]
Thanks a lot for your help.
Regards
Cedric
Re: sssd-users@lists.fedorahosted.org post from cedric@hottier.com requires approval
by Jakub Hrozek
> On 1 May 2018, at 23:30, admin(a)fedoraproject.org wrote:
>
> As list administrator, your authorization is requested for the
> following mailing list posting:
>
> List: sssd-users(a)lists.fedorahosted.org
> From: cedric(a)hottier.com
> Subject: Re: [SSSD-users] Re: credentials cache cleared at sssd restart pam_sss + krb5
>
> The message is being held because:
>
> The message is not from a list member
>
> At your convenience, visit your dashboard to approve or deny the
> request.
>
> From: cedric hottier <cedric(a)hottier.com>
> Subject: Re: [SSSD-users] Re: credentials cache cleared at sssd restart pam_sss + krb5
> Date: 1 May 2018 at 23:30:53 CEST
> To: sssd-users(a)lists.fedorahosted.org
>
>
> Dear Jakub,
> Thanks a lot for your workaround. It works perfectly now.
>
> I guess that fixing this issue is not a priority as the proxy_lib_names=files works fine.
> I did not find any bug report regarding this issue, and I think it would be worth to create one with your workaround .
Well, it’s causing issues, so it’s a priority :) The bug link is https://pagure.io/SSSD/sssd/issue/3591
> Regards
> Cedric
AD in mixed OS environment with SSSD
by Zdravko Zdravkov
Hi all.
I've got a working Samba AD server. It plays nicely with Windows 10 and
also successfully authenticates Linux machines with SSSD.
On the Windows machines I have our EMC storage SMB-mounted via group
policy. Managing permissions for users and groups there, as you know,
happens with right click, Security tab, etc.
As you may have already guessed, the trouble comes when my Linux machines,
which access the storage via an NFS mount, need to work with folders and
files created from the Windows PCs. Linux doesn't "see" the actual
user/group that owns a given folder; it shows ID numbers starting from
1000 instead.
I'm quite sure this is a common and known issue, but I don't know the
right way to deal with it, so any wisdom will be helpful.
Here's smb.conf from the server:
[global]
> netbios name = AD
> realm = XXXXXX
> server role = active directory domain controller
> server services = s3fs, rpc, nbt, wrepl, ldap, cldap, kdc, drepl,
> winbindd, ntp_signd, kcc, dnsupdate
> workgroup = XXXX
> idmap config XXXX:unix_nss_info = yes
> log file = /var/log/samba/samba.log
> log level = 3
> [netlogon]
> path = /usr/local/samba/var/locks/sysvol/XXXXXX/scripts
> read only = No
> [sysvol]
> path = /usr/local/samba/var/locks/sysvol
> read only = No
Also, sssd.conf from the client:
[sssd]
> domains = xxxx
> config_file_version = 2
> services = nss, pam
> [domain/xxxx]
> ad_domain = xxxx
> krb5_realm = XXXX
> realmd_tags = manages-system joined-with-samba
> cache_credentials = True
> id_provider = ad
> krb5_store_password_if_offline = True
> default_shell = /bin/bash
> ldap_id_mapping = True
> use_fully_qualified_names = False
> fallback_homedir = /home/%u
> access_provider = ad
And nsswitch.conf:
passwd: files sss
> shadow: files sss
> group: files sss
Will appreciate any wisdom.
Thanks!
Z
Re: credentials cache cleared at sssd restart pam_sss + krb5
by cedric hottier
Dear Jakub,
Thanks a lot for your workaround. It works perfectly now.
I guess that fixing this issue is not a priority as the
proxy_lib_names=files works fine.
I did not find any bug report regarding this issue, and I think it would be
worth to create one with your workaround .
Regards
Cedric
-
Multi protocol issue with UIDs
by Zdravko Zdravkov
Hi all.
I've got a working Samba AD server. It plays nicely with Windows 10 and
also successfully authenticates Linux machines with SSSD.
On the Windows machines I have our EMC storage SMB-mounted via group
policy. Managing permissions for users and groups there, as you know,
happens with right click, Security tab, etc.
As you may have already guessed, the trouble comes when my Linux machines,
which access the storage via an NFS mount, need to work with folders and
files created from the Windows PCs.
The CentOS computers see created files and folders with the Windows SIDs
(from 10000 on), while 'id user' returns a UID like 1115001181. Obviously
this is a problem when I need to access Windows-created files from Linux,
and the other way around.
I'm quite sure this is a common and known issue, but I don't know the
right way to deal with it, so any wisdom will be helpful.
Here's smb.conf from the server:
[global]
> netbios name = AD
> realm = XXXXXX
> server role = active directory domain controller
> server services = s3fs, rpc, nbt, wrepl, ldap, cldap, kdc, drepl,
> winbindd, ntp_signd, kcc, dnsupdate
> workgroup = XXXX
> idmap config XXXX:unix_nss_info = yes
> log file = /var/log/samba/samba.log
> log level = 3
> [netlogon]
> path = /usr/local/samba/var/locks/sysvol/XXXXXX/scripts
> read only = No
> [sysvol]
> path = /usr/local/samba/var/locks/sysvol
> read only = No
Also, sssd.conf from the client:
[sssd]
> domains = xxxx
> config_file_version = 2
> services = nss, pam
> [domain/xxxx]
> ad_domain = xxxx
> krb5_realm = XXXX
> realmd_tags = manages-system joined-with-samba
> cache_credentials = True
> id_provider = ad
> krb5_store_password_if_offline = True
> default_shell = /bin/bash
> ldap_id_mapping = True
> use_fully_qualified_names = False
> fallback_homedir = /home/%u
> access_provider = ad
And nsswitch.conf:
passwd: files sss
> shadow: files sss
> group: files sss
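Not a definitive fix, but if the AD accounts already carry RFC 2307 uidNumber/gidNumber attributes (which the `unix_nss_info = yes` idmap line suggests), one common approach is to make SSSD use those POSIX attributes instead of generating UIDs from SIDs, so that NFS and SMB clients agree on ownership. A sketch against the client config above; note that switching this on an existing system changes the IDs users get, so existing files would need re-chowning:

```ini
[domain/xxxx]
# Use uidNumber/gidNumber stored in AD instead of SID-based ID mapping.
# Every user and group must actually have these attributes set in AD.
ldap_id_mapping = False
```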