Best practices for upgrading when running dockerized FreeIPA
by Sebastiano Pomata
Hi all,
I successfully deployed a FreeIPA installation with a master server and two replicas using podman and the container images provided on docker.io (specifically, those based on Fedora 36) on RHEL 8.
Time has passed (indeed flown), Fedora 36 is now about to reach the end of its security support, and I have started thinking about upgrading to either the FreeIPA 4.10 image based on Fedora 38 or the one based on RHEL 9.
Whatever the final choice, I wonder what the recommended path is. I remember asking in the past on the FreeIPA IRC channel, and the most common suggestion was to avoid mounting the same ipa-data directory under a new, upgraded container image, and instead to create a new replica directly from the updated image.
This is very sensible; however, I am now faced with a practical question about the steps to take. Assuming I wanted to upgrade the master and two replicas from 4.9 to 4.10 one by one, should I create a temporary replica under a new hostname (and the same IP), delete the old replica from the topology and bring its container down, then re-create a new replica with the previous hostname?
Or should I just give up on the old hostname and stick with the new one for the upgraded replica? Since I manage the installation with SRV records in DNS, ditching the old name for a new one doesn't seem painful; however, we have some services that rely on the LDAP hostname of the current IPA servers and would still require a manual update.
DNS is not managed by FreeIPA but externally on another server, which I fully control.
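Since the SRV records live on an external DNS server, repointing clients at a renamed replica is mostly an exercise in editing the zone. A minimal sketch, using hypothetical hostnames (`ipa-old`/`ipa-new`) and a BIND-style zone fragment rather than a real zone file:

```shell
# Hypothetical zone fragment with the SRV records clients use to find IPA.
cat > ipa-srv.zone <<'EOF'
_ldap._tcp      IN SRV 0 100 389 ipa-old.example.test.
_kerberos._tcp  IN SRV 0 100 88  ipa-old.example.test.
EOF

# Repoint both records at the upgraded replica's new hostname
# (remember to bump the zone serial and reload the server afterwards).
sed -i 's/ipa-old\.example\.test\./ipa-new.example.test./' ipa-srv.zone
cat ipa-srv.zone
```

Services pinned to the LDAP hostname itself, rather than using SRV discovery, would still need their configuration updated by hand, as noted above.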
I hope my question is clear and that somebody who has dealt with upgrades more often can provide some feedback.
Thanks
Regards
11 months, 3 weeks
SSSD Log stops working - Backtrace dump ends here
by Finn Fysj
I've tried to install and re-install the IPA server on my node. I even tried to re-provision it. When I look in the SSSD log for my domain I get the following:
* (2023-05-04 6:30:59): [be[lab.local]] [sdap_get_generic_ext_step] (0x2000): [RID#16] ldap_search_ext called, msgid = 48
* (2023-05-04 6:30:59): [be[lab.local]] [sdap_op_add] (0x2000): [RID#16] New operation 48 timeout 60
* (2023-05-04 6:30:59): [be[lab.local]] [sdap_process_result] (0x2000): Trace: sh[0x560c8dff6e30], connected[1], ops[0x560c8e064050], ldap[0x560c8e0abcc0]
* (2023-05-04 6:30:59): [be[lab.local]] [sdap_process_result] (0x2000): Trace: end of ldap_result list
* (2023-05-04 6:30:59): [be[lab.local]] [sdap_process_result] (0x2000): Trace: sh[0x560c8dff6e30], connected[1], ops[0x560c8e064050], ldap[0x560c8e0abcc0]
* (2023-05-04 6:30:59): [be[lab.local]] [sdap_process_message] (0x4000): [RID#16] Message type: [LDAP_RES_SEARCH_RESULT]
* (2023-05-04 6:30:59): [be[lab.local]] [sdap_get_generic_op_finished] (0x0400): [RID#16] Search result: Success(0), no errmsg set
* (2023-05-04 6:30:59): [be[lab.local]] [sdap_get_generic_op_finished] (0x2000): [RID#16] Total count [0]
* (2023-05-04 6:30:59): [be[lab.local]] [sdap_op_destructor] (0x2000): [RID#16] Operation 48 finished
* (2023-05-04 6:30:59): [be[lab.local]] [ipa_hbac_rule_info_done] (0x0400): [RID#16] No rules apply to this host
* (2023-05-04 6:30:59): [be[lab.local]] [sdap_id_op_done] (0x4000): [RID#16] releasing operation connection
* (2023-05-04 6:30:59): [be[lab.local]] [ipa_pam_access_handler_done] (0x0020): [RID#16] No HBAC rules found, denying access
********************** BACKTRACE DUMP ENDS HERE *********************************
(2023-05-04 6:39:00): [be[lab.local]] [orderly_shutdown] (0x3f7c0): SIGTERM: killing children
(2023-05-04 6:39:00): [be[lab.local]] [orderly_shutdown] (0x3f7c0): Shutting down (status = 0)
(2023-05-04 6:39:00): [be[lab.local]] [server_setup] (0x3f7c0): Starting with debug level = 0x0070
(2023-05-04 6:41:04): [be[lab.local]] [orderly_shutdown] (0x3f7c0): SIGTERM: killing children
(2023-05-04 6:41:04): [be[lab.local]] [orderly_shutdown] (0x3f7c0): Shutting down (status = 0)
(2023-05-04 6:41:04): [be[lab.local]] [server_setup] (0x3f7c0): Starting with debug level = 0x0070
(2023-05-04 6:43:33): [be[lab.local]] [orderly_shutdown] (0x3f7c0): SIGTERM: killing children
(2023-05-04 6:43:33): [be[lab.local]] [orderly_shutdown] (0x3f7c0): Shutting down (status = 0)
(2023-05-04 6:43:33): [be[lab.local]] [server_setup] (0x3f7c0): Starting with debug level = 0x0070
I tried setting debug_level to 8 and 9, without any good results. The log doesn't change when I try to log in or run any "privileged" commands.
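One thing worth double-checking if the raised verbosity never showed up in the logs: `debug_level` in sssd.conf applies per section, so it has to be set under each section you want verbose output from, not just once. A sketch of what that looks like, written to a local copy rather than /etc/sssd/sssd.conf to keep it side-effect free (the domain name is taken from the log above):

```shell
# debug_level is per-section: a value set only under [sssd] does not raise
# logging for the domain backend or the responders.
cat > sssd.conf.example <<'EOF'
[domain/lab.local]
debug_level = 9

[nss]
debug_level = 9

[pam]
debug_level = 9
EOF
cat sssd.conf.example
```

After editing the real file, restart sssd so the new levels take effect.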
11 months, 3 weeks
Free-IPA to RHEL IPA: ipa-crlgen-manage not present, manual options
by John Burns
Greetings!
Can the actions within the two commands below be done manually (outside the RPM)?
ipa-crlgen-manage status
ipa-crlgen-manage disable
Context: I inherited an old CentOS IPA cluster, centered on VMs. Suboptimal, given that the VM datastore itself is NFS-shared. Bare-metal hardware is in place, running RHEL IPA. The "ipa-crlgen-manage" tool is not actually present in the running ipa-server-4.X but is in a later version. Yes, I could run "yum update" and restart, but this VM can't go down until 1) offloading the last crucial functions to the bare metal and 2) removing it from the topology. There is limited opportunity to confirm things don't break after that update.
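On releases that ship the tool, `ipa-crlgen-manage` largely toggles two Dogtag flags in `/etc/pki/pki-tomcat/ca/CS.cfg` (plus the CRL RewriteRule in `/etc/httpd/conf.d/ipa-pki-proxy.conf`), so status/disable can be approximated by hand. A sketch against a sample file rather than the live config; verify the paths and flag names on your actual 4.X build before touching anything:

```shell
# Sample of the two relevant lines from /etc/pki/pki-tomcat/ca/CS.cfg.
cat > CS.cfg.sample <<'EOF'
ca.crl.MasterCRL.enableCRLCache=true
ca.crl.MasterCRL.enableCRLUpdates=true
EOF

# "status": both flags true => this CA is the CRL generation master.
grep '^ca\.crl\.MasterCRL\.enableCRL' CS.cfg.sample

# "disable": flip both flags to false, then restart pki-tomcatd on the real box.
sed -i 's/=true$/=false/' CS.cfg.sample
grep '^ca\.crl\.MasterCRL\.enableCRL' CS.cfg.sample
```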
Thanks for any leads.
11 months, 3 weeks
Yum-based upgrade causes group lookup failures.
by Jeff Goddard
Hello and thank you for providing such a useful product.
We recently used yum to update our FreeIPA infrastructure and everything went off without a hitch, with the exception of one of our integrated apps, which now cannot determine group membership. We know the lookup is successful, as the user can log in, but we use group-based ACLs from FreeIPA, and when the groups are not populated the users cannot access the Rundeck jobs. Nothing on Rundeck was changed; the only actions taken were "yum update" and "yum upgrade" on the FreeIPA servers. I've reviewed the configurations and done some troubleshooting but can't find an answer, so I'm reaching out in hopes someone can give me a clue.
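When the user can bind but the app sees no groups, it can help to compare what the directory actually returns for the account against what the app's ACLs expect. A sketch over a canned entry (the DN and group names here are made up; in practice the input would come from an `ldapsearch` for the user's `memberOf` attribute):

```shell
# Canned stand-in for an ldapsearch result showing a user's memberOf values.
cat > jdoe.ldif <<'EOF'
dn: uid=jdoe,cn=users,cn=accounts,dc=example,dc=test
memberOf: cn=rundeck-users,cn=groups,cn=accounts,dc=example,dc=test
memberOf: cn=admins,cn=groups,cn=accounts,dc=example,dc=test
EOF

# Extract just the group CNs an ACL would match on.
grep '^memberOf:' jdoe.ldif | sed 's/^memberOf: cn=\([^,]*\),.*/\1/'
```

Comparing that list against what Rundeck's LDAP lookup config asks for (group base DN, member attribute) can show whether the gap is on the directory side or in the app's query.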
Environment details:
Server: CentOS Linux release 7.9.2009 (Core)
FreeIPA version: 4.6.8-5.el7.centos.12
Problematic app: Rundeck
Update logs are attached, as is the application LDAP lookup config and the application login error. Any help is greatly appreciated.
Jeff
--
Jeff Goddard
Director of Information Technology
SureCost
Email: jgoddard(a)surecost.com
Telephone: (603) 447-8571
Toll free: (888) 363-7596 ext. 108
Fax: (603) 356-3346
11 months, 3 weeks
SSL errors ... again
by Justin Sanderson
OK. So once again my IPA server is having cert issues. Everything seems to be working except when I am in the web interface and go to "Authentication" --> "Certificates" --> click any of the certs in the list.
---- I get this error from the browser.------
IPA ERROR 907: NetworkError
cannot connect to
https://[myservernamehere.fqdn]:443/ca/agent/ca/displayBySerial' :
SSL_HANDSHAKE_FAILURE
# getcert list | grep expires --> everything checks out OK; no expiry on any of the certs
---- Checked the "Not Before" and "Not After" dates on all the certs in the following NSS DBs: ----
certutil -L -d /etc/pki/pki-tomcat/alias
certutil -L -d /etc/httpd/alias
---- In /var/log/httpd/error_log, I do see some errors: ----
Bad Remote Server Certificate -8181
SSL Library Error: -8181 Certificate has expired
I know it's obviously an expired cert from the httpd error log, but where is the darn thing? I thought I had checked all the places and everything looked OK, but I'm definitely missing something...
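One way to hunt the expired cert down is to export each candidate from the NSS DBs to PEM (e.g. with `certutil -L -d <dbdir> -n <nickname> -a`) and let openssl check it. The sketch below generates a throwaway self-signed cert as a stand-in, since the nicknames in your DBs aren't known here:

```shell
# Throwaway self-signed cert standing in for one exported from an NSS DB.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/standin-key.pem \
    -out /tmp/standin-cert.pem -days 1 -subj '/CN=stand-in.example' 2>/dev/null

# Show the notAfter date for this cert.
openssl x509 -in /tmp/standin-cert.pem -noout -enddate

# -checkend 0 exits non-zero when the cert is already past notAfter.
if openssl x509 -in /tmp/standin-cert.pem -noout -checkend 0 >/dev/null; then
    echo "still valid"
else
    echo "EXPIRED"
fi
```

Running that check over every cert in both DBs (the pki-tomcat one in particular, since the failing call goes to the CA's agent interface) should flag the one httpd is complaining about.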
could use some advice.
TIA
11 months, 3 weeks
Help with Ghost Replica removal please.
by Nicholas Cross
I am on a long journey upgrading from a mixture of different versions of
FreeIPA to the latest, and I am almost there: currently on 4.10.
After removing some older replicas yesterday, a new replica had a fault and
we had to restart dirsrv.
When it came back up we saw that we have ghost replicas across the whole
estate of 6 replicas in two locations.
The issues are reported via `cipa`.
Digging into the tombstone records, I find these are not the usual
tombstone replica records, but more like the output below (the suspect
entries are marked with <<<<<<<<<<<<<<).
My question is: how do I remove these redundant records? I believe this
is what `cipa` is counting when it reports the errors.
The usual things do not work:
# ipa-replica-manage -p $pass clean-ruv 15
Replica ID 15 not found
$ cat remove_ruv.ldif
dn: cn=clean 15,cn=cleanallruv,cn=tasks,cn=config
changetype: add
objectclass: top
objectclass: extensibleObject
replica-base-dn: dc=ad,dc=companyx,dc=fm
replica-id: 15
cn: clean 15
ldapmodify -Y GSSAPI -f remove_ruv.ldif
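One variant worth trying when the plain task fails (as in the "Replica ID 15 not found" output above) is the 389-ds `replica-force-cleaning` attribute, which tells the CLEANALLRUV task not to wait on replicas it cannot reach. Whether it also clears the stale `nsruvReplicaLastModified` values I can't promise; the sketch just generates the LDIF:

```shell
# Same CLEANALLRUV task as in the thread, plus 389-ds's force-cleaning flag.
RID=15
cat > clean-ruv-force.ldif <<EOF
dn: cn=clean ${RID},cn=cleanallruv,cn=tasks,cn=config
changetype: add
objectclass: top
objectclass: extensibleObject
replica-base-dn: dc=ad,dc=companyx,dc=fm
replica-id: ${RID}
cn: clean ${RID}
replica-force-cleaning: yes
EOF

# Then, on the affected server:
#   ldapmodify -Y GSSAPI -f clean-ruv-force.ldif
cat clean-ruv-force.ldif
```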
Thanks in advance.
Nick
$ ldapsearch -D "cn=Directory Manager" -w $pass -Q -o ldif-wrap=no -LLL -b
"dc=ad,dc=companyx,dc=fm"
'(&(objectclass=nstombstone)(nsUniqueId=ffffffff-ffffffff-ffffffff-ffffffff))'
dn: cn=replica,cn=dc\3Dad\2Cdc\3Dcompanyx\2Cdc\3Dfm,cn=mapping
tree,cn=config
cn: replica
nsDS5Flags: 1
nsDS5ReplicaBindDN: cn=replication manager,cn=config
nsDS5ReplicaBindDNGroup: cn=replication
managers,cn=sysaccounts,cn=etc,dc=ad,dc=companyx,dc=fm
nsDS5ReplicaBindDnGroupCheckInterval: 60
nsDS5ReplicaId: 56
nsDS5ReplicaName: a6b5640c-ad3911ed-a50980fb-6203228c
nsDS5ReplicaRoot: dc=ad,dc=companyx,dc=fm
nsDS5ReplicaType: 3
nsState:: OAAAAAAAAABSPEJkAAAAAAAAAAAAAAAA7AAAAAAAAAAGAAAAAAAAAA==
nsds5ReplicaBackoffMax: 300
nsds5ReplicaLegacyConsumer: off
nsds5ReplicaReleaseTimeout: 60
objectClass: top
objectClass: nsds5replica
objectClass: extensibleobject
nsds50ruv: {replicageneration} 5d9e2076000000040000
nsds50ruv: {replica 56 ldap://ipa006.ad.companyx.fm:389}
63ece66f000000380000 64423d4e000000380000
nsds50ruv: {replica 46 ldap://ipa005.ad.companyx.fm:389}
63dbcc200001002e0000 64423cf6000f002e0000
nsds50ruv: {replica 21 ldap://etcd0dc.ad.companyx.fm:389}
5f2d4bc1000000150000 64319ec3276200150000
nsds50ruv: {replica 48 ldap://ipa007.ad.companyx.fm:389}
63ea4e54000100300000 6442397e000a00300000
nsds50ruv: {replica 58 ldap://ipa001dc.ad.companyx.fm:389}
643d03280001003a0000 644232c00001003a0000
nsds50ruv: {replica 60 ldap://ipa002dc.ad.companyx.fm:389}
643d19680001003c0000 644232660004003c0000
nsds50ruv: {replica 62 ldap://ipa003dc.ad.companyx.fm:389}
643d491e0001003e0000 644233340000003e0000
nsds5agmtmaxcsn: dc=ad,dc=companyx,dc=fm;
ipa006.ad.companyx.fm-to-ipa003dc.ad.companyx.fm;ipa003dc.ad.companyx.fm
;389;62;644238c9000000380000
nsds5agmtmaxcsn: dc=ad,dc=companyx,dc=fm;
ipa006.ad.companyx.fm-to-ipa007.ad.companyx.fm;ipa007.ad.companyx.fm
;389;48;644238c9000000380000
nsds5agmtmaxcsn: dc=ad,dc=companyx,dc=fm;
ipa006.ad.companyx.fm-to-etcd2dc.ad.companyx.fm;etcd2dc.ad.companyx.fm;389;unavailable;64423cf6000f002e0000
<<<<<<<<<<<<<<
nsds5agmtmaxcsn: dc=ad,dc=companyx,dc=fm;
ipa006.ad.companyx.fm-to-ipa005.ad.companyx.fm;ipa005.ad.companyx.fm
;389;46;644238c9000000380000
nsruvReplicaLastModified: {replica 56 ldap://ipa006.ad.companyx.fm:389}
64423c62
nsruvReplicaLastModified: {replica 46 ldap://ipa005.ad.companyx.fm:389}
64423c0c
nsruvReplicaLastModified: {replica 21 ldap://etcd0dc.ad.companyx.fm:389}
00000000
nsruvReplicaLastModified: {replica 48 ldap://ipa007.ad.companyx.fm:389}
64423894
nsruvReplicaLastModified: {replica 58 ldap://ipa001dc.ad.companyx.fm:389}
644231da
nsruvReplicaLastModified: {replica 60 ldap://ipa002dc.ad.companyx.fm:389}
6442317a
nsruvReplicaLastModified: {replica 62 ldap://ipa003dc.ad.companyx.fm:389}
64423249
nsruvReplicaLastModified: {replica 12} 644180f7 <<<<<<<<<<<<<<
nsruvReplicaLastModified: {replica 15} 644180f7 <<<<<<<<<<<<<<
nsruvReplicaLastModified: {replica 25} 644180f7 <<<<<<<<<<<<<<
nsruvReplicaLastModified: {replica 23} 644180f7 <<<<<<<<<<<<<<
nsruvReplicaLastModified: {replica 40} 644180f7 <<<<<<<<<<<<<<
nsds5ReplicaChangeCount: 734077
nsds5replicareapactive: 0
11 months, 4 weeks