Over the summer we announced the freeipa-healthcheck project, which is designed to examine an IdM cluster for common problems so you can have some assurance that the system is running as it should.
It was built against the IPA 4.8.x branch and originally released only for Fedora 29+. It is also included in the newly released RHEL 8.1.0.
My curious nature led me to see if it would also work on the IPA 4.6.x branch. It was a bit of a challenge backing down to Python 2, but I was able to get something working. I tested primarily on Fedora 27, but it should also work on RHEL/CentOS 7 (I smoke-tested 7.8).
I made an EPEL 7 build in COPR: https://copr.fedorainfracloud.org/coprs/rcritten/ipa-healthcheck/

Enable the repo and do:

  # yum install freeipa-healthcheck

Then run:

  # ipa-healthcheck --failures-only
Ideally there will be no output other than an empty list, []. Otherwise the output is JSON and hopefully has enough information to point you in the right direction. Feel free to ask if you need help.
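Since the output is machine-readable JSON, it is easy to post-process. As an illustration only (the sample data and the summarize helper below are invented, not part of the tool):

```python
import json

# Invented sample resembling the output shown elsewhere in this thread.
sample = """
[
  {"source": "ipahealthcheck.ipa.certs", "check": "IPACertTracking",
   "kw": {"msg": "Unknown certmonger id 1234", "key": "1234"},
   "result": "WARNING"},
  {"source": "ipahealthcheck.ds.replication", "check": "ReplicationConflictCheck",
   "kw": {"msg": "Replication conflict"},
   "result": "ERROR"}
]
"""

def summarize(output):
    """Count findings per severity, e.g. to decide whether to alert."""
    counts = {}
    for entry in json.loads(output):
        counts[entry["result"]] = counts.get(entry["result"], 0) + 1
    return counts

print(summarize(sample))
```

Something along the lines of piping ipa-healthcheck --failures-only into such a script could then feed monitoring.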
False positives are always a possibility and many of the checks run independently so it's possible to get multiple issues from a single root problem. It's hard to predict all possible installations so some fine-tuning may be required.
I'd recommend running it at least occasionally, such as prior to updating IPA packages or creating a new master, if not daily. It will, for example, warn of impending certificate expiration.
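For daily runs, something as simple as a cron entry would do (a hypothetical fragment, not shipped with the package; adjust schedule and log path to your setup):

```
# Hypothetical /etc/cron.d/ipa-healthcheck entry: run daily, keep only
# failures. A log containing more than "[]" means something needs attention.
0 6 * * * root ipa-healthcheck --failures-only > /var/log/ipa-healthcheck.log 2>&1
```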
The more feedback I get on it the better and more useful I can make it.
This is my own personal backport and is not officially supported by anyone but me. It's preferred to report issues on this mailing list. I'll see them and others may be able to chime in as well.
rob
Rob, thanks for the efforts on this! Highly appreciated. I will try it out on some setups I have around and will give you some feedback.
best regards, JP
On Tue, Nov 5, 2019 at 12:35, Rob Crittenden via FreeIPA-users (< freeipa-users@lists.fedorahosted.org>) wrote:
[ snip ]
rob
_______________________________________________
FreeIPA-users mailing list -- freeipa-users@lists.fedorahosted.org
To unsubscribe send an email to freeipa-users-leave@lists.fedorahosted.org
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedorahosted.org/archives/list/freeipa-users@lists.fedorahoste...
Hello Rob,
I saw this post last night (Finland time) and decided to give it a shot first thing in the morning. My setup: 2x CentOS 7.7 (ipa-server 4.6.5) with a cross-forest trust to a 2012 R2 AD domain. I ran ipa-healthcheck --failures-only on the first master and it returned 0 issues as expected. Then I ran it on the replica and it printed out this:

[
  { "source": "ipahealthcheck.ipa.certs", "kw": { "msg": "Request for certificate failed, <TagSet object, tags 0:32:16> not in asn1Spec: <OctetString schema object, tagSet <TagSet object, tags 0:0:4>, encoding iso-8859-1>", "key": "20181207074138" }, "uuid": "34135eaf-31be-49a2-b101-b449b904d5af", "duration": "2.261236", "when": "20191106055010Z", "check": "IPACertRevocation", "result": "ERROR" },
  { "source": "ipahealthcheck.ipa.certs", "kw": { "msg": "Request for certificate failed, <TagSet object, tags 0:32:16> not in asn1Spec: <OctetString schema object, tagSet <TagSet object, tags 0:0:4>, encoding iso-8859-1>", "key": "20181207073914" }, "uuid": "018787f8-dfc3-4b4b-ac16-777d3f651282", "duration": "3.471975", "when": "20191106055011Z", "check": "IPACertRevocation", "result": "ERROR" },
  { "source": "ipahealthcheck.ipa.certs", "kw": { "msg": "Request for certificate failed, <TagSet object, tags 0:32:16> not in asn1Spec: <OctetString schema object, tagSet <TagSet object, tags 0:0:4>, encoding iso-8859-1>", "key": "20181207073850" }, "uuid": "8e4573f8-9a74-4b6e-ac2c-45cddb181521", "duration": "3.782566", "when": "20191106055011Z", "check": "IPACertRevocation", "result": "ERROR" },
  { "source": "ipahealthcheck.ipa.certs", "kw": { "msg": "Request for certificate failed, <TagSet object, tags 0:32:16> not in asn1Spec: <OctetString schema object, tagSet <TagSet object, tags 0:0:4>, encoding iso-8859-1>", "key": "20181207074203" }, "uuid": "02628931-d6c6-479e-b473-e93f626841eb", "duration": "4.037205", "when": "20191106055012Z", "check": "IPACertRevocation", "result": "ERROR" }
]
The replica has never had any problems and https://github.com/peterpakos/checkipaconsistency reports no problems. I was wondering if this is something that I should fix on the replica or something that needs to be fixed in your magnificent tool that you so kindly backported?
I can provide any diagnostics and/or logs as needed.
- Kimmo
Figured it out. I think this was because I compiled checkipaconsistency on the replica. The errors pointed me towards pyasn1, and sure enough the pip versions on the first master and the replica differed:
First master: pyasn1 0.1.9, pyasn1-modules 0.0.8
Replica: pyasn1 0.4.7, pyasn1-modules 0.2.6

After doing:

  # pip uninstall pyasn1
  # pip uninstall pyasn1-modules

pip list on the replica shows: pyasn1 0.1.9, pyasn1-modules 0.0.8

After this, ipa-healthcheck --failures-only returns 0 issues.
Kimmo Rantala via FreeIPA-users wrote:
[ snip ]
Perfect, thanks for testing.
rob
Thanks Rob
Here are my findings, mainly as an FYI.
On the CA master it reports the following (which I have to investigate):

[
  {
    "source": "ipahealthcheck.ipa.certs",
    "kw": {
      "msg": "Unknown certmonger id 20190412141828",
      "key": "20190412141828"
    },
    "uuid": "f3d6ccb9-fb82-49ac-aa02-f485d08826c3",
    "duration": "0.980984",
    "when": "20191106095349Z",
    "check": "IPACertTracking",
    "result": "WARNING"
  }
]
One replica reports no problems. Another replica reports the following. This replica is installed and running in an LXC container (Ubuntu host). Healthcheck reports:

[
  {
    "source": "ipahealthcheck.system.filesystemspace",
    "kw": {
      "exception": "[Errno 2] No such file or directory: '/var/log/audit/'"
    },
    "uuid": "087b9370-7d5a-4814-8a0b-956bdeed5ae7",
    "duration": "0.000464",
    "when": "20191106094813Z",
    "check": "FileSystemSpaceCheck",
    "result": "CRITICAL"
  }
]

Strangely enough the package audit wasn't installed, only audit-libs and audit-libs-python. It seems to function alright though.
-- Kees
On 05-11-19 16:34, Rob Crittenden via FreeIPA-users wrote:
[ snip ]
Kees Bakker via FreeIPA-users wrote:
[ snip ]
On the CA master it reports the following (which I have to investigate):

[
  {
    "source": "ipahealthcheck.ipa.certs",
    "kw": {
      "msg": "Unknown certmonger id 20190412141828",
      "key": "20190412141828"
    },
    "uuid": "f3d6ccb9-fb82-49ac-aa02-f485d08826c3",
    "duration": "0.980984",
    "when": "20191106095349Z",
    "check": "IPACertTracking",
    "result": "WARNING"
  }
]
To see what the request is, run:
# getcert list -i 20190412141828
It may be perfectly fine; it is acceptable to track other certs on the master. It is just unexpected, so healthcheck warns about it.
One replica reports no problems. Another replica reports the following. This replica is installed and running in an LXC container (Ubuntu host). Healthcheck reports:

[
  {
    "source": "ipahealthcheck.system.filesystemspace",
    "kw": {
      "exception": "[Errno 2] No such file or directory: '/var/log/audit/'"
    },
    "uuid": "087b9370-7d5a-4814-8a0b-956bdeed5ae7",
    "duration": "0.000464",
    "when": "20191106094813Z",
    "check": "FileSystemSpaceCheck",
    "result": "CRITICAL"
  }
]

Strangely enough the package audit wasn't installed, only audit-libs and audit-libs-python. It seems to function alright though.
It isn't dependent upon installed packages; it just checks a bunch of filesystems. I'd have sworn we've seen a similar issue when someone ran healthcheck in a docker container, and I thought we considered that context when checking. I'll take a look.
This is one of those false-positives I was worried about :/
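A guard of the sort described above could look like this (an illustrative sketch, not the actual healthcheck code; the function name is made up):

```python
import os

def free_space_mb(path):
    """Free space for a log directory, or None when the path doesn't
    exist (e.g. /var/log/audit/ when auditd was never installed, as in
    some containers). Sketch only -- not the actual healthcheck code."""
    if not os.path.isdir(path):
        return None  # caller can report SKIP instead of CRITICAL
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize // (1024 * 1024)

for p in ("/var/log", "/var/log/audit/"):
    mb = free_space_mb(p)
    print(p, "-> missing, skipping" if mb is None else "-> %d MiB free" % mb)
```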
thanks
rob
[ snip ]
On 06-11-19 17:16, Rob Crittenden wrote:
[ snip ]
The warning is for a cert that I created for a FreeRADIUS server (which I never actually managed to get working).
The warning is a bit annoying because the cert is alright, I think. It is listed with "status: MONITORING". So, I think that the cert is not unknown to certmonger, despite what the error suggests.
I am considering creating another cert for some other service, in the same manner as I did for FreeRADIUS. That new cert would then also be flagged with a warning.
Kees Bakker wrote:
[ snip ]
The warning is for a cert that I created for a FreeRADIUS server (which I never actually managed to get working).
The warning is a bit annoying because the cert is alright, I think. It is listed with "status: MONITORING". So, I think that the cert is not unknown to certmonger, despite what the error suggests.
I am considering to create another cert for some other service, in the same manner as I did for freeRADIUS. That new cert would then also be flagged with a warning.
This particular check isn't verifying whether the cert is ok. It is checking that the tracking for the standard IPA certs is done correctly.
If there are additional certs, it has no way to know how to validate them, so it warns instead. We discourage running additional software on an IPA master. Using a master to manage a cert is probably fine, but it is a grey area. I chose to warn as a heads-up, keeping a paranoid stance of warning on anything unexpected.
I have an idea to create an ignore list but it probably won't see the light of day for a while.
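The ignore-list idea could be as simple as filtering findings by (check, key) pairs; a hypothetical sketch (no such option exists yet, and the data below is invented):

```python
def apply_ignore_list(results, ignore):
    """Drop findings whose (check, key) pair the admin has vetted.
    Hypothetical sketch of the ignore-list idea described above."""
    return [r for r in results
            if (r.get("check"), r.get("kw", {}).get("key")) not in ignore]

# Invented findings shaped like the examples in this thread.
results = [
    {"check": "IPACertTracking", "kw": {"key": "20190412141828"},
     "result": "WARNING"},
    {"check": "IPACertRevocation", "kw": {"key": "20181207074138"},
     "result": "ERROR"},
]
ignore = {("IPACertTracking", "20190412141828")}
print(apply_ignore_list(results, ignore))
```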
This is good feedback, thanks.
rob
On 13-12-19 15:00, Rob Crittenden wrote:
[ snip ]
This particular check isn't verifying whether the cert is ok. It is checking that the tracking for the standard IPA certs is done correctly.
If there are additional certs it has no way to know to validate them so warns instead. We discourage running additional software on an IPA master. Using a master to manage a cert is probably fine but is a grey area. I chose to warn as a heads-up, to keep a paranoid stance of warning on anything unexpected.
Ah, I see. So, I better not do that then.
I have an idea to create an ignore list but it probably won't see the light of day for a while.
This is good feedback, thanks.
Likewise.
Kees Bakker wrote:
[ snip ]
Ah, I see. So, I better not do that then.
I have an idea to create an ignore list but it probably won't see the light of day for a while.
This is good feedback, thanks.
Likewise.
What I'll do in the short term is add a much longer message that includes some of what I said here. There is no need for me to be so terse in some of these messages :-(
rob
Thanks for healthcheck Rob,
In our setup (two CentOS 7.7 servers running ipa-server-4.6.5-11.el7.centos.3.x86_64) I get the output below when ipa-healthcheck runs on the replica. The output on the master is identical, except for the first warning ("No DNA range defined. If no masters define a range then users and groups cannot be created."). How serious is my case? Any recommendation is highly appreciated.
Thanks again, Petros
[
  { "source": "ipahealthcheck.ipa.dna", "kw": { "msg": "No DNA range defined. If no masters define a range then users and groups cannot be created.", "range_start": 0, "next_start": 0, "next_max": 0, "range_max": 0 }, "uuid": "f414f514-38b2-4381-a161-f43ea81ffbae", "duration": "0.578066", "when": "20191107160820Z", "check": "IPADNARangeCheck", "result": "WARNING" },
  { "source": "ipahealthcheck.ipa.files", "kw": { "msg": "Permissions of /etc/dirsrv/slapd-GEO-SS-LAN/cert8.db are 0600 and should be 0640", "key": "_etc_dirsrv_slapd-GEO-SS-LAN_cert8.db_mode", "got": "0600", "expected": "0640", "path": "/etc/dirsrv/slapd-GEO-SS-LAN/cert8.db", "type": "mode" }, "uuid": "5a4a4d41-0761-403e-82f2-485bcfff5dd9", "duration": "0.000125", "when": "20191107160820Z", "check": "IPAFileNSSDBCheck", "result": "WARNING" },
  { "source": "ipahealthcheck.ipa.files", "kw": { "msg": "Permissions of /etc/dirsrv/slapd-GEO-SS-LAN/key3.db are 0600 and should be 0640", "key": "_etc_dirsrv_slapd-GEO-SS-LAN_key3.db_mode", "got": "0600", "expected": "0640", "path": "/etc/dirsrv/slapd-GEO-SS-LAN/key3.db", "type": "mode" }, "uuid": "8fd976a9-d011-4e2b-a77d-792f50b1f1e4", "duration": "0.000593", "when": "20191107160820Z", "check": "IPAFileNSSDBCheck", "result": "WARNING" },
  { "source": "ipahealthcheck.ipa.files", "kw": { "msg": "Permissions of /etc/dirsrv/slapd-GEO-SS-LAN/secmod.db are 0600 and should be 0640", "key": "_etc_dirsrv_slapd-GEO-SS-LAN_secmod.db_mode", "got": "0600", "expected": "0640", "path": "/etc/dirsrv/slapd-GEO-SS-LAN/secmod.db", "type": "mode" }, "uuid": "a0f8da6d-79ec-419d-9288-144a3a33cd97", "duration": "0.000902", "when": "20191107160820Z", "check": "IPAFileNSSDBCheck", "result": "WARNING" },
  { "source": "ipahealthcheck.ds.replication", "kw": { "msg": "Replication conflict", "glue": false, "conflict": "namingConflict cn=certmap,dc=geo,dc=ss,dc=lan", "key": "cn=certmap+nsuniqueid=ebb8b88e-a2c811e7-8f22c768-d7e7aa51,dc=geo,dc=ss,dc=lan" }, "uuid": "b9e9c71d-c97c-43be-806f-b37bdc3607c3", "duration": "0.005029", "when": "20191107160829Z", "check": "ReplicationConflictCheck", "result": "ERROR" },
  { "source": "ipahealthcheck.ds.replication", "kw": { "msg": "Replication conflict", "glue": false, "conflict": "namingConflict cn=certmaprules,cn=certmap,dc=geo,dc=ss,dc=lan", "key": "cn=certmaprules+nsuniqueid=ebb8b8b7-a2c811e7-8f22c768-d7e7aa51,cn=certmap+nsuniqueid=ebb8b88e-a2c811e7-8f22c768-d7e7aa51,dc=geo,dc=ss,dc=lan" }, "uuid": "2973a679-166c-48e1-b291-ed025b9ec727", "duration": "0.005333", "when": "20191107160829Z", "check": "ReplicationConflictCheck", "result": "ERROR" },
  { "source": "ipahealthcheck.ds.replication", "kw": { "msg": "Replication conflict", "glue": false, "conflict": "namingConflict cn=certificate identity mapping administrators,cn=privileges,cn=pbac,dc=geo,dc=ss,dc=lan", "key": "cn=Certificate Identity Mapping Administrators+nsuniqueid=ebb8b8b9-a2c811e7-8f22c768-d7e7aa51,cn=privileges,cn=pbac,dc=geo,dc=ss,dc=lan" }, "uuid": "989214c0-b8bb-43fc-918a-8c752f62258c", "duration": "0.005616", "when": "20191107160829Z", "check": "ReplicationConflictCheck", "result": "ERROR" },
  { "source": "ipahealthcheck.ds.replication", "kw": { "msg": "Replication conflict", "glue": false, "conflict": "namingConflict cn=system: modify certmap configuration,cn=permissions,cn=pbac,dc=geo,dc=ss,dc=lan", "key": "cn=System: Modify Certmap Configuration+nsuniqueid=ebb8b8bf-a2c811e7-8f22c768-d7e7aa51,cn=permissions,cn=pbac,dc=geo,dc=ss,dc=lan" }, "uuid": "b124c247-30ff-4f22-9b73-57001aa8ae8f", "duration": "0.005921", "when": "20191107160829Z", "check": "ReplicationConflictCheck", "result": "ERROR" },
  { "source": "ipahealthcheck.ds.replication", "kw": { "msg": "Replication conflict", "glue": false, "conflict": "namingConflict cn=system: read certmap configuration,cn=permissions,cn=pbac,dc=geo,dc=ss,dc=lan", "key": "cn=System: Read Certmap Configuration+nsuniqueid=ebb8b8c3-a2c811e7-8f22c768-d7e7aa51,cn=permissions,cn=pbac,dc=geo,dc=ss,dc=lan" }, "uuid": "4f55e952-a32a-477d-a512-49651c63d4be", "duration": "0.006249", "when": "20191107160829Z", "check": "ReplicationConflictCheck", "result": "ERROR" },
  { "source": "ipahealthcheck.ds.replication", "kw": { "msg": "Replication conflict", "glue": false, "conflict": "namingConflict cn=system: add certmap rules,cn=permissions,cn=pbac,dc=geo,dc=ss,dc=lan", "key": "cn=System: Add Certmap Rules+nsuniqueid=ebb8b8c6-a2c811e7-8f22c768-d7e7aa51,cn=permissions,cn=pbac,dc=geo,dc=ss,dc=lan" }, "uuid": "86e874d3-3dcd-455d-920f-37a64551ee9f", "duration": "0.006553", "when": "20191107160829Z", "check": "ReplicationConflictCheck", "result": "ERROR" },
  { "source": "ipahealthcheck.ds.replication", "kw": { "msg": "Replication conflict", "glue": false, "conflict": "namingConflict cn=system: delete certmap rules,cn=permissions,cn=pbac,dc=geo,dc=ss,dc=lan", "key": "cn=System: Delete Certmap Rules+nsuniqueid=ebb8b8ca-a2c811e7-8f22c768-d7e7aa51,cn=permissions,cn=pbac,dc=geo,dc=ss,dc=lan" }, "uuid": "6d9c8d6d-639b-4d95-904e-9b50d0a38f4f", "duration": "0.006855", "when": "20191107160829Z", "check": "ReplicationConflictCheck", "result": "ERROR" },
  { "source": "ipahealthcheck.ds.replication", "kw": { "msg": "Replication conflict", "glue": false, "conflict": "namingConflict cn=system: modify certmap rules,cn=permissions,cn=pbac,dc=geo,dc=ss,dc=lan", "key": "cn=System: Modify Certmap Rules+nsuniqueid=ebb8b8ce-a2c811e7-8f22c768-d7e7aa51,cn=permissions,cn=pbac,dc=geo,dc=ss,dc=lan" }, "uuid": "dfc6f379-e490-4745-a472-b090ce840498", "duration": "0.007169", "when": "20191107160829Z", "check": "ReplicationConflictCheck", "result": "ERROR" },
  { "source": "ipahealthcheck.ds.replication", "kw": { "msg": "Replication conflict", "glue": false, "conflict": "namingConflict cn=system: read certmap rules,cn=permissions,cn=pbac,dc=geo,dc=ss,dc=lan", "key": "cn=System: Read Certmap Rules+nsuniqueid=ebb8b8d2-a2c811e7-8f22c768-d7e7aa51,cn=permissions,cn=pbac,dc=geo,dc=ss,dc=lan" }, "uuid": "a2c329f0-a3f1-4139-8f4b-2b3dcf9e65c0", "duration": "0.007470", "when": "20191107160829Z", "check": "ReplicationConflictCheck", "result": "ERROR" },
  { "source": "ipahealthcheck.ds.replication", "kw": { "msg": "Replication conflict", "glue": false, "conflict": "namingConflict cn=system: modify external group membership,cn=permissions,cn=pbac,dc=geo,dc=ss,dc=lan", "key": "cn=System: Modify External Group Membership+nsuniqueid=ebb8b8db-a2c811e7-8f22c768-d7e7aa51,cn=permissions,cn=pbac,dc=geo,dc=ss,dc=lan" }, "uuid": "e2cc295f-7e21-424d-bdd6-b3150b1fab27", "duration": "0.007771", "when": "20191107160829Z", "check": "ReplicationConflictCheck", "result": "ERROR" },
  { "source": "ipahealthcheck.ds.replication", "kw": { "msg": "Replication conflict", "glue": false, "conflict": "namingConflict cn=system: read external group membership,cn=permissions,cn=pbac,dc=geo,dc=ss,dc=lan", "key": "cn=System: Read External Group Membership+nsuniqueid=ebb8b8e2-a2c811e7-8f22c768-d7e7aa51,cn=permissions,cn=pbac,dc=geo,dc=ss,dc=lan" }, "uuid": "fd9600f0-aba2-41bf-8cca-f84ca75cc90e", "duration": "0.008072", "when": "20191107160829Z", "check": "ReplicationConflictCheck", "result": "ERROR" },
  { "source": "ipahealthcheck.ds.replication", "kw": { "msg": "Replication conflict", "glue": false, "conflict": "namingConflict cn=system: manage user certificate mappings,cn=permissions,cn=pbac,dc=geo,dc=ss,dc=lan", "key": "cn=System: Manage User Certificate Mappings+nsuniqueid=ebb8b8e9-a2c811e7-8f22c768-d7e7aa51,cn=permissions,cn=pbac,dc=geo,dc=ss,dc=lan" }, "uuid": "8727dad3-da17-4cd6-94e0-eef33a9236da", "duration": "0.008386", "when": "20191107160829Z", "check": "ReplicationConflictCheck", "result": "ERROR" }
]
Petros Triantafyllidis wrote:
[ snip ]
[ { "source": "ipahealthcheck.ipa.dna", "kw": { "msg": "No DNA range defined. If no masters define a range then users and groups cannot be created.", "range_start": 0, "next_start": 0, "next_max": 0, "range_max": 0 }, "uuid": "f414f514-38b2-4381-a161-f43ea81ffbae", "duration": "0.578066", "when": "20191107160820Z", "check": "IPADNARangeCheck", "result": "WARNING" },
This is just a heads-up. It means that this master doesn't have a DNA range. If your other master dies then you'll get the dreaded "ERROR: Operations error: Allocation of a new value for range failed".
We don't allocate a range to every master because there are some users that have a LOT of masters and each time a range is allocated it splits in half.
So it may be perfectly fine, hence the warning.
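The range-splitting behaviour can be illustrated with toy arithmetic (invented numbers; not the actual DNA plugin code):

```python
def split_range(start, end):
    """Toy model: when a new master asks for a range, the donor keeps
    the first half and hands over the second."""
    mid = (start + end) // 2
    return (start, mid), (mid + 1, end)

keep, give = split_range(1000000, 1199999)
print("donor keeps", keep, "new master gets", give)
```

With many masters, repeated halving quickly produces small ranges, which is one reason a range isn't handed to every master up front.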
{ "source": "ipahealthcheck.ipa.files", "kw": { "msg": "Permissions of /etc/dirsrv/slapd-GEO-SS-LAN/cert8.db are 0600 and should be 0640", "key": "_etc_dirsrv_slapd-GEO-SS-LAN_cert8.db_mode", "got": "0600", "expected": "0640", "path": "/etc/dirsrv/slapd-GEO-SS-LAN/cert8.db", "type": "mode" }, "uuid": "5a4a4d41-0761-403e-82f2-485bcfff5dd9", "duration": "0.000125", "when": "20191107160820Z", "check": "IPAFileNSSDBCheck", "result": "WARNING" },
{ "source": "ipahealthcheck.ipa.files", "kw": { "msg": "Permissions of /etc/dirsrv/slapd-GEO-SS-LAN/key3.db are 0600 and should be 0640", "key": "_etc_dirsrv_slapd-GEO-SS-LAN_key3.db_mode", "got": "0600", "expected": "0640", "path": "/etc/dirsrv/slapd-GEO-SS-LAN/key3.db", "type": "mode" }, "uuid": "8fd976a9-d011-4e2b-a77d-792f50b1f1e4", "duration": "0.000593", "when": "20191107160820Z", "check": "IPAFileNSSDBCheck", "result": "WARNING" },
{ "source": "ipahealthcheck.ipa.files", "kw": { "msg": "Permissions of /etc/dirsrv/slapd-GEO-SS-LAN/secmod.db are 0600 and should be 0640", "key": "_etc_dirsrv_slapd-GEO-SS-LAN_secmod.db_mode", "got": "0600", "expected": "0640", "path": "/etc/dirsrv/slapd-GEO-SS-LAN/secmod.db", "type": "mode" }, "uuid": "a0f8da6d-79ec-419d-9288-144a3a33cd97", "duration": "0.000902", "when": "20191107160820Z", "check": "IPAFileNSSDBCheck", "result": "WARNING" },
Yeah, these are tricky. Also a warning. Permissions can be such that stricter or looser perms work just fine and are relatively equivalent in protection. We warn if it doesn't match the installed default but in this case the warnings can probably be ignored.
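For anyone curious how a mode comparison like this works, here is a minimal sketch (my own illustration, not healthcheck's actual code — the `check_mode` helper name and return shape are invented): it reads the permission bits, renders them as octal, and warns on any mismatch with the installed default.

```python
import os
import stat
import tempfile

def check_mode(path, expected):
    """Compare a file's permission bits, rendered as octal, to an expected string."""
    got = "0%o" % stat.S_IMODE(os.stat(path).st_mode)
    if got == expected:
        return "SUCCESS", None
    return "WARNING", "Permissions of %s are %s and should be %s" % (path, got, expected)

# Demonstration on a throwaway file:
with tempfile.NamedTemporaryFile() as f:
    os.chmod(f.name, 0o600)
    print(check_mode(f.name, "0640")[0])   # -> WARNING
```

As the comparison is an exact string match against one expected value, stricter-but-equivalent permissions (0600 vs 0640 here) trip the warning, which is why these can usually be ignored.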
{ "source": "ipahealthcheck.ds.replication", "kw": { "msg": "Replication conflict", "glue": false, "conflict": "namingConflict cn=certmap,dc=geo,dc=ss,dc=lan", "key": "cn=certmap+nsuniqueid=ebb8b88e-a2c811e7-8f22c768-d7e7aa51,dc=geo,dc=ss,dc=lan" }, "uuid": "b9e9c71d-c97c-43be-806f-b37bdc3607c3", "duration": "0.005029", "when": "20191107160829Z", "check": "ReplicationConflictCheck", "result": "ERROR" },
[ snip ]
What you'll want to do is compare the conflict entry with the "real" entry to see if there are any differences. Chances are there aren't and the conflict entries can be deleted.
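The comparison itself is just an attribute-by-attribute diff, ignoring operational attributes like nsuniqueid. A rough sketch, assuming you have already fetched both entries (with ldapsearch, python-ldap, or however you like) into plain attribute dicts — `diff_entries` is my own helper, not part of any IPA tooling:

```python
def diff_entries(real, conflict, ignore=("nsuniqueid",)):
    """Return the attributes whose values differ between two LDAP entries,
    each given as a dict of attribute -> list of values."""
    keys = (set(real) | set(conflict)) - set(ignore)
    return {k: (real.get(k), conflict.get(k))
            for k in keys if real.get(k) != conflict.get(k)}

real = {"objectClass": ["top", "nsContainer", "ipaCertMapConfigObject"],
        "ipaCertMapPromptUsername": ["FALSE"],
        "cn": ["certmap"]}
conflict = dict(real, nsuniqueid=["ebb8b88e-a2c811e7-8f22c768-d7e7aa51"])

print(diff_entries(real, conflict))   # -> {} (identical apart from nsuniqueid)
```

An empty diff means the conflict entry carries no information the real entry lacks, so deleting it loses nothing.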
rob
Thanks for your response Rob. Please see my questions inline.
On 11/7/19 6:48 PM, Rob Crittenden via FreeIPA-users wrote:
Petros Triantafyllidis wrote:
Thanks for healthcheck Rob,
In our setup (2 CentOS 7.7 servers, running ipa-server-4.6.5-11.el7.centos.3.x86_64) I get the output below when ipa-healthcheck runs at the replica. The output is identical at master too, except the first warning ("No DNA range defined. If no masters define a range then users and groups cannot be created."). How serious is my case? Any recommendation is highly appreciated.
Thanks again, Petros
[ { "source": "ipahealthcheck.ipa.dna", "kw": { "msg": "No DNA range defined. If no masters define a range then users and groups cannot be created.", "range_start": 0, "next_start": 0, "next_max": 0, "range_max": 0 }, "uuid": "f414f514-38b2-4381-a161-f43ea81ffbae", "duration": "0.578066", "when": "20191107160820Z", "check": "IPADNARangeCheck", "result": "WARNING" },
This is just a heads-up. It means that this master doesn't have a DNA range. If your other master dies then you'll get the dreaded "ERROR: Operations error: Allocation of a new value for range failed".
We don't allocate a range to every master because there are some users that have a LOT of masters and each time a range is allocated it splits in half.
So it may be perfectly fine, hence the warning.
Do you recommend I set DNA range for my second server too? I will hardly have more than four servers in our environment and that only in a transition/upgrade phase.
[...]
{ "source": "ipahealthcheck.ds.replication", "kw": { "msg": "Replication conflict", "glue": false, "conflict": "namingConflict cn=certmap,dc=geo,dc=ss,dc=lan", "key": "cn=certmap+nsuniqueid=ebb8b88e-a2c811e7-8f22c768-d7e7aa51,dc=geo,dc=ss,dc=lan" }, "uuid": "b9e9c71d-c97c-43be-806f-b37bdc3607c3", "duration": "0.005029", "when": "20191107160829Z", "check": "ReplicationConflictCheck", "result": "ERROR" },
[ snip ]
What you'll want to do is compare the conflict entry with the "real" entry to see if there are any differences. Chances are there aren't and the conflict entries can be deleted.
Assuming I have the following output:
ldapsearch -D "cn=Directory Manager" -W "cn=certmap *"
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <dc=geo,dc=ss,dc=lan> (default) with scope subtree
# filter: cn=certmap *
# requesting: ALL
#

# certmap, geo.ss.lan
dn: cn=certmap,dc=geo,dc=ss,dc=lan
objectClass: top
objectClass: nsContainer
objectClass: ipaCertMapConfigObject
ipaCertMapPromptUsername: FALSE
cn: certmap

# certmaprules, certmap, geo.ss.lan
dn: cn=certmaprules,cn=certmap,dc=geo,dc=ss,dc=lan
objectClass: top
objectClass: nsContainer
cn: certmaprules

# certmap + ebb8b88e-a2c811e7-8f22c768-d7e7aa51, geo.ss.lan
dn: cn=certmap+nsuniqueid=ebb8b88e-a2c811e7-8f22c768-d7e7aa51,dc=geo,dc=ss,dc=lan
objectClass: top
objectClass: nsContainer
objectClass: ipaCertMapConfigObject
ipaCertMapPromptUsername: FALSE
cn: certmap

# certmaprules + ebb8b8b7-a2c811e7-8f22c768-d7e7aa51, certmap + ebb8b88e-a2c811e7-8f22c768-d7e7aa51, geo.ss.lan
dn: cn=certmaprules+nsuniqueid=ebb8b8b7-a2c811e7-8f22c768-d7e7aa51,cn=certmap+nsuniqueid=ebb8b88e-a2c811e7-8f22c768-d7e7aa51,dc=geo,dc=ss,dc=lan
objectClass: top
objectClass: nsContainer
cn: certmaprules

# search result
search: 2
result: 0 Success

# numResponses: 5
# numEntries: 4
Am I safe to delete like this?
ldapdelete -D "cn=Directory Manager" -W -x "cn=certmap+nsuniqueid=ebb8b88e-a2c811e7-8f22c768-d7e7aa51,dc=geo,dc=ss,dc=lan"
Thanks, Petros
Petros Triantafyllidis wrote:
Thanks for your response Rob. Please see my questions inline.
On 11/7/19 6:48 PM, Rob Crittenden via FreeIPA-users wrote:
Petros Triantafyllidis wrote:
Thanks for healthcheck Rob,
In our setup (2 CentOS 7.7 servers, running ipa-server-4.6.5-11.el7.centos.3.x86_64) I get the output below when ipa-healthcheck runs at the replica. The output is identical at master too, except the first warning ("No DNA range defined. If no masters define a range then users and groups cannot be created."). How serious is my case? Any recommendation is highly appreciated.
Thanks again, Petros
[ { "source": "ipahealthcheck.ipa.dna", "kw": { "msg": "No DNA range defined. If no masters define a range then users and groups cannot be created.", "range_start": 0, "next_start": 0, "next_max": 0, "range_max": 0 }, "uuid": "f414f514-38b2-4381-a161-f43ea81ffbae", "duration": "0.578066", "when": "20191107160820Z", "check": "IPADNARangeCheck", "result": "WARNING" },
This is just a heads-up. It means that this master doesn't have a DNA range. If your other master dies then you'll get the dreaded "ERROR: Operations error: Allocation of a new value for range failed".
We don't allocate a range to every master because there are some users that have a LOT of masters and each time a range is allocated it splits in half.
So it may be perfectly fine, hence the warning.
Do you recommend I set DNA range for my second server too? I will hardly have more than four servers in our environment and that only in a transition/upgrade phase.
It shouldn't hurt anything. All you need to do is add a user or group on that master directly. It should see it has no range and get one for itself automatically. This will let you avoid the pain of having to recover the range if the original master ever goes down.
[...]
{ "source": "ipahealthcheck.ds.replication", "kw": { "msg": "Replication conflict", "glue": false, "conflict": "namingConflict cn=certmap,dc=geo,dc=ss,dc=lan", "key": "cn=certmap+nsuniqueid=ebb8b88e-a2c811e7-8f22c768-d7e7aa51,dc=geo,dc=ss,dc=lan" }, "uuid": "b9e9c71d-c97c-43be-806f-b37bdc3607c3", "duration": "0.005029", "when": "20191107160829Z", "check": "ReplicationConflictCheck", "result": "ERROR" },
[ snip ]
What you'll want to do is compare the conflict entry with the "real" entry to see if there are any differences. Chances are there aren't and the conflict entries can be deleted.
Assuming I have the following output:
ldapsearch -D "cn=Directory Manager" -W "cn=certmap *"
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <dc=geo,dc=ss,dc=lan> (default) with scope subtree
# filter: cn=certmap *
# requesting: ALL
#

# certmap, geo.ss.lan
dn: cn=certmap,dc=geo,dc=ss,dc=lan
objectClass: top
objectClass: nsContainer
objectClass: ipaCertMapConfigObject
ipaCertMapPromptUsername: FALSE
cn: certmap

# certmaprules, certmap, geo.ss.lan
dn: cn=certmaprules,cn=certmap,dc=geo,dc=ss,dc=lan
objectClass: top
objectClass: nsContainer
cn: certmaprules

# certmap + ebb8b88e-a2c811e7-8f22c768-d7e7aa51, geo.ss.lan
dn: cn=certmap+nsuniqueid=ebb8b88e-a2c811e7-8f22c768-d7e7aa51,dc=geo,dc=ss,dc=lan
objectClass: top
objectClass: nsContainer
objectClass: ipaCertMapConfigObject
ipaCertMapPromptUsername: FALSE
cn: certmap

# certmaprules + ebb8b8b7-a2c811e7-8f22c768-d7e7aa51, certmap + ebb8b88e-a2c811e7-8f22c768-d7e7aa51, geo.ss.lan
dn: cn=certmaprules+nsuniqueid=ebb8b8b7-a2c811e7-8f22c768-d7e7aa51,cn=certmap+nsuniqueid=ebb8b88e-a2c811e7-8f22c768-d7e7aa51,dc=geo,dc=ss,dc=lan
objectClass: top
objectClass: nsContainer
cn: certmaprules

# search result
search: 2
result: 0 Success

# numResponses: 5
# numEntries: 4
Am I safe to delete like this?
ldapdelete -D "cn=Directory Manager" -W -x "cn=certmap+nsuniqueid=ebb8b88e-a2c811e7-8f22c768-d7e7aa51,dc=geo,dc=ss,dc=lan"
Yes
rob
Thanks, Petros
-- Dr. TRIANTAFYLLIDIS PETROS Aristotle University - Department of Geophysics, POBox 112, 54124 Thessaloniki,GREECE-TEL:+30-2310998585,FAX:2310991403
Hi Rob,
On Tue, Nov 5, 2019 at 4:35 PM Rob Crittenden via FreeIPA-users < freeipa-users@lists.fedorahosted.org> wrote:
I made an EPEL 7 build in COPR, https://copr.fedorainfracloud.org/coprs/rcritten/ipa-healthcheck/
The more feedback I get on it the better and more useful I can make it.
Awesome work, thanks. I tried it running in my personal IPA instance. I get the following:
WARNING "No DNA range defined. If no masters define a range then users and groups cannot be created."
This is on my replica and was already reported by someone else. Fixed it by adding and removing a user on the web ui of the replica, as you described.
CRITICAL "[Errno 2] No such file or directory: '/var/log/audit/'"
This also has been reported; my replica is running as an LXC container under Proxmox. Hacked it by creating the directory.
WARNING "Unexpected SRV entry in DNS" "_ntp._udp.<my_domain>.:<replica hostname>."
I think this is correct because I'm not running ntpd on the replica. I've removed the entry.
WARNING "Got 1 ipa-ca A records, expected 2"
WARNING "Expected SRV record missing" "_<service>._(tcp|udp).<my domain>.:<replica hostname>."
Those are problematic for me, I guess because I'm running a probably unsupported configuration:
* My first master is public on the Internet
* My second master is not public on the Internet
* Public DNS contains entries for the first master
* The DNS server which servers in the second master's network use contains entries for both masters
* My first public master uses another DNS server* which does not have specific IPA entries and thus uses the public Internet DNS's entries, which do not contain the second master
  (* actually the DNS server for the first master is running on the same host, using dnsmasq)
I "fixed" this by putting all the DNS entries in all my internal DNS servers, but then healthcheck won't be verifying the public Internet's DNS records. This is not ideal, but I think it's fine.
...
I now have clean runs in all my masters, so I'll work to add it on my monitoring agent ( https://github.com/alexpdp7/ragent ). I'm running my agent every minute, and ipa-healthcheck seems to be quite expensive to run, so I'll probably run it in cron every hour or so and then have the agent gather the results.
Cheers,
Álex
Alex Corcoles via FreeIPA-users wrote:
Hi Rob,
On Tue, Nov 5, 2019 at 4:35 PM Rob Crittenden via FreeIPA-users <freeipa-users@lists.fedorahosted.org mailto:freeipa-users@lists.fedorahosted.org> wrote:
I made an EPEL 7 build in COPR, https://copr.fedorainfracloud.org/coprs/rcritten/ipa-healthcheck/ The more feedback I get on it the better and more useful I can make it.
Awesome work, thanks. I tried it running in my personal IPA instance. I get the following:
WARNING "No DNA range defined. If no masters define a range then users and groups cannot be created."
This is on my replica and was already reported by someone else. Fixed it by adding and removing a user on the web ui of the replica, as you described.
I'm open to suggestions on this. I don't mean for it to scare anyone but the consequences can be head scratching. I have a blog entry on it that gets quite a few views.
CRITICAL "[Errno 2] No such file or directory: '/var/log/audit/'"
This also has been reported; my replica is running as an LXC container under Proxmox. Hacked it by creating the directory.
I've got a PR upstream to not enforce /var/log/audit when healthcheck is executed inside a container. I will hopefully have an updated build later this week.
WARNING "Unexpected SRV entry in DNS" "_ntp._udp.<my_domain>.:<replica hostname>."
I think this is correct because I'm not running ntpd on the replica. I've removed the entry.
Ok, that very well could be true.
WARNING "Got 1 ipa-ca A records, expected 2"
WARNING "Expected SRV record missing" "_<service>._(tcp|udp).<my domain>.:<replica hostname>."
Those are problematic for me, I guess because I'm running a probably unsupported configuration:
- My first master is public on the Internet
- My second master is not public on the Internet
- Public DNS contains entries for the first master
- The DNS server which servers in the second master's network use
contains entries for both masters
- My first public master uses another DNS server* which does not have
specific IPA entries and thus uses the public Internet DNS's entries, which do not contain the second master (* actually the DNS server for the first master is running on the same host, using dnsmasq)
I "fixed" this by putting all the DNS entries in all my internal DNS servers, but then healthcheck won't be verifying the public Internet's DNS records. This is not ideal, but I think it's fine.
Ok yes, this is certainly not a scenario I imagined.
...
I now have clean runs in all my masters, so I'll work to add it on my monitoring agent ( https://github.com/alexpdp7/ragent ). I'm running my agent every minute, and ipa-healthcheck seems to be quite expensive to run, so I'll probably run it in cron every hour or so and then have the agent gather the results.
You can probably get away with running it once a day. With the exception of the replication checks these aren't all that dynamic. You would catch things like permission and FS space issues earlier I suppose.
I'll make a mental note to see if I can categorize things that can be frequently run vs those that can probably get by on a daily basis. I don't want to explode the number of switches but it might make sense to check services frequently and certs daily, for example.
This is great feedback, thanks!
rob
On Mon, Nov 11, 2019 at 1:30 AM Rob Crittenden rcritten@redhat.com wrote:
I'm open to suggestions on this. I don't mean for it to scare anyone but the consequences can be head scratching. I have a blog entry on it that gets quite a few views.
Well, I think the ideal would be to prevent this from happening in FreeIPA. If that doesn't make sense, the next best thing would be to report what to do when the error is shown.
Ok yes, this is certainly not a scenario I imagined.
Yeah, I think running FreeIPA servers on the public Internet is really not a supported configuration, so I wouldn't worry too much about this (IMHO, supporting running FreeIPA on the public Internet would be nice, but this has already been discussed).
You can probably get away with running it once a day. With the exception of the replication checks these aren't all that dynamic. You would catch things like permission and FS space issues earlier I suppose.
I'll make a mental note to see if I can categorize things that can be frequently run vs those that can probably get by on a daily basis. I don't want to explode the number of switches but it might make sense to check services frequently and certs daily, for example.
Oh, I think running a check daily is probably the way to go. FS space is of course something that needs to be monitored closely, but I would expect most people who would use healthcheck are already monitoring that.
I would guess that if you do standard monitoring on your FreeIPA hosts (ping, agent-based ping, disk space/inodes, services running, clock properly synchronized, URL checks) + stuff like sssd caching + replication the chances of FreeIPA having a significant failure that goes undetected are pretty slim, so I wouldn't worry much about that use case.
It's just that it is convenient for me to roll this up in my monitoring which runs daily, but that's not a use-case you should consider. Daily monitoring should be fine for most.
Perhaps I would suggest adding a /health public (or IP-restricted) URL to FreeIPA, that would be far more useful, IMHO.
This is great feedback, thanks!
I worked for a few years in an organization where monitoring was very important, so I kinda love tools which are easily monitorizable :)
Cheers,
Álex
I use Kerberos at home. So do a couple of faculty. I have a Kerberos https: proxy set up on one of our public web servers. This is less than ideal, as it requires installing separate Kerberos software for both Mac and Windows. The Kerberos protocol is standardized across OSs, but not the proxy support (nor the OTP support).
On Nov 11, 2019, at 5:00 AM, Alex Corcoles via FreeIPA-users <freeipa-users@lists.fedorahosted.orgmailto:freeipa-users@lists.fedorahosted.org> wrote:
Yeah, I think running FreeIPA servers on the public Internet is really not a supported configuration, so I wouldn't worry too much about this (IMHO, supporting running FreeIPA on the public Internet would be nice, but this has already been discussed).
On Mon, Nov 11, 2019 at 5:45 PM Charles Hedrick hedrick@rutgers.edu wrote:
I use Kerberos at home. So do a couple of faculty. I have a Kerberos https: proxy set up on one of our public web servers. This is less than ideal, as it requires installing separate Kerberos software for both Mac and Windows. The Kerberos protocol is standardized across OSs, but not the proxy support (nor the OTP support).
Oh, FreeIPA runs a proxy in the standard setup (see /etc/httpd/conf.d/ipa-kdc-proxy.conf ), so I suppose clientwise if you just expose tcp:443 to the Internet things should just work.
Wouldn’t that also expose the main web UI, and IPA commands? Seems like a much larger attack surface.
On Nov 11, 2019, at 1:27 PM, Alex Corcoles <alex@corcoles.netmailto:alex@corcoles.net> wrote:
On Mon, Nov 11, 2019 at 5:45 PM Charles Hedrick <hedrick@rutgers.edumailto:hedrick@rutgers.edu> wrote: I use Kerberos at home. So do a couple of faculty. I have a Kerberos https: proxy set up on one of our public web servers. This is less than ideal, as it requires installing separate Kerberos software for both Mac and Windows. The Kerberos protocol is standardized across OSs, but not the proxy support (nor the OTP support).
Oh, FreeIPA runs a proxy in the standard setup (see /etc/httpd/conf.d/ipa-kdc-proxy.conf ), so I suppose clientwise if you just expose tcp:443 to the Internet things should just work.
On Nov 10, 2019, at 7:30 PM, Rob Crittenden via FreeIPA-users freeipa-users@lists.fedorahosted.org wrote:
You can probably get away with running it once a day. With the exception of the replication checks these aren't all that dynamic. You would catch things like permission and FS space issues earlier I suppose.
I'll make a mental note to see if I can categorize things that can be frequently run vs those that can probably get by on a daily basis. I don't want to explode the number of switches but it might make sense to check services frequently and certs daily, for example.
If you’re making these sorts of changes, might I suggest a flag to generate Nagios safe output that is just a summary of how many warnings/errors were found like the way checkipaconsistency does it? Otherwise we will have to come up with a wrapper to parse the output and create the correct output format.
Thanks, — Bob Jones Lead Linux Services Engineer ITS ECP - Linux Services
Jones, Bob (rwj5d) via FreeIPA-users wrote:
On Nov 10, 2019, at 7:30 PM, Rob Crittenden via FreeIPA-users freeipa-users@lists.fedorahosted.org wrote:
You can probably get away with running it once a day. With the exception of the replication checks these aren't all that dynamic. You would catch things like permission and FS space issues earlier I suppose.
I'll make a mental note to see if I can categorize things that can be frequently run vs those that can probably get by on a daily basis. I don't want to explode the number of switches but it might make sense to check services frequently and certs daily, for example.
If you’re making these sorts of changes, might I suggest a flag to generate Nagios safe output that is just a summary of how many warnings/errors were found like the way checkipaconsistency does it? Otherwise we will have to come up with a wrapper to parse the output and create the correct output format.
I don't know what you mean by "nagios-safe output". Are you suggesting a sort of --summary option that just reports the number and types of output?
rob
On Mon, Nov 11, 2019 at 3:48 PM Rob Crittenden rcritten@redhat.com wrote:
Jones, Bob (rwj5d) via FreeIPA-users wrote:
If you’re making these sorts of changes, might I suggest a flag to
generate Nagios safe output that is just a summary of how many warnings/errors were found like the way checkipaconsistency does it? Otherwise we will have to come up with a wrapper to parse the output and create the correct output format.
I don't know what you mean by "nagios-safe output". Are you suggesting a sort of --summary option that just reports the number and types of output?
I think the idea is to follow the Nagios plugin API:
https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/pluginap...
Strictly speaking, the output of a Nagios plugin is not so important- unless you manage to output "valid" perfdata, Nagios will chug along (maybe it will not show pretty service status) and things will just work IFF the return code from the process follows the Nagios standards (0: OK, 1: WARNING, 2: CRITICAL, 3 or other: UNKNOWN).
IMHO, if the tool provides structured output like it currently does (JSON), writing a Nagios wrapper should be "easy" and it wouldn't be significantly worse than implementing "Nagios"-mode within ipa-healthcheck.
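As an illustration of such a wrapper, here is a rough sketch (my own invention — the `nagios_summary` helper and exact counting are hypothetical, and it assumes a full run rather than --failures-only, since successes must be counted too): it parses the JSON, rolls the severities up into a one-line summary, and maps the worst severity to a Nagios exit code.

```python
import json

# Nagios exit codes: 0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN.
SEVERITY = {"SUCCESS": 0, "WARNING": 1, "ERROR": 2, "CRITICAL": 2}

def nagios_summary(json_text):
    """Roll ipa-healthcheck JSON output up into a Nagios status line + exit code."""
    results = json.loads(json_text)
    worst = 0
    failed = 0
    for r in results:
        code = SEVERITY.get(r.get("result"), 3)  # unknown severities -> UNKNOWN
        if code:
            failed += 1
        worst = max(worst, code)
    state = {0: "OK", 1: "WARNING", 2: "CRITICAL"}.get(worst, "UNKNOWN")
    line = "%s - %d/%d checks passed" % (state, len(results) - failed, len(results))
    return line, worst

line, code = nagios_summary('[{"result": "WARNING"}]')
print(line)   # -> WARNING - 0/1 checks passed
```

In a real plugin you would call sys.exit(code) after printing the line, since Nagios keys off the exit status rather than the text.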
OTOH, Nagios is probably one of the most popular monitoring solutions right now, IIRC, it's the only monitoring solution that RedHat packages in RHEL and a lot of other monitoring solutions can use Nagios plugins, so it would be very nice if yum install freeipa-server automatically installed a Nagios check.
Cheers,
Álex
Yes, the checkipaconsistency normal output is something like this:
+--------------------+----------+----------+----------+-------+
| FreeIPA servers:   | host01   | host02   | host03   | STATE |
+--------------------+----------+----------+----------+-------+
| Active Users       | 8        | 8        | 8        | OK    |
| Stage Users        | 0        | 0        | 0        | OK    |
| Preserved Users    | 0        | 0        | 0        | OK    |
| Hosts              | 68       | 68       | 68       | OK    |
| Services           | 13       | 13       | 13       | OK    |
| User Groups        | 75       | 75       | 75       | OK    |
| Host Groups        | 12       | 12       | 12       | OK    |
| Netgroups          | 11       | 11       | 11       | OK    |
| HBAC Rules         | 34       | 34       | 34       | OK    |
| SUDO Rules         | 23       | 23       | 23       | OK    |
| DNS Zones          | 0        | 0        | 0        | OK    |
| Certificates       | 27       | 27       | 27       | OK    |
| LDAP Conflicts     | 0        | 0        | 0        | OK    |
| Ghost Replicas     | 0        | 0        | 0        | OK    |
| Anonymous BIND     | ON       | ON       | ON       | OK    |
| Microsoft ADTrust  | True     | True     | True     | OK    |
| Replication Status | host02 0 | host03 0 | host02 0 | OK    |
|                    | host03 0 | host01 0 | host01 0 |       |
+--------------------+----------+----------+----------+-------+
However if you pass it the -n flag it looks something like this:
OK - 17/17 checks passed
That’s the sort of format for the summary output, or if something is wrong maybe something like:
WARNING - 16/17 checks passed, 1 check warning
The other part of Nagios formatted output would be the exit code, as 0 means ok, 1 is warning, 2 is critical, and 3 is unknown.
So, with a flag for nagios output you would just have summary output line and exit with the correct code.
I have never used python otherwise I would look to contribute this myself.
I am probably not the only user who would find this useful.
— Bob Jones Lead Linux Services Engineer ITS ECP - Linux Services
On Nov 11, 2019, at 10:00 AM, Alex Corcoles via FreeIPA-users freeipa-users@lists.fedorahosted.org wrote:
On Mon, Nov 11, 2019 at 3:48 PM Rob Crittenden rcritten@redhat.com wrote: Jones, Bob (rwj5d) via FreeIPA-users wrote:
If you’re making these sorts of changes, might I suggest a flag to generate Nagios safe output that is just a summary of how many warnings/errors were found like the way checkipaconsistency does it? Otherwise we will have to come up with a wrapper to parse the output and create the correct output format.
I don't know what you mean by "nagios-safe output". Are you suggesting a sort of --summary option that just reports the number and types of output?
I think the idea is to follow the Nagios plugin API:
https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/pluginap...
Strictly speaking, the output of a Nagios plugin is not so important- unless you manage to output "valid" perfdata, Nagios will chug along (maybe it will not show pretty service status) and things will just work IFF the return code from the process follows the Nagios standards (0: OK, 1: WARNING, 2: CRITICAL, 3 or other: UNKNOWN).
IMHO, if the tool provides structured output like it currently does (JSON), writing a Nagios wrapper should be "easy" and it wouldn't be significantly worse than implementing "Nagios"-mode within ipa-healthcheck.
OTOH, Nagios is probably one of the most popular monitoring solutions right now, IIRC, it's the only monitoring solution that RedHat packages in RHEL and a lot of other monitoring solutions can use Nagios plugins, so it would be very nice if yum install freeipa-server automatically installed a Nagios check.
Cheers,
Álex
-- ___ {~._.~} ( Y ) ()~*~() mail: alex at corcoles dot net (_)-(_) http://alex.corcoles.net/
FreeIPA-users mailing list -- freeipa-users@lists.fedorahosted.org To unsubscribe send an email to freeipa-users-leave@lists.fedorahosted.org Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/ List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines List Archives: https://lists.fedorahosted.org/archives/list/freeipa-users@lists.fedorahoste...
Alex Corcoles wrote:
On Mon, Nov 11, 2019 at 3:48 PM Rob Crittenden <rcritten@redhat.com mailto:rcritten@redhat.com> wrote:
Jones, Bob (rwj5d) via FreeIPA-users wrote:
> If you’re making these sorts of changes, might I suggest a flag to generate Nagios safe output that is just a summary of how many warnings/errors were found like the way checkipaconsistency does it? Otherwise we will have to come up with a wrapper to parse the output and create the correct output format.

I don't know what you mean by "nagios-safe output". Are you suggesting a sort of --summary option that just reports the number and types of output?
I think the idea is to follow the Nagios plugin API:
https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/pluginap...
Strictly speaking, the output of a Nagios plugin is not so important- unless you manage to output "valid" perfdata, Nagios will chug along (maybe it will not show pretty service status) and things will just work IFF the return code from the process follows the Nagios standards (0: OK, 1: WARNING, 2: CRITICAL, 3 or other: UNKNOWN).
IMHO, if the tool provides structured output like it currently does (JSON), writing a Nagios wrapper should be "easy" and it wouldn't be significantly worse than implementing "Nagios"-mode within ipa-healthcheck.
OTOH, Nagios is probably one of the most popular monitoring solutions right now, IIRC, it's the only monitoring solution that RedHat packages in RHEL and a lot of other monitoring solutions can use Nagios plugins, so it would be very nice if yum install freeipa-server automatically installed a Nagios check.
I looked at this prior to writing healthcheck and managed to write a generic Nagios handled that slurped in the healthcheck JSON output and generated items for each one. It was just a POC to see if I was heading in the right direction but it seemed to work.
I didn't expect that ipa-healthcheck return value would be all that useful other than "the tool itself blew up"
rob
OK, I just set up Nagios monitoring with ipa-healthcheck. In case someone wants to replicate, this is roughly what I did with Puppet:
FreeIPA Puppet manifest:
Install the package:
exec {'/usr/bin/curl https://copr.fedorainfracloud.org/coprs/rcritten/ipa-healthcheck/repo/epel-7... -o /etc/yum.repos.d/rcritten-ipa-healthcheck-epel-7.repo':
  creates => '/etc/yum.repos.d/rcritten-ipa-healthcheck-epel-7.repo',
}
->
package {'freeipa-healthcheck':}
Ensure /var/log/audit exists:
file {'/var/log/audit/':
  ensure => directory,
}
Run the process daily and put the output in /var/www/html
file {'/etc/cron.daily/ipa-healthcheck':
  content => "#!/bin/sh
/bin/ipa-healthcheck --failures-only --output-file /var/www/html/ipa-healthcheck
",
  mode    => "0500",
}
Nagios configuration:
define hostgroup { hostgroup_name ipa }
define servicegroup{ servicegroup_name ipa-healthcheck }
define service{
  use                 generic-service
  check_command       check_http!-S -u /ipa-healthcheck -M 173800 -l -r '^[]$'
  service_description ipa-healthcheck
  servicegroups       ipa-healthcheck
  hostgroup_name      ipa
}

; I check that /var/www/html/ipa-healthcheck contains [] and that it has been updated in the last two days + 1000s
Now I just need to add my IPA servers to the ipa hostgroup and they'll automatically get the check.
Cheers,
Álex
On Mon, Nov 11, 2019 at 8:03 PM Rob Crittenden via FreeIPA-users < freeipa-users@lists.fedorahosted.org> wrote:
Alex Corcoles wrote:
On Mon, Nov 11, 2019 at 3:48 PM Rob Crittenden <rcritten@redhat.com mailto:rcritten@redhat.com> wrote:
Jones, Bob (rwj5d) via FreeIPA-users wrote:
> If you’re making these sorts of changes, might I suggest a flag to generate Nagios safe output that is just a summary of how many warnings/errors were found like the way checkipaconsistency does it? Otherwise we will have to come up with a wrapper to parse the output and create the correct output format.

I don't know what you mean by "nagios-safe output". Are you suggesting a sort of --summary option that just reports the number and types of output?
I think the idea is to follow the Nagios plugin API:
https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/pluginap...
Strictly speaking, the output of a Nagios plugin is not so important- unless you manage to output "valid" perfdata, Nagios will chug along (maybe it will not show pretty service status) and things will just work IFF the return code from the process follows the Nagios standards (0: OK, 1: WARNING, 2: CRITICAL, 3 or other: UNKNOWN).
IMHO, if the tool provides structured output like it currently does (JSON), writing a Nagios wrapper should be "easy" and it wouldn't be significantly worse than implementing "Nagios"-mode within ipa-healthcheck.
OTOH, Nagios is probably one of the most popular monitoring solutions right now, IIRC, it's the only monitoring solution that RedHat packages in RHEL and a lot of other monitoring solutions can use Nagios plugins, so it would be very nice if yum install freeipa-server automatically installed a Nagios check.
I looked at this prior to writing healthcheck and managed to write a generic Nagios handled that slurped in the healthcheck JSON output and generated items for each one. It was just a POC to see if I was heading in the right direction but it seemed to work.
I didn't expect that ipa-healthcheck return value would be all that useful other than "the tool itself blew up"
rob
Hi Rob,
I have run your tool and it reports some issues. I wonder if you could help me figure out what they are. Our problem is that we often have staff who lose their groups, and this has been happening for 3 years. sss_cache -u username sometimes fixes it. Any advice greatly welcome. Note that I have redacted our server names; the master is "vmpdr-linuxidm......"
Really keen to solve this but no expert. CentOS 7.8 server and clients, ipa-server-4.6.6.
[ { "source": "ipahealthcheck.ipa.idns", "kw": { "msg": "Unexpected SRV entry in DNS", "key": "_ntp._udp.unix.foo.org.au.:vmdr-linuxidm.unix.foo.org.au." }, "uuid": "57735f69-6d98-4ae1-9f0a-dd848bbfa1f7", "duration": "0.024868", "when": "20200820104327Z", "check": "IPADNSSystemRecordsCheck", "result": "WARNING" }, { "source": "ipahealthcheck.ipa.idns", "kw": { "msg": "Expected SRV record missing", "key": "_kerberos._tcp.dc._msdcs.unix.foo.org.au.:vmpr-linuxidm.unix.foo.org.au." }, "uuid": "3b789068-16ff-4684-bb5e-3add8a62b2b8", "duration": "0.025853", "when": "20200820104327Z", "check": "IPADNSSystemRecordsCheck", "result": "WARNING" }, { "source": "ipahealthcheck.ipa.idns", "kw": { "msg": "Unexpected SRV entry in DNS", "key": "_kerberos._tcp.unix.foo.org.au.:vmpr-linuxidm.unix.foo.org.au." }, "uuid": "bab58235-1a9b-48bc-9b4c-b0e75b91d619", "duration": "0.027710", "when": "20200820104327Z", "check": "IPADNSSystemRecordsCheck", "result": "WARNING" }, { "source": "ipahealthcheck.ipa.idns", "kw": { "msg": "Unexpected SRV entry in DNS", "key": "_kerberos._tcp.unix.foo.org.au.:vmdr-linuxidm.unix.foo.org.au." }, "uuid": "44a47316-ba13-4226-9625-2f29f369cdd4", "duration": "0.027825", "when": "20200820104327Z", "check": "IPADNSSystemRecordsCheck", "result": "WARNING" }, { "source": "ipahealthcheck.ipa.idns", "kw": { "msg": "Expected SRV record missing", "key": "_ldap._tcp.Default-First-Site-Name._sites.dc._msdcs.unix.foo.org.au.:vmpr-linuxidm.unix.foo.org.au." }, "uuid": "313a97f5-9f05-4465-a50f-27996c22c306", "duration": "0.028995", "when": "20200820104327Z", "check": "IPADNSSystemRecordsCheck", "result": "WARNING" }, { "source": "ipahealthcheck.ipa.idns", "kw": { "msg": "Unexpected SRV entry in DNS", "key": "_kerberos._udp.unix.foo.org.au.:vmdr-linuxidm.unix.foo.org.au." 
}, "uuid": "d00274ff-12a9-465f-957e-392c4edd7e5a", "duration": "0.030514", "when": "20200820104327Z", "check": "IPADNSSystemRecordsCheck", "result": "WARNING" }, { "source": "ipahealthcheck.ipa.idns", "kw": { "msg": "Unexpected SRV entry in DNS", "key": "_kerberos-master._udp.unix.foo.org.au.:vmdr-linuxidm.unix.foo.org.au." }, "uuid": "0e50f8e7-6321-429a-b84e-3a88922ec07b", "duration": "0.031876", "when": "20200820104327Z", "check": "IPADNSSystemRecordsCheck", "result": "WARNING" }, { "source": "ipahealthcheck.ipa.idns", "kw": { "msg": "Unexpected SRV entry in DNS", "key": "_kpasswd._udp.unix.foo.org.au.:vmdr-linuxidm.unix.foo.org.au." }, "uuid": "011bf574-e7ea-4f5d-8bf6-f5ecdd722ecd", "duration": "0.033430", "when": "20200820104327Z", "check": "IPADNSSystemRecordsCheck", "result": "WARNING" }, { "source": "ipahealthcheck.ipa.idns", "kw": { "msg": "Unexpected SRV entry in DNS", "key": "_kpasswd._tcp.unix.foo.org.au.:vmdr-linuxidm.unix.foo.org.au." }, "uuid": "d00839d9-6e83-481d-9685-8eaca6caea14", "duration": "0.034777", "when": "20200820104327Z", "check": "IPADNSSystemRecordsCheck", "result": "WARNING" }, { "source": "ipahealthcheck.ipa.idns", "kw": { "msg": "Expected SRV record missing", "key": "_kerberos._udp.Default-First-Site-Name._sites.dc._msdcs.unix.foo.org.au.:vmpr-linuxidm.unix.foo.org.au." }, "uuid": "8bff3eb5-521d-4029-b368-c1b4cd39047c", "duration": "0.036379", "when": "20200820104327Z", "check": "IPADNSSystemRecordsCheck", "result": "WARNING" }, { "source": "ipahealthcheck.ipa.idns", "kw": { "msg": "Unexpected SRV entry in DNS", "key": "_ldap._tcp.unix.foo.org.au.:vmdr-linuxidm.unix.foo.org.au." }, "uuid": "2091880e-5777-4854-abb4-bc14c032b1af", "duration": "0.037861", "when": "20200820104327Z", "check": "IPADNSSystemRecordsCheck", "result": "WARNING" }, { "source": "ipahealthcheck.ipa.idns", "kw": { "msg": "Expected SRV record missing", "key": "_ldap._tcp.dc._msdcs.unix.foo.org.au.:vmpr-linuxidm.unix.foo.org.au." 
}, "uuid": "8f9862fa-45a0-4bdd-b561-93a6a15ac7f1", "duration": "0.038836", "when": "20200820104327Z", "check": "IPADNSSystemRecordsCheck", "result": "WARNING" }, { "source": "ipahealthcheck.ipa.idns", "kw": { "msg": "Unexpected SRV entry in DNS", "key": "_kerberos-master._tcp.unix.foo.org.au.:vmdr-linuxidm.unix.foo.org.au." }, "uuid": "cfd7b896-da90-4ac4-9b08-eccdbafeca30", "duration": "0.040348", "when": "20200820104327Z", "check": "IPADNSSystemRecordsCheck", "result": "WARNING" }, { "source": "ipahealthcheck.ipa.idns", "kw": { "msg": "Expected SRV record missing", "key": "_kerberos._tcp.Default-First-Site-Name._sites.dc._msdcs.unix.foo.org.au.:vmpr-linuxidm.unix.foo.org.au." }, "uuid": "3c38ad1e-96a5-41fd-a161-56dde9601896", "duration": "0.041473", "when": "20200820104327Z", "check": "IPADNSSystemRecordsCheck", "result": "WARNING" }, { "source": "ipahealthcheck.ipa.idns", "kw": { "msg": "Expected SRV record missing", "key": "_kerberos._udp.dc._msdcs.unix.foo.org.au.:vmpr-linuxidm.unix.foo.org.au." 
}, "uuid": "fd6a163f-a338-4ff0-a2f2-9fb00064ab93", "duration": "0.042447", "when": "20200820104327Z", "check": "IPADNSSystemRecordsCheck", "result": "WARNING" }, { "source": "ipahealthcheck.ipa.idns", "kw": { "msg": "expected ipa-ca IPAddr missing", "key": "10.126.18.129" }, "uuid": "59581cec-e08f-4e67-aed1-697698d66e92", "duration": "0.044304", "when": "20200820104327Z", "check": "IPADNSSystemRecordsCheck", "result": "WARNING" }, { "source": "ipahealthcheck.ipa.idns", "kw": { "expected": 1, "count": 2, "msg": "Got {count} ipa-ca A records, expected {expected}" }, "uuid": "6852b70e-b366-44a3-bc1f-6bde42f79209", "duration": "0.044392", "when": "20200820104327Z", "check": "IPADNSSystemRecordsCheck", "result": "WARNING" }, { "source": "ipahealthcheck.ipa.topology", "kw": { "msg": "topologysuffix-verify domain failed, Topology management requires minimum domain level 1 " }, "uuid": "e5386d69-3028-4c71-8a93-87de8e954682", "duration": "0.002170", "when": "20200820104332Z", "check": "IPATopologyDomainCheck", "result": "ERROR" }, { "source": "ipahealthcheck.ipa.topology", "kw": { "msg": "topologysuffix-verify domain failed, Topology management requires minimum domain level 1 " }, "uuid": "c50ccc80-d031-4a52-a097-43b6b09c46c6", "duration": "0.005159", "when": "20200820104332Z", "check": "IPATopologyDomainCheck", "result": "ERROR" } ]
The "Unexpected SRV entry in DNS" warnings mean that SRV records in the IPA domain advertise IPA services on servers that are not actually IPA servers.
Similarly, "Expected SRV record missing" means that an SRV record for an IPA service is missing for one or more IPA servers.
"expected ipa-ca IPAddr missing" means that the IPA server at 10.126.18.129 is not in the ipa-ca CNAME (and also caught with the count of ipa-ca records).
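One way to see what the zone is actually serving is to query a few of the flagged records directly and compare the answers with the list of servers IPA knows about. The commands below are only a sketch, using the zone name from the report; adjust names for your environment:

```shell
# Query two of the SRV records flagged above (zone name from the report):
dig +short SRV _ldap._tcp.unix.foo.org.au
dig +short SRV _kerberos._udp.unix.foo.org.au

# List the servers IPA itself considers masters, for comparison:
ipa-replica-manage list

# See which addresses ipa-ca currently resolves to:
dig +short ipa-ca.unix.foo.org.au
```

Any host that shows up in the SRV answers but not in the replica list is a leftover record.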
The final errors are due to your installation still using domain level 0. You can ignore these if you don't want to or can't update domain levels. https://www.freeipa.org/page/Domain_Levels
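As noted when the tool was announced, many of the checks run independently, so a single stale host can fan out into a long list of warnings. A rough way to spot that (a sketch of my own, not part of the tool) is to tally the repeated message templates in a saved report:

```shell
# The report is a JSON list; each finding carries a "msg" template.
# Tally how often each message appears to spot a common root cause.
# (On the server, generate the real report first with:
#   ipa-healthcheck --failures-only > report.json)
tally_msgs() {
    grep -o '"msg": "[^"]*"' "$1" | sort | uniq -c | sort -rn
}

# Example run against a small two-entry sample:
printf '%s\n' '[ {"kw": {"msg": "Unexpected SRV entry in DNS"}}, {"kw": {"msg": "Unexpected SRV entry in DNS"}} ]' > /tmp/sample.json
tally_msgs /tmp/sample.json
```

A message that dominates the tally is usually the place to start digging.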
rob
Hi Rob,
Could this be because I removed the replica and there are records still dangling in the config? Is there a way to find out where they are and remove them?
At the moment we have no active replicas, as I wanted to simplify the config so as to find the root cause of intermittent loss of groups. Looks like this could be adding to my headaches.
And finally, will having the domain level not set to 1 prevent me from creating replicas in the first place?
On Fri, Aug 21, 2020 at 1:08 AM Chris Welsh via FreeIPA-users freeipa-users@lists.fedorahosted.org wrote:
Hi Rob,
Could this be because I removed the replica and there are records still dangling in the config? Is there a way to find out where they are and remove them?
At worst, use ldapsearch to identify remaining objects.
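A sketch of what that might look like (substitute your own suffix, and the FQDN of the removed replica where the placeholder appears):

```shell
# Check whether the removed server is still listed as an IPA master:
ldapsearch -Y GSSAPI -b "cn=masters,cn=ipa,cn=etc,dc=unix,dc=foo,dc=org,dc=au" \
    "(cn=<removed-replica-fqdn>)" dn

# Check for leftover replication agreements in the 389-ds config:
ldapsearch -D "cn=Directory Manager" -W -b "cn=mapping tree,cn=config" \
    "(objectClass=nsds5ReplicationAgreement)" nsDS5ReplicaHost
```

Any entry still naming the removed host is a candidate for cleanup.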
At the moment we have no active replicas,
So you have a single instance? OK. Please don't run that for too long.
as I wanted to simplify the config so as to find the root cause of intermittent loss of groups. Looks like this could be adding to my headaches.
And finally, will having the domain level not set to 1 prevent me from creating replicas in the first place?
Domain Level 0 (DL0) support has been removed from current FreeIPA versions. You will still be able to create replicas using old versions, but ideally, once the above problem is sorted out, you might be better off updating to DL1.
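For reference, the level can be checked and raised from the CLI; raising it is one-way, so do it only after the stale-replica cleanup is done:

```shell
# Show the current domain level (0 on old-style installations):
ipa domainlevel-get

# Raise to level 1; this cannot be lowered afterwards:
ipa domainlevel-set 1
```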
Hi François,
Thx for getting back to me. So far no luck.
On Fri, 21 Aug 2020 at 9:05 pm, François Cami fcami@redhat.com wrote:
On Fri, Aug 21, 2020 at 1:08 AM Chris Welsh via FreeIPA-users freeipa-users@lists.fedorahosted.org wrote:
Hi Rob,
Could this be because I removed the replica and there are records still dangling in the config? Is there a way to find out where they are and remove them?
At worst, use ldapsearch to identify remaining objects.
I have now moved to domain level 1 and re-joined the replica (2nd master with CA), but got the original message back in the new master's logs, which was the reason I removed it in the first place (I had tried to simplify the setup to get to the root cause of the intermittent loss of groups for users). And unfortunately this did not solve the issue of users losing their group credentials (I do not enumerate groups); 6 users today. :-(
Aug 21 19:22:38 vmdr-linuxidm ns-slapd: [21/Aug/2020:19:22:38.153428704 +1000] - ERR - ipapwd_getkeytab - [file ipa_pwd_extop.c, line 1647]: Not allowed to retrieve keytab on [UNIX$@FOO.ORG.AU UNIX$@PETERMAC.ORG.AU] as user [fqdn=vmdr-linuxidm.unix.foo.org.au,cn=computers,cn=accounts,dc=unix,dc=foo,dc=org,dc=au]!
Aug 21 19:22:38 vmdr-linuxidm sssd: Failed to parse result: Insufficient access rights
Aug 21 19:22:38 vmdr-linuxidm sssd: Failed to get keytab
Aug 21 19:22:38 vmdr-linuxidm ns-slapd: [21/Aug/2020:19:22:38.254032634 +1000] - ERR - is_allowed_to_access_attr - [file ipa_pwd_extop.c, line 787]: slapi_access_allowed does not allow READ to ipaProtectedOperation;read_keys!
At the moment we have no active replicas,
So you have a single instance? OK. Please don't run that for too long.
Thx
as I wanted to simplify the config so as to find the root cause of intermittent loss of groups. Looks like this could be adding to my headaches.
And finally, will having the domain level not set to 1 prevent me from creating replicas in the first place?
Domain Level 0 (DL0) support has been removed. You will be able to create replicas using old versions, but ideally, once the above problem is sorted out, you might be better off updating to DL1.
Thx
On Fri, 21 Aug 2020, 6:42 am Rob Crittenden, rcritten@redhat.com
wrote:
Chris Welsh via FreeIPA-users wrote:
Hi Rob,
I have run your tool and found it to report some issues. I wonder if
you could help me figure out what they are. Our problem is that we often have staff who loose their groups and this has been happening for 3 years. sss_cache -u username sometimes fixes it. Any advise greatly welcome. Note that I have removed our send are master “vmpdr-linuxidm......”
Really ken to solve this but no expert.
Centos 7.8 server and clients
ipa-server-4.6.6
The "Unexpected SRV entry in DNS" warnings mean that some servers are
defined in the IPA domain with services that IPA provides but those
servers aren't IPA servers.
Similarly, "Expected SRV record missing" means that an SRV record is
missing for an IPA service on one or more IPA servers.
"expected ipa-ca IPAddr missing" means that the IPA server at
10.126.18.129 is not in the ipa-ca CNAME (and also caught with the count
of ipa-ca records).
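To verify these warnings by hand, one can generate dig queries for each SRV name the report mentions. A small sketch; the service list is copied from the record names visible in the report below, and the domain is the one from this thread:

```shell
# Build dig queries for the SRV names the healthcheck report mentions.
# Service list copied from the report; adjust domain/servers as needed.
domain=unix.foo.org.au
names=""
for svc in _kerberos._tcp _kerberos._udp _kerberos-master._tcp \
           _kerberos-master._udp _kpasswd._tcp _kpasswd._udp _ldap._tcp; do
    name="${svc}.${domain}."
    names="$names $name"
    echo "dig +short SRV $name"
done
```

Each printed query can then be run against the IPA DNS server to see which hosts actually answer for that service.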
The final errors are due to your installation still using domain level
0. You can ignore these if you don't want to or can't update domain
levels. https://www.freeipa.org/page/Domain_Levels
rob
[
{
"source": "ipahealthcheck.ipa.idns",
"kw": {
"msg": "Unexpected SRV entry in DNS",
"key": "_ntp._udp.unix.foo.org.au.:vmdr-linuxidm.unix.foo.org.au."
},
"uuid": "57735f69-6d98-4ae1-9f0a-dd848bbfa1f7",
"duration": "0.024868",
"when": "20200820104327Z",
"check": "IPADNSSystemRecordsCheck",
"result": "WARNING"
},
{
"source": "ipahealthcheck.ipa.idns",
"kw": {
"msg": "Expected SRV record missing",
"key": "_kerberos._tcp.dc._msdcs.unix.foo.org.au.:vmpr-linuxidm.unix.foo.org.au."
},
"uuid": "3b789068-16ff-4684-bb5e-3add8a62b2b8",
"duration": "0.025853",
"when": "20200820104327Z",
"check": "IPADNSSystemRecordsCheck",
"result": "WARNING"
},
{
"source": "ipahealthcheck.ipa.idns",
"kw": {
"msg": "Unexpected SRV entry in DNS",
"key": "_kerberos._tcp.unix.foo.org.au.:vmpr-linuxidm.unix.foo.org.au."
},
"uuid": "bab58235-1a9b-48bc-9b4c-b0e75b91d619",
"duration": "0.027710",
"when": "20200820104327Z",
"check": "IPADNSSystemRecordsCheck",
"result": "WARNING"
},
{
"source": "ipahealthcheck.ipa.idns",
"kw": {
"msg": "Unexpected SRV entry in DNS",
"key": "_kerberos._tcp.unix.foo.org.au.:vmdr-linuxidm.unix.foo.org.au."
},
"uuid": "44a47316-ba13-4226-9625-2f29f369cdd4",
"duration": "0.027825",
"when": "20200820104327Z",
"check": "IPADNSSystemRecordsCheck",
"result": "WARNING"
},
{
"source": "ipahealthcheck.ipa.idns",
"kw": {
"msg": "Expected SRV record missing",
"key": "_ldap._tcp.Default-First-Site-Name._sites.dc._msdcs.unix.foo.org.au.:vmpr-linuxidm.unix.foo.org.au."
},
"uuid": "313a97f5-9f05-4465-a50f-27996c22c306",
"duration": "0.028995",
"when": "20200820104327Z",
"check": "IPADNSSystemRecordsCheck",
"result": "WARNING"
},
{
"source": "ipahealthcheck.ipa.idns",
"kw": {
"msg": "Unexpected SRV entry in DNS",
"key": "_kerberos._udp.unix.foo.org.au.:vmdr-linuxidm.unix.foo.org.au."
},
"uuid": "d00274ff-12a9-465f-957e-392c4edd7e5a",
"duration": "0.030514",
"when": "20200820104327Z",
"check": "IPADNSSystemRecordsCheck",
"result": "WARNING"
},
{
"source": "ipahealthcheck.ipa.idns",
"kw": {
"msg": "Unexpected SRV entry in DNS",
"key": "_kerberos-master._udp.unix.foo.org.au.:vmdr-linuxidm.unix.foo.org.au."
},
"uuid": "0e50f8e7-6321-429a-b84e-3a88922ec07b",
"duration": "0.031876",
"when": "20200820104327Z",
"check": "IPADNSSystemRecordsCheck",
"result": "WARNING"
},
{
"source": "ipahealthcheck.ipa.idns",
"kw": {
"msg": "Unexpected SRV entry in DNS",
"key": "_kpasswd._udp.unix.foo.org.au.:vmdr-linuxidm.unix.foo.org.au."
},
"uuid": "011bf574-e7ea-4f5d-8bf6-f5ecdd722ecd",
"duration": "0.033430",
"when": "20200820104327Z",
"check": "IPADNSSystemRecordsCheck",
"result": "WARNING"
},
{
"source": "ipahealthcheck.ipa.idns",
"kw": {
"msg": "Unexpected SRV entry in DNS",
"key": "_kpasswd._tcp.unix.foo.org.au.:vmdr-linuxidm.unix.foo.org.au."
},
"uuid": "d00839d9-6e83-481d-9685-8eaca6caea14",
"duration": "0.034777",
"when": "20200820104327Z",
"check": "IPADNSSystemRecordsCheck",
"result": "WARNING"
},
{
"source": "ipahealthcheck.ipa.idns",
"kw": {
"msg": "Expected SRV record missing",
"key": "_kerberos._udp.Default-First-Site-Name._sites.dc._msdcs.unix.foo.org.au.:vmpr-linuxidm.unix.foo.org.au."
},
"uuid": "8bff3eb5-521d-4029-b368-c1b4cd39047c",
"duration": "0.036379",
"when": "20200820104327Z",
"check": "IPADNSSystemRecordsCheck",
"result": "WARNING"
},
{
"source": "ipahealthcheck.ipa.idns",
"kw": {
"msg": "Unexpected SRV entry in DNS",
"key": "_ldap._tcp.unix.foo.org.au.:vmdr-linuxidm.unix.foo.org.au."
},
"uuid": "2091880e-5777-4854-abb4-bc14c032b1af",
"duration": "0.037861",
"when": "20200820104327Z",
"check": "IPADNSSystemRecordsCheck",
"result": "WARNING"
},
{
"source": "ipahealthcheck.ipa.idns",
"kw": {
"msg": "Expected SRV record missing",
"key": "_ldap._tcp.dc._msdcs.unix.foo.org.au.:vmpr-linuxidm.unix.foo.org.au."
},
"uuid": "8f9862fa-45a0-4bdd-b561-93a6a15ac7f1",
"duration": "0.038836",
"when": "20200820104327Z",
"check": "IPADNSSystemRecordsCheck",
"result": "WARNING"
},
{
"source": "ipahealthcheck.ipa.idns",
"kw": {
"msg": "Unexpected SRV entry in DNS",
"key": "_kerberos-master._tcp.unix.foo.org.au.:vmdr-linuxidm.unix.foo.org.au."
},
"uuid": "cfd7b896-da90-4ac4-9b08-eccdbafeca30",
"duration": "0.040348",
"when": "20200820104327Z",
"check": "IPADNSSystemRecordsCheck",
"result": "WARNING"
},
{
"source": "ipahealthcheck.ipa.idns",
"kw": {
"msg": "Expected SRV record missing",
"key": "_kerberos._tcp.Default-First-Site-Name._sites.dc._msdcs.unix.foo.org.au.:vmpr-linuxidm.unix.foo.org.au."
},
"uuid": "3c38ad1e-96a5-41fd-a161-56dde9601896",
"duration": "0.041473",
"when": "20200820104327Z",
"check": "IPADNSSystemRecordsCheck",
"result": "WARNING"
},
{
"source": "ipahealthcheck.ipa.idns",
"kw": {
"msg": "Expected SRV record missing",
"key": "_kerberos._udp.dc._msdcs.unix.foo.org.au.:vmpr-linuxidm.unix.foo.org.au."
},
"uuid": "fd6a163f-a338-4ff0-a2f2-9fb00064ab93",
"duration": "0.042447",
"when": "20200820104327Z",
"check": "IPADNSSystemRecordsCheck",
"result": "WARNING"
},
{
"source": "ipahealthcheck.ipa.idns",
"kw": {
"msg": "expected ipa-ca IPAddr missing",
"key": "10.126.18.129"
},
"uuid": "59581cec-e08f-4e67-aed1-697698d66e92",
"duration": "0.044304",
"when": "20200820104327Z",
"check": "IPADNSSystemRecordsCheck",
"result": "WARNING"
},
{
"source": "ipahealthcheck.ipa.idns",
"kw": {
"expected": 1,
"count": 2,
"msg": "Got {count} ipa-ca A records, expected {expected}"
},
"uuid": "6852b70e-b366-44a3-bc1f-6bde42f79209",
"duration": "0.044392",
"when": "20200820104327Z",
"check": "IPADNSSystemRecordsCheck",
"result": "WARNING"
},
{
"source": "ipahealthcheck.ipa.topology",
"kw": {
"msg": "topologysuffix-verify domain failed, Topology management requires minimum domain level 1 "
},
"uuid": "e5386d69-3028-4c71-8a93-87de8e954682",
"duration": "0.002170",
"when": "20200820104332Z",
"check": "IPATopologyDomainCheck",
"result": "ERROR"
},
{
"source": "ipahealthcheck.ipa.topology",
"kw": {
"msg": "topologysuffix-verify domain failed, Topology management requires minimum domain level 1 "
},
"uuid": "c50ccc80-d031-4a52-a097-43b6b09c46c6",
"duration": "0.005159",
"when": "20200820104332Z",
"check": "IPATopologyDomainCheck",
"result": "ERROR"
}
]
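Since the output is JSON, a long report like the one above can be condensed by counting identical messages. A rough sketch (plain text matching, not real JSON parsing), using a trimmed entry copied from the report above as sample input; on a live server you would instead feed it the output of `ipa-healthcheck --failures-only`:

```shell
# Write a sample report (one trimmed entry from the report above)
cat > /tmp/healthcheck-sample.json <<'EOF'
[
 {"source": "ipahealthcheck.ipa.idns",
  "kw": {"msg": "Unexpected SRV entry in DNS",
         "key": "_ldap._tcp.unix.foo.org.au.:vmdr-linuxidm.unix.foo.org.au."},
  "check": "IPADNSSystemRecordsCheck", "result": "WARNING"}
]
EOF

# Count how many times each distinct message occurs, most frequent first
grep -o '"msg": "[^"]*"' /tmp/healthcheck-sample.json | sort | uniq -c | sort -rn
```

This makes it easier to spot when many warnings trace back to a single root cause, as Rob notes above.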
FreeIPA-users mailing list -- freeipa-users@lists.fedorahosted.org
To unsubscribe send an email to
freeipa-users-leave@lists.fedorahosted.org
Fedora Code of Conduct:
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines:
https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives:
https://lists.fedorahosted.org/archives/list/freeipa-users@lists.fedorahoste...
--
regards, Christopher Welsh
Chris Welsh wrote:
Hi François,
Thx for getting back to me. So far no luck.
On Fri, 21 Aug 2020 at 9:05 pm, François Cami <fcami@redhat.com> wrote:
On Fri, Aug 21, 2020 at 1:08 AM Chris Welsh via FreeIPA-users <freeipa-users@lists.fedorahosted.org> wrote:
> Hi Rob,
> Could this be because I removed the replica and there are records still dangling in the config? Is there a way to find out where they are and remove them?
At worst, use ldapsearch to identify remaining objects.
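A sketch of what that ldapsearch might look like. The bind DN is the usual Directory Manager, and the hostname and suffix are taken from this thread; the exact locations of leftover entries can differ per deployment:

```shell
# Look for leftover replication agreements still naming the removed replica
ldapsearch -x -D "cn=Directory Manager" -W -b cn=config \
  "(&(objectClass=nsds5replicationagreement)(nsDS5ReplicaHost=vmpr-linuxidm.unix.foo.org.au))" dn

# And for a stale master entry under the IPA suffix
ldapsearch -x -D "cn=Directory Manager" -W \
  -b "cn=masters,cn=ipa,cn=etc,dc=unix,dc=foo,dc=org,dc=au" \
  "(cn=vmpr-linuxidm*)" dn
```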
> I have now moved to domain level “1” and re-joined the replica (the second master, with CA), but got the original message back in the new master's logs, which was the reason I removed it (I tried to simplify the setup to get to the root cause of the intermittent loss of groups for users). And unfortunately this did not solve the issue with users losing their group creds (I do not enumerate groups). (6 users today). :-(
Got what original message back?
What issue with looking for groups?
> Aug 21 19:22:38 vmdr-linuxidm ns-slapd: [21/Aug/2020:19:22:38.153428704 +1000] - ERR - ipapwd_getkeytab - [file ipa_pwd_extop.c, line 1647]: Not allowed to retrieve keytab on [UNIX$@FOO.ORG.AU] as user [fqdn=vmdr-linuxidm.unix.foo.org.au,cn=computers,cn=accounts,dc=unix,dc=foo,dc=org,dc=au]!
> Aug 21 19:22:38 vmdr-linuxidm sssd: Failed to parse result: Insufficient access rights
> Aug 21 19:22:38 vmdr-linuxidm sssd: Failed to get keytab
> Aug 21 19:22:38 vmdr-linuxidm ns-slapd: [21/Aug/2020:19:22:38.254032634 +1000] - ERR - is_allowed_to_access_attr - [file ipa_pwd_extop.c, line 787]: slapi_access_allowed does not allow READ to ipaProtectedOperation;read_keys!
What is the context of this error?
rob
> At the moment we have no active replicas,
So you have a single instance? OK. Please don't run that for too long.
Thx
> as I wanted to simplify the config so as to find the root cause of intermittent loss of groups. Looks like this could be adding to my headaches.
> And finally, having domain level not set to one will prevent me from creating replicas in the first place?
Domain Level 0 (DL0) support has been removed. You will be able to create replicas using old versions, but ideally, once the above problem is sorted out, you might be better off updating to DL1.
Thx