minssf and TLS cipher ordering
by Trevor Vaughan
Hi All,
OS Version: CentOS 8
389-DS Version: 1.4.3.22 from EPEL
I have a server set up with minssf=256 and have been surprised that either
389-DS or OpenSSL does not appear to be doing what I would consider a
logical TLS negotiation.
I had thought that the system would start with the strongest cipher and
then negotiate down to something that was acceptable.
Instead, I'm finding that I have to nail up the ciphers to something that
the 389-DS server recognizes and that falls within the expected SSF (see the
sketch below).
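For reference, here is a rough sketch of what that pinning looks like, assuming
the usual cn=encryption,cn=config entry; the cipher names below are purely
illustrative, not a recommendation:
# restrict the enabled ciphers (illustrative 256-bit list; adjust as needed)
ldapmodify -x -D "cn=Directory Manager" -W -H ldap://localhost <<EOF
dn: cn=encryption,cn=config
changetype: modify
replace: nsSSL3Ciphers
nsSSL3Ciphers: -all,+TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,+TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
EOF
# restart the instance for the cipher change to take effect ($INSTANCE is a placeholder)
dsctl "$INSTANCE" restart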
Is this expected behavior or do I have something configured incorrectly?
Thanks,
Trevor
--
Trevor Vaughan
Vice President, Onyx Point, Inc
(410) 541-6699 x788
-- This account not approved for unencrypted proprietary information --
1 year, 9 months
how to configure the cn attribute to be case sensitive
by Ghiurea, Isabella
Hi List,
I need help with the following LDAP issue. We are running
389-ds-base-1.3.7.5-24.el7_5.x86_64.
- How can I check whether 389-DS is configured to be case sensitive? (A possible check is sketched below.)
- How can I configure the cn attribute, which is indexed in my DS, to be case sensitive?
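As a starting point, something like the following should show how cn is
currently defined in the schema (by default it uses caseIgnoreMatch, i.e.
case-insensitive matching); this is only a sketch of how to check, not how
to change it:
# dump the attributeTypes definitions and look at the matching rules on 'cn'
ldapsearch -x -D "cn=Directory Manager" -W -H ldap://localhost \
    -b "cn=schema" -s base '(objectClass=*)' attributeTypes | grep -i "NAME 'cn'"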
Thank you
Isabella
1 year, 9 months
dsctl healthcheck bug - or at least a bad resolution
by Gary Waters
Hi Guys!
I think I found a bug in dsctl, and wanted to give some background and
see what you guys thought.
I am setting up my ldaphub and am getting an odd issue when running
dsctl $instance healthcheck on it, even though dsctl $instance
get-nsstate shows that the supposedly missing part is right there. I have
confirmed this by looking directly at the dse.ldif file and finding that the
"resolution" is already present.
The error and get-nsstate output are below; the same error repeats 8 times
in a row.
Hmm... it seems to be related to how I set up the replication agreement and
consumer, so I added that at the bottom as well.
I also found something interesting: if I set the replica ID for the hub,
dsconf won't use the ID number I put in; instead it puts in 65535, a number
outside the valid range. Have you guys seen this? (A way to inspect the
stored replica ID directly is sketched below.)
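For what it's worth, something like the following sketch should read the
replica ID straight out of the replica entry (the DN is the one shown in the
get-nsstate output further down; the bind details are placeholders):
# read the replica ID and type stored on the hub for the replicated suffix
ldapsearch -x -D "cn=Directory Manager" -W -H ldap://localhost \
    -b "cn=replica,cn=ou\3Dsomesuffix\2Co\3Dcaltech\2Cc\3Dus,cn=mapping tree,cn=config" \
    -s base nsDS5ReplicaId nsDS5ReplicaType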
Thanks guys for everything!
-Gary
Here is the error (8x):
Severity: MEDIUM
Check: backends:somesuffixroot:mappingtree
Affects:
-- somesuffixroot
Details:
-----------
This backend may be missing the correct mapping tree references. Mapping
Trees allow the directory server to determine which backend an operation is
routed to in the absence of other information. This is extremely important
for correct functioning of LDAP ADD, for example.
A correct Mapping tree for this backend must contain the suffix name and the
database name, and be a backend type. IE:
cn=o3Dexample,cn=mapping tree,cn=config
cn: o=example
nsslapd-backend: userRoot
nsslapd-state: backend
objectClass: top
objectClass: extensibleObject
objectClass: nsMappingTree
Resolution:
-----------
Either you need to create the mapping tree, or you need to repair the
related mapping tree. You will need to do this by hand by editing cn=config,
or by stopping the instance and editing dse.ldif.
dsctl ldaphub get-nsstate
Replica DN: cn=replica,cn=ou\3dsomesuffix\2co\3dcaltech\2cc\3dus,cn=mapping tree,cn=config
Replica Suffix: ou=somesuffix,o=caltech,c=us
Replica ID: 65535
Gen Time: 1618442292
Gen Time String: Wed Apr 14 16:18:12 2021
Gen as CSN: 607778340002655350000
Local Offset: 0
Local Offset String: 0 seconds
Remote Offset: 7
Remote Offset String: 7 seconds
Time Skew: 7
Time Skew String: 7 seconds
Seq Num: 2
System Time: Wed Apr 14 17:30:50 2021
Diff in Seconds: 4358
Diff in days/secs: 0:4358
Endian: Little Endian
Dse.ldif section that already has the resolution present:
dn: cn=ou\3Dsomesuffix\2Co\3Dcaltech\2Cc\3Dus,cn=mapping tree,cn=config
objectClass: top
objectClass: extensibleObject
objectClass: nsMappingTree
nsslapd-state: referral on update
nsslapd-backend: somesuffixRoot
cn: ou=somesuffix,o=caltech,c=us
creatorsName: cn=directory manager
modifiersName: cn=server,cn=plugins,cn=config
createTimestamp: 20210415004818Z
modifyTimestamp: 20210415005939Z
numSubordinates: 1
nsslapd-referral: ldap://supplier2:389/ou%3Dsomesuffix%2Co%3Dcaltech%2Cc%3Dus
nsslapd-referral: ldap://supplier1:389/ou%3Dsomesuffix%2Co%3Dcaltech%2Cc%3Dus
nsslapd-referral: ldap://supplier0:389/ou%3Dsomesuffix%2Co%3Dcaltech%2Cc%3Dus
nsslapd-referral: ldap://supplier4.caltech.edu:389/ou%3Dsomesuffix%2Co%3Dcaltech%2Cc%3Dus
nsslapd-referral: ldap://supplier5.caltech.edu:389/ou%3Dsomesuffix%2Co%3Dcaltech%2Cc%3Dus
nsslapd-referral: ldap://supplier3.caltech.edu:389/ou%3Dsomesuffix%2Co%3Dcaltech%2Cc%3Dus
How I set up the hub and the agreement (note: the same commands I used to
set up the suppliers and consumers worked great; the only real variance is
the role):
# how I set up the consumer
dsconf -D "cn=Directory Manager" -w XXX ldap://$consumer replication enable \
    --suffix="ou=somesuffix,o=caltech,c=us" --role="hub" --replica-id=6001 \
    --bind-dn="cn=replication manager,cn=config" --bind-passwd=XXX
# how I set up the agreement
dsconf -D "cn=Directory Manager" -w XXXX ldap://supplier repl-agmt create \
    --suffix="ou=somesuffix,o=caltech,c=us" --host=consumer --port=389 \
    --conn-protocol=StartTLS --bind-dn="cn=replication manager,cn=config" \
    --bind-passwd=XXXX --bind-method=SIMPLE --init \
    replication-agreement-name-super-awesome
1 year, 9 months
Compact problem solved with nsslapd-db-locks: 1500000, should I keep it?
by murmansk@hotmail.com
We had a problem today: one of our two 389 DS servers hung, showing these errors:
ERR - libdb - BDB2055 Lock table is out of available lock entries
ERR - NSMMReplicationPlugin - changelog program - _cl5CompactDBs - Failed to compact db5f7bb9-ab0611e6-9bc8987f-40ec05bf; db error - 12 Cannot allocate memory
We redirected all the queries to our second 389 DS Server and started debugging the problem.
This is the second time this has happened to us. We did our homework the first time, four months ago, and didn't find any problem.
At that time, we just increased the number of open files and reviewed the documentation about tuning nsslapd-db-locks (https://access.redhat.com/solutions/3217591). We set nsslapd-db-locks to 100000 as recommended.
But today, with a little more time, we reviewed the documentation about compacting the log manually (https://access.redhat.com/documentation/en-us/red_hat_directory_server/10...).
We started to increase nsslapd-db-locks, first to 200000, then to half a million, and finally to 1.5 million (roughly as sketched below).
Every attempt failed except the last one. We have no error logs (nor a confirmation message), but the database file in the 'changelogdb' folder went from 13 GB to 2.8 GB.
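For context, a change like that can be applied roughly as in the sketch below;
the entry location assumes the classic BDB layout, and nsslapd-db-locks only
takes effect after a restart:
# raise the BDB lock table size; a restart is required for it to take effect
ldapmodify -x -D "cn=Directory Manager" -W -H ldap://localhost <<EOF
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-db-locks
nsslapd-db-locks: 1500000
EOF
dsctl "$INSTANCE" restart   # $INSTANCE is a placeholder for the instance name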
Is it reasonable to keep nsslapd-db-locks so high?
1 year, 9 months
Forbidden uid?
by Jan Tomasek
Hi,
is there a way to provide 389DS with a list of forbidden uids, to prevent
creating such users? For example 'root', 'sys', ...
Thanks
--
-----------------------
Jan Tomasek aka Semik
http://www.tomasek.cz/
1 year, 9 months
How do I change the root password storage scheme to CRYPT-SHA512 through dsconf?
by spike
Hi everyone,
I'd like to change the default root password storage scheme from PBKDF2_SHA256 to CRYPT-SHA512 but I'm not having much success. I'm using the RHDS 11 documentation (https://access.redhat.com/documentation/en-us/red_hat_directory_server/11...) as a reference since the 389ds documentation page (https://directory.fedoraproject.org/docs/389ds/documentation.html) refers to that as "The best documentation for use and deployment". The 389ds version is 1.4.4.15 which should correspond with RHDS 11.
What I've tried:
# mkpasswd -m sha512crypt secret
$6$gOiCU3fNsdrH9.mR$fVxsLUf0JLS4wYdQa98VNy7mIy.LkShcdNcJbAFPE.10PKJ7EFD4hB0C33znHyIjgPF67IxNVNKgkKDiuuxQq/
# dsconf localhost config replace nsslapd-rootpwstoragescheme=CRYPT-SHA512 nsslapd-rootpw="{crypt}$6$gOiCU3fNsdrH9.mR$fVxsLUf0JLS4wYdQa98VNy7mIy.LkShcdNcJbAFPE.10PKJ7EFD4hB0C33znHyIjgPF67IxNVNKgkKDiuuxQq/"
selinux is disabled, will not relabel ports or files.
Successfully replaced "nsslapd-rootpwstoragescheme"
selinux is disabled, will not relabel ports or files.
Successfully replaced "nsslapd-rootpw"
This results in me being unable to log in (bind non-anonymously). I've also tried:
# dsconf localhost config replace nsslapd-rootpwstoragescheme=CRYPT-SHA512 nsslapd-rootpw="{CRYPT-SHA512}$6$gOiCU3fNsdrH9.mR$fVxs..."
and
# dsconf localhost config replace nsslapd-rootpwstoragescheme=CRYPT-SHA512 nsslapd-rootpw="$6$gOiCU3fNsdrH9.mR$fVxs..."
which were also unsuccessful (login not possible).
Setting a `CRYPT-SHA512` password through the 389ds cockpit UI plugin works fine, though, so I'm pretty sure I'm just not getting the `dsconf` syntax right (a comparison check is sketched below).
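In case it helps narrow it down, a minimal sketch of a comparison check: after
setting the password through the cockpit UI, read the stored values back from
cn=config and compare them with what dsconf writes (binding as Directory
Manager is assumed):
# read back the root password hash and storage scheme after each change
ldapsearch -x -D "cn=Directory Manager" -W -H ldap://localhost \
    -b "cn=config" -s base nsslapd-rootpw nsslapd-rootpwstoragescheme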
Any pointers are greatly appreciated.
Cheers!
1 year, 9 months
[@all] Request for sanitised access log data
by William Brown
Hi everyone,
At the moment I have been helping a student with their higher education thesis. As part of this we need to understand realistic workloads from 389-ds servers in production.
To assist, I would like to ask if anyone is able to or willing to volunteer to submit sanitised content of their access logs to us for this. We can potentially also use these for 389-ds for benchmarking and simulation in the future.
An example of the sanitised log output is below. All DNs are replaced with randomly generated UUIDs so that no data can be reversed from the content of the access log. The script uses a combination of filter and basedn uniqueness, plus nentries, to build a virtual tree, and then substitutes in extra data as required. All rtimes are times relative to the start of the log, recording "when" each event occurred, so we also do not see information about the time of accesses.
This data will likely be used in a public setting, so assume that it will be released if provided. Of course I encourage you to review the content of the sanitisation script and the sanitised output so that you are comfortable running this tool. It's worth noting that the tool will likely use a lot of RAM, so you should NOT run it on your production server; rather, copy the production access log to another machine and process it there.
1 hour to 24 hours of processed output from a production server would help a lot!
Please send the output as a response to this mail, or directly to me ( wbrown at suse dot de )
Thanks,
[
{
"etime": "0.005077600",
"ids": [
"9b207c6e-f7a2-4cd8-984a-415ad5e0960f"
],
"rtime": "0:00:36.219942",
"type": "add"
},
{
"etime": "0.000433300",
"ids": [
"9b207c6e-f7a2-4cd8-984a-415ad5e0960f"
],
"rtime": "0:00:36.225207",
"type": "bind"
},
{
"etime": "0.000893100",
"ids": [
"9b207c6e-f7a2-4cd8-984a-415ad5e0960f",
"eb2139a1-a0f3-41cf-bdbe-d213a75c6bb7"
],
"rtime": "0:00:40.165807",
"type": "srch"
}
]
USAGE:
python3 access_pattern_extract.py /path/to/log/access /path/to/output.json
—
Sincerely,
William Brown
Senior Software Engineer, 389 Directory Server
SUSE Labs, Australia
1 year, 9 months
Preserving create & modifyTimestamp during import
by Jan Tomasek
Hi,
I need to import a sub-suffix into an existing suffix on a running
server. When I use:
dsconf -D "cn=Directory Manager" -w "$pswd" ldap://localhost backend
import userRoot sub-suffix.ldif
then userRoot is truncated and the import later fails with this error:
[13/Apr/2021:15:08:41.180921374 +0200] - WARN - import_foreman - import
userRoot: Skipping entry "o=sub,o=suffix" which has no parent, ending at
line 36 of file "/root/sub-suffix.ldif"
One way is to dump the existing userRoot and later re-import the complete backend:
dsconf -D "cn=Directory Manager" -w "$pswd" ldap://localhost backend
import userRoot suffix.ldif sub-suffix.ldif
But that means downtime, which I'm trying to avoid.
Another way to import is to use ldapadd (roughly as sketched after the list
below), but then the server replaces these operational attributes:
creatorsName
modifiersName
createTimestamp
modifyTimestamp
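For concreteness, the ldapadd approach mentioned above amounts to something
like the sketch below (host and file name are the ones used earlier in this
mail), and it is exactly this path that regenerates the operational attributes:
# online add of the sub-suffix entries; createTimestamp etc. are regenerated by the server
ldapadd -x -D "cn=Directory Manager" -W -H ldap://localhost -f sub-suffix.ldif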
Is there a way to import a sub-suffix into an existing, running server
and preserve those operational attributes at the same time?
Thanks
--
-----------------------
Jan Tomasek aka Semik
http://www.tomasek.cz/
1 year, 9 months
dsconf duplicate replica id
by Gary Waters
Hey Everyone,
I love the new dsconf Python tool. It's great, and a big upgrade over the
Perl scripts that I think I have been using for decades.
However I am having a problem using it when making new replication
agreements between multiple masters.
How do I find the duplicates, and how do I run dsconf to avoid the
duplicate? (A way to check each master's replica ID is sketched further down.)
Error after agreements are made:
Error (11) Replication error acquiring replica: Unable to acquire
replica: the replica has the same Replica ID as this one. Replication is
aborting. (duplicate replica ID detected)
Error (11) Replication error acquiring replica: duplicate replica ID
detected
I know how to delete the newly made agreements, no problem, but how do I
recreate them to avoid the duplicates? When creating the agreements I
didn't see a way to set an ID.
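To look for the clash, something like the following sketch should show the
replica ID stored on each master (the host variables, suffix, and credentials
are placeholders matching the commands below):
# compare the replica IDs configured on each master for the replicated suffix
for host in $supplier1 $supplier2 $supplier3; do
    echo "== $host =="
    ldapsearch -x -D "cn=Directory Manager" -w $pass -H ldap://$host \
        -b "cn=mapping tree,cn=config" "(objectClass=nsDS5Replica)" nsDS5ReplicaId
done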
Here is the command for the individual agreement... maybe I am doing something
wrong. This is how I set up the agreement:
dsconf -D "cn=Directory Manager" -w $pass ldap://$supplier-
repl-agmt \
create --suffix="ou=$suffix,o=school,c=us" --host=$consumer
--port=389 \
--conn-protocol=StartTLS --bind-dn="cn=replication
manager,cn=config" \
--bind-passwd="x" --bind-method=SIMPLE --init \
$agreement_name
And this is how I set up the ID and replication for the suffix:
dsconf -D "cn=Directory Manager" -w $pass ldap://$consumer replication \
enable --suffix="ou=$suffix,o=school,c=us" --role="master"
--replica-id=$repid \
--bind-dn="cn=replication manager,cn=config" --bind-passwd=XXX
In my case I have 3 masters that I need to set up MMR between, and this
error happened when I added the third. It says replication is already set up
for the suffix as MMR, which is correct, but I can't set a new replica ID for
each agreement via the first command.
Thank you everyone,
Gary
Other info:
rhel8.3
389-ds-base-libs-1.4.3.17-1.module_el8+10764+2b5f8656.x86_64
389-ds-base-1.4.3.17-1.module_el8+10764+2b5f8656.x86_64
1 year, 9 months
Announcing 389 Directory Server 2.0.4
by thierry bordaz
389 Directory Server 2.0.4
The 389 Directory Server team is proud to announce 389-ds-base version 2.0.4
Fedora packages are available on Fedora 34 and Rawhide
Fedora 34:
https://koji.fedoraproject.org/koji/taskinfo?taskID=65380611 - Koji
https://bodhi.fedoraproject.org/updates/FEDORA-2021-123ca32c27 - Bodhi
The new packages and versions are:
* 389-ds-base-2.0.4-1
Source tarballs are available for download at:
https://github.com/389ds/389-ds-base/archive/389-ds-base-2.0.4.tar.gz
Highlights in 2.0.4
* Bug & security fixes
Installation and Upgrade
See Download <https://www.port389.org/docs/389ds/download.html> for
information about setting up your yum repositories.
To install the server use *dnf install 389-ds-base*
To install the Cockpit UI plugin use *dnf install cockpit-389-ds*
After rpm install completes, run *dscreate interactive*
For upgrades, simply install the new rpms; there are no further steps
required.
See Install_Guide
<https://www.port389.org/docs/389ds/howto/howto-install-389.html> for
more information about the initial installation and setup
See Source <https://www.port389.org/docs/389ds/development/source.html>
for information about source tarballs and SCM (git) access.
Feedback
We are very interested in your feedback!
Please provide feedback and comments to the 389-users mailing list:
https://lists.fedoraproject.org/admin/lists/389-users.lists.fedoraproject...
If you find a bug, or would like to see a new feature, file it in our
GitHub project: https://github.com/389ds/389-ds-base
* Bump version to 2.0.4
* Issue 4680 - 389ds coredump (@389ds/389-ds-base-nightly) in replica
install with CA (#4715)
* Issue 3965 - RFE - Implement the Password Policy attribute
“pwdReset” (#4713)
* Issue 4700 - Regression in winsync replication agreement (#4712)
* Issue 3965 - RFE - Implement the Password Policy attribute
“pwdReset” (#4710)
* Issue 4169 - UI - migrate monitor tables to PF4
* issue 4585 - backend redesign phase 3c - dbregion test removal (#4665)
* Issue 2736 - remove remaining perl references
* Issue 2736 - https://github.com/389ds/389-ds-base/issues/2736
* Issue 4706 - negative wtime in access log for CMP operations
* Issue 3585 - LDAP server returning controltype in different sequence
* Issue 4127 - With Accounts/Account module delete fuction is not
working (#4697)
* Issue 4666 - BUG - cb_ping_farm can fail with anonymous binds
disabled (#4669)
* Issue 4671 - UI - Fix browser crashes
* Issue 4169 - UI - Add PF4 charts for server stats
* Issue 4648 - Fix some issues and improvement around CI tests (#4651)
* Issue 4654 Updates to tickets/ticket48234_test.py (#4654)
* Issue 4229 - Fix Rust linking
* Issue 4673 - Update Rust crates
* Issue 4658 - monitor - connection start date is incorrect
* Issue 4169 - UI - migrate modals to PF4
* Issue 4656 - remove problematic language from ds-replcheck
* Issue 4459 - lib389 - Default paths should use dse.ldif if the
server is down
* Issue 4656 - Remove problematic language from UI/CLI/lib389
* Issue 4661 - RFE - allow importing openldap schemas (#4662)
* Issue 4659 - restart after openldap migration to enable plugins (#4660)
* Merge pull request #4664 from mreynolds389/issue4663
* issue 4552 - Backup Redesign phase 3b - use dbimpl in replicatin
plugin (#4622)
* Issue 4643 - Add a tool that generates Rust dependencies for a
specfile (#4645)
* Issue 4646 - CLI/UI - revise DNA plugin management
* Issue 4644 - Large updates can reset the CLcache to the beginning of
the changelog (#4647)
* Issue 4649 - crash in sync_repl when a MODRDN create a cenotaph (#4652)
* Issue 4169 - UI - Migrate alerts to PF4
* Issue 4169 - UI - Migrate Accordians to PF4 ExpandableSection
* Issue 4595 - Paged search lookthroughlimit bug (#4602)
* Issue 4169 - UI - port charts to PF4
* Issue 2820 - Fix CI test suite issues
* Issue 4513 - CI - make acl ip address tests more robust
1 year, 10 months