I'm new to 389-ds and last week downloaded and installed the software.
I have a running instance of the server, and I've added TLS/SSL. I've configured a CentOS 7 client to be able to query
the server using TLS/SSL, and all appears working.
I've created users and groups on the 389-ds server successfully. For each user and group, I've enabled posix attributes and my client
can see the Unix users and groups using the "getent passwd" or "getent group" commands.
Now, here's where I'm getting tripped up.
I need to limit which users have access to which systems. I've been trying to do this via memberOf group limitations.
I found the following online resource (https://thornelabs.net/2013/01/28/aix-restrict-server-login-via-ldap-grou...)
which is close enough to CentOS that the initial commands worked.
I enabled the MemberOf plugin and changed the attributes per the link, and restarted the system.
I created a test group (that I didn't enable a posix GID) and tried to add a single user via:
Right click on group -- > click Properties --> then Members --> click Add --> Search for user --> click Add.
This route worked before enabling the memberOf plugin, but now I get the error:
"Cannot save to directory server.
netscape.ldap.LDAPException: error result(65): Object class violation"
And the error log (/var/log/dirsrv/slapd-<instancename>/errors) shows:
"Entry "uid=test,ou=People,dc=int,dc=com" -- attribute "memberOf" not allowed
[17/Feb/2016:11:22:58 -0700] memberof-plugin - memberof_postop_modify: failed to add dn (cn=testgroup,ou=Groups,dc=int,dc=com) to target. Error (65)"
So it seems my server isn't quite using the memberOf plugin properly, but I'm not sure what else to enable. I'll have to solve this issue before
I even try to filter login access via groups on my client system.
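For the record (a sketch, not a definitive fix): error 65 here usually means the user entry's objectClasses do not allow the memberOf attribute. An auxiliary objectClass such as inetUser or nsMemberOf permits it, so one common fix is to add one of those to the affected users, e.g.:

```ldif
# Uses the DN from the error log above; apply with ldapmodify as Directory Manager.
dn: uid=test,ou=People,dc=int,dc=com
changetype: modify
add: objectClass
objectClass: inetUser
```

After that, re-adding the user to the group should let the plugin write memberOf without the object class violation.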
I should mention that if I go under the Advanced tab for one of the groups I created, I can add the attribute "uniquemember", but I'm not sure what I should set the "value" to be.
I've tried creating new users to see if I could set their "uniquemember" attributes, but no luck. It seems that I don't have the ability to set this attribute
on individual users, only groups.
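(For a future reader: uniqueMember belongs on the group entry, not on the user, and its value is the full DN of the member entry. A minimal sketch using the DNs from the error log above:)

```ldif
dn: cn=testgroup,ou=Groups,dc=int,dc=com
changetype: modify
add: uniqueMember
uniqueMember: uid=test,ou=People,dc=int,dc=com
```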
This might not be the right road to head down when trying to restrict access to servers via groups, so I'm open to any suggestions.
Any suggestions would be appreciated.
William, thank you for the reply. Below is the output of the certutil command for this host with the error (Failed to get the default state of cipher).
To deploy almost identical LDAP hosts, the sysadmin here is using Puppet, but unfortunately there are always issues with RPM version mismatches and configuration. Can you suggest another solution to deploy multiple LDAP hosts all running the same version and almost the same configuration? The only difference between the LDAP hosts is the name of the DS instance, i.e. ldap*.
Here is the output as per your request:
certutil -L -d /etc/dirsrv/slapd-ldap2/
Certificate Nickname Trust Attributes
XX Internal Root CA CT,,
XX Internal CA CT,,
From: William Brown <wbrown(a)suse.de>
Subject: [389-users] Re: 389-DS Failed to get the default state of
Content-Type: text/plain; charset=utf-8
> we have another host with same version and suppose same cfg but never
> saw the error,
> [24/Jun/2020:09:22:54.687024072 -0700] - ERR - Security Initialization
> - _conf_setallciphers - Failed to get the default state of cipher
I'm curious - how did you make a host with the same config? Normally with 389 you need to configure both individually to look the same but you can't copy-paste config files etc.
My guess here is that perhaps your nss db isn't configured properly, so I'd want to see the output of certutil -L -d /etc/dirsrv/slapd-<instance>/ on the affected host.
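As a sketch of what to look for (the nickname Server-Cert is an assumption; your instance may use a different one): a TLS-enabled instance normally shows a server certificate with user trust flags (u,u,u) alongside the CA entries, and the certificate needs a matching private key in the key database.

```shell
certutil -L -d /etc/dirsrv/slapd-<instance>/
# A healthy listing would include a line roughly like:
#   Server-Cert                       u,u,u
# in addition to the CA certificates with CT,, trust flags.
certutil -K -d /etc/dirsrv/slapd-<instance>/  # lists private keys; the server cert must have one
```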
We tried to load a new schema dynamically using /usr/lib64/dirsrv/slapd-eldapp1/schema-reload.pl.
Unfortunately, (and unknown to us at the time) the objectClass definition misspelt a couple of the attribute names.
The schema reload process should have picked that up and refused it, but it didn't and so proceeded to update entries using the new schema.
That's when we started getting errors like the following in the error log:
[19/Jun/2020:10:28:08.390882389 -0700] - ERR - libdb - BDB0151 fsync: Input/output error
[19/Jun/2020:10:28:08.399523527 -0700] - ERR - libdb - BDB0151 fsync: Input/output error
[19/Jun/2020:10:28:08.404890880 -0700] - ERR - libdb - BDB0151 fsync: Input/output error
[19/Jun/2020:10:28:08.430284251 -0700] - ERR - libdb - BDB0151 fsync: Input/output error
[19/Jun/2020:10:28:08.466371449 -0700] - ERR - libdb - BDB0151 fsync: Input/output error
[19/Jun/2020:10:28:08.495859651 -0700] - ERR - libdb - BDB0151 fsync: Input/output error
[19/Jun/2020:10:28:08.522007224 -0700] - ERR - libdb - BDB0151 fsync: Input/output error
[19/Jun/2020:10:28:08.546930415 -0700] - ERR - libdb - BDB4519 txn_checkpoint: failed to flush the buffer cache: Input/output error
[19/Jun/2020:10:28:08.569781853 -0700] - CRIT - checkpoint_threadmain - Serious Error---Failed to checkpoint database, err=5 (Input/output error)
I tried restarting dirsrv and that's when it started giving errors about the unknown (misspelt) attributes in the new objectClass.
I fixed those errors in the schema and restarted dirsrv.
I saw the following message in the error log:
NOTICE - dblayer_start - Detected Disorderly Shutdown last time Directory Server was running, recovering database.
There was no further log, but the CPU utilization for ns-slapd was at 99.9% so I just let it run over night hoping that it wasn't stuck in a loop.
But there was no improvement the next morning, so I ordered a RAM increase from 4 GB to 16 GB hoping that would fix it. I let it run for a while with no indication of progress.
I also tried to run db2ldif to try to dump the db to an ldif file, but got the same "recovering database" message. That's where it is now - I'll let it run for a few hours and hope it does something.
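For what it's worth, one rough way to see whether a BDB recovery is actually progressing (a sketch; the db path shown is the usual default and may differ on your install) is to watch file modification times and the process's syscalls rather than just CPU:

```shell
# If recovery is progressing, the __db.* region files and log.* files keep changing
ls -l /var/lib/dirsrv/slapd-<instance>/db/
# Sample what ns-slapd is actually doing: all CPU with no read/write activity suggests it is stuck
strace -p "$(pidof ns-slapd)" -e trace=read,write -c
```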
Would anyone be able to offer any further advice?
Is there any way to see how it's getting along with the database recovery?
Is this db well and truly hosed?
Unfortunately, this system was spec'd for development, so no backups were running and recovery from backup is not an option.
Our new DS env is running:
After DS was upgraded to the above version, we see this error when restarting the DS. We have another host with the same version and supposedly the same configuration, but it has never shown the error. Please advise a fix for this issue in this version:
[24/Jun/2020:09:22:54.686777541 -0700] - ERR - Security Initialization - _conf_setallciphers - Failed to get the default state of cipher (null)
[24/Jun/2020:09:22:54.687024072 -0700] - ERR - Security Initialization - _conf_setallciphers - Failed to get the default state of cipher (null)
[24/Jun/2020:09:22:54.688953359 -0700] - INFO - Security Initialization - SSL info: Enabling default cipher set.
[24/Jun/2020:09:22:54.689229153 -0700] - INFO - Security Initialization - SSL info: Configured NSS Ciphers
I find that the DNA plugin consumes a new dnaNextValue every time for the same uid.
This is the server side configuration:
> dn: cn=uidNumber,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
> objectClass: top
> objectClass: extensibleObject
> cn: uidNumber
> dnaType: uidNumber
> dnaMagicRegen: 99999
> dnaFilter: (objectclass=posixAccount)
> dnaScope: dc=example,dc=com
> dnaNextValue: 5007
> dnaMaxValue: 9999
> dnaThreshold: 200
> creatorsName: cn=directory manager
> modifiersName: cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
> createTimestamp: 20190822054416Z
> modifyTimestamp: 20200619040000Z
User attribute source is Windows AD, I have nsDSWindowsReplicationAgreement which sync posix attribute from AD to 389ds.
When I fill in the magic number 99999 on the AD side, the user gets a uidNumber through the DNA plugin. For example, a user gets uidNumber 5007 on the first sync; then when I update a user entry attribute (adding a telephone number), this user gets a new uidNumber 5008 on the second sync.
I don't know whether this is normal.
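Separately from the re-assignment question, the configuration shown above is worth a sanity check: plain shell arithmetic on the posted values shows how much of the range is left relative to the threshold (the threshold mainly matters for shared ranges, but it is a useful warning level):

```shell
next=5007      # dnaNextValue
max=9999       # dnaMaxValue
threshold=200  # dnaThreshold
remaining=$((max - next + 1))
echo "remaining=$remaining"
[ "$remaining" -gt "$threshold" ] && echo "range ok" || echo "range nearly exhausted"
```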
I enabled password complexity constraints, password history, and password expiration (1 day min, 70 days max).
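For reference, that policy corresponds to the standard 389-ds global password-policy attributes on cn=config, with ages expressed in seconds (1 day = 86400, 70 days = 6048000); a sketch:

```ldif
dn: cn=config
changetype: modify
replace: passwordMinAge
passwordMinAge: 86400
-
replace: passwordMaxAge
passwordMaxAge: 6048000
-
replace: passwordHistory
passwordHistory: on
-
replace: passwordCheckSyntax
passwordCheckSyntax: on
```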
When I use the command passwd to change a user's password, I get the error
Password change failed. Server message: Failed to update password
passwd: Authentication token is no longer valid; new one required
In the following cases:
Password was changed less than a day ago
Password does not match complexity constraints
Password is already in history
My question: would it be possible to give better information to the user? To let them know that their password does not match the constraints, is already in history, or was changed recently?
I realize that some of this is related to sssd/pam, but I'd like to know if the 389 server is at least able to pass this information to sssd/pam.
I am working on a large Directory Server topology, which is very quickly exhausting the available locks in BDB (cf
- Can the planned switch in 389-ds-base-1.4.next to LMDB help for such a case?
(Especially after reading "The database structure is multi-versioned so readers run with no locks" on http://www.lmdb.tech/doc/index.html)
- Is the switch to LMDB planned on a timescale of quarters or years? I know nobody likes to put dates on roadmaps; it's just that having a rough estimate would help me know how much effort I should put into tuning BDB.
Technical Account Manager
Red Hat EMEA <https://www.redhat.com>
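(While waiting for LMDB: the BDB lock table itself can be enlarged via nsslapd-db-locks on the ldbm backend config entry; the default is 10000 and the change requires a restart. A sketch, with 100000 as an arbitrary example value:)

```ldif
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-db-locks
nsslapd-db-locks: 100000
```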
> On 12 Jun 2020, at 03:12, Crocker, Deborah <crock(a)ua.edu> wrote:
> What is it about this newer version compared to the old where this is happening. Is it that our setup is not quite the same? We try to bring all settings forward (except now it is auto-tuning cache) but it is possible we missed something.
It's hard to tell. Unindexed searches like this will always hurt performance, and they have a tendency to blow your cache out through evictions/inclusions. You should also check your db monitor to see if there are many cache evictions; that would tell you that autotuning is too low.
We had to develop the cache auto-tune with FreeIPA in mind, and so by default it uses 10% of the system RAM (25% as of 1.4.4, I think). FreeIPA comes with a lot of other daemons like dogtag and co, and they are memory hungry, so DS has to "share the playground" with them. There were also issues with glibc fragmenting our address space, which caused us to "appear" to leak (we have since improved this situation, of course). When autotuning was added, DS shipped out of the box with, I think, only 100MB of entry cache, and some people went to production with that. Auto-tuning isn't designed to be perfect; it's designed to be "better than before". And yes, we'll keep improving it, but sometimes you need to tweak it to use more of the resources you have for your workload. As yet, I haven't thought of a good way to make a pure 389-ds instance get more memory while we still tune for less in FreeIPA to share.
You could find that changing it to 25% or 40% will improve your situation, especially if you are seeing lots of inclusions and evictions.
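Concretely, the knob being discussed is nsslapd-cache-autosize, a percentage of system RAM set on the ldbm backend config entry (restart required); a sketch of raising it to 25%:

```ldif
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-cache-autosize
nsslapd-cache-autosize: 25
```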
And again, you *really really* should index all the attributes in that query, because any query like "notes=F|A|U" is going to be bad. You should also configure SSSD to "play nice", i.e. ignore_group_members=true and enumerate=false, to reduce load on your directory servers, but also to improve your client login times (it used to take 5 minutes for me to sudo at my old workplace until I set ignore_group_members=true).
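An equality index for that attribute would look roughly like this (a sketch: the backend name userRoot and the attribute name notes are assumptions from the quoted filter; after adding the entry, the attribute must be reindexed, e.g. with db2index):

```ldif
dn: cn=notes,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
objectClass: top
objectClass: nsIndex
cn: notes
nsSystemIndex: false
nsIndexType: eq
```

The SSSD options mentioned (ignore_group_members = true, enumerate = false) go in the [domain/...] section of sssd.conf on the clients.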
Hope that helps,
Senior Software Engineer, 389 Directory Server