My organisation is using a replicated 389-dirsrv. Lately, it has been crashing
every time after compaction.
It is reproducible on our instances by lowering compactdb-interval to
trigger the compaction:
dsconf -D "cn=Directory Manager" ldap://127.0.0.1 -w 'PASSWORD_HERE' backend config set --compactdb-interval 300
This is the log:
[03/Aug/2022:16:06:38.552781605 +0200] - NOTICE - checkpoint_threadmain - Compacting DB start: userRoot
[03/Aug/2022:16:06:38.752592692 +0200] - NOTICE - bdb_db_compact_one_db - compactdb: compact userRoot - 8 pages freed
[03/Aug/2022:16:06:44.172233009 +0200] - NOTICE - bdb_db_compact_one_db - compactdb: compact userRoot - 888 pages freed
[03/Aug/2022:16:06:44.179315345 +0200] - NOTICE - checkpoint_threadmain - Compacting DB start: changelog
[03/Aug/2022:16:13:18.020881527 +0200] - NOTICE - bdb_db_compact_one_db - compactdb: compact changelog - 458 pages freed
dirsrv@auth-alpha.service: Main process exited, code=killed, status=11/SEGV
dirsrv@auth-alpha.service: Failed with result 'signal'.
dirsrv@auth-alpha.service: Consumed 2d 6h 22min 1.122s CPU time.
The first steps finish very quickly, but the step before the 458 pages of the
retro-changelog are freed takes several minutes. During this time dirsrv writes
more than 10 G and reads more than 7 G (according to iotop).
Within seconds after this line is printed, dirsrv crashes.
What I also noticed is that, even though it reports having freed a lot of pages,
the retro-changelog does not seem to change in size.
The file `/var/lib/dirsrv/slapd-auth-alpha/db/changelog/id2entry.db` is 7.2 G
before and after the compaction.
389-ds-base/stable,now 22.214.171.124-2 amd64
Does someone have an idea how to debug / fix this?
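Not part of the original report, but one way to start debugging a SIGSEGV like this is to capture a core dump from the dirsrv unit and get a backtrace. A minimal sketch, assuming systemd and the `dirsrv@auth-alpha` instance name from the log; `DROPIN_DIR` defaults to a scratch directory here so the snippet runs unprivileged, but in production it would be `/etc/systemd/system/dirsrv@auth-alpha.service.d`:

```shell
# Sketch: write a systemd drop-in that removes the core-size limit for the
# dirsrv@auth-alpha unit, so the next SIGSEGV during compaction leaves a core.
# DROPIN_DIR defaults to a scratch dir for unprivileged testing; in production
# it would be /etc/systemd/system/dirsrv@auth-alpha.service.d (as root).
DROPIN_DIR="${DROPIN_DIR:-$(mktemp -d)}"
mkdir -p "$DROPIN_DIR"
cat > "$DROPIN_DIR/coredump.conf" <<'EOF'
[Service]
LimitCORE=infinity
EOF
echo "wrote $DROPIN_DIR/coredump.conf"
# Then (as root): systemctl daemon-reload && systemctl restart dirsrv@auth-alpha
# After the next crash: coredumpctl info dirsrv   (or: coredumpctl gdb dirsrv)
# to see a backtrace of the thread that hit the SIGSEGV.
```

With a backtrace in hand, the crash can be matched against known 389-ds-base compaction issues or reported upstream with something actionable attached.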
We are running 2 DS instances in multi-master replication. This week the sysadmin decided to apply OS patch updates on both servers. After the patches were applied, one DS is working fine with no issues; the other is back online, but we cannot access the DS or run an ldapsearch or an import: no errors, just a plain "invalid credentials". Any help on where to look?
Here is the version info:
OS on both hosts :Linux 3.10.0-1160.76.1.el7.x86_64 #1 SMP Wed Aug 10 16:21:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
#### Before the OS patches, both servers were running:
#### After the OS patches, they are running:
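Not part of the original report, but a way to narrow down a blanket "invalid credentials" after an OS update is to test binds step by step. The host URL below is a placeholder, and the actual checks are left as comments since they need the live server:

```shell
# Placeholder URL for the broken replica; substitute the real host.
HOST="ldap://localhost:389"

# 1. Anonymous root-DSE read: confirms the server answers LDAP at all.
#      ldapsearch -x -H "$HOST" -s base -b "" "(objectclass=*)" vendorVersion
# 2. Directory Manager simple bind: if even this returns err=49, compare the
#    nsslapd-rootpw value in /etc/dirsrv/slapd-*/dse.ldif with the working
#    replica (a patch run should not touch it, but a restore/rollback might).
#      ldapsearch -x -H "$HOST" -D "cn=Directory Manager" -W -s base -b "" "(objectclass=*)"
# 3. Check the access log for the matching BIND result code:
#      grep "err=49" /var/log/dirsrv/slapd-*/access | tail
echo "bind checklist prepared for $HOST"
```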
I have a StatefulSet in Kubernetes based on
I set a memory limit of 6Gi.
Doing some performance tests with 4k entries (querying every entry in a
loop), the used memory increases with every test until Kubernetes kills
the pod (around the twentieth test).
Reason: OOMKilled - exit code: 0
The logs only show
NOTICE - ldbm_back_search - Unindexed search: search
base="ou=myou,dc=XXX,dc=XXX" scope=2 filter="(objectClass=nsperson)"
when I get all entries in my OU:
ldapsearch -D "cn=directory manager" -W -b "ou=myou,dc=XXX,dc=XXX" -H ldap://localhost:3389 -s sub "(objectclass=nsPerson)" uid
Maybe a tuning problem or a memory leak?
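Not from the original post: two things worth checking here, sketched below with assumed names (`localhost` instance, `userRoot` backend). The dsconf syntax is from memory and may differ between versions, so treat the commented commands as a starting point rather than a recipe:

```shell
# Assumed instance and backend names; adjust to your deployment.
INSTANCE="localhost"
BACKEND="userRoot"

# 1. The "Unindexed search" notice: an equality index on objectClass lets the
#    (objectclass=nsPerson) filter use an index instead of scanning id2entry.
#    (Check `dsconf --help` for the exact subcommand syntax in your version.)
#      dsconf "$INSTANCE" backend index add --attr objectClass --index-type eq "$BACKEND"
#      dsconf "$INSTANCE" backend index reindex --attr objectClass "$BACKEND"
# 2. Growing RSS across test runs is often the entry/DB caches filling toward
#    their configured maximums, not a leak; watch the counters between runs:
#      ldapsearch -D "cn=directory manager" -W -H ldap://localhost:3389 \
#        -b "cn=monitor,cn=userRoot,cn=ldbm database,cn=plugins,cn=config" \
#        -s base currententrycachesize maxentrycachesize
echo "checklist for $BACKEND on $INSTANCE"
```

Also worth ruling out: in some versions the cache autotuning sizes itself from the host's memory rather than the container's cgroup limit, which can push the pod past its 6Gi cap even without a leak.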
There have been a lot of people just sending "unsubscribe" messages to
the list. At the bottom of every email from this list there is a link
to unsubscribe yourself. I don't mind doing it, but it's very easy to
do it yourself. Just a reminder...
Directory Server Development Team
I'm facing an issue that I can't solve.
I have recently installed two new LDAP servers (Ubuntu 18.04 / 389
After about 12 hours, the LDAP server stops responding even if the process is still there.
When I restart it, it takes a long time (so I have to kill the process).
I have 2 old 389 servers (version 126.96.36.199) with the same base that work fine.
Is there a known bug about this?
389ds as shipped by RHEL 9 is linked to NSS, which in theory supports PKCS#11, but in practice I can't get it to work.
More specifically, when you display a 389ds NSS database using modutil, you see p11-kit-proxy (good), but it reports "There are no slots attached to this module" (bad).
Has anyone got an explanation as to why this might be?
[root@seawitch ~]# modutil -list -dbdir /etc/dirsrv/slapd-seawitch
Listing of PKCS #11 Modules
  1. NSS Internal PKCS #11 Module
         slots: 2 slots attached

          slot: NSS Internal Cryptographic Services
         token: NSS Generic Crypto Services

          slot: NSS User Private Key and Certificate Services
         token: NSS Certificate DB

  2. p11-kit-proxy
  library name: p11-kit-proxy.so
         slots: There are no slots attached to this module
At the very least the system and default CA databases should be visible, but alas no:
[root@seawitch ~]# p11-kit list-modules
    library-description: PKCS#11 Kit Trust Module
    library-manufacturer: PKCS#11 Kit
    token: System Trust
        manufacturer: PKCS#11 Kit
    token: Default Trust
        manufacturer: PKCS#11 Kit