Problem browsing LDAP with Outlook
by Chris Bryant
When configuring Microsoft Outlook (not Outlook Express) to access an LDAP directory, there is an option to 'Enable Browsing (requires server support)'. If this option is chosen and the directory server supports it, then you should be able to open the LDAP address book and page up and down through the results. I have been unable to get this working properly with 389 DS.
When I try to browse from Outlook against the 389 DS directory, I can see the first page of results perfectly. However, when I move to the next page, only the first object returned has any attributes included; all of the rest of the objects in the page come back with no attributes. I have a test perl script that duplicates this behavior as well.
I can get this to work properly with an older version of Netscape Directory Server, and I can get it working with OpenDS. Since 389 DS advertises support for the controls that are required for this to work, just as the other two servers do, I would expect it to work there as well.
Has anyone out there gotten this to work with 389 DS? If so, can you share if there was anything special that you needed to do to get this to work? I'm trying to determine if this is a bug in the server, or if I'm just missing something in the configuration.
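For reference, Outlook's browsing mode drives the directory with the server-side sort and virtual list view (VLV) controls. Below is a minimal sketch of the kind of paged search my perl test script performs, using Net::LDAP; the host, base DN, and page size are placeholders, not our real values.

#!/usr/bin/perl
# Rough sketch of the paged browse; host, base DN, and page size are
# placeholders, not our real values.
use strict;
use warnings;
use Net::LDAP;
use Net::LDAP::Control::Sort;
use Net::LDAP::Control::VLV;
use Net::LDAP::Constant qw(LDAP_CONTROL_VLVRESPONSE);

my $ldap = Net::LDAP->new('ldap.example.com') or die "$@";
$ldap->bind;    # anonymous bind for the test

# VLV must be paired with a server-side sort control.
my $sort = Net::LDAP::Control::Sort->new(order => 'cn');
my $vlv  = Net::LDAP::Control::VLV->new(
    before  => 0,     # no entries before the target
    after   => 19,    # 20-entry pages
    content => 0,     # estimated content count, 0 on the first search
    offset  => 1,     # start at the first entry
);

for my $page (1 .. 2) {
    my $mesg = $ldap->search(
        base    => 'ou=people,dc=example,dc=com',
        scope   => 'sub',
        filter  => '(objectClass=person)',
        control => [$vlv, $sort],
    );
    die $mesg->error if $mesg->code;

    # Against 389 DS, on page 2 only the first entry has attributes.
    for my $entry ($mesg->entries) {
        my @attrs = $entry->attributes;
        printf "%s: %d attributes\n", $entry->dn, scalar @attrs;
    }

    # Feed the VLV response control back in and advance one page.
    my ($resp) = $mesg->control(LDAP_CONTROL_VLVRESPONSE)
        or die "no VLV response control returned";
    $vlv->response($resp);
    $vlv->scroll_page(1);
}

$ldap->unbind;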
Thanks,
Chris
USA.NET
changelog
by Denise Cosso
Hi,
How do I modify the nsslapd-encryptionalgorithm attribute on CentOS?
Thanks,
Denise
Stop the master servers and set nsslapd-encryptionalgorithm. The allowed values are AES and 3DES.
dn: cn=changelog5,cn=config
[...]
nsslapd-encryptionalgorithm: AES
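For example (a sketch only; "example" is a placeholder instance name), with the instance stopped the value can be set in dse.ldif and the server restarted:

service dirsrv stop example
# edit /etc/dirsrv/slapd-example/dse.ldif and set, under the entry above:
#   dn: cn=changelog5,cn=config
#   nsslapd-encryptionalgorithm: AES
service dirsrv start example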
--- On Tue, 4/6/13, Rich Megginson <rmeggins(a)redhat.com> wrote:
From: Rich Megginson <rmeggins(a)redhat.com>
Subject: Re: [389-users] changelog
To: "Denise Cosso" <guanaes51(a)yahoo.com.br>
Date: Tuesday, June 4, 2013, 16:34
On 06/04/2013 01:26 PM, Denise Cosso wrote:
Hi, Rich
CentOS release 6.3 (Final)
389-ds-base-libs-1.2.10.2-20.el6_3.x86_64
389-ds-1.2.2-1.el6.noarch
389-dsgw-1.1.10-1.el6.x86_64
389-ds-console-1.2.6-1.el6.noarch
389-ds-console-doc-1.2.6-1.el6.noarch
389-ds-base-1.2.10.2-20.el6_3.x86_64
As far as replication goes, you will need to use a security layer (SSL, TLS, or GSSAPI) to protect the cleartext password on the wire.
As far as encrypting it in the changelog goes, I'm not sure.
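The transport is set per replication agreement; a minimal sketch, assuming an agreement named "exampleAgreement" under the dc=example,dc=com replica (SSL here means LDAPS; use TLS for startTLS, and make sure nsDS5ReplicaPort points at the matching port):

dn: cn=exampleAgreement,cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
changetype: modify
replace: nsDS5ReplicaTransportInfo
nsDS5ReplicaTransportInfo: SSL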
Denise
--- On Tue, 4/6/13, Rich Megginson <rmeggins(a)redhat.com> wrote:
From: Rich Megginson <rmeggins(a)redhat.com>
Subject: Re: [389-users] changelog
To: "General discussion list for the 389 Directory server project." <389-users(a)lists.fedoraproject.org>
Cc: "Denise Cosso" <guanaes51(a)yahoo.com.br>
Date: Tuesday, June 4, 2013, 16:11
On 06/04/2013 12:39 PM, Denise Cosso wrote:
Hi,
Description of problem:
When a userPassword is changed on a server with a changelog, the hashed password is logged, along with a cleartext pseudo-attribute version. It looks like this:
change::
replace: userPassword
userPassword: {SHA256}vqtiN2LHdrEUOJUKu+IBVqAVFsAlvFw+11kD/Q==
-
replace: unhashed#user#password
unhashed#user#password: secret12
This unhashed version is used in winsync, where the cleartext version of the password must be written to AD.
Now, if the DS is involved in replication with another DS, the change will be replayed to the other DS replicas exactly as it is logged, including the cleartext pseudo-attribute password.
What platform? What version of 389-ds-base are you using?
thanks,
Denise
389 Master - Master Replication
by Santos Ramirez
Good Morning,
We have a master - master replication agreement. When we initialize the replication it works perfectly: we can see changes to a test user we have set up propagate in both directions between the two servers. However, at some point replication stops, and we cannot get it to start again. The only way we can get replication going again is to recreate the replication agreement, and then it eventually fails again. Can anyone please point us in a direction? I am relatively new to 389, so any help would be greatly appreciated.
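Is checking the agreement status attributes, e.g. with a search like the sketch below (the bind DN is a placeholder), the right place to start?

ldapsearch -x -D "cn=Directory Manager" -W -b "cn=config" \
    "(objectClass=nsds5ReplicationAgreement)" \
    nsds5replicaLastUpdateStatus nsds5replicaLastUpdateStart \
    nsds5replicaLastUpdateEnd nsds5replicaUpdateInProgress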
Santos U. Ramirez
Linux Systems Administrator
National DCP, LLC
150 Depot Street
Bellingham, Ma. 02019
Phone: 508-422-3089
Fax: 508-422-3866
Santos.Ramirez(a)natdcp.com
389 directory server crash
by Mitja Mihelič
Hi!
We are having problems with some of our 389-DS instances. They crash after receiving an update from the provider.
The crash happened twice, each time after about a week of running without problems. The crashes happened on two consumer servers, but not at the same time.
The servers are running CentOS 6.x with the following 389 DS packages installed:
389-ds-console-doc-1.2.6-1.el6.noarch
389-console-1.1.7-1.el6.noarch
389-adminutil-1.1.15-1.el6.x86_64
389-dsgw-1.1.10-1.el6.x86_64
389-ds-base-debuginfo-1.2.11.15-14.el6_4.x86_64
389-admin-1.1.29-1.el6.x86_64
389-ds-console-1.2.6-1.el6.noarch
389-admin-console-doc-1.1.8-1.el6.noarch
389-ds-1.2.2-1.el6.noarch
389-ds-base-1.2.11.15-14.el6_4.x86_64
389-ds-base-libs-1.2.11.15-14.el6_4.x86_64
389-admin-console-1.1.8-1.el6.noarch
We are in the process of replacing the CentOS 5.x based consumer+provider setup with a CentOS 6.x based one. For the time being, the CentOS 6 machines are acting as consumers for the old server. They run for a while, and then the replicated instances crash, though not at the same time.
One of the servers did not want to start after the crash, so I ran db2index on its database. It has been running for four days and has still not finished. All I get from db2index now is output like this:
[09/Jul/2013:13:29:11 +0200] - reindex db: Processed 65095 entries (pass
1104) -- average rate 53686277.5/sec, recent rate 0.0/sec, hit ratio 0%
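For reference, the reindex was run offline, roughly like this (the instance name is a placeholder):

service dirsrv stop example
/usr/lib64/dirsrv/slapd-example/db2index -n userRoot
service dirsrv start example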
The other instance did start up, but replication no longer worked. I disabled replication to this host and set it up again. I chose "Initialize consumer now", and the consumer crashed every time. I have enabled full error logging and could find nothing.
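A sketch of how the logging was raised, assuming the replication debug level (8192) is the one wanted:

dn: cn=config
changetype: modify
replace: nsslapd-errorlog-level
nsslapd-errorlog-level: 8192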
I have read a few threads (not all, I admit) on this list and
http://directory.fedoraproject.org/wiki/FAQ#Debugging_Crashes and tried
to troubleshoot.
The crash produced the attached core dump, and I could use your help understanding it, as well as any other help with the crash. If more info is needed, I will gladly provide it.
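In case it is useful, the backtrace can be pulled from the core the way the FAQ suggests (the core file name here is a placeholder):

gdb /usr/sbin/ns-slapd /var/log/dirsrv/slapd-example/core.12345
(gdb) thread apply all bt full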
Regards, Mitja
FW: fresh replica reports "reloading ruv failed" just after successful initialization
by Jovan.VUKOTIC@sungard.com
Hi,
I would like to link the issue I reported on Saturday to bug 723937, filed some two years ago.
There, just as in my case, leftover dn/entry cache entries were reported prior to the initialization of the master replica.
I repeated the replication configuration today, initializing a multi-master replica that had only one entry (the root object) in its userRoot database prior to the initialization.
First two entries were reported in the cache, then 5, and then 918 dn's (which matches the number of entries in the master database):
[24/Jun/2013:08:16:03 -0400] - entrycache_clear_int: there are still 2 entries in the entry cache.
[24/Jun/2013:08:16:03 -0400] - dncache_clear_int: there are still 2 dn's in the dn cache. :/
[24/Jun/2013:08:16:03 -0400] - WARNING: Import is running with nsslapd-db-private-import-mem on; No other process is allowed to access the database
[24/Jun/2013:08:16:07 -0400] - import userRoot: Workers finished; cleaning up...
[24/Jun/2013:08:16:07 -0400] - import userRoot: Workers cleaned up.
[24/Jun/2013:08:16:07 -0400] - import userRoot: Indexing complete. Post-processing...
[24/Jun/2013:08:16:07 -0400] - import userRoot: Generating numSubordinates complete.
[24/Jun/2013:08:16:07 -0400] - import userRoot: Flushing caches...
[24/Jun/2013:08:16:07 -0400] - import userRoot: Closing files...
[24/Jun/2013:08:16:07 -0400] - entrycache_clear_int: there are still 5 entries in the entry cache.
[24/Jun/2013:08:16:07 -0400] - dncache_clear_int: there are still 918 dn's in the dn cache. :/
[24/Jun/2013:08:16:07 -0400] - import userRoot: Import complete. Processed 918 entries in 4 seconds. (229.50 entries/sec)
[24/Jun/2013:08:16:07 -0400] NSMMReplicationPlugin - multimaster_be_state_change: replica dc=xxxxxx,dc=com is coming online; enabling replication
[24/Jun/2013:08:16:07 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (dc=xxxxxx,dc=com): LDAP error - 68
I would like to add that all of the replicas that could not be configured due to the reported errors were installed on Solaris 10 on SPARC processors, whereas the only replica that was initialized successfully was installed on Solaris 10 on i386 processors.
Thanks,
Jovan
Jovan Vukotić * Senior Software Engineer * Ambit Treasury Management * SunGard * Banking * Bulevar Milutina Milankovića 136b, Belgrade, Serbia * tel: +381.11.6555-66-1 * jovan.vukotic(a)sungard.com
From: Vukotic, Jovan
Sent: Saturday, June 22, 2013 11:59 PM
To: '389-users(a)lists.fedoraproject.org'
Subject: fresh replica reports "reloading ruv failed" just after successful initialization.
Hi,
We have four 389 DS instances, version 1.2.11, that we are organizing into a multi-master replication topology.
After I enabled all four multi-master replicas and initialized them from the reference replica M1, and incremental replication started, it turned out that only two of them were included in the replication: the reference M1 and M2 (replication works in both directions).
I tried to fix M3 and M4 in the following way:
M3 example:
removed the replication agreement M1-M3 (M2-M3 did not exist, M4 was switched off)
After several database restores of the pre-replication state and reconfiguration of that replica, I removed the 389 DS instance M3 completely and reinstalled it again: remove-ds-admin.pl + setup-ds-admin.pl. I configured TLS/SSL (as before), restarted the DS, and enabled the replica from the 389 Console.
Then I returned to M1, recreated the agreement, and initialized M3. It was successful again, in the sense that M3 imported all the data, but immediately after that errors that seem strange to me were reported:
What confuses me is that LDAP 68 means that an entry already exists... even though this is a new replica. Why a tombstone?
Or, to make a long story short: is the only remedy to reinstall all four replicas again?
[22/Jun/2013:16:30:50 -0400] - All database threads now stopped // this is from a backup done before replication configuration
[22/Jun/2013:16:43:25 -0400] NSMMReplicationPlugin - multimaster_be_state_change: replica xxxxxxxxxx is going offline; disabling replication
[22/Jun/2013:16:43:25 -0400] - entrycache_clear_int: there are still 20 entries in the entry cache.
[22/Jun/2013:16:43:25 -0400] - dncache_clear_int: there are still 20 dn's in the dn cache. :/
[22/Jun/2013:16:43:25 -0400] - WARNING: Import is running with nsslapd-db-private-import-mem on; No other process is allowed to access the database
[22/Jun/2013:16:43:30 -0400] - import userRoot: Workers finished; cleaning up...
[22/Jun/2013:16:43:30 -0400] - import userRoot: Workers cleaned up.
[22/Jun/2013:16:43:30 -0400] - import userRoot: Indexing complete. Post-processing...
[22/Jun/2013:16:43:30 -0400] - import userRoot: Generating numSubordinates complete.
[22/Jun/2013:16:43:30 -0400] - import userRoot: Flushing caches...
[22/Jun/2013:16:43:30 -0400] - import userRoot: Closing files...
[22/Jun/2013:16:43:30 -0400] - entrycache_clear_int: there are still 20 entries in the entry cache.
[22/Jun/2013:16:43:30 -0400] - dncache_clear_int: there are still 917 dn's in the dn cache. :/
[22/Jun/2013:16:43:30 -0400] - import userRoot: Import complete. Processed 917 entries in 4 seconds. (229.25 entries/sec)
[22/Jun/2013:16:43:30 -0400] NSMMReplicationPlugin - multimaster_be_state_change: replica xxxxxxxxxxx is coming online; enabling replication
[22/Jun/2013:16:43:30 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxxxx); LDAP error - 68
[22/Jun/2013:16:43:30 -0400] NSMMReplicationPlugin - replica_enable_replication: reloading ruv failed
[22/Jun/2013:16:43:32 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxxx); LDAP error - 68
[22/Jun/2013:16:44:02 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxxxx); LDAP error - 68
[22/Jun/2013:16:44:32 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxxx); LDAP error - 68
[22/Jun/2013:16:45:02 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxx); LDAP error - 68
[22/Jun/2013:16:45:32 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxxx); LDAP error - 68
[22/Jun/2013:16:46:02 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxxx); LDAP error - 68
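If it helps the diagnosis, my understanding is that the RUV tombstone the error refers to can be inspected with a search along these lines (suffix anonymized as in the logs; the nsuniqueid value is the fixed one used for the RUV entry):

ldapsearch -x -D "cn=Directory Manager" -W -b "dc=xxxxxx,dc=com" \
    "(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectClass=nsTombstone))" \
    nsds50ruv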
Any help will be appreciated.
Thank you.
Jovan Vukotić * Senior Software Engineer * Ambit Treasury Management * SunGard * Banking * Bulevar Milutina Milankovića 136b, Belgrade, Serbia * tel: +381.11.6555-66-1 * jovan.vukotic(a)sungard.com
ACL processing
by Russell Beall
I did a lot of work experimenting with 389 for use as a replacement for Sun SJES. It worked really well when I focused my efforts on the backend processing we do as Directory Manager, except for a few performance issues which are being addressed in bug reports.
I thought for sure I had done at least some load testing with service accounts. The service accounts must go through ACL processing, and we have a lot of ACLs. I'm not sure if I changed something, or if I just didn't test this feature thoroughly enough, but now that I am doing more development work with service accounts, I am seeing a huge processing hit when a service account is used as opposed to Directory Manager: on the order of a second and a half to respond to a simple base query, versus an instantaneous response. Our old SJES servers respond very snappily in comparison for this type of query.
CPU usage for a single thread maxes out during the time spent waiting, and I/O wait is zero, so the bulk of the time is probably being spent processing the ACLs. This is especially true if I turn on logging for ACL processing: then it takes a very long time, with one example taking about 9 minutes.
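For reference, a sketch of how the ACL logging gets turned on, assuming the standard ACL debug level (128):

dn: cn=config
changetype: modify
replace: nsslapd-errorlog-level
nsslapd-errorlog-level: 128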
It seems to be processing and reprocessing the ACLs many many times over.
I think I must have changed something or done something wrong, because I'm pretty sure I remember much quicker response times when using a service account in earlier testing.
This is with 389-ds-base 1.2.10.14 on Red Hat 6.2.
It was an experimental version downloaded to check out a memory fragmentation option that was coded in, so maybe I just have a version that was in the middle of ACL processing changes?
Thanks for any help,
Russ.
Question about lastlogintime
by harry.devine@faa.gov
We are interested in tracking each user's last login time, and I see the attribute that I can add to a user's profile. But we have 460 users, so adding it manually would be tedious. I saw this article online: https://fedorahosted.org/389/ticket/371 and wondered if all we had to do was add what it mentions to our dse.ldif file and restart the server. Would that work? If not, would scripting the addition of that attribute be possible? Or is there another way?
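For context, my reading of the ticket is that the change amounts to enabling the Account Policy plugin and telling it to record logins, along these lines (untested, so treat it as a sketch):

dn: cn=Account Policy Plugin,cn=plugins,cn=config
changetype: modify
replace: nsslapd-pluginEnabled
nsslapd-pluginEnabled: on

dn: cn=config,cn=Account Policy Plugin,cn=plugins,cn=config
changetype: modify
replace: alwaysRecordLogin
alwaysRecordLogin: yes
-
replace: stateAttrName
stateAttrName: lastLoginTime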
Thanks!
Harry
How to keep dnanextvalue in sync when using DNA plugin?
by Kyle Johnson
Hey everyone,
The DNA plugin has been set up on my first server for a while now and has been working fine.
I've added a second server to the environment and configured it as multi-master. After setting up the plugin on that server and then adding a test user to it, the UID started again at the bottom of the range defined by dnaNextValue, rather than continuing where the first server left off. I would like dnaNextValue to stay in sync between each server in a multi-master environment.
How do you do this? Am I missing something obvious?
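For reference, a sketch of the kind of plugin config entry involved (names and values are placeholders, not my real config); my understanding is that dnaSharedCfgDN is the replicated entry through which the servers coordinate ranges, though I may be misreading it:

dn: cn=UID numbers,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
objectClass: top
objectClass: extensibleObject
cn: UID numbers
dnaType: uidNumber
dnaFilter: (objectClass=posixAccount)
dnaScope: ou=people,dc=example,dc=com
dnaNextValue: 1000
dnaMaxValue: 10000
dnaSharedCfgDN: cn=Account UIDs,ou=Ranges,dc=example,dc=com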
Kyle
Self Service Portal
by Tom Tucker
Any recommendations on a commercial or open-source web-based self-service portal that gives 389 DS users the ability to recover or change their password?
Thanks,
Tom