Hi,

We have two 389-ds servers running with multimaster replication. The db backup size was only 66 MB, but after I enabled the Account Policy Plugin to track the lastLoginTime of users, I saw the changelog db size increase to about 3 GB:

... the database size is now 3,8G May 25 10:17 74c37b82-3ef411e7-ac57be37-2d84af6b_55dc8a41000000010000.db ...

How can I fix the changelog db size problem?
[root@mhrsldap1 changelogdb]# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
[root@mhrsldap1 changelogdb]# rpm -qa | grep 389
389-admin-console-doc-1.1.12-1.el7.noarch
389-adminutil-1.1.22-1.el7.x86_64
389-ds-base-libs-1.3.5.10-20.el7_3.x86_64
389-ds-base-1.3.5.10-20.el7_3.x86_64
389-admin-console-1.1.12-1.el7.noarch
389-ds-console-doc-1.2.16-1.el7.noarch
389-console-1.1.18-1.el7.noarch
389-ds-base-devel-1.3.5.10-20.el7_3.x86_64
389-adminutil-devel-1.1.22-1.el7.x86_64
389-ds-base-snmp-1.3.5.10-20.el7_3.x86_64
389-ds-console-1.2.16-1.el7.noarch
389-admin-1.1.46-1.el7.x86_64
On 05/25/2017 03:23 AM, Alparslan Ozturk wrote:
How can I fix the changelog db size problem.
I just wrote this wiki page to address this changelog size issue:
http://www.port389.org/docs/389ds/FAQ/changelog-trimming.html
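In short, the trimming settings that page describes live on the changelog entry; a minimal ldapmodify input as a sketch (the 30-day and one-million-entry limits are illustrative, choose values your topology can tolerate):

```ldif
dn: cn=changelog5,cn=config
changetype: modify
replace: nsslapd-changelogmaxage
nsslapd-changelogmaxage: 30d
-
replace: nsslapd-changelogmaxentries
nsslapd-changelogmaxentries: 1000000
```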
Regards, Mark
389-users mailing list -- 389-users@lists.fedoraproject.org To unsubscribe send an email to 389-users-leave@lists.fedoraproject.org
On Thu, 2017-05-25 at 13:24 -0400, Mark Reynolds wrote:
I just wrote this wiki page to address this changelog size issue:
http://www.port389.org/docs/389ds/FAQ/changelog-trimming.html
Check your RUVs too.
The changelog only trims entries that are both *fully resolved* on all masters and whose time/size limit has passed.
So if you have a master that is not accepting updates, then no matter your trim settings, the changelog will grow continually until that master is restored to a sane state.
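The RUVs can be read from the replica entries under cn=config on each master; a sketch, assuming Directory Manager credentials and the default port:

```
ldapsearch -xLLL -H ldap://mhrsldap1:389 -D "cn=Directory Manager" -W \
    -b "cn=config" "(objectClass=nsds5Replica)" nsds50ruv
```

Compare the max CSNs per replica ID across the masters; a replica whose CSN never advances is the one holding trimming back.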
I hope that helps,
Hi William, the number of changes is very big. What am I doing wrong?

[image: inline image 1]
On server1:
dn: cn=ldap1-ldap2,cn=replica,cn=dc\3Dsagliknet\2Cdc\3Dsaglik\2Cdc\3Dgov\2Cdc\3Dtr,cn=mapping tree,cn=config
objectClass: top
objectClass: nsDS5ReplicationAgreement
description: ldap1 -> ldap2
cn: ldap1-ldap2
nsDS5ReplicaRoot: dc=sagliknet,dc=saglik,dc=gov,dc=tr
nsDS5ReplicaHost: 172.16.54.181
nsDS5ReplicaPort: 389
nsDS5ReplicaBindDN: cn=replication manager,cn=config
nsDS5ReplicaTransportInfo: LDAP
nsDS5ReplicaBindMethod: SIMPLE
nsDS5ReplicaCredentials: {AES********
nsds5replicareapactive: 0
nsds5replicaLastUpdateStart: 20170526084327Z
nsds5replicaLastUpdateEnd: 19700101000000Z
nsds5replicaChangesSentSinceStartup:: MToxMjE2MTE0Mi8xNDUg
nsds5replicaLastUpdateStatus: Error (0) Replica acquired successfully: Incremental update started
nsds5replicaUpdateInProgress: TRUE
nsds5replicaLastInitStart: 20170522184342Z
nsds5replicaLastInitEnd: 20170522184404Z
nsds5replicaLastInitStatus: 0 Total update succeeded
And on server2:
dn: cn=mhrsldap2-mhrsldap1,cn=replica,cn=dc\3Dsagliknet\2Cdc\3Dsaglik\2Cdc\3Dgov\2Cdc\3Dtr,cn=mapping tree,cn=config
objectClass: top
objectClass: nsDS5ReplicationAgreement
description: mhrsldap2 => mhrsldap1
cn: mhrsldap2-mhrsldap1
nsDS5ReplicaRoot: dc=sagliknet,dc=saglik,dc=gov,dc=tr
nsDS5ReplicaHost: 172.16.54.180
nsDS5ReplicaPort: 389
nsDS5ReplicaBindDN: cn=replication manager,cn=config
nsDS5ReplicaTransportInfo: LDAP
nsDS5ReplicaBindMethod: SIMPLE
nsDS5ReplicaCredentials: *************
nsds50ruv: {replicageneration} 55dc8a41000000010000
nsds50ruv: {replica 1 ldap://mhrsldap1.localdomain:389} 55dd9ef0000000010000 591976c7001500010000
nsds50ruv: {replica 2 ldap://mhrsldap2.localdomain:389} 55df0ea1000000020000 591976c3008800020000
nsruvReplicaLastModified: {replica 1 ldap://mhrsldap1.localdomain:389} 00000000
nsruvReplicaLastModified: {replica 2 ldap://mhrsldap2.localdomain:389} 00000000
nsds5replicareapactive: 0
nsds5replicaLastUpdateStart: 20170522184536Z
nsds5replicaLastUpdateEnd: 19700101000000Z
nsds5replicaChangesSentSinceStartup:: Mjo3NDkzMDg4LzAgMDozMC8wIA==
nsds5replicaLastUpdateStatus: Error (0) Replica acquired successfully: Incremental update started
nsds5replicaUpdateInProgress: TRUE
nsds5replicaLastInitStart: 19700101000000Z
nsds5replicaLastInitEnd: 19700101000000Z
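Two details in these entries are easy to misread. A double colon (`::`) means the value is base64-encoded, and the first eight hex digits of any CSN in nsds50ruv are a Unix timestamp. A small decoding sketch for the server1/server2 values above (bash, GNU `base64` and `date` assumed):

```shell
#!/usr/bin/env bash
# nsds5replicaChangesSentSinceStartup is base64; the format is rid:sent/skipped
sent=$(printf 'MToxMjE2MTE0Mi8xNDUg' | base64 -d | tr -d ' ')
echo "$sent"   # → 1:12161142/145  (replica 1 has sent over 12 million changes)

# The leading 8 hex digits of a CSN are seconds since the epoch
csn=591976c7001500010000        # replica 1 max CSN from nsds50ruv above
ts=$((16#${csn:0:8}))
date -u -d "@$ts" '+%Y-%m-%d %H:%M:%SZ'   # → 2017-05-15 09:37:11Z
```

Twelve million changes sent since startup by one agreement is what "number of changes very big" looks like in the raw data.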
On Fri, 2017-05-26 at 11:44 +0300, Alparslan Ozturk wrote:
dn: cn=mhrsldap2-mhrsldap1,cn=replica,cn=dc\3Dsagliknet\2Cdc\3Dsaglik\2Cdc
...
nsds5replicareapactive: 0
nsds5replicaLastUpdateStart: 20170522184536Z
nsds5replicaLastUpdateEnd: 19700101000000Z
The answer is here: this replica has perhaps never been able to send a successful update to ldap1. You should check your error log, turn on replication logging, and generally check connectivity between these two masters.
While trying to upgrade the ldap2 server I saw these messages:
[29/May/2017:11:35:48.682669408 +0300] slapd shutting down - signaling operation threads - op stack size 166 max work q size 135 max work q stack size 135
[29/May/2017:11:35:48.690952202 +0300] slapd shutting down - closing down internal subsystems and plugins
[29/May/2017:11:35:49.872175332 +0300] NSMMReplicationPlugin - agmt="cn=mhrsldap2-mhrsldap1" (172:389): Warning: Attempting to release replica, but unable to receive endReplication extended operation response from the replica. Error -5 (Timed out)
[29/May/2017:11:35:50.060265381 +0300] Waiting for 4 database threads to stop
[29/May/2017:11:35:51.008788537 +0300] All database threads now stopped
[29/May/2017:11:35:51.021523201 +0300] slapd shutting down - freed 135 work q stack objects - freed 166 op stack objects
[29/May/2017:11:35:51.974432991 +0300] slapd stopped.
[29/May/2017:11:35:54.648763978 +0300] check_and_set_import_cache: pagesize: 4096, pages: 2001763, procpages: 2933
[29/May/2017:11:35:54.651378439 +0300] Import allocates 2860044KB import cache.
[29/May/2017:11:35:54.652443493 +0300] Upgrade DN Format - NetscapeRoot: Start upgrade dn format.
[29/May/2017:11:35:54.654011238 +0300] Upgrade DN Format - Instance NetscapeRoot in /var/lib/dirsrv/slapd-mhrsldap/db/NetscapeRoot is up-to-date
[29/May/2017:11:35:54.874089044 +0300] check_and_set_import_cache: pagesize: 4096, pages: 2001763, procpages: 2934
[29/May/2017:11:35:54.876137518 +0300] Import allocates 2859732KB import cache.
[29/May/2017:11:35:54.877121520 +0300] Upgrade DN Format - userRoot: Start upgrade dn format.
[29/May/2017:11:35:54.878978734 +0300] Upgrade DN Format - Instance userRoot in /var/lib/dirsrv/slapd-mhrsldap/db/userRoot is up-to-date
[29/May/2017:11:35:55.351168914 +0300] 389-Directory/1.3.5.10 B2017.145.2037 starting up
[29/May/2017:11:35:55.375878667 +0300] resizing db cache size: 20000000 -> 10000000
[29/May/2017:11:35:55.559740675 +0300] slapd started. Listening on All Interfaces port 389 for LDAP requests
When I disabled the lastLoginTime attribute in the Account Policy Plugin (alwaysRecordLogin: yes), everything went back to normal and replication succeeded.

So I think there is a problem with changelogdb management, because I only need the last 90 days of lastLoginTime information. And maybe our system is used very frequently, so replication cannot complete while users keep logging in and lastLoginTime keeps being recorded?

In the ldap2 logs:
[29/May/2017:11:54:31.609920121 +0300] conn=4 fd=64 slot=64 connection from 172.16.54.180 to 172.16.54.181
[29/May/2017:11:54:31.610135409 +0300] conn=4 op=0 BIND dn="cn=replication manager,cn=config" method=128 version=3
[29/May/2017:11:54:31.610658927 +0300] conn=4 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="cn=replication manager,cn=config"
[29/May/2017:11:54:31.611037508 +0300] conn=4 op=1 SRCH base="" scope=0 filter="(objectClass=*)" attrs="supportedControl supportedExtension"
[29/May/2017:11:54:31.612070109 +0300] conn=4 op=1 RESULT err=0 tag=101 nentries=1 etime=0
[29/May/2017:11:54:31.612470105 +0300] conn=4 op=2 SRCH base="" scope=0 filter="(objectClass=*)" attrs="supportedControl supportedExtension"
[29/May/2017:11:54:31.614206652 +0300] conn=4 op=2 RESULT err=0 tag=101 nentries=1 etime=0
[29/May/2017:11:54:31.614628513 +0300] conn=4 op=3 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multimaster-extop"
[29/May/2017:11:54:31.615079610 +0300] conn=4 op=3 RESULT err=0 tag=120 nentries=0 etime=0
[29/May/2017:11:54:31.633632464 +0300] conn=4 op=4 MOD dn="cn=20606764540,cn=Users,dc=sagliknet,dc=saglik,dc=gov,dc=tr"
[29/May/2017:11:54:31.635792553 +0300] conn=4 op=4 RESULT err=0 tag=103 nentries=0 etime=0 csn=592be1c9000000010000
[29/May/2017:11:54:31.635907086 +0300] conn=4 op=5 MOD dn="cn=20606764540,cn=Users,dc=sagliknet,dc=saglik,dc=gov,dc=tr"
[29/May/2017:11:54:31.637579233 +0300] conn=4 op=5 RESULT err=0 tag=103 nentries=0 etime=0 csn=592be1c9000100010000
[29/May/2017:11:54:31.764759069 +0300] conn=4 op=6 EXT oid="2.16.840.1.113730.3.5.5" name="replication-multimaster-extop"
[29/May/2017:11:54:31.765778342 +0300] conn=4 op=6 RESULT err=0 tag=120 nentries=0 etime=0
[29/May/2017:11:54:31.768032079 +0300] conn=4 op=7 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multimaster-extop"
[29/May/2017:11:54:31.768405286 +0300] conn=4 op=7 RESULT err=0 tag=120 nentries=0 etime=0
[29/May/2017:11:54:31.769224664 +0300] conn=4 op=8 EXT oid="2.16.840.1.113730.3.5.5" name="replication-multimaster-extop"
[29/May/2017:11:54:31.770321529 +0300] conn=4 op=8 RESULT err=0 tag=120 nentries=0 etime=0
[29/May/2017:11:55:31.835062981 +0300] conn=4 op=9 UNBIND
[29/May/2017:11:55:31.835103451 +0300] conn=4 op=9 fd=64 closed - U1
I removed all the agreements, then created new agreements without lastLoginTime. Now everything is working properly. Many thanks for the help.
On Mon, 2017-05-29 at 15:05 +0300, Alparslan Ozturk wrote:
Glad that resolved it. I don't understand what caused the issue, though :(
This is my person object, and it has lastLoginTime (the operational attribute). So when many users bind, the lastLoginTime information is updated and replication is started. Now that I have changed the agreement to exclude lastLoginTime, much of that replication does not occur, so the changelogdb size grows more slowly. But it still grows.

[image: inline image 1][image: inline image 2]
On Tue, 2017-05-30 at 15:42 +0300, Alparslan Ozturk wrote:
I don't think you can use fractional replication like this, because you end up committing a change on master A or B that cannot be resolved by the other, so the changelog will grow forever.
Allow the replication of the attribute :)
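For context, the exclusion discussed here is set per agreement with nsDS5ReplicatedAttributeList; a sketch of what excluding the attribute looks like (suffix elided):

```ldif
dn: cn=ldap1-ldap2,cn=replica,cn=<suffix>,cn=mapping tree,cn=config
changetype: modify
replace: nsDS5ReplicatedAttributeList
nsDS5ReplicatedAttributeList: (objectclass=*) $ EXCLUDE lastLoginTime
```

As the warning above says, each master still records the excluded change in its own changelog.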
Yes, you are right. It is growing :)
[root@mhrsldap1 ~]# ls -alh /var/lib/dirsrv/slapd-mhrsldap/changelogdb/
total 1,7G
drwxr-xr-x 2 nobody nobody  135 May 29 14:34 .
drwxrwx--- 6 nobody nobody   54 May 22 16:33 ..
-rw------- 1 nobody nobody 1,6G May 31 09:27 a8595502-446211e7-840bbe37-2d84af6b_55dc8a41000000010000.db
-rw-r--r-- 1 nobody nobody    0 May 29 14:34 a8595502-446211e7-840bbe37-2d84af6b.sema
-rw------- 1 nobody nobody   30 May 22 16:36 DBVERSION
On 05/31/2017 01:44 AM, William Brown wrote:
The changes will be logged independently of whether they are replicated or not, so trimming is the only way to limit the size. If you mean that trimming doesn't happen when a change is not replicated: this should be handled by the regular update of the "keep alive" entry, which will be replicated and update the consumer RUV, so that trimming can proceed.
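The "keep alive" entries are ordinary ldapsubentry objects named after the replica ID (cn=repl keep alive <rid>), so you can watch them being refreshed; a sketch against this thread's suffix (attribute details vary by version, so none are requested):

```
ldapsearch -xLLL -D "cn=Directory Manager" -W \
    -b "dc=sagliknet,dc=saglik,dc=gov,dc=tr" \
    "(&(objectClass=ldapsubentry)(cn=repl keep alive*))"
```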
Dear Ludwig, I think you are right. Replication now completes successfully because I changed the replication agreement (every time a user logs in, lastLoginTime changes); as you know, I removed the lastLoginTime attribute from the replication agreement.

Now the changelogdb is trimmed on both multimaster sites; its size varies between 1 and 2 GB:

-rw------- 1 nobody nobody 1,1G Jun 6 13:46 a8595502-446211e7-840bbe37-2d84af6b_55dc8a41000000010000.db
The major problem is that many users log in concurrently, so real-time replication is not possible. I think this area needs development, or high-usage systems should schedule replication at optimal periods instead of replicating in "real time".
On 06/06/2017 06:52 AM, Alparslan Ozturk wrote:
You can set up a replication schedule; see:
https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/ht...
and
https://access.redhat.com/documentation/en-us/red_hat_directory_server/10/ht...
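The schedule those pages describe is set on the agreement with nsDS5ReplicaUpdateSchedule; a minimal sketch (window and days illustrative; per the documentation the day digits run 0-6 with 0 = Sunday, so 12345 is Monday through Friday):

```ldif
dn: cn=ldap1-ldap2,cn=replica,cn=<suffix>,cn=mapping tree,cn=config
changetype: modify
replace: nsDS5ReplicaUpdateSchedule
nsDS5ReplicaUpdateSchedule: 0100-0300 12345
```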