On 12/23/2015 11:16 AM, bahan w wrote:
Re.

I have some additional questions, if I may?

Let's say I have 4 IPA masters:
S1
S2
S3
S4

1. When a modification is performed on a specific server, S1 for example:
- Is it the replication plugin on S2, S3, and S4 that replicates the modification?
- Or is it the replication plugin on S1 that replicates to S2, S3, and S4?
Simple description:

When a replica is updated by a client, that "update" goes into the replication changelog.  Then each replication agreement reads the changelog and sends those updates to the replication consumer defined in the agreement.  The process then continues on each replica until the change is present on all the servers.  There's much more to it than that, but that's the basic idea.
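
If you want to see the agreements themselves, they live under cn=config.  Something like this should list them (illustrative only - adjust the host, port, and credentials to your environment):

###
# List the replication agreements defined on this master, and the
# consumer host/port and suffix each one feeds.
ldapsearch -x -D "cn=Directory Manager" -W -h <IPA SERVER> -p 389 \
    -b "cn=config" "(objectclass=nsds5replicationagreement)" \
    nsDS5ReplicaHost nsDS5ReplicaPort nsDS5ReplicaRoot
###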

2. To come back to my original question: so it is the replication plugin on S2, S3, and S4 that tries to create the cn=repl keep alive entry in the LDAP tree? Not the replication plugin on S1, right?
Each replica has/creates its own "keep alive" entry. 
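
About your earlier search: if I recall correctly, the keep alive entry is created as an ldapsubentry, and the server hides ldapsubentries from ordinary subtree searches unless the filter explicitly asks for them.  So a search along these lines should find all of them (again, adjust to your environment):

###
# Search the whole suffix for keep alive entries; the explicit
# ldapsubentry filter is what makes them visible in a subtree search.
ldapsearch -x -D "cn=Directory Manager" -W -h <IPA SERVER> -p 389 \
    -b "dc=mydomain" "(&(objectclass=ldapsubentry)(cn=repl keep alive*))"
###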

3. In the log message I can see a number after "repl keep alive". Do you know what it means? Is it the replication frequency?
It's the replica ID.

https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/10/html/Administration_Guide/Managing_Replication-Configuring_Multi_Master_Replication.html#Multi_Master_Replication-Configuring_the_Read_Write_Replicas_on_the_Supplier_Servers
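
Each server stores its replica ID on its replica configuration entry, so you can check it with something like this (run against each master):

###
# Show this server's replica ID for each replicated suffix.
ldapsearch -x -D "cn=Directory Manager" -W -h <IPA SERVER> -p 389 \
    -b "cn=config" "(objectclass=nsds5replica)" nsDS5ReplicaId nsDS5ReplicaRoot
###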

4. When you say this is to avoid replicas becoming obsolete, what do you mean? If I understood correctly, there is one cn=repl keep alive entry in the LDAP tree; are there multiple attributes on this entry?
If a replica is not directly updated for a long time, then the last changes it received from a client can get purged from the changelog and/or the entry metadata, which makes it impossible to update other replicas once this condition is hit.  This is more prevalent when using fractional replication (which IPA uses).  It's a very complicated process, and out of the scope of this discussion.  You don't need to worry about this entry :-)  And the error message you reported is actually just an informational message - it should be changed to a different log level.
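
If you're curious, the purging behavior is controlled by a few settings on the changelog and replica configuration entries.  A rough illustration (attribute values will of course vary with your setup):

###
# Changelog trimming setting - how long changes are kept.
ldapsearch -x -D "cn=Directory Manager" -W -h <IPA SERVER> -p 389 \
    -b "cn=changelog5,cn=config" nsslapd-changelogmaxage

# Per-replica purge delay for entry state information.
ldapsearch -x -D "cn=Directory Manager" -W -h <IPA SERVER> -p 389 \
    -b "cn=config" "(objectclass=nsds5replica)" nsDS5ReplicaPurgeDelay
###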

Sorry for all these questions, I'm a bit of a noob here.

Don't hesitate to look at the docs:

https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/10/html/Administration_Guide/Managing_Replication.html

And our wiki has some great content:

http://www.port389.org

Mark

Best regards.

Bahan


On Wed, Dec 23, 2015 at 4:53 PM, bahan w <bahanw042014@gmail.com> wrote:
Hey Mark.

Thanks for your answer.

Just to be sure: you say this entry is regularly updated, but when I try to ldapsearch it, I cannot find it:
###
ldapsearch -x -D "cn=Directory Manager" -h <IPA SERVER> -p 389 -W -b "cn=repl keep alive 6,dc=mydomain"
###

Result:
###
# extended LDIF
#
# LDAPv3
# base <cn=repl keep alive 6,dc=mydomain> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# search result
search: 2
result: 32 No such object
matchedDN: dc=mydomain

# numResponses: 1
###

Maybe my ldapsearch is bad?

Best regards.

Bahan

On Wed, Dec 23, 2015 at 4:26 PM, Mark Reynolds <mareynol@redhat.com> wrote:


On 12/23/2015 10:09 AM, bahan w wrote:
Hello.

I'm using FreeIPA and I have 4 masters which replicate to each other.

On all the masters, I can see the following message from time to time:
###
NSMMReplicationPlugin - replication keep alive entry <cn=repl keep alive 6,dc=mydomain> already exists
###

I don't really understand this message. Could you explain it to me, please?
This message is harmless and can be ignored.  The replication plugin is just trying to add the "keep alive" entry, but it already exists (which is fine).  This ticket is what introduced the "keep alive" entry:

https://fedorahosted.org/389/ticket/48266

Basically, it is an entry that is periodically updated to keep replicas from becoming stale - a condition which can lead to replication permanently breaking and requiring a re-initialization.
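
If you ever want to keep an eye on replication health yourself, each agreement exposes status attributes you can poll.  For example (illustrative - adjust host, port, and credentials to your environment):

###
# Check the last update status and times reported by each agreement.
ldapsearch -x -D "cn=Directory Manager" -W -h <IPA SERVER> -p 389 \
    -b "cn=config" "(objectclass=nsds5replicationagreement)" \
    nsds5replicaLastUpdateStatus nsds5replicaLastUpdateStart \
    nsds5replicaLastUpdateEnd
###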

Is the message being logged too often?  Perhaps this message can be moved to the "replication" logging section, instead of being logged by default?  Could you open a new ticket to have that investigated, if you would like?

Mark

Best regards.

Bahan


--
389 users mailing list
389-users@lists.fedoraproject.org
http://lists.fedoraproject.org/admin/lists/389-users@lists.fedoraproject.org

