Hi Alberto,
I think I reproduced the same crash locally:
(gdb) where
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007f4137c13972 in __GI_abort () at abort.c:100
#2  0x00007f4137e6c241 in PR_Assert (s=0x7f4138437420 "(vs->sorted == NULL) || (vs->num < VALUESET_ARRAY_SORT_THRESHOLD) || ((vs->num >= VALUESET_ARRAY_SORT_THRESHOLD) && (vs->sorted[0] < vs->num))", file=0x7f4138437400 "ldap/servers/slapd/valueset.c", ln=471) at ../../.././nspr/pr/src/io/prlog.c:571
#3  0x00007f41384079ce in slapi_valueset_done (vs=0x7f4098016c18) at ldap/servers/slapd/valueset.c:471
#4  0x00007f41384085fb in valueset_array_purge (a=0x7f4098016be0, vs=0x7f4098016c18, csn=0x7f4098017570) at ldap/servers/slapd/valueset.c:804
#5  0x00007f4138408766 in valueset_purge (a=0x7f4098016be0, vs=0x7f4098016c18, csn=0x7f4098017570) at ldap/servers/slapd/valueset.c:834
#6  0x00007f41383483ce in attr_purge_state_information (entry=0x7f40980151b0, attr=0x7f4098016be0, csnUpTo=0x7f4098017570) at ldap/servers/slapd/attr.c:739
#7  0x00007f413836e410 in entry_purge_state_information (e=0x7f40980151b0, csnUpTo=0x7f4098017570) at ldap/servers/slapd/entrywsi.c:292
#8  0x00007f4134f8dedb in purge_entry_state_information (pb=0x7f4098000b60) at ldap/servers/plugins/replication/repl5_plugins.c:558
#9  0x00007f4134f8e283 in multimaster_bepreop_modify (pb=0x7f4098000b60) at ldap/servers/plugins/replication/repl5_plugins.c:700
#10 0x00007f4134f8dfe3 in multimaster_mmr_preop (pb=0x7f4098000b60, flags=451) at ldap/servers/plugins/replication/repl5_plugins.c:588
#11 0x00007f41383c12b5 in plugin_call_mmr_plugin_preop (pb=0x7f4098000b60, e=0x0, flags=451) at ldap/servers/slapd/plugin_mmr.c:39
#12 0x00007f4135094600 in ldbm_back_modify (pb=0x7f4098000b60) at ldap/servers/slapd/back-ldbm/ldbm_modify.c:635
#13 0x00007f41383a1e3f in op_shared_modify (pb=0x7f4098000b60, pw_change=0, old_pw=0x0) at ldap/servers/slapd/modify.c:1022
#14 0x00007f41383a0343 in do_modify (pb=0x7f4098000b60) at ldap/servers/slapd/modify.c:380
#15 0x0000000000418c2b in connection_dispatch_operation (conn=0x47eeb28, op=0x47a1750, pb=0x7f4098000b60) at ldap/servers/slapd/connection.c:624
#16 0x000000000041ad0b in connection_threadmain () at ldap/servers/slapd/connection.c:1753
#17 0x00007f4137e85869 in _pt_root (arg=0x47c4880) at ../../.././nspr/pr/src/pthreads/ptthread.c:201
#18 0x00007f4137e1a4c0 in start_thread (arg=<optimized out>) at pthread_create.c:479
#19 0x00007f4137ced133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
I will make a test case and open a ticket for this. The problem is likely missed by ASAN because it involves an uninitialized structure rather than a use-after-free, as it initially appeared to be.
Many thanks for your help on this and your continued investigation. It helped a lot.
Also, if you would like to produce an ASAN build, the procedure is described at
http://www.port389.org/docs/389ds/howto/howto-addresssanitizer.html
Best regards,
Thierry
On 5/8/20 2:26 PM, Alberto Viana wrote:
William,
It's supposed to be production, but since replication is not working I just left one 389 instance as the main server, so I can run any tests I want.
I have no idea how to do that; can you point me in the right direction?
Thanks
Alberto Viana
Is this a development/debug build? Do you have a
reproducer? It would be interesting to run this under
ASAN ...
> On 7 May 2020, at 22:31, Alberto Viana <albertocrj@gmail.com >
wrote:
>
> William,
>
> Here's:
> Assertion failure: (vs->sorted == NULL) || (vs->num < VALUESET_ARRAY_SORT_THRESHOLD) || ((vs->num >= VALUESET_ARRAY_SORT_THRESHOLD) && (vs->sorted[0] < vs->num)), at ldap/servers/slapd/valueset.c:471
> Thread 17 "ns-slapd" received signal SIGABRT, Aborted.
> [Switching to Thread 0x7fffbbfff700 (LWP 13431)]
> 0x00007ffff455399f in raise () from /lib64/libc.so.6
> (gdb) frame 3
> #3 0x00007ffff7b71627 in slapi_valueset_done (vs=0x7fffb0022aa8) at ldap/servers/slapd/valueset.c:471
> 471 PR_ASSERT((vs->sorted == NULL) || (vs->num < VALUESET_ARRAY_SORT_THRESHOLD) || ((vs->num >= VALUESET_ARRAY_SORT_THRESHOLD) && (vs->sorted[0] < vs->num)));
> (gdb) print vs->sorted@21
> $1 = {0x7fffb0023ad0, 0x7fffb0022b50, 0x4, 0x6c7e80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x7fffb0023c00, 0x7fffb00247c0, 0x0, 0x0, 0x0, 0x25, 0x664f7265626d656d, 0x0, 0x0, 0x115, 0x0, 0x0}
>
> Thanks
>
> Alberto Viana
>
> On Wed, May 6, 2020 at 11:38 PM William Brown
<wbrown@suse.de > wrote:
>
>
> > On 6 May 2020, at 22:40, Alberto Viana <albertocrj@gmail.com >
wrote:
> >
> > William,
> >
> > Here's:
> >
> > (gdb) frame 3
> > #3 0x00007ffff7b71627 in slapi_valueset_done (vs=0x7fffac022aa8) at ldap/servers/slapd/valueset.c:471
> > 471 PR_ASSERT((vs->sorted == NULL) || (vs->num < VALUESET_ARRAY_SORT_THRESHOLD) || ((vs->num >= VALUESET_ARRAY_SORT_THRESHOLD) && (vs->sorted[0] < vs->num)));
> > (gdb) print *vs->sorted@21
> > $1 = {18446744073709551615 <repeats 21 times>}
>
> Ahhh sorry, maybe it should be vs->sorted@21
(no *)
>
> >
> > Everything has been quite chaotic for me too.
> >
> > Thanks
> >
> > Alberto Viana
> >
> > On Tue, May 5, 2020 at 10:38 PM William
Brown <wbrown@suse.de >
wrote:
> > So reading these frames, it's likely that this is the assert condition failing:
> >
> > (vs->num >= VALUESET_ARRAY_SORT_THRESHOLD) && (vs->sorted[0] < vs->num)
> >
> > This is because vs->sorted exists, and vs->num >= threshold (10), so this indicates that vs->sorted[0] has a bad value: it is equal to or greater than vs->num, when it should be smaller.
> >
> > Is it still possible to see the content of the vs->sorted array? I think you could do:
> >
> > frame 3
> > print *vs->sorted@21
> >
> > Sorry about the delay in responding to this,
things have been hectic for me.
> >
> >
> >
> > > On 29 Apr 2020, at 23:40, Alberto Viana
<albertocrj@gmail.com >
wrote:
> > >
> > > William,
> > >
> > > Here's:
> > >
> > > Frame9:
> > > https://gist.github.com/albertocrj/87bf4a010bf2f7e1f97ef3ee72ee44df
> > >
> > > Frame7:
> > > https://gist.github.com/albertocrj/840f15e5df10cad0e2977cd030abdba4
> > >
> > > Frame6:
> > > https://gist.github.com/albertocrj/befb7144b86bc4af86b9a2e0be0293a1
> > >
> > > Thank you
> > >
> > > Alberto Viana
> > >
> > > On Wed, Apr 22, 2020 at 11:09 PM
William Brown <wbrown@suse.de >
wrote:
> > >
> > >
> > > > On 23 Apr 2020, at 06:59, Alberto
Viana <albertocrj@gmail.com >
wrote:
> > > >
> > > > Mark,
> > > >
> > > > On frame 9:
> > > >
> > > > It goes until p *mod->mod_bvalues[20]
> > > >
> > > > (gdb) p *mod->mod_bvalues[21]
> > > > Cannot access memory at address 0x0
> > > >
> > > > On frame 7:
> > > > It goes until p *replacevals[20]
> > > >
> > > > (gdb) p *replacevals[21]
> > > > Cannot access memory at address 0x0
> > >
> > > Yep, but we need to see all the outputs from 0 -> 20 and 0 -> 21 respectively :) So copy-paste the full output please! Thanks for your patience with this.
> > >
> > > >
> > > > On frame 6:
> > > > (gdb) frame 6
> > > > #6 0x00007ffff7ada6fa in entry_delete_present_values_wsi_multi_valued (e=0x7fff8401f500, type=0x7fff84012780 "memberOf", vals=0x0, csn=0x7fff967fb340, urp=8, mod_op=2, replacevals=0x7fff840127c0) at ldap/servers/slapd/entrywsi.c:777
> > > > 777 valueset_purge(a, &a->a_present_values, csn);
> > > > (gdb) print *a
> > > > $278 = {a_type = 0x7fff84022b30 "memberOf", a_present_values = {num = 21, max = 32, sorted = 0x7fff84023ad0, va = 0x7fff84022b50}, a_flags = 4, a_plugin = 0x6c7e80, a_deleted_values = {num = 0, max = 0, sorted = 0x0, va = 0x0}, a_listtofree = 0x0, a_next = 0x7fff84023c00, a_deletioncsn = 0x7fff840247c0, a_mr_eq_plugin = 0x0, a_mr_ord_plugin = 0x0, a_mr_sub_plugin = 0x0}
> > > > (gdb) print *a->a_present_values
> > > > Structure has no component named operator*.
> > > > (gdb) print *a->a_present_values.va[0]
> > > >
> > > >
> > > > Thanks,
> > > >
> > > > Alberto Viana
> > > >
> > > > On Wed, Apr 22, 2020 at 4:57 PM
Mark Reynolds <mreynolds@redhat.com >
wrote:
> > > > Goto frame 9 and start printing
the mod:
> > > >
> > > > (gdb) p *mod
> > > >
> > > > (gdb) print i
> > > >
> > > > (gdb) p *mod->mod_bvalues[0]
> > > >
> > > > (gdb) p *mod->mod_bvalues[1]
> > > >
> > > > ... Keep doing that until it's NULL
> > > >
> > > >
> > > >
> > > > Then goto frame 7
> > > >
> > > > (gdb) p *replacevals
> > > >
> > > > (gdb) p *replacevals[0]
> > > >
> > > > (gdb) p *replacevals[1]
> > > >
> > > > --- Keep doing this until it's NULL
> > > >
> > > >
> > > >
> > > > Then goto frame 6
> > > >
> > > > (gdb) print *a
> > > >
> > > > (gdb) print *a->a_present_values
> > > >
> > > > (gdb) print *a->a_present_values.va[0]
> > > >
> > > > (gdb) print *a->a_present_values.va[1]
> > > >
> > > > --- Keep doing this until it's NULL
> > > >
> > > >
> > > >
> > > > Thanks,
> > > > Mark
> > > >
> > > >
> > > >
> > > > On 4/22/20 3:43 PM, Alberto Viana
wrote:
> > > >> Mark,
> > > >>
> > > >> Yes, I'm in frame 3, and no, I do not know what the modification is, sorry. I think that's what I'm trying to find out: why one of the servers always crashes if I enable replication between two 389 instances.
> > > >>
> > > >> Maybe reconfigure my replication, enable the debug log, and see where it stops?
> > > >>
> > > >> What else can I do?
> > > >>
> > > >> Thanks
> > > >>
> > > >>
> > > >> On Wed, Apr 22, 2020 at 4:34
PM Mark Reynolds <mreynolds@redhat.com >
wrote:
> > > >>
> > > >>
> > > >> On 4/22/20 3:27 PM, Alberto
Viana wrote:
> > > >>> Mark,
> > > >>>
> > > >>> Here's:
> > > >>> (gdb) where
> > > >>> #0 0x00007ffff455399f in raise () at /lib64/libc.so.6
> > > >>> #1 0x00007ffff453dcf5 in abort () at /lib64/libc.so.6
> > > >>> #2 0x00007ffff5430cd0 in PR_Assert () at /lib64/libnspr4.so
> > > >>> #3 0x00007ffff7b71627 in slapi_valueset_done (vs=0x7fff8c022aa8) at ldap/servers/slapd/valueset.c:471
> > > >>> #4 0x00007ffff7b72257 in valueset_array_purge (a=0x7fff8c022aa0, vs=0x7fff8c022aa8, csn=0x7fff977fd340) at ldap/servers/slapd/valueset.c:804
> > > >>> #5 0x00007ffff7b723c5 in valueset_purge (a=0x7fff8c022aa0, vs=0x7fff8c022aa8, csn=0x7fff977fd340) at ldap/servers/slapd/valueset.c:834
> > > >>> #6 0x00007ffff7ada6fa in entry_delete_present_values_wsi_multi_valued (e=0x7fff8c01f500, type=0x7fff8c012780 "memberOf", vals=0x0, csn=0x7fff977fd340, urp=8, mod_op=2, replacevals=0x7fff8c0127c0) at ldap/servers/slapd/entrywsi.c:777
> > > >>> #7 0x00007ffff7ada20d in entry_delete_present_values_wsi (e=0x7fff8c01f500, type=0x7fff8c012780 "memberOf", vals=0x0, csn=0x7fff977fd340, urp=8, mod_op=2, replacevals=0x7fff8c0127c0) at ldap/servers/slapd/entrywsi.c:623
> > > >>> #8 0x00007ffff7adaa7a in entry_replace_present_values_wsi (e=0x7fff8c01f500, type=0x7fff8c012780 "memberOf", vals=0x7fff8c0127c0, csn=0x7fff977fd340, urp=8) at ldap/servers/slapd/entrywsi.c:869
> > > >>> #9 0x00007ffff7adabf1 in entry_apply_mod_wsi (e=0x7fff8c01f500, mod=0x7fff8c0127a0, csn=0x7fff977fd340, urp=8) at ldap/servers/slapd/entrywsi.c:903
> > > >>> #10 0x00007ffff7adae52 in entry_apply_mods_wsi (e=0x7fff8c01f500, smods=0x7fff977fd3c0, csn=0x7fff8c012160, urp=8) at ldap/servers/slapd/entrywsi.c:973
> > > >>> #11 0x00007fffead19364 in modify_apply_check_expand (pb=0x7fff8c000b20, operation=0x814160, mods=0x7fff8c012750, e=0x7fff8c01bc90, ec=0x7fff8c01f480, postentry=0x7fff977fd4b0, ldap_result_code=0x7fff977fd434, ldap_result_message=0x7fff977fd4d8) at ldap/servers/slapd/back-ldbm/ldbm_modify.c:247
> > > >>> #12 0x00007fffead1a430 in ldbm_back_modify (pb=0x7fff8c000b20) at ldap/servers/slapd/back-ldbm/ldbm_modify.c:665
> > > >>> #13 0x00007ffff7b0cd60 in op_shared_modify (pb=0x7fff8c000b20, pw_change=0, old_pw=0x0) at ldap/servers/slapd/modify.c:1021
> > > >>> #14 0x00007ffff7b0b266 in do_modify (pb=0x7fff8c000b20) at ldap/servers/slapd/modify.c:380
> > > >>> #15 0x000000000041592c in connection_dispatch_operation (conn=0x150e220, op=0x814160, pb=0x7fff8c000b20) at ldap/servers/slapd/connection.c:638
> > > >>> #16 0x0000000000417a0e in connection_threadmain () at ldap/servers/slapd/connection.c:1767
> > > >>> #17 0x00007ffff544a568 in _pt_root () at /lib64/libnspr4.so
> > > >>> #18 0x00007ffff4de52de in start_thread () at /lib64/libpthread.so.0
> > > >>> #19 0x00007ffff46184b3 in clone () at /lib64/libc.so.6
> > > >>> (gdb) print *vs->sorted[0]
> > > >>> Cannot access memory at address 0xffffffffffffffff
> > > >> Are you in the
slapi_valueset_done frame?
> > > >>
> > > >> Do you know what the modify
operation is doing? It's something with memberOf, but
if you knew the exact operation, and what the entry
looks like prior to making that update, it would be
very useful to us.
> > > >>
> > > >> Thanks,
> > > >> Mark
> > > >>
> > > >>>
> > > >>> Thanks,
> > > >>>
> > > >>> Alberto Viana
> > > >>>
> > > >>> On Wed, Apr 22, 2020 at
4:22 PM Mark Reynolds <mreynolds@redhat.com >
wrote:
> > > >>>
> > > >>>
> > > >>> On 4/22/20 3:15 PM,
Alberto Viana wrote:
> > > >>>> William,
> > > >>>>
> > > >>>> Here's:
> > > >>>>
> > > >>>> (gdb) frame 3
> > > >>>> #3 0x00007ffff7b71627 in slapi_valueset_done (vs=0x7fff8c022aa8) at ldap/servers/slapd/valueset.c:471
> > > >>>> 471 PR_ASSERT((vs->sorted == NULL) || (vs->num < VALUESET_ARRAY_SORT_THRESHOLD) || ((vs->num >= VALUESET_ARRAY_SORT_THRESHOLD) && (vs->sorted[0] < vs->num)));
> > > >>>> (gdb) print *vs
> > > >>>> $1 = {num = 21, max = 32, sorted = 0x7fff8c023ad0, va = 0x7fff8c022b50}
> > > >>> Can you also do a "print
*vs->sorted[0]" ?
> > > >>>
> > > >>> And a "where" so we can
see the full stack trace that leads up to this
assertion?
> > > >>>
> > > >>> Thanks,
> > > >>>
> > > >>> Mark
> > > >>>
> > > >>>>
> > > >>>>
> > > >>>> Thanks,
> > > >>>>
> > > >>>> Alberto Viana
> > > >>>>
> > > >>>> On Sun, Apr 19, 2020
at 8:52 PM William Brown <wbrown@suse.de > wrote:
> > > >>>>
> > > >>>>
> > > >>>> > On 18 Apr 2020,
at 02:55, Alberto Viana <albertocrj@gmail.com >
wrote:
> > > >>>> >
> > > >>>> > Hi Guys,
> > > >>>> >
> > > >>>> > I built my own packages (from source); here's the info:
> > > >>>> >
389-ds-base-1.4.2.8-20200414gitfae920fc8.el8.x86_64.rpm
> > > >>>> >
389-ds-base-debuginfo-1.4.2.8-20200414gitfae920fc8.el8.x86_64.rpm
> > > >>>> >
python3-lib389-1.4.2.8-20200414gitfae920fc8.el8.noarch.rpm
> > > >>>> >
> > > >>>> > I'm running on CentOS 8.
> > > >>>> >
> > > >>>> > Here's what I
could debug:
> > > >>>> > https://gist.github.com/albertocrj/4d74732e4e357fbc5a27296199127a62
> > > >>>> > https://gist.github.com/albertocrj/94fc3521024c7a508f1726923936e476
> > > >>>>
> > > >>>> So that assert seems to be:
> > > >>>>
> > > >>>> PR_ASSERT((vs->sorted == NULL) || (vs->num < VALUESET_ARRAY_SORT_THRESHOLD) || ((vs->num >= VALUESET_ARRAY_SORT_THRESHOLD) && (vs->sorted[0] < vs->num)));
> > > >>>>
> > > >>>> But it's not clear
which condition here is being violated.
> > > >>>>
> > > >>>> It looks like you're catching this in GDB though, so can you go to:
> > > >>>>
> > > >>>> https://gist.github.com/albertocrj/4d74732e4e357fbc5a27296199127a62
> > > >>>>
> > > >>>> (gdb) frame 3
> > > >>>> (gdb) print *vs
> > > >>>>
> > > >>>> That would help to
work out what condition is incorrectly being asserted
here.
> > > >>>>
> > > >>>> Thanks!
> > > >>>>
> > > >>>>
> > > >>>> >
> > > >>>> >
> > > >>>> > Do you guys need
something else?
> > > >>>> >
> > > >>>> > Thanks
> > > >>>> >
> > > >>>> > Alberto Viana
> > > >>>> >
> > > >>>> >
> > > >>>> >
> > > >>>> >
> > > >>>> > On Tue, Mar 31,
2020 at 8:03 PM William Brown <wbrown@suse.de > wrote:
> > > >>>> >
> > > >>>> >
> > > >>>> > > On 1 Apr
2020, at 05:18, Mark Reynolds <mreynolds@redhat.com >
wrote:
> > > >>>> > >
> > > >>>> > >
> > > >>>> > > On 3/31/20
1:36 PM, Alberto Viana wrote:
> > > >>>> > >> Hey
Guys,
> > > >>>> > >>
> > > >>>> > >>
389-Directory/1.4.2.8
> > > >>>> > >>
> > > >>>> > >> 389
(master) <=> 389 (master)
> > > >>>> > >>
> > > >>>> > >> In a master-to-master replication, I started to see this error:
> > > >>>> > >>
> > > >>>> > >> [31/Mar/2020:17:30:52.610637150 +0000] - WARN - NSMMReplicationPlugin - replica_check_for_data_reload - Disorderly shutdown for replica dc=rnp,dc=local. Check if DB RUV needs to be updated
> > > >>>> >
> > > >>>> > Also, it might be good to remind us what distro you are on and which packages you installed 389-ds from.
> > > >>>> >
> > > >>>> > > Looks like the server is crashing, which is why you see these disorderly shutdown messages. Please get a core file and take some stack traces from it:
> > > >>>> > >
> > > >>>> > > http://www.port389.org/docs/389ds/FAQ/faq.html#sts=Debugging%C2%A0Crashes
> > > >>>> > >
> > > >>>> > > Can you please provide the complete logs? Also, you might want to try re-initializing the replication agreement instead of disabling and re-enabling replication (it's less painful and it "might" solve the issue).
> > > >>>> > >
> > > >>>> > > Mark
> > > >>>> > >
> > > >>>> > >>
> > > >>>> > >> Even after restarting the service the problem persists. I have to disable and re-enable replication (and the replication agreements) on both sides; it works for some time, and then the problem comes back.
> > > >>>> > >>
> > > >>>> > >> Any
tips?
> > > >>>> > >>
> > > >>>> > >> Thanks
> > > >>>> > >>
> > > >>>> > >> Alberto
Viana
> > > >>>> > >>
> > > >>>> > >>
> > > >>>> > >>
> > > >>>> > >> _______________________________________________
> > > >>>> > >> 389-users mailing list -- 389-users@lists.fedoraproject.org
> > > >>>> > >> To unsubscribe send an email to 389-users-leave@lists.fedoraproject.org
> > > >>>> > >> Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
> > > >>>> > >> List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
> > > >>>> > >> List Archives: https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
> > > >>>> > > --
> > > >>>> > > 389 Directory Server Development Team
> > > >>>> > >
> > > >>>> >
> > > >>>> > —
> > > >>>> > Sincerely,
> > > >>>> >
> > > >>>> > William Brown
> > > >>>> >
> > > >>>> > Senior Software Engineer, 389 Directory Server
> > > >>>> > SUSE Labs
> > > >>>> >
> > > >>>>
> > > >>>>
> > > >>>>
> > > >>>>
> > > >>>
> > > >>>
> > > >>>
> > > >>
> > > >
> > >
> > >
> >
> >
>