[PATCH] Ensure that fds are only added once in the sbus
by Stephen Gallagher
There were cases we were mishandling where D-BUS would try
to send us the same file descriptor when changing events. Instead
of blindly adding a new event (which caused an EEXIST error
in epoll), we now remove the old tevent_fd and add the new one.
Also, we were trying to be too fancy in the toggle function. It is
simpler to just remove or add watches as appropriate.
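The remove-then-add idea in the patch can be sketched as follows. This is not the actual sssd code (which manipulates tevent_fd objects from the D-BUS watch callbacks); all names here are invented for illustration, using a toy watch table in place of tevent/epoll:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_WATCHES 16

struct watch {
    int fd;        /* -1 marks an empty slot */
    int events;    /* requested event mask   */
};

static struct watch watches[MAX_WATCHES];

void watch_table_init(void)
{
    for (size_t i = 0; i < MAX_WATCHES; i++) watches[i].fd = -1;
}

/* Remove an existing watch for fd, if any. */
void watch_del(int fd)
{
    for (size_t i = 0; i < MAX_WATCHES; i++) {
        if (watches[i].fd == fd) watches[i].fd = -1;
    }
}

/* Remove-then-add: safe even when D-BUS hands us the same fd
 * again with a different event mask, which is the case that made
 * a blind add fail with EEXIST. */
int watch_add(int fd, int events)
{
    watch_del(fd);                       /* drop any old watch first */
    for (size_t i = 0; i < MAX_WATCHES; i++) {
        if (watches[i].fd == -1) {
            watches[i].fd = fd;
            watches[i].events = events;
            return 0;
        }
    }
    return -1;                           /* table full */
}

int watch_count(int fd)
{
    int n = 0;
    for (size_t i = 0; i < MAX_WATCHES; i++)
        if (watches[i].fd == fd) n++;
    return n;
}
```

Adding the same fd twice leaves exactly one watch with the latest event mask, which is the behavior the patch restores.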
--
Stephen Gallagher
RHCE 804006346421761
Looking to carve out IT costs?
www.redhat.com/carveoutcosts/
RFC: Enumerations and sssd drivers
by Simo Sorce
Hello all,
during this month I have been slowly working on a set of patches to move
from storing information in 2 different formats (legacy and
member/memberOf based) to just one format (member/memberOf based).
While doing this I had to address some problems that come up when you
want to store a group whose members have not been stored yet, and
similar cases.
All the while I have been testing doing enumerations against a server
that has more than 3k users and 3k groups.
This is a medium-sized database, and yet getting groups from scratch
(startup after deleting the .ldb database) could take up to a minute.
Granted, the operation is quite a bit faster if the database just needs
updating rather than creation from scratch, but I still think it's too much.
I've been thinking hard about how to address this problem and solve the
few hacks we have in the code when it comes to enumeration caching and
retrieval. We always said that enumerations are evil (and they are
indeed) and in fact we even have options that disable enumerations by
default. Yet I think this is not necessarily the right way to go.
I think we have 2 major problems in our current architecture when it
comes to enumerations.
1) We try to hit the wire when an enumeration request comes in from a
process and a (small) timeout for the previous enumeration has
expired.
2) We run the enumeration in a single transaction (and yes I have
recently introduced this), which means any other operation is blocked
until the enumeration is finished.
The problem I actually see is that user space apps may have to wait just
too much, and this *will* turn out to be a problem. Even if we give the
option to turn off enumeration, I think that for apps that need it the
penalty has become simply too big. Also, the way we perform updates
using this model is largely inefficient, as we basically perform a full
new search potentially every few minutes.
After some hard thinking I wrote down a few points I'd like the list's
opinion on. If people agree I will start acting on them.
* stop performing enumerations on demand, and perform them in background
if enumerations are activated (change the enumeration parameter from a
bitfield to a true/false boolean)
* perform a full user+group enumeration at startup (possibly using a
paged or vlv search)
* when possible request the modifyTimestamp attribute and save the
highest modifyTimestamp into the domain entry as originalMaxTimestamp
* on a tunable interval run a task that refreshes all users and groups
in the background using a search filter that includes
(modifyTimestamp>=$originalMaxTimestamp) (LDAP filters have no strict
">" matching rule, only ">=")
* still do a full refresh every X minutes/hours
* disable using a single huge transaction for enumerations (we might be
ok doing a transaction for each page search if pages are small,
otherwise just revert to the previous behavior of having a transaction
per stored object)
* Every time we update an entry we store the originalModifyTimestamp on
it as a copy of the remote modifyTimestamp. This allows us to know
whether we actually need to touch the cached entry at all upon refresh
(like when a getpwnam() is called), speeding up operations for entries
that need no refresh (we avoid data transformation and a write to ldb).
* Every time we run the general refresh task or we save a changed entry
we store a LastUpdatedTimestamp
* When the refresh task is completed successfully we run another cleanup
task that searches our LDB for any entry that has a too old
LastUpdatedTimestamp. If any are found, we double-check against the
remote server whether the entry still exists (and update it if it
does); otherwise we delete it.
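The filter construction for the incremental refresh described above can be sketched as follows. This is a hedged illustration, not the actual sssd code; function and attribute names mirror the proposal, and the use of ">=" is deliberate since LDAP has no strict ">" matching rule (re-fetching entries whose modifyTimestamp equals the saved maximum is harmless, because the originalModifyTimestamp check skips unchanged entries anyway):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build the background-refresh filter from the stored
 * originalMaxTimestamp: AND the base filter with a
 * modifyTimestamp>= clause. Returns 0 on success, -1 if the
 * buffer is too small. */
int build_refresh_filter(char *buf, size_t len,
                         const char *base_filter,
                         const char *max_timestamp)
{
    int n = snprintf(buf, len, "(&%s(modifyTimestamp>=%s))",
                     base_filter, max_timestamp);
    return (n < 0 || (size_t)n >= len) ? -1 : 0;
}
```

With a posixAccount base filter and a generalized-time timestamp, this yields a filter that matches only entries changed since the last enumeration.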
NOTE: this means that until the first background enumeration is
complete, a getent passwd or a getent group call may return incomplete
results. I think this is acceptable as it will really happen only at
startup, when the daemon caches are empty.
NOTE2: Of course the scheduled refreshes and cleanup tasks are always
rescheduled if we are offline or if a fatal error occurs during the
task.
NOTE3: I am proposing to change only the way enumerations are handled,
single user or group lookups will remain unchanged for now and will be
dealt with later if needed.
Please provide comments or questions if you think anything in the
proposed items is unclear or if you think I forgot to take some
important aspect into account.
Simo.
Looking for a bit of advice
by Dmitri Pal
Hi,
I am working on the next set of changes for ELAPI and have a question.
With some recent changes to the design the terminology looks like this:
target - the destination where an event is sent. A target consists of a
chain of sinks in priority (fail-over) order, so if one sink fails the
next sink in the chain will be used.
Imagine now two sinks in the chain. The first sink writes to a file on
NFS and the second writes to a local file. They differ in configuration,
but the code is the same for both sinks.
This brings us to the notion of a "provider". A provider is the
implementation (the code) that writes to a specific destination and can
be configured in different ways. A sink is then a provider plus a
configuration.
Also some of this can be read here:
https://fedorahosted.org/sssd/wiki/WikiPage/ELAPIInterface
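The terminology above could be sketched in C types roughly as follows. All names here are invented for illustration and are not the real ELAPI interface:

```c
#include <assert.h>
#include <stddef.h>

/* provider: the code that writes to a kind of destination.
 * sink: a provider plus its configuration.
 * target: a chain of sinks tried in priority (fail-over) order. */

struct provider {
    /* returns 0 on success, nonzero on failure */
    int (*submit)(const void *cfg, const char *event);
};

struct sink {
    const struct provider *prov;   /* shared code            */
    const void *cfg;               /* per-sink configuration */
};

struct target {
    const struct sink *chain;      /* sinks in priority order */
    size_t nsinks;
};

/* Try each sink in order until one accepts the event. */
int target_submit(const struct target *t, const char *event)
{
    for (size_t i = 0; i < t->nsinks; i++) {
        if (t->chain[i].prov->submit(t->chain[i].cfg, event) == 0)
            return 0;              /* delivered */
    }
    return -1;                     /* every sink failed */
}

/* Toy providers standing in for, e.g., the NFS and local file cases. */
int failing_submit(const void *cfg, const char *event)
{ (void)cfg; (void)event; return -1; }

int ok_submit(const void *cfg, const char *event)
{ (void)cfg; (void)event; return 0; }
```

The two file sinks from the NFS example would share one provider and differ only in the cfg pointer each carries.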
OK, so now that we are (hopefully) clear on the terminology, let me ask
the question.
Under the directory that contains the main ELAPI code (which is
sssd/common/elapi) I plan to have a subdirectory named "providers". Under
it there will be a directory for the "file" provider, the "syslog"
provider, the "stderr" provider and other providers. Generally the
providers should be implemented as shared libraries, and that will be
the case. However, per Simo's suggestion I plan to embed the three basic
providers ("file", "syslog", "stderr") into the ELAPI library itself.
This means that these providers have to be built before I build the rest
of the ELAPI library and its unit tests. Other (loadable) providers do
not cause any dependency on the core ELAPI code and can be built
separately. So how should I best organize the tree and build process?
I can:
a) Since the loadable libraries can be built independently, they
probably should live in another part of the tree, right? Then does it
make sense to have the hierarchy that I plan? Maybe I should just put
all the code for the basic providers into the elapi directory itself?
b) Another option is to keep this code structured as I plan but not have
Makefile.am and configure.ac in the subdirectories, and build the
standard providers from within the elapi makefile.
c) Add a Makefile.am and configure.ac to the "providers" subdirectory
and the individual provider directories, build an independent library
for each provider, and then use these libraries when I build elapi and
its unit tests.
The first option is the simplest, but then I will have a mixture of file
prefixes in one directory: the source files that constitute the core of
elapi start with "elapi_", while the providers start with the provider
name, like "file_provider.c", "stderr_provider.c", etc.
The second does not seem to be in line with the automake guidelines as
far as I understand them.
The third is the most complex.
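For what it's worth, option (b) is usually expressed in automake with a "convenience" library, which needs no configure.ac in the subdirectories. A hypothetical sketch (source file names are made up) of the elapi Makefile.am:

```makefile
# Hypothetical sketch: build the three embedded providers as a
# noinst (convenience) library and link it into libelapi, keeping
# the providers/ hierarchy but a single configure.ac at the top.
noinst_LTLIBRARIES = libelapi_providers.la
libelapi_providers_la_SOURCES = \
    providers/file/file_provider.c \
    providers/syslog/syslog_provider.c \
    providers/stderr/stderr_provider.c

lib_LTLIBRARIES = libelapi.la
libelapi_la_SOURCES = elapi_event.c elapi_log.c
libelapi_la_LIBADD = libelapi_providers.la
```

With subdir-objects (or SUBDIRS plus per-directory Makefile.am files, without configure.ac) the directory layout is preserved while the build stays in one configure script.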
So which way should I go?
--
Thank you,
Dmitri Pal
Engineering Manager IPA project,
Red Hat Inc.
-------------------------------
The Monitor returned an error [org.freedesktop.DBus.Error.NoReply]
by Mathias Gug
Hi,
While trying to start sssd 0.5.0 on an Ubuntu Karmic system, I've run
into the following error:
[sssd[dp]] [id_callback] (0): The Monitor returned an error
[org.freedesktop.DBus.Error.NoReply]
[sssd] [global_checks_handler] (0): Unknown child (2552) did exit
[sssd[nss]] [sss_dp_init] (0): Failed to connect to monitor services.
[sssd[nss]] [sss_process_init] (0): fatal error setting up backend connector
[sssd[be[files]]] [be_cli_init] (0): Failed to connect to monitor services.
[sssd[be[files]]] [be_process_init] (0): fatal error setting up server bus
[sssd[be[files]]] [main] (0): Could not initialize backend [5]
[sssd[pam]] [sss_dp_init] (0): Failed to connect to monitor services.
[sssd[pam]] [sss_process_init] (0): fatal error setting up backend connector
[sssd] [global_checks_handler] (0): Unknown child (2553) did exit
[sssd] [global_checks_handler] (0): Unknown child (2554) did exit
[sssd] [global_checks_handler] (0): Unknown child (2555) did exit
[sssd[nss]] [sss_dp_init] (0): Failed to connect to monitor services.
[sssd[nss]] [sss_process_init] (0): fatal error setting up backend connector
[sssd[pam]] [sss_dp_init] (0): Failed to connect to monitor services.
[sssd[pam]] [sss_process_init] (0): fatal error setting up backend connector
[sssd[be[files]]] [be_cli_init] (0): Failed to connect to monitor services.
[sssd[be[files]]] [be_process_init] (0): fatal error setting up server bus
[sssd[be[files]]] [main] (0): Could not initialize backend [5]
[sssd[nss]] [sss_dp_init] (0): Failed to connect to monitor services.
[sssd[nss]] [sss_process_init] (0): fatal error setting up backend connector
[sssd[pam]] [sss_dp_init] (0): Failed to connect to monitor services.
[sssd[pam]] [sss_process_init] (0): fatal error setting up backend connector
[...]
dbus is installed and running:
$ ps -ef | grep dbus
106 2382 1 0 21:27 ? 00:00:00 /bin/dbus-daemon --system
The sssd configuration is the example configuration using the
LOCAL domain (/etc/sssd/sssd.conf):
[services]
description = Local Service Configuration
activeServices = nss, dp, pam
# Number of times services should attempt to reconnect in the
# event of a Data Provider crash or restart before they give up
reconnection_retries = 3
[services/nss]
description = NSS Responder Configuration
# the following prevents sssd from searching for the root user/group in
# all domains (you can add here a comma-separated list of system accounts
# that are always going to be /etc/passwd users, or that you want to filter out)
filterGroups = root
filterUsers = root
[services/dp]
description = Data Provider Configuration
[services/pam]
description = PAM Responder Configuration
[services/monitor]
description = Service Monitor Configuration
#if a backend is particularly slow you can raise this timeout here
sbusTimeout = 30
[domains]
description = Domains served by SSSD
domains = LOCAL
[domains/LOCAL]
description = LOCAL migration domain
enumerate = 3
minId = 500
magicPrivateGroups = FALSE
legacy = TRUE
provider = files
--
Mathias Gug
Ubuntu Developer http://www.ubuntu.com
Announcing the release of SSSD 0.5.0
by Stephen Gallagher
The SSSD Development team is proud to announce the immediate release of
SSSD 0.5.0. It can be found at https://fedorahosted.org/sssd/
The list of changes includes (but is not limited to):
* The addition of the Kerberos 5 authentication provider
* The addition of the native LDAP identity provider
* Support for building on Debian and Ubuntu
* Improved caching mechanism to reduce server load
* Includes the fix released for CVE-2009-2410 in SSSD 0.4.1
and many other bugfixes.
- --
Stephen Gallagher
RHCE 804006346421761
[PATCHES] fix various minor issues
by Simo Sorce
Series of patches I made while working toward eliminating the
legacy options this summer.
0001 - Catch a bad buffer sent by glibc.
I reproduced this a couple of times. It was almost certainly due to bad
input sent from the server, but I wasn't able to pinpoint the root
cause. Meanwhile, catch the bad buffer so at least we do not segfault
apps.
0002 - Add debug statements.
Found the need to instrument sysdb with some more debug statements to
catch some bugs I was working on.
0003 - Relax memberof constraints.
Instead of completely failing an operation if a member does not actually
exist, simply skip it. This makes the code "lossy" but allows for a
much simpler two-pass storage technique than having to find a way to
track down dependencies and store groups according to them (impossible
for circular dependencies anyway).
0004 - Do not fail enums
If a single store failed for any reason, then the enumeration would stop
completely. Reworked the code to simply skip the bad element and keep
going.
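The pattern shared by patches 0003 and 0004 (skip the bad element, keep the operation going) can be sketched as follows. This is an illustration, not the patch itself; store_one() is a stand-in for storing a group member or an enumerated entry:

```c
#include <assert.h>
#include <stddef.h>

/* Store every item, skipping (rather than aborting on) the ones
 * whose individual store fails. Reports how many were stored. */
int store_all(const int *items, size_t n,
              int (*store_one)(int item), size_t *stored)
{
    *stored = 0;
    for (size_t i = 0; i < n; i++) {
        if (store_one(items[i]) != 0) {
            /* bad element: skip it instead of failing the run */
            continue;
        }
        (*stored)++;
    }
    return 0;   /* the enumeration itself always completes */
}

/* Sample callback: pretend negative items are bad input. */
int store_nonneg(int item)
{
    return item < 0 ? -1 : 0;
}
```

One bad element no longer stops the whole enumeration; it is merely not counted.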
Simo.
--
Simo Sorce * Red Hat, Inc * New York
[PATCH] use stored upn if available
by Sumit Bose
Hi,
this is the last patch in the series adding basic support for AD as
a server. With this patch the Kerberos backend will use the user
principal name provided by the server to get the TGT. To make the
client-side Kerberos libraries happy, the realm part is always made
upper case.
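The upper-casing the patch describes amounts to the following. This is a minimal sketch of the transformation, not the actual patch (which works on the sysdb-stored UPN); the function name is invented:

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Upper-case everything after the last '@' of a UPN such as
 * "user@ad.example.com", in place, so the realm part matches what
 * client-side Kerberos libraries expect. */
void upn_uppercase_realm(char *upn)
{
    char *at = strrchr(upn, '@');
    if (at == NULL) {
        return;                      /* no realm part to fix */
    }
    for (char *p = at + 1; *p != '\0'; p++) {
        *p = (char)toupper((unsigned char)*p);
    }
}
```

The user part is left untouched, since only the realm is case-sensitive to the Kerberos libraries.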
bye,
Sumit
[PATCH] Disallow all legacy operations outside domains
by Jakub Hrozek
One of my previous patches disallowed adding users and groups outside
known domains, but I forgot to disallow modifying, deleting, etc.
Fixes: ticket #114