On Fri, 2012-03-16 at 14:25 +0100, Jan Zelený wrote:
> On Thu, 2012-01-12 at 11:49 -0500, Simo Sorce wrote:
> > On Tue, 2012-01-10 at 14:33 -0500, Simo Sorce wrote:
> > > On Tue, 2012-01-10 at 10:15 -0500, Simo Sorce wrote:
> > > > > Sure, we can talk about it. I'm looking at it from the users'
> > > > > perspective: I think they would generally expect (and be alright
> > > > > with) the fast cache being emptied on service restart. Since we
> > > > > still have the not-quite-as-fast persistent LDB cache, I think the
> > > > > gain isn't worth the user confusion.
> > > >
> > > > Ok, I think I can understand this. It will also simplify the code I
> > > > guess, so I'll change the patch.
> > >
> > > Attached new set of patches.
> > >
> > > I changed the init code to always create a new cache file on restart;
> > > incidentally, this also got rid of the stat() call, so that problem
> > > has been put to rest permanently :)
> > >
> > > I also changed all functions to use errno_t as the return type in
> > > their declarations when they return an errno code.
> > >
> > > I think all the suggestions seen in the thread so far have been
> > > implemented as requested at this point, except for the part where I
> > > should implement and use new options, as that required additional
> > > code instead of mere refactoring. I will add those features in a
> > > later rebase.
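
Regarding the errno_t return types above, the convention looks roughly
like this (a minimal sketch; it assumes sssd's usual "typedef int
errno_t" and EOK == 0 from util.h, and example_store is a made-up name):

#include <errno.h>

/* Assumed sssd-style definitions; util.h carries the real ones. */
typedef int errno_t;
#define EOK 0

/* Returning errno_t documents that the value is an errno code,
 * not a boolean or a count. */
static errno_t example_store(void *ptr)
{
    if (ptr == NULL) {
        return EINVAL;
    }
    return EOK;
}
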
> >
> > New revision.
> >
> > I removed bootid from the cache file header; given that we recreate
> > the file every time sssd restarts, it was useless.
> >
> > I replaced it with a seed, and made the seed used in the hash table
> > randomly regenerated every time the table is recreated.
> >
> > This is used to avoid collision-based attacks [1].
> >
> > Simo.
> >
> > [1] https://lwn.net/Articles/474912/
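
To expand on the seed point: folding a per-table random seed into the
hash makes bucket placement unpredictable, so an attacker cannot
precompute colliding keys. A minimal sketch (illustrative names, not the
actual sssd code; it assumes srandom() was seeded at startup):

#include <stdint.h>
#include <stdlib.h>

static uint32_t table_seed;  /* regenerated whenever the table is rebuilt */

static void table_reinit(void)
{
    table_seed = (uint32_t)random();
}

static uint32_t hash_key(const char *key, uint32_t table_size)
{
    uint32_t h = table_seed;                 /* start from the seed ... */
    while (*key != '\0') {
        h = h * 33 + (unsigned char)*key++;  /* ... then mix in the key */
    }
    return h % table_size;
}
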
>
> I started looking at these last night. I have yet to complete a full
> review, but I'm attaching a set of patches rebased atop the current
> master so several of us can start reviewing.
>
> Ok, I finished the review. I am fine with these patches in general; I
> just have a few comments and/or questions:
>
> Why only one hash table for both the name and the uid key? I think using
> two would make things a little simpler and faster (of course, the
> tradeoff would be slightly higher memory consumption).
I wanted to limit memory consumption; it is especially important in
32-bit apps, which have much more limited virtual address space
(remember that this cache is mmapped in every application).
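
To illustrate the single-table approach (a hypothetical sketch, not the
actual code): both lookup paths hash the key's string form into the same
table, so only one table has to live in the mmapped file.

#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>

uint32_t hash_key(const char *key, uint32_t table_size); /* as sketched above */

static uint32_t slot_by_name(const char *name, uint32_t table_size)
{
    return hash_key(name, table_size);
}

static uint32_t slot_by_uid(uid_t uid, uint32_t table_size)
{
    char str[16];

    /* Stringify the uid so it can share the name hash table. */
    snprintf(str, sizeof(str), "%lu", (unsigned long)uid);
    return hash_key(str, table_size);
}
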
> responder/nss/nsssrv_mmap_cache.c:220
> - this condition is unnecessary because of the condition on line 208
Can you post the actual code snippet? If I understand what you mean, I
think I have the line numbers off by one... but I also do not see the
unnecessary condition. cur is advanced after the test on line 208/209,
so it is not testing the same thing, AFAICS.
> responder/nss/nsssrv_mmap_cache.c:242
> - the assignment is redundant
I guess it is; yeah, that's a leftover.
> responder/nss/nsssrv_mmap_cache.c:275
> - why this condition? I think the following while will cover it
slot is used to dereference data from the table; if it accesses beyond
the table it will cause a segfault.
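
In other words, something along these lines (a made-up sketch with
hypothetical names, just to show why the check must precede the
dereference):

#include <stdint.h>
#include <stddef.h>
#include <string.h>

struct entry {
    uint32_t next;      /* chain link; an out-of-range value terminates */
    char name[64];
};

static struct entry *chain_find(struct entry *table, uint32_t table_size,
                                uint32_t slot, const char *name)
{
    /* The slot < table_size check must run before table[slot] is
     * touched; a terminating or corrupted next value would otherwise
     * index past the mapping and segfault. */
    while (slot < table_size) {
        if (strcmp(table[slot].name, name) == 0) {
            return &table[slot];
        }
        slot = table[slot].next;
    }
    return NULL;
}
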
> responder/nss/nsssrv_mmap_cache.c:516
> - sizeof(uint32_t) -> sizeof(h.status)
Yes, it is better not to assume the type.
> responder/nss/nsssrv_mmap_cache.c:517
> - sizeof(uint32_t) -> sizeof(h.status)
Same as above, yes.
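
The pattern being suggested, for reference (a sketch; the struct and
field names here are stand-ins, not the real header layout):

#include <stdint.h>
#include <string.h>

struct header {
    uint32_t status;
    uint32_t len;
};

static void set_status(struct header *h, uint32_t val)
{
    /* sizeof(h->status) tracks the field's actual type; a literal
     * sizeof(uint32_t) would silently go stale if the field changed. */
    memcpy(&h->status, &val, sizeof(h->status));
}
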
> responder/nss/nsssrv_mmap_cache.c:561
> - shouldn't the umask better be 0133?
I don't think we care about explicitly removing the executable bits; the
default mode doesn't set them. What matters here is that the file is not
writable.
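
For illustration, the creation path looks roughly like this (a sketch
with a hypothetical path, not the actual code): 022 already strips the
write bits for group and other, which is the property that matters, and
the 0644 mode never sets the exec bits that 0133 would additionally mask.

#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>

static int create_cache_file(const char *path)
{
    mode_t old;
    int fd;

    old = umask(022);
    fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0644);
    umask(old);
    return fd;
}
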
> close vs. munmap
> Just out of curiosity: does it matter which should be first? I found two
> places in the code where the order is different:
> sss_client/nss_mc_common.c:146
> responder/nss/nsssrv_mmap_cache.c:717
No, it shouldn't matter. I changed the second one to do the munmap first.
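
A sketch of the teardown order for reference (illustrative function, not
the actual code): the kernel keeps a mapping alive independently of the
file descriptor, so either order is safe; munmap-first just keeps the
two call sites consistent.

#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

static void cache_teardown(void *addr, size_t len, int fd)
{
    if (addr != NULL) {
        munmap(addr, len);
    }
    if (fd >= 0) {
        close(fd);
    }
}
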
> That's all from my side. As I've said, my comments are rather minor
> things and suggestions, so in general I give these patches a green light.
Attached are new patches with some of the nitpicks fixed.
Simo.
--
Simo Sorce * Red Hat, Inc * New York