Off-line Entitlement Management
by Jeroen van Meeuwen
Hello there,
We (Kolab Systems AG) are very much interested in using Candlepin for
entitlement management. As one of their sysadmins, I'm exploring the
feasibility of an implementation.
Now, I'm not all too familiar with Candlepin, so please bear with me while I
attempt to both give a correct representation of Candlepin semantics and my
idea for off-line entitlement management.
Let me first state that Kolab Systems on its own was looking to include the
entitlement metadata in x509 extensions, and it seems to me so does Candlepin.
Including restricted access to updates using SSL server/client certificates,
I think the similarities between the two sets of functional requirements are
worth exploring ;-)
Either way, this topic is about how entitlements are verified. We've had a
short discussion on IRC on the subject, and given that conversation and the
docs on fh.o/candlepin, it seems to me that the entitlement verification is
based on some sort of - what one could arguably call - a phone-home mechanism
(with or without a satellite candlepin system in between provider and
customer).
Now, let's suppose a customer simply refuses to implement any kind of
phone-home mechanism and wants their systems completely offline - which to me
sounds like a very reasonable requirement.
The functionality that I proposed for consideration in a short IRC chat is
based on this existing requirement: what if we figure out a way to encrypt a
license file such that the application on the customer side can only decrypt
it to verify its entitlements, and the provider can create it using
information not available to the customer?
The catch, basically, is that if it can be taken off-line, it might also be
forged. However, since we're in the realm of Free Software (capital F), we're
not worried about this. We're worried about not being able to meet the
customer's implementation requirements ;-)
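For what it's worth, the forge-resistance described above maps naturally onto
a detached signature rather than encryption: the provider signs the
entitlement metadata with a private key that only it holds, and the
application verifies completely offline using a bundled public key. A minimal
sketch - the class name, license format, and RSA/SHA-256 choice are all my
own assumptions, not anything Candlepin does today:

```java
import java.nio.charset.StandardCharsets;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;
import java.util.Base64;

class OfflineLicense {
    // Provider side: sign the entitlement metadata with a private key that
    // never leaves the provider, so the customer cannot forge a license.
    static String sign(String licenseData, PrivateKey key) throws Exception {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initSign(key);
        sig.update(licenseData.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(sig.sign());
    }

    // Customer side: verify fully offline with the provider's public key,
    // which can ship inside the application itself.
    static boolean verify(String licenseData, String signature, PublicKey key)
            throws Exception {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initVerify(key);
        sig.update(licenseData.getBytes(StandardCharsets.UTF_8));
        return sig.verify(Base64.getDecoder().decode(signature));
    }
}
```

Tampering with the license data then makes verify() return false, which is
exactly the off-line check a customer-side application would need.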
I would appreciate some feedback! ;-)
Take care,
--
Jeroen van Meeuwen
Senior Engineer, Kolab Systems AG
e: vanmeeuwen@kolabsys.com
t: +316 42 801 403
w: http://www.kolabsys.com
pgp: 9342 BF08
unit tests
by jesus rodriguez
While working on ConsumerResourceTest recently, I noticed that
it requires the DB. I started thinking (yeah, imagine that) :)
that maybe we shouldn't require the DB for anything except
the curators.
Another thing I noticed is we have 3 ways of creating a
test consumer:
* direct ctor i.e. new Consumer(...)
* TestUtil.createConsumer
* consumerCurator.create(...)
I prefer the first one, or the second if it is used in many
places aside from this particular test class. The only
things that should call the curators are the curator tests;
the rest should use Mockito to mock them out.
The ConsumerResourceTest requires a lot of infrastructure,
like the database, curators, services etc. While guice makes
some of this easier, it makes for a slow and brittle test.
I replaced testCreateConsumer (and 2 others) with mocks:
http://pastie.org/1041394
The original test 'looked' simpler (http://pastie.org/1041401),
but it relied on a lot of stuff from the @Before method.
And the problem I ran into was that this test uses a
StubIdentityService, which does not behave like our
DefaultIdentityService adapter nor any other.
My proposal is that we make the resource unit tests
use mocks as much as possible to test the interaction.
And the only thing that should use the DB should be the
Curator tests.
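To make the interaction-testing idea concrete without pulling in Mockito,
here is a sketch with a hand-rolled fake. Every name below is an illustrative
stand-in, simplified well past the real ConsumerResource/ConsumerCurator API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plays the role of a curator interface; hypothetical, not Candlepin's API.
interface ConsumerStore {
    void create(String uuid, String name);
    String findName(String uuid);
}

// The class under test, reduced to one interaction.
class ConsumerResource {
    private final ConsumerStore store;
    ConsumerResource(ConsumerStore store) { this.store = store; }

    String createConsumer(String uuid, String name) {
        store.create(uuid, name);  // the interaction we want to verify
        return uuid;
    }
}

// In-memory fake: no database, no Guice wiring, no heavy @Before setup.
class FakeConsumerStore implements ConsumerStore {
    final Map<String, String> rows = new HashMap<>();
    final List<String> calls = new ArrayList<>();

    public void create(String uuid, String name) {
        calls.add("create:" + uuid);  // record the call so the test can assert on it
        rows.put(uuid, name);
    }

    public String findName(String uuid) {
        return rows.get(uuid);
    }
}
```

A test then asserts on the recorded calls rather than on database state; with
Mockito the fake collapses to mock(ConsumerStore.class) plus a verify().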
Thoughts? comments?
--
jesus m. rodriguez | jesusr@redhat.com
principal software engineer | irc: zeus
red hat systems management | 919.754.4413 (w)
rhce # 805008586930012 | 919.623.0080 (c)
+---------------------------------------------+
| "Those who cannot remember the past |
| are condemned to repeat it." |
| -- George Santayana |
+---------------------------------------------+
feedback requested for the data-export branch
by James Bowes
Hi all:
Please take some time to read over the code in the data-export branch.
Relevant code lives in the org.fedoraproject.candlepin.sync package.
The exporter entry point is on ConsumerResource. The importer entry
point is on OwnerResource.
To create and consume your own export, try the following:
* Make your 'upstream' candlepin
* run the deploy script with import dir and gendb set.
* register a consumer, note the consumer's uuid
* subscribe the consumer to some pools
* download your export with:
curl -k -u admin:admin https://localhost:8443/candlepin/consumers/$UUID/export > export.zip
* Turn your 'upstream' candlepin into your new 'downstream' candlepin
* run the deploy script without import dir, but with gendb (to empty
out the db)
* import your export with:
curl -u admin:admin -k -F export=@export.zip https://localhost:8443/candlepin/owners/1/import
Besides the new package, we also added an upstreamUuid field to Owner to
track consumer to owner mapping, and an upstreamPoolId to Subscription
to track the flow of upstream pool -> entitlement -> downstream
subscription. Why is Subscription not using upstreamEntitlementId? This way,
if you unentitle and then reentitle your candlepin, the entitlement
object will have changed, but syncing will ignore it. Devan may wish to
point you at some of the work he did for products, specifically on the
ProductCurator.
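For anyone who wants a feel for the round-trip without running the deploy
script, here is a rough sketch of writing and re-reading such an archive with
java.util.zip. The entry names and JSON here are made up, not the real export
format:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

class ExportSketch {
    // Exporter side: one JSON-ish entry per exported object, bundled into a
    // zip the way export.zip above bundles consumer data.
    static byte[] export(String consumerUuid, String entitlementsJson) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ZipOutputStream zip = new ZipOutputStream(bytes)) {
            zip.putNextEntry(new ZipEntry("export/consumer.json"));
            zip.write(("{\"uuid\":\"" + consumerUuid + "\"}").getBytes(StandardCharsets.UTF_8));
            zip.closeEntry();
            zip.putNextEntry(new ZipEntry("export/entitlements.json"));
            zip.write(entitlementsJson.getBytes(StandardCharsets.UTF_8));
            zip.closeEntry();
        }
        return bytes.toByteArray();
    }

    // Importer side: walk the archive and collect each entry so it can be
    // handed to the appropriate importer.
    static Map<String, String> readEntries(byte[] zipBytes) throws Exception {
        Map<String, String> entries = new LinkedHashMap<>();
        try (ZipInputStream zip = new ZipInputStream(new ByteArrayInputStream(zipBytes))) {
            ZipEntry entry;
            while ((entry = zip.getNextEntry()) != null) {
                entries.put(entry.getName(),
                    new String(zip.readAllBytes(), StandardCharsets.UTF_8));
            }
        }
        return entries;
    }
}
```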
Related doc - https://fedorahosted.org/candlepin/wiki/DataTransferFormat
Thanks!
-James
Couple of comments on the CRL Branch
by Bryan Kearney
- First time I have seen a Dto object in the code base. What caused us
to need it?
- Who calls the ctor on CertificateRevocationListTask? How do the
parameters get set?
- Reading the entire CRL into memory scares me :) If we assume a
serial number is a long of average length stored as a string, then it is
10 bytes. Add another 18 bytes for the time stamp as a string, and you
get 28 bytes per entry. A million entitlements would get us roughly
a 26 meg file. Is there a way to stream this? Perhaps read in each
record from the CRL and then process it into the new file?
- Related to this, let's turn down the logging :)
- The logic for deleting the old certificates is to delete those certs
that expired yesterday. Are there any rules about how long a cert needs
to remain in a CRL?
- Does the CRL need to be signed?
- Have we tried loading the CRL into apache or an ocspd daemon to see if
it will work?
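To make the streaming suggestion concrete, something along these lines
processes one record at a time instead of materializing the whole list. This
is only a sketch over a hypothetical one-record-per-line text form - a real
CRL is DER-encoded and would need a streaming ASN.1 reader rather than
readLine():

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.time.Instant;

class CrlStreamer {
    // Copies "serial,revokedAt" records from the old CRL to the new one,
    // dropping entries that have aged out, with only one record held in
    // memory at a time regardless of CRL size.
    static long rewrite(BufferedReader oldCrl, BufferedWriter newCrl,
                        Instant dropBefore) throws IOException {
        long kept = 0;
        String line;
        while ((line = oldCrl.readLine()) != null) {
            String[] parts = line.split(",", 2);
            Instant revokedAt = Instant.parse(parts[1]);
            if (revokedAt.isBefore(dropBefore)) {
                continue;  // entry aged out; omit from the new CRL
            }
            newCrl.write(line);
            newCrl.newLine();
            kept++;
        }
        newCrl.flush();
        return kept;
    }
}
```

The same shape works whatever the retention rule turns out to be: the filter
predicate changes, the constant memory footprint does not.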
-- bk
Circular dependency between Entitler and *Pool classes
by Ajay Kumar
Hello everyone,
Bryan wanted me to move the revocation code fragment present in
OwnerResource#refreshEntitlementPools:

    if (log.isInfoEnabled()) {
        log.info("No of entitlements to revoke: #" + toRevoke.size());
    }
    for (Entitlement e : toRevoke) {
        this.entitler.revokeEntitlement(e);
    }
into the PoolCurator#refreshPools so that it can be re-used by other classes.
However, moving this into PoolCurator requires that Entitler be
injected into the PoolCurator class.
But Entitler has the following constructor:

    protected Entitler(PoolCurator epCurator,
            EntitlementCurator entitlementCurator, ConsumerCurator consumerCurator,
            Enforcer enforcer, EntitlementCertServiceAdapter entCertAdapter,
            SubscriptionServiceAdapter subAdapter,
            EventFactory eventFactory,
            EventSink sink,
            PostEntHelper postEntHelper) {
        ....
    }
As you can see, Entitler requires almost all of the Curators, and this
prevents Entitler from being re-used within the various Curator classes
(the vicious circle).
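One common way to break such a cycle is to inject a lazy provider instead of
the object itself; Guice supports this directly with Provider&lt;T&gt;. Here is a
dependency-free sketch using java.util.function.Supplier, with class shapes
simplified well past the real signatures:

```java
import java.util.List;
import java.util.function.Supplier;

// Hypothetical, stripped-down stand-ins for the real classes.
class Entitlement {
    final String id;
    Entitlement(String id) { this.id = id; }
}

class Entitler {
    int revoked = 0;
    void revokeEntitlement(Entitlement e) { revoked++; }
}

class PoolCurator {
    // Depend on a Supplier<Entitler> rather than Entitler itself, so neither
    // constructor needs the other to exist at construction time.
    private final Supplier<Entitler> entitler;

    PoolCurator(Supplier<Entitler> entitler) { this.entitler = entitler; }

    void refreshPools(List<Entitlement> toRevoke) {
        for (Entitlement e : toRevoke) {
            entitler.get().revokeEntitlement(e);  // resolved lazily; cycle broken
        }
    }
}
```

With Guice, swapping the constructor parameter for Provider&lt;Entitler&gt; gets
the same laziness without any manual wiring.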
What are your views on this issue? How do you think it can be resolved?
--
with regards
Ajay Kumar.N.S
http://www.linkedin.com/in/ajaykumarns
What goes in Resources?
by Bryan Kearney
I have had a question about how fat resources should be. I am going to
pick on Ajay (sorry) because I was looking at his patch for the
LIFO/FIFO stuff (good job).
In his patch [1] he has added the revocation code into the Owner
Resource. Should this actually be at the curator level? There are other
examples of this (ConsumerResource.create adds the identity certificate)
as well.
My main question is, should the resource be "only" going from REST to
internal API? Or should there be business logic there?
-- bk
[1]
http://git.fedorahosted.org/git/?p=candlepin.git;a=commitdiff;h=9898dccdb...
Splitting Attributes (heads up)
by Devan Goodwin
The Attributes class is currently used on both Pools and Products, and
thus the Attribute object itself has no link back to the entity it
belongs to; the link is only in one direction, and everything gets lumped
into the one table. While working on Product update it became apparent
that it's quite easy to end up with orphaned attributes left in the
database and no way to clean them up. As such, I'm going to be
splitting Attribute into two child classes, ProductAttribute and
PoolAttribute, and adding a proper link to Pool/Product.
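For illustration, the split might look roughly like this; the field and class
shapes are guesses at the intent, not the actual entities or their Hibernate
mappings:

```java
import java.util.ArrayList;
import java.util.List;

// Shared name/value pair, no longer tied to one shared table by itself.
abstract class Attribute {
    final String name;
    final String value;
    Attribute(String name, String value) { this.name = name; this.value = value; }
}

class Product {
    final List<ProductAttribute> attributes = new ArrayList<>();
    void addAttribute(String name, String value) {
        attributes.add(new ProductAttribute(name, value, this));
    }
}

class ProductAttribute extends Attribute {
    final Product product;  // back-link: deleting the product can find its attributes
    ProductAttribute(String name, String value, Product product) {
        super(name, value);
        this.product = product;
    }
}

class Pool {
    final List<PoolAttribute> attributes = new ArrayList<>();
    void addAttribute(String name, String value) {
        attributes.add(new PoolAttribute(name, value, this));
    }
}

class PoolAttribute extends Attribute {
    final Pool pool;  // same back-link on the pool side
    PoolAttribute(String name, String value, Pool pool) {
        super(name, value);
        this.pool = pool;
    }
}
```

With each attribute owned by exactly one parent, the orphan problem becomes a
straightforward cascade-delete from Pool or Product.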
Also, it looks like we support hierarchical Attributes. I suspect
nothing is using this, and I'm inclined to drop it. I will do some more
poking around to make sure it's unused, but please speak up if you know
of some reason why we should keep it around.
Thanks,
Devan
--
Devan Goodwin <dgoodwin@rm-rf.ca>
http://rm-rf.ca
Are we using the certificates for data transfer?
by Bryan Kearney
Are we using the certificates to actually help with the data transfer?
If we are, is there an issue in that the certificate format should be
pluggable, but we are assuming an x.509 format for the data transfer?
-- bk