Cross-posting to the matahari list, since this discussion is more about the Matahari implementation than about the APIs themselves.
On 07/22/2010 10:01 AM, Perry Myers wrote:
On 07/22/2010 03:25 AM, Andrew Beekhof wrote:
----- "Daniel P. Berrange" <berrange@redhat.com> wrote:
On Wed, Jul 21, 2010 at 03:20:30PM -0400, Darryl L. Pierce wrote:
(response a long time coming)
On Wed, Jun 16, 2010 at 09:16:10AM +0100, Daniel P. Berrange wrote:
As I mention above though, the single daemon model is really bad when you come to write SELinux policy, because it makes it near impossible to lock down its access. I know we have this model with libvirtd already and this is one of its core flaws. One day libvirtd is going to have to be split into libvirtd-hypervisor, libvirtd-network, libvirtd-storage, etc, so that we can actually write a usable SELinux policy for it.
I exchanged emails with Dan Walsh on this and we can have one daemon spawn multiple processes, each running in a separate SELinux context with the appropriate restrictions on each. So we don't need to have multiple Matahari daemons running to have proper access control.
I don't see any difference from what I described, besides the change in terminology s/daemon/process/ ? Once you're spawning processes, rather than just creating threads you're in the model I proposed[1].
Regards, Daniel
[1] OK, in theory you can fork but not then execve a new binary, but that is a rather hairy approach because you end up with many of the disadvantages of both the threads & processes models.
Agreed. Fork without exec would mean we're still using a single SELinux context for everything.
I think the main alternatives here are:
a) Matahari fork+exec's one short-lived child per job, possibly passing job details via the command line or environment.
b) A Matahari daemon that fork+exec's N long-lived children at startup and gives them jobs via IPC
c) The Matahari init script kicks off N long-lived daemons at startup and they receive jobs via qpid
Option (a) is not particularly performance-friendly (and can have security implications), and (b) is re-implementing a stripped-down qpid broker.
So on balance, I think (c) is probably the preferable approach and, if I understood correctly, pretty much what Daniel was proposing.
Ack on using option C, though if we're doing this then having 'n' init scripts does make more sense (as danpb indicated). I had written up an email in parallel coming up with the same analysis, but you beat me to it. :)
What this means is that we have the following architecture:
- On Linux:
- matahari is an SRPM that has several binary RPM subpackages:
matahari-net, matahari-host, matahari-pkg, matahari-services, etc
- The matahari binary RPM might just be common libraries that are
shared between the various other RPMs/daemons
- Each other RPM has its own:
  - daemon that connects to the QMF bus as an agent
  - init script to start/stop that particular daemon
  - SELinux policy specific to what that portion of matahari needs to do
- On Windows:
- init script == Windows Service, so one Windows Service for each of
matahari-net, matahari-host, etc
- Each of these Windows Services is an independent QMF agent
- We don't want 'n' Windows installers, since Windows doesn't have nice RPM
and yum functionality. So where Linux will have a matahari-* package for each service, Windows will have a single 'matahari installer' that installs all of the services. Admins can deactivate services they don't want to use
Questions: Do we have each agent connect to a single qpid broker installed on the guest, so that all we need to do is connect that broker to an external broker outside of the guest? This would make configuration much easier: all of the various matahari daemons/services could use a static configuration for the local broker, and the only place with custom config would be the chaining of the local broker to the external brokers.
Can the single guest broker that we have connect to both an external network broker and also connect over virtio-serial to the host broker? Or do we need two guest brokers, one to handle network based QMF and one to handle virtio-serial based QMF?
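If we do chain a local guest broker to an external one, the chaining step might look roughly like the following, assuming the qpid-route tool from qpid-tools. The hostnames, port, exchange, and binding key are placeholders, and the exact arguments depend on the qpid version:

```sh
# On the guest: link the dedicated local broker to the external broker,
# then forward management (QMF) traffic across the link.
# Hostnames/ports and the binding key are illustrative only.
qpid-route link add external-broker:5672 localhost:5672
qpid-route route add external-broker:5672 localhost:5672 qpid.management mgmt.#
```

With this approach the agents themselves never need per-host configuration; only the broker-to-broker link does.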
Perry
cloud-apis mailing list cloud-apis@redhat.com https://www.redhat.com/mailman/listinfo/cloud-apis
On Thu, Jul 22, 2010 at 10:03:21AM -0400, Perry Myers wrote:
Questions: Do we have each Agent connect to a single qpid broker installed on the guest, and then all we need to do is connect that broker to an external broker outside of the guest? This would make configuration much easier, all of the various matahari daemons/services could connect to a static configuration for the local broker, and then the only place with custom config would be chaining the local broker to the other external brokers?
Can the single guest broker that we have connect to both an external network broker and also connect over virtio-serial to the host broker? Or do we need two guest brokers, one to handle network based QMF and one to handle virtio-serial based QMF?
If we use a broker inside the guest, I think we should have a dedicated broker by default; otherwise we'll significantly increase our install-time complexity. E.g., what happens if the broker already present is too old for our agent's needs? Reusing the same broker also complicates access controls & routing.
Fortunately, from a Matahari code POV this decision doesn't have any real impact. We'll just code against the qpid APIs, and which broker an agent connects to will just be a config param. So we don't have to worry about this decision too much until we get to writing the installer for Win32/RPM %post setup.
So I'd aim for a dedicated broker in the guest that talks over virtio to the host, and connect the guest agents to that. If we want to change this later to make guest agents talk directly to the host broker, it won't require any code changes, just config-parameter tweaks.
Regards, Daniel
matahari mailing list matahari@lists.fedorahosted.org