Hi,
I have a couple of questions. The first is that in the FC3 targeted policy, it appears that ldconfig cannot write to user_home_t directories. Why is this? It appears to be a restriction with no purpose, and some programs rely on it to work. In fact, I see from the archives that ldconfig not being able to write to or search certain directories has come up before.
The second question is what impact SELinux will have on third party installers. It seems from the nVidia thread that currently, if you copy files onto the system using "cp", this is the wrong way to do it and it will break people's SELinux setups. This surely cannot be correct: that'd break pretty much every third party installer (e.g. Loki Setup) out there!
If this is the case and this rather questionable decision is not reversed, is using "install" the correct way to go about things on *every* SELinux-enabled distro, or is that a Fedora-specific thing? It's a bit worrying how much Fedora SELinux seems to differ from upstream; is this something that will get better with time?
thanks -mike
Mike Hearn wrote:
Hi,
I have a couple of questions. The first is that in the FC3 targeted policy, it appears that ldconfig cannot write to user_home_t directories. Why is this? It appears to be a restriction with no purpose, and some programs rely on it to work. In fact, I see from the archives that ldconfig not being able to write to or search certain directories has come up before.
The second question is what impact SELinux will have on third party installers. It seems from the nVidia thread that currently, if you copy files onto the system using "cp", this is the wrong way to do it and it will break people's SELinux setups. This surely cannot be correct: that'd break pretty much every third party installer (e.g. Loki Setup) out there!
Yes, install and rpm are the only options right now. I'm not sure how dpkg works on Debian. Your other option is to use cp and then run restorecon.
The problem is similar to DAC, in that you have to specify the file context associated with the file, the same way you need to specify file permissions for Discretionary Access Control. In most cases the default behavior is that the file picks up the context of the directory that you are copying into, or the context of the file you are replacing. The problem is that sometimes files like shared libraries need a different file context (shlib_t) than the directory they are being copied to (lib_t). RPM and now install have the smarts to handle this; mv and cp do not, and it is arguable that they shouldn't. Imagine using cp/mv to copy a sensitive piece of data: if they changed the context without you knowing, they could allow the sensitive data to be exposed.
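The workflow described above can be sketched in a few lines. This is illustrative only: the library name and temp paths are made up, and the relabeling step is guarded so it is skipped on systems without restorecon (on a real SELinux system you would run this against /usr/lib, where file_contexts maps shared libraries to shlib_t).

```shell
# Sketch only: libfoo and the temp directory are stand-ins; on a real
# install the target would be /usr/lib or similar.
dir=$(mktemp -d)
echo 'not a real library' > "$dir/libfoo.so.1"
mkdir "$dir/lib"

# cp: the new file just picks up the default context of the target
# directory, which for a real library directory (lib_t) would be
# wrong for a DSO...
cp "$dir/libfoo.so.1" "$dir/lib/"

# ...so fix it up afterwards. restorecon consults file_contexts and
# resets the file to its configured type (shlib_t for a real /usr/lib
# path; against this temp dir it is just a harmless demonstration).
if command -v restorecon >/dev/null 2>&1; then
    restorecon "$dir/lib/libfoo.so.1"
fi

# install: on SELinux-patched coreutils it sets the right context as
# part of the copy, which is why it is the recommended tool here.
install "$dir/libfoo.so.1" "$dir/lib/libfoo.so.2"
```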
If this is the case and this rather questionable decision is not reversed, is using "install" the correct way to go about things on *every* SELinux-enabled distro, or is that a Fedora-specific thing? It's a bit worrying how much Fedora SELinux seems to differ from upstream; is this something that will get better with time?
What do you base this on? Fedora is where most of the SELinux development has been going on.
thanks -mike
-- fedora-selinux-list mailing list fedora-selinux-list@redhat.com http://www.redhat.com/mailman/listinfo/fedora-selinux-list
On Thu, 30 Dec 2004 22:52:02 -0500, Daniel J Walsh wrote:
The problem is that sometimes files like shared libraries need a different file context (shlib_t) than the directory they are being copied to (lib_t). RPM and now install have the smarts to handle this; mv and cp do not.
I see. What happens if you create a file in a lib_t directory using the standard POSIX APIs? I looked at the Loki setup sources and it doesn't use "cp" directly of course, it just opens files and copies them using a read/write loop.
What happens if a library is put in a directory that isn't lib_t, and the DSO is not marked as shlib_t? Does the linker refuse to link it? Or is it just that ldconfig cannot read it?
I have a game here that uses libraries marked as file_t, and it seems to work when using LD_LIBRARY_PATH, which makes me happier :)
Most third party programs do not rely on the linker cache anyway, so I suppose this is a good thing.
What do you base this on? Fedora is where most of the SELinux development has been going on.
Yes, I mean it's hard to find out how Fedora differs from Debian or Gentoo SELinux-wise. If I use "install", does this only work on Fedora? Or is this something that will eventually be merged into other distributions too? What about the pam_selinux module: is that used elsewhere, or on other distros must I also remember to use the SELinux su equivalent? (I forgot its name ...)
thanks -mike
Mike Hearn wrote:
On Thu, 30 Dec 2004 22:52:02 -0500, Daniel J Walsh wrote:
The problem is that sometimes files like shared libraries need a different file context (shlib_t) than the directory they are being copied to (lib_t). RPM and now install have the smarts to handle this; mv and cp do not.
I see. What happens if you create a file in a lib_t directory using the standard POSIX APIs? I looked at the Loki setup sources and it doesn't use "cp" directly of course, it just opens files and copies them using a read/write loop.
What happens if a library is put in a directory that isn't lib_t, and the DSO is not marked as shlib_t? Does the linker refuse to link it? Or is it just that ldconfig cannot read it?
The file will receive the context of the parent directory. The linker is probably running in unconfined_t, so it will not have any problem.
I have a game here that uses libraries marked as file_t, and it seems to work when using LD_LIBRARY_PATH, which makes me happier :)
Most third party programs do not rely on the linker cache anyway, so I suppose this is a good thing.
You should not have anything marked file_t unless it was created on a machine that was not running SELinux. This indicates that you need a relabel.
What do you base this on? Fedora is where most of the SELinux development has been going on.
Yes, I mean it's hard to find out how Fedora differs from Debian or Gentoo SELinux-wise. If I use "install", does this only work on Fedora? Or is this something that will eventually be merged into other distributions too?
Hopefully; good ideas usually get picked up by other distributions, though of course they might not think this is a good idea. :^) Of course, you could say that generally about differences between distributions.
What about the pam_selinux module: is that used elsewhere, or on other distros must I also remember to use the SELinux su equivalent? (I forgot its name ...)
I believe pam_selinux is being used elsewhere.
thanks -mike
On Mon, 03 Jan 2005 10:31:13 -0500, Daniel J Walsh wrote:
The file will receive the context of the parent directory. The linker is probably running in unconfined_t, so it will not have any problem.
ldconfig doesn't though. Hmm.
You should not have anything marked file_t unless it was created on a machine that was not running SELinux. This indicates that you need a relabel.
They're in my home directory. I did a "make relabel" when I enabled the targeted policy. Is that not enough?
Hopefully; good ideas usually get picked up by other distributions, though of course they might not think this is a good idea. :^)
Yeah this makes it rather hard for 3rd parties to track what's going on here. Why can this stuff not all be done upstream and just merged with Fedora at regular intervals?
Of course you could say that generally about differences between distributions.
I could, and I do. It's a major pain for all concerned.
thanks -mike
Mike Hearn wrote:
On Mon, 03 Jan 2005 10:31:13 -0500, Daniel J Walsh wrote:
The file will receive the context of the parent directory. The linker is probably running in unconfined_t, so it will not have any problem.
ldconfig doesn't though. Hmm.
ldconfig transitions to ldconfig_t and is only allowed to read certain files.
You should not have anything marked file_t unless it was created on a machine that was not running SELinux. This indicates that you need a relabel.
They're in my home directory. I did a "make relabel" when I enabled the targeted policy. Is that not enough?
relabel should have been enough; what kind of file system is your home directory on?
Hopefully; good ideas usually get picked up by other distributions, though of course they might not think this is a good idea. :^)
Yeah this makes it rather hard for 3rd parties to track what's going on here. Why can this stuff not all be done upstream and just merged with Fedora at regular intervals?
Because we have a chicken-and-egg problem. Upstream does not care about SELinux until people start to use it, so why would they put SELinux changes in if no one is using SELinux? Also, upstream does not always accept changes from the distros, so the distro is either forced to carry that patch or to drop the functionality.
Of course you could say that generally about differences between distributions.
I could, and I do. It's a major pain for all concerned.
thanks -mike
On Mon, 2005-01-03 at 11:08, Mike Hearn wrote:
Yeah this makes it rather hard for 3rd parties to track what's going on here. Why can this stuff not all be done upstream and just merged with Fedora at regular intervals?
Fedora Core is the de facto "upstream" as far as SELinux modifications to userland are concerned. Red Hat took over maintaining the SELinux userspace patches back in early 2003 when Dan Walsh ported them to the 2.6 SELinux API and started expanding them to more programs to provide better integration into the distribution. NSA is only maintaining the core SELinux code now, i.e. the SELinux kernel code and the core set of new SELinux userland packages (libsepol, libselinux, checkpolicy, policycoreutils, policy). Information about patched userland for other distros is at the selinux sourceforge site, http://selinux.sf.net. I'd expect that the SELinux userland patches will eventually go into the upstream packages (in cases where there is still an upstream maintainer), but that wasn't likely to happen before the Fedora integration.
On Thu, 2004-12-30 at 16:05, Mike Hearn wrote:
I have a couple of questions. The first is that in the FC3 targeted policy, it appears that ldconfig cannot write to user_home_t directories. Why is this? It appears to be a restriction with no purpose, and some programs rely on it to work. In fact, I see from the archives that ldconfig not being able to write to or search certain directories has come up before.
Principle of least privilege; only allow a program to do what it requires for its legitimate purpose. If it truly requires such access for legitimate purposes, then you can certainly propose adding those permissions, but be aware of potential ramifications, e.g. mis-use of permissions by the caller, corruption of ldconfig via untrustworthy input, etc.
The second question is what impact SELinux will have on third party installers. It seems from the nVidia thread that currently, if you copy files onto the system using "cp", this is the wrong way to do it and it will break people's SELinux setups. This surely cannot be correct: that'd break pretty much every third party installer (e.g. Loki Setup) out there!
cp only explicitly sets the security context if you pass one of the relevant options to it. Otherwise, it just follows the default behavior of creating the new file based on the domain of the creating process and the type of the parent directory (which falls back to inheriting the type on the parent directory in the absence of an explicit rule). Having cp automatically try to preserve or set context has been discussed previously, but is often not what you want and may often run into permissions problems for unprivileged callers.
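The distinction above can be shown in a couple of commands. This is a sketch with made-up file names; the context-preserving branch is only attempted when SELinux is actually enabled, since the option does nothing useful (and may fail) otherwise.

```shell
dir=$(mktemp -d)
echo data > "$dir/src"

# Default cp: no context handling at all. The kernel labels the new
# file from the creating process's domain and the parent directory's
# type (falling back to inheriting the directory's type).
cp "$dir/src" "$dir/copy-default"

# Explicit option on SELinux-aware GNU cp: keep the source file's
# context rather than taking the target location's default.
if command -v selinuxenabled >/dev/null 2>&1 && selinuxenabled; then
    cp --preserve=context "$dir/src" "$dir/copy-preserved"
fi
```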
On Thu, 2004-12-30 at 21:05 +0000, Mike Hearn wrote:
Hi,
I have a couple of questions. The first is that in the FC3 targeted policy, it appears that ldconfig cannot write to user_home_t directories. Why is this? It appears to be a restriction with no purpose, and some programs rely on it to work. In fact, I see from the archives that ldconfig not being able to write to or search certain directories has come up before.
Can you explain why you have ldconfig writing to a home directory? Are you doing the equivalent of "ldconfig > ~/install.log"?
The second question is what impact SELinux will have on third party installers. It seems from the nVidia thread that currently, if you copy files onto the system using "cp", this is the wrong way to do it and it will break people's SELinux setups. This surely cannot be correct: that'd break pretty much every third party installer (e.g. Loki Setup) out there!
My hope was that by modifying "install", we'd minimize the breakage. At least all of the Automake-generated packages should work.
I had a quick look at two other ISV installers, HelixPlayer and Mozilla. It appears neither uses "install"; they both do the equivalent of cp.
The route we may need to go down is having a relabeling daemon that monitors /usr/lib/, /usr/local/lib, etc. and fixes file contexts.
On Mon, 03 Jan 2005 12:49:05 -0500, Colin Walters wrote:
Can you explain why you have ldconfig writing to a home directory? Are you doing the equivalent of "ldconfig > ~/install.log"?
cp *.so.* ~/.local/lib
/sbin/ldconfig -n ~/.local/lib   # generate the symlinks
That's pseudocode lifted from autopackage, but other scripts and programs do similar stuff. There are other ways to generate the symlinks of course, it's a simple enough operation, but it seems unintuitive that this API would not work anymore for your home directory.
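For illustration, the symlink step itself needs nothing from ldconfig. A sketch with a made-up library name (a real installer would take the link name from the DSO's SONAME field rather than hardcoding it):

```shell
dir=$(mktemp -d)
touch "$dir/libfoo.so.1.0.0"        # stand-in for the real DSO

# ldconfig -n would create this SONAME link; ln -s does the same job
# without ever entering the ldconfig_t domain.
ln -sf libfoo.so.1.0.0 "$dir/libfoo.so.1"
```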
My hope was that by modifying "install", we'd minimize the breakage. At least all of the Automake-generated packages should work.
I had a quick look at two other ISV installers, HelixPlayer and Mozilla. It appears neither uses "install"; they both do the equivalent of cp.
The route we may need to go down is having a relabeling daemon that monitors /usr/lib/, /usr/local/lib, etc. and fixes file contexts.
Hmm, OK. I have to admit I never saw a third party installer that uses "install" so that is probably not enough.
A daemon that fixes contexts as files are added feels rather racy. I'm sure I'm missing a lot of context from previous discussions on the matter here, but perhaps the kernel should set the context automatically when a new file is created in certain directories that are marked as "autofix".
OK, so then we have the problem that the context-setting code is all done in userspace with regexes and other un-kernely things. Maybe there needs to be a framework in the kernel where a thread that does a file creation can be suspended while the kernel invokes a user-space program with the file path to figure out what the context should be. Once the process returns with the answer, the file can be atomically created/set and the original thread resumes.
thanks -mike
On Tue, 2005-01-04 at 10:21, Mike Hearn wrote:
A daemon that fixes contexts as files are added feels rather racy. I'm sure I'm missing a lot of context from previous discussions on the matter here, but perhaps the kernel should set the context automatically when a new file is created in certain directories that are marked as "autofix".
OK, so then we have the problem that the context-setting code is all done in userspace with regexes and other un-kernely things. Maybe there needs to be a framework in the kernel where a thread that does a file creation can be suspended while the kernel invokes a user-space program with the file path to figure out what the context should be. Once the process returns with the answer, the file can be atomically created/set and the original thread resumes.
To clarify, the file_contexts configuration is only really intended to initialize the security contexts for a filesystem at install-time. After that point, you shouldn't be setting file contexts based on pathnames, as they don't convey the desired information about the real security properties of the object. Instead, you want the file to be labeled based on the creating process domain and parent directory type (which is what the kernel does), and allow security-aware applications to further customize the context if necessary for finer-grained labeling (which is already supported via the libselinux API). Pathname-based security considered harmful.
Stephen Smalley wrote:
On Tue, 2005-01-04 at 10:21, Mike Hearn wrote:
A daemon that fixes contexts as files are added feels rather racy. I'm sure I'm missing a lot of context from previous discussions on the matter here, but perhaps the kernel should set the context automatically when a new file is created in certain directories that are marked as "autofix".
OK, so then we have the problem that the context-setting code is all done in userspace with regexes and other un-kernely things. Maybe there needs to be a framework in the kernel where a thread that does a file creation can be suspended while the kernel invokes a user-space program with the file path to figure out what the context should be. Once the process returns with the answer, the file can be atomically created/set and the original thread resumes.
To clarify, the file_contexts configuration is only really intended to initialize the security contexts for a filesystem at install-time. After that point, you shouldn't be setting file contexts based on pathnames, as they don't convey the desired information about the real security properties of the object. Instead, you want the file to be labeled based on the creating process domain and parent directory type (which is what the kernel does), and allow security-aware applications to further customize the context if necessary for finer-grained labeling (which is already supported via the libselinux API). Pathname-based security considered harmful.
But inode-based automagic labeling is gonna be needed, and the path-aliasing problems involved in accomplishing the same can be handled.
JMHO, policy still congealing.
73 de Jeff
On Tue, 04 Jan 2005 10:40:59 -0500, Stephen Smalley wrote:
To clarify, the file_contexts configuration is only really intended to initialize the security contexts for a filesystem at install-time.
OK, so what would Colin's proposed daemon actually do then? Is kernel-level context propagation enough, and if so, why does install have to be modified?
I'm a little confused now and feel I'm missing some key bit of understanding ...
On Tue, 2005-01-04 at 11:25, Mike Hearn wrote:
OK, so what would Colin's proposed daemon actually do then? Is kernel-level context propagation enough, and if so, why does install have to be modified?
I'm a little confused now and feel I'm missing some key bit of understanding ...
I'm not in favor of the daemon idea. "install" is akin to "rpm" in the sense of installing a file, so it may make sense to initialize its security context based on pathname at that time, because we have no real runtime knowledge of its security properties and have presumably checked its integrity in some manner prior to installation. But for normal day-to-day file copying, the kernel (or some daemon) has no way of knowing whether: a) the context of the original should be preserved (e.g. making a backup copy of /etc/shadow), b) the context of the target location should be used (e.g. copying a file from /home to /var/www to export it via apache), c) the context should factor in information about the copying process, reflecting its own confidentiality or integrity properties.
Hence, any "automagic" technique based on pathname is not suitable.
On Tue, 04 Jan 2005 11:25:31 -0500, Stephen Smalley wrote:
I'm not in favor of the daemon idea. "install" is akin to "rpm" in the sense of installing a file, so it may make sense to initialize its security context based on pathname at that time, because we have no real runtime knowledge of its security properties and have presumably checked its integrity in some manner prior to installation.
Alright. It seems to me then that files that are not copied in some SELinux-aware manner by an installer (i.e. new files created in /usr/lib or whatever) should just be subject to normal UNIX security, and SELinux should not control them. Supporting SELinux would then become a feature of newer installers, but older software would not break.
I have a feeling you can't selectively opt files out of SELinux like that though.
On Tue, 2005-01-04 at 11:25 -0500, Stephen Smalley wrote:
"install" is akin to "rpm" in the sense of installing a file, so it may make sense to initialize its security context based on pathname at that time, because we have no real runtime knowledge of its security properties and have presumably checked its integrity in some manner prior to installation. But for normal day-to-day file copying, the kernel (or some daemon) has no way of knowing whether:
Let's not have this devolve into the general file-copying problem, which consensus seems to be is insoluble.
Here we're talking about a very specific case of software installation to well-known directories such as /usr/local/bin and /usr/local/lib. In this case we can presume the caller is highly trusted; anything with write access to those directories has to be. What we want to happen is for the shared libraries to be labeled correctly.
I'm not in favor of the daemon idea.
Well, it's not beautiful. But we need some solution. Even if we got changes into Mozilla, Helixplayer, etc. to use a restorecon equivalent tomorrow, all of their existing tarballs would be broken, forever.
Actually, I just saw in CVS that Dan added the following permission:
allow ldconfig_t lib_t:file r_file_perms;
So essentially in the targeted policy only the targeted daemons will be unable to read shared libraries not installed by RPM. But for strict policy the above permission doesn't help; we'd need to grant it to everything which reads shlib_t.
One other option besides the daemon is to have ldconfig itself do an automatic restorecon. This is less efficient since it will do so for every shared library, but given that ldconfig has always been the magic command you run to make shared libraries work, it does seem somewhat of a logical place to solve this particular problem.
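Done by hand, the combination being proposed looks roughly like this. A sketch only: -n keeps ldconfig inside the one named directory, so no cache write or root access is needed, and the relabel step is skipped on systems without restorecon (a real run would target /usr/lib and friends).

```shell
dir=$(mktemp -d)
touch "$dir/libdemo.so.1"           # stand-in for a freshly copied DSO

# Regenerate symlinks for just this directory (warnings about the fake
# non-ELF file are discarded).
if command -v ldconfig >/dev/null 2>&1; then
    ldconfig -n "$dir" 2>/dev/null || true
fi

# The step being proposed for ldconfig itself: put the file contexts
# back to whatever file_contexts says they should be.
if command -v restorecon >/dev/null 2>&1; then
    restorecon -R "$dir"
fi
```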
Long term we can push 'install' at these ISVs, and maybe around FC5 or FC6 if we have enough success, say that that's the only supported way to install files to the system.
On Tue, 04 Jan 2005 13:01:01 -0500, Colin Walters wrote:
Well, it's not beautiful. But we need some solution. Even if we got changes into Mozilla, Helixplayer, etc. to use a restorecon equivalent tomorrow, all of their existing tarballs would be broken, forever.
Actually, I just saw in CVS that Dan added the following permission:
allow ldconfig_t lib_t:file r_file_perms;
Would that fix this?
Jan 4 19:07:22 littlegreen kernel: audit(1104865642.095:0): avc: denied { read } for pid=25822 exe=/sbin/ldconfig name=libiculx.so.26.2 dev=hdd2 ino=2212143 scontext=root:system_r:ldconfig_t tcontext=root:object_r:lib_t tclass=file
This is actually from doing an "apt-get install mono" on FC3 + apt + SELinux enabled so RPM was involved.
The result is that running mono fails:
[root@littlegreen tmp]# mono
mono: error while loading shared libraries: libmono.so.0: cannot open shared object file: No such file or directory
So essentially in the targeted policy only the targeted daemons will be unable to read shared libraries not installed by RPM. But for strict policy the above permission doesn't help; we'd need to grant it to everything which reads shlib_t.
That sounds a lot better :)
One other option besides the daemon is to have ldconfig itself do an automatic restorecon. This is less efficient since it will do so for every shared library, but given that ldconfig has always been the magic command you run to make shared libraries work, it does seem somewhat of a logical place to solve this particular problem.
Yes that would help although unfortunately some (broken?) RPMs don't run ldconfig, on the grounds that /usr/lib is always scanned by the linker regardless of what the cache says.
Long term we can push 'install' at these ISVs, and maybe around FC5 or FC6 if we have enough success, say that that's the only supported way to install files to the system.
I'm not keen on this line of thinking: it's the kind that means many of my Linux-native games and demos no longer run without lots of hacking about. Is the benefit of restricting 3rd party binaries that don't opt in worth the cost?
I tend to see SELinux as a tool to help enhance the security of programs that are explicitly interested in it, which goes hand in hand with a proper audit to flush out bad practice. Hopefully in future shipping policy with third party programs will become common. But I don't think it's wise to try and apply policy universally shot-gun style, especially not to legacy programs that don't expect it (which today, everything is).
thanks -mike
Mike Hearn wrote:
On Tue, 04 Jan 2005 13:01:01 -0500, Colin Walters wrote:
Well, it's not beautiful. But we need some solution. Even if we got changes into Mozilla, Helixplayer, etc. to use a restorecon equivalent tomorrow, all of their existing tarballs would be broken, forever.
Actually, I just saw in CVS that Dan added the following permission:
allow ldconfig_t lib_t:file r_file_perms;
Would that fix this?
Yes
Jan 4 19:07:22 littlegreen kernel: audit(1104865642.095:0): avc: denied { read } for pid=25822 exe=/sbin/ldconfig name=libiculx.so.26.2 dev=hdd2 ino=2212143 scontext=root:system_r:ldconfig_t tcontext=root:object_r:lib_t tclass=file
This is actually from doing an "apt-get install mono" on FC3 + apt + SELinux enabled so RPM was involved.
The result is that running mono fails:
[root@littlegreen tmp]# mono
mono: error while loading shared libraries: libmono.so.0: cannot open shared object file: No such file or directory
So essentially in the targeted policy only the targeted daemons will be unable to read shared libraries not installed by RPM. But for strict policy the above permission doesn't help; we'd need to grant it to everything which reads shlib_t.
That sounds a lot better :)
That is what we have with this change.
One other option besides the daemon is to have ldconfig itself do an automatic restorecon. This is less efficient since it will do so for every shared library, but given that ldconfig has always been the magic command you run to make shared libraries work, it does seem somewhat of a logical place to solve this particular problem.
An interesting idea...
Yes that would help although unfortunately some (broken?) RPMs don't run ldconfig, on the grounds that /usr/lib is always scanned by the linker regardless of what the cache says.
Long term we can push 'install' at these ISVs, and maybe around FC5 or FC6 if we have enough success, say that that's the only supported way to install files to the system.
I'm not keen on this line of thinking: it's the kind that means many of my Linux-native games and demos no longer run without lots of hacking about. Is the benefit of restricting 3rd party binaries that don't opt in worth the cost?
I tend to see SELinux as a tool to help enhance the security of programs that are explicitly interested in it, which goes hand in hand with a proper audit to flush out bad practice. Hopefully in future shipping policy with third party programs will become common. But I don't think it's wise to try and apply policy universally shot-gun style, especially not to legacy programs that don't expect it (which today, everything is).
thanks -mike
On Tue, 2005-01-04 at 19:43 +0000, Mike Hearn wrote:
Yes that would help although unfortunately some (broken?) RPMs don't run ldconfig, on the grounds that /usr/lib is always scanned by the linker regardless of what the cache says.
If it's installed via RPM it will be labeled automatically.
Long term we can push 'install' at these ISVs, and maybe around FC5 or FC6 if we have enough success, say that that's the only supported way to install files to the system.
I'm not keen on this line of thinking: it's the kind that means many of my Linux-native games and demos no longer run without lots of hacking about. Is the benefit of restricting 3rd party binaries that don't opt in worth the cost?
I don't expect you to do this hacking; I'd expect the vendor to do it.
I tend to see SELinux as a tool to help enhance the security of programs that are explicitly interested in it,
That's what the targeted policy does essentially. But SELinux is capable of a lot more than that; e.g. giving the ability to define a "webmaster" role with only the access necessary to administer Apache.
So it would be good to fix this problem in a generic way so it works in targeted and strict. If we can fix enough of these kinds of speedbumps, I feel that strict could be usable by a much wider range of people.
which goes hand in hand with a proper audit to flush out bad practice. Hopefully in future shipping policy with third party programs will become common.
Mmm. I think the interesting question isn't where the policy binary bits are stored (in individual .rpm packages versus one big blob in selinux-policy-targeted RPM), but who writes the source.
But I don't think it's wise to try and apply policy universally shot-gun style, especially not to legacy programs that don't expect it (which today, everything is).
I run strict policy (i.e. universally shot-gun style ;)) on my server, it works quite well.
On Tue, 04 Jan 2005 15:08:01 -0500, Colin Walters wrote:
I'm not keen on this line of thinking: it's the kind that means many of my Linux-native games and demos no longer run without lots of hacking about. Is the benefit of restricting 3rd party binaries that don't opt in worth the cost?
I don't expect you to do this hacking; I'd expect the vendor to do it.
That works when the vendor is around and keen to give you free bugfix updates, but often they aren't, e.g. Loki (or if your support period has expired).
I tend to see SELinux as a tool to help enhance the security of programs that are explicitly interested in it,
That's what the targeted policy does essentially. But SELinux is capable of a lot more than that; e.g. giving the ability to define a "webmaster" role with only the access necessary to administer Apache.
Yep, that stuff is very cool. It doesn't affect application compatibility though so I don't have to worry about it :)
So it would be good to fix this problem in a generic way so it works in targeted and strict. If we can fix enough of these kinds of speedbumps, I feel that strict could be usable by a much wider range of people.
Yes, that'd be good, although my understanding of strict is that programs without policy won't work, i.e. third party RPMs created before SELinux, games from Loki/GarageGames or whatever. Or at least won't work without a lot of tweaking.
Mmm. I think the interesting question isn't where the policy binary bits are stored (in individual .rpm packages versus one big blob in selinux-policy-targeted RPM), but who writes the source.
Right, by "shipping policy with third party programs" I meant they write their own policy. I seem to remember arguing about this with Russell before though :)
I run strict policy (i.e. universally shot-gun style ;)) on my server, it works quite well.
Sure but that's a server, which I guess is fairly typical web+mail+ssh+a few other things, right? When you only run a relatively small set of programs all provided by a central source it's a lot easier to do that. I want to see SELinux on desktops, which means working with all the random software the user has :)
thanks -mike
On Tue, 2005-01-04 at 22:51 +0000, Mike Hearn wrote:
On Tue, 04 Jan 2005 15:08:01 -0500, Colin Walters wrote:
I'm not keen on this line of thinking: it's the type of thinking that means many of my Linux-native games and demos no longer run without lots of hacking about. Is the benefit of restricting 3rd party binaries that don't opt in worth the cost?
I don't expect you to do this hacking; I'd expect the vendor to do it.
That works when the vendor is around and keen to give you free bugfix updates, but often they're not, eg Loki (or if your support period expired).
Ok. I don't think it would be a big deal to write this off in say two years, but that's just my opinion.
So it would be good to fix this problem in a generic way so it works in targeted and strict. If we can fix enough of these kinds of speedbumps, I feel that strict could be usable by a much wider range of people.
Yes that'd be good although my understanding of strict is that programs without policy won't work, ie third party RPMs created before SELinux,
The labeling happens at install time, so even RPMs created before SELinux will be labeled correctly, assuming that the files contained in the RPM are "typical" files such as shared libraries in /usr/lib or binaries in /usr/bin.
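To make the install-time labeling concrete: the policy ships a file_contexts spec mapping path patterns to types, and tools like rpm and restorecon consult it. A minimal sketch of querying that mapping with matchpathcon, which only exists where the libselinux utilities are installed; the library path is hypothetical:

```shell
# Ask the loaded policy which context a given path should carry.
# On a non-SELinux system we just report that the tool is missing.
if command -v matchpathcon >/dev/null 2>&1; then
    matchpathcon /usr/lib/libexample.so.1
else
    echo "matchpathcon not available on this system"
fi
```

On an SELinux-enabled Fedora box this would print something of the form system_u:object_r:shlib_t for a /usr/lib shared library, which is the mapping rpm applies when unpacking even pre-SELinux packages.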
games from Loki/GarageGames or whatever. Or at least won't work without a lot of tweaking.
Well, it depends. System daemons will definitely require policy to run. But regular user binaries run unmodified; e.g. very little in all of GNOME was changed.
Mmm. I think the interesting question isn't where the policy binary bits are stored (in individual .rpm packages versus one big blob in selinux-policy-targeted RPM), but who writes the source.
Right, by "shipping policy with third party programs" I meant they write their own policy. I seem to remember arguing about this with Russell before though :)
Yes, it's a whole other debate.
I run strict policy (i.e. universally shot-gun style ;)) on my server, it works quite well.
Sure but that's a server, which I guess is fairly typical web+mail+ssh+a few other things, right?
Right.
When you only run a relatively small set of programs all provided by a central source it's a lot easier to do that. I want to see SELinux on desktops, which means working with all the random software the user has :)
So do I, and I think we're actually a lot closer to that than some of the discussions here might make one think.
Strict policy should work very well today on something like a kiosk, where the set of software is fixed in advance, tested, and users are tightly restricted in what they can do. A typical knowledge worker desktop should for the most part just work. But a developer workstation (what most of us on this list use) is far more difficult. Particularly for Fedora developers, because we're changing the base OS itself.
Besides people doing various things with Apache, this shlib_t issue is probably the biggest problem we've seen, and I think we have a mostly acceptable workaround for now with Dan's latest policy.
On Tue, 04 Jan 2005 18:46:27 -0500, Colin Walters wrote:
Ok. I don't think it would be a big deal to write this off in say two years, but that's just my opinion.
It's a fair one. I'm coming from the perspective of competing with Windows, but there's a whole debate you could have here about the costs vs benefits of leaving people behind when introducing new technologies. Arguably we don't care that much about upgrade sales because it's free software :) so our value systems can be different. But should they be?
I don't know.
The labeling happens at install time, so even RPMs created before SELinux will be labeled correctly, assuming that the files contained in the RPM are "typical" files such as shared libraries in /usr/lib or binaries in /usr/bin.
Hm, OK. I should investigate why the Mono RPMs I got via apt the other day didn't work correctly in enforcing/targeted. That sounds like a bug.
Well, it depends. System daemons will definitely require policy to run. But regular user binaries run unmodified; e.g. very little in all of GNOME was changed.
OK, that sounds good.
When you only run a relatively small set of programs all provided by a central source it's a lot easier to do that. I want to see SELinux on desktops, which means working with all the random software the user has :)
So do I, and I think we're actually a lot closer to that than some of the discussions here might make one think.
Strict policy should work very well today on something like a kiosk, where the set of software is fixed in advance, tested, and users are tightly restricted in what they can do. A typical knowledge worker desktop should for the most part just work. But a developer workstation (what most of us on this list use) is far more difficult. Particularly for Fedora developers, because we're changing the base OS itself.
Besides people doing various things with Apache, this shlib_t issue is probably the biggest problem we've seen, and I think we have a mostly acceptable workaround for now with Dan's latest policy.
Long term I certainly hope strict can be the default but it's going to be a lot of work, and it's going to raise interesting and difficult questions about backwards compatibility.
That can wait for another day though. The ldconfig issue is fixed in CVS, third party installers are being looked at again and that means I'm happy :)
thanks -mike
On Tue, 2005-01-04 at 15:21 +0000, Mike Hearn wrote:
On Mon, 03 Jan 2005 12:49:05 -0500, Colin Walters wrote:
Can you explain why you have ldconfig writing to a home directory? Are you doing the equivalent of "ldconfig > ~/install.log"?
cp *.so.* ~/.local/lib
/sbin/ldconfig -n ~/.local/lib   # generate the symlinks
Hmm. This is actually something that should work in the strict policy, but not in targeted. The reason is that in targeted, we can't easily differentiate between the system and users. So in targeted, we transition to ldconfig_t, but in strict there should be no transition.
I can't think of any good ideas on a solution for this one at the moment. Can you file a bugzilla?
Hmm, OK. I have to admit I've never seen a third party installer that uses "install", so that is probably not enough.
Depends how you define third party, but I know what you mean.
A daemon that fixes contexts as files are added feels rather racy.
It's just as racy as prelink; less so, actually, because it doesn't change file content.
I'm sure I'm missing a lot of context from previous discussions on the matter here, but perhaps the kernel should set the context automatically when a new file is created in certain directories that are marked as "autofix".
What specific race conditions do you see that we can't solve in userspace?
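The userspace fix being debated here boils down to relabeling after the copy, the cp-then-restorecon approach mentioned earlier in the thread. A minimal sketch, using a hypothetical library name and scratch directories so nothing touches the real /usr/lib; restorecon is guarded so the sequence is a no-op on non-SELinux systems:

```shell
# Simulate what a third-party installer does, then fix up labels.
set -e
SRC=$(mktemp -d)
DEST=$(mktemp -d)
touch "$SRC/libexample.so.1"         # stand-in for the vendor's library
cp "$SRC/libexample.so.1" "$DEST/"   # plain cp: the file inherits the
                                     # type of the destination directory
if command -v restorecon >/dev/null 2>&1; then
    restorecon -Rv "$DEST"           # relabel per the policy's file_contexts
fi
ls "$DEST"
```

The race Mike describes below is exactly the window between the cp and the restorecon: anything that maps the library in that window sees the inherited (wrong) type.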
On Tue, 04 Jan 2005 15:21:07 -0500, Colin Walters wrote:
I can't think of any good ideas on a solution for this one at the moment. Can you file a bugzilla?
Done: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=144190
Depends how you define third party, but I know what you mean.
Well, Loki Setup, Mozilla installer, HelixPlayer, BitRock, Sun JVM scripts etc. Basically installers that are built once and shipped with the app so we can't really modify them later.
autopackage kind of sits in the middle. It's mostly separate from the .package file itself and is downloaded automatically on first run. So it's more like source tarballs because they can be modified after the fact by rerunning the autotools chain.
It's just as racy as prelink; less so, actually, because it doesn't change file content.
Prelink is just an optimisation; it can't actually stop apps from working. Whereas I think if libraries have the wrong context, stuff will break.
What specific race conditions do you see that we can't solve in userspace?
User installs an RPM; a post-install script runs a program contained in the payload that links against libraries also in the payload, and that fails because the libraries haven't been relabeled yet. I.e. there's a gap between the time the library becomes available to apps and the time at which the daemon gets around to fixing it.
I may be horribly misunderstanding what you're proposing though ...
thanks -mike
selinux@lists.fedoraproject.org