There have been a number of bugs reported in Red Hat bugzilla against X which have recently been tracked down to 3rd party video drivers being the culprit behind the problem the user was experiencing. In many of the cases however, it wasn't obvious that the 3rd party drivers were at fault, because the user was actually using the Red Hat supplied drivers, and not the 3rd party driver that they had previously installed.
Since I've wasted at least 6-8 hours in the last month diagnosing issues of this nature which have later turned out to be caused by proprietary drivers having been "installed" on the system, whether they were actually being *used* or not, I thought I should write a short, useful informational email on the topic to the lists to try to inform people of some pitfalls you may encounter if you even _install_ 3rd party video drivers.
Both ATI and Nvidia, and perhaps other 3rd party drivers out there, come in some form of tarball or equivalent from the particular vendor. Most users seem to favour the hardware vendor supplied drivers directly, rather than using more sanely packaged 3rd party packages that contain the same drivers. This is very unfortunate, because installing these 3rd party tarball drivers is very harmful to your clean OS installation.
Both ATI and Nvidia's proprietary video driver installation utilities replace the Red Hat supplied libGL library with their own libGL. Nvidia's driver installs a replacement libglx.a X server module, removing the Red Hat supplied X.Org module in the process. ATI's driver may or may not replace libglx.a with its own, I haven't checked (but if someone could confirm that, I'd appreciate knowing for certain).
Once you have either of these drivers installed on your system, you can no longer use DRI with any video card. So if you install the ATI fglrx driver, while you should, in theory at least, still be able to use the Red Hat supplied radeon driver, you may no longer be able to use DRI with the radeon driver, because ATI's driver has blown away critical files that come with the OS and are needed for proper operation.
If you install Nvidia's driver, and later decide to install an ATI card, and still have Nvidia's driver installed, bang - you will not be able to get Red Hat supplied DRI 3D acceleration to work. You must remove Nvidia's driver completely from your hard disk, and completely reinstall all of the xorg-x11 and mesa packages, and ensure they are all intact by using:
rpm -Va
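A minimal sketch of that recovery step, assuming an FC5-era system; the package globs (xorg-x11*, mesa*) follow the packages named above, and the reinstall itself is shown only as a comment since the exact command depends on your yum/rpm version:

```shell
# Hedged sketch: verify the X-related packages after removing a vendor driver.
# The reinstall is left as a comment; older yum has no "reinstall" command,
# so you would download the rpms and use "rpm -Uvh --replacepkgs" instead.
#   yum reinstall 'xorg-x11*' 'mesa*'
verify_x_packages() {
  if command -v rpm >/dev/null 2>&1; then
    # rpm -Va with glob arguments verifies only matching packages; if your
    # rpm does not accept globs here, list the package names explicitly.
    rpm -Va 'xorg-x11*' 'mesa*' 2>/dev/null
    echo "verification pass complete"
  else
    echo "rpm not available on this system"
  fi
}
verify_x_packages
```

Any line of verify output means a file differs from what the package shipped, and that package should be reinstalled.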
Another problem being reported by a few people is that they are unable to get DRI to work because mesa libGL is looking for the DRI drivers in the wrong directory. The claim is that mesa is looking for the DRI drivers in /usr/X11R6/lib/modules.
On a fresh OS install however, my findings are that mesa's libGL very much is not looking in /usr/X11R6 for its modules. It is looking in the proper location of /usr/lib/dri for the modules. Why then is it looking in the wrong place on some systems?
Answer: Because of fglrx having been installed. If you have had a previous OS release installed, and have installed ATI's fglrx driver from tarball, it has removed the OS supplied libGL et al. and apparently made backup copies of them. Now you do an OS upgrade, which works properly and installs everything in the right place. Then you uninstall ATI's fglrx with whatever script they supply, and now you try to run X, and get no DRI!
Well, since you don't have fglrx installed at all, it must be our OS at fault, right? Wrong. The uninstall script has put the OLD libGL it backed up (from FC4 or whatever) back on the system, overwriting the new FC5 supplied libGL in the process, and since ATI's fglrx driver is DRI based as well, it now looks for the DRI modules in the wrong place.
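One quick way to see whether the libGL on disk is still the distribution's copy is to ask rpm who owns it; a sketch, with the library paths assumed from the discussion above:

```shell
# Check which libGL files exist and whether rpm knows who owns them.
# An unowned /usr/lib/libGL.so.1 is a strong hint that a vendor installer
# (or its uninstall script) has replaced the distro copy.
identify_libgl() {
  for lib in /usr/lib/libGL.so.1 /usr/X11R6/lib/libGL.so.1; do
    if [ -e "$lib" ]; then
      echo "$lib exists"
      if command -v rpm >/dev/null 2>&1; then
        rpm -qf "$lib" 2>/dev/null || echo "$lib is not owned by any package"
      fi
    fi
  done
  echo "libGL scan complete"
}
identify_libgl
```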
Conclusions: If you are going to use any 3rd party proprietary drivers, please do yourself and everyone else a huge favour, and at least get your drivers from reputable 3rd party rpm package repositories such as livna.org which packages both the nvidia and ati proprietary drivers in rpm packages which install the drivers sanely without overwriting Red Hat/Fedora supplied files. These 3rd party packages install the files in alternative locations, and configure the X server et al. appropriately so that everything works. Since they do not blow away OS supplied files, you can use the OS supplied drivers still by reconfiguring xorg.conf. Also, if you decide to uninstall the 3rd party drivers via rpm, they just go away and cause no further harm to the system. So PLEASE USE THIRD PARTY RPM PACKAGES if you _must_ use 3rd party drivers. It helps create world peace.
If you choose to install ATI or Nvidia tarball/whatever drivers directly from ATI/Nvidia (or any other vendor for that matter), your system is 100% completely and totally unsupported. Even if you are using _our_ drivers, your 3rd party driver installation may have blown away our libGL, our libglx.a or any other files that have been supplied by our OS. As such, your system is not supported.
For those who encounter a bug of any kind whatsoever while using 3rd party video drivers, completely remove the 3rd party drivers from your system, and then perform a full "yum update" to ensure you have the latest Fedora Core supplied X packages installed. After doing this, do an "rpm -Va" of your whole system, in particular the xorg-x11-*, mesa-* and lib* packages. If there are any discrepancies found in any of the Fedora supplied packages, in particular in libGL, or the X server packages, remove them and reinstall them and reverify that the files installed on your system are the ones shipped by Fedora.
If you are able to reproduce the problem you are having after having performed these steps, and having ensured that you are neither using 3rd party drivers, nor even have them installed, then feel free to file a bug report in bugzilla.
By doing this small amount of pre-diagnosis of your own system if you are using 3rd party drivers, you will save yourself a lot of headaches, and will save other people, including developers such as myself from wasting endless hours trying to diagnose problems which turn out to be bogus. Hours which could have been spent fixing legitimate bugs that are present in bugzilla.
As an additional note - if anyone is using proprietary drivers and has any problems which they believe might actually be a bug in Xorg and not in their proprietary driver - file such bugs directly in X.Org bugzilla. X.Org has an nVidia (closed) component specifically for the proprietary driver, and Nvidia engineers get those bugs and will investigate them over time.
Anyhow, I hope this helps people understand at least some of the problems that can occur when you opt to use 3rd party drivers, presents some alternatives, and helps people diagnose problems which might be caused by having installed 3rd party drivers.
Thanks for reading. TTYL
P.S. Feel free to forward this email on to any other lists or people whom you think might benefit from it. Also, if anyone thinks this information would be useful to have on the Fedora Wiki or somewhere else, feel free to copy my email into a wiki page, or paraphrase, etc.
Speaking of DRI, I have an ATI 200M. Using clean install of xorg driver, I see this:
(WW) RADEON(0): Enabling DRM support
[...]
drmOpenDevice: node name is /dev/dri/card0
drmOpenDevice: open result is -1, (No such device or address)
[ repeat 255 times ... ]
(II) RADEON(0): [drm] drmOpen failed
(EE) RADEON(0): [dri] DRIScreenInit failed. Disabling DRI.
I think I mentioned this here before, and IIRC someone else said they have exactly the same result. Is this expected behavior?
/sbin/lsmod
Module                  Size  Used by
radeon                141665  0
drm                   117481  1 radeon
drm is loaded, but there are no /dev/dri/<anything>
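The failure above can be narrowed down with a trivial check for the device node the server tried to open; a sketch, using the path from the log:

```shell
# Report whether the DRM device node from the log exists. "missing" with
# the drm module loaded usually points at udev/device-node creation
# rather than the X driver itself.
check_dri_node() {
  node="/dev/dri/card0"
  if [ -e "$node" ]; then
    echo "present"
  else
    echo "missing"
  fi
}
check_dri_node
```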
Speaking of DRI, I have an ATI 200M. Using clean install of xorg driver, I see this:
(WW) RADEON(0): Enabling DRM support
[...]
drmOpenDevice: node name is /dev/dri/card0
drmOpenDevice: open result is -1, (No such device or address)
[ repeat 255 times ... ]
(II) RADEON(0): [drm] drmOpen failed
(EE) RADEON(0): [dri] DRIScreenInit failed. Disabling DRI.
I think I mentioned this here before, and IIRC someone else said they have exactly the same result. Is this expected behavior?
/sbin/lsmod
Module                  Size  Used by
radeon                141665  0
drm                   117481  1 radeon
drm is loaded, but there are no /dev/dri/<anything>
Same thing here with an ATI FireGL X1 (R300 chipset) on an Itanium workstation.
Mike A. Harris wrote:
Both ATI and Nvidia, and perhaps other 3rd party drivers out there, come in some form of tarball or equivalent from the particular vendor. Most users seem to favour the hardware vendor supplied drivers directly, rather than using more sanely packaged 3rd party packages that contain the same drivers. This is very unfortunate, because installing these 3rd party tarball drivers is very harmful to your clean OS installation. Both ATI and Nvidia's proprietary video driver installation utilities replace the Red Hat supplied libGL library with their own libGL. Nvidia's driver installs a replacement libglx.a X server module, removing the Red Hat supplied X.Org module in the process. ATI's driver may or may not replace libglx.a with its own, I haven't checked (but if someone could confirm that, I'd appreciate knowing for certain).
What if we uninstall the official nVidia tarball and install the Livna nvidia package, do we get things back in order? Or does it take some extra manual work? I'm speaking about libglx.a and such. I mean, it would be insane if we had to reinstall the whole system because of that. Thanks.
On 2/23/06, Igor Jagec igorm5@vip.hr wrote:
What if we uninstall the official nVidia tarball and install the Livna nvidia package, do we get things back in order? Or does it take some extra manual work? I'm speaking about libglx.a and such. I mean, it would be insane if we had to reinstall the whole system because of that. Thanks.
Like Mike Harris said,
For those who encounter a bug of any kind whatsoever while using 3rd party video drivers, completely remove the 3rd party drivers from your system, and then perform a full "yum update" to ensure you have the latest Fedora Core supplied X packages installed. After doing this, do an "rpm -Va" of your whole system, in particular the xorg-x11-*, mesa-* and lib* packages. If there are any discrepancies found in any of the Fedora supplied packages, in particular in libGL, or the X server packages, remove them and reinstall them and reverify that the files installed on your system are the ones shipped by Fedora.
Essentially, after installing livna's packages instead, you would need to do an rpm -Va on various packages to ensure that the files hadn't been replaced (If they have, reinstall those packages).
n0dalus.
On Thursday 23 February 2006 01:10am, n0dalus wrote:
On 2/23/06, Igor Jagec igorm5@vip.hr wrote:
[snip]
Essentially, after installing livna's packages instead, you would need to do an rpm -Va on various packages to ensure that the files hadn't been replaced (If they have, reinstall those packages).
Not to nit-pick, but running rpm -Va will check *all* packages; that's what the -a switch does. Same for rpm -qa listing all packages. Of course, you can also do:
rpm -qa 'kernel*'
to see all packages that begin with "kernel". If you want to verify only certain packages with rpm, run:
rpm -V package1 package2 package3
BTW: rpm -Va will take a long time to run.
Lamont R. Peterson wrote:
On Thursday 23 February 2006 01:10am, n0dalus wrote:
On 2/23/06, Igor Jagec igorm5@vip.hr wrote:
[snip]
Essentially, after installing livna's packages instead, you would need to do an rpm -Va on various packages to ensure that the files hadn't been replaced (If they have, reinstall those packages).
Not to nit-pick, but running rpm -Va will check *all* packages; that's what the -a switch does. Same for rpm -qa listing all packages. Of course, you can also do:
rpm -qa 'kernel*'
to see all packages that begin with "kernel". If you want to verify only certain packages with rpm, run:
rpm -V package1 package2 package3
BTW: rpm -Va will take a long time to run.
Users who are technically inclined enough to optimize the rpm verify to a subset of all packages that just encompass all X packages, are of course free to do so. :o)
rpm -Va gets them all, however, without needing a big explanation or a list of all of the relevant packages.
On Friday 24 February 2006 03:26am, Mike A. Harris wrote:
Lamont R. Peterson wrote:
On Thursday 23 February 2006 01:10am, n0dalus wrote:
On 2/23/06, Igor Jagec igorm5@vip.hr wrote:
[snip]
Essentially, after installing livna's packages instead, you would need to do an rpm -Va on various packages to ensure that the files hadn't been replaced (If they have, reinstall those packages).
Not to nit-pick, but running rpm -Va will check *all* packages; that's what the -a switch does. Same for rpm -qa listing all packages. Of course, you can also do:
rpm -qa 'kernel*'
to see all packages that begin with "kernel". If you want to verify only certain packages with rpm, run:
rpm -V package1 package2 package3
BTW: rpm -Va will take a long time to run.
Users who are technically inclined enough to optimize the rpm verify to a subset of all packages that just encompass all X packages, are of course free to do so. :o)
rpm -Va gets them all, however, without needing a big explanation or a list of all of the relevant packages.
Yes, you are right about that.
Of course, we're talking about 2 minutes versus (potentially) 90+ minutes to run through it. Yes, I know, those numbers depend on a lot of variables. I have to work with a lot of systems that have everything installed on them. rpm -Va takes a long time to run on those boxes.
On 2/23/06, Igor Jagec igorm5@vip.hr wrote:
What if we uninstall the official nVidia tarball and install the Livna nvidia package, do we get things back in order? Or does it take some extra manual work?
If you ever install from the nvidia tarball, you have to reinstall the Core mesa packages to make sure the Core mesa libraries are put back as expected. The livna packages prevent problems; they do not fix the problems caused by tarball-based installs. The livna packages by design do not touch the library files owned by Core packages, so they cannot fix problems associated with vendor tarball installs overwriting those libraries.
-jef
Jeff Spaleta wrote:
On 2/23/06, Igor Jagec igorm5@vip.hr wrote:
What if we uninstall the official nVidia tarball and install the Livna nvidia package, do we get things back in order? Or does it take some extra manual work?
If you ever install from the nvidia tarball, you have to reinstall the Core mesa packages to make sure the Core mesa libraries are put back as expected. The livna packages prevent problems; they do not fix the problems caused by tarball-based installs. The livna packages by design do not touch the library files owned by Core packages, so they cannot fix problems associated with vendor tarball installs overwriting those libraries.
I didn't know that until now, thanks. I recently switched to the nVidia tarball during a stability issue with TV out in full screen, thinking I'd solve the problem that way, which I didn't :-/ I was too lazy afterwards to bring the Livna package back. I usually recompile the Livna nvidia-glx srpm package. But the funny thing is that I've never seen an official nvidia-glx package for RHEL. That can be an issue if the original nVidia tarballs cause problems, given that people are paying for that product (RHEL). I know that I can rebuild the Livna srpm package on RHEL, but I've never seen an official recommendation or anything of the sort for installing the nVidia proprietary driver on RHEL.
Igor Jagec wrote:
Mike A. Harris wrote:
Both ATI and Nvidia, and perhaps other 3rd party drivers out there, come in some form of tarball or equivalent from the particular vendor. Most users seem to favour the hardware vendor supplied drivers directly, rather than using more sanely packaged 3rd party packages that contain the same drivers. This is very unfortunate, because installing these 3rd party tarball drivers is very harmful to your clean OS installation. Both ATI and Nvidia's proprietary video driver installation utilities replace the Red Hat supplied libGL library with their own libGL. Nvidia's driver installs a replacement libglx.a X server module, removing the Red Hat supplied X.Org module in the process. ATI's driver may or may not replace libglx.a with its own, I haven't checked (but if someone could confirm that, I'd appreciate knowing for certain).
What if we uninstall the official nVidia tarball and install Livna nvidia package, do we back things in order? Or it takes some extra manual work? Speaking about libglx.a and stuff. I mean, there it would be insane if we had to reinstall the whole system because of that. Thanks.
Using the livna ati/nvidia rpms is probably the cleanest method currently of installing and using the proprietary drivers. Having said that, you can still have various problems.
Here are some off the top of my head:
1) Using the proprietary driver, then switching to the OSS driver by editing xorg.conf and changing from "nvidia" to "nv". This will switch 2D drivers, but unless you comment out the config file changes to ModulePath, Nvidia's proprietary modules such as libglx.a will still get loaded, and will complain in the X server log file. Not sure if this actually causes any real problems or not, but as soon as I see it in a log file, the bug report becomes blacklisted.
2) Using the proprietary driver (any vendor) and switching to an OSS driver, without doing a complete full reboot of the system, and ensuring their kernel modules do not ever load, will mean the OSS driver is using the hardware in whichever state the proprietary driver happened to leave it in. This can cause problems if the hardware is not in the state the OSS driver assumes it to be in. Likewise, once a proprietary kernel module has been loaded, it _owns_ the kernel. Even if you remove it with rmmod, it has been in the kernel's memory space and could have done absolutely anything while it was there. The proprietary kernel modules for video drivers do _insane_ crazy stuff in the name of performance, which really screws with the kernel's innards.
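One observable trace of this is the kernel's taint flag; a small sketch, assuming a 2.6-era kernel that exposes it under /proc (the proprietary-module bit is set the first time a non-GPL module loads, and it stays set until reboot):

```shell
# Print the kernel taint flags; a nonzero value means (among other causes)
# a non-GPL module has been loaded since boot, and rmmod will not clear it.
check_taint() {
  if [ -r /proc/sys/kernel/tainted ]; then
    cat /proc/sys/kernel/tainted
  else
    echo "taint flag not available"
  fi
}
check_taint
```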
For that reason, if you're using the livna rpms, and switching from proprietary to OSS modules, the best thing to do probably is:
1) Disable any kernel modules from loading if they load at init time by initscripts or similar, or from modprobe.conf.
2) Make a backup copy of xorg.conf in case you want to use it again later, or reference it for anything.
3) Run "system-config-display --reconfig" which will generate a completely fresh config file. DO NOT EDIT THIS FILE. Test it first, and if there are custom tweaks you wish to make, make them _ONLY_ after you have tested the stock xorg.conf that system-config-display generates.
4) Reboot the computer, or power it right off and back on. This is critically important to ensure the hardware is reset to factory power-on defaults. A lot of people refuse to reboot, or are strongly against the idea of doing so, with the idea that Linux never needs to be rebooted. While Linux generally doesn't need to be rebooted, there are sometimes very good reasons to do so, and this is one of those cases. Don't fight this step in a childish quest for higher uptime or some other trivial reason, as you may end up pulling your hair out trying to troubleshoot problems that rebooting would cure instantly. Focus on solving the real problem, and just hit the switch.
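The four steps above can be sketched as a script. This is a dry run that only echoes each command, since the exact modprobe.conf edit and config paths vary by release; review each step before running anything for real:

```shell
# Dry-run sketch of the switch-back procedure; every command is echoed,
# not executed. Paths and module names are assumptions for an FC5-era box.
run() { echo "would run: $*"; }

# 1) stop the proprietary kernel modules from loading (edit syntax is a
#    guess; check initscripts and /etc/modprobe.conf on your system)
run sed -i '/nvidia\|fglrx/d' /etc/modprobe.conf
# 2) keep a copy of the old config for later reference
run cp /etc/X11/xorg.conf /etc/X11/xorg.conf.proprietary-backup
# 3) generate a completely fresh config; test it before any custom tweaks
run system-config-display --reconfig
# 4) full reboot so the hardware returns to power-on defaults
run shutdown -r now
```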
Hope this helps. TTYL
P.S. As always, feel free to add this info to the wiki, or paraphrase and polish, yada yada if desired.
Mike A. Harris wrote:
Using the livna ati/nvidia rpms is probably the cleanest method currently of installing and using the proprietary drivers. Having said that, you can still have various problems. Here are some off the top of my head:
...
Hope this helps.
Thanks Mike, it really helped. The funny thing is that an nVidia developer recommended that I use the nvidia tarball instead of the Livna package while debugging my TV out stability issue. Anyway, it (the nVidia tarball) didn't solve the problem, and I was too lazy to bring the Livna package back. The Livna package also didn't provide the nvidia script for debugging (they probably provide it now, I didn't check). Anyway, I'll follow your recommendations.
P.S. As always, feel free to add this info to the wiki, or paraphrase and polish, yada yada if desired.
Well, I'm sure someone who speaks English as their mother tongue will do that ;)
Mike A. Harris wrote:
Both ATI and Nvidia's proprietary video driver installation utilities replace the Red Hat supplied libGL library with their own libGL.
Could SELinux be used to prevent this and, more generally, disallow replacement of rpm-controlled files even by the root user ?
Davide Bolcioni
Davide Bolcioni wrote:
Could SELinux be used to prevent this and, more generally, disallow replacement of rpm-controlled files even by the root user ?
That would be incredibly annoying and is not where we want to go... It would complicate updates and installs and configuration and everything that is normal administration.
People should just learn from their "mistakes" and replace whatever rpm their operations may have corrupted.
I realize that some people need these rpms, and that's why mharris kindly suggests that these people use the third party repos.
Problem solved :o)
/Thomas
Davide Bolcioni wrote:
Could SELinux be used to prevent this and, more generally, disallow replacement of rpm-controlled files even by the root user ?
That would be incredibly annoying and is not where we want to go... It would complicate updates and installs and configuration and everything that is normal administration.
I disagree, I think this would improve the security of the distribution. I would not recommend making such changes to targeted policy, but it seems potentially valuable to strict.
Granting all powers to root is dangerous; we should be moving in the opposite direction, from coarse-grained security towards fine-grained security. That is, applications run as sysadm_t which don't need install (and relabeling) privileges shouldn't have them.
I see no reason why my accidental execution of a hostile script as sysadm_t should have the powers to take over my computer. I think strict policy has already been changed to run in an underprivileged role by default (staff_r) for root logins, so I'm not sure if more needs to be done...
2006/2/23, Ivan Gyurdiev ivg2@cornell.edu:
Davide Bolcioni wrote:
Could SELinux be used to prevent this and, more generally, disallow replacement of rpm-controlled files even by the root user ?
That would be incredibly annoying and is not where we want to go... It would complicate updates and installs and configuration and everything that is normal administration.
I disagree, I think this would improve the security of the distribution. I would not recommend making such changes to targeted policy, but it seems potentially valuable to strict.
Granting all powers to root is dangerous; we should be moving in the opposite direction, from coarse-grained security towards fine-grained security. That is, applications run as sysadm_t which don't need install (and relabeling) privileges shouldn't have them.
agreed.
I see no reason why my accidental execution of a hostile script as sysadm_t should have the powers to take over my computer. I think strict policy has already been changed to run in an underprivileged role by default (staff_r) for root logins, so I'm not sure if more needs to be done...
agreed
regards, Rudolf Kastl
my personal conclusion:
While there should be mechanisms to turn off the "rpm file protection", it would be nice to have it on by default, since users would stop wrecking their systems and reporting bogus bugs.
regards, Rudolf Kastl
-- fedora-devel-list mailing list fedora-devel-list@redhat.com https://www.redhat.com/mailman/listinfo/fedora-devel-list
Rudolf Kastl wrote:
my personal conclusion:
While there should be mechanisms to turn off the "rpm file protection", it would be nice to have it on by default, since users would stop wrecking their systems and reporting bogus bugs.
A boolean named "invalidate_rhel_support_contract", false by default, should fit the bill. This protection would not apply to configuration or data files, of course, only to files which have no need to be writable for ordinary execution.
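For illustration only, querying such a boolean would look like the following; "protect_rpm_files" here is an invented stand-in name (as is the one suggested above) and does not exist in any shipped policy:

```shell
# Hypothetical: query an rpm-file-protection boolean if one existed.
# The boolean name is invented; nothing like it ships in SELinux policy.
check_boolean() {
  if command -v getsebool >/dev/null 2>&1; then
    getsebool protect_rpm_files 2>/dev/null || echo "boolean not defined in this policy"
  else
    echo "SELinux tools not installed"
  fi
}
check_boolean
```

Flipping it persistently would then be a matter of `setsebool -P`, the same as for any other boolean.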
Regards, Davide Bolcioni
Thomas M Steenholdt wrote:
Davide Bolcioni wrote:
Could SELinux be used to prevent this and, more generally, disallow replacement of rpm-controlled files even by the root user ?
That would be incredibly annoying and is not where we want to go... It would complicate updates and installs and configuration and everything that is normal administration.
People should just learn from their "mistakes" and replace whatever rpm their operations may have corrupted.
I realize that some people need these rpms, and that's why mharris kindly suggests that these people use the third party repos.
Problem solved :o)
/Thomas
Won't help... people will ask on the nvidia forums and get a reply: "do setenforce 0 before installing the driver and setenforce 1 after it finishes"
2006/2/23, dragoran dragoran@feuerpokemon.de:
Thomas M Steenholdt wrote:
Davide Bolcioni wrote:
Could SELinux be used to prevent this and, more generally, disallow replacement of rpm-controlled files even by the root user ?
That would be incredibly annoying and is not where we want to go... It would complicate updates and installs and configuration and everything that is normal administration.
People should just learn from their "mistakes" and replace whatever rpm their operations may have corrupted.
I realize that some people need these rpms, and that's why mharris kindly suggests that these people use the third party repos.
Problem solved :o)
/Thomas
Won't help... people will ask on the nvidia forums and get a reply: "do setenforce 0 before installing the driver and setenforce 1 after it finishes"
That's definitely a worst case scenario ;)
regards, rudolf kastl
On 2/23/06, Rudolf Kastl che666@gmail.com wrote:
That's definitely a worst case scenario ;)
And sadly the most likely one, until there are some end-user oriented notifications from the system which explain what is going on and why, when an selinux related denial happens. Having to keep a running tail of /var/log/messages open and learning how to decipher the avc messages while using vendor installers is a hurdle an order of magnitude too large for normal home users who don't understand the underlying issues. And sadly, reaching out to other users tends to get you blanket "turn off" selinux answers. There is a steep learning curve associated with selinux denials, and unless the fedora system makes an attempt to point users to granular tools as the denials occur the re-education effort is going to be hamstrung.
-jef
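A minimal version of that "running tail" check, assuming the syslog path mentioned above (on systems running auditd, denials land in /var/log/audit/audit.log instead):

```shell
# Count AVC denial messages in syslog; run this after a vendor installer
# fails, to see whether SELinux was the reason.
scan_for_denials() {
  log="/var/log/messages"
  if [ -r "$log" ]; then
    grep -c 'avc: *denied' "$log"
  else
    echo "log not readable here"
  fi
}
scan_for_denials
```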
On Thursday 23 February 2006 07:34am, Jeff Spaleta wrote:
On 2/23/06, Rudolf Kastl che666@gmail.com wrote:
That's definitely a worst case scenario ;)
And sadly the most likely one, until there are some end-user oriented notifications from the system which explain what is going on and why, when an selinux related denial happens. Having to keep a running tail of /var/log/messages open and learning how to decipher the avc messages while using vendor installers is a hurdle an order of magnitude too large for normal home users who don't understand the underlying issues. And sadly, reaching out to other users tends to get you blanket "turn off" selinux answers. There is a steep learning curve associated with selinux denials, and unless the fedora system makes an attempt to point users to granular tools as the denials occur the re-education effort is going to be hamstrung.
By no means is this limited to home users. I would say that the *vast* majority of corporate admins just turn off SELinux. The story behind how and why they learned to do that to begin with varies only in the details. It's almost always, "I had problems installing X or doing Y, and I found a document on the Internet that said that SELinux was in the way and didn't work right anyway and was too complicated and didn't do me any good, and that I couldn't learn enough about it to even understand what was happening, let alone deal with it, in less than a month and ... well, so I just turn off SELinux and then I don't have to deal with it."
I teach Linux for a living. I teach Red Hat's courses and hear this story in almost every class. Students even ask me if they'll have to do SELinux in the RHCT/RHCE exams, and then cringe in anticipation that I'll reply, "Yes." Of course, the only answer I can give is "I don't know; if it's in the book, it could be on the exam." ;)
You're right, there needs to be a buffer that makes SELinux troubleshooting and education less intimidating if we want end-users to keep SELinux enabled. I tell students in my classes that SELinux *is* intimidating and that they are not going to learn enough about it to write their own policy. But that they will learn enough to understand why SELinux is important and valuable and to be able to identify and fix the most common problems (missing labels, booleans that need flipping, etc.) so that they can keep their SELinux enabled systems running smoothly and that it's not as hard as they think.
I also think that application developers need to think about SELinux when writing code. If they also helped (that's "helped", not "did all the work") in producing policy for their own app(s), it just might not "get in the way".
This might be a pipe dream today; but, I remain hopeful.
On Thu, Feb 23, 2006 at 10:19:15AM -0700, Lamont R. Peterson wrote:
By no means is this limited to home users. I would say that the *vast* majority of corporate admins just turn off SELinux. The story behind how and why they learned to do that to begin with varies only in the details. It's almost always, "I had problems installing X or doing Y, and I found a document on the Internet that said that SELinux was in the way and didn't work right anyway and was too complicated and didn't do me any good, and that I couldn't learn enough about it to even understand what was happening, let alone deal with it, in less than a month and ... well, so I just turn off SELinux and then I don't have to deal with it."
You forgot the alternative, "SELinux does not help at all given our threat model, so it's all cost and no returns". That's the case here. I won't activate SELinux any time soon.
OG.
On Thu, 23 Feb 2006, Olivier Galibert wrote:
You forgot the alternative, "SELinux does not help at all given our threat model, so it's all cost and no returns". That's the case here. I won't activate SELinux any time soon.
Can I ask what your threat model is?
- James
On Thu, Feb 23, 2006 at 12:33:14PM -0500, James Morris wrote:
On Thu, 23 Feb 2006, Olivier Galibert wrote:
You forgot the alternative, "SELinux does not help at all given our threat model, so it's all cost and no returns". That's the case here. I won't activate SELinux any time soon.
Can I ask what your threat model is?
We're a governmental research lab somewhere, with students and visitors coming around and even classes in the conference rooms on a regular basis. The computers are behind a reasonable, bidirectional firewall. All disks are NFS-exported everywhere so that anyone can work no matter what computer they're on. You can always find some IPs that are in the access lists but for which the associated computer is offline at the time, especially since the list is accessed through NIS. Also, rlogind is active on most of the computers. Next to that, the web servers, ftp servers, etc. are reasonably competently administered, with rampant paranoia w.r.t. all scripts in there and that kind of stuff.
We don't have wifi at that point.
The biggest data loss we've had in some years is when someone stole a server computer, disks and all.
So our real threat is physical access, either stealing computers/disks or plugging into the network. The technical answer to that is paranoid encryption everywhere, which won't happen because the cost is way higher than the risk. SELinux doesn't enter the picture at any point. Remote control of a windows desktop box would be the secondary threat if it wasn't for the bidirectional firewall. The unix systems are far behind.
OG.
On 2/23/06, James Morris jmorris@redhat.com wrote:
Can I ask what your threat model is?
Primary threat: bear attacks
Secondary threat: moose attacks
-jef
On Thu, 2006-02-23 at 13:08 -0500, Jeff Spaleta wrote:
On 2/23/06, James Morris jmorris@redhat.com wrote:
Can I ask what your threat model is?
Primary threat: bear attacks Secondary threat: moose attacks
What if somebody approaches you holding a banana?
M
tor, 23 02 2006 kl. 13:13 -0500, skrev Michael Tiemann:
On Thu, 2006-02-23 at 13:08 -0500, Jeff Spaleta wrote:
On 2/23/06, James Morris jmorris@redhat.com wrote:
Can I ask what your threat model is?
Primary threat: bear attacks Secondary threat: moose attacks
What if somebody approaches you holding a banana?
You release the Bengal tiger, of course
Michael Tiemann wrote:
On Thu, 2006-02-23 at 13:08 -0500, Jeff Spaleta wrote:
On 2/23/06, James Morris jmorris@redhat.com wrote:
Can I ask what your threat model is?
Primary threat: bear attacks Secondary threat: moose attacks
What if somebody approaches you holding a banana?
Tertiary threat: gorilla attacks.
Michael Tiemann wrote:
On Thu, 2006-02-23 at 13:08 -0500, Jeff Spaleta wrote:
On 2/23/06, James Morris jmorris@redhat.com wrote:
Can I ask what your threat model is?
Primary threat: bear attacks Secondary threat: moose attacks
What if somebody approaches you holding a banana?
Disarm the banana.
On Thu, Feb 23, 2006 at 01:08:34PM -0500, Jeff Spaleta wrote:
On 2/23/06, James Morris jmorris@redhat.com wrote:
Can I ask what your threat model is?
Primary threat: bear attacks Secondary threat: moose attacks
Thankfully I don't have a sister.
OG.
On Thu, 2006-02-23 at 19:36 +0100, Olivier Galibert wrote:
On Thu, Feb 23, 2006 at 01:08:34PM -0500, Jeff Spaleta wrote:
On 2/23/06, James Morris jmorris@redhat.com wrote:
Can I ask what your threat model is?
Primary threat: bear attacks Secondary threat: moose attacks
Thankfully I don't have a sister.
Where did I put that interspace toothbrush...
Lamont R. Peterson wrote:
By no means is this limited to home users. I would say that the *vast* majority of corporate admins just turn off SELinux. The stories behind how and why they learned to do that in the first place vary only in the details. It's almost always, "I had problems installing X or doing Y and I found a document on the Internet that said that SELinux was in the way and didn't work right anyway and was too complicated and didn't do me any good and that I couldn't learn enough about it to even understand what was happening, let alone deal with it, in less than a month and ... well, so I just turn off SELinux and then I don't have to deal with it."
I think we might be aiming at the wrong target, especially in the case of corporate admins. Target application developers, not admins: applications must work without requiring any modification to the system and adapt accordingly. Make modifications invalidate the RHEL support contract: SELinux just helps you nail down lazy application developers. If the application means more money to the admin than the support contract, he disables it *knowingly*, and should the need arise, RH support engineers do rpm -Va, notice that something is fishy, and the admin pays per incident or whatever the contract says. If the admin does not like this, next time he'll complain to the application vendor, which will get his code, the actual culprit, fixed.
Davide Bolcioni
Hi.
On Thu, 23 Feb 2006 10:19:15 -0700, Lamont R. Peterson wrote:
details. It's almost always, "I had problems installing X or doing Y and I found a document on the Internet that said that SELinux was in the way and didn't work right anyway and was too complicated and didn't do me any good and that I couldn't learn enough about it to even understand what was happening, let alone deal with it, in less than a month and ... well, so I just turn off SELinux and then I don't have to deal with it."
I am in exactly that situation right now. I am migrating a whole bunch of machines over to a selinux-capable system, but I turn it off, mainly because I do not even remotely know enough about how it works and how one uses it.
It sucks. Majorly.
Is there a decent book available on understanding and managing selinux (covering what is available in RHEL4)? I am quite fond of dead tree versions for learning about a topic, and printing out PDFs is unsatisfying.
Ralf Ertzinger wrote:
Is there a decent book available on understanding and managing selinux (covering what is available in RHEL4)? I am quite fond of dead tree versions for learning about a topic, and printing out PDFs is unsatisfying.
http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/selinux-guide/
http://www.amazon.com/gp/search/ref=br_ss_hs/103-0884455-3434219?search-alia...
On Thu, 2006-02-23 at 15:36 -0500, John Poelstra wrote:
Ralf Ertzinger wrote:
Is there a decent book available on understanding and managing selinux (covering what is available in RHEL4)? I am quite fond of dead tree versions for learning about a topic, and printing out PDFs is unsatisfying.
http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/selinux-guide/
Yes, that is the de facto standard for RHEL 4 SELinux information. If you are just looking to get stuff done, look in the second part of the book.
Best yet, flip to the Index and look under "how to". For learning about SELinux, the "what is/what are" Index entries are numerous and useful.
http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/selinux-guide/ge...
For developers (hey, now I'm on-topic!), the first part that explains about SELinux and how it works is ... well, it is what it is. I did the best I could to make sense out of it for you all. ;-)
I encourage you to give the PDF a try. I pored over it, page by page, to make sure that it prints out without content loss or nasty, ugly formatting. If you don't want to print it yourself, Cafepress or Kinkos is quite capable.
http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/pdf/rhel-selg-en...
http://www.amazon.com/gp/search/ref=br_ss_hs/103-0884455-3434219?search-alia...
The only one there that's remotely useful is the upcoming SELinux by Example [1], authored by people who know what they are talking about. Aug. 2006 is the publication date on Amazon. The ORA book was based on FC 2 and a waaaaaay older framework in the kernel.
- Karsten
[1] http://www.amazon.com/gp/product/0131963694/sr=8-6/qid=1140789393/ref=sr_1_6...
On 2/23/06, Davide Bolcioni db-fedora@3di.it wrote:
Mike A. Harris wrote:
Both ATI and Nvidia's proprietary video driver installation utilities replace the Red Hat supplied libGL library with their own libGL.
Could SELinux be used to prevent this and, more generally, disallow replacement of rpm-controlled files even by the root user ?
Davide Bolcioni
Quite possibly. But the first BAD FAQ will be on the mailing lists:
How do I get my nvidia card to perform decently?
Download the tarballs from Nvidia, and reboot your system with SELINUX=0.
-- Stephen J Smoogen. CSIRT/Linux System Administrator
Davide Bolcioni wrote:
Mike A. Harris wrote:
Both ATI and Nvidia's proprietary video driver installation utilities replace the Red Hat supplied libGL library with their own libGL.
Could SELinux be used to prevent this and, more generally, disallow replacement of rpm-controlled files even by the root user ?
One of the SELinux guys can probably answer that better than I, as I don't use SELinux personally, and my knowledge of what all it can do, and how to make it do that, is rather limited. As some of our other developers have mentioned before, it's black voodoo magic.
;o)
chattr +i on the files might do the job, but then I suppose Nvidia's installer would just chattr -i them, circumventing it. It's far easier to just clearly state that 3rd party drivers are not supported in any way, shape or form, and give people the right expectations. Then they may or may not like it, but at least they know where things stand.
Could SELinux be used to prevent this and, more generally, disallow replacement of rpm-controlled files even by the root user ?
One of the SELinux guys can probably answer that better than I, as I don't use SELinux personally, and my knowledge of what all it can do, and how to make it do that, is rather limited. As some of our other developers have mentioned before, it's black voodoo magic.
Black Voodoo Magic?
All those complaints about how "SELinux is hard" and "I have no idea how SELinux works, I'm turning it off" make no sense to me. SELinux policy follows what your code does, and establishes a boundary around what it's allowed to do. If you're able to understand thousands of lines of complex code (in a project such as the X server), then surely you can figure out the (much simpler) security box around it, and change it to your advantage. The job of the SELinux developers (figuring out each and every detail of X, so it will never leave the security box and fail) seems harder to me.
Ultimately a failure in SELinux means that either (1) your application is doing something it shouldn't, or (2) the SELinux developers did not anticipate or understand that your application would need such privileges. Policy writing is a manual process at this time. Some people have mentioned to me that it might be possible to automate certain parts of it, but this isn't available today...
I agree with the earlier poster, who said application developers should help with policy. If you are interested in that, you can look at http://serefpolicy.sourceforge.net/, download the code, and at least have a look at the policy for your project (if one is available), and point out any critical flaws in it on the fedora-selinux list. I'm sure patches are also welcome.
========================
I'm not currently involved with policy development, so questions would be better directed at the selinux list. However, I have written some policy in the past, so here's what my suggestions would be (follow at your own risk):
1. Policy is a collection of language rules, as described here: http://www.nsa.gov/selinux/papers/policy2/x107.html
2. Later extended by Tresys to this: http://sepolicy-server.sourceforge.net/index.php?page=module-language
3. The rules are then processed through m4, which allows simulation of "functions" and "if-else" statements (now organized in modules, with an API, which get compiled and linked in two steps).
4. reading all the above is very useful, but in the short run you can follow existing patterns to learn what's going on
5. Everything that is not specifically allowed is denied.
6. A typical rule is: allow { src1_t src2_t } { target1_t target2_t }:{ class1 class2 } { permission1 permission2 }. Things that end in _t are types - they're defined with the "type" language rule. class1, class2 are typically things like file, dir, fifo_file, and are defined in flask/security_classes. permission1, permission2 are specific to the above class, and are defined in flask/access_vectors.
This rule allows subjects in { src1_t src2_t } to act on objects in { target1_t target2_t } of type { class1 class2 } in ways { permission1 permission2 }. The set notation above expands as a cartesian product.
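To make the set notation concrete, here is a small illustrative rule (the type names are typical of real policy, but the rule itself is invented for this example):

```
# Let domains httpd_t and sysadm_t read and stat files and symlinks
# labelled httpd_config_t.  The sets expand as a cartesian product,
# so this one rule covers 2 sources x 2 classes x 2 permissions.
allow { httpd_t sysadm_t } httpd_config_t:{ file lnk_file } { read getattr };
```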
7. Because m4 expansion is used, things are written in if-else statements (conditioned on things called booleans) and in functions/interfaces. This has the drawback of a steep learning curve, because you might not be familiar with the other macros being called. I suggest use of grep, and http://serefpolicy.sourceforge.net/api-docs/, to figure it out. It would be absolutely wrong to write only low-level rules - the policy structure should model the program being confined. Things in a shared library should go into a shared interface.
8. Follow existing patterns. Interfaces go in the .if file (they take arguments $1, $2), other rules go in the .te file, and file context labels go in the .fc file.
9. Important concepts are: domain transition (this is how you get out of your domain and into another one, potentially causing havoc there) and type transition (this is how your program sets the context of its files to something other than the parent directory's, without modifying the application code - black magic! an automatic chcon, very useful in practice).
Grep for domtrans/filetrans/domain_auto_trans in the policy; I'm not sure what those are called nowadays, but it shouldn't be hard to figure it out.
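To illustrate points 8 and 9 above, a minimal sketch of a hypothetical module (all type and interface names here are invented, and the exact macro spellings in current refpolicy may differ):

```
# myapp.te -- private types for a hypothetical confined application
type myapp_t;
type myapp_log_t;

# type transition: files that myapp_t creates inside directories
# labelled var_log_t are automatically labelled myapp_log_t, instead
# of inheriting var_log_t from the parent directory
type_transition myapp_t var_log_t:file myapp_log_t;

# myapp.if -- shared interface; $1 is the calling domain's type
interface(`myapp_read_log',`
	gen_require(`
		type myapp_log_t;
	')
	allow $1 myapp_log_t:file { read getattr };
')
```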
- Because m4 expansion is used, things are written in if-else
statements (conditioned on things called booleans), and in functions/interfaces. This has the drawback of having a steep learning curve, because you might not be familiar with the other macros being called. I suggest use of grep, and http://serefpolicy.sourceforge.net/api-docs/ to figure it out. It would be absolutely wrong to write only low-level rules - the policy structure should model the program being confined. Things in a shared library should go into a shared interface.
argh... correction: Booleans are not handled via m4 expansion; they are part of the language (because their state is determined at runtime, not at compile time). There's:
- the m4 ifelse construct, for conditional policy generation
- the language if construct, for booleans (conditional policy at runtime)
On Fri, Feb 24, 2006 at 05:23:05 -0500, "Mike A. Harris" mharris@mharris.ca wrote:
Davide Bolcioni wrote:
Mike A. Harris wrote:
Both ATI and Nvidia's proprietary video driver installation utilities replace the Red Hat supplied libGL library with their own libGL.
Could SELinux be used to prevent this and, more generally, disallow replacement of rpm-controlled files even by the root user ?
Yes it should be possible to do this. However, you need some way to distinguish updates of those libraries when done normally as opposed to being done by ATI or Nvidia code. What you would probably like to do is only let rpm change those files. However if ATI and Nvidia are supplying rpms, selinux isn't going to be able to tell the difference.
You could also go by what role the person who runs rpm had. Then it would be up to you to change your role based on whose rpms you were installing.
Another issue is that files only have one tag for selinux and if you use a tag that indicates just that it was installed by rpm, that isn't going to play nice with other selinux policies. You might be able to get away with restricting how files with a number of different types are updated. You may cover some files you don't want doing this, but I think you could get close.
Another approach would be to have rpm not allow rpms to stomp on files from other rpms if they weren't signed by the same key (perhaps --force would override that).
Bruno Wolff III wrote:
On Fri, Feb 24, 2006 at 05:23:05 -0500, "Mike A. Harris" mharris@mharris.ca wrote:
Davide Bolcioni wrote:
Mike A. Harris wrote:
Both ATI and Nvidia's proprietary video driver installation utilities replace the Red Hat supplied libGL library with their own libGL.
Could SELinux be used to prevent this and, more generally, disallow replacement of rpm-controlled files even by the root user ?
Yes it should be possible to do this. However, you need some way to distinguish updates of those libraries when done normally as opposed to being done by ATI or Nvidia code. What you would probably like to do is only let rpm change those files. However if ATI and Nvidia are supplying rpms, selinux isn't going to be able to tell the difference.
You could also go by what role the person who runs rpm had. Then it would be up to you to change your role based on whose rpms you were installing.
Another issue is that files only have one tag for selinux and if you use a tag that indicates just that it was installed by rpm, that isn't going to play nice with other selinux policies. You might be able to get away with restricting how files with a number of different types are updated. You may cover some files you don't want doing this, but I think you could get close.
Another approach would be to have rpm not allow rpms to stomp on files from other rpms if they weren't signed by the same key (perhaps --force would override that).
Except:
1) It would be an ugly hack, and would not actually stop people from doing what they really want to anyway.
2) Would make people get upset at SElinux and probably disable it if they don't already.
3) Would probably result in FAQ's and other "advice" to fix the problem recommending disabling SElinux.
4) Would require significant additional work to implement and test.
5) Even if it was implemented, many people already do not use SElinux, and would thus not be affected by the changes anyway.
6) Would not really bring any real world gain in the end.
I can't really envision anyone coming up with any viable rationale to really consider this as an option.
Everyone is given an OS to install and use, and with that freedom comes responsibility. You're given the rope to hang yourself with in thousands of places in Linux and Linux-like OSs. It is entirely the responsibility of the system administrator, or user responsible for the computer system to ensure that they are installing software wisely.
However, it is always possible to hang oneself with the rope one is given. Trying to make the OS prevent every possible manner in which a user can hang themselves is simply not possible, is a fruitless waste of time, and generally speaking is more likely to cause more problems than it solves.
idea.veto = 1;
On Fri, Feb 24, 2006 at 10:25:00 -0500, "Mike A. Harris" mharris@mharris.ca wrote:
- Would make people get upset at SElinux and probably disable it if
they don't already.
I admit I did that for FC3, but I really like targeted for FC4. I had a couple of issues with httpd where I had some stuff outside the /var/www/html tree that needed to be marked with the correct context, and a few perl scripts that needed more access (mostly access to postgres, and one talks to a remote host) that I made unconstrained (though I am trying to learn enough to tighten them back up).
I really want to try out strict. I think I know enough now to be able to work through problems, and I don't like programs having network access by default. This includes some CD players supplied by Fedora that are configured to do remote lookups by default. I also don't trust game software provided by commercial vendors. When I upgrade to FC5 I am going to at least try it out.
Everyone is given an OS to install and use, and with that freedom comes responsibility. You're given the rope to hang yourself with in thousands of places in Linux and Linux-like OSs. It is entirely the responsibility of the system administrator, or user responsible for the computer system to ensure that they are installing software wisely.
Currently that is a real pain to do, depending on how much trust you give to various vendors. Ideally you would like a separate environment for each different source of software that you want to install, so that when you do installs, the install scripts can't do some things (phone home, install DRM, etc.). You can kind of do that now by creating a separate account for each source and setting up the necessary directories with appropriate ownership before doing the install.
While I did something like this for Neverwinter Nights, so I could restrict its network access by user in my packet filter, this gets tiring after a while. SELinux isn't going to solve this problem either, but I might be able to have it block some bad behavior for me without spending as much effort.
Both ATI and Nvidia's proprietary video driver installation utilities replace the Red Hat supplied libGL library with their own libGL.
Could SELinux be used to prevent this and, more generally, disallow replacement of rpm-controlled files even by the root user ?
Yes it should be possible to do this. However, you need some way to distinguish updates of those libraries when done normally as opposed to being done by ATI or Nvidia code. What you would probably like to do is only let rpm change those files. However if ATI and Nvidia are supplying rpms, selinux isn't going to be able to tell the difference.
The goal here is not to prevent Nvidia-supplied rpms from running on Linux. The goal is to prevent shell-based installers from modifying files that are "controlled" by the rpm database. Nvidia rpms would not create a problem on Fedora, since any conflicts with other rpms would be exposed by the package manager.
Another issue is that files only have one tag for selinux and if you use a tag that indicates just that it was installed by rpm, that isn't going to play nice with other selinux policies. You might be able to get away with restricting how files with a number of different types are updated. You may cover some files you don't want doing this, but I think you could get close.
I think this is the correct way to do it. I don't follow why you couldn't get close...
You'd enumerate all the contexts for files under /lib, /usr/lib, etc.. places which would be declared "controlled" by rpm. Then you create a new attribute called "managed" or something like that, and mark all those types with that attribute. Then you write policy to allow rpm to manage those types. You write an assertion to make sure nothing but rpm manages those files. Then audit and remove all rules from policy that violate that assertion. I haven't written policy in a while, but shouldn't this work?
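A rough sketch of that approach in policy syntax (the attribute name is invented, and a real policy would need to enumerate far more library types than the two shown here):

```
# group all rpm-managed library types under one attribute
attribute rpm_managed_type;
typeattribute lib_t rpm_managed_type;
typeattribute textrel_shlib_t rpm_managed_type;

# only rpm's domain may create, write, or replace such files
allow rpm_t rpm_managed_type:file { create write unlink rename };

# assertion: the policy fails to build if any other domain is ever
# granted those permissions on a managed type
neverallow { domain -rpm_t } rpm_managed_type:file { create write unlink rename };
```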
Another approach would be to have rpm not allow rpms to stomp on files from other rpms if they weren't signed by the same key (perhaps --force would override that).
That solves a completely different problem from the original question.
The goal here is not to prevent Nvidia-supplied rpms from running on Linux. The goal is to prevent shell-based installers from modifying files that are "controlled" by the rpm database. Nvidia rpms would not create a problem on Fedora, since any conflicts with other rpms would be exposed by the package manager.
Well, maybe this is wishful thinking, since rpm does have scripting capabilities, so it can do whatever it wants to...
On 2/24/06, Ivan Gyurdiev ivg2@cornell.edu wrote:
The goal here is not to prevent Nvidia-supplied rpms from running on Linux. The goal is to prevent shell-based installers from modifying files that are "controlled" by the rpm database. Nvidia rpms would not create a problem on Fedora, since any conflicts with other rpms would be exposed by the package manager.
Correction... non-crackrock rpms would not create a problem. You can do an amazing amount of damage via postinstall scripts inside packages. It wouldn't be all that difficult to create an nvidia rpm that dropped the nvidia installer on the system and then ran the installer via a postinstall script. In fact, I'm pretty sure I've seen that sort of beast in the wild at some point. If your security is so tight that postinstall actions in rpm packages generally fail when tampering with other packages' files, then you break lots of postinstall actions.
-jef
Correction... non-crackrock rpms would not create a problem. You can do an amazing amount of damage via postinstall scripts inside packages. It wouldn't be all that difficult to create an nvidia rpm that dropped the nvidia installer on the system and then ran the installer via a postinstall script. In fact, I'm pretty sure I've seen that sort of beast in the wild at some point. If your security is so tight that postinstall actions in rpm packages generally fail when tampering with other packages' files, then you break lots of postinstall actions.
I think rpm scripts already run within rpm_script_t domain which is confined on strict policy. Not sure how extensive the confinement is (I don't think it's very extensive).
What kind of scripts legitimately need to tamper with other packages' files? Examples?
On 2/24/06, Ivan Gyurdiev ivg2@cornell.edu wrote:
What kind of scripts legitimately need to tamper with other packages' files? Examples?
I take it you mean non-config-file examples?
-jef
What kind of scripts legitimately need to tamper with other packages' files? Examples?
I take it you mean non-config-file examples
Right now I see things like this in the current policy for rpm_script_t (on strict, no less...). Not sure why all of those things are necessary:

# ideally we would not need this
auth_manage_all_files_except_shadow(rpm_script_t)

# ideally we would not need this
dev_manage_generic_blk_files(rpm_script_t)
dev_manage_generic_chr_files(rpm_script_t)
dev_manage_all_blk_files(rpm_script_t)
dev_manage_all_chr_files(rpm_script_t)

storage_raw_read_fixed_disk(rpm_script_t)
storage_raw_write_fixed_disk(rpm_script_t)
On Fri, Feb 24, 2006 at 10:27:37 -0500, Ivan Gyurdiev ivg2@cornell.edu wrote:
You'd enumerate all the contexts for files under /lib, /usr/lib, etc.. places which would be declared "controlled" by rpm. Then you create a new attribute called "managed" or something like that, and mark all those types with that attribute. Then you write policy to allow rpm to manage those types. You write an assertion to make sure nothing but rpm manages those files. Then audit and remove all rules from policy that violate that assertion. I haven't written policy in a while, but shouldn't this work?
You're right, you could do that. There wouldn't be just one 'managed' context, though. You'd have to make a 'managed' version of each existing context that was used in those directories. It's a bit more work, but would be doable.
On Wed, 2006-02-22 at 20:07 -0500, Mike A. Harris wrote:
Both ATI and Nvidia, and perhaps even other 3rd party drivers out there come in some form of tarball or equivalent form from the particular vendor.
The Intel driver is worse than that, in some ways. In that case you don't even need to seek out and install separate software; a clean Fedora installation out of the box will run binary-only code supplied by your board manufacturer, without really giving you much clue that it's doing so.
I recently purchased a board with Intel i915 graphics, because I was led to believe that it had a fully open source driver -- and now I've found that all the mode setup is in binary-only code. So I can't make it do the PAL output modes for which I purchased it.
On Thu, Feb 23, 2006 at 02:34:57PM +0000, David Woodhouse wrote:
Fedora installation out of the box will run binary-only code supplied by your board manufacturer, without really giving you much clue that it's doing so.
Correct. Each Intel board has board-specific analogue circuitry and interfaces which are driven via a BIOS interface. X would need a driver for each motherboard (and potentially each board rev) to do anything else.
Alan
Alan Cox wrote:
On Thu, Feb 23, 2006 at 02:34:57PM +0000, David Woodhouse wrote:
Fedora installation out of the box will run binary-only code supplied by your board manufacturer, without really giving you much clue that it's doing so.
Correct. Each Intel board has board-specific analogue circuitry and interfaces which are driven via a BIOS interface. X would need a driver for each motherboard (and potentially each board rev) to do anything else.
Yes, the driver may need to become more complicated in order to do direct mode programming; however, users of other proprietary OSs do have video drivers that give them what they want/need, so it is possible, in theory at least, to have OSS drivers that do the same. There's no reason it needs to be multiple separate drivers, however; conditional codepaths and autodetection within a single driver should be able to handle it, at least in theory.
All smoke until Intel does something that allows the situation to change though.
David Woodhouse wrote:
On Wed, 2006-02-22 at 20:07 -0500, Mike A. Harris wrote:
Both ATI and Nvidia, and perhaps even other 3rd party drivers out there come in some form of tarball or equivalent form from the particular vendor.
The Intel driver is worse than that, in some ways. In that case you don't even need to seek out and install separate software; a clean Fedora installation out of the box will run binary-only code supplied by your board manufacturer, without really giving you much clue that it's doing so.
I recently purchased a board with Intel i915 graphics, because I was led to believe that it had a fully open source driver -- and now I've found that all the mode setup is in binary-only code. So I can't make it do the PAL output modes for which I purchased it.
Yeah, that's a situation that continues to suck bigtime. The i[89]resolution utilities hack around it in some cases, but it's still just an ugly hack, and doesn't work all the time.
Oddly enough, though, the i810 driver is currently the video driver with the most vendor support. Hopefully Intel will change their tune about mode programming documentation in the future.
This sounds like an incredibly useful thing to have in the release notes for FC5, perhaps in abbreviated form.
CC'ing relnotes@ as per instructions at http://fedoraproject.org/wiki/DocsProject/ReleaseNotes/Process
Mike A. Harris wrote:
There have been a number of bugs reported in Red Hat bugzilla against X which have recently been tracked down to 3rd party video drivers being the culprit behind the problem the user was experiencing. In many of the cases, however, it wasn't obvious that the 3rd party drivers were at fault, because the user was actually using the Red Hat supplied drivers, and not the 3rd party driver that they had previously installed.
Since I've wasted at least 6-8 hours in the last month diagnosing issues of this nature which have later turned out to be caused by proprietary drivers having been "installed" on the system, whether they were actually being *used* or not, I thought I should write a short, useful, informational email on the topic to the lists to try and inform people of some pitfalls you may encounter if you even _install_ 3rd party video drivers.
Both ATI and Nvidia, and perhaps even other 3rd party drivers out there, come in some form of tarball or equivalent from the particular vendor. Most users seem to favour the hardware vendor supplied drivers directly, rather than using more sanely packaged 3rd party packages that contain the same drivers. This is very unfortunate, because performing these 3rd party tarball driver installations is very harmful to your clean OS installation.
Both ATI and Nvidia's proprietary video driver installation utilities replace the Red Hat supplied libGL library with their own libGL. Nvidia's driver installs a replacement libglx.a X server module, removing the Red Hat supplied X.Org module in the process. ATI's driver may or may not replace libglx.a with its own; I haven't checked (but if someone could confirm that, I'd appreciate knowing for certain).
Once you have either of these drivers installed on your system, you can no longer use DRI with any video card. So if you install the ATI fglrx driver, while in theory you should still be able to use the Red Hat supplied radeon driver, you may no longer be able to use DRI with the radeon driver, because ATI's driver has blown away critical files that come with the OS and are needed for proper operation.
If you install Nvidia's driver, and later decide to install an ATI card, and still have Nvidia's driver installed, bang - you will not be able to get Red Hat supplied DRI 3D acceleration to work. You must remove Nvidia's driver completely from your hard disk, and completely reinstall all of the xorg-x11 and mesa packages, and ensure they are all intact by using:
rpm -Va
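A note on reading the verify output may help here (the library path below and the package name globs are hypothetical; adjust them to the packages actually installed):

```shell
# Verify every file of every installed package against the rpm database;
# any output line means a file on disk differs from what rpm installed.
rpm -Va

# Each output line begins with a string of test flags, where "." means
# the test passed.  A clobbered library might show up as (hypothetical):
#   S.5....T   /usr/lib/libGL.so.1.2
# S = file size differs, 5 = MD5 checksum differs, T = mtime differs

# To verify just the freshly reinstalled X and Mesa packages
# (hypothetical name patterns):
rpm -V $(rpm -qa 'xorg-x11*' 'mesa*')
```

If the reinstall worked, the second command should print nothing at all.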
Another problem being reported by a few people is that they are unable to get DRI to work because mesa's libGL is looking for the DRI drivers in the wrong directory. The claim is that mesa is looking for the DRI drivers in /usr/X11R6/lib/modules.
On a fresh OS install however, my findings are that mesa's libGL very much is not looking in /usr/X11R6 for it's modules. It is looking in the proper location of /usr/lib/dri for the modules. Why then is it looking in the wrong place on some systems?
Answer: Because of fglrx having been installed. If you have had a previous OS release installed, and have installed ATI's fglrx driver from tarball, it has removed the OS supplied libGL et al and made backup copies of them aparently. Now you do an OS upgrade which works properly and installs everything in the right place. Then you uninstall ATI's fglrx with whatever script or whatever they supply, and now you try to run X, and get no DRI!
Well, since you don't have fglrx installed at all, it must be our OS at fault right! Wrong. the uninstall script has put the OLD libGL it backed up (from FC4 or whatever) back in the system, overwriting the new FC5 supplied libGL in the process, and since ATI's fglrx driver is DRI based as well, it looks for the DRI modules in the wrong place now.
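A quick way to check whether you've been bitten by this: see if the libGL on disk is still owned by a Fedora package. A sketch (paths are for an FC5-era x86 system, and the library path is an assumption; adjust for your install):

```shell
# Sketch: is the installed libGL still the Fedora-supplied one?
DRI_DIR=/usr/lib/dri   # where Fedora's mesa libGL looks for DRI drivers
echo "expected DRI module dir: $DRI_DIR"
# rpm -qf /usr/lib/libGL.so.1
#   ...should name a Fedora package; "is not owned by any package"
#   means a 3rd party installer has replaced it.
# LIBGL_DEBUG=verbose glxinfo 2>&1 | grep -i dri
#   ...Mesa's LIBGL_DEBUG env var makes libGL report where it is
#   actually searching for the DRI drivers at runtime.
```

If `rpm -qf` reports the file as unowned, or the debug output shows a search path other than /usr/lib/dri, you're looking at a leftover from a tarball driver install.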
Conclusions: If you are going to use any 3rd party proprietary drivers, please do yourself and everyone else a huge favour, and at least get your drivers from a reputable 3rd party rpm package repository such as livna.org, which packages both the nvidia and ati proprietary drivers in rpms that install the drivers sanely without overwriting Red Hat/Fedora supplied files. These 3rd party packages install their files in alternative locations and configure the X server et al. appropriately so that everything works. Since they do not blow away OS supplied files, you can still use the OS supplied drivers by reconfiguring xorg.conf. Also, if you decide to uninstall the 3rd party drivers via rpm, they just go away and cause no further harm to the system. So PLEASE USE THIRD PARTY RPM PACKAGES if you _must_ use 3rd party drivers. It helps create world peace.
If you choose to install ATI or Nvidia tarball/whatever drivers directly from ATI/Nvidia (or any other vendor for that matter), your system is 100% completely and totally unsupported. Even if you are using _our_ drivers, your 3rd party driver installation may have blown away our libGL, our libglx.a or any other files that have been supplied by our OS. As such, your system is not supported.
For those who encounter a bug of any kind whatsoever while using 3rd party video drivers: completely remove the 3rd party drivers from your system, then perform a full "yum update" to ensure you have the latest Fedora Core supplied X packages installed. After doing this, do an "rpm -Va" of your whole system, paying particular attention to the xorg-x11-*, mesa-* and lib* packages. If any discrepancies are found in Fedora supplied packages, in particular in libGL or the X server packages, remove and reinstall them, then reverify that the files installed on your system are the ones shipped by Fedora.
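For the verification step, something along these lines (a sketch; the rpm line is commented so you can review it before running as root, and rpm's support for these glob patterns is what I'd expect, not something I've re-verified on every release):

```shell
# Sketch of the verification pass: check the package groups most
# likely to have been damaged by a 3rd party driver install.
for pat in 'xorg-x11-*' 'mesa-*' 'lib*'; do
  echo "verify packages matching: $pat"
  # rpm -Va "$pat"   # silence is good; any output means a file
  #                  # differs from what the package shipped
done
```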
If you are able to reproduce the problem you are having after having performed these steps, and having ensured that you are neither using 3rd party drivers, nor even have them installed, then feel free to file a bug report in bugzilla.
By doing this small amount of pre-diagnosis of your own system if you are using 3rd party drivers, you will save yourself a lot of headaches, and will save other people, including developers such as myself, from wasting endless hours trying to diagnose problems which turn out to be bogus - hours which could have been spent fixing legitimate bugs that are present in bugzilla.
As an additional note - if anyone is using proprietary drivers and has any problems which they believe might actually be a bug in Xorg and not in their proprietary driver - file such bugs directly in X.Org bugzilla. X.Org has an nVidia (closed) component specifically for the proprietary driver, and Nvidia engineers get those bugs and will investigate them over time.
Anyhow, I hope this helps people understand at least some of the problems that can occur when you opt to use 3rd party drivers, presents some alternatives, and helps people diagnose problems which might be caused by having installed 3rd party drivers.
Thanks for reading. TTYL
P.S. Feel free to forward this email on to any other lists or people whom you think might benefit from it. Also, if anyone thinks this information would be useful to have on the Fedora Wiki or somewhere else, feel free to copy my email into a wiki page, or paraphrase, etc.
On Thu, 2006-02-23 at 15:21 -0500, Jack Tanner wrote:
This sounds like an incredibly useful thing to have in the release notes for FC5, perhaps in abbreviated form.
CC'ing relnotes@ as per instructions at http://fedoraproject.org/wiki/DocsProject/ReleaseNotes/Process
Thanks, I haven't seen it come in yet, but I decided to file a bug report to work from[1]. We'll put a short, descriptive, action-oriented report in the release notes beats (fp.o/wiki/Docs/Beats/Xorg) and probably paste Mike's essay in its entirety on another Wiki page to reference.
cheers - Karsten [1] https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=182677