This may be a dumb question, but why can't Red Hat distribute the NVIDIA binary
drivers? In NVIDIA's licence (http://www.nvidia.com/object/nv_swlicense.html) it says:
"2.1.2 Linux Exception. Notwithstanding the foregoing terms of Section
2.1.1, SOFTWARE designed exclusively for use on the Linux operating system
may be copied and redistributed, provided that the binary files thereof are
not modified in any way (except for unzipping of compressed files)."
So, what's keeping Red Hat from putting the drivers in the distribution? If
it's a GPL thing, would it be easy to just download them during installation,
or at least give the user the option?
Ok, I have had Yarrow working well for a while now, but yesterday I
started experiencing some odd issues with my mouse. All of a sudden it
stops working correctly. The only thing that seems to fix it is to kill X,
run mouse-test, and then restart.
Also, I have FC 1 running on a desktop which is hooked up to a KVM
switch. Whenever I go to another PC and return, the same thing
happens: the mouse goes crazy.
Damian Menscher <menscher(a)uiuc.edu>@redhat.com on 04/07/2004 04:57:13 PM
> On Wed, 7 Apr 2004, Jeff Elkins wrote:
> > I'm getting failure messages on my nfs mounts i.e. :
> > mount to NFS server 'music.elkins' failed: server is down.
> > nfsd appears to be running and I didn't see anything suspicious in the logs.
> > The servers are up and running and have other clients connected.
> You didn't mention what steps you took to debug it:
> Can you ping the server?
> What is the output of rpcinfo -p servername?
> Does the server have access restrictions (firewall, TCP Wrappers, etc)?
I have the same symptoms...
rpcinfo says that nfs et al. are running.
Something has changed in test 2, since the same PC running RH9
accesses that host just fine.
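For anyone hitting this, the checks Damian suggests can be sketched as a small client-side script. The server name comes from the original post; the export path /export and the dry-run wrapper are my own assumptions:

```shell
#!/bin/sh
# NFS client-side debugging checklist. Prints each command first;
# set RUN=1 to actually execute the checks.
SERVER=${SERVER:-music.elkins}

run() {
    echo "+ $*"
    if [ "${RUN:-0}" = 1 ]; then
        "$@"
    fi
}

run ping -c 3 "$SERVER"                      # name resolution + basic reachability
run rpcinfo -p "$SERVER"                     # portmapper, mountd, nfs registered?
run showmount -e "$SERVER"                   # exports visible to this client?
run mount -v -t nfs "$SERVER:/export" /mnt   # verbose mount attempt
```

If rpcinfo times out but ping works, suspect a firewall or TCP wrappers on the server rather than nfsd itself.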
I don't know the right way to fix this, but something is definitely broken;
and something needs to be fixed, one way or the other. The question is what
exactly needs to be fixed.
Consider something like this:
AC_TRY_LINK_FUNC(res_query, AC_MSG_RESULT(yes), AC_MSG_RESULT(no))
Here's what happens on x86_64:
gcc -o conftest -g -O2 -Wall -I.. -I./.. conftest.c -lresolv >&5
/tmp/ccW7EeDX.o(.text+0x7): In function `main':
/home/mrsam/src/courier/authlib/configure:5160: undefined reference to `res_query'
collect2: ld returned 1 exit status
configure:5147: $? = 1
configure: failed program was:
[ blah blah blah ]
| /* We use char because int might match the return type of a gcc2
|    builtin and then its argument prototype would still apply.  */
| char res_query ();
| int
| main ()
| {
| res_query ();
|   ;
|   return 0;
| }
The same exact test on FC1 x86 will work.
The reason appears to be that you have to #include <resolv.h> on x86_64
in order to successfully pull res_query() out of libresolv.so. You don't
need to do this on x86, and the test program generated by AC_TRY_LINK_FUNC
does not include any headers, but uses a manual prototype.
So, what now?
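One possible workaround, sketched here as an assumption rather than a vetted patch: switch from AC_TRY_LINK_FUNC to AC_TRY_LINK so the real resolver headers are included. On glibc, <resolv.h> can remap res_query() to __res_query() with a macro, which a header-less conftest never sees:

```m4
dnl Include the resolver headers so any res_query -> __res_query
dnl remapping in <resolv.h> applies before we try to link.
AC_TRY_LINK([
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/nameser.h>
#include <resolv.h>
], [res_query(0, 0, 0, 0, 0);],
AC_MSG_RESULT(yes),
AC_MSG_RESULT(no))
```

This keeps the -lresolv link test but compiles the call the same way real code would.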
The plan is to do a kernel update Friday or Monday (depending on the
amount of things that turn up in this testing). There is a 424 kernel in
the testing dir on the ftp repository and on
One of the last minute changes is a series of patches to the input
subsystem that are supposed to fix a lot of the "tapping my laptop's
touchpad doesn't work" bugs. So please beat hard on this kernel...
Arjan van de Ven
Mike doesn't seem to be paying attention to this somewhat dated
thread, but in case anyone else is noticing: it seems the erase
setting is different from that of previous xterms. Looking at stty -a
output, I get
erase = ^?
I think this means that the backspace key in emacs -nw will not
function as expected (without some elisp correction) but will invoke
help instead. Very disconcerting when coding at full speed.
One can set erase to ^H in .bash_profile or similar but then there is
the need to keep up with that setting when changing users or etc.
I'm guessing the old default was `erase = ^H' but am not really sure. I
don't see any settings in my rc files, so apparently it didn't need any
setting before.
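The .bash_profile workaround mentioned above might look like this. This is just a sketch; it assumes bash and GNU stty's caret notation, and guards against non-interactive shells:

```shell
# In ~/.bash_profile: force erase back to ^H, but only on a real
# terminal -- stty errors out in non-interactive contexts (scp, cron).
if tty -s; then
    stty erase '^H'
fi
```

The downside, as noted, is that every user (and every account you su to) needs the same snippet.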
So after successfully convincing my boss to allow me to install linux at work
on a dual boot machine, I did so this week. Guess what? The infamous (that
I am now newly aware of) win XP dual boot bug rears its ugly head! Now I'm
in trouble for screwing up a windows computer that other people need, and the
case for linux at work has just received a nail in the coffin... So I get to
try to fix this on my own time, thank you fedora, and just hope that in the
far off future I can try again with fedora core 10 or something...
I mean, JESUS CHRIST, how can such a show-stopping bug be allowed in a full
release?! If people are going to be migrating away from windows and giving
linux a try, is it wise to trash their computers?! Now I have to become an
expert on disk repair so I can fix the stupid thing.
Of course, I'm happy to use FC2 and say hasta la vista to windows xp
forever... but back in the real world, we still need windows.
I'm trying to do a yum update and it is amazingly slow. Is
there a problem currently?
It falls over a lot on the kernel-sourcecode package.
"If I face my God tomorrow, I can tell Him I am innocent.
I've never harmed anyone. I have cheated no one.
I have deceived no one. I have hurt no one.
Except myself. And that He will forgive me." - Hans Holzel