This may be a dumb question, but why can't Red Hat distribute the NVIDIA
binary drivers? In NVIDIA's licence
(http://www.nvidia.com/object/nv_swlicense.html) it says:
"2.1.2 Linux Exception. Notwithstanding the foregoing terms of Section
2.1.1, SOFTWARE designed exclusively for use on the Linux operating system
may be copied and redistributed, provided that the binary files thereof
are not modified in any way (except for unzipping of compressed files)."
So, what's keeping Red Hat from putting the drivers in the distribution?
If it's a GPL thing, would it be easy to just download them during
installation, or at least give the user the option?
OK, I have had Yarrow working well for a while now, but yesterday I
started experiencing some odd issues with my mouse. All of a sudden it
stops working correctly. The only thing that seems to fix it is to kill X,
run mouse-test, and then restart.
Also, I have FC1 running on a desktop which is hooked up to a KVM
switch. Whenever I switch to another PC and return, the same thing
happens: the mouse goes crazy.
I don't know the right way to fix this, but something is definitely broken
and needs to be fixed, one way or the other. The question is what exactly
needs to be fixed.
Consider something like this:
AC_TRY_LINK_FUNC(res_query, AC_MSG_RESULT(yes), AC_MSG_RESULT(no))
Here's what happens on x86_64:
gcc -o conftest -g -O2 -Wall -I.. -I./.. conftest.c -lresolv >&5
/tmp/ccW7EeDX.o(.text+0x7): In function `main':
/home/mrsam/src/courier/authlib/configure:5160: undefined reference to
`res_query'
collect2: ld returned 1 exit status
configure:5147: $? = 1
configure: failed program was:
[ blah blah blah ]
| /* We use char because int might match the return type of a gcc2
|    builtin and then its argument prototype would still apply. */
| char res_query ();
| int main ()
| {
|   res_query ();
|   return 0;
| }
The exact same test on FC1 x86 works. The reason appears to be that on
x86_64 you have to #include <resolv.h> in order to successfully pull
res_query() out of libresolv.so (presumably the header maps res_query onto
the symbol that the library actually exports). You don't need to do this
on x86, and the test program generated by AC_TRY_LINK_FUNC does not
include any headers, but uses a manual prototype.
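One way around this would be a link test that pulls in the resolver
headers instead of a bare prototype. A sketch only - the header list and
the dummy arguments are my assumptions, not the actual courier-authlib
change:

```m4
dnl Sketch: header-aware replacement for the bare AC_TRY_LINK_FUNC check,
dnl so whatever <resolv.h> does to res_query is in effect at link time.
dnl Header list and dummy call are assumptions.
AC_MSG_CHECKING([for res_query in -lresolv])
save_LIBS="$LIBS"
LIBS="$LIBS -lresolv"
AC_TRY_LINK([
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/nameser.h>
#include <resolv.h>
], [
res_query ((char *) 0, 0, 0, (unsigned char *) 0, 0);
], [AC_MSG_RESULT(yes)], [AC_MSG_RESULT(no); LIBS="$save_LIBS"])
```

This keeps -lresolv in LIBS only when the link actually succeeds.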
So, what now?
This thread has been beating around the bush and avoiding even an
attempt to reach agreement. People need to first agree on the purpose of
this testing and the definition of the resources. Even the original
poster seems to have lost track of his original purpose for posting.
To try to put things in perspective, I think the following terms need
definition, both as to what they are and what purpose they serve.
Test Release - is it merely a convenient snapshot for installation,
serving no useful purpose after that (other than PR)? This is what I would
gather from those that advocate that testers stay in sync with Rawhide.

Rawhide - is it the staging ground for release candidates, or is it a
communication point between a developer and those that are in contact with
him? If it is the latter, then there needs to be another repository that
indicates that a package is ready for global testing. If it is the former,
then I wonder why an intermediate stage exists in the Core 1 test cycle.
I think that Robert Day's initial point is correct. If there is no
stable baseline then testers are constantly finding superficial bugs;
deep bugs that take hours of testing will never get reached. Alan Cox is
also right when he states that a tester should check against the current
state of rawhide before he reports a bug.
I think that a lot of the confusion comes from the lack of a public test
plan and the lack of guidelines for testers. Also, for some reason, the
difference between internal testers (which, in the framework of open
source, I would consider to be dedicated testers) and beta testers (those
trying to use the features in a real environment) has been totally
blurred. Most of the arguments against Robert Day were from the
perspective of internal testers. They are right for their function. But
most of them didn't need Test 1 except to test Anaconda; they were in
sync with rawhide anyway. For beta testers, a stable platform is needed.
If they are not at least pretending to do useful work, then real-life
considerations will never surface.
In summary, I also advocate an intermediate repository whose sole
purpose is to keep the baseline usable. If Red Hat is unwilling to do
this until development branches to Core 3, then I must assume that
Red Hat regards the final release of Fedora as the real beta.
Where can I find a DVD ISO of FC2? Or how can I make my own DVD ISO
from the 4 CD ISOs?
I was looking in fedora-list for some information, but I haven't been
able to make it work (I found a couple of scripts, but none of them works).
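For reference, the scripts I found all attempt roughly the same thing. A
sketch of the idea - the ISO file names, mount point, destination path,
and volume label are my guesses, and disc 1's isolinux/ directory is
assumed to provide the boot files:

```shell
#!/bin/sh
# Sketch only: merge the four CD ISOs into one tree, then build a
# bootable DVD image. File names and paths are placeholders.
set -e
DEST=/data/fc2-dvd
mkdir -p "$DEST" /mnt/cd
for n in 1 2 3 4; do
    mount -o loop "FC2-i386-disc$n.iso" /mnt/cd
    cp -a /mnt/cd/. "$DEST"/    # later discs overwrite shared metadata
    umount /mnt/cd
done
mkisofs -o FC2-i386-DVD.iso \
    -b isolinux/isolinux.bin -c isolinux/boot.cat \
    -no-emul-boot -boot-load-size 4 -boot-info-table \
    -R -J -T -V "Fedora Core 2 DVD" "$DEST"
```

That is the general shape; whether the resulting image actually boots is
exactly the part I can't get to work.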
Thanks in advance.
Justin has raised an interesting question on his weblog concerning Fedora
Core and the amd64: should developers spend time on FC1 x86_64, or put
their efforts into FC2, which will be available "real soon now"?
I can see arguments for both approaches, but I wonder what other amd64
users think.
1. With limited developer resources and FC2 test2 about two weeks away (it
will have an x86_64 snapshot with needed i386 packages), the better option is
to put the resources into getting FC2 "right". With the test1 snapshot as a
base and all of the package updates in development, the x86_64 versions
should be in pretty good shape. Furthermore, the Red Hat build process is
set up to handle x86_64 builds for development (and FC2 Test2) whereas x86_64
updated packages are few and far between. When the test2 snapshot rolls out,
x86_64 updates will be available as a matter of course (or at least that is
the theory).
2. At the current time, FC1 is more stable (especially with all of the
updates applied) than the current state of FC2 (as could be expected since
FC2 is still in development and testing). In addition, it is not at all
difficult to rebuild the ix86 packages for the x86_64 (I have done this for
the current set of updates). FC1 test1 + updates is a fairly stable platform
and should be pretty close to what would be available if FC1 x86_64 final
was ever released.
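The rebuild itself is mostly mechanical; a sketch of the kind of command
involved (the package name is a placeholder, not one of the actual
updates):

```shell
# Sketch: rebuild an i386 update for x86_64 from its source RPM.
# "somepackage-1.0-1.src.rpm" is a placeholder name.
rpmbuild --rebuild --target x86_64 somepackage-1.0-1.src.rpm
# Binary RPMs land under the RPMS/x86_64 directory of the build tree
# (on FC1, typically /usr/src/redhat/RPMS/x86_64).
```

Most packages need nothing beyond this; a few want spec-file tweaks.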
So, work on getting FC2 with varying stability (there will be problems) or put
time in to roll out a FC1 x86_64 final? At the present time, I lean toward
option 1. I do have a working FC1 test1 plus all updates, so the release
of the FC2 x86_64 final will be of little value to me except to do testing
for others.
I have built all of the ix86 updated packages (as applicable) and even
included mozilla 1.6 from development to provide a 64-bit mozilla.
Together with some others who are interested in an FC1 x86_64 system, I
could make the rebuilt packages available on an "it works for me but use
at your own risk" basis.
I've had several up2date and yum hiccups lately and ran out of disk
space. Delving into where the space went, I found that I have two
versions of quite a few packages installed. For example, rpm -qa | grep
cyrus | sort returns the following:
yum update complains that cyrus-imapd-2.2.3-2 is not available when
trying to update the other cyrus-imapd-xxxx packages.
Is there a way to clean this up? I tried rpm --rebuilddb in case the
database was toast, but no luck.
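To show what I mean, this is the sort of check and cleanup I have in mind
(the version below is just an example; I assume removing a package by its
full name-version-release leaves the newer copy alone):

```shell
# List package names installed more than once in the rpm database.
rpm -qa --qf '%{NAME}\n' | sort | uniq -d

# Then remove a stale duplicate by its full name-version-release, e.g.:
#   rpm -e cyrus-imapd-2.2.3-2
```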
I hate to make a post like this, but I'm totally confused.
I've been using FC2T1 since it came out; until now the only real problem I'd run into was X crashing due to psaux.
But today I downloaded kernel version 2.6.3-1.106 as well as about 500 other updates (I had already downloaded all of the XFree86 updates the day before and had no problems). Now I cannot boot the system with any version of the kernel. The X server starts up, but the screen just turns black with the 'X' cursor. When I press Ctrl-Alt-F1 to get to a text console, the system is hanging just after the step where the swap space is activated/mounted. So I can never get anywhere near a login prompt to try to diagnose the problem.

One other thing: when I downloaded all of those updates, I did it in two groups; the first group had the kernel updates, all lib* updates, and I think systembase (something like that), and the second group was everything else. When I started up2date for the second group, it couldn't show the little pictures on the 'forward', 'back', and 'cancel' buttons (the blue arrows and red X). Then I restarted gkrellm and it said that it could not load some .png files. So I reinstalled libpng (I think version 1.2.5) and logged out of KDE. The X server wouldn't start and give me the blue login screen, so I rebooted, and haven't been able to get into Linux since.
If anyone sees any of the symptoms I've mentioned or has any ideas, I would greatly appreciate any help I can get.
Richard Ayer III