Damian Menscher <menscher(a)uiuc.edu> wrote on 04/07/2004 04:57:13 PM:
> On Wed, 7 Apr 2004, Jeff Elkins wrote:
> > I'm getting failure messages on my nfs mounts, e.g.:
> > mount to NFS server 'music.elkins' failed: server is down.
> > nfsd appears to be running and I didn't see anything suspicious in the logs.
> > The servers are up and running and have other clients connected.
> You didn't mention what steps you took to debug it:
> Can you ping the server?
> What is the output of rpcinfo -p servername?
> Does the server have access restrictions (firewall, TCP Wrappers, etc)?
I have the same symptoms...
rpcinfo says that nfs et.al. are running.
Something has changed in test 2, since the same PC running RH9
accesses that host just fine.
I don't know the right way to fix this, but something is definitely broken
and needs to be fixed, one way or the other. The question is what,
exactly, needs fixing.
Consider something like this:
AC_TRY_LINK_FUNC(res_query, AC_MSG_RESULT(yes), AC_MSG_RESULT(no))
Here's what happens on x86_64:
gcc -o conftest -g -O2 -Wall -I.. -I./.. conftest.c -lresolv >&5
/tmp/ccW7EeDX.o(.text+0x7): In function `main':
/home/mrsam/src/courier/authlib/configure:5160: undefined reference to `res_query'
collect2: ld returned 1 exit status
configure:5147: $? = 1
configure: failed program was:
[ blah blah blah ]
| /* We use char because int might match the return type of a gcc2
|    builtin and then its argument prototype would still apply. */
| char res_query ();
| int
| main ()
| {
|   res_query ();
|   return 0;
| }
The exact same test on FC1 x86 works. The reason appears to be that on
x86_64 you have to #include <resolv.h> in order to successfully pull
res_query() out of libresolv.so. You don't need to do this on x86, and
the test program generated by AC_TRY_LINK_FUNC does not include any
headers; it uses a manual prototype instead.
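One possible workaround, sketched here and untested against the actual
courier configure script, is to replace AC_TRY_LINK_FUNC's headerless
prototype with an AC_TRY_LINK that pulls in the resolver headers
explicitly:

```m4
dnl Link test for res_query() that also works where res_query needs a
dnl prototype (or is a macro) from <resolv.h>, as on x86_64.
AC_TRY_LINK([
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/nameser.h>
#include <resolv.h>
],
[res_query((char *)0, 0, 0, (unsigned char *)0, 0);],
AC_MSG_RESULT(yes),
AC_MSG_RESULT(no))
```

With the headers included, the compiler sees whatever prototype or macro
libresolv actually exports, so the link against -lresolv succeeds on
both x86 and x86_64.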
So, what now?
What if Fedora Core development were divided into two divisions, a GNOME
division and a KDE division, with releases available as GNOME-only or
KDE-only releases on separate ISOs? The GNOME ISO would specialize in
GNOME software and applications, with micro-managed bug fixing for
GNOME-related software only, so the effort would be concentrated in a
much more focused approach than the traditional hybrid ISOs. The same
goes for the KDE ISO, which would include only KDE-related (or
specialized) software and applications.
We now have a full set of sparc packages that match up to Fedora Core 2,
and its name is Tangerine. Did you know that of the top ten hits for
Tangerine on Google, none of them have anything to do with the fruit?
You should eat more tangerines, they're quite tasty and good for you.
But I digress.
Like the previous release (1.91), it's not an installable tree,
so again this means no ISOs.
I'm going to repeat this one more time: THERE ARE NO ISOS FOR 1.92.
Why? Because anaconda is hard, and people didn't want to wait another 6
months for me to figure out what is broken. However, if anyone is willing
to try on their own to get this working, we're happily accepting patches.
But, unless someone else takes up the charge, this tree branch will stop
at 1.92. If someone fixes anaconda so that the installer actually works past
keyboard selection, I'll spin a new tree.
In the meantime, we're refocusing the effort on a new tree, which is
currently going to be based on Fedora Core 3. You can follow the daily
notes for this work here: http://auroralinux.org/journal.php
Now, I have yumified the tree, so if you're feeling really brave, you
can always point yum at it, and try to upgrade that way. A version of
yum for Aurora 1.0 is here:
If you're running 1.91, you should be able to use the yum in that tree.
Going from 1.91 to 1.92 is reportedly a fairly painless process.
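For the yum route, a repository stanza along these lines would go in
/etc/yum.conf (yum 2.x style; the baseurl here is a placeholder, so
substitute the actual tree location from the announcement):

```
[aurora-192]
name=Aurora SPARC Linux build-1.92
baseurl=http://example.org/pub/aurora/build-1.92/
```

After that, a plain `yum upgrade` should pull the 1.92 packages onto a
1.91 system.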
If you're a listed mirror site, please sync the build-1.92 directory,
and chime in. The primary directory is currently at:
Now, for the known bugs:
The "rpm" package in 1.92 is a little broken on sparc32 systems. It's my fault.
Fixed packages are already available from:
A temporary workaround is to run: export LD_ASSUME_KERNEL=2.2.5.
Build 1.92 uses SILO 1.4.8, which should work fine. It seems to occasionally
burp when you try tab completion, but I can't reproduce this
consistently. If it breaks for you, let me know.
There is no SMP kernel for sparc32, upstream has marked this as broken.
Any other bugs that you find? Please either email me or file them in
bugzilla.auroralinux.org, under Corona.
Last but not least, we've set up a hardware support matrix Wiki to keep track
of what works and what doesn't. You can find it here:
Thanks for your continued patience and support,
Tom "spot" Callaway <tcallawa(a)redhat.com> LCA, RHCE
Red Hat Sales Engineer || Aurora SPARC Linux Project Leader
"If you are going through hell, keep going."
- Sir Winston Churchill
For me, and a few of my friends, FC3 has two different kinds of
problems regarding CDs:
1/ It no longer burns CDs correctly, i.e. they are unusable.
2/ You can't use dd or md5sum to accurately read or check CDs.
I used the following commands to erase/record (as Xcdroast invoked them)...
cdrecord dev=/dev/cdrom gracetime=2 -v -eject speed=4 blank=fast
cdrecord dev=/dev/cdrom gracetime=2 fs=4096k driveropts=burnfree -v \
  -useinfo speed=4 -dao -eject -pad -data
Here is a summary of my investigations:
- I boot with the older kernel (2.6.9-1.677), I can burn a CD
  and read it with dd, and I get the correct MD5.
- Burning with the later kernel, the CDs are bad.
- 2.6.9-1.681_FC3 causes problems.
                     read with (exact sector count)
burned with          2.6.9-1.677    2.6.9-1.681_FC3
2.6.9-1.677              OK               OK
2.6.9-1.681_FC3          bad              bad

Using a 2.6.9-1.677 burn:
                     read with (no sector count)
                     2.6.9-1.677    2.6.9-1.681_FC3
sectors read:          326441          326441
(It should have read 326426 2-Kbyte sectors.)
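One way to take the drive's end-of-disc padding out of the comparison is
to pin dd to the exact sector count before checksumming; on the hardware
above that would be `dd if=/dev/cdrom bs=2048 count=326426 | md5sum`. A
minimal sketch, using scratch files in place of the real device:

```shell
# Checksumming an exact sector count ignores trailing padding, so a
# padded copy of the same image still produces a matching MD5.
dd if=/dev/urandom of=image.iso bs=2048 count=16 2>/dev/null
cp image.iso padded.iso
dd if=/dev/zero bs=2048 count=4 >> padded.iso 2>/dev/null   # fake drive padding
sum1=$(dd if=image.iso  bs=2048 count=16 2>/dev/null | md5sum | cut -d' ' -f1)
sum2=$(dd if=padded.iso bs=2048 count=16 2>/dev/null | md5sum | cut -d' ' -f1)
[ "$sum1" = "$sum2" ] && echo match || echo differ
```

If the checksums differ even with the exact count, the burn itself is
bad rather than the readback.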
Does anyone care to comment?
fedora-list mailing list
To unsubscribe: http://www.redhat.com/mailman/listinfo/fedora-list
X.Org X11 6.8.2rc1 is now in Fedora Development
(rawhide) for testing. This new X11 test release is from the stable
bugfix-only 6.8-branch of X.Org X11 CVS. The new release fixes a large
number of bugs in 6.8.1 which have been reported since it was released,
including fixes to the X server, radeon and other drivers, and addition
of PCI IDs of some newer hardware.
This is strictly a bug fix only release, and contains no new major
features. If you experienced a bug in X.Org 6.8.1 or earlier which is
still present in the currently shipping Fedora Core xorg-x11 update, you
may want to install this release candidate to see if your bug has been fixed.
For a complete list of the specific issues that have been fixed in the
new release, I refer you to the X.Org CVS changelog (xc/ChangeLog) in
the X.Org source code, and to the Red Hat rpm spec file changelog. Those
are the only places the changes are documented, so read both if you want
to know the details.
After upgrading to test the new release, if you experience any new bugs
or regressions, please be sure to file them in X.Org's official
bugzilla, which is located at:
http://bugs.freedesktop.org in the "xorg" component.
Be sure to indicate in your bug report the version of Fedora Core you
are using, and the specific xorg-x11 rpm version-release from rawhide,
which can be obtained by running "rpm -q xorg-x11". Also, make sure you
attach your X server log and config file to the bug report as single
uncompressed file attachments using the bugzilla file attachment
interface. If your problem is lockup/crash related, or DRI/AGP related,
also attach your complete /var/log/messages from the time of boot
onward, so there is sufficient information present in the bug report for
an initial analysis.
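A quick way to collect that information before filing, sketched here
(package name and log paths as described above; the fallback message is
just illustrative):

```shell
# Version-release of the xorg-x11 package; fall back gracefully when
# rpm is unavailable or the package is absent.
pkg=$(rpm -q xorg-x11 2>/dev/null || echo "xorg-x11 not installed")
echo "$pkg"
# Files the report asks for, attached uncompressed, one per attachment:
for f in /etc/X11/xorg.conf /var/log/Xorg.0.log /var/log/messages; do
  if [ -r "$f" ]; then echo "attach: $f"; else echo "missing: $f"; fi
done
```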
If you experience any problems that are related specifically to the rpm
packaging of xorg-x11, and not the software itself, please file them in
Red Hat bugzilla instead, in the "xorg-x11" component.
1) Someone has reported a bug already, in which after upgrading to the
new X.Org, their config file was renamed from xorg.conf to
xorg.conf.backup. If this happens for you, just rename it back to
xorg.conf and it should work as before. Don't report this bug in
bugzilla again, as we already know about it and will be investigating it.
2) Compatibility with 3rd party and proprietary video drivers: This
release of X.Org is believed to be 100% ABI compatible with 6.8.1. As
such, proprietary and other 3rd party drivers should function with this
test release if they worked with 6.8.1. One important thing to note
however, is that various 3rd party driver installation mechanisms such
as Nvidia's proprietary driver installer, improperly overwrite X.Org
supplied files which are managed by RPM in Fedora Core.
When you upgrade X.Org via rpm, some of the 3rd party driver files (most
notably libGL and libglx.a for Nvidia) will probably be overwritten by
the new X.Org supplied files in rpm format. The simplest solution to
this problem is to obtain your Nvidia or ATI proprietary drivers in RPM
format. Alternatively, you can redownload the driver from wherever you
got it the first time and reinstall it.
If you experience any problems with unsupported 3rd party drivers,
report them directly to the vendor who supplied the drivers, not to Red
Hat. In the extremely unlikely event the X server module ABI has
changed unexpectedly, if you experience proprietary driver problems with
the new X server which you believe may be due to accidental regression
in the X server, report it to X.Org in the http://bugs.freedesktop.org
bugzilla, so the issue can be investigated before the 6.8.2 release
3) If you experience ANY problems that you are planning to file in any
bugzilla, make sure you first query that bugzilla's open and closed bugs
to see if the issue has already been reported. Quite often people skip
this critical step, and an issue ends up being reported a large number
of times, which wastes developers' time closing duplicates, time they
could be spending fixing bugs.
On behalf of Red Hat, as well as X.Org, I would like to thank all of the
brave beta testers out there in advance for testing this new X.Org
stable release candidate. Enjoy.
I have continued to be plagued by a kernel hang or lockup since FC2 on AMD
dual-CPU systems with an ATI Radeon 7500. I am currently running FC3 with the
latest released kernel (681).
Is there any hope for the problem by using one of the test kernels?
On Fri, 31 Dec 2004 at 12:00pm, Tom Browder <tbrowder(a)cox.net> wrote
> I have continued to be plagued by a kernel hang or lockup since FC2 on AMD
> dual-CPU systems with an ATI Radeon 7500. I am currently running FC3 with the
> latest released kernel (681).
> Is there any hope for the problem by using one of the test kernels?
What motherboard are you using? There are lots of us throwing out our
Tyan S2466-based systems as fast as we can. After a while they start
exhibiting the behavior you're seeing (hard locks, with no oops available) at
random times that are very hard to diagnose or resolve.
See the beowulf list archives for a fair bit of discussion of this.
Department of Biomedical Engineering
The problem with bad sectors is not with sectors that fail the write
check, but with those that pass the write check yet are bad. The write
check on most hard drives is very simple and consists only of parity.
If a certain number of bits are dropped, there will be no write-check
error and the install will work perfectly. You will never know about this
until you try to run the program, because there will be no read error.
In addition, there is the case where buffers are being allocated; no
check of media integrity is made on these (the swap partition). This has
always been a problem with RH. Most shops have discs with small bad
spots, and many discs develop these over time. The -c option was put into
mkfs just to take care of these problems. At present we must take ANY
disc that displays even a small bad spot and run mkfs on it from another
system before we can install Linux. I suspect that a surprising number of
the strange troubles being reported by only one user are due to this
problem. When you invoke Disk Druid you need to be able to request a long
format, or else mkfs needs to be on the recovery disc.
Currently discs are tested at the factory and bad sectors are locked
out. On some large discs there are quite a few bad spots, and in normal
operation perfectly good discs will develop bad spots over time.
This is not a serious problem with modern discs if it is handled by the
software. The problem has always been that RH doesn't handle it.
Incidentally, Microsoft does.
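The check that the -c option performs can be sketched like this,
assuming e2fsprogs is installed. On a real system badblocks and mke2fs
would target the partition itself (e.g. a /dev/hda partition), but a
scratch file stands in here so nothing is harmed:

```shell
# Create a 4 MB scratch "disc" and scan it read-only for bad blocks.
dd if=/dev/zero of=disk.img bs=1024 count=4096 2>/dev/null
badblocks disk.img && scan=clean
# mke2fs -c runs the same bad-block check before laying down the
# filesystem; -F forces operation on a regular file, -q quiets output.
mke2fs -q -F -c disk.img && fmt=ok
echo "scan: $scan  format: $fmt"
```

Running the long format this way marks any bad blocks found so the
filesystem never allocates them.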