How much interest would there be in getting a bunch of cross-compilers into Extras?
Stuff like crosstool makes it relatively simple, but it's still slow -- I'd really like to be able to easily and quickly install cross-compiler packages for random architectures like ARM, MIPS, i386, etc.
I'd like to ship a multi-arch capable binutils like Debian's 'binutils-multi' and a set of cross-compilers -- preferably the same versions of each as the one in Core.
It'd be particularly nice if we could install native -devel packages into each toolchain's sysroot -- we could avoid having to rebuild glibc etc. for architectures which are in rawhide, for example. But that isn't imperative.
Does anyone else care? Other than the full set of rawhide architectures, what others would we include? Alpha, SPARC{64,}, ARM, MIPS, SH I assume? Would anyone volunteer to maintain each of those toolchains? I wouldn't really feel happy doing it myself, since when it comes to GCC I would only ever be a package-monkey, and not a proper _maintainer_.
I think some cross compilers would be a good idea. I already use arm regularly and will shortly have a need for nesC and some AVR cross tools. Whether they are candidates for Extras, I don't know.
I do know there was some discussion, which remains unresolved, as to the naming convention to use for cross-toolchain packages.
Michael
David Woodhouse wrote:
How much interest would there be in getting a bunch of cross-compilers into Extras?
Stuff like crosstool makes it relatively simple, but it's still slow -- I'd really like to be able to easily and quickly install cross-compiler packages for random architectures like ARM, MIPS, i386, etc.
I'd like to ship a multi-arch capable binutils like Debian's 'binutils-multi' and a set of cross-compilers -- preferably the same versions of each as the one in Core.
It'd be particularly nice if we could install native -devel packages into each toolchain's sysroot -- we could avoid having to rebuild glibc etc. for architectures which are in rawhide, for example. But that isn't imperative.
Does anyone else care? Other than the full set of rawhide architectures, what others would we include? Alpha, SPARC{64,}, ARM, MIPS, SH I assume? Would anyone volunteer to maintain each of those toolchains? I wouldn't really feel happy doing it myself, since when it comes to GCC I would only ever be a package-monkey, and not a proper _maintainer_.
On Mon, 2006-07-24 at 00:57 +1200, Michael J. Knox wrote:
I think some cross compilers would be a good idea. I already use arm regularly and will shortly have a need for nesC and some AVR cross tools. Whether they are candidates for Extras, I don't know.
I don't really see why not.
I do know there was some discussion, which remains unresolved, as to the naming convention to use for cross-toolchain packages.
Does it matter? Far too much masturbation goes on around here.
David Woodhouse wrote:
Stuff like crosstool makes it relatively simple, but it's still slow --
I take that back, btw -- I've been fighting crosstool for a while this morning and I still haven't managed to build a simple ppc->i686 crosscompiler. And I don't even want glibc -- I _only_ want to build kernels. Not that that seems to be an option.
The whole incestuous gcc/libgcc/libc thing needs to be fixed up and made relatively sane -- but that's outside the scope of this discussion, I suppose.
"DW" == David Woodhouse dwmw2@infradead.org writes:
DW> On Mon, 2006-07-24 at 00:57 +1200, Michael J. Knox wrote:
I do know there was some discussion, which remains unresolved, as to the naming convention to use for cross-toolchain packages.
DW> Does it matter? Far too much masturbation goes on around here.
Yes, it matters. It would be bad for end users and for consistency in the distribution if different packagers chose different naming conventions. If that's masturbation, then so be it. At this point I don't personally care what gets chosen, but a choice needs to be made. The same goes for the naming of executables and the location of libraries and such.
And if this is all completely obvious to everyone who is packaging up cross-compilation tools and you're all already using the same conventions then it shouldn't take more than ten minutes to write that down somewhere.
- J<
Michael J. Knox schrieb:
David Woodhouse wrote:
How much interest would there be in getting a bunch of cross-compilers into Extras?
I think some cross compilers would be a good idea.
Agreed.
FYI: The packaging committee currently has this on its schedule (http://www.fedoraproject.org/wiki/Packaging/GuidelinesTodo):
cross compilation | JasonTibbitts | not urgent | Naming and proper file locations for cross compilers need to be worked out
Help is probably appreciated and would likely accelerate the process of getting rules in place.
CU thl
On Mon, 2006-07-24 at 00:57 +1200, Michael J. Knox wrote:
I think some cross compilers would be a good idea. I already use arm regularly and will shortly have a need for nesC and some AVR cross tools.
The avr is yet another special case: the avr guys are using a custom libc, which is neither glibc nor newlib.
Ralf
On Sun, Jul 23, 2006 at 08:15:18AM -0400, David Woodhouse wrote:
How much interest would there be in getting a bunch of cross-compilers into Extras?
I would like to see MinGW + SDL.
On Sun, 2006-07-23 at 09:29 -0400, Matthew Miller wrote:
On Sun, Jul 23, 2006 at 08:15:18AM -0400, David Woodhouse wrote:
How much interest would there be in getting a bunch of cross-compilers into Extras?
I would like to see MinGW + SDL.
I've been messing with the MinGW packages that were recently mentioned here. I've got C and C++ working, and SDL.
Would be nice to have in extras, but I don't want to submit it myself as I don't feel particularly knowledgeable enough to be maintaining gcc...
On Sun, 2006-07-23 at 08:15 -0400, David Woodhouse wrote:
How much interest would there be in getting a bunch of cross-compilers into Extras?
Stuff like crosstool makes it relatively simple, but it's still slow --
Crosstool doesn't support newlib-based targets.
I'd really like to be able to easily and quickly install cross-compiler packages for random architectures like ARM, MIPS, i386, etc.
These are still linux/glibc based variants.
I'd like to ship a multi-arch capable binutils like Debian's 'binutils-multi' and a set of cross-compilers -- preferably the same versions of each as the one in Core.
I am not a friend of these multi-targeted binutils. For a user, they are a PITA, because each and every tiny arch-specific bug-fix touches all arches and because RH's sources are not usable for other OSes.
It'd be particularly nice if we could install native -devel packages into each toolchain's sysroot -- we could avoid having to rebuild glibc etc. for architectures which are in rawhide, for example. But that isn't imperative.
glibc .. you are talking about linux.
Does anyone else care? Other than the full set of rawhide architectures, what others would we include? Alpha, SPARC{64,}, ARM, MIPS, SH I assume? Would anyone volunteer to maintain each of those toolchains? I wouldn't really feel happy doing it myself, since when it comes to GCC I would only ever be a package-monkey, and not a proper _maintainer_.
I have ca. 15 cross-compiler toolchains at hand: ca. 9 RTEMS toolchains, mingw, cygwin, different FreeBSDs and Solaris (non-distributable). I.e. probably exactly those cases you don't have.
Ralf
On Sun, Jul 23, 2006 at 08:15:18 -0400, David Woodhouse dwmw2@infradead.org wrote:
How much interest would there be in getting a bunch of cross-compilers into Extras?
I would like to be able to build Wesnoth for Windows using a cross compiler. So I would give such a package a try.
--On Sunday, July 23, 2006 8:15 AM -0400 David Woodhouse dwmw2@infradead.org wrote:
How much interest would there be in getting a bunch of cross-compilers into Extras?
Not so much a "bunch" as a few good source examples illustrating how to package the chain for an arbitrary arch. Good parameterization is key. That way I can shop for my embedded architecture and then build an RPM from the example, changing just a macro at the top of the spec file, or overriding it on the rpmbuild command line, to create the tool chain for what I've chosen.
For example, one arch I'd consider building myself but wouldn't expect to see in Extras:
On Sun, Jul 23, 2006 at 08:15:18AM -0400, David Woodhouse wrote:
Does anyone else care? Other than the full set of rawhide architectures, what others would we include? Alpha, SPARC{64,}, ARM, MIPS, SH I assume? Would anyone volunteer to maintain each of those toolchains? I wouldn't really feel happy doing it myself, since when it comes to GCC I would only ever be a package-monkey, and not a proper _maintainer_.
I think it would be great to have this, for a wide range of arches.
As for maintenance, I'm in the same situation as you. But if you can get things rolling I'd be happy to help maintain MIPS and/or maybe some others.
John
On Mon, 2006-07-24 at 15:17 -0400, John W. Linville wrote:
On Sun, Jul 23, 2006 at 08:15:18AM -0400, David Woodhouse wrote:
Does anyone else care? Other than the full set of rawhide architectures, what others would we include? Alpha, SPARC{64,}, ARM, MIPS, SH I assume? Would anyone volunteer to maintain each of those toolchains? I wouldn't really feel happy doing it myself, since when it comes to GCC I would only ever be a package-monkey, and not a proper _maintainer_.
I think it would be great to have this, for a wide range of arches.
/me thinks there is a common misunderstanding.
A cross-toolchain doesn't target an "arch" - it targets a "target-system".
Such a "target-system" typically consists of an architecture, a libc and parts of the OS/kernel (sometimes plus further target run-time libraries).
E.g. an i386-linux -> mips-linux cross toolchain is a completely different toolchain than a i386-pc-linux -> mips-<some-embedded-target> toolchain.
I.e. building a cross-toolchain basically condenses to building and packaging the work the target-system maintainers do, not to developing on the target system (or target arch).
As for maintenance, I'm in the same situation as you. But if you can get things rolling I'd be happy to help maintain MIPS and/or maybe some others.
As I tried to express above, arch-specific development is an almost negligible part in building cross-toolchains. The focus is on system-integration.
Ralf
On Tue, 2006-07-25 at 07:28 +0200, Ralf Corsepius wrote:
E.g. an i386-linux -> mips-linux cross toolchain is a completely different toolchain than a i386-pc-linux -> mips-<some-embedded-target> toolchain.
Personally I meant $ARCH-linux-glibc toolchains, but I don't care too much. Multilib is your friend, and it's irrelevant for kernel builds anyway.
On Tue, Jul 25, 2006 at 07:28:30AM +0200, Ralf Corsepius wrote:
On Mon, 2006-07-24 at 15:17 -0400, John W. Linville wrote:
On Sun, Jul 23, 2006 at 08:15:18AM -0400, David Woodhouse wrote:
Does anyone else care? Other than the full set of rawhide architectures, what others would we include? Alpha, SPARC{64,}, ARM, MIPS, SH I assume? Would anyone volunteer to maintain each of those toolchains? I wouldn't really feel happy doing it myself, since when it comes to GCC I would only ever be a package-monkey, and not a proper _maintainer_.
I think it would be great to have this, for a wide range of arches.
/me thinks there is a common misunderstanding.
/me thinks what we seem to lack is a common context...
A cross-toolchain doesn't target an "arch" - it targets a "target-system".
Such a "target-system" typically consists of an architecture, a libc and parts of the OS/kernel (sometimes plus further target run-time libraries).
Thank you so much for your pedantic nit-picking.
I was, of course, presuming that the audience of this list would be interested in targeting linux. Please do forgive me for being so pertinent. I even presumed that stating "MIPS" might cover both "mips" (or "mipseb") and "mipsel" -- how sloppy of me. All the mipsel-rtems developers in the audience must be appalled. I won't even mention glibc, for fear of stirring-up trouble w/ the uclinux crowd...
But, at least I provided you an opportunity to show how much smarter you are than the rest of us -- you're welcome.
John
On Tue, 2006-07-25 at 09:27 -0400, John W. Linville wrote:
I was, of course, presuming that the audience of this list would be interested in targeting linux. Please do forgive me for being so pertinent. I even presumed that stating "MIPS" might cover both "mips" (or "mipseb") and "mipsel" -- how sloppy of me.
Damn right it does. That's what the -EB and -EL options are for. S'not sloppy at all.
As I said, multilib is your friend.
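For reference, this is what those bi-endian switches look like in practice with a hypothetical mips-linux cross-gcc (the triplet and file names here are illustrative, not an actual Extras package):

```shell
# One bi-endian toolchain covers both byte orders:
mips-linux-gcc -EB -c foo.c -o foo.eb.o   # big-endian ("mipseb") object
mips-linux-gcc -EL -c foo.c -o foo.el.o   # little-endian ("mipsel") object
```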
On Tue, 2006-07-25 at 09:27 -0400, John W. Linville wrote:
On Tue, Jul 25, 2006 at 07:28:30AM +0200, Ralf Corsepius wrote:
On Mon, 2006-07-24 at 15:17 -0400, John W. Linville wrote:
On Sun, Jul 23, 2006 at 08:15:18AM -0400, David Woodhouse wrote:
Does anyone else care? Other than the full set of rawhide architectures, what others would we include? Alpha, SPARC{64,}, ARM, MIPS, SH I assume? Would anyone volunteer to maintain each of those toolchains? I wouldn't really feel happy doing it myself, since when it comes to GCC I would only ever be a package-monkey, and not a proper _maintainer_.
I think it would be great to have this, for a wide range of arches.
/me thinks there is a common misunderstanding.
/me thinks what we seem to lack is a common context...
A cross-toolchain doesn't target an "arch" - it targets a "target-system".
Such a "target-system" typically consists of an architecture, a libc and parts of the OS/kernel (sometimes plus further target run-time libraries).
Thank you so much for your pedantic nit-picking.
I was, of course, presuming that the audience of this list would be interested in targeting linux.
Well, people had been referring to uclinux, avr/avr-libc, mingw32/msys, cygwin/newlib, rtems/newlib, bare metal and ... linux/glibc targets.
.. so I am probably not alone with my perception.
But, at least I provided you an opportunity to show how much smarter you are than the rest of us -- you're welcome.
It's just that cross-compilers are a subject I have worked on for almost a decade, and I feel embarrassed when people start talking about "mips" compilers when they actually mean "mips-linux", "mipsel-linux" or "mipseb-linux" targets.
Ralf
On Tue, Jul 25, 2006 at 03:51:52PM +0200, Ralf Corsepius wrote:
On Tue, 2006-07-25 at 09:27 -0400, John W. Linville wrote:
I was, of course, presuming that the audience of this list would be interested in targeting linux.
Well, people had been referring to uclinux, avr/avr-libc, mingw32/msys, cygwin/newlib, rtems/newlib, bare metal and ... linux/glibc targets.
.. so I am probably not alone with my perception.
Well, perhaps you are right. I apologize if I reacted too harshly.
For the record...unless explicitly stated otherwise, by referring only to a processor arch in this discussion I am implying linux and glibc for the target.
Allowing for other targets seems worthwhile as well, FWIW...
Thanks,
John
On Tue, Jul 25, 2006 at 09:27:56AM -0400, John W. Linville wrote:
I was, of course, presuming that the audience of this list would be interested in targeting linux. Please do forgive me for being so
Well, I am, but I'm also interested in Linux as a platform for cross-OS development.
On Mon, 2006-07-24 at 20:49 -0400, Aron Griffis wrote:
David Woodhouse wrote: [Sun Jul 23 2006, 08:15:18AM EDT]
Does anyone else care? Other than the full set of rawhide architectures, what others would we include? Alpha, SPARC{64,}, ARM, MIPS, SH I assume?
ia64, please. :-)
8051 please :-) OK, this is not GCC-based, but SDCC could be a nice addition to Extras; they already have an RPM and it works fine here.
- Erwin
David Woodhouse wrote:
How much interest would there be in getting a bunch of cross-compilers into Extras?
Stuff like crosstool makes it relatively simple, but it's still slow -- I'd really like to be able to easily and quickly install cross-compiler packages for random architectures like ARM, MIPS, i386, etc.
I'd like to ship a multi-arch capable binutils like Debian's 'binutils-multi' and a set of cross-compilers -- preferably the same versions of each as the one in Core.
It'd be particularly nice if we could install native -devel packages into each toolchain's sysroot -- we could avoid having to rebuild glibc etc. for architectures which are in rawhide, for example. But that isn't imperative.
Does anyone else care? Other than the full set of rawhide architectures, what others would we include? Alpha, SPARC{64,}, ARM, MIPS, SH I assume? Would anyone volunteer to maintain each of those toolchains? I wouldn't really feel happy doing it myself, since when it comes to GCC I would only ever be a package-monkey, and not a proper _maintainer_.
- From the responses you got, I'd say there's a fair amount of interest.
What do you think about starting small (e.g. generating a mesh of FC x FC compilers)? Starting with an FC target would mean that we could use packages we know already work in the Fedora framework. It would just be a matter of making a specfile (or series of specfiles) that are cross-friendly to build and package gcc, binutils, glibc and gdb. I've done that a few times and while it's not exactly pretty, it's doable.

We could generate x86, x86_64, and PPC hosted toolchains for x86, x86_64 and PPC and then be able to build, say, PPC packages from an x86_64 host (the immediate beneficiary would probably be the build system). Of course after getting the toolchains packaged, it's a matter of asking the maintainers to keep their specfiles cross-friendly, but if they'll take patches, we can clean that up.
After we get that done, we could add additional Linux architectures, such as ARM, MIPS, SH, etc. Or sucker^Wentice someone into adding uClibc or newlib/dietlibc targets.
Clark
On Wed, 2006-08-02 at 15:21 -0500, Clark Williams wrote:
- From the responses you got, I'd say there's a fair amount of interest.
What do you think about starting small (e.g. generating a mesh of FC x FC compilers)? Starting with an FC target would mean that we could use packages we know already work in the Fedora framework. It would just be a matter of making a specfile (or series of specfiles) that are cross-friendly to build and package gcc, binutils, glibc and gdb. I've done that a few times and while it's not exactly pretty, it's doable. We could generate x86, x86_64, and PPC hosted toolchains for x86, x86_64 and PPC and then be able to build, say, PPC packages from an x86_64 host (the immediate beneficiary would probably be the build system). Of course after getting the toolchains packaged, it's a matter of asking the maintainers to keep their specfiles cross-friendly, but if they'll take patches, we can clean that up.
It is way more than just keeping their specfiles cross-friendly. Most larger projects, like Xorg, are a bitch to cross-compile, and almost all need a lot of tuning before even './configure' works. The ones without configure will probably be even more work, to get obscure Makefiles to do cross-compiling.
The cross compiler part is less than 0.1% of the problem.
- Erwin
If you are really serious about using cross compilers, take a look at OpenEmbedded (http://www.openembedded.org). OE addresses the toolchain and the other 99.9% of the problem.
Philip
Erwin Rol wrote:
On Wed, 2006-08-02 at 15:21 -0500, Clark Williams wrote:
- From the responses you got, I'd say there's a fair amount of interest.
What do you think about starting small (e.g. generating a mesh of FC x FC compilers)? Starting with an FC target would mean that we could use packages we know already work in the Fedora framework. It would just be a matter of making a specfile (or series of specfiles) that are cross-friendly to build and package gcc, binutils, glibc and gdb. I've done that a few times and while it's not exactly pretty, it's doable. We could generate x86, x86_64, and PPC hosted toolchains for x86, x86_64 and PPC and then be able to build, say, PPC packages from an x86_64 host (the immediate beneficiary would probably be the build system). Of course after getting the toolchains packaged, it's a matter of asking the maintainers to keep their specfiles cross-friendly, but if they'll take patches, we can clean that up.
It is way more than just keeping their specfiles cross-friendly. Most larger projects, like Xorg, are a bitch to cross-compile, and almost all need a lot of tuning before even './configure' works. The ones without configure will probably be even more work, to get obscure Makefiles to do cross-compiling.
The cross compiler part is less than 0.1% of the problem.
- Erwin
On Wed, 2006-08-02 at 15:21 -0500, Clark Williams wrote:
We could generate x86, x86_64, and PPC hosted toolchains for x86, x86_64 and PPC and then be able to build say PPC packages from an x86_64 (the immediate benefactor would probably be the build system).
Cross-compilation of packages is never going to work reliably. Too many people make the mistake of using autocrap, and don't handle cross-compilation at all well.
On Thu, 2006-08-03 at 09:30 +0800, David Woodhouse wrote:
On Wed, 2006-08-02 at 15:21 -0500, Clark Williams wrote:
We could generate x86, x86_64, and PPC hosted toolchains for x86, x86_64 and PPC and then be able to build say PPC packages from an x86_64 (the immediate benefactor would probably be the build system).
Cross-compilation of packages is never going to work reliably.
True.
Too many people make the mistake of using autocrap, and don't handle cross-compilation at all well.
You are dead wrong. The auto*tools handle cross compilation very well. Many auto*-based configurations work with cross-compilers out of the box.
Ralf
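To illustrate the point: a configure script generated by autoconf supports cross builds through the standard --build/--host options, which make it look for target-prefixed tools instead of probing the host compiler (the triplets below are illustrative):

```shell
# Cross-configure an autotools package for an arm-linux target.
# With --host set, configure searches $PATH for arm-linux-gcc,
# arm-linux-ar, arm-linux-ranlib etc., and avoids run-time tests.
./configure --build=i686-pc-linux-gnu --host=arm-linux
make
```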
Hi
On Thu, 2006-08-03 at 09:30 +0800, David Woodhouse wrote:
On Wed, 2006-08-02 at 15:21 -0500, Clark Williams wrote:
We could generate x86, x86_64, and PPC hosted toolchains for x86, x86_64 and PPC and then be able to build say PPC packages from an x86_64 (the immediate benefactor would probably be the build system).
Cross-compilation of packages is never going to work reliably.
True.
Too many people make the mistake of using autocrap, and don't handle cross-compilation at all well.
You are dead wrong. The auto*tools handle cross compilation very well. Many auto*-based configurations work with cross-compilers out of the box.
Also, ready availability of cross toolchains would greatly increase the ease of testing packages for other architectures. So over time packages might improve towards being buildable for different architectures.
-Cam
On 8/3/06, Ralf Corsepius rc040203@freenet.de wrote:
Too many people make the mistake of using autocrap, and don't handle cross-compilation at all well.
You are dead wrong. auto*tools do handle cross compilation very well. Many auto* based configuration work with cross-compilers OTB.
If done properly, I am sure it does. SQLite does not fall into the category though...
On Sun, Jul 23, 2006 at 08:15:18AM -0400, David Woodhouse wrote:
How much interest would there be in getting a bunch of cross-compilers into Extras?
A very good idea.
It'd be particularly nice if we could install native -devel packages into each toolchain's sysroot
I don't think installing packages of a foreign arch is easy w/o an emulator. You have scriptlets (%post and friends) that need to run on the target arch's platform. E.g. even installing glibc calls /usr/sbin/glibc_post_upgrade.* which would have to be emulated.
-devel usually has no scriptlets, but the pulled-in main lib package will at the very least want to call ldconfig.
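One partial workaround, sketched here with illustrative paths and package names, is to unpack foreign-arch packages into the sysroot with scriptlets disabled; whatever %post would have done (ldconfig, glibc_post_upgrade) is then left undone or handled by hand:

```shell
# Install a foreign-arch package into a toolchain sysroot without
# running its scriptlets; the sysroot path and .rpm name are made up.
rpm --root /usr/arm-linux/sys-root --ignorearch --nodeps --noscripts \
    -ivh glibc-devel-2.4-11.arm.rpm
```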
Axel Thimm wrote:
On Sun, Jul 23, 2006 at 08:15:18AM -0400, David Woodhouse wrote:
How much interest would there be in getting a bunch of cross-compilers into Extras?
A very good idea.
It'd be particularly nice if we could install native -devel packages into each toolchain's sysroot
I don't think installing packages of a foreign arch is easy w/o an emulator. You have scriptlets (%post and friends) that need to run on the target arch's platform. E.g. even installing glibc calls /usr/sbin/glibc_post_upgrade.* which would have to be emulated.
-devel usually has no scriptlets, but the pulled-in main lib package will at the very least want to call ldconfig.
Well, it really depends on what you're doing in the scriptlet. If it's running chkconfig, adduser, etc. (i.e. updating configuration information) then something we've done in the past would work (it's ugly, but it works). Essentially you work in a chroot and at the point where you want to run the scriptlets, you bind mount the host /bin, /sbin, /lib, /usr/lib, /usr/bin, and /usr/sbin into the chroot (from outside the chroot of course), then run the scriptlets inside the chroot natively, then undo it (Hey! I *said* it was ugly). I've used this method to build mipsel and sh root filesystems.
Clark
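Clark's bind-mount trick might look roughly like this, run as root from outside the chroot (directory names and the scriptlet command are placeholders):

```shell
CHROOT=/var/tmp/mipsel-root

# Temporarily lend the host's native tools to the target chroot.
for d in /bin /sbin /lib /usr/bin /usr/sbin /usr/lib; do
    mount --bind "$d" "$CHROOT$d"
done

# Run the package scriptlets "natively" inside the chroot
# (placeholder command standing in for the real %post bodies).
chroot "$CHROOT" /bin/sh -c 'run-the-scriptlets-here'

# Undo the bind mounts again.
for d in /bin /sbin /lib /usr/bin /usr/sbin /usr/lib; do
    umount "$CHROOT$d"
done
```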
On Wed, Aug 02, 2006 at 05:34:24PM -0500, Clark Williams wrote:
Axel Thimm wrote:
On Sun, Jul 23, 2006 at 08:15:18AM -0400, David Woodhouse wrote:
How much interest would there be in getting a bunch of cross-compilers into Extras?
A very good idea.
It'd be particularly nice if we could install native -devel packages into each toolchain's sysroot
I don't think installing packages of a foreign arch is easy w/o an emulator. You have scriptlets (%post and friends) that need to run on the target arch's platform. E.g. even installing glibc calls /usr/sbin/glibc_post_upgrade.* which would have to be emulated.
-devel usually has no scriptlets, but the pulled-in main lib package will at the very least want to call ldconfig.
Well, it really depends on what you're doing in the scriptlet. If it's running chkconfig, adduser, etc. (i.e. updating configuration information) then something we've done in the past would work (it's ugly, but it works). Essentially you work in a chroot and at the point where you want to run the scriptlets, you bind mount the host /bin, /sbin, /lib, /usr/lib, /usr/bin, and /usr/sbin into the chroot (from outside the chroot of course), then run the scriptlets inside the chroot natively, then undo it (Hey! I *said* it was ugly). I've used this method to build mipsel and sh root filesystems.
Yes, I've done similar things to create an embedded ppc chroot, but the pain is big and each package to be imported needs to be examined and handled specially. Mostly because some packages install stuff of the target arch in /bin and friends and you need to replace them with cross-built tools during the package installation, e.g. intercept the package installation between cpio unpacking and scriptlets.
You can do that for a couple of packages (like for an embedded system), but it doesn't really scale well. :(
On Sun, 2006-07-23 at 08:15 -0400, David Woodhouse wrote:
How much interest would there be in getting a bunch of cross-compilers into Extras?
Starting with binutils.... at http://david.woodhou.se/binutils.spec there's a specfile based on the current Core package which lets you build cross-binutils with for example --define "binutils_target i686-fedora-linux"
That approach lets us track the Core package directly, and I think is sanest. What I'm not sure of, however, is how we actually deal with that when building for Extras. Is there a simple way we can build it multiple times with multiple definitions of %binutils_target, or would we have to import it all into multiple directories in CVS with the requisite one-line change and then build each one normally?
Another possibility is that we could make a single SRPM spit out _all_ the $ARCH-fedora-linux-binutils binary packages, building them all in a loop. But that might involve diverging even more from the Core specfile, which wouldn't be ideal.
On the other hand, if we have to postprocess the Core specfile when we export it from Core to Extras anyway, perhaps we could have a scripted way of converting it to build multiple packages too?
Suggestions on a postcard to...
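With that specfile, producing a cross-binutils per target would presumably just be repeated rebuilds with different defines (the triplets below are examples):

```shell
# One rebuild per target from the single shared specfile.
for t in arm-fedora-linux mips-fedora-linux ppc-fedora-linux; do
    rpmbuild -ba binutils.spec --define "binutils_target $t"
done
```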
On Sun, Sep 17, 2006 at 07:10:20PM +0100, David Woodhouse wrote:
On Sun, 2006-07-23 at 08:15 -0400, David Woodhouse wrote:
How much interest would there be in getting a bunch of cross-compilers into Extras?
Starting with binutils.... at http://david.woodhou.se/binutils.spec there's a specfile based on the current Core package which lets you build cross-binutils with for example --define "binutils_target i686-fedora-linux"
That approach lets us track the Core package directly, and I think is sanest. What I'm not sure of, however, is how we actually deal with that when building for Extras. Is there a simple way we can build it multiple times with multiple definitions of %binutils_target, or would we have to import it all into multiple directories in CVS with the requisite one-line change and then build each one normally?
Another possibility is that we could make a single SRPM spit out _all_ the $ARCH-fedora-linux-binutils binary packages, building them all in a loop. But that might involve diverging even more from the Core specfile, which wouldn't be ideal.
On the other hand, if we have to postprocess the Core specfile when we export it from Core to Extras anyway, perhaps we could have a scripted way of converting it to build multiple packages too?
Suggestions on a postcard to...
I like the one SRPM -> all binary packages idea. Any reason that *couldn't* be in Core?
David Woodhouse wrote:
Starting with binutils.... at http://david.woodhou.se/binutils.spec there's a specfile based on the current Core package which lets you build cross-binutils with for example --define "binutils_target i686-fedora-linux"
Any particular reason for the binutils in binutils_target? Something like target_triplet could be shared across multiple packages (gcc, gdb). Also, putting some form of Fedora in the name is great, but it should be in some way versioned like i686-fedora5-linux or even i686-fc5-linux. This is a little more specific and will scale better if cross compiler interest blossoms.
That approach lets us track the Core package directly, and I think is sanest. What I'm not sure of, however, is how we actually deal with that when building for Extras. Is there a simple way we can build it multiple times with multiple definitions of %binutils_target, or would we have to import it all into multiple directories in CVS with the requisite one-line change and then build each one normally?
Another possibility is that we could make a single SRPM spit out _all_ the $ARCH-fedora-linux-binutils binary packages, building them all in a loop. But that might involve diverging even more from the Core specfile, which wouldn't be ideal.
This seems like a pretty small divergence. Instead of target_triplet use target_triplets, use a for loop and you're scarcely any further from the original spec file. That said, three downsides come to mind:
1. The build will become increasingly slow as targets are added.
2. A build failure of one target may prevent any target RPMs from being produced (optional).
3. If people want to take the idea and run with it for other targets, a single SRPM means less flexible maintainership.
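The loop variant might look something like this in the spec's %build section (purely a sketch: the %{target_triplets} macro and the configure arguments are made up for illustration):

```spec
# Hypothetical multi-target %build: one build tree per triplet.
%build
for target in %{target_triplets}; do
    mkdir -p build-$target
    pushd build-$target
    ../configure --target=$target --prefix=%{_prefix} --disable-nls
    make %{?_smp_mflags}
    popd
done
```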
The question that's been gnawing on my mind since your original posting is: Where does the sys-root come from? Clearly for the Fedora targets, there exist RPMs that contain the needed files from the existing build process. These need to be available when generating the crosses. Your binutils.spec (nice) assumes there is an installation under "/usr/%{binutils_target}". Whether or not this is the right place, it'd be good for there to be a dependency that ensures this exists.
My current thought is that there be a wrapper spec file that takes in the target's RPMs, puts them in a standardized directory "/usr/share/sys-roots/%{target_triplet}" (imperfect location of the day), then makes a noarch RPM out of the contents. Is this possible in the build system? How do we accommodate the GPL here? Assuming this is a viable option, binutils (gcc, etc) could simply require this package prior to building. Having distinct sys-root packages also accommodates other (non-Fedora targeted) systems for which some interest has been shown.
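A sys-root wrapper package along those lines might start out like this (entirely a sketch; the name, version and repackaging step are assumptions, and the elided extraction would use something like rpm2cpio):

```spec
# Hypothetical noarch wrapper carrying a target's headers/libraries
# under a per-triplet sys-root, for cross-gcc builds to depend on.
Name:      sys-root-%{target_triplet}
Version:   1
Release:   1
Summary:   Sys-root (headers and libraries) for %{target_triplet}
License:   LGPL
BuildArch: noarch

%install
mkdir -p %{buildroot}/usr/share/sys-roots/%{target_triplet}
# ... extract the target's glibc/glibc-devel payloads here ...

%files
/usr/share/sys-roots/%{target_triplet}
```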
It would be great if Fedora could be cross compiled using any host system to produce binaries for any target system, be it a supported and rare host (s390, ia64) or an entirely new target (arm, mips*).
-Brendan (blc@redhat.com)
On Sun, 2006-09-17 at 22:41 -0400, Brendan Conoboy wrote:
David Woodhouse wrote:
Starting with binutils.... at http://david.woodhou.se/binutils.spec there's a specfile based on the current Core package which lets you build cross-binutils with, for example, --define "binutils_target i686-fedora-linux"
Any particular reason for the binutils in binutils_target? Something like target_triplet could be shared across multiple packages (gcc, gdb).
s/could/must/ wrt. GCC
Also, putting some form of Fedora in the name is great, but it should be in some way versioned like i686-fedora5-linux or even i686-fc5-linux. This is a little more specific and will scale better if cross compiler interest blossoms.
Using a versioned target-triple is inevitable for cross-toolchains, because the toolchains aren't necessarily compatible.
That approach lets us track the Core package directly, and I think is sanest. What I'm not sure of, however, is how we actually deal with that when building for Extras. Is there a simple way we can build it multiple times with multiple definitions of %binutils_target, or would we have to import it all into multiple directories in CVS with the requisite one-line change and then build each one normally?
Another possibility is that we could make a single SRPM spit out _all_ the $ARCH-fedora-linux-binutils binary packages, building them all in a loop. But that might involve diverging even more from the Core specfile, which wouldn't be ideal.
This seems like a pretty small divergence. Instead of target_triplet use target_triplets, use a for loop and you're scarcely any further from the original spec file. That said, three downsides come to mind:
- The build will become increasingly slow as targets are added.
- A build failure of one target may prevent any target RPMs from being
produced (optional).
This might not be much of an issue for RH-based toolchains, but in general, in practice, this is a real showstopper for such multiple-target cross-toolchains and renders this approach impractical. Experience tells us that one target is always broken somewhere, and not all GCC versions work for all targets.
Furthermore, fixing target-specific bugs introduces unnecessary rebuilds for the other targets, which makes this approach a pain for end-users.
- If people want to take the idea and run with it for other targets, a
single SRPM means less flexible maintainership.
The question that's been gnawing on my mind since your original posting is: Where does the sys-root come from?
The easiest approach is to repackage the original (native) Fedora rpms into noarch rpms containing the sys-root for a cross toolchain.
Ralf
On Sun, 2006-09-17 at 22:41 -0400, Brendan Conoboy wrote:
Any particular reason for the binutils in binutils_target? Something like target_triplet could be shared across multiple packages (gcc, gdb). Also, putting some form of Fedora in the name is great, but it should be in some way versioned like i686-fedora5-linux or even i686-fc5-linux. This is a little more specific and will scale better if cross compiler interest blossoms.
I only called it that to distinguish it from RPM's _target_cpu, which is the _RPM_ target. And yes, something like i386-fedora6-linux would be better.
This seems like a pretty small divergence. Instead of target_triplet use target_triplets, use a for loop and you're scarcely any further from the original spec file. That said, three downsides come to mind:
- The build will become increasingly slow as targets are added.
- A build failure of one target may prevent any target RPMs from being
produced (optional).
- If people want to take the idea and run with it for other targets, a single SRPM means less flexible maintainership.
Perhaps we could build all the Fedora cross-toolchains in a loop like that, but let people take it and do more esoteric targets individually in Extras.
The question that's been gnawing on my mind since your original posting is: Where does the sys-root come from? Clearly for the Fedora targets, there exist RPMs that contain the needed files from the existing build process. These need to be available when generating the crosses. Your binutils.spec (nice) assumes there is an installation under "/usr/%{binutils_target}". Whether or not this is the right place, it'd be good for there to be a dependency that ensures this exists.
Binutils doesn't need it. I can build kernels quite happily without.
My current thought is that there be a wrapper spec file that takes in the target's RPMs, puts them in a standardized directory "/usr/share/sys-roots/%{target_triplet}" (imperfect location of the day), then makes a noarch RPM out of the contents. Is this possible in the build system? How do we accommodate the GPL here?
If it's our own packages then we're already shipping the source, so the GPL shouldn't be an issue if we repackage them.
Note that we want all this for populating qemu sysroots already. And we want to _share_ our sysroot with qemu. We might want ia32el using the same system too.
I don't actually think we _do_ want to repackage them. If I want to be able to install the proper i686 acrobat reader packages in to my i686 qemu/gcc sysroot, I want to just use yum -- or at _least_ RPM. I don't want to have to repackage everything as noarch. We should be able to use them directly, even if we have to modify rpm a little.
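For reference, rpm and yum can already be pointed at an alternate root, which is roughly what using them directly on a sysroot would look like; a sketch with placeholder package names (the install commands are commented out because they need real target packages):

```shell
#!/bin/sh
# Sketch: install target packages straight into a sys-root instead of
# repackaging them as noarch. The sys-root path is illustrative.
SYSROOT=/tmp/sys-roots/i686-fedora-linux
mkdir -p "$SYSROOT"

# rpm supports an alternate root; --ignorearch/--ignoreos bypass the
# host-architecture checks that would reject foreign-arch packages:
#   rpm --root "$SYSROOT" --ignorearch --ignoreos -ivh glibc-devel-*.i386.rpm
# yum offers the same through --installroot:
#   yum --installroot="$SYSROOT" install glibc-devel
```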
Assuming this is a viable option, binutils (gcc, etc) could simply require this package prior to building. Having distinct sys-root packages also accommodates other (non-Fedora targeted) systems for which some interest has been shown.
I don't want to require it _prior_ to building. That might be OK for Fedora where we _have_ the sysroot prior to building the compiler, but when we don't have a pre-existing sysroot, we need to build the compiler first.
See rants elsewhere over the last decade or so about dependencies and building everything three times :)
It would be great if Fedora could be cross compiled using any host system to produce binaries for any target system, be it a supported and rare host (s390, ia64) or an entirely new target (arm, mips*).
You'll never do that until we ban autoconf in packaging. Packages in _general_ won't cross-compile. We'll always have to have a "native" environment, although qemu can fake that and your _compiler_ binary can be a real native binary in the middle of a target-sysroot, so it's nice and fast. See scratchbox, for example.
On Mon, 2006-09-18 at 07:58 +0100, David Woodhouse wrote:
It would be great if Fedora could be cross compiled using any host system to produce binaries for any target system, be it a supported and rare host (s390, ia64) or an entirely new target (arm, mips*).
You'll never do that until we ban autoconf in packaging.
Sigh - Will you ever stop reiterating this FUD?
All properly packaged "single-targeted" autoconf/automake based packages do support cross-compilation, OTB.
Few packages do support mixed native/cross compilation and even less do support multi-target configurations.
Packages in _general_ won't cross-compile.
Yes, because many packagers don't test it and because rpm doesn't support it.
Ralf
On Mon, 2006-09-18 at 09:47 +0200, Ralf Corsepius wrote:
On Mon, 2006-09-18 at 07:58 +0100, David Woodhouse wrote:
It would be great if Fedora could be cross compiled using any host system to produce binaries for any target system, be it a supported and rare host (s390, ia64) or an entirely new target (arm, mips*).
You'll never do that until we ban autoconf in packaging.
Sigh - Will you ever stop reiterating this FUD?
All properly packaged "single-targeted" autoconf/automake based packages do support cross-compilation, OTB.
Then there are few of what you call 'properly packaged single-targeted' packages out there, because seamless support for cross-compilation has _not_ been my experience.
Few packages do support mixed native/cross compilation and even less do support multi-target configurations.
Packages in _general_ won't cross-compile.
Yes, because many packagers don't test it and because rpm doesn't support it.
I've spent a lot of time attempting to cross-build the distribution. RPM actually handles it just fine -- the problems were mostly caused by the (possibly incorrect) use of autotools in the package itself.
I agree, however, that there is nothing _fundamentally_ evil about autotools. Autotools don't kill cross-compilation; people do. Autotools just seem to make it easy.
On Mon, 2006-09-18 at 08:57 +0100, David Woodhouse wrote:
On Mon, 2006-09-18 at 09:47 +0200, Ralf Corsepius wrote:
On Mon, 2006-09-18 at 07:58 +0100, David Woodhouse wrote:
It would be great if Fedora could be cross compiled using any host system to produce binaries for any target system, be it a supported and rare host (s390, ia64) or an entirely new target (arm, mips*).
You'll never do that until we ban autoconf in packaging.
Sigh - Will you ever stop reiterating this FUD?
All properly packaged "single-targeted" autoconf/automake based packages do support cross-compilation, OTB.
Then there are few of what you call 'properly packaged single-targeted' packages out there, because seamless support for cross-compilation has _not_ been my experience.
Well, I'd estimate 90% of all lib* packages do work OTB.
It's the packages' authors who ship broken configurations, because they do stupid things like run-time checks or hard-coding compiler/system features (byte-order, type-sizes, etc.).
Few packages do support mixed native/cross compilation and even less do support multi-target configurations.
Packages in _general_ won't cross-compile.
Yes, because many packagers don't test it and because rpm doesn't support it.
I've spent a lot of time attempting to cross-build the distribution. RPM actually handles it just fine -- the problems were mostly caused by the (possibly incorrect) use of autotools in the package itself.
Well, this is NOT my experience.
RPM doesn't even get the target/host/build-tuple right for native noarch building.
Building cross-compilers (Note: These are native apps!) is a PITA, because RPM doesn't handle foreign binaries correctly (stripping, debug info etc. are all treated as <native>-elf).
Cross building (rpmbuild --target=...) isn't even close to being functional, because rpm screws up various target/host/build platform settings (e.g. %rpmopt), doesn't properly distinguish between target/host/build, and contains many hard-coded redhat specifics (we are cross-building cross-toolchain rpms to mingw, cygwin and solaris).
I agree, however, that there is nothing _fundamentally_ evil about autotools. Autotools don't kill cross-compilation; people do. Autotools just seem to make it easy.
Right, that's a statement I can live with.
Ralf
On Mon, 2006-09-18 at 10:23 +0200, Ralf Corsepius wrote:
Well, I'd estimate 90% of all lib* packages do work OTB.
Either things have got a _lot_ better since I was doing this, or you've been a lot luckier than I was.
It's the packages' authors who ship broken configurations, because they do stupid things like run-time checks or hard-coding compiler/system features (byte-order, type-sizes, etc.).
Yes. Autotools seems to encourage this behaviour, rather than just encouraging people to write sane portable code in the first place.
For example, why do runtime checks for word-size when you can just use explicitly sized C99 types if you actually care? But autotools makes it easy... and suddenly your package no longer compiles.
Maybe autotools wouldn't be so bad if it was made much harder to do stupid things.
Few packages do support mixed native/cross compilation and even less do support multi-target configurations.
Packages in _general_ won't cross-compile.
Yes, because many packagers don't test it and because rpm doesn't support it.
I've spent a lot of time attempting to cross-build the distribution. RPM actually handles it just fine -- the problems were mostly caused by the (possibly incorrect) use of autotools in the package itself.
Well, this is NOT my experience.
RPM doesn't even get the target/host/build-tuple right for native noarch building.
You mean in %configure? I don't recall it screwing that up, but again I haven't tried this recently. If it broke, file a bug. I suspect it's a problem with redhat-rpm-config instead of rpm itself.
Building cross-compilers (Note: These are native apps!) is a PITA, because RPM doesn't handle foreign binaries correctly (stripping, debug info etc. are all treated as <native>-elf).
I have a vague recollection of overriding %strip. But binutils-multi would also help with this.
Cross building (rpmbuild --target=...) isn't even close to being functional, because rpm screws up various target/host/build platform settings (e.g. %rpmopt), doesn't properly distinguish between target/host/build, and contains many hard-coded redhat specifics (we are cross-building cross-toolchain rpms to mingw, cygwin and solaris).
I haven't looked at cross-building to non-Linux RPMs. I can well believe that it's more problematic, but certainly I've had reasonable success with cross-building _Linux_ RPMs. As I said, the majority of failures I saw were with autotools being used to do the wrong thing. Not really with RPM itself.
On Mon, 2006-09-18 at 09:56 +0100, David Woodhouse wrote:
On Mon, 2006-09-18 at 10:23 +0200, Ralf Corsepius wrote:
Few packages do support mixed native/cross compilation and even less do support multi-target configurations.
Packages in _general_ won't cross-compile.
Yes, because many packagers don't test it and because rpm doesn't support it.
I've spent a lot of time attempting to cross-build the distribution. RPM actually handles it just fine -- the problems were mostly caused by the (possibly incorrect) use of autotools in the package itself.
Well, this is NOT my experience.
RPM doesn't even get the target/host/build-tuple right for native noarch building.
You mean in %configure?
Yes.
I don't recall it screwing that up, but again I haven't tried this recently. If it broke, file a bug. I suspect it's a problem with redhat-rpm-config instead of rpm itself.
%configure passes --target=noarch-redhat-linux to configure for noarch packages - The issue has been known to RH developers for quite a while, but has been ignored so far (I don't know if there is a PR on this.)
Building cross-compilers (Note: These are native apps!) is a PITA, because RPM doesn't handle foreign binaries correctly (stripping, debug info etc. are all treated as <native>-elf).
I have a vague recollection of overriding %strip. But binutils-multi would also help with this.
Nope, it would not help us much.
1. We are using patched binutils and rely upon canonicalized binutils. Therefore, non-canonicalized tools built from HJLu's sources or vanilla FSF sources don't help us much.
2. Cross-built rpms consist of both target and native binaries. RPM treats all of them as native. We need to patch the scripts to use the correct search path.
Cross building (rpmbuild --target=...) isn't even close to being functional, because rpm screws up various target/host/build platform settings (e.g. %rpmopt), doesn't properly distinguish between target/host/build, and contains many hard-coded redhat specifics (we are cross-building cross-toolchain rpms to mingw, cygwin and solaris).
I haven't looked at cross-building to non-Linux RPMs. I can well believe that it's more problematic, but certainly I've had reasonable success with cross-building _Linux_ RPMs. As I said, the majority of failures I saw were with autotools being used to do the wrong thing.
Probably because you either
* use ancient autotools,
* mix up --host/--build/--target,
* are not applying canonicalization,
* rely upon config.cache, or worse, config.site,
...
Not really with RPM itself.
Conversely for me.
Ralf
Ralf Corsepius wrote:
On Mon, 2006-09-18 at 09:56 +0100, David Woodhouse wrote:
I don't recall it screwing that up, but again I haven't tried this recently. If it broke, file a bug. I suspect it's a problem with redhat-rpm-config instead of rpm itself.
%configure passes --target=noarch-redhat-linux to configure for noarch packages - The issue is known to RH developers for quite a while, but
Maybe you could enlighten me on why/how --target=noarch-redhat-linux is wrong or why this *is* an issue? I've used it (successfully, I might add) on a few occasions to generate .noarch rpms.
-- Rex
On Mon, 2006-09-18 at 10:50 -0500, Rex Dieter wrote:
Ralf Corsepius wrote:
On Mon, 2006-09-18 at 09:56 +0100, David Woodhouse wrote:
I don't recall it screwing that up, but again I haven't tried this recently. If it broke, file a bug. I suspect it's a problem with redhat-rpm-config instead of rpm itself.
%configure passes --target=noarch-redhat-linux to configure for noarch packages - The issue is known to RH developers for quite a while, but
Maybe you could enlighten me on why/how --target=noarch-redhat-linux is wrong or why this *is* an issue? I've used it (successfully, I might add) on a few occasions to generate .noarch rpms.
Because,
1) In autoconf terms, --target is the target-tuple of a cross tool. It is very rarely useful at all (only to cross-tools), so passing it on to the configure call is very questionable, rarely used, and even less often required.
2) The tool the autotools use to check the validity of an architecture is "config.sub". It contains a list of valid architectures, and (correctly) rejects noarch-<*>, because the initial part of a target tuple (CPU-MANUFACTURER-OS) is supposed to contain a valid cpu; noarch isn't one.
I.e. there are at least three different issues at once:
a) broken configure scripts which mix up target/host/build. This is the typical case which triggers this breakdown in packages. AFAICT, several mono packages suffer from this issue.
b) rpm passing an incorrect value to --target. "none" is the value config.sub has reserved for such purposes.
c) rpm passing --target at all. It is very rarely used, let alone required. Not even building a native GNU toolchain needs it. David would need to override it for his multi-target binutils; I need it for my cross-compilers.
Ralf
Ralf Corsepius wrote:
On Mon, 2006-09-18 at 10:50 -0500, Rex Dieter wrote:
Maybe you could enlighten me on why/how --target=noarch-redhat-linux is wrong or why this *is* an issue? I've used it (successfully, I might add) on a few occasions to generate .noarch rpms.
...
I.e. there are at least three different issues at once:
b) rpm passing an incorrect value to --target. "none" is the value config.sub has reserved for such purposes.
This looks like the easiest quick-fix/short-term solution to me.
-- Rex
On Mon, 2006-09-18 at 11:19 -0500, Rex Dieter wrote:
Ralf Corsepius wrote:
On Mon, 2006-09-18 at 10:50 -0500, Rex Dieter wrote:
Maybe you could enlighten me on why/how --target=noarch-redhat-linux is wrong or why this *is* an issue? I've used it (successfully, I might add) on a few occasions to generate .noarch rpms.
...
I.e. there are at least three different issues at once:
b) rpm passing an incorrect value to --target. "none" is the value config.sub has reserved for such purposes.
This looks like the easiest quick-fix/short-term solution to me.
To address broken configure scripts without patching rpm, yes.
The real fix would be rpm to drop passing --target and leave appending it to those maintainers who really need it.
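Since %configure passes any extra arguments through to ./configure, the few maintainers who genuinely need --target could then append it themselves; a sketch using the %{binutils_target} macro name from earlier in the thread:

```spec
# Only cross-toolchain specs would add --target; everyone else omits it.
%configure --target=%{binutils_target}
```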
Ralf
Ralf Corsepius wrote:
The real fix would be rpm to drop passing --target and leave appending it to those maintainers who really need it.
If you drop --target, then you'll have to drop --host as well, else you'll end up seeing binaries named like %{_bindir}/i386-redhat-linux-foo. That's why --target was added in the first place (way back when).
-- Rex
On Tue, 2006-09-19 at 05:20 -0500, Rex Dieter wrote:
Ralf Corsepius wrote:
The real fix would be rpm to drop passing --target and leave appending it to those maintainers who really need it.
If you drop --target, then you'll have to drop --host as well, else, you'll end up seeing binaries named like: %{_bindir}/i386-redhat-linux-foo
Only if the package's configuration is broken :)
Normal packages don't apply --target at all nor do they apply canonicalisation (the behavior you describe above).
Only packages using AC_CANONICAL_TARGET use --target, and are subject to canonicalisation if --target is passed to configure. If it's not being passed, canonicalisation doesn't take place.
Some maintainers are confusing --host/--build/--target with host and/or build and incorrectly apply --target.
that's why --target was added in the first place (way back when).
... broken packages ... confused maintainers ... ancient/outdated/unmaintained packages ... many years ago, there once was a bug in autoconf/automake which triggered this behavior. That bug was resolved many years ago.
Ralf
Ralf Corsepius wrote:
On Tue, 2006-09-19 at 05:20 -0500, Rex Dieter wrote:
that's why --target was added in the first place (way back when).
... broken packages ... confused maintainers ... ancient/outdated/unmaintained packages ... many years ago, there once was a bug in autoconf/automake which triggered this behavior. This bug has been resolved many years ago.
Nice to hear.
So what do you suggest? drop both --build/--target?
-- Rex
On Tue, 2006-09-19 at 20:53 -0500, Rex Dieter wrote:
Ralf Corsepius wrote:
On Tue, 2006-09-19 at 05:20 -0500, Rex Dieter wrote:
that's why --target was added in the first place (way back when).
... broken packages ... confused maintainers ... ancient/outdated/unmaintained packages ... many years ago, there once was a bug in autoconf/automake which triggered this behavior. This bug has been resolved many years ago.
Nice to hear.
So what do you suggest? drop both --build/--target?
Dropping --target.
Dropping --target from %configure should be pretty safe, because only very few packages really use it and even fewer really need it. It will definitely break some (broken) packages, but the number affected should be very small and finite.
Dropping --build is one step more aggressive and therefore would require some careful analysis of the consequences:
* --build is used by most packages.
* The pair --build/--host is used to trigger cross-compilation; sometimes this is desired, sometimes not. Unfortunately autoconf's behavior here is non-trivial.
* Theoretically dropping --build should be safe (it implies configure scripts resorting to an autodetected value), but this is definitely one magnitude more dangerous than dropping --target.
Ralf
Ralf Corsepius wrote:
On Tue, 2006-09-19 at 20:53 -0500, Rex Dieter wrote:
So what do you suggest? drop both --build/--target?
Dropping --target.
Dropping --target from %configure should be pretty safe, because only very few packages really use it and even less really need it. It will definitely break some (broken) packages, but the number being affected should be very small and finite.
Thanks, I'll go test a few pkgs dropping --target and see how it does (*crosses fingers*). Do you know of any current/open bugzillas on this topic?
-- Rex
On Mon, Sep 18, 2006 at 08:57:11AM +0100, David Woodhouse wrote:
I agree, however, that there is nothing _fundamentally_ evil about
There is a lot fundamentally evil about autotools; it uses perl, to start with.
autotools. Autotools don't kill cross-compilation; people do. Autotools just seem to make it easy.
Autotools also makes it extremely hard to debug a cross compilation problem. Neither does it deal with repeatability; consider what happens if you cross-build a package during beta and it works, then native-build it during final and it doesn't. The vagaries of the compiler and cross compiler suite can cause this to bite you very occasionally.
It would be good to be able to cross build Fedora, if only for slow old architectures and embedded where its pretty essential.
Alan
On Mon, 2006-09-18 at 07:09 -0400, Alan Cox wrote:
Autotools also makes it extremely hard to debug a cross compilation problem. Neither does it deal with repeatability; consider what happens if you cross-build a package during beta and it works, then native-build it during final and it doesn't. The vagaries of the compiler and cross compiler suite can cause this to bite you very occasionally.
I particularly like the way bridge-utils will build _entirely_ differently according to whether it happens to detect libsysfs in the system or not. And sometimes it fails to detect libsysfs even when it's present, so it silently builds for a 2.4 kernel :)
On Mon, 2006-09-18 at 12:27 +0100, David Woodhouse wrote:
On Mon, 2006-09-18 at 07:09 -0400, Alan Cox wrote:
Autotools also makes it extremely hard to debug a cross compilation problem. Neither does it deal with repeatability; consider what happens if you cross-build a package during beta and it works, then native-build it during final and it doesn't. The vagaries of the compiler and cross compiler suite can cause this to bite you very occasionally.
I particularly like the way bridge-utils will build _entirely_ differently according to whether it happens to detect libsysfs in the system or not. And sometimes it fails to detect libsysfs even when it's present, so it silently builds for a 2.4 kernel :)
Blame this package's authors and don't blame the tools ;)
Ralf
On Mon, 2006-09-18 at 07:09 -0400, Alan Cox wrote:
On Mon, Sep 18, 2006 at 08:57:11AM +0100, David Woodhouse wrote:
I agree, however, that there is nothing _fundamentally_ evil about
There is a lot fundamentally evil about autotools, it uses perl to start with.
Apart from the fact that you personally seem to hate Perl and apparently feel the need to reiterate your opinion, it's an implementation detail, not of any importance to its function.
BTW: perl is the least problematic part of the autotools. The most problematic ones are shells and m4, plus people outsmarting themselves by abusing the autotools.
autotools. Autotools don't kill cross-compilation; people do. Autotools just seem to make it easy.
Autotools also makes it extremely hard to debug a cross compilation problem.
How that?
Neither does it deal with repeatability; consider what happens if you cross-build a package during beta and it works, then native-build it during final and it doesn't.
And how is this problem related to the autotools?
Use 2 different build directories and appropriate host/build/target tuples and you're done.
The vaguaries of the compiler and cross compiler suite can cause this to bite you very occasionally.
Sure, ... but this would hit you with any other build system in the same way.
It would be good to be able to cross build Fedora, if only for slow old architectures and embedded where its pretty essential.
Sure.
Ralf
Ralf Corsepius wrote:
BTW: perl is the least problematic part of the autotools. The most
Unless you want to crosscompile perl so autotools can run on your target, which is an interesting experience involving a full configure and build for the host to generate a host miniperl so that a perl-using cross build for a non-host target can complete :-O
-Andy
On Mon, 2006-09-18 at 14:37 +0100, Andy Green wrote:
Ralf Corsepius wrote:
BTW: perl is the least problematic part of the autotools. The most
Unless you want to crosscompile perl so autotools can run on your target, which is an interesting experience involving a full configure and build for the host to generate a host miniperl so that a perl-using cross build for a non-host target can complete :-O
You don't need perl to configure/build/install packages to cross-compile - cross compiling perl ... that's a different issue ...
Ralf
Ralf Corsepius wrote:
On Mon, 2006-09-18 at 14:37 +0100, Andy Green wrote:
Ralf Corsepius wrote:
BTW: perl is the least problematic part of the autotools. The most
Unless you want to crosscompile perl so autotools can run on your target, which is an interesting experience involving a full configure and build for the host to generate a host miniperl so that a perl-using cross build for a non-host target can complete :-O
You don't need perl to configure/build/install packages to cross-compile - cross compiling perl ... that's a different issue ...
Sure, hence "so autotools can run on your target". The issue is pertinent though if people think about a truly crosscompilable distro, Python and Perl at least will cause a lot of trouble. Although of course if they are helped through it everyone would benefit.
Fedora package configure options as they are would also be extremely fat on most kinds of embedded hardware.
-Andy
Andy Green wrote:
Sure, hence "so autotools can run on your target". The issue is pertinent though if people think about a truly crosscompilable distro, Python and Perl at least will cause a lot of trouble. Although of course if they are helped through it everyone would benefit.
Step 1: Get cross compilers into Fedora in some official capacity.
Step 2: Resolve cross compilation build problems in individual packages.
I suspect there are dozens of people/organizations who have their own cross compilers already. Likewise, they have their own fixes for the cross compilation failures of packages such as Python and Perl. We need cross compilation to be more common before those patches are going to make it back into the problem packages.
Fedora package configure options as they are would also be extremely fat on most kinds of embedded hardware.
Many kinds, anyway. I suppose it depends on at what point you think of the hardware as being embedded. Fedora (plus custom kernel) runs fine on a Kurobox, for instance. I wouldn't want to recompile it natively, though!
-Brendan (blc@redhat.com)
I am still not sure why people want cross compilers in Fedora. Maybe I missed the very beginning of the thread.
There are a number of projects dedicated to building images using cross tool chains. The problem is much more involved than supplying a cross compiler.
See:
www.openembedded.org
buildroot
scratchbox
Philip
Philip Balister wrote:
I am still not sure why people want cross compilers in Fedora? Maybe I missed the very beginning of the thread.
You missed the beginning of the thread. If you check the archives you'll see there are as many reasons as people who are interested in this. For my part, I would like to see porting Fedora to new platforms be a relatively straightforward task.
There are a number of projects dedicated to building images using cross tool chains. The problem is much more involved than supplying a cross compiler.
Well, yes, but you have to start somewhere. Many of the people participating in this thread use cross tools on a daily basis.
www.openembedded.org buildroot
Will read up on these next...
scratchbox
This looks pretty slick, but it requires a QEMU port in order to work.
-Brendan (blc@redhat.com)
The OpenEmbedded guys do have some support for generating RPMs. It may not be 100% due to bit rot, but some people are getting interested in making the RPM stuff work again.
With OE, the command "bitbake FC6" could produce a set of rpms for the release and the required install images. By changing the underlying machine file, the build could be targeted to different systems.
Philip
On Mon, 2006-09-18 at 11:02 -0400, Brendan Conoboy wrote:
Many kinds, anyway. I suppose it depends on the point at which you think of the hardware as being embedded. Fedora (plus a custom kernel) runs fine on a Kurobox, for instance. I wouldn't want to recompile it natively, though!
I wouldn't want to recompile natively on the $100 laptop either -- which is why I'm chasing up i686-fedora-linux cross-compilers to run on my shiny Fedora/PPC machines :)
David Woodhouse wrote:
Perhaps we could build all the Fedora cross-toolchains in a loop like that, but let people take it and do more esoteric targets individually in Extras.
Sure. It would be great if Core's gcc and binutils already fit into that mesh.
Binutils doesn't need it. I can build kernels quite happily without.
That's fine if all you want to do is build a kernel, but any package is a candidate for cross compilation.
If it's our own packages then we're already shipping the source, so the GPL shouldn't be an issue if we repackage them.
This doesn't sound right. We're talking about two different repositories (core vs extras).
I don't actually think we _do_ want to repackage them. If I want to be able to install the proper i686 Acrobat Reader packages into my i686 qemu/gcc sysroot, I want to just use yum -- or at _least_ RPM. I don't want to have to repackage everything as noarch. We should be able to use them directly, even if we have to modify rpm a little.
Modifying rpm may be the best long term option. If RPM had a magic incantation like 'rpm -i --sysroot somepackage.mipsel.rpm' that knew to put it under /usr/sysroots/mipsel-linux-gnu that'd be great.
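Until rpm grows such a flag, the effect can be approximated by hand with rpm2cpio. A minimal sketch, assuming an illustrative target triplet, sysroot layout, and package file name:

```shell
#!/bin/sh
# Sketch: unpack a foreign-arch binary RPM into a per-target sysroot.
# There is no 'rpm -i --sysroot' today; rpm2cpio + cpio is a stand-in.
# TARGET, the sysroot path, and the package name are assumptions.
set -e
TARGET=mipsel-linux-gnu
# DESTDIR="" would put this under the real /usr; default to a local tree.
SYSROOT="${DESTDIR:-.}/usr/sysroots/$TARGET"
PKG=glibc-devel.mipsel.rpm   # hypothetical package file

mkdir -p "$SYSROOT"
if [ -f "$PKG" ]; then
    # cpio -idm: extract, making directories as needed, preserving modes.
    # No rpm database transaction, no scriptlets run inside the sysroot.
    rpm2cpio "$PKG" | (cd "$SYSROOT" && cpio -idm)
fi
echo "sysroot at $SYSROOT"
```

The obvious downside is that rpm keeps no record of what lives in the sysroot, which is exactly why a native --sysroot mode in rpm itself would be preferable.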
Or we could abolish /usr/include and /usr/lib in favor of sys-roots from the ground-up. Any takers? :-)
I don't want to require it _prior_ to building. That might be OK for Fedora where we _have_ the sysroot prior to building the compiler, but when we don't have a pre-existing sysroot, we need to build the compiler first.
How about seeding the build system with a hand-made sys-root for the first generation? After that you can iterate using previous builds.
See rants elsewhere over the last decade or so about dependencies and building everything three times :)
Please, no...
You'll never do that until we ban autoconf in packaging. Packages in _general_ won't cross-compile. We'll always have to have a "native" environment, although qemu can fake that and your _compiler_ binary can be a real native binary in the middle of a target-sysroot, so it's nice and fast. See scratchbox, for example.
Autoconf and cross compilation can work just fine together. That said, there are plenty of auto* tests that are cross-ignorant and need fixing. It's not insurmountable, but it does require every package to play nicely.
I haven't looked at scratchbox before. Will do that now.
-Brendan (blc@redhat.com)
On Mon, 2006-09-18 at 10:48 -0400, Brendan Conoboy wrote:
I don't want to require it _prior_ to building. That might be OK for Fedora where we _have_ the sysroot prior to building the compiler, but when we don't have a pre-existing sysroot, we need to build the compiler first.
How about seeding the build system with a hand-made sys-root for the first generation? After that you can iterate using previous builds.
That should be a last resort if we really cannot fix dependencies in any other way. It's really not an ideal situation.
How about just building binutils, then the compiler, then some libraries?
On Mon, 2006-09-18 at 11:07 -0400, Brendan Conoboy wrote:
David Woodhouse wrote:
How about just building binutils, then the compiler, then some libraries?
That would be great if it's possible. How is this going to work with only the headers supplied in binutils and gcc?
It doesn't work at all. You at least need glibc, too.
[BTW: None of the components involved needs the binutils headers/libs.]
Ralf
On Mon, 2006-09-18 at 11:07 -0400, Brendan Conoboy wrote:
David Woodhouse wrote:
How about just building binutils, then the compiler, then some libraries?
That would be great if it's possible. How is this going to work with only the headers supplied in binutils and gcc?
I believe it ought to go something like
binutils < gcc < glibc < libgcc
We might want to put libgcc into a separate package for the cross-toolchain, unless we can _fake_ the presence of glibc. We might only really need a dummy DSO to link libgcc against; it doesn't actually have to be glibc -- it only needs about 10 symbols to be present iirc.
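A rough sketch of that ordering, in the style of crosstool-like scripts. The target triplet, directory layout, and configure flags below are assumptions, not a tested recipe; DRY_RUN defaults to 1 here, so the script only prints each step in order.

```shell
#!/bin/sh
# Sketch of the incremental bootstrap: binutils < gcc (stage 1)
# < glibc < gcc (final, which builds libgcc against the real libc).
set -e
TARGET=arm-fedora-linux-gnu     # illustrative triplet
PREFIX=${PREFIX:-/opt/cross}
SYSROOT=$PREFIX/$TARGET/sys-root

# With DRY_RUN=1 (the default here) only print the step, in order.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "would: $*"; else "$@"; fi; }

# 1. binutils: needs nothing from the target libc.
run sh -c "cd build-binutils && ../binutils/configure \
    --target=$TARGET --prefix=$PREFIX && make && make install"

# 2. Stage-1 gcc: C only, no shared libgcc, no target headers yet.
run sh -c "cd build-gcc1 && ../gcc/configure --target=$TARGET \
    --prefix=$PREFIX --without-headers --enable-languages=c \
    --disable-shared --disable-threads && make && make install"

# 3. glibc, built with the stage-1 compiler, installed into the sysroot.
run sh -c "cd build-glibc && ../glibc/configure --host=$TARGET \
    --prefix=/usr && make && make install_root=$SYSROOT install"

# 4. Final gcc against the real glibc; libgcc now links properly.
run sh -c "cd build-gcc2 && ../gcc/configure --target=$TARGET \
    --prefix=$PREFIX --with-sysroot=$SYSROOT \
    --enable-languages=c,c++ && make && make install"
```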
On Mon, 2006-09-18 at 16:30 +0100, David Woodhouse wrote:
On Mon, 2006-09-18 at 11:07 -0400, Brendan Conoboy wrote:
David Woodhouse wrote:
How about just building binutils, then the compiler, then some libraries?
That would be great if it's possible. How is this going to work with only the headers supplied in binutils and gcc?
I believe it ought to go something like
binutils < gcc < glibc < libgcc
We might want to put libgcc into a separate package for the cross-toolchain, unless we can _fake_ the presence of glibc.
As mentioned a dozen times before: simply repackage the glibc binary RPMs into a sys-rooted environment (for those GCCs that support it -- older versions don't).
Ralf
On Mon, 2006-09-18 at 17:42 +0200, Ralf Corsepius wrote:
On Mon, 2006-09-18 at 16:30 +0100, David Woodhouse wrote:
On Mon, 2006-09-18 at 11:07 -0400, Brendan Conoboy wrote:
David Woodhouse wrote:
How about just building binutils, then the compiler, then some libraries?
That would be great if it's possible. How is this going to work with only the headers supplied in binutils and gcc?
I believe it ought to go something like
binutils < gcc < glibc < libgcc
Forgot to mention:
- libgcc is part of GCC.
- The dependency between GCC and glibc (and the kernel-headers) is circular.
Splitting out libgcc from GCC IMO is an attempt to break this circular dependency from the wrong end.
We might want to put libgcc into a separate package for the cross-toolchain, unless we can _fake_ the presence of glibc.
As mentioned a dozen times before: simply repackage the glibc binary RPMs into a sys-rooted environment (for those GCCs that support it -- older versions don't).
Using the binary glibc breaks these dependencies into the same linear, incremental dependency chain as is used for native compilation, and re-uses the identical target library binaries that are used natively.
Ralf
Once again, several other open source projects have solved this problem. The problem is not the lack of cross building systems, but rather that there are so many. Why does Fedora need to reinvent the wheel?
Philip
On Mon, 2006-09-18 at 18:19 -0400, Philip Balister wrote:
Once again, several other open source projects have solved this problem. The problem is not the lack of cross building systems, but rather that there are so many.
There are so many because none of them meets the demands of vendors.
We (RTEMS) have a working system meeting our demands and giving us the amount of control we need for our purposes => Switching to yet another approach claiming to have solved "all problems" is not necessarily interesting.
Ralf
David Woodhouse wrote:
We might want to put libgcc into a separate package for the cross-toolchain, unless we can _fake_ the presence of glibc. We might only really need a dummy DSO to link libgcc against; it doesn't actually have to be glibc -- it only needs about 10 symbols to be present iirc.
Would you trust a gcc built against a fake glibc? I wouldn't. When bootstrapping a glibc targeted cross compiler, my method is:
1. Create minimal sys-root with glibc-kernheaders (Haven't done this since the package change) plus a few fake headers that glibc would normally provide.
2. Create target-gcc with step 1 headers.
3. Create target-glibc sys-root with step 2.
4. Create final target-gcc with step 3.
5. Create final target-glibc with step 4.
Steps 1-3 are throw-away bits. Placing cross compilers in Fedora does not require all this because the build system does not need to solve the chicken&egg problem. The main problem to be solved is The Right Way (tm) to leverage those already-generated files that a sys-root is composed of.
Suggestions:
1. Repackage binary rpms as noarch rpms under a sys-root tree.
2. Modify rpm such that RPMs of different architectures can be installed in a sys-root tree.
3. Modify Fedora so that all headers and libraries are by default in a sys-root.
4. Modify something (rpm? all packages?) such that an optional sys-root package is emitted along with devel packages. Sort of like debuginfo.
...
-Brendan (blc@redhat.com)