Fedora 13 ARM users can now update to use a new repository composed on March 23rd. This new compose adds additional software and includes software groups.
The Beta2 has been signed with a new key. If you are currently running Beta1 you will need to install the new key, which can be done by entering:
rpm -Uvh http://arm.koji.fedoraproject.org/mash/beta/f13-arm-2011-03-23/f13-arm/arm/o...
And replace the fedora.repo file:
mv /etc/yum.repos.d/fedora.repo.rpmnew /etc/yum.repos.d/fedora.repo
The Fedora 13 ARM Beta2 compose can be found at:
http://arm.koji.fedoraproject.org/mash/beta/f13-arm-2011-03-23/f13-arm/arm/o...
A new root filesystem is available here. The password for the root account remains “fedoraarm”.
We are asking users of Fedora ARM for some feedback. Is there something missing? Is the software included so far working as expected? Any other suggestions? Please forward feedback to the mailing list.
On 03/25/2011 09:53 PM, Paul Whalen wrote:
Fedora 13 ARM users can now update to use a new repository composed on March 23rd. This new compose adds additional software and includes software groups.
The Beta2 has been signed with a new key. If you are currently running the Beta1 you will need to install the new key which can be done by entering:
rpm -Uvh http://arm.koji.fedoraproject.org/mash/beta/f13-arm-2011-03-23/f13-arm/arm/o...
I think you mean:
http://arm.koji.fedoraproject.org/mash/beta/f13-arm-2011-03-23/f13-arm/arm/o...
Gordan
On Fri, Mar 25, 2011 at 6:53 PM, Paul Whalen paul.whalen@senecac.on.ca wrote:
Are you able to include the fake-kernel rpm from http://comments.gmane.org/gmane.linux.redhat.fedora.arm/959 ?
On Mar 25, 2011, at 2:53 PM, Paul Whalen wrote:
A new root filesystem is available here. The password for the root account remains “fedoraarm”.
Looks like the path to the rootfs bz2 file got dropped. Backtracking from the beta1 location I found the beta2 at: http://scotland.proximity.on.ca/fedora-arm/beta/f13/rootfs-f13-beta-2011-03-...
We are asking users of Fedora ARM for some feedback. Is there something missing? Is the software included so far working as expected? Any other suggestions? Please forward feedback to the mailing list
_______________________________________________
arm mailing list
arm@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/arm
On 25 March 2011 21:53, Paul Whalen paul.whalen@senecac.on.ca wrote:
We are asking users of Fedora ARM for some feedback. Is there something missing? Is the software included so far working as expected? Any other suggestions? Please forward feedback to the mailing list
Most things are working well. I think the F13 beta is much better than the F12 from a year ago.
What's missing from my (biased) perspective?
1. Updates tracking F13 mainline.
2. ARMv7 / VFP / NEON support to squeeze a bit more performance out (where appropriate to the h/w).
3. Some kernel build strategy.
4. More "spreading the word" on appropriate mailing lists / groups that Fedora on ARM is worth using.
All these have been discussed before.
It's probably worth gathering some data on h/w and experiences as the beta progresses. Any objections to my creating a wiki page to track and summarise this?
Matthew
On 03/26/2011 09:10 PM, Matthew Wilson wrote:
- Armv7 / VFP / NEON support to squeeze a bit more performance out
(where appropriate to the h/w).
I suspect you'll find that NEON won't make a noticeable difference, at least not until:
1) GCC can do worthwhile vectorization - I've yet to try the 4.5.x branch, which supposedly has some work on that contributed by IBM, but given that it's taken 14 years to get this far since MMX was first introduced, I'm not too hopeful of a quantum leap overnight.
_AND_
2) Developers start to write code in a way that the compiler can sensibly apply vectorization. Considering that people haven't really done this after the best part of a decade of availability of decent vectorizing compilers (ICC on x86), I suspect this will be a bigger problem than 1). And very few people are likely to have a great interest in rewriting something that works, no matter how poorly.
Don't bet on SIMD for anything but niche applications (e.g. scientific number crunching and games - and for the former, ARM isn't exactly a popular platform).
Other missing things that I would add to your list are:
- Lack of ported applications. The most important one, IMO, is OpenOffice/LibreOffice. Ubuntu has this on ARM, so there is very little excuse for not having it, since it has been done. The build systems are, unfortunately, sufficiently different to make this far from trivial without the compatibility being worked on upstream. This is, IMO, the key reason why Ubuntu is so far ahead in terms of shipping pre-installed on ARM netbooks.
- Legacy bad programming. ARM, like SPARC, is susceptible to issues arising from memory pointer dereferences that aren't word-aligned. This has been discussed here before, and I find it more than a little surprising that while the GCC SPARC back-end seems to align all structures and arrays to a word boundary, the ARM back-end does not, and this causes bugs to arise. One recent example that shocked me is just how often this happens in code that is really critical (e2fsprogs), where buffers get defined as arrays of char, and then the contents get cast into structs. GCC will align char[] to a byte boundary, not a word boundary, and thus cause all sorts of issues. The fact that some things work at all is nothing short of a miracle.

But like 2) I mentioned above, that is a lot of code to rewrite correctly. The alternative is similar to 1), in that the GCC ARM back-end needs a parameter to force alignment of all structures to a word boundary. Interestingly, Intel's compiler for x86 has an option for this, despite the fact that x86 has transparent hardware fix-up for this - because unaligned arrays cannot be vectorized.

I'm guessing that hasn't happened on GCC/ARM because most ARM development has traditionally been on systems where saving a few bytes of RAM is of paramount importance and the developers are competent enough to hand-craft their code to make sure it works (embedded systems). If ARM is to grow up into a desktop processor, its compiler has to do so, too. Even so, we're likely to be unpicking unaligned-access violations in existing code for years, right up until bad programming gets enshrined in transparent hardware alignment fixup (e.g. Cortex A / ARMv7).
</rant> ;)
Gordan
Hi,
On Sat, Mar 26 2011, Gordan Bobic wrote:
On 03/26/2011 09:10 PM, Matthew Wilson wrote:
- Armv7 / VFP / NEON support to squeeze a bit more performance out
(where appropriate to the h/w).
I suspect you'll find that NEON won't make a noticeable difference, at least not until:
I suspect you're right that recompiling the world with NEON is no big deal, but simply doing glibc/X/liboil/codecs should be a large win by itself. In those cases there's pre-vectorized code sitting there and waiting to be emitted once the right flag's turned on.
- Chris.
On 03/27/2011 04:19 AM, Chris Ball wrote:
Hi,
On Sat, Mar 26 2011, Gordan Bobic wrote:
On 03/26/2011 09:10 PM, Matthew Wilson wrote:
- Armv7 / VFP / NEON support to squeeze a bit more performance out
(where appropriate to the h/w).
I suspect you'll find that NEON won't make a noticeable difference, at least not until:
I suspect you're right that recompiling the world with NEON is no big deal, but simply doing glibc/X/liboil/codecs should be a large win by itself. In those cases there's pre-vectorized code sitting there and waiting to be emitted once the right flag's turned on.
If there is such code in there (and I'm not convinced there is much, if any), it is likely to be hand-crafted assembly - and if that is the case, it's a virtual certainty that it isn't ARM assembly.
Gordan
Hi,
On Sun, Mar 27 2011, Gordan Bobic wrote:
On 03/27/2011 04:19 AM, Chris Ball wrote:
I suspect you're right that recompiling the world with NEON is no big deal, but simply doing glibc/X/liboil/codecs should be a large win by itself. In those cases there's pre-vectorized code sitting there and waiting to be emitted once the right flag's turned on.
If there is such code in there (and I'm not convinced there is much, if any), it is likely to be hand-crafted assembly - and if that is the case, it's a virtual certainty that it isn't ARM assembly.
You are wrong. Here is a patch for NEON-optimized memcpy() for glibc written in ARM assembly:
http://sourceware.org/ml/libc-ports/2009-07/msg00003.html
Orc¹, which replaced liboil in gstreamer, also emits NEON asm:
http://code.entropywave.com/git?p=orc.git;a=blob;f=orc/orcrules-neon.c
As does pixman, which accelerates X rendering and reports simple fill/blit operations being at least twice as fast with NEON:
http://sandbox.movial.com/blog/2009/06/pixman-gets-neon-support/
- Chris.
¹: http://code.entropywave.com/projects/orc/
On Mon, Mar 28, 2011 at 5:29 AM, Chris Ball cjb@laptop.org wrote:
Hi,
On Sun, Mar 27 2011, Gordan Bobic wrote:
On 03/27/2011 04:19 AM, Chris Ball wrote:
I suspect you're right that recompiling the world with NEON is no big deal, but simply doing glibc/X/liboil/codecs should be a large win by itself. In those cases there's pre-vectorized code sitting there and waiting to be emitted once the right flag's turned on.
If there is such code in there (and I'm not convinced there is much, if any), it is likely to be hand-crafted assembly - and if that is the case, it's a virtual certainty that it isn't ARM assembly.
You are wrong. Here is a patch for NEON-optimized memcpy() for glibc written in ARM assembly:
http://sourceware.org/ml/libc-ports/2009-07/msg00003.html
Orc¹, which replaced liboil in gstreamer, also emits NEON asm:
http://code.entropywave.com/git?p=orc.git;a=blob;f=orc/orcrules-neon.c
As does pixman, which accelerates X rendering and reports simple fill/blit operations being at least twice as fast with NEON:
http://sandbox.movial.com/blog/2009/06/pixman-gets-neon-support/
Yip, and SKIA, EFL, and ffmpeg. Linaro is working quite a bit in this area, such as Cairo, libjpeg, AAC, and VP8:
http://status.linaro.org/group/tr-graphics-toolkits-optimization-cairo.html
http://status.linaro.org/group/tr-multimedia-optimize-jpeg-decoding.html
http://status.linaro.org/group/tr-multimedia-optimize-aac-encoding.html
http://status.linaro.org/group/tr-multimedia-optimize-vp8-decoding.html
On the toolchain side, using NEON by default on normal code is pretty much neutral. We're working on that though: https://blueprints.launchpad.net/gcc-linaro/+spec/auto-vectorization-improve...
You can do runtime selection based on the capabilities of the chip using something custom, GLIBC's hwcaps, or the recently added IFUNC support. That way you get compatible and fast for the cost of a bigger installed image.
-- Michael
On Sun, Mar 27, 2011 at 4:19 AM, Chris Ball cjb@laptop.org wrote:
Hi,
On Sat, Mar 26 2011, Gordan Bobic wrote:
On 03/26/2011 09:10 PM, Matthew Wilson wrote:
- Armv7 / VFP / NEON support to squeeze a bit more performance out
(where appropriate to the h/w).
I suspect you'll find that NEON won't make a noticeable difference, at least not until:
I suspect you're right that recompiling the world with NEON is no big deal, but simply doing glibc/X/liboil/codecs should be a large win by itself. In those cases there's pre-vectorized code sitting there and waiting to be emitted once the right flag's turned on.
cairo and pixman have had some heavy optimisations done for NEON as well, so that should massively improve things like gtk/gnome/mozilla based apps too.
Peter
On Sat, 2011-03-26 at 21:10 +0000, Matthew Wilson wrote:
- Some kernel build strategy.
There are a couple of us looking into this at the moment. The thinking (thus far; we only really started pondering this recently) goes that we need a kernel RPM, but using the F13 kernel is basically certain death in terms of the number of extra patches needed, etc. Therefore, we'll take 2.6.38 and do a rawhide-like kernel RPM that is also installable on F13 to get going. I'm thinking we'll start with an OMAP kernel RPM that works on the BeagleBoard-xM and PandaBoard and work from there.
I'll be at LF/ELC the next couple of weeks, but the hope is to have something in motion by then and posted here. If anyone else is already working on kernel packaging, please ping me so we don't duplicate efforts in putting this together :)
Jon.
Jon Masters jcm@redhat.com writes:
On Sat, 2011-03-26 at 21:10 +0000, Matthew Wilson wrote:
- Some kernel build strategy.
There are a couple of us looking into this at the moment. The thinking (thus far, only really started pondering recently) goes that we need a kernel RPM but using the F13 kernel is basically certain death in terms of the number of extra patches needed, etc. Therefore, we'll take 2.6.38 and do a rawhide-like kernel RPM that is also installable on F13 to get going. I'm thinking we'll start with an OMAP kernel RPM that works on BeagleBoard-xM and PandaBoard and work from there.
Is this something that would also work on a SheevaPlug or GuruPlug?
-derek
Derek Atkins wrote:
Jon Masters jcm@redhat.com writes:
On Sat, 2011-03-26 at 21:10 +0000, Matthew Wilson wrote:
- Some kernel build strategy.
There are a couple of us looking into this at the moment. The thinking (thus far, only really started pondering recently) goes that we need a kernel RPM but using the F13 kernel is basically certain death in terms of the number of extra patches needed, etc. Therefore, we'll take 2.6.38 and do a rawhide-like kernel RPM that is also installable on F13 to get going. I'm thinking we'll start with an OMAP kernel RPM that works on BeagleBoard-xM and PandaBoard and work from there.
Is this something that would also work on a Sheeva or Guru plug?
Conceptually, yes, but not the same kernel rpm. Sheeva/Guru is based on a Marvell Kirkwood chip (ARMv5) while the BeagleBoard and PandaBoard are Cortex A based (ARMv7). At least judging by the kernel configuration options, it doesn't seem possible to build a single kernel for both.
Gordan
So is the idea to have a kernel RPM for each sub-architecture: ARMv5, ARMv7, ARMv9?
What about the user-land? Would we keep separate repos to have optimized bits for v7/v9?
On Mar 29, 2011 9:49 AM, "Gordan Bobic" gordan@bobich.net wrote:
It'd have to be more finely grained than sub-architecture, since a kernel for one target won't necessarily work on other CPUs of the same sub-architecture (e.g. a Kirkwood kernel won't work on all ARMv5 processors).
I am still assuming the split is going to be between softfp and hardfp (ABI), rather than arch (ARMv6 vs ARMv7).
Oh, and there's no such thing as ARMv9 (yet, at least).
Gordan
Jon wrote:
So is the idea is to have kernel rpm for each sub-architecture. ARMv5, ARMv7, ARMv9?
What about the user-land? Would we keep seperate repos to have optimized bits for v7/9?
Quoting Gordan Bobic gordan@bobich.net:
It'd have to be more finely grained than sub-architecture since a kernel for one target won't necessarily work on other CPU of the same sub-architecture (e.g. a Kirkwood kernel won't work on all ARMv5 processors).
Is there a way around this? I mean like being able to make a megakernel with a ton of modules that has support for every board and autodetects hardware?
When you look at the defconfigs for ARM, you see about 50 of them. It would be nice to have a "generic" ARM, or a generic ARMv5 configuration that would be a megakernel with all the hardware in modules, or something that can autodetect the board and processor. Even broken out to ARMv5, ARMv6, etc. would be nice.
Then if you want to tweak your kernel for Sheeva or OMAP, then by all means go ahead. I'm not even against supplying precompiled kernels for this, but it would be more like an x86 PAE, MP or Xen kernel.
--
If the split is not hardfp, I would seriously consider looking at bootstrapping fat binaries for optimisations between v5 and v7. I'm pretty sure this is how Apple did some of the optimisations for AltiVec for OS X, which means some of this code may be sitting in the Darwin archives.
(Apple started off by using something similar to the perl script to relink; with OS 10.0 it took like 20 minutes to download and install an update, and about 3 hours to "optimise". They ended up backgrounding the process, and then they switched to something else, which I think is the bootstrap.)
(The 68k -> PPC fat binaries were actually two separate binaries of the program, and the bootstrap just picked which "fork" in HFS to read the binary from, which you can't do on Linux easily.)
omalleys@msu.edu wrote:
Quoting Gordan Bobic gordan@bobich.net:
It'd have to be more finely grained than sub-architecture since a kernel for one target won't necessarily work on other CPU of the same sub-architecture (e.g. a Kirkwood kernel won't work on all ARMv5 processors).
Is there a way around this? I mean like being able to make a megakernal with a ton of modules that has support for every board and autodetects hardware?
When you look at the defconfigs for arm, you see about 50 of them. It would be nice to have a "generic" arm, or a generic arm5 configuration that would be a megakernel with all the hardware in modules or something that can autodetect the board and proc. Even broken out to armv5, armv6, etc would be nice.
Whether the kernel is modifiable to allow for that, I don't know, but it certainly doesn't seem to be possible to do that with the current vanilla kernel.
If the split is not hardfp, I would seriously consider looking at bootstrapping FAT binaries for optimisations between v5 and v7. Im pretty sure this is how Apple did some of the optimisations for Altivec for OS X which means some of this code maybe sitting in the Darwin archives.
I haven't tested this myself, but I seem to remember somebody here reporting that the typical improvement from optimizing for ARMv7 while sticking with softfp was in the low single-figure percentages. I'm not sure it justifies the effort.
(Apple started off by using something similar to the perl script to relink, with OS 10.0 it took like 20 minute to download and install an update, and about 3 hours to "optimise", they ended up backgrounding the process and then they switched to something else which I think is the bootstrap.)
(the 68k-> PPC fat binaries, actually was two separate binaries of the program, and the bootstrap just picked which "fork" in HFS to read for the binary from which you can't do on linux easily.)
IMO the fat binary support should be handled at the compiler level, not by post-processing. There's also the issue of shared libraries - you'd need the dynamic linker to be aware of it too. And you might end up having /lib5s, /lib5h, /lib6s, /lib6h, /lib7s, /lib7h (like we have /lib and /lib64 on x86/x86-64).
All that seems like a lot more effort than maintaining two separate builds, and I cannot think of a reasonable use case where it would be of vital importance to have binary compatibility. Why bother with binary compatibility when you have the source. :)
Gordan
Quoting Gordan Bobic gordan@bobich.net:
omalleys@msu.edu wrote:
Quoting Gordan Bobic gordan@bobich.net:
It'd have to be more finely grained than sub-architecture since a kernel for one target won't necessarily work on other CPU of the same sub-architecture (e.g. a Kirkwood kernel won't work on all ARMv5 processors).
Is there a way around this? I mean like being able to make a megakernal with a ton of modules that has support for every board and autodetects hardware?
When you look at the defconfigs for arm, you see about 50 of them. It would be nice to have a "generic" arm, or a generic arm5 configuration that would be a megakernel with all the hardware in modules or something that can autodetect the board and proc. Even broken out to armv5, armv6, etc would be nice.
Whether the kernel is modifiable to allow for that, I don't know, but it certainly doesn't seem to be possible to do that with the current vanilla kernel.
I -assume- it is possible, but I don't know for sure either. It would help quite a bit even if it was just forward compatible.
If the split is not hardfp, I would seriously consider looking at bootstrapping FAT binaries for optimisations between v5 and v7. Im pretty sure this is how Apple did some of the optimisations for Altivec for OS X which means some of this code maybe sitting in the Darwin archives.
I haven't tested this myself, but I seem to remember somebody here reporting that the typical improvement from optimizing for ARMv7 while sticking with softfp was in the low single figure % numbers. I'm not sure it justifies the effort.
If you can slide NEON optimisation in, it would make it worth it for certain programs.
(Apple started off by using something similar to the perl script to relink, with OS 10.0 it took like 20 minute to download and install an update, and about 3 hours to "optimise", they ended up backgrounding the process and then they switched to something else which I think is the bootstrap.)
(the 68k-> PPC fat binaries, actually was two separate binaries of the program, and the bootstrap just picked which "fork" in HFS to read for the binary from which you can't do on linux easily.)
IMO the fat binary support should be handled on the compiler level, not post-processing. There's also the issue of dlls - you'd need the dynamic linking to be aware of it too. And you might end up having /lib5s, /lib5h, /lib6s, /lib6h, /lib7s, /lib7h (like we have /lib and /lib64 on x86/x86-64).
Actually I don't think you can mix hard and soft float per system, so that has to be a separate release. :P
There should be, say, a 3-4 character designation for the /lib directory, where say libhnc would mean lib with hard-float, NEON and CUDA support. I'm not sure the h is needed, but the point is more that it needs to be standardized in an extensible, non-confusing format. Like if there were a haploid vector processor, then you don't have a libh for both hard-fp and a libh for vector.
All that seems like a lot more effort than maintaining two separate builds, and I cannot think of a reasonable use case where it would be of vital importance to have binary compatibility. Why bother with binary compatibility when you have the source. :)
The issue I see is that ARM appears to be moving quite fast right now, and in order to keep up, I don't think it is wise to keep releasing a new distro for every new arch. I just don't think that is a good habit to get into. Or else our distribution ends up a mess like the kernel is.
By having "fat" compatibility, you get the ability to speed up the 20 packages that you can get significant improvement from, without having the manpower to work out the other 1300. I am guessing the majority of the issues with the distro are related to the whole ARM arch and not just a subset of the arch.
Also, I see this as a bigger issue moving forward and especially when you start adding vectorization optimisation to the equation and the armv21 "lightning" processor is released.
--
To sound like a hypocrite...
I actually don't have an issue with a build of, say, ARMv7 hardfp for F15, especially if the patches are getting pushed upstream; by the time Koji gets to F15, I assume 99% of the bugs will be fixed and it will be merely a recompile. :P
Hi,
I am going to work on a kernel RPM for my bachelor thesis. I plan to get a DreamPlug and make a kernel RPM for Kirkwood processors. After that I can work on other devices.
Peter
Did anyone ever get llvm/clang working?
It -says- it is fast, has good optimization, produces faster binaries, is aimed at generating better errors, and has good tools for debugging. :) The darwin-arm (and x86) ports are production quality.
There isn't support for EABI or < armv6 in the ARM-backend yet.
But it sounds like it might be a lot more wicked at catching bugs.
omalleys@msu.edu wrote:
Did anyone ever get llvm/clang working?
It -says- it is fast, has good optimization, produces faster binaries, is aimed at generating better errors, and has good tools for debugging. :) The darwin-arm (and x86) ports are production quality.
There isn't support for EABI or < armv6 in the ARM-backend yet.
But it sounds like it might be a lot more wicked at catching bugs.
Sounds interesting. C/C++ code should be compiler agnostic anyway, so testing with more than one compiler is always a good thing.
Gordan
On 2011-04-01 14:50, omalleys@msu.edu wrote:
Did anyone ever get llvm/clang working?
llvm/clang runs fine on my ARM EABI Linux systems.
It -says- it is fast, has good optimization, produces faster binaries, is aimed at generating better errors, and has good tools for debugging. :) The darwin-arm (and x86) ports are production quality.
There isn't support for EABI or < armv6 in the ARM-backend yet.
I have tested clang/llvm under EABI Debian Squeeze for ARM targeting armv4t+ and EABI Ubuntu Natty for ARM targeting armv7+.
Try packaging it for Fedora; it should simply work.
But it sounds like it might be a lot more wicked at catching bugs.
Yes, it has quite colourful error reporting :)
On Sat, Apr 2, 2011 at 1:50 AM, omalleys@msu.edu wrote:
Did anyone ever get llvm/clang working?
It -says- it is fast, has good optimization, produces faster binaries, is aimed at generating better errors, and has good tools for debugging. :) The darwin-arm (and x86) ports are production quality. There isn't support for EABI or < armv6 in the ARM backend yet.
I'm having a bit of a look at this for Linaro at the moment. LLVM is quite respectable, and generates code that is slower than GCC but generally in the same ballpark. Of the three benchmarks I've tried, two took 8 % longer to run on an A9 and pybench took more like 40 % longer to run. pybench is sensitive to having a good inner loop though.
-- Michael
On 2011-04-02 00:20, Michael Hope wrote:
On Sat, Apr 2, 2011 at 1:50 AM,omalleys@msu.edu wrote:
Did anyone ever get llvm/clang working?
It -says- it is fast, has good optimization, produces faster binaries, is aimed at generating better errors, and has good tools for debugging. :) The darwin-arm (and x86) ports are production quality. There isn't support for EABI or < armv6 in the ARM backend yet.
I'm having a bit of a look at this for Linaro at the moment. LLVM is quite respectable, and generates code that is slower than GCC but generally in the same ballpark. Of the three benchmarks I've tried, two took 8 % longer to run on an A9 and pybench took more like 40 % longer to run. pybench is sensitive to having a good inner loop though.
-- Michael
Hi Michael!
As far as I know, LLVM defaults to ARMv4T code generation unless it is told that it is allowed to use newer instructions.
This LLVM bug tracked how llvm and clang implemented x86 CPU feature auto-detection code to make clang generate the best code available for any given host when using -march=native: http://www.llvm.org/bugs/show_bug.cgi?id=5389
I have added an initial cpu features auto-detection code for ARM to that bug-report for the LLVM part.
Do you get better performance on your A9 tests when running clang with clang -mcpu=generic -mattr=+neon,-thumb2,+v6,+vfp2 ?
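The auto-detection approach in that bug report could be sketched like this on ARM Linux, where the kernel exposes a "Features" line in /proc/cpuinfo. The flag-to-attr mapping below is illustrative only; the real detection code belongs inside LLVM itself.

```python
# Sketch of -march=native-style detection on ARM Linux: map the kernel's
# /proc/cpuinfo "Features" flags onto LLVM -mattr switches.
# ATTR_MAP is an illustrative assumption, not LLVM's actual table.
ATTR_MAP = {"neon": "+neon", "vfp": "+vfp2", "thumbee": "+thumb2"}

def mattr_from_cpuinfo(text):
    """Build a comma-separated -mattr value from cpuinfo text."""
    for line in text.splitlines():
        if line.lower().startswith("features"):
            flags = line.split(":", 1)[1].split()
            return ",".join(ATTR_MAP[f] for f in flags if f in ATTR_MAP)
    return ""

sample = ("Processor : ARMv7 Processor rev 2 (v7l)\n"
          "Features : swp half thumb fastmult vfp edsp neon vfpv3\n")
print(mattr_from_cpuinfo(sample))   # → +vfp2,+neon
```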
Cheers Xerxes
On Sat, Apr 2, 2011 at 12:04 PM, Xerxes Ranby xerxes@zafena.se wrote:
On 2011-04-02 00:20, Michael Hope wrote:
On Sat, Apr 2, 2011 at 1:50 AM,omalleys@msu.edu wrote:
Did anyone ever get llvm/clang working?
It -says- it is fast, has good optimization, produces faster binaries, is aimed at generating better errors, and has good tools for debugging. :) The darwin-arm (and x86) ports are production quality. There isn't support for EABI or < armv6 in the ARM backend yet.
I'm having a bit of a look at this for Linaro at the moment. LLVM is quite respectable, and generates code that is slower than GCC but generally in the same ballpark. Of the three benchmarks I've tried, two took 8 % longer to run on an A9 and pybench took more like 40 % longer to run. pybench is sensitive to having a good inner loop though.
-- Michael
Hi Michael!
As far as I know, LLVM defaults to ARMv4T code generation unless it is told that it is allowed to use newer instructions.
This LLVM bug tracked how llvm and clang implemented x86 CPU feature auto-detection code to make clang generate the best code available for any given host when using -march=native: http://www.llvm.org/bugs/show_bug.cgi?id=5389
I have added an initial cpu features auto-detection code for ARM to that bug-report for the LLVM part.
Do you get better performance on your A9 tests when running clang with clang -mcpu=generic -mattr=+neon,-thumb2,+v6,+vfp2
I've only done a first pass, but I'm fairly sure I used clang -mcpu=cortex-a9 -mfpu=neon. I don't think clang supports ARMv4T at all. I'm working on automating the benchmarks at the moment and I'll add llvm into that. Better than some throw-away, unreproducible results :)
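For what it's worth, even a trivial harness goes a long way toward avoiding throw-away numbers: run each case several times and report the median rather than a single sample. A minimal sketch (not Michael's actual Linaro setup):

```python
import time

def median_runtime(fn, runs=5):
    """Time fn() several times and return the median sample, which is
    less noisy than a single run on a loaded machine."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return samples[runs // 2]

# Example: a pybench-like tight inner loop as the workload.
t = median_runtime(lambda: sum(range(100000)))
print("median: %.6fs" % t)
```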
-- Michael
Quoting Michael Hope michael.hope@linaro.org:
On Sat, Apr 2, 2011 at 12:04 PM, Xerxes Ranby xerxes@zafena.se wrote:
On 2011-04-02 00:20, Michael Hope wrote:
On Sat, Apr 2, 2011 at 1:50 AM,omalleys@msu.edu wrote:
Did anyone ever get llvm/clang working?
It -says- it is fast, has good optimization, produces faster binaries, is aimed at generating better errors, and has good tools for debugging. :) The darwin-arm (and x86) ports are production quality. There isn't support for EABI or < armv6 in the ARM backend yet.
I'm having a bit of a look at this for Linaro at the moment. LLVM is quite respectable, and generates code that is slower than GCC but generally in the same ballpark. Of the three benchmarks I've tried, two took 8 % longer to run on an A9 and pybench took more like 40 % longer to run. pybench is sensitive to having a good inner loop though.
-- Michael
Hi Michael!
As far as I know, LLVM defaults to ARMv4T code generation unless it is told that it is allowed to use newer instructions.
This LLVM bug tracked how llvm and clang implemented x86 CPU feature auto-detection code to make clang generate the best code available for any given host when using -march=native: http://www.llvm.org/bugs/show_bug.cgi?id=5389
I have added an initial cpu features auto-detection code for ARM to that bug-report for the LLVM part.
Do you get better performance on your A9 tests when running clang with clang -mcpu=generic -mattr=+neon,-thumb2,+v6,+vfp2
I've only done a first pass, but I'm fairly sure I used clang -mcpu=cortex-a9 -mfpu=neon. I don't think clang supports ARMv4T at all. I'm working on automating the benchmarks at the moment and I'll add llvm into that. Better than some throw-away, unreproducible results :)
According to what I understood from their web docs (which may or may not be accurate), the "known issues" listed for the ARM backend in the 2.8 release were: armv4 doesn't have Thumb support yet, and EABI is unsupported for all processors.
I was unclear whether that was just the llvm/clang toolchain or it also included the llvm-gccX toolchain.
LLVM 2.9 is scheduled to be released today (April 4th). 2.8 is still listed as the current version on their website, so there might be a delay or it may not be updated yet.
Quoting omalleys@msu.edu:
Quoting Michael Hope michael.hope@linaro.org:
On Sat, Apr 2, 2011 at 12:04 PM, Xerxes Ranby xerxes@zafena.se wrote:
On 2011-04-02 00:20, Michael Hope wrote:
On Sat, Apr 2, 2011 at 1:50 AM,omalleys@msu.edu wrote:
Did anyone ever get llvm/clang working?
It -says- it is fast, has good optimization, produces faster binaries, is aimed at generating better errors, and has good tools for debugging. :) The darwin-arm (and x86) ports are production quality. There isn't support for EABI or < armv6 in the ARM backend yet.
I'm having a bit of a look at this for Linaro at the moment. LLVM is quite respectable, and generates code that is slower than GCC but generally in the same ballpark. Of the three benchmarks I've tried, two took 8 % longer to run on an A9 and pybench took more like 40 % longer to run. pybench is sensitive to having a good inner loop though.
As far as I know, LLVM defaults to ARMv4T code generation unless it is told that it is allowed to use newer instructions.
This LLVM bug tracked how llvm and clang implemented x86 CPU feature auto-detection code to make clang generate the best code available for any given host when using -march=native: http://www.llvm.org/bugs/show_bug.cgi?id=5389
I have added an initial cpu features auto-detection code for ARM to that bug-report for the LLVM part.
Do you get better performance on your A9 tests when running clang with clang -mcpu=generic -mattr=+neon,-thumb2,+v6,+vfp2
I've only done a first pass, but I'm fairly sure I used clang -mcpu=cortex-a9 -mfpu=neon. I don't think clang supports ARMv4T at all. I'm working on automating the benchmarks at the moment and I'll add llvm into that. Better than some throw-away, unreproducible results :)
According to what I understood from their web docs (which may or may not be accurate), the "known issues" listed for the ARM backend in the 2.8 release were: armv4 doesn't have Thumb support yet, and EABI is unsupported for all processors.
I was unclear whether that was just the llvm/clang toolchain or it also included the llvm-gccX toolchain.
LLVM 2.9 is scheduled to be released today (April 4th). 2.8 is still listed as the current version on their website, so there might be a delay or it may not be updated yet.
I will correct myself: it is armv6 that has issues with Thumb support. I don't know if that means that everything < armv6 has the same issue or not.
On performance: Phoronix did benchmark testing of gcc 4.6, gcc 4.5, llvm/clang and llvm/dragonegg on x86/x86_64:
http://www.phoronix.com/scan.php?page=article&item=gcc_46_llvm29&num...
It doesn't appear as though llvm is necessarily faster. I'm actually kind of interested in how much more colourful the warnings are. :)
(BTW, I wasn't actually suggesting -changing- compilers at this point without EABI support.) However, more colourful error messages may be useful.
FWIW, LLVM 2.9 was released today. It now supports Thumb for armv6 and higher, as well as many other optimizations for the ARM platform. Still no EABI support, but if I read it correctly it does alignment checking.
On Tue, 2011-03-29 at 12:03 -0400, omalleys@msu.edu wrote:
Quoting Gordan Bobic gordan@bobich.net:
It'd have to be more finely grained than sub-architecture since a kernel for one target won't necessarily work on other CPU of the same sub-architecture (e.g. a Kirkwood kernel won't work on all ARMv5 processors).
Is there a way around this? I mean, something like being able to make a megakernel with a ton of modules that has support for every board and autodetects hardware?
Kind of. There is something called Flattened Device Tree (an improvement over the older ATAGS) that is making its way upstream and into U-Boot. My intention is to build upon this, to have a kernel that can get all kinds of platform information from U-Boot and flexibly load modules or quirks for various boards. It won't get you completely away from having multiple kernels for different arch rev optimizations, but it will improve the situation. Right now, the kernel does get a machine ID passed into it from the bootloader, and does a bunch of fixups, but this can be taken a lot further and made a lot more generic.
Doing broken out modules for multi-arch isn't going to be possible. But that's ok. We can pick a low common denominator kernel for e.g. ARMv5 and have an ARMv7 kernel and try to make as much else flexible at run time as possible. I will post some comments and followup as we begin poking at the device tree bits and considering what we can do.
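To make the run-time flexibility idea concrete: the machine ID the bootloader passes in already surfaces as the "Hardware" line in /proc/cpuinfo on ARM, so a boot script could pick board-specific modules from a table. A rough sketch only; the board-to-module table is a guess at plausible entries, not a shipped configuration:

```python
# Sketch: choose board-specific kernel modules from the "Hardware" line
# that the ARM kernel exposes in /proc/cpuinfo. The table entries are
# illustrative guesses, not a maintained list.
BOARD_MODULES = {
    "Marvell SheevaPlug Reference Board": ["mv643xx_eth", "mvsdio"],
    "OMAP4 Panda board": ["omap_hsmmc", "smsc95xx"],
}

def modules_for(cpuinfo_text):
    """Return the module list for the board named in cpuinfo_text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("Hardware"):
            board = line.split(":", 1)[1].strip()
            return BOARD_MODULES.get(board, [])
    return []

sample = ("Processor : Feroceon 88FR131 rev 1 (v5l)\n"
          "Hardware : Marvell SheevaPlug Reference Board\n")
print(modules_for(sample))   # → ['mv643xx_eth', 'mvsdio']
```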
Jon.
On Tue, Mar 29, 2011 at 2:10 AM, Jon Masters jcm@redhat.com wrote:
On Sat, 2011-03-26 at 21:10 +0000, Matthew Wilson wrote:
- Some kernel build strategy.
There are a couple of us looking into this at the moment. The thinking (thus far, only really started pondering recently) goes that we need a kernel RPM but using the F13 kernel is basically certain death in terms of the number of extra patches needed, etc. Therefore, we'll take 2.6.38 and do a rawhide-like kernel RPM that is also installable on F13 to get going. I'm thinking we'll start with an OMAP kernel RPM that works on BeagleBoard-xM and PandaBoard and work from there.
I'll be at LF/ELC the next couple of weeks, but the hope is to have something in motion by then and posted here. If anyone else is already working on kernel packaging, please ping me so we don't duplicate efforts in putting this together :)
Jon.
Count me in for testing the OMAP kernels. I've spent the past couple of
days trying to get a 2.6.38 kernel for my BeagleBoard-xM that has working ethernet and video (I can only seem to get one or the other). Do you plan on going with the vanilla kernel, or using the linux-omap branch for instance?
I haven't been working on the packaging part yet (I need a working configuration first), but I would be interested in helping out.
Rich
On Sat, 2011-03-26 at 21:10 +0000, Matthew Wilson wrote:
- Armv7 / VFP / NEON support to squeeze a bit more performance out
(where appropriate to the h/w).
FWIW, I think (eventually), moving to an ARMv7 base has a lot of benefit, with not a (lot) of drawback. After all, Fedora ARM is new enough that there isn't a lot of legacy out there (there will always be old ARM boards people want to use, certainly), and all of the boards now being produced are based on Cortex (or similar) designs with v7. There are one or two notable exceptions, but it's obvious where things are headed. It's really just a question of /when/ to switch IMHO.
- More "spreading the word" on appropriate mailing lists / groups
that Fedora on ARM is worth using.
Actually, engaging with marketing and design, and websites, and ambassadors, etc. are probably all good things to do too. It'd be great if we had some logos and bumper stickers, and general buzz :)
It's probably worth gathering some data on h/w and experiences as the beta progresses. Any objections to my creating a wiki page to track and summarise this?
That sounds like a great idea! I'm running Fedora ARM on a bunch of BeagleBoards, and some PandaBoards at the moment. I have a DreamPlug on the way, and a SheevaPlug that may get re-used, but hopefully not before I've convinced myself that ARMv7 is a good base ;)
My most fun bit of ARM hardware right now is (still) my collection of empeg car stereos, which alas won't likely run Fedora any time soon, but provide hours of entertainment nonetheless.
Jon.
Jon Masters wrote:
On Sat, 2011-03-26 at 21:10 +0000, Matthew Wilson wrote:
- Armv7 / VFP / NEON support to squeeze a bit more performance out
(where appropriate to the h/w).
FWIW, I think (eventually), moving to an ARMv7 base has a lot of benefit, with not a (lot) of drawback. After all, Fedora ARM is new enough that there isn't a lot of legacy out there (there will always be old ARM boards people want to use, certainly), and all of the boards now being produced are based on Cortex (or similar) designs with v7. There are one or two notable exceptions, but it's obvious where things are headed. It's really just a question of /when/ to switch IMHO.
I think you are overestimating the proliferation of ARMv7. There are a lot of capable ARMv5 devices out there, such as the SheevaPlug/GuruPlug.
Is the plan to offer both v5 and v7 builds (same way as there is an x86 and x86-64 build)? Or v7 only?
It's probably worth gathering some data on h/w and experiences as the beta progresses. Any objections to my creating a wiki page to track and summarise this?
That sounds like a great idea! I'm running Fedora ARM on a bunch of BeagleBoards, and some PandaBoards at the moment. I have a DreamPlug on the way, and a SheevaPlug that may get re-used, but hopefully not before I've convinced myself that ARMv7 is a good base ;)
IMO committing to ARMv7 to the point of exclusion of everything else would be a bad idea at the moment. The primary motivation for the v7 switch is performance, and there are still a number of ARMv5 and ARMv6 devices out there that are more than adequate on the ARM performance scale.
Gordan
Gordan Bobic gordan@bobich.net writes:
Jon Masters wrote:
On Sat, 2011-03-26 at 21:10 +0000, Matthew Wilson wrote:
- Armv7 / VFP / NEON support to squeeze a bit more performance out
(where appropriate to the h/w).
FWIW, I think (eventually), moving to an ARMv7 base has a lot of benefit, with not a (lot) of drawback. After all, Fedora ARM is new enough that there isn't a lot of legacy out there (there will always be old ARM boards people want to use, certainly), and all of the boards now being produced are based on Cortex (or similar) designs with v7. There are one or two notable exceptions, but it's obvious where things are headed. It's really just a question of /when/ to switch IMHO.
I think you are overestimating the proliferation of ARMv7. There are a lot of capable ARMv5 devices out there, such as the SheevaPlug/GuruPlug.
Is the plan to offer both v5 and v7 builds (same way as there is an x86 and x86-64 build)? Or v7 only?
Please do not alienate us SheevaPlug/GuruPlug users.
IMO committing to ARMv7 to the point of exclusion of everything else would be a bad idea at the moment. The primary motivation for the v7 switch is performance, and there are still a number of ARMv5 and ARMv6 devices out there that are more than adequate on the ARM performance scale.
Seconded. Please do not drop v5 support.
Gordan
-derek
On Tue, 2011-03-29 at 10:45 -0400, Derek Atkins wrote:
Gordan Bobic gordan@bobich.net writes:
Jon Masters wrote:
On Sat, 2011-03-26 at 21:10 +0000, Matthew Wilson wrote:
- Armv7 / VFP / NEON support to squeeze a bit more performance out
(where appropriate to the h/w).
FWIW, I think (eventually), moving to an ARMv7 base has a lot of benefit, with not a (lot) of drawback. After all, Fedora ARM is new enough that there isn't a lot of legacy out there (there will always be old ARM boards people want to use, certainly), and all of the boards now being produced are based on Cortex (or similar) designs with v7. There are one or two notable exceptions, but it's obvious where things are headed. It's really just a question of /when/ to switch IMHO.
I think you are overestimating the proliferation of ARMv7. There are a lot of capable ARMv5 devices out there, such as the SheevaPlug/GuruPlug.
Is the plan to offer both v5 and v7 builds (same way as there is an x86 and x86-64 build)? Or v7 only?
Please do not alienate us SheevaPlug/GuruPlug users.
:) For the time being, it's probably sufficient to do an optimized version of things like glibc (akin to how there used to be i686 packages for it when other bits were i386). Notice I said "when", not that this should happen today or tomorrow. But it's worth considering, which is of course another reason why having data on users really helps.
I haven't checked the smolt situation yet, but that could also be another source of data in due course.
Jon.
Jon Masters wrote:
On Tue, 2011-03-29 at 10:45 -0400, Derek Atkins wrote:
Gordan Bobic gordan@bobich.net writes:
Jon Masters wrote:
On Sat, 2011-03-26 at 21:10 +0000, Matthew Wilson wrote:
- Armv7 / VFP / NEON support to squeeze a bit more performance out
(where appropriate to the h/w).
FWIW, I think (eventually), moving to an ARMv7 base has a lot of benefit, with not a (lot) of drawback. After all, Fedora ARM is new enough that there isn't a lot of legacy out there (there will always be old ARM boards people want to use, certainly), and all of the boards now being produced are based on Cortex (or similar) designs with v7. There are one or two notable exceptions, but it's obvious where things are headed. It's really just a question of /when/ to switch IMHO.
I think you are overestimating the proliferation of ARMv7. There are a lot of capable ARMv5 devices out there, such as the SheevaPlug/GuruPlug.
Is the plan to offer both v5 and v7 builds (same way as there is an x86 and x86-64 build)? Or v7 only?
Please do not alienate us SheevaPlug/GuruPlug users.
:) For the time being, it's probably sufficient to do an optimized version of things like glibc (akin to how there used to be i686 packages for it when other bits were i386). Notice I said "when", not that this should happen today or tomorrow. But it's worth considering, which is of course another reason why having data on users really helps.
I haven't checked the smolt situation yet, but that could also be another source of data in due course.
I thought the v7 switch was going to coincide with the hardfp switch. If that's the case, it'll have to be a complete distro rebuild. I thought the consensus was that ARMv7 switch without a hardfp switch at the same time was generally deemed to not be worth the effort. Is that not the case?
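One reason hardfp forces a complete rebuild is that softfp and hardfp objects can't be mixed, and the distinction is recorded in each binary's ELF header (readelf -h shows it). A sketch of checking that flag word; the constants are the EF_ARM_ABI_FLOAT_* bits from the ARM ELF ABI, and the synthetic header at the end is fabricated purely for illustration:

```python
import struct

# EF_ARM_ABI_FLOAT_* bits in e_flags, meaningful for EABI version 5.
EF_ARM_ABI_FLOAT_HARD = 0x00000400
EF_ARM_ABI_FLOAT_SOFT = 0x00000200

def float_abi(header):
    """Classify a 32-bit little-endian ARM ELF header as hard/soft-float.

    header: the first 52 bytes of the file; e_flags sits at offset 36.
    """
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    e_flags = struct.unpack_from("<I", header, 36)[0]
    if e_flags & EF_ARM_ABI_FLOAT_HARD:
        return "hard"
    if e_flags & EF_ARM_ABI_FLOAT_SOFT:
        return "soft"
    return "unknown"

# Synthetic header with EABI v5 + hard-float set, for illustration only.
fake = bytearray(52)
fake[:4] = b"\x7fELF"
struct.pack_into("<I", fake, 36, 0x05000400)
print(float_abi(bytes(fake)))   # → hard
```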
Gordan
On Tue, 2011-03-29 at 10:45 -0400, Derek Atkins wrote:
Gordan Bobic gordan@bobich.net writes:
Jon Masters wrote:
On Sat, 2011-03-26 at 21:10 +0000, Matthew Wilson wrote:
- Armv7 / VFP / NEON support to squeeze a bit more performance out
(where appropriate to the h/w).
FWIW, I think (eventually), moving to an ARMv7 base has a lot of benefit, with not a (lot) of drawback. After all, Fedora ARM is new enough that there isn't a lot of legacy out there (there will always be old ARM boards people want to use, certainly), and all of the boards now being produced are based on Cortex (or similar) designs with v7. There are one or two notable exceptions, but it's obvious where things are headed. It's really just a question of /when/ to switch IMHO.
I think you are overestimating the proliferation of ARMv7. There are a lot of capable ARMv5 devices out there, such as the SheevaPlug/GuruPlug.
Is the plan to offer both v5 and v7 builds (same way as there is an x86 and x86-64 build)? Or v7 only?
Please do not alienate us SheevaPlug/GuruPlug users.
IMO committing to ARMv7 to the point of exclusion of everything else would be a bad idea at the moment. The primary motivation for the v7 switch is performance, and there are still a number of ARMv5 and ARMv6 devices out there that are more than adequate on the ARM performance scale.
Seconded. Please do not drop v5 support.
The current plan is to continue armv5tel (softfp) and to add armv7hl (hardfp+vfp) in the f15 cycle (post F15-armv5tel release). So no, there are no plans to drop v5 at this time.
(An open question would be whether there's value in a small armv7l initiative (softfp+v7+vfp+neon) for some of the key libraries in the interim, to work with the rest of the armv5tel userspace).
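For reference, the sort of GCC target flags the builds above imply might look like this. The exact flag choices are an assumption for illustration, not the project's settled configuration:

```shell
# armv5tel (softfp) build -- the continuing baseline
CFLAGS_V5="-march=armv5te -mfloat-abi=soft"

# armv7hl (hardfp + VFP) build -- proposed for the F15 cycle
CFLAGS_V7HL="-march=armv7-a -mfloat-abi=hard -mfpu=vfpv3-d16"

# Interim armv7l idea: v7 + VFP + NEON with soft-float calling
# convention, so it stays link-compatible with armv5tel userspace.
CFLAGS_V7SOFTFP="-march=armv7-a -mfloat-abi=softfp -mfpu=neon"
```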
-Chris