Good to talk with you...where are you based now anyway, OOI? I ought to
know this, but perhaps that neuron is temporarily unavailable.
On 03/22/2012 09:36 AM, Josh Boyer wrote:
On Wed, Mar 21, 2012 at 10:13:45PM -0400, Jon Masters wrote:
> 0). Hardware. We are making certain plans for ARM hardware to be
> available for Fedora developers, in addition to the existing FAS-based
> approach we have in Seneca today. What developer hardware would you
> consider that you would want/need in order to be able to support ARM as
> a target in Fedora? Do we need to buy each of you an ARM board? :)
I think that depends on a number of things. I would have put this
question last if I were making the list to be honest. However I do
appreciate that you're actively thinking about how to increase the ARM
developer community ;)
Ah yes, but I do need to sweeten the pot somehow :) In all seriousness,
though, we're not opposed to buying the kernel team ARM boards if it
will mean you're able to play with stuff and help with feedback.
> - We build every ARM kernel every time
> - We build e.g. versatile in general and periodically build the other
> kernels by use of a SPEC file macro knob
> I think the second is probably a non-starter for you. So assuming the
> first is the preferred option, then my question becomes:
The second option isn't necessarily a non-starter. It depends on how
it's done. If the macro were in place to enable additional board kernels
and was toggled ON from Beta->release, then it might be acceptable. You
would leave it off during the ramp-up to Beta to limit the impact,
perhaps. Not great, but not entirely unreasonable, and somewhat along
the lines of how we handle the debug options for the kernel during
development.
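A knob like that could be sketched in kernel.spec roughly as follows.
This is only an illustration of the idea; the macro name
%with_arm_boards and the board kernels listed are hypothetical, not
taken from the actual Fedora spec:

```spec
# Hypothetical knob: off during the ramp-up to Beta, flipped on afterwards.
# (Macro and board names are illustrative, not from the real kernel.spec.)
%global with_arm_boards 0

%ifarch %{arm}
# Always build the baseline qemu/Versatile Express kernel
%define with_versatile 1
%if %{with_arm_boards}
# Additional board kernels, built only once the knob is flipped on
%define with_kirkwood 1
%define with_omap 1
%endif
%endif
```

Flipping one %global at Beta would then enable the extra board builds
without touching the rest of the spec, much like the debug-option
toggles mentioned above.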
Excellent. This was my original thought precisely. However, I think
Matthew has strong reservations against this, so I would like to know
what the collective consensus is on this one.
My biggest requirement in this regard is that whatever ARM kernels get
built are done from a single SRPM in a single koji build invocation. If
more kernels are needed, that's a new SRPM and a new koji build. I don't
want to see someone trying to rebuild an SRPM already built in koji to
enable more kernels and shove those into the repo from some side
channel.
Right. How right you are. This actually did get suggested in my team. I
have tried to shoot it down as a crazy idea and I now have this
wonderful email of yours to back up that opinion :) Two things I think
are immutable on this topic:
1). We have only one kernel package. One SPEC/SRPM. Others might make
non-Fedora kernels for $random, and that's fine, but that's not
official. For example, we won't pull in non-upstream patches for Nvidia
systems to get better framebuffer support; those patches are headed
upstream, and indeed we had a chat with them about it recently.
Meanwhile, if they want to host a separate testing kernel with those
(open source) patches, they are free to do so, as anyone else is.
2). There is only one official tagged build at a time. We don't re-issue
builds or do some other nonsense.
> - How long is acceptable for a kernel build to take?
I'm not sure there's a hard and fast timeframe. Even the x86 builds
vary for things like building release vs. debug kernels. I'm pretty
sure we'd know it when we saw something bad, though. For example:
Those are probably way too long. This one:
Right, those are what you get when you build all of the possibilities we
might enable. That would be the absolute worst-case build time for a
post-Beta (or whatever) kernel if we split it as discussed above. Note:
1). Koji could be modified to issue those builds in parallel, so the
real build time would be similar to the following one you liked more.
2). The Enterprise hardware even this year will be at least twice as
fast as what produced that worst-case time. Current souped-up cellphone
hardware has crappy cache sizes, etc. The first gen of 32-bit servers
will have 4MB caches, 4 cores, and up to 4GB of RAM, which is much
improved. The
second generation will take that memory over 8GB and the cores will be a
lot higher performance. That's next year. The year after, we can start
to look at 64-bit systems. We can run a 32-bit userspace on a 64-bit
system so we can use those beefier builders later on.
seems totally acceptable to me, but looking at what it produced I would
gather it's mostly useless for the ARM project (perhaps just the qemu
kernel?).
Yes. That's the minimum kernel we discussed. That's "versatile" as in
Versatile Express, the marketing-ish name of the FPGA development system
ARM produce that can support many of their different cores. For example,
we have a physical VE fitted with the hardware from a server
semiconductor vendor, in place prior to silicon fabrication. The thing
is, VE is both real hardware and the default for qemu.
I'm not sure building only the qemu kernel is a great way to
go either, both from an ARM _and_ kernel perspective. It might be better
to build that + 1 board kernel from each of armv5tel and armv7hl.
That would be fine too. I mean, we could compromise and choose a set of
qemu+obvious board as the minimum and then turn on the rest at certain
points in the cycle. Whatever works for you guys.
I know that gets into "which do we build" but at least it gives people
something to test on their hardware. I looked, but didn't seem to find
a build configured like that. It would be interesting to see what the
time looks like.
Ok. That will be useful data to have. We can look at that.
> Now, a trivial SPEC file or general non-arch kernel bug is likely to
> fail on x86 well before it fails on ARM. That will of course take care
> of many generic build issues that will fail a parent Koji build quickly.
Yes. However, waiting a day for your build to complete only to have
it be canceled because some ARM variant died is one of the larger
concerns.
Absolutely. Clearly, a day is nuts. But it sounds like a few hours for
ARM isn't as big a deal. Would 2 or 4 hours be your upper limit?
> issue. So, another option is that we modify Koji to submit
> tasks across multiple builders. i.e. all of the ARM subpackages (and
> this would happen for x86 variants too) would get submitted to builders
> at the same time, rather than linearly. It's a lot of work, but it's
> doable. Especially if it's the only option. It comes down to how long
> you guys think is the longest you are willing to wait for an overall
> all-arch build of the kernel to take in Koji.
That sounds like you'd need to both modify Koji to handle that, and
rework package spec files to somehow tell Koji "hey, we're going to be X
variants from this SRPM". I'm not sure the kernel.spec lends itself to
easily farming out builds to separate machines at the moment, since it
just loops through all variants it needs to build and does them in a
serial fashion (e.g. i686, i686-debug, i686-PAE, i686-PAEdebug).
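The serial loop described above can be sketched like this. The variant
names come from the example in the thread; the actual build step is
replaced by an echo, since this only illustrates the control flow (each
iteration stands in for a full BuildKernel invocation, so wall time is
the sum over all variants):

```shell
#!/bin/sh
# Sketch of how the %build section walks the variant list one at a time.
# Variant names are from the i686 example above; the echo is a
# placeholder for the real (long-running) kernel build of each variant.
for variant in i686 i686-debug i686-PAE i686-PAEdebug; do
    echo "building ${variant}"
done
```

Farming those out to separate koji builders would mean replacing the
loop body with per-variant task submissions, which is exactly the
spec-plus-Koji rework being discussed here.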
That option is interesting, but I don't consider it feasible for f18 at
all and I would even be surprised if koji was able to handle it for f19.
Right. That's more of an "if we have to", but also something we could go
in with the expectation of doing. For example, maybe you could live with
a 2-4 hour build time for ARM kernels (or whatever you guys decide) for
now, and then we try to parallelize and make that go away later on. Or
maybe it's a non-starter for the other guys to wait that long for a
build, so they'll say they need this kind of Koji change as a gate.
> 2). Impact. What does making ARM a primary arch mean to you? Not what
> we think it means, but what do you think it means to you in terms of:
> - How will this positively or negatively impact your role?
Positive: Broader Fedora dominance, I guess. Maybe some new learning.
Negative: More bugs, more platforms to support, more time spent.
That might be pretty black and white, but at least I came up with _some_
positives. Honestly, if ARM is handled well upstream and by the ARM
team, it might not impact us more than waiting a bit for builds. If it
_isn't_ handled well upstream it can quickly become a nightmare.
Yep. We're counting on the level of upstream investment, Linaro
involvement, and all of that goodness to help us out.
> - What level of disruption are you willing to accept?
Can you elaborate on disruption? We already spend time mucking around
with ARM configs during rebase (not intelligently mind you), and I don't
think we grumbled too much about it. Aside from more bugs, is there
something else you were thinking?
Well, it's really whether you're ok with that kind of thing sometimes.
We can't promise to be perfect, and in fact we can promise crap like
that will come up...so it's really whether you can live with that.
> - Are you willing to make any changes to workflow?
I think we're pretty open minded. We might not be as gung-ho about
something as the owners, but if the changes aren't unreasonable we tend
to evaluate them fairly. It's going to depend on what's proposed.
I'll counter with a question to you. Who (as in name(s)) is going to be
the main ARM kernel person? My ARM knowledge is limited at best, I
don't think Dave has much at all, and I'm not sure about Justin but I'd
be surprised if he was a closet ARM expert. Do you have people in mind
to handle the HW-specific issues that pop up, so we can assign bugs to
them?
In fact we do. We have a Red Hat FY13 hire for our ARM team who will
represent us in upstream kernel work. That's mostly for v8 but I would
expect (given the person we have in mind) that they will also handle
32-bit. I'll be able to share names soon. Beyond that, we have others in
the team internally and in the community who can help debug issues.
> We don't intend for this to be a trainwreck. If it's not ready, it won't
> be PA, period. But we want to know how flexible you guys are willing to
> be as we figure this out. If you want to wait until we have total
> parity, single zimage, and we're just like x86, that is good to know
> (and discuss) right now :)
Personally, I don't think it has to be one vmlinux + 18 FDTs to be
suitable. A smaller set of board specific kernels is probably doable,
depending on some of the factors mentioned above. I see that as similar
to how we built e.g. ppc, ppc-smp, ppc64, ppc64-kdump. Nobody
particularly _liked_ doing that but we lived.
Right. To me, it's about having a plan to kill off the separate flavors.
As long as we don't do what $other_distro does and consider it hunky
dory having a bazillion ARM kernels, and go in aggressively wanting to
cut it down to one in the end, I think we could live. But that's my
opinion, and I'm not you guys.
Thanks for the feedback, you rock!