Hi,
there's a new Firefox update waiting in Bodhi, and we can't push it to stable because of the new rules. We recommend updating to it ASAP, as it fixes a public critical 0-day vulnerability (https://bugzilla.mozilla.org/show_bug.cgi?id=607222).
Bodhi links: https://admin.fedoraproject.org/updates/firefox-3.5.15-1.fc12,xulrunner-1.9....
https://admin.fedoraproject.org/updates/firefox-3.6.12-1.fc13,xulrunner-1.9....
ma.
Martin Stransky wrote:
there's a new Firefox update waiting in Bodhi, and we can't push it to stable because of the new rules. We recommend updating to it ASAP, as it fixes a public critical 0-day vulnerability (https://bugzilla.mozilla.org/show_bug.cgi?id=607222).
Looks like the F13 build got karma quickly enough to land directly in stable after all; the F12 build, on the other hand, was stuck in testing for 2 days before finally making it out to stable. Yet another blatant example of the failure of the Update Acceptance Criteria, needlessly exposing our users to critical vulnerabilities.
(And no, giving yet another special exception to Firefox wouldn't be a solution. ;-) This problem can hit any other app as well.)
Kevin Kofler
On Sun, 31 Oct 2010 04:37:38 +0100, Kevin wrote:
Martin Stransky wrote:
there's a new Firefox update waiting in Bodhi, and we can't push it to stable because of the new rules. We recommend updating to it ASAP, as it fixes a public critical 0-day vulnerability (https://bugzilla.mozilla.org/show_bug.cgi?id=607222).
Looks like the F13 build got karma quickly enough to land directly in stable after all; the F12 build, on the other hand, was stuck in testing for 2 days before finally making it out to stable. Yet another blatant example of the failure of the Update Acceptance Criteria, needlessly exposing our users to critical vulnerabilities.
(And no, giving yet another special exception to Firefox wouldn't be a solution. ;-) This problem can hit any other app as well.)
Kevin Kofler
Okay, feedback time.
Lately, there have been several attempts at urging proventesters (and not just testers in general) to give positive karma to aging critpath updates. It has also been decided by someone (or maybe even a committee) to spam proventesters daily with "[old_testing_critpath]" messages for all three dist releases, with no way to unsubscribe from them (other than leaving the proventesters group, which at least one person has threatened to do, or filtering those messages).
Dunno about other testers (and there aren't many yet), but I abandoned F-12 long ago, due to lack of time, when F-13 became the one to use on a daily basis. And some time before the F-14 Beta, my desktop was switched to boot F-14 by default. That's the only opportunity to evaluate F-14 early and possibly find issues prior to its release. By contrast, most of Fedora's users will wait for the final release, and many will wait even longer. It's highly likely that bugzilla can confirm that.
F-14 is the only way forward, and I don't like to spend time on F-13 and older anymore. That also applies to packages I maintain or monitor. I simply don't see the user base [target group] anymore.
About positive karma in bodhi: I don't feel comfortable signing off on arbitrary updates just because they didn't crash for me after five minutes. With some updates, regressions have slipped through already. And the more bugs an update addresses, whether with patches or a version upgrade, the more careful I would like to be when testing it. Also, in my book, an update working on F-14 may still malfunction on an older dist release due to differences in dependencies and the core setup. I still don't understand why some non-security updates are rushed out, sometimes without even the package maintainer(s) having tested them at all.
On 10/31/2010 03:18 AM, Michael Schwendt wrote:
On Sun, 31 Oct 2010 04:37:38 +0100, Kevin wrote:
Martin Stransky wrote:
there's a new Firefox update waiting in Bodhi, and we can't push it to stable because of the new rules. We recommend updating to it ASAP, as it fixes a public critical 0-day vulnerability (https://bugzilla.mozilla.org/show_bug.cgi?id=607222).
Looks like the F13 build got karma quickly enough to land directly in stable after all; the F12 build, on the other hand, was stuck in testing for 2 days before finally making it out to stable. Yet another blatant example of the failure of the Update Acceptance Criteria, needlessly exposing our users to critical vulnerabilities.
(And no, giving yet another special exception to Firefox wouldn't be a solution. ;-) This problem can hit any other app as well.)
Kevin Kofler
Okay, feedback time.
Lately, there have been several attempts at urging proventesters (and not just testers in general) to give positive karma to aging critpath updates. It has also been decided by someone (or maybe even a committee) to spam proventesters daily with "[old_testing_critpath]" messages for all three dist releases, with no way to unsubscribe from them (other than leaving the proventesters group, which at least one person has threatened to do, or filtering those messages).
Dunno about other testers (and there aren't many yet), but I abandoned F-12 long ago, due to lack of time, when F-13 became the one to use on a daily basis. And some time before the F-14 Beta, my desktop was switched to boot F-14 by default. That's the only opportunity to evaluate F-14 early and possibly find issues prior to its release. By contrast, most of Fedora's users will wait for the final release, and many will wait even longer. It's highly likely that bugzilla can confirm that.
F-14 is the only way forward, and I don't like to spend time on F-13 and older anymore. That also applies to packages I maintain or monitor. I simply don't see the user base [target group] anymore.
About positive karma in bodhi: I don't feel comfortable signing off on arbitrary updates just because they didn't crash for me after five minutes. With some updates, regressions have slipped through already. And the more bugs an update addresses, whether with patches or a version upgrade, the more careful I would like to be when testing it. Also, in my book, an update working on F-14 may still malfunction on an older dist release due to differences in dependencies and the core setup. I still don't understand why some non-security updates are rushed out, sometimes without even the package maintainer(s) having tested them at all.
I am willing to work with the older, still supported, distros, but would really appreciate test cases, since most of the critical-path bugs an update addresses are not common and I haven't run into them. That said, if an update remains without karma and the release is within a month of end-of-life, then the update could be left in updates-testing and the docs changed to provide a warning. I don't think there would be much storage impact in keeping an updates-testing repo around on the mirrors that choose to provide the release. Most just delete the release anyway.
Regards, OldFart
On Sun, 31 Oct 2010 10:16:41 -0400 "Clyde E. Kunkel" clydekunkel7734@cox.net wrote:
On 10/31/2010 03:18 AM, Michael Schwendt wrote:
Okay, feedback time.
Lately, there have been several attempts at urging proventesters (and not just testers in general) to give positive karma to aging critpath updates. It has also been decided by someone (or maybe even a committee) to spam proventesters daily with "[old_testing_critpath]" messages for all three dist releases, with no way to unsubscribe from them (other than leaving the proventesters group, which at least one person has threatened to do, or filtering those messages).
Yeah, I am not sure at all how useful those emails are. There are a variety of ways to see what things need testing; sending proventesters a bunch of emails isn't likely to be a very nice one.
Dunno about other testers (and there aren't many yet), but I abandoned F-12 long ago, due to lack of time, when F-13 became the one to use on a daily basis. And some time before the F-14 Beta, my desktop was switched to boot F-14 by default. That's the only opportunity to evaluate F-14 early and possibly find issues prior to its release. By contrast, most of Fedora's users will wait for the final release, and many will wait even longer. It's highly likely that bugzilla can confirm that.
I've got an F12 VM in which I can easily run command-line testing and some limited GUI testing (via VNC).
F-14 is the only way forward, and I don't like to spend time on F-13 and older anymore. That also applies to packages I maintain or monitor. I simply don't see the user base [target group] anymore.
It's really hard to tell I'm afraid, but I understand your feeling there.
I also would love some simple per-package test plans. My thought was that our testing could start out with a very low bar and go from there: i.e., does it install? Does it run and display a window? etc. If there was a test plan page for better testing, that would be great. If there was a way to test a specific bug, that would also be great. (Several updates have included test cases in their bugs that were great for testing and confirming the fix.)
Anyhow, I agree we should look at adjusting the F12 setup (although it's only got 1 month left), as well as look at dropping the emails to proventesters about old testing stuff.
Thanks for the constructive feedback Clyde and Michael.
Specific plans for changing things for the better welcome.
kevin
On Mon, Nov 01, 2010 at 09:35:49AM -0600, Kevin Fenzi wrote:
On Sun, 31 Oct 2010 10:16:41 -0400 "Clyde E. Kunkel" clydekunkel7734@cox.net wrote:
On 10/31/2010 03:18 AM, Michael Schwendt wrote:
Okay, feedback time.
Lately, there have been several attempts at urging proventesters (and not just testers in general) to give positive karma to aging critpath updates. It has also been decided by someone (or maybe even a committee) to spam proventesters daily with "[old_testing_critpath]" messages for all three dist releases, with no way to unsubscribe from them (other than leaving the proventesters group, which at least one person has threatened to do, or filtering those messages).
Yeah, I am not sure at all how useful those emails are. There are a variety of ways to see what things need testing; sending proventesters a bunch of emails isn't likely to be a very nice one.
I've disabled this behavior in git, and will update our production instance this week. This spam is unnecessary now that this information is available in the updates-testing digests sent to the test list.
luke
On Sun, 2010-10-31 at 04:37 +0100, Kevin Kofler wrote:
Martin Stransky wrote:
there's a new Firefox update waiting in Bodhi, and we can't push it to stable because of the new rules. We recommend updating to it ASAP, as it fixes a public critical 0-day vulnerability (https://bugzilla.mozilla.org/show_bug.cgi?id=607222).
Looks like the F13 build got karma quickly enough to land directly in stable after all; the F12 build, on the other hand, was stuck in testing for 2 days before finally making it out to stable. Yet another blatant example of the failure of the Update Acceptance Criteria, needlessly exposing our users to critical vulnerabilities.
Kevin, could you *please* not word things like that? There's just no need for it.
I already wrote this to -test a couple of days ago:
http://lists.fedoraproject.org/pipermail/test/2010-October/095135.html
and we're discussing it there. I think the thread demonstrates things tend to go much more constructively if you avoid throwing words like 'blatant' and 'failure' and 'needlessly' around. We designed a policy, put it into effect, now we're observing how well it works and we can modify its implementation on the fly. It doesn't need to be done in an adversarial spirit.
On Sun, 2010-10-31 at 18:06 -0700, Adam Williamson wrote:
On Sun, 2010-10-31 at 04:37 +0100, Kevin Kofler wrote:
Yet another blatant example of the failure of the Update Acceptance Criteria, needlessly exposing our users to critical vulnerabilities.
Kevin, could you *please* not word things like that? There's just no need for it.
I already wrote this to -test a couple of days ago:
http://lists.fedoraproject.org/pipermail/test/2010-October/095135.html
and we're discussing it there. I think the thread demonstrates things tend to go much more constructively if you avoid throwing words like 'blatant' and 'failure' and 'needlessly' around.
Did we not fail our users? Was there a real need to fail them? (As a non-native speaker, I won't judge the relative merits of "blatant" vs. "sucks".)
We designed a policy, put it into effect, now we're observing how well it works and we can modify its implementation on the fly. It doesn't need to be done in an adversarial spirit.
Given that _this exact scenario_ was repeatedly brought up since the very start of the update acceptance criteria proposals, I think some frustration is quite justified. This situation is not really a surprise, and Fedora did not have to unnecessarily expose users to a vulnerability in order to relearn this lesson.
In addition to being constructive about remedying the situation, shouldn't we try to be more constructive about _not introducing such situations_ in the first place? Mirek
On Mon, 2010-11-01 at 02:18 +0100, Miloslav Trmač wrote:
Kevin, could you *please* not word things like that? There's just no need for it.
I already wrote this to -test a couple of days ago:
http://lists.fedoraproject.org/pipermail/test/2010-October/095135.html
and we're discussing it there. I think the thread demonstrates things tend to go much more constructively if you avoid throwing words like 'blatant' and 'failure' and 'needlessly' around.
Did we not fail our users? Was there a real need to fail them? (As a non-native speaker, I won't judge the relative merits of "blatant" vs. "sucks".)
I didn't say that what Kevin said was *wrong*, I said it wasn't the best way to word it.
We designed a policy, put it into effect, now we're observing how well it works and we can modify its implementation on the fly. It doesn't need to be done in an adversarial spirit.
Given that _this exact scenario_ was repeatedly brought up since the very start of the update acceptance criteria proposals, I think some frustration is quite justified. This situation is not really a surprise, and Fedora did not have to unnecessarily expose users to a vulnerability in order to relearn this lesson.
On the other hand, other scenarios were also brought up, which have not come to pass - for instance, the same thing happening to Fedora 13 or Fedora 14. If we had simply accepted the predictions of doom and not implemented the policy at all, we would be without its benefits for the development of F13 and F14.
In addition to being constructive about remedying the situation, shouldn't we try to be more constructive about _not introducing such situations_ in the first place? Mirek
Saying 'oh dear, this might not work, we'd better not try' is rarely a good approach, IMHO. It's better to try things, with the proviso that you accept when they aren't working and withdraw or modify them.
On Mon, Nov 1, 2010 at 5:08 PM, Adam Williamson awilliam@redhat.com wrote:
Saying 'oh dear, this might not work, we'd better not try' is rarely a good approach, IMHO. It's better to try things, with the proviso that you accept when they aren't working and withdraw or modify them.
I would agree with this, if the "let's try it and see what happens" approach made it a point to set up, before putting a new process in place, a set of specific conditions that trigger a withdrawal of the process: "Hey, we are going to try this, and if this or that other thing happens, then we are going to stop doing it, put our heads together, and have a little re-think about modifying the process."
-jef
On Mon, 2010-11-01 at 10:08 -0700, Adam Williamson wrote:
We designed a policy, put it into effect, now we're observing how well it works and we can modify its implementation on the fly. It doesn't need to be done in an adversarial spirit.
Given that _this exact scenario_ was repeatedly brought up since the very start of the update acceptance criteria proposals, I think some frustration is quite justified. This situation is not really a surprise, and Fedora did not have to unnecessarily expose users to a vulnerability in order to relearn this lesson.
On the other hand, other scenarios were also brought up, which have not come to pass - for instance, the same thing happening to Fedora 13 or Fedora 14. If we had simply accepted the predictions of doom and not implemented the policy at all, we would be without its benefits for the development of F13 and F14.
A problem with this line of argument is that the benefits are not quite apparent to me.
In addition to being constructive about remedying the situation, shouldn't we try to be more constructive about _not introducing such situations_ in the first place?
Saying 'oh dear, this might not work, we'd better not try' is rarely a good approach, IMHO.
That is a cost-benefit comparison. "New" does not imply "improved".
It's better to try things, with the proviso that you accept when they aren't working and withdraw or modify them.
It's even better not to dismiss known problems with the policy, and to make sure the policy can handle them properly from the start. This was not a surprise; this was an "unforced error". Mirek
On Mon, 2010-11-01 at 18:29 +0100, Miloslav Trmač wrote:
On the other hand, other scenarios were also brought up, which have not come to pass - for instance, the same thing happening to Fedora 13 or Fedora 14. If we had simply accepted the predictions of doom and not implemented the policy at all, we would be without its benefits for the development of F13 and F14.
A problem with this line of argument is that the benefits are not quite apparent to me.
The policies prevented us from shipping a number of completely broken updates, which is exactly what they were intended to do. I don't have a command handy to do a search for rejected proposed critpath updates for F14, but if you figure it out, you can see the precise results of the policy there.
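For what it's worth, once update records were in hand, the search would reduce to a simple filter. The sketch below assumes hypothetical field names (`release`, `critpath`, `status`) and status values for illustration; they are not taken from the real Bodhi schema, and the toy data stands in for an actual query result:

```python
# Hypothetical sketch: filter already-fetched Bodhi update records for
# critpath updates that never made it to stable. Field names and status
# values are assumptions for illustration, not the real Bodhi schema.

def rejected_critpath(updates, release="F14"):
    """Return titles of critpath updates for `release` that were pulled back."""
    return [u["title"] for u in updates
            if u.get("release") == release
            and u.get("critpath")
            and u.get("status") in ("unpushed", "obsolete")]

# Toy data standing in for a real query result:
sample = [
    {"title": "somepkg-1.0-1.fc14", "release": "F14",
     "critpath": True, "status": "unpushed"},
    {"title": "otherpkg-2.3-1.fc14", "release": "F14",
     "critpath": True, "status": "stable"},
]
print(rejected_critpath(sample))
```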
In addition to being constructive about remedying the situation, shouldn't we try to be more constructive about _not introducing such situations_ in the first place?
Saying 'oh dear, this might not work, we'd better not try' is rarely a good approach, IMHO.
That is a cost-benefit comparison. "New" does not imply "improved".
We had an extensive discussion about the benefits of testing important updates at the time the policy went into effect. I don't think it's really necessary to re-hash the entire thing. For the record, I did not say nor do I believe that "new" inevitably implies "improved".
It's better to try things, with the proviso that you accept when they aren't working and withdraw or modify them.
It's even better not to dismiss known problems with the policy, and to make sure the policy can handle them properly from the start. This was not a surprise; this was an "unforced error".
Sorry, but characterizing it as a 'known problem' is misleading. It's easy to forecast failure, and you'll likely always be correct in *some* cases if you forecast enough failures. Only if you precisely forecast only the failures that actually happen, and do not forecast any failures that don't happen, can your forecast be considered truly reliable. If this had truly been a 'known problem' then those who predicted it would also have correctly chosen *not* to predict failure in the case of Fedora 13 and Fedora 14. The fact is that they did predict a failure which has not, in fact, come to pass (neither F13 nor F14 have long queues of old critpath updates).
On Mon, 2010-11-01 at 10:39 -0700, Adam Williamson wrote:
On Mon, 2010-11-01 at 18:29 +0100, Miloslav Trmač wrote:
It's better to try things, with the proviso that you accept when they aren't working and withdraw or modify them.
It's even better not to dismiss known problems with the policy, and to make sure the policy can handle them properly from the start. This was not a surprise; this was an "unforced error".
Sorry, but characterizing it as a 'known problem' is misleading. It's easy to forecast failure, and you'll likely always be correct in *some* cases if you forecast enough failures. Only if you precisely forecast only the failures that actually happen, and do not forecast any failures that don't happen, can your forecast be considered truly reliable.
The accuracy of the prediction, and especially the accuracy of the timing, is not at all relevant. In fact, it is _precisely_ the unknown nature of risks that requires thinking about them in advance.
People don't wear helmets because they know when something will hit their head, but because they don't know when, or even if, it will. Mirek
On Mon, 2010-11-01 at 18:51 +0100, Miloslav Trmač wrote:
Sorry, but characterizing it as a 'known problem' is misleading. It's easy to forecast failure, and you'll likely always be correct in *some* cases if you forecast enough failures. Only if you precisely forecast only the failures that actually happen, and do not forecast any failures that don't happen, can your forecast be considered truly reliable.
The accuracy of the prediction, and especially the accuracy of the timing, is not at all relevant. In fact, it is _precisely_ the unknown nature of risks that requires thinking about them in advance.
Which rather contradicts your description of it as a 'known problem', yes?
People don't wear helmets because they know when something will hit their head, but because they don't know when, or even if, it will. Mirek
That's not really a relevant analogy. You can't 'wear a helmet' in this case. There's no way we could have implemented the policy and 'worn a helmet' by allowing updates to happen without the conditions of the policy being fulfilled; that would effectively negate the policy.
If you want to continue with the analogy, what you seem to be saying is that we should never have implemented the policy in the first place, which is not analogous to wearing a helmet; it's analogous to never leaving the house just in case something hits you on the head.
On Mon, 2010-11-01 at 10:55 -0700, Adam Williamson wrote:
On Mon, 2010-11-01 at 18:51 +0100, Miloslav Trmač wrote:
Sorry, but characterizing it as a 'known problem' is misleading. It's easy to forecast failure, and you'll likely always be correct in *some* cases if you forecast enough failures. Only if you precisely forecast only the failures that actually happen, and do not forecast any failures that don't happen, can your forecast be considered truly reliable.
The accuracy of the prediction, and especially the accuracy of the timing, is not at all relevant. In fact, it is _precisely_ the unknown nature of risks that requires thinking about them in advance.
Which rather contradicts your description of it as a 'known problem', yes?
No; the existence of the problem was known, only the timing and precise extent was not.
If you want to continue with the analogy, what you seem to be saying is that we should never have implemented the policy in the first place,
That is one option; another would be adding an "I'm the maintainer and I really mean it" checkbox for security updates (with FESCo/Fedora QA/somebody else reviewing the cases retrospectively, if they feel like it); yet another is not enforcing the policy on security updates at all, as I seem to remember was proposed (or even implemented?) at one time. Mirek
Adam Williamson wrote:
The policies prevented us from shipping a number of completely broken updates, which is exactly what they were intended to do. I don't have a command handy to do a search for rejected proposed critpath updates for F14, but if you figure it out, you can see the precise results of the policy there.
They also let several completely broken updates through and then delayed the FIXES for those updates, exactly as I had been warning all along.
For example, my firstboot update which was required to make the Xfce spin work again (there was an additional problem with the LXDE spin, but that one was present both before and after that update, and could only be noticed after that update was pushed) got delayed.
The fact is that they did predict a failure which has not, in fact, come to pass (neither F13 nor F14 have long queues of old critpath updates).
Even ONE old critpath update is a failure.
Kevin Kofler
On Mon, 01 Nov 2010 19:26:43 +0100 Kevin Kofler kevin.kofler@chello.at wrote:
They also let several completely broken updates through and then delayed the FIXES for those updates, exactly as I had been warning all along.
Cite(s)?
For example, my firstboot update which was required to make the Xfce spin work again (there was an additional problem with the LXDE spin, but that one was present both before and after that update, and could only be noticed after that update was pushed) got delayed.
If you mean: https://admin.fedoraproject.org/updates/firstboot-1.113-4.fc14
it didn't break the Xfce spin. The Xfce spin still uses GDM, which means metacity is pulled in, which means firstboot was fine.
So, this case seems to me like a poor example for your position.
kevin
Adam Williamson wrote:
On the other hand, other scenarios were also brought up, which have not come to pass - for instance, the same thing happening to Fedora 13 or Fedora 14.
Nonsense. We just do not have enough evidence yet to show such things happening for F13 and F14. They CAN, and IMHO WILL, happen, e.g.:
* Will a critical security fix for Konqueror get karma as quickly as the one for Firefox did? (This is especially relevant considering that some people want to put the whole KDE workspace into critpath. But even non-critpath updates need karma to get pushed.)
* Would that Firefox update have gotten karma that quickly without the nagmail to the devel ML? Do you think the approach of sending nagmail scales?
And at least for F14 development, there have been other, less critical failures, which I've already pointed out in the respective threads.
If we had simply accepted the predictions of doom and not implemented the policy at all, we would be without its benefits for the development of F13 and F14.
What benefits? I see only problems, in fact the very ones I've warned about right from the beginning.
Kevin Kofler
Adam Williamson wrote:
I already wrote this to -test a couple of days ago:
http://lists.fedoraproject.org/pipermail/test/2010-October/095135.html
and we're discussing it there. I think the thread demonstrates things tend to go much more constructively if you avoid throwing words like 'blatant' and 'failure' and 'needlessly' around. We designed a policy, put it into effect, now we're observing how well it works and we can modify its implementation on the fly. It doesn't need to be done in an adversarial spirit.
There's exactly one constructive thing to do: repealing this set of policies (Critical Path and Update Acceptance Criteria) in its entirety.
An update should go stable when the maintainer says so, karma should be purely informational feedback for the maintainer.
Kevin Kofler
On Mon, 2010-11-01 at 03:54 +0100, Kevin Kofler wrote:
There's exactly one constructive thing to do: repealing this set of policies (Critical Path and Update Acceptance Criteria) in its entirety.
An update should go stable when the maintainer says so, karma should be purely informational feedback for the maintainer.
I disagree. The evidence you cite does not support this conclusion. We implemented the policies for three releases. There are significant problems with one release. This does not justify the conclusion that the policies should be entirely repealed.
Adam Williamson wrote:
I disagree. The evidence you cite does not support this conclusion. We implemented the policies for three releases. There are significant problems with one release. This does not justify the conclusion that the policies should be entirely repealed.
The evidence in THIS thread is for one release, but the same problems have been found everywhere. In different degrees of severity, sure, but they're there.
Kevin Kofler
On Mon, 2010-11-01 at 10:09 -0700, Adam Williamson wrote:
I disagree. The evidence you cite does not support this conclusion. We implemented the policies for three releases. There are significant problems with one release. This does not justify the conclusion that the policies should be entirely repealed.
I don't mind the process in general, but have some points where it can improve
Very often the same update is submitted for several releases, and it's kind of pointless to require full karma in all of them (requiring some in each release is OK). If one release has got full karma, then it's reasonable to require less karma on the other releases receiving the same update. The risk of a non-obvious regression in only one release is fairly low; more likely there will be very obvious release-specific regressions, like dependency failures when another package has been split/merged, and related fuckups.
We also need some obvious ways for users in general to subscribe to testing updates of the stuff they care about, to expand the user base that performs testing of updates. Generally, running a system with updates-testing always enabled is scary, and not many want to take that leap. But I think that if we could give users the ability to subscribe to testing packages X, Y, Z of their choice, getting update and testing notifications for those packages only from updates-testing, it would speed things up considerably.
In addition the package management & update request process could do with some serious makeover to streamline the process and reduce risk for error, but that's topic for another thread.
Regards Henrik
On Mon, 2010-11-01 at 22:54 +0100, Henrik Nordström wrote:
On Mon, 2010-11-01 at 10:09 -0700, Adam Williamson wrote:
I disagree. The evidence you cite does not support this conclusion. We implemented the policies for three releases. There are significant problems with one release. This does not justify the conclusion that the policies should be entirely repealed.
I don't mind the process in general, but have some points where it can improve
Very often the same update is submitted for several releases, and it's kind of pointless to require full karma in all of them (requiring some in each release is OK). If one release has got full karma, then it's reasonable to require less karma on the other releases receiving the same update. The risk of a non-obvious regression in only one release is fairly low; more likely there will be very obvious release-specific regressions, like dependency failures when another package has been split/merged, and related fuckups.
This is a reasonable modification of the idea that an update should only require karma for one release (which would be nice if it were true but unfortunately isn't). In practice, though, there isn't much wiggle room for requiring 'less' karma. Non-critpath updates already require only one +1 to go through without the waiting time requirement. Critpath updates only require +1 from a proventester and +1 from any other tester (proven or not).
I think I'd probably support the proposal that if a critpath update exists in identical form for multiple releases, and it has passed full critpath karma requirements for one release, it should require only +1 from any tester on the other releases to go out.
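The thresholds described above can be sketched as a small predicate. This is a simplification based only on the rules as stated in this thread; the real Bodhi logic also involves autokarma thresholds and waiting periods:

```python
# Rough sketch of the karma rules as described in this thread; the real
# Bodhi implementation differs in details (autokarma, waiting periods).

def can_push_stable(critpath, votes):
    """votes: list of (is_proventester, karma) pairs for one update."""
    positive = [proven for proven, karma in votes if karma > 0]
    if not critpath:
        # Non-critpath: a single +1 waives the waiting-time requirement.
        return len(positive) >= 1
    # Critpath: a +1 from a proventester plus a +1 from any other tester.
    return any(positive) and len(positive) >= 2
```

Under the proposal above, the critpath branch would collapse to the non-critpath one for releases where an identical update has already passed full critpath requirements elsewhere.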
We also need some obvious ways for users in general to subscribe to testing updates of the stuff they care about, to expand the user base that performs testing of updates. Generally, running a system with updates-testing always enabled is scary, and not many want to take that leap. But I think that if we could give users the ability to subscribe to testing packages X, Y, Z of their choice, getting update and testing notifications for those packages only from updates-testing, it would speed things up considerably.
That's also a nice idea (though it's somewhat complex given that updates are *actually* pushed out as sets, and a given update may be affected by another given update even if they don't have an explicit relationship through dependencies).
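As a stopgap on the pull side, yum's existing `includepkgs` repo option can approximate the per-package subscription today (the package names here are just examples), though it covers only filtering, not the notification half of the idea:

```ini
; /etc/yum.repos.d/fedora-updates-testing.repo (excerpt, hypothetical edit)
; Only the listed packages will ever be pulled from updates-testing.
[updates-testing]
enabled=1
includepkgs=firefox xulrunner
```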
mån 2010-11-01 klockan 15:12 -0700 skrev Adam Williamson:
This is a reasonable modification of the idea that an update should only require karma for one release (which would be nice if it were true but unfortunately isn't). In practice, though, there isn't much wiggle room for requiring 'less' karma. Non-critpath updates already require only one +1 to go through without the waiting time requirement. Critpath updates only require +1 from a proventester and +1 from any other tester (proven or not).
Right. I was mostly thinking about the autokarma, I think, and about not normally doing pushes until after the waiting period.
I think I'd probably support the proposal that if a critpath update exists in identical form for multiple releases, and it has passed full critpath karma requirements for one release, it should require only +1 from any tester on the other releases to go out.
Yes, for the same reasons explained before. If it's provenly tested in one release, then chances are very high it works in the other releases as well, unless it doesn't work at all.
Regards Henrik
[changing topic to split this out into its own thread]
mån 2010-11-01 klockan 15:12 -0700 skrev Adam Williamson:
We also need some obvious way for users in general to subscribe to testing updates of the stuff they care about, to expand the userbase that performs testing of updates. Generally running a system with updates-testing always enabled is scary, and not many want to take that leap. But I think that if we could give users the ability to subscribe to testing packages X, Y, Z of their choice, and to get update & testing notifications for those packages only from updates-testing, it would speed things up considerably.
That's also a nice idea (though it's somewhat complex given that updates are *actually* pushed out as sets, and a given update may be affected by another given update even if they don't have an explicit relationship through dependencies).
Not sure it's bad to expose that complexity to the package maintainers. We do allow users to selectively update their systems.
But yes, it may be good to inform users when there are updates to any dependencies, even if a specific version of the dependency is not strictly required.
Regards Henrik
On Mon, Nov 01, 2010 at 10:09:17AM -0700, Adam Williamson wrote:
I disagree. The evidence you cite does not support this conclusion. We implemented the policies for three releases. There are significant problems with one release. This does not justify the conclusion that the policies should be entirely repealed.
It was brought to my attention that current Fedora releases also have problems with delaying important security updates. A fix for a remote code execution vulnerability in proftpd was only pushed to stable with a seven-day delay: https://admin.fedoraproject.org/updates/proftpd-1.3.3c-1.fc13 https://admin.fedoraproject.org/updates/proftpd-1.3.3c-1.fc14
And this is not a theoretical threat: I know that servers in the nearby area have been exploited because of this vulnerability. Delaying such updates seems to be a very bad idea. Even in the unlikely case that the update was broken and made proftpd not start anymore, that is usually not as bad as having the system corrupted by an evil attacker.
Regards Till
On Fri, 2010-11-12 at 20:03 +0100, Till Maas wrote:
On Mon, Nov 01, 2010 at 10:09:17AM -0700, Adam Williamson wrote:
I disagree. The evidence you cite does not support this conclusion. We implemented the policies for three releases. There are significant problems with one release. This does not justify the conclusion that the policies should be entirely repealed.
It was brought to my attention that current Fedora releases also have problems with delaying important security updates. A fix for a remote code execution vulnerability in proftpd was only pushed to stable with a seven-day delay: https://admin.fedoraproject.org/updates/proftpd-1.3.3c-1.fc13 https://admin.fedoraproject.org/updates/proftpd-1.3.3c-1.fc14
And this is not a theoretical threat: I know that servers in the nearby area have been exploited because of this vulnerability. Delaying such updates seems to be a very bad idea. Even in the unlikely case that the update was broken and made proftpd not start anymore, that is usually not as bad as having the system corrupted by an evil attacker.
Thanks for flagging this up.
I'm wondering if perhaps we should devise a system - maybe a sub-group of proventesters - to ensure timely testing of security updates. wdyt?
On Fri, 12 Nov 2010 11:19:22 -0800 Adam Williamson awilliam@redhat.com wrote:
On Fri, 2010-11-12 at 20:03 +0100, Till Maas wrote:
On Mon, Nov 01, 2010 at 10:09:17AM -0700, Adam Williamson wrote:
I disagree. The evidence you cite does not support this conclusion. We implemented the policies for three releases. There are significant problems with one release. This does not justify the conclusion that the policies should be entirely repealed.
It was brought to my attention that current Fedora releases also have problems with delaying important security updates. A fix for a remote code execution vulnerability in proftpd was only pushed to stable with a seven-day delay: https://admin.fedoraproject.org/updates/proftpd-1.3.3c-1.fc13 https://admin.fedoraproject.org/updates/proftpd-1.3.3c-1.fc14
And this is not a theoretical threat: I know that servers in the nearby area have been exploited because of this vulnerability. Delaying such updates seems to be a very bad idea. Even in the unlikely case that the update was broken and made proftpd not start anymore, that is usually not as bad as having the system corrupted by an evil attacker.
Thanks for flagging this up.
I'm wondering if perhaps we should devise a system - maybe a sub-group of proventesters - to ensure timely testing of security updates. wdyt?
Adam, why should security updates wait at all? Do you fear some packager will flag as security updates ones that are not? Surely we can deal with such a maintainer if that happens...
Simo.
On Fri, 2010-11-12 at 14:54 -0500, Simo Sorce wrote:
Adam, why should security updates wait at all? Do you fear some packager will flag as security updates ones that are not? Surely we can deal with such a maintainer if that happens...
I don't have a hugely strong opinion either way, but the stated reason by those who do is that security updates can be broken just like any other. We don't have a magic 'infallible' switch on packagers which we toggle only when they're building a security update. :)
On Fri, 12 Nov 2010 12:02:03 -0800 Adam Williamson awilliam@redhat.com wrote:
On Fri, 2010-11-12 at 14:54 -0500, Simo Sorce wrote:
Adam, why should security updates wait at all? Do you fear some packager will flag as security updates ones that are not? Surely we can deal with such a maintainer if that happens...
I don't have a hugely strong opinion either way, but the stated reason by those who do is that security updates can be broken just like any other. We don't have a magic 'infallible' switch on packagers which we toggle only when they're building a security update. :)
Oh sure, I don't doubt that. But in this case we need to choose the lesser evil. Is it more important to close a security bug, with a (small) risk of breaking a package? Or is it more important to (try to) test it and leave our users exposed to a security threat for a long time?
If we are not comfortable with treating all security issues the same, we can have a flag that skips testing only for "remote exploit" types of security issues. That would reduce the number of exceptions to the most dangerous cases.
What do you think?
Simo.
On Fri, 12 Nov 2010 14:54:28 -0500 Simo Sorce ssorce@redhat.com wrote:
Adam, why should security updates wait at all? Do you fear some packager will flag as security updates ones that are not? Surely we can deal with such a maintainer if that happens...
No. The issue is that in the past sometimes security updates have been rushed out with no testing and broken things badly. ;(
See http://fedoraproject.org/wiki/Updates_Lessons for a small number of examples (yes, anyone is welcome to add others you have run into to the page).
I know of at least dbus, bind, nss and a few others that were security updates, were pushed out with no testing, and turned out to break things.
Perhaps security updates could have a smaller timeout? Or a security group that tests them?
kevin
On Fri, Nov 12, 2010 at 01:14:12PM -0700, Kevin Fenzi wrote:
No. The issue is that in the past sometimes security updates have been rushed out with no testing and broken things badly. ;(
See http://fedoraproject.org/wiki/Updates_Lessons For some small number of examples (yes, anyone is welcome to please add others you have run into to the page).
The documented issues do not seem to be as bad as a system being exploited: they are only about dependency breakage or services not working anymore. There is no major data corruption requiring access to backups and restoring the whole system. But that is what people who use Fedora with proftpd and were exploited have to do.
Regards Till
On Sat, Nov 13, 2010 at 10:21:30AM +0100, Till Maas wrote:
The documented issues do not seem to be as bad as a system being exploited: they are only about dependency breakage or services not working anymore. There is no major data corruption requiring access to backups and restoring the whole system. But that is what people who use Fedora with proftpd and were exploited have to do.
If security updates break functionality then people will stop applying security updates.
On Sat, 2010-11-13 at 14:22 +0000, Matthew Garrett wrote:
On Sat, Nov 13, 2010 at 10:21:30AM +0100, Till Maas wrote:
The documented issues do not seem to be as bad as a system being exploited: they are only about dependency breakage or services not working anymore. There is no major data corruption requiring access to backups and restoring the whole system. But that is what people who use Fedora with proftpd and were exploited have to do.
If security updates break functionality then people will stop applying security updates.
That may be true in general, but I think Till has given a compelling example in which many (most?) users would prefer an update with some probability of being broken to no update. If necessary, we could have a separate repository of "urgent" updates that sysadmins could choose to enable or not based on their security and stability needs.
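If such an "urgent" repository existed, opting in could be a one-file decision for sysadmins. A purely hypothetical sketch (the repository name and URL are invented for illustration; no such repository exists):

```ini
# /etc/yum.repos.d/fedora-updates-urgent.repo (hypothetical)
[updates-urgent]
name=Fedora $releasever - $basearch - Urgent Security Updates
# Invented path; shown only to illustrate the opt-in idea.
baseurl=http://download.fedoraproject.org/pub/fedora/linux/updates/urgent/$releasever/$basearch/
enabled=1
gpgcheck=1
```

Admins who value uptime over patch latency would simply leave it disabled and keep getting the tested updates.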
On Sat, Nov 13, 2010 at 02:22:42PM +0000, Matthew Garrett wrote:
On Sat, Nov 13, 2010 at 10:21:30AM +0100, Till Maas wrote:
The documented issues do not seem to be as bad as a system being exploited: they are only about dependency breakage or services not working anymore. There is no major data corruption requiring access to backups and restoring the whole system. But that is what people who use Fedora with proftpd and were exploited have to do.
If security updates break functionality then people will stop applying security updates.
If there are no security updates, people cannot apply them. So which is worse? If people stop applying updates, then it is at least their decision. If there are no updates, people can only choose not to use Fedora, e.g. either build the applications themselves or use another distribution. But this is not a viable goal.
The optimal case is to provide well-tested security updates fast, but this is not what Fedora achieves. In my example, there is no indication that the update was especially tested, because it did not get any karma. And it was not provided fast.
Regards Till
On Sun, Nov 14, 2010 at 13:59:24 +0100, Till Maas opensource@till.name wrote:
If there are no security updates, people can not apply them. So what is worse? If people stop applying updates, then it is at least their decision. If there are no updates, people can only choose not to use
Many people are going to just pull updates. They aren't going to make a decision on their own.
Security updates aren't all created equal. While the case that was referenced in this was easily remotely exploitable, not all security issues expose a system to that level of risk.
The optimal case is to provide well tested security updates fast, but this is not what Fedora achieves. In my example there is no indication that the update was especially tested, because it did not get any karma. And it was not provided fast.
There is definitely a problem that needs fixing. But I don't think changing the goal to untested security updates provided quickly is the preferred solution.
Perhaps we should have a way to draw attention to high priority updates. Generally we need more testers and need to make them more efficient. (Test plans for packages can make testing more efficient and accurate.)
On Sun, Nov 14, 2010 at 08:03:35AM -0600, Bruno Wolff III wrote:
On Sun, Nov 14, 2010 at 13:59:24 +0100, Till Maas opensource@till.name wrote:
The optimal case is to provide well tested security updates fast, but this is not what Fedora achieves. In my example there is no indication that the update was especially tested, because it did not get any karma. And it was not provided fast.
There is definitely a problem that needs fixing. But I don't think changing the goal to untested security updates provided quickly is the preferred solution.
The root cause for the new update acceptance criteria was that there were updates that broke systems. Now, with the criteria enforced, systems are broken even worse according to the collection of bad update examples, because updates are not being pushed to stable. This was always highlighted as a potential problem, iirc, and yet it happened.
Perhaps we should have a way to draw attention to high priority updates. Generally we need more testers and need to make them more efficient. (Test plans for packages can make testing more efficient and accurate.)
Yes, more manpower would help, but where should it come from?
Regards Till
On Fri, Nov 12, 2010 at 9:14 PM, Kevin Fenzi kevin@scrye.com wrote:
On Fri, 12 Nov 2010 14:54:28 -0500 Simo Sorce ssorce@redhat.com wrote:
Adam, why should security updates wait at all? Do you fear some packager will flag as security updates ones that are not? Surely we can deal with such a maintainer if that happens...
No. The issue is that in the past sometimes security updates have been rushed out with no testing and broken things badly. ;(
See http://fedoraproject.org/wiki/Updates_Lessons for a small number of examples (yes, anyone is welcome to add others you have run into to the page).
I know of at least dbus, bind, nss and a few others that were security updates, were pushed out with no testing, and turned out to break things.
Perhaps security updates could have a smaller timeout?
This reminds me of the excellent "Timing the application of security patches for optimal uptime" paper: http://www.homeport.org/~adam/time-to-patch-usenix-lisa02.pdf Maybe a smaller timeout would do, yet I don't think that we could use a small enough (security-wise) timeout and still be safe from awful regressions.
Or a security group that tests them?
People can't be expected to be knowledgeable about, or interested in, all packages that would need timely security testing. In other words, telling all people in a group that an updated package they don't care about is available would only add noise, while telling relevant testers that packages they care about are available may speed things up. For some reason, this reminds me of RHN, which sends emails only if updates for the *installed* (relevant) packages are available.
We could create a subgroup of proventesters willing to enter information about the packages they know well enough to test in a timely manner. Or we could extend smolt to automate that task (on demand, of course). After all, we all tend to use what we install.
François
On Tue, 2010-11-16 at 23:35 +0100, François Cami wrote:
On Fri, Nov 12, 2010 at 9:14 PM, Kevin Fenzi kevin@scrye.com wrote:
On Fri, 12 Nov 2010 14:54:28 -0500 Simo Sorce ssorce@redhat.com wrote:
Adam, why should security updates wait at all? Do you fear some packager will flag as security updates ones that are not? Surely we can deal with such a maintainer if that happens...
No. The issue is that in the past sometimes security updates have been rushed out with no testing and broken things badly. ;(
See http://fedoraproject.org/wiki/Updates_Lessons for a small number of examples (yes, anyone is welcome to add others you have run into to the page).
I know of at least dbus, bind, nss and a few others that were security updates, were pushed out with no testing, and turned out to break things.
Perhaps security updates could have a smaller timeout?
This reminds me of the excellent "Timing the application of security patches for optimal uptime" paper: http://www.homeport.org/~adam/time-to-patch-usenix-lisa02.pdf Maybe a smaller timeout would do, yet I don't think that we could use a small enough (security-wise) timeout and still be safe from awful regressions.
Funnily enough, that paper concludes that 10 days is an optimal time to delay patch application in order to avoid getting a bad patch that will need later revision. This is close to the 7 days used for pre-release updates in the Updates Policy. So I would take this as another useful data point demonstrating that one doesn't need to push security updates instantaneously to stable, but that they can wait for testing results.
(no policy is immune from huge day zero vulnerabilities with massive exploits, but I'm certain that if there were such an incident with this policy, an exception could be made if the impact was significant)
Jon.
* Jon Masters (jonathan@jonmasters.org) wrote:
On Tue, 2010-11-16 at 23:35 +0100, François Cami wrote:
On Fri, Nov 12, 2010 at 9:14 PM, Kevin Fenzi kevin@scrye.com wrote:
On Fri, 12 Nov 2010 14:54:28 -0500 Simo Sorce ssorce@redhat.com wrote:
Adam, why should security updates wait at all? Do you fear some packager will flag as security updates ones that are not? Surely we can deal with such a maintainer if that happens...
No. The issue is that in the past sometimes security updates have been rushed out with no testing and broken things badly. ;(
See http://fedoraproject.org/wiki/Updates_Lessons for a small number of examples (yes, anyone is welcome to add others you have run into to the page).
I know of at least dbus, bind, nss and a few others that were security updates, were pushed out with no testing, and turned out to break things.
Perhaps security updates could have a smaller timeout?
This reminds me of the excellent "Timing the application of security patches for optimal uptime" paper: http://www.homeport.org/~adam/time-to-patch-usenix-lisa02.pdf Maybe a smaller timeout would do, yet I don't think that we could use a small enough (security-wise) timeout and still be safe from awful regressions.
Funnily enough, that paper concludes that 10 days is an optimal time to delay patch application in order to avoid getting a bad patch that will need later revision. This is close to the 7 days used for pre-release updates in the Updates Policy. So I would take this as another useful data point demonstrating that one doesn't need to push security updates instantaneously to stable, but that they can wait for testing results.
I'd not place as much emphasis on the exact number as on the concept (10 days was the conclusion based on the CVE data available at the time, but there's a lot more data now). Security fixes are not magically immune from testing and validation.
(no policy is immune from huge day zero vulnerabilities with massive exploits, but I'm certain that if there were such an incident with this policy, an exception could be made if the impact was significant)
Yup, I agree. Basic risk assessment.
thanks, -chris
On Fri, Nov 12, 2010 at 11:19:22AM -0800, Adam Williamson wrote:
Thanks for flagging this up.
I'm wondering if perhaps we should devise a system - maybe a sub-group of proventesters - to ensure timely testing of security updates. wdyt?
I am not sure if a smaller group would help here. But what is certainly missing is proper monitoring of updates that need to be tested asap, and something to notify testers or the people in charge about untested updates.
Regards Till
Till Maas opensource@till.name writes:
On Mon, Nov 01, 2010 at 10:09:17AM -0700, Adam Williamson wrote:
I disagree. The evidence you cite does not support this conclusion. We implemented the policies for three releases. There are significant problems with one release. This does not justify the conclusion that the policies should be entirely repealed.
It was brought to my attention that current Fedora releases also have problems with delaying important security updates.
Quite. In my little corner of the system, none of the last several mysql and postgresql updates have gone out with less than a seven-day delay, despite some of them being security updates (admittedly not high-severity ones, but still). And the trend is downhill: out of the last nine such updates, five shipped with zero karma because not even one tester had got round to looking at them. How does it help anyone to delay releases when no testing will happen?
It's absolutely crystal clear to me that we don't have enough tester manpower to make the current policy workable; it's past time to stop denying that. I'd suggest narrowing the policy to a small number of critical packages, for which there might be some hope of it actually working as designed.
regards, tom lane
On Fri, 2010-11-12 at 14:32 -0500, Tom Lane wrote:
Till Maas opensource@till.name writes:
On Mon, Nov 01, 2010 at 10:09:17AM -0700, Adam Williamson wrote:
I disagree. The evidence you cite does not support this conclusion. We implemented the policies for three releases. There are significant problems with one release. This does not justify the conclusion that the policies should be entirely repealed.
It was brought to my attention that current Fedora releases also have problems with delaying important security updates.
Quite. In my little corner of the system, none of the last several mysql and postgresql updates have gone out with less than a seven-day delay, despite some of them being security updates (admittedly not high-severity ones, but still). And the trend is downhill: out of the last nine such updates, five shipped with zero karma because not even one tester had got round to looking at them. How does it help anyone to delay releases when no testing will happen?
It's absolutely crystal clear to me that we don't have enough tester manpower to make the current policy workable; it's past time to stop denying that. I'd suggest narrowing the policy to a small number of critical packages, for which there might be some hope of it actually working as designed.
the policy already differentiates between critpath and non-critpath packages; critpath updates *require* testing, there's no 7-day clause.
it's worth noting that part of the point of the 7-day clause is to cover 'invisible testing'; even if people aren't posting feedback to Bodhi, it's likely that if the update actually is broken, we will find out one way or another within 7 days (some people will post negative feedback to Bodhi but not positive; or we'll get notified on an ML, or forums, or something).
it's also worth noting that this is a communal effort: we don't have a big batch of testing robots who test whatever they're told to test (yet). It's going to work much better if developers take some responsibility for getting their packages tested. if you're packaging something, presumably you know *somebody* who uses it: the idea is that you can ask them to test it and provide the bodhi feedback, not just rely on someone who runs fedora-easy-karma as a matter of course providing feedback.
On Fri, Nov 12, 2010 at 11:42:19AM -0800, Adam Williamson wrote:
it's worth noting that part of the point of the 7-day clause is to cover 'invisible testing'; even if people aren't posting feedback to Bodhi, it's likely that if the update actually is broken, we will find out one way or another within 7 days (some people will post negative feedback to Bodhi but not positive; or we'll get notified on an ML, or forums, or something).
IMHO it is pretty unlikely that people use updates-testing but do not care about posting feedback to Bodhi.
responsibility for getting their packages tested. if you're packaging something, presumably you know *somebody* who uses it: the idea is that you can ask them to test it and provide the bodhi feedback, not just rely on someone who runs fedora-easy-karma as a matter of course providing feedback.
I usually do not know who uses my packages, except for me. And I am also not interested in tracking, for each of my packages, who uses it and on which Fedora release. This is something that can be done centrally if desired. Also, one would need to know at least three other people using different Fedora releases.
Regards Till
On Sun, Nov 21, 2010 at 10:35:31 +0100, Till Maas opensource@till.name wrote:
IMHO it is pretty unlikely that people use updates-testing but do not care about posting feedback to Bodhi.
I usually notice only when something breaks, not when it keeps working.
On Sun, Nov 21, 2010 at 09:59:42PM -0600, Bruno Wolff III wrote:
On Sun, Nov 21, 2010 at 10:35:31 +0100, Till Maas opensource@till.name wrote:
IMHO it is pretty unlikely that people use updates-testing but do not care about posting feedback to Bodhi.
I usually notice only when something breaks, not when it keeps working.
You can very easily report that you have installed some update, used it, and it did not break. This is afaik enough to justify +1 karma. With fedora-easy-karma this takes only a very short time, therefore this is not a very good excuse for the lack of karma in Bodhi.
Regards Till
On Mon, Nov 22, 2010 at 18:15:29 +0100, Till Maas opensource@till.name wrote:
You can very easily report that you have installed some update, used it, and it did not break. This is afaik enough to justify +1 karma.
I thought you needed to do a bit more than just install a package to give a +1. But hopefully this situation is short term and autoQA will catch problems here instead of us humans. For now I just report file conflicts and what look to be missing obsoletes.
On Sun, Nov 28, 2010 at 05:36:58AM -0600, Bruno Wolff III wrote:
On Mon, Nov 22, 2010 at 18:15:29 +0100, Till Maas opensource@till.name wrote:
You can very easily report that you have installed some update, used it, and it did not break. This is afaik enough to justify +1 karma.
I thought you needed to do a bit more than just install a package to give a +1. But hopefully this situation is short term and autoQA will
I wrote to give +1 after using it, and your original problem was that you did not notice when something works. So if you updated firefox yesterday, used it to browse the world wide web without problems, and run fedora-easy-karma today, you can give the firefox update a +1. Fedora-easy-karma shows you how long the rpms from an update have been installed, and you can even query only for updates with rpms that have been installed for at least a certain time. So you only need to remember whether you actually used it.
Regards Till
On Sun, Nov 28, 2010 at 13:38:36 +0100, Till Maas opensource@till.name wrote:
I wrote to give +1 after using it, and your original problem was that you did not notice when something works. So if you updated firefox yesterday, used it to browse the world wide web without problems, and run fedora-easy-karma today, you can give the firefox update a +1. Fedora-easy-karma shows you how long the rpms from an update have been installed, and you can even query only for updates with rpms that have been installed for at least a certain time. So you only need to remember whether you actually used it.
Ah, that's a bit different. I use updates-testing (or rawhide) and have kitchen sink type installs, so lots of updates I get in updates-testing I don't actually use. For things I don't actually use I just end up reporting about installation problems. For things I do use, I usually have only been doing karma when I am following up on a bug I have encountered.
But if easy-karma makes it easy to give karma for things I do use, then it's probably worth taking a look at.
On Fri, 2010-11-12 at 14:32 -0500, Tom Lane wrote:
It's absolutely crystal clear to me that we don't have enough tester manpower to make the current policy workable; it's past time to stop denying that. I'd suggest narrowing the policy to a small number of critical packages, for which there might be some hope of it actually working as designed.
BTW, another point worth noting is that there is not actually a specific rule against +1ing your own updates. It's 'frowned upon', I guess, but actually I think it's probably workable to say it's fine for the developer to +1 their own update in Bodhi: *as long as you actually have tested it*. For non-critpath updates especially. The Bodhi system is essentially an honor system anyway. So how I'd see this working is if it becomes clear that some maintainer is gaming the system by just +1ing everything they submit, even if it actually turns out to be broken, we look at saying they can't +1 their own updates any more. But if you actually are conscientiously testing your own updates, that's probably worth a +1 in Bodhi, for me.
On Fri, Nov 12, 2010 at 11:46:36AM -0800, Adam Williamson wrote:
On Fri, 2010-11-12 at 14:32 -0500, Tom Lane wrote:
It's absolutely crystal clear to me that we don't have enough tester manpower to make the current policy workable; it's past time to stop denying that. I'd suggest narrowing the policy to a small number of critical packages, for which there might be some hope of it actually working as designed.
BTW, another point worth noting is that there is not actually a specific rule against +1ing your own updates. It's 'frowned upon', I guess, but actually I think it's probably workable to say it's fine for the developer to +1 their own update in Bodhi: *as long as you actually have tested it*. For non-critpath updates especially. The Bodhi system is essentially an honor system anyway. So how I'd see this working is if it becomes clear that some maintainer is gaming the system by just +1ing everything they submit, even if it actually turns out to be broken, we look at saying they can't +1 their own updates any more. But if you actually are conscientiously testing your own updates, that's probably worth a +1 in Bodhi, for me.
The ability to +1 your own updates was disapproved by FESCo, and will be disabled in a future version of bodhi.
https://fedorahosted.org/bodhi/ticket/277
luke
On Tue, 2010-11-16 at 17:22 -0500, Luke Macken wrote:
On Fri, Nov 12, 2010 at 11:46:36AM -0800, Adam Williamson wrote:
On Fri, 2010-11-12 at 14:32 -0500, Tom Lane wrote:
It's absolutely crystal clear to me that we don't have enough tester manpower to make the current policy workable; it's past time to stop denying that. I'd suggest narrowing the policy to a small number of critical packages, for which there might be some hope of it actually working as designed.
BTW, another point worth noting is that there is not actually a specific rule against +1ing your own updates. It's 'frowned upon', I guess, but actually I think it's probably workable to say it's fine for the developer to +1 their own update in Bodhi: *as long as you actually have tested it*. For non-critpath updates especially. The Bodhi system is essentially an honor system anyway. So how I'd see this working is if it becomes clear that some maintainer is gaming the system by just +1ing everything they submit, even if it actually turns out to be broken, we look at saying they can't +1 their own updates any more. But if you actually are conscientiously testing your own updates, that's probably worth a +1 in Bodhi, for me.
The ability to +1 your own updates was disapproved by FESCo, and will be disabled in a future version of bodhi.
hum, that wasn't well publicised, and I wasn't aware of it. (I should probably show up to more FESCo meetings...picture FESCo members going 'no, no, really, it's fine!') I'd disagree, for the reasons above.
On Fri, 19 Nov 2010 22:04:24 -0800 Adam Williamson awilliam@redhat.com wrote:
...snip...
hum, that wasn't well publicised, and I wasn't aware of it. (I should probably show up to more FESCo meetings...picture FESCo members going 'no, no, really, it's fine!') I'd disagree, for the reasons above.
Well, my thought on it is:
* As maintainer, shouldn't you be testing your updates already? Granted there's often no way you could test everything, but at least installing it and confirming that the bug(s) you claim are fixed actually are fixed?
* Having just one pair of eyes (even if they are exceptionally talented ones) can lead to things slipping through. I know personally I have messed up and tested something, confirmed the bug was fixed, then due to one of: trying to clean up the commit before building without another test cycle, testing on a machine with different package versions, etc the update I push out is not the one that really fixes the issues or introduces some other issue. Unless you have a really really good test workflow this kind of thing can happen.
We can surely revisit this if folks like... but if we do, we might consider for non-critpath packages just re-enabling a 'push to stable', as that's what it would allow with less clicking. ;)
kevin
On Sat, 2010-11-20 at 14:49 -0700, Kevin Fenzi wrote:
On Fri, 19 Nov 2010 22:04:24 -0800 Adam Williamson awilliam@redhat.com wrote:
...snip...
hum, that wasn't well publicised, and I wasn't aware of it. (I should probably show up to more FESCo meetings...picture FESCo members going 'no, no, really, it's fine!') I'd disagree, for the reasons above.
Well, my thought on it is:
- As maintainer, shouldn't you be testing your updates already? Granted there's often no way you could test everything, but at least installing it and confirming that the bug(s) you claim are fixed actually are fixed?
I do. I don't believe all maintainers do. It's pretty hard to explain why updates that completely prevent the app in question from working, or even prevent the system from booting, got pushed in the past, if all maintainers actually test their updates.
The advantage of doing it my way (allowing maintainers to test their own updates and file bodhi feedback, but requiring bodhi feedback) is that it leaves an audit trail: it requires the maintainer to effectively make an explicit public declaration that they tested the update and it worked, rather than just relying on the implied 'oh of course they must have tested it'. What this means is that if we come across cases where a maintainer builds an update, submits it, files bodhi feedback saying they've tested it, and it turns out to be completely broken in a way they should have caught if they tested it, we now have all the necessary evidence to take some kind of sanctions against that packager.
Of course, the idea would be that we'd never have to do that, because the fact that the above is the case would be sufficient motivation to ensure that packagers really *do* test their updates properly.
- Having just one pair of eyes (even if they are exceptionally talented ones) can lead to things slipping through. I know personally I have messed up and tested something, confirmed the bug was fixed, then due to one of: trying to clean up the commit before building without another test cycle, testing on a machine with different package versions, etc the update I push out is not the one that really fixes the issues or introduces some other issue. Unless you have a really really good test workflow this kind of thing can happen.
We can surely revisit this if folks like... but if we do, we might consider for non-critpath packages just re-enabling a 'push to stable', as that's what it would allow with less clicking. ;)
See above for what I consider the advantage of preserving the karma requirement but allowing the maintainer to provide it. This is for non-critpath packages, of course. I don't think this should be allowed for critpath.
Adam Williamson awilliam@redhat.com writes:
I do. I don't believe all maintainers do. It's pretty hard to explain why updates that completely prevent the app in question from working, or even prevent the system from booting, got pushed in the past, if all maintainers actually test their updates.
I don't think it's so hard to explain as all that. It could well be that somebody tests a package, and it works *for him*, but breaks for many other people. An example of a very easy way for that to happen is a missed dependency on a package that he happens to have installed.
I don't by any means disagree with the idea that testing packages before they go out is a good thing. What I have a problem with is the idea that an "unfunded mandate" for that to happen is going to accomplish much. A policy isn't worth the electrons it's written on unless you can bring resources to make it happen, and so far the resources have failed to materialize. Jawboning package maintainers is going to be an even more spectacular failure, because they have much more than enough to do already; and they're smart enough to know that turning them all into individual ad-hoc test managers is an incredibly inefficient use of their time.
regards, tom lane
On Sat, 2010-11-20 at 17:45 -0500, Tom Lane wrote:
Adam Williamson awilliam@redhat.com writes:
I do. I don't believe all maintainers do. It's pretty hard to explain why updates that completely prevent the app in question from working, or even prevent the system from booting, got pushed in the past, if all maintainers actually test their updates.
I don't think it's so hard to explain as all that. It could well be that somebody tests a package, and it works *for him*, but breaks for many other people. An example of a very easy way for that to happen is a missed dependency on a package that he happens to have installed.
That's not what I'm talking about. There have been multiple instances where updates have been pushed that were *completely broken*: they could not work at all, in any fashion, for anyone. It doesn't happen a lot, but it happens; enough to prove that not all maintainers test updates before pushing them.
On 11/20/2010 06:02 PM, Adam Williamson wrote:
That's not what I'm talking about. There have been multiple instances where updates have been pushed that were *completely broken*: they could not work at all, in any fashion, for anyone. It doesn't happen a lot, but it happens; enough to prove that not all maintainers test updates before pushing them.
Just to provide some examples, here are the bugzilla entries for a package that didn't even start up (https://bugzilla.redhat.com/show_bug.cgi?id=591213), and another one for a package that crashed on its first elementary operation (https://bugzilla.redhat.com/show_bug.cgi?id=454045). FYI, I fixed the latter, and the former is still there in Fedora 14.
Hopefully AutoQA will solve many of those problems, if we can come up with test cases and a method to check elementary GUI operation.
On Sat, 2010-11-20 at 17:45 -0500, Tom Lane wrote:
I don't by any means disagree with the idea that testing packages before they go out is a good thing. What I have a problem with is the idea that an "unfunded mandate" for that to happen is going to accomplish much. A policy isn't worth the electrons it's written on unless you can bring resources to make it happen, and so far the resources have failed to materialize. Jawboning package maintainers is going to be an even more spectacular failure, because they have much more than enough to do already; and they're smart enough to know that turning them all into individual ad-hoc test managers is an incredibly inefficient use of their time.
Please remember the exact policy we have. There is still no absolute requirement for testing for anything but critpath packages, which is a fairly small number. All other packages can push updates without testing; there's simply a short waiting period to do so.
While we're arguing theoreticals, don't forget the factuals. :)
Adam Williamson wrote:
Please remember the exact policy we have. There is still no absolute requirement for testing for anything but critpath packages, which is a fairly small number. All other packages can push updates without testing; there's simply a short waiting period to do so.
But one of the main points of this subthread is that that waiting period is way too long for some urgent fixes (security fixes, regression fixes etc.).
Kevin Kofler
On Sun, 2010-11-21 at 03:16 +0100, Kevin Kofler wrote:
Adam Williamson wrote:
Please remember the exact policy we have. There is still no absolute requirement for testing for anything but critpath packages, which is a fairly small number. All other packages can push updates without testing; there's simply a short waiting period to do so.
But one of the main points of this subthread is that that waiting period is way too long for some urgent fixes (security fixes, regression fixes etc.).
Security issues, perhaps; best to follow that up in Kevin's thread.
On Sat, Nov 20, 2010 at 6:16 PM, Kevin Kofler kevin.kofler@chello.at wrote:
But one of the main points of this subthread is that that waiting period is way too long for some urgent fixes (security fixes, regression fixes etc.).
If it's really a regression, then you will have interested users who will test from updates-testing and provide karma.
Security karma should come from the security team.
Also security updates should not have any other changes mixed in. If it makes other changes take longer to get to stable (because the update after the security update needs the security update as well as the other updates that were queued up prior to the security update), well that's just how it is.
So you have these package versions:
foo-2, foo-2.1, foo-3
foo-2 is vulnerable to the exploit. foo-2.1 is an update that contains no changes except what is required to close the vulnerability. foo-3 has the changes from foo-2.1 as well as the other updates that were planned.
The idea is that you stop everything, make the security update based on the latest stable package, and submit that update for testing (by the security team?). Then you continue with your normal packaging workflow.
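The versioning matters: foo-2.1 has to sort between foo-2 and foo-3 so that users on either side upgrade cleanly. A toy sketch of that ordering follows; real RPM version comparison (rpmvercmp, with epochs and alphanumeric segments) is far more involved, so this is only an illustration:

```python
# Toy version comparison for the foo-2 / foo-2.1 / foo-3 example above.
# Only handles plain dotted-integer versions; real rpmvercmp does much more.
def vercmp_key(version):
    return [int(part) for part in version.split(".")]

# The minimal security update foo-2.1 sorts between the old stable
# foo-2 and the planned feature update foo-3:
print(sorted(["3", "2", "2.1"], key=vercmp_key))  # ['2', '2.1', '3']
```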
Mike Fedyk píše v Po 22. 11. 2010 v 18:03 -0800:
Also security updates should not have any other changes mixed in.
In the early days of Fedora, it was explicitly decided that (contra Debian) maintainers are not required to backport patches and that rebases (fixing a bug by updating to a new upstream release) are the most expected kind of update.
It seems the consensus on this decision is not as strong as it used to be; nevertheless, with the number of package maintainers who admit they can't fix bugs in their packages on their own, is overturning this policy even possible?
Mirek
On 11/23/2010 05:51 AM, Miloslav Trmač wrote:
Mike Fedyk píše v Po 22. 11. 2010 v 18:03 -0800:
Also security updates should not have any other changes mixed in.
In the early days of Fedora, it was explicitly decided that (contra Debian) maintainers are not required to backport patches and that rebases (fixing a bug by updating to a new upstream release) are the most expected kind of update.
It seems the consensus on this decision is not as strong as it used to be, nevertheless - with the number of package maintainers that admit they can't fix bugs in their packages on their own, is overturning this policy even possible?
I am not sure if I understand you correctly.
IMO, the real problem is not "backports" vs. "upgrading" to "fix bugs", it's bugs not getting fixed in Fedora, for a variety of reasons.
Therefore, I consider trying to apply any such simple "policy" to be impossible and naive.
Ralf
On 11/23/2010 06:51 AM, Ralf Corsepius wrote:
IMO, the real problem is not "backports" vs. "upgrading" to "fix bugs", it's bugs not getting fixed in Fedora, for a variety of reasons.
Therefore, I consider trying to apply any such simple "policy" to be impossible and naive.
Agreeable logical conclusion.
The underlying problem needs to get addressed and fixed first.
A while back, on a different list, I proposed a rough possible long-term solution to try to address the underlying issue, but I did not receive any feedback on that proposal.
1. Improve the general standard of packagers (they need to at least have an upstream bugzilla account and be part of, or in good communication with, the upstream community). Allow for an adjusting period; when it's over, revoke the rights from those that do not fulfill these requirements. The package then goes up for grabs or gets dropped.
2. Allow all maintainers to touch every component in Fedora; note that the maintainer that brought the component to Fedora is still responsible for his components.
3. Gather information from all the maintainers we have in the community on what their coding skills are, in which languages, and at what level of expertise.
4. Assemble a "bug fixing task force" (can be per language) to target a component (including testers if needed).
5. Assign a component to the "bug fixing task force" and a time period they should spend looking at the bugs on that component and fixing them; could be a day, a week, a month, starting from the critical path and onwards.
6. Assign interns (students, home hackers and what not) to tag along with the bug fixing task force and learn a few things.
Note that there could be several bug fixing task forces working at the same time, in different programming languages and based on skill level: newbies could take the first rounds and tackle the easy fixes, pushing what they can't fix to the medium team, which then goes through it; whatever they can't handle gets pushed on to the heavy hitters, who will strike upon it with furious vengeance and squash that bug into a different dimension.
If and when something like the above is ready, then we can start small with a procedure we know:
- Create "proven $language coders" groups which maintainers sign up for.
- Reverse the roles of testers and maintainers and host a "bug squash day!"
- QA decides which components need addressing and contacts the relevant "proven $language coders".
- Triagers run through the bug list on the component the day(s) before and create a tracker bug with all the valid reports.
- "Proven $language coders" run through the tracker bug list.
- Testers stand ready on the sidelines during the code fiesta.
Hopefully a bunch of bugs get squashed and users live happily ever after, or we find out this idea was great on paper but crap in the field and we return to the drawing board.
JBG
On Tue, Nov 23, 2010 at 08:31:15AM +0000, "Jóhann B. Guðmundsson" wrote:
On 11/23/2010 06:51 AM, Ralf Corsepius wrote:
IMO, the real problem is not "backports" vs. "upgrading" to "fix bugs", it's bugs not getting fixed in Fedora, for a variety of reasons.
Therefore, I consider trying to apply any such simple "policy" to be impossible and naive.
Agreeable logical conclusion.
The underlying problem needs to get addressed and fixed first.
A while back, on a different list, I proposed a rough possible long-term solution to try to address the underlying issue, but I did not receive any feedback on that proposal.
- Improve the general standard of packagers (need to at least have an upstream bugzilla account and be part of, or in good communication with, the upstream community). Allow for an adjusting period; when it's over, revoke the rights from those that do not fulfill these requirements. The package goes up for grabs or gets dropped.
I don't agree with the combination of the above two. The first is nice to have, but we also have to realize that requiring it will take a lot more manpower. Step #2 is basically the enforcement phase for making #1 a requirement. I think that at some point maintaining a package becomes too much effort, and as the number of packages that are too much effort builds up, the utility of Fedora goes down.
- Allow all maintainers to touch every component in Fedora note that
maintainer that brought the component to Fedora is still responsible for his components.
I like this idea.
- Gather information from all the maintainers we have in the community on what their coding skills are, in which languages, and at what level of expertise.
- Assemble a "bug fixing task force" (can be per language) to target a component (including testers if needed).
I like this idea as well however...
- Assign a component to the "bug fixing task force" and a time period they should spend looking at the bugs on that component and fixing them; could be a day, a week, a month, starting from the critical path and onwards.
We have a tiny version of this in the FES tickets for fixing bundled libraries. I note that the python sub-ticket of that is the only one that's been closed. The C and php ones have hardly been touched. I'm not sure what would make this experience more productive.
-Toshio
On 11/24/2010 12:22 PM, Toshio Kuratomi wrote:
On Tue, Nov 23, 2010 at 08:31:15AM +0000, "Jóhann B. Guðmundsson" wrote:
On 11/23/2010 06:51 AM, Ralf Corsepius wrote:
IMO, the real problem is not "backports" vs. "upgrading" to "fix bugs", it's bugs not getting fixed in Fedora, for a variety of reasons.
Therefore, I consider trying to apply any such simple "policy" to be impossible and naive.
Agreeable logical conclusion.
The underlying problem needs to get addressed and fixed first.
A while back, on a different list, I proposed a rough possible long-term solution to try to address the underlying issue, but I did not receive any feedback on that proposal.
- Improve the general standard of packagers (need to at least have an upstream bugzilla account and be part of, or in good communication with, the upstream community)
"One size doesn't fit everybody"
This is applicable in some occasions, but is non-applicable in many.
- Allow all maintainers to touch every component in Fedora note that
maintainer that brought the component to Fedora is still responsible for his components.
I like this idea.
Hmm, we already have the proven-packagers group and we already have the concept of co-maintainers.
- Assemble a "bug fixing task force" ( can be per language ) to target
component ( including testers if needed ).
I like this idea as well however...
Again, proven-packagers already can do this.
At least I have occasionally applied my proven-packager privileges to troubleshoot critical situations. However, having done so, one lesson learnt was that this kind of help is often only a short-term relief, not a long-term cure, because packages with issues that a "trouble-shooting group" is able to help with often suffer from much deeper problems.
Also, it's in this kind of situation that Fedora QA's "delays" have shown themselves to be counter-productive.
Ralf
On Wed, 2010-11-24 at 13:07 +0100, Ralf Corsepius wrote:
Also, it's in this kind of situation that Fedora QA's "delays" have shown themselves to be counter-productive.
To be clear, they are not QA's delays. The initial proposal to FESCo was by mjg, the revised proposal was by notting, and it was FESCo which voted to adopt the policy requiring karma or a 7-day delay for updates.
On Tue, Nov 23, 2010 at 05:51:06AM +0100, Miloslav Trmač wrote:
Mike Fedyk píše v Po 22. 11. 2010 v 18:03 -0800:
Also security updates should not have any other changes mixed in.
In the early days of Fedora, it was explicitly decided that (contra Debian) maintainers are not required to backport patches and that rebases (fixing a bug by updating to a new upstream release) are the most expected kind of update.
It seems the consensus on this decision is not as strong as it used to be; nevertheless, with the number of package maintainers who admit they can't fix bugs in their packages on their own, is overturning this policy even possible?
Mirek
Thanks, Mirek, for pointing out the first issue with this idea. The second issue is that Fedora doesn't have a security team which fixes security issues. We have package maintainers and the people they can/will ping to come up with solutions for security issues. The security team was just there for keeping track of when security issues are reported in other venues and seeing that we addressed them in Fedora (I'm not sure how active it still is either.)
-Toshio
On Sat, Nov 20, 2010 at 5:09 PM, Adam Williamson awilliam@redhat.com wrote:
On Sat, 2010-11-20 at 14:49 -0700, Kevin Fenzi wrote:
On Fri, 19 Nov 2010 22:04:24 -0800 Adam Williamson awilliam@redhat.com wrote:
...snip...
hum, that wasn't well publicised, and I wasn't aware of it. (I should probably show up to more FESCo meetings...picture FESCo members going 'no, no, really, it's fine!') I'd disagree, for the reasons above.
Well, my thought on it is:
- As maintainer, shouldn't you be testing your updates already? Granted there's often no way you could test everything, but at least installing it and confirming that the bug(s) you claim are fixed actually are fixed?
I do. I don't believe all maintainers do. It's pretty hard to explain why updates that completely prevent the app in question from working, or even prevent the system from booting, got pushed in the past, if all maintainers actually test their updates.
The advantage of doing it my way (allowing maintainers to test their own updates and file bodhi feedback, but requiring bodhi feedback) is that it leaves an audit trail: it requires the maintainer to effectively make an explicit public declaration that they tested the update and it worked, rather than just relying on the implied 'oh of course they must have tested it'. What this means is that if we come across cases where a maintainer builds an update, submits it, files bodhi feedback saying they've tested it, and it turns out to be completely broken in a way they should have caught if they tested it, we now have all the necessary evidence to take some kind of sanctions against that packager.
Of course, the idea would be that we'd never have to do that, because the fact that the above is the case would be sufficient motivation to ensure that packagers really *do* test their updates properly.
I think this is an interesting idea, but I'll also say I think it can be made simpler. Why not just hold package maintainers accountable, period. Make them accountable to FESCo (which in theory they are to begin with). If I, as a package maintainer, continuously want to 'push directly to stable' and continuously screw it up, I'd hope FESCo and my original sponsor would at least tell me I am doing it wrong. Having a +1 button click recorded in Bodhi strikes me as no more damning evidence than the fact that I committed the update and asked for it to be pushed to stable (whether I wait 7 days or push it immediately).
I am curious to know a few things:
- How many updates have been submitted to bodhi since the policy has been in place?
- How many updates received any feedback?
- How many updates received only neutral or negative feedback?
- How many updates had an overall negative score? (Assuming this is the number of 'problems' we can confidently confirm we caught, though more possibly exist.)
- How many updates received no feedback, and of that group, how long were they queued up in updates-testing?
On Sun, 2010-11-21 at 12:42 -0500, David Nalley wrote:
I am curious to know a few things:
- How many updates have been submitted to bodhi since the policy has been in place?
- How many updates received any feedback?
- How many updates received only neutral or negative feedback?
- How many updates had an overall negative score? (Assuming this is the number of 'problems' we can confidently confirm we caught, though more possibly exist.)
- How many updates received no feedback, and of that group, how long were they queued up in updates-testing?
I'd also be interested in these numbers; Luke, can we provide them easily?
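Until someone pulls the real numbers out of Bodhi, here is a hedged sketch of how such statistics could be tallied. The input format (one list of karma votes per update) is an assumed simplification for illustration, not the actual Bodhi data model:

```python
# Tally the feedback statistics asked about above. Each update is
# represented as a list of karma votes (+1 / 0 / -1); this layout is
# an invented simplification, not Bodhi's real schema.
def feedback_stats(updates):
    stats = {"total": 0, "any_feedback": 0, "no_feedback": 0,
             "only_nonpositive": 0, "net_negative": 0}
    for votes in updates:
        stats["total"] += 1
        if not votes:
            stats["no_feedback"] += 1
            continue
        stats["any_feedback"] += 1
        if all(v <= 0 for v in votes):
            stats["only_nonpositive"] += 1
        if sum(votes) < 0:
            stats["net_negative"] += 1
    return stats

# Four updates: well-tested, ignored, panned, and mixed feedback.
print(feedback_stats([[1, 1], [], [-1], [0, -1, 1]]))
```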
David Nalley wrote:
I think this is an interesting idea, but I'll also say I think it can be made simpler. Why not just hold package maintainers accountable period. Make them accountable to FESCo (which in theory they are to begin with) If I, as a package maintainer continuously want to 'push directly to stable' and continuously screw it up, I'd hope FESCo and my original sponsor would at least tell me I am doing it wrong.
+1
Let people use their brains. If they screw up, yell at them, and work on informing people in a better way so such mistakes don't happen again.
Bureaucracy is not the right solution to prevent mistakes, education is.
Kevin Kofler
On Sun, Nov 21, 2010 at 11:39:14PM +0100, Kevin Kofler wrote:
Let people use their brains. If they screw up, yell at them, and work on informing people in a better way so such mistakes don't happen again.
Everyone makes mistakes. The idea is to provide an opportunity for people's mistakes to become obvious before they break things for large numbers of people. Our maintainers are human, so processes that depend on perfection are inappropriate.
On Sat, Nov 20, 2010 at 02:09:47PM -0800, Adam Williamson wrote:
On Sat, 2010-11-20 at 14:49 -0700, Kevin Fenzi wrote:
On Fri, 19 Nov 2010 22:04:24 -0800 Adam Williamson awilliam@redhat.com wrote:
...snip...
hum, that wasn't well publicised, and I wasn't aware of it. (I should probably show up to more FESCo meetings...picture FESCo members going 'no, no, really, it's fine!') I'd disagree, for the reasons above.
Well, my thought on it is:
- As maintainer, shouldn't you be testing your updates already? Granted there's often no way you could test everything, but at least installing it and confirming that the bug(s) you claim are fixed actually are fixed?
I do. I don't believe all maintainers do. It's pretty hard to explain why updates that completely prevent the app in question from working, or even prevent the system from booting, got pushed in the past, if all maintainers actually test their updates.
I guess the easiest way to find out why the maintainers did it is to ask them.
Btw., especially on maintainers' systems it is pretty easy to not actually test the update as it will reach users, e.g. because different versions than those currently in the buildroot might be installed on the maintainer's machine, or the maintainer might use local mock builds instead of the actual update. Of course I do not know whether any of these possible pitfalls have been hit in the past.
Regards Till
On Sat, Nov 20, 2010 at 02:09:47PM -0800, Adam Williamson wrote:
I do. I don't believe all maintainers do. It's pretty hard to explain why updates that completely prevent the app in question from working, or
Btw., this is not only a problem that might happen with updates; it also happens with initial critical path packages, e.g. afaics system-config-keyboard cannot be used in Fedora 14: https://bugzilla.redhat.com/show_bug.cgi?id=646041
(If it can be used, please tell me how.)
Regards Till
On Mon, 2010-11-22 at 19:29 +0100, Till Maas wrote:
On Sat, Nov 20, 2010 at 02:09:47PM -0800, Adam Williamson wrote:
I do. I don't believe all maintainers do. It's pretty hard to explain why updates that completely prevent the app in question from working, or
Btw., this is not only a problem that might happen with updates; it also happens with initial critical path packages, e.g. afaics system-config-keyboard cannot be used in Fedora 14: https://bugzilla.redhat.com/show_bug.cgi?id=646041
(If it can be used, please tell me how.)
yeah, I know about that one; it didn't make the cut as a release blocker because it's not actually present on the KDE or GNOME desktops (they have their own keyboard config tools). But it's a head-slapper indeed. I don't know if anyone's actually figured out what the bug is there yet. I *think* it's a case of some underlying change pulling the rug out from under s-c-k, but not sure.
On Mon, Nov 22, 2010 at 10:43:21AM -0800, Adam Williamson wrote:
On Mon, 2010-11-22 at 19:29 +0100, Till Maas wrote:
On Sat, Nov 20, 2010 at 02:09:47PM -0800, Adam Williamson wrote:
I do. I don't believe all maintainers do. It's pretty hard to explain why updates that completely prevent the app in question from working, or
Btw., this is not only a problem that might happen with updates; it also happens with initial critical path packages, e.g. afaics system-config-keyboard cannot be used in Fedora 14: https://bugzilla.redhat.com/show_bug.cgi?id=646041
(If it can be used, please tell me how.)
yeah, I know about that one; it didn't make the cut as a release blocker because it's not actually present on the KDE or GNOME desktops (they have their own keyboard config tools). But it's a head-slapper indeed. I
It is available on both live images, though. And it is kind of strange reasoning to label it as "critical path" but not require that it works at all.
don't know if anyone's actually figured out what the bug is there yet. I *think* it's a case of some underlying change pulling the rug out from under s-c-k, but not sure.
There is a patch attached to the bug, but I do not know, whether it helps.
Regards Till
On 11/12/2010 02:32 PM, Tom Lane wrote:
Till Maas opensource@till.name writes:
<snip>
It's absolutely crystal clear to me that we don't have enough tester manpower to make the current policy workable; it's past time to stop denying that. I'd suggest narrowing the policy to a small number of critical packages, for which there might be some hope of it actually working as designed.
regards, tom lane
Test cases would help alleviate manpower issues. Many of the security updates and regular updates are outside my area, and I feel some frustration that I have to bypass providing karma; however, I am used to doing QA work with test cases. Are they so hard to provide? Maybe certain updates should have test cases, e.g. security updates and critical path updates.
Regards, OldFart
"Clyde E. Kunkel" clydekunkel7734@cox.net writes:
On 11/12/2010 02:32 PM, Tom Lane wrote:
It's absolutely crystal clear to me that we don't have enough tester manpower to make the current policy workable; it's past time to stop denying that. I'd suggest narrowing the policy to a small number of critical packages, for which there might be some hope of it actually working as designed.
Test cases would help alleviate manpower issues. Many of the security updates and regular updates are outside my area and I feel some frustration that I have to bypass providing karma; however, I am used to doing QA work with test cases. Are they so hard to provide? Maybe certain updates should have test cases, i.e., security updates and critical path updates.
The major packages that I work with have regression test suites, which in fact get run as part of the RPM build sequence. It's not apparent to me that I should need to invent some more tests.
The likely failure cases that I can see are of two types:
1. Upstream screwed up and introduced a regression into what was supposed to be a minor bug-fix or security update. This does happen, for sure, but there's pretty much 0 chance that I as packager am going to catch it if it gets past the built-in regression tests. Unfortunately, there is also pretty much 0 chance that Fedora testers are going to notice such a problem in the limited time window for sanity testing. It hasn't ever happened for any of my packages that Fedora testers caught such things in time.
2. I screwed up and introduced a packaging bug, for instance bad dependencies or inability to "yum update". That's been known to happen too. But I have a lot more faith in autoqa being able to catch that kind of problem in a timely fashion than I do in manual testing catching it.
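The dependency half of that failure mode is mechanical to detect, which is why autoqa is well suited to it: every capability an update Requires must be Provided by something in the target repo. A minimal sketch of such a closure check follows; the package records are invented for illustration, not real repo data:

```python
# Minimal sketch of a dependency-closure check of the kind autoqa can
# automate: flag any (package, requirement) pair whose requirement is
# not provided by anything in the repo. The data below is invented.
def broken_deps(repo):
    provided = {cap for pkg in repo for cap in pkg["provides"]}
    return {(pkg["name"], req)
            for pkg in repo
            for req in pkg["requires"]
            if req not in provided}

repo = [
    {"name": "foo", "provides": ["foo"], "requires": ["libbar"]},
    {"name": "bar", "provides": ["libbar"], "requires": []},
]
print(broken_deps(repo))  # closure holds: prints set()
repo.pop()                # drop bar; foo's dependency now dangles
print(broken_deps(repo))  # prints {('foo', 'libbar')}
```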
I guess what this boils down to is that I'd be happier with the testing process if it were actually successful at finding problems. In my experience, it's a week's delay for exactly zero return.
regards, tom lane
On Fri, 2010-11-12 at 23:14 -0500, Tom Lane wrote:
- I screwed up and introduced a packaging bug, for instance bad
dependencies or inability to "yum update". That's been known to happen too. But I have a lot more faith in autoqa being able to catch that kind of problem in a timely fashion than I do in manual testing catching it.
In the long run so do we, but right now, autoqa is not hooked up to the build process in any way. It's manual testing or nothing.
I guess what this boils down to is that I'd be happier with the testing process if it were actually successful at finding problems. In my experience, it's a week's delay for exactly zero return.
It does find problems. Though, by what you say, not in your packages, so I know where you're coming from; but we've certainly caught a positive integer amount of bugs with the process. :)
On 11/12/2010 11:14 PM, Tom Lane wrote:
"Clyde E. Kunkel" clydekunkel7734@cox.net writes:
<snip>
The major packages that I work with have regression test suites, which in fact get run as part of the RPM build sequence. It's not apparent to me that I should need to invent some more tests.
I did not know that. Good to know. Would it help if the test cases were mentioned so their use could be considered in providing karma? Or even if they were made available?
Regards,
OldFart