On 05/08/12 13:08, Miloslav Trmač wrote:
> On Mon, May 7, 2012 at 11:36 PM, Lennart Poettering
> <mzerqung(a)0pointer.de> wrote:
>> On Mon, 07.05.12 23:02, Jan Kratochvil (jan.kratochvil(a)redhat.com) wrote:
>>> On Mon, 07 May 2012 22:16:02 +0200, Lennart Poettering wrote:
>> I mean, just think of this: you have a pool of workstations to
>> administer. It's all the same machines, with the same prepared OS
>> image. You want to know about the backtraces as admin. Now, since the OS
>> images are all the same some errors will happen across all the machines
>> at the same time. Now, with your logic this would either result in all
>> of them downloading a couple of GB of debuginfo for glibc and stuff like
>> that, or all of them bombarding the retrace server, if they can.
> No, someone administering a pool of machines would also want to
> collect the crash information centrally instead of running tools
> manually on every machine in the pool
Who talks about running stuff manually? I'd expect we'll have some
service (abrt?) doing it automagically, sending the trace to syslog, so
the userspace traces end up in the logs like the kernel oopses do today.
> - and it turns out ABRT was from
> the start designed to support such data collection; all core files can
> be configured to end up at a single analysis machine.
The minidebuginfo traces can easily go to a central logserver too.
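For example, forwarding crash-handler syslog messages to a central host is a
one-line rsyslog rule (a sketch only; the tag "crash-handler" and the host
"loghost.example.com" are made-up placeholders, not anything abrt ships):

```
# /etc/rsyslog.d/crash-forward.conf (hypothetical drop-in)
# forward everything tagged by the crash handler to the central log host via TCP
:syslogtag, startswith, "crash-handler" @@loghost.example.com:514
```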
> My take:
> 1) Developers of the software in question: Bluntly, the ~1-100 users
> in the whole world shouldn't matter in our discussion - if they are
> even running the RPM, they can and probably will install complete
> debuginfo, enable logging and do other non-default things to make
> their job easier; the Fedora defaults don't matter that much for them,
> and the mini debuginfo is not that useful either.
Depends. My internet link isn't exactly fast. For stuff I'm working on
I have the debuginfo packages locally mirrored / installed. For other
stuff I haven't, and it can easily take hours to fetch it. Having at
least a basic trace without delay has its value. Often this is enough
to track it down.
Or when debugging your own program (with full debuginfo) it is useful to
have at least the symbols of the libraries used in the trace too.
> 2) Non-programming end-users. _This_ is the case that we need to get
> right by default. In many cases, a developer is lucky if the end
> user ever sends any crash report, they often don't respond to
> follow-up questions, and the problem does not have to be reproducible
> at all. From such users we definitely want as full crash information
> as possible (IOW, including the variable contents information) because
> there won't be a second chance to get it. The mini debuginfo is
> therefore irrelevant, we need to steer users to the retrace server (or
> to attaching full core files to reports, which has much worse privacy
> impact).
Wrong. From /me you don't get abrt reports at all, because abrt simply
is a pain with a slow internet link due to the tons of data it wants to
transmit. Also it doesn't say what it is going to do (download ?? MB
debuginfo / upload ?? MB core). And there is no progress bar. Ok, that
might have changed meanwhile, it's been a while since I last tried.
> Can we agree on the above, at least that 1) and 2) are not noticeably
> improved by mini debuginfo,
No.
> BTW, the feature suggests mini debuginfo would be useful for userspace
> tracing - AFAIK such uses, e.g. systemtap, use the variable location
> information very extensively, and would thus not benefit from mini
> debuginfo.
How about 'perf top -p $pid'?
cheers,
Gerd