>>> This kernel works as expected with one exception. The exception has
>>> a nagging problem, but I have not reported it because 1) we are using
>>> a research OS in DomU and 2) we are not clear if the problem is in our
>>> code, Linux or Xen. But, here are the symptoms:
>>>
>>> Occasionally (this seems to correlate to network activity between Dom0
>>> and DomU), the system becomes unresponsive. I am running the Michael
>>> Young kernel at runlevel 3 within Dom0 (very little memory used by
>>> applications). Our OS runs in DomU and is constrained to 128MB of
>>> memory. When the system is unresponsive, typing a character into a
>>> Dom0 console takes 2-5 seconds to appear on the screen. Likewise, other
>>> activity is extremely slow. As I mentioned, we have not been able to
>>> isolate where the problem is. An OpenWrt Linux build running in DomU,
>>> for example, does not exhibit this problem.
>> I have seen something similar, though I don't know where the fault
>> lies either.
> That is somewhat good to hear. I have today solved this problem by
> running "xm vcpu-set Domain-0 1". By default, Xen assigned Dom0 all of my cores
> (two). Reducing this to one solves the problem for me. I am working on
> a better write up that I'll send to fedora-xen and possibly the upstream
> Xen mailing list. I have not decided if this is a bug and am having some
> discussions locally that may help me formulate a better inquiry.
Usually it's better to use dom0_max_vcpus=1 on the grub xen.gz line.
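For anyone following along, the two approaches look roughly like the
sketch below. The grub entry is from my own setup and is only an
example; paths and the exact menu stanza vary by distro.

```shell
# Runtime fix: limit Dom0 to one vCPU immediately.
# Does NOT persist across a reboot.
xm vcpu-set Domain-0 1

# Verify the change took effect:
xm vcpu-list Domain-0

# Persistent fix: cap Dom0's vCPUs at boot by appending
# dom0_max_vcpus=1 to the xen.gz line in grub.conf, e.g.:
#
#   title Xen
#     root (hd0,0)
#     kernel /boot/xen.gz dom0_max_vcpus=1
#     module /boot/vmlinuz-2.6-xen root=/dev/sda1 ro
#     module /boot/initrd-2.6-xen.img
#
# (kernel/initrd names above are placeholders for your installed versions)
```

The boot-time option is generally preferable because Xen then never
allocates the extra vCPUs to Dom0 in the first place, rather than
hot-unplugging them after boot.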
So, is this a known issue? Is it typically best practice to limit Dom0 to
one core? I've seen systems where this is not a problem (dom0_max_vcpus=n
works fine, where n is the number of cores) and others where it is. Why
would this be?
--
Mike