On 05/13/14 at 10:34pm, Vivek Goyal wrote:
On Fri, May 09, 2014 at 04:25:13PM +0800, WANG Chao wrote:
> On 05/08/14 at 12:07pm, Vivek Goyal wrote:
> > On Thu, May 08, 2014 at 08:30:19PM +0800, WANG Chao wrote:
> > [..]
> > > Because when emergency.service is triggered, all the other services are
> > > stopped; even already-mounted filesystems are unmounted.
> > This is strange. Why should already-mounted filesystems be unmounted? I
> > think we should send a mail to the systemd list and ask about this behavior.
> I think a mount unit is just like a service unit from systemd's
> point of view. When systemd "isolates" to emergency.target, all the other
> units (including service/target/mount/slice/...; see systemd.unit(5)
> for more types) will be stopped.
What does it mean to stop a "target" unit? Does that mean the target will be
considered not reached?
It's the same thing as for a "service". When a service is stopped, other
units that depend on it will be stopped. When a target is stopped,
other units that depend on it will be stopped as well.
> "isolate" is targeting all unit types, not just for xxx.service.
> FWIW, one can specify "IgnoreOnIsolate=yes" on any type of unit to keep it
> from being stopped. But that doesn't help in our case, because the mount
> units are generated at runtime.
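For reference, if the mount units were static files rather than generated at
runtime, the override would be a one-line drop-in, something like this (the
path is illustrative, not existing kdump code):

```ini
# /etc/systemd/system/sysroot.mount.d/ignore-isolate.conf (hypothetical path)
[Unit]
# Keep this unit running when systemd isolates to another target
# (e.g. "systemctl isolate emergency.target").
IgnoreOnIsolate=yes
```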
> > >
> > > But I've figured out a way to bring /sysroot back to life before
> > > dump_to_rootfs, by manually running:
> > >
> > > systemctl start initqueue
> > > systemctl start sysroot.mount
> > I am not convinced by this idea of calling back into systemd
> > to start some units after a failure. It sounds a little fragile to me.
> dracut-initqueue only has dependencies on udev-related units. In fact,
> it only runs the scripts under the initqueue hook (like the dracut-pre-pivot
> hook). The initqueue hook is used to bring devices online via scripts
> installed at initramfs generation time.
> It looks to me like both dracut-initqueue.service and sysroot.mount are
> pretty much independent. So I don't think it would be too risky to call
> back into these two.
> > Having said that, it might not be a perfect solution, but if it works, it
> > might not be too bad a solution until we find a better one.
> Bottom line is we need /sysroot when dump_to_rootfs ...
> > Do we really have to kick initqueue again? Is just starting sysroot.mount
> > not enough?
> Probably not "again". A failure can happen at any point in time, which
> means it could happen before dracut-initqueue gets started. So starting
> dracut-initqueue ensures that the desired device is brought up
> before we try to mount it. Otherwise, starting sysroot.mount will
> block waiting for the disk.
> > Also after starting sysroot.mount, how long will you wait for root to
> > be mounted?
> "systemctl start sysroot.mount" will block as long as it's mounted or
> the worst case timeout after 90 seconds.
> If everything goes smoothly, sysroot.mount shouldn't block for long.
Interesting. What decides when we wait for the systemctl command to finish
and when we just send a message to systemd over D-Bus and exit?
I wasn't saying the systemctl command won't block. It should block until the
command is finished, unless called with "systemctl --no-block".
What I was trying to say is that if the device or the file system we need to
mount is already there, the mount operation should finish very quickly.
> > How are you inducing failure to invoke this error path?
> There's a critical path for systemd boot (man bootup(7)).
This is a very good writeup. Good find.
Ok, I have a couple more questions.
As per bootup(7), basic.target should be reached first. I checked that
there is no OnFailure= directive in basic.target. So say some error
happens and basic.target is not reached. What happens then? Will the system
still come up?
I don't know much about it. basic.target "Requires=" sysinit.target, but
sysinit.target doesn't "Requires=" other units, it only "Wants=" them. Based
on these relations, I think sysinit.target will be reached unconditionally,
and so will basic.target.
Second question: so it looks like there are two error paths. If the error
happens early enough, then dracut can drop us into the emergency shell;
otherwise, later on, systemd can put us into the emergency shell. It depends
on when the error happened.
After a search in the 2nd kernel, I found out there are two cases that will
trigger dracut-emergency:
a.) rd.break= is specified on the cmdline. The dracut script will eventually
drop into dracut-emergency.
b.) In rescue mode, rescue.service will trigger dracut-emergency. But
this is not our case, because we are not in rescue mode.
So to me, if we don't specify "rd.break=", we won't trigger
dracut-emergency. Given that we have already disabled
dracut-emergency.service, we don't need to worry about it.
I don't understand why an early error is handled by
dracut-emergency.service and a late one by systemd's
emergency.service. Where did you get that?
If the error happened early in dracut, then trying to kick the dracut
initqueue again will most likely not help much. If the error happens later,
when systemd is trying to mount non-root file systems, then kicking
dracut-initqueue and enabling sysroot.mount might help.
BTW, I still don't understand why we should try to start
dracut-initqueue explicitly. If sysroot.mount depends on it, then it
will be started automatically.
sysroot.mount doesn't depend on it; it depends on the device (or the
filesystem) being available. dracut-initqueue mainly does the job of
bringing up the devices we need.
Say we have an lvm rootfs. udev and its rules only get us the disk
partitions, like /dev/sda1. But the lvm devices under /dev/mapper/ won't be
available; we need to bring them up with a command like "vgchange -ay".
That's the job done by dracut-initqueue.
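To illustrate, here is a minimal sketch of the kind of initqueue work I mean
(illustrative only, not the actual dracut code; `lvm_activate` is a made-up
name):

```shell
# Roughly what an initqueue script does for an LVM root: udev has already
# created the partition devices (e.g. /dev/sda1), but the logical volumes
# under /dev/mapper/ only appear after the volume groups are activated.
lvm_activate() {
    # "vgchange -ay" activates all logical volumes in every visible VG
    vgchange -ay
}
```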
Secondly, if the error happened before dracut-initqueue, then there is
no point in running the initqueue scripts. If the error happened after the
dracut initqueue, then none of the initqueue actions will have been undone
and there is no need to run dracut-initqueue again.
We need dracut-initqueue to run its scripts because we need to bring up the
lvm devices. But you're right that we don't need to run it again if the
error happens after dracut-initqueue. However, currently I'm not sure how we
can determine whether dracut-initqueue is in the state "yet to be started"
or "started but stopped again". That's why I start it unconditionally.
So I think we just need to explicitly start sysroot.mount from the kdump
handler. If it blocks for 90 seconds, we don't have to do anything. But
if it exits immediately, then we need to sit in a tight loop and wait
for 90 seconds. If root shows up, dump to it; otherwise reboot.
Let me clarify things here; we have two options:
a.) "systemctl start sysroot.mount"
This waits for the operation until it is finished, i.e. it blocks.
If the operation isn't finished within 90 seconds, this will return with a
timeout failure.
b.) "systemctl --no-block start sysroot.mount"
This will not wait for the operation to finish, but then we have to poll for
the root fs ourselves for 90 seconds.
Currently I'm choosing a.) because it's simple and we don't need to worry
about future changes to the timeout value. What do you think?
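To make b.) concrete, the polling fallback would look roughly like this (a
sketch under the assumptions in this thread; `sysroot_mounted`,
`wait_for_sysroot`, and the 90-second limit are illustrative, not existing
kdump code):

```shell
# Sketch of option b.): queue the mount job without waiting, then poll
# /proc/mounts until the root fs shows up or we give up.
sysroot_mounted() {
    grep -q ' /sysroot ' /proc/mounts
}

wait_for_sysroot() {
    timeout=${1:-90}                 # seconds to wait before giving up
    systemctl --no-block start sysroot.mount
    i=0
    while [ "$i" -lt "$timeout" ]; do
        sysroot_mounted && return 0  # root showed up, dump to it
        sleep 1
        i=$((i + 1))
    done
    return 1                         # timed out, caller should reboot
}
```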