Before switching off for the rest of the year, a little status update. I have
three branches pending in this order:
- wip/permanent-dashboard, pull request #170.
  Stores the list of machines on the machine running cockpit-ws, with
  the usual notifications, etc. Can discover machines via OpenSLP.
- A gross hack to enable the "manage" button for all machines on the
  dashboard. It's ugly, but better than the status quo, IMO.
- A 'wizard' kind of thing that can synchronize user accounts from one
  machine to another. This is very raw.
If someone wants to review these, please start at the top.
Have a great holiday!
is now passing "make check" (Thanks Stef for investing in the fast
internal tests, they were really helpful for this work)
I'm going to work on ./VERIFY now.
Note this patch set introduces a new bleeding edge dependency on libssh
git master. (Although I could pretty easily make it conditionally
compile only if you have it, and fall back to local-only).
Any early comments on the patch are appreciated, but I'll post again
when I know ./VERIFY works.
I found myself coming back to the issue of how to 'architect' Initial
Setup. Initial Setup refers to the steps necessary to get a machine that
already has the "cockpit" package installed to the point where it can
be remotely managed in a browser.
We had two approaches:
- Run a script via ssh on the target machine.
- Run the setup from the browser, talking to APIs on the target machine.
I started to work on the first approach, but my spidey senses made me
stop: There was too much stuff to figure out that would be easy to do
with the second approach, but required inventing new stuff (like data
formats and new communication mechanisms) with the first. The second
approach also needs inventing new stuff, but that new stuff will
hopefully be useful outside of the Initial Setup as well.
IMO, Initial Setup is already a bit too complicated and interactive for
a script. We need to run code locally and remotely, in more than one
phase:
- collect data locally for use in checking (list of Cockpit users and
their passwords, roles, joined domain)
- check what needs to be done remotely (set hostname, sync users, join domain)
- check what needs to be done locally (creating OTP for domain join,
creating certificate without domain)
- get confirmation from user, and collect missing parameters.
- collect data locally for execution (user avatars)
- do what needs to be done locally
- do what needs to be done remotely
The steps above are not very precise; some can be combined, and some
might not be necessary initially, or even ever. Still, my feeling is that a
good Initial Setup experience requires more than just "run this script
as root on the target machine with these arguments".
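To make the phases above concrete, here is a small sketch of the kind of driver loop they imply. All names and data shapes here are hypothetical, invented for illustration; this is not actual Cockpit code.

```python
# Hypothetical sketch of the Initial Setup phases described above.
# `local` and `remote` are dicts of machine state; `confirm` asks the
# user for a go-ahead (and would collect missing parameters).

def initial_setup(local, remote, confirm):
    # 1. Collect data locally for use in checking.
    users = local.get("cockpit_users", {})

    # 2. Check what needs to be done remotely.
    remote_tasks = []
    if remote.get("hostname") != local.get("wanted_hostname"):
        remote_tasks.append("set-hostname")
    missing = [u for u in users if u not in remote.get("users", {})]
    if missing:
        remote_tasks.append("sync-users")

    # 3. Check what needs to be done locally (e.g. OTP for a domain
    #    join, or a certificate when there is no domain).
    local_tasks = ["create-certificate"] if not remote.get("domain") else []

    # 4. Get confirmation from the user.
    if not confirm(remote_tasks + local_tasks):
        return None

    # 5./6./7. Collect execution data and run, local side then remote.
    return {"local": local_tasks, "remote": remote_tasks,
            "new_users": missing}

plan = initial_setup(
    {"cockpit_users": {"admin": "hash1"}, "wanted_hostname": "web1"},
    {"hostname": "localhost", "users": {}},
    confirm=lambda tasks: True,
)
```

The point of the sketch is only that the check phase produces an explicit plan that the user confirms before anything is executed.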
The disadvantages of running the Initial Setup in the browser are of a
more general nature:
- The code comes from the machine running cockpit-ws, not from the
machine being set up, so we need to figure out how to take the target
OS version / configuration / etc into account.
- We can't rely on the setup running to completion since the program
in the browser can stop at any point.
- Some sensitive information such as password hashes will pass
through the browser.
We need to get these under control anyway, I'd say, and the Initial
Setup might not be 'perfect' before addressing them, but neither is
the rest of Cockpit.
So... Here's what I will do, until Stef makes me remember why it is a
bad idea. :-)
* Allow specifying credentials when opening a web socket. These
would be used instead of the ones found via the session cookie.
This feature would for now only be used for Initial Setup.
Websockets made for machines on the Dashboard will only be able to
use the session credentials.
* Initial Setup will do two things only: set a hostname, and
synchronize Cockpit users.
The Cockpit users are all users on the local machine that are in
one of the Cockpit role groups.
Synchronizing a Cockpit user will make sure that it exists and that
it has the same password as on the local machine.
Creating a user will copy over the avatar and other bits, but if
the user already exists, nothing will be changed.
Nothing will be done about SSH keys (user or host), domains,
certificates, and firewalls.
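The synchronization rule described above (create a user if missing, but never touch an existing one) can be sketched like this. The function name and the dict shapes are made up for illustration; they are not Cockpit APIs.

```python
# Sketch of the user-sync semantics described above: a Cockpit user
# (member of a role group) is created on the remote side with password
# hash and avatar if missing; an existing user is left untouched.

def sync_cockpit_users(local_users, remote_users, role_groups):
    """local_users/remote_users map a name to
    {"password": ..., "avatar": ..., "groups": [...]}."""
    remote = dict(remote_users)
    for name, info in local_users.items():
        if not set(info["groups"]) & set(role_groups):
            continue                      # not in a Cockpit role group
        if name in remote:
            continue                      # exists: change nothing
        remote[name] = {"password": info["password"],
                        "avatar": info["avatar"],
                        "groups": list(info["groups"])}
    return remote
```

The "never modify an existing user" rule keeps the operation idempotent and safe to re-run if the browser stops mid-way.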
So after some work on udisks-lvm it's now ready to play with. Lots has
changed in the implementation, and a few things have changed in the DBus
interfaces that were proposed in the patches.
Note that the DBus interfaces have the prefix of com.redhat.lvm2. Happy
to change this early on, if it's not acceptable.
Code, as before, is here: https://github.com/stefwalter/udisks-lvm
Now accepting patches :)
Implementation notes follow ...
Basic functionality is tested, but I didn't have time to complete tests
for everything in the initial block of code. I think each change we make
from here on out should be covered by tests.
HACKING contains info on how to run the tests in a VM, using NFS to
access the built files, and (if rw mounted) write test coverage data back.
No client API:
No client interface (yet?), since OpenLMI wants to use DBus directly,
and I imagine Cockpit will have its own client interface.
DBus interface file is installed into /usr/share/dbus-1/interfaces as
you would expect. Easy to gdbus-codegen a client API from there if one
wants to go that route.
Simple Object Manager:
Made the object manager usage *much* simpler. We avoid object skeletons,
and have a high level concept of publishing interfaces or objects.
Waiting for objects to show up is also streamlined. We rely on DBus
timeouts when they don't show up, rather than inventing our own timeouts.
No use of threading in method handlers. The previous code was not thread
safe in several areas. Things like polkit authorization and caller uid
lookup happen before the method handler.
Some of the previous jobs didn't actually encapsulate the whole task,
especially where we were wiping blocks. These are now wrapped into
threaded jobs. Very careful to pass only copied fundamental types, such
as strings and flags, into these threaded jobs.
Handle polkit authorization in g-authorize-method signal as one would
expect. We load the authorization info from annotations in the DBus
interface, rather than all over.
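To illustrate loading the polkit action from an interface annotation rather than hard-coding it in each handler, here is a small stand-alone sketch. The annotation name used here is invented; the real udisks-lvm code may use a different key and does this in C via GDBus.

```python
# Sketch: resolve a polkit action id for a DBus method from an
# annotation in the interface XML, so authorization data lives next to
# the interface definition instead of being scattered over handlers.
import xml.etree.ElementTree as ET

INTERFACE_XML = """
<node>
  <interface name="com.redhat.lvm2.Manager">
    <method name="VolumeGroupCreate">
      <annotation name="com.redhat.lvm2.polkit.action_id"
                  value="com.redhat.lvm2.manage-lvm"/>
    </method>
  </interface>
</node>
"""

def action_id_for(xml, iface, method):
    root = ET.fromstring(xml)
    for i in root.iter("interface"):
        if i.get("name") != iface:
            continue
        for m in i.iter("method"):
            if m.get("name") != method:
                continue
            for a in m.iter("annotation"):
                if a.get("name", "").endswith("polkit.action_id"):
                    return a.get("value")
    return None
```

A g-authorize-method style hook would call something like this once per call, before the method handler runs.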
Maybe the actions need to be more segmented? Currently we just have one.
The daemon quits when all its clients are gone, and no jobs are running.
Since we want to use this on the server, it really makes sense not to
have yet another daemon running all the time.
Colin, if you have time I would appreciate your feedback on the lifetime
implementation. I've tried to do this in a non-racy manner, enabled by
the fact that our daemon is rather stateless.
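The exit-on-idle idea can be sketched with a single hold count covering both clients and jobs, where the decision to quit happens under one lock. This is a toy illustration of the pattern, not the actual udisks-lvm lifetime code (which is C and must also contend with DBus name ownership).

```python
# Sketch of non-racy exit-on-idle: the daemon quits only when the
# combined count of clients and running jobs drops to zero, and the
# count is checked under the same lock that modifies it, so a new
# hold cannot race the shutdown decision.
import threading

class Lifetime:
    def __init__(self, quit_cb):
        self._lock = threading.Lock()
        self._holds = 0
        self._quit_cb = quit_cb

    def hold(self):                 # a client appeared or a job started
        with self._lock:
            self._holds += 1

    def release(self):              # a client vanished or a job finished
        with self._lock:
            self._holds -= 1
            if self._holds == 0:
                self._quit_cb()
```

Statelessness helps here: since the daemon keeps no cached state worth preserving, quitting at zero holds is always safe.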
LVM name encoding:
The Name property now contains the possibly encoded name, and the
DisplayName the possibly decoded one.
That said I don't think we should be encoding the LVM volume group and
logical volume names passed in over the DBus API. This goes against the
Cockpit ethos (and likely OpenLMI too) of exposing what's on the
system rather than hiding it away under some layer. Encoding the names
makes it awkward to create a volume group via the API and then use it
from the command line.
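For illustration, here is roughly what such an encoding layer looks like: LVM only allows a limited character set in VG/LV names, so a display name with other characters has to be escaped. The exact escape scheme below ("\xNN" for bytes outside the safe set) is a guess for the sake of the example, not necessarily what udisks-lvm implements.

```python
# Sketch of the kind of name encoding under discussion. LVM names may
# only contain [a-zA-Z0-9+_.-], so other characters get escaped as
# "\xNN" (illustrative scheme, limited to single-byte characters).
import re

SAFE = re.compile(r"[a-zA-Z0-9+_.-]")

def encode_lvm_name(display_name):
    return "".join(c if SAFE.match(c) else "\\x%02x" % ord(c)
                   for c in display_name)

def decode_lvm_name(name):
    return re.sub(r"\\x([0-9a-f]{2})",
                  lambda m: chr(int(m.group(1), 16)), name)
```

The awkwardness mentioned above is visible here: a VG created as "my vg" via the API ends up named "my\x20vg" on the command line.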
Do we really need to have Poll()? Can we just poll as needed, especially
now that we quit when no more clients are around?
As discussed on IRC and elsewhere, I've been working on a LVM addon for
udisks. This is mainly so we can get away from our custom patched
version of udisks.
This takes Marius' LVM patches and puts them in their own daemon:
Basic untested implementation here, mostly written today.
* I need to test it, just compiled, not run.
* I need to document it.
* I need to make udisks-lvm quit when not in use.
So don't run it ... yet. As I write this, a crasher bug popped into my
head that I'm sure to hit while testing. Probably at least a few more days
of work before it's in a usable state.
A bit about what's going on. There's a new set of com.redhat.lvm2.xxx
DBus interfaces.
Two of them are meant to extend block objects: LogicalVolumeBlock (when
a block is an activated logical volume) and PhysicalVolumeBlock (when a
block is a configured physical volume). These live at the same object
paths in the udisks-lvm daemon as the real blocks in udisksd.
The c.r.l.Manager interface extends the o.f.U.Manager interface, and
should live at the same object path.
So at a high level ... I know in discussions with some of you I said
this wasn't a viable scheme. That was because I was trying to make
things too general previously and cover other technologies. Now that
I've limited the scope of this addon to LVM only, I didn't run into any
major blockers ... yet :)
One of the next steps for Cockpit would be to discover machines
on the network and show them in the "Add Server" dialog.
This is a little report of where I am with that.
- OpenSLP relies on unicast replies to multicast queries. This does not
work in the presence of NAT or normal firewalls.
The fundamental problem seems to be that the connection tracking
facility in Linux does not expect the replies and thus doesn't create
a 'RELATED' connection flow for them. Both NAT and normal firewall
rules rely on connection tracking to work.
The proper fix would be to write a user space conntrack helper for use
with "nfct helper add". nfct is part of conntrack-tools, and we
should probably contact upstream for advice before writing any code.
- Before we have the proper fix, we can try to work around the conntrack
  limitation:
- In order to receive the unicast replies, we can punch a small hole in
the firewall while we listen for those replies. The hole would allow
packets from *:427 to the socket we listen on, nothing else. This is
done inside cockpitd.
- We can explicitly disable NAT for packets to the SLP multicast address
239.255.255.253. This needs to be done on the machine doing the NAT,
which is normally the host for the virtual test machines. We would
thus need to ask people to mess with an important firewall.
The challenge is to make a permanent change to the host firewall setup
that doesn't get clobbered by libvirt. Thus, the virtual network for
the virtual machines needs to use forward mode 'route', we explicitly
enable masquerading in the firewall (globally), and then disallow the
masquerading for the SLP multicast address:
# firewall-cmd --add-masquerade
# firewall-cmd --direct --add-rule ipv4 nat POSTROUTING 0 \
    -d 239.255.255.253 -j ACCEPT
(Direct rules go before the general masquerade rule. Not sure if this
is guaranteed, though.)
- We can also switch NAT on before PREPARE and switch it off before
VERIFY. This makes OpenSLP work, and we also get better isolated test
runs. We really don't want to access the Internet from tests.
(We can't have two networks and just use the right one since FreeIPA
needs the same IP address during setup as during normal operation, so
this needs to happen on the same network.)
- OpenSLP also has a startup problem where it enters the failed state
reliably on every boot. This should be easily fixable.
- We also need to open the firewall for incoming SLP queries. Trivial.
- Using Avahi would just work. We could list all instances of
_workstation._tcp.local, say, or _cockpit-ssh._tcp.local.
Avahi also has dynamic notifications, so the list of servers in the
"Add Servers" dialog would be 'live' without any extra magic like
polling or out-of-band notifications.
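For reference, the SLP service request that cockpitd would multicast to 239.255.255.253:427 is a small binary message. Below is a simplified sketch of building an SLPv2 SrvRqst per RFC 2608; it illustrates the wire format only and is not the actual cockpitd code (the service type string is an example).

```python
# Sketch: build an SLPv2 Service Request (SrvRqst, function-id 1) as
# described in RFC 2608, the packet an SLP discoverer multicasts to
# 239.255.255.253 on port 427.
import struct

def slp_srv_rqst(service_type, scope="DEFAULT", xid=1, lang="en"):
    def s(text):                      # length-prefixed string field
        b = text.encode("ascii")
        return struct.pack("!H", len(b)) + b
    # Body: previous-responder list, service type, scope list,
    # predicate, SPI (the last two empty here).
    body = s("") + s(service_type) + s(scope) + s("") + s("")
    lang_b = lang.encode("ascii")
    # Header: version=2, function-id=1, 3-byte total length,
    # flags (0x2000 = multicast request), 3-byte next-extension
    # offset, XID, language tag length + tag.
    length = 14 + len(lang_b) + len(body)
    header = struct.pack("!BB", 2, 1)
    header += length.to_bytes(3, "big")
    header += struct.pack("!H", 0x2000)
    header += (0).to_bytes(3, "big")
    header += struct.pack("!HH", xid, len(lang_b)) + lang_b
    return header + body

pkt = slp_srv_rqst("service:service-agent")
```

Sending it would be a plain UDP sendto() to the multicast address; the unicast replies to that socket are exactly the packets the conntrack discussion above is about.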
* NAT on/off switch in vm-prep.
* PREPARE and VERIFY check that the switch is in the right position.
* Bugs in OpenSLP startup are worked around.
* cockpitd learns to punch holes and to talk SLP.
* The "Add Server" dialog shows the result of find-srvs:service-agent.
* Tests for the above.
I have uploaded a new magic base tarball for Fedora 20 here:
There is also now a workaround for the logind bug in Cockpit master, and
I have removed systemd from cockpit-deps. (Only UDisks2 left, yay!)
Thus, to do verification, you need to download the new base tarball, run
PREPARE from Cockpit master, and move the new images into your
I made a repo for hubbot:
Nothing fancy, but it might help a bit with verifying pull requests. I
have been using it for a while now; it's the guy behind
Ask me or Stephen for the necessary github_token.
As mentioned before, I've attempted to put together a user-perspective
on what "roles" should look like in Fedora Server, using FreeIPA as a
representative example. I want to thank Andreas Nilsson of the Cockpit
project for helping out substantially with the mock-ups.
The proposal is fairly wordy, so I put it up on my blog for perusal. I
can copy it into the Server WG blog later, if we agree it's not