Image generated from Waifu Diffusion: A locked chamber, imposing, cyberpunk, vaporwave, cyan, high-resolution, sharp-focus, light-bloom-effect
In one of my previous blog posts, I discussed how I configured my system to use
qubes-rpc to enable running gpg-agent
in a single qube, restricting access to
my Yubikey behind RPC policies. The system that implements all of this
functionality seems really neat, so in this post I'd like to break it down and
see if we can reimplement it for SSH in a way that closely resembles the UX
flow of GPG.
The first component of the split GPG system is the gpg
wrapper, which can
emulate GPG for most use cases:
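For example (a sketch; qubes-gpg-client-wrapper is the drop-in wrapper shipped by the split GPG package, and it typically reads the target qube from /rw/config/gpg-split-domain):
# In an AppVM with split GPG configured; the private keys never leave the vault
qubes-gpg-client-wrapper --list-secret-keys
qubes-gpg-client-wrapper --armor --detach-sign some-file.txt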
But this setup doesn't work well for a few commands, either because they aren't implemented or because they rely on things that would break the security model of the system:
# This command isn't implemented
# This command relies on network access
# This command moves the private key data out of the vault qube
We can look at the source code of qubes-gpg-client
and see that it calls qrexec-client-vm
with a remote domain and an RPC service name (which doubles as the policy
name). As an example of how qrexec-client-vm
works, we can run the following
command:
qrexec-client-vm vault qubes.GetDate
# Prints out: 2022-09-28T07:04:20+00:00
We can see what runs by using cat /etc/qubes-rpc/qubes.GetDate
, which in the
case of my vault AppVM, is date -u -Iseconds
. It's important to note that the
command that is executed must exist in the AppVM you're targeting; if your
qubes use different templates, you may notice differences in the RPC commands
available.
Qubes has many types of VMs, but for this blog post we're going to focus on two of them: AppVMs and TemplateVMs. TemplateVMs have a persistent root filesystem, which means anything we want to persist at the system level, such as a Qubes RPC invocation command, has to be installed there; AppVMs only keep their home directory (and a few other paths) across reboots. Once a TemplateVM has been updated and has successfully shut down, any AppVMs that use the TemplateVM can be restarted to receive the configured changes.
I recommend creating several copies of the Debian 11 TemplateVM, never
modifying the original. This means we can create a user
TemplateVM (which can
be used for Personal, Work, etc. qubes), a vault
TemplateVM (whose sole
purpose is to be used by vault), and keep debian-11
pure. If you ever want to
make a new copy, you can either copy the debian-11
TemplateVM or a separate
TemplateVM you've already created.
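For example, from a dom0 terminal (a sketch; the clone names are just suggestions):
# Clone the pristine debian-11 template into per-purpose templates
qvm-clone debian-11 debian-11-user
qvm-clone debian-11 debian-11-vault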
First, we must create a file containing the command that should be executed
when a separate VM runs qrexec-client-vm. The command should simply relay the
incoming connection to gpg-agent's SSH socket:
#!/bin/sh -eu
# /etc/qubes-rpc/qubes.Ssh: relay the RPC connection to gpg-agent's SSH socket.
# Assumption: default socket path for uid 1000; requires socat in the vault's template.
socat STDIO "UNIX-CONNECT:/run/user/1000/gnupg/S.gpg-agent.ssh"
This makes the assumption that gpg-agent has been started with SSH support and writes its socket to the mentioned directory. This should be the most common configuration, and if you're using the split GPG setup, this is most likely how it is already set up.
The executable bit should be set for the command, which can be done by using
chmod +x /etc/qubes-rpc/qubes.Ssh
. Once this is done, the TemplateVM can be
shut down and the vault qube can be restarted to apply changes.
We also need to make sure that scdaemon
is installed, as it is not
automatically installed in (at least) Qubes 4.1.1's Debian 11 template. This
can be done by running sudo apt install -y scdaemon
in the TemplateVM used by
the vault qube.
To allow usage of the RPC command, we must specify a policy file in the dom0
VM. You can do this by running the following command:
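(A sketch; this uses the legacy policy format still honored by Qubes 4.1 and restricts the destination to the vault qube.)
echo "@anyvm vault allow" | sudo tee /etc/qubes-rpc/policy/qubes.Ssh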
This policy will allow any VM to run the qubes.Ssh RPC command. This can be
restricted further by using ask
instead of allow
, or only specifying
certain AppVMs instead of any VM.
To test this, from an AppVM, we can run the following command:
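(A sketch; "vault" is the name of the qube running the agent.)
# The exit status tells us whether the RPC call was allowed and found
qrexec-client-vm vault qubes.Ssh </dev/null
echo $?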
If the printed status is 0, we've succeeded. If it is 126, the request was denied, which usually means there's an error with the policy file. If it is 127, the command was not found in the vault qube.
Using the socat
command, we can create a relay from a UNIX socket (like
ssh-agent expects) to a command:
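(A sketch; the socket path is arbitrary and only used for this test.)
# Listen on a local socket; each connection is handed to the vault's qubes.Ssh RPC
socat UNIX-LISTEN:/tmp/vault-ssh-agent,fork EXEC:"qrexec-client-vm vault qubes.Ssh"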
From another terminal in the same AppVM, we can run a command to print all public keys that the SSH agent has a matching private key for:
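(Assuming the socket path chosen above.)
# Ask the relayed agent to list its public keys
SSH_AUTH_SOCK=/tmp/vault-ssh-agent ssh-add -L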
If the exit status of the command is 0, we've succeeded. If it is 2, the socket wasn't correctly created or can't be found by the SSH client.
Using systemd's socket activation, we can create a socket unit plus a templated
service unit: with Accept=true in the socket file and StandardInput=socket in
the service template, systemd spawns a new service instance every time a client
connects to the socket. The "@" in the service file name is significant; this
is what signals systemd to treat the unit as a template and run the command for
every new client.
These files must be created in the TemplateVMs that you wish to grant access to the SSH agent.
# /etc/systemd/user/ssh-agent.socket
[Unit]
Description=Forward connections to an SSH agent to a remote Qube
[Socket]
ListenStream=%t/ssh/S.ssh-agent
SocketMode=0600
DirectoryMode=0700
Accept=true
[Install]
WantedBy=sockets.target
# /etc/systemd/user/ssh-agent@.service
[Unit]
Description=Forward connections to an SSH agent to a remote Qube
[Service]
ExecStart=qrexec-client-vm vault qubes.Ssh
StandardInput=socket
Once the unit files are created, we can enable the socket to automatically start; once a qube has started, it will automatically load the socket unit and create the socket file.
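A sketch of enabling it from the TemplateVM so that every qube based on it gets the socket (systemctl's --global flag enables a user unit for all users):
sudo systemctl --global enable ssh-agent.socket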
The following can also be placed in .bashrc
, .zshrc
, or similar files to
automatically configure the SSH agent:
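(A sketch; %t in the socket unit expands to the same directory as $XDG_RUNTIME_DIR, so the two paths line up.)
# Point SSH clients at the socket created by ssh-agent.socket
export SSH_AUTH_SOCK="$XDG_RUNTIME_DIR/ssh/S.ssh-agent"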
The split GPG setup has a convenient section of code that can automatically prompt whether or not a qube can access the SSH qube. We can modify it and use it in our qubes-rpc service file to create an interface for approving SSH keyring access:
#!/bin/sh
# /etc/qubes-rpc/qubes.Ssh
# License: https://github.com/QubesOS/qubes-app-linux-split-gpg/blob/fa04403e049f1d5b27975fdc8651c4740b302680/debian/copyright
# Source: https://github.com/QubesOS/qubes-app-linux-split-gpg/blob/fa04403e049f1d5b27975fdc8651c4740b302680/qubes.Gpg.service#L3-L38
# With minor modifications
# Note: the variable expansions below are an approximate reconstruction of the
# linked source; consult the Source URL above for the exact original.

if [ -z "$QUBES_GPG_AUTOACCEPT" ]; then
    QUBES_GPG_AUTOACCEPT=300
fi

days="$(( QUBES_GPG_AUTOACCEPT / 86400 )) d"
hours="$(( QUBES_GPG_AUTOACCEPT / 3600 % 24 )) h"
minutes="$(( QUBES_GPG_AUTOACCEPT / 60 % 60 )) m"
seconds="$(( QUBES_GPG_AUTOACCEPT % 60 )) s"

stat_file="/var/run/qubes-gpg-split/stat.$QREXEC_REMOTE_DOMAIN"
stat_time=$(stat -c %Y "$stat_file" 2>/dev/null || echo 0)
now=$(date +%s)
if [ $(( now - stat_time )) -gt "$QUBES_GPG_AUTOACCEPT" ]; then
    # This has been modified to say "SSH" instead of "GPG"
    msg_text="Do you allow VM '$QREXEC_REMOTE_DOMAIN' to access your SSH keys"
    msg_text="$msg_text \n(now and for the following $days$hours$minutes$seconds)?"
    zenity --question --text "$msg_text" || exit 1
    touch "$stat_file"
fi

# Add in our snippet to forward the inbound connection to the running SSH agent
socat STDIO "UNIX-CONNECT:/run/user/1000/gnupg/S.gpg-agent.ssh"
A vault qube typically has no network access, but for this use case we'll allow
outbound traffic (via sys-firewall) on port 22. We also assume that the
TemplateVM used for the vault qube has pass and dmenu installed.
To start off with, we can create an RPC command to get the password list from the vault qube:
#!/usr/bin/env bash
# /etc/qubes-rpc/pass.ListPasswords
# Approximate reconstruction, mirroring pass's own passmenu listing logic:
# print every entry in the store, with the prefix and .gpg suffix stripped.
shopt -s nullglob globstar
prefix="${PASSWORD_STORE_DIR:-$HOME/.password-store}"
password_files=( "$prefix"/**/*.gpg )
password_files=( "${password_files[@]#"$prefix"/}" )
password_files=( "${password_files[@]%.gpg}" )
printf '%s\n' "${password_files[@]}"
The executable bit should be set for the command, which can be done by using
chmod +x /etc/qubes-rpc/pass.ListPasswords
. Once this is done, the TemplateVM
can be shut down and the vault qube can be restarted to apply changes. Then, we
need to create an RPC policy:
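(A sketch, mirroring the earlier qubes.Ssh policy; use ask instead of allow if you want a prompt.)
echo "@anyvm vault allow" | sudo tee /etc/qubes-rpc/policy/pass.ListPasswords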
This will act as a way to send our password list to a program that can help us
choose a password, such as dmenu
:
qrexec-client-vm vault pass.ListPasswords | dmenu
# Note: Sometimes with qrexec-client-vm, you can leave off the pipe and it'll
# pipe directly to the command. That can't work here, since qrexec-client-vm
# eats the output of the command.
# Example output: github.com
Now that we have a password chosen, we need to build an RPC command that retrieves the chosen password:
#!/bin/sh
# /etc/qubes-rpc/pass.GetPassword
# Sketch: read the requested entry name from the caller, then print the first
# line of the matching entry (where pass stores the password itself).
read -r password_name
pass show "$password_name" | head -n 1
Once the executable bit has been set, the TemplateVM has been shut down, and the policy has been created, we can test this out by running:
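(A sketch; this assumes an entry named github.com exists in the vault's password store.)
# The entry name goes in on stdin; the password comes back on stdout
echo "github.com" | qrexec-client-vm vault pass.GetPassword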
We can design a shell script to automatically link these components together
and copy the password to the clipboard, by making use of the clip function in
/usr/bin/pass:
#!/usr/bin/env bash
# $HOME/bin/passmenu
# Approximate reconstruction: the stripped defaults below are assumptions
# (PASSWORD_STORE_* are the environment variables pass itself honors, and they
# feed the clip function borrowed from /usr/bin/pass).
BASE64=base64
X_SELECTION="${PASSWORD_STORE_X_SELECTION:-clipboard}"
CLIP_TIME="${PASSWORD_STORE_CLIP_TIME:-45}"
QUBE="${1:-vault}"
password_name="$(qrexec-client-vm "$QUBE" pass.ListPasswords | dmenu)"