A Nix Flake to build NixOS and run it on one of several Type-2 Hypervisors on NixOS/Linux. The project is intended to provide a more isolated alternative to nixos-container. You can either build and run MicroVMs like Nix packages, or alternatively install them as systemd services declaratively in your host's Nix Flake or imperatively with the provided `microvm` command.
Warning: This is a Nix Flakes-only project. Use with `nix-shell -p nixFlakes`
- MicroVMs are Virtual Machines that use special device interfaces (virtio) for high performance
- This project runs them on NixOS hosts
- You can choose one of five hypervisors for each MicroVM
- MicroVMs have a fixed RAM allocation (default: 512 MB)
- MicroVMs have a read-only root disk with a prepopulated `/nix/store`
- You define your MicroVMs in a Nix Flake's `nixosConfigurations` section, reusing the `nixosModules` that are exported by this Flake
- MicroVMs can access stateful filesystems either on an image volume as a block device or as a shared directory hierarchy through virtiofsd (see the sketch after this list)
- Zero, one, or more virtual tap Ethernet network interfaces can be attached to a MicroVM
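As an illustration of the volume mechanism, a stateful filesystem could be attached as a block device along these lines (image name, mount point, and size are placeholder assumptions, not defaults):

```nix
microvm.volumes = [ {
  image = "var.img";   # image file on the host (placeholder name)
  mountPoint = "/var"; # where the block device is mounted inside the MicroVM
  size = 256;          # MiB (placeholder)
} ];
```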
| Hypervisor | Language | Restrictions |
|---|---|---|
| qemu | C | |
| cloud-hypervisor | Rust | no 9p shares |
| firecracker | Rust | no 9p/virtiofs shares |
| crosvm | Rust | no network interfaces |
| kvmtool | C | no virtiofs shares |
While the ubiquitous qemu works in most situations, the other hypervisors tend to break with Linux kernel updates; crosvm and kvmtool in particular need a lot of luck to get going.
```bash
nix registry add microvm github:astro/microvm.nix
```
(If you do not want to inflict this change on your system, just replace `microvm` with `github:astro/microvm.nix` in the following examples.)
```bash
nix flake init -t microvm
```
```bash
nix run microvm#qemu-example
nix run microvm#firecracker-example
nix run microvm#cloud-hypervisor-example
nix run microvm#crosvm-example
nix run microvm#kvmtool-example
nix run microvm#vm
```
Check `networkctl status virbr0` for the DHCP leases of the MicroVMs. They listen for ssh with an empty root password.
In `microvm.shares` elements the `proto` field allows either of two values:

- `9p` (default) is built into many hypervisors, allowing you to quickly share a directory tree
- `virtiofs` requires a separate virtiofsd service, which is only started as a prerequisite when you start MicroVMs through a systemd service that comes with the `microvm.nixosModules.host` module

Expect `virtiofs` to yield better performance over `9p`.
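A virtiofs share could be declared like this (tag, source path, and mount point are placeholders; a minimal sketch, not a complete configuration):

```nix
microvm.shares = [ {
  proto = "virtiofs";   # requires virtiofsd, started by the host module
  tag = "home";         # unique mount tag (placeholder)
  source = "/var/lib/my-microvm/home";  # host path (placeholder)
  mountPoint = "/home"; # path inside the MicroVM
} ];
```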
If a share with `source = "/nix/store"` is defined, size and build time of the stage1 squashfs for `/dev/vda` will be reduced drastically.
```nix
microvm.shares = [ {
  tag = "ro-store";
  source = "/nix/store";
  mountPoint = "/nix/.ro-store";
} ];
```
The writable layer is mounted from the path `microvm.writableStoreOverlay`. You may choose to add a persistent volume or share for that mountPoint (a sketch follows the disable example below).
Recommended configuration to disable this feature, making `/nix/store` read-only:
```nix
microvm.writableStoreOverlay = null;
```
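Conversely, to keep the overlay and make it persistent, one could back its mount point with a volume, roughly like this (the overlay path, image name, and size are illustrative assumptions):

```nix
# Keep a writable /nix/store overlay (path is an assumption, not a default)
microvm.writableStoreOverlay = "/nix/.rw-store";
# Back the overlay mountPoint with a persistent volume so that
# store writes survive reboots
microvm.volumes = [ {
  image = "nix-store-overlay.img";  # placeholder image name
  mountPoint = "/nix/.rw-store";    # must match the overlay path above
  size = 2048;                      # MiB (placeholder)
} ];
```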
User-mode networking is only provided by qemu and kvmtool. It offers outgoing connectivity for your MicroVM without any further setup. As kvmtool seems to lack a built-in DHCP server, additional static IP configuration is necessary inside the MicroVM.
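A user-mode interface could be declared like this (the interface id and MAC address are placeholders; a sketch, not a canonical example):

```nix
microvm.interfaces = [ {
  type = "user";
  id = "user0";               # placeholder interface id
  mac = "02:00:00:00:00:01";  # placeholder MAC address
} ];
```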
Use a virtual tuntap Ethernet interface. Its name is the value of `id`.
Some hypervisors may be able to automatically create these interfaces when running as root, which we advise against. Instead, create the interfaces before starting a MicroVM:
```bash
sudo ip tuntap add $IFACE_NAME mode tap user $USER
```
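The corresponding declaration in the MicroVM's configuration might look like this (the MAC address is a placeholder; `id` must match the tap interface name created on the host):

```nix
microvm.interfaces = [ {
  type = "tap";
  id = "vm-test";             # must match the host-side tap interface name
  mac = "02:00:00:00:00:02";  # placeholder MAC address
} ];
```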
When running MicroVMs through the `host` module, the tap network interfaces are created through a systemd service dependency.
This mode lets qemu create a tap interface and attach it to a bridge.
The `qemu-bridge-helper` binary needs to be set up with the proper permissions. See the `host` module for that. qemu will be run without `-sandbox on` in order for this contraption to work.
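A bridged interface declaration might be sketched as follows (the interface id and MAC are placeholders, and the `bridge` field name is an assumption; this presumes a bridge like `virbr0` already exists on the host):

```nix
microvm.interfaces = [ {
  type = "bridge";
  id = "qemu-br0";            # placeholder interface id
  bridge = "virbr0";          # host bridge to attach to (assumption)
  mac = "02:00:00:00:00:03";  # placeholder MAC address
} ];
```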
Use this on a (physical) machine that is supposed to host MicroVMs.
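Importing the host module into the host's nixosSystem could look roughly like this (a sketch; the flake input names `nixpkgs` and `microvm` and the file layout are assumptions):

```nix
# flake.nix of the host (sketch)
nixosConfigurations.my-host = nixpkgs.lib.nixosSystem {
  system = "x86_64-linux";
  modules = [
    microvm.nixosModules.host  # host module exported by this Flake
    ./configuration.nix        # the host's own configuration (assumption)
  ];
};
```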
Declare MicroVMs in your host's `nixosSystem`.
This method is meant to be used to ensure the presence of a MicroVM. It will not update preexisting MicroVMs in `/var/lib/microvm`. Use the imperative `microvm` command to do that.
microvm.vms."my-microvm" = {
# Source flake for `nixos-rebuild` of the host
flake = self;
# Source flakeref for `microvm -u my-microvm`
updateFlake = "git+https://...";
};
```bash
# Create my-microvm
microvm -f git+https://... -c my-microvm
# Update my-microvm
microvm -u my-microvm
# List MicroVMs
microvm -l
```
Import this module in your MicroVM's `nixosSystem`. Refer to `nixos-modules/microvm/options.nix` for MicroVM-related config.
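Such an import could be sketched as follows (the module attribute `microvm.nixosModules.microvm`, the flake input names, and the option values shown are assumptions, not verified defaults):

```nix
# flake.nix defining the MicroVM (sketch)
nixosConfigurations.my-microvm = nixpkgs.lib.nixosSystem {
  system = "x86_64-linux";
  modules = [
    microvm.nixosModules.microvm  # MicroVM module (name is an assumption)
    {
      microvm.hypervisor = "qemu";  # one of the five supported hypervisors
      microvm.mem = 512;            # MiB (assumption: matches the default)
    }
  ];
};
```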
Your Flake no longer needs to provide the MicroVMs as packages. An entry for each MicroVM in `nixosConfigurations` is enough.
To get a MicroVM's hypervisor runner as a package, use:
```bash
nix build myflake#nixosConfigurations.my-microvm.config.microvm.runner.qemu
```
MicroVM parameters have moved inside the NixOS configuration, gaining parameter validation through the module system. Refer to `nixos-modules/microvm/options.nix` for their definitions.
Delete the following remnants from 0.1.0:

- `microvm-run`
- `microvm-shutdown`
- `tap-interfaces`
- `virtiofs`

All these copied files are now behind the `current` symlink to a Hypervisor runner package.
Finally, check the validity of the symlinks in `/nix/var/nix/gcroots/microvm`.
The author can be hired to implement the features that you wish, or to integrate this tool into your toolchain. If in doubt, just press the 💗sponsor button.
- Boot with root off virtiofs, avoiding the overhead of creating a squashfs image
- Provide a writable `/nix/store`
- Distribute/fail-over MicroVMs at run-time within a cluster of hosts