Drives
Overview
Drives are persistent block devices that provide storage for VMs. They serve both as the bootable root filesystem and as the data-storage mechanism within WorkFort’s architecture.
On the host, drives are backed by btrfs subvolumes, enabling instant CoW snapshots, zero-cost cloning, and efficient storage sharing. Firecracker VMs see them as standard block devices, exposed via device-mapper/NBD (dm/nbd).
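As a rough sketch of the export step: assuming each drive subvolume holds a raw image file and that qemu-nbd serves as the NBD exporter (the actual dm/nbd plumbing is a WorkFort implementation detail), attaching one as a block device could look like this:

```rust
use std::process::Command;

// Export a drive’s backing image as a block device for Firecracker.
// Assumptions for this sketch: the drive subvolume contains a raw
// image file, and qemu-nbd is the exporter; all paths are illustrative.
fn export_drive(image: &str, nbd_dev: &str) -> std::io::Result<()> {
    let status = Command::new("qemu-nbd")
        .args(["--connect", nbd_dev, "--format", "raw", image])
        .status()?;
    if status.success() {
        Ok(())
    } else {
        Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            format!("qemu-nbd failed: {status}"),
        ))
    }
}

fn main() -> std::io::Result<()> {
    export_drive("/var/lib/nexus/workspaces/@work-code-1/disk.img", "/dev/nbd0")
}
```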
Responsibilities
Bootable Root Filesystem
Drives created from base subvolumes provide the root filesystem for VMs:
- Work VMs boot from drives containing the execution environment and guest-agent
- Portal VMs boot from drives containing the agent runtime
- Service VMs boot from drives containing their application stack
Data Storage
Drives provide persistent storage independent of VM lifecycle:
- Data persists after VM shutdown
- Can be reused across multiple VM sessions
- Support both read-write and read-only modes
Data Movement Between VMs
Drives enable sequential data transfer between VMs:
1. VM completes work and shuts down
2. Drive detaches from terminated VM
3. Drive attaches to new VM at boot
4. New VM accesses data written by previous VM
This sequential access pattern is imposed by Firecracker’s security model — concurrent host/guest access is not supported.
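A sketch of what the handoff could look like from orchestration code; the NexusClient API below is hypothetical, and only the strict ordering of the calls reflects the documented pattern:

```rust
/// Hypothetical Nexus orchestration client; all names here are
/// illustrative, not a real WorkFort API.
struct NexusClient;

impl NexusClient {
    /// Block until the VM has fully shut down and released its drives.
    fn shutdown_and_wait(&self, _vm: &str) {}
    /// Return the drive to host ownership.
    fn detach_drive(&self, _vm: &str, _drive: &str) {}
    /// Attach the drive before boot so the guest sees it from the start.
    fn boot_vm_with_drive(&self, _vm: &str, _drive: &str) {}
}

fn hand_off(nexus: &NexusClient, drive: &str, from_vm: &str, to_vm: &str) {
    // The drive is never attached to two running VMs at once, and the
    // host never touches it while a guest holds it (Firecracker’s model).
    nexus.shutdown_and_wait(from_vm);
    nexus.detach_drive(from_vm, drive);
    nexus.boot_vm_with_drive(to_vm, drive);
}

fn main() {
    hand_off(&NexusClient, "workspace-1", "work-vm-a", "work-vm-b");
}
```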
Design
btrfs-Backed Storage
Rather than traditional ext4 image files, WorkFort backs its drives with btrfs subvolumes:
| Operation | Traditional | WorkFort (btrfs) |
|---|---|---|
| Create workspace | Copy full image (slow, full disk cost) | btrfs subvolume snapshot (instant, zero disk cost) |
| Storage sharing | OverlayFS layers | CoW — shared blocks stay shared |
| Cleanup | Delete image file | btrfs subvolume delete |
| Checkpoint | Copy image or OverlayFS snapshot | btrfs subvolume snapshot (instant) |
| Rollback | Restore from backup | Switch to previous snapshot |
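Concretely, the workspace operations in the table map onto the stock btrfs CLI. A minimal sketch, with paths taken from the host layout shown later in this document:

```rust
use std::process::Command;

// Thin wrapper around the btrfs CLI; error handling trimmed for brevity.
fn btrfs(args: &[&str]) -> std::io::Result<()> {
    let status = Command::new("btrfs").args(args).status()?;
    if status.success() {
        Ok(())
    } else {
        Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            format!("btrfs {args:?} failed: {status}"),
        ))
    }
}

fn main() -> std::io::Result<()> {
    let base = "/var/lib/nexus/workspaces/@base-agent";
    let ws = "/var/lib/nexus/workspaces/@work-code-1";

    // Create workspace: instant CoW snapshot, no data copied.
    btrfs(&["subvolume", "snapshot", base, ws])?;

    // Cleanup: drop the subvolume; blocks shared with @base-agent stay live.
    btrfs(&["subvolume", "delete", ws])?;
    Ok(())
}
```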
Drive Types
Drives are distinguished by purpose:
- Boot drives: Created from read-only master image snapshots; contain a bootable root filesystem with an init system, tools, and (for work VMs) the guest-agent
- Data drives: Created empty or populated with project data, used for workspace storage and transfer between VMs
Both are btrfs subvolumes exposed to Firecracker as block devices via dm/nbd.
Access Patterns
Drives follow a sequential access model:
1. Host prepares drive → Snapshot base subvolume or create empty
2. VM boots with drive → Drive attached before VM starts
3. VM operates on drive → Read/write within VM
4. VM shuts down → Drive detaches
5. Host or next VM uses drive → Snapshot, inspect, or attach to new VM
Constraint: No concurrent access. Host and guest cannot access the same drive simultaneously. This is a Firecracker security design decision, not a limitation being addressed.
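Between steps 4 and 5 the host owns the drive exclusively, which is when the checkpoint and rollback operations from the earlier table are safe to run. A sketch using the stock btrfs CLI; the checkpoint naming scheme is made up for illustration:

```rust
use std::process::Command;

// Run a btrfs subcommand and report success; error handling trimmed.
fn btrfs(args: &[&str]) -> bool {
    Command::new("btrfs")
        .args(args)
        .status()
        .map(|s| s.success())
        .unwrap_or(false)
}

fn main() {
    let ws = "/var/lib/nexus/workspaces/@work-code-1";
    let ckpt = "/var/lib/nexus/workspaces/@work-code-1.ckpt-1"; // illustrative name

    // Checkpoint: read-only snapshot, taken while no VM holds the drive.
    assert!(btrfs(&["subvolume", "snapshot", "-r", ws, ckpt]));

    // Rollback: drop the current state and re-snapshot the checkpoint
    // (writable this time) back into place before the next VM boots.
    assert!(btrfs(&["subvolume", "delete", ws]));
    assert!(btrfs(&["subvolume", "snapshot", ckpt, ws]));
}
```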
Multiple Drives Per VM
VMs support multiple drive attachments:
- One boot drive (required for VM boot)
- Additional data drives (workspace, shared datasets, outputs)
Example: Work VM with boot drive + workspace drive containing project source code.
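In Firecracker’s VM configuration this maps onto its drives array, where is_root_device marks the boot drive. A sketch of the example above, built with the serde_json crate; the device paths are illustrative and would come from the dm/nbd export step:

```rust
use serde_json::json;

fn main() {
    // Drives section of a Firecracker VM config: one boot drive plus
    // one workspace data drive. Device paths are illustrative.
    let drives = json!([
        {
            "drive_id": "rootfs",
            "path_on_host": "/dev/nbd0", // boot drive: root fs + guest-agent
            "is_root_device": true,
            "is_read_only": false
        },
        {
            "drive_id": "workspace",
            "path_on_host": "/dev/nbd1", // data drive: project source code
            "is_root_device": false,
            "is_read_only": false
        }
    ]);
    println!("{}", serde_json::to_string_pretty(&drives).unwrap());
}
```

Read-only attachment (for shared datasets, for example) is just is_read_only set to true on the data drive.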
Persistence Model
Drives are persistent resources:
- Survive VM termination
- Reusable across multiple VM sessions
- Managed independently of VM lifecycle
- Can accumulate data across multiple VM executions
Host Layout
/var/lib/nexus/
├── workspaces/
│ ├── @base-agent/ ← read-only master image
│ ├── @work-code-1/ ← CoW snapshot of @base-agent
│ └── @portal-openclaw/ ← CoW snapshot of portal master
├── images/
│ └── vmlinux ← kernel
└── state/
└── nexus.db ← SQLite
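Because every workspace is a plain subvolume, the layout can be inspected with standard btrfs tooling, for example:

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Enumerate all drive subvolumes under the nexus root;
    // `btrfs subvolume list` prints one line per subvolume.
    let out = Command::new("btrfs")
        .args(["subvolume", "list", "/var/lib/nexus"])
        .output()?;
    print!("{}", String::from_utf8_lossy(&out.stdout));
    Ok(())
}
```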
Relationship to Other Components
Drives connect multiple architecture components:
- Master images → Snapshot into Drives
- Drives → Exposed via dm/nbd → Attached to VMs at boot
- VMs → Managed by Nexus
- guest-agent (in Work VMs) → Operates on files within mounted Drives