Step 4: VM Records CRUD — Implementation Plan
For Claude: REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
Goal: REST endpoints and CLI commands for creating, listing, inspecting, and deleting VM records in SQLite – no Firecracker processes yet, state machine limited to created.
Architecture: The vms table is added to the schema constant in nexus-lib. The existing StateStore trait is extended with VM CRUD methods; SqliteStore implements them. nexusd gains four new routes (POST /v1/vms, GET /v1/vms, GET /v1/vms/:id, DELETE /v1/vms/:id). nexusctl gains the vm subcommand with create, list, inspect, and delete actions. NexusClient gains methods matching each endpoint. vsock CIDs are auto-assigned starting at 3 (reserved: 0 = hypervisor, 1 = local loopback, 2 = host).
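As a quick illustration of that CID policy, here is a minimal sketch of the assignment rule; the actual implementation in Task 4 derives the next CID from a SELECT MAX(cid) query rather than a standalone helper like this one:

/// Guest CIDs start at 3 because vsock reserves 0 (hypervisor),
/// 1 (local loopback), and 2 (host). New VMs take max-in-use + 1.
fn next_cid(max_cid_in_use: Option<u32>) -> u32 {
    max_cid_in_use.map(|c| c + 1).unwrap_or(3)
}
// next_cid(None) == 3; next_cid(Some(4)) == 5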
Tech Stack (additions to existing):
- uuid 1.x – generate unique VM IDs
- serde_json (already a dep in nexusd) – serialize the config_json field
- chrono 0.4 – human-readable age formatting in CLI table output
See data model for the full vms table definition and state machine.
See CLI design for command grammar and output formatting conventions.
Task 1: Add vms Table to Schema
Files:
- Modify: nexus/nexus-lib/src/store/schema.rs
Step 1: Update SCHEMA_VERSION and SCHEMA_SQL
Bump the schema version from 1 to 2 and append the vms table. Pre-alpha migration (delete + recreate) handles the version bump automatically.
// nexus/nexus-lib/src/store/schema.rs
/// Schema version — increment when the schema changes.
/// Pre-alpha migration strategy: if the stored version doesn't match,
/// delete the DB and recreate.
pub const SCHEMA_VERSION: u32 = 2;
/// Database schema. Executed as a single batch on first start.
/// Domain tables are added by later steps — each step bumps SCHEMA_VERSION
/// and appends its tables here. Pre-alpha migration (delete + recreate)
/// means all tables are always created from this single constant.
pub const SCHEMA_SQL: &str = r#"
-- Schema version tracking
CREATE TABLE schema_meta (
key TEXT PRIMARY KEY,
value TEXT NOT NULL
);
-- Application settings (key-value store)
CREATE TABLE settings (
key TEXT PRIMARY KEY,
value TEXT NOT NULL,
type TEXT NOT NULL CHECK(type IN ('string', 'int', 'bool', 'json'))
);
-- VMs: Firecracker microVM instances
CREATE TABLE vms (
id TEXT PRIMARY KEY,
name TEXT UNIQUE NOT NULL,
role TEXT NOT NULL CHECK(role IN ('portal', 'work', 'service')),
state TEXT NOT NULL CHECK(state IN ('created', 'running', 'stopped', 'crashed', 'failed')),
cid INTEGER NOT NULL UNIQUE,
vcpu_count INTEGER NOT NULL DEFAULT 1,
mem_size_mib INTEGER NOT NULL DEFAULT 128,
config_json TEXT,
pid INTEGER,
socket_path TEXT,
uds_path TEXT,
console_log_path TEXT,
created_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
updated_at INTEGER NOT NULL DEFAULT (strftime('%s', 'now')),
started_at INTEGER,
stopped_at INTEGER
);
CREATE INDEX idx_vms_role ON vms(role);
CREATE INDEX idx_vms_state ON vms(state);
"#;
Step 2: Verify build
Run: cd /home/kazw/Work/WorkFort/nexus && mise run check
Expected: Compiles with no errors.
Step 3: Run existing store tests to verify schema migration works
The existing init_creates_all_tables test expects table_count == 2. It needs to be updated to 3 (schema_meta, settings, vms). But first, verify the schema mismatch triggers recreate by running:
Run: cd /home/kazw/Work/WorkFort/nexus && cargo test -p nexus-lib store::sqlite::tests::schema_mismatch_triggers_recreate
Expected: PASS – the test creates a fake version “0” database, and open_and_init deletes and recreates.
Step 4: Update table count expectations in existing tests
In nexus/nexus-lib/src/store/sqlite.rs, update the following tests:
- init_creates_all_tables: change assert_eq!(status.table_count, 2, ...) to assert_eq!(status.table_count, 3, ...)
- init_is_idempotent: change assert_eq!(status.table_count, 2) to assert_eq!(status.table_count, 3)
- schema_mismatch_triggers_recreate: change assert_eq!(status.table_count, 2, ...) to assert_eq!(status.table_count, 3, ...)
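For example, the first assertion ends up looking roughly like this (the message string below is illustrative only; keep whatever message the existing test already uses):

// init_creates_all_tables -- the vms table raises the expected count to 3
assert_eq!(status.table_count, 3, "expected schema_meta, settings, and vms tables");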
Step 5: Run all store tests
Run: cd /home/kazw/Work/WorkFort/nexus && cargo test -p nexus-lib store
Expected: All tests PASS.
Step 6: Update the integration test in nexusd
In nexus/nexusd/tests/daemon.rs, update:
Change assert_eq!(body["database"]["table_count"], 2) to assert_eq!(body["database"]["table_count"], 3).
Step 7: Run workspace tests
Run: cd /home/kazw/Work/WorkFort/nexus && mise run test
Expected: All tests PASS.
Step 8: Commit
git add nexus/nexus-lib/src/store/schema.rs nexus/nexus-lib/src/store/sqlite.rs nexus/nexusd/tests/daemon.rs
git commit -m "feat(nexus-lib): add vms table to database schema (v2)"
Task 2: Add uuid Dependency to nexus-lib
Files:
- Modify: nexus/nexus-lib/Cargo.toml
Step 1: Add uuid dependency
Add to [dependencies] in nexus/nexus-lib/Cargo.toml:
uuid = { version = "1", features = ["v4"] }
Step 2: Verify build
Run: cd /home/kazw/Work/WorkFort/nexus && mise run check
Expected: Compiles with no errors.
Step 3: Commit
git add nexus/nexus-lib/Cargo.toml
git commit -m "chore(nexus-lib): add uuid dependency for VM ID generation"
Task 3: VM Domain Types in nexus-lib
Files:
- Create: nexus/nexus-lib/src/vm.rs
- Modify: nexus/nexus-lib/src/lib.rs
Step 1: Write the vm module (types and tests)
// nexus/nexus-lib/src/vm.rs
use serde::{Deserialize, Serialize};
/// VM role determines the VM's function in the system.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum VmRole {
Portal,
Work,
Service,
}
impl VmRole {
pub fn as_str(&self) -> &'static str {
match self {
VmRole::Portal => "portal",
VmRole::Work => "work",
VmRole::Service => "service",
}
}
}
impl std::fmt::Display for VmRole {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(self.as_str())
}
}
impl std::str::FromStr for VmRole {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"portal" => Ok(VmRole::Portal),
"work" => Ok(VmRole::Work),
"service" => Ok(VmRole::Service),
_ => Err(format!("invalid VM role: '{s}' (expected: portal, work, service)")),
}
}
}
/// VM lifecycle state. Step 4 only uses `Created`.
/// Other states are defined for the data model but transitions
/// are not implemented until Firecracker integration (Step 6).
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum VmState {
Created,
Running,
Stopped,
Crashed,
Failed,
}
impl VmState {
pub fn as_str(&self) -> &'static str {
match self {
VmState::Created => "created",
VmState::Running => "running",
VmState::Stopped => "stopped",
VmState::Crashed => "crashed",
VmState::Failed => "failed",
}
}
}
impl std::fmt::Display for VmState {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(self.as_str())
}
}
impl std::str::FromStr for VmState {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"created" => Ok(VmState::Created),
"running" => Ok(VmState::Running),
"stopped" => Ok(VmState::Stopped),
"crashed" => Ok(VmState::Crashed),
"failed" => Ok(VmState::Failed),
_ => Err(format!("invalid VM state: '{s}'")),
}
}
}
/// Parameters for creating a new VM.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CreateVmParams {
pub name: String,
#[serde(default = "default_role")]
pub role: VmRole,
#[serde(default = "default_vcpu")]
pub vcpu_count: u32,
#[serde(default = "default_mem")]
pub mem_size_mib: u32,
}
fn default_role() -> VmRole { VmRole::Work }
fn default_vcpu() -> u32 { 1 }
fn default_mem() -> u32 { 128 }
/// A VM record as stored in the database and returned by the API.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Vm {
pub id: String,
pub name: String,
pub role: VmRole,
pub state: VmState,
pub cid: u32,
pub vcpu_count: u32,
pub mem_size_mib: u32,
pub created_at: i64,
pub updated_at: i64,
#[serde(skip_serializing_if = "Option::is_none")]
pub started_at: Option<i64>,
#[serde(skip_serializing_if = "Option::is_none")]
pub stopped_at: Option<i64>,
#[serde(skip_serializing_if = "Option::is_none")]
pub pid: Option<u32>,
#[serde(skip_serializing_if = "Option::is_none")]
pub socket_path: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub uds_path: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub console_log_path: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub config_json: Option<String>,
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn role_roundtrip() {
assert_eq!("work".parse::<VmRole>().unwrap(), VmRole::Work);
assert_eq!("portal".parse::<VmRole>().unwrap(), VmRole::Portal);
assert_eq!("service".parse::<VmRole>().unwrap(), VmRole::Service);
assert!("invalid".parse::<VmRole>().is_err());
}
#[test]
fn state_roundtrip() {
assert_eq!("created".parse::<VmState>().unwrap(), VmState::Created);
assert_eq!("running".parse::<VmState>().unwrap(), VmState::Running);
assert_eq!("stopped".parse::<VmState>().unwrap(), VmState::Stopped);
assert_eq!("crashed".parse::<VmState>().unwrap(), VmState::Crashed);
assert_eq!("failed".parse::<VmState>().unwrap(), VmState::Failed);
assert!("bogus".parse::<VmState>().is_err());
}
#[test]
fn role_display() {
assert_eq!(VmRole::Work.to_string(), "work");
assert_eq!(VmRole::Portal.to_string(), "portal");
assert_eq!(VmRole::Service.to_string(), "service");
}
#[test]
fn create_params_deserialize_with_defaults() {
let json = r#"{"name": "my-vm"}"#;
let params: CreateVmParams = serde_json::from_str(json).unwrap();
assert_eq!(params.name, "my-vm");
assert_eq!(params.role, VmRole::Work);
assert_eq!(params.vcpu_count, 1);
assert_eq!(params.mem_size_mib, 128);
}
#[test]
fn create_params_deserialize_with_overrides() {
let json = r#"{"name": "big-vm", "role": "portal", "vcpu_count": 4, "mem_size_mib": 1024}"#;
let params: CreateVmParams = serde_json::from_str(json).unwrap();
assert_eq!(params.name, "big-vm");
assert_eq!(params.role, VmRole::Portal);
assert_eq!(params.vcpu_count, 4);
assert_eq!(params.mem_size_mib, 1024);
}
#[test]
fn vm_serializes_without_none_fields() {
let vm = Vm {
id: "abc".to_string(),
name: "test".to_string(),
role: VmRole::Work,
state: VmState::Created,
cid: 3,
vcpu_count: 1,
mem_size_mib: 128,
created_at: 1000,
updated_at: 1000,
started_at: None,
stopped_at: None,
pid: None,
socket_path: None,
uds_path: None,
console_log_path: None,
config_json: None,
};
let json = serde_json::to_string(&vm).unwrap();
assert!(!json.contains("started_at"));
assert!(!json.contains("pid"));
assert!(json.contains("\"cid\":3"));
}
}
Step 2: Export the module from lib.rs
// nexus/nexus-lib/src/lib.rs
pub mod client;
pub mod config;
pub mod store;
pub mod vm;
#[cfg(feature = "test-support")]
pub mod test_support;
Step 3: Run tests to verify they pass
Run: cd /home/kazw/Work/WorkFort/nexus && cargo test -p nexus-lib vm::tests
Expected: All 6 tests PASS.
Step 4: Commit
git add nexus/nexus-lib/src/vm.rs nexus/nexus-lib/src/lib.rs
git commit -m "feat(nexus-lib): add VM domain types (Vm, VmRole, VmState, CreateVmParams)"
Task 4: VM CRUD Methods on StateStore and SqliteStore
Files:
- Modify: nexus/nexus-lib/src/store/traits.rs
- Modify: nexus/nexus-lib/src/store/sqlite.rs
This task adds VM CRUD methods to the store. The trait is extended with VM-specific operations. SqliteStore implements them using parameterized queries.
Step 1: Extend the StateStore trait
Add to nexus/nexus-lib/src/store/traits.rs, extending the existing StateStore trait:
// Add these imports at the top of traits.rs
use crate::vm::{CreateVmParams, Vm};
// Add these methods to the StateStore trait, after the existing methods:
/// Create a new VM record. Assigns a unique ID and auto-assigns a vsock CID.
/// Returns the created VM.
fn create_vm(&self, params: &CreateVmParams) -> Result<Vm, StoreError>;
/// List all VMs, optionally filtered by role and/or state.
fn list_vms(&self, role: Option<&str>, state: Option<&str>) -> Result<Vec<Vm>, StoreError>;
/// Get a single VM by name or ID.
fn get_vm(&self, name_or_id: &str) -> Result<Option<Vm>, StoreError>;
/// Delete a VM by name or ID. Returns true if a record was deleted.
/// Refuses to delete VMs in the `running` state (returns StoreError).
fn delete_vm(&self, name_or_id: &str) -> Result<bool, StoreError>;
Also add a new error variant to StoreError:
/// Operation not allowed in current state
Conflict(String),
And its Display arm:
StoreError::Conflict(e) => write!(f, "conflict: {e}"),
Step 2: Write tests in sqlite.rs
Add these tests to the tests module in nexus/nexus-lib/src/store/sqlite.rs. The tests refer to CreateVmParams, VmRole, and VmState, so make sure those are in scope in the tests module (for example via use crate::vm::{CreateVmParams, VmRole, VmState}; if they are not already covered by an existing use super::*;):
#[test]
fn create_vm_assigns_id_and_cid() {
let dir = tempfile::tempdir().unwrap();
let db_path = dir.path().join("test.db");
let store = SqliteStore::open_and_init(&db_path).unwrap();
let params = CreateVmParams {
name: "test-vm".to_string(),
role: VmRole::Work,
vcpu_count: 2,
mem_size_mib: 512,
};
let vm = store.create_vm(¶ms).unwrap();
assert!(!vm.id.is_empty());
assert_eq!(vm.name, "test-vm");
assert_eq!(vm.role, VmRole::Work);
assert_eq!(vm.state, VmState::Created);
assert_eq!(vm.cid, 3); // first CID
assert_eq!(vm.vcpu_count, 2);
assert_eq!(vm.mem_size_mib, 512);
}
#[test]
fn create_vm_auto_increments_cid() {
let dir = tempfile::tempdir().unwrap();
let db_path = dir.path().join("test.db");
let store = SqliteStore::open_and_init(&db_path).unwrap();
let vm1 = store.create_vm(&CreateVmParams {
name: "vm-1".to_string(),
role: VmRole::Work,
vcpu_count: 1,
mem_size_mib: 128,
}).unwrap();
let vm2 = store.create_vm(&CreateVmParams {
name: "vm-2".to_string(),
role: VmRole::Portal,
vcpu_count: 1,
mem_size_mib: 256,
}).unwrap();
assert_eq!(vm1.cid, 3);
assert_eq!(vm2.cid, 4);
}
#[test]
fn create_vm_duplicate_name_fails() {
let dir = tempfile::tempdir().unwrap();
let db_path = dir.path().join("test.db");
let store = SqliteStore::open_and_init(&db_path).unwrap();
let params = CreateVmParams {
name: "dup-vm".to_string(),
role: VmRole::Work,
vcpu_count: 1,
mem_size_mib: 128,
};
store.create_vm(¶ms).unwrap();
let result = store.create_vm(¶ms);
assert!(result.is_err());
}
#[test]
fn list_vms_returns_all() {
let dir = tempfile::tempdir().unwrap();
let db_path = dir.path().join("test.db");
let store = SqliteStore::open_and_init(&db_path).unwrap();
store.create_vm(&CreateVmParams {
name: "vm-a".to_string(),
role: VmRole::Work,
vcpu_count: 1,
mem_size_mib: 128,
}).unwrap();
store.create_vm(&CreateVmParams {
name: "vm-b".to_string(),
role: VmRole::Portal,
vcpu_count: 1,
mem_size_mib: 128,
}).unwrap();
let vms = store.list_vms(None, None).unwrap();
assert_eq!(vms.len(), 2);
}
#[test]
fn list_vms_filter_by_role() {
let dir = tempfile::tempdir().unwrap();
let db_path = dir.path().join("test.db");
let store = SqliteStore::open_and_init(&db_path).unwrap();
store.create_vm(&CreateVmParams {
name: "work-vm".to_string(),
role: VmRole::Work,
vcpu_count: 1,
mem_size_mib: 128,
}).unwrap();
store.create_vm(&CreateVmParams {
name: "portal-vm".to_string(),
role: VmRole::Portal,
vcpu_count: 1,
mem_size_mib: 128,
}).unwrap();
let work_vms = store.list_vms(Some("work"), None).unwrap();
assert_eq!(work_vms.len(), 1);
assert_eq!(work_vms[0].name, "work-vm");
}
#[test]
fn get_vm_by_name() {
let dir = tempfile::tempdir().unwrap();
let db_path = dir.path().join("test.db");
let store = SqliteStore::open_and_init(&db_path).unwrap();
let created = store.create_vm(&CreateVmParams {
name: "find-me".to_string(),
role: VmRole::Work,
vcpu_count: 1,
mem_size_mib: 128,
}).unwrap();
let found = store.get_vm("find-me").unwrap().unwrap();
assert_eq!(found.id, created.id);
assert_eq!(found.name, "find-me");
}
#[test]
fn get_vm_by_id() {
let dir = tempfile::tempdir().unwrap();
let db_path = dir.path().join("test.db");
let store = SqliteStore::open_and_init(&db_path).unwrap();
let created = store.create_vm(&CreateVmParams {
name: "id-test".to_string(),
role: VmRole::Work,
vcpu_count: 1,
mem_size_mib: 128,
}).unwrap();
let found = store.get_vm(&created.id).unwrap().unwrap();
assert_eq!(found.name, "id-test");
}
#[test]
fn get_vm_not_found() {
let dir = tempfile::tempdir().unwrap();
let db_path = dir.path().join("test.db");
let store = SqliteStore::open_and_init(&db_path).unwrap();
let result = store.get_vm("nonexistent").unwrap();
assert!(result.is_none());
}
#[test]
fn delete_vm_removes_record() {
let dir = tempfile::tempdir().unwrap();
let db_path = dir.path().join("test.db");
let store = SqliteStore::open_and_init(&db_path).unwrap();
store.create_vm(&CreateVmParams {
name: "delete-me".to_string(),
role: VmRole::Work,
vcpu_count: 1,
mem_size_mib: 128,
}).unwrap();
let deleted = store.delete_vm("delete-me").unwrap();
assert!(deleted);
let found = store.get_vm("delete-me").unwrap();
assert!(found.is_none());
}
#[test]
fn delete_vm_not_found_returns_false() {
let dir = tempfile::tempdir().unwrap();
let db_path = dir.path().join("test.db");
let store = SqliteStore::open_and_init(&db_path).unwrap();
let deleted = store.delete_vm("ghost").unwrap();
assert!(!deleted);
}
#[test]
fn delete_vm_by_id() {
let dir = tempfile::tempdir().unwrap();
let db_path = dir.path().join("test.db");
let store = SqliteStore::open_and_init(&db_path).unwrap();
let created = store.create_vm(&CreateVmParams {
name: "del-by-id".to_string(),
role: VmRole::Work,
vcpu_count: 1,
mem_size_mib: 128,
}).unwrap();
let deleted = store.delete_vm(&created.id).unwrap();
assert!(deleted);
}
#[test]
fn cid_reused_after_delete() {
let dir = tempfile::tempdir().unwrap();
let db_path = dir.path().join("test.db");
let store = SqliteStore::open_and_init(&db_path).unwrap();
let vm1 = store.create_vm(&CreateVmParams {
name: "first".to_string(),
role: VmRole::Work,
vcpu_count: 1,
mem_size_mib: 128,
}).unwrap();
assert_eq!(vm1.cid, 3);
store.delete_vm("first").unwrap();
// Next VM should get CID 3 again (lowest available)
// Implementation may choose CID 4 if using max+1 strategy.
// Both are acceptable -- the key invariant is uniqueness.
let vm2 = store.create_vm(&CreateVmParams {
name: "second".to_string(),
role: VmRole::Work,
vcpu_count: 1,
mem_size_mib: 128,
}).unwrap();
assert!(vm2.cid >= 3);
}
Step 3: Run tests to verify they fail
Run: cd /home/kazw/Work/WorkFort/nexus && cargo test -p nexus-lib store::sqlite::tests::create_vm
Expected: FAIL – create_vm does not exist on SqliteStore yet.
Step 4: Implement VmStore methods on SqliteStore
Update nexus/nexus-lib/src/store/traits.rs with the trait extension as described in Step 1.
Update nexus/nexus-lib/src/store/sqlite.rs with the implementation. Add these imports at the top:
use crate::vm::{CreateVmParams, Vm, VmState};
use uuid::Uuid;
Add the VM CRUD implementation to impl StateStore for SqliteStore:
fn create_vm(&self, params: &CreateVmParams) -> Result<Vm, StoreError> {
let conn = self.conn.lock().unwrap();
let id = Uuid::new_v4().to_string();
// Auto-assign CID: find the max CID in use, start from 3
let max_cid: Option<u32> = conn
.query_row("SELECT MAX(cid) FROM vms", [], |row| row.get(0))
.map_err(|e| StoreError::Query(format!("cannot query max CID: {e}")))?;
let cid = max_cid.map(|c| c + 1).unwrap_or(3);
conn.execute(
"INSERT INTO vms (id, name, role, state, cid, vcpu_count, mem_size_mib) \
VALUES (?1, ?2, ?3, 'created', ?4, ?5, ?6)",
rusqlite::params![
id,
params.name,
params.role.as_str(),
cid,
params.vcpu_count,
params.mem_size_mib,
],
)
.map_err(|e| {
if let rusqlite::Error::SqliteFailure(err, _) = &e {
if err.code == rusqlite::ErrorCode::ConstraintViolation {
return StoreError::Conflict(format!("VM name '{}' already exists", params.name));
}
}
StoreError::Query(format!("cannot insert VM: {e}"))
})?;
// Release the connection lock before calling get_vm, which locks it again.
drop(conn);
self.get_vm(&id)?
.ok_or_else(|| StoreError::Query("VM not found after insert".to_string()))
}
fn list_vms(&self, role: Option<&str>, state: Option<&str>) -> Result<Vec<Vm>, StoreError> {
let conn = self.conn.lock().unwrap();
let mut sql = "SELECT id, name, role, state, cid, vcpu_count, mem_size_mib, \
created_at, updated_at, started_at, stopped_at, pid, \
socket_path, uds_path, console_log_path, config_json \
FROM vms WHERE 1=1".to_string();
let mut params: Vec<&dyn rusqlite::types::ToSql> = Vec::new();
if let Some(r) = role {
sql.push_str(" AND role = ?");
params.push(r);
}
if let Some(s) = state {
sql.push_str(" AND state = ?");
params.push(s);
}
sql.push_str(" ORDER BY created_at DESC");
let mut stmt = conn.prepare(&sql)
.map_err(|e| StoreError::Query(format!("cannot prepare list query: {e}")))?;
let vms = stmt
.query_map(params.as_slice(), |row| Ok(row_to_vm(row)))
.map_err(|e| StoreError::Query(format!("cannot list VMs: {e}")))?
.collect::<Result<Vec<_>, _>>()
.map_err(|e| StoreError::Query(format!("cannot read VM row: {e}")))?;
Ok(vms)
}
fn get_vm(&self, name_or_id: &str) -> Result<Option<Vm>, StoreError> {
let conn = self.conn.lock().unwrap();
let mut stmt = conn
.prepare(
"SELECT id, name, role, state, cid, vcpu_count, mem_size_mib, \
created_at, updated_at, started_at, stopped_at, pid, \
socket_path, uds_path, console_log_path, config_json \
FROM vms WHERE id = ?1 OR name = ?1",
)
.map_err(|e| StoreError::Query(format!("cannot prepare get query: {e}")))?;
let mut rows = stmt
.query_map([name_or_id], |row| Ok(row_to_vm(row)))
.map_err(|e| StoreError::Query(format!("cannot get VM: {e}")))?;
match rows.next() {
Some(Ok(vm)) => Ok(Some(vm)),
Some(Err(e)) => Err(StoreError::Query(format!("cannot read VM row: {e}"))),
None => Ok(None),
}
}
fn delete_vm(&self, name_or_id: &str) -> Result<bool, StoreError> {
// Check if VM exists and is not running
if let Some(vm) = self.get_vm(name_or_id)? {
if vm.state == VmState::Running {
return Err(StoreError::Conflict(format!(
"cannot delete VM '{}': VM is running, stop it first",
vm.name
)));
}
} else {
return Ok(false);
}
let conn = self.conn.lock().unwrap();
let deleted = conn
.execute(
"DELETE FROM vms WHERE id = ?1 OR name = ?1",
[name_or_id],
)
.map_err(|e| StoreError::Query(format!("cannot delete VM: {e}")))?;
Ok(deleted > 0)
}
Add the row mapping helper function (outside the impl block, in the module):
/// Map a rusqlite row to a Vm struct.
/// NOTE: Uses unwrap() throughout -- will panic on invalid data.
/// Accepted for pre-alpha: CHECK constraints in the schema prevent invalid
/// values from being inserted through normal operations.
fn row_to_vm(row: &rusqlite::Row) -> Vm {
Vm {
id: row.get(0).unwrap(),
name: row.get(1).unwrap(),
role: row.get::<_, String>(2).unwrap().parse().unwrap(),
state: row.get::<_, String>(3).unwrap().parse().unwrap(),
cid: row.get(4).unwrap(),
vcpu_count: row.get(5).unwrap(),
mem_size_mib: row.get(6).unwrap(),
created_at: row.get(7).unwrap(),
updated_at: row.get(8).unwrap(),
started_at: row.get(9).unwrap(),
stopped_at: row.get(10).unwrap(),
pid: row.get(11).unwrap(),
socket_path: row.get(12).unwrap(),
uds_path: row.get(13).unwrap(),
console_log_path: row.get(14).unwrap(),
config_json: row.get(15).unwrap(),
}
}
Step 5: Update the MockStore in nexusd/src/api.rs tests
Add stub implementations of the new trait methods to MockStore and FailingStore in nexusd/src/api.rs:
fn create_vm(&self, _params: &CreateVmParams) -> Result<Vm, StoreError> {
unimplemented!()
}
fn list_vms(&self, _role: Option<&str>, _state: Option<&str>) -> Result<Vec<Vm>, StoreError> {
unimplemented!()
}
fn get_vm(&self, _name_or_id: &str) -> Result<Option<Vm>, StoreError> {
unimplemented!()
}
fn delete_vm(&self, _name_or_id: &str) -> Result<bool, StoreError> {
unimplemented!()
}
Add the import to the test module:
use nexus_lib::vm::{CreateVmParams, Vm};
Step 6: Run tests to verify they pass
Run: cd /home/kazw/Work/WorkFort/nexus && cargo test -p nexus-lib store::sqlite::tests
Expected: All tests PASS (existing + new VM tests).
Step 7: Run full workspace tests
Run: cd /home/kazw/Work/WorkFort/nexus && mise run test
Expected: All tests PASS.
Step 8: Commit
git add nexus/nexus-lib/src/store/traits.rs nexus/nexus-lib/src/store/sqlite.rs nexus/nexusd/src/api.rs
git commit -m "feat(nexus-lib): implement VM CRUD in SqliteStore with auto-assigned CIDs"
Task 5: REST API Endpoints for VMs
Files:
- Modify: nexus/nexusd/src/api.rs
This task adds four routes: POST /v1/vms, GET /v1/vms, GET /v1/vms/:id, DELETE /v1/vms/:id.
Step 1: Write the failing tests
Add to the tests module in nexus/nexusd/src/api.rs. The MockStore needs to be upgraded to support VM operations for these tests, so replace the mock-based test state with a helper that builds a real SqliteStore backed by a temporary file:
use nexus_lib::store::sqlite::SqliteStore;
fn test_state() -> Arc<AppState> {
let dir = tempfile::tempdir().unwrap();
let db_path = dir.path().join("test.db");
let store = SqliteStore::open_and_init(&db_path).unwrap();
// Leak the tempdir so it lives long enough
std::mem::forget(dir);
Arc::new(AppState {
store: Box::new(store),
})
}
#[tokio::test]
async fn create_vm_returns_201() {
let state = test_state();
let app = router(state);
let response = app
.oneshot(
Request::post("/v1/vms")
.header("content-type", "application/json")
.body(Body::from(r#"{"name": "test-vm"}"#))
.unwrap(),
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::CREATED);
let body = axum::body::to_bytes(response.into_body(), usize::MAX)
.await
.unwrap();
let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
assert_eq!(json["name"], "test-vm");
assert_eq!(json["state"], "created");
assert_eq!(json["role"], "work");
assert_eq!(json["cid"], 3);
}
#[tokio::test]
async fn create_vm_duplicate_returns_409() {
let state = test_state();
let app = router(state.clone());
// Create first
app.clone()
.oneshot(
Request::post("/v1/vms")
.header("content-type", "application/json")
.body(Body::from(r#"{"name": "dup"}"#))
.unwrap(),
)
.await
.unwrap();
// Create duplicate
let response = router(state)
.oneshot(
Request::post("/v1/vms")
.header("content-type", "application/json")
.body(Body::from(r#"{"name": "dup"}"#))
.unwrap(),
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::CONFLICT);
}
#[tokio::test]
async fn list_vms_returns_array() {
let state = test_state();
// Create a VM first
router(state.clone())
.oneshot(
Request::post("/v1/vms")
.header("content-type", "application/json")
.body(Body::from(r#"{"name": "list-me"}"#))
.unwrap(),
)
.await
.unwrap();
let response = router(state)
.oneshot(Request::get("/v1/vms").body(Body::empty()).unwrap())
.await
.unwrap();
assert_eq!(response.status(), StatusCode::OK);
let body = axum::body::to_bytes(response.into_body(), usize::MAX)
.await
.unwrap();
let json: Vec<serde_json::Value> = serde_json::from_slice(&body).unwrap();
assert_eq!(json.len(), 1);
assert_eq!(json[0]["name"], "list-me");
}
#[tokio::test]
async fn get_vm_returns_detail() {
let state = test_state();
router(state.clone())
.oneshot(
Request::post("/v1/vms")
.header("content-type", "application/json")
.body(Body::from(r#"{"name": "detail-vm"}"#))
.unwrap(),
)
.await
.unwrap();
let response = router(state)
.oneshot(
Request::get("/v1/vms/detail-vm")
.body(Body::empty())
.unwrap(),
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::OK);
let body = axum::body::to_bytes(response.into_body(), usize::MAX)
.await
.unwrap();
let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
assert_eq!(json["name"], "detail-vm");
}
#[tokio::test]
async fn get_vm_not_found_returns_404() {
let state = test_state();
let app = router(state);
let response = app
.oneshot(
Request::get("/v1/vms/nonexistent")
.body(Body::empty())
.unwrap(),
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::NOT_FOUND);
}
#[tokio::test]
async fn delete_vm_returns_204() {
let state = test_state();
router(state.clone())
.oneshot(
Request::post("/v1/vms")
.header("content-type", "application/json")
.body(Body::from(r#"{"name": "doomed"}"#))
.unwrap(),
)
.await
.unwrap();
let response = router(state)
.oneshot(
Request::delete("/v1/vms/doomed")
.body(Body::empty())
.unwrap(),
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::NO_CONTENT);
}
#[tokio::test]
async fn delete_vm_not_found_returns_404() {
let state = test_state();
let app = router(state);
let response = app
.oneshot(
Request::delete("/v1/vms/ghost")
.body(Body::empty())
.unwrap(),
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::NOT_FOUND);
}
Step 2: Run tests to verify they fail
Run: cd /home/kazw/Work/WorkFort/nexus && cargo test -p nexusd api::tests::create_vm
Expected: FAIL – routes don’t exist yet.
Step 3: Implement the API handlers
Update nexus/nexusd/src/api.rs. Add these imports:
use axum::{extract::Path, routing::post};
use nexus_lib::vm::{CreateVmParams, Vm};
use nexus_lib::store::traits::StoreError;
Add the handler functions:
async fn create_vm(
State(state): State<Arc<AppState>>,
Json(params): Json<CreateVmParams>,
) -> (StatusCode, Json<serde_json::Value>) {
match state.store.create_vm(¶ms) {
Ok(vm) => (StatusCode::CREATED, Json(serde_json::to_value(vm).unwrap())),
Err(StoreError::Conflict(msg)) => (
StatusCode::CONFLICT,
Json(serde_json::json!({"error": msg})),
),
Err(e) => (
StatusCode::INTERNAL_SERVER_ERROR,
Json(serde_json::json!({"error": e.to_string()})),
),
}
}
async fn list_vms(
State(state): State<Arc<AppState>>,
axum::extract::Query(query): axum::extract::Query<std::collections::HashMap<String, String>>,
) -> (StatusCode, Json<serde_json::Value>) {
let role = query.get("role").map(|s| s.as_str());
let vm_state = query.get("state").map(|s| s.as_str());
match state.store.list_vms(role, vm_state) {
Ok(vms) => (StatusCode::OK, Json(serde_json::to_value(vms).unwrap())),
Err(e) => (
StatusCode::INTERNAL_SERVER_ERROR,
Json(serde_json::json!({"error": e.to_string()})),
),
}
}
async fn get_vm(
State(state): State<Arc<AppState>>,
Path(name_or_id): Path<String>,
) -> (StatusCode, Json<serde_json::Value>) {
match state.store.get_vm(&name_or_id) {
Ok(Some(vm)) => (StatusCode::OK, Json(serde_json::to_value(vm).unwrap())),
Ok(None) => (
StatusCode::NOT_FOUND,
Json(serde_json::json!({"error": format!("VM '{}' not found", name_or_id)})),
),
Err(e) => (
StatusCode::INTERNAL_SERVER_ERROR,
Json(serde_json::json!({"error": e.to_string()})),
),
}
}
async fn delete_vm(
State(state): State<Arc<AppState>>,
Path(name_or_id): Path<String>,
) -> (StatusCode, Json<serde_json::Value>) {
match state.store.delete_vm(&name_or_id) {
Ok(true) => (StatusCode::NO_CONTENT, Json(serde_json::json!(null))),
Ok(false) => (
StatusCode::NOT_FOUND,
Json(serde_json::json!({"error": format!("VM '{}' not found", name_or_id)})),
),
Err(StoreError::Conflict(msg)) => (
StatusCode::CONFLICT,
Json(serde_json::json!({"error": msg})),
),
Err(e) => (
StatusCode::INTERNAL_SERVER_ERROR,
Json(serde_json::json!({"error": e.to_string()})),
),
}
}
Update the router() function to include the new routes:
pub fn router(state: Arc<AppState>) -> Router {
Router::new()
.route("/v1/health", get(health))
.route("/v1/vms", post(create_vm).get(list_vms))
.route("/v1/vms/{name_or_id}", get(get_vm).delete(delete_vm))
.with_state(state)
}
Step 4: Add dev-dependencies for the tests
Add to nexus/nexusd/Cargo.toml in [dev-dependencies]:
tempfile = "3"
Step 5: Run tests to verify they pass
Run: cd /home/kazw/Work/WorkFort/nexus && cargo test -p nexusd api::tests
Expected: All tests PASS.
Step 6: Commit
git add nexus/nexusd/src/api.rs nexus/nexusd/Cargo.toml
git commit -m "feat(nexusd): add REST API endpoints for VM CRUD (POST/GET/DELETE /v1/vms)"
Task 6: Add VM Methods to NexusClient
Files:
- Modify: nexus/nexus-lib/src/client.rs
Step 1: Write the client-side test
Add to the tests module in nexus/nexus-lib/src/client.rs:
#[test]
fn vm_response_deserializes() {
let json = r#"{"id":"abc","name":"test","role":"work","state":"created","cid":3,"vcpu_count":1,"mem_size_mib":128,"created_at":1000,"updated_at":1000}"#;
let vm: crate::vm::Vm = serde_json::from_str(json).unwrap();
assert_eq!(vm.name, "test");
assert_eq!(vm.cid, 3);
}
Step 2: Add VM methods to NexusClient
Add to NexusClient in nexus/nexus-lib/src/client.rs:
pub async fn create_vm(&self, params: &crate::vm::CreateVmParams) -> Result<crate::vm::Vm, ClientError> {
let url = format!("{}/v1/vms", self.base_url);
let resp = self.http.post(&url).json(params).send().await.map_err(|e| {
if e.is_connect() || e.is_timeout() {
ClientError::Connect(e.to_string())
} else {
ClientError::Api(e.to_string())
}
})?;
let status = resp.status();
if status == reqwest::StatusCode::CONFLICT {
let body: serde_json::Value = resp.json().await.map_err(|e| ClientError::Api(e.to_string()))?;
return Err(ClientError::Api(body["error"].as_str().unwrap_or("conflict").to_string()));
}
if !status.is_success() {
let body = resp.text().await.unwrap_or_default();
return Err(ClientError::Api(format!("unexpected status {status}: {body}")));
}
resp.json().await.map_err(|e| ClientError::Api(e.to_string()))
}
pub async fn list_vms(&self, role: Option<&str>, state: Option<&str>) -> Result<Vec<crate::vm::Vm>, ClientError> {
let mut url = format!("{}/v1/vms", self.base_url);
let mut params = Vec::new();
if let Some(r) = role { params.push(format!("role={r}")); }
if let Some(s) = state { params.push(format!("state={s}")); }
if !params.is_empty() {
url.push('?');
url.push_str(¶ms.join("&"));
}
let resp = self.http.get(&url).send().await.map_err(|e| {
if e.is_connect() || e.is_timeout() {
ClientError::Connect(e.to_string())
} else {
ClientError::Api(e.to_string())
}
})?;
let status = resp.status();
if !status.is_success() {
return Err(ClientError::Api(format!("unexpected status: {status}")));
}
resp.json().await.map_err(|e| ClientError::Api(e.to_string()))
}
pub async fn get_vm(&self, name_or_id: &str) -> Result<Option<crate::vm::Vm>, ClientError> {
let url = format!("{}/v1/vms/{name_or_id}", self.base_url);
let resp = self.http.get(&url).send().await.map_err(|e| {
if e.is_connect() || e.is_timeout() {
ClientError::Connect(e.to_string())
} else {
ClientError::Api(e.to_string())
}
})?;
if resp.status() == reqwest::StatusCode::NOT_FOUND {
return Ok(None);
}
if !resp.status().is_success() {
return Err(ClientError::Api(format!("unexpected status: {}", resp.status())));
}
resp.json().await.map(Some).map_err(|e| ClientError::Api(e.to_string()))
}
pub async fn delete_vm(&self, name_or_id: &str) -> Result<bool, ClientError> {
let url = format!("{}/v1/vms/{name_or_id}", self.base_url);
let resp = self.http.delete(&url).send().await.map_err(|e| {
if e.is_connect() || e.is_timeout() {
ClientError::Connect(e.to_string())
} else {
ClientError::Api(e.to_string())
}
})?;
match resp.status().as_u16() {
204 => Ok(true),
404 => Ok(false),
409 => {
let body: serde_json::Value = resp.json().await.map_err(|e| ClientError::Api(e.to_string()))?;
Err(ClientError::Api(body["error"].as_str().unwrap_or("conflict").to_string()))
}
other => Err(ClientError::Api(format!("unexpected status: {other}"))),
}
}
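For reference, a hedged usage sketch of the new methods (assumes NexusClient::new takes the daemon host:port string as in Task 7, and that ClientError is exported from nexus_lib::client; error handling beyond ? is omitted):

use nexus_lib::client::{ClientError, NexusClient};
use nexus_lib::vm::{CreateVmParams, VmRole};

async fn vm_roundtrip(daemon_addr: &str) -> Result<(), ClientError> {
    let client = NexusClient::new(daemon_addr);
    // Create, list, then delete a VM through the daemon's REST API.
    let vm = client
        .create_vm(&CreateVmParams {
            name: "demo-vm".to_string(),
            role: VmRole::Work,
            vcpu_count: 1,
            mem_size_mib: 128,
        })
        .await?;
    let vms = client.list_vms(None, None).await?;
    assert!(vms.iter().any(|v| v.id == vm.id));
    client.delete_vm(&vm.name).await?;
    Ok(())
}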
Step 3: Run tests to verify they pass
Run: cd /home/kazw/Work/WorkFort/nexus && cargo test -p nexus-lib client::tests
Expected: All tests PASS (existing + new).
Step 4: Commit
git add nexus/nexus-lib/src/client.rs
git commit -m "feat(nexus-lib): add VM CRUD methods to NexusClient"
Task 7: nexusctl vm Subcommand
Files:
- Modify: nexus/nexusctl/src/main.rs
- Modify: nexus/nexusctl/Cargo.toml
This task adds the vm subcommand to nexusctl with create, list, inspect, and delete actions.
Step 1: Add chrono dependency to nexusctl
Add to nexus/nexusctl/Cargo.toml:
[dependencies]
# ... existing deps ...
chrono = "0.4"
Step 2: Update the CLI with the vm subcommand
Update nexus/nexusctl/src/main.rs to add the vm subcommand:
use clap::{Parser, Subcommand};
use nexus_lib::client::NexusClient;
use nexus_lib::vm::{CreateVmParams, VmRole};
use std::path::PathBuf;
use std::process::ExitCode;
mod config;
// Exit codes per CLI spec
const EXIT_GENERAL_ERROR: u8 = 1;
const EXIT_DAEMON_UNREACHABLE: u8 = 3;
const EXIT_NOT_FOUND: u8 = 4;
const EXIT_CONFLICT: u8 = 5;
#[derive(Parser)]
#[command(
name = "nexusctl",
about = "WorkFort Nexus CLI (alias: nxc)",
version
)]
struct Cli {
/// Path to configuration file
/// [default: $XDG_CONFIG_HOME/nexusctl/config.yaml]
#[arg(long, global = true)]
config: Option<PathBuf>,
/// Daemon address (host:port)
#[arg(long, global = true)]
daemon: Option<String>,
#[command(subcommand)]
command: Commands,
}
#[derive(Subcommand)]
enum Commands {
/// Show daemon status
Status,
/// Print version information
Version,
/// Manage virtual machines
Vm {
#[command(subcommand)]
action: VmAction,
},
}
#[derive(Subcommand)]
enum VmAction {
/// List all VMs
List {
/// Filter by role (work, portal, service)
#[arg(long)]
role: Option<String>,
/// Filter by state (created, running, stopped, crashed, failed)
#[arg(long)]
state: Option<String>,
},
/// Create a new VM
Create {
/// VM name
name: String,
/// VM role: work, portal, service
#[arg(long, default_value = "work")]
role: String,
/// vCPU count
#[arg(long, default_value = "1")]
vcpu: u32,
/// Memory in MiB
#[arg(long, default_value = "128")]
mem: u32,
},
/// Show VM details
Inspect {
/// VM name or ID
name: String,
},
/// Delete a VM
Delete {
/// VM name or ID
name: String,
/// Skip confirmation
#[arg(short, long)]
yes: bool,
},
}
Update the main() function to dispatch VM commands:
#[tokio::main]
async fn main() -> ExitCode {
let cli = Cli::parse();
let config_path = cli.config.unwrap_or_else(config::default_config_path);
let cfg = config::load(&config_path);
let daemon_addr = cli.daemon.unwrap_or(cfg.daemon);
match cli.command {
Commands::Status => cmd_status(&daemon_addr).await,
Commands::Version => cmd_version(&daemon_addr).await,
Commands::Vm { action } => cmd_vm(&daemon_addr, action).await,
}
}
Implement the VM command handlers:
async fn cmd_vm(daemon_addr: &str, action: VmAction) -> ExitCode {
let client = NexusClient::new(daemon_addr);
match action {
VmAction::List { role, state } => {
match client.list_vms(role.as_deref(), state.as_deref()).await {
Ok(vms) => {
if vms.is_empty() {
println!("No VMs found.");
return ExitCode::SUCCESS;
}
// Print table header
println!(
"{:<20} {:<10} {:<10} {:<6} {:<8} {:<6}",
"NAME", "ROLE", "STATE", "VCPU", "MEM", "CID"
);
for vm in &vms {
println!(
"{:<20} {:<10} {:<10} {:<6} {:<8} {:<6}",
vm.name, vm.role, vm.state, vm.vcpu_count,
format!("{}M", vm.mem_size_mib), vm.cid,
);
}
ExitCode::SUCCESS
}
Err(e) if e.is_connect() => {
print_connect_error(daemon_addr);
ExitCode::from(EXIT_DAEMON_UNREACHABLE)
}
Err(e) => {
eprintln!("Error: {e}");
ExitCode::from(EXIT_GENERAL_ERROR)
}
}
}
VmAction::Create { name, role, vcpu, mem } => {
let role: VmRole = match role.parse() {
Ok(r) => r,
Err(e) => {
eprintln!("Error: {e}");
return ExitCode::from(EXIT_GENERAL_ERROR);
}
};
let params = CreateVmParams {
name: name.clone(),
role,
vcpu_count: vcpu,
mem_size_mib: mem,
};
match client.create_vm(¶ms).await {
Ok(vm) => {
println!("Created VM \"{}\" (state: {}, CID: {})", vm.name, vm.state, vm.cid);
println!("\n Inspect it: nexusctl vm inspect {}", vm.name);
ExitCode::SUCCESS
}
Err(e) if e.is_connect() => {
print_connect_error(daemon_addr);
ExitCode::from(EXIT_DAEMON_UNREACHABLE)
}
Err(e) => {
eprintln!("Error: cannot create VM \"{name}\"\n {e}");
ExitCode::from(EXIT_CONFLICT)
}
}
}
VmAction::Inspect { name } => {
match client.get_vm(&name).await {
Ok(Some(vm)) => {
println!("Name: {}", vm.name);
println!("ID: {}", vm.id);
println!("Role: {}", vm.role);
println!("State: {}", vm.state);
println!("CID: {}", vm.cid);
println!("vCPUs: {}", vm.vcpu_count);
println!("Memory: {} MiB", vm.mem_size_mib);
println!("Created: {}", format_timestamp(vm.created_at));
if let Some(ts) = vm.started_at {
println!("Started: {}", format_timestamp(ts));
}
if let Some(ts) = vm.stopped_at {
println!("Stopped: {}", format_timestamp(ts));
}
ExitCode::SUCCESS
}
Ok(None) => {
eprintln!("Error: VM \"{}\" not found", name);
ExitCode::from(EXIT_NOT_FOUND)
}
Err(e) if e.is_connect() => {
print_connect_error(daemon_addr);
ExitCode::from(EXIT_DAEMON_UNREACHABLE)
}
Err(e) => {
eprintln!("Error: {e}");
ExitCode::from(EXIT_GENERAL_ERROR)
}
}
}
VmAction::Delete { name, yes } => {
if !yes {
eprintln!(
"Error: refusing to delete VM without confirmation\n \
Run with --yes to skip confirmation: nexusctl vm delete {} --yes",
name
);
return ExitCode::from(EXIT_GENERAL_ERROR);
}
match client.delete_vm(&name).await {
Ok(true) => {
println!("Deleted VM \"{}\"", name);
ExitCode::SUCCESS
}
Ok(false) => {
eprintln!("Error: VM \"{}\" not found", name);
ExitCode::from(EXIT_NOT_FOUND)
}
Err(e) if e.is_connect() => {
print_connect_error(daemon_addr);
ExitCode::from(EXIT_DAEMON_UNREACHABLE)
}
Err(e) => {
eprintln!("Error: cannot delete VM \"{}\"\n {e}", name);
ExitCode::from(EXIT_CONFLICT)
}
}
}
}
}
fn print_connect_error(daemon_addr: &str) {
eprintln!(
"Error: cannot connect to Nexus daemon at {}\n \
The daemon does not appear to be running.\n\n \
Start it: systemctl --user start nexus.service",
daemon_addr
);
}
fn format_timestamp(epoch_secs: i64) -> String {
chrono::DateTime::from_timestamp(epoch_secs, 0)
.map(|dt| dt.format("%Y-%m-%d %H:%M:%S UTC").to_string())
.unwrap_or_else(|| epoch_secs.to_string())
}
Factor print_connect_error out from the existing cmd_status to avoid duplication, and update cmd_status to use it too.
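A rough sketch of what the refactored cmd_status could look like; fetch_status below is a hypothetical stand-in for whatever request cmd_status already makes (it is not a real function in this codebase), and only the connect-error branch actually changes:

// Placeholder for the existing status request -- not a real function here.
async fn fetch_status(_daemon_addr: &str) -> Result<String, nexus_lib::client::ClientError> {
    unimplemented!("stands in for the call cmd_status already makes")
}

async fn cmd_status(daemon_addr: &str) -> ExitCode {
    match fetch_status(daemon_addr).await {
        Ok(text) => {
            println!("{text}");
            ExitCode::SUCCESS
        }
        Err(e) if e.is_connect() => {
            // Shared helper replaces the previously duplicated eprintln! block.
            print_connect_error(daemon_addr);
            ExitCode::from(EXIT_DAEMON_UNREACHABLE)
        }
        Err(e) => {
            eprintln!("Error: {e}");
            ExitCode::from(EXIT_GENERAL_ERROR)
        }
    }
}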
Step 3: Verify build
Run: cd /home/kazw/Work/WorkFort/nexus && mise run build
Expected: Compiles with no errors.
Step 4: Verify nexusctl vm --help
Run: cd /home/kazw/Work/WorkFort/nexus && cargo run -p nexusctl -- vm --help
Expected:
Manage virtual machines
Usage: nexusctl vm <COMMAND>
Commands:
list List all VMs
create Create a new VM
inspect Show VM details
delete Delete a VM
help Print this message or the help of the given subcommand(s)
Options:
-h, --help Print help
Step 5: Commit
git add nexus/nexusctl/src/main.rs nexus/nexusctl/Cargo.toml
git commit -m "feat(nexusctl): add vm subcommand with create, list, inspect, delete actions"
Task 8: Integration Tests for VM CRUD
Files:
- Modify: nexus/nexusd/tests/daemon.rs
- Modify: nexus/nexusctl/tests/cli.rs
Step 1: Add VM API integration test to nexusd
Add to nexus/nexusd/tests/daemon.rs:
#[tokio::test]
async fn vm_crud_lifecycle() {
let daemon = TestDaemon::start_with_binary(
env!("CARGO_BIN_EXE_nexusd").into(),
)
.await;
let client = reqwest::Client::new();
let base = format!("http://{}", daemon.addr);
// Create a VM
let resp = client
.post(format!("{base}/v1/vms"))
.json(&serde_json::json!({"name": "int-test-vm"}))
.send()
.await
.unwrap();
assert_eq!(resp.status(), 201);
let vm: serde_json::Value = resp.json().await.unwrap();
assert_eq!(vm["name"], "int-test-vm");
assert_eq!(vm["state"], "created");
assert_eq!(vm["cid"], 3);
// List VMs
let resp = client.get(format!("{base}/v1/vms")).send().await.unwrap();
assert_eq!(resp.status(), 200);
let vms: Vec<serde_json::Value> = resp.json().await.unwrap();
assert_eq!(vms.len(), 1);
// Get VM by name
let resp = client
.get(format!("{base}/v1/vms/int-test-vm"))
.send()
.await
.unwrap();
assert_eq!(resp.status(), 200);
let detail: serde_json::Value = resp.json().await.unwrap();
assert_eq!(detail["name"], "int-test-vm");
// Get VM not found
let resp = client
.get(format!("{base}/v1/vms/nonexistent"))
.send()
.await
.unwrap();
assert_eq!(resp.status(), 404);
// Delete VM
let resp = client
.delete(format!("{base}/v1/vms/int-test-vm"))
.send()
.await
.unwrap();
assert_eq!(resp.status(), 204);
// Verify deleted
let resp = client
.get(format!("{base}/v1/vms/int-test-vm"))
.send()
.await
.unwrap();
assert_eq!(resp.status(), 404);
// List should be empty
let resp = client.get(format!("{base}/v1/vms")).send().await.unwrap();
let vms: Vec<serde_json::Value> = resp.json().await.unwrap();
assert_eq!(vms.len(), 0);
}
Step 2: Add VM CLI integration test to nexusctl
Add to nexus/nexusctl/tests/cli.rs:
#[tokio::test]
async fn vm_create_list_inspect_delete() {
let daemon = TestDaemon::start().await;
// Create a VM
let output = Command::new(env!("CARGO_BIN_EXE_nexusctl"))
.args(["--daemon", &daemon.addr, "vm", "create", "cli-test-vm"])
.output()
.expect("failed to run nexusctl");
let stdout = String::from_utf8_lossy(&output.stdout);
assert!(output.status.success(), "create failed: {stdout}");
assert!(stdout.contains("Created VM"), "expected create message: {stdout}");
// List VMs
let output = Command::new(env!("CARGO_BIN_EXE_nexusctl"))
.args(["--daemon", &daemon.addr, "vm", "list"])
.output()
.expect("failed to run nexusctl");
let stdout = String::from_utf8_lossy(&output.stdout);
assert!(output.status.success(), "list failed: {stdout}");
assert!(stdout.contains("cli-test-vm"), "expected VM in list: {stdout}");
assert!(stdout.contains("NAME"), "expected table header: {stdout}");
// Inspect VM
let output = Command::new(env!("CARGO_BIN_EXE_nexusctl"))
.args(["--daemon", &daemon.addr, "vm", "inspect", "cli-test-vm"])
.output()
.expect("failed to run nexusctl");
let stdout = String::from_utf8_lossy(&output.stdout);
assert!(output.status.success(), "inspect failed: {stdout}");
assert!(stdout.contains("cli-test-vm"), "expected VM name: {stdout}");
assert!(stdout.contains("State:"), "expected state field: {stdout}");
assert!(stdout.contains("CID:"), "expected CID field: {stdout}");
// Delete VM
let output = Command::new(env!("CARGO_BIN_EXE_nexusctl"))
.args(["--daemon", &daemon.addr, "vm", "delete", "cli-test-vm", "--yes"])
.output()
.expect("failed to run nexusctl");
let stdout = String::from_utf8_lossy(&output.stdout);
assert!(output.status.success(), "delete failed: {stdout}");
assert!(stdout.contains("Deleted VM"), "expected delete message: {stdout}");
// Verify deleted
let output = Command::new(env!("CARGO_BIN_EXE_nexusctl"))
.args(["--daemon", &daemon.addr, "vm", "inspect", "cli-test-vm"])
.output()
.expect("failed to run nexusctl");
assert!(!output.status.success(), "expected inspect to fail after delete");
}
Step 3: Run integration tests
Run: cd /home/kazw/Work/WorkFort/nexus && mise run test
Expected: All tests PASS.
Step 4: Commit
git add nexus/nexusd/tests/daemon.rs nexus/nexusctl/tests/cli.rs
git commit -m "test: add integration tests for VM CRUD API and CLI"
Task 9: Workspace-Wide Verification
Step 1: Full build
Run: cd /home/kazw/Work/WorkFort/nexus && mise run build
Expected: Compiles with no errors and no warnings.
Step 2: Full test suite
Run: cd /home/kazw/Work/WorkFort/nexus && mise run test
Expected: All tests pass.
Step 3: Clippy
Run: cd /home/kazw/Work/WorkFort/nexus && mise run clippy
Expected: No warnings.
Step 4: End-to-end smoke test
Terminal 1:
cd /home/kazw/Work/WorkFort/nexus && mise run run
Terminal 2:
# Create VMs
cd /home/kazw/Work/WorkFort/nexus
mise run run:nexusctl -- --daemon 127.0.0.1:9600 vm create my-vm
# Expected: Created VM "my-vm" (state: created, CID: 3)
mise run run:nexusctl -- --daemon 127.0.0.1:9600 vm create portal-1 --role portal --vcpu 2 --mem 512
# Expected: Created VM "portal-1" (state: created, CID: 4)
# List VMs
mise run run:nexusctl -- --daemon 127.0.0.1:9600 vm list
# Expected: Table with my-vm and portal-1
# List with filter
mise run run:nexusctl -- --daemon 127.0.0.1:9600 vm list --role work
# Expected: Only my-vm
# Inspect VM
mise run run:nexusctl -- --daemon 127.0.0.1:9600 vm inspect my-vm
# Expected: Full detail output
# Direct API access
curl -s http://127.0.0.1:9600/v1/vms | python -m json.tool
# Expected: JSON array with 2 VMs
curl -s http://127.0.0.1:9600/v1/vms/my-vm | python -m json.tool
# Expected: JSON object with my-vm details
# Delete VM
mise run run:nexusctl -- --daemon 127.0.0.1:9600 vm delete my-vm --yes
# Expected: Deleted VM "my-vm"
# Delete without --yes
mise run run:nexusctl -- --daemon 127.0.0.1:9600 vm delete portal-1
# Expected: Error about confirmation
# Inspect the deleted VM
mise run run:nexusctl -- --daemon 127.0.0.1:9600 vm inspect my-vm
# Expected: Error: VM "my-vm" not found (exit code 4)
Kill the daemon (Ctrl-C in terminal 1).
Step 5: Verify the schema migration
# The old database (version 1) should be auto-recreated
sqlite3 ~/.local/state/nexus/nexus.db ".tables"
# Expected: schema_meta settings vms
sqlite3 ~/.local/state/nexus/nexus.db "SELECT value FROM schema_meta WHERE key='version'"
# Expected: 2
Step 6: Commit (if any final adjustments were needed)
git add nexus/
git commit -m "chore: final adjustments from step 4 verification"
Verification Checklist
- mise run build succeeds with no warnings
- mise run test – all tests pass
- mise run clippy – no warnings
- POST /v1/vms with {"name":"test"} returns 201 with VM JSON including an auto-assigned CID
- POST /v1/vms with a duplicate name returns 409
- GET /v1/vms returns a JSON array of all VMs
- GET /v1/vms?role=work filters by role
- GET /v1/vms/:name returns VM detail
- GET /v1/vms/:nonexistent returns 404
- DELETE /v1/vms/:name returns 204 and removes the record
- DELETE /v1/vms/:nonexistent returns 404
- nexusctl vm create my-vm creates a VM and prints confirmation with the CID
- nexusctl vm create my-vm --role portal --vcpu 4 --mem 1024 overrides the defaults
- nexusctl vm list renders a table with NAME, ROLE, STATE, VCPU, MEM, CID columns
- nexusctl vm list --role work filters the list
- nexusctl vm inspect my-vm shows full detail (name, ID, role, state, CID, vCPUs, memory, timestamps)
- nexusctl vm delete my-vm --yes deletes the VM
- nexusctl vm delete my-vm (without --yes) refuses with a helpful error
- Schema version is 2, and the vms table exists alongside schema_meta and settings
- CIDs auto-increment starting from 3
- All VMs created in this step have state created (no Firecracker yet)
- Daemon restart with an old schema version triggers the pre-alpha migration (delete + recreate)