
# Technical Feasibility Assessment Conventions

How to perform a technical feasibility assessment on a Nexus implementation plan. Covers purpose, scope, document structure, severity levels, and verification methodology.


## 1. Purpose of Technical Feasibility Assessments

A technical feasibility assessment verifies that an implementation plan is correct against the actual codebase and external APIs before work begins. The assessor reads every source file the plan touches, checks every library API call against docs.rs, and confirms every dependency exists at the specified version. It catches errors that are expensive to discover during implementation: wrong API signatures, missing dependencies, incompatible crate versions, schema mismatches, and stale assumptions about existing code.

### When to write one

Write a technical feasibility assessment when a plan:

- Introduces new crate dependencies
- Uses external library APIs for the first time
- Makes schema changes or data model additions
- Proposes architectural patterns not yet established in the codebase
- Touches system-level interfaces (vsock, btrfs, nftables, systemd)

Do NOT write one for:

- Plans that only add straightforward CRUD using already-established patterns
- Documentation-only changes
- mise task additions with no code impact

### Where it fits in the workflow

A feasibility assessment is effectively a code review for plans. A researcher produces it and hands it off to the planner, who incorporates the findings before the plan is committed.

Plan drafted → Researcher writes feasibility assessment → Planner revises plan → Plan committed (single commit)

The assessment is a temporary handoff document between the researcher and the planner. It is never committed to version control, never referenced from plans or other documentation, and is deleted once its findings are incorporated. The plan is the only artifact that gets committed — as a single commit with all corrections already applied. This avoids churn and confusion from intermediate revisions.

### What it is NOT

A feasibility assessment is not a design review or a product requirements check. It does not evaluate whether the plan solves the right problem or whether the feature is worth building. It verifies only that the plan’s code and configuration will work as written — that it will compile, pass tests, and interact correctly with its dependencies.


## 2. Assessment Scope

Every assessment MUST check the following categories. If a category does not apply to the plan, state “Not applicable” rather than omitting it.

### Dependency verification

- Verify each new crate exists on crates.io at the specified version
- Check for active RustSec advisories (`cargo audit` or manual lookup)
- Confirm the crate is maintained (last publish date, open issue activity)
- Note if a newer major version exists and whether it matters

### API compatibility

- Verify every library API call in the plan against docs.rs for the exact version
- Check function signatures, return types, and trait bounds
- Confirm feature flags enable the APIs used (e.g., reqwest's `json` feature for `.json()`; see the sketch after this list)
- Verify derive macros and attribute syntax
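
To make the feature-flag check concrete, here is a minimal sketch (the types and URL are hypothetical): `Response::json()` only exists when reqwest's `json` feature is enabled, so a plan that calls it without declaring the feature will not compile.

```rust
use serde::Deserialize;

// Assumes reqwest = { version = "0.12", features = ["json"] } and
// serde = { version = "1", features = ["derive"] }. Without the `json`
// feature, `.json()` does not exist and this fails to compile.
#[derive(Deserialize)]
struct Release {
    name: String,
}

async fn latest_release(client: &reqwest::Client) -> Result<Release, reqwest::Error> {
    client
        .get("https://example.invalid/release.json") // hypothetical endpoint
        .send()
        .await?
        .json::<Release>()
        .await
}
```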

### Architecture

- Evaluate where new code is placed (correct crate, correct module)
- Check for unnecessary dependencies between crates
- Assess whether the approach introduces coupling that will cause problems later
- Verify the plan's module structure matches existing conventions

### Testing

- Check that test assertions match the code being tested
- Verify test infrastructure (ports, temp dirs, binary paths) won't conflict (see the sketch after this list)
- Confirm dev-dependencies are declared for test utilities used
- Assess whether TDD flow is practical for the proposed tasks
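
For instance, a minimal sketch of the temp-directory pattern (assuming `tempfile` is declared as a dev-dependency; the file name is hypothetical):

```rust
// Tests write under a per-test temp directory rather than real XDG paths,
// so parallel test binaries never collide on shared state.
#[test]
fn writes_to_temp_dir() {
    let dir = tempfile::tempdir().expect("create temp dir");
    let db_path = dir.path().join("nexus.db"); // hypothetical file name
    std::fs::write(&db_path, b"").expect("create file");
    assert!(db_path.exists());
    // `dir` and its contents are removed when it is dropped
}
```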

### Build system

- Verify Cargo.toml changes are syntactically correct
- Check that workspace member additions are reflected in the root Cargo.toml
- Confirm mise task updates match the actual commands needed (see the sketch after this list)
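
As an illustration, a hypothetical mise task definition (the task name and command are illustrative); the assessor checks that the `run` command matches what the plan's code actually requires:

```toml
# .mise.toml — hypothetical task; verify the command matches the plan
[tasks.test]
description = "Run the full workspace test suite"
run = "cargo test --workspace"
```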

### Internal API verification

- Read every source file the plan modifies or calls into
- Verify that functions, structs, enums, and traits the plan references actually exist
- Check that function signatures (parameters, return types, generics) match what the plan assumes
- Confirm struct fields, enum variants, and trait methods are accurate
- Verify module paths (use statements) resolve correctly
- Check that the plan's "before" state matches the current codebase — plans can go stale if earlier steps changed things

### Schema and data consistency

- Verify schema version numbers are correct (e.g., incrementing from the current value; see the sketch after this list)
- Check that new tables/columns don't conflict with existing schema
- Confirm SQL in migration code matches the struct definitions that read from those tables
- Verify foreign key references point to tables that exist
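
A hypothetical sketch of the failure mode behind the version check: a migration gated on a constant the plan forgot to bump is silently skipped. (The real constant lives in `nexus-lib/src/store/schema.rs`; the names and values here are illustrative.)

```rust
// If the plan adds a migration but leaves SCHEMA_VERSION at its old value,
// the gate below never fires and the migration silently does not run.
pub const SCHEMA_VERSION: u32 = 3; // must be bumped from the previous value (2)

fn migrate(current: u32) {
    if current < SCHEMA_VERSION {
        // run migrations from `current` up to SCHEMA_VERSION
    }
}
```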

## 3. Document Structure

Every assessment MUST follow this structure. Use the exact heading names.

```markdown
# Step N: Technical Feasibility Assessment

**Date:** YYYY-MM-DD
**Status:** PASS | PASS WITH NOTES | FAIL

## Summary

<One paragraph. State the overall verdict, then list the critical findings
as a numbered list. The reader should be able to stop here and know whether
to proceed.>

## Dependency Verification

<Table of all new and modified dependencies. One row per crate.>

| Crate | Version in Plan | Latest Available | Status | Notes |
|-------|----------------|-----------------|--------|-------|
| `foo` | 1.2 | 1.2.5 | OK | ... |

## API Compatibility

<One subsection per API surface area. Each gets a verdict line.>

### <Library or API Name>
**Verdict: OK | ISSUE FOUND.**
<Details of what was checked and what was found.>

## Architecture

<Assessment of structural decisions. Subsection per concern.>

## Testing

<Assessment of test strategy and infrastructure.>

## Build System

<Assessment of Cargo.toml, mise, and workspace changes.>

## Risks and Recommendations

<Numbered list. Each item has a severity tag, risk description,
and recommendation.>

### 1. <Short title> (Severity: HIGH | MEDIUM | LOW)
**Risk:** <What goes wrong.>
**Recommendation:** <What to do about it.>

## Verdict

**<Proceed | Proceed with modifications | Do not proceed.>**

<Restate the action items as a numbered list with severity tags:>
1. **MUST FIX:** ...
2. **SHOULD FIX:** ...
3. **NICE TO HAVE:** ...
```

### Status values

| Status | Meaning |
|--------|---------|
| PASS | No issues found. Implement as written. |
| PASS WITH NOTES | Issues found but none are blocking if addressed. Implement with listed modifications. |
| FAIL | Blocking issues found. Plan needs significant revision before implementation. |

## 4. Issue Categorization

### Severity levels

| Level | Criteria | Action Required |
|-------|----------|-----------------|
| MUST FIX | Will not compile, will crash at runtime, introduces a security vulnerability, or produces incorrect behavior. The plan cannot be implemented as written. | Plan MUST be updated before implementation begins. Implementer MUST NOT skip these. |
| SHOULD FIX | Adds unnecessary dependencies, uses a suboptimal pattern, introduces technical debt, or has a moderate risk of causing problems. The plan will work but is not ideal. | Plan SHOULD be updated. Implementer may proceed at their discretion but should document the decision. |
| NICE TO HAVE | Minor improvement opportunity. Slightly cleaner API usage, small simplification, or cosmetic consistency. No functional impact. | At the implementer's discretion. Do not delay implementation for these. |

### Severity decision guide

Use this to categorize issues consistently:

| Symptom | Severity |
|---------|----------|
| Missing dependency — code won't compile | MUST FIX |
| Wrong API signature — code won't compile | MUST FIX |
| Internal API mismatch — plan calls function with wrong args | MUST FIX |
| Incorrect feature flag — method doesn't exist | MUST FIX |
| Schema version not incremented — migration silently skipped | MUST FIX |
| Port conflict — tests fail nondeterministically | MUST FIX |
| Security advisory on a dependency | MUST FIX |
| Schema change breaks existing data | MUST FIX |
| Unnecessary dependency added | SHOULD FIX |
| Suboptimal error handling approach | SHOULD FIX |
| Missing test coverage for edge case | SHOULD FIX |
| Race condition with low probability | SHOULD FIX |
| Could use a simpler API call | NICE TO HAVE |
| Macro available instead of manual impl | NICE TO HAVE |
| Slightly better variable naming | NICE TO HAVE |

## 5. Verification Methodology

The core activity of a technical feasibility assessment is checking the plan’s claims against reality. Every code snippet in the plan makes implicit claims: this struct has these fields, this function has this signature, this crate exports this type. The assessor’s job is to verify each claim.

### Read actual source

Never trust the plan’s description of existing code. Always read the actual files:

```sh
# Check what a struct actually looks like
cat nexus/nexus-lib/src/config.rs

# Check what dependencies are actually declared
cat nexus/nexus-lib/Cargo.toml

# Check what the current schema version is
grep SCHEMA_VERSION nexus/nexus-lib/src/store/schema.rs
```

### Verify internal APIs

When the plan calls functions or uses types from the project’s own crates, verify them against the actual source:

```sh
# Plan says: Store::open(&config.db_path).await?
# Verify Store::open actually takes a &Path and returns Result<Store>
grep -A5 "pub async fn open" nexus/nexus-lib/src/store/sqlite.rs

# Plan says: schema::SCHEMA_VERSION is 2
# Verify the current value
grep "pub const SCHEMA_VERSION" nexus/nexus-lib/src/store/schema.rs

# Plan says: VmRecord has a `cid` field
# Verify the struct definition
grep -A20 "pub struct VmRecord" nexus/nexus-lib/src/store/sqlite.rs
```

This is the most common source of MUST FIX findings. Plans go stale when earlier implementation steps change function signatures, add parameters, or rename types.

Check docs.rs

For every API call in the plan, verify against docs.rs for the exact crate version specified in the plan:

```text
https://docs.rs/reqwest/0.12/reqwest/
https://docs.rs/clap/4/clap/
https://docs.rs/rusqlite/latest/rusqlite/
```

Check:

- Function exists on the type
- Parameter types match
- Return type matches what the plan expects
- Required feature flags are enabled

### Test compilation

When an API usage is ambiguous or underdocumented, write a minimal test program:

```rust
// scratch/src/main.rs — verify the API exists; do not commit
use reqwest::Client;

fn main() {
    let _ = Client::builder()
        .timeout(std::time::Duration::from_secs(5))
        .build();
}
```

```sh
cargo check --manifest-path scratch/Cargo.toml
```

This is the gold standard for API verification. If it compiles, the API is correct.
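
A minimal manifest for the scratch crate might look like this (the crate is throwaway; pin whatever versions and features the plan specifies):

```toml
# scratch/Cargo.toml — throwaway crate for API verification; never committed
[package]
name = "scratch"
version = "0.0.0"
edition = "2021"

[dependencies]
# Pin the exact version the plan specifies so the check is meaningful
reqwest = "0.12"
```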

### Check RustSec advisories

```sh
# If cargo-audit is installed:
cargo audit

# Manual check:
# https://rustsec.org/advisories/
```

### Verify crate availability

```sh
# Check the latest published version (cargo search does not show publish
# dates; check https://crates.io/crates/<crate_name> for those)
cargo search <crate_name>
```

## 6. Assessment Checklist

Copy this checklist into your working notes when performing an assessment. Every item must be checked or marked N/A.

```markdown
### Pre-Assessment
- [ ] Read the plan document fully before starting
- [ ] Read all source files the plan modifies
- [ ] Read all source files the plan references

### Dependencies
- [ ] Every new crate verified on crates.io (exists, version available)
- [ ] No active RustSec advisories on new or existing dependencies
- [ ] Feature flags match API usage (e.g., `json` feature for reqwest)
- [ ] No unnecessary dependencies added

### API Verification
- [ ] Every library API call checked against docs.rs
- [ ] Function signatures match (parameters, return types)
- [ ] Trait bounds satisfied
- [ ] Derive macros and attributes use correct syntax

### Internal APIs and Existing Code
- [ ] Plan's description of existing code matches actual source
- [ ] Struct fields, function signatures, module paths are accurate
- [ ] `use` statements resolve to real modules and types
- [ ] Schema version incremented correctly from current value
- [ ] New tables/columns consistent with struct definitions
- [ ] SQL migration code matches Rust types that read/write the tables

### Architecture
- [ ] New code placed in the correct crate and module
- [ ] No unnecessary cross-crate coupling introduced
- [ ] Pattern consistent with existing codebase conventions

### Testing
- [ ] Test dev-dependencies declared (tokio, tempfile, etc.)
- [ ] No port conflicts between test binaries
- [ ] Temp directories used (not real XDG paths) in tests
- [ ] Binary paths correct for cross-package integration tests

### Build System
- [ ] Cargo.toml changes syntactically correct
- [ ] Workspace members updated if new crate added
- [ ] mise tasks reference correct commands

### Final
- [ ] All issues categorized with severity level
- [ ] MUST FIX items have clear remediation steps
- [ ] Verdict section summarizes all action items
```

## 7. Relationship to Plans

### File naming

While it exists, the assessment lives next to the plan it evaluates:

```text
src/design/
  step-05-btrfs-workspaces.md       ← plan (committed)
  step-05-feasibility.md            ← assessment (temporary, never committed)
```

Naming convention: `step-NN-feasibility.md`

### How findings feed back into plans

The planner incorporates all findings directly into the plan before committing:

| Severity | Plan Update Required |
|----------|----------------------|
| MUST FIX | Yes. The plan MUST be revised to incorporate the fix before implementation. The implementer should not have to figure out the fix themselves. |
| SHOULD FIX | Recommended. If the plan author disagrees, they should document why in the plan. |
| NICE TO HAVE | No. The implementer decides during implementation. |

### No references to assessments

Plans, SUMMARY.md, and all other committed documentation MUST NOT contain links or references to individual feasibility assessments. The assessment is a transient working document — its value is captured entirely through the corrections it produces in the plan. The committed plan should read as if it had been correct from the start.

### Lifecycle

  1. Researcher writes the assessment (`step-NN-feasibility.md`)
  2. Planner revises the plan to incorporate all findings
  3. Plan is committed as a single commit with all corrections applied
  4. Assessment file is deleted

Feasibility assessments are excluded from version control via `.gitignore` (`src/design/step-*-feasibility.md`). They are temporary handoff documents between the researcher and planner, not permanent artifacts.

### Reassessment

If the plan is substantially revised after assessment (new dependencies, architecture changes), the researcher should reassess the affected sections. Minor corrections (fixing a Cargo.toml snippet, changing a port number) do not require reassessment.


## 8. Examples

### MUST FIX — Missing dependency (will not compile)

````markdown
### 1. Missing tokio dev-dependency in nexus-lib (Severity: HIGH — WILL NOT COMPILE)
**Risk:** The plan's Task 1 adds `#[tokio::test]` to
`nexus-lib/src/client.rs`, but the Cargo.toml snippet does NOT include
tokio as a dev-dependency. This will cause:
`error[E0433]: failed to resolve: use of undeclared crate or module 'tokio'`
**Recommendation:** Add to `nexus-lib/Cargo.toml`:
    ```toml
    [dev-dependencies]
    tokio = { version = "1", features = ["rt", "macros"] }
    ```
````

Why this is MUST FIX: the code literally will not compile without it. There is no workaround.
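
For context, here is the kind of test that triggers this failure (a minimal sketch assuming tokio's `rt` and `macros` features):

```rust
// #[tokio::test] expands to code that builds a tokio runtime, so tokio must
// be declared as a dev-dependency with the `rt` and `macros` features.
#[tokio::test]
async fn completes_spawned_task() {
    let value = tokio::spawn(async { 42 }).await.expect("task completes");
    assert_eq!(value, 42);
}
```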

### MUST FIX — Port conflict (tests fail nondeterministically)

```markdown
### 2. Port Conflict in Integration Tests (Severity: HIGH)
**Risk:** Both nexusd and nexusctl integration tests bind port 9600.
`cargo test --workspace` runs test binaries in parallel, causing
`AddrInUse` errors.
**Recommendation:** Use port 9601 for the nexusctl integration test,
or adopt the `free_port()` pattern from the test harness conventions.
```

Why this is MUST FIX: tests pass sometimes and fail other times depending on execution order. This will waste the implementer’s time debugging phantom failures.
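
For reference, a minimal sketch of the `free_port()` pattern (the actual harness helper may differ): bind port 0 so the OS assigns an unused port, then hand that port to the test server.

```rust
use std::net::TcpListener;

// Binding port 0 makes the OS pick a free ephemeral port. The listener is
// dropped on return, leaving a small window before the test server rebinds
// the port, which is acceptable for test isolation.
fn free_port() -> u16 {
    let listener = TcpListener::bind("127.0.0.1:0").expect("bind ephemeral port");
    listener.local_addr().expect("read local addr").port()
}
```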

### SHOULD FIX — Unnecessary dependency

```markdown
### 3. serde_json in nexus-lib (Severity: LOW)
**Risk:** The plan adds `serde_json = "1"` to nexus-lib's dependencies,
but `client.rs` never directly uses serde_json — reqwest's `json` feature
provides it transitively.
**Recommendation:** Remove `serde_json = "1"` from the nexus-lib
Cargo.toml snippet.
```

Why this is SHOULD FIX, not MUST FIX: the code will compile and work correctly either way. The issue is unnecessary dependency bloat, not correctness.

### NICE TO HAVE — Simpler API usage

```markdown
### 4. CARGO_BIN_EXE for nexusctl (Severity: LOW)
**Risk:** None, just a simplification opportunity.
**Recommendation:** Use `env!("CARGO_BIN_EXE_nexusctl")` for the
nexusctl binary in integration tests instead of the manual
`target_dir()` approach. Only the cross-package nexusd binary needs
the manual path.
```

Why this is NICE TO HAVE: the plan’s approach works. The alternative is marginally cleaner but has no functional impact.
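
To illustrate, a sketch of the `env!`-based approach (the `--version` argument and test body are hypothetical):

```rust
// Cargo sets CARGO_BIN_EXE_<name> for integration tests in the package that
// defines the binary; it expands at compile time to the built binary's path.
#[test]
fn nexusctl_reports_version() {
    let output = std::process::Command::new(env!("CARGO_BIN_EXE_nexusctl"))
        .arg("--version")
        .output()
        .expect("run nexusctl");
    assert!(output.status.success());
}
```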