We continue with the series.
In the previous chapters we filled the pantry: JSON persistence to disk, interchangeable repositories, and we passed quality control with behavior-driven tests and documented technical debt. With the ingredients preserved, verified, and the kitchen equipped, it is time to come up to the surface and plate the dish: build the layer that the user actually touches.
Because here the question isn’t “does it work?”, but rather “would anyone other than you want to use it?”. A CLI can execute all its commands correctly and still be a terrible experience: cryptic error messages, ambiguous arguments, output impossible to parse with scripts. That is not a CLI, it’s a prototype that escaped from main.rs.
In this installment we focus on the basic plating: transforming the raw ingredients (domain + use cases + persistence) into something that presents well to the diner, whether they eat with their eyes (terminal) or send the kitchen assistant to pick up the order (scripts and pipelines).
In cooking, plating doesn’t change the flavor of the dish, but it determines whether the diner trusts it before tasting it. With a CLI, exactly the same thing happens: the first impression is the --help, and if that output is confusing, the user won’t try anything else.

Reference code:
- src/tasks/adapters/cli/cli_command.rs
- src/tasks/adapters/cli/printer.rs
- src/tasks/adapters/cli/errors.rs
- src/main.rs
The philosophy: a CLI is a contract, not a script
Before jumping into the code, it’s worth stopping to think about what makes a CLI good. And the short answer is: predictability.
A good CLI is a contract. The user (human or machine) expects that:
- Invalid arguments fail before executing anything.
- Errors are printed to stderr, not mixed with useful output on stdout.
- The exit code is 0 on success and != 0 on error.
- The output has a stable shape that doesn’t change depending on the developer’s mood.
This might seem obvious, but the number of CLIs that violate these rules is staggering. We are going to respect all four from the first commit. And to achieve this, we rely on clap.
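These four rules can be observed in any well-behaved Unix tool. As a quick sanity check, here is a sketch that inspects the contract of ls (a stand-in, since our todo binary doesn’t exist yet): on failure, the exit code is non-zero, stdout stays clean, and the error goes to stderr.

```rust
use std::process::Command;

fn main() {
    // Run `ls` on a path that doesn't exist: a well-behaved CLI
    // fails loudly, but on the right channels.
    let out = Command::new("ls")
        .arg("/definitely-not-a-real-path-xyz")
        .output()
        .expect("failed to spawn ls");

    assert!(!out.status.success()); // exit code != 0 on error
    assert!(out.stdout.is_empty()); // stdout stays clean for useful output
    assert!(!out.stderr.is_empty()); // the error message goes to stderr
    println!("contract holds");
}
```

The same three assertions are exactly what a script or CI pipeline implicitly relies on when it wraps our todo binary.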
Why clap?
In the Rust ecosystem, there are several options to build a CLI interface: clap, argh, or even parsing std::env::args() by hand. The choice isn’t trivial because the argument parser is, literally, the first line of code that touches user input.
We choose clap for specific reasons:
- derive API with procedural macros. clap offers two APIs: builder (imperative, you build the parser by chaining methods) and derive (declarative, you define structs/enums and macros generate the parser). The derive API turns argument definitions into type definitions, and that fits perfectly with the philosophy of this series: if the type compiles, the argument is valid.
- ValueEnum for closed enums. With a single derivation, clap generates validation, autocompletion, and help for enums. Neither argh nor manual parsing offer this without extra code.
- Native support for FromStr. Any type that implements FromStr can be used directly as an argument. This means that Uuid, PathBuf, u64, or any custom type are parsed at the input boundary, not inside the business logic.
- Mature ecosystem. clap has been in the Rust ecosystem for over a decade. It has extensive documentation, active support, and extensions like clap_complete for shell autocompletion generation.
In our Cargo.toml, the dependency looks like this:
[dependencies]
clap = { version = "4.5.59", features = ["derive"] }
The derive feature flag is required to enable the procedural macros (Parser, Subcommand, ValueEnum). Without it, you would only have the builder API available.
The root struct: Cli and the Parser trait
The entry point for all parsing is the Cli struct, decorated with #[derive(Parser)]:
use clap::{Parser, Subcommand, ValueEnum};
use uuid::Uuid;
#[derive(Debug, Parser)]
#[command(name = "todo", version, about = "Manage tasks from the terminal")]
pub struct Cli {
#[arg(long, value_enum, global = true, default_value_t = OutputFormat::Table)]
pub output: OutputFormat,
#[command(subcommand)]
pub command: TodoCommand,
}
There’s quite a bit of condensed information here. Let’s break it down.
#[derive(Parser)]
Parser is the clap trait that turns a struct into a full-featured argument parser. When you write Cli::parse(), the macro generates all the code necessary to read std::env::args(), validate each argument against the declared rules, and build a Cli instance with typed values. If anything fails, clap prints a formatted error message to stderr and terminates the process with code 2 (standard convention for usage errors). All of this happens before your code executes.
#[command(name = "todo", version, about = "...")]
This attribute configures the binary’s metadata:
- name = "todo": the name that appears in the help (Usage: todo <COMMAND>).
- version: automatically extracts the version from Cargo.toml (0.1.0), without having to duplicate it.
- about: the description that appears in --help.
An important detail: version without an explicit value internally uses the env!("CARGO_PKG_VERSION") macro. This guarantees that the version shown by --version always matches the one in Cargo.toml. Zero manual maintenance.
The global --output flag
#[arg(long, value_enum, global = true, default_value_t = OutputFormat::Table)]
pub output: OutputFormat,
This field deserves an attribute-by-attribute explanation:
- long: generates the --output flag. Without short, there is no abbreviated -o. A deliberate decision: -o is ambiguous in many CLIs (it can mean “output file”, “output format”, “overwrite”…). Better to be explicit.
- value_enum: indicates that the possible values are derived from the OutputFormat enum. clap automatically generates the [table, json] list in the help.
- global = true: this is the key attribute. Without it, --output would only be available before the subcommand (todo --output json list). With global = true, the flag propagates to all subcommands and can be placed anywhere: todo --output json list and todo list --output json are equivalent. This avoids duplicating the --output definition in every TodoCommand variant.
- default_value_t = OutputFormat::Table: if the user doesn’t provide --output, Table is assumed. Notice that we use default_value_t (with _t for “typed”) and not default_value. The difference is that default_value expects a &str and parses it at runtime, while default_value_t directly accepts the typed value. If the field type and the default value don’t match, the compiler rejects it. Yet another case where we move validation to the compiler.
#[command(subcommand)]
#[command(subcommand)]
pub command: TodoCommand,
This attribute indicates that the command field is not a simple argument but a subcommand that expands into its own argument structure. The TodoCommand type must derive Subcommand.
Subcommands as enum variants
Subcommands are defined as variants of TodoCommand:
#[derive(Debug, Clone, PartialEq, Eq, Subcommand)]
pub enum TodoCommand {
Add {
title: String,
},
List {
#[arg(long, value_enum, default_value_t = StatusArg::All)]
status: StatusArg,
},
Done {
id: Uuid,
},
Todo {
id: Uuid,
},
Delete {
id: Uuid,
},
}
Notice: this enum is the contract specification. There is no separate documentation that can go out of date. The enum signature defines exactly what each command accepts. If tomorrow you add an Edit { id: Uuid, title: String } subcommand, the help and validation will update automatically.
Design decisions in subcommands
There are intentional technical decisions in this enum:
Add { title: String } — positional argument without a flag:
title doesn’t have #[arg(long)], so clap treats it as a positional argument. The user writes todo add "Buy milk" instead of todo add --title "Buy milk". It’s more natural for a quick creation command. Content validation (empty title, title too long) doesn’t happen here: that is the domain’s responsibility. The CLI only guarantees that a String arrives; the domain decides if it’s valid.
List { status: StatusArg } — optional flag with a default value:
status is a flag (--status) with default_value_t = StatusArg::All. This allows three forms of use:
- todo list → lists all tasks (default All).
- todo list --status todo → only pending tasks.
- todo list --status done → only completed tasks.
Why a flag and not positional? Because an optional filter is semantically a modifier of behavior, not the subject of the command. Positionals communicate “what”, flags communicate “how”.
Done { id: Uuid }, Todo { id: Uuid }, Delete { id: Uuid } — UUID as a native type:
This is subtle but fundamental. clap knows how to parse Uuid because the uuid crate implements the FromStr trait:
impl FromStr for Uuid {
type Err = uuid::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> { /* ... */ }
}
clap internally calls Uuid::from_str() during parsing. If the user writes todo done abc123, clap fails immediately with an error message that includes the invalid value and the expected format, before any service is instantiated or the persistence file is opened. There is no need to validate the UUID inside the use case, where we should already be talking about business, not parsing.
This is the exact same “validate at the outermost boundary” philosophy we apply with ValueEnum, but extended to any type that implements FromStr. If tomorrow you had a custom ProjectId type, you’d only need to implement FromStr for it and use it directly in the subcommand enum.
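To make this concrete, here is a sketch of such a hypothetical ProjectId. The name, the PRJ- prefix, and the numeric payload are all invented for illustration; the point is that once FromStr exists, clap can use the type directly in a derive struct, calling exactly this parse path during argument parsing.

```rust
use std::str::FromStr;

// Hypothetical ID type: "PRJ-<number>". Not part of the real project.
#[derive(Debug, PartialEq)]
struct ProjectId(u32);

impl FromStr for ProjectId {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // Accept only the "PRJ-<u32>" shape; anything else is rejected
        // at the boundary, before business logic ever runs.
        s.strip_prefix("PRJ-")
            .and_then(|n| n.parse::<u32>().ok())
            .map(ProjectId)
            .ok_or_else(|| format!("invalid project id: {s}"))
    }
}

fn main() {
    assert_eq!("PRJ-42".parse::<ProjectId>(), Ok(ProjectId(42)));
    assert!("42".parse::<ProjectId>().is_err()); // missing prefix
    println!("ok");
}
```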
ValueEnum: closed enums validated by clap
Both StatusArg and OutputFormat are enums with #[derive(ValueEnum)]:
#[derive(Debug, Clone, Copy, PartialEq, Eq, ValueEnum)]
pub enum OutputFormat {
Table,
Json,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, ValueEnum)]
pub enum StatusArg {
All,
Todo,
Done,
}
ValueEnum tells clap: “the only valid values are the variants of this enum”. Internally, clap generates an implementation that converts the name of each variant to kebab-case by default (although for single-word variants like Table, Json, All, Todo, Done, the result is simply lowercase: table, json, all, todo, done).
What do we gain over receiving a String and doing manual match?
- Automatic validation. If the user writes --status doing, clap rejects the value with a clear message before any business logic is executed. The message includes the list of possible values.
- Autocompletion and --help for free. clap automatically generates [possible values: all, todo, done] in the help. If you add an InProgress variant in the future, it appears in --help without touching anything else.
- Zero error-message maintenance. There is no hand-written eprintln!("Invalid status: {}", s) that could drift out of sync when you modify the enum.
- Compile-time exhaustiveness. By using match on a ValueEnum, the compiler warns you if you forget to cover a variant. With String, that match would have to include a catch-all _ => arm that silences future errors.
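To appreciate what the derivation saves, here is a hand-written sketch of roughly what #[derive(ValueEnum)] provides for StatusArg. This is illustrative only, not clap’s actual generated code; the from_cli_value name is invented.

```rust
#[derive(Debug, PartialEq)]
enum StatusArg {
    All,
    Todo,
    Done,
}

impl StatusArg {
    // What the derive gives you for free: value-to-variant mapping
    // plus an error message listing the valid values.
    fn from_cli_value(s: &str) -> Result<Self, String> {
        match s {
            "all" => Ok(StatusArg::All),
            "todo" => Ok(StatusArg::Todo),
            "done" => Ok(StatusArg::Done),
            other => Err(format!(
                "invalid value '{other}' [possible values: all, todo, done]"
            )),
        }
    }
}

fn main() {
    assert_eq!(StatusArg::from_cli_value("done"), Ok(StatusArg::Done));
    assert!(StatusArg::from_cli_value("doing").is_err());
    println!("ok");
}
```

Every arm of this match is code you would have to keep aligned with the enum by hand; with ValueEnum, adding a variant updates all of it automatically.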
Furthermore, notice that both enums derive Copy. They are tiny: a fieldless enum like these is stored in a single byte. Passing them by value is cheaper than passing a reference, and lets you move them around and match on them without worrying about borrows or lifetimes.
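Both claims are easy to verify. A sketch using a local copy of the enum:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum OutputFormat {
    Table,
    Json,
}

fn main() {
    // A fieldless enum with a handful of variants occupies one byte.
    assert_eq!(std::mem::size_of::<OutputFormat>(), 1);

    // Copy: assignment is a bitwise copy; the original stays usable.
    let a = OutputFormat::Json;
    let b = a;
    assert_eq!(a, b);
    println!("ok");
}
```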
From CLI to use case: the From conversion
Now we arrive at a key design point that connects the CLI layer with the application layer without coupling them. Let’s remember: in our hexagonal architecture, StatusArg belongs to the CLI adapter and FilterTask / ListTasksCommand belong to the application layer. They are concepts from the same semantic universe, but from different layers.
The translation between both worlds is resolved with a From implementation:
impl From<StatusArg> for ListTasksCommand {
fn from(value: StatusArg) -> Self {
Self::new(status_command_to_filter_task(value))
}
}
pub fn status_command_to_filter_task(command: StatusArg) -> FilterTask {
match command {
StatusArg::All => FilterTask::All,
StatusArg::Todo => FilterTask::Todo,
StatusArg::Done => FilterTask::Done,
}
}
Why not use FilterTask directly as a CLI argument? Because you would be breaking dependency inversion. If cli_command.rs used FilterTask directly in the TodoCommand enum, the CLI adapter would be importing types from the application layer inside its parsing definition. That means a change in the application layer (adding FilterTask::Overdue, renaming FilterTask to TaskFilter) would break the parser.
With From, the dependency flows in the right direction: the CLI adapter knows the application types only at the conversion point, not in the user interface definition. If tomorrow the application layer adds a FilterTask::Overdue filter, the CLI doesn’t break: it simply wouldn’t offer that option until someone exposes it as a StatusArg variant.
In main.rs, the conversion is invoked explicitly via ListTasksCommand::from():
TodoCommand::List { status } => {
let list_service = ListTasksService::new(repository);
let tasks: Vec<Task> = list_service.execute(ListTasksCommand::from(status))?;
print_tasks(&tasks, cli.output)
}
ListTasksCommand::from(status) translates from StatusArg (CLI type) to ListTasksCommand (application type) in a single call. Clean, explicit, and traceable.
Contract tests: try_parse_from as a verification tool
The cli_command.rs module includes a battery of tests that deserves analysis. They are not unit tests for business logic: they are contract tests that verify the public interface of the CLI behaves as promised.
The key piece is Cli::try_parse_from(). Unlike Cli::parse() (which terminates the process if it fails), try_parse_from returns a Result that we can inspect in tests:
#[test]
fn parses_add_command() {
let cli = Cli::try_parse_from(["todo", "add", "Buy milk"])
.expect("cli should parse add");
assert_eq!(cli.output, OutputFormat::Table);
assert_eq!(
cli.command,
TodoCommand::Add { title: "Buy milk".to_string() }
);
}
This test verifies three things simultaneously: (1) the add subcommand parses correctly, (2) the positional title is captured as a String, and (3) the default output format is Table. Notice that the first element of the array ("todo") simulates the binary name, which clap consumes but doesn’t process as an argument.
#[test]
fn parses_list_command_with_default_status() {
let cli = Cli::try_parse_from(["todo", "list"])
.expect("cli should parse list");
assert_eq!(cli.command, TodoCommand::List { status: StatusArg::All });
}
#[test]
fn parses_list_command_with_explicit_status() {
let cli = Cli::try_parse_from(["todo", "list", "--status", "done"])
.expect("cli should parse list with status");
assert_eq!(cli.command, TodoCommand::List { status: StatusArg::Done });
}
These two tests document the behavior of default_value_t: without --status, All is assumed; with --status done, Done is parsed.
#[test]
fn parses_done_command_with_uuid() {
let id = Uuid::new_v4();
let cli = Cli::try_parse_from(["todo", "done", &id.to_string()])
.expect("cli should parse done");
assert_eq!(cli.command, TodoCommand::Done { id });
}
This test generates a random v4 UUID, converts it to a string, passes it as an argument, and verifies that the parsed Uuid matches the original. It is proof that the clap + uuid::FromStr integration works correctly in both directions.
#[test]
fn parses_global_output_flag() {
let cli = Cli::try_parse_from(["todo", "--output", "json", "list"])
.expect("cli should parse global output");
assert_eq!(cli.output, OutputFormat::Json);
assert_eq!(cli.command, TodoCommand::List { status: StatusArg::All });
}
This test verifies the behavior of global = true: the --output json flag is placed before the subcommand and is still correctly associated with the output field of the Cli struct.
#[test]
fn rejects_invalid_uuid_for_done_command() {
let parsed = Cli::try_parse_from(["todo", "done", "not-a-uuid"]);
assert!(parsed.is_err());
}
And the most revealing one: testing that the CLI correctly rejects malformed input. We are not validating business logic; we are ensuring that the input boundary filters out garbage before it reaches the inside of the system. This is pure contract testing.
You can see the full test suite in the cli_command.rs test module.
CLI layer errors: closing the chain
Each layer of the architecture has its own error type. We saw this in detail in the previous post, and the CLI layer closes that chain as the final link:
use crate::tasks::application::errors::ApplicationError;
use thiserror::Error;
pub type CliResult<T> = Result<T, CliError>;
#[derive(Debug, Error)]
pub enum CliError {
#[error(transparent)]
Application(#[from] ApplicationError),
#[error(transparent)]
Serializer(#[from] serde_json::Error),
}
CliError has exactly two variants, and that is no coincidence. They are the only two things that can fail in the CLI layer once argument parsing (which clap handles internally) has succeeded:
- Application: any error coming from the use cases. Thanks to #[from], the ? operator automatically converts an ApplicationError into CliError::Application. That ApplicationError can in turn contain a DomainError or a RepoError, each with its specific message.
- Serializer: serde_json errors when serializing output. This only happens in --output json mode. The #[from] allows a serde_json::Error to be converted automatically to CliError::Serializer via ?.
The #[error(transparent)] attribute delegates message formatting to the inner error. This means that if the domain produces a DomainError::EmptyTitle, the end user sees exactly that message: "task title cannot be empty". No unnecessary wrapping layers, no generic prefixes like "CLI error: Application error: Domain error: ...".
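Conceptually, #[error(transparent)] amounts to a Display implementation that forwards straight to the inner error. A hand-rolled sketch of that behavior, using plain std::fmt instead of thiserror (the variants are simplified stand-ins for the real types):

```rust
use std::fmt;

#[derive(Debug)]
enum DomainError {
    EmptyTitle,
}

impl fmt::Display for DomainError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            DomainError::EmptyTitle => write!(f, "task title cannot be empty"),
        }
    }
}

#[derive(Debug)]
enum CliError {
    Application(DomainError),
}

impl fmt::Display for CliError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            // "Transparent": no prefix, no wrapping text — the inner
            // error's message passes through untouched.
            CliError::Application(inner) => write!(f, "{inner}"),
        }
    }
}

fn main() {
    let err = CliError::Application(DomainError::EmptyTitle);
    assert_eq!(err.to_string(), "task title cannot be empty");
    println!("ok");
}
```

thiserror generates this forwarding (plus the source() chain) for you; the sketch just shows why the user never sees a "CLI error:" prefix.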
The full error chain looks like this:
CliError
├── ApplicationError
│ ├── DomainError (EmptyTitle, TitleTooLong, TaskNotFound, InvalidStatusTransition)
│ └── RepoError (I/O errors, persistence serialization)
└── serde_json::Error (JSON output serialization errors)
Each layer captures the errors of the layer below with #[from], and the ? operator cleanly propagates them upwards. There are no unwrap()s, no panic!()s, no hand-crafted strings. The CliResult<T> type alias simplifies the signatures of all CLI adapter functions.
Dual output: the printer module
Many CLIs print pretty text for humans, and then someone tries to automate them and discovers there is no stable output contract. Parsing with grep and awk works until someone changes a space, a capital letter, or a column width, and everything breaks.
Here we solved that by design with the printer module, which completely separates presentation logic from business logic. The printer receives already processed data (a &Task, a &[Task], or a bool for delete) and formats them according to the OutputFormat chosen by the user.
pub fn print_task(task: &Task, output: OutputFormat) -> CliResult<()> {
match output {
OutputFormat::Json => {
println!("{}", serde_json::to_string(task)?);
}
OutputFormat::Table => {
print_tasks_table(std::slice::from_ref(task));
}
}
Ok(())
}
pub fn print_tasks(tasks: &[Task], output: OutputFormat) -> CliResult<()> {
match output {
OutputFormat::Json => {
println!("{}", serde_json::to_string(tasks)?);
}
OutputFormat::Table => {
print_tasks_table(tasks);
}
}
Ok(())
}
Two formats, a single function per operation:
- table: quick reading in the terminal for humans, with dynamically aligned columns.
- json: a stable payload for scripts, CI/CD pipelines, or integration with tools like jq.
Usage example:
cargo run -- list
cargo run -- --output json list --status done
std::slice::from_ref: zero-allocation reuse
A technical detail worth noting: in print_task, when the operation returns a single task (add, done, todo), instead of duplicating the formatting logic or creating a temporary vec![task], we use std::slice::from_ref(task).
This standard library function converts a &T reference into a single-element slice &[T]. It is zero-cost: there’s no heap allocation, no copying, no Vec. Just pointer reinterpretation. This allows us to reuse print_tasks_table for all cases without any performance penalty.
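A minimal illustration of the trick, independent of the printer:

```rust
// Any function that accepts a slice works for both the single-item
// and the many-item case.
fn total(values: &[u32]) -> u32 {
    values.iter().sum()
}

fn main() {
    let single = 7u32;
    // &T -> &[T] of length 1: no Vec, no heap allocation, no copy.
    let as_slice: &[u32] = std::slice::from_ref(&single);
    assert_eq!(as_slice.len(), 1);
    assert_eq!(total(as_slice), 7);
    println!("ok");
}
```

This is exactly how print_task reuses print_tasks_table without duplicating the formatting logic.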
Dynamic table with calculated widths
The print_tasks_table function calculates column widths based on the actual content:
fn print_tasks_table(tasks: &[Task]) {
let id_header = "ID";
let status_header = "STATUS";
let title_header = "TITLE";
let id_width = tasks
.iter()
.map(|task| task.task_id().to_string().len())
.max()
.unwrap_or(0)
.max(id_header.len());
let status_width = tasks
.iter()
.map(|task| status_label(task).len())
.max()
.unwrap_or(0)
.max(status_header.len());
let title_width = tasks
.iter()
.map(|task| task.title().len())
.max()
.unwrap_or(0)
.max(title_header.len());
println!(
"| {:<id_width$} | {:<status_width$} | {:<title_width$} |",
id_header, status_header, title_header
);
println!(
"|-{:-<id_width$}-|-{:-<status_width$}-|-{:-<title_width$}-|",
"", "", ""
);
for task in tasks {
let id = task.task_id().to_string();
println!(
"| {:<id_width$} | {:<status_width$} | {:<title_width$} |",
id, status_label(task), task.title()
);
}
}
fn status_label(task: &Task) -> &'static str {
match task.status() {
TaskStatus::Todo => "TODO",
TaskStatus::Done => "DONE",
}
}
The pattern for each column is the same: iterate through all tasks, take the maximum length of the field, and compare it with the header length using .max(header.len()). This way, the width of each column is never smaller than its header, but expands if any row requires it.
There are no hardcoded fixed widths. A v4 UUID is always 36 characters, but if tomorrow you changed to a shorter (or longer) ID, the table would adapt automatically. The {:<id_width$} formatters use Rust’s dynamic width syntax ($ indicates that the width value comes from a variable, not a literal).
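The dynamic-width syntax in isolation:

```rust
fn main() {
    let width = 8;
    // `width$` reads the pad width from the variable `width` in scope;
    // `:<` left-aligns the value within that width.
    let cell = format!("| {:<width$} |", "TODO");
    assert_eq!(cell, "| TODO     |");
    println!("{cell}");
}
```

The same mechanism drives every row of print_tasks_table, with the widths computed from the data instead of hardcoded.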
Why not use a crate like prettytable-rs or tabled? Because for three columns with simple data, the manual implementation is 30 lines with no extra dependencies. Adding an entire crate for this would have been over-engineering. If the table grew to 10 columns with colors and borders, the decision would be different.
The status_label helper function
Look at the return type of status_label: &'static str. It is a reference to a string literal embedded in the binary. There is no allocation, no enum to String conversion. It is the most efficient pattern for mapping enums to fixed text labels.
Interesting case: delete and visible idempotency
delete is the most interesting subcommand from a contract design perspective. What happens if you try to delete a task that doesn’t exist? There are two schools of thought:
- Throw an error (404-style): “the task does not exist, you are doing something wrong”.
- Idempotent response: “there was nothing to delete, but it is not an error”.
We chose the second one, but with explicit visibility. The reason lies in how this command is consumed in practice:
- A human who runs todo delete <id> twice probably made a mistake the second time, but doesn’t want to see a stack trace because of it.
- A cleanup script iterating over a list of IDs to delete shouldn’t break because one was already deleted in a previous run.
The principle is the same one followed by rm -f in Unix: it doesn’t fail if the file doesn’t exist, but it doesn’t hide it either.
In the repository, the delete_task.rs use case returns a bool:
impl<R: TaskRepository> DeleteTaskUseCase for DeleteTaskService<R> {
fn execute(&mut self, cmd: DeleteTaskCommand) -> ApplicationResult<bool> {
let task_id: Uuid = cmd.task_id;
Ok(self.repo.delete(task_id)?)
}
}And the TaskRepository trait defines delete as:
fn delete(&mut self, id: Uuid) -> RepoResult<bool>;
The bool travels from the repository, passes through the use case without transformation, and reaches the printer, where it translates into a visible response:
pub fn print_delete(id: String, deleted: bool, output: OutputFormat) -> CliResult<()> {
let message = if deleted {
format!("deleted {id}")
} else {
format!("task {id} not found")
};
match output {
OutputFormat::Json => {
let payload = DeleteOutput { id, deleted, message };
println!("{}", serde_json::to_string(&payload)?);
}
OutputFormat::Table => {
let result = if deleted { "DELETED" } else { "NOT_FOUND" };
println!("| RESULT | MESSAGE |");
println!("|-----------|--------------------|");
println!("| {result:<9} | {message} |");
}
}
Ok(())
}
#[derive(Debug, Serialize)]
struct DeleteOutput {
id: String,
deleted: bool,
message: String,
}
The DeleteOutput struct serializes a payload with three explicit fields to JSON. A script can run jq .deleted and make decisions without parsing text. In table mode, the human sees DELETED or NOT_FOUND at a glance.
Notice that in both cases the exit code is 0 (success). The process didn’t fail, there simply was nothing to delete. If you wanted to differentiate these cases in a pipeline, you’d inspect the deleted field of the JSON, not the exit code. That is a clean contract.
Orchestration in main.rs: just wiring, zero decisions
The main.rs is the acid test for the entire architecture. If you’ve designed the layers well, the entry point should be boring. Let’s see:
pub fn main() {
if let Err(error) = run() {
eprintln!("{error}");
std::process::exit(1);
}
}
main() calls run(). If run() fails, it prints the error to stderr (not stdout) and exits with code 1. Three lines. This is the standard Unix contract that any script, CI/CD pipeline, or shell wrapper expects.
Notice that main() does not return Result. We could have used fn main() -> Result<(), CliError>, but that delegates error formatting to the error’s Debug implementation (via the Termination trait), which produces less polished messages. With if let Err, we have full control over how the error is shown to the user.
The run() function is where all the orchestration lives:
fn run() -> CliResult<()> {
let cli = Cli::parse();
let repository = JsonFileTaskRepository::new()
.map_err(ApplicationError::Repository)?;
match cli.command {
TodoCommand::Add { title } => {
let mut add_service = AddTaskService::new(repository);
let task: Task = add_service.execute(AddTaskCommand::new(title))?;
print_task(&task, cli.output)
}
TodoCommand::List { status } => {
let list_service = ListTasksService::new(repository);
let tasks: Vec<Task> = list_service.execute(ListTasksCommand::from(status))?;
print_tasks(&tasks, cli.output)
}
TodoCommand::Done { id } => {
let mut mark_task_done_service = MarkTaskDoneService::new(repository);
let task = mark_task_done_service.execute(MarkTaskDoneCommand::new(id))?;
print_task(&task, cli.output)
}
TodoCommand::Todo { id } => {
let mut mark_task_todo_service = MarkTaskTodoService::new(repository);
let task = mark_task_todo_service.execute(MarkTaskTodoCommand::new(id))?;
print_task(&task, cli.output)
}
TodoCommand::Delete { id } => {
let mut delete_service = DeleteTaskService::new(repository);
let deleted = delete_service.execute(DeleteTaskCommand::new(id))?;
print_delete(id.to_string(), deleted, cli.output)
}
}
}
run() does exactly three things and nothing else:
- Parses the command (Cli::parse()). If arguments are invalid, clap terminates the process here with an error message and code 2. run() never gets to execute.
- Instantiates the repository (JsonFileTaskRepository::new()). This is the only place where the concrete persistence adapter is decided. The .map_err(ApplicationError::Repository) converts the RepoError into an ApplicationError, which ? propagates as CliError::Application.
- Dispatches to the corresponding use case and formats the output. Each arm of the match follows the same pattern: instantiate service -> execute with Command -> format output.
There are no business rules. There are no validations. There are no data transformations. Just wiring. This uniformity isn’t accidental. It is a direct consequence of three design decisions we made in previous posts:
- The Command pattern in the use cases guarantees that every service has a uniform interface (execute(Command) -> Result<T>).
- Dependency inversion with TaskRepository allows injecting the concrete repository without run() knowing persistence details.
- The printer module separates formatting from orchestration.
The docstring as specification
A detail worth mentioning: main.rs includes an extensive docstring that documents the complete CLI contract as a comment on the main function:
/// CLI Contract v0.1
/// - add <title>
/// - Input: title: String (required)
/// - Rules: title must not be empty/blank
/// - Success output: created task summary (id, title, status)
/// - Error output: validation error when title is empty
/// - list [--status <all|todo|done>]
/// - Input: optional status flag
/// - Default: all
/// - Success output: one line per task with status + id + title
/// - Error output: invalid status value (argument parsing)
/// ...
This isn’t by chance. It is a versioned contract (v0.1) that documents inputs, rules, expected outputs, and possible errors for each command. It is a practice I’d recommend for any CLI: before writing the code, document the contract. And do it in the same file where the orchestration lives, so the contract is always visible when someone opens the entry point.
Structure of the CLI module within the architecture
It is worth seeing where all this lives in the project’s directory tree:
src/tasks/adapters/cli/
├── mod.rs # Submodule declarations
├── cli_command.rs # Cli struct, TodoCommand enum, ValueEnum enums, tests
├── printer.rs # Formatting functions: print_task, print_tasks, print_delete
└── errors.rs       # CliError, CliResult
The mod.rs is minimal:
pub mod cli_command;
pub mod errors;
pub mod printer;
Three modules, each with a clear responsibility:
- cli_command: input parsing and validation (the “what” enters the system).
- printer: output formatting (the “what” leaves the system).
- errors: the layer’s error type (the “what” can fail in this layer).
This structure mirrors the Input/Output separation that is inherent to any adapter in a hexagonal architecture. The CLI is a primary adapter (or “driving adapter”): it initiates action. It is not the system that decides when to execute, it is the user who invokes it. But at the responsibility level, it still has an input face (parsing) and an output face (formatting), and we have separated them into distinct modules.
Repo history complement
This chapter connects particularly well with this commit:
If you review it, you clearly see the leap from “functional CLI” to “CLI usable by humans and scripts”. The diff shows exactly the decisions we’ve explained here: the introduction of the --output flag, the separation of the printer module, and the dual output.
Conclusion: plating matters as much as the recipe
Going back to our cooking metaphor: you can have the best ingredients on the market (immutable domain) and a flawless recipe (clean use cases), but if the dish arrives at the table poorly plated, cold, or without the proper cutlery, the diner’s experience is bad.
The same goes for a terminal tool. The domain can be perfect, but if the user has to guess what arguments it accepts, read cryptic errors, or invent a regex to parse the output, the tool won’t be used. And a tool that isn’t used is dead code.
In this chapter we have designed an input adapter that:
- Delegates parsing to clap derive: the type definition is the contract specification. If it compiles, the arguments are valid.
- Validates at the boundary with ValueEnum and FromStr: closed enums for finite options, semantic types (like Uuid) for structured arguments. Everything invalid is rejected before touching business logic.
- Separates types by layer: StatusArg (CLI) is converted to FilterTask (application) via From, preserving dependency inversion.
- Propagates errors without noise: the CliError -> ApplicationError -> DomainError/RepoError chain flows cleanly with ? and #[error(transparent)].
- Offers two output contracts: --output table for humans, --output json for machines. A single flag, decided by the consumer.
- Keeps main.rs as pure wiring: no logic, no business decisions, just connections between parsing, service, and printer.
With the dish presented on the table and the diner satisfied, we have a functional end-to-end CLI: immutable domain, JSON persistence, clean use cases, and a terminal interface with a stable contract. But there is one detail: the diner eats with their eyes. And our current presentation is plain text on stdout.
In the next post we are going to make the leap from home cooking to Michelin star: migrating from a pure CLI to an interactive TUI interface with ratatui. Same kitchen, same ingredients, same recipe, but with plating that elevates the experience to another level.
See you in the kitchen!