Hello there. In this series, I’ll tell you how we are building todo-cli-rs starting from CodeCrafters’ Project #1 (Rust Projects). If you want to follow the code in parallel, here is the repo: github.com/rafafrdz/todo-cli-rs.
The challenge seemed simple (until it wasn’t) #
On paper, it was a small CLI: receive commands, operate on tasks, and persist local state. The initial MVP looked like this:
- add <title>
- list [--status <all|todo|done>]
- done <id>
- todo <id>
- delete <id>
Easy, right? The problem wasn’t making it work today. The problem was avoiding it becoming a giant main.rs in two weeks’ time, touching parsing, JSON, console output, and business rules all at the same time.
The decision that changed the project #
Why Hexagonal Architecture for a CLI? #
Normally, command-line tools are written as a single script: a main.rs that reads arguments with clap, mutates a JSON file on disk, and does a println! of the result.

It works fast, but it mixes three things that should be orthogonal:
- The input mechanism (CLI argument parsing).
- The business rules (what a valid task is, what state transitions are allowed).
- The storage mechanism (writing to the file system).
If tomorrow we want this CLI to become an interactive terminal interface (TUI) using ratatui, or if we want to expose the tasks through an HTTP API, we would have to untangle all the logic.
That’s why, before writing use cases, we decided on the structure:
- Hexagonal Architecture (Ports and Adapters) to protect the domain and make it independent of the infrastructure.
- Screaming Architecture so the folder tree screams “tasks” and not “utils”.
Is it using a sledgehammer to crack a nut for such a small MVP? Probably. Is it worth it? Absolutely. The initial cost of defining this boundary lets us treat the application’s core as a pure Rust library, and it prevents painful refactors when requirements change (like switching from JSON persistence to SQLite).
More details here: docs/architecture.md
Contracts first: input commands #
Defining the inputs well wasn’t just UX, it was system design. When you narrow down commands and arguments from the beginning, you reduce ambiguity throughout the whole flow.
In this first iteration, we stuck with the previous subset. Reference implementation: src/tasks/adapters/cli/cli_command.rs
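To make the contract concrete, here is a hedged sketch of what that command type might look like. The names (`CliCommand`, `parse_command`) and the hand-rolled parser are illustrative assumptions to keep the sketch dependency-free; the real implementation lives in the file linked above.

```rust
// Illustrative sketch of the CLI command contract. The real definitions
// live in src/tasks/adapters/cli/cli_command.rs; names here are assumed.
#[derive(Debug, PartialEq)]
pub enum CliCommand {
    Add { title: String },
    List { status: Option<String> },
    Done { id: u64 },
    Todo { id: u64 },
    Delete { id: u64 },
}

// Hand-rolled parsing (the project can use clap for this); shown only
// to make the input contract tangible.
pub fn parse_command(args: &[&str]) -> Result<CliCommand, String> {
    match args {
        ["add", rest @ ..] if !rest.is_empty() => Ok(CliCommand::Add { title: rest.join(" ") }),
        ["list"] => Ok(CliCommand::List { status: None }),
        ["list", "--status", s] => Ok(CliCommand::List { status: Some((*s).to_string()) }),
        ["done", id] => id.parse().map(|id| CliCommand::Done { id }).map_err(|e| e.to_string()),
        ["todo", id] => id.parse().map(|id| CliCommand::Todo { id }).map_err(|e| e.to_string()),
        ["delete", id] => id.parse().map(|id| CliCommand::Delete { id }).map_err(|e| e.to_string()),
        _ => Err(format!("unknown command: {args:?}")),
    }
}

fn main() {
    let cmd = parse_command(&["add", "buy", "milk"]).unwrap();
    assert_eq!(cmd, CliCommand::Add { title: "buy milk".into() });
    println!("parsed: {cmd:?}");
}
```

Narrowing inputs to a closed enum like this is what lets every later layer pattern-match exhaustively instead of guessing.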
Understanding Hexagonal Architecture in depth #
When we talk about “Hexagonal Architecture”, many people look for the hexagon’s six sides in their code or imagine concentric circles. The reality is that the original name Alistair Cockburn gave it is much more descriptive: Ports and Adapters Architecture.
The hexagon is just a visual metaphor to break away from traditional layered architecture (top-down). By using a flat figure, it illustrates that a system can have multiple entry points (CLI, HTTP, events) and multiple exit points (databases, FS, third-party APIs) all connecting to a central core. There is no “top and bottom”, there is inside and outside.
If we want to theorize a bit more, the system looks more like a directed graph of dependencies. The real value of the model isn’t in drawing a geometric shape, but in a fundamental concept: the Dependency Inversion Principle (DIP).
In a classic architecture, business code calls the database directly to save data. That is, the business depends on the infrastructure. In hexagonal architecture, we flip it around:
- The domain (the center) defines what it needs to function through Ports (in Rust, these are plain traits).
- The infrastructure (the exterior) provides Adapters that implement those ports.
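In Rust terms, a port is nothing more than a trait that the core owns. A minimal sketch, with illustrative names (`Task`, `TaskRepository`) that are assumptions, not the project's actual definitions:

```rust
// Hypothetical domain entity, kept deliberately tiny for the sketch.
#[derive(Debug, Clone, PartialEq)]
pub struct Task {
    pub id: u64,
    pub title: String,
    pub done: bool,
}

// The port: the core declares WHAT it needs, never HOW it is stored.
// Nothing here mentions JSON, the file system, or SQLite.
pub trait TaskRepository {
    fn save(&mut self, task: Task);
    fn find(&self, id: u64) -> Option<Task>;
    fn all(&self) -> Vec<Task>;
}

fn main() {
    // A throwaway stub, just to show the contract is usable from the core.
    struct Stub(Vec<Task>);
    impl TaskRepository for Stub {
        fn save(&mut self, task: Task) { self.0.push(task); }
        fn find(&self, id: u64) -> Option<Task> { self.0.iter().find(|t| t.id == id).cloned() }
        fn all(&self) -> Vec<Task> { self.0.clone() }
    }
    let mut repo = Stub(Vec::new());
    repo.save(Task { id: 1, title: "read docs".into(), done: false });
    assert_eq!(repo.find(1).unwrap().title, "read docs");
    println!("port exercised through a stub adapter");
}
```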
That’s why the three underlying ideas that govern our code are:
- The domain is the semantic core. It knows nothing about JSON, clap, or the terminal. It must remain stable.
- The infrastructure is a replaceable implementation detail.
- The ports dictate the contract. They are the boundary that protects the domain.
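To make "the domain knows nothing about JSON, clap, or the terminal" tangible, here is a toy domain rule with zero dependencies. It is a sketch only (the real domain model comes in the next installment); `Status` and `complete` are names assumed for illustration:

```rust
// A pure domain rule: no serde, no clap, no I/O. Plain Rust only.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Status {
    Todo,
    Done,
}

impl Status {
    // The domain alone decides which state transitions are legal.
    pub fn complete(self) -> Result<Status, &'static str> {
        match self {
            Status::Todo => Ok(Status::Done),
            Status::Done => Err("task is already done"),
        }
    }
}

fn main() {
    assert_eq!(Status::Todo.complete(), Ok(Status::Done));
    assert!(Status::Done.complete().is_err());
    println!("transition rules hold");
}
```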
Now, grounding this into something operational for this project:
- Compilation dependencies always point towards the core.
- Use cases orchestrate the flow using the ports.
- Adapters (CLI, JSON Persistence) live on the edge and implement those ports.
With that grounding, a quick view of the structure looks like this:
        +----------------------+
        |         main         |
        |   (CLI entry point)  |
        +----------+-----------+
                   |
                   v
      +--------------------------+
      |  application/use_cases   |
      | (orchestrates the flow)  |
      +------+------------+------+
             |            |
             v            v
+-------------+    +--------------------+
|    ports    |    |       domain       |
| (contracts) |    |    (pure rules)    |
+------+------+    +--------------------+
       ^
       |
+------+-----------------------+
|                              |
v                              v
+---------------------+    +-------------------------+
|    adapters/cli     |    |  adapters/persistence   |
|   (input/output)    |    |  (json, fs, sqlite...)  |
+---------------------+    +-------------------------+

If you want to see it through two different lenses, this double outline usually clears things up a lot:
1) Execution flow (what happens when running a command)
User
  |
  v
CLI adapter (clap / parser)
  |
  v
Use case (application)
  |
  v
Domain (entities + rules)
  |
  v
Port (trait)
  |
  v
Persistence adapter (json/fs/sqlite)

2) Dependency flow (who knows whom in code)
adapters  ------implement------> ports
use_cases ------use------------> ports
use_cases ------use------------> domain
domain    ------(no deps)------> no one

The key lies in separating these two questions: how the code executes doesn’t always match how the code is coupled.
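The dependency flow can be shown directly in the type system: a use case that is generic over the port compiles without knowing any adapter. All names below (`TaskRepository`, `mark_done`, `VecRepo`) are assumptions for the sketch, not the project's real signatures:

```rust
// The port the use case depends on at compile time.
pub trait TaskRepository {
    fn set_done(&mut self, id: u64) -> bool;
}

// The use case only knows the contract; no adapter in sight.
pub fn mark_done<R: TaskRepository>(repo: &mut R, id: u64) -> Result<(), String> {
    if repo.set_done(id) {
        Ok(())
    } else {
        Err(format!("task {id} not found"))
    }
}

// A throwaway adapter, supplied from the outside, at the edge.
struct VecRepo(Vec<(u64, bool)>);
impl TaskRepository for VecRepo {
    fn set_done(&mut self, id: u64) -> bool {
        match self.0.iter_mut().find(|(tid, _)| *tid == id) {
            Some(entry) => { entry.1 = true; true }
            None => false,
        }
    }
}

fn main() {
    let mut repo = VecRepo(vec![(1, false)]);
    assert!(mark_done(&mut repo, 1).is_ok());
    assert!(mark_done(&mut repo, 99).is_err());
    println!("dependency points at the port, not the adapter");
}
```

Notice that `mark_done` would keep compiling unchanged if `VecRepo` were replaced by a JSON or SQLite adapter: that is the Dependency Inversion Principle expressed as a trait bound.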
Partitioning: technical vs domain #
Besides separating layers, we must decide how to group the project. In practical terms, here are two strategies:
- Domain partitioning: ideal when the business is broad, with clear subdomains and mixed teams (product, business, engineering). Ubiquitous language rules. Defining a semantics everyone understands is essential.
- Technical partitioning: useful when the scope is small, the team shares technical context, and delivery speed is a priority.
For this MVP we chose technical partitioning because the initial domain (task) was bounded and didn’t yet justify opening multiple business contexts.
Quick module map:
- domain: pure business rules.
- ports: contracts to decouple application and infrastructure.
- adapters/cli: terminal input/output.
- adapters/persistence: concrete repositories.
- application/use_cases: flow orchestration.
- main: entry point.
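As a rough orientation, that map could translate into a mod.rs along these lines (a plausible fragment only; the actual file is in the repo and the visibility choices here are assumptions):

```rust
// Plausible shape of src/tasks/mod.rs mirroring the module map.
pub mod domain;      // pure business rules
pub mod ports;       // contracts (traits)
pub mod adapters;    // cli and persistence submodules
pub mod application; // use_cases
```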
Base module code: src/tasks/mod.rs
Real turning point #
The choice was validated when we introduced dual output (table/json) and file persistence. Not a single use case had to learn about serde_json, system paths, or formatted println!: everything was encapsulated in adapters.
That is the sign that the architecture is doing its job.
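The dual-output case fits the same pattern: rendering hides behind a port, so use cases never call println! or serde_json directly. The sketch below is illustrative (the hand-rolled JSON exists only to keep it dependency-free; the project itself can use serde_json inside the adapter, and all names are assumptions):

```rust
pub struct Task {
    pub id: u64,
    pub title: String,
}

// Output port: the use case asks for a rendering, nothing more.
pub trait TaskRenderer {
    fn render(&self, tasks: &[Task]) -> String;
}

// Table adapter: formatting details stay at the edge.
pub struct TableRenderer;
impl TaskRenderer for TableRenderer {
    fn render(&self, tasks: &[Task]) -> String {
        tasks.iter()
            .map(|t| format!("{:>4}  {}", t.id, t.title))
            .collect::<Vec<_>>()
            .join("\n")
    }
}

// JSON adapter: hand-rolled here only to avoid dependencies in the sketch.
pub struct JsonRenderer;
impl TaskRenderer for JsonRenderer {
    fn render(&self, tasks: &[Task]) -> String {
        let items: Vec<String> = tasks.iter()
            .map(|t| format!(r#"{{"id":{},"title":"{}"}}"#, t.id, t.title))
            .collect();
        format!("[{}]", items.join(","))
    }
}

fn main() {
    let tasks = vec![Task { id: 1, title: "ship it".into() }];
    assert_eq!(JsonRenderer.render(&tasks), r#"[{"id":1,"title":"ship it"}]"#);
    println!("{}", TableRenderer.render(&tasks));
}
```

Adding a third format later (say, CSV) would mean one new adapter and zero changes to any use case.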
Conclusion #
Starting a project from scratch is always a mix of excitement and pragmatism. The temptation to jump straight into writing code that compiles and spits out a result to the console is enormous. And for a throwaway script, that’s exactly what you should do.
But when the goal is to build something that will evolve, laying solid architectural foundations ceases to be “overengineering” and becomes survival. In this first step, we haven’t yet written the logic to mark a task as completed, but we’ve achieved something much more important: we’ve drawn the boundaries.
By choosing Hexagonal Architecture and dividing our CLI into Domain, Use Cases, and Adapters, we have armored our business rules. We have guaranteed that the day we want to swap the terminal for a graphical interface, or JSON for a real database, the heart of our application won’t suffer a scratch. We’ll only have to write a new adapter and plug it into a port.
The project is no longer a “Rust script”, it’s a business core protected by an infrastructure shell.
In the next installment we will get our hands dirty and enter directly into the core we just isolated: we will design an immutable domain, model the state transitions of our tasks, and create a taxonomy of strongly typed errors by layer. See you in the code!