Ownership in Rust, Part 2: Six ways to share state — and how to pick the right one

Rafael Fernandez
Ownership in Rust - This article is part of a series.
Part 2: This Article

In the Jedi Temple, younglings learn seven forms of lightsaber combat. Each form was developed to counter a specific threat: Form III (Soresu) is pure defense, Form V (Djem So) turns the enemy’s strength against them, Form VII (Juyo) channels aggression into power. A Jedi who uses only one form will eventually face an opponent that form cannot handle.

Rust’s ownership model works the same way. There is no single “correct” way to share data. There are six strategies, each designed for a specific kind of problem. Using the wrong one does not just produce suboptimal code; it produces code that is harder to maintain, harder to reason about, and in some cases, code that panics at runtime or deadlocks in production.

The previous post covered what .clone() actually does. This post covers when to use it, and when to reach for something else entirely.

Form I: Move — transfer ownership

fn process(data: Vec<u8>) {
    // data is owned here. Dropped at end of scope.
}

let buffer = vec![0u8; 1024];
process(buffer);
// buffer is invalid. Ownership was transferred.

The form: one owner, one lifetime. The value goes in, gets consumed, and ceases to exist for the caller.

When it excels: linear workflows where the producer creates data and the consumer consumes it. No sharing, no borrowing, no complexity. This is the default in Rust, and it is the simplest model.

When it fails: you need the value after giving it away. Or two consumers need it. The moment you write process(buffer) and then try to use buffer again, the compiler stops you. Move is exclusive by nature.
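One escape hatch that keeps the single-owner model: the consuming function can hand the value back by returning it. A minimal sketch (the function names are illustrative, not from the article):

```rust
// A consuming function can return ownership to the caller,
// so the value survives the call without any sharing.
fn process(mut data: Vec<u8>) -> Vec<u8> {
    data.push(1); // do some work while owning the buffer
    data          // transfer ownership back to the caller
}

fn main() {
    let buffer = vec![0u8; 4];
    let buffer = process(buffer); // rebind: ownership round-trips
    assert_eq!(buffer.len(), 5);
}
```

This works for linear pipelines, but it clutters signatures as soon as more than one value needs to survive; at that point borrowing (Form II) is the cleaner tool.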

Form II: Borrow — lend without giving

fn analyze(data: &[u8]) -> usize { data.len() }
fn modify(data: &mut Vec<u8>) { data.push(42); }

let mut buffer = vec![0u8; 1024];
let len = analyze(&buffer);     // immutable borrow: many simultaneous readers
modify(&mut buffer);             // mutable borrow: exclusive writer

The form: zero-cost access. No allocation, no duplication, no overhead. The borrow checker verifies at compile time that borrows do not overlap dangerously.

When it excels: the data outlives all borrowers, and the scope is contained. A function that reads data and returns a result. A method that mutates a field and returns. Local, scoped, predictable.

When it fails: lifetimes propagate.

struct Processor<'a> {
    config: &'a Config,
}

struct Server<'a> {
    processor: Processor<'a>,
}

struct Application<'a> {
    server: Server<'a>,
}

Three layers of lifetime parameters for a reference to a config struct. Now try to move Application into a spawned thread. 'static is required. The lifetimes block you. You either refactor the entire ownership chain or you clone the config. The config is 200 bytes. The refactor takes two hours and makes every type signature worse.

Borrowing is the optimal solution when lifetimes are local. When they are not, the cure becomes worse than the disease.
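Here is a sketch of the clone-the-config escape route described above (field names are assumptions): storing an owned copy removes every lifetime parameter, and the type becomes `'static`-friendly, so moving it into a thread just works.

```rust
// Owning the config (via a cheap clone) instead of borrowing it
// removes the <'a> parameter from every layer of the type stack.
#[derive(Clone)]
struct Config {
    name: String,
}

struct Processor {
    config: Config, // owned: no lifetime parameter anywhere
}

fn main() {
    let config = Config { name: "api".into() };
    let p = Processor { config: config.clone() };

    // The type owns all its data, so it can cross a thread boundary.
    let handle = std::thread::spawn(move || p.config.name.len());
    assert_eq!(handle.join().unwrap(), 3);
}
```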

Form III: Clone — give each consumer its own copy

let config = Config { name: "api".into(), port: 8080, tags: vec![] };

let config_for_server = config.clone();
let config_for_logger = config.clone();

start_server(config_for_server);
start_logger(config_for_logger);

The form: independent copies. Each consumer owns its data. No lifetime parameters. No shared state. No coordination.

When it excels: the data is cheap to clone and consumers do not need to see each other’s mutations. Config structs, path wrappers, small DTOs, message payloads.

// PathBuf wrapper: ~50 bytes to clone
#[derive(Clone)]
struct JsonFileTaskRepository { file_path: PathBuf }

// Builder template: clone and customize
let template = RequestBuilder::new("https://api.example.com")
    .header("Authorization", "Bearer token");
let get_users = template.clone().path("/users");
let get_posts = template.clone().path("/posts");

When it fails catastrophically:

let image: Vec<u8> = load_image();  // 10 MB

// This copies 10 megabytes. Every. Time.
let copy = image.clone();
process(copy);

Cloning 10 MB takes ~100-500 microseconds. An Arc::clone takes ~5 nanoseconds. That is a difference of up to 100,000x. If this is inside a request handler processing 1,000 images per second, the clone alone consumes most of your CPU budget.

The rule: if the struct contains small scalar fields and short strings, clone. If it contains anything proportional to user input (buffers, collections, caches), measure before cloning.
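"Measure before cloning" can be as simple as timing the clone with std::time::Instant. A minimal sketch (the 10 MB size is the article's example figure; the harness itself is an assumption):

```rust
use std::time::Instant;

// Time a clone before assuming it is cheap. Numbers vary by
// machine, allocator, and cache state, so measure in context.
fn main() {
    let image = vec![0u8; 10 * 1024 * 1024]; // 10 MB buffer
    let start = Instant::now();
    let copy = image.clone();
    let elapsed = start.elapsed();
    println!("cloning {} bytes took {:?}", copy.len(), elapsed);
}
```

For anything that matters, prefer a real benchmark harness over a one-off timing, since a single measurement is dominated by noise.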

Form IV: Rc<T> / Arc<T> — shared ownership

use std::sync::Arc;

let data = Arc::new(vec![1, 2, 3, 4, 5]);

let handle1 = Arc::clone(&data);  // refcount: 2
let handle2 = Arc::clone(&data);  // refcount: 3
// Three handles, one allocation, zero data copies.

The form: multiple owners, single allocation. The data lives until the last owner drops it. No duplication, no lifetimes.

Variant   Thread-safe?   Overhead per clone
Rc<T>     No (!Send)     ~2-5 ns (non-atomic increment)
Arc<T>    Yes            ~5-10 ns (atomic increment)

When it excels: the data is expensive to clone and multiple consumers need read access. Images, precomputed tables, parsed configuration trees, shared datasets.

When it fails: consumers need to mutate the data. Arc<T> gives you &T, not &mut T. If you need mutation, you need Form V.

Convention: write Arc::clone(&x) instead of x.clone(). Both compile identically. The convention signals to human readers: “this is a refcount bump, not a deep copy.” A small thing, but in a codebase with both Arc and heavy structs, this distinction prevents confusion.

Form V: Interior mutability — Rc<RefCell<T>> / Arc<Mutex<T>>

// Single-threaded: Rc<RefCell<T>>
use std::cell::RefCell;
use std::rc::Rc;

let cache = Rc::new(RefCell::new(HashMap::new()));
let cache2 = Rc::clone(&cache);

cache2.borrow_mut().insert("key", "value");
assert_eq!(cache.borrow().get("key"), Some(&"value"));
// Both handles see the mutation.

// Multi-threaded: Arc<Mutex<T>>
use std::sync::{Arc, Mutex};

let counter = Arc::new(Mutex::new(0u64));
let counter2 = Arc::clone(&counter);

std::thread::spawn(move || {
    *counter2.lock().unwrap() += 1;
});

The form: shared ownership plus mutation. The borrow checker’s static analysis is replaced by runtime checks: RefCell panics on double mutable borrow, Mutex blocks on contention.
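The runtime check is easy to see in isolation. A small sketch: a second mutable borrow fails while the first is alive, and try_borrow_mut surfaces that as an Err instead of the panic that borrow_mut would raise.

```rust
use std::cell::RefCell;

// RefCell enforces the borrow rules at runtime: one writer at a time.
fn main() {
    let cell = RefCell::new(0u32);

    let first = cell.borrow_mut();
    // A second borrow_mut() here would panic; try_borrow_mut reports Err.
    assert!(cell.try_borrow_mut().is_err());

    drop(first); // release the first mutable borrow
    assert!(cell.try_borrow_mut().is_ok()); // now a writer is allowed
}
```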

When it excels: shared mutable state that cannot be restructured into message passing. Caches, counters, global registries.

When it fails: overuse. If you are wrapping every field in Rc<RefCell<>>, you have not solved an ownership problem; you have turned Rust into a garbage-collected language with extra steps. The compile-time guarantees are gone. The RefCell can panic at runtime. The Mutex can deadlock.

This is the dark side of the Force: powerful, but it erodes the guarantees that make Rust valuable. Use it when you must. Question yourself when you reach for it.

Form VI: Cow<T> — clone on write

use std::borrow::Cow;

fn normalize_path(input: &str) -> Cow<'_, str> {
    if input.contains("//") {
        Cow::Owned(input.replace("//", "/"))  // allocates only when needed
    } else {
        Cow::Borrowed(input)                   // zero-cost borrow
    }
}

let a = normalize_path("/home/user");     // Borrowed. No allocation.
let b = normalize_path("/home//user");    // Owned. Allocated.

The form: defer the clone until mutation is actually needed. In the common case, no allocation happens. In the rare case, a clone is made.

When it excels: functions that usually return the input unchanged but occasionally transform it. Parsers, normalizers, template engines, string processors. Any pipeline where most values pass through unmodified.

When it fails: the mutation is always needed. If every call goes through the Owned path, Cow adds a branch and an enum wrapper for no benefit. Just return an owned value directly.

The decision tree

Do multiple consumers need the data?
├─ No → Move (Form I)
└─ Yes
   ├─ Do all consumers outlive the data source?
   │  ├─ Yes, and lifetimes stay local → Borrow (Form II)
   │  └─ No, or lifetimes propagate virally
   │     ├─ Is the data cheap to clone? (<1 KB, no heavy fields)
   │     │  ├─ Yes → Clone (Form III)
   │     │  └─ No
   │     │     ├─ Do consumers need to mutate?
   │     │     │  ├─ No → Arc<T> / Rc<T> (Form IV)
   │     │     │  └─ Yes
   │     │     │     ├─ Single-threaded? → Rc<RefCell<T>> (Form V)
   │     │     │     └─ Multi-threaded? → Arc<Mutex<T>> (Form V)
   │     │     └─ Mutation is rare? → Cow<T> (Form VI)
   │     └─ Crossing thread/async boundary?
   │        └─ Must be 'static → Arc<T> or Clone

The key question at every node: what is inside the struct? The answer determines which form applies. A PathBuf wrapper clones in 50 nanoseconds; use Form III. A 10 MB image buffer cannot afford Form III; use Form IV. A shared mutable cache requires Form V. A text normalizer that rarely modifies fits Form VI.

When clone is right: four patterns

Config structs

#[derive(Clone)]
struct AppConfig {
    db_url: String,     // ~50 bytes
    port: u16,          // 2 bytes
    log_level: String,  // ~10 bytes
}

let config = load_config();
let server = Server::new(config.clone());  // ~60 bytes cloned
let worker = Worker::new(config);          // moved, not cloned

Arc<AppConfig> would add atomic refcounting and force every API to accept Arc<AppConfig> instead of AppConfig. For 60 bytes, that is buying a Star Destroyer to cross a river.

Message passing

Each message is an independent value. Clone is the semantic model:

tx.send(event.clone()).await?;
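The snippet above assumes an async channel; the same shape works synchronously with std::sync::mpsc. A sketch (channel and event names are illustrative): each consumer receives its own owned copy, and the last send can move the original instead of cloning it.

```rust
use std::sync::mpsc;
use std::thread;

// Each message is an independent value: clone for all but the last send.
fn main() {
    let (tx1, rx1) = mpsc::channel::<String>();
    let (tx2, rx2) = mpsc::channel::<String>();

    let event = String::from("user_created");
    tx1.send(event.clone()).unwrap(); // consumer 1 gets its own copy
    tx2.send(event).unwrap();         // last consumer takes the original

    let h1 = thread::spawn(move || rx1.recv().unwrap());
    let h2 = thread::spawn(move || rx2.recv().unwrap());
    assert_eq!(h1.join().unwrap(), "user_created");
    assert_eq!(h2.join().unwrap(), "user_created");
}
```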

Builder/template patterns

Clone a base, customize the copy:

let base = Request::builder().timeout(30).auth("token");
let req_a = base.clone().path("/users");
let req_b = base.clone().path("/posts");

PathBuf wrappers

The case from our TUI project: a JsonFileTaskRepository wrapping a single PathBuf. The clone copies ~50 bytes. The I/O that follows costs milliseconds. The clone is invisible in the profile.

When clone is catastrophic: four anti-patterns

Large data buffers

// 10 MB buffer cloned per request = ~500 μs per clone
// At 1,000 req/s = 500 ms/s spent just cloning. Half your CPU budget.
let frame = image_buffer.clone();  // DON'T
let frame = Arc::clone(&image_buffer);  // DO: 5 ns, not 500 μs

Database connection pools

Pools contain OS resources: TCP sockets, TLS sessions. They either do not implement Clone (correct) or their Clone is an Arc::clone in disguise (also correct, but understand what you are cloning).

Shared mutable state

// WRONG: each clone is an independent snapshot
let cache_a = cache.clone();  // Task A inserts here
let cache_b = cache.clone();  // Task B never sees A's inserts

// RIGHT: shared ownership
let cache = Arc::new(RwLock::new(HashMap::new()));

If consumers must see each other’s mutations, clone is structurally wrong. It creates copies, not views.
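A fuller sketch of the "RIGHT" shape from above: both handles are views of one map, so an insert made through one clone of the Arc is visible through every other handle.

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::thread;

// Arc<RwLock<_>> gives shared views, not independent snapshots.
fn main() {
    let cache = Arc::new(RwLock::new(HashMap::new()));
    let cache_a = Arc::clone(&cache);

    let writer = thread::spawn(move || {
        cache_a.write().unwrap().insert("key", "value");
    });
    writer.join().unwrap();

    // The original handle observes the mutation made through the clone.
    assert_eq!(cache.read().unwrap().get("key"), Some(&"value"));
}
```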

Types that refuse Clone

File, TcpStream, MutexGuard. These types represent unique system resources. The absence of Clone is the type author telling you: this value cannot be meaningfully duplicated. Restructure your ownership instead of fighting the type system.
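The usual restructuring is to share the handle rather than duplicate the resource: wrap it in Arc<Mutex<_>> and clone the Arc. A sketch, with a Vec<u8> standing in for a File or TcpStream (the pattern is the same for real I/O types, since all of them implement Write):

```rust
use std::io::Write;
use std::sync::{Arc, Mutex};

// Clone a cheap Arc handle to the resource, never the resource itself.
fn main() {
    let sink: Arc<Mutex<Vec<u8>>> = Arc::new(Mutex::new(Vec::new()));
    let sink2 = Arc::clone(&sink); // refcount bump, not a second resource

    sink.lock().unwrap().write_all(b"one ").unwrap();
    sink2.lock().unwrap().write_all(b"two").unwrap();

    // Both handles wrote into the same underlying sink.
    assert_eq!(sink.lock().unwrap().as_slice(), b"one two");
}
```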

Where this leaves us

Six forms. Each one exists because the others cannot handle every situation. The challenge is not learning the forms; it is recognizing which one the current situation demands.

The next post closes the series with the cultural and theoretical angle: why the Rust community developed an instinctive fear of clone, whether that fear is justified, and what affine type theory and linear logic have to say about the explicit duplication that .clone() represents.
