Part I — The Language · Chapter 1

The Argument for Rust

Every language makes a bet. C bets on the programmer. Python bets on the interpreter. Java bets on the garbage collector. Rust makes a different bet entirely — it bets on the compiler. Understanding what that means, and why it matters on both a microcontroller and a production web server, is the foundation of everything in this book.
§ 1.1
Memory: The Fundamental Problem

To understand why Rust exists, you need to understand what computing's deepest, most persistent problem actually is. It is not algorithmic complexity. It is not network latency. It is not database design. It is memory — specifically, the question of who owns a piece of memory, who is allowed to read it, who is allowed to write it, and when it should be released.

This problem has been causing catastrophic failures since the 1970s. A 2019 analysis by Microsoft found that approximately 70% of the CVEs they assigned over the previous twelve years were memory safety bugs. Google's Project Zero team reports similar numbers for Chrome. The NSA published guidance in 2022 recommending that organisations transition to memory-safe languages. These are not theoretical concerns — they are buffer overflows in production firewalls, use-after-free vulnerabilities in browsers, and null pointer dereferences in ISP routing daemons. The attacks that compromised infrastructure you have personally managed in East Africa almost certainly exploited a memory safety bug somewhere in the chain.

Memory management is hard because it involves time. A piece of data is allocated at one moment. It may be referenced by many different parts of the program simultaneously. It must be released exactly once — not before all references to it are finished (use-after-free), and not released and then released again (double-free), and not held forever when nothing needs it any more (memory leak). Getting this right across a large codebase, across multiple threads, across code written by multiple engineers over multiple years, is genuinely difficult. The programming languages of the world have tried three distinct approaches to this problem, and each has a cost.

The Three Approaches to Memory Management

Manual, Automatic, and Ownership.

Manual (C, C++): The programmer allocates memory with malloc/new and releases it with free/delete. Maximum performance. Maximum control. Maximum danger. The compiler gives you no help. You can free memory that is still in use. You can forget to free memory. You can free memory twice. You can write past the end of an allocated buffer. None of these mistakes are caught at compile time. All of them become security vulnerabilities.

Automatic (Python, Java, Go, C#): A garbage collector tracks references and releases memory when nothing references it any more. Correctness is guaranteed — you cannot have a dangling pointer because the collector ensures the memory lives as long as something references it. The cost is runtime overhead: CPU cycles for the collector, pause times when the collector runs a major collection, unpredictable latency spikes, and baseline memory overhead. In a Python web service this is acceptable. In an embedded system running on 520KB of RAM with hard timing requirements, it is not.

Ownership (Rust): The compiler tracks ownership of every value at compile time. Memory is released the moment the owner goes out of scope — deterministic, with no runtime cost. The compiler refuses to compile code where two parts of the program could simultaneously modify the same data, or where a reference outlives the data it points to. These are compile-time errors, not runtime failures. If the program compiles, memory safety is guaranteed. Zero runtime overhead. Deterministic release. No garbage collector required.

The ownership approach is Rust's core contribution to computing. It is not a new idea — the theory comes from linear type systems in academic computer science, specifically from work by Philip Wadler and others in the 1990s. What is new is making it practical, ergonomic, and fast enough for systems programming. The borrow checker — the compiler component that enforces ownership rules — is the most important thing in the Rust toolchain. Chapter 2 is entirely devoted to understanding it.

§ 1.2
What C Gets Wrong — And Why It Matters Here

You have written C. Or you have worked with systems written in C — routers, switches, embedded controllers. You know how it feels to debug a segmentation fault that only appears under load on a production system at 3am. Let us be precise about what C's failure modes are, because understanding them makes Rust's design choices legible.

Buffer Overflows

C arrays are simply memory addresses. When you write arr[10] and arr has 8 elements, C will happily compute the address, add the offset, and write or read from whatever happens to be at that memory location. There is no bounds checking at runtime unless you add it yourself. There is certainly no compile-time check. The Morris Worm in 1988 exploited a buffer overflow in fingerd. Heartbleed in 2014 — which affected HTTPS servers globally, including those serving African banking systems — was a buffer over-read in OpenSSL. These were not bugs in obscure edge-case code. They were in core infrastructure written by experienced engineers.

⚠ On Your Network Right Now

The Cisco IOS and Juniper Junos code running your BGP sessions is written in C. The OpenSSL library used by your HTTPS services is written in C. The Linux kernel managing your server's memory is written in C. The probability that at least one CVE affecting code running in your Sprint infrastructure right now is a C memory safety bug is not low. This is not alarmism — it is the documented reality of the software supply chain.

Use After Free

In C, when you call free(ptr), the memory is returned to the allocator. The pointer variable itself still holds the same address. Nothing in C prevents you from reading or writing through that pointer after the free. This is a use-after-free vulnerability. If an attacker can cause you to free a buffer and then use it again, they can often control what the reallocated memory contains — and therefore control what your program reads and executes. In C++, this is also possible through smart pointers if you are not careful with shared_ptr cycles and object lifetime.

use_after_free.c — a classic C mistake

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    // This compiles. It runs. It is undefined behaviour.
    // What it does depends on the allocator, the OS, the moon phase.
    char *buf = malloc(256);
    strcpy(buf, "Sprint Group NOC");
    free(buf);
    // ... some code later ...
    printf("%s\n", buf);  // Use after free. May work. May crash. May be exploited.
    return 0;
}
```
The equivalent in Rust — compile error, not runtime crash

```rust
let s = String::from("Sprint Group NOC");
drop(s);  // Explicitly drop s — memory is freed here
println!("{}", s);  // COMPILE ERROR: borrow of moved value: `s`
// error[E0382]: borrow of moved value: `s`
// The program does not compile. The bug cannot ship.
```

The Rust compiler's error is not a warning. It is a hard refusal. The program does not compile. The bug cannot exist in a Rust binary because the compiler will not produce the binary. This is the fundamental difference. In C, correctness depends on programmer discipline applied consistently over the entire lifetime of the codebase by every engineer who ever touches it. In Rust, correctness is mechanically verified on every build.

Data Races

In C, if two threads simultaneously read and write the same memory location without synchronisation, you have a data race. Data races are undefined behaviour in C — the compiler is allowed to assume they do not happen, and will optimise your code in ways that are perfectly logical from its own perspective and catastrophic from yours. Data races in multi-threaded C code are notoriously difficult to reproduce and nearly impossible to find with code review alone. Tools like ThreadSanitizer can find them, but only at runtime, and only in code paths that execute during testing.

In Rust, data races are impossible. The ownership and borrowing rules make it impossible to have two simultaneous mutable references to the same data — from the same thread or from different threads. If you try to share mutable data between threads without proper synchronisation primitives (Mutex, RwLock, atomic operations), the compiler refuses to compile the code. Thread safety is a compile-time guarantee. This is why Rust's standard library types are documented with Send and Sync marker traits — the compiler tracks which types are safe to send between threads and which can be shared between them.

§ 1.3
The Garbage Collection Tax — Why It Matters on a Pico

If C is so dangerous, why not just use a garbage-collected language everywhere? Python, Java, Go — they are all memory-safe. They are all far more productive to write in than C. What is the actual cost of a garbage collector, and why does that cost matter for the work in this book?

Unpredictable Latency

A garbage collector runs periodically to identify and free unreachable memory. The most common modern collectors (like JVM's G1GC, or Python's cyclic GC) require stop-the-world pauses — moments where all your application threads stop so the collector can safely inspect and modify the heap. In Go's concurrent collector, these pauses are sub-millisecond for most workloads. In CPython, the GIL means only one thread runs at a time anyway. In Java, major GC pauses of tens or hundreds of milliseconds are possible under load.

For a web API serving ISP customers, a 50ms GC pause means a 50ms tail latency spike on that request. Users notice latency above 100ms. For a network monitoring system that needs to detect and alert on BGP session drops within seconds, a periodic 100ms freeze is not catastrophic. But consider your Pico's motor controller: if you needed precise PWM timing and a GC pause stopped your control loop for 50ms, the motor would behave erratically. Real-time embedded systems cannot tolerate non-deterministic pauses. This is the first and most important reason there is no garbage-collected language in the embedded world.

Memory Overhead

A garbage collector requires overhead — the collector's own heap metadata, the objects needed to track reference counts or object graphs, the headroom in the heap to allow allocation between collection cycles. The JVM baseline for a minimal application is 50–100MB of RAM. Python's base memory footprint is 20–50MB. Go is more efficient but still requires several megabytes.

Your Pico 2 has 520KB of RAM. Total. All of it. That includes your stack, your global variables, your driver state, your Embassy task arenas, and anything else your application allocates. A Go runtime would consume all available memory and more before you wrote a single line of application code. Garbage collection is architecturally incompatible with the embedded microcontroller world.

  • 520KB — RP2350 RAM. Total: stack + code + data + driver state.
  • ~50MB — JVM minimum. Before writing a single line of application code.
  • ~2KB — Embassy runtime. Total overhead for the async executor.
  • 0ms — GC pauses. No garbage collector; deterministic memory release.
No Standard Library on Bare Metal

Garbage-collected languages depend on a runtime — a layer of code that initialises the heap, starts the collector, provides system services. That runtime requires an operating system to exist. The JVM calls OS APIs to allocate virtual memory. CPython calls malloc. Go's runtime needs OS threads. On the RP2350 with no OS, none of these runtimes can run. The only languages that can run on bare metal are those that can operate without an OS: C, C++, Ada, and Rust.

Of these, only Rust provides memory safety guarantees at compile time. This is not a close call. If you want memory safety on bare metal, Rust is currently the only practical choice.

§ 1.4
Zero-Cost Abstractions — The Promise and What It Actually Means

Bjarne Stroustrup, the creator of C++, articulated the zero-cost abstraction principle: "What you don't use, you don't pay for. And further: What you do use, you couldn't hand-code any better." Rust takes this seriously — more seriously than C++ does, and with stronger guarantees.

Generics Without a Runtime

In Java, generic types are erased at compile time — a mechanism called type erasure — so a List<String> and a List<Integer> are the same class at runtime; the type information is lost. Python has no compile-time generics at all: every method call is resolved by dynamic dispatch at runtime. In both languages, calls on generic values go through an indirection that costs a pointer dereference and prevents inlining.

In Rust, generics are resolved at compile time through monomorphisation. When you write a generic function fn process<T: Display>(item: T), the compiler generates a separate, concrete version of that function for every type T you actually use. If you call it with a u32 and a String, the compiler generates process_u32 and process_string — both fully concrete, both inlinable, both with no runtime type overhead. The abstraction costs nothing at runtime because it is resolved before runtime exists.

Iterators and the Optimizer

In most languages, using higher-order functions (map, filter, fold) instead of for-loops has a performance cost — function call overhead, closure allocation, virtual dispatch. Idiomatic Rust code uses iterators heavily, and yet iterator chains in Rust compile to the same assembly as hand-written loops — often to better assembly, because the optimizer can see the full computation and apply transformations that are not possible when the code is written as explicit loops with mutable variables.

Iterator chain — compiles to the same code as the loop below it

```rust
// Iterator version — idiomatic, readable, zero-cost
let sum: u32 = (0..=100)
    .filter(|n| n % 2 == 0)
    .map(|n| n * n)
    .sum();

// Loop version — the compiler generates identical assembly for both
let mut sum: u32 = 0;
for n in 0..=100u32 {
    if n % 2 == 0 {
        sum += n * n;
    }
}

// The compiler sees through the abstraction. No allocation. No function pointers.
// In --release mode, both produce the same tight loop in assembly.
```
Async/Await Without a Heap Allocator

This is where Rust does something remarkable that no other language does. In Node.js, every async function creates a Promise object on the heap. In Python, every coroutine is a heap-allocated object. In Go, every goroutine has a small but non-zero heap-allocated stack. These allocations are cheap individually but accumulate, require GC, and require a heap to exist at all.

In Rust, an async function compiles to a state machine. The compiler analyses the function's await points, identifies all the variables that must survive across those points, and generates a struct containing exactly those variables. The struct is sized at compile time. It can be allocated on the stack, in a static memory region, or anywhere the programmer chooses — without a heap allocator. This is how Embassy runs async code on the RP2350 with no heap, no malloc, and no garbage collector: every async task is a fixed-size state machine allocated in a static array at startup.

§ 1.5
Who Uses Rust in Production — The Evidence Base

Before learning a new language, it is reasonable to ask whether it has been proven in production at scale. The answer for Rust is unambiguous: yes, at some of the most demanding scale and reliability requirements in the industry.

Rust in Production — Selected Examples

This is not hype. These are engineering decisions made by people with the same constraints you have.

Linux kernel: Since 2022, Rust is an officially supported language for writing Linux kernel drivers and subsystems. This is the single most conservative codebase in computing — Linus Torvalds accepted Rust only after years of review and evidence. In-tree Rust work now includes GPU driver development and a Rust rewrite of Android's Binder driver.

Amazon Web Services: The Firecracker microVM — the technology underneath AWS Lambda and Fargate — is written in Rust. It runs millions of times per day, boots in under 125ms, and has an exceptionally small attack surface. AWS has also rewritten components of S3 in Rust after analysis showed doing so would eliminate a class of memory safety bugs.

Cloudflare: Their edge proxies, DNS resolver (1.1.1.1), and DDoS mitigation systems include significant Rust components. Cloudflare has published extensively on the performance and reliability improvements they observed after migrating from C/C++ and Go.

Microsoft: Azure's IoT Edge runtime is written in Rust. Microsoft's Windows team has been rewriting memory-unsafe components of Windows in Rust. Their analysis of historical CVEs was the trigger — 70% of security vulnerabilities traceable to memory safety bugs that Rust would have prevented at compile time.

Android (Google): Since Android 12, Google has been writing new Android system components in Rust. The number of memory safety vulnerabilities in Android has decreased measurably as Rust adoption has grown.

Mozilla: The original backer of Rust's development. Firefox's media pipeline, CSS engine (Stylo), and increasingly, browser internals are written in Rust. Servo — an experimental browser engine — demonstrated that Rust could achieve C++ performance with safety guarantees.

The pattern is consistent: organisations with the highest reliability and security requirements and the engineering resources to evaluate alternatives carefully are choosing Rust. This is not fashion. It is the result of cost-benefit analysis by engineers who have been burned by memory safety bugs in C and C++, found GC latency unacceptable, and needed something that combined both worlds.

The Rust Survey Numbers

The annual Stack Overflow Developer Survey found Rust to be the "most loved" language (renamed "most admired" in 2023) every year from 2016 onward. This survey metric measures the percentage of developers currently using a language who wish to continue using it. The number for Rust has been above 80% every year — meaning the overwhelming majority of people who have used Rust want to keep using it. Compare this to languages that people use because they have to (PHP, Java) versus languages people use because they want to. Rust is decisively in the latter category.

The Rust Foundation — an independent organisation supporting Rust's development — has members including Google, Microsoft, Amazon, Huawei, Mozilla, and Samsung. This is not a language at risk of disappearing. It is infrastructure at a level that makes it as stable a bet as any language in production today.

§ 1.6
The Embedded Case — Why Rust Specifically on the Pico

Embedded systems have traditionally been the domain of C — and for good reason. C gives you direct memory access, predictable code size, no runtime overhead, and a fifty-year history of toolchain maturity. When you program a microcontroller in C, you know exactly what the compiled binary will do and how much flash it will occupy. These properties are valuable.

The problems with C in embedded systems are the same problems with C everywhere, but they are worse. In a server, a memory corruption bug might cause a service restart or a security breach — both bad, but recoverable. In an embedded system controlling physical hardware, a memory corruption bug can cause a motor to run at full speed when the command was stop, or a valve to open when it should close, or a safety system to disable when it should activate. The consequences are not just data loss but physical damage and injury.

The Embedded Rust Ecosystem Today

Embedded Rust has matured substantially since 2018. The key components are now stable and production-ready:

The HAL trait system defines hardware abstraction layer traits — interfaces for GPIO, SPI, I2C, UART, PWM — that peripheral drivers can be written against. A driver written against the embedded-hal traits works with any microcontroller that implements those traits. This is a level of portability that does not exist in the C embedded ecosystem, where drivers are almost always chip-specific.

Embassy is the async embedded framework we will use throughout this book. It provides an async executor that runs on bare metal, HAL implementations for the RP2350 (and many other chips), timer abstractions, synchronisation primitives, and networking stacks. It is not a toy — it is used in production medical devices, industrial controllers, and aerospace systems.

defmt is a logging framework optimised for embedded systems. Instead of formatting strings on the microcontroller (expensive, requiring heap allocation), defmt sends compact binary representations over RTT (Real-Time Transfer) and formats them on the host machine. You get structured, searchable logs with timestamp and file/line information, with essentially zero impact on your application's timing.

probe-rs is the debugging and flashing tool that talks to your Raspberry Pi Debug Probe over USB. A single cargo run compiles your code, flashes it to the Pico, resets the chip, and starts streaming RTT logs to your terminal. The development loop is as fast as any other language — faster than most, because cargo's incremental compilation only recompiles what changed.

Why This Matters For Your Team

Your engineers are writing C in your embedded products. You are about to change that.

The NovaGen development arm builds embedded systems. The embedded devices in your Kawuku smart infrastructure proposal, your IoT logistics work for Vunja Bei Group, your industrial automation interest — all of these involve embedded code. Today that code is being written in C or MicroPython. C exposes the team to the full range of memory safety bugs. MicroPython is too slow for time-critical applications and too large for resource-constrained devices.

When your engineers write the motor controller from Chapter 10 in Rust, the compiler guarantees that no two tasks accidentally share mutable state. When they write an I2C driver, the compiler guarantees the GPIO pins are not accidentally used elsewhere. When they write a network packet parser, the compiler guarantees no buffer over-reads. These are not hypothetical benefits — they are compile-time proofs that replace the runtime failures you would otherwise discover at the worst possible moment.

This weekend's work with the Pico is the foundation for telling your team: we write embedded systems in Rust now. The argument is this chapter. The evidence is the working system at the end of the book.

Rust vs MicroPython on the Pico

The Pico ships with MicroPython support, and MicroPython is the most accessible entry point for beginners. Why not use it? Three reasons. First, performance: MicroPython runs an interpreter loop. Every Python statement is interpreted at runtime. On a 150MHz ARM Cortex-M33, interpreted Python executes at roughly the speed of a 5MHz processor. For the bit-banging protocol work in Chapter 8, you need microsecond-level timing precision that interpreted code cannot reliably provide. Second, size: the MicroPython runtime itself occupies several hundred kilobytes of flash before you write a line of application code. Third, safety: MicroPython gives you dynamic typing and no compile-time error checking. The errors you find in C at runtime you also find in MicroPython at runtime. Rust finds them at compile time.

The Learning Investment

Rust has a reputation for being difficult to learn. This reputation is partially deserved. The borrow checker — which we cover in depth in the next chapter — will reject code that feels correct to a programmer coming from C or Python. The error messages are excellent (better than any other compiled language's), but understanding them requires understanding the ownership model, and understanding the ownership model requires time.

Here is the honest assessment: the learning curve is steeper than Python and roughly comparable to learning C properly (not just writing C that mostly works). The difference is that what you learn in Rust transfers to every program you write in it. There is no runtime debugging of memory corruption bugs because they cannot exist. There is no mystery crash that only happens on production hardware. The difficulty is front-loaded — paid at compile time, in the form of understanding the compiler's reasoning — and the benefit is a fundamentally different quality of software at the end.

For an engineer of your seniority, with your background in systems that have to work in harsh environments — remote oil fields, cellular towers in rural Uganda, data centres with inconsistent power — the Rust mental model should feel right. The discipline it demands is the discipline you already apply to infrastructure work. The checklist methodology is the same. Rust is just the compiler applying your checklist automatically on every build.

§ 1.7
Exercises
Exercise 1.1 — Comparative Analysis

Language selection for three real problems

For each of the following scenarios, argue for and against using Rust. Be specific — cite the tradeoffs discussed in this chapter.

  • A Zabbix alerting microservice that receives webhook events from your network monitoring system and forwards them to PagerDuty and Telegram.
  • An embedded controller for the solar charge regulator at a rural tower site with no on-site maintenance capability, monitoring battery voltage and controlling charge current via PWM.
  • A replacement for the Django-based Sprint Group customer portal — currently handling 5,000 active subscribers and their billing data.

For each scenario, identify the dominant concerns (performance, memory safety, developer productivity, ecosystem maturity, team skill availability) and explain which language properties matter most.

Exercise 1.2 — CVE Research

Find a real memory safety vulnerability in software you run

Go to the National Vulnerability Database (nvd.nist.gov) and search for CVEs in the following software. For each CVE you find, identify the class of memory safety bug (buffer overflow, use-after-free, null pointer dereference, etc.) and explain in one paragraph how Rust's ownership model would have prevented it at compile time — or if the bug is not a memory safety bug, explain what category it is and whether Rust would have helped.

  • OpenSSL (search for 2014–2016)
  • Sudo (search for 2021)
  • Linux kernel net/ipv6 (search for any year)
Exercise 1.3 — The Abstraction Cost

Measure what the Rust compiler actually generates

After completing the toolchain setup in Chapter 5, create a new Rust project with cargo new measure-abstractions. Write two functions that compute the sum of squares of even numbers from 0 to 1000 — one using an iterator chain (filter, map, sum) and one using an explicit for-loop with a mutable accumulator. Compile with cargo build --release and examine the generated assembly with cargo install cargo-asm && cargo asm measure_abstractions::iterator_version. Compare the assembly of both functions. What do you observe about the zero-cost abstraction claim?

Exercise 1.4 — The Embedded Argument

Write the business case

Write a one-page technical brief addressed to Isaac Walusimbi (CEO, Sprint Group) arguing for adopting Rust in NovaGen's embedded development practice. The audience is technically literate but not an embedded systems engineer. The brief should address: what problem Rust solves, what the transition cost is (realistic assessment), what the ongoing benefit is, and what the first project should be. Use the numbers from this chapter. Be honest about the difficulty. Do not oversell.