Optimizations: the speed size tradeoff

Everyone wants their program to be super fast and super small but it's usually not possible to have both characteristics. This section discusses the different optimization levels that rustc provides and how they affect the execution time and binary size of a program.

No optimizations

This is the default. When you call cargo build you use the development (AKA dev) profile. This profile is optimized for debugging so it enables debug information and does not enable any optimizations, i.e. it uses -C opt-level = 0.
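For reference, the dev profile's defaults are roughly equivalent to the following explicit settings in Cargo.toml (a sketch; see the Cargo documentation for the full list of defaults):

    [profile.dev]
    # no optimizations
    opt-level = 0
    # full debug information
    debug = true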

At least for bare metal development, debuginfo is zero cost in the sense that it won't occupy space in Flash / ROM, so we actually recommend that you enable debuginfo in the release profile, where it is disabled by default. That will let you use breakpoints when debugging release builds.

    [profile.release]
    # symbols are nice and they don't increase the size on Flash
    debug = true

No optimizations is great for debugging because stepping through the code feels like you are executing the program statement by statement, plus you can print stack variables and function arguments in GDB. When the code is optimized, trying to print variables results in $0 = <value optimized out> being printed.

The biggest downside of the dev profile is that the resulting binary will be huge and slow. The size is usually more of a problem because unoptimized binaries can occupy dozens of KiB of Flash, which your target device may not have. The result: your unoptimized binary doesn't fit in your device!
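You can check how much Flash a dev build needs with the same cargo size invocation used later in this section (this assumes cargo-binutils is installed and that your binary is named app, as in the examples below):

    $ cargo size --bin app -- -A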

Can we have smaller, debugger friendly binaries? Yes, there's a trick.

Optimizing dependencies

WARNING This section uses an unstable feature and it was last tested on 2018-09-18. Things may have changed since then!

On nightly, there's a Cargo feature named profile-overrides that lets you override the optimization level of dependencies. You can use that feature to optimize all dependencies for size while keeping the top crate unoptimized and debugger friendly.

Here's an example:

    # Cargo.toml
    cargo-features = ["profile-overrides"] # +

    [package]
    name = "app"

    # ..

    [profile.dev.overrides."*"] # +
    opt-level = "z" # +

Without the override:

    $ cargo size --bin app -- -A
    app  :
    section              size        addr
    .vector_table        1024   0x8000000
    .text                9060   0x8000400
    .rodata              1708   0x8002780
    .data                   0  0x20000000
    .bss                    4  0x20000000

With the override:

    $ cargo size --bin app -- -A
    app  :
    section              size        addr
    .vector_table        1024   0x8000000
    .text                3490   0x8000400
    .rodata              1100   0x80011c0
    .data                   0  0x20000000
    .bss                    4  0x20000000

That's a 6 KiB reduction in Flash usage without any loss in the debuggability of the top crate. If you step into a dependency then you'll start seeing those <value optimized out> messages again but it's usually the case that you want to debug the top crate and not the dependencies. And if you do need to debug a dependency then you can use the profile-overrides feature to exclude a particular dependency from being optimized. See the example below:

    # ..

    # don't optimize the `cortex-m-rt` crate
    [profile.dev.overrides.cortex-m-rt] # +
    opt-level = 0 # +

    # but do optimize all the other dependencies
    [profile.dev.overrides."*"]
    codegen-units = 1 # better optimizations
    opt-level = "z"

Now the top crate and cortex-m-rt are debugger friendly!

Optimize for speed

As of 2018-09-18 rustc supports three "optimize for speed" levels: opt-level = 1, 2 and 3. When you run cargo build --release you are using the release profile which defaults to opt-level = 3.
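If you'd rather use level 2 than the default level 3, or simply want to be explicit about the level, you can set it in Cargo.toml. This is just a sketch of the standard profile syntax:

    [profile.release]
    # be explicit about the "optimize for speed" level; 3 is the default
    opt-level = 2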

Both opt-level = 2 and 3 optimize for speed at the expense of binary size, but level 3 does more vectorization and inlining than level 2. In particular, you'll see that at opt-level equal to or greater than 2 LLVM will unroll loops. Loop unrolling has a rather high cost in terms of Flash / ROM (e.g. from 26 bytes to 194 bytes for a "zero this array" loop) but can also halve the execution time given the right conditions (e.g. the number of iterations is big enough).
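To make the trade-off concrete, here's a minimal sketch of the kind of "zero this array" loop being described; the 26 / 194 byte figures above come from the original measurement, not from this snippet:

    // zero an array in place; at opt-level >= 2 LLVM is likely to unroll
    // (and possibly vectorize) this loop, trading Flash space for speed
    fn zero(xs: &mut [u8; 32]) {
        for x in xs.iter_mut() {
            *x = 0;
        }
    }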

Currently there's no way to disable loop unrolling in opt-level = 2 and 3 so if you can't afford its cost you should optimize your program for size.

Optimize for size

As of 2018-09-18 rustc supports two "optimize for size" levels: opt-level = "s" and "z". These names were inherited from clang / LLVM and are not too descriptive but "z" is meant to give the idea that it produces smaller binaries than "s".

If you want your release binaries to be optimized for size then change the profile.release.opt-level setting in Cargo.toml as shown below.

    [profile.release]
    # or "z"
    opt-level = "s"

These two optimization levels greatly reduce LLVM's inline threshold, a metric used to decide whether or not to inline a function. One of Rust's principles is zero cost abstractions; these abstractions tend to use a lot of newtypes and small functions to hold invariants (e.g. functions that borrow an inner value like deref and as_ref) so a low inline threshold can make LLVM miss optimization opportunities (e.g. eliminating dead branches, inlining calls to closures).
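As an illustration, consider a hypothetical newtype like the one below; its small accessor only compiles down to a plain field access if LLVM decides to inline it, otherwise every use pays the cost of a real function call:

    // a newtype used to enforce an invariant over the wrapped value
    pub struct Meters(u32);

    impl Meters {
        // a small "borrow the inner value" style method, in the spirit of
        // `as_ref` / `deref`; it is only zero cost if it gets inlined
        pub fn get(&self) -> u32 {
            self.0
        }
    }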

When optimizing for size you may want to try increasing the inline threshold to see if that has any effect on the binary size. The recommended way to change the inline threshold is to append the -C inline-threshold flag to the other rustflags in .cargo/config.

    # .cargo/config
    # this assumes that you are using the cortex-m-quickstart template
    [target.'cfg(all(target_arch = "arm", target_os = "none"))']
    rustflags = [
      # ..
      "-C", "inline-threshold=123", # +
    ]

What value to use? As of 1.29.0 these are the inline thresholds that the different optimization levels use:

  • opt-level = 3 uses 275
  • opt-level = 2 uses 225
  • opt-level = "s" uses 75
  • opt-level = "z" uses 25

You should try 225 and 275 when optimizing for size.
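For example, a size-optimized build that also raises the inline threshold to one of those values could combine the settings shown above like this (a sketch; the exact value is something you'd tune for your program):

    # Cargo.toml
    [profile.release]
    opt-level = "z"

    # .cargo/config
    [target.'cfg(all(target_arch = "arm", target_os = "none"))']
    rustflags = [
      # ..
      "-C", "inline-threshold=225",
    ]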