Speculative Load Hardening

A Spectre Variant #1 Mitigation Technique

Author: Chandler Carruth - chandlerc@google.com

Problem Statement

Recently, Google Project Zero and other researchers have found information leak vulnerabilities by exploiting speculative execution in modern CPUs. These exploits are currently broken down into three variants:

  • GPZ Variant #1 (a.k.a. Spectre Variant #1): Bounds check (or predicate) bypass
  • GPZ Variant #2 (a.k.a. Spectre Variant #2): Branch target injection
  • GPZ Variant #3 (a.k.a. Meltdown): Rogue data cache load

For more details, see the Google Project Zero blog post and the Spectre research paper.

The core problem of GPZ Variant #1 is that speculative execution uses branch prediction to select the path of instructions speculatively executed. This path is speculatively executed with the available data, and may load from memory and leak the loaded values through various side channels that survive even when the speculative execution is unwound due to being incorrect. Mispredicted paths can cause code to be executed with data inputs that never occur in correct executions, making checks against malicious inputs ineffective and allowing attackers to use malicious data inputs to leak secret data. Here is an example, extracted and simplified from the Project Zero paper:

  struct array {
    unsigned long length;
    unsigned char data[];
  };
  struct array *arr1 = ...; // small array
  struct array *arr2 = ...; // array of size 0x400
  unsigned long untrusted_offset_from_caller = ...;
  if (untrusted_offset_from_caller < arr1->length) {
    unsigned char value = arr1->data[untrusted_offset_from_caller];
    unsigned long index2 = ((value&1)*0x100)+0x200;
    unsigned char value2 = arr2->data[index2];
  }

The key to the attack is to call this with an untrusted_offset_from_caller that is far outside of the bounds while the branch predictor predicts that it will be in-bounds. In that case, the body of the if will be executed speculatively, and may read secret data into value and leak it via a cache-timing side channel when a dependent access is made to populate value2.

High Level Mitigation Approach

While several approaches are being actively pursued to mitigate specific branches and/or loads inside especially risky software (most notably various OS kernels), these approaches require manual and/or static analysis aided auditing of code and explicit source changes to apply the mitigation. They are unlikely to scale well to large applications. We are proposing a comprehensive mitigation approach that would apply automatically across an entire program rather than through manual changes to the code. While this is likely to have a high performance cost, some applications may be in a good position to take this performance / security tradeoff.

The specific technique we propose is to cause loads to be checked using branchless code to ensure that they are executing along a valid control flow path. Consider the following C-pseudo-code representing the core idea of a predicate guarding potentially invalid loads:

  void leak(int data);
  void example(int* pointer1, int* pointer2) {
    if (condition) {
      // ... lots of code ...
      leak(*pointer1);
    } else {
      // ... more code ...
      leak(*pointer2);
    }
  }

This would get transformed into something resembling the following:

  uintptr_t all_ones_mask = std::numeric_limits<uintptr_t>::max();
  uintptr_t all_zeros_mask = 0;
  void leak(int data);
  void example(int* pointer1, int* pointer2) {
    uintptr_t predicate_state = all_ones_mask;
    if (condition) {
      // Assuming ?: is implemented using branchless logic...
      predicate_state = !condition ? all_zeros_mask : predicate_state;
      // ... lots of code ...
      //
      // Harden the pointer so it can't be loaded
      pointer1 &= predicate_state;
      leak(*pointer1);
    } else {
      predicate_state = condition ? all_zeros_mask : predicate_state;
      // ... more code ...
      //
      // Alternative: Harden the loaded value
      int value2 = *pointer2 & predicate_state;
      leak(value2);
    }
  }

The result should be that if the if (condition) { branch is mis-predicted, there is a data dependency on the condition used to zero out any pointers prior to loading through them or to zero out all of the loaded bits. Even though this code pattern may still execute speculatively, invalid speculative executions are prevented from leaking secret data from memory (but note that this data might still be loaded in safe ways, and some regions of memory are required to not hold secrets; see below for detailed limitations). This approach only requires that the underlying hardware have a way to implement a branchless and unpredicted conditional update of a register's value. All modern architectures have support for this, and in fact such support is necessary to correctly implement constant-time cryptographic primitives.

Crucial properties of this approach:

  • It is not preventing any particular side-channel from working. This is important as there are an unknown number of potential side channels and we expect to continue discovering more. Instead, it prevents the observation of secret data in the first place.
  • It accumulates the predicate state, protecting even in the face of nested correctly predicted control flows.
  • It passes this predicate state across function boundaries to provide interprocedural protection.
  • When hardening the address of a load, it uses a destructive or non-reversible modification of the address to prevent an attacker from reversing the check using attacker-controlled inputs.
  • It does not completely block speculative execution, and merely prevents mis-speculated paths from leaking secrets from memory (and stalls speculation until this can be determined).
  • It is completely general and makes no fundamental assumptions about the underlying architecture other than the ability to do branchless conditional data updates and a lack of value prediction.
  • It does not require programmers to identify all possible secret data using static source code annotations or code vulnerable to a variant #1 style attack.

Limitations of this approach:

  • It requires re-compiling source code to insert hardening instruction sequences. Only software compiled in this mode is protected.
  • The performance is heavily dependent on a particular architecture's implementation strategy. We outline a potential x86 implementation below and characterize its performance.
  • It does not defend against secret data already loaded from memory and residing in registers or leaked through other side-channels in non-speculative execution. Code dealing with this, e.g. cryptographic routines, already uses constant-time algorithms and code to prevent side-channels. Such code should also scrub registers of secret data following these guidelines.
  • To achieve reasonable performance, many loads may not be checked, such as those with compile-time fixed addresses. This primarily consists of accesses at compile-time constant offsets of global and local variables. Code which needs this protection and intentionally stores secret data must ensure the memory regions used for secret data are necessarily dynamic mappings or heap allocations. This is an area which can be tuned to provide more comprehensive protection at the cost of performance.
  • Hardened loads may still load data from valid addresses, just not attacker-controlled addresses. To prevent these from reading secret data, the low 2gb of the address space and 2gb above and below any executable pages should be protected.

Credit:

  • The core idea of tracing misspeculation through data and marking pointers to block misspeculated loads was developed as part of a HACS 2018 discussion between Chandler Carruth, Paul Kocher, Thomas Pornin, and several other individuals.
  • The core idea of masking out loaded bits was part of the original mitigation suggested by Jann Horn when these attacks were reported.

Indirect Branches, Calls, and Returns

It is possible to attack control flow other than conditional branches with variant #1 style mispredictions.

  • A prediction towards a hot call target of a virtual method can lead to it being speculatively executed when an unexpected type is used (often called "type confusion").
  • A hot case may be speculatively executed due to prediction instead of the correct case for a switch statement implemented as a jump table.
  • A hot common return address may be predicted incorrectly when returning from a function.

These code patterns are also vulnerable to Spectre variant #2, and as such are best mitigated with a retpoline on x86 platforms. When a mitigation technique like retpoline is used, speculation simply cannot proceed through an indirect control flow edge (or it cannot be mispredicted in the case of a filled RSB) and so it is also protected from variant #1 style attacks. However, some architectures, micro-architectures, or vendors do not employ the retpoline mitigation, and on future x86 hardware (both Intel and AMD) it is expected to become unnecessary due to hardware-based mitigation.

When not using a retpoline, these edges will need independent protection from variant #1 style attacks. The analogous approach to that used for conditional control flow should work:

  uintptr_t all_ones_mask = std::numeric_limits<uintptr_t>::max();
  uintptr_t all_zeros_mask = 0;
  void leak(int data);
  void example(int* pointer1, int* pointer2) {
    uintptr_t predicate_state = all_ones_mask;
    switch (condition) {
    case 0:
      // Assuming ?: is implemented using branchless logic...
      predicate_state = (condition != 0) ? all_zeros_mask : predicate_state;
      // ... lots of code ...
      //
      // Harden the pointer so it can't be loaded
      pointer1 &= predicate_state;
      leak(*pointer1);
      break;

    case 1:
      predicate_state = (condition != 1) ? all_zeros_mask : predicate_state;
      // ... more code ...
      //
      // Alternative: Harden the loaded value
      int value2 = *pointer2 & predicate_state;
      leak(value2);
      break;

    // ...
    }
  }

The core idea remains the same: validate the control flow using data-flow and use that validation to check that loads cannot leak information along misspeculated paths. Typically this involves passing the desired target of such control flow across the edge and checking that it is correct afterwards. Note that while it is tempting to think that this mitigates variant #2 attacks, it does not. Those attacks go to arbitrary gadgets that don't include the checks.

Variant #1.1 and #1.2 attacks: “Bounds Check Bypass Store”

Beyond the core variant #1 attack, there are techniques to extend this attack. The primary technique is known as "Bounds Check Bypass Store" and is discussed in this research paper: https://people.csail.mit.edu/vlk/spectre11.pdf

We will analyze these two variants independently. First, variant #1.1 works by speculatively storing over the return address after a bounds check bypass. This speculative store then ends up being used by the CPU during speculative execution of the return, potentially directing speculative execution to arbitrary gadgets in the binary. Let's look at an example.

  unsigned char local_buffer[4];
  unsigned char *untrusted_data_from_caller = ...;
  unsigned long untrusted_size_from_caller = ...;
  if (untrusted_size_from_caller < sizeof(local_buffer)) {
    // Speculative execution enters here with a too-large size.
    memcpy(local_buffer, untrusted_data_from_caller,
           untrusted_size_from_caller);
    // The stack has now been smashed, writing an attacker-controlled
    // address over the return address.
    minor_processing(local_buffer);
    return;
    // Control will speculate to the attacker-written address.
  }

However, this can be mitigated by hardening the load of the return address just like any other load. This is sometimes complicated because x86, for example, implicitly loads the return address off the stack. However, the implementation technique below is specifically designed to mitigate this implicit load by using the stack pointer to communicate misspeculation between functions. This additionally causes a misspeculation to have an invalid stack pointer and never be able to read the speculatively stored return address. See the detailed discussion below.

For variant #1.2, the attacker speculatively stores into the vtable or jump table used to implement an indirect call or indirect jump. Because this is speculative, this will often be possible even when these are stored in read-only pages. For example:

  class FancyObject : public BaseObject {
  public:
    void DoSomething() override;
  };
  void f(unsigned long attacker_offset, unsigned long attacker_data) {
    FancyObject object = getMyObject();
    unsigned long *arr[4] = getFourDataPointers();
    if (attacker_offset < 4) {
      // We have bypassed the bounds check speculatively.
      unsigned long *data = arr[attacker_offset];
      // Now we have computed a pointer inside of `object`, the vptr.
      *data = attacker_data;
      // The vptr points to the virtual table and we speculatively clobber that.
      g(object); // Hand the object to some other routine.
    }
  }
  // In another file, we call a method on the object.
  void g(BaseObject &object) {
    object.DoSomething();
    // This speculatively calls the address stored over the vtable.
  }

Mitigating this requires hardening loads from these locations, or mitigating the indirect call or indirect jump. Any of these are sufficient to block the call or jump from using a speculatively stored value that has been read back.

For both of these, using retpolines would be equally sufficient. One possible hybrid approach is to use retpolines for indirect calls and jumps, while relying on SLH to mitigate returns.

Another approach that is sufficient for both of these is to harden all of the speculative stores. However, as most stores aren't interesting and don't inherently leak data, this is expected to be prohibitively expensive given the attack it is defending against.
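
For illustration only, hardening a speculative store would mirror the address hardening applied to loads later in this document; the block label and register choices here are hypothetical, not part of the proposed implementation:

  .LBB0_4:                        # %danger
    cmovneq %r8, %rax             # Conditionally update predicate state.
    orq %rax, %rdi                # Mask the store address if misspeculating.
    movl %esi, (%rdi)             # A misspeculated store can no longer hit the
                                  # return address, a vtable, or a jump table.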

Implementation Details

There are a number of complex details impacting the implementation of this technique, both on a particular architecture and within a particular compiler. We discuss proposed implementation techniques for the x86 architecture and the LLVM compiler. These are primarily to serve as an example, as other implementation techniques are very possible.

x86 Implementation Details

On the x86 platform we break down the implementation into three core components: accumulating the predicate state through the control flow graph, checking the loads, and checking control transfers between procedures.

Accumulating Predicate State

Consider baseline x86 instructions like the following, which test three conditions and, if all pass, load data from memory and potentially leak it through some side channel:

  # %bb.0:                        # %entry
    pushq %rax
    testl %edi, %edi
    jne .LBB0_4
  # %bb.1:                        # %then1
    testl %esi, %esi
    jne .LBB0_4
  # %bb.2:                        # %then2
    testl %edx, %edx
    je .LBB0_3
  .LBB0_4:                        # %exit
    popq %rax
    retq
  .LBB0_3:                        # %danger
    movl (%rcx), %edi
    callq leak
    popq %rax
    retq

When we go to speculatively execute the load, we want to know whether any of the dynamically executed predicates have been misspeculated. To track that, along each conditional edge, we need to track the data which would allow that edge to be taken. On x86, this data is stored in the flags register used by the conditional jump instruction. Along both edges after this fork in control flow, the flags register remains alive and contains data that we can use to build up our accumulated predicate state. We accumulate it using the x86 conditional move instruction which also reads the flag registers where the state resides. These conditional move instructions are known to not be predicted on any x86 processors, making them immune to misprediction that could reintroduce the vulnerability. When we insert the conditional moves, the code ends up looking like the following:

  # %bb.0:                        # %entry
    pushq %rax
    xorl %eax, %eax               # Zero out initial predicate state.
    movq $-1, %r8                 # Put all-ones mask into a register.
    testl %edi, %edi
    jne .LBB0_1
  # %bb.2:                        # %then1
    cmovneq %r8, %rax             # Conditionally update predicate state.
    testl %esi, %esi
    jne .LBB0_1
  # %bb.3:                        # %then2
    cmovneq %r8, %rax             # Conditionally update predicate state.
    testl %edx, %edx
    je .LBB0_4
  .LBB0_1:
    cmoveq %r8, %rax              # Conditionally update predicate state.
    popq %rax
    retq
  .LBB0_4:                        # %danger
    cmovneq %r8, %rax             # Conditionally update predicate state.
    ...

Here we create the "empty" or "correct execution" predicate state by zeroing %rax, and we create a constant "incorrect execution" predicate value by putting -1 into %r8. Then, along each edge coming out of a conditional branch, we do a conditional move that in a correct execution will be a no-op, but if misspeculated, will replace %rax with the value of %r8. Misspeculating any one of the three predicates will cause %rax to hold the "incorrect execution" value from %r8, as we preserve incoming values when execution is correct rather than overwriting them.

We now have a value in %rax in each basic block that indicates if at some point previously a predicate was mispredicted. And we have arranged for that value to be particularly effective when used below to harden loads.

Indirect Call, Branch, and Return Predicates

There is no analogous flag to use when tracing indirect calls, branches, and returns. The predicate state must be accumulated through some other means. Fundamentally, this is the reverse of the problem posed in CFI: we need to check where we came from rather than where we are going. For function-local jump tables, this is easily arranged by testing the input to the jump table within each destination (not yet implemented, use retpolines):

    pushq %rax
    xorl %eax, %eax               # Zero out initial predicate state.
    movq $-1, %r8                 # Put all-ones mask into a register.
    jmpq *.LJTI0_0(,%rdi,8)       # Indirect jump through table.
  .LBB0_2:                        # %sw.bb
    cmpq $0, %rdi                 # Validate index used for jump table.
    cmovneq %r8, %rax             # Conditionally update predicate state.
    ...
    jmp _Z4leaki                  # TAILCALL

  .LBB0_3:                        # %sw.bb1
    cmpq $1, %rdi                 # Validate index used for jump table.
    cmovneq %r8, %rax             # Conditionally update predicate state.
    ...
    jmp _Z4leaki                  # TAILCALL

  .LBB0_5:                        # %sw.bb10
    cmpq $2, %rdi                 # Validate index used for jump table.
    cmovneq %r8, %rax             # Conditionally update predicate state.
    ...
    jmp _Z4leaki                  # TAILCALL
  ...

    .section .rodata,"a",@progbits
    .p2align 3
  .LJTI0_0:
    .quad .LBB0_2
    .quad .LBB0_3
    .quad .LBB0_5
    ...

Returns have a simple mitigation technique on x86-64 (or other ABIs which have what is called a "red zone" region beyond the end of the stack). This region is guaranteed to be preserved across interrupts and context switches, making the return address used in returning to the current code remain on the stack and valid to read. We can emit code in the caller to verify that a return edge was not mispredicted:

    callq other_function
  return_addr:
    cmpq $return_addr, -8(%rsp)   # Validate return address.
    cmovneq %r8, %rax             # Update predicate state.

For an ABI without a "red zone" (and thus unable to read the return address from the stack), we can compute the expected return address prior to the call into a register preserved across the call and use that similarly to the above.
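
A minimal sketch of what that might look like, assuming a callee-saved register (here %r14) is free to hold the expected return address; this sequence is illustrative and not taken from the implementation:

    leaq return_addr(%rip), %r14  # Stash the expected return address in a
                                  # register preserved across the call.
    callq other_function
  return_addr:
    leaq return_addr(%rip), %rcx  # Address of the site we actually returned to.
    cmpq %rcx, %r14               # Validate the return edge.
    cmovneq %r8, %rax             # Update predicate state on mismatch.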

Indirect calls (and returns in the absence of a red zone ABI) pose the most significant challenge to propagate. The simplest technique would be to define a new ABI such that the intended call target is passed into the called function and checked in the entry. Unfortunately, new ABIs are quite expensive to deploy in C and C++. While the target function could be passed in TLS, we would still require complex logic to handle a mixture of functions compiled with and without this extra logic (essentially, making the ABI backwards compatible). Currently, we suggest using retpolines here and will continue to investigate ways of mitigating this.

Optimizations, Alternatives, and Tradeoffs

Merely accumulating predicate state involves significant cost. There are several key optimizations we employ to minimize this and various alternatives that present different tradeoffs in the generated code.

First, we work to reduce the number of instructions used to track the state:

  • Rather than inserting a cmovCC instruction along every conditional edge in the original program, we track each set of condition flags we need to capture prior to entering each basic block and reuse a common cmovCC sequence for those.
    • We could further reuse suffixes when there are multiple cmovCC instructions required to capture the set of flags. Currently this is believed to not be worth the cost as paired flags are relatively rare and suffixes of them are exceedingly rare.
  • A common pattern in x86 is to have multiple conditional jump instructions that use the same flags but handle different conditions. Naively, we could consider each fallthrough between them an "edge" but this causes a much more complex control flow graph. Instead, we accumulate the set of conditions necessary for fallthrough and use a sequence of cmovCC instructions in a single fallthrough edge to track it (see the sketch after this list).
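
As a sketch of that last point (the comparison, block labels, and register choices are illustrative rather than taken from the implementation), a fallthrough reached only when neither of two jumps on the same flags is taken can be tracked with one cmovCC per skipped condition:

    cmpl $10, %edi
    jl .LBB1_2                    # First conditional jump on these flags.
    jg .LBB1_3                    # Second conditional jump on the same flags.
  # %bb.1:                        # Fallthrough: only correct when %edi == 10.
    cmovlq %r8, %rax              # Poison the state if "less" actually held...
    cmovgq %r8, %rax              # ...or if "greater" actually held.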

Second, we trade register pressure for simpler cmovCC instructions by allocating a register for the "bad" state. We could read that value from memory as part of the conditional move instruction; however, this creates more micro-ops and requires the load-store unit to be involved. Currently, we place the value into a virtual register and allow the register allocator to decide when the register pressure is sufficient to make it worth spilling to memory and reloading.
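
For comparison, the memory-operand form that this avoids would look roughly like the following, assuming a hypothetical constant pool entry .Lslh_all_ones holding the all-ones mask:

    cmovneq .Lslh_all_ones(%rip), %rax   # Conditionally update predicate state
                                         # from memory (extra load micro-op).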

Hardening Loads

Once we have the predicate accumulated into a special value for correct vs. misspeculated, we need to apply this to loads in a way that ensures they do not leak secret data. There are two primary techniques for this: we can either harden the loaded value to prevent observation, or we can harden the address itself to prevent the load from occurring. These have significantly different performance tradeoffs.

Hardening loaded values

The most appealing way to harden loads is to mask out all of the bits loaded. The key requirement is that for each bit loaded, along the misspeculated path that bit is always fixed at either 0 or 1 regardless of the value of the bit loaded. The most obvious implementation uses either an and instruction with an all-zero mask along misspeculated paths and an all-ones mask along correct paths, or an or instruction with an all-ones mask along misspeculated paths and an all-zero mask along correct paths. Other options become less appealing, such as multiplying by zero or using multiple shift instructions. For reasons we elaborate on below, we end up suggesting the use of or with an all-ones mask, making the x86 instruction sequence look like the following:

  ...

  .LBB0_4:                        # %danger
    cmovneq %r8, %rax             # Conditionally update predicate state.
    movl (%rsi), %edi             # Load potentially secret data from %rsi.
    orl %eax, %edi                # Mask the loaded value: all-ones if misspeculating.

Another useful pattern may be to fold the load into the or instruction itself, at the cost of a register-to-register copy.
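
A rough sketch of that folded form (register choices are illustrative):

  .LBB0_4:                        # %danger
    cmovneq %r8, %rax             # Conditionally update predicate state.
    movl %eax, %edi               # Register-to-register copy of the predicate state.
    orl (%rsi), %edi              # Fold the load into the `or`: the loaded bits are
                                  # forced to all-ones when misspeculating.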

There are some challenges with deploying this approach:

  • Many loads on x86 are folded into other instructions. Separating them would add very significant and costly register pressure with prohibitive performance cost.
  • Loads may not target a general purpose register, requiring extra instructions to map the state value into the correct register class, and potentially more expensive instructions to mask the value in some way.
  • The flags registers on x86 are very likely to be live, and challenging to preserve cheaply.
  • There are many more values loaded than pointers & indices used for loads. As a consequence, hardening the result of a load requires substantially more instructions than hardening the address of the load (see below).

Despite these challenges, hardening the result of the load critically allows the load to proceed and thus has dramatically less impact on the total speculative / out-of-order potential of the execution. There are also several interesting techniques to try and mitigate these challenges and make hardening the results of loads viable in at least some cases. However, we generally expect to fall back when unprofitable from hardening the loaded value to the next approach of hardening the address itself.

Loads folded into data-invariant operations can be hardened after the operation

The first key to making this feasible is to recognize that many operations on x86 are "data-invariant". That is, they have no (known) observable behavior differences due to the particular input data. These instructions are often used when implementing cryptographic primitives dealing with private key data because they are not believed to provide any side-channels. Similarly, we can defer hardening until after them, as they will not in-and-of-themselves introduce a speculative execution side-channel. This results in code sequences that look like:

  ...

  .LBB0_4:                        # %danger
    cmovneq %r8, %rax             # Conditionally update predicate state.
    addl (%rsi), %edi             # Load and accumulate without leaking.
    orl %eax, %edi                # Mask the accumulated value.

While an addition happens to the loaded (potentially secret) value, that doesn't leak any data and we then immediately harden it.

Hardening of loaded values deferred down the data-invariant expression graph

We can generalize the previous idea and sink the hardening down the expression graph across as many data-invariant operations as desirable. This can use very conservative rules for whether something is data-invariant. The primary goal should be to handle multiple loads with a single hardening instruction:

  ...

  .LBB0_4:                        # %danger
    cmovneq %r8, %rax             # Conditionally update predicate state.
    addl (%rsi), %edi             # Load and accumulate without leaking.
    addl 4(%rsi), %edi            # Continue without leaking.
    addl 8(%rsi), %edi
    orl %eax, %edi                # Mask out bits from all three loads.

Preserving the flags while hardening loaded values on Haswell, Zen, and newer processors

Sadly, there are no useful instructions on x86 that apply a mask to all 64 bits without touching the flag registers. However, we can harden loaded values that are narrower than a word (fewer than 32 bits on 32-bit systems and fewer than 64 bits on 64-bit systems) by zero-extending the value to the full word size and then shifting right by at least the number of original bits using the BMI2 shrx instruction:

  ...

  .LBB0_4:                        # %danger
    cmovneq %r8, %rax             # Conditionally update predicate state.
    addl (%rsi), %edi             # Load and accumulate 32 bits of data.
    shrxq %rax, %rdi, %rdi        # Shift out all 32 bits loaded.

Because on x86 the zero-extend is free, this can efficiently harden the loaded value.

Hardening the address of the load

When hardening the loaded value is inapplicable, most often because the instruction directly leaks information (like cmp or jmpq), we switch to hardening the address of the load instead of the loaded value. This avoids increasing register pressure by unfolding the load or paying some other high cost.

To understand how this works in practice, we need to examine the exact semantics of the x86 addressing modes which, in their fully general form, look like offset(%base,%index,scale). Here %base and %index are 64-bit registers that can potentially be any value, and may be attacker controlled, and scale and offset are fixed immediate values. scale must be 1, 2, 4, or 8, and offset can be any 32-bit sign extended value. The exact computation performed to find the address is then: %base + (scale * %index) + offset under 64-bit 2's complement modular arithmetic.

One issue with this approach is that, after hardening, the %base + (scale * %index) subexpression will compute a value near zero (-1 + (scale * -1)) and then a large, positive offset will index into memory within the first two gigabytes of address space. While these offsets are not attacker controlled, the attacker could choose to attack a load which happens to have the desired offset and then successfully read memory in that region. This significantly raises the burden on the attacker and limits the scope of attack but does not eliminate it. To fully close the attack we must work with the operating system to preclude mapping memory in the low two gigabytes of address space.

64-bit load checking instructions

We can use the following instruction sequences to check loads. We set up %r8 in these examples to hold the special value of -1 which will be cmoved over %rax in misspeculated paths.

Single register addressing mode:

  ...

  .LBB0_4:                        # %danger
    cmovneq %r8, %rax             # Conditionally update predicate state.
    orq %rax, %rsi                # Mask the pointer if misspeculating.
    movl (%rsi), %edi

Two register addressing mode:

  ...

  .LBB0_4:                        # %danger
    cmovneq %r8, %rax             # Conditionally update predicate state.
    orq %rax, %rsi                # Mask the pointer if misspeculating.
    orq %rax, %rcx                # Mask the index if misspeculating.
    movl (%rsi,%rcx), %edi

This will result in a negative address near zero or in offset wrapping the address space back to a small positive address. Small, negative addresses will fault in user-mode for most operating systems, but targets which need the high address space to be user accessible may need to adjust the exact sequence used above. Additionally, the low addresses will need to be marked unreadable by the OS to fully harden the load.

RIP-relative addressing is even easier to break

There is a common addressing mode idiom that is substantially harder to check: addressing relative to the instruction pointer. We cannot change the value of the instruction pointer register, and so we have the harder problem of forcing %base + scale * %index + offset to be an invalid address by only changing %index. The only advantage we have is that the attacker also cannot modify %base. If we use the fast instruction sequence above, but only apply it to the index, we will always access %rip + (scale * -1) + offset. If the attacker can find a load which with this address happens to point to secret data, then they can reach it. However, the loader and base libraries can also simply refuse to map the heap, data segments, or stack within 2gb of any of the text in the program, much like they can reserve the low 2gb of address space.

The flag registers again make everything hard

Unfortunately, the technique of using orq instructions has a serious flaw on x86. The very thing that makes it easy to accumulate state, the flag registers containing predicates, causes serious problems here because they may be alive and used by the loading instruction or subsequent instructions. On x86, the orq instruction sets the flags and will override anything already there. This makes inserting them into the instruction stream very hazardous. Unfortunately, unlike when hardening the loaded value, we have no fallback here and so we must have a fully general approach available.

The first thing we must do when generating these sequences is try to analyze the surrounding code to prove that the flags are not in fact alive or being used. Typically, they have been set by some other instruction which just happens to set the flags register (much like ours!) with no actual dependency. In those cases, it is safe to directly insert these instructions. Alternatively, we may be able to move them earlier to avoid clobbering the used value.

However, this may ultimately be impossible. In that case, we need to preserve the flags around these instructions:

  ...

  .LBB0_4:                        # %danger
    cmovneq %r8, %rax             # Conditionally update predicate state.
    pushfq
    orq %rax, %rcx                # Mask the pointer if misspeculating.
    orq %rax, %rdx                # Mask the index if misspeculating.
    popfq
    movl (%rcx,%rdx), %edi

Using the pushf and popf instructions saves the flags register around our inserted code, but comes at a high cost. First, we must store the flags to the stack and reload them. Second, this causes the stack pointer to be adjusted dynamically, requiring a frame pointer be used for referring to temporaries spilled to the stack, etc.

On newer x86 processors we can use the lahf and sahf instructions to save all of the flags besides the overflow flag in a register rather than on the stack. We can then use seto and add to save and restore the overflow flag in a register. Combined, this will save and restore flags in the same manner as above but using two registers rather than the stack. That is still very expensive, if slightly less expensive than pushf and popf in most cases.
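
A rough sketch of such a sequence follows; the register choices and scheduling are purely illustrative, and the state is temporarily moved out of %rax because lahf and sahf are hardwired to %ah:

  .LBB0_4:                        # %danger
    cmovneq %r8, %rax             # Conditionally update predicate state.
    movq %rax, %rdx               # Stash the state so %ah can hold saved flags.
    seto %cl                      # Save the overflow flag into %cl.
    lahf                          # Save SF/ZF/AF/PF/CF into %ah.
    orq %rdx, %rsi                # Mask the pointer if misspeculating (clobbers flags).
    addb $127, %cl                # Recreate the overflow flag from %cl.
    sahf                          # Restore the remaining flags from %ah.
    movl (%rsi), %edi
    movq %rdx, %rax               # Put the predicate state back into %rax.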

A flag-less alternative on Haswell, Zen and newer processors

Starting with the BMI2 x86 instruction set extensions available on Haswell and Zen processors, there is an instruction for shifting that does not set any flags: shrx. We can use this and the lea instruction to implement code sequences analogous to the ones above. However, these are still marginally slower, as there are fewer ports able to dispatch shift instructions in most modern x86 processors than there are for or instructions.

Fast, single register addressing mode:

  ...

  .LBB0_4:                        # %danger
    cmovneq %r8, %rax             # Conditionally update predicate state.
    shrxq %rax, %rsi, %rsi        # Shift away bits if misspeculating.
    movl (%rsi), %edi

This will collapse the register to zero or one, leaving everything but the offset in the addressing mode less than or equal to 9. This means the full address can only be guaranteed to be less than (1 << 31) + 9. The OS may wish to protect an extra page of the low address space to account for this.
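
For the two register addressing mode, one flag-less possibility (a sketch only, not the implemented sequence) is to fold the base and index together with lea, which also leaves the flags untouched, and then apply a single shrx:

  .LBB0_4:                        # %danger
    cmovneq %r8, %rax             # Conditionally update predicate state.
    leaq (%rsi,%rcx), %rsi        # Combine base and index without touching flags.
    shrxq %rax, %rsi, %rsi        # Shift away bits if misspeculating.
    movl (%rsi), %edi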

Optimizations

A very large portion of the cost for this approach comes from checking loads in this way, so it is important to work to optimize this. However, beyond making the instruction sequences to apply the checks efficient (for example by avoiding pushfq and popfq sequences), the only significant optimization is to check fewer loads without introducing a vulnerability. We apply several techniques to accomplish that.

Don’t check loads from compile-time constant stack offsets

We implement this optimization on x86 by skipping the checking of loads which use a fixed frame pointer offset.

The result of this optimization is that patterns like reloading a spilled register or accessing a global field don't get checked. This is a very significant performance win.
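
Concretely, under this optimization a load such as the hypothetical spill reload below is left unchecked, while loads through attacker-influenced pointers are still hardened:

    movl -16(%rbp), %edx          # Reload of a spill slot at a fixed frame
                                  # offset: left unchecked.
    orq %rax, %rcx                # An attacker-influenced pointer is still
                                  # masked using the predicate state...
    movl (%rcx), %edi             # ...before it is loaded through.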

Don’t check dependent loads

A core part of why this mitigation strategy works is that it establishes a data-flow check on the loaded address. However, this means that if the address itself was already loaded using a checked load, there is no need to check a dependent load provided it is within the same basic block as the checked load, and therefore has no additional predicates guarding it. Consider code like the following:

  ...

  .LBB0_4:                        # %danger
    movq (%rcx), %rdi
    movl (%rdi), %edx

This will get transformed into:

  ...

  .LBB0_4:                        # %danger
    cmovneq %r8, %rax             # Conditionally update predicate state.
    orq %rax, %rcx                # Mask the pointer if misspeculating.
    movq (%rcx), %rdi             # Hardened load.
    movl (%rdi), %edx             # Unhardened load due to dependent addr.

This doesn’t check the load through %rdi as that pointer is dependent on achecked load already.

Protect large, load-heavy blocks with a single lfence

It may be worth using a single lfence instruction at the start of a block which begins with a (very) large number of loads that require independent protection and which require hardening the address of the load. However, this is unlikely to be profitable in practice. The latency hit of the hardening would need to exceed that of an lfence when correctly speculatively executed. But in that case, the lfence cost is a complete loss of speculative execution (at a minimum). So far, the evidence we have of the performance cost of using lfence indicates few if any hot code patterns where this trade off would make sense.

Tempting optimizations that break the security model

Several optimizations were considered which didn't pan out due to failure to uphold the security model. One in particular is worth discussing as many others will reduce to it.

We wondered whether only the first load in a basic block could be checked. If the check works as intended, it forms an invalid pointer that doesn't even virtual-address translate in the hardware. It should fault very early on in its processing. Maybe that would stop things in time for the misspeculated path to fail to leak any secrets. This doesn't end up working because the processor is fundamentally out-of-order, even in its speculative domain. As a consequence, the attacker could cause the initial address computation itself to stall and allow an arbitrary number of unrelated loads (including attacked loads of secret data) to pass through.

Interprocedural Checking

Modern x86 processors may speculate into called functions and out of functions to their return address. As a consequence, we need a way to check loads that occur after a misspeculated predicate but where the load and the misspeculated predicate are in different functions. In essence, we need some interprocedural generalization of the predicate state tracking. A primary challenge to passing the predicate state between functions is that we would like to not require a change to the ABI or calling convention in order to make this mitigation more deployable, and further would like code mitigated in this way to be easily mixed with code not mitigated in this way and without completely losing the value of the mitigation.

Embed the predicate state into the high bit(s) of the stack pointer

We can use the same technique that allows hardening pointers to pass the predicate state into and out of functions. The stack pointer is trivially passed between functions and we can test for it having the high bits set to detect when it has been marked due to misspeculation. The callsite instruction sequence looks like the following (assuming a misspeculated state value of -1):

  ...

  .LBB0_4:                        # %danger
    cmovneq %r8, %rax             # Conditionally update predicate state.
    shlq $47, %rax
    orq %rax, %rsp
    callq other_function
    movq %rsp, %rax
    sarq $63, %rax                # Sign extend the high bit to all bits.

This first puts the predicate state into the high bits of %rsp before calling the function and then reads it back out of the high bits of %rsp afterward. When correctly executing (speculatively or not), these are all no-ops. When misspeculating, the stack pointer will end up negative. We arrange for it to remain a canonical address, but otherwise leave the low bits alone to allow stack adjustments to proceed normally without disrupting this. Within the called function, we can extract this predicate state and then reset it on return:

  other_function:
    # prolog
    movq %rsp, %rax
    sarq $63, %rax                # Sign extend the high bit to all bits.
    # ...

  .LBB0_N:
    cmovneq %r8, %rax             # Conditionally update predicate state.
    shlq $47, %rax
    orq %rax, %rsp
    retq

This approach is effective when all code is mitigated in this fashion, and can even survive very limited reaches into unmitigated code (the state will round-trip in and back out of an unmitigated function, it just won't be updated). But it does have some limitations. There is a cost to merging the state into %rsp and it doesn't insulate mitigated code from misspeculation in an unmitigated caller.

There is also an advantage to using this form of interprocedural mitigation: by forming these invalid stack pointer addresses we can prevent speculative returns from successfully reading speculatively written values on the actual stack. This works first by forming a data dependency between computing the address of the return address on the stack and our predicate state. And even when that dependency is satisfied, if a misprediction causes the state to be poisoned, the resulting stack pointer will be invalid.

Rewrite API of internal functions to directly propagate predicate state

(Not yet implemented.)

We have the option with internal functions to directly adjust their API to accept the predicate as an argument and return it. This is likely to be marginally cheaper than embedding it into %rsp for entering functions.

Use lfence to guard function transitions

An lfence instruction can be used to prevent subsequent loads from speculatively executing until all prior mispredicted predicates have resolved. We can use this broader barrier to block speculative loads from executing between functions. We emit it in the entry block to handle calls, and prior to each return. This approach also has the advantage of providing the strongest degree of mitigation when mixed with unmitigated code by halting all misspeculation entering a function which is mitigated, regardless of what occurred in the caller. However, such a mixture is inherently more risky. Whether this kind of mixture is a sufficient mitigation requires careful analysis.
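
Schematically (the function name is illustrative), a function mitigated this way would look like:

  other_function:
    lfence                        # Entry block: halt misspeculation entering
                                  # from any caller.
    # ... function body ...
    lfence                        # Prior to each return: halt misspeculation
                                  # before control flows back out.
    retq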

Unfortunately, experimental results indicate that the performance overhead of this approach is very high for certain patterns of code. A classic example is any form of recursive evaluation engine. The hot, rapid call and return sequences exhibit dramatic performance loss when mitigated with lfence. This component alone can regress performance by 2x or more, making it an unpleasant tradeoff even when only used in a mixture of code.

Use an internal TLS location to pass predicate state

We can define a special thread-local value to hold the predicate state between functions. This avoids direct ABI implications by using a side channel between callers and callees to communicate the predicate state. It also allows implicit zero-initialization of the state, which allows non-checked code to be the first code executed.

However, this requires a load from TLS in the entry block, a store to TLS before every call and every ret, and a load from TLS after every call. As a consequence, it is expected to be substantially more expensive even than using %rsp and potentially lfence within the function entry block.
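
A sketch of the resulting shape, assuming a hypothetical local-exec TLS variable __slh_predicate_state (the name and TLS model are illustrative, not part of any proposal):

  other_function:
    movq %fs:__slh_predicate_state@tpoff, %rax   # Load state in the entry block.
    # ...
    movq %rax, %fs:__slh_predicate_state@tpoff   # Store state before a call.
    callq some_callee
    movq %fs:__slh_predicate_state@tpoff, %rax   # Reload state after the call.
    # ...
    movq %rax, %fs:__slh_predicate_state@tpoff   # Store state before returning.
    retq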

Define a new ABI and/or calling convention

We could define a new ABI and/or calling convention to explicitly pass the predicate state in and out of functions. This may be interesting if none of the alternatives have adequate performance, but it makes deployment and adoption dramatically more complex, and potentially infeasible.

High-Level Alternative Mitigation Strategies

There are completely different alternative approaches to mitigating variant 1 attacks. Most discussion so far focuses on mitigating specific known attackable components in the Linux kernel (or other kernels) by manually rewriting the code to contain an instruction sequence that is not vulnerable. For x86 systems this is done either by injecting an lfence instruction along the code path which would leak data if executed speculatively, or by rewriting memory accesses to have branch-less masking to a known safe region. On Intel systems, lfence will prevent the speculative load of secret data. On AMD systems, lfence is currently a no-op, but can be made dispatch-serializing by setting an MSR, and thus preclude misspeculation of the code path (mitigation G-2 + V1-1).

However, this relies on finding and enumerating all possible points in code which could be attacked to leak information. While in some cases static analysis is effective at doing this at scale, in many cases it still relies on human judgement to evaluate whether code might be vulnerable. Especially for software systems which receive less detailed scrutiny but remain sensitive to these attacks, this seems like an impractical security model. We need an automatic and systematic mitigation strategy.

Automatic lfence on Conditional Edges

A natural way to scale up the existing hand-coded mitigations is simply to inject an lfence instruction into both the target and fallthrough destinations of every conditional branch. This ensures that no predicate or bounds check can be bypassed speculatively. However, the performance overhead of this approach is, simply put, catastrophic. Yet it remains the only truly "secure by default" approach known prior to this effort and serves as the baseline for performance.

One attempt to address the performance overhead of this and make it more realistic to deploy is MSVC's /Qspectre switch. Their technique is to use static analysis within the compiler to only insert lfence instructions into conditional edges at risk of attack. However, initial analysis has shown that this approach is incomplete and only catches a small and limited subset of attackable patterns which happen to resemble very closely the initial proofs of concept. As such, while its performance is acceptable, it does not appear to be an adequate systematic mitigation.

Performance Overhead

The performance overhead of this style of comprehensive mitigation is very high. However, it compares very favorably with previously recommended approaches such as the lfence instruction. Just as users can restrict the scope of lfence to control its performance impact, this mitigation technique could be restricted in scope as well.

However, it is important to understand what it would cost to get a fully mitigated baseline. Here we assume targeting a Haswell (or newer) processor and using all of the tricks to improve performance (so this leaves the low 2gb unprotected and +/- 2gb surrounding any PC in the program). We ran both Google's microbenchmark suite and a large highly-tuned server built using ThinLTO and PGO. All were built with -march=haswell to give access to BMI2 instructions, and benchmarks were run on large Haswell servers. We collected data both with an lfence-based mitigation and with load hardening as presented here. The summary is that mitigating with load hardening is 1.77x faster than mitigating with lfence, and the overhead of load hardening compared to a normal program is likely between a 10% overhead and a 50% overhead, with most large applications seeing a 30% overhead or less.

| Benchmark                               | lfence | Load Hardening | Mitigated Speedup |
| --------------------------------------- | -----: | -------------: | ----------------: |
| Google microbenchmark suite             | -74.8% |         -36.4% |              2.5x |
| Large server QPS (using ThinLTO & PGO)  |   -62% |           -29% |              1.8x |

Below is a visualization of the microbenchmark suite results which helps show the distribution of results that is somewhat lost in the summary. The y-axis is a log-scale speedup ratio of load hardening relative to lfence (up -> faster -> better). Each box-and-whiskers represents one microbenchmark which may have many different metrics measured. The red line marks the median, the box marks the first and third quartiles, and the whiskers mark the min and max.

[Figure: Microbenchmark result visualization]

We don’t yet have benchmark data on SPEC or the LLVM test suite, but we canwork on getting that. Still, the above should give a pretty clearcharacterization of the performance, and specific benchmarks are unlikely toreveal especially interesting properties.

Future Work: Fine Grained Control and API-Integration

The performance overhead of this technique is likely to be very significant and something users wish to control or reduce. There are interesting options here that impact the implementation strategy used.

One particularly appealing option is to allow both opt-in and opt-out of this mitigation at reasonably fine granularity, such as on a per-function basis, including intelligent handling of inlining decisions: protected code can be prevented from inlining into unprotected code, and unprotected code will become protected when inlined into protected code. For systems where only a limited set of code is reachable by externally controlled inputs, it may be possible to limit the scope of mitigation through such mechanisms without compromising the application's overall security. The performance impact may also be focused in a few key functions that can be hand-mitigated in ways that have lower performance overhead while the remainder of the application receives automatic protection.

For both limiting the scope of mitigation and manually mitigating hot functions, there needs to be some support for mixing mitigated and unmitigated code without completely defeating the mitigation. For the first use case, it would be particularly desirable that mitigated code remains safe when being called during misspeculation from unmitigated code.

For the second use case, it may be important to connect the automatic mitigation technique to explicit mitigation APIs such as what is described in http://wg21.link/p0928 (or any other eventual API) so that there is a clean way to switch from automatic to manual mitigation without immediately exposing a hole. However, the design for how to do this is hard to come up with until the APIs are better established. We will revisit this as those APIs mature.