Interrupts and Interrupt Handling. Part 6.

Non-maskable interrupt handler

This is the sixth part of the Interrupts and Interrupt Handling in the Linux kernel chapter. In the previous part we saw the implementation of some exception handlers: the General Protection Fault exception, the divide exception, the invalid opcode exception and so on. As I wrote in the previous part, we will look at the implementations of the remaining exceptions here. We will see the implementation of the following handlers:

  • Non-Maskable interrupt;
  • BOUND Range Exceeded Exception;
  • Coprocessor exception;
  • SIMD coprocessor exception.

So, let's start.

Non-Maskable interrupt handling

A Non-Maskable interrupt is a hardware interrupt that cannot be ignored by standard masking techniques. Generally, a non-maskable interrupt can be generated in one of two ways:

  • External hardware asserts the non-maskable interrupt pin on the CPU.
  • The processor receives a message on the system bus or the APIC serial bus with a delivery mode NMI.
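
The kernel itself uses the second way when one processor needs to interrupt another one unconditionally, for example to collect backtraces from all CPUs. A minimal sketch of this, assuming the local APIC driver's `send_IPI_mask` callback and the `NMI_VECTOR` constant (vector number 2) from arch/x86/include/asm/irq_vectors.h; this is the pattern that arch/x86/kernel/apic/hw_nmi.c uses:

```C
#include <linux/cpumask.h>      /* cpumask_of                      */
#include <asm/apic.h>           /* apic, the local APIC driver     */
#include <asm/irq_vectors.h>    /* NMI_VECTOR                      */

/* send a non-maskable interrupt to a single CPU over the local APIC */
static void send_nmi_to_cpu(int cpu)
{
	apic->send_IPI_mask(cpumask_of(cpu), NMI_VECTOR);
}
```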

When the processor receives an NMI from one of these sources, it handles it immediately by calling the NMI handler pointed to by interrupt vector number 2 (see the table in the first part). We already filled the Interrupt Descriptor Table entry with this vector number, the address of the nmi interrupt handler, and the NMI_STACK Interrupt Stack Table entry:

```C
set_intr_gate_ist(X86_TRAP_NMI, &nmi, NMI_STACK);
```

in the trap_init function, which is defined in the arch/x86/kernel/traps.c source code file. In the previous parts we saw that the entry points of all interrupt handlers are defined with the:

```assembly
.macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1
ENTRY(\sym)
...
...
...
END(\sym)
.endm
```

macro from the arch/x86/entry/entry_64.S assembly source code file. But the handler of the Non-Maskable interrupt is not defined with this macro; it has its own entry point:

```assembly
ENTRY(nmi)
...
...
...
END(nmi)
```

in the same arch/x86/entry/entry_64.S assembly file. Let's dive into it and try to understand how the Non-Maskable interrupt handler works. The nmi handler starts with a call of the:

```assembly
PARAVIRT_ADJUST_EXCEPTION_FRAME
```

macro, but we will not dive into its details in this part, because this macro is related to paravirtualization, which we will see in another chapter. After this we save the content of the rdx register on the stack:

```assembly
pushq %rdx
```

And check whether cs was the kernel segment when the non-maskable interrupt occurred:

```assembly
cmpl $__KERNEL_CS, 16(%rsp)
jne first_nmi
```

The __KERNEL_CS macro is defined in arch/x86/include/asm/segment.h and represents the second descriptor in the Global Descriptor Table:

```C
#define GDT_ENTRY_KERNEL_CS 2
#define __KERNEL_CS (GDT_ENTRY_KERNEL_CS*8)
```
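
Why the offset 16? At this point the CPU has already pushed SS, RSP, RFLAGS, CS and RIP onto the stack, and we have pushed rdx on top of them, so the saved CS ends up at 16(%rsp):

```
+------------------------+
|          SS            |
|          RSP           |
|         RFLAGS         |
|          CS            |  <-- 16(%rsp)
|          RIP           |  <-- 8(%rsp)
|          RDX           |  <-- 0(%rsp)
+------------------------+
```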

You can read more about the GDT in the second part of the Linux kernel booting process chapter. If cs is not the kernel segment, it means that this is not a nested NMI, and we jump to the first_nmi label. Let's consider this case. First of all, at the first_nmi label we restore the original value of rdx from the top of the stack (where we saved it just above) and push 1 onto the stack:

```assembly
first_nmi:
	movq (%rsp), %rdx
	pushq $1
```

Why do we push 1 on the stack? As the comment says: We allow breakpoints in NMIs. On x86_64, as on other architectures, the CPU will not execute another NMI until the first NMI is complete. An NMI finishes with the iret instruction, like other interrupts and exceptions do. The problem is that the NMI handler may itself trigger a page fault, a breakpoint, or another exception that uses the iret instruction too. If this happens while in NMI context, the CPU leaves NMI context on that iret and a new NMI may come in: the iret used to return from such an exception re-enables NMIs, so the running NMI handler does not return to the state it was in when the exception triggered, but to a state that allows new NMIs to preempt it. If another NMI comes in before the first NMI handler is complete, the new NMI will write all over the preempted NMI's stack: with nested NMIs, the next NMI uses the top of the stack of the previous NMI and corrupts it. That's why we allocate space on the stack for a temporary variable. This variable is set while an NMI is executing and cleared otherwise, so a later NMI can check it to detect nesting. Here we push 1 into this slot to denote that a non-maskable interrupt is currently executing. Remember that when an NMI or another exception occurs, the hardware pushes the following stack frame:

```
+------------------------+
|          SS            |
|          RSP           |
|         RFLAGS         |
|          CS            |
|          RIP           |
+------------------------+
```

and also an error code, if the exception provides one. So, after all of these manipulations, our stack frame will look like this:

```
+------------------------+
|          SS            |
|          RSP           |
|         RFLAGS         |
|          CS            |
|          RIP           |
|          RDX           |
|           1            |
+------------------------+
```
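
Before moving on, here is the protocol described above condensed into C-like pseudocode. This is purely conceptual, not kernel code (the real logic is the assembly we are walking through), and all names here are made up:

```C
/* conceptual sketch of the "NMI executing" variable protocol (all names made up) */
void nmi_entry(void)
{
	if (came_from_kernel_mode() && nmi_executing == 1) {
		/* we interrupted a running NMI handler: this is a nested NMI */
		make_first_nmi_repeat_itself();	/* via the copied stack frame, see below */
		return;
	}
	nmi_executing = 1;	/* the "pushq $1" above                         */
	run_nmi_handler();	/* do_nmi and friends                           */
	nmi_executing = 0;	/* cleared on the exit path, later in this part */
}
```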

In the next step we allocate yet another 40 bytes on the stack:

```assembly
subq $(5*8), %rsp
```

and push a copy of the original stack frame below the allocated space:

```assembly
.rept 5
pushq 11*8(%rsp)
.endr
```

with the .rept assembly directive. Why do we need this copy of the original stack frame? Generally, we need two copies of the interrupt stack frame: a saved stack frame and a copied stack frame. Here we push the original stack frame into the saved stack frame, which is located below the just allocated 40 bytes (the copied stack frame area). The saved frame is used to fix up the copied stack frame, which a nested NMI may change. The copied stack frame is the one a nested NMI modifies in order to let the first NMI know that a second NMI was triggered and that the first NMI handler must repeat itself. Ok, we have made the first copy of the original stack frame; now it is time to make the second one:

```assembly
addq $(10*8), %rsp
.rept 5
pushq -6*8(%rsp)
.endr
subq $(5*8), %rsp
```

After all of these manipulations our stack frame will be like this:

```
+-------------------------+
| original SS             |
| original Return RSP     |
| original RFLAGS         |
| original CS             |
| original RIP            |
+-------------------------+
| temp storage for rdx    |
+-------------------------+
| NMI executing variable  |
+-------------------------+
| copied SS               |
| copied Return RSP       |
| copied RFLAGS           |
| copied CS               |
| copied RIP              |
+-------------------------+
| Saved SS                |
| Saved Return RSP        |
| Saved RFLAGS            |
| Saved CS                |
| Saved RIP               |
+-------------------------+
```

After this we push a dummy error code on the stack, as we already did in the previous exception handlers, and allocate space for the general purpose registers:

```assembly
pushq $-1
ALLOC_PT_GPREGS_ON_STACK
```

We already saw the implementation of the ALLOC_PT_GPREGS_ON_STACK macro in the third part of the interrupts chapter. This macro is defined in arch/x86/entry/calling.h and allocates another 120 bytes on the stack for the general purpose registers, from rdi to r15:

```assembly
.macro ALLOC_PT_GPREGS_ON_STACK addskip=0
	addq $-(15*8+\addskip), %rsp
.endm
```

After allocating space for the general purpose registers, we see a call of paranoid_entry:

```assembly
call paranoid_entry
```

We may remember this label from the previous parts. It saves the general purpose registers on the stack, reads the MSR_GS_BASE Model Specific Register and checks its value. If the value of MSR_GS_BASE is negative, we came from kernel mode and just return from paranoid_entry; otherwise it means that we came from user mode and need to execute the swapgs instruction, which exchanges the user gs with the kernel gs:

```assembly
ENTRY(paranoid_entry)
	cld
	SAVE_C_REGS 8
	SAVE_EXTRA_REGS 8
	movl $1, %ebx
	movl $MSR_GS_BASE, %ecx
	rdmsr
	testl %edx, %edx
	js 1f
	SWAPGS
	xorl %ebx, %ebx
1:	ret
END(paranoid_entry)
```

Note that after the swapgs instruction we zero the ebx register. Later we will check the content of this register: if we executed swapgs, then ebx contains 0, and 1 otherwise. In the next step we store the value of the cr2 control register in the r12 register, because the NMI handler can cause a page fault and overwrite the value of this control register:

```assembly
movq %cr2, %r12
```

Now it is time to call the actual NMI handler. We put the address of the pt_regs structure into rdi, the error code into rsi, and call the do_nmi handler:

```assembly
movq %rsp, %rdi
movq $-1, %rsi
call do_nmi
```

We will get back to do_nmi a little later in this part, but for now let's look at what happens after do_nmi finishes its execution. After the do_nmi handler returns, we check the cr2 register, because we could have gotten a page fault while do_nmi was running; if we did, we restore the original cr2, otherwise we jump to the label 1. After this we test the content of the ebx register (remember: it must contain 0 if we executed the swapgs instruction on entry and 1 if we didn't) and execute SWAPGS_UNSAFE_STACK if it contains 0, or jump to the nmi_restore label otherwise. The SWAPGS_UNSAFE_STACK macro just expands to the swapgs instruction. At the nmi_restore label we restore the general purpose registers, release the space allocated on the stack for these registers, clear our temporary variable, and exit from the interrupt handler with the INTERRUPT_RETURN macro:

```assembly
	movq %cr2, %rcx
	cmpq %rcx, %r12
	je 1f
	movq %r12, %cr2
1:
	testl %ebx, %ebx
	jnz nmi_restore
nmi_swapgs:
	SWAPGS_UNSAFE_STACK
nmi_restore:
	RESTORE_EXTRA_REGS
	RESTORE_C_REGS
	/* Pop the extra iret frame at once */
	REMOVE_PT_GPREGS_FROM_STACK 6*8
	/* Clear the NMI executing stack variable */
	movq $0, 5*8(%rsp)
	INTERRUPT_RETURN
```

where INTERRUPT_RETURN is defined in arch/x86/include/asm/irqflags.h and just expands to the iret instruction.
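
With paravirtualization disabled, the definition is roughly the following (a sketch; native_iret performs the actual iretq):

```C
/* arch/x86/include/asm/irqflags.h, !CONFIG_PARAVIRT case (sketch) */
#define INTERRUPT_RETURN	jmp	native_iret
```

That's all for the normal, non-nested path.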

Now let's consider the case when another NMI occurs while a previous NMI is still executing. You may remember from the beginning of this part the check for whether we came from userspace, in which case we jump to the first_nmi label:

```assembly
cmpl $__KERNEL_CS, 16(%rsp)
jne first_nmi
```

Note that in this case it is always the first NMI, because if the first NMI caught a page fault, breakpoint, or another exception, that exception executes in kernel mode. If we didn't come from userspace, first of all we test our temporary variable:

```assembly
cmpl $1, -8(%rsp)
je nested_nmi
```

and if it is set to 1, we jump to the nested_nmi label. If it is not 1, we test the IST stack. In the case of nested NMIs, we check whether we are above the repeat_nmi label; if we are, we ignore the NMI; otherwise we check whether we are above end_repeat_nmi and jump to the nested_nmi_out label.

Now let's look at the do_nmi exception handler. This function is defined in the arch/x86/kernel/nmi.c source code file and, like all exception handlers, takes two parameters:

  • the address of the pt_regs structure;
  • an error code.

The do_nmi handler starts with a call of the nmi_nesting_preprocess function and ends with a call of nmi_nesting_postprocess. The nmi_nesting_preprocess function checks whether we are running on the debug stack; if we are, it sets the update_debug_stack per-cpu variable to 1 and calls the debug_stack_set_zero function from arch/x86/kernel/cpu/common.c. This function increases the debug_stack_use_ctr per-cpu variable and loads a new Interrupt Descriptor Table:

```C
static inline void nmi_nesting_preprocess(struct pt_regs *regs)
{
	if (unlikely(is_debug_stack(regs->sp))) {
		debug_stack_set_zero();
		this_cpu_write(update_debug_stack, 1);
	}
}
```
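
The matching nmi_nesting_postprocess function checks the update_debug_stack per-cpu variable that we set in nmi_nesting_preprocess and, if it is set, resets the debug stack, or in other words loads the original Interrupt Descriptor Table. In the kernel version this chapter follows, it looks roughly like this (a sketch from the same arch/x86/kernel/nmi.c file; debug_stack_reset lives in arch/x86/kernel/cpu/common.c):

```C
static inline void nmi_nesting_postprocess(void)
{
	if (unlikely(this_cpu_read(update_debug_stack))) {
		debug_stack_reset();
		this_cpu_write(update_debug_stack, 0);
	}
}
```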

After the call of the nmi_nesting_preprocess function, we see the call of nmi_enter in do_nmi. nmi_enter increases the lockdep_recursion field of the interrupted process, updates the preempt counter and informs the RCU subsystem about the new non-maskable interrupt; there is also an nmi_exit function that does the same as nmi_enter, but in reverse. After nmi_enter we increase __nmi_count in the irq_stat structure and call the default_do_nmi function. First of all, in default_do_nmi we check the address of the previous NMI and update the address of the last NMI to the current one:

```C
if (regs->ip == __this_cpu_read(last_nmi_rip))
	b2b = true;
else
	__this_cpu_write(swallow_nmi, false);

__this_cpu_write(last_nmi_rip, regs->ip);
```

After this, first of all, we handle the CPU-specific NMIs:

```C
handled = nmi_handle(NMI_LOCAL, regs, b2b);
__this_cpu_add(nmi_stats.normal, handled);
```
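
The handlers that nmi_handle walks through for the NMI_LOCAL type are registered with the register_nmi_handler macro from arch/x86/include/asm/nmi.h (the perf subsystem is a real user of this API). A minimal sketch of a registration; the handler name and the this_nmi_is_ours check are hypothetical:

```C
#include <linux/init.h>
#include <asm/nmi.h>	/* register_nmi_handler, NMI_HANDLED, NMI_DONE */

/* hypothetical handler: called from nmi_handle(NMI_LOCAL, ...) above */
static int my_nmi_handler(unsigned int cmd, struct pt_regs *regs)
{
	if (!this_nmi_is_ours())	/* hypothetical check                */
		return NMI_DONE;	/* not ours: let other handlers try  */
	/* ... handle the event ... */
	return NMI_HANDLED;		/* counted into nmi_stats.normal     */
}

static int __init my_nmi_init(void)
{
	return register_nmi_handler(NMI_LOCAL, my_nmi_handler, 0, "my_nmi");
}
```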

After the CPU-specific handlers, the non-specific NMIs are handled depending on their reason:

```C
reason = x86_platform.get_nmi_reason();

if (reason & NMI_REASON_MASK) {
	if (reason & NMI_REASON_SERR)
		pci_serr_error(reason, regs);
	else if (reason & NMI_REASON_IOCHK)
		io_check_error(reason, regs);

	__this_cpu_add(nmi_stats.external, 1);
	return;
}
```

That’s all.

Range Exceeded Exception

The next exception is the BOUND range exceeded exception. The BOUND instruction determines whether the first operand (an array index) is within the bounds of an array specified by the second operand (the bounds operand). If the index is not within bounds, a BOUND range exceeded exception, or #BR, occurs. (A conceptual sketch of this check appears right after the handler skeleton below.) The handler of the #BR exception is the do_bounds function, defined in arch/x86/kernel/traps.c. The do_bounds handler starts with a call of the exception_enter function and ends with a call of exception_exit:

```C
prev_state = exception_enter();
if (notify_die(DIE_TRAP, "bounds", regs, error_code,
		X86_TRAP_BR, SIGSEGV) == NOTIFY_STOP)
	goto exit;
...
...
...
exception_exit(prev_state);
return;
```
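
For reference, here is a conceptual C sketch of the check that the BOUND instruction itself performs. This is not kernel code, and raise_BR is a hypothetical placeholder standing in for the CPU raising vector 5:

```C
#include <stdint.h>

/* the 32-bit BOUND instruction takes a pair of signed bounds in memory */
struct bounds {
	int32_t lower;
	int32_t upper;
};

/* conceptual semantics of BOUND (sketch, not kernel code) */
static void bound(int32_t index, const struct bounds *b)
{
	if (index < b->lower || index > b->upper)
		raise_BR();	/* hypothetical: CPU raises #BR, do_bounds runs */
}
```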

Returning to do_bounds: after we have gotten the state of the previous context, we add the exception to the notify_die chain and, if it returns NOTIFY_STOP, we return from the exception handler. You can read more about notify chains and the context tracking functions in the previous part. In the next step we enable interrupts, if they were disabled, with the conditional_sti function, which checks the IF flag and calls local_irq_enable depending on its value:

```C
conditional_sti(regs);
if (!user_mode(regs))
	die("bounds", regs, error_code);
```
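
The conditional_sti function is small; in the kernel version this chapter follows, it looks like this (from the same arch/x86/kernel/traps.c file):

```C
static inline void conditional_sti(struct pt_regs *regs)
{
	/* enable interrupts only if they were enabled before the exception */
	if (regs->flags & X86_EFLAGS_IF)
		local_irq_enable();
}
```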

If we did not come from user mode, we call the die function. After this we check whether MPX is enabled or not; if this feature is disabled, we jump to the exit_trap label:

```C
if (!cpu_feature_enabled(X86_FEATURE_MPX)) {
	goto exit_trap;
}
```

where we execute the do_trap function (you can find more about it in the previous part):

```C
exit_trap:
	do_trap(X86_TRAP_BR, SIGSEGV, "bounds", regs, error_code, NULL);
	exception_exit(prev_state);
```

If the MPX feature is enabled, we get a pointer to the BNDCSR state (which contains BNDSTATUS) with the get_xsave_field_ptr function; if it is NULL, it means that MPX was not responsible for this exception:

```C
bndcsr = get_xsave_field_ptr(XSTATE_BNDCSR);
if (!bndcsr)
	goto exit_trap;
```

After all of these checks, only the case remains where MPX is responsible for this exception. We will not dive into the details of the Intel Memory Protection Extensions in this part, but will see them in another chapter.

Coprocessor exception and SIMD exception

The next two exceptions are the x87 FPU Floating-Point Error exception, or #MF, and the SIMD Floating-Point Exception, or #XF. The first occurs when the x87 FPU has detected a floating point error, for example divide by zero, numeric overflow and so on. The second occurs when the processor has detected an SSE/SSE2/SSE3 SIMD floating-point exception; the causes can be the same as for the x87 FPU. The handlers for these exceptions, do_coprocessor_error and do_simd_coprocessor_error, are defined in arch/x86/kernel/traps.c and are very similar to each other. They both call the math_error function from the same source code file, but pass different vector numbers. do_coprocessor_error passes the X86_TRAP_MF vector number to math_error:

```C
dotraplinkage void do_coprocessor_error(struct pt_regs *regs, long error_code)
{
	enum ctx_state prev_state;

	prev_state = exception_enter();
	math_error(regs, error_code, X86_TRAP_MF);
	exception_exit(prev_state);
}
```

and do_simd_coprocessor_error passes X86_TRAP_XF to the math_error function:

```C
dotraplinkage void
do_simd_coprocessor_error(struct pt_regs *regs, long error_code)
{
	enum ctx_state prev_state;

	prev_state = exception_enter();
	math_error(regs, error_code, X86_TRAP_XF);
	exception_exit(prev_state);
}
```

First of all, the math_error function gets the current interrupted task, the address of its fpu structure, and a string that describes the exception. It adds the exception to the notify_die chain and returns from the exception handler if the chain returns NOTIFY_STOP:

```C
struct task_struct *task = current;
struct fpu *fpu = &task->thread.fpu;
siginfo_t info;
char *str = (trapnr == X86_TRAP_MF) ? "fpu exception" :
					"simd exception";

if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, SIGFPE) == NOTIFY_STOP)
	return;
```

After this we check whether we came from kernel mode, and if so we try to fix the exception with the fixup_exception function. If we cannot, we fill the task with the exception's error code and vector number and die:

```C
if (!user_mode(regs)) {
	if (!fixup_exception(regs)) {
		task->thread.error_code = error_code;
		task->thread.trap_nr = trapnr;
		die(str, regs, error_code);
	}
	return;
}
```
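
The fixup_exception function searches the kernel exception table for the faulting instruction pointer and, if an entry exists, redirects execution to the registered fixup code. A simplified sketch, assuming the arch/x86/mm/extable.c implementation of that era (details such as the uaccess special case are omitted):

```C
/* simplified sketch of fixup_exception (arch/x86/mm/extable.c) */
int fixup_exception(struct pt_regs *regs)
{
	const struct exception_table_entry *fixup;

	/* look up the faulting instruction in the __ex_table section */
	fixup = search_exception_tables(regs->ip);
	if (fixup) {
		/* jump to the fixup code instead of dying */
		regs->ip = ex_fixup_addr(fixup);
		return 1;
	}

	return 0;
}
```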

If we came from user mode, we save the fpu state, fill the task structure with the vector number of the exception, and fill the siginfo_t structure with the signal number, errno, the address where the exception occurred, and the signal code:

```C
fpu__save(fpu);

task->thread.trap_nr = trapnr;
task->thread.error_code = error_code;
info.si_signo = SIGFPE;
info.si_errno = 0;
info.si_addr = (void __user *)uprobe_get_trap_addr(regs);
info.si_code = fpu__exception_code(fpu, trapnr);
```

After this we check the signal code, and if it is zero we return:

```C
if (!info.si_code)
	return;
```

Otherwise we send the SIGFPE signal at the end:

```C
force_sig_info(SIGFPE, &info, task);
```
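
To see the user-visible side of this path, here is a small userspace sketch (not kernel code) that unmasks the divide-by-zero floating-point exception and catches the resulting SIGFPE. Note that feenableexcept is a glibc extension (link with -lm), and calling printf from a signal handler is done here only for illustration:

```C
#define _GNU_SOURCE
#include <fenv.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static void fpe_handler(int sig, siginfo_t *info, void *ctx)
{
	/* info->si_code is what fpu__exception_code computed, e.g. FPE_FLTDIV */
	printf("got SIGFPE, si_code = %d\n", info->si_code);
	exit(0);
}

int main(void)
{
	struct sigaction sa = { .sa_sigaction = fpe_handler, .sa_flags = SA_SIGINFO };
	volatile double x = 1.0, y = 0.0;

	sigaction(SIGFPE, &sa, NULL);
	feenableexcept(FE_DIVBYZERO);	/* unmask the divide-by-zero exception */

	printf("%f\n", x / y);		/* raises the exception -> force_sig_info */
	return 0;
}
```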

That’s all.

Conclusion

This is the end of the sixth part of the Interrupts and Interrupt Handling chapter, and we saw the implementation of some exception handlers in this part: the non-maskable interrupt, and the SIMD and x87 FPU floating point exceptions. Finally, we have finished with the trap_init function in this part and will go ahead in the next part. Our next point is the external interrupts and the early_irq_init function from init/main.c.

If you have any questions or suggestions, write me a comment or ping me on Twitter.

Please note that English is not my first language, and I am really sorry for any inconvenience. If you find any mistakes, please send me a PR to linux-insides.

Links