Commit 77a1e240 authored by Waiman Long

x86/mce: Reduce number of machine checks taken during recovery

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2090231



commit 33761363
Author: Youquan Song <youquan.song@intel.com>
Date:   Thu, 23 Dec 2021 12:07:01 -0800

    x86/mce: Reduce number of machine checks taken during recovery

    When any of the copy functions in arch/x86/lib/copy_user_64.S take a
    fault, the fixup code copies the remaining byte count from %ecx to %edx
    and unconditionally jumps to .Lcopy_user_handle_tail to continue the
    copy in case any more bytes can be copied.
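
    For reference, the fixup stubs in question look roughly like the sketch
    below (modeled on the copy_user_enhanced_fast_string fixup of that era;
    it is illustrative, not a verbatim quote of the file):

        .section .fixup,"ax"
    12: movl %ecx,%edx                  /* %ecx holds the remaining byte count */
        jmp .Lcopy_user_handle_tail     /* retry the tail byte by byte */
        .previous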

    If the fault was #PF this may copy more bytes (because the page fault
    handler might have fixed the fault). But when the fault is a machine
    check the original copy code will have copied all the way to the poisoned
    cache line. So .Lcopy_user_handle_tail will just take another machine
    check for no good reason.

    Every code path to .Lcopy_user_handle_tail comes from an exception fixup
    path, so add a check there for the trap type (in %eax) and simply
    return the count of remaining bytes if the trap was a machine check.
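
    Concretely, the added check amounts to something like the sketch below
    (simplified, not the exact hunk; X86_TRAP_MC is the machine check trap
    number, which the exception fixup machinery places in %eax before
    jumping here):

    SYM_CODE_START_LOCAL(.Lcopy_user_handle_tail)
        cmp $X86_TRAP_MC,%eax           /* was the trap a machine check? */
        je 3f                           /* yes: don't touch the poison again */
        movl %edx,%ecx
    1:  rep movsb                       /* retry the copy byte by byte */
    2:  mov %ecx,%eax                   /* return bytes still uncopied */
        ASM_CLAC
        RET
    3:  movl %edx,%eax                  /* machine check: report the whole
                                           remainder as uncopied */
        ASM_CLAC
        RET
        _ASM_EXTABLE_CPY(1b, 2b)
    SYM_CODE_END(.Lcopy_user_handle_tail)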

    Doing this reduces the number of machine checks taken during synthetic
    tests from four to three.

    As well as reducing the number of machine checks, this also allows
    Skylake generation Xeons to recover some cases that currently fail. That
    is because REP; MOVSB is only recoverable when source and destination
    are well aligned and the byte count is large. That useless call to
    .Lcopy_user_handle_tail may violate one or more of these conditions and
    generate a fatal machine check.

      [ Tony: Add more details to commit message. ]
      [ bp: Fixup comment.
        Also, another tip patchset which is adding straight-line speculation
        mitigation changes the "ret" instruction to an all-caps macro "RET".
        But, since gas is case-insensitive, use "RET" in the newly added asm block
        already, in order to simplify tip branch merging on its way upstream.
      ]

    Signed-off-by: Youquan Song <youquan.song@intel.com>
    Signed-off-by: Tony Luck <tony.luck@intel.com>
    Signed-off-by: Borislav Petkov <bp@suse.de>
    Link: https://lore.kernel.org/r/YcTW5dh8yTGucDd+@agluck-desk2.amr.corp.intel.com



Signed-off-by: Waiman Long <longman@redhat.com>
parent 8764e53a