  1. Jan 24, 2023
  2. Jan 18, 2023
    • Revert "usb: ulpi: defer ulpi_register on ulpi_read_id timeout" · 74985c57
      Ferry Toth authored
      commit b659b613 upstream.
      
      This reverts commit 8a7b31d5.
      
      This patch results in some qemu test failures, specifically xilinx-zynq-a9
      machine and zynq-zc702 as well as zynq-zed devicetree files, when trying
      to boot from USB drive.
      
      Link: https://lore.kernel.org/lkml/20221220194334.GA942039@roeck-us.net/
      
      
      Fixes: 8a7b31d5 ("usb: ulpi: defer ulpi_register on ulpi_read_id timeout")
      Cc: stable@vger.kernel.org
      Reported-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Ferry Toth <ftoth@exalondelft.nl>
      Link: https://lore.kernel.org/r/20221222205302.45761-1-ftoth@exalondelft.nl
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      74985c57
    • io_uring/io-wq: only free worker if it was allocated for creation · a88a0d16
      Jens Axboe authored
      
      commit e6db6f93 upstream.
      
      We have two types of task_work based creation, one is using an existing
      worker to setup a new one (eg when going to sleep and we have no free
      workers), and the other is allocating a new worker. Only the latter
      should be freed when we cancel task_work creation for a new worker.
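
      For illustration, a minimal sketch of the cancel path (names follow
      io_uring/io-wq.c as best recalled; treat this as a sketch of the idea,
      not the exact diff):

        /* Cancel any pending task_work-based worker creation requests. */
        while ((cb = task_work_cancel_match(wq->task, io_task_work_match, wq)) != NULL) {
                struct io_worker *worker = container_of(cb, struct io_worker, create_work);

                io_worker_cancel_cb(worker);
                /*
                 * Only the "allocate a brand-new worker" continuation owns this
                 * allocation; an existing worker re-arming creation must not be
                 * freed here.
                 */
                if (cb->func == create_worker_cont)
                        kfree(worker);
        }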
      
      Fixes: af82425c ("io_uring/io-wq: free worker if task_work creation is canceled")
      Reported-by: <syzbot+d56ec896af3637bdb7e4@syzkaller.appspotmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      a88a0d16
    • io_uring/io-wq: free worker if task_work creation is canceled · b912ed13
      Jens Axboe authored
      
      commit af82425c upstream.
      
      If we cancel the task_work, the worker will never come into existence.
      As this is the last reference to it, ensure that we get it freed
      appropriately.
      
      Cc: stable@vger.kernel.org
      Reported-by: 진호 <wnwlsgh98@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      b912ed13
    • drm/virtio: Fix GEM handle creation UAF · 68bcd063
      Rob Clark authored
      
      [ Upstream commit 52531258 ]
      
      Userspace can guess the handle value and try to race GEM object creation
      with handle close, resulting in a use-after-free if we dereference the
      object after dropping the handle's reference.  For that reason, dropping
      the handle's reference must be done *after* we are done dereferencing
      the object.
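
      For illustration, a minimal sketch of the safe ordering using the generic
      DRM helpers (the exact virtio-gpu call sites differ):

        /* Publishing the handle makes the object reachable by a racing
         * DRM_IOCTL_GEM_CLOSE, so keep our own reference until the last use. */
        ret = drm_gem_handle_create(file, obj, &handle);
        if (ret)
                return ret;

        /* ... dereference obj here (fill in the reply for userspace, etc.) ... */

        drm_gem_object_put(obj);   /* drop the reference only after the last use */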
      
      Signed-off-by: Rob Clark <robdclark@chromium.org>
      Reviewed-by: Chia-I Wu <olvaffe@gmail.com>
      Fixes: 62fb7a5e ("virtio-gpu: add 3d/virgl support")
      Cc: stable@vger.kernel.org
      Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20221216233355.542197-2-robdclark@gmail.com
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      68bcd063
    • efi: fix NULL-deref in init error path · 4ca71bc0
      Johan Hovold authored
      
      [ Upstream commit 703c13fe ]
      
      In cases where runtime services are not supported or have been disabled,
      the runtime services workqueue will never have been allocated.
      
      To avoid dereferencing a NULL pointer, do not try to destroy the
      workqueue unconditionally in the unlikely event that EFI initialisation
      fails.
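
      A minimal sketch of the guarded cleanup (the error-path label and flow
      are illustrative; efi_rts_wq is the runtime-services workqueue):

        err_destroy_wq:
                /* only created when runtime services are supported and enabled */
                if (efi_rts_wq)
                        destroy_workqueue(efi_rts_wq);
                return error;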
      
      Fixes: 98086df8 ("efi: add missed destroy_workqueue when efisubsys_init fails")
      Cc: stable@vger.kernel.org
      Cc: Li Heng <liheng40@huawei.com>
      Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      4ca71bc0
    • arm64: cmpxchg_double*: hazard against entire exchange variable · 057f5ddf
      Mark Rutland authored
      
      [ Upstream commit 031af500 ]
      
      The inline assembly for arm64's cmpxchg_double*() implementations uses a
      +Q constraint to hazard against other accesses to the memory location
      being exchanged. However, the pointer passed to the constraint is a
      pointer to unsigned long, and thus the hazard only applies to the first
      8 bytes of the location.
      
      GCC can take advantage of this, assuming that other portions of the
      location are unchanged, leading to a number of potential problems.
      
      This is similar to what we fixed back in commit:
      
        fee960be ("arm64: xchg: hazard against entire exchange variable")
      
      ... but we forgot to adjust cmpxchg_double*() similarly at the same
      time.
      
      The same problem applies, as demonstrated with the following test:
      
      | struct big {
      |         u64 lo, hi;
      | } __aligned(128);
      |
      | unsigned long foo(struct big *b)
      | {
      |         u64 hi_old, hi_new;
      |
      |         hi_old = b->hi;
      |         cmpxchg_double_local(&b->lo, &b->hi, 0x12, 0x34, 0x56, 0x78);
      |         hi_new = b->hi;
      |
      |         return hi_old ^ hi_new;
      | }
      
      ... which GCC 12.1.0 compiles as:
      
      | 0000000000000000 <foo>:
      |    0:   d503233f        paciasp
      |    4:   aa0003e4        mov     x4, x0
      |    8:   1400000e        b       40 <foo+0x40>
      |    c:   d2800240        mov     x0, #0x12                       // #18
      |   10:   d2800681        mov     x1, #0x34                       // #52
      |   14:   aa0003e5        mov     x5, x0
      |   18:   aa0103e6        mov     x6, x1
      |   1c:   d2800ac2        mov     x2, #0x56                       // #86
      |   20:   d2800f03        mov     x3, #0x78                       // #120
      |   24:   48207c82        casp    x0, x1, x2, x3, [x4]
      |   28:   ca050000        eor     x0, x0, x5
      |   2c:   ca060021        eor     x1, x1, x6
      |   30:   aa010000        orr     x0, x0, x1
      |   34:   d2800000        mov     x0, #0x0                        // #0    <--- BANG
      |   38:   d50323bf        autiasp
      |   3c:   d65f03c0        ret
      |   40:   d2800240        mov     x0, #0x12                       // #18
      |   44:   d2800681        mov     x1, #0x34                       // #52
      |   48:   d2800ac2        mov     x2, #0x56                       // #86
      |   4c:   d2800f03        mov     x3, #0x78                       // #120
      |   50:   f9800091        prfm    pstl1strm, [x4]
      |   54:   c87f1885        ldxp    x5, x6, [x4]
      |   58:   ca0000a5        eor     x5, x5, x0
      |   5c:   ca0100c6        eor     x6, x6, x1
      |   60:   aa0600a6        orr     x6, x5, x6
      |   64:   b5000066        cbnz    x6, 70 <foo+0x70>
      |   68:   c8250c82        stxp    w5, x2, x3, [x4]
      |   6c:   35ffff45        cbnz    w5, 54 <foo+0x54>
      |   70:   d2800000        mov     x0, #0x0                        // #0     <--- BANG
      |   74:   d50323bf        autiasp
      |   78:   d65f03c0        ret
      
      Notice that at the lines with "BANG" comments, GCC has assumed that the
      higher 8 bytes are unchanged by the cmpxchg_double() call, and that
      `hi_old ^ hi_new` can be reduced to a constant zero, for both LSE and
      LL/SC versions of cmpxchg_double().
      
      This patch fixes the issue by passing a pointer to __uint128_t into the
      +Q constraint, ensuring that the compiler hazards against the entire 16
      bytes being modified.
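
      For illustration, the shape of the constraint change (a sketch, not the
      actual __cmpxchg_double macros; the instruction sequence is elided):

        __uint128_t *full = (__uint128_t *)ptr;

        asm volatile("// LL/SC or CASP sequence operating on [ptr] elided"
                : "+Q" (*full)          /* was: "+Q" (*(unsigned long *)ptr) */
                :
                : "memory");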
      
      With this change, GCC 12.1.0 compiles the above test as:
      
      | 0000000000000000 <foo>:
      |    0:   f9400407        ldr     x7, [x0, #8]
      |    4:   d503233f        paciasp
      |    8:   aa0003e4        mov     x4, x0
      |    c:   1400000f        b       48 <foo+0x48>
      |   10:   d2800240        mov     x0, #0x12                       // #18
      |   14:   d2800681        mov     x1, #0x34                       // #52
      |   18:   aa0003e5        mov     x5, x0
      |   1c:   aa0103e6        mov     x6, x1
      |   20:   d2800ac2        mov     x2, #0x56                       // #86
      |   24:   d2800f03        mov     x3, #0x78                       // #120
      |   28:   48207c82        casp    x0, x1, x2, x3, [x4]
      |   2c:   ca050000        eor     x0, x0, x5
      |   30:   ca060021        eor     x1, x1, x6
      |   34:   aa010000        orr     x0, x0, x1
      |   38:   f9400480        ldr     x0, [x4, #8]
      |   3c:   d50323bf        autiasp
      |   40:   ca0000e0        eor     x0, x7, x0
      |   44:   d65f03c0        ret
      |   48:   d2800240        mov     x0, #0x12                       // #18
      |   4c:   d2800681        mov     x1, #0x34                       // #52
      |   50:   d2800ac2        mov     x2, #0x56                       // #86
      |   54:   d2800f03        mov     x3, #0x78                       // #120
      |   58:   f9800091        prfm    pstl1strm, [x4]
      |   5c:   c87f1885        ldxp    x5, x6, [x4]
      |   60:   ca0000a5        eor     x5, x5, x0
      |   64:   ca0100c6        eor     x6, x6, x1
      |   68:   aa0600a6        orr     x6, x5, x6
      |   6c:   b5000066        cbnz    x6, 78 <foo+0x78>
      |   70:   c8250c82        stxp    w5, x2, x3, [x4]
      |   74:   35ffff45        cbnz    w5, 5c <foo+0x5c>
      |   78:   f9400480        ldr     x0, [x4, #8]
      |   7c:   d50323bf        autiasp
      |   80:   ca0000e0        eor     x0, x7, x0
      |   84:   d65f03c0        ret
      
      ... sampling the high 8 bytes before and after the cmpxchg, and
      performing an EOR, as we'd expect.
      
      For backporting, I've tested this atop linux-4.9.y with GCC 5.5.0. Note
      that linux-4.9.y is oldest currently supported stable release, and
      mandates GCC 5.1+. Unfortunately I couldn't get a GCC 5.1 binary to run
      on my machines due to library incompatibilities.
      
      I've also used a standalone test to check that we can use a __uint128_t
      pointer in a +Q constraint at least as far back as GCC 4.8.5 and LLVM
      3.9.1.
      
      Fixes: 5284e1b4 ("arm64: xchg: Implement cmpxchg_double")
      Fixes: e9a4b795 ("arm64: cmpxchg_dbl: patch in lse instructions when supported by the CPU")
      Reported-by: Boqun Feng <boqun.feng@gmail.com>
      Link: https://lore.kernel.org/lkml/Y6DEfQXymYVgL3oJ@boqun-archlinux/
      
      
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Link: https://lore.kernel.org/lkml/Y6GXoO4qmH9OIZ5Q@hirez.programming.kicks-ass.net/
      
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: stable@vger.kernel.org
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Steve Capper <steve.capper@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230104151626.3262137-1-mark.rutland@arm.com
      
      
      Signed-off-by: Will Deacon <will@kernel.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      057f5ddf
    • arm64: atomics: remove LL/SC trampolines · 9a5fd084
      Mark Rutland authored
      
      [ Upstream commit b2c3ccbd ]
      
      When CONFIG_ARM64_LSE_ATOMICS=y, each use of an LL/SC atomic results in
      a fragment of code being generated in a subsection without a clear
      association with its caller. A trampoline in the caller branches to the
      LL/SC atomic with a direct branch, and the atomic directly branches
      back into its trampoline.
      
      This breaks backtracing, as any PC within the out-of-line fragment will
      be symbolized as an offset from the nearest prior symbol (which may not
      be the function using the atomic), and since the atomic returns with a
      direct branch, the caller's PC may be missing from the backtrace.
      
      For example, with secondary_start_kernel() hacked to contain
      atomic_inc(NULL), the resulting exception can be reported as being taken
      from cpus_are_stuck_in_kernel():
      
      | Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000
      | Mem abort info:
      |   ESR = 0x0000000096000004
      |   EC = 0x25: DABT (current EL), IL = 32 bits
      |   SET = 0, FnV = 0
      |   EA = 0, S1PTW = 0
      |   FSC = 0x04: level 0 translation fault
      | Data abort info:
      |   ISV = 0, ISS = 0x00000004
      |   CM = 0, WnR = 0
      | [0000000000000000] user address but active_mm is swapper
      | Internal error: Oops: 96000004 [#1] PREEMPT SMP
      | Modules linked in:
      | CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.19.0-11219-geb555cb5b794-dirty #3
      | Hardware name: linux,dummy-virt (DT)
      | pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
      | pc : cpus_are_stuck_in_kernel+0xa4/0x120
      | lr : secondary_start_kernel+0x164/0x170
      | sp : ffff80000a4cbe90
      | x29: ffff80000a4cbe90 x28: 0000000000000000 x27: 0000000000000000
      | x26: 0000000000000000 x25: 0000000000000000 x24: 0000000000000000
      | x23: 0000000000000000 x22: 0000000000000000 x21: 0000000000000000
      | x20: 0000000000000001 x19: 0000000000000001 x18: 0000000000000008
      | x17: 3030383832343030 x16: 3030303030307830 x15: ffff80000a4cbab0
      | x14: 0000000000000001 x13: 5d31666130663133 x12: 3478305b20313030
      | x11: 3030303030303078 x10: 3020726f73736563 x9 : 726f737365636f72
      | x8 : ffff800009ff2ef0 x7 : 0000000000000003 x6 : 0000000000000000
      | x5 : 0000000000000000 x4 : 0000000000000000 x3 : 0000000000000100
      | x2 : 0000000000000000 x1 : ffff0000029bd880 x0 : 0000000000000000
      | Call trace:
      |  cpus_are_stuck_in_kernel+0xa4/0x120
      |  __secondary_switched+0xb0/0xb4
      | Code: 35ffffa3 17fffc6c d53cd040 f9800011 (885f7c01)
      | ---[ end trace 0000000000000000 ]---
      
      This is confusing and hinders debugging, and will be problematic for
      CONFIG_LIVEPATCH as these cases cannot be unwound reliably.
      
      This is very similar to recent issues with out-of-line exception fixups,
      which were removed in commits:
      
        35d67794 ("arm64: lib: __arch_clear_user(): fold fixups into body")
        4012e0e2 ("arm64: lib: __arch_copy_from_user(): fold fixups into body")
        139f9ab7 ("arm64: lib: __arch_copy_to_user(): fold fixups into body")
      
      When the trampolines were introduced in commit:
      
        addfc386 ("arm64: atomics: avoid out-of-line ll/sc atomics")
      
      The rationale was to improve icache performance by grouping the LL/SC
      atomics together. This has never been measured, and this theoretical
      benefit is outweighed by other factors:
      
      * As the subsections are collapsed into sections at object file
        granularity, these are spread out throughout the kernel and can share
        cachelines with unrelated code regardless.
      
      * GCC 12.1.0 has been observed to place the trampoline out-of-line in
        specialised __ll_sc_*() functions, introducing more branching than was
        intended.
      
      * Removing the trampolines has been observed to shrink a defconfig
        kernel Image by 64KiB when building with GCC 12.1.0.
      
      This patch removes the LL/SC trampolines, meaning that the LL/SC atomics
      will be inlined into their callers (or placed in out-of line functions
      using regular BL/RET pairs). When CONFIG_ARM64_LSE_ATOMICS=y, the LL/SC
      atomics are always called in an unlikely branch, and will be placed in a
      cold portion of the function, so this should have minimal impact to the
      hot paths.
      
      Other than the improved backtracing, there should be no functional
      change as a result of this patch.
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20220817155914.3975112-2-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Stable-dep-of: 031af500 ("arm64: cmpxchg_double*: hazard against entire exchange variable")
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      9a5fd084
    • arm64: atomics: format whitespace consistently · 28840e46
      Mark Rutland authored
      
      [ Upstream commit 8e6082e9 ]
      
      The code for the atomic ops is formatted inconsistently, and while this
      is not a functional problem it is rather distracting when working on
      them.
      
      Some of the ops have consistent indentation, e.g.
      
      | #define ATOMIC_OP_ADD_RETURN(name, mb, cl...)                           \
      | static inline int __lse_atomic_add_return##name(int i, atomic_t *v)     \
      | {                                                                       \
      |         u32 tmp;                                                        \
      |                                                                         \
      |         asm volatile(                                                   \
      |         __LSE_PREAMBLE                                                  \
      |         "       ldadd" #mb "    %w[i], %w[tmp], %[v]\n"                 \
      |         "       add     %w[i], %w[i], %w[tmp]"                          \
      |         : [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp)        \
      |         : "r" (v)                                                       \
      |         : cl);                                                          \
      |                                                                         \
      |         return i;                                                       \
      | }
      
      While others have negative indentation for some lines, and/or have
      misaligned trailing backslashes, e.g.
      
      | static inline void __lse_atomic_##op(int i, atomic_t *v)                        \
      | {                                                                       \
      |         asm volatile(                                                   \
      |         __LSE_PREAMBLE                                                  \
      | "       " #asm_op "     %w[i], %[v]\n"                                  \
      |         : [i] "+r" (i), [v] "+Q" (v->counter)                           \
      |         : "r" (v));                                                     \
      | }
      
      This patch makes the indentation consistent and also aligns the trailing
      backslashes. This makes the code easier to read for those (like myself)
      who are easily distracted by these inconsistencies.
      
      This is intended as a cleanup.
      There should be no functional change as a result of this patch.
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Will Deacon <will@kernel.org>
      Acked-by: Will Deacon <will@kernel.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lore.kernel.org/r/20211210151410.2782645-2-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Stable-dep-of: 031af500 ("arm64: cmpxchg_double*: hazard against entire exchange variable")
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      28840e46
    • x86/resctrl: Fix task CLOSID/RMID update race · 5dac4c72
      Peter Newman authored
      
      [ Upstream commit fe1f0714 ]
      
      When the user moves a running task to a new rdtgroup using the task's
      file interface or by deleting its rdtgroup, the resulting change in
      CLOSID/RMID must be immediately propagated to the PQR_ASSOC MSR on the
      task(s) CPUs.
      
      x86 allows reordering loads with prior stores, so if the task starts
      running between a task_curr() check that the CPU hoisted before the
      stores in the CLOSID/RMID update, then it can start running with the old
      CLOSID/RMID until it is switched again, because __rdtgroup_move_task()
      failed to determine that it needs to be interrupted to obtain the new
      CLOSID/RMID.
      
      Refer to the diagram below:
      
      CPU 0                                   CPU 1
      -----                                   -----
      __rdtgroup_move_task():
        curr <- t1->cpu->rq->curr
                                              __schedule():
                                                rq->curr <- t1
                                              resctrl_sched_in():
                                                t1->{closid,rmid} -> {1,1}
        t1->{closid,rmid} <- {2,2}
        if (curr == t1) // false
         IPI(t1->cpu)
      
      A similar race impacts rdt_move_group_tasks(), which updates tasks in a
      deleted rdtgroup.
      
      In both cases, use smp_mb() to order the task_struct::{closid,rmid}
      stores before the loads in task_curr().  In particular, in the
      rdt_move_group_tasks() case, simply execute an smp_mb() on every
      iteration with a matching task.
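
      For illustration, a minimal sketch of the ordering in the move path
      (names loosely follow arch/x86/kernel/cpu/resctrl/rdtgroup.c; this is a
      sketch, not the patch):

        WRITE_ONCE(tsk->closid, rdtgrp->closid);
        WRITE_ONCE(tsk->rmid, rdtgrp->mon.rmid);

        /*
         * Order the CLOSID/RMID stores above before the task_curr() load that
         * follows; otherwise the load can be hoisted and a task that has just
         * started running keeps using the stale values without being IPI'd.
         */
        smp_mb();

        update_task_closid_rmid(tsk);   /* IPIs the task's CPU if it is running */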
      
      It is possible to use a single smp_mb() in rdt_move_group_tasks(), but
      this would require two passes and a means of remembering which
      task_structs were updated in the first loop. However, benchmarking
      results below showed too little performance impact in the simple
      approach to justify implementing the two-pass approach.
      
      Times below were collected using `perf stat` to measure the time to
      remove a group containing a 1600-task, parallel workload.
      
      CPU: Intel(R) Xeon(R) Platinum P-8136 CPU @ 2.00GHz (112 threads)
      
        # mkdir /sys/fs/resctrl/test
        # echo $$ > /sys/fs/resctrl/test/tasks
        # perf bench sched messaging -g 40 -l 100000
      
      task-clock time ranges collected using:
      
        # perf stat rmdir /sys/fs/resctrl/test
      
      Baseline:                     1.54 - 1.60 ms
      smp_mb() every matching task: 1.57 - 1.67 ms
      
        [ bp: Massage commit message. ]
      
      Fixes: ae28d1aa ("x86/resctrl: Use an IPI instead of task_work_add() to update PQR_ASSOC MSR")
      Fixes: 0efc89be ("x86/intel_rdt: Update task closid immediately on CPU in rmdir and unmount")
      Signed-off-by: Peter Newman <peternewman@google.com>
      Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
      Reviewed-by: Reinette Chatre <reinette.chatre@intel.com>
      Reviewed-by: Babu Moger <babu.moger@amd.com>
      Cc: <stable@kernel.org>
      Link: https://lore.kernel.org/r/20221220161123.432120-1-peternewman@google.com
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      5dac4c72
    • x86/resctrl: Use task_curr() instead of task_struct->on_cpu to prevent unnecessary IPI · 446c7251
      Reinette Chatre authored
      [ Upstream commit e0ad6dc8 ]
      
      James reported in [1] that there could be two tasks running on the same CPU
      with task_struct->on_cpu set. Using task_struct->on_cpu to test whether a
      task is running on a CPU may thus match the old task for a CPU while the
      scheduler is running and IPI it unnecessarily.
      
      task_curr() is the correct helper to use. While doing so, move the #ifdef
      check of the CONFIG_SMP symbol to a C conditional that determines whether
      this helper should be used, so that the code is always checked for
      correctness by the compiler.
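
      A minimal sketch of the resulting helper (close to the upstream code, but
      treat the details as illustrative):

        static void update_task_closid_rmid(struct task_struct *t)
        {
                /*
                 * task_curr() rather than t->on_cpu: ->on_cpu can still be set
                 * for the previous task mid-switch, causing a needless IPI.
                 * IS_ENABLED() keeps the branch compiled (and type-checked)
                 * even on !SMP builds, unlike an #ifdef.
                 */
                if (IS_ENABLED(CONFIG_SMP) && task_curr(t))
                        smp_call_function_single(task_cpu(t), _update_task_closid_rmid, t, 1);
                else
                        _update_task_closid_rmid(t);
        }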
      
      [1] https://lore.kernel.org/lkml/a782d2f3-d2f6-795f-f4b1-9462205fd581@arm.com
      
      
      
      Reported-by: James Morse <james.morse@arm.com>
      Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Link: https://lkml.kernel.org/r/e9e68ce1441a73401e08b641cc3b9a3cf13fe6d4.1608243147.git.reinette.chatre@intel.com
      
      
      Stable-dep-of: fe1f0714 ("x86/resctrl: Fix task CLOSID/RMID update race")
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      446c7251
    • KVM: x86: Do not return host topology information from KVM_GET_SUPPORTED_CPUID · 196c6f0c
      Paolo Bonzini authored
      
      [ Upstream commit 45e966fc ]
      
      Passing the host topology to the guest is almost certainly wrong
      and will confuse the scheduler.  In addition, several fields of
      these CPUID leaves vary on each processor; it is simply impossible to
      return the right values from KVM_GET_SUPPORTED_CPUID in such a way that
      they can be passed to KVM_SET_CPUID2.
      
      The values that will most likely prevent confusion are all zeroes.
      Userspace will have to override it anyway if it wishes to present a
      specific topology to the guest.
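
      For illustration, a hedged userspace-side sketch of such an override
      before KVM_SET_CPUID2 (the helpers are hypothetical, not QEMU code; 0xb
      and 0x1f are the x86 extended-topology leaves):

        struct kvm_cpuid2 *cpuid = get_supported_cpuid(kvm_fd);  /* hypothetical wrapper */

        for (__u32 i = 0; i < cpuid->nent; i++) {
                struct kvm_cpuid_entry2 *e = &cpuid->entries[i];

                /* These leaves now come back zeroed; supply the VM's own topology. */
                if (e->function == 0xb || e->function == 0x1f)
                        fill_guest_topology_leaf(e);  /* hypothetical helper */
        }

        ioctl(vcpu_fd, KVM_SET_CPUID2, cpuid);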
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      196c6f0c
    • Documentation: KVM: add API issues section · 0027164b
      Paolo Bonzini authored
      
      [ Upstream commit cde363ab ]
      
      Add a section to document all the different ways in which the KVM API sucks.
      
      I am sure there are way more; give people a place to vent so that userspace
      authors are aware.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <20220322110712.222449-4-pbonzini@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      0027164b
    • iommu/mediatek-v1: Fix an error handling path in mtk_iommu_v1_probe() · caaea2ab
      Christophe JAILLET authored
      
      [ Upstream commit 142e821f ]
      
      A clk, prepared and enabled in mtk_iommu_v1_hw_init(), is not released in
      the error handling path of mtk_iommu_v1_probe().
      
      Add the corresponding clk_disable_unprepare(), as already done in the
      remove function.
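
      A minimal sketch of the fixed error path (labels and surrounding steps
      are illustrative):

        ret = mtk_iommu_v1_hw_init(data);   /* prepares and enables data->bclk */
        if (ret)
                return ret;

        ret = iommu_device_sysfs_add(&data->iommu, &pdev->dev, NULL,
                                     dev_name(&pdev->dev));
        if (ret)
                goto out_clk_unprepare;
        /* ... later failures also unwind through the label below ... */

        out_clk_unprepare:
                clk_disable_unprepare(data->bclk);
                return ret;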
      
      Fixes: b17336c5 ("iommu/mediatek: add support for mtk iommu generation one HW")
      Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
      Reviewed-by: Yong Wu <yong.wu@mediatek.com>
      Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
      Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
      Link: https://lore.kernel.org/r/593e7b7d97c6e064b29716b091a9d4fd122241fb.1671473163.git.christophe.jaillet@wanadoo.fr
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      caaea2ab
    • iommu/mediatek-v1: Add error handle for mtk_iommu_probe · cf38e762
      Yong Wu authored
      
      [ Upstream commit ac304c07 ]
      
      In the original code, error handling is missing. This patch adds it.
      
      Signed-off-by: Yong Wu <yong.wu@mediatek.com>
      Link: https://lore.kernel.org/r/20210412064843.11614-2-yong.wu@mediatek.com
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      Stable-dep-of: 142e821f ("iommu/mediatek-v1: Fix an error handling path in mtk_iommu_v1_probe()")
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      cf38e762
    • mm: Always release pages to the buddy allocator in memblock_free_late(). · 60806adc
      Aaron Thompson authored
      
      [ Upstream commit 115d9d77 ]
      
      If CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, memblock_free_pages()
      only releases pages to the buddy allocator if they are not in the
      deferred range. This is correct for free pages (as defined by
      for_each_free_mem_pfn_range_in_zone()) because free pages in the
      deferred range will be initialized and released as part of the deferred
      init process. memblock_free_pages() is called by memblock_free_late(),
      which is used to free reserved ranges after memblock_free_all() has
      run. All pages in reserved ranges have been initialized at that point,
      and accordingly, those pages are not touched by the deferred init
      process. This means that currently, if the pages that
      memblock_free_late() intends to release are in the deferred range, they
      will never be released to the buddy allocator. They will forever be
      reserved.
      
      In addition, memblock_free_pages() calls kmsan_memblock_free_pages(),
      which is also correct for free pages but is not correct for reserved
      pages. KMSAN metadata for reserved pages is initialized by
      kmsan_init_shadow(), which runs shortly before memblock_free_all().
      
      For both of these reasons, memblock_free_pages() should only be called
      for free pages, and memblock_free_late() should call __free_pages_core()
      directly instead.
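
      For illustration, a simplified sketch of memblock_free_late() after the
      change (accounting and debug hooks elided):

        void __init memblock_free_late(phys_addr_t base, phys_addr_t size)
        {
                phys_addr_t cursor = PFN_UP(base);
                phys_addr_t end = PFN_DOWN(base + size);

                for (; cursor < end; cursor++) {
                        /*
                         * Reserved ranges were already initialised, so bypass
                         * memblock_free_pages() (which would withhold pages in
                         * the deferred-init range) and release them directly
                         * to the buddy allocator.
                         */
                        __free_pages_core(pfn_to_page(cursor), 0);
                        totalram_pages_inc();
                }
        }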
      
      One case where this issue can occur in the wild is EFI boot on
      x86_64. The x86 EFI code reserves all EFI boot services memory ranges
      via memblock_reserve() and frees them later via memblock_free_late()
      (efi_reserve_boot_services() and efi_free_boot_services(),
      respectively). If any of those ranges happens to fall within the
      deferred init range, the pages will not be released and that memory will
      be unavailable.
      
      For example, on an Amazon EC2 t3.micro VM (1 GB) booting via EFI:
      
      v6.2-rc2:
        # grep -E 'Node|spanned|present|managed' /proc/zoneinfo
        Node 0, zone      DMA
                spanned  4095
                present  3999
                managed  3840
        Node 0, zone    DMA32
                spanned  246652
                present  245868
                managed  178867
      
      v6.2-rc2 + patch:
        # grep -E 'Node|spanned|present|managed' /proc/zoneinfo
        Node 0, zone      DMA
                spanned  4095
                present  3999
                managed  3840
        Node 0, zone    DMA32
                spanned  246652
                present  245868
                managed  222816   # +43,949 pages
      
      Fixes: 3a80a7fa ("mm: meminit: initialise a subset of struct pages if CONFIG_DEFERRED_STRUCT_PAGE_INIT is set")
      Signed-off-by: Aaron Thompson <dev@aaront.org>
      Link: https://lore.kernel.org/r/01010185892de53e-e379acfb-7044-4b24-b30a-e2657c1ba989-000000@us-west-2.amazonses.com
      Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      60806adc
    • net/mlx5e: Don't support encap rules with gbp option · 092f0c2d
      Gavin Li authored
      
      [ Upstream commit d515d63c ]
      
      Previously, encap rules with the gbp option would be offloaded by mistake,
      but the driver does not support gbp option offload.
      
      To fix this issue, check whether the encap rule carries the gbp option and,
      if it does, do not offload the rule.
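
      For illustration, a hedged sketch of such a check (the helper is
      hypothetical; struct vxlan_metadata and TUNNEL_VXLAN_OPT are the generic
      kernel types for VXLAN GBP metadata):

        /* Does the tunnel metadata carry a non-zero GBP option? */
        static bool encap_info_has_gbp(struct ip_tunnel_info *info)
        {
                struct vxlan_metadata *md;

                if (!(info->key.tun_flags & TUNNEL_VXLAN_OPT) ||
                    info->options_len < sizeof(*md))
                        return false;

                md = ip_tunnel_info_opts(info);
                return md->gbp != 0;
        }

        /* In the encap-offload path, reject such rules instead of offloading: */
        if (encap_info_has_gbp(tun_info)) {
                NL_SET_ERR_MSG_MOD(extack, "VXLAN GBP option is not supported");
                return -EOPNOTSUPP;
        }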
      
      Fixes: d8f9dfae ("net: sched: allow flower to match vxlan options")
      Signed-off-by: Gavin Li <gavinl@nvidia.com>
      Reviewed-by: Maor Dickman <maord@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      092f0c2d
    • net/mlx5: Fix ptp max frequency adjustment range · b3d47227
      Rahul Rameshbabu authored
      
      [ Upstream commit fe91d572 ]
      
      .max_adj of ptp_clock_info acts as an absolute value for the amount in ppb
      that can be set for a single call of .adjfine. This means that a single
      call to .adjfine cannot request more than .max_adj or less than -(.max_adj).
      Provide the correct value for the maximum frequency adjustment supported by
      the devices.
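
      A minimal sketch of the relationship (values and names are placeholders,
      not the mlx5 ones):

        /* The PTP core rejects a requested adjustment whose magnitude exceeds
         * max_adj before .adjfine is ever called, so max_adj must reflect the
         * device's real limit. */
        static const struct ptp_clock_info example_ptp_info = {
                .owner   = THIS_MODULE,
                .name    = "example_phc",
                .max_adj = 50000000,         /* placeholder ppb, not the mlx5 value */
                .adjfine = example_adjfine,  /* hypothetical callback */
        };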
      
      Fixes: 3d8c38af ("net/mlx5e: Add PTP Hardware Clock (PHC) support")
      Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com>
      Reviewed-by: Gal Pressman <gal@nvidia.com>
      Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      b3d47227