  1. Mar 21, 2025
• ANDROID: userfaultfd: add MOVE ioctl mode to confirm bug-fixes · 32fd2083
      Lokesh Gidra authored
The following issues were reported in the MOVE ioctl:
      1. Panic when trying to move a source page which is in swap-cache [1]
      2. Livelock when multiple threads try to move the same source page [2]
      
Three patches have been upstreamed to fix these issues [3, 4, 5].
      
The MOVE ioctl was backported to ACK 6.1 and 6.6 for the ART GC to use
[6]. Therefore, this mode is added on these kernels so that userspace
can identify whether the fixes are included.
      
      NOTE: UFFDIO_MOVE_MODE_CONFIRM_FIXED mode is only for 6.1 and 6.6
      kernels, and will go away afterwards.
      
      [1] https://lore.kernel.org/linux-mm/20250219112519.92853-1-21cnbao@gmail.com/
      [2] https://github.com/lokeshgidra/uffd_move_ioctl_deadlock
      [3] https://web.git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git/commit/?h=mm-hotfixes-stable&id=c50f8e6053b0503375c2975bf47f182445aebb4c
      [4] https://web.git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git/commit/?h=mm-hotfixes-stable&id=37b338eed10581784e854d4262da05c8d960c748
[5] https://web.git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git/commit/?h=mm-hotfixes-stable&id=927e926d72d9155fde3264459fe9bfd7b5e40d28
[6] b/274911254
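
As an illustration of how userspace might detect the fixes (a hedged
sketch: the flag name comes from this commit, and treating EINVAL as
"mode unsupported" is an assumption based on how UFFDIO_MOVE validates
its mode bits, not verified uAPI documentation):

    #include <sys/ioctl.h>
    #include <linux/userfaultfd.h>
    #include <errno.h>

    /* Returns 1 if the kernel accepts UFFDIO_MOVE_MODE_CONFIRM_FIXED
     * (fixes present), 0 if it rejects the unknown mode bit (fixes
     * absent), -1 on any other error. */
    static int move_fixes_present(int uffd, unsigned long src,
                                  unsigned long dst, unsigned long len)
    {
            struct uffdio_move move = {
                    .src  = src,
                    .dst  = dst,
                    .len  = len,
                    .mode = UFFDIO_MOVE_MODE_CONFIRM_FIXED,
            };

            if (ioctl(uffd, UFFDIO_MOVE, &move) == 0)
                    return 1;
            return errno == EINVAL ? 0 : -1;
    }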
      
      Bug: 401790618
      Bug: 405066974
      Change-Id: Ibd854ec7ac9ae6a2ca416767d032b6c71f1bc688
Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
(cherry picked from commit 9bcabbda)
Signed-off-by: Yinchu Chen <chenyc5@motorola.com>
• FROMGIT: userfaultfd: fix PTE unmapping stack-allocated PTE copies · 74c9f3f8
      Suren Baghdasaryan authored
The current implementation of move_pages_pte() copies the source and
destination PTEs in order to detect concurrent changes to the PTEs
involved in the move. However, these copies are also used to unmap the
PTEs, which fails when CONFIG_HIGHPTE is enabled because the copies are
allocated on the stack. Fix this by unmapping the actual PTEs which
were kmap()ed.
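
A minimal sketch of the distinction (illustrative, not the exact kernel
code; src_pmd and src_addr are placeholder names): with CONFIG_HIGHPTE,
the pointer handed to pte_unmap() must be the one returned by the
pte_offset_map family, since that is the highmem mapping being torn
down; a stack copy is only good for value comparison.

    pte_t *src_pte = pte_offset_map_nolock(mm, src_pmd, src_addr, &ptl);
    pte_t orig_src_pte = ptep_get(src_pte); /* stack copy: compare only */
    /* ... detect concurrent PTE changes using orig_src_pte ... */
    pte_unmap(src_pte);         /* correct: unmaps the kmap()ed PTE */
    /* pte_unmap(&orig_src_pte);   broken: stack address, not a mapping */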
      
Link: https://lkml.kernel.org/r/20250226185510.2732648-3-surenb@google.com
      Fixes: adef4406 ("userfaultfd: UFFDIO_MOVE uABI")
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reported-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Barry Song <21cnbao@gmail.com>
      Cc: Barry Song <v-songbaohua@oppo.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Kalesh Singh <kaleshsingh@google.com>
      Cc: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Cc: Lokesh Gidra <lokeshgidra@google.com>
      Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
(cherry picked from commit 927e926d https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-hotfixes-stable)
      Change-Id: I0ee6c1b509ea7c4fa68056d6e512d4ac167c9234
      Bug: 401790618
      Bug: 405066974
      (cherry picked from commit 8d8d44ff)
Signed-off-by: Yinchu Chen <chenyc5@motorola.com>
• FROMGIT: userfaultfd: do not block on locking a large folio with raised refcount · a87db275
      Suren Baghdasaryan authored
      Lokesh recently raised an issue about UFFDIO_MOVE getting into a deadlock
      state when it goes into split_folio() with raised folio refcount.
      split_folio() expects the reference count to be exactly mapcount +
      num_pages_in_folio + 1 (see can_split_folio()) and fails with EAGAIN
      otherwise.
      
      If multiple processes are trying to move the same large folio, they raise
      the refcount (all tasks succeed in that) then one of them succeeds in
      locking the folio, while others will block in folio_lock() while keeping
      the refcount raised.  The winner of this race will proceed with calling
      split_folio() and will fail returning EAGAIN to the caller and unlocking
      the folio.  The next competing process will get the folio locked and will
      go through the same flow.  In the meantime the original winner will be
      retried and will block in folio_lock(), getting into the queue of waiting
      processes only to repeat the same path.  All this results in a livelock.
      
      An easy fix would be to avoid waiting for the folio lock while holding
      folio refcount, similar to madvise_free_huge_pmd() where folio lock is
      acquired before raising the folio refcount.  Since we lock and take a
      refcount of the folio while holding the PTE lock, changing the order of
      these operations should not break anything.
      
      Modify move_pages_pte() to try locking the folio first and if that fails
      and the folio is large then return EAGAIN without touching the folio
      refcount.  If the folio is single-page then split_folio() is not called,
      so we don't have this issue.  Lokesh has a reproducer [1] and I verified
      that this change fixes the issue.
      
      [1] https://github.com/lokeshgidra/uffd_move_ioctl_deadlock
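
A rough sketch of the reordered sequence described above (control flow
simplified from move_pages_pte(); in particular, sleeping on the lock
is only shown schematically):

    /* take the folio lock first; back off on large folios instead of
     * queueing behind other movers with the refcount raised */
    if (!folio_trylock(src_folio)) {
            if (folio_test_large(src_folio))
                    return -EAGAIN;  /* caller retries; refcount untouched */
            folio_lock(src_folio);   /* small folio: never split, safe to wait */
    }
    folio_get(src_folio);            /* reference taken only after locking */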
      
      [akpm@linux-foundation.org: reflow comment to 80 cols, s/end/end up/]
Link: https://lkml.kernel.org/r/20250226185510.2732648-2-surenb@google.com
      Fixes: adef4406 ("userfaultfd: UFFDIO_MOVE uABI")
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reported-by: Lokesh Gidra <lokeshgidra@google.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Acked-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Barry Song <21cnbao@gmail.com>
      Cc: Barry Song <v-songbaohua@oppo.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Kalesh Singh <kaleshsingh@google.com>
      Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
(cherry picked from commit 37b338ee https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-hotfixes-stable)
      Change-Id: I71b307add9707ad3518a44623aea2e2ca417b95a
      Bug: 401790618
      Bug: 405066974
      (cherry picked from commit af439acc)
Signed-off-by: Yinchu Chen <chenyc5@motorola.com>
• FROMGIT: BACKPORT: mm: fix kernel BUG when userfaultfd_move encounters swapcache · 7c180424
      Barry Song authored
      userfaultfd_move() checks whether the PTE entry is present or a
      swap entry.
      
      - If the PTE entry is present, move_present_pte() handles folio
        migration by setting:
      
        src_folio->index = linear_page_index(dst_vma, dst_addr);
      
      - If the PTE entry is a swap entry, move_swap_pte() simply copies
        the PTE to the new dst_addr.
      
      This approach is incorrect because, even if the PTE is a swap entry,
      it can still reference a folio that remains in the swap cache.
      
This creates a race window between steps 2 and 4 of the following sequence:
       1. add_to_swap: The folio is added to the swapcache.
       2. try_to_unmap: PTEs are converted to swap entries.
       3. pageout: The folio is written back.
       4. Swapcache is cleared.
      If userfaultfd_move() occurs in the window between steps 2 and 4,
      after the swap PTE has been moved to the destination, accessing the
      destination triggers do_swap_page(), which may locate the folio in
      the swapcache. However, since the folio's index has not been updated
      to match the destination VMA, do_swap_page() will detect a mismatch.
      
      This can result in two critical issues depending on the system
      configuration.
      
      If KSM is disabled, both small and large folios can trigger a BUG
      during the add_rmap operation due to:
      
       page_pgoff(folio, page) != linear_page_index(vma, address)
      
      [   13.336953] page: refcount:6 mapcount:1 mapping:00000000f43db19c index:0xffffaf150 pfn:0x4667c
      [   13.337520] head: order:2 mapcount:1 entire_mapcount:0 nr_pages_mapped:1 pincount:0
      [   13.337716] memcg:ffff00000405f000
      [   13.337849] anon flags: 0x3fffc0000020459(locked|uptodate|dirty|owner_priv_1|head|swapbacked|node=0|zone=0|lastcpupid=0xffff)
      [   13.338630] raw: 03fffc0000020459 ffff80008507b538 ffff80008507b538 ffff000006260361
      [   13.338831] raw: 0000000ffffaf150 0000000000004000 0000000600000000 ffff00000405f000
      [   13.339031] head: 03fffc0000020459 ffff80008507b538 ffff80008507b538 ffff000006260361
      [   13.339204] head: 0000000ffffaf150 0000000000004000 0000000600000000 ffff00000405f000
      [   13.339375] head: 03fffc0000000202 fffffdffc0199f01 ffffffff00000000 0000000000000001
      [   13.339546] head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
      [   13.339736] page dumped because: VM_BUG_ON_PAGE(page_pgoff(folio, page) != linear_page_index(vma, address))
      [   13.340190] ------------[ cut here ]------------
      [   13.340316] kernel BUG at mm/rmap.c:1380!
      [   13.340683] Internal error: Oops - BUG: 00000000f2000800 [#1] PREEMPT SMP
      [   13.340969] Modules linked in:
      [   13.341257] CPU: 1 UID: 0 PID: 107 Comm: a.out Not tainted 6.14.0-rc3-gcf42737e247a-dirty #299
      [   13.341470] Hardware name: linux,dummy-virt (DT)
      [   13.341671] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
      [   13.341815] pc : __page_check_anon_rmap+0xa0/0xb0
      [   13.341920] lr : __page_check_anon_rmap+0xa0/0xb0
      [   13.342018] sp : ffff80008752bb20
      [   13.342093] x29: ffff80008752bb20 x28: fffffdffc0199f00 x27: 0000000000000001
      [   13.342404] x26: 0000000000000000 x25: 0000000000000001 x24: 0000000000000001
      [   13.342575] x23: 0000ffffaf0d0000 x22: 0000ffffaf0d0000 x21: fffffdffc0199f00
      [   13.342731] x20: fffffdffc0199f00 x19: ffff000006210700 x18: 00000000ffffffff
      [   13.342881] x17: 6c203d2120296567 x16: 6170202c6f696c6f x15: 662866666f67705f
      [   13.343033] x14: 6567617028454741 x13: 2929737365726464 x12: ffff800083728ab0
      [   13.343183] x11: ffff800082996bf8 x10: 0000000000000fd7 x9 : ffff80008011bc40
      [   13.343351] x8 : 0000000000017fe8 x7 : 00000000fffff000 x6 : ffff8000829eebf8
      [   13.343498] x5 : c0000000fffff000 x4 : 0000000000000000 x3 : 0000000000000000
      [   13.343645] x2 : 0000000000000000 x1 : ffff0000062db980 x0 : 000000000000005f
      [   13.343876] Call trace:
      [   13.344045]  __page_check_anon_rmap+0xa0/0xb0 (P)
      [   13.344234]  folio_add_anon_rmap_ptes+0x22c/0x320
      [   13.344333]  do_swap_page+0x1060/0x1400
      [   13.344417]  __handle_mm_fault+0x61c/0xbc8
      [   13.344504]  handle_mm_fault+0xd8/0x2e8
      [   13.344586]  do_page_fault+0x20c/0x770
      [   13.344673]  do_translation_fault+0xb4/0xf0
      [   13.344759]  do_mem_abort+0x48/0xa0
      [   13.344842]  el0_da+0x58/0x130
      [   13.344914]  el0t_64_sync_handler+0xc4/0x138
      [   13.345002]  el0t_64_sync+0x1ac/0x1b0
      [   13.345208] Code: aa1503e0 f000f801 910f6021 97ff5779 (d4210000)
      [   13.345504] ---[ end trace 0000000000000000 ]---
      [   13.345715] note: a.out[107] exited with irqs disabled
      [   13.345954] note: a.out[107] exited with preempt_count 2
      
      If KSM is enabled, Peter Xu also discovered that do_swap_page() may
      trigger an unexpected CoW operation for small folios because
      ksm_might_need_to_copy() allocates a new folio when the folio index
      does not match linear_page_index(vma, addr).
      
      This patch also checks the swapcache when handling swap entries. If a
      match is found in the swapcache, it processes it similarly to a present
      PTE.
      However, there are some differences. For example, the folio is no longer
      exclusive because folio_try_share_anon_rmap_pte() is performed during
      unmapping.
      Furthermore, in the case of swapcache, the folio has already been
      unmapped, eliminating the risk of concurrent rmap walks and removing the
      need to acquire src_folio's anon_vma or lock.
      
Note that for large folios, in the swapcache handling path, we directly
return -EBUSY since split_folio() will return -EBUSY regardless of
whether the folio is under writeback or unmapped. This is not an urgent
issue, so a follow-up patch may address it separately.
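
A loose sketch of the added swapcache lookup (simplified, with error
handling omitted; using swp_offset() as the index matches the backport
notes below):

    swp_entry_t entry = pte_to_swp_entry(orig_src_pte);
    struct folio *folio = filemap_get_folio(swap_address_space(entry),
                                            swp_offset(entry));
    if (!IS_ERR(folio)) {
            /* still in the swapcache: move it like a present page so
             * folio->index is updated for the destination VMA */
    }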
      
      [v-songbaohua@oppo.com: minor cleanup according to Peter Xu]
        Link: https://lkml.kernel.org/r/20250226024411.47092-1-21cnbao@gmail.com
Link: https://lkml.kernel.org/r/20250226001400.9129-1-21cnbao@gmail.com
      Fixes: adef4406 ("userfaultfd: UFFDIO_MOVE uABI")
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
Acked-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Axel Rasmussen <axelrasmussen@google.com>
      Cc: Brian Geffon <bgeffon@google.com>
      Cc: Christian Brauner <brauner@kernel.org>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Kalesh Singh <kaleshsingh@google.com>
      Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Lokesh Gidra <lokeshgidra@google.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mike Rapoport (IBM) <rppt@kernel.org>
      Cc: Nicolas Geoffray <ngeoffray@google.com>
      Cc: Ryan Roberts <ryan.roberts@arm.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: ZhangPeng <zhangpeng362@huawei.com>
      Cc: Tangquan Zheng <zhengtangquan@oppo.com>
      Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      
      Conflicts:
      1. mm/userfaultfd.c
      [Removed pmd arguments being passed to move_swap_pte() to resolve conflicts - Lokesh Gidra]
      [Replaced swap_cache_index() with swp_offset() as the former doesn't exist - Lokesh Gidra]
      [Replaced folio_move_anon_rmap() with page_move_anon_rmap() as the
       former doesn't exist - Lokesh Gidra]
      
Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
(cherry picked from commit c50f8e60 https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-hotfixes-stable)
      Change-Id: I94caeac5bf78add4d78650929303a25d54d8a638
      Bug: 401790618
      Bug: 405066974
      (cherry picked from commit 7d6124b6)
Signed-off-by: Yinchu Chen <chenyc5@motorola.com>
2. Feb 17, 2025
• ANDROID: cma: Add restrict_cma_redirect boot parameter · c5dc859b
Sebastian Achim authored and Oleksiy Avramchenko committed
      
      Commit "mm,page_alloc,cma: conditionally prefer cma pageblocks for
      movable allocations" (16867664) introduced balancing of movable
      allocations between CMA and normal areas.
      
      Commit "ANDROID: cma: redirect page allocation to CMA" (f60c5572)
removes it, making allocations go to the CMA area first.
      
1. Reintroduce the condition so that the CMA and normal areas are used
in a balanced way (as they used to be), preventing depletion of the CMA
region.

2. Back-port a command-line option (from 6.6), "restrict_cma_redirect":
when set, only MOVABLE allocations marked __GFP_CMA are eligible to be
redirected to the CMA region. By default it is true.
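
As a usage illustration (hedged: the exact boolean syntax this
backported parameter accepts is an assumption, not verified against the
patch), the restriction would be lifted from the kernel command line
set by the bootloader, e.g.:

    restrict_cma_redirect=false   # allow all MOVABLE allocations to be
                                  # redirected to the CMA region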
      
      The purpose of this change is to keep using CMA for movable allocations,
      but at the same time, to have enough free CMA pages for critical system
      areas such as modem initialization, GPU initialization and so on.
      
      Bug: 397184449
      Bug: 381168812
Signed-off-by: Sebastian Achim <sebastian.1.achim@sony.com>
Signed-off-by: Uladzislau Rezki <uladzislau.rezki@sony.com>
Signed-off-by: Oleksiy Avramchenko <oleksiy.avramchenko@sony.com>
      Change-Id: I5fd6d022340715e27754c687189c5ea0e56d9ee6
      (cherry picked from commit ccc91578)
3. Jan 10, 2025
• UPSTREAM: mm: krealloc: Fix MTE false alarm in __do_krealloc · 6b18f0b5
      Qun-Wei Lin authored
      commit 70457385 upstream.
      
      This patch addresses an issue introduced by commit 1a83a716 ("mm:
      krealloc: consider spare memory for __GFP_ZERO") which causes MTE
      (Memory Tagging Extension) to falsely report a slab-out-of-bounds error.
      
      The problem occurs when zeroing out spare memory in __do_krealloc. The
      original code only considered software-based KASAN and did not account
      for MTE. It does not reset the KASAN tag before calling memset, leading
      to a mismatch between the pointer tag and the memory tag, resulting
      in a false positive.
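
The upstream fix resets the pointer's tag before zeroing the spare
bytes, so the address passed to memset matches the memory's tag. A
one-line sketch (simplified from __do_krealloc(); surrounding context
omitted):

    /* untag the pointer before touching the spare region */
    memset(kasan_reset_tag(p) + new_size, 0, ks - new_size);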
      
      Example of the error:
      ==================================================================
      swapper/0: BUG: KASAN: slab-out-of-bounds in __memset+0x84/0x188
      swapper/0: Write at addr f4ffff8005f0fdf0 by task swapper/0/1
      swapper/0: Pointer tag: [f4], memory tag: [fe]
      swapper/0:
      swapper/0: CPU: 4 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.12.
      swapper/0: Hardware name: MT6991(ENG) (DT)
      swapper/0: Call trace:
      swapper/0:  dump_backtrace+0xfc/0x17c
      swapper/0:  show_stack+0x18/0x28
      swapper/0:  dump_stack_lvl+0x40/0xa0
      swapper/0:  print_report+0x1b8/0x71c
      swapper/0:  kasan_report+0xec/0x14c
      swapper/0:  __do_kernel_fault+0x60/0x29c
      swapper/0:  do_bad_area+0x30/0xdc
      swapper/0:  do_tag_check_fault+0x20/0x34
      swapper/0:  do_mem_abort+0x58/0x104
      swapper/0:  el1_abort+0x3c/0x5c
      swapper/0:  el1h_64_sync_handler+0x80/0xcc
      swapper/0:  el1h_64_sync+0x68/0x6c
      swapper/0:  __memset+0x84/0x188
      swapper/0:  btf_populate_kfunc_set+0x280/0x3d8
      swapper/0:  __register_btf_kfunc_id_set+0x43c/0x468
      swapper/0:  register_btf_kfunc_id_set+0x48/0x60
      swapper/0:  register_nf_nat_bpf+0x1c/0x40
      swapper/0:  nf_nat_init+0xc0/0x128
      swapper/0:  do_one_initcall+0x184/0x464
      swapper/0:  do_initcall_level+0xdc/0x1b0
      swapper/0:  do_initcalls+0x70/0xc0
      swapper/0:  do_basic_setup+0x1c/0x28
      swapper/0:  kernel_init_freeable+0x144/0x1b8
      swapper/0:  kernel_init+0x20/0x1a8
      swapper/0:  ret_from_fork+0x10/0x20
      ==================================================================
      
      Bug: 388132060
(cherry picked from commit 486aeb5f https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/ linux-6.1.y)
      Fixes: 1a83a716 ("mm: krealloc: consider spare memory for __GFP_ZERO")
Signed-off-by: Qun-Wei Lin <qun-wei.lin@mediatek.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Seiya Wang <seiya.wang@mediatek.com>
      Change-Id: Iea0ba629183042d594665ab51b410965963d167e
• ANDROID: ABI: update protected symbol list · 01033a8e
      Seiya Wang authored
      
      Add bt_sock_linked to protected symbol list
      
      Bug: 387804010
      Bug: 388980392
      Change-Id: I96abbc18d9cb122708a07d80ae9f8fa2da276ef2
Signed-off-by: Seiya Wang <seiya.wang@mediatek.com>
(cherry picked from commit 770852bf)
4. Nov 29, 2024
• ANDROID: KVM: arm64: Always check state from host_ack_unshare() · a887a44a
      Quentin Perret authored
      
      Similar to how we failed to cross-check the state from the completer's
      PoV on the hyp_ack_unshare() path, we fail to do so from
      host_ack_unshare().
      
This shouldn't cause problems in practice as this can only be called on
the guest_unshare_host() path, and guests currently don't have the
ability to share their pages with anybody other than the host. But this
again is rather fragile, so let's simply do the proper check -- it isn't
very costly thanks to the hyp_vmemmap optimisation.
      
      Bug: 381409114
      Change-Id: I3770b7db55c579758863e41f50ab30f6a8bb4a0c
Signed-off-by: Quentin Perret <qperret@google.com>
• FROMLIST: KVM: arm64: Always check the state from hyp_ack_unshare() · fb69bae8
      Quentin Perret authored
      There are multiple pKVM memory transitions where the state of a page is
      not cross-checked from the completer's PoV for performance reasons.
      For example, if a page is PKVM_PAGE_OWNED from the initiator's PoV,
      we should be guaranteed by construction that it is PKVM_NOPAGE for
      everybody else, hence allowing us to save a page-table lookup.
      
      When it was introduced, hyp_ack_unshare() followed that logic and bailed
      out without checking the PKVM_PAGE_SHARED_BORROWED state in the
      hypervisor's stage-1. This was correct as we could safely assume that
      all host-initiated shares were directed at the hypervisor at the time.
      But with the introduction of other types of shares (e.g. for FF-A or
      non-protected guests), it is now very much required to cross check this
      state to prevent the host from running __pkvm_host_unshare_hyp() on a
      page shared with TZ or a non-protected guest.
      
Thankfully, if an attacker were to try this, the hyp_unmap() call from
hyp_complete_unshare() would fail, causing a WARN() from __do_unshare()
with the host lock held, which is fatal. But this is fragile at best,
and can hardly be considered a security measure.
      
      Let's just do the right thing and always check the state from
      hyp_ack_unshare().
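
A rough sketch of the resulting check (assuming the helper and state
names used elsewhere in arch/arm64/kvm/hyp/nvhe/mem_protect.c; this is
illustrative, not the literal patch):

    static int hyp_ack_unshare(u64 addr, const struct pkvm_mem_transition *tx)
    {
            u64 size = tx->nr_pages * PAGE_SIZE;

            /* always cross-check the hypervisor stage-1 state */
            return __hyp_check_page_state_range(addr, size,
                                                PKVM_PAGE_SHARED_BORROWED);
    }

The companion ANDROID patch above applies the same pattern to
host_ack_unshare() using the host-side state check.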
      
      Bug: 381409114
Link: https://lore.kernel.org/kvmarm/20241128154406.602875-1-qperret@google.com/
      Change-Id: Id3bbd1fc3c75df506b0919f4d6f7be74b6f013f3
      Signed-off-by: default avatarQuentin Perret <qperret@google.com>
      fb69bae8