  1. Feb 28, 2023
    • powerpc: Avoid dead code/data elimination when using recordmcount · f8b2336f
      Michael Ellerman authored
      
      Although powerpc now has objtool mcount support, it's not enabled in all
      configurations due to dependencies.
      
      On those configurations, with some linkers (binutils 2.37 at least),
      it's still possible to hit the dreaded "recordmcount bug", eg. errors
      such as:
      
          CC      kernel/kexec_file.o
        Cannot find symbol for section 10: .text.unlikely.
        kernel/kexec_file.o: failed
        make[1]: *** [scripts/Makefile.build:287 : kernel/kexec_file.o] Error 1
      
      Those errors are much more prevalent when building with
      CONFIG_LD_DEAD_CODE_DATA_ELIMINATION, because it places every function
      in a separate section.
      
      CONFIG_LD_DEAD_CODE_DATA_ELIMINATION is marked experimental and is not
      enabled in any powerpc defconfigs or by major distros, although it does
      have at least some users on 32-bit, where kernel size tends to be more
      important.
      
      Avoid the build errors by blocking CONFIG_LD_DEAD_CODE_DATA_ELIMINATION
      when the build is using recordmcount, rather than objtool. In practice
      that means for 64-bit big endian builds, or 64-bit clang builds - both
      because they lack CONFIG_MPROFILE_KERNEL.
      
      On 32-bit objtool is always used, so
      CONFIG_LD_DEAD_CODE_DATA_ELIMINATION is still available there.
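      The blocking described above amounts to a Kconfig dependency. A minimal
      sketch of the idea follows; the symbol names and the exact condition are
      illustrative and may differ from the actual patch:

      ```kconfig
      # Hypothetical sketch: only offer dead code/data elimination when the
      # build will not use recordmcount, i.e. when function tracing is off
      # or objtool handles mcount (on powerpc that needs MPROFILE_KERNEL).
      config LD_DEAD_CODE_DATA_ELIMINATION
      	bool "Dead code and data elimination (EXPERIMENTAL)"
      	depends on !FUNCTION_TRACER || MPROFILE_KERNEL
      ```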
      
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20230221130331.2714199-1-mpe@ellerman.id.au
  2. Feb 12, 2023
  3. Feb 10, 2023
  4. Feb 07, 2023
  5. Feb 02, 2023
  6. Jan 30, 2023
  7. Dec 07, 2022
  8. Dec 02, 2022
  9. Nov 24, 2022
  10. Nov 18, 2022
  11. Nov 01, 2022
    • powerpc/32: Select ARCH_SPLIT_ARG64 · 02a771c9
      Michael Ellerman authored
      
      On 32-bit kernels, 64-bit syscall arguments are split into two
      registers. For that to work with syscall wrappers, the prototype of the
      syscall must have the argument split so that the wrapper macro properly
      unpacks the arguments from pt_regs.
      
      The fanotify_mark() syscall is one such syscall, which already has a
      split prototype, guarded behind ARCH_SPLIT_ARG64.
      
      So select ARCH_SPLIT_ARG64 to get that prototype and fix fanotify_mark()
      on 32-bit kernels with syscall wrappers.
      
      Note also that fanotify_mark() is the only usage of ARCH_SPLIT_ARG64.
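      The split-argument convention can be illustrated in plain userspace C. A
      minimal sketch, where the function name and the hi/lo ordering are
      illustrative (the actual register ordering depends on the ABI and
      endianness, not shown here):

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* On a 32-bit ABI a 64-bit syscall argument arrives as two 32-bit
       * register halves. A "split" prototype takes the halves explicitly
       * and the handler reassembles them, which is what ARCH_SPLIT_ARG64
       * arranges for fanotify_mark(). */
      static uint64_t reassemble_arg64(uint32_t hi, uint32_t lo)
      {
          return ((uint64_t)hi << 32) | lo;
      }

      int main(void)
      {
          uint64_t mask = 0x0000000100000002ULL;

          /* Split into halves, then reassemble as the wrapper would. */
          assert(reassemble_arg64((uint32_t)(mask >> 32),
                                  (uint32_t)mask) == mask);
          return 0;
      }
      ```
      
      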
      
      Fixes: 7e92e01b ("powerpc: Provide syscall wrapper")
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20221101034852.2340319-1-mpe@ellerman.id.au
  12. Oct 31, 2022
  13. Sep 28, 2022
    • powerpc/64s: Enable KFENCE on book3s64 · a5edf981
      Nicholas Miehlbradt authored
      
      KFENCE support was added for ppc32 in commit 90cbac0e
      ("powerpc: Enable KFENCE for PPC32").
      Enable KFENCE on ppc64 architecture with hash and radix MMUs.
      It uses the same mechanism as debug pagealloc to
      protect/unprotect pages. All KFENCE kunit tests pass on both
      MMUs.
      
      KFENCE memory is initially allocated using memblock but is
      later marked as SLAB allocated. This necessitates the change
      to __pud_free to ensure that the KFENCE pages are freed
      appropriately.
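      The freeing consideration above can be modelled in userspace. This is
      only an illustrative sketch: the struct, the bool flag and the function
      names are hypothetical stand-ins for the real struct page state and the
      kernel's __pud_free() logic:

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* A page obtained from memblock but later marked slab-allocated must
       * be released through the slab path, not the early-allocator path. */
      struct page { bool slab_allocated; };

      static int slab_frees, early_frees;

      static void pud_free(struct page *p)
      {
          if (p->slab_allocated)
              slab_frees++;    /* kmem_cache_free()-style path */
          else
              early_frees++;   /* early/memblock-style path */
      }

      int main(void)
      {
          /* KFENCE pool page: memblock-backed, later marked as slab. */
          struct page kfence_page = { .slab_allocated = true };

          pud_free(&kfence_page);
          assert(slab_frees == 1 && early_frees == 0);
          return 0;
      }
      ```
      
      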
      
      Based on previous work by Christophe Leroy and Jordan Niethe.
      
      Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
      Reviewed-by: Russell Currey <ruscur@russell.cc>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20220926075726.2846-4-nicholas@linux.ibm.com
    • powerpc: Provide syscall wrapper · 7e92e01b
      Rohan McLure authored
      
      Implement a syscall wrapper as per s390, x86 and arm64. When enabled,
      it causes handlers to accept parameters from a stack frame rather than
      from user scratch register state. This allows user registers to be
      safely cleared, reducing the caller's influence on speculation within
      the syscall routine. The wrapper is a macro that emits syscall handler
      symbols that call into the target handler, obtaining its parameters
      from a struct pt_regs on the stack.
      
      As registers are already saved to the stack prior to calling
      system_call_exception, this function executes more efficiently with
      the new stack-pointer convention than with parameters passed by
      registers, as it avoids allocating a stack frame for this method. On a
      32-bit system we see >20% performance increases on the null_syscall
      microbenchmark, and on Power8 the performance gains amortise the cost
      of clearing and restoring registers implemented at the end of this
      series, for a final result of ~5.6% performance improvement on
      null_syscall.
      
      Syscalls are wrapped in this fashion on all platforms except for the
      Cell processor as this commit does not provide SPU support. This can be
      quickly fixed in a successive patch, but requires spu_sys_callback to
      allocate a pt_regs structure to satisfy the wrapped calling convention.
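      The macro's shape can be sketched in userspace C. A minimal model under
      stated assumptions: the register layout (arguments in gpr[3] and gpr[4])
      and the macro/symbol names only approximate the kernel's actual
      powerpc convention:

      ```c
      #include <assert.h>

      struct pt_regs { unsigned long gpr[10]; };

      /* The emitted sys_##name symbol takes only a pt_regs pointer and
       * forwards the saved GPRs to the real handler body. */
      #define SYSCALL_DEFINE2(name, t1, a1, t2, a2)                       \
          static long __do_sys_##name(t1 a1, t2 a2);                      \
          long sys_##name(const struct pt_regs *regs)                     \
          {                                                               \
              return __do_sys_##name((t1)regs->gpr[3], (t2)regs->gpr[4]); \
          }                                                               \
          static long __do_sys_##name(t1 a1, t2 a2)

      SYSCALL_DEFINE2(add, long, x, long, y)
      {
          return x + y;
      }

      int main(void)
      {
          /* Registers were already saved to the stack; the wrapper just
           * unpacks them. */
          struct pt_regs regs = { .gpr = { [3] = 2, [4] = 40 } };
          assert(sys_add(&regs) == 42);
          return 0;
      }
      ```
      
      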
      
      Co-developed-by: Andrew Donnellan <ajd@linux.ibm.com>
      Signed-off-by: Andrew Donnellan <ajd@linux.ibm.com>
      Signed-off-by: Rohan McLure <rmclure@linux.ibm.com>
      Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
      [mpe: Make incompatible with COMPAT to retain clearing of high bits of args]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20220921065605.1051927-22-rmclure@linux.ibm.com
  14. Sep 26, 2022
  15. Sep 12, 2022
    • arch: mm: rename FORCE_MAX_ZONEORDER to ARCH_FORCE_MAX_ORDER · 0192445c
      Zi Yan authored
      This Kconfig option is used by individual arches to set their desired
      MAX_ORDER. Rename it to reflect its actual use.
      
      Link: https://lkml.kernel.org/r/20220815143959.1511278-1-zi.yan@sent.com
      
      
      Acked-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Zi Yan <ziy@nvidia.com>
      Acked-by: Guo Ren <guoren@kernel.org>			[csky]
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>	[arm64]
      Acked-by: Huacai Chen <chenhuacai@kernel.org>		[LoongArch]
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>		[powerpc]
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Taichi Sugaya <sugaya.taichi@socionext.com>
      Cc: Neil Armstrong <narmstrong@baylibre.com>
      Cc: Qin Jian <qinjian@cqplus1.com>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Dinh Nguyen <dinguyen@kernel.org>
      Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  16. Jul 28, 2022
  17. Jul 27, 2022
  18. Jul 25, 2022
  19. Jul 21, 2022
    • mmu_gather: Remove per arch tlb_{start,end}_vma() · 1e9fdf21
      Peter Zijlstra authored
      
      Scattered across the archs are 3 basic forms of tlb_{start,end}_vma().
      Provide two new MMU_GATHER knobs to enumerate them and remove the
      per-arch tlb_{start,end}_vma() implementations.
      
       - MMU_GATHER_NO_FLUSH_CACHE indicates the arch has flush_cache_range()
         but does *NOT* want to call it for each VMA.
      
       - MMU_GATHER_MERGE_VMAS indicates the arch wants to merge the
         invalidate across multiple VMAs if possible.
      
      With these it is possible to capture the three forms:
      
        1) empty stubs;
           select MMU_GATHER_NO_FLUSH_CACHE and MMU_GATHER_MERGE_VMAS
      
        2) start: flush_cache_range(), end: empty;
           select MMU_GATHER_MERGE_VMAS
      
        3) start: flush_cache_range(), end: flush_tlb_range();
           default
      
      Obviously, if the architecture does not have flush_cache_range() then
      it also doesn't need to select MMU_GATHER_NO_FLUSH_CACHE.
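      The three forms above can be modelled in userspace C. A minimal sketch
      under stated assumptions: the flush functions are stand-in counters and
      the knobs are modelled as plain preprocessor defines rather than
      Kconfig selects:

      ```c
      #include <assert.h>

      static int cache_flushes, tlb_flushes;
      static void flush_cache_range(void) { cache_flushes++; }
      static void flush_tlb_range(void)   { tlb_flushes++; }

      /* Toggle these to model the two Kconfig knobs. With only
       * MMU_GATHER_MERGE_VMAS defined we get form 2: start flushes the
       * cache, end is empty. Defining both gives form 1 (empty stubs);
       * defining neither gives form 3 (the default). */
      #define MMU_GATHER_MERGE_VMAS 1
      /* #define MMU_GATHER_NO_FLUSH_CACHE 1 */

      static void tlb_start_vma(void)
      {
      #ifndef MMU_GATHER_NO_FLUSH_CACHE
          flush_cache_range();
      #endif
      }

      static void tlb_end_vma(void)
      {
      #ifndef MMU_GATHER_MERGE_VMAS
          flush_tlb_range();
      #endif
      }

      int main(void)
      {
          (void)flush_tlb_range; /* unused in form 2 */
          tlb_start_vma();
          tlb_end_vma();
          /* Form 2: one cache flush per VMA, TLB invalidate merged later. */
          assert(cache_flushes == 1 && tlb_flushes == 0);
          return 0;
      }
      ```
      
      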
      
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Will Deacon <will@kernel.org>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  20. Jul 18, 2022
    • random: remove CONFIG_ARCH_RANDOM · 9592eef7
      Jason A. Donenfeld authored
      
      When RDRAND was introduced, there was much discussion on whether it
      should be trusted and how the kernel should handle that. Initially, two
      mechanisms cropped up, CONFIG_ARCH_RANDOM, a compile time switch, and
      "nordrand", a boot-time switch.
      
      Later the thinking evolved. With a properly designed RNG, using RDRAND
      values alone won't harm anything, even if the outputs are malicious.
      Rather, the issue is whether those values are being *trusted* to be good
      or not. And so a new set of options was introduced as the real
      ones that people use -- CONFIG_RANDOM_TRUST_CPU and "random.trust_cpu".
      With these options, RDRAND is used, but it's not always credited. So in
      the worst case, it does nothing, and in the best case, maybe it helps.
      
      Along the way, CONFIG_ARCH_RANDOM's meaning got sort of pulled into the
      center and became something certain platforms force-select.
      
      The old options don't really help with much, and it's a bit odd to have
      special handling for these instructions when the kernel can deal fine
      with the existence or untrusted existence or broken existence or
      non-existence of that CPU capability.
      
      Simplify the situation by removing CONFIG_ARCH_RANDOM and using the
      ordinary asm-generic fallback pattern instead, keeping the two options
      that are actually used. This leaves "nordrand" alone for now, as its
      removal will take a different route.
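      The asm-generic fallback pattern can be sketched in userspace C. The
      function name mirrors the arch RNG hooks but the exact kernel signature
      may differ; this is an illustrative model, not the actual header:

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* If the arch header defined arch_get_random_long as a macro (or
       * provided its own inline), this generic stub is skipped. Otherwise
       * the stub reports "no entropy available" and callers fall back to
       * the ordinary entropy pool, with no Kconfig switch needed. */
      #ifndef arch_get_random_long
      static inline bool arch_get_random_long(unsigned long *v)
      {
          (void)v;
          return false; /* no arch RNG present */
      }
      #endif

      int main(void)
      {
          unsigned long v = 0;
          /* Generic fallback: never claims to have produced a value. */
          assert(!arch_get_random_long(&v));
          return 0;
      }
      ```
      
      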
      
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Borislav Petkov <bp@suse.de>
      Acked-by: Heiko Carstens <hca@linux.ibm.com>
      Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    • mm/mmap: drop ARCH_HAS_VM_GET_PAGE_PROT · 3d923c5f
      Anshuman Khandual authored
      Now all the platforms enable ARCH_HAS_VM_GET_PAGE_PROT.  They define and
      export their own vm_get_page_prot(), whether custom or via the standard
      DECLARE_VM_GET_PAGE_PROT.  Hence there is no need for a default generic
      fallback for vm_get_page_prot().  Just drop this fallback and also the
      ARCH_HAS_VM_GET_PAGE_PROT mechanism.
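      The standard lookup that DECLARE_VM_GET_PAGE_PROT expands to can be
      modelled in userspace. A minimal sketch: the pgprot_t typedef, the flag
      values and the table contents are illustrative stand-ins, not the
      kernel's definitions:

      ```c
      #include <assert.h>

      typedef unsigned long pgprot_t;

      #define VM_READ   0x1UL
      #define VM_WRITE  0x2UL
      #define VM_EXEC   0x4UL
      #define VM_SHARED 0x8UL

      /* Each platform owns a protection_map[] with one entry per
       * combination of the four flags above; vm_get_page_prot() is just
       * the table lookup by the low vm_flags bits. */
      static pgprot_t protection_map[16] = {
          [VM_READ]            = 0x11, /* read-only */
          [VM_READ | VM_WRITE] = 0x33, /* private read/write */
          /* remaining entries left 0 ("no access") for brevity */
      };

      static pgprot_t vm_get_page_prot(unsigned long vm_flags)
      {
          return protection_map[vm_flags & (VM_READ | VM_WRITE |
                                            VM_EXEC | VM_SHARED)];
      }

      int main(void)
      {
          assert(vm_get_page_prot(VM_READ) == 0x11);
          assert(vm_get_page_prot(VM_READ | VM_WRITE) == 0x33);
          return 0;
      }
      ```
      
      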
      
      Link: https://lkml.kernel.org/r/20220711070600.2378316-27-anshuman.khandual@arm.com
      
      
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Brian Cain <bcain@quicinc.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Dinh Nguyen <dinguyen@kernel.org>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: Huacai Chen <chenhuacai@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@kernel.org>
      Cc: WANG Xuerui <kernel@xen0n.name>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • powerpc/mm: move protection_map[] inside the platform · 6eac1eaf
      Anshuman Khandual authored
      This moves protection_map[] inside the platform and, while here, also
      enables ARCH_HAS_VM_GET_PAGE_PROT on 32-bit and nohash 64 (aka
      book3e/64) platforms via DECLARE_VM_GET_PAGE_PROT.
      
      Link: https://lkml.kernel.org/r/20220711070600.2378316-4-anshuman.khandual@arm.com
      
      
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Brian Cain <bcain@quicinc.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Dinh Nguyen <dinguyen@kernel.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: Huacai Chen <chenhuacai@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vineet Gupta <vgupta@kernel.org>
      Cc: WANG Xuerui <kernel@xen0n.name>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  21. Jun 30, 2022
    • context_tracking: Split user tracking Kconfig · 24a9c541
      Frederic Weisbecker authored
      
      Context tracking is going to be used not only to track user transitions
      but also idle/IRQs/NMIs. The user tracking part will then become a
      separate feature. Prepare Kconfig for that.
      
      [ frederic: Apply Max Filippov feedback. ]
      
      Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
      Cc: Uladzislau Rezki <uladzislau.rezki@sony.com>
      Cc: Joel Fernandes <joel@joelfernandes.org>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Nicolas Saenz Julienne <nsaenz@kernel.org>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Xiongfeng Wang <wangxiongfeng2@huawei.com>
      Cc: Yu Liao <liaoyu15@huawei.com>
      Cc: Phil Auld <pauld@redhat.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Alex Belits <abelits@marvell.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
      Reviewed-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
      Tested-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
  22. Jun 29, 2022