Merge tag 'bpf_try_alloc_pages' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Pull bpf try_alloc_pages() support from Alexei Starovoitov:

"The pull includes work from Sebastian, Vlastimil and myself, with a lot of help from Michal and Shakeel. This is a first step towards making kmalloc reentrant, to get rid of slab wrappers: bpf_mem_alloc, kretprobe's objpool, etc. These patches make the page allocator safe to call from any context.

Vlastimil kicked off this effort at LSFMM 2024:
  https://lwn.net/Articles/974138/
and we continued at LSFMM 2025:
  https://lore.kernel.org/all/CAADnVQKfkGxudNUkcPJgwe3nTZ=xohnRshx9kLZBTmR_E1DFEg@mail.gmail.com/

Why: SLAB wrappers bind memory to a particular subsystem, making it unavailable to the rest of the kernel. Some BPF maps in production consume gigabytes of preallocated memory; the top 5 at Meta weigh in at 1.5G, 1.2G, 1.1G, 300M and 200M. Once we have a kmalloc that works in any context, BPF map preallocation won't be necessary.

How: The synchronous kmalloc/page-alloc stack has multiple stages going from fast to slow: cmpxchg16 -> slab_alloc -> new_slab -> alloc_pages -> rmqueue_pcplist -> __rmqueue, where rmqueue_pcplist was already relying on trylock. This set changes rmqueue_bulk/rmqueue_buddy to attempt a trylock and return ENOMEM if alloc_flags & ALLOC_TRYLOCK, then wraps this functionality into the try_alloc_pages() helper. We make sure that the logic is sane on PREEMPT_RT.

End result: try_alloc_pages()/free_pages_nolock() are safe to call from any context. A try_kmalloc() for any context, using a similar trylock approach, will follow; it will use try_alloc_pages() when slab needs a new page. Though such a try_kmalloc()/page_alloc() is an opportunistic allocator, this design ensures that the probability of successfully allocating small objects (up to one page in size) is high.
Even before we have try_kmalloc(), we already use try_alloc_pages() in the BPF arena implementation, and it's going to be used more extensively in BPF."

* tag 'bpf_try_alloc_pages' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next:
  mm: Fix the flipped condition in gfpflags_allow_spinning()
  bpf: Use try_alloc_pages() to allocate pages for bpf needs.
  mm, bpf: Use memcg in try_alloc_pages().
  memcg: Use trylock to access memcg stock_lock.
  mm, bpf: Introduce free_pages_nolock()
  mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation
  locking/local_lock: Introduce localtry_lock_t
Files changed:
- include/linux/bpf.h (1 addition, 1 deletion)
- include/linux/gfp.h (23 additions, 0 deletions)
- include/linux/local_lock.h (70 additions, 0 deletions)
- include/linux/local_lock_internal.h (146 additions, 0 deletions)
- include/linux/mm_types.h (4 additions, 0 deletions)
- include/linux/mmzone.h (3 additions, 0 deletions)
- kernel/bpf/arena.c (2 additions, 3 deletions)
- kernel/bpf/syscall.c (20 additions, 3 deletions)
- lib/stackdepot.c (7 additions, 3 deletions)
- mm/internal.h (1 addition, 0 deletions)
- mm/memcontrol.c (39 additions, 18 deletions)
- mm/page_alloc.c (188 additions, 15 deletions)
- mm/page_owner.c (7 additions, 1 deletion)