Commit 08b72979 authored by Michal Schmidt

octeontx2-pf: Avoid use of GFP_KERNEL in atomic context

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2040643



commit 87b93b67
Author: Geetha sowjanya <gakula@marvell.com>
Date:   Fri Jan 13 11:49:02 2023 +0530

    octeontx2-pf: Avoid use of GFP_KERNEL in atomic context

    Using GFP_KERNEL in a preemption-disabled context causes the warning
    below when CONFIG_DEBUG_ATOMIC_SLEEP is enabled; a sketch of the
    offending pattern follows the trace.

    [   32.542271] BUG: sleeping function called from invalid context at include/linux/sched/mm.h:274
    [   32.550883] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 1, name: swapper/0
    [   32.558707] preempt_count: 1, expected: 0
    [   32.562710] RCU nest depth: 0, expected: 0
    [   32.566800] CPU: 3 PID: 1 Comm: swapper/0 Tainted: G        W          6.2.0-rc2-00269-gae9dcb91c606 #7
    [   32.576188] Hardware name: Marvell CN106XX board (DT)
    [   32.581232] Call trace:
    [   32.583670]  dump_backtrace.part.0+0xe0/0xf0
    [   32.587937]  show_stack+0x18/0x30
    [   32.591245]  dump_stack_lvl+0x68/0x84
    [   32.594900]  dump_stack+0x18/0x34
    [   32.598206]  __might_resched+0x12c/0x160
    [   32.602122]  __might_sleep+0x48/0xa0
    [   32.605689]  __kmem_cache_alloc_node+0x2b8/0x2e0
    [   32.610301]  __kmalloc+0x58/0x190
    [   32.613610]  otx2_sq_aura_pool_init+0x1a8/0x314
    [   32.618134]  otx2_open+0x1d4/0x9d0
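
    The offending pattern looks roughly like the sketch below. This is
    illustrative only, not the driver's actual code; buggy_example() is a
    made-up name.

        /* Sketch: a GFP_KERNEL allocation inside a preempt-disabled region.
         * Needs <linux/slab.h>, <linux/smp.h>, <linux/errno.h>.
         */
        static int buggy_example(void)
        {
                u64 *ptrs;
                int cpu;

                cpu = get_cpu();        /* disables preemption */

                /* kmalloc(GFP_KERNEL) may sleep; with preemption disabled,
                 * CONFIG_DEBUG_ATOMIC_SLEEP emits the splat shown above.
                 */
                ptrs = kmalloc(128 * sizeof(u64), GFP_KERNEL);
                if (!ptrs) {
                        put_cpu();
                        return -ENOMEM;
                }

                /* ... set up per-CPU LMTST state for 'cpu' ... */

                put_cpu();              /* re-enables preemption */
                kfree(ptrs);
                return 0;
        }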

    To avoid having to use GFP_ATOMIC for these allocations, disable
    preemption only after all memory allocation is done.
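
    The fixed shape, sketched here with a hypothetical fixed_example()
    (again not the literal driver change), performs every sleepable
    allocation before entering the preempt-disabled section:

        static int fixed_example(void)
        {
                u64 *ptrs;
                int cpu;

                /* Sleepable context: GFP_KERNEL is safe here. */
                ptrs = kmalloc(128 * sizeof(u64), GFP_KERNEL);
                if (!ptrs)
                        return -ENOMEM;

                cpu = get_cpu();        /* disable preemption last */
                /* ... touch per-CPU LMTST state; nothing here may sleep ... */
                put_cpu();

                kfree(ptrs);
                return 0;
        }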

    Fixes: 4af1b64f ("octeontx2-pf: Fix lmtst ID used in aura free")
    Signed-off-by: Geetha sowjanya <gakula@marvell.com>
    Signed-off-by: Sunil Kovvuri Goutham <sgoutham@marvell.com>
    Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

Signed-off-by: Michal Schmidt <mschmidt@redhat.com>
parent ed360ae7