  1. Sep 07, 2017
    • BACKPORT: arm64: cpuinfo: Missing NULL terminator in compat_hwcap_str · 8d6c7f77
      Julien Grall authored
      
      The loop that browses the array compat_hwcap_str stops when a NULL
      is encountered; however, the NULL is missing at the end of the
      array. This leads to an overrun until a NULL is found somewhere in
      the following memory. In practice this works out, because the
      compat_hwcap2_str array tends to follow immediately in memory, and
      that *is* terminated correctly. Furthermore, the unsigned int
      compat_elf_hwcap is checked before printing each capability, so we
      end up doing the right thing because the combined size of the two
      arrays is less than 32 entries. Still, this is an obvious mistake
      and should be fixed.
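      
      The fix is a one-line change: append a NULL sentinel to the array.
      A minimal sketch, with the entries abbreviated and the surrounding
      code assumed to match arch/arm64/kernel/cpuinfo.c:
      
        static const char *const compat_hwcap_str[] = {
                "swp",
                "half",
                /* ... remaining 32-bit HWCAP names ... */
                "evtstrm",
                NULL    /* the terminator this commit adds */
        };
      
        /* The consumer in c_show() walks until it hits NULL, checking
         * each bit of compat_elf_hwcap before printing: */
        for (i = 0; compat_hwcap_str[i]; i++)
                if (compat_elf_hwcap & (1 << i))
                        seq_printf(m, " %s", compat_hwcap_str[i]);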
      
      Note for backporting: commit 12d11817 ("arm64: Move
      /proc/cpuinfo handling code") moved this code in v4.4. Prior to that
      commit, the same change should be made in arch/arm64/kernel/setup.c.
      
      Bug: 37430238
      Fixes: 44b82b77 "arm64: Fix up /proc/cpuinfo"
      Cc: <stable@vger.kernel.org> # v3.19+ (but see note above prior to v4.4)
      Signed-off-by: Julien Grall <julien.grall@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: David Lin <dtwlin@google.com>
  2. Aug 28, 2017
    • BACKPORT: dm bufio: don't take the lock in dm_bufio_shrink_count · ca50eff4
      Mikulas Patocka authored
      
      dm_bufio_shrink_count() is called from do_shrink_slab() to find out
      how many freeable objects there are. The reported value doesn't
      have to be precise, so we don't need to take the dm-bufio lock.
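      
      In effect, the shrinker's count_objects callback becomes a
      lock-free read of the client's buffer counters. A minimal sketch,
      assuming the driver's field names (n_buffers, LIST_CLEAN,
      LIST_DIRTY); a stale read is fine here since the count is only a
      hint for do_shrink_slab:
      
        static unsigned long
        dm_bufio_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
        {
                struct dm_bufio_client *c =
                        container_of(shrink, struct dm_bufio_client, shrinker);
      
                /* Racy by design: no dm_bufio_lock(), just a snapshot. */
                return READ_ONCE(c->n_buffers[LIST_CLEAN]) +
                       READ_ONCE(c->n_buffers[LIST_DIRTY]);
        }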
      
      Suggested-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      
      Bug: 64122284
      Change-Id: Id2c3446e03e865f424be8666b1ee0822b9e33a63
      (cherry picked from commit d12067f4)
      Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
    • BACKPORT: dm bufio: avoid sleeping while holding the dm_bufio lock · 3e9a4d68
      Douglas Anderson authored
      We've seen in-field reports showing _lots_ (18 in one case, 41 in
      another) of tasks all sitting there blocked on:
      
        mutex_lock+0x4c/0x68
        dm_bufio_shrink_count+0x38/0x78
        shrink_slab.part.54.constprop.65+0x100/0x464
        shrink_zone+0xa8/0x198
      
      In the two cases analyzed, we see one task that looks like this:
      
        Workqueue: kverityd verity_prefetch_io
      
        __switch_to+0x9c/0xa8
        __schedule+0x440/0x6d8
        schedule+0x94/0xb4
        schedule_timeout+0x204/0x27c
        schedule_timeout_uninterruptible+0x44/0x50
        wait_iff_congested+0x9c/0x1f0
        shrink_inactive_list+0x3a0/0x4cc
        shrink_lruvec+0x418/0x5cc
        shrink_zone+0x88/0x198
        try_to_free_pages+0x51c/0x588
        __alloc_pages_nodemask+0x648/0xa88
        __get_free_pages+0x34/0x7c
        alloc_buffer+0xa4/0x144
        __bufio_new+0x84/0x278
        dm_bufio_prefetch+0x9c/0x154
        verity_prefetch_io+0xe8/0x10c
        process_one_work+0x240/0x424
        worker_thread+0x2fc/0x424
        kthread+0x10c/0x114
      
      ...and that looks to be the one holding the mutex.
      
      The problem can be reproduced fairly easily:
      0. Be running Chrome OS w/ verity enabled on the root filesystem
      1. Pick test patch: http://crosreview.com/412360
      2. Install launchBalloons.sh and balloon.arm from
           http://crbug.com/468342
         ...that's just a memory stress test app.
      3. On a 4GB rk3399 machine, run
           nice ./launchBalloons.sh 4 900 100000
         ...that tries to eat 4 * 900 MB of memory and keeps accessing it.
      4. Login to the Chrome web browser and restore many tabs
      
      With that, I've seen printouts like:
        DOUG: long bufio 90758 ms
      ...and the stack trace always shows we're in dm_bufio_prefetch().
      
      The problem is that we try to allocate memory with GFP_NOIO while
      holding the dm_bufio lock. Instead, we should use GFP_NOWAIT:
      allocating with GFP_NOIO can sleep while holding the lock, and that
      causes the problems above.
      
      The current behavior, as explained by David Rientjes:
      
        It will still try reclaim initially because __GFP_WAIT (or
        __GFP_KSWAPD_RECLAIM) is set by GFP_NOIO.  This is the cause of
        contention on dm_bufio_lock() that the thread holds.  You want to
        pass GFP_NOWAIT instead of GFP_NOIO to alloc_buffer() when holding a
        mutex that can be contended by a concurrent slab shrinker (if
        count_objects didn't use a trylock, this pattern would trivially
        deadlock).
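      
      The shape of the fix, as a hedged sketch rather than the verbatim
      upstream diff (the helper names alloc_buffer and
      dm_bufio_lock/unlock follow the driver; the retry logic around the
      calls is simplified): try an opportunistic non-sleeping allocation
      under the mutex, and only drop the lock for the allocation that may
      enter reclaim:
      
        static struct dm_buffer *alloc_buffer_locked(struct dm_bufio_client *c)
        {
                struct dm_buffer *b;
      
                /* Fast path: GFP_NOWAIT never sleeps, so it is safe to
                 * call while holding dm_bufio_lock. */
                b = alloc_buffer(c, GFP_NOWAIT | __GFP_NOMEMALLOC | __GFP_NOWARN);
                if (b)
                        return b;
      
                /* Slow path: drop the lock so reclaim may sleep without
                 * blocking the shrinker, kswapd, or other lock holders. */
                dm_bufio_unlock(c);
                b = alloc_buffer(c, GFP_NOIO | __GFP_NOMEMALLOC | __GFP_NOWARN);
                dm_bufio_lock(c);
      
                return b;
        }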
      
      This change significantly increases responsiveness of the system while
      in this state.  It makes a real difference because it unblocks kswapd.
      In the bug report analyzed, kswapd was hung:
      
         kswapd0         D ffffffc000204fd8     0    72      2 0x00000000
         Call trace:
         [<ffffffc000204fd8>] __switch_to+0x9c/0xa8
         [<ffffffc00090b794>] __schedule+0x440/0x6d8
         [<ffffffc00090bac0>] schedule+0x94/0xb4
         [<ffffffc00090be44>] schedule_preempt_disabled+0x28/0x44
         [<ffffffc00090d900>] __mutex_lock_slowpath+0x120/0x1ac
         [<ffffffc00090d9d8>] mutex_lock+0x4c/0x68
         [<ffffffc000708e7c>] dm_bufio_shrink_count+0x38/0x78
         [<ffffffc00030b268>] shrink_slab.part.54.constprop.65+0x100/0x464
         [<ffffffc00030dbd8>] shrink_zone+0xa8/0x198
         [<ffffffc00030e578>] balance_pgdat+0x328/0x508
         [<ffffffc00030eb7c>] kswapd+0x424/0x51c
         [<ffffffc00023f06c>] kthread+0x10c/0x114
         [<ffffffc000203dd0>] ret_from_fork+0x10/0x40
      
      By unblocking kswapd, memory pressure should be reduced.
      
      Suggested-by: David Rientjes <rientjes@google.com>
      Reviewed-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Douglas Anderson <dianders@chromium.org>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      
      Bug: 64122284
      Change-Id: I1ce9367c921d7ab07ca9e3d403c95cd0d333915c
      (cherry picked from commit 9ea61cac)
      Signed-off-by: Sami Tolvanen <samitolvanen@google.com>