Commit 0675d0fc authored by Carlos Llamas, committed by Treehugger Robot

FROMLIST: binder: fix race between mmput() and do_exit()


Task A calls binder_update_page_range() to allocate and insert pages in
a remote address space belonging to Task B. For this, Task A first pins
the remote mm via mmget_not_zero(). This can race with Task B's
do_exit(), in which case the final mmput() refcount decrement comes
from Task A.

  Task A            | Task B
  ------------------+------------------
  mmget_not_zero()  |
                    |  do_exit()
                    |    exit_mm()
                    |      mmput()
  mmput()           |
    exit_mmap()     |
      remove_vma()  |
        fput()      |
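
For reference, the pinning pattern in binder_update_page_range() looks
roughly as follows. This is a trimmed-down sketch rather than the exact
code; the need_mm flag and the alloc->mm / alloc->vma fields are
approximations that vary across kernel versions.

  static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
                                      void __user *start, void __user *end)
  {
          struct vm_area_struct *vma = NULL;
          struct mm_struct *mm = NULL;
          /* ... */
          /* pin the remote mm so it cannot go away while we work on it */
          if (need_mm && mmget_not_zero(alloc->mm))
                  mm = alloc->mm;

          if (mm) {
                  mmap_write_lock(mm);
                  vma = alloc->vma;
          }
          /* ... allocate pages and insert them into the remote vma ... */
          if (mm) {
                  mmap_write_unlock(mm);
                  mmput(mm);      /* may be the final reference drop */
          }
          return 0;
  }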

In this case, the ____fput() work for Task B's files is queued in Task A
as TWA_RESUME task work. In theory, Task A then returns to userspace and
the cleanup work gets executed. However, Task A instead sleeps, waiting
for a reply from Task B that never comes (Task B is dead).
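
The cleanup is tied to a return to userspace because fput() from a
regular task defers ____fput() via task work. Roughly, paraphrased from
fs/file_table.c (field names vary across kernel versions):

  void fput(struct file *file)
  {
          if (atomic_long_dec_and_test(&file->f_count)) {
                  struct task_struct *task = current;

                  if (likely(!in_interrupt() && !(task->flags & PF_KTHREAD))) {
                          init_task_work(&file->f_rcuhead, ____fput);
                          /* ____fput() runs when 'task' returns to userspace */
                          if (!task_work_add(task, &file->f_rcuhead, TWA_RESUME))
                                  return;
                  }
                  /* fallback for kthreads, interrupts and exiting tasks */
                  if (llist_add(&file->f_llist, &delayed_fput_list))
                          schedule_delayed_work(&delayed_fput_work, 1);
          }
  }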

This means binder_deferred_release() is blocked until an unrelated
binder event forces Task A back to userspace. All the associated death
notifications are also delayed until then.

To fix this, use mmput_async(), which schedules the work on the
corresponding mm->async_put_work workqueue instead of in Task A's
context.
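
For context, mmput_async() pushes the final __mmput() (and with it
exit_mmap() and the fput() of mapped files) onto a workqueue instead of
running it in the caller. Roughly, paraphrased from kernel/fork.c;
exact details may differ by kernel version:

  static void mmput_async_fn(struct work_struct *work)
  {
          struct mm_struct *mm = container_of(work, struct mm_struct,
                                              async_put_work);

          __mmput(mm);
  }

  void mmput_async(struct mm_struct *mm)
  {
          if (atomic_dec_and_test(&mm->mm_users)) {
                  /* run __mmput() from a workqueue, not the caller's task */
                  INIT_WORK(&mm->async_put_work, mmput_async_fn);
                  schedule_work(&mm->async_put_work);
          }
  }

Because the workqueue worker is a kthread, the subsequent fput() of the
mapped files then goes through the delayed fput path instead of being
queued on the sleeping Task A.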

Fixes: 457b9a6f ("Staging: android: add binder driver")
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>

Bug: 293845143
Link: https://lore.kernel.org/all/20231201172212.1813387-4-cmllamas@google.com/


Change-Id: I2ec43b375e115c0daf21df3893da634dbefeed3e
Signed-off-by: Carlos Llamas <cmllamas@google.com>
parent 0145780b
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -272,7 +272,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
 	}
 	if (mm) {
 		mmap_write_unlock(mm);
-		mmput(mm);
+		mmput_async(mm);
 	}
 	return 0;
 
@@ -305,7 +305,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
 err_no_vma:
 	if (mm) {
 		mmap_write_unlock(mm);
-		mmput(mm);
+		mmput_async(mm);
 	}
 	return vma ? -ENOMEM : -ESRCH;
 }