userfaultfd: use ENOENT instead of EFAULT if the atomic copy user fails
commit 9e36825 upstream.

Patch series "userfaultfd shmem updates".

Jann found two bugs in the userfaultfd shmem MAP_SHARED backend: the
lack of the VM_MAYWRITE check and the lack of i_size checks.

Then, while looking into the above, we also fixed the MAP_PRIVATE case.

Hugh, by source review, also found a source of data loss if UFFDIO_COPY is
used on shmem MAP_SHARED PROT_READ mappings (the production usages, such as
QEMU, incidentally run with PROT_READ|PROT_WRITE, so the data loss could not
happen there).

The whole patchset is marked for stable.

We verified that QEMU postcopy live migration with the guest running on
shmem MAP_PRIVATE runs as well after the shmem MAP_PRIVATE fix as it did
before. Regardless of whether it is shmem or hugetlbfs, MAP_PRIVATE or
MAP_SHARED, QEMU unconditionally invokes a punch hole if the guest mapping
is file-backed, and a MADV_DONTNEED too (needed to get rid of the
MAP_PRIVATE COWs and for the anon backend).
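
As an illustration of that discard sequence (a minimal sketch, not QEMU
source; the function and parameter names below are made up for the example,
and error handling is reduced to a bare return):

    #define _GNU_SOURCE
    #include <fcntl.h>      /* fallocate(), FALLOC_FL_PUNCH_HOLE */
    #include <sys/mman.h>   /* madvise(), MADV_DONTNEED */
    #include <stddef.h>     /* size_t */

    /* Discard one range of file-backed guest RAM before postcopy. */
    static int discard_guest_ram(int fd, void *hva, off_t offset, size_t len)
    {
            /* Punch a hole so shmem/hugetlbfs drops the backing pages. */
            if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                          offset, len))
                    return -1;

            /* Also drop the MAP_PRIVATE COWs (and the anon backend). */
            if (madvise(hva, len, MADV_DONTNEED))
                    return -1;

            return 0;
    }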

This patch (of 5):

We internally used EFAULT to communicate with the caller; switch to ENOENT
so that EFAULT can be used as a non-internal retval.
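
For context, the caller side this enables looks roughly as follows
(paraphrased from __mcopy_atomic() in mm/userfaultfd.c; a sketch of the
surrounding kernel code, not a standalone program): -ENOENT now means only
"retry the fill with copy_from_user outside mmap_sem", so -EFAULT is free
to report a genuine user copy failure.

    err = mfill_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
                           src_addr, &page, zeropage);
    cond_resched();

    if (unlikely(err == -ENOENT)) {
            void *page_kaddr;

            /* Drop mmap_sem so the user copy can fault pages in. */
            up_read(&dst_mm->mmap_sem);
            BUG_ON(!page);

            page_kaddr = kmap(page);
            err = copy_from_user(page_kaddr,
                                 (const void __user *) src_addr, PAGE_SIZE);
            kunmap(page);
            if (unlikely(err)) {
                    err = -EFAULT;  /* genuine copy failure */
                    goto out;
            }
            goto retry;             /* retake mmap_sem and retry the fill */
    }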

Link: http://lkml.kernel.org/r/[email protected]
Fixes: 4c27fe4 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
Signed-off-by: Andrea Arcangeli <[email protected]>
Reviewed-by: Mike Rapoport <[email protected]>
Reviewed-by: Hugh Dickins <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Jann Horn <[email protected]>
Cc: Peter Xu <[email protected]>
Cc: "Dr. David Alan Gilbert" <[email protected]>
Cc: <[email protected]>
Cc: [email protected]
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
aagit authored and gregkh committed Dec 8, 2018
1 parent a88a1b3 commit 82c5a8c
Showing 3 changed files with 5 additions and 5 deletions.
2 changes: 1 addition & 1 deletion mm/hugetlb.c
@@ -4037,7 +4037,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,

/* fallback to copy_from_user outside mmap_sem */
if (unlikely(ret)) {
- ret = -EFAULT;
+ ret = -ENOENT;
*pagep = page;
/* don't free the page */
goto out;
2 changes: 1 addition & 1 deletion mm/shmem.c
@@ -2266,7 +2266,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
*pagep = page;
shmem_inode_unacct_blocks(inode, 1);
/* don't free the page */
- return -EFAULT;
+ return -ENOENT;
}
} else { /* mfill_zeropage_atomic */
clear_highpage(page);
6 changes: 3 additions & 3 deletions mm/userfaultfd.c
@@ -49,7 +49,7 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,

/* fallback to copy_from_user outside mmap_sem */
if (unlikely(ret)) {
- ret = -EFAULT;
+ ret = -ENOENT;
*pagep = page;
/* don't free the page */
goto out;
@@ -275,7 +275,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,

cond_resched();

- if (unlikely(err == -EFAULT)) {
+ if (unlikely(err == -ENOENT)) {
up_read(&dst_mm->mmap_sem);
BUG_ON(!page);

@@ -521,7 +521,7 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
src_addr, &page, zeropage);
cond_resched();

- if (unlikely(err == -EFAULT)) {
+ if (unlikely(err == -ENOENT)) {
void *page_kaddr;

up_read(&dst_mm->mmap_sem);
