x86, pmem: Fix cache flushing for iovec write < 8 bytes
commit 8376efd upstream.

Commit 11e63f6 added cache flushing for unaligned writes from an
iovec, covering the first and last cache line of a >= 8 byte write and
the first cache line of a < 8 byte write.  But an unaligned write of
2-7 bytes can still cover two cache lines, so make sure we flush both
in that case.

Fixes: 11e63f6 ("x86, pmem: fix broken __copy_user_nocache ...")
Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: Dan Williams <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
bwh-ct authored and gregkh committed May 20, 2017
1 parent d34ecdc commit b8cd9dd
Showing 1 changed file with 1 addition and 1 deletion.
arch/x86/include/asm/pmem.h (2 changes: 1 addition & 1 deletion)
@@ -122,7 +122,7 @@ static inline size_t arch_copy_from_iter_pmem(void __pmem *addr, size_t bytes,
 
 	if (bytes < 8) {
 		if (!IS_ALIGNED(dest, 4) || (bytes != 4))
-			__arch_wb_cache_pmem(addr, 1);
+			__arch_wb_cache_pmem(addr, bytes);
 	} else {
 		if (!IS_ALIGNED(dest, 8)) {
 			dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);
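For illustration only (not part of the commit): a minimal userspace sketch, assuming 64-byte cache lines and a hypothetical destination address, that models the round-down-and-walk flush loop of __arch_wb_cache_pmem(). It shows why flushing a single byte misses the second cache line when an unaligned 2-7 byte write straddles a line boundary, while flushing the full 'bytes' range covers both lines.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define CACHE_LINE 64  /* assumed flush granularity (boot_cpu_data.x86_clflush_size on typical x86) */

/* Count the cache lines a flush of 'size' bytes starting at 'addr' touches,
 * modeled on __arch_wb_cache_pmem(): round the start down to a line
 * boundary, then walk line by line until addr + size is reached. */
static unsigned lines_flushed(uintptr_t addr, size_t size)
{
	uintptr_t p = addr & ~(uintptr_t)(CACHE_LINE - 1);
	uintptr_t end = addr + size;
	unsigned n = 0;

	for (; p < end; p += CACHE_LINE)
		n++;
	return n;
}

int main(void)
{
	uintptr_t dest = 0x1003e;  /* hypothetical address, 2 bytes short of a 64-byte boundary */
	size_t bytes = 4;          /* unaligned 4-byte write: spans two cache lines */

	printf("flush 1 byte    -> %u line(s)\n", lines_flushed(dest, 1));            /* old call: 1 */
	printf("flush %zu bytes  -> %u line(s)\n", bytes, lines_flushed(dest, bytes)); /* fixed call: 2 */
	return 0;
}

With these example values the old call, __arch_wb_cache_pmem(addr, 1), reaches only the first cache line, while the fixed call, __arch_wb_cache_pmem(addr, bytes), covers both, matching the scenario described in the commit message.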
