Message ID | 20171205033210.38338-1-yi.zhang@huawei.com (mailing list archive) |
---|---|
State | New, archived |
diff --git a/fs/dax.c b/fs/dax.c
index 78b72c4..8e12848 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -539,10 +539,11 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
 		/* we are replacing a zero page with block mapping */
 		if (dax_is_pmd_entry(entry))
 			unmap_mapping_range(mapping,
-					(vmf->pgoff << PAGE_SHIFT) & PMD_MASK,
+					((loff_t)vmf->pgoff << PAGE_SHIFT) & PMD_MASK,
 					PMD_SIZE, 0);
 		else /* pte entry */
-			unmap_mapping_range(mapping, vmf->pgoff << PAGE_SHIFT,
+			unmap_mapping_range(mapping,
+					(loff_t)vmf->pgoff << PAGE_SHIFT,
 					PAGE_SIZE, 0);
 	}
On a 32-bit machine, mmap2()-ing a large enough file with a pgoff greater than ULONG_MAX >> PAGE_SHIFT triggers an offset overflow and leads to unmapping the wrong page in dax_insert_mapping_entry(). This patch casts pgoff to 64 bits to prevent the overflow.

Signed-off-by: zhangyi (F) <yi.zhang@huawei.com>
---
 fs/dax.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
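For context, the wrap-around is easy to reproduce outside the kernel. The sketch below is a minimal user-space illustration, not the kernel code path: uint32_t stands in for the 32-bit unsigned long used for pgoff, and the PAGE_SHIFT and pgoff values are assumptions chosen for the example.

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12			/* assume 4 KiB pages */

int main(void)
{
	/*
	 * uint32_t models the page offset (unsigned long) on a 32-bit
	 * kernel.  This value exceeds ULONG_MAX >> PAGE_SHIFT (0xfffff
	 * there), i.e. it corresponds to a file offset of 8 GiB.
	 */
	uint32_t pgoff = 0x00200000;

	/* Shift performed in 32 bits: the high bits are lost (wraps to 0). */
	uint32_t wrapped = pgoff << PAGE_SHIFT;

	/* Widen to 64 bits before shifting, as the patch does with (loff_t). */
	uint64_t widened = (uint64_t)pgoff << PAGE_SHIFT;

	printf("without cast: 0x%llx\n", (unsigned long long)wrapped);
	printf("with cast:    0x%llx\n", (unsigned long long)widened);
	return 0;
}
```

With the truncated value, unmap_mapping_range() in dax_insert_mapping_entry() would be handed an offset belonging to a different page than the one being faulted, which is the wrong-page unmap described above; widening before the shift preserves the full byte offset.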