Message ID | 20190308224823.15051-3-mike.kravetz@oracle.com (mailing list archive) |
---|---|
State | New, archived |
Series | A couple hugetlbfs fixes |
On 3/8/19 2:48 PM, Mike Kravetz wrote:
>  mm/hugetlb.c | 9 ++-------
>  1 file changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 64ef640126cd..0527732c71f0 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3904,13 +3904,8 @@ u32 hugetlb_fault_mutex_hash(struct hstate *h, struct mm_struct *mm,
> 	unsigned long key[2];
> 	u32 hash;
>
> -	if (vma->vm_flags & VM_SHARED) {
> -		key[0] = (unsigned long) mapping;
> -		key[1] = idx;
> -	} else {
> -		key[0] = (unsigned long) mm;
> -		key[1] = address >> huge_page_shift(h);
> -	}
> +	key[0] = (unsigned long) mapping;
> +	key[1] = idx;
>
> 	hash = jhash2((u32 *)&key, sizeof(key)/sizeof(u32), 0);

Duh!  If we no longer use mm and address they can be dropped from the
function arguments and all callers.

Before doing that, let's see if there is any objection to using the same
key for shared and private mappings.
hugetlb uses a fault mutex hash table to prevent concurrent page faults
on the same pages. The keys for shared and private mappings differ:
shared mappings key off the address_space and file index, while private
mappings key off the mm and virtual address.

Consider a private mapping of a populated hugetlbfs file. A write fault
will first map in the page from the file and then do a COW to map a
writable page. Hugetlbfs hole punch uses the fault mutex to prevent
mappings of file pages while the hole is punched; it uses the
address_space/file index key. However, a private mapping uses a
different key and could temporarily map the file page before COW. This
causes problems (BUG) for the hole punch code, which expects the mutex
to prevent additional uses/mappings of the page. There also appears to
be another potential COW issue/race with this scheme of different
private and shared keys, as noted in commit 8382d914ebf7 ("mm, hugetlb:
improve page-fault scalability").

Since every hugetlb mapping (even anon and private) is actually a file
mapping, just use the address_space index key for all mappings. This
results in potentially more hash collisions, but that should not be the
common case.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/hugetlb.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 64ef640126cd..0527732c71f0 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3904,13 +3904,8 @@ u32 hugetlb_fault_mutex_hash(struct hstate *h, struct mm_struct *mm,
 	unsigned long key[2];
 	u32 hash;
 
-	if (vma->vm_flags & VM_SHARED) {
-		key[0] = (unsigned long) mapping;
-		key[1] = idx;
-	} else {
-		key[0] = (unsigned long) mm;
-		key[1] = address >> huge_page_shift(h);
-	}
+	key[0] = (unsigned long) mapping;
+	key[1] = idx;
 
 	hash = jhash2((u32 *)&key, sizeof(key)/sizeof(u32), 0);