From patchwork Mon Feb 24 20:30:41 2020
X-Patchwork-Submitter: Michel Lespinasse
X-Patchwork-Id: 11401499
Date: Mon, 24 Feb 2020 12:30:41 -0800
In-Reply-To: <20200224203057.162467-1-walken@google.com>
Message-Id: <20200224203057.162467-9-walken@google.com>
References: <20200224203057.162467-1-walken@google.com>
Subject: [RFC PATCH 08/24] mm/memory: allow specifying MM lock range to handle_mm_fault()
From: Michel Lespinasse
To: Peter Zijlstra, Andrew Morton, Laurent Dufour, Vlastimil Babka,
 Matthew Wilcox, "Liam R . Howlett", Jerome Glisse, Davidlohr Bueso,
 David Rientjes
Cc: linux-mm, Michel Lespinasse

This change adds a new handle_mm_fault_range() function, which behaves
like handle_mm_fault() but takes an explicit MM lock range argument.
handle_mm_fault() remains as an inline wrapper which passes the default
coarse locking range.
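For illustration only (not part of this patch): a minimal sketch of the
intended calling convention. It assumes the struct mm_lock_range type and
mm_coarse_lock_range() introduced earlier in this series; the
mm_init_lock_range() helper used below to describe a per-VMA range is
hypothetical.

/*
 * Sketch only, not part of the patch. mm_init_lock_range() is a
 * hypothetical helper that records the [start, end) address range
 * the caller intends to hold the MM lock over.
 */
static vm_fault_t example_fault(struct vm_area_struct *vma,
				unsigned long address, unsigned int flags)
{
	struct mm_lock_range range;

	/* Describe a fine-grained range covering just the faulting VMA. */
	mm_init_lock_range(&range, vma->vm_start, vma->vm_end);

	/* New entry point: pass the explicit lock range. */
	return handle_mm_fault_range(vma, address, flags, &range);
}

/*
 * Existing callers are unchanged: handle_mm_fault(vma, address, flags)
 * is now an inline wrapper that forwards mm_coarse_lock_range().
 */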
Signed-off-by: Michel Lespinasse
---
 include/linux/hugetlb.h |  5 +++--
 include/linux/mm.h      | 11 +++++++++--
 mm/hugetlb.c            | 14 +++++++++-----
 mm/memory.c             | 16 +++++++++-------
 4 files changed, 30 insertions(+), 16 deletions(-)

diff --git include/linux/hugetlb.h include/linux/hugetlb.h
index 31d4920994b9..75992d78289e 100644
--- include/linux/hugetlb.h
+++ include/linux/hugetlb.h
@@ -88,7 +88,8 @@ int hugetlb_report_node_meminfo(int, char *);
 void hugetlb_show_meminfo(void);
 unsigned long hugetlb_total_pages(void);
 vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
-			unsigned long address, unsigned int flags);
+			unsigned long address, unsigned int flags,
+			struct mm_lock_range *range);
 int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm, pte_t *dst_pte,
 				struct vm_area_struct *dst_vma,
 				unsigned long dst_addr,
@@ -307,7 +308,7 @@ static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
 
 static inline vm_fault_t hugetlb_fault(struct mm_struct *mm,
 			struct vm_area_struct *vma, unsigned long address,
-			unsigned int flags)
+			unsigned int flags, struct mm_lock_range *range)
 {
 	BUG();
 	return 0;
diff --git include/linux/mm.h include/linux/mm.h
index a1c9a0aa898b..1b6b022064b4 100644
--- include/linux/mm.h
+++ include/linux/mm.h
@@ -1460,8 +1460,15 @@ int generic_error_remove_page(struct address_space *mapping, struct page *page);
 int invalidate_inode_page(struct page *page);
 
 #ifdef CONFIG_MMU
-extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
-			unsigned long address, unsigned int flags);
+extern vm_fault_t handle_mm_fault_range(struct vm_area_struct *vma,
+			unsigned long address, unsigned int flags,
+			struct mm_lock_range *range);
+static inline vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
+			unsigned long address, unsigned int flags)
+{
+	return handle_mm_fault_range(vma, address, flags,
+				     mm_coarse_lock_range());
+}
 extern int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm,
 			unsigned long address, unsigned int fault_flags,
 			bool *unlocked);
diff --git mm/hugetlb.c mm/hugetlb.c
index 662f34b6c869..9d6fe9f291a7 100644
--- mm/hugetlb.c
+++ mm/hugetlb.c
@@ -3788,7 +3788,8 @@ int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
 static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 			struct vm_area_struct *vma,
 			struct address_space *mapping, pgoff_t idx,
-			unsigned long address, pte_t *ptep, unsigned int flags)
+			unsigned long address, pte_t *ptep, unsigned int flags,
+			struct mm_lock_range *range)
 {
 	struct hstate *h = hstate_vma(vma);
 	vm_fault_t ret = VM_FAULT_SIGBUS;
@@ -3831,7 +3832,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 			.vma = vma,
 			.address = haddr,
 			.flags = flags,
-			.range = mm_coarse_lock_range(),
+			.range = range,
 			/*
 			 * Hard to debug if it ends up being
 			 * used by a callee that assumes
@@ -3997,7 +3998,8 @@ u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx)
 #endif
 
 vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
-			unsigned long address, unsigned int flags)
+			unsigned long address, unsigned int flags,
+			struct mm_lock_range *range)
 {
 	pte_t *ptep, entry;
 	spinlock_t *ptl;
@@ -4039,7 +4041,8 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 
 	entry = huge_ptep_get(ptep);
 	if (huge_pte_none(entry)) {
-		ret = hugetlb_no_page(mm, vma, mapping, idx, address, ptep, flags);
+		ret = hugetlb_no_page(mm, vma, mapping, idx, address, ptep,
+				      flags, range);
 		goto out_mutex;
 	}
 
@@ -4348,7 +4351,8 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 						FAULT_FLAG_ALLOW_RETRY);
 				fault_flags |= FAULT_FLAG_TRIED;
 			}
-			ret = hugetlb_fault(mm, vma, vaddr, fault_flags);
+			ret = hugetlb_fault(mm, vma, vaddr, fault_flags,
+					    mm_coarse_lock_range());
 			if (ret & VM_FAULT_ERROR) {
 				err = vm_fault_to_errno(ret, flags);
 				remainder = 0;
diff --git mm/memory.c mm/memory.c
index 6cb3359f0857..bc24a6bdaa06 100644
--- mm/memory.c
+++ mm/memory.c
@@ -4039,7 +4039,8 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
  * return value. See filemap_fault() and __lock_page_or_retry().
  */
 static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
-		unsigned long address, unsigned int flags)
+		unsigned long address, unsigned int flags,
+		struct mm_lock_range *range)
 {
 	struct vm_fault vmf = {
 		.vma = vma,
@@ -4047,7 +4048,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 		.flags = flags,
 		.pgoff = linear_page_index(vma, address),
 		.gfp_mask = __get_fault_gfp_mask(vma),
-		.range = mm_coarse_lock_range(),
+		.range = range,
 	};
 	unsigned int dirty = flags & FAULT_FLAG_WRITE;
 	struct mm_struct *mm = vma->vm_mm;
@@ -4134,8 +4135,9 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
  * The mmap_sem may have been released depending on flags and our
  * return value. See filemap_fault() and __lock_page_or_retry().
  */
-vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
-		unsigned int flags)
+vm_fault_t handle_mm_fault_range(struct vm_area_struct *vma,
+		unsigned long address, unsigned int flags,
+		struct mm_lock_range *range)
 {
 	vm_fault_t ret;
 
@@ -4160,9 +4162,9 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 		mem_cgroup_enter_user_fault();
 
 	if (unlikely(is_vm_hugetlb_page(vma)))
-		ret = hugetlb_fault(vma->vm_mm, vma, address, flags);
+		ret = hugetlb_fault(vma->vm_mm, vma, address, flags, range);
 	else
-		ret = __handle_mm_fault(vma, address, flags);
+		ret = __handle_mm_fault(vma, address, flags, range);
 
 	if (flags & FAULT_FLAG_USER) {
 		mem_cgroup_exit_user_fault();
@@ -4178,7 +4180,7 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(handle_mm_fault);
+EXPORT_SYMBOL_GPL(handle_mm_fault_range);
 
 #ifndef __PAGETABLE_P4D_FOLDED
 /*