From patchwork Thu Dec 12 08:47:40 2019
X-Patchwork-Submitter: Thomas Hellström (Intel)
X-Patchwork-Id: 11287619
From: Thomas Hellström (VMware)
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org
Cc: pv-drivers@vmware.com, linux-graphics-maintainer@vmware.com,
	Thomas Hellstrom, Andrew Morton, Michal Hocko,
	"Matthew Wilcox (Oracle)", "Kirill A. Shutemov", Ralph Campbell,
	Jérôme Glisse, Christian König
Subject: [PATCH v4 1/2] mm: Add a vmf_insert_mixed_prot() function
Date: Thu, 12 Dec 2019 09:47:40 +0100
Message-Id: <20191212084741.9251-2-thomas_os@shipmail.org>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20191212084741.9251-1-thomas_os@shipmail.org>
References: <20191212084741.9251-1-thomas_os@shipmail.org>

From: Thomas Hellstrom

The TTM module today uses a hack to set a page protection different from
struct vm_area_struct::vm_page_prot. To do this properly, add the needed
vm functionality as vmf_insert_mixed_prot().

Cc: Andrew Morton
Cc: Michal Hocko
Cc: "Matthew Wilcox (Oracle)"
Cc: "Kirill A. Shutemov"
Shutemov" Cc: Ralph Campbell Cc: "Jérôme Glisse" Cc: "Christian König" Signed-off-by: Thomas Hellstrom Acked-by: Christian König Acked-by: Michal Hocko --- include/linux/mm.h | 2 ++ include/linux/mm_types.h | 7 ++++++- mm/memory.c | 43 ++++++++++++++++++++++++++++++++++++---- 3 files changed, 47 insertions(+), 5 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index cc292273e6ba..29575d3c1e47 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2548,6 +2548,8 @@ vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn, pgprot_t pgprot); vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr, pfn_t pfn); +vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr, + pfn_t pfn, pgprot_t pgprot); vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma, unsigned long addr, pfn_t pfn); int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len); diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 2222fa795284..ac96afdbb4bc 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -307,7 +307,12 @@ struct vm_area_struct { /* Second cache line starts here. */ struct mm_struct *vm_mm; /* The address space we belong to. */ - pgprot_t vm_page_prot; /* Access permissions of this VMA. */ + + /* + * Access permissions of this VMA. + * See vmf_insert_mixed() for discussion. + */ + pgprot_t vm_page_prot; unsigned long vm_flags; /* Flags, see mm.h. */ /* diff --git a/mm/memory.c b/mm/memory.c index b1ca51a079f2..269a8a871e83 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -1646,6 +1646,9 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr, * vmf_insert_pfn_prot should only be used if using multiple VMAs is * impractical. * + * See vmf_insert_mixed_prot() for a discussion of the implication of using + * a value of @pgprot different from that of @vma->vm_page_prot. + * * Context: Process context. May allocate using %GFP_KERNEL. * Return: vm_fault_t value. */ @@ -1719,9 +1722,9 @@ static bool vm_mixed_ok(struct vm_area_struct *vma, pfn_t pfn) } static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma, - unsigned long addr, pfn_t pfn, bool mkwrite) + unsigned long addr, pfn_t pfn, pgprot_t pgprot, + bool mkwrite) { - pgprot_t pgprot = vma->vm_page_prot; int err; BUG_ON(!vm_mixed_ok(vma, pfn)); @@ -1764,10 +1767,42 @@ static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma, return VM_FAULT_NOPAGE; } +/** + * vmf_insert_mixed_prot - insert single pfn into user vma with specified pgprot + * @vma: user vma to map to + * @addr: target user address of this page + * @pfn: source kernel pfn + * @pgprot: pgprot flags for the inserted page + * + * This is exactly like vmf_insert_mixed(), except that it allows drivers to + * to override pgprot on a per-page basis. + * + * Typically this function should be used by drivers to set caching- and + * encryption bits different than those of @vma->vm_page_prot, because + * the caching- or encryption mode may not be known at mmap() time. + * This is ok as long as @vma->vm_page_prot is not used by the core vm + * to set caching and encryption bits for those vmas (except for COW pages). + * This is ensured by core vm only modifying these page table entries using + * functions that don't touch caching- or encryption bits, using pte_modify() + * if needed. (See for example mprotect()). 
+ * Also when new page-table entries are created, this is only done using the
+ * fault() callback, and never using the value of vma->vm_page_prot,
+ * except for page-table entries that point to anonymous pages as the result
+ * of COW.
+ *
+ * Context: Process context. May allocate using %GFP_KERNEL.
+ * Return: vm_fault_t value.
+ */
+vm_fault_t vmf_insert_mixed_prot(struct vm_area_struct *vma, unsigned long addr,
+				 pfn_t pfn, pgprot_t pgprot)
+{
+	return __vm_insert_mixed(vma, addr, pfn, pgprot, false);
+}
+
 vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
 		pfn_t pfn)
 {
-	return __vm_insert_mixed(vma, addr, pfn, false);
+	return __vm_insert_mixed(vma, addr, pfn, vma->vm_page_prot, false);
 }
 EXPORT_SYMBOL(vmf_insert_mixed);
 
@@ -1779,7 +1814,7 @@ EXPORT_SYMBOL(vmf_insert_mixed);
 vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
 		unsigned long addr, pfn_t pfn)
 {
-	return __vm_insert_mixed(vma, addr, pfn, true);
+	return __vm_insert_mixed(vma, addr, pfn, vma->vm_page_prot, true);
 }
 EXPORT_SYMBOL(vmf_insert_mixed_mkwrite);
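
As an illustration of the intended usage pattern described in the kernel-doc
above, here is a minimal, hypothetical sketch of a driver fault handler that
picks the page protection at fault time rather than at mmap() time. Note that
struct my_bo, my_bo_pfn() and the is_io_mem field are made-up placeholders,
not existing kernel or TTM symbols; the actual TTM conversion follows in the
next patch of this series.

#include <linux/mm.h>
#include <linux/pfn_t.h>

/* Hypothetical driver fault handler; my_bo/my_bo_pfn()/is_io_mem are
 * placeholders used only for this sketch. */
static vm_fault_t my_drv_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct my_bo *bo = vma->vm_private_data;	/* driver buffer object */
	unsigned long pfn = my_bo_pfn(bo, vmf->pgoff);	/* pfn backing this page */
	pgprot_t prot;

	/*
	 * The buffer placement, and hence the caching mode, is only known
	 * here at fault time, so vma->vm_page_prot alone is not enough.
	 */
	if (bo->is_io_mem)
		prot = pgprot_writecombine(vma->vm_page_prot);
	else
		prot = vma->vm_page_prot;

	return vmf_insert_mixed_prot(vma, vmf->address,
				     __pfn_to_pfn_t(pfn, PFN_DEV), prot);
}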