From patchwork Fri May 22 12:52:05 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11565571
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini,
    Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry,
    "Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org,
    kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    "Kirill A. Shutemov"
Subject: [RFC 07/16] KVM: mm: Introduce VM_KVM_PROTECTED
Date: Fri, 22 May 2020 15:52:05 +0300
Message-Id: <20200522125214.31348-8-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>
References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>
MIME-Version: 1.0

Add a new VMA flag, VM_KVM_PROTECTED, to mark a VMA that is not
accessible to userspace but is usable by the kernel through GUP when
FOLL_KVM is specified. FOLL_KVM is only used in KVM code, which has to
know how to deal with such pages.

Signed-off-by: Kirill A. Shutemov
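
For illustration, here is a minimal sketch of how a KVM-side caller is
expected to reach such memory. It mirrors the copy_from_guest() helper
touched below; the wrapper itself is hypothetical and not part of this
patch:

/*
 * Hypothetical helper, for illustration only: read one page that lives
 * in a VM_KVM_PROTECTED VMA. The GUP call succeeds only because
 * FOLL_KVM is passed; the VMA itself is mapped PAGE_NONE, so a plain
 * userspace access to the same address faults.
 */
static int read_protected_page(unsigned long hva, void *buf)
{
	struct page *page;
	int npages;

	npages = get_user_pages_unlocked(hva, 1, &page, FOLL_KVM);
	if (npages != 1)
		return -EFAULT;

	memcpy(buf, page_address(page), PAGE_SIZE);
	put_page(page);
	return 0;
}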
---
 include/linux/mm.h  |  8 ++++++++
 mm/gup.c            | 20 ++++++++++++++++----
 mm/huge_memory.c    | 20 ++++++++++++++++----
 mm/memory.c         |  3 +++
 mm/mmap.c           |  3 +++
 virt/kvm/async_pf.c |  4 ++--
 virt/kvm/kvm_main.c |  9 +++++----
 7 files changed, 53 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index e1882eec1752..4f7195365cc0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -329,6 +329,8 @@ extern unsigned int kobjsize(const void *objp);
 # define VM_MAPPED_COPY	VM_ARCH_1	/* T if mapped copy of data (nommu mmap) */
 #endif
 
+#define VM_KVM_PROTECTED 0
+
 #ifndef VM_GROWSUP
 # define VM_GROWSUP	VM_NONE
 #endif
@@ -646,6 +648,11 @@ static inline bool vma_is_accessible(struct vm_area_struct *vma)
 	return vma->vm_flags & VM_ACCESS_FLAGS;
 }
 
+static inline bool vma_is_kvm_protected(struct vm_area_struct *vma)
+{
+	return vma->vm_flags & VM_KVM_PROTECTED;
+}
+
 #ifdef CONFIG_SHMEM
 /*
  * The vma_is_shmem is not inline because it is used only by slow
@@ -2773,6 +2780,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 #define FOLL_LONGTERM	0x10000	/* mapping lifetime is indefinite: see below */
 #define FOLL_SPLIT_PMD	0x20000	/* split huge pmd before returning */
 #define FOLL_PIN	0x40000	/* pages must be released via unpin_user_page */
+#define FOLL_KVM	0x80000	/* access to VM_KVM_PROTECTED VMAs */
 
 /*
  * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
diff --git a/mm/gup.c b/mm/gup.c
index 87a6a59fe667..bd7b9484b35a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -385,10 +385,19 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
  * FOLL_FORCE can write to even unwritable pte's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
+static inline bool can_follow_write_pte(struct vm_area_struct *vma,
+					pte_t pte, unsigned int flags)
 {
-	return pte_write(pte) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
+	if (pte_write(pte))
+		return true;
+
+	if ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte))
+		return true;
+
+	if (!vma_is_kvm_protected(vma) || !(vma->vm_flags & VM_WRITE))
+		return false;
+
+	return (vma->vm_flags & VM_SHARED) || page_mapcount(pte_page(pte)) == 1;
 }
 
 static struct page *follow_page_pte(struct vm_area_struct *vma,
@@ -431,7 +440,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	}
 	if ((flags & FOLL_NUMA) && pte_protnone(pte))
 		goto no_page;
-	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
+	if ((flags & FOLL_WRITE) && !can_follow_write_pte(vma, pte, flags)) {
 		pte_unmap_unlock(ptep, ptl);
 		return NULL;
 	}
@@ -751,6 +760,9 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 
 	ctx->page_mask = 0;
 
+	if (vma_is_kvm_protected(vma) && (flags & FOLL_KVM))
+		flags &= ~FOLL_NUMA;
+
 	/* make this handle hugepd */
 	page = follow_huge_addr(mm, address, flags & FOLL_WRITE);
 	if (!IS_ERR(page)) {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6ecd1045113b..c3562648a4ef 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1518,10 +1518,19 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
  * FOLL_FORCE can write to even unwritable pmd's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
+static inline bool can_follow_write_pmd(struct vm_area_struct *vma,
+					pmd_t pmd, unsigned int flags)
 {
-	return pmd_write(pmd) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
+	if (pmd_write(pmd))
+		return true;
+
+	if ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd))
+		return true;
+
+	if (!vma_is_kvm_protected(vma) || !(vma->vm_flags & VM_WRITE))
+		return false;
+
+	return (vma->vm_flags & VM_SHARED) || page_mapcount(pmd_page(pmd)) == 1;
 }
 
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
@@ -1534,7 +1543,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 
 	assert_spin_locked(pmd_lockptr(mm, pmd));
 
-	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
+	if (flags & FOLL_WRITE && !can_follow_write_pmd(vma, *pmd, flags))
 		goto out;
 
 	/* Avoid dumping huge zero page */
@@ -1609,6 +1618,9 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
 	bool was_writable;
 	int flags = 0;
 
+	if (vma_is_kvm_protected(vma))
+		return VM_FAULT_SIGBUS;
+
 	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
 	if (unlikely(!pmd_same(pmd, *vmf->pmd)))
 		goto out_unlock;
diff --git a/mm/memory.c b/mm/memory.c
index f703fe8c8346..d7228db6e4bf 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4013,6 +4013,9 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	bool was_writable = pte_savedwrite(vmf->orig_pte);
 	int flags = 0;
 
+	if (vma_is_kvm_protected(vma))
+		return VM_FAULT_SIGBUS;
+
 	/*
 	 * The "pte" at this point cannot be used safely without
 	 * validation through pte_unmap_same(). It's of NUMA type but
diff --git a/mm/mmap.c b/mm/mmap.c
index f609e9ec4a25..d56c3f6efc99 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -112,6 +112,9 @@ pgprot_t vm_get_page_prot(unsigned long vm_flags)
 				(VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]) |
 			pgprot_val(arch_vm_get_page_prot(vm_flags)));
 
+	if (vm_flags & VM_KVM_PROTECTED)
+		ret = PAGE_NONE;
+
 	return arch_filter_pgprot(ret);
 }
 EXPORT_SYMBOL(vm_get_page_prot);
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 15e5b037f92d..7663e962510a 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -60,8 +60,8 @@ static void async_pf_execute(struct work_struct *work)
 	 * access remotely.
 	 */
 	down_read(&mm->mmap_sem);
-	get_user_pages_remote(NULL, mm, addr, 1, FOLL_WRITE, NULL, NULL,
-			&locked);
+	get_user_pages_remote(NULL, mm, addr, 1, FOLL_WRITE | FOLL_KVM, NULL,
+			NULL, &locked);
 	if (locked)
 		up_read(&mm->mmap_sem);
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 033471f71dae..530af95efdf3 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1727,7 +1727,7 @@ unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *w
 
 static inline int check_user_page_hwpoison(unsigned long addr)
 {
-	int rc, flags = FOLL_HWPOISON | FOLL_WRITE;
+	int rc, flags = FOLL_HWPOISON | FOLL_WRITE | FOLL_KVM;
 
 	rc = get_user_pages(addr, 1, flags, NULL, NULL);
 	return rc == -EHWPOISON;
@@ -1771,7 +1771,7 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
 static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
 			   bool *writable, kvm_pfn_t *pfn)
 {
-	unsigned int flags = FOLL_HWPOISON;
+	unsigned int flags = FOLL_HWPOISON | FOLL_KVM;
 	struct page *page;
 	int npages = 0;
 
@@ -2255,7 +2255,7 @@ int copy_from_guest(void *data, unsigned long hva, int len)
 	int npages, seg;
 
 	while ((seg = next_segment(len, offset)) != 0) {
-		npages = get_user_pages_unlocked(hva, 1, &page, 0);
+		npages = get_user_pages_unlocked(hva, 1, &page, FOLL_KVM);
 		if (npages != 1)
 			return -EFAULT;
 		memcpy(data, page_address(page) + offset, seg);
@@ -2275,7 +2275,8 @@ int copy_to_guest(unsigned long hva, const void *data, int len)
 	int npages, seg;
 
 	while ((seg = next_segment(len, offset)) != 0) {
-		npages = get_user_pages_unlocked(hva, 1, &page, FOLL_WRITE);
+		npages = get_user_pages_unlocked(hva, 1, &page,
+						 FOLL_WRITE | FOLL_KVM);
 		if (npages != 1)
 			return -EFAULT;
 		memcpy(page_address(page) + offset, data, seg);
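
Note for reviewers (editorial, not part of the patch): the pieces above
fit together as follows. vm_get_page_prot() gives a VM_KVM_PROTECTED
VMA a PAGE_NONE protection, so its page table entries look protnone. A
regular userspace touch therefore takes the NUMA-balancing fault path,
where do_numa_page()/do_huge_pmd_numa_page() now return
VM_FAULT_SIGBUS, while GUP with FOLL_KVM sidesteps that path because
follow_page_mask() strips FOLL_NUMA for protected VMAs. Roughly, what
the VMM process would observe (hypothetical snippet, assuming a VMA
already marked protected by a later patch in the series):

	char *p = guest_mem;	/* address inside the protected VMA */
	char c = *p;		/* faults: the PTE is PAGE_NONE and the
				 * NUMA fault path returns SIGBUS */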