From patchwork Fri May 22 12:51:59 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11565549
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini,
	Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry,
	"Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org, kvm@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [RFC 01/16] x86/mm: Move force_dma_unencrypted() to common code
Date: Fri, 22 May 2020 15:51:59 +0300
Message-Id: <20200522125214.31348-2-kirill.shutemov@linux.intel.com>
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>
References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>

force_dma_unencrypted() has to return true for KVM guests with memory
protection enabled. Move it out of the AMD SME code.

Introduce a new config option, X86_MEM_ENCRYPT_COMMON, that has to be
selected by all x86 memory encryption features.

This is preparation for the following patches.

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/Kconfig                 |  8 +++++--
 arch/x86/include/asm/io.h        |  4 +++-
 arch/x86/mm/Makefile             |  2 ++
 arch/x86/mm/mem_encrypt.c        | 30 -------------------------
 arch/x86/mm/mem_encrypt_common.c | 38 ++++++++++++++++++++++++++++++++
 5 files changed, 49 insertions(+), 33 deletions(-)
 create mode 100644 arch/x86/mm/mem_encrypt_common.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2d3f963fd6f1..bc72bfd89bcf 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1518,12 +1518,16 @@ config X86_CPA_STATISTICS
	  helps to determine the effectiveness of preserving large and huge
	  page mappings when mapping protections are changed.

+config X86_MEM_ENCRYPT_COMMON
+	select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+	select DYNAMIC_PHYSICAL_MASK
+	def_bool n
+
 config AMD_MEM_ENCRYPT
	bool "AMD Secure Memory Encryption (SME) support"
	depends on X86_64 && CPU_SUP_AMD
-	select DYNAMIC_PHYSICAL_MASK
	select ARCH_USE_MEMREMAP_PROT
-	select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+	select X86_MEM_ENCRYPT_COMMON
	---help---
	  Say yes to enable support for the encryption of system memory.
	  This requires an AMD processor that supports Secure Memory
diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index e1aa17a468a8..c58d52fd7bf2 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -256,10 +256,12 @@ static inline void slow_down_io(void)

 #endif

-#ifdef CONFIG_AMD_MEM_ENCRYPT
 #include

 extern struct static_key_false sev_enable_key;
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+
 static inline bool sev_key_active(void)
 {
	return static_branch_unlikely(&sev_enable_key);
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 98f7c6fa2eaa..af8683c053a3 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -49,6 +49,8 @@ obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS)	+= pkeys.o
 obj-$(CONFIG_RANDOMIZE_MEMORY)			+= kaslr.o
 obj-$(CONFIG_PAGE_TABLE_ISOLATION)		+= pti.o

+obj-$(CONFIG_X86_MEM_ENCRYPT_COMMON)	+= mem_encrypt_common.o
+
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_identity.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_boot.o
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index a03614bd3e1a..112304a706f3 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -15,10 +15,6 @@
 #include
 #include
 #include
-#include
-#include
-#include
-#include
 #include
 #include
@@ -350,32 +346,6 @@ bool sev_active(void)
	return sme_me_mask && sev_enabled;
 }

-/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
-bool force_dma_unencrypted(struct device *dev)
-{
-	/*
-	 * For SEV, all DMA must be to unencrypted addresses.
-	 */
-	if (sev_active())
-		return true;
-
-	/*
-	 * For SME, all DMA must be to unencrypted addresses if the
-	 * device does not support DMA to addresses that include the
-	 * encryption mask.
-	 */
-	if (sme_active()) {
-		u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
-		u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
-						dev->bus_dma_limit);
-
-		if (dma_dev_mask <= dma_enc_mask)
-			return true;
-	}
-
-	return false;
-}
-
 /* Architecture __weak replacement functions */
 void __init mem_encrypt_free_decrypted_mem(void)
 {
diff --git a/arch/x86/mm/mem_encrypt_common.c b/arch/x86/mm/mem_encrypt_common.c
new file mode 100644
index 000000000000..964e04152417
--- /dev/null
+++ b/arch/x86/mm/mem_encrypt_common.c
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * AMD Memory Encryption Support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky
+ */
+
+#include
+#include
+#include
+
+/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
+bool force_dma_unencrypted(struct device *dev)
+{
+	/*
+	 * For SEV, all DMA must be to unencrypted/shared addresses.
+	 */
+	if (sev_active())
+		return true;
+
+	/*
+	 * For SME, all DMA must be to unencrypted addresses if the
+	 * device does not support DMA to addresses that include the
+	 * encryption mask.
+	 */
+	if (sme_active()) {
+		u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
+		u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
+						dev->bus_dma_limit);
+
+		if (dma_dev_mask <= dma_enc_mask)
+			return true;
+	}
+
+	return false;
+}
From patchwork Fri May 22 12:52:00 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11565547
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini,
	Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry,
	"Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org, kvm@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [RFC 02/16] x86/kvm: Introduce KVM memory protection feature
Date: Fri, 22 May 2020 15:52:00 +0300
Message-Id: <20200522125214.31348-3-kirill.shutemov@linux.intel.com>
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>
References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>

Provide basic helpers, a KVM_FEATURE bit, and a hypercall.
The host side doesn't provide the feature yet, so it is dead code for now.

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/include/asm/kvm_para.h      |  5 +++++
 arch/x86/include/uapi/asm/kvm_para.h |  3 ++-
 arch/x86/kernel/kvm.c                | 16 ++++++++++++++++
 include/uapi/linux/kvm_para.h        |  3 ++-
 4 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
index 9b4df6eaa11a..3ce84fc07144 100644
--- a/arch/x86/include/asm/kvm_para.h
+++ b/arch/x86/include/asm/kvm_para.h
@@ -10,11 +10,16 @@ extern void kvmclock_init(void);

 #ifdef CONFIG_KVM_GUEST
 bool kvm_check_and_clear_guest_paused(void);
+bool kvm_mem_protected(void);
 #else
 static inline bool kvm_check_and_clear_guest_paused(void)
 {
	return false;
 }
+static inline bool kvm_mem_protected(void)
+{
+	return false;
+}
 #endif /* CONFIG_KVM_GUEST */

 #define KVM_HYPERCALL \
diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 2a8e0b6b9805..c3b499acc98f 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -28,9 +28,10 @@
 #define KVM_FEATURE_PV_UNHALT		7
 #define KVM_FEATURE_PV_TLB_FLUSH	9
 #define KVM_FEATURE_ASYNC_PF_VMEXIT	10
-#define KVM_FEATURE_PV_SEND_IPI	11
+#define KVM_FEATURE_PV_SEND_IPI		11
 #define KVM_FEATURE_POLL_CONTROL	12
 #define KVM_FEATURE_PV_SCHED_YIELD	13
+#define KVM_FEATURE_MEM_PROTECTED	14

 #define KVM_HINTS_REALTIME	0

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 6efe0410fb72..bda761ca0d26 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -35,6 +35,13 @@
 #include
 #include

+static bool mem_protected;
+
+bool kvm_mem_protected(void)
+{
+	return mem_protected;
+}
+
 static int kvmapf = 1;

 static int __init parse_no_kvmapf(char *arg)
@@ -727,6 +734,15 @@ static void __init kvm_init_platform(void)
 {
	kvmclock_init();
	x86_platform.apic_post_init = kvm_apic_init;
+
+	if (kvm_para_has_feature(KVM_FEATURE_MEM_PROTECTED)) {
+		if (kvm_hypercall0(KVM_HC_ENABLE_MEM_PROTECTED)) {
+			pr_err("Failed to enable KVM memory protection\n");
+			return;
+		}
+
+		mem_protected = true;
+	}
 }

 const __initconst struct hypervisor_x86 x86_hyper_kvm = {
diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
index 8b86609849b9..1a216f32e572 100644
--- a/include/uapi/linux/kvm_para.h
+++ b/include/uapi/linux/kvm_para.h
@@ -27,8 +27,9 @@
 #define KVM_HC_MIPS_EXIT_VM		7
 #define KVM_HC_MIPS_CONSOLE_OUTPUT	8
 #define KVM_HC_CLOCK_PAIRING		9
-#define KVM_HC_SEND_IPI	10
+#define KVM_HC_SEND_IPI		10
 #define KVM_HC_SCHED_YIELD	11
+#define KVM_HC_ENABLE_MEM_PROTECTED	12

 /*
  * hypercalls use architecture specific
From patchwork Fri May 22 12:52:01 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11565557
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini,
	Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry,
	"Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org, kvm@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [RFC 03/16] x86/kvm: Make DMA pages shared
Date: Fri, 22 May 2020 15:52:01 +0300
Message-Id: <20200522125214.31348-4-kirill.shutemov@linux.intel.com>
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>
References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>

Make force_dma_unencrypted() return true for KVM to get DMA pages mapped
as shared.

__set_memory_enc_dec() now informs the host via a hypercall when the
state of a page changes from shared to private or back.

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/Kconfig                 | 1 +
 arch/x86/mm/mem_encrypt_common.c | 5 +++--
 arch/x86/mm/pat/set_memory.c     | 7 +++++++
 include/uapi/linux/kvm_para.h    | 2 ++
 4 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index bc72bfd89bcf..86c012582f51 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -799,6 +799,7 @@ config KVM_GUEST
	depends on PARAVIRT
	select PARAVIRT_CLOCK
	select ARCH_CPUIDLE_HALTPOLL
+	select X86_MEM_ENCRYPT_COMMON
	default y
	---help---
	  This option enables various optimizations for running under the KVM
diff --git a/arch/x86/mm/mem_encrypt_common.c b/arch/x86/mm/mem_encrypt_common.c
index 964e04152417..a878e7f246d5 100644
--- a/arch/x86/mm/mem_encrypt_common.c
+++ b/arch/x86/mm/mem_encrypt_common.c
@@ -10,14 +10,15 @@
 #include
 #include
 #include
+#include

 /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
 bool force_dma_unencrypted(struct device *dev)
 {
	/*
-	 * For SEV, all DMA must be to unencrypted/shared addresses.
+	 * For SEV and KVM, all DMA must be to unencrypted/shared addresses.
	 */
-	if (sev_active())
+	if (sev_active() || kvm_mem_protected())
		return true;

	/*
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index b8c55a2e402d..6f075766bb94 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include

 #include
 #include
@@ -1972,6 +1973,12 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
	struct cpa_data cpa;
	int ret;

+	if (kvm_mem_protected()) {
+		unsigned long gfn = __pa(addr) >> PAGE_SHIFT;
+		int call = enc ? KVM_HC_MEM_UNSHARE : KVM_HC_MEM_SHARE;
+		return kvm_hypercall2(call, gfn, numpages);
+	}
+
	/* Nothing to do if memory encryption is not active */
	if (!mem_encrypt_active())
		return 0;
diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
index 1a216f32e572..c6d8c988e330 100644
--- a/include/uapi/linux/kvm_para.h
+++ b/include/uapi/linux/kvm_para.h
@@ -30,6 +30,8 @@
 #define KVM_HC_SEND_IPI		10
 #define KVM_HC_SCHED_YIELD	11
 #define KVM_HC_ENABLE_MEM_PROTECTED	12
+#define KVM_HC_MEM_SHARE		13
+#define KVM_HC_MEM_UNSHARE		14

 /*
  * hypercalls use architecture specific
From patchwork Fri May 22 12:52:02 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11565553
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini,
	Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry,
	"Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org, kvm@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [RFC 04/16] x86/kvm: Use bounce buffers for KVM memory protection
Date: Fri, 22 May 2020 15:52:02 +0300
Message-Id: <20200522125214.31348-5-kirill.shutemov@linux.intel.com>
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>
References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>

Mirroring SEV, always use SWIOTLB if KVM memory protection is enabled.

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/Kconfig                 |  1 +
 arch/x86/kernel/kvm.c            |  2 ++
 arch/x86/kernel/pci-swiotlb.c    |  3 ++-
 arch/x86/mm/mem_encrypt.c        | 20 --------------------
 arch/x86/mm/mem_encrypt_common.c | 23 +++++++++++++++++++++++
 5 files changed, 28 insertions(+), 21 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 86c012582f51..58dd44a1b92f 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -800,6 +800,7 @@ config KVM_GUEST
	select PARAVIRT_CLOCK
	select ARCH_CPUIDLE_HALTPOLL
	select X86_MEM_ENCRYPT_COMMON
+	select SWIOTLB
	default y
	---help---
	  This option enables various optimizations for running under the KVM
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index bda761ca0d26..f50d65df4412 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -742,6 +743,7 @@ static void __init kvm_init_platform(void)
		}

		mem_protected = true;
+		swiotlb_force = SWIOTLB_FORCE;
	}
 }
diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
index c2cfa5e7c152..814060a6ceb0 100644
--- a/arch/x86/kernel/pci-swiotlb.c
+++ b/arch/x86/kernel/pci-swiotlb.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include

 int swiotlb __read_mostly;

@@ -49,7 +50,7 @@ int __init pci_swiotlb_detect_4gb(void)
	 * buffers are allocated and used for devices that do not support
	 * the addressing range required for the encryption mask.
 	 */
-	if (sme_active())
+	if (sme_active() || kvm_mem_protected())
 		swiotlb = 1;

 	return swiotlb;

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 112304a706f3..35c748ee3fcb 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -370,23 +370,3 @@ void __init mem_encrypt_free_decrypted_mem(void)

 	free_init_pages("unused decrypted", vaddr, vaddr_end);
 }
-
-void __init mem_encrypt_init(void)
-{
-	if (!sme_me_mask)
-		return;
-
-	/* Call into SWIOTLB to update the SWIOTLB DMA buffers */
-	swiotlb_update_mem_attributes();
-
-	/*
-	 * With SEV, we need to unroll the rep string I/O instructions.
-	 */
-	if (sev_active())
-		static_branch_enable(&sev_enable_key);
-
-	pr_info("AMD %s active\n",
-		sev_active() ? "Secure Encrypted Virtualization (SEV)"
-			     : "Secure Memory Encryption (SME)");
-}
-

diff --git a/arch/x86/mm/mem_encrypt_common.c b/arch/x86/mm/mem_encrypt_common.c
index a878e7f246d5..7900f3788010 100644
--- a/arch/x86/mm/mem_encrypt_common.c
+++ b/arch/x86/mm/mem_encrypt_common.c
@@ -37,3 +37,26 @@ bool force_dma_unencrypted(struct device *dev)

 	return false;
 }
+
+void __init mem_encrypt_init(void)
+{
+	if (!sme_me_mask && !kvm_mem_protected())
+		return;
+
+	/* Call into SWIOTLB to update the SWIOTLB DMA buffers */
+	swiotlb_update_mem_attributes();
+
+	/*
+	 * With SEV, we need to unroll the rep string I/O instructions.
+	 */
+	if (sev_active())
+		static_branch_enable(&sev_enable_key);
+
+	if (sme_me_mask) {
+		pr_info("AMD %s active\n",
+			sev_active() ? "Secure Encrypted Virtualization (SEV)"
+				     : "Secure Memory Encryption (SME)");
+	} else {
+		pr_info("KVM memory protection enabled\n");
+	}
+}
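For readers unfamiliar with SWIOTLB, the effect of forcing it in patch 04 can be sketched in userspace C. All names here are hypothetical stand-ins, not kernel API: DMA is staged through a buffer the host is allowed to see, so protected guest pages are never handed to a device directly.

```c
#include <assert.h>
#include <string.h>

/* Stand-in for the shared SWIOTLB pool: host-visible memory. */
#define BOUNCE_SIZE 4096
static unsigned char bounce[BOUNCE_SIZE];

/* "map" for device output: copy private data into the shared bounce slot
 * and hand the device the bounce address, never the private one. */
static void *bounce_map_out(const void *priv, size_t len)
{
	assert(len <= BOUNCE_SIZE);
	memcpy(bounce, priv, len);
	return bounce;
}

/* "unmap" for device input: copy device-written data back into private
 * (protected) memory after the transfer completes. */
static void bounce_unmap_in(void *priv, size_t len)
{
	assert(len <= BOUNCE_SIZE);
	memcpy(priv, bounce, len);
}
```

This is only the copy discipline; the real SWIOTLB also handles slot allocation, alignment, and addressing limits.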
From patchwork Fri May 22 12:52:03 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11565565
From: "Kirill A. Shutemov"
Subject: [RFC 05/16] x86/kvm: Make VirtIO use DMA API in KVM guest
Date: Fri, 22 May 2020 15:52:03 +0300
Message-Id: <20200522125214.31348-6-kirill.shutemov@linux.intel.com>
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>
References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>

VirtIO is the primary way to provide I/O for a KVM guest.
All memory that is used for communication with the host has to be marked
as shared. The easiest way to achieve that is to use the DMA API, which
already knows how to deal with shared memory.

Signed-off-by: Kirill A. Shutemov
---
 drivers/virtio/virtio_ring.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 58b96baa8d48..bd9c56160107 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include

 #ifdef DEBUG
 /* For development, we want to crash whenever the ring is screwed. */
@@ -255,6 +256,9 @@ static bool vring_use_dma_api(struct virtio_device *vdev)
 	if (xen_domain())
 		return true;

+	if (kvm_mem_protected())
+		return true;
+
 	return false;
 }
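The patched vring_use_dma_api() decision reduces to a small predicate; a minimal userspace sketch, with mutable flags standing in for the real kernel predicates xen_domain() and kvm_mem_protected():

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel predicates. */
static bool xen_domain_flag;
static bool kvm_mem_protected_flag;
static bool xen_domain(void) { return xen_domain_flag; }
static bool kvm_mem_protected(void) { return kvm_mem_protected_flag; }

/* Mirrors the patched logic: route vring buffers through the DMA API
 * whenever guest memory cannot be handed to the host directly. */
static bool vring_use_dma_api_sketch(void)
{
	if (xen_domain())
		return true;
	if (kvm_mem_protected())
		return true;
	return false;
}
```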
From patchwork Fri May 22 12:52:04 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11565563
From: "Kirill A. Shutemov"
Subject: [RFC 06/16] KVM: Use GUP instead of copy_from/to_user() to access guest memory
Date: Fri, 22 May 2020 15:52:04 +0300
Message-Id: <20200522125214.31348-7-kirill.shutemov@linux.intel.com>
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>
References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>

Add new helpers, copy_from_guest() and copy_to_guest(), to be used when
the KVM memory protection feature is enabled.

Signed-off-by: Kirill A. Shutemov
---
 include/linux/kvm_host.h |  4 +++
 virt/kvm/kvm_main.c      | 78 ++++++++++++++++++++++++++++++++++------
 2 files changed, 72 insertions(+), 10 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 131cc1527d68..bd0bb600f610 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -503,6 +503,7 @@ struct kvm {
 	struct srcu_struct srcu;
 	struct srcu_struct irq_srcu;
 	pid_t userspace_pid;
+	bool mem_protected;
 };

 #define kvm_err(fmt, ...)
	\
@@ -727,6 +728,9 @@ void kvm_set_pfn_dirty(kvm_pfn_t pfn);
 void kvm_set_pfn_accessed(kvm_pfn_t pfn);
 void kvm_get_pfn(kvm_pfn_t pfn);

+int copy_from_guest(void *data, unsigned long hva, int len);
+int copy_to_guest(unsigned long hva, const void *data, int len);
+
 void kvm_release_pfn(kvm_pfn_t pfn, bool dirty, struct gfn_to_pfn_cache *cache);
 int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
 			int len);

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 731c1e517716..033471f71dae 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2248,8 +2248,48 @@ static int next_segment(unsigned long len, int offset)
 		return len;
 }

+int copy_from_guest(void *data, unsigned long hva, int len)
+{
+	int offset = offset_in_page(hva);
+	struct page *page;
+	int npages, seg;
+
+	while ((seg = next_segment(len, offset)) != 0) {
+		npages = get_user_pages_unlocked(hva, 1, &page, 0);
+		if (npages != 1)
+			return -EFAULT;
+		memcpy(data, page_address(page) + offset, seg);
+		put_page(page);
+		len -= seg;
+		hva += seg;
+		offset = 0;
+	}
+
+	return 0;
+}
+
+int copy_to_guest(unsigned long hva, const void *data, int len)
+{
+	int offset = offset_in_page(hva);
+	struct page *page;
+	int npages, seg;
+
+	while ((seg = next_segment(len, offset)) != 0) {
+		npages = get_user_pages_unlocked(hva, 1, &page, FOLL_WRITE);
+		if (npages != 1)
+			return -EFAULT;
+		memcpy(page_address(page) + offset, data, seg);
+		put_page(page);
+		len -= seg;
+		hva += seg;
+		offset = 0;
+	}
+	return 0;
+}
+
 static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
-				 void *data, int offset, int len)
+				 void *data, int offset, int len,
+				 bool protected)
 {
 	int r;
 	unsigned long addr;
@@ -2257,7 +2297,10 @@ static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
 	addr = gfn_to_hva_memslot_prot(slot, gfn, NULL);
 	if (kvm_is_error_hva(addr))
 		return -EFAULT;
-	r = __copy_from_user(data, (void __user *)addr + offset, len);
+	if (protected)
+		r = copy_from_guest(data, addr + offset, len);
+	else
+		r = __copy_from_user(data, (void __user *)addr + offset, len);
 	if (r)
 		return -EFAULT;
 	return 0;
@@ -2268,7 +2311,8 @@ int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
 {
 	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);

-	return __kvm_read_guest_page(slot, gfn, data, offset, len);
+	return __kvm_read_guest_page(slot, gfn, data, offset, len,
+				     kvm->mem_protected);
 }
 EXPORT_SYMBOL_GPL(kvm_read_guest_page);

@@ -2277,7 +2321,8 @@ int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data,
 {
 	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);

-	return __kvm_read_guest_page(slot, gfn, data, offset, len);
+	return __kvm_read_guest_page(slot, gfn, data, offset, len,
+				     vcpu->kvm->mem_protected);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_page);

@@ -2350,7 +2395,8 @@ int kvm_vcpu_read_guest_atomic(struct kvm_vcpu *vcpu, gpa_t gpa,
 EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_atomic);

 static int __kvm_write_guest_page(struct kvm_memory_slot *memslot, gfn_t gfn,
-				  const void *data, int offset, int len)
+				  const void *data, int offset, int len,
+				  bool protected)
 {
 	int r;
 	unsigned long addr;
@@ -2358,7 +2404,11 @@ static int __kvm_write_guest_page(struct kvm_memory_slot *memslot, gfn_t gfn,
 	addr = gfn_to_hva_memslot(memslot, gfn);
 	if (kvm_is_error_hva(addr))
 		return -EFAULT;
-	r = __copy_to_user((void __user *)addr + offset, data, len);
+
+	if (protected)
+		r = copy_to_guest(addr + offset, data, len);
+	else
+		r = __copy_to_user((void __user *)addr + offset, data, len);
 	if (r)
 		return -EFAULT;
 	mark_page_dirty_in_slot(memslot, gfn);
@@ -2370,7 +2420,8 @@ int kvm_write_guest_page(struct kvm *kvm, gfn_t gfn,
 {
 	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);

-	return __kvm_write_guest_page(slot, gfn, data, offset, len);
+	return __kvm_write_guest_page(slot, gfn, data, offset, len,
+				      kvm->mem_protected);
 }
 EXPORT_SYMBOL_GPL(kvm_write_guest_page);

@@ -2379,7 +2430,8 @@ int kvm_vcpu_write_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
 {
 	struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);

-	return __kvm_write_guest_page(slot, gfn, data, offset, len);
+	return __kvm_write_guest_page(slot, gfn, data, offset, len,
+				      vcpu->kvm->mem_protected);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_write_guest_page);

@@ -2495,7 +2547,10 @@ int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 	if (unlikely(!ghc->memslot))
 		return kvm_write_guest(kvm, gpa, data, len);

-	r = __copy_to_user((void __user *)ghc->hva + offset, data, len);
+	if (kvm->mem_protected)
+		r = copy_to_guest(ghc->hva + offset, data, len);
+	else
+		r = __copy_to_user((void __user *)ghc->hva + offset, data, len);
 	if (r)
 		return -EFAULT;
 	mark_page_dirty_in_slot(ghc->memslot, gpa >> PAGE_SHIFT);
@@ -2530,7 +2585,10 @@ int kvm_read_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 	if (unlikely(!ghc->memslot))
 		return kvm_read_guest(kvm, ghc->gpa, data, len);

-	r = __copy_from_user(data, (void __user *)ghc->hva, len);
+	if (kvm->mem_protected)
+		r = copy_from_guest(data, ghc->hva, len);
+	else
+		r = __copy_from_user(data, (void __user *)ghc->hva, len);
 	if (r)
 		return -EFAULT;
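The copy_from_guest()/copy_to_guest() helpers in patch 06 walk the copy one page at a time, using next_segment() to clamp each chunk at a page boundary. A userspace sketch of that walk (SKETCH_PAGE_SIZE and copy_segments() are illustrative stand-ins; the real code pins each page with get_user_pages_unlocked() instead of copying directly):

```c
#include <assert.h>
#include <string.h>

#define SKETCH_PAGE_SIZE 4096

/* Mirrors KVM's next_segment(): how many bytes can be copied before
 * crossing the next page boundary, capped by the remaining length. */
static int next_segment(unsigned long len, int offset)
{
	if (len > SKETCH_PAGE_SIZE - offset)
		return SKETCH_PAGE_SIZE - offset;
	else
		return (int)len;
}

/* Copy len bytes starting at in-page offset `offset`, one segment per
 * page, the way copy_from_guest() pins and copies one page at a time.
 * Returns the number of loop iterations (pages touched). */
static int copy_segments(unsigned char *dst, const unsigned char *src,
			 unsigned long len, int offset)
{
	int seg, copied = 0, iterations = 0;

	while ((seg = next_segment(len, offset)) != 0) {
		memcpy(dst + copied, src + copied, seg);
		copied += seg;
		len -= seg;
		offset = 0;	/* later pages start at offset 0 */
		iterations++;
	}
	return iterations;
}
```

A 5000-byte copy starting 100 bytes into a page splits into a 3996-byte segment and a 1004-byte segment, i.e. two pinned pages.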
From patchwork Fri May 22 12:52:05 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11565571
From: "Kirill A. Shutemov"
Subject: [RFC 07/16] KVM: mm: Introduce VM_KVM_PROTECTED
Date: Fri, 22 May 2020 15:52:05 +0300
Message-Id: <20200522125214.31348-8-kirill.shutemov@linux.intel.com>
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>
References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>

Add a new VMA flag that marks a VMA as not accessible to userspace, but
usable by the kernel with GUP if FOLL_KVM is specified. FOLL_KVM is only
used in KVM code, which has to know how to deal with such pages.

Signed-off-by: Kirill A. Shutemov
---
 include/linux/mm.h  |  8 ++++++++
 mm/gup.c            | 20 ++++++++++++++++----
 mm/huge_memory.c    | 20 ++++++++++++++++----
 mm/memory.c         |  3 +++
 mm/mmap.c           |  3 +++
 virt/kvm/async_pf.c |  4 ++--
 virt/kvm/kvm_main.c |  9 +++++----
 7 files changed, 53 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index e1882eec1752..4f7195365cc0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -329,6 +329,8 @@ extern unsigned int kobjsize(const void *objp);
 # define VM_MAPPED_COPY	VM_ARCH_1 /* T if mapped copy of data (nommu mmap) */
 #endif

+#define VM_KVM_PROTECTED 0
+
 #ifndef VM_GROWSUP
 # define VM_GROWSUP	VM_NONE
 #endif
@@ -646,6 +648,11 @@ static inline bool vma_is_accessible(struct vm_area_struct *vma)
 	return vma->vm_flags & VM_ACCESS_FLAGS;
 }

+static inline bool vma_is_kvm_protected(struct vm_area_struct *vma)
+{
+	return vma->vm_flags & VM_KVM_PROTECTED;
+}
+
 #ifdef CONFIG_SHMEM
 /*
  * The vma_is_shmem is not inline because it is used only by slow
@@ -2773,6 +2780,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 #define FOLL_LONGTERM	0x10000	/* mapping lifetime is indefinite: see below */
 #define FOLL_SPLIT_PMD	0x20000	/* split huge pmd before
 returning */
 #define FOLL_PIN	0x40000	/* pages must be released via unpin_user_page */
+#define FOLL_KVM	0x80000	/* access to VM_KVM_PROTECTED VMAs */
 
 /*
  * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
diff --git a/mm/gup.c b/mm/gup.c
index 87a6a59fe667..bd7b9484b35a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -385,10 +385,19 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
  * FOLL_FORCE can write to even unwritable pte's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
+static inline bool can_follow_write_pte(struct vm_area_struct *vma,
+					pte_t pte, unsigned int flags)
 {
-	return pte_write(pte) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
+	if (pte_write(pte))
+		return true;
+
+	if ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte))
+		return true;
+
+	if (!vma_is_kvm_protected(vma) || !(vma->vm_flags & VM_WRITE))
+		return false;
+
+	return (vma->vm_flags & VM_SHARED) || page_mapcount(pte_page(pte)) == 1;
 }
 
 static struct page *follow_page_pte(struct vm_area_struct *vma,
@@ -431,7 +440,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	}
 	if ((flags & FOLL_NUMA) && pte_protnone(pte))
 		goto no_page;
-	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
+	if ((flags & FOLL_WRITE) && !can_follow_write_pte(vma, pte, flags)) {
 		pte_unmap_unlock(ptep, ptl);
 		return NULL;
 	}
@@ -751,6 +760,9 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 
 	ctx->page_mask = 0;
 
+	if (vma_is_kvm_protected(vma) && (flags & FOLL_KVM))
+		flags &= ~FOLL_NUMA;
+
 	/* make this handle hugepd */
 	page = follow_huge_addr(mm, address, flags & FOLL_WRITE);
 	if (!IS_ERR(page)) {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6ecd1045113b..c3562648a4ef 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1518,10 +1518,19 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
  * FOLL_FORCE can write to even unwritable pmd's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
+static inline bool can_follow_write_pmd(struct vm_area_struct *vma,
+					pmd_t pmd, unsigned int flags)
 {
-	return pmd_write(pmd) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
+	if (pmd_write(pmd))
+		return true;
+
+	if ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd))
+		return true;
+
+	if (!vma_is_kvm_protected(vma) || !(vma->vm_flags & VM_WRITE))
+		return false;
+
+	return (vma->vm_flags & VM_SHARED) || page_mapcount(pmd_page(pmd)) == 1;
 }
 
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
@@ -1534,7 +1543,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 
 	assert_spin_locked(pmd_lockptr(mm, pmd));
 
-	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
+	if (flags & FOLL_WRITE && !can_follow_write_pmd(vma, *pmd, flags))
 		goto out;
 
 	/* Avoid dumping huge zero page */
@@ -1609,6 +1618,9 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
 	bool was_writable;
 	int flags = 0;
 
+	if (vma_is_kvm_protected(vma))
+		return VM_FAULT_SIGBUS;
+
 	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
 	if (unlikely(!pmd_same(pmd, *vmf->pmd)))
 		goto out_unlock;
diff --git a/mm/memory.c b/mm/memory.c
index f703fe8c8346..d7228db6e4bf 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4013,6 +4013,9 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	bool was_writable = pte_savedwrite(vmf->orig_pte);
 	int flags = 0;
 
+	if (vma_is_kvm_protected(vma))
+		return VM_FAULT_SIGBUS;
+
 	/*
 	 * The "pte" at this point cannot be used safely without
 	 * validation through pte_unmap_same(). It's of NUMA type but
diff --git a/mm/mmap.c b/mm/mmap.c
index f609e9ec4a25..d56c3f6efc99 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -112,6 +112,9 @@ pgprot_t vm_get_page_prot(unsigned long vm_flags)
 			(VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]) |
 			pgprot_val(arch_vm_get_page_prot(vm_flags)));
 
+	if (vm_flags & VM_KVM_PROTECTED)
+		ret = PAGE_NONE;
+
 	return arch_filter_pgprot(ret);
 }
 EXPORT_SYMBOL(vm_get_page_prot);
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 15e5b037f92d..7663e962510a 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -60,8 +60,8 @@ static void async_pf_execute(struct work_struct *work)
 	 * access remotely.
 	 */
 	down_read(&mm->mmap_sem);
-	get_user_pages_remote(NULL, mm, addr, 1, FOLL_WRITE, NULL, NULL,
-			      &locked);
+	get_user_pages_remote(NULL, mm, addr, 1, FOLL_WRITE | FOLL_KVM, NULL,
+			      NULL, &locked);
 	if (locked)
 		up_read(&mm->mmap_sem);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 033471f71dae..530af95efdf3 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1727,7 +1727,7 @@ unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *w
 
 static inline int check_user_page_hwpoison(unsigned long addr)
 {
-	int rc, flags = FOLL_HWPOISON | FOLL_WRITE;
+	int rc, flags = FOLL_HWPOISON | FOLL_WRITE | FOLL_KVM;
 
 	rc = get_user_pages(addr, 1, flags, NULL, NULL);
 	return rc == -EHWPOISON;
@@ -1771,7 +1771,7 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
 static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
 			   bool *writable, kvm_pfn_t *pfn)
 {
-	unsigned int flags = FOLL_HWPOISON;
+	unsigned int flags = FOLL_HWPOISON | FOLL_KVM;
 	struct page *page;
 	int npages = 0;
 
@@ -2255,7 +2255,7 @@ int copy_from_guest(void *data, unsigned long hva, int len)
 	int npages, seg;
 
 	while ((seg = next_segment(len, offset)) != 0) {
-		npages = get_user_pages_unlocked(hva, 1, &page, 0);
+		npages = get_user_pages_unlocked(hva, 1, &page, FOLL_KVM);
 		if (npages != 1)
 			return -EFAULT;
 		memcpy(data, page_address(page) + offset, seg);
@@ -2275,7 +2275,8 @@ int copy_to_guest(unsigned long hva, const void *data, int len)
 	int npages, seg;
 
 	while ((seg = next_segment(len, offset)) != 0) {
-		npages = get_user_pages_unlocked(hva, 1, &page, FOLL_WRITE);
+		npages = get_user_pages_unlocked(hva, 1, &page,
+						 FOLL_WRITE | FOLL_KVM);
 		if (npages != 1)
 			return -EFAULT;
 		memcpy(page_address(page) + offset, data, seg);
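[Editor's illustration] The rewritten can_follow_write_pte()/can_follow_write_pmd() predicate above is a pure decision function, so its logic can be modeled and tested in userspace. This is a sketch only: the flag values, the fake pte struct, and the function name below are invented stand-ins for the kernel's definitions.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in flag values for this model; the real VM_ and FOLL_ flags
 * come from include/linux/mm.h. */
#define VM_WRITE	0x1
#define VM_SHARED	0x2
#define VM_KVM_PROTECTED 0x4

#define FOLL_FORCE	0x10
#define FOLL_COW	0x20

struct fake_pte { bool writable; bool dirty; int mapcount; };

static bool can_follow_write(unsigned long vm_flags, struct fake_pte pte,
			     unsigned int foll_flags)
{
	if (pte.writable)
		return true;
	/* FOLL_FORCE may write through a read-only pte after a COW cycle
	 * has left it dirty. */
	if ((foll_flags & FOLL_FORCE) && (foll_flags & FOLL_COW) && pte.dirty)
		return true;
	/* The new KVM-protected case: the VMA itself must be writable... */
	if (!(vm_flags & VM_KVM_PROTECTED) || !(vm_flags & VM_WRITE))
		return false;
	/* ...and the page either shared or mapped exactly once, so no
	 * copy-on-write is needed. */
	return (vm_flags & VM_SHARED) || pte.mapcount == 1;
}
```

The interesting branch is the last one: for a protected VMA, a read-only pte is still considered writable when the page cannot be subject to COW.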
From patchwork Fri May 22 12:52:06 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11565567
Subject: [RFC 08/16] KVM: x86: Use GUP for page walk instead of __get_user()
Date: Fri, 22 May 2020 15:52:06 +0300
Message-Id: <20200522125214.31348-9-kirill.shutemov@linux.intel.com>
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>

The user mapping doesn't have the page mapping for protected memory.

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/kvm/mmu/paging_tmpl.h | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 9bdf9b7d9a96..ef0c5bc8ad7e 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -400,8 +400,14 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 			goto error;
 
 		ptep_user = (pt_element_t __user *)((void *)host_addr + offset);
-		if (unlikely(__get_user(pte, ptep_user)))
-			goto error;
+		if (vcpu->kvm->mem_protected) {
+			if (copy_from_guest(&pte, host_addr + offset,
+					    sizeof(pte)))
+				goto error;
+		} else {
+			if (unlikely(__get_user(pte, ptep_user)))
+				goto error;
+		}
 
 		walker->ptep_user[walker->level - 1] = ptep_user;
 		trace_kvm_mmu_paging_element(pte, walker->level);
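[Editor's illustration] The walker change above reduces to a two-way fetch: dereference the user mapping directly when the VM is unprotected, or go through the bounce-copy helper when it is. A minimal userspace model of that choice (all names here are invented; plain memcpy() stands in for the kmap-based copy_from_guest()):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

typedef uint64_t pt_element_t;

/* Stand-in for the kernel's copy_from_guest(): a bounce copy that does
 * not rely on the source being directly dereferenceable in real life. */
static int copy_from_guest_model(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);
	return 0;
}

/* Models the modified FNAME(walk_addr_generic) pte fetch. */
static int fetch_guest_pte(bool mem_protected, const pt_element_t *ptep,
			   pt_element_t *pte)
{
	if (mem_protected)
		return copy_from_guest_model(pte, ptep, sizeof(*pte));
	*pte = *ptep;	/* models the __get_user() fast path */
	return 0;
}
```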
From patchwork Fri May 22 12:52:07 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11565569
Subject: [RFC 09/16] KVM: Protected memory extension
Date: Fri, 22 May 2020 15:52:07 +0300
Message-Id: <20200522125214.31348-10-kirill.shutemov@linux.intel.com>
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>

Add infrastructure that handles protected memory extension.
Arch-specific code has to provide hypercalls and define non-zero
VM_KVM_PROTECTED.

Signed-off-by: Kirill A. Shutemov
---
 include/linux/kvm_host.h |   4 ++
 mm/mprotect.c            |   1 +
 virt/kvm/kvm_main.c      | 131 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 136 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index bd0bb600f610..d7072f6d6aa0 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -700,6 +700,10 @@ void kvm_arch_flush_shadow_all(struct kvm *kvm);
 void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 				   struct kvm_memory_slot *slot);
 
+int kvm_protect_all_memory(struct kvm *kvm);
+int kvm_protect_memory(struct kvm *kvm,
+		       unsigned long gfn, unsigned long npages, bool protect);
+
 int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
 			    struct page **pages, int nr_pages);
 
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 494192ca954b..552be3b4c80a 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -505,6 +505,7 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
 	vm_unacct_memory(charged);
 	return error;
 }
+EXPORT_SYMBOL_GPL(mprotect_fixup);
 
 /*
  * pkey==-1 when doing a legacy mprotect()
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 530af95efdf3..07d45da5d2aa 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -155,6 +155,8 @@ static void kvm_uevent_notify_change(unsigned int type, struct kvm *kvm);
 static unsigned long long kvm_createvm_count;
 static unsigned long long kvm_active_vms;
 
+static int protect_memory(unsigned long start, unsigned long end, bool protect);
+
 __weak int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
 		unsigned long start, unsigned long end, bool blockable)
 {
@@ -1309,6 +1311,14 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	if (r)
 		goto out_bitmap;
 
+	if (mem->memory_size && kvm->mem_protected) {
+		r = protect_memory(new.userspace_addr,
+				   new.userspace_addr + new.npages * PAGE_SIZE,
+				   true);
+		if (r)
+			goto out_bitmap;
+	}
+
 	if (old.dirty_bitmap && !new.dirty_bitmap)
 		kvm_destroy_dirty_bitmap(&old);
 	return 0;
@@ -2652,6 +2662,127 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_mark_page_dirty);
 
+static int protect_memory(unsigned long start, unsigned long end, bool protect)
+{
+	struct mm_struct *mm = current->mm;
+	struct vm_area_struct *vma, *prev;
+	int ret;
+
+	if (down_write_killable(&mm->mmap_sem))
+		return -EINTR;
+
+	ret = -ENOMEM;
+	vma = find_vma(current->mm, start);
+	if (!vma)
+		goto out;
+
+	ret = -EINVAL;
+	if (vma->vm_start > start)
+		goto out;
+
+	if (start > vma->vm_start)
+		prev = vma;
+	else
+		prev = vma->vm_prev;
+
+	ret = 0;
+	while (true) {
+		unsigned long newflags, tmp;
+
+		tmp = vma->vm_end;
+		if (tmp > end)
+			tmp = end;
+
+		newflags = vma->vm_flags;
+		if (protect)
+			newflags |= VM_KVM_PROTECTED;
+		else
+			newflags &= ~VM_KVM_PROTECTED;
+
+		/* The VMA has been handled as part of other memslot */
+		if (newflags == vma->vm_flags)
+			goto next;
+
+		ret = mprotect_fixup(vma, &prev, start, tmp, newflags);
+		if (ret)
+			goto out;
+
+next:
+		start = tmp;
+		if (start < prev->vm_end)
+			start = prev->vm_end;
+
+		if (start >= end)
+			goto out;
+
+		vma = prev->vm_next;
+		if (!vma || vma->vm_start != start) {
+			ret = -ENOMEM;
+			goto out;
+		}
+	}
+out:
+	up_write(&mm->mmap_sem);
+	return ret;
+}
+
+int kvm_protect_memory(struct kvm *kvm,
+		       unsigned long gfn, unsigned long npages, bool protect)
+{
+	struct kvm_memory_slot *memslot;
+	unsigned long start, end;
+	gfn_t numpages;
+
+	if (!VM_KVM_PROTECTED)
+		return -KVM_ENOSYS;
+
+	if (!npages)
+		return 0;
+
+	memslot = gfn_to_memslot(kvm, gfn);
+	/* Not backed by memory. It's okay. */
+	if (!memslot)
+		return 0;
+
+	start = gfn_to_hva_many(memslot, gfn, &numpages);
+	end = start + npages * PAGE_SIZE;
+
+	/* XXX: Share range across memory slots? */
+	if (WARN_ON(numpages < npages))
+		return -EINVAL;
+
+	return protect_memory(start, end, protect);
+}
+EXPORT_SYMBOL_GPL(kvm_protect_memory);
+
+int kvm_protect_all_memory(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	unsigned long start, end;
+	int i, ret = 0;
+
+	if (!VM_KVM_PROTECTED)
+		return -KVM_ENOSYS;
+
+	mutex_lock(&kvm->slots_lock);
+	kvm->mem_protected = true;
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		slots = __kvm_memslots(kvm, i);
+		kvm_for_each_memslot(memslot, slots) {
+			start = memslot->userspace_addr;
+			end = start + memslot->npages * PAGE_SIZE;
+			ret = protect_memory(start, end, true);
+			if (ret)
+				goto out;
+		}
+	}
+out:
+	mutex_unlock(&kvm->slots_lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kvm_protect_all_memory);
+
 void kvm_sigset_activate(struct kvm_vcpu *vcpu)
 {
 	if (!vcpu->sigset_active)
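[Editor's illustration] protect_memory() above walks the sorted VMA list covering [start, end), toggling VM_KVM_PROTECTED on each piece and failing with -ENOMEM if the range crosses a hole between VMAs. That walk can be modeled in userspace over a plain array of intervals; this sketch is simplified (no VMA splitting at partial overlaps, which mprotect_fixup() handles in the kernel) and all names are invented:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MODEL_ENOMEM	12
#define MODEL_PROT_FLAG	0x1

/* A "VMA": half-open [start, end) with a flags word. The array is
 * assumed sorted and non-overlapping, like the kernel's VMA list. */
struct model_vma { unsigned long start, end, flags; };

static int model_protect(struct model_vma *vmas, size_t n,
			 unsigned long start, unsigned long end, bool protect)
{
	size_t i;

	for (i = 0; i < n && start < end; i++) {
		if (vmas[i].end <= start)	/* entirely before the range */
			continue;
		if (vmas[i].start > start)	/* hole inside the range */
			return -MODEL_ENOMEM;
		if (protect)
			vmas[i].flags |= MODEL_PROT_FLAG;
		else
			vmas[i].flags &= ~MODEL_PROT_FLAG;
		start = vmas[i].end;		/* advance past this VMA */
	}
	/* The whole range must have been covered. */
	return start >= end ? 0 : -MODEL_ENOMEM;
}
```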
From patchwork Fri May 22 12:52:08 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11565581
Subject: [RFC 10/16] KVM: x86: Enabled protected memory extension
Date: Fri, 22 May 2020 15:52:08 +0300
Message-Id: <20200522125214.31348-11-kirill.shutemov@linux.intel.com>
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>

Wire up hypercalls for the feature and define VM_KVM_PROTECTED.

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/Kconfig     | 1 +
 arch/x86/kvm/cpuid.c | 3 +++
 arch/x86/kvm/x86.c   | 9 +++++++++
 include/linux/mm.h   | 4 ++++
 4 files changed, 17 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 58dd44a1b92f..420e3947f0c6 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -801,6 +801,7 @@ config KVM_GUEST
 	select ARCH_CPUIDLE_HALTPOLL
 	select X86_MEM_ENCRYPT_COMMON
 	select SWIOTLB
+	select ARCH_USES_HIGH_VMA_FLAGS
 	default y
 	---help---
 	  This option enables various optimizations for running under the KVM
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 901cd1fdecd9..94cc5e45467e 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -714,6 +714,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 			     (1 << KVM_FEATURE_POLL_CONTROL) |
 			     (1 << KVM_FEATURE_PV_SCHED_YIELD);
 
+		if (VM_KVM_PROTECTED)
+			entry->eax |= (1 << KVM_FEATURE_MEM_PROTECTED);
+
 		if (sched_info_on())
 			entry->eax |= (1 << KVM_FEATURE_STEAL_TIME);
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c17e6eb9ad43..acba0ac07f61 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7598,6 +7598,15 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 		kvm_sched_yield(vcpu->kvm, a0);
 		ret = 0;
 		break;
+	case KVM_HC_ENABLE_MEM_PROTECTED:
+		ret = kvm_protect_all_memory(vcpu->kvm);
+		break;
+	case KVM_HC_MEM_SHARE:
+		ret = kvm_protect_memory(vcpu->kvm, a0, a1, false);
+		break;
+	case KVM_HC_MEM_UNSHARE:
+		ret = kvm_protect_memory(vcpu->kvm, a0, a1, true);
+		break;
 	default:
 		ret = -KVM_ENOSYS;
 		break;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4f7195365cc0..6eb771c14968 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -329,7 +329,11 @@ extern unsigned int kobjsize(const void *objp);
 # define VM_MAPPED_COPY	VM_ARCH_1	/* T if mapped copy of data (nommu mmap) */
 #endif
 
+#if defined(CONFIG_X86_64) && defined(CONFIG_KVM)
+#define VM_KVM_PROTECTED VM_HIGH_ARCH_4
+#else
 #define VM_KVM_PROTECTED 0
+#endif
 
 #ifndef VM_GROWSUP
 # define VM_GROWSUP VM_NONE
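[Editor's illustration] The hypercall dispatch added to kvm_emulate_hypercall() above maps "share" to clearing protection on a range and "unshare" to re-applying it, with unknown numbers falling through to -KVM_ENOSYS. A sketch of that mapping, with placeholder hypercall numbers and error code (not the real KVM ABI values):

```c
#include <assert.h>
#include <stdbool.h>

/* Placeholder numbers for this model; the real KVM_HC_* values live in
 * the KVM ABI headers. */
enum {
	HC_ENABLE_MEM_PROTECTED = 100,
	HC_MEM_SHARE = 101,
	HC_MEM_UNSHARE = 102,
};
#define ENOSYS_MODEL 1000

struct protect_call { bool called; bool protect; };

/* Mirrors the switch in the patch: "share" means unprotect the range so
 * the host can touch it, "unshare" means protect it again. */
static long model_hypercall(int nr, struct protect_call *pc)
{
	switch (nr) {
	case HC_ENABLE_MEM_PROTECTED:
		return 0;		/* would call kvm_protect_all_memory() */
	case HC_MEM_SHARE:
		pc->called = true;
		pc->protect = false;	/* kvm_protect_memory(..., false) */
		return 0;
	case HC_MEM_UNSHARE:
		pc->called = true;
		pc->protect = true;	/* kvm_protect_memory(..., true) */
		return 0;
	default:
		return -ENOSYS_MODEL;
	}
}
```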
Shutemov"
Subject: [RFC 11/16] KVM: Rework copy_to/from_guest() to avoid direct mapping
Date: Fri, 22 May 2020 15:52:09 +0300
Message-Id: <20200522125214.31348-12-kirill.shutemov@linux.intel.com>
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>

We are going to unmap guest pages from the direct mapping and cannot
rely on it for guest memory access. Use a temporary kmap_atomic()-style
mapping to access guest memory instead.

Signed-off-by: Kirill A. Shutemov
---
 virt/kvm/kvm_main.c | 57 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 55 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 07d45da5d2aa..63282def3760 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2258,17 +2258,45 @@ static int next_segment(unsigned long len, int offset)
 	return len;
 }
 
+static pte_t **guest_map_ptes;
+static struct vm_struct *guest_map_area;
+
+static void *map_page_atomic(struct page *page)
+{
+	pte_t *pte;
+	void *vaddr;
+
+	preempt_disable();
+	pte = guest_map_ptes[smp_processor_id()];
+	vaddr = guest_map_area->addr + smp_processor_id() * PAGE_SIZE;
+	set_pte(pte, mk_pte(page, PAGE_KERNEL));
+	return vaddr;
+}
+
+static void unmap_page_atomic(void *vaddr)
+{
+	pte_t *pte = guest_map_ptes[smp_processor_id()];
+
+	set_pte(pte, __pte(0));
+	__flush_tlb_one_kernel((unsigned long)vaddr);
+	preempt_enable();
+}
+
 int copy_from_guest(void *data, unsigned long hva, int len)
 {
 	int offset = offset_in_page(hva);
 	struct page *page;
 	int npages, seg;
+	void *vaddr;
 
 	while ((seg = next_segment(len, offset)) != 0) {
 		npages = get_user_pages_unlocked(hva, 1, &page, FOLL_KVM);
 		if (npages != 1)
 			return -EFAULT;
-		memcpy(data, page_address(page) + offset, seg);
+
+		vaddr = map_page_atomic(page);
+		memcpy(data, vaddr + offset, seg);
+		unmap_page_atomic(vaddr);
+
 		put_page(page);
 		len -= seg;
 		hva += seg;
@@ -2283,13 +2311,18 @@ int copy_to_guest(unsigned long hva, const void *data, int len)
 	int offset = offset_in_page(hva);
 	struct page *page;
 	int npages, seg;
+	void *vaddr;
 
 	while ((seg = next_segment(len, offset)) != 0) {
 		npages = get_user_pages_unlocked(hva, 1, &page,
						 FOLL_WRITE | FOLL_KVM);
 		if (npages != 1)
 			return -EFAULT;
-		memcpy(page_address(page) + offset, data, seg);
+
+		vaddr = map_page_atomic(page);
+		memcpy(vaddr + offset, data, seg);
+		unmap_page_atomic(vaddr);
+
 		put_page(page);
 		len -= seg;
 		hva += seg;
@@ -4921,6 +4954,18 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
 	if (r)
 		goto out_free;
 
+	if (VM_KVM_PROTECTED) {
+		guest_map_ptes = kmalloc_array(num_possible_cpus(),
+					       sizeof(pte_t *), GFP_KERNEL);
+		if (!guest_map_ptes)
+			goto out_unreg;
+
+		guest_map_area = alloc_vm_area(PAGE_SIZE * num_possible_cpus(),
+					       guest_map_ptes);
+		if (!guest_map_area)
+			goto out_unreg;
+	}
+
 	kvm_chardev_ops.owner = module;
 	kvm_vm_fops.owner = module;
 	kvm_vcpu_fops.owner = module;
@@ -4944,6 +4989,10 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
 	return 0;
 
 out_unreg:
+	if (guest_map_area)
+		free_vm_area(guest_map_area);
+	if (guest_map_ptes)
+		kfree(guest_map_ptes);
 	kvm_async_pf_deinit();
 out_free:
 	kmem_cache_destroy(kvm_vcpu_cache);
@@ -4965,6 +5014,10 @@ EXPORT_SYMBOL_GPL(kvm_init);
 
 void kvm_exit(void)
 {
+	if (guest_map_area)
+		free_vm_area(guest_map_area);
+	if (guest_map_ptes)
+		kfree(guest_map_ptes);
 	debugfs_remove_recursive(kvm_debugfs_dir);
 	misc_deregister(&kvm_dev);
 	kmem_cache_destroy(kvm_vcpu_cache);
From patchwork Fri May 22 12:52:10 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11565583
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry, "Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC 12/16] x86/kvm: Share steal time page with host
Date: Fri, 22 May 2020 15:52:10 +0300
Message-Id: <20200522125214.31348-13-kirill.shutemov@linux.intel.com>
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>

struct kvm_steal_time is shared between the guest and the host. Mark it
as shared.

Signed-off-by: Kirill A.
Shutemov --- arch/x86/kernel/kvm.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c index f50d65df4412..b0f445796ed1 100644 --- a/arch/x86/kernel/kvm.c +++ b/arch/x86/kernel/kvm.c @@ -286,11 +286,15 @@ static void kvm_register_steal_time(void) { int cpu = smp_processor_id(); struct kvm_steal_time *st = &per_cpu(steal_time, cpu); + unsigned long phys; if (!has_steal_clock) return; - wrmsrl(MSR_KVM_STEAL_TIME, (slow_virt_to_phys(st) | KVM_MSR_ENABLED)); + phys = slow_virt_to_phys(st); + if (kvm_mem_protected()) + kvm_hypercall2(KVM_HC_MEM_SHARE, phys >> PAGE_SHIFT, 1); + wrmsrl(MSR_KVM_STEAL_TIME, (phys | KVM_MSR_ENABLED)); pr_info("kvm-stealtime: cpu %d, msr %llx\n", cpu, (unsigned long long) slow_virt_to_phys(st)); } From patchwork Fri May 22 12:52:11 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A. Shutemov" X-Patchwork-Id: 11565577 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C201490 for ; Fri, 22 May 2020 12:52:48 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 90BB3206D5 for ; Fri, 22 May 2020 12:52:48 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="RPoE0F31" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 90BB3206D5 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=shutemov.name Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 79D1C80014; Fri, 22 May 2020 08:52:26 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by 
From patchwork Fri May 22 12:52:11 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11565577
From: "Kirill A.
Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry, "Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC 13/16] x86/kvmclock: Share hvclock memory with the host
Date: Fri, 22 May 2020 15:52:11 +0300
Message-Id: <20200522125214.31348-14-kirill.shutemov@linux.intel.com>
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>

hvclock is shared between the guest and the hypervisor. It has to be
accessible by the host.

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/kernel/kvmclock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index 34b18f6eeb2c..ac6c2abe0d0f 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -253,7 +253,7 @@ static void __init kvmclock_init_mem(void)
 	 * hvclock is shared between the guest and the hypervisor, must
 	 * be mapped decrypted.
 	 */
-	if (sev_active()) {
+	if (sev_active() || kvm_mem_protected()) {
 		r = set_memory_decrypted((unsigned long) hvclock_mem,
					 1UL << order);
 		if (r) {
From patchwork Fri May 22 12:52:12 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11565589
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry, "Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A.
Shutemov"
Subject: [RFC 14/16] KVM: Introduce gfn_to_pfn_memslot_protected()
Date: Fri, 22 May 2020 15:52:12 +0300
Message-Id: <20200522125214.31348-15-kirill.shutemov@linux.intel.com>
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>

The new interface allows detecting whether a page is protected.
A protected page cannot be accessed directly by the host: it has to be
mapped manually.

This is preparation for the next patch.

Signed-off-by: Kirill A. Shutemov
---
 arch/powerpc/kvm/book3s_64_mmu_hv.c    |  2 +-
 arch/powerpc/kvm/book3s_64_mmu_radix.c |  2 +-
 arch/x86/kvm/mmu/mmu.c                 |  6 +++--
 include/linux/kvm_host.h               |  2 +-
 virt/kvm/kvm_main.c                    | 35 ++++++++++++++++++--------
 5 files changed, 32 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 2b35f9bcf892..e9a13ecf812f 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -587,7 +587,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	} else {
 		/* Call KVM generic code to do the slow-path check */
 		pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
-					   writing, &write_ok);
+					   writing, &write_ok, NULL);
 		if (is_error_noslot_pfn(pfn))
 			return -EFAULT;
 		page = NULL;
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index aa12cd4078b3..58f8df466a94 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -798,7 +798,7 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 	/* Call KVM generic code to do the slow-path check */
 	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
-				   writing, upgrade_p);
+				   writing, upgrade_p, NULL);
 	if (is_error_noslot_pfn(pfn))
 		return -EFAULT;
 	page = NULL;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8071952e9cf2..0fc095a66a3c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4096,7 +4096,8 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 	slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
 	async = false;
-	*pfn = __gfn_to_pfn_memslot(slot, gfn, false, &async, write, writable);
+	*pfn = __gfn_to_pfn_memslot(slot, gfn, false, &async, write, writable,
+				    NULL);
 	if (!async)
 		return false; /* *pfn has correct page already */
@@ -4110,7 +4111,8 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 		return true;
 	}
 
-	*pfn = __gfn_to_pfn_memslot(slot, gfn, false, NULL, write, writable);
+	*pfn = __gfn_to_pfn_memslot(slot, gfn, false, NULL, write, writable,
+				    NULL);
 	return false;
 }
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d7072f6d6aa0..eca18ef9b1f4 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -724,7 +724,7 @@ kvm_pfn_t gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn);
 kvm_pfn_t gfn_to_pfn_memslot_atomic(struct kvm_memory_slot *slot, gfn_t gfn);
 kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
			       bool atomic, bool *async, bool write_fault,
-			       bool *writable);
+			       bool *writable, bool *protected);
 void kvm_release_pfn_clean(kvm_pfn_t pfn);
 void kvm_release_pfn_dirty(kvm_pfn_t pfn);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 63282def3760..8bcf3201304a 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1779,9 +1779,10 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
  * 1 indicates success, -errno is returned if error is detected.
  */
 static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
-			   bool *writable, kvm_pfn_t *pfn)
+			   bool *writable, bool *protected, kvm_pfn_t *pfn)
 {
 	unsigned int flags = FOLL_HWPOISON | FOLL_KVM;
+	struct vm_area_struct *vma;
 	struct page *page;
 	int npages = 0;
@@ -1795,9 +1796,15 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
 	if (async)
		flags |= FOLL_NOWAIT;
 
-	npages = get_user_pages_unlocked(addr, 1, &page, flags);
-	if (npages != 1)
+	down_read(&current->mm->mmap_sem);
+	npages = get_user_pages(addr, 1, flags, &page, &vma);
+	if (npages != 1) {
+		up_read(&current->mm->mmap_sem);
 		return npages;
+	}
+	if (protected)
+		*protected = vma_is_kvm_protected(vma);
+	up_read(&current->mm->mmap_sem);
 
 	/* map read fault as writable if possible */
 	if (unlikely(!write_fault) && writable) {
@@ -1888,7 +1895,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
  * whether the mapping is writable.
  */
 static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
-			    bool write_fault, bool *writable)
+			    bool write_fault, bool *writable, bool *protected)
 {
 	struct vm_area_struct *vma;
 	kvm_pfn_t pfn = 0;
@@ -1903,7 +1910,8 @@ static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
 	if (atomic)
		return KVM_PFN_ERR_FAULT;
 
-	npages = hva_to_pfn_slow(addr, async, write_fault, writable, &pfn);
+	npages = hva_to_pfn_slow(addr, async, write_fault, writable, protected,
+				 &pfn);
 	if (npages == 1)
		return pfn;
@@ -1937,7 +1945,7 @@ static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
 kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
			       bool atomic, bool *async, bool write_fault,
-			       bool *writable)
+			       bool *writable, bool *protected)
 {
 	unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault);
@@ -1960,7 +1968,7 @@ kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
 	}
 
 	return hva_to_pfn(addr, atomic, async, write_fault,
-			  writable);
+			  writable, protected);
 }
 EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);
@@ -1968,19 +1976,26 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
			  bool *writable)
 {
	return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, NULL,
-				    write_fault, writable);
+				    write_fault, writable, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_prot);
 
 kvm_pfn_t gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL);
+	return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot);
 
+static kvm_pfn_t gfn_to_pfn_memslot_protected(struct kvm_memory_slot *slot,
+					      gfn_t gfn, bool *protected)
+{
+	return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL,
+				    protected);
+}
+
 kvm_pfn_t gfn_to_pfn_memslot_atomic(struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return __gfn_to_pfn_memslot(slot, gfn, true, NULL, true, NULL);
+	return __gfn_to_pfn_memslot(slot, gfn, true, NULL, true, NULL, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic);
From patchwork Fri May 22 12:52:13 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11565587
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry, "Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A.
Shutemov"
Subject: [RFC 15/16] KVM: Handle protected memory in __kvm_map_gfn()/__kvm_unmap_gfn()
Date: Fri, 22 May 2020 15:52:13 +0300
Message-Id: <20200522125214.31348-16-kirill.shutemov@linux.intel.com>
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>
References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>

We cannot access protected pages directly. Use ioremap() to create a
temporary mapping of the page. The mapping is destroyed on
__kvm_unmap_gfn().

The new interface gfn_to_pfn_memslot_protected() is used to detect if
the page is protected.

ioremap_cache_force() is a hack to bypass the IORES_MAP_SYSTEM_RAM check
in the x86 ioremap code. We need a better solution.

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/include/asm/io.h            |  2 ++
 arch/x86/include/asm/pgtable_types.h |  1 +
 arch/x86/mm/ioremap.c                | 16 +++++++++++++---
 include/linux/kvm_host.h             |  1 +
 virt/kvm/kvm_main.c                  | 14 +++++++++++---
 5 files changed, 28 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index c58d52fd7bf2..a3e1bfad1026 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -184,6 +184,8 @@ extern void __iomem *ioremap_uc(resource_size_t offset, unsigned long size);
 #define ioremap_uc ioremap_uc
 extern void __iomem *ioremap_cache(resource_size_t offset, unsigned long size);
 #define ioremap_cache ioremap_cache
+extern void __iomem *ioremap_cache_force(resource_size_t offset, unsigned long size);
+#define ioremap_cache_force ioremap_cache_force
 extern void __iomem *ioremap_prot(resource_size_t offset, unsigned long size, unsigned long prot_val);
 #define ioremap_prot ioremap_prot
 extern void __iomem *ioremap_encrypted(resource_size_t phys_addr, unsigned long size);
diff --git
a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index b6606fe6cfdf..66cc22abda7b 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -147,6 +147,7 @@ enum page_cache_mode {
 	_PAGE_CACHE_MODE_UC = 3,
 	_PAGE_CACHE_MODE_WT = 4,
 	_PAGE_CACHE_MODE_WP = 5,
+	_PAGE_CACHE_MODE_WB_FORCE = 6,
 	_PAGE_CACHE_MODE_NUM = 8
 };

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 18c637c0dc6f..e48fc0e130b2 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -202,9 +202,12 @@ __ioremap_caller(resource_size_t phys_addr, unsigned long size,
 	__ioremap_check_mem(phys_addr, size, &io_desc);
 
 	/*
-	 * Don't allow anybody to remap normal RAM that we're using..
+	 * Don't allow anybody to remap normal RAM that we're using, unless
+	 * _PAGE_CACHE_MODE_WB_FORCE is used.
 	 */
-	if (io_desc.flags & IORES_MAP_SYSTEM_RAM) {
+	if (pcm == _PAGE_CACHE_MODE_WB_FORCE) {
+		pcm = _PAGE_CACHE_MODE_WB;
+	} else if (io_desc.flags & IORES_MAP_SYSTEM_RAM) {
 		WARN_ONCE(1, "ioremap on RAM at %pa - %pa\n",
 			  &phys_addr, &last_addr);
 		return NULL;
@@ -419,6 +422,13 @@ void __iomem *ioremap_cache(resource_size_t phys_addr, unsigned long size)
 }
 EXPORT_SYMBOL(ioremap_cache);
 
+void __iomem *ioremap_cache_force(resource_size_t phys_addr, unsigned long size)
+{
+	return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WB_FORCE,
+				__builtin_return_address(0), false);
+}
+EXPORT_SYMBOL(ioremap_cache_force);
+
 void __iomem *ioremap_prot(resource_size_t phys_addr, unsigned long size,
 		unsigned long prot_val)
 {
@@ -467,7 +477,7 @@ void iounmap(volatile void __iomem *addr)
 	p = find_vm_area((void __force *)addr);
 	if (!p) {
-		printk(KERN_ERR "iounmap: bad address %p\n", addr);
+		printk(KERN_ERR "iounmap: bad address %px\n", addr);
 		dump_stack();
 		return;
 	}

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index eca18ef9b1f4..b6944f88033d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -237,6 +237,7 @@
struct kvm_host_map {
 	void *hva;
 	kvm_pfn_t pfn;
 	kvm_pfn_t gfn;
+	bool protected;
 };
 
 /*

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8bcf3201304a..71aac117357f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2091,6 +2091,7 @@ static int __kvm_map_gfn(struct kvm_memslots *slots, gfn_t gfn,
 	void *hva = NULL;
 	struct page *page = KVM_UNMAPPED_PAGE;
 	struct kvm_memory_slot *slot = __gfn_to_memslot(slots, gfn);
+	bool protected = false;
 	u64 gen = slots->generation;
 
 	if (!map)
@@ -2107,12 +2108,16 @@ static int __kvm_map_gfn(struct kvm_memslots *slots, gfn_t gfn,
 	} else {
 		if (atomic)
 			return -EAGAIN;
-		pfn = gfn_to_pfn_memslot(slot, gfn);
+		pfn = gfn_to_pfn_memslot_protected(slot, gfn, &protected);
 	}
 	if (is_error_noslot_pfn(pfn))
 		return -EINVAL;
 
-	if (pfn_valid(pfn)) {
+	if (protected) {
+		if (atomic)
+			return -EAGAIN;
+		hva = ioremap_cache_force(pfn_to_hpa(pfn), PAGE_SIZE);
+	} else if (pfn_valid(pfn)) {
 		page = pfn_to_page(pfn);
 		if (atomic)
 			hva = kmap_atomic(page);
@@ -2133,6 +2138,7 @@ static int __kvm_map_gfn(struct kvm_memslots *slots, gfn_t gfn,
 	map->hva = hva;
 	map->pfn = pfn;
 	map->gfn = gfn;
+	map->protected = protected;
 
 	return 0;
 }
@@ -2163,7 +2169,9 @@ static void __kvm_unmap_gfn(struct kvm_memory_slot *memslot,
 	if (!map->hva)
 		return;
 
-	if (map->page != KVM_UNMAPPED_PAGE) {
+	if (map->protected) {
+		iounmap(map->hva);
+	} else if (map->page != KVM_UNMAPPED_PAGE) {
 		if (atomic)
 			kunmap_atomic(map->hva);
 		else

From patchwork Fri May 22 12:52:14 2020
X-Patchwork-Submitter: "Kirill A.
Shutemov"
X-Patchwork-Id: 11565579
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini,
 Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry,
 "Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org, kvm@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A.
Shutemov"
Subject: [RFC 16/16] KVM: Unmap protected pages from direct mapping
Date: Fri, 22 May 2020 15:52:14 +0300
Message-Id: <20200522125214.31348-17-kirill.shutemov@linux.intel.com>
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>
References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>

If the protected memory feature is enabled, unmap guest memory from the
kernel's direct mapping.

Migration and KSM are disabled for protected memory as they would
require special treatment.

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/mm/pat/set_memory.c |  1 +
 include/linux/kvm_host.h     |  3 ++
 mm/huge_memory.c             |  9 +++++
 mm/ksm.c                     |  3 ++
 mm/memory.c                  | 13 +++++++
 mm/rmap.c                    |  4 ++
 virt/kvm/kvm_main.c          | 74 ++++++++++++++++++++++++++++++++++++
 7 files changed, 107 insertions(+)

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 6f075766bb94..13988413af40 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2227,6 +2227,7 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
 
 	arch_flush_lazy_mmu_mode();
 }
+EXPORT_SYMBOL_GPL(__kernel_map_pages);
 
 #ifdef CONFIG_HIBERNATION
 bool kernel_page_present(struct page *page)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index b6944f88033d..e1d7762b615c 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -705,6 +705,9 @@ int kvm_protect_all_memory(struct kvm *kvm);
 int kvm_protect_memory(struct kvm *kvm,
 		       unsigned long gfn, unsigned long npages, bool protect);
 
+void kvm_map_page(struct page *page, int nr_pages);
+void kvm_unmap_page(struct page *page, int nr_pages);
+
 int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
 			    struct page **pages, int nr_pages);

diff --git a/mm/huge_memory.c
b/mm/huge_memory.c
index c3562648a4ef..d8a444a401cc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -650,6 +651,10 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 		spin_unlock(vmf->ptl);
 		count_vm_event(THP_FAULT_ALLOC);
 		count_memcg_events(memcg, THP_FAULT_ALLOC, 1);
+
+		/* Unmap page from direct mapping */
+		if (vma_is_kvm_protected(vma))
+			kvm_unmap_page(page, HPAGE_PMD_NR);
 	}
 
 	return 0;
@@ -1886,6 +1891,10 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			page_remove_rmap(page, true);
 			VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
 			VM_BUG_ON_PAGE(!PageHead(page), page);
+
+			/* Map the page back to the direct mapping */
+			if (vma_is_kvm_protected(vma))
+				kvm_map_page(page, HPAGE_PMD_NR);
 		} else if (thp_migration_supported()) {
 			swp_entry_t entry;

diff --git a/mm/ksm.c b/mm/ksm.c
index 281c00129a2e..942b88782ac2 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -527,6 +527,9 @@ static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
 		return NULL;
 	if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
 		return NULL;
+	/* TODO */
+	if (vma_is_kvm_protected(vma))
+		return NULL;
 	return vma;
 }

diff --git a/mm/memory.c b/mm/memory.c
index d7228db6e4bf..74773229b854 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -71,6 +71,7 @@
 #include
 #include
 #include
+#include
 #include
@@ -1088,6 +1089,11 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			    likely(!(vma->vm_flags & VM_SEQ_READ)))
 				mark_page_accessed(page);
 		}
+
+		/* Map the page back to the direct mapping */
+		if (vma_is_anonymous(vma) && vma_is_kvm_protected(vma))
+			kvm_map_page(page, 1);
+
 		rss[mm_counter(page)]--;
 		page_remove_rmap(page, false);
 		if (unlikely(page_mapcount(page) < 0))
@@ -3312,6 +3318,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	struct page *page;
 	vm_fault_t ret = 0;
 	pte_t entry;
+	bool set = false;
 
 	/* File mapping without ->vm_ops ?
*/
 	if (vma->vm_flags & VM_SHARED)
@@ -3397,6 +3404,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	page_add_new_anon_rmap(page, vma, vmf->address, false);
 	mem_cgroup_commit_charge(page, memcg, false, false);
 	lru_cache_add_active_or_unevictable(page, vma);
+	set = true;
 setpte:
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
@@ -3404,6 +3412,11 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	update_mmu_cache(vma, vmf->address, vmf->pte);
 unlock:
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
+
+	/* Unmap page from direct mapping */
+	if (vma_is_kvm_protected(vma) && set)
+		kvm_unmap_page(page, 1);
+
 	return ret;
 release:
 	mem_cgroup_cancel_charge(page, memcg, false);

diff --git a/mm/rmap.c b/mm/rmap.c
index f79a206b271a..a9b2e347d1ab 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1709,6 +1709,10 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 
 static bool invalid_migration_vma(struct vm_area_struct *vma, void *arg)
 {
+	/* TODO */
+	if (vma_is_kvm_protected(vma))
+		return true;
+
 	return vma_is_temporary_stack(vma);
 }

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 71aac117357f..defc33d3a124 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -51,6 +51,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -2718,6 +2719,72 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_mark_page_dirty);
 
+void kvm_map_page(struct page *page, int nr_pages)
+{
+	int i;
+
+	/* Clear page before returning it to the direct mapping */
+	for (i = 0; i < nr_pages; i++) {
+		void *p = map_page_atomic(page + i);
+		memset(p, 0, PAGE_SIZE);
+		unmap_page_atomic(p);
+	}
+
+	kernel_map_pages(page, nr_pages, 1);
+}
+EXPORT_SYMBOL_GPL(kvm_map_page);
+
+void kvm_unmap_page(struct page *page, int nr_pages)
+{
+	kernel_map_pages(page, nr_pages, 0);
+}
+EXPORT_SYMBOL_GPL(kvm_unmap_page);
+
+static int adjust_direct_mapping_pte_range(pmd_t *pmd, unsigned long addr,
+					   unsigned long
end,
+					   struct mm_walk *walk)
+{
+	bool protect = (bool)walk->private;
+	pte_t *pte;
+	struct page *page;
+
+	if (pmd_trans_huge(*pmd)) {
+		page = pmd_page(*pmd);
+		if (is_huge_zero_page(page))
+			return 0;
+		VM_BUG_ON_PAGE(total_mapcount(page) != 1, page);
+		/* XXX: Would it fail with direct device assignment? */
+		VM_BUG_ON_PAGE(page_count(page) != 1, page);
+		kernel_map_pages(page, HPAGE_PMD_NR, !protect);
+		return 0;
+	}
+
+	pte = pte_offset_map(pmd, addr);
+	for (; addr != end; pte++, addr += PAGE_SIZE) {
+		pte_t entry = *pte;
+
+		if (!pte_present(entry))
+			continue;
+
+		if (is_zero_pfn(pte_pfn(entry)))
+			continue;
+
+		page = pte_page(entry);
+
+		VM_BUG_ON_PAGE(page_mapcount(page) != 1, page);
+		/* XXX: Would it fail with direct device assignment? */
+		VM_BUG_ON_PAGE(page_count(page) !=
+			       total_mapcount(compound_head(page)), page);
+		kernel_map_pages(page, 1, !protect);
+	}
+
+	return 0;
+}
+
+static const struct mm_walk_ops adjust_direct_mapping_ops = {
+	.pmd_entry = adjust_direct_mapping_pte_range,
+};
+
 static int protect_memory(unsigned long start, unsigned long end, bool protect)
 {
 	struct mm_struct *mm = current->mm;
@@ -2763,6 +2830,13 @@ static int protect_memory(unsigned long start, unsigned long end, bool protect)
 		if (ret)
 			goto out;
 
+		if (vma_is_anonymous(vma)) {
+			ret = walk_page_range_novma(mm, start, tmp,
+						    &adjust_direct_mapping_ops, NULL,
+						    (void *) protect);
+			if (ret)
+				goto out;
+		}
 next:
 		start = tmp;
 		if (start < prev->vm_end)