From patchwork Fri May 22 12:51:59 2020
X-Patchwork-Submitter: "Kirill A. Shutemov" <kirill@shutemov.name>
X-Patchwork-Id: 11565549
From: "Kirill A. Shutemov" <kirill@shutemov.name>
X-Google-Original-From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini,
 Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry,
 "Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org, kvm@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [RFC 01/16] x86/mm: Move force_dma_unencrypted() to common code
Date: Fri, 22 May 2020 15:51:59 +0300
Message-Id: <20200522125214.31348-2-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>
References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>

force_dma_unencrypted() has to return true for a KVM guest with memory
protection enabled. Move it out of the AMD SME code.

Introduce a new config option, X86_MEM_ENCRYPT_COMMON, that has to be
selected by all x86 memory encryption features.

This is preparation for the following patches.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/Kconfig                 |  8 +++++--
 arch/x86/include/asm/io.h        |  4 +++-
 arch/x86/mm/Makefile             |  2 ++
 arch/x86/mm/mem_encrypt.c        | 30 -------------------------
 arch/x86/mm/mem_encrypt_common.c | 38 ++++++++++++++++++++++++++++++++
 5 files changed, 49 insertions(+), 33 deletions(-)
 create mode 100644 arch/x86/mm/mem_encrypt_common.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2d3f963fd6f1..bc72bfd89bcf 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1518,12 +1518,16 @@ config X86_CPA_STATISTICS
 	  helps to determine the effectiveness of preserving large and huge
 	  page mappings when mapping protections are changed.
 
+config X86_MEM_ENCRYPT_COMMON
+	select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+	select DYNAMIC_PHYSICAL_MASK
+	def_bool n
+
 config AMD_MEM_ENCRYPT
 	bool "AMD Secure Memory Encryption (SME) support"
 	depends on X86_64 && CPU_SUP_AMD
-	select DYNAMIC_PHYSICAL_MASK
 	select ARCH_USE_MEMREMAP_PROT
-	select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+	select X86_MEM_ENCRYPT_COMMON
 	---help---
 	  Say yes to enable support for the encryption of system memory.
 	  This requires an AMD processor that supports Secure Memory
diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index e1aa17a468a8..c58d52fd7bf2 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -256,10 +256,12 @@ static inline void slow_down_io(void)
 
 #endif
 
-#ifdef CONFIG_AMD_MEM_ENCRYPT
 #include <linux/jump_label.h>
 
 extern struct static_key_false sev_enable_key;
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+
 static inline bool sev_key_active(void)
 {
 	return static_branch_unlikely(&sev_enable_key);
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 98f7c6fa2eaa..af8683c053a3 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -49,6 +49,8 @@ obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS)	+= pkeys.o
 obj-$(CONFIG_RANDOMIZE_MEMORY)			+= kaslr.o
 obj-$(CONFIG_PAGE_TABLE_ISOLATION)		+= pti.o
 
+obj-$(CONFIG_X86_MEM_ENCRYPT_COMMON)	+= mem_encrypt_common.o
+
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_identity.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_boot.o
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index a03614bd3e1a..112304a706f3 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -15,10 +15,6 @@
 #include <linux/dma-direct.h>
 #include <linux/swiotlb.h>
 #include <linux/mem_encrypt.h>
-#include <linux/device.h>
-#include <linux/kernel.h>
-#include <linux/bitops.h>
-#include <linux/dma-mapping.h>
 
 #include <asm/tlbflush.h>
 #include <asm/fixmap.h>
@@ -350,32 +346,6 @@ bool sev_active(void)
 	return sme_me_mask && sev_enabled;
 }
 
-/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
-bool force_dma_unencrypted(struct device *dev)
-{
-	/*
-	 * For SEV, all DMA must be to unencrypted addresses.
-	 */
-	if (sev_active())
-		return true;
-
-	/*
-	 * For SME, all DMA must be to unencrypted addresses if the
-	 * device does not support DMA to addresses that include the
-	 * encryption mask.
-	 */
-	if (sme_active()) {
-		u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
-		u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
-						dev->bus_dma_limit);
-
-		if (dma_dev_mask <= dma_enc_mask)
-			return true;
-	}
-
-	return false;
-}
-
 /* Architecture __weak replacement functions */
 void __init mem_encrypt_free_decrypted_mem(void)
 {
diff --git a/arch/x86/mm/mem_encrypt_common.c b/arch/x86/mm/mem_encrypt_common.c
new file mode 100644
index 000000000000..964e04152417
--- /dev/null
+++ b/arch/x86/mm/mem_encrypt_common.c
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * AMD Memory Encryption Support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <thomas.lendacky@amd.com>
+ */
+
+#include <linux/mm.h>
+#include <linux/mem_encrypt.h>
+#include <linux/dma-mapping.h>
+
+/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
+bool force_dma_unencrypted(struct device *dev)
+{
+	/*
+	 * For SEV, all DMA must be to unencrypted/shared addresses.
+	 */
+	if (sev_active())
+		return true;
+
+	/*
+	 * For SME, all DMA must be to unencrypted addresses if the
+	 * device does not support DMA to addresses that include the
+	 * encryption mask.
+	 */
+	if (sme_active()) {
+		u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
+		u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
+						dev->bus_dma_limit);
+
+		if (dma_dev_mask <= dma_enc_mask)
+			return true;
+	}
+
+	return false;
+}
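
A note on how this refactoring is meant to be used: once
force_dma_unencrypted() lives in common code, a non-AMD memory
protection feature only has to select X86_MEM_ENCRYPT_COMMON and add
its own check to this one function, without touching SME/SEV code. A
minimal sketch of what such a hook could look like; kvm_mem_protected()
is a hypothetical predicate standing in for whatever a later patch in
this series provides, not something this patch introduces:

	#include <linux/mm.h>
	#include <linux/mem_encrypt.h>
	#include <linux/dma-mapping.h>

	/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
	bool force_dma_unencrypted(struct device *dev)
	{
		/*
		 * Sketch only: a KVM guest with protected memory would have
		 * to do all DMA through shared (unencrypted) pages, the same
		 * rule as SEV. kvm_mem_protected() is assumed to be provided
		 * by a later patch in the series.
		 */
		if (kvm_mem_protected())
			return true;

		/* For SEV, all DMA must be to unencrypted/shared addresses. */
		if (sev_active())
			return true;

		/*
		 * For SME, all DMA must be to unencrypted addresses if the
		 * device does not support DMA to addresses that include the
		 * encryption mask.
		 */
		if (sme_active()) {
			u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
			u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
							dev->bus_dma_limit);

			if (dma_dev_mask <= dma_enc_mask)
				return true;
		}

		return false;
	}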