From patchwork Thu Sep 15 14:29:06 2022
X-Patchwork-Submitter: Chao Peng
X-Patchwork-Id: 12977460
From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org, H. Peter Anvin, Hugh Dickins, Jeff Layton, J. Bruce Fields, Andrew Morton, Shuah Khan, Mike Rapoport, Steven Price, Maciej S. Szmigiero, Vlastimil Babka, Vishal Annapurve, Yu Zhang, Chao Peng, Kirill A. Shutemov, luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret, Michael Roth, mhocko@suse.com, Muchun Song, wei.w.wang@intel.com
Subject: [PATCH v8 1/8] mm/memfd: Introduce userspace inaccessible memfd
Date: Thu, 15 Sep 2022 22:29:06 +0800
Message-Id: <20220915142913.2213336-2-chao.p.peng@linux.intel.com>
In-Reply-To: <20220915142913.2213336-1-chao.p.peng@linux.intel.com>
References: <20220915142913.2213336-1-chao.p.peng@linux.intel.com>

From: "Kirill A. Shutemov"

KVM can use memfd-provided memory for guest memory. For normal userspace-accessible memory, KVM userspace (e.g. QEMU) mmaps the memfd into its virtual address space and then tells KVM to use that virtual address to set up the mapping in the secondary page table (e.g. EPT).

With confidential computing technologies like Intel TDX, the memfd-provided memory may be encrypted with a special key for a special software domain (e.g. a KVM guest) and is not expected to be accessed directly by userspace. More precisely, userspace access to such encrypted memory may lead to a host crash, so it should be prevented.

This patch introduces a userspace inaccessible memfd (created with MFD_INACCESSIBLE). Its memory is inaccessible from userspace through ordinary MMU access (e.g. read/write/mmap) but can be accessed via an in-kernel interface, so KVM can interact directly with core-mm without the need to map the memory into KVM userspace.
It provides the semantics required for KVM guest private (encrypted) memory support, namely that a file descriptor with this flag set is going to be used as the source of guest memory in confidential computing environments such as Intel TDX and AMD SEV.

KVM userspace is still in charge of the lifecycle of the memfd. It should pass the opened fd to KVM, and KVM uses the kernel APIs newly added in this patch to obtain the physical memory address and then populate the secondary page table entries.

The userspace inaccessible memfd can be fallocate-ed and hole-punched from userspace. When hole punching happens, KVM gets notified through the inaccessible_notifier and then gets a chance to remove any mapped entries for that range from the secondary page tables.

The userspace inaccessible memfd itself is implemented as a shim layer on top of real memory file systems like tmpfs/hugetlbfs, but this patch only implements the tmpfs backend. The allocated memory is currently marked as unmovable and unevictable; this is required for the current confidential usage, but it might be changed in the future.

Signed-off-by: Kirill A. Shutemov
Signed-off-by: Chao Peng
---
 include/linux/memfd.h | 24 ++++ include/uapi/linux/magic.h | 1 + include/uapi/linux/memfd.h | 1 + mm/Makefile | 2 +- mm/memfd.c | 25 ++++- mm/memfd_inaccessible.c | 219 +++++++++++++++++++++++++++++++++++++ 6 files changed, 270 insertions(+), 2 deletions(-) create mode 100644 mm/memfd_inaccessible.c diff --git a/include/linux/memfd.h b/include/linux/memfd.h index 4f1600413f91..334ddff08377 100644 --- a/include/linux/memfd.h +++ b/include/linux/memfd.h @@ -3,6 +3,7 @@ #define __LINUX_MEMFD_H #include +#include #ifdef CONFIG_MEMFD_CREATE extern long memfd_fcntl(struct file *file, unsigned int cmd, unsigned long arg); @@ -13,4 +14,27 @@ static inline long memfd_fcntl(struct file *f, unsigned int c, unsigned long a) } #endif +struct inaccessible_notifier; + +struct inaccessible_notifier_ops { + void (*invalidate)(struct inaccessible_notifier *notifier, + pgoff_t start, pgoff_t end); +}; + +struct inaccessible_notifier { + struct list_head list; + const struct inaccessible_notifier_ops *ops; +}; + +void inaccessible_register_notifier(struct file *file, + struct inaccessible_notifier *notifier); +void inaccessible_unregister_notifier(struct file *file, + struct inaccessible_notifier *notifier); + +int inaccessible_get_pfn(struct file *file, pgoff_t offset, pfn_t *pfn, + int *order); +void inaccessible_put_pfn(struct file *file, pfn_t pfn); + +struct file *memfd_mkinaccessible(struct file *memfd); + #endif /* __LINUX_MEMFD_H */ diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h index 6325d1d0e90f..9d066be3d7e8 100644 --- a/include/uapi/linux/magic.h +++ b/include/uapi/linux/magic.h @@ -101,5 +101,6 @@ #define DMA_BUF_MAGIC 0x444d4142 /* "DMAB" */ #define DEVMEM_MAGIC 0x454d444d /* "DMEM" */ #define SECRETMEM_MAGIC 0x5345434d /* "SECM" */ +#define INACCESSIBLE_MAGIC 0x494e4143 /* "INAC" */ #endif /* __LINUX_MAGIC_H__ */ diff --git a/include/uapi/linux/memfd.h b/include/uapi/linux/memfd.h index 7a8a26751c23..48750474b904 100644 --- a/include/uapi/linux/memfd.h +++ b/include/uapi/linux/memfd.h @@ -8,6 +8,7 @@ #define MFD_CLOEXEC 0x0001U #define MFD_ALLOW_SEALING 0x0002U #define MFD_HUGETLB 0x0004U +#define MFD_INACCESSIBLE 0x0008U /* * Huge page size encoding when MFD_HUGETLB is specified, and a huge page diff --git a/mm/Makefile b/mm/Makefile index 9a564f836403..f82e5d4b4388 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -126,7 +126,7 @@ obj-$(CONFIG_HARDENED_USERCOPY) +=
usercopy.o obj-$(CONFIG_PERCPU_STATS) += percpu-stats.o obj-$(CONFIG_ZONE_DEVICE) += memremap.o obj-$(CONFIG_HMM_MIRROR) += hmm.o -obj-$(CONFIG_MEMFD_CREATE) += memfd.o +obj-$(CONFIG_MEMFD_CREATE) += memfd.o memfd_inaccessible.o obj-$(CONFIG_MAPPING_DIRTY_HELPERS) += mapping_dirty_helpers.o obj-$(CONFIG_PTDUMP_CORE) += ptdump.o obj-$(CONFIG_PAGE_REPORTING) += page_reporting.o diff --git a/mm/memfd.c b/mm/memfd.c index 08f5f8304746..1853a90f49ff 100644 --- a/mm/memfd.c +++ b/mm/memfd.c @@ -261,7 +261,8 @@ long memfd_fcntl(struct file *file, unsigned int cmd, unsigned long arg) #define MFD_NAME_PREFIX_LEN (sizeof(MFD_NAME_PREFIX) - 1) #define MFD_NAME_MAX_LEN (NAME_MAX - MFD_NAME_PREFIX_LEN) -#define MFD_ALL_FLAGS (MFD_CLOEXEC | MFD_ALLOW_SEALING | MFD_HUGETLB) +#define MFD_ALL_FLAGS (MFD_CLOEXEC | MFD_ALLOW_SEALING | MFD_HUGETLB | \ + MFD_INACCESSIBLE) SYSCALL_DEFINE2(memfd_create, const char __user *, uname, @@ -283,6 +284,14 @@ SYSCALL_DEFINE2(memfd_create, return -EINVAL; } + /* Disallow sealing when MFD_INACCESSIBLE is set. */ + if ((flags & MFD_INACCESSIBLE) && (flags & MFD_ALLOW_SEALING)) + return -EINVAL; + + /* TODO: add hugetlb support */ + if ((flags & MFD_INACCESSIBLE) && (flags & MFD_HUGETLB)) + return -EINVAL; + /* length includes terminating zero */ len = strnlen_user(uname, MFD_NAME_MAX_LEN + 1); if (len <= 0) @@ -331,10 +340,24 @@ SYSCALL_DEFINE2(memfd_create, *file_seals &= ~F_SEAL_SEAL; } + if (flags & MFD_INACCESSIBLE) { + struct file *inaccessible_file; + + inaccessible_file = memfd_mkinaccessible(file); + if (IS_ERR(inaccessible_file)) { + error = PTR_ERR(inaccessible_file); + goto err_file; + } + + file = inaccessible_file; + } + fd_install(fd, file); kfree(name); return fd; +err_file: + fput(file); err_fd: put_unused_fd(fd); err_name: diff --git a/mm/memfd_inaccessible.c b/mm/memfd_inaccessible.c new file mode 100644 index 000000000000..2d33cbdd9282 --- /dev/null +++ b/mm/memfd_inaccessible.c @@ -0,0 +1,219 @@ +// SPDX-License-Identifier: GPL-2.0 +#include "linux/sbitmap.h" +#include +#include +#include +#include +#include +#include + +struct inaccessible_data { + struct mutex lock; + struct file *memfd; + struct list_head notifiers; +}; + +static void inaccessible_notifier_invalidate(struct inaccessible_data *data, + pgoff_t start, pgoff_t end) +{ + struct inaccessible_notifier *notifier; + + mutex_lock(&data->lock); + list_for_each_entry(notifier, &data->notifiers, list) { + notifier->ops->invalidate(notifier, start, end); + } + mutex_unlock(&data->lock); +} + +static int inaccessible_release(struct inode *inode, struct file *file) +{ + struct inaccessible_data *data = inode->i_mapping->private_data; + + fput(data->memfd); + kfree(data); + return 0; +} + +static long inaccessible_fallocate(struct file *file, int mode, + loff_t offset, loff_t len) +{ + struct inaccessible_data *data = file->f_mapping->private_data; + struct file *memfd = data->memfd; + int ret; + + if (mode & FALLOC_FL_PUNCH_HOLE) { + if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len)) + return -EINVAL; + } + + ret = memfd->f_op->fallocate(memfd, mode, offset, len); + inaccessible_notifier_invalidate(data, offset, offset + len); + return ret; +} + +static const struct file_operations inaccessible_fops = { + .release = inaccessible_release, + .fallocate = inaccessible_fallocate, +}; + +static int inaccessible_getattr(struct user_namespace *mnt_userns, + const struct path *path, struct kstat *stat, + u32 request_mask, unsigned int query_flags) +{ + struct inode *inode = d_inode(path->dentry); + struct 
inaccessible_data *data = inode->i_mapping->private_data; + struct file *memfd = data->memfd; + + return memfd->f_inode->i_op->getattr(mnt_userns, path, stat, + request_mask, query_flags); +} + +static int inaccessible_setattr(struct user_namespace *mnt_userns, + struct dentry *dentry, struct iattr *attr) +{ + struct inode *inode = d_inode(dentry); + struct inaccessible_data *data = inode->i_mapping->private_data; + struct file *memfd = data->memfd; + int ret; + + if (attr->ia_valid & ATTR_SIZE) { + if (memfd->f_inode->i_size) + return -EPERM; + + if (!PAGE_ALIGNED(attr->ia_size)) + return -EINVAL; + } + + ret = memfd->f_inode->i_op->setattr(mnt_userns, + file_dentry(memfd), attr); + return ret; +} + +static const struct inode_operations inaccessible_iops = { + .getattr = inaccessible_getattr, + .setattr = inaccessible_setattr, +}; + +static int inaccessible_init_fs_context(struct fs_context *fc) +{ + if (!init_pseudo(fc, INACCESSIBLE_MAGIC)) + return -ENOMEM; + + fc->s_iflags |= SB_I_NOEXEC; + return 0; +} + +static struct file_system_type inaccessible_fs = { + .owner = THIS_MODULE, + .name = "[inaccessible]", + .init_fs_context = inaccessible_init_fs_context, + .kill_sb = kill_anon_super, +}; + +static struct vfsmount *inaccessible_mnt; + +static __init int inaccessible_init(void) +{ + inaccessible_mnt = kern_mount(&inaccessible_fs); + if (IS_ERR(inaccessible_mnt)) + return PTR_ERR(inaccessible_mnt); + return 0; +} +fs_initcall(inaccessible_init); + +struct file *memfd_mkinaccessible(struct file *memfd) +{ + struct inaccessible_data *data; + struct address_space *mapping; + struct inode *inode; + struct file *file; + + data = kzalloc(sizeof(*data), GFP_KERNEL); + if (!data) + return ERR_PTR(-ENOMEM); + + data->memfd = memfd; + mutex_init(&data->lock); + INIT_LIST_HEAD(&data->notifiers); + + inode = alloc_anon_inode(inaccessible_mnt->mnt_sb); + if (IS_ERR(inode)) { + kfree(data); + return ERR_CAST(inode); + } + + inode->i_mode |= S_IFREG; + inode->i_op = &inaccessible_iops; + inode->i_mapping->private_data = data; + + file = alloc_file_pseudo(inode, inaccessible_mnt, + "[memfd:inaccessible]", O_RDWR, + &inaccessible_fops); + if (IS_ERR(file)) { + iput(inode); + kfree(data); + } + + file->f_flags |= O_LARGEFILE; + + mapping = memfd->f_mapping; + mapping_set_unevictable(mapping); + mapping_set_gfp_mask(mapping, + mapping_gfp_mask(mapping) & ~__GFP_MOVABLE); + + return file; +} + +void inaccessible_register_notifier(struct file *file, + struct inaccessible_notifier *notifier) +{ + struct inaccessible_data *data = file->f_mapping->private_data; + + mutex_lock(&data->lock); + list_add(¬ifier->list, &data->notifiers); + mutex_unlock(&data->lock); +} +EXPORT_SYMBOL_GPL(inaccessible_register_notifier); + +void inaccessible_unregister_notifier(struct file *file, + struct inaccessible_notifier *notifier) +{ + struct inaccessible_data *data = file->f_mapping->private_data; + + mutex_lock(&data->lock); + list_del(¬ifier->list); + mutex_unlock(&data->lock); +} +EXPORT_SYMBOL_GPL(inaccessible_unregister_notifier); + +int inaccessible_get_pfn(struct file *file, pgoff_t offset, pfn_t *pfn, + int *order) +{ + struct inaccessible_data *data = file->f_mapping->private_data; + struct file *memfd = data->memfd; + struct page *page; + int ret; + + ret = shmem_getpage(file_inode(memfd), offset, &page, SGP_WRITE); + if (ret) + return ret; + + *pfn = page_to_pfn_t(page); + *order = thp_order(compound_head(page)); + SetPageUptodate(page); + unlock_page(page); + + return 0; +} 
+EXPORT_SYMBOL_GPL(inaccessible_get_pfn); + +void inaccessible_put_pfn(struct file *file, pfn_t pfn) +{ + struct page *page = pfn_t_to_page(pfn); + + if (WARN_ON_ONCE(!page)) + return; + + put_page(page); +} +EXPORT_SYMBOL_GPL(inaccessible_put_pfn);

From patchwork Thu Sep 15 14:29:07 2022
X-Patchwork-Submitter: Chao Peng
X-Patchwork-Id: 12977461
From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org, H. Peter Anvin, Hugh Dickins, Jeff Layton, J. Bruce Fields, Andrew Morton, Shuah Khan, Mike Rapoport, Steven Price, Maciej S. Szmigiero, Vlastimil Babka, Vishal Annapurve, Yu Zhang, Chao Peng, Kirill A. Shutemov, luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret, Michael Roth, mhocko@suse.com, Muchun Song, wei.w.wang@intel.com
Subject: [PATCH v8 2/8] KVM: Extend the memslot to support fd-based private memory
Date: Thu, 15 Sep 2022 22:29:07 +0800
Message-Id: <20220915142913.2213336-3-chao.p.peng@linux.intel.com>
In-Reply-To: <20220915142913.2213336-1-chao.p.peng@linux.intel.com>
References: <20220915142913.2213336-1-chao.p.peng@linux.intel.com>

In memory encryption usage, guest memory may be encrypted with a special key and can be accessed only by the VM itself. We call such memory private memory. Allowing userspace to access guest private memory has no value and can sometimes cause problems, so this patch extends the KVM memslot definition so that guest private memory can be provided through an inaccessible_notifier-enlightened file descriptor (fd), without being mmapped into userspace.

This new extension, indicated by the new flag KVM_MEM_PRIVATE, adds two additional KVM memslot fields, private_fd and private_offset, which let userspace specify that guest private memory is provided from private_fd, with guest_phys_addr mapped at private_offset of private_fd and spanning a range of memory_size. The extended memslot can still carry userspace_addr (hva). When in use, a single memslot can therefore maintain both private memory through the private fd (private_fd/private_offset) and shared memory through the hva (userspace_addr).
Whether the private or shared part is visible to guest is maintained by other KVM code. Since there is no userspace mapping for private fd so we cannot get_user_pages() to get the pfn in KVM, instead we add a new inaccessible_notifier in the internal memslot structure and rely on it to get pfn by interacting with the memory file systems. Together with the change, a new config HAVE_KVM_PRIVATE_MEM is added and right now it is selected on X86_64 for Intel TDX usage. To make code maintenance easy, internally we use a binary compatible alias struct kvm_user_mem_region to handle both the normal and the '_ext' variants. Co-developed-by: Yu Zhang Signed-off-by: Yu Zhang Signed-off-by: Chao Peng --- Documentation/virt/kvm/api.rst | 38 +++++++++++++++++++++----- arch/x86/kvm/Kconfig | 1 + arch/x86/kvm/x86.c | 2 +- include/linux/kvm_host.h | 13 +++++++-- include/uapi/linux/kvm.h | 28 +++++++++++++++++++ virt/kvm/Kconfig | 3 +++ virt/kvm/kvm_main.c | 49 ++++++++++++++++++++++++++++------ 7 files changed, 116 insertions(+), 18 deletions(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index abd7c32126ce..c1fac1e9f820 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -1319,7 +1319,7 @@ yet and must be cleared on entry. :Capability: KVM_CAP_USER_MEMORY :Architectures: all :Type: vm ioctl -:Parameters: struct kvm_userspace_memory_region (in) +:Parameters: struct kvm_userspace_memory_region(_ext) (in) :Returns: 0 on success, -1 on error :: @@ -1332,9 +1332,18 @@ yet and must be cleared on entry. __u64 userspace_addr; /* start of the userspace allocated memory */ }; + struct kvm_userspace_memory_region_ext { + struct kvm_userspace_memory_region region; + __u64 private_offset; + __u32 private_fd; + __u32 pad1; + __u64 pad2[14]; + }; + /* for kvm_memory_region::flags */ #define KVM_MEM_LOG_DIRTY_PAGES (1UL << 0) #define KVM_MEM_READONLY (1UL << 1) + #define KVM_MEM_PRIVATE (1UL << 2) This ioctl allows the user to create, modify or delete a guest physical memory slot. Bits 0-15 of "slot" specify the slot id and this value @@ -1365,12 +1374,27 @@ It is recommended that the lower 21 bits of guest_phys_addr and userspace_addr be identical. This allows large pages in the guest to be backed by large pages in the host. -The flags field supports two flags: KVM_MEM_LOG_DIRTY_PAGES and -KVM_MEM_READONLY. The former can be set to instruct KVM to keep track of -writes to memory within the slot. See KVM_GET_DIRTY_LOG ioctl to know how to -use it. The latter can be set, if KVM_CAP_READONLY_MEM capability allows it, -to make a new slot read-only. In this case, writes to this memory will be -posted to userspace as KVM_EXIT_MMIO exits. +kvm_userspace_memory_region_ext includes all the kvm_userspace_memory_region +fields. It also includes additional fields for some specific features. See +below description of flags field for more information. It's recommended to use +kvm_userspace_memory_region_ext in new userspace code. + +The flags field supports below flags: + +- KVM_MEM_LOG_DIRTY_PAGES can be set to instruct KVM to keep track of writes to + memory within the slot. See KVM_GET_DIRTY_LOG ioctl to know how to use it. + +- KVM_MEM_READONLY can be set, if KVM_CAP_READONLY_MEM capability allows it, to + make a new slot read-only. In this case, writes to this memory will be posted + to userspace as KVM_EXIT_MMIO exits. 
+ +- KVM_MEM_PRIVATE can be set to indicate a new slot has private memory backed by + a file descirptor(fd) and the content of the private memory is invisible to + userspace. In this case, userspace should use private_fd/private_offset in + kvm_userspace_memory_region_ext to instruct KVM to provide private memory to + guest. Userspace should guarantee not to map the same pfn indicated by + private_fd/private_offset to different gfns with multiple memslots. Failed to + do this may result undefined behavior. When the KVM_CAP_SYNC_MMU capability is available, changes in the backing of the memory region are automatically reflected into the guest. For example, an diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig index e3cbd7706136..31db64ec0b33 100644 --- a/arch/x86/kvm/Kconfig +++ b/arch/x86/kvm/Kconfig @@ -48,6 +48,7 @@ config KVM select SRCU select INTERVAL_TREE select HAVE_KVM_PM_NOTIFIER if PM + select HAVE_KVM_PRIVATE_MEM if X86_64 help Support hosting fully virtualized guest machines using hardware virtualization extensions. You will need a fairly recent diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index d7374d768296..081f62ccc9a1 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -12183,7 +12183,7 @@ void __user * __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, } for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { - struct kvm_userspace_memory_region m; + struct kvm_user_mem_region m; m.slot = id | (i << 16); m.flags = 0; diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index f4519d3689e1..eac1787b899b 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -44,6 +44,7 @@ #include #include +#include #ifndef KVM_MAX_VCPU_IDS #define KVM_MAX_VCPU_IDS KVM_MAX_VCPUS @@ -576,8 +577,16 @@ struct kvm_memory_slot { u32 flags; short id; u16 as_id; + struct file *private_file; + loff_t private_offset; + struct inaccessible_notifier notifier; }; +static inline bool kvm_slot_can_be_private(const struct kvm_memory_slot *slot) +{ + return slot && (slot->flags & KVM_MEM_PRIVATE); +} + static inline bool kvm_slot_dirty_track_enabled(const struct kvm_memory_slot *slot) { return slot->flags & KVM_MEM_LOG_DIRTY_PAGES; @@ -1104,9 +1113,9 @@ enum kvm_mr_change { }; int kvm_set_memory_region(struct kvm *kvm, - const struct kvm_userspace_memory_region *mem); + const struct kvm_user_mem_region *mem); int __kvm_set_memory_region(struct kvm *kvm, - const struct kvm_userspace_memory_region *mem); + const struct kvm_user_mem_region *mem); void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot); void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen); int kvm_arch_prepare_memory_region(struct kvm *kvm, diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index eed0315a77a6..3ef462fb3b2a 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -103,6 +103,33 @@ struct kvm_userspace_memory_region { __u64 userspace_addr; /* start of the userspace allocated memory */ }; +struct kvm_userspace_memory_region_ext { + struct kvm_userspace_memory_region region; + __u64 private_offset; + __u32 private_fd; + __u32 pad1; + __u64 pad2[14]; +}; + +#ifdef __KERNEL__ +/* + * kvm_user_mem_region is a kernel-only alias of kvm_userspace_memory_region_ext + * that "unpacks" kvm_userspace_memory_region so that KVM can directly access + * all fields from the top-level "extended" region. 
+ */ +struct kvm_user_mem_region { + __u32 slot; + __u32 flags; + __u64 guest_phys_addr; + __u64 memory_size; + __u64 userspace_addr; + __u64 private_offset; + __u32 private_fd; + __u32 pad1; + __u64 pad2[14]; +}; +#endif + /* * The bit 0 ~ bit 15 of kvm_memory_region::flags are visible for userspace, * other bits are reserved for kvm internal use which are defined in @@ -110,6 +137,7 @@ struct kvm_userspace_memory_region { */ #define KVM_MEM_LOG_DIRTY_PAGES (1UL << 0) #define KVM_MEM_READONLY (1UL << 1) +#define KVM_MEM_PRIVATE (1UL << 2) /* for KVM_IRQ_LINE */ struct kvm_irq_level { diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig index a8c5c9f06b3c..ccaff13cc5b8 100644 --- a/virt/kvm/Kconfig +++ b/virt/kvm/Kconfig @@ -72,3 +72,6 @@ config KVM_XFER_TO_GUEST_WORK config HAVE_KVM_PM_NOTIFIER bool + +config HAVE_KVM_PRIVATE_MEM + bool diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 584a5bab3af3..12dc0dc57b06 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1526,7 +1526,7 @@ static void kvm_replace_memslot(struct kvm *kvm, } } -static int check_memory_region_flags(const struct kvm_userspace_memory_region *mem) +static int check_memory_region_flags(const struct kvm_user_mem_region *mem) { u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES; @@ -1920,7 +1920,7 @@ static bool kvm_check_memslot_overlap(struct kvm_memslots *slots, int id, * Must be called holding kvm->slots_lock for write. */ int __kvm_set_memory_region(struct kvm *kvm, - const struct kvm_userspace_memory_region *mem) + const struct kvm_user_mem_region *mem) { struct kvm_memory_slot *old, *new; struct kvm_memslots *slots; @@ -2024,7 +2024,7 @@ int __kvm_set_memory_region(struct kvm *kvm, EXPORT_SYMBOL_GPL(__kvm_set_memory_region); int kvm_set_memory_region(struct kvm *kvm, - const struct kvm_userspace_memory_region *mem) + const struct kvm_user_mem_region *mem) { int r; @@ -2036,7 +2036,7 @@ int kvm_set_memory_region(struct kvm *kvm, EXPORT_SYMBOL_GPL(kvm_set_memory_region); static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm, - struct kvm_userspace_memory_region *mem) + struct kvm_user_mem_region *mem) { if ((u16)mem->slot >= KVM_USER_MEM_SLOTS) return -EINVAL; @@ -4622,6 +4622,33 @@ static int kvm_vm_ioctl_get_stats_fd(struct kvm *kvm) return fd; } +#define SANITY_CHECK_MEM_REGION_FIELD(field) \ +do { \ + BUILD_BUG_ON(offsetof(struct kvm_user_mem_region, field) != \ + offsetof(struct kvm_userspace_memory_region, field)); \ + BUILD_BUG_ON(sizeof_field(struct kvm_user_mem_region, field) != \ + sizeof_field(struct kvm_userspace_memory_region, field)); \ +} while (0) + +#define SANITY_CHECK_MEM_REGION_EXT_FIELD(field) \ +do { \ + BUILD_BUG_ON(offsetof(struct kvm_user_mem_region, field) != \ + offsetof(struct kvm_userspace_memory_region_ext, field)); \ + BUILD_BUG_ON(sizeof_field(struct kvm_user_mem_region, field) != \ + sizeof_field(struct kvm_userspace_memory_region_ext, field)); \ +} while (0) + +static void kvm_sanity_check_user_mem_region_alias(void) +{ + SANITY_CHECK_MEM_REGION_FIELD(slot); + SANITY_CHECK_MEM_REGION_FIELD(flags); + SANITY_CHECK_MEM_REGION_FIELD(guest_phys_addr); + SANITY_CHECK_MEM_REGION_FIELD(memory_size); + SANITY_CHECK_MEM_REGION_FIELD(userspace_addr); + SANITY_CHECK_MEM_REGION_EXT_FIELD(private_offset); + SANITY_CHECK_MEM_REGION_EXT_FIELD(private_fd); +} + static long kvm_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) { @@ -4645,14 +4672,20 @@ static long kvm_vm_ioctl(struct file *filp, break; } case KVM_SET_USER_MEMORY_REGION: { - struct 
kvm_userspace_memory_region kvm_userspace_mem; + struct kvm_user_mem_region mem; + unsigned long size = sizeof(struct kvm_userspace_memory_region); + + kvm_sanity_check_user_mem_region_alias(); r = -EFAULT; - if (copy_from_user(&kvm_userspace_mem, argp, - sizeof(kvm_userspace_mem))) + if (copy_from_user(&mem, argp, size)) + goto out; + + r = -EINVAL; + if (mem.flags & KVM_MEM_PRIVATE) goto out; - r = kvm_vm_ioctl_set_memory_region(kvm, &kvm_userspace_mem); + r = kvm_vm_ioctl_set_memory_region(kvm, &mem); break; } case KVM_GET_DIRTY_LOG: {

From patchwork Thu Sep 15 14:29:08 2022
X-Patchwork-Submitter: Chao Peng
X-Patchwork-Id: 12977462
From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org, H. Peter Anvin, Hugh Dickins, Jeff Layton, J. Bruce Fields, Andrew Morton, Shuah Khan, Mike Rapoport, Steven Price, Maciej S. Szmigiero, Vlastimil Babka, Vishal Annapurve, Yu Zhang, Chao Peng, Kirill A. Shutemov, luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret, Michael Roth, mhocko@suse.com, Muchun Song, wei.w.wang@intel.com
Subject: [PATCH v8 3/8] KVM: Add KVM_EXIT_MEMORY_FAULT exit
Date: Thu, 15 Sep 2022 22:29:08 +0800
Message-Id: <20220915142913.2213336-4-chao.p.peng@linux.intel.com>
In-Reply-To: <20220915142913.2213336-1-chao.p.peng@linux.intel.com>
References: <20220915142913.2213336-1-chao.p.peng@linux.intel.com>

This new KVM exit allows userspace to handle memory-related errors. It indicates that an error happened in KVM at the guest memory range [gpa, gpa+size). The flags field carries additional information to help userspace handle the error. Currently bit 0 is defined as 'private memory': '1' indicates the error happened due to private memory access, '0' indicates it happened due to shared memory access.

When private memory is enabled, this new exit is used for KVM to exit to userspace for shared <-> private memory conversion in memory encryption usage. In such usage there are typically two kinds of memory conversion:
- explicit conversion: happens when the guest explicitly calls into KVM to map a range (as private or shared); KVM then exits to userspace to do the map/unmap operations.
- implicit conversion: happens in KVM page fault handler where KVM exits to userspace for an implicit conversion when the page is in a different state than requested (private or shared). Suggested-by: Sean Christopherson Co-developed-by: Yu Zhang Signed-off-by: Yu Zhang Signed-off-by: Chao Peng --- Documentation/virt/kvm/api.rst | 23 +++++++++++++++++++++++ include/uapi/linux/kvm.h | 9 +++++++++ 2 files changed, 32 insertions(+) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index c1fac1e9f820..1a6c003b2a0b 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -6638,6 +6638,29 @@ array field represents return values. The userspace should update the return values of SBI call before resuming the VCPU. For more details on RISC-V SBI spec refer, https://github.com/riscv/riscv-sbi-doc. +:: + + /* KVM_EXIT_MEMORY_FAULT */ + struct { + #define KVM_MEMORY_EXIT_FLAG_PRIVATE (1 << 0) + __u32 flags; + __u32 padding; + __u64 gpa; + __u64 size; + } memory; + +If exit reason is KVM_EXIT_MEMORY_FAULT then it indicates that the VCPU has +encountered a memory error which is not handled by KVM kernel module and +userspace may choose to handle it. The 'flags' field indicates the memory +properties of the exit. + + - KVM_MEMORY_EXIT_FLAG_PRIVATE - indicates the memory error is caused by + private memory access when the bit is set otherwise the memory error is + caused by shared memory access when the bit is clear. + +'gpa' and 'size' indicate the memory range the error occurs at. The userspace +may handle the error and return to KVM to retry the previous memory access. + :: /* KVM_EXIT_NOTIFY */ diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 3ef462fb3b2a..0c8db7b7c138 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -300,6 +300,7 @@ struct kvm_xen_exit { #define KVM_EXIT_RISCV_SBI 35 #define KVM_EXIT_RISCV_CSR 36 #define KVM_EXIT_NOTIFY 37 +#define KVM_EXIT_MEMORY_FAULT 38 /* For KVM_EXIT_INTERNAL_ERROR */ /* Emulate instruction failed. */ @@ -538,6 +539,14 @@ struct kvm_run { #define KVM_NOTIFY_CONTEXT_INVALID (1 << 0) __u32 flags; } notify; + /* KVM_EXIT_MEMORY_FAULT */ + struct { +#define KVM_MEMORY_EXIT_FLAG_PRIVATE (1 << 0) + __u32 flags; + __u32 padding; + __u64 gpa; + __u64 size; + } memory; /* Fix the size of the union. 
*/ char padding[256]; };

From patchwork Thu Sep 15 14:29:09 2022
X-Patchwork-Submitter: Chao Peng
X-Patchwork-Id: 12977463
From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org, H. Peter Anvin, Hugh Dickins, Jeff Layton, J. Bruce Fields, Andrew Morton, Shuah Khan, Mike Rapoport, Steven Price, Maciej S. Szmigiero, Vlastimil Babka, Vishal Annapurve, Yu Zhang, Chao Peng, Kirill A. Shutemov, luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret, Michael Roth, mhocko@suse.com, Muchun Song, wei.w.wang@intel.com
Subject: [PATCH v8 4/8] KVM: Use gfn instead of hva for mmu_notifier_retry
Date: Thu, 15 Sep 2022 22:29:09 +0800
Message-Id: <20220915142913.2213336-5-chao.p.peng@linux.intel.com>
In-Reply-To: <20220915142913.2213336-1-chao.p.peng@linux.intel.com>
References: <20220915142913.2213336-1-chao.p.peng@linux.intel.com>

Currently, in the mmu_notifier invalidate path, the hva range is recorded and then checked against in mmu_notifier_retry_hva() in the page fault path. However, for the soon-to-be-introduced private memory, a page fault may not have an hva associated with it, so checking the gfn (gpa) makes more sense. For the existing non-private memory case, gfn is expected to continue to work.

The only downside is that when multiple gfns alias a single hva, the current algorithm of checking multiple ranges could result in a much larger range being rejected. Such aliasing should be uncommon, so the impact is expected to be small.

The patch also fixes a bug in kvm_zap_gfn_range(), which already passes gfns when calling kvm_mmu_invalidate_begin/end() even though those functions take hvas in the current code.
Signed-off-by: Chao Peng --- arch/x86/kvm/mmu/mmu.c | 2 +- include/linux/kvm_host.h | 18 +++++++--------- virt/kvm/kvm_main.c | 45 ++++++++++++++++++++++++++-------------- 3 files changed, 39 insertions(+), 26 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index e418ef3ecfcb..08abad4f3e6f 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4203,7 +4203,7 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu, return true; return fault->slot && - mmu_invalidate_retry_hva(vcpu->kvm, mmu_seq, fault->hva); + mmu_invalidate_retry_gfn(vcpu->kvm, mmu_seq, fault->gfn); } static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index eac1787b899b..2125b50f6345 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -776,8 +776,8 @@ struct kvm { struct mmu_notifier mmu_notifier; unsigned long mmu_invalidate_seq; long mmu_invalidate_in_progress; - unsigned long mmu_invalidate_range_start; - unsigned long mmu_invalidate_range_end; + gfn_t mmu_invalidate_range_start; + gfn_t mmu_invalidate_range_end; #endif struct list_head devices; u64 manual_dirty_log_protect; @@ -1366,10 +1366,8 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc); void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); #endif -void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start, - unsigned long end); -void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start, - unsigned long end); +void kvm_mmu_invalidate_begin(struct kvm *kvm, gfn_t start, gfn_t end); +void kvm_mmu_invalidate_end(struct kvm *kvm, gfn_t start, gfn_t end); long kvm_arch_dev_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg); @@ -1938,9 +1936,9 @@ static inline int mmu_invalidate_retry(struct kvm *kvm, unsigned long mmu_seq) return 0; } -static inline int mmu_invalidate_retry_hva(struct kvm *kvm, +static inline int mmu_invalidate_retry_gfn(struct kvm *kvm, unsigned long mmu_seq, - unsigned long hva) + gfn_t gfn) { lockdep_assert_held(&kvm->mmu_lock); /* @@ -1950,8 +1948,8 @@ static inline int mmu_invalidate_retry_hva(struct kvm *kvm, * positives, due to shortcuts when handing concurrent invalidations. 
*/ if (unlikely(kvm->mmu_invalidate_in_progress) && - hva >= kvm->mmu_invalidate_range_start && - hva < kvm->mmu_invalidate_range_end) + gfn >= kvm->mmu_invalidate_range_start && + gfn < kvm->mmu_invalidate_range_end) return 1; if (kvm->mmu_invalidate_seq != mmu_seq) return 1; diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 12dc0dc57b06..fa9dd2d2c001 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -540,8 +540,7 @@ static void kvm_mmu_notifier_invalidate_range(struct mmu_notifier *mn, typedef bool (*hva_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range); -typedef void (*on_lock_fn_t)(struct kvm *kvm, unsigned long start, - unsigned long end); +typedef void (*on_lock_fn_t)(struct kvm *kvm, gfn_t start, gfn_t end); typedef void (*on_unlock_fn_t)(struct kvm *kvm); @@ -628,7 +627,8 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm, locked = true; KVM_MMU_LOCK(kvm); if (!IS_KVM_NULL_FN(range->on_lock)) - range->on_lock(kvm, range->start, range->end); + range->on_lock(kvm, gfn_range.start, + gfn_range.end); if (IS_KVM_NULL_FN(range->handler)) break; } @@ -715,15 +715,9 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn, kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn); } -void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start, - unsigned long end) +static inline void update_invalidate_range(struct kvm *kvm, gfn_t start, + gfn_t end) { - /* - * The count increase must become visible at unlock time as no - * spte can be established without taking the mmu_lock and - * count is also read inside the mmu_lock critical section. - */ - kvm->mmu_invalidate_in_progress++; if (likely(kvm->mmu_invalidate_in_progress == 1)) { kvm->mmu_invalidate_range_start = start; kvm->mmu_invalidate_range_end = end; @@ -744,6 +738,28 @@ void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start, } } +static void mark_invalidate_in_progress(struct kvm *kvm, gfn_t start, gfn_t end) +{ + /* + * The count increase must become visible at unlock time as no + * spte can be established without taking the mmu_lock and + * count is also read inside the mmu_lock critical section. 
+ */ + kvm->mmu_invalidate_in_progress++; +} + +static bool kvm_mmu_handle_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) +{ + update_invalidate_range(kvm, range->start, range->end); + return kvm_unmap_gfn_range(kvm, range); +} + +void kvm_mmu_invalidate_begin(struct kvm *kvm, gfn_t start, gfn_t end) +{ + mark_invalidate_in_progress(kvm, start, end); + update_invalidate_range(kvm, start, end); +} + static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, const struct mmu_notifier_range *range) { @@ -752,8 +768,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, .start = range->start, .end = range->end, .pte = __pte(0), - .handler = kvm_unmap_gfn_range, - .on_lock = kvm_mmu_invalidate_begin, + .handler = kvm_mmu_handle_gfn_range, + .on_lock = mark_invalidate_in_progress, .on_unlock = kvm_arch_guest_memory_reclaimed, .flush_on_ret = true, .may_block = mmu_notifier_range_blockable(range), @@ -791,8 +807,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, return 0; } -void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start, - unsigned long end) +void kvm_mmu_invalidate_end(struct kvm *kvm, gfn_t start, gfn_t end) { /* * This sequence increase will notify the kvm page fault that From patchwork Thu Sep 15 14:29:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 12977464 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id B36FDECAAA1 for ; Thu, 15 Sep 2022 14:34:55 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 4E3BA8D0005; Thu, 15 Sep 2022 10:34:55 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 491AA8D0001; Thu, 15 Sep 2022 10:34:55 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2E5288D0005; Thu, 15 Sep 2022 10:34:55 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 1825D8D0001 for ; Thu, 15 Sep 2022 10:34:55 -0400 (EDT) Received: from smtpin08.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id E72D51C6258 for ; Thu, 15 Sep 2022 14:34:54 +0000 (UTC) X-FDA: 79914566508.08.C1EE608 Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by imf23.hostedemail.com (Postfix) with ESMTP id 6C9DA1400CA for ; Thu, 15 Sep 2022 14:34:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1663252493; x=1694788493; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=0iTL7qFpKCg0CL7PXnYLUJ7HzbQvUn8wMbNEMyKZCps=; b=b29bq2og8/+2bGelFvoM0y93sPmBCGwvQCUWutwZYRTAA2wApEMMI2L4 8J68goliQav0Spu1B1VKdfgD7iCKxj57K6/wE3oEDkPeB7GKswIWu4y8t zKDdg9DjvtelCh80rXwKzP7zO7LRxBeHFYZXdJO19CTucsedrokqpOQfD qCcSEP6O7mioF3ZmiXuJEOq2KAjzw3W/HJNPVX5R651ZWXHlaadqIFIUJ wW5anrtVb5v6u6x7umKJXaN5ATiDkqITuwwdpyAtbl742bOVHfaC7PpWZ N5LTo40TZ44otDKUA0K/aCBH2k6dv0l+CvsqxE6qhGIQQTSV/CdV3ShSG g==; X-IronPort-AV: E=McAfee;i="6500,9779,10470"; a="362690407" X-IronPort-AV: E=Sophos;i="5.93,318,1654585200"; d="scan'208";a="362690407" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by 
From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini , Jonathan Corbet , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H . Peter Anvin" , Hugh Dickins , Jeff Layton , "J . Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , Chao Peng , "Kirill A . Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , Michael Roth , mhocko@suse.com, Muchun Song , wei.w.wang@intel.com
Subject: [PATCH v8 5/8] KVM: Register/unregister the guest private memory regions
Date: Thu, 15 Sep 2022 22:29:10 +0800
Message-Id: <20220915142913.2213336-6-chao.p.peng@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220915142913.2213336-1-chao.p.peng@linux.intel.com>
References: <20220915142913.2213336-1-chao.p.peng@linux.intel.com>
MIME-Version: 1.0

If CONFIG_HAVE_KVM_PRIVATE_MEM=y, userspace can register/unregister the guest private memory regions through
KVM_MEMORY_ENCRYPT_{UN,}REG_REGION ioctls. The patch reuses the existing SEV ioctl numbers, but differs in that the address in the region is a gpa for the KVM_PRIVATE_MEM case, while for the SEV case it is an hva. Which usage a call falls under is determined by the newly added kvm_arch_has_private_mem(); an architecture which supports KVM_PRIVATE_MEM should override this function. The current implementation defaults all memory to private. The shared memory regions are stored in an xarray for memory efficiency, and zapping the existing memory mappings is also a side effect of these two ioctls.

Signed-off-by: Chao Peng
---
 Documentation/virt/kvm/api.rst  | 17 ++++--
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/mmu.h              |  2 -
 include/linux/kvm_host.h        | 13 ++++++
 virt/kvm/kvm_main.c             | 73 +++++++++++++++++++++++++++++++++
 5 files changed, 100 insertions(+), 6 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 1a6c003b2a0b..c0f800d04ffc 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -4715,10 +4715,19 @@ Documentation/virt/kvm/x86/amd-memory-encryption.rst.
 This ioctl can be used to register a guest memory region which may
 contain encrypted data (e.g. guest RAM, SMRAM etc).
 
-It is used in the SEV-enabled guest. When encryption is enabled, a guest
-memory region may contain encrypted data. The SEV memory encryption
-engine uses a tweak such that two identical plaintext pages, each at
-different locations will have differing ciphertexts. So swapping or
+Currently this ioctl supports registering memory regions for two usages:
+private memory and SEV-encrypted memory.
+
+When private memory is enabled, this ioctl is used to register a guest private
+memory region, and the addr/size of kvm_enc_region represent a guest physical
+address (GPA). In this usage, this ioctl zaps the existing guest memory
+mappings in KVM that fall within the region.
+
+When SEV-encrypted memory is enabled, this ioctl is used to register a guest
+memory region which may contain encrypted data for a SEV-enabled guest. The
+addr/size of kvm_enc_region represent a userspace address (HVA). The SEV
+memory encryption engine uses a tweak such that two identical plaintext pages,
+each at different locations will have differing ciphertexts. So swapping or
 moving ciphertext of those pages will not result in plaintext being
 swapped. So relocating (or migrating) physical backing pages for the SEV
 guest will require some additional steps.
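As a quick illustration of the GPA-based usage described above (illustrative only, not part of this patch), a VMM could flip a guest physical range between private and shared roughly as follows. struct kvm_enc_region and the two ioctl numbers are the existing uapi being reused; the helper name and the vm_fd variable are assumptions.

/*
 * Illustrative VMM-side helper, not part of this series.  In the
 * KVM_PRIVATE_MEM usage the addr/size below are guest physical addresses
 * and must be page aligned; KVM_MEMORY_ENCRYPT_REG_REGION marks the range
 * private, KVM_MEMORY_ENCRYPT_UNREG_REGION marks it shared.  "vm_fd" is
 * assumed to be a VM file descriptor returned by KVM_CREATE_VM.
 */
#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int set_gpa_range_private(int vm_fd, __u64 gpa, __u64 size, bool make_private)
{
	struct kvm_enc_region region = {
		.addr = gpa,	/* a GPA, not an HVA, in this usage */
		.size = size,
	};

	return ioctl(vm_fd, make_private ? KVM_MEMORY_ENCRYPT_REG_REGION :
					   KVM_MEMORY_ENCRYPT_UNREG_REGION, &region);
}

Either call also zaps any existing mappings for the range in KVM, as described in the commit message above.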
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2c96c43c313a..cfad6ba1a70a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -37,6 +37,7 @@
 #include 
 
 #define __KVM_HAVE_ARCH_VCPU_DEBUGFS
+#define __KVM_HAVE_ZAP_GFN_RANGE
 
 #define KVM_MAX_VCPUS 1024
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 6bdaacb6faa0..c94b620bf94b 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -211,8 +211,6 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 	return -(u32)fault & errcode;
 }
 
-void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end);
-
 int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu);
 
 int kvm_mmu_post_init_vm(struct kvm *kvm);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 2125b50f6345..d65690cae80b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -260,6 +260,15 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
 bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
 #endif
 
+#ifdef __KVM_HAVE_ZAP_GFN_RANGE
+void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end);
+#else
+static inline void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start,
+				     gfn_t gfn_end)
+{
+}
+#endif
+
 enum {
 	OUTSIDE_GUEST_MODE,
 	IN_GUEST_MODE,
@@ -795,6 +804,9 @@ struct kvm {
 	struct notifier_block pm_notifier;
 #endif
 	char stats_id[KVM_STATS_NAME_SIZE];
+#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM
+	struct xarray mem_attr_array;
+#endif
 };
 
 #define kvm_err(fmt, ...) \
@@ -1454,6 +1466,7 @@ bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu);
 int kvm_arch_post_init_vm(struct kvm *kvm);
 void kvm_arch_pre_destroy_vm(struct kvm *kvm);
 int kvm_arch_create_vm_debugfs(struct kvm *kvm);
+bool kvm_arch_has_private_mem(struct kvm *kvm);
 
 #ifndef __KVM_HAVE_ARCH_VM_ALLOC
 /*
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index fa9dd2d2c001..de5cce8c82c7 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -937,6 +937,47 @@ static int kvm_init_mmu_notifier(struct kvm *kvm)
 
 #endif /* CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER */
 
+#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM
+#define KVM_MEM_ATTR_SHARED	0x0001
+static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size,
+				     bool is_private)
+{
+	gfn_t start, end;
+	unsigned long index;
+	void *entry;
+	int r;
+
+	if (size == 0 || gpa + size < gpa)
+		return -EINVAL;
+	if (gpa & (PAGE_SIZE - 1) || size & (PAGE_SIZE - 1))
+		return -EINVAL;
+
+	start = gpa >> PAGE_SHIFT;
+	end = (gpa + size - 1 + PAGE_SIZE) >> PAGE_SHIFT;
+
+	/*
+	 * Guest memory defaults to private, kvm->mem_attr_array only stores
+	 * shared memory.
+	 */
+	entry = is_private ?
NULL : xa_mk_value(KVM_MEM_ATTR_SHARED); + + for (index = start; index < end; index++) { + r = xa_err(xa_store(&kvm->mem_attr_array, index, entry, + GFP_KERNEL_ACCOUNT)); + if (r) + goto err; + } + + kvm_zap_gfn_range(kvm, start, end); + + return r; +err: + for (; index > start; index--) + xa_erase(&kvm->mem_attr_array, index); + return r; +} +#endif /* CONFIG_HAVE_KVM_PRIVATE_MEM */ + #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER static int kvm_pm_notifier_call(struct notifier_block *bl, unsigned long state, @@ -1165,6 +1206,9 @@ static struct kvm *kvm_create_vm(unsigned long type, const char *fdname) spin_lock_init(&kvm->mn_invalidate_lock); rcuwait_init(&kvm->mn_memslots_update_rcuwait); xa_init(&kvm->vcpu_array); +#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM + xa_init(&kvm->mem_attr_array); +#endif INIT_LIST_HEAD(&kvm->gpc_list); spin_lock_init(&kvm->gpc_lock); @@ -1338,6 +1382,9 @@ static void kvm_destroy_vm(struct kvm *kvm) kvm_free_memslots(kvm, &kvm->__memslots[i][0]); kvm_free_memslots(kvm, &kvm->__memslots[i][1]); } +#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM + xa_destroy(&kvm->mem_attr_array); +#endif cleanup_srcu_struct(&kvm->irq_srcu); cleanup_srcu_struct(&kvm->srcu); kvm_arch_free_vm(kvm); @@ -1541,6 +1588,11 @@ static void kvm_replace_memslot(struct kvm *kvm, } } +bool __weak kvm_arch_has_private_mem(struct kvm *kvm) +{ + return false; +} + static int check_memory_region_flags(const struct kvm_user_mem_region *mem) { u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES; @@ -4703,6 +4755,24 @@ static long kvm_vm_ioctl(struct file *filp, r = kvm_vm_ioctl_set_memory_region(kvm, &mem); break; } +#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM + case KVM_MEMORY_ENCRYPT_REG_REGION: + case KVM_MEMORY_ENCRYPT_UNREG_REGION: { + struct kvm_enc_region region; + bool set = ioctl == KVM_MEMORY_ENCRYPT_REG_REGION; + + if (!kvm_arch_has_private_mem(kvm)) + goto arch_vm_ioctl; + + r = -EFAULT; + if (copy_from_user(®ion, argp, sizeof(region))) + goto out; + + r = kvm_vm_ioctl_set_mem_attr(kvm, region.addr, + region.size, set); + break; + } +#endif case KVM_GET_DIRTY_LOG: { struct kvm_dirty_log log; @@ -4856,6 +4926,9 @@ static long kvm_vm_ioctl(struct file *filp, r = kvm_vm_ioctl_get_stats_fd(kvm); break; default: +#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM +arch_vm_ioctl: +#endif r = kvm_arch_vm_ioctl(filp, ioctl, arg); } out: From patchwork Thu Sep 15 14:29:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 12977465 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id A2EDCECAAA1 for ; Thu, 15 Sep 2022 14:35:04 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 3F79E8D0006; Thu, 15 Sep 2022 10:35:04 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 380BD8D0001; Thu, 15 Sep 2022 10:35:04 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1D3748D0006; Thu, 15 Sep 2022 10:35:04 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 0B7688D0001 for ; Thu, 15 Sep 2022 10:35:04 -0400 (EDT) Received: from smtpin15.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay04.hostedemail.com (Postfix) with ESMTP id D8A051A0C0B for ; Thu, 15 Sep 2022 14:35:03 +0000 (UTC) X-FDA: 
From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini , Jonathan Corbet , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H . Peter Anvin" , Hugh Dickins , Jeff Layton , "J . Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , Chao Peng , "Kirill A . Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , Michael Roth , mhocko@suse.com, Muchun Song , wei.w.wang@intel.com
Subject: [PATCH v8 6/8] KVM: Update lpage info when private/shared memory are mixed
Date: Thu, 15 Sep 2022 22:29:11 +0800
Message-Id: <20220915142913.2213336-7-chao.p.peng@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220915142913.2213336-1-chao.p.peng@linux.intel.com>
References: <20220915142913.2213336-1-chao.p.peng@linux.intel.com>
MIME-Version: 1.0

When private and shared memory are mixed within a large page, lpage_info may no longer be accurate and should be updated to reflect the mixed state. A large page that contains mixed pages cannot really be mapped as a large page, since its private and shared pages come from different physical memory.

This patch updates lpage_info whenever the private/shared memory attribute changes: if both private and shared pages fall within a large page region, that region cannot be mapped as a large page. Tracking the mixed state in a 'count'-like variable is a bit challenging, so this patch instead reserves a bit in disallow_lpage to indicate that a large page contains mixed private/shared pages.
Signed-off-by: Chao Peng --- arch/x86/include/asm/kvm_host.h | 8 +++ arch/x86/kvm/mmu/mmu.c | 119 +++++++++++++++++++++++++++++++- arch/x86/kvm/x86.c | 2 + include/linux/kvm_host.h | 17 +++++ virt/kvm/kvm_main.c | 11 ++- 5 files changed, 154 insertions(+), 3 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index cfad6ba1a70a..85119ed9527a 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -38,6 +38,7 @@ #define __KVM_HAVE_ARCH_VCPU_DEBUGFS #define __KVM_HAVE_ZAP_GFN_RANGE +#define __KVM_HAVE_ARCH_UPDATE_MEM_ATTR #define KVM_MAX_VCPUS 1024 @@ -945,6 +946,13 @@ struct kvm_vcpu_arch { #endif }; +/* + * Use a bit in disallow_lpage to indicate private/shared pages mixed at the + * level. The remaining bits will be used as a reference count for other users. + */ +#define KVM_LPAGE_PRIVATE_SHARED_MIXED (1U << 31) +#define KVM_LPAGE_COUNT_MAX ((1U << 31) - 1) + struct kvm_lpage_info { int disallow_lpage; }; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 08abad4f3e6f..a0f198cede3d 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -762,11 +762,16 @@ static void update_gfn_disallow_lpage_count(const struct kvm_memory_slot *slot, { struct kvm_lpage_info *linfo; int i; + int disallow_count; for (i = PG_LEVEL_2M; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) { linfo = lpage_info_slot(gfn, slot, i); + + disallow_count = linfo->disallow_lpage & KVM_LPAGE_COUNT_MAX; + WARN_ON(disallow_count + count < 0 || + disallow_count > KVM_LPAGE_COUNT_MAX - count); + linfo->disallow_lpage += count; - WARN_ON(linfo->disallow_lpage < 0); } } @@ -6894,3 +6899,115 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm) if (kvm->arch.nx_lpage_recovery_thread) kthread_stop(kvm->arch.nx_lpage_recovery_thread); } + +static bool mem_attr_is_mixed(struct kvm *kvm, unsigned int attr, + gfn_t start, gfn_t end) +{ + XA_STATE(xas, &kvm->mem_attr_array, start); + gfn_t gfn = start; + void *entry; + bool shared, private; + bool mixed = false; + + if (attr == KVM_MEM_ATTR_SHARED) { + shared = true; + private = false; + } else { + shared = false; + private = true; + } + + rcu_read_lock(); + entry = xas_load(&xas); + while (gfn < end) { + if (xas_retry(&xas, entry)) + continue; + + KVM_BUG_ON(gfn != xas.xa_index, kvm); + + if (entry) + private = true; + else + shared = true; + + if (private && shared) { + mixed = true; + goto out; + } + + entry = xas_next(&xas); + gfn++; + } +out: + rcu_read_unlock(); + return mixed; +} + +static inline void update_mixed(struct kvm_lpage_info *linfo, bool mixed) +{ + if (mixed) + linfo->disallow_lpage |= KVM_LPAGE_PRIVATE_SHARED_MIXED; + else + linfo->disallow_lpage &= ~KVM_LPAGE_PRIVATE_SHARED_MIXED; +} + +static void update_mem_lpage_info(struct kvm *kvm, + struct kvm_memory_slot *slot, + unsigned int attr, + gfn_t start, gfn_t end) +{ + unsigned long lpage_start, lpage_end; + unsigned long gfn, pages, mask; + int level; + + for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) { + pages = KVM_PAGES_PER_HPAGE(level); + mask = ~(pages - 1); + lpage_start = start & mask; + lpage_end = (end - 1) & mask; + + /* + * We only need to scan the head and tail page, for middle pages + * we know they are not mixed. 
+ */ + update_mixed(lpage_info_slot(lpage_start, slot, level), + mem_attr_is_mixed(kvm, attr, lpage_start, + lpage_start + pages)); + + if (lpage_start == lpage_end) + return; + + for (gfn = lpage_start + pages; gfn < lpage_end; gfn += pages) + update_mixed(lpage_info_slot(gfn, slot, level), false); + + update_mixed(lpage_info_slot(lpage_end, slot, level), + mem_attr_is_mixed(kvm, attr, lpage_end, + lpage_end + pages)); + } +} + +void kvm_arch_update_mem_attr(struct kvm *kvm, unsigned int attr, + gfn_t start, gfn_t end) +{ + struct kvm_memory_slot *slot; + struct kvm_memslots *slots; + struct kvm_memslot_iter iter; + int i; + + WARN_ONCE(!(attr & (KVM_MEM_ATTR_PRIVATE | KVM_MEM_ATTR_SHARED)), + "Unsupported mem attribute.\n"); + + for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { + slots = __kvm_memslots(kvm, i); + + kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) { + slot = iter.slot; + start = max(start, slot->base_gfn); + end = min(end, slot->base_gfn + slot->npages); + if (WARN_ON_ONCE(start >= end)) + continue; + + update_mem_lpage_info(kvm, slot, attr, start, end); + } + } +} diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 081f62ccc9a1..ef11cda6f13f 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -12321,6 +12321,8 @@ static int kvm_alloc_memslot_metadata(struct kvm *kvm, if ((slot->base_gfn + npages) & (KVM_PAGES_PER_HPAGE(level) - 1)) linfo[lpages - 1].disallow_lpage = 1; ugfn = slot->userspace_addr >> PAGE_SHIFT; + if (kvm_slot_can_be_private(slot)) + ugfn |= slot->private_offset >> PAGE_SHIFT; /* * If the gfn and userspace address are not aligned wrt each * other, disable large page support for this slot. diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index d65690cae80b..fd36ce6597ad 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2277,4 +2277,21 @@ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu) /* Max number of entries allowed for each kvm dirty ring */ #define KVM_DIRTY_RING_MAX_ENTRIES 65536 +#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM + +#define KVM_MEM_ATTR_SHARED 0x0001 +#define KVM_MEM_ATTR_PRIVATE 0x0002 + +#ifdef __KVM_HAVE_ARCH_UPDATE_MEM_ATTR +void kvm_arch_update_mem_attr(struct kvm *kvm, unsigned int attr, + gfn_t start, gfn_t end); +#else +static inline void kvm_arch_update_mem_attr(struct kvm *kvm, unsigned int attr, + gfn_t start, gfn_t end) +{ +} +#endif + +#endif /* CONFIG_HAVE_KVM_PRIVATE_MEM */ + #endif diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index de5cce8c82c7..97d893f7482c 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -938,13 +938,13 @@ static int kvm_init_mmu_notifier(struct kvm *kvm) #endif /* CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER */ #ifdef CONFIG_HAVE_KVM_PRIVATE_MEM -#define KVM_MEM_ATTR_SHARED 0x0001 static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size, bool is_private) { gfn_t start, end; unsigned long index; void *entry; + int attr; int r; if (size == 0 || gpa + size < gpa) @@ -959,7 +959,13 @@ static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size, * Guest memory defaults to private, kvm->mem_attr_array only stores * shared memory. */ - entry = is_private ? 
NULL : xa_mk_value(KVM_MEM_ATTR_SHARED); + if (is_private) { + attr = KVM_MEM_ATTR_PRIVATE; + entry = NULL; + } else { + attr = KVM_MEM_ATTR_SHARED; + entry = xa_mk_value(KVM_MEM_ATTR_SHARED); + } for (index = start; index < end; index++) { r = xa_err(xa_store(&kvm->mem_attr_array, index, entry, @@ -969,6 +975,7 @@ static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size, } kvm_zap_gfn_range(kvm, start, end); + kvm_arch_update_mem_attr(kvm, attr, start, end); return r; err: From patchwork Thu Sep 15 14:29:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 12977466 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id E8D00C6FA89 for ; Thu, 15 Sep 2022 14:35:14 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 8B3308D0007; Thu, 15 Sep 2022 10:35:14 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 83C318D0001; Thu, 15 Sep 2022 10:35:14 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 68EE58D0007; Thu, 15 Sep 2022 10:35:14 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 58DEA8D0001 for ; Thu, 15 Sep 2022 10:35:14 -0400 (EDT) Received: from smtpin18.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id 3ABC540680 for ; Thu, 15 Sep 2022 14:35:14 +0000 (UTC) X-FDA: 79914567348.18.BC2A390 Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by imf05.hostedemail.com (Postfix) with ESMTP id 7F2481000BB for ; Thu, 15 Sep 2022 14:35:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1663252513; x=1694788513; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=OtV5Z+PSFLnhwjt/AzM9n4q6l2/VthWxC8/mcPUKv2E=; b=KY4gxg+TtwIXMNk7E/AtZkuajXklHIioI0P1uFsRTg4CUGIGPWrGpSIJ 8weRUB9k0Ocq77RlVdGU2wCP8+pOlbinMAPL33L0Bwwv45XGVvKaCtxfN en7fhP+n4L9vV0+7hvnnVqJwwGUWjTjQ7f+g+dH8FHupPjYJ+AmvotQPN UQrfUP94EX+oQMOMQGWYkFZZs4j9uKkFkuPkZc+5nicdSqQbIA5pGpOr7 03DM7Cr/OdKt88QtBGgDouxcvstLr9OT1VTK60GynY6fcORQKTm/pciRA TEFeN4TQCtv77YaNVLFKa+lHGRbgqn47WXuY7R6YB4eyMaq3X7ymrhYEO w==; X-IronPort-AV: E=McAfee;i="6500,9779,10470"; a="362690491" X-IronPort-AV: E=Sophos;i="5.93,318,1654585200"; d="scan'208";a="362690491" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Sep 2022 07:35:12 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.93,318,1654585200"; d="scan'208";a="945977249" Received: from chaop.bj.intel.com ([10.240.193.75]) by fmsmga005.fm.intel.com with ESMTP; 15 Sep 2022 07:35:02 -0700 From: Chao Peng To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org Cc: Paolo Bonzini , Jonathan Corbet , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H . Peter Anvin" , Hugh Dickins , Jeff Layton , "J . 
Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , Chao Peng , "Kirill A . Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , Michael Roth , mhocko@suse.com, Muchun Song , wei.w.wang@intel.com Subject: [PATCH v8 7/8] KVM: Handle page fault for private memory Date: Thu, 15 Sep 2022 22:29:12 +0800 Message-Id: <20220915142913.2213336-8-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220915142913.2213336-1-chao.p.peng@linux.intel.com> References: <20220915142913.2213336-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1663252513; a=rsa-sha256; cv=none; b=56nhbB8cKsnneB5n820bAIQ/XERkzwwEB5BtYVS9cZ2xq4+HR1APtfqLmpIc7hQRZLXcpW 1DW0ZCy0Udpu9i4LTZfgpQuk2JlZz23tfNdN+VFpq/LPYpyAGj5PCOi/4ZkpCWEBPx35nm H0fBjBaZ7KBg1Npg85PplSyZzZjlR0U= ARC-Authentication-Results: i=1; imf05.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=KY4gxg+T; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none); spf=none (imf05.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.100) smtp.mailfrom=chao.p.peng@linux.intel.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1663252513; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=sT8ytDZoVSQarwsGmZiUJbc904MhrfUYUukwiJPiCDo=; b=63qRgBKJC6ip4n9HOWyyeP5VfTOWGCTzYCI3KFCYJ10cXoo5BawlQYVd6tHo8rUHjKKo96 B+1dmwt9ctvH2dnmzHWS9KS+D4EjWthCXSoNNhbDILKhvKMYoSjYlx3YoW5BSEUUhEFZNM PdRdj4I3fzaAJRxfMqOJzj8P37/S7NE= X-Rspam-User: X-Rspamd-Queue-Id: 7F2481000BB X-Rspamd-Server: rspam05 X-Stat-Signature: f6sqdhak5kw68u9p6acgdnqac9pda6aa Authentication-Results: imf05.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=KY4gxg+T; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none); spf=none (imf05.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.100) smtp.mailfrom=chao.p.peng@linux.intel.com X-HE-Tag: 1663252513-699987 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: A memslot with KVM_MEM_PRIVATE being set can include both fd-based private memory and hva-based shared memory. Architecture code (like TDX code) can tell whether the on-going fault is private or not. This patch adds a 'is_private' field to kvm_page_fault to indicate this and architecture code is expected to set it. To handle page fault for such memslot, the handling logic is different depending on whether the fault is private or shared. KVM checks if 'is_private' matches the host's view of the page (this is maintained in mem_attr_array). - For a successful match, private pfn is obtained with inaccessible_get_pfn() from private fd and shared pfn is obtained with existing get_user_pages(). - For a failed match, KVM causes a KVM_EXIT_MEMORY_FAULT exit to userspace. 
Userspace then can convert memory between private/shared in host's view and then retry the access. Co-developed-by: Yu Zhang Signed-off-by: Yu Zhang Signed-off-by: Chao Peng --- arch/x86/kvm/mmu/mmu.c | 54 ++++++++++++++++++++++++++++++++- arch/x86/kvm/mmu/mmu_internal.h | 18 +++++++++++ arch/x86/kvm/mmu/mmutrace.h | 1 + include/linux/kvm_host.h | 24 +++++++++++++++ 4 files changed, 96 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index a0f198cede3d..81ab20003824 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3028,6 +3028,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm, break; } + if (kvm_mem_is_private(kvm, gfn)) + return max_level; + if (max_level == PG_LEVEL_4K) return PG_LEVEL_4K; @@ -4127,6 +4130,32 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work) kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true); } +static inline u8 order_to_level(int order) +{ + BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G); + + if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G)) + return PG_LEVEL_1G; + + if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M)) + return PG_LEVEL_2M; + + return PG_LEVEL_4K; +} + +static int kvm_faultin_pfn_private(struct kvm_page_fault *fault) +{ + int order; + struct kvm_memory_slot *slot = fault->slot; + + if (kvm_private_mem_get_pfn(slot, fault->gfn, &fault->pfn, &order)) + return RET_PF_RETRY; + + fault->max_level = min(order_to_level(order), fault->max_level); + fault->map_writable = !(slot->flags & KVM_MEM_READONLY); + return RET_PF_CONTINUE; +} + static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { struct kvm_memory_slot *slot = fault->slot; @@ -4159,6 +4188,22 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) return RET_PF_EMULATE; } + if (kvm_slot_can_be_private(slot) && + fault->is_private != kvm_mem_is_private(vcpu->kvm, fault->gfn)) { + vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT; + if (fault->is_private) + vcpu->run->memory.flags = KVM_MEMORY_EXIT_FLAG_PRIVATE; + else + vcpu->run->memory.flags = 0; + vcpu->run->memory.padding = 0; + vcpu->run->memory.gpa = fault->gfn << PAGE_SHIFT; + vcpu->run->memory.size = PAGE_SIZE; + return RET_PF_USER; + } + + if (fault->is_private) + return kvm_faultin_pfn_private(fault); + async = false; fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, &async, fault->write, &fault->map_writable, @@ -4267,7 +4312,11 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault read_unlock(&vcpu->kvm->mmu_lock); else write_unlock(&vcpu->kvm->mmu_lock); - kvm_release_pfn_clean(fault->pfn); + + if (fault->is_private) + kvm_private_mem_put_pfn(fault->slot, fault->pfn); + else + kvm_release_pfn_clean(fault->pfn); return r; } @@ -5543,6 +5592,9 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err return -EIO; } + if (r == RET_PF_USER) + return 0; + if (r < 0) return r; if (r != RET_PF_EMULATE) diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 582def531d4d..a55e352246a7 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -188,6 +188,7 @@ struct kvm_page_fault { /* Derived from mmu and global state. */ const bool is_tdp; + const bool is_private; const bool nx_huge_page_workaround_enabled; /* @@ -236,6 +237,7 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault); * RET_PF_RETRY: let CPU fault again on the address. 
* RET_PF_EMULATE: mmio page fault, emulate the instruction directly. * RET_PF_INVALID: the spte is invalid, let the real page fault path update it. + * RET_PF_USER: need to exit to userspace to handle this fault. * RET_PF_FIXED: The faulting entry has been fixed. * RET_PF_SPURIOUS: The faulting entry was already fixed, e.g. by another vCPU. * @@ -252,6 +254,7 @@ enum { RET_PF_RETRY, RET_PF_EMULATE, RET_PF_INVALID, + RET_PF_USER, RET_PF_FIXED, RET_PF_SPURIOUS, }; @@ -318,4 +321,19 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); +#ifndef CONFIG_HAVE_KVM_PRIVATE_MEM +static inline int kvm_private_mem_get_pfn(struct kvm_memory_slot *slot, + gfn_t gfn, kvm_pfn_t *pfn, int *order) +{ + WARN_ON_ONCE(1); + return -EOPNOTSUPP; +} + +static inline void kvm_private_mem_put_pfn(struct kvm_memory_slot *slot, + kvm_pfn_t pfn) +{ + WARN_ON_ONCE(1); +} +#endif /* CONFIG_HAVE_KVM_PRIVATE_MEM */ + #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h index ae86820cef69..2d7555381955 100644 --- a/arch/x86/kvm/mmu/mmutrace.h +++ b/arch/x86/kvm/mmu/mmutrace.h @@ -58,6 +58,7 @@ TRACE_DEFINE_ENUM(RET_PF_CONTINUE); TRACE_DEFINE_ENUM(RET_PF_RETRY); TRACE_DEFINE_ENUM(RET_PF_EMULATE); TRACE_DEFINE_ENUM(RET_PF_INVALID); +TRACE_DEFINE_ENUM(RET_PF_USER); TRACE_DEFINE_ENUM(RET_PF_FIXED); TRACE_DEFINE_ENUM(RET_PF_SPURIOUS); diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index fd36ce6597ad..b9906cdf468b 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2292,6 +2292,30 @@ static inline void kvm_arch_update_mem_attr(struct kvm *kvm, unsigned int attr, } #endif +static inline int kvm_private_mem_get_pfn(struct kvm_memory_slot *slot, + gfn_t gfn, kvm_pfn_t *pfn, int *order) +{ + int ret; + pfn_t pfnt; + pgoff_t index = gfn - slot->base_gfn + + (slot->private_offset >> PAGE_SHIFT); + + ret = inaccessible_get_pfn(slot->private_file, index, &pfnt, order); + *pfn = pfn_t_to_pfn(pfnt); + return ret; +} + +static inline void kvm_private_mem_put_pfn(struct kvm_memory_slot *slot, + kvm_pfn_t pfn) +{ + inaccessible_put_pfn(slot->private_file, pfn_to_pfn_t(pfn)); +} + +static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn) +{ + return !xa_load(&kvm->mem_attr_array, gfn); +} + #endif /* CONFIG_HAVE_KVM_PRIVATE_MEM */ #endif From patchwork Thu Sep 15 14:29:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 12977467 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5D020C6FA89 for ; Thu, 15 Sep 2022 14:35:30 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id EF41A8D0005; Thu, 15 Sep 2022 10:35:29 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id E7DA68D0001; Thu, 15 Sep 2022 10:35:29 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id CD09C8D0005; Thu, 15 Sep 2022 10:35:29 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id BD0EB8D0001 for ; Thu, 15 Sep 2022 10:35:29 -0400 (EDT) Received: from 
From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini , Jonathan Corbet , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H . Peter Anvin" , Hugh Dickins , Jeff Layton , "J . Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , Chao Peng , "Kirill A .
Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , Michael Roth , mhocko@suse.com, Muchun Song , wei.w.wang@intel.com Subject: [PATCH v8 8/8] KVM: Enable and expose KVM_MEM_PRIVATE Date: Thu, 15 Sep 2022 22:29:13 +0800 Message-Id: <20220915142913.2213336-9-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220915142913.2213336-1-chao.p.peng@linux.intel.com> References: <20220915142913.2213336-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 ARC-Authentication-Results: i=1; imf02.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=fqbFgqcR; spf=none (imf02.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 192.55.52.88) smtp.mailfrom=chao.p.peng@linux.intel.com; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none) ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1663252523; a=rsa-sha256; cv=none; b=HBodVObNyk915SNzUlFH7j0s9bMG/BTpaWsZ5kV0Tk/wi5pSwSW5wG/dWUM7N/49VOzRo/ zUJJ9scERG/XF3eAFypW0UxJRwKMW6O9tYj9i1Ya6Q5jMqDSvn3NNeH/wINxjnH2F6wYAM l+koSkoHe4kDRFksKqFxdgYbGwLP5Sc= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1663252523; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=PWNEJoopKsOHbP5U9dRGwaINlFiFik9SgW5DubdgIpo=; b=dmRiBXqbZrfNejzCePXG+9iaF7+ohbN6wdVmCZ3obuXqdZzTH+6xT6kSEurGY6gX3UKukP r8iSusDIRSshmhD9DBse1+tt2gAbcgDUDCaJRqjqNeB3oQCSa4lLZPJ/doNYGhUeCpFME6 4s4Td5m/611zxgCyyZYluRCwafzm8qs= X-Rspamd-Server: rspam10 X-Rspamd-Queue-Id: 03DF6800B3 X-Rspam-User: Authentication-Results: imf02.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=fqbFgqcR; spf=none (imf02.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 192.55.52.88) smtp.mailfrom=chao.p.peng@linux.intel.com; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none) X-Stat-Signature: 59hde18umjpgqqzjx76jpipmy4kazwkr X-HE-Tag: 1663252522-116957 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Expose KVM_MEM_PRIVATE and memslot fields private_fd/offset to userspace. KVM will register/unregister private memslot to fd-based memory backing store and response to invalidation event from inaccessible_notifier to zap the existing memory mappings in the secondary page table. Whether KVM_MEM_PRIVATE is actually exposed to userspace is determined by architecture code which can turn on it by overriding the default kvm_arch_has_private_mem(). A 'kvm' reference is added in memslot structure since in inaccessible_notifier callback we can only obtain a memslot reference but 'kvm' is needed to do the zapping. 
Co-developed-by: Yu Zhang Signed-off-by: Yu Zhang Signed-off-by: Chao Peng --- include/linux/kvm_host.h | 1 + virt/kvm/kvm_main.c | 116 +++++++++++++++++++++++++++++++++++++-- 2 files changed, 111 insertions(+), 6 deletions(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index b9906cdf468b..cb4eefac709c 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -589,6 +589,7 @@ struct kvm_memory_slot { struct file *private_file; loff_t private_offset; struct inaccessible_notifier notifier; + struct kvm *kvm; }; static inline bool kvm_slot_can_be_private(const struct kvm_memory_slot *slot) diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 97d893f7482c..87e239d35b96 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -983,6 +983,57 @@ static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size, xa_erase(&kvm->mem_attr_array, index); return r; } + +static void kvm_private_notifier_invalidate(struct inaccessible_notifier *notifier, + pgoff_t start, pgoff_t end) +{ + struct kvm_memory_slot *slot = container_of(notifier, + struct kvm_memory_slot, + notifier); + unsigned long base_pgoff = slot->private_offset >> PAGE_SHIFT; + gfn_t start_gfn = slot->base_gfn; + gfn_t end_gfn = slot->base_gfn + slot->npages; + + + if (start > base_pgoff) + start_gfn = slot->base_gfn + start - base_pgoff; + + if (end < base_pgoff + slot->npages) + end_gfn = slot->base_gfn + end - base_pgoff; + + if (start_gfn >= end_gfn) + return; + + kvm_zap_gfn_range(slot->kvm, start_gfn, end_gfn); +} + +static struct inaccessible_notifier_ops kvm_private_notifier_ops = { + .invalidate = kvm_private_notifier_invalidate, +}; + +static inline void kvm_private_mem_register(struct kvm_memory_slot *slot) +{ + slot->notifier.ops = &kvm_private_notifier_ops; + inaccessible_register_notifier(slot->private_file, &slot->notifier); +} + +static inline void kvm_private_mem_unregister(struct kvm_memory_slot *slot) +{ + inaccessible_unregister_notifier(slot->private_file, &slot->notifier); +} + +#else /* !CONFIG_HAVE_KVM_PRIVATE_MEM */ + +static inline void kvm_private_mem_register(struct kvm_memory_slot *slot) +{ + WARN_ON_ONCE(1); +} + +static inline void kvm_private_mem_unregister(struct kvm_memory_slot *slot) +{ + WARN_ON_ONCE(1); +} + #endif /* CONFIG_HAVE_KVM_PRIVATE_MEM */ #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER @@ -1029,6 +1080,11 @@ static void kvm_destroy_dirty_bitmap(struct kvm_memory_slot *memslot) /* This does not remove the slot from struct kvm_memslots data structures */ static void kvm_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot) { + if (slot->flags & KVM_MEM_PRIVATE) { + kvm_private_mem_unregister(slot); + fput(slot->private_file); + } + kvm_destroy_dirty_bitmap(slot); kvm_arch_free_memslot(kvm, slot); @@ -1600,10 +1656,16 @@ bool __weak kvm_arch_has_private_mem(struct kvm *kvm) return false; } -static int check_memory_region_flags(const struct kvm_user_mem_region *mem) +static int check_memory_region_flags(struct kvm *kvm, + const struct kvm_user_mem_region *mem) { u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES; +#ifdef CONFIG_HAVE_KVM_PRIVATE_MEM + if (kvm_arch_has_private_mem(kvm)) + valid_flags |= KVM_MEM_PRIVATE; +#endif + #ifdef __KVM_HAVE_READONLY_MEM valid_flags |= KVM_MEM_READONLY; #endif @@ -1679,6 +1741,9 @@ static int kvm_prepare_memory_region(struct kvm *kvm, { int r; + if (change == KVM_MR_CREATE && new->flags & KVM_MEM_PRIVATE) + kvm_private_mem_register(new); + /* * If dirty logging is disabled, nullify the bitmap; the old bitmap * will 
be freed on "commit". If logging is enabled in both old and @@ -1707,6 +1772,9 @@ static int kvm_prepare_memory_region(struct kvm *kvm, if (r && new && new->dirty_bitmap && (!old || !old->dirty_bitmap)) kvm_destroy_dirty_bitmap(new); + if (r && change == KVM_MR_CREATE && new->flags & KVM_MEM_PRIVATE) + kvm_private_mem_unregister(new); + return r; } @@ -2004,7 +2072,7 @@ int __kvm_set_memory_region(struct kvm *kvm, int as_id, id; int r; - r = check_memory_region_flags(mem); + r = check_memory_region_flags(kvm, mem); if (r) return r; @@ -2023,6 +2091,10 @@ int __kvm_set_memory_region(struct kvm *kvm, !access_ok((void __user *)(unsigned long)mem->userspace_addr, mem->memory_size)) return -EINVAL; + if (mem->flags & KVM_MEM_PRIVATE && + (mem->private_offset & (PAGE_SIZE - 1) || + mem->private_offset > U64_MAX - mem->memory_size)) + return -EINVAL; if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM) return -EINVAL; if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr) @@ -2061,6 +2133,9 @@ int __kvm_set_memory_region(struct kvm *kvm, if ((kvm->nr_memslot_pages + npages) < kvm->nr_memslot_pages) return -EINVAL; } else { /* Modify an existing slot. */ + /* Private memslots are immutable, they can only be deleted. */ + if (mem->flags & KVM_MEM_PRIVATE) + return -EINVAL; if ((mem->userspace_addr != old->userspace_addr) || (npages != old->npages) || ((mem->flags ^ old->flags) & KVM_MEM_READONLY)) @@ -2089,10 +2164,27 @@ int __kvm_set_memory_region(struct kvm *kvm, new->npages = npages; new->flags = mem->flags; new->userspace_addr = mem->userspace_addr; + if (mem->flags & KVM_MEM_PRIVATE) { + new->private_file = fget(mem->private_fd); + if (!new->private_file) { + r = -EINVAL; + goto out; + } + new->private_offset = mem->private_offset; + } + + new->kvm = kvm; r = kvm_set_memslot(kvm, old, new, change); if (r) - kfree(new); + goto out; + + return 0; + +out: + if (new->private_file) + fput(new->private_file); + kfree(new); return r; } EXPORT_SYMBOL_GPL(__kvm_set_memory_region); @@ -4747,16 +4839,28 @@ static long kvm_vm_ioctl(struct file *filp, } case KVM_SET_USER_MEMORY_REGION: { struct kvm_user_mem_region mem; - unsigned long size = sizeof(struct kvm_userspace_memory_region); + unsigned int flags_offset = offsetof(typeof(mem), flags); + unsigned long size; + u32 flags; kvm_sanity_check_user_mem_region_alias(); + memset(&mem, 0, sizeof(mem)); + r = -EFAULT; - if (copy_from_user(&mem, argp, size); + if (get_user(flags, (u32 __user *)(argp + flags_offset))) + goto out; + + if (flags & KVM_MEM_PRIVATE) + size = sizeof(struct kvm_userspace_memory_region_ext); + else + size = sizeof(struct kvm_userspace_memory_region); + + if (copy_from_user(&mem, argp, size)) goto out; r = -EINVAL; - if (mem.flags & KVM_MEM_PRIVATE) + if ((flags ^ mem.flags) & KVM_MEM_PRIVATE) goto out; r = kvm_vm_ioctl_set_memory_region(kvm, &mem);