From patchwork Wed Sep 16 13:06:51 2020
X-Patchwork-Id: 11779725
Subject: [PATCH v2 1/4] x86/shim: fix build with PV_SHIM_EXCLUSIVE and SHADOW_PAGING
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap, Tim Deegan
Message-ID: <83789565-57db-3632-fc4c-47c08266ffc9@suse.com>
Date: Wed, 16 Sep 2020 15:06:51 +0200

While there's little point in enabling both, the combination ought to
at least build correctly.
Drop the direct PV_SHIM_EXCLUSIVE conditionals and instead zap
PG_log_dirty to zero under the right conditions, keying the other
#ifdef-s off of that. While there, also expand on ded576ce07e9
("x86/shadow: dirty VRAM tracking is needed for HVM only"): there was
yet another is_hvm_domain() missing, and code touching the struct
fields needs to be guarded by suitable #ifdef-s as well. While there,
also guard shadow-mode-only fields accordingly.

Fixes: 8b5b49ceb3d9 ("x86: don't include domctl and alike in shim-exclusive builds")
Reported-by: Andrew Cooper
Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
---

--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -47,7 +47,7 @@
 /* Per-CPU variable for enforcing the lock ordering */
 DEFINE_PER_CPU(int, mm_lock_level);
 
-#ifndef CONFIG_PV_SHIM_EXCLUSIVE
+#if PG_log_dirty
 
 /************************************************/
 /*              LOG DIRTY SUPPORT               */
@@ -630,7 +630,7 @@ void paging_log_dirty_init(struct domain
     d->arch.paging.log_dirty.ops = ops;
 }
 
-#endif /* CONFIG_PV_SHIM_EXCLUSIVE */
+#endif /* PG_log_dirty */
 
 /************************************************/
 /*           CODE FOR PAGING SUPPORT            */
@@ -671,7 +671,7 @@ void paging_vcpu_init(struct vcpu *v)
         shadow_vcpu_init(v);
 }
 
-#ifndef CONFIG_PV_SHIM_EXCLUSIVE
+#if PG_log_dirty
 int paging_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
                   XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl,
                   bool_t resuming)
@@ -792,7 +792,7 @@ long paging_domctl_continuation(XEN_GUES
     return ret;
 }
 
-#endif /* CONFIG_PV_SHIM_EXCLUSIVE */
+#endif /* PG_log_dirty */
 
 /* Call when destroying a domain */
 int paging_teardown(struct domain *d)
@@ -808,7 +808,7 @@ int paging_teardown(struct domain *d)
     if ( preempted )
         return -ERESTART;
 
-#ifndef CONFIG_PV_SHIM_EXCLUSIVE
+#if PG_log_dirty
     /* clean up log dirty resources. */
     rc = paging_free_log_dirty_bitmap(d, 0);
     if ( rc == -ERESTART )
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2869,12 +2869,14 @@ void shadow_teardown(struct domain *d, b
      * calls now that we've torn down the bitmap */
     d->arch.paging.mode &= ~PG_log_dirty;
 
-    if ( d->arch.hvm.dirty_vram )
+#ifdef CONFIG_HVM
+    if ( is_hvm_domain(d) && d->arch.hvm.dirty_vram )
     {
         xfree(d->arch.hvm.dirty_vram->sl1ma);
         xfree(d->arch.hvm.dirty_vram->dirty_bitmap);
         XFREE(d->arch.hvm.dirty_vram);
     }
+#endif
 
  out:
     paging_unlock(d);
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -618,6 +618,7 @@ _sh_propagate(struct vcpu *v,
         }
     }
 
+#ifdef CONFIG_HVM
     if ( unlikely(level == 1) && is_hvm_domain(d) )
     {
         struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
@@ -632,6 +633,7 @@ _sh_propagate(struct vcpu *v,
                 sflags &= ~_PAGE_RW;
         }
     }
+#endif
 
     /* Read-only memory */
     if ( p2m_is_readonly(p2mt) )
@@ -1050,6 +1052,7 @@ static inline void shadow_vram_get_l1e(s
                                        mfn_t sl1mfn,
                                        struct domain *d)
 {
+#ifdef CONFIG_HVM
     mfn_t mfn = shadow_l1e_get_mfn(new_sl1e);
     int flags = shadow_l1e_get_flags(new_sl1e);
     unsigned long gfn;
@@ -1074,6 +1077,7 @@ static inline void shadow_vram_get_l1e(s
             dirty_vram->sl1ma[i] = mfn_to_maddr(sl1mfn)
                 | ((unsigned long)sl1e & ~PAGE_MASK);
     }
+#endif
 }
 
 static inline void shadow_vram_put_l1e(shadow_l1e_t old_sl1e,
@@ -1081,6 +1085,7 @@ static inline void shadow_vram_put_l1e(s
                                        mfn_t sl1mfn,
                                        struct domain *d)
 {
+#ifdef CONFIG_HVM
     mfn_t mfn = shadow_l1e_get_mfn(old_sl1e);
     int flags = shadow_l1e_get_flags(old_sl1e);
     unsigned long gfn;
@@ -1140,6 +1145,7 @@ static inline void shadow_vram_put_l1e(s
             dirty_vram->last_dirty = NOW();
         }
     }
+#endif
 }
 
 static int shadow_set_l1e(struct domain *d,
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -67,8 +67,12 @@
 #define PG_translate   0
 #define PG_external    0
 #endif
+#if defined(CONFIG_HVM) || !defined(CONFIG_PV_SHIM_EXCLUSIVE)
 /* Enable log dirty mode */
 #define PG_log_dirty  (XEN_DOMCTL_SHADOW_ENABLE_LOG_DIRTY << PG_mode_shift)
+#else
+#define PG_log_dirty  0
+#endif
 
 /* All paging modes. */
 #define PG_MASK (PG_refcounts | PG_log_dirty | PG_translate | PG_external)
@@ -154,7 +158,7 @@ struct paging_mode {
 /*****************************************************************************
  * Log dirty code */
 
-#ifndef CONFIG_PV_SHIM_EXCLUSIVE
+#if PG_log_dirty
 
 /* get the dirty bitmap for a specific range of pfns */
 void paging_log_dirty_range(struct domain *d,
@@ -195,23 +199,28 @@ int paging_mfn_is_dirty(struct domain *d
 #define L4_LOGDIRTY_IDX(pfn) ((pfn_x(pfn) >> (PAGE_SHIFT + 3 + PAGETABLE_ORDER * 2)) & \
                               (LOGDIRTY_NODE_ENTRIES-1))
 
+#ifdef CONFIG_HVM
 /* VRAM dirty tracking support */
 struct sh_dirty_vram {
     unsigned long begin_pfn;
     unsigned long end_pfn;
+#ifdef CONFIG_SHADOW_PAGING
     paddr_t *sl1ma;
     uint8_t *dirty_bitmap;
     s_time_t last_dirty;
+#endif
 };
+#endif
 
-#else /* !CONFIG_PV_SHIM_EXCLUSIVE */
+#else /* !PG_log_dirty */
 
 static inline void paging_log_dirty_init(struct domain *d,
                                          const struct log_dirty_ops *ops) {}
 static inline void paging_mark_dirty(struct domain *d, mfn_t gmfn) {}
 static inline void paging_mark_pfn_dirty(struct domain *d, pfn_t pfn) {}
+static inline bool paging_mfn_is_dirty(struct domain *d, mfn_t gmfn) { return false; }
 
-#endif /* CONFIG_PV_SHIM_EXCLUSIVE */
+#endif /* PG_log_dirty */
 
 /*****************************************************************************
  * Entry points into the paging-assistance code */
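
[The keying above relies on the preprocessor treating a macro defined
as 0 as false in "#if", so no new CONFIG_* tests are needed at the use
sites. A minimal standalone C sketch of the pattern, with illustrative
names only -- nothing below is from the Xen tree:]

    #ifdef CONFIG_FOO
    # define PG_foo (1u << 10)   /* real mode bit when the feature is built in */
    #else
    # define PG_foo 0            /* zero: every "#if PG_foo" section drops out */
    #endif

    #if PG_foo
    void foo_init(void);         /* real declaration; implementation elsewhere */
    #else
    static inline void foo_init(void) {} /* stub keeps call sites unchanged */
    #endif

    #define PG_MASK_demo (PG_foo | 0x3)  /* expressions using PG_foo stay valid */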
From patchwork Wed Sep 16 13:07:27 2020
X-Patchwork-Id: 11779727
Subject: [PATCH v2 2/4] x86/shim: adjust Kconfig defaults
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap
Message-ID: <3d199a42-7b16-f673-6817-769824d56ebf@suse.com>
Date: Wed, 16 Sep 2020 15:07:27 +0200

Just like HVM, defaulting SHADOW_PAGING and TBOOT to Yes in shim-
exclusive mode makes no sense, as the respective code is dead there.

Also adjust the shim default config file: it needs to specify values
only for settings where a non-default value is wanted.

Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
---
v2: Use simple default expression where possible.

--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -116,9 +116,9 @@ config XEN_SHSTK
 	  compatiblity can be provided via the PV Shim mechanism.
 
 config SHADOW_PAGING
-	bool "Shadow Paging"
-	default y
-	---help---
+	bool "Shadow Paging"
+	default !PV_SHIM_EXCLUSIVE
+	---help---
 	  Shadow paging is a software alternative to hardware paging support
 	  (Intel EPT, AMD NPT).
@@ -165,8 +165,8 @@ config HVM_FEP
 	  If unsure, say N.
 
 config TBOOT
-	def_bool y
-	prompt "Xen tboot support" if EXPERT
+	bool "Xen tboot support" if EXPERT
+	default y if !PV_SHIM_EXCLUSIVE
 	select CRYPTO
 	---help---
 	  Allows support for Trusted Boot using the Intel(R) Trusted Execution
--- a/xen/arch/x86/configs/pvshim_defconfig
+++ b/xen/arch/x86/configs/pvshim_defconfig
@@ -8,12 +8,9 @@ CONFIG_NR_CPUS=32
 CONFIG_EXPERT=y
 CONFIG_SCHED_NULL=y
 # Disable features not used by the PV shim
-# CONFIG_HVM is not set
 # CONFIG_XEN_SHSTK is not set
 # CONFIG_HYPFS is not set
-# CONFIG_SHADOW_PAGING is not set
 # CONFIG_BIGMEM is not set
-# CONFIG_TBOOT is not set
 # CONFIG_KEXEC is not set
 # CONFIG_XENOPROF is not set
 # CONFIG_XSM is not set
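
[For illustration only, with an invented symbol name: an
expression-valued default keeps the prompt, so the setting can still
be flipped by hand; only the preselected value follows
PV_SHIM_EXCLUSIVE. That is why the defconfig no longer needs explicit
"is not set" lines for these symbols -- the default already yields n
there:]

    config FEATURE_FOO
    	bool "Feature foo"
    	default !PV_SHIM_EXCLUSIVE	# preset to n in shim-exclusive configs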
From patchwork Wed Sep 16 13:08:00 2020
X-Patchwork-Id: 11779731
Subject: [PATCH v2 3/4] x86/shim: don't permit HVM and PV_SHIM_EXCLUSIVE at the same time
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap
Date: Wed, 16 Sep 2020 15:08:00 +0200

This combination doesn't really make sense (and there likely are
more); in particular, even if the code built with both options set,
HVM guests wouldn't work (and I think one wouldn't be able to create
one in the first place). The alternative here would be some presumably
intrusive #ifdef-ary to get this combination to actually build (but
still not work) again.

Signed-off-by: Jan Beulich
Acked-by: Roger Pau Monné
---
v2: Restore lost default setting.

--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -23,7 +23,7 @@ config X86
 	select HAS_PDX
 	select HAS_SCHED_GRANULARITY
 	select HAS_UBSAN
-	select HAS_VPCI if !PV_SHIM_EXCLUSIVE && HVM
+	select HAS_VPCI if HVM
 	select NEEDS_LIBELF
 	select NUMA
 
@@ -90,8 +90,9 @@ config PV_LINEAR_PT
 	  If unsure, say Y.
 
 config HVM
-	def_bool !PV_SHIM_EXCLUSIVE
-	prompt "HVM support"
+	bool "HVM support"
+	depends on !PV_SHIM_EXCLUSIVE
+	default y
 	---help---
 	  Interfaces to support HVM domains.  HVM domains require hardware
 	  virtualisation extensions (e.g. Intel VT-x, AMD SVM), but can boot
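
[The distinction from the previous patch is worth spelling out: a
"default" expression can still be overridden by the user, whereas
"depends on" removes the option from shim-exclusive configurations
altogether. A sketch with an invented symbol name:]

    config FEATURE_BAR
    	bool "Feature bar"
    	depends on !PV_SHIM_EXCLUSIVE	# not even offered when shim-exclusive
    	default y			# unchanged default everywhere else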
From patchwork Wed Sep 16 13:08:40 2020
X-Patchwork-Id: 11779733
Subject: [PATCH v2 4/4] x86/shadow: refactor shadow_vram_{get,put}_l1e()
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap, Tim Deegan
Message-ID: <51515581-19f3-5b7c-a2f9-1a0b11f8283a@suse.com>
Date: Wed, 16 Sep 2020 15:08:40 +0200

By passing the functions an MFN and flags, only a single instance of
each is needed; they were pretty large for inline functions anyway.

While moving the code, also adjust coding style and add const where
sensible / possible.

Signed-off-by: Jan Beulich
---
v2: New.

--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -903,6 +903,104 @@ int shadow_track_dirty_vram(struct domai
     return rc;
 }
 
+void shadow_vram_get_mfn(mfn_t mfn, unsigned int l1f,
+                         mfn_t sl1mfn, const void *sl1e,
+                         const struct domain *d)
+{
+    unsigned long gfn;
+    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
+
+    ASSERT(is_hvm_domain(d));
+
+    if ( !dirty_vram /* tracking disabled? */ ||
+         !(l1f & _PAGE_RW) /* read-only mapping? */ ||
+         !mfn_valid(mfn) /* mfn can be invalid in mmio_direct */)
+        return;
+
+    gfn = gfn_x(mfn_to_gfn(d, mfn));
+    /* Page sharing not supported on shadow PTs */
+    BUG_ON(SHARED_M2P(gfn));
+
+    if ( (gfn >= dirty_vram->begin_pfn) && (gfn < dirty_vram->end_pfn) )
+    {
+        unsigned long i = gfn - dirty_vram->begin_pfn;
+        const struct page_info *page = mfn_to_page(mfn);
+
+        if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
+            /* Initial guest reference, record it */
+            dirty_vram->sl1ma[i] = mfn_to_maddr(sl1mfn) |
+                                   PAGE_OFFSET(sl1e);
+    }
+}
+
+void shadow_vram_put_mfn(mfn_t mfn, unsigned int l1f,
+                         mfn_t sl1mfn, const void *sl1e,
+                         const struct domain *d)
+{
+    unsigned long gfn;
+    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
+
+    ASSERT(is_hvm_domain(d));
+
+    if ( !dirty_vram /* tracking disabled? */ ||
+         !(l1f & _PAGE_RW) /* read-only mapping? */ ||
+         !mfn_valid(mfn) /* mfn can be invalid in mmio_direct */)
+        return;
+
+    gfn = gfn_x(mfn_to_gfn(d, mfn));
+    /* Page sharing not supported on shadow PTs */
+    BUG_ON(SHARED_M2P(gfn));
+
+    if ( (gfn >= dirty_vram->begin_pfn) && (gfn < dirty_vram->end_pfn) )
+    {
+        unsigned long i = gfn - dirty_vram->begin_pfn;
+        const struct page_info *page = mfn_to_page(mfn);
+        bool dirty = false;
+        paddr_t sl1ma = mfn_to_maddr(sl1mfn) | PAGE_OFFSET(sl1e);
+
+        if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
+        {
+            /* Last reference */
+            if ( dirty_vram->sl1ma[i] == INVALID_PADDR )
+            {
+                /* We didn't know it was that one, let's say it is dirty */
+                dirty = true;
+            }
+            else
+            {
+                ASSERT(dirty_vram->sl1ma[i] == sl1ma);
+                dirty_vram->sl1ma[i] = INVALID_PADDR;
+                if ( l1f & _PAGE_DIRTY )
+                    dirty = true;
+            }
+        }
+        else
+        {
+            /* We had more than one reference, just consider the page dirty. */
+            dirty = true;
+            /* Check that it's not the one we recorded. */
+            if ( dirty_vram->sl1ma[i] == sl1ma )
+            {
+                /* Too bad, we remembered the wrong one... */
+                dirty_vram->sl1ma[i] = INVALID_PADDR;
+            }
+            else
+            {
+                /*
+                 * Ok, our recorded sl1e is still pointing to this page, let's
+                 * just hope it will remain.
+                 */
+            }
+        }
+
+        if ( dirty )
+        {
+            dirty_vram->dirty_bitmap[i / 8] |= 1 << (i % 8);
+            dirty_vram->last_dirty = NOW();
+        }
+    }
+}
+
 /*
  * Local variables:
  * mode: C
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -1047,107 +1047,6 @@ static int shadow_set_l2e(struct domain
     return flags;
 }
 
-static inline void shadow_vram_get_l1e(shadow_l1e_t new_sl1e,
-                                       shadow_l1e_t *sl1e,
-                                       mfn_t sl1mfn,
-                                       struct domain *d)
-{
-#ifdef CONFIG_HVM
-    mfn_t mfn = shadow_l1e_get_mfn(new_sl1e);
-    int flags = shadow_l1e_get_flags(new_sl1e);
-    unsigned long gfn;
-    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
-
-    if ( !is_hvm_domain(d) || !dirty_vram /* tracking disabled? */
-         || !(flags & _PAGE_RW) /* read-only mapping? */
-         || !mfn_valid(mfn) )   /* mfn can be invalid in mmio_direct */
-        return;
-
-    gfn = gfn_x(mfn_to_gfn(d, mfn));
-    /* Page sharing not supported on shadow PTs */
-    BUG_ON(SHARED_M2P(gfn));
-
-    if ( (gfn >= dirty_vram->begin_pfn) && (gfn < dirty_vram->end_pfn) )
-    {
-        unsigned long i = gfn - dirty_vram->begin_pfn;
-        struct page_info *page = mfn_to_page(mfn);
-
-        if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
-            /* Initial guest reference, record it */
-            dirty_vram->sl1ma[i] = mfn_to_maddr(sl1mfn)
-                | ((unsigned long)sl1e & ~PAGE_MASK);
-    }
-#endif
-}
-
-static inline void shadow_vram_put_l1e(shadow_l1e_t old_sl1e,
-                                       shadow_l1e_t *sl1e,
-                                       mfn_t sl1mfn,
-                                       struct domain *d)
-{
-#ifdef CONFIG_HVM
-    mfn_t mfn = shadow_l1e_get_mfn(old_sl1e);
-    int flags = shadow_l1e_get_flags(old_sl1e);
-    unsigned long gfn;
-    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
-
-    if ( !is_hvm_domain(d) || !dirty_vram /* tracking disabled? */
-         || !(flags & _PAGE_RW) /* read-only mapping? */
-         || !mfn_valid(mfn) )   /* mfn can be invalid in mmio_direct */
-        return;
-
-    gfn = gfn_x(mfn_to_gfn(d, mfn));
-    /* Page sharing not supported on shadow PTs */
-    BUG_ON(SHARED_M2P(gfn));
-
-    if ( (gfn >= dirty_vram->begin_pfn) && (gfn < dirty_vram->end_pfn) )
-    {
-        unsigned long i = gfn - dirty_vram->begin_pfn;
-        struct page_info *page = mfn_to_page(mfn);
-        int dirty = 0;
-        paddr_t sl1ma = mfn_to_maddr(sl1mfn)
-            | ((unsigned long)sl1e & ~PAGE_MASK);
-
-        if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
-        {
-            /* Last reference */
-            if ( dirty_vram->sl1ma[i] == INVALID_PADDR ) {
-                /* We didn't know it was that one, let's say it is dirty */
-                dirty = 1;
-            }
-            else
-            {
-                ASSERT(dirty_vram->sl1ma[i] == sl1ma);
-                dirty_vram->sl1ma[i] = INVALID_PADDR;
-                if ( flags & _PAGE_DIRTY )
-                    dirty = 1;
-            }
-        }
-        else
-        {
-            /* We had more than one reference, just consider the page dirty. */
-            dirty = 1;
-            /* Check that it's not the one we recorded. */
-            if ( dirty_vram->sl1ma[i] == sl1ma )
-            {
-                /* Too bad, we remembered the wrong one... */
-                dirty_vram->sl1ma[i] = INVALID_PADDR;
-            }
-            else
-            {
-                /* Ok, our recorded sl1e is still pointing to this page, let's
-                 * just hope it will remain. */
-            }
-        }
-        if ( dirty )
-        {
-            dirty_vram->dirty_bitmap[i / 8] |= 1 << (i % 8);
-            dirty_vram->last_dirty = NOW();
-        }
-    }
-#endif
-}
-
 static int shadow_set_l1e(struct domain *d,
                           shadow_l1e_t *sl1e,
                           shadow_l1e_t new_sl1e,
@@ -1156,6 +1055,7 @@ static int shadow_set_l1e(struct domain
 {
     int flags = 0;
     shadow_l1e_t old_sl1e;
+    unsigned int old_sl1f;
 #if SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC
     mfn_t new_gmfn = shadow_l1e_get_mfn(new_sl1e);
 #endif
@@ -1194,7 +1094,9 @@ static int shadow_set_l1e(struct domain
             new_sl1e = shadow_l1e_flip_flags(new_sl1e, rc);
             /* fall through */
         case 0:
-            shadow_vram_get_l1e(new_sl1e, sl1e, sl1mfn, d);
+            shadow_vram_get_mfn(shadow_l1e_get_mfn(new_sl1e),
+                                shadow_l1e_get_flags(new_sl1e),
+                                sl1mfn, sl1e, d);
             break;
         }
 #undef PAGE_FLIPPABLE
@@ -1205,20 +1107,19 @@ static int shadow_set_l1e(struct domain
     shadow_write_entries(sl1e, &new_sl1e, 1, sl1mfn);
     flags |= SHADOW_SET_CHANGED;
 
-    if ( (shadow_l1e_get_flags(old_sl1e) & _PAGE_PRESENT)
-         && !sh_l1e_is_magic(old_sl1e) )
+    old_sl1f = shadow_l1e_get_flags(old_sl1e);
+    if ( (old_sl1f & _PAGE_PRESENT) && !sh_l1e_is_magic(old_sl1e) &&
+         shadow_mode_refcounts(d) )
     {
         /* We lost a reference to an old mfn. */
        /* N.B. Unlike higher-level sets, never need an extra flush
         * when writing an l1e.  Because it points to the same guest frame
         * as the guest l1e did, it's the guest's responsibility to
         * trigger a flush later. */
-        if ( shadow_mode_refcounts(d) )
-        {
-            shadow_vram_put_l1e(old_sl1e, sl1e, sl1mfn, d);
-            shadow_put_page_from_l1e(old_sl1e, d);
-            TRACE_SHADOW_PATH_FLAG(TRCE_SFLAG_SHADOW_L1_PUT_REF);
-        }
+        shadow_vram_put_mfn(shadow_l1e_get_mfn(old_sl1e), old_sl1f,
+                            sl1mfn, sl1e, d);
+        shadow_put_page_from_l1e(old_sl1e, d);
+        TRACE_SHADOW_PATH_FLAG(TRCE_SFLAG_SHADOW_L1_PUT_REF);
     }
     return flags;
 }
@@ -1944,9 +1845,12 @@ void sh_destroy_l1_shadow(struct domain
         /* Decrement refcounts of all the old entries */
         mfn_t sl1mfn = smfn;
         SHADOW_FOREACH_L1E(sl1mfn, sl1e, 0, 0, {
-            if ( (shadow_l1e_get_flags(*sl1e) & _PAGE_PRESENT)
-                 && !sh_l1e_is_magic(*sl1e) ) {
-                shadow_vram_put_l1e(*sl1e, sl1e, sl1mfn, d);
+            unsigned int sl1f = shadow_l1e_get_flags(*sl1e);
+
+            if ( (sl1f & _PAGE_PRESENT) && !sh_l1e_is_magic(*sl1e) )
+            {
+                shadow_vram_put_mfn(shadow_l1e_get_mfn(*sl1e), sl1f,
+                                    sl1mfn, sl1e, d);
                 shadow_put_page_from_l1e(*sl1e, d);
             }
         });
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -410,6 +410,14 @@ void shadow_update_paging_modes(struct v
  * With user_only == 1, unhooks only the user-mode mappings. */
 void shadow_unhook_mappings(struct domain *d, mfn_t smfn, int user_only);
 
+/* VRAM dirty tracking helpers. */
+void shadow_vram_get_mfn(mfn_t mfn, unsigned int l1f,
+                         mfn_t sl1mfn, const void *sl1e,
+                         const struct domain *d);
+void shadow_vram_put_mfn(mfn_t mfn, unsigned int l1f,
+                         mfn_t sl1mfn, const void *sl1e,
+                         const struct domain *d);
+
 #if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
 /* Allow a shadowed page to go out of sync */
 int sh_unsync(struct vcpu *v, mfn_t gmfn);
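
[All dirty-marking paths in shadow_vram_put_mfn() converge on the same
two-line bitmap update. As a standalone sketch of that bookkeeping --
simplified types and an invented wrapper name, not code from the
series:]

    #include <stdint.h>

    /* Bit i of the bitmap covers frame begin_pfn + i; eight frames per byte. */
    static void vram_mark_dirty(uint8_t *dirty_bitmap, unsigned long begin_pfn,
                                unsigned long gfn)
    {
        unsigned long i = gfn - begin_pfn;

        dirty_bitmap[i / 8] |= 1 << (i % 8);
    }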