From patchwork Mon Feb 3 16:12:06 2020
X-Patchwork-Submitter: Tamas K Lengyel
X-Patchwork-Id: 11363075
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper, Roger Pau Monné
Date: Mon, 3 Feb 2020 08:12:06 -0800
Subject: [Xen-devel] [PATCH v7 4/7] x86/mem_sharing: use default_access in add_to_physmap

When plugging a hole in the target physmap, don't use the access permission
returned by __get_gfn_type_access, as it is nonsensical (p2m_access_n) for the
use case add_to_physmap is intended to serve. It leads to vm_events being sent
out for access violations at unexpected locations. Make use of
p2m->default_access instead, and document the ambiguity surrounding "hole"
types and the corner cases of custom mem_access being set on holes.

Signed-off-by: Tamas K Lengyel
Reviewed-by: Jan Beulich
---
v7: add detailed comment explaining the issue and why this fix is correct
---
 xen/arch/x86/mm/mem_sharing.c | 28 +++++++++++++++++++++++++---
 1 file changed, 25 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 2b3be5b125..3835bc928f 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1061,6 +1061,29 @@ err_out:
     return ret;
 }
 
+/*
+ * This function is intended to be used for plugging a "hole" in the client's
+ * physmap with a shared memory entry. Unfortunately the definition of a "hole"
+ * is currently ambiguous. There are two ways one can run into a "hole":
+ * 1) there is no pagetable entry at all
+ * 2) there is a pagetable entry with a type that passes p2m_is_hole
+ *
+ * The intended use-case for this function is case 1.
+ *
+ * In case 1) the mem_access being returned is p2m_access_n, and it is
+ * incorrect to apply that to the new entry being added to the client physmap,
+ * thus we make use of p2m->default_access instead.
+ * When 2) is true it is possible that the existing pagetable entry also has
+ * a mem_access permission set, which could be p2m_access_n. Since we can't
+ * differentiate whether we are in case 1) or 2), we default to using the
+ * access permission defined as default for the p2m, thus in
+ * case 2) overwriting any custom mem_access permission the user may have set
+ * on a hole page. Custom mem_access permissions being set on a hole are
+ * unheard of but technically possible.
+ *
+ * TODO: to properly resolve this issue implement differentiation between the
+ * two "hole" types.
+ */
 static
 int add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle_t sh,
                    struct domain *cd, unsigned long cgfn, bool lock)
@@ -1071,11 +1094,10 @@ int add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle_t sh,
     p2m_type_t smfn_type, cmfn_type;
     struct gfn_info *gfn_info;
     struct p2m_domain *p2m = p2m_get_hostp2m(cd);
-    p2m_access_t a;
     struct two_gfns tg;
 
     get_two_gfns(sd, _gfn(sgfn), &smfn_type, NULL, &smfn,
-                 cd, _gfn(cgfn), &cmfn_type, &a, &cmfn, 0, &tg, lock);
+                 cd, _gfn(cgfn), &cmfn_type, NULL, &cmfn, 0, &tg, lock);
 
     /* Get the source shared page, check and lock */
     ret = XENMEM_SHARING_OP_S_HANDLE_INVALID;
@@ -1110,7 +1132,7 @@ int add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle_t sh,
     }
 
     ret = p2m_set_entry(p2m, _gfn(cgfn), smfn, PAGE_ORDER_4K,
-                        p2m_ram_shared, a);
+                        p2m_ram_shared, p2m->default_access);
 
     /* Tempted to turn this into an assert */
     if ( ret )
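
As an illustration for readers outside the Xen tree, below is a minimal
standalone sketch of the behavioural change (plain C, not hypervisor code; the
struct, enum and helper names are hypothetical stand-ins for the real p2m
types): before the patch the access reported by the lookup was reused, which
is p2m_access_n for a missing entry, while after the patch the p2m-wide
default is applied.

/* Standalone model of the access selection; not Xen code.  All names here
 * ("entry", "access_t", old_choice/new_choice) are illustrative stand-ins. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { access_n, access_rw, access_rwx } access_t;

struct entry {
    bool present;     /* case 1: no pagetable entry at all                  */
    bool hole_type;   /* case 2: entry exists but its type is a "hole" type */
    access_t access;  /* access reported by the lookup                      */
};

/* Pre-patch behaviour: reuse whatever access the lookup reported.  For a
 * missing entry that is access_n, so every touch of the newly shared page
 * would raise an access-violation vm_event. */
static access_t old_choice(const struct entry *e, access_t default_access)
{
    (void)default_access;
    return e->access;
}

/* Post-patch behaviour: the two "hole" cases cannot be told apart from the
 * reported access alone, so always fall back to the p2m-wide default. */
static access_t new_choice(const struct entry *e, access_t default_access)
{
    (void)e;
    return default_access;
}

int main(void)
{
    struct entry missing = { .present = false, .access = access_n };

    printf("old: %d (access_n -> spurious vm_events)\n",
           (int)old_choice(&missing, access_rwx));
    printf("new: %d (p2m default applied)\n",
           (int)new_choice(&missing, access_rwx));
    return 0;
}

In the real patch this corresponds to passing NULL for the access pointer to
get_two_gfns() and handing p2m->default_access to p2m_set_entry(), as in the
hunks above.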