From patchwork Wed Jan 8 17:14:01 2020
X-Patchwork-Submitter: Tamas K Lengyel
X-Patchwork-Id: 11324117
From: Tamas K Lengyel
To: xen-devel@lists.xenproject.org
Date: Wed, 8 Jan 2020 09:14:01 -0800
Subject: [Xen-devel] [PATCH v4 04/18] x86/mem_sharing: drop flags from mem_sharing_unshare_page
Cc: Stefano Stabellini, Tamas K Lengyel, Wei Liu, Konrad Rzeszutek Wilk,
    George Dunlap, Andrew Cooper, Ian Jackson, Jan Beulich, Julien Grall,
    Roger Pau Monné

All callers pass 0 in.

Signed-off-by: Tamas K Lengyel
Reviewed-by: Wei Liu
---
 xen/arch/x86/hvm/hvm.c            | 2 +-
 xen/arch/x86/mm/p2m.c             | 5 ++---
 xen/common/memory.c               | 2 +-
 xen/include/asm-x86/mem_sharing.h | 8 +++-----
 4 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 24f08d7043..38e9006c92 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1898,7 +1898,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     if ( npfec.write_access && (p2mt == p2m_ram_shared) )
     {
         ASSERT(p2m_is_hostp2m(p2m));
-        sharing_enomem = mem_sharing_unshare_page(currd, gfn, 0);
+        sharing_enomem = mem_sharing_unshare_page(currd, gfn);
         rc = 1;
         goto out_put_gfn;
     }
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 3119269073..baea632acc 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -515,7 +515,7 @@ mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn_l,
              * Try to unshare. If we fail, communicate ENOMEM without
              * sleeping.
              */
-            if ( mem_sharing_unshare_page(p2m->domain, gfn_l, 0) < 0 )
+            if ( mem_sharing_unshare_page(p2m->domain, gfn_l) < 0 )
                 mem_sharing_notify_enomem(p2m->domain, gfn_l, false);
             mfn = p2m->get_entry(p2m, gfn, t, a, q, page_order, NULL);
         }
@@ -896,8 +896,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
         {
             /* Do an unshare to cleanly take care of all corner cases. */
             int rc;
-            rc = mem_sharing_unshare_page(p2m->domain,
-                                          gfn_x(gfn_add(gfn, i)), 0);
+            rc = mem_sharing_unshare_page(p2m->domain, gfn_x(gfn_add(gfn, i)));
             if ( rc )
             {
                 p2m_unlock(p2m);
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 309e872edf..c7d2bac452 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -352,7 +352,7 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
          * might be the only one using this shared page, and we need to
          * trigger proper cleanup. Once done, this is like any other page.
          */
-        rc = mem_sharing_unshare_page(d, gmfn, 0);
+        rc = mem_sharing_unshare_page(d, gmfn);
         if ( rc )
         {
             mem_sharing_notify_enomem(d, gmfn, false);
diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index af2a1038b5..cf7848709f 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -69,10 +69,9 @@ int __mem_sharing_unshare_page(struct domain *d,
                                uint16_t flags);
 
 static inline int mem_sharing_unshare_page(struct domain *d,
-                                           unsigned long gfn,
-                                           uint16_t flags)
+                                           unsigned long gfn)
 {
-    int rc = __mem_sharing_unshare_page(d, gfn, flags);
+    int rc = __mem_sharing_unshare_page(d, gfn, 0);
     BUG_ON(rc && (rc != -ENOMEM));
     return rc;
 }
@@ -115,8 +114,7 @@ static inline unsigned int mem_sharing_get_nr_shared_mfns(void)
     return 0;
 }
 
-static inline int mem_sharing_unshare_page(struct domain *d, unsigned long gfn,
-                                           uint16_t flags)
+static inline int mem_sharing_unshare_page(struct domain *d, unsigned long gfn)
 {
     ASSERT_UNREACHABLE();
     return -EOPNOTSUPP;