From patchwork Mon Sep 11 04:37:55 2017
X-Patchwork-Submitter: Haozhong Zhang
X-Patchwork-Id: 9946611
From: Haozhong Zhang <haozhong.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 11 Sep 2017 12:37:55 +0800
Message-Id: <20170911043820.14617-15-haozhong.zhang@intel.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170911043820.14617-1-haozhong.zhang@intel.com>
References: <20170911043820.14617-1-haozhong.zhang@intel.com>
Cc: Haozhong Zhang, Andrew Cooper, Jan Beulich, Chao Peng, Dan Williams
Subject: [Xen-devel] [RFC XEN PATCH v3 14/39] x86_64/mm: refactor memory_add()

Separate the revertible part of memory_add() into memory_add_common(),
which will also be used in PMEM management. The separation will ease
failure recovery in PMEM management.

Several coding-style issues in the touched code are fixed as well. No
functional change is introduced.

Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
---
Cc: Jan Beulich
Cc: Andrew Cooper
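For illustration only (not part of this patch): a rough sketch of how a
later PMEM-management path might reuse memory_add_common() with
direct_map = false, per the motivation above.  pmem_setup_region() and
its parameter names are hypothetical; only memory_add_common() and
struct mem_hotadd_info come from this patch, and the caller is assumed
to live in xen/arch/x86/x86_64/mm.c since memory_add_common() is static.

/* Hypothetical caller -- illustration only, not part of this patch. */
static int pmem_setup_region(unsigned long data_spfn, unsigned long data_epfn,
                             unsigned int pxm)
{
    struct mem_hotadd_info info =
        { .spfn = data_spfn, .epfn = data_epfn, .cur = data_spfn };

    /*
     * direct_map = false: extend the frame table and M2P for the range,
     * but create no direct-map mappings for it.  On failure,
     * memory_add_common() unwinds its own changes, which is what eases
     * the failure recovery mentioned above.
     */
    return memory_add_common(&info, pxm, false);
}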
---
 xen/arch/x86/x86_64/mm.c | 98 +++++++++++++++++++++++++++---------------------
 1 file changed, 56 insertions(+), 42 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index f635e4bf70..c8ffafe8a8 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1337,21 +1337,16 @@ static int mem_hotadd_check(unsigned long spfn, unsigned long epfn)
     return 1;
 }
 
-/*
- * A bit paranoid for memory allocation failure issue since
- * it may be reason for memory add
- */
-int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
+static int memory_add_common(struct mem_hotadd_info *info,
+                             unsigned int pxm, bool direct_map)
 {
-    struct mem_hotadd_info info;
+    unsigned long spfn = info->spfn, epfn = info->epfn;
     int ret;
     nodeid_t node;
     unsigned long old_max = max_page, old_total = total_pages;
     unsigned long old_node_start, old_node_span, orig_online;
     unsigned long i;
 
-    dprintk(XENLOG_INFO, "memory_add %lx ~ %lx with pxm %x\n", spfn, epfn, pxm);
-
     if ( !mem_hotadd_check(spfn, epfn) )
         return -EINVAL;
 
@@ -1366,22 +1361,25 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
         return -EINVAL;
     }
 
-    i = virt_to_mfn(HYPERVISOR_VIRT_END - 1) + 1;
-    if ( spfn < i )
-    {
-        ret = map_pages_to_xen((unsigned long)mfn_to_virt(spfn), spfn,
-                               min(epfn, i) - spfn, PAGE_HYPERVISOR);
-        if ( ret )
-            goto destroy_directmap;
-    }
-    if ( i < epfn )
+    if ( direct_map )
     {
-        if ( i < spfn )
-            i = spfn;
-        ret = map_pages_to_xen((unsigned long)mfn_to_virt(i), i,
-                               epfn - i, __PAGE_HYPERVISOR_RW);
-        if ( ret )
-            goto destroy_directmap;
+        i = virt_to_mfn(HYPERVISOR_VIRT_END - 1) + 1;
+        if ( spfn < i )
+        {
+            ret = map_pages_to_xen((unsigned long)mfn_to_virt(spfn), spfn,
+                                   min(epfn, i) - spfn, PAGE_HYPERVISOR);
+            if ( ret )
+                goto destroy_directmap;
+        }
+        if ( i < epfn )
+        {
+            if ( i < spfn )
+                i = spfn;
+            ret = map_pages_to_xen((unsigned long)mfn_to_virt(i), i,
+                                   epfn - i, __PAGE_HYPERVISOR_RW);
+            if ( ret )
+                goto destroy_directmap;
+        }
     }
 
     old_node_start = node_start_pfn(node);
@@ -1398,22 +1396,18 @@
     }
     else
     {
-        if (node_start_pfn(node) > spfn)
+        if ( node_start_pfn(node) > spfn )
             NODE_DATA(node)->node_start_pfn = spfn;
-        if (node_end_pfn(node) < epfn)
+        if ( node_end_pfn(node) < epfn )
             NODE_DATA(node)->node_spanned_pages = epfn - node_start_pfn(node);
     }
 
-    info.spfn = spfn;
-    info.epfn = epfn;
-    info.cur = spfn;
-
-    ret = extend_frame_table(&info);
+    ret = extend_frame_table(info);
     if ( ret )
         goto restore_node_status;
 
     /* Set max_page as setup_m2p_table will use it*/
-    if (max_page < epfn)
+    if ( max_page < epfn )
     {
         max_page = epfn;
         max_pdx = pfn_to_pdx(max_page - 1) + 1;
@@ -1421,7 +1415,7 @@
 
     total_pages += epfn - spfn;
     set_pdx_range(spfn, epfn);
-    ret = setup_m2p_table(&info);
+    ret = setup_m2p_table(info);
 
     if ( ret )
         goto destroy_m2p;
@@ -1429,11 +1423,12 @@
     if ( iommu_enabled && !iommu_passthrough && !need_iommu(hardware_domain) )
     {
         for ( i = spfn; i < epfn; i++ )
-            if ( iommu_map_page(hardware_domain, i, i, IOMMUF_readable|IOMMUF_writable) )
+            if ( iommu_map_page(hardware_domain, i, i,
+                                IOMMUF_readable|IOMMUF_writable) )
                 break;
         if ( i != epfn )
         {
-            while (i-- > old_max)
+            while ( i-- > old_max )
                 /* If statement to satisfy __must_check. */
                 if ( iommu_unmap_page(hardware_domain, i) )
                     continue;
@@ -1442,14 +1437,10 @@
         }
     }
 
-    /* We can't revert any more */
-    share_hotadd_m2p_table(&info);
-    transfer_pages_to_heap(&info);
-
     return 0;
 
 destroy_m2p:
-    destroy_m2p_mapping(&info);
+    destroy_m2p_mapping(info);
     max_page = old_max;
     total_pages = old_total;
     max_pdx = pfn_to_pdx(max_page - 1) + 1;
@@ -1459,9 +1450,32 @@ restore_node_status:
     node_set_offline(node);
     NODE_DATA(node)->node_start_pfn = old_node_start;
     NODE_DATA(node)->node_spanned_pages = old_node_span;
- destroy_directmap:
-    destroy_xen_mappings((unsigned long)mfn_to_virt(spfn),
-                         (unsigned long)mfn_to_virt(epfn));
+destroy_directmap:
+    if ( direct_map )
+        destroy_xen_mappings((unsigned long)mfn_to_virt(spfn),
+                             (unsigned long)mfn_to_virt(epfn));
+
+    return ret;
+}
+
+/*
+ * A bit paranoid for memory allocation failure issue since
+ * it may be reason for memory add
+ */
+int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
+{
+    struct mem_hotadd_info info = { .spfn = spfn, .epfn = epfn, .cur = spfn };
+    int ret;
+
+    dprintk(XENLOG_INFO, "memory_add %lx ~ %lx with pxm %x\n", spfn, epfn, pxm);
+
+    ret = memory_add_common(&info, pxm, true);
+    if ( !ret )
+    {
+        /* We can't revert any more */
+        share_hotadd_m2p_table(&info);
+        transfer_pages_to_heap(&info);
+    }
 
     return ret;
 }