From patchwork Fri Aug 4 17:05:41 2017
X-Patchwork-Submitter: Boris Ostrovsky
X-Patchwork-Id: 9881743
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Cc: sstabellini@kernel.org, wei.liu2@citrix.com, George.Dunlap@eu.citrix.com, andrew.cooper3@citrix.com, ian.jackson@eu.citrix.com, tim@xen.org, jbeulich@suse.com
Date: Fri, 4 Aug 2017 13:05:41 -0400
Message-Id: <1501866346-9774-4-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1501866346-9774-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1501866346-9774-1-git-send-email-boris.ostrovsky@oracle.com>
Subject: [Xen-devel] [PATCH v6 3/8] mm: Scrub pages in alloc_heap_pages() if needed

When allocating pages in alloc_heap_pages(), first look for clean pages.
If none are found, retry, this time taking pages marked as unscrubbed and
scrubbing them.

Note that we should not find unscrubbed pages in alloc_heap_pages() yet.
However, this will become possible once we stop scrubbing from
free_heap_pages() and instead do it from the idle loop.

Since not all allocations require clean pages (e.g. xenheap allocations),
introduce a MEMF_no_scrub flag that callers can set if they are willing
to consume unscrubbed pages.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes in v6:
* Dropped unnecessary need_scrub.
 xen/common/page_alloc.c | 33 +++++++++++++++++++++++++++++----
 xen/include/xen/mm.h    |  4 +++-
 2 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 6d7422d..eedff2d 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -706,6 +706,7 @@ static struct page_info *get_free_buddy(unsigned int zone_lo,
     nodemask_t nodemask = d ? d->node_affinity : node_online_map;
     unsigned int j, zone, nodemask_retry = 0;
     struct page_info *pg;
+    bool use_unscrubbed = (memflags & MEMF_no_scrub);
 
     if ( node == NUMA_NO_NODE )
     {
@@ -737,8 +738,20 @@ static struct page_info *get_free_buddy(unsigned int zone_lo,
 
             /* Find smallest order which can satisfy the request. */
             for ( j = order; j <= MAX_ORDER; j++ )
+            {
                 if ( (pg = page_list_remove_head(&heap(node, zone, j))) )
-                    return pg;
+                {
+                    /*
+                     * We grab single pages (order=0) even if they are
+                     * unscrubbed. Given that scrubbing one page is fairly quick
+                     * it is not worth breaking higher orders.
+                     */
+                    if ( (order == 0) || use_unscrubbed ||
+                         pg->u.free.first_dirty == INVALID_DIRTY_IDX)
+                        return pg;
+
+                    page_list_add_tail(pg, &heap(node, zone, j));
+                }
+            }
         } while ( zone-- > zone_lo ); /* careful: unsigned zone may wrap */
 
         if ( (memflags & MEMF_exact_node) && req_node != NUMA_NO_NODE )
@@ -822,6 +835,10 @@ static struct page_info *alloc_heap_pages(
     }
 
     pg = get_free_buddy(zone_lo, zone_hi, order, memflags, d);
+    /* Try getting a dirty buddy if we couldn't get a clean one. */
+    if ( !pg && !(memflags & MEMF_no_scrub) )
+        pg = get_free_buddy(zone_lo, zone_hi, order,
+                            memflags | MEMF_no_scrub, d);
     if ( !pg )
     {
         /* No suitable memory blocks. Fail the request. */
@@ -867,7 +884,15 @@ static struct page_info *alloc_heap_pages(
     for ( i = 0; i < (1 << order); i++ )
     {
         /* Reference count must continuously be zero for free pages. */
-        BUG_ON(pg[i].count_info != PGC_state_free);
+        BUG_ON((pg[i].count_info & ~PGC_need_scrub) != PGC_state_free);
+
+        if ( test_bit(_PGC_need_scrub, &pg[i].count_info) )
+        {
+            if ( !(memflags & MEMF_no_scrub) )
+                scrub_one_page(&pg[i]);
+            node_need_scrub[node]--;
+        }
+
         pg[i].count_info = PGC_state_inuse;
 
         if ( !(memflags & MEMF_no_tlbflush) )
@@ -1751,7 +1776,7 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
     ASSERT(!in_irq());
 
     pg = alloc_heap_pages(MEMZONE_XEN, MEMZONE_XEN,
-                          order, memflags, NULL);
+                          order, memflags | MEMF_no_scrub, NULL);
     if ( unlikely(pg == NULL) )
         return NULL;
@@ -1801,7 +1826,7 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
     if ( !(memflags >> _MEMF_bits) )
         memflags |= MEMF_bits(xenheap_bits);
 
-    pg = alloc_domheap_pages(NULL, order, memflags);
+    pg = alloc_domheap_pages(NULL, order, memflags | MEMF_no_scrub);
     if ( unlikely(pg == NULL) )
         return NULL;

diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 503b92e..e1f9c42 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -248,7 +248,9 @@ struct npfec {
 #define  MEMF_no_tlbflush (1U<<_MEMF_no_tlbflush)
 #define _MEMF_no_icache_flush 7
 #define  MEMF_no_icache_flush (1U<<_MEMF_no_icache_flush)
-#define _MEMF_node        8
+#define _MEMF_no_scrub    8
+#define  MEMF_no_scrub    (1U<<_MEMF_no_scrub)
+#define _MEMF_node        16
 #define  MEMF_node_mask ((1U << (8 * sizeof(nodeid_t))) - 1)
 #define  MEMF_node(n)     ((((n) + 1) & MEMF_node_mask) << _MEMF_node)
 #define  MEMF_get_node(f) ((((f) >> _MEMF_node) - 1) & MEMF_node_mask)