From patchwork Thu Jul 5 06:49:23 2018
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 10508241
Subject: [PATCH 04/13] mm: Multithread ZONE_DEVICE initialization
From: Dan Williams
To: akpm@linux-foundation.org
Cc: Michal Hocko, Vlastimil Babka,
 vishal.l.verma@intel.com, hch@lst.de, linux-nvdimm@lists.01.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 04 Jul 2018 23:49:23 -0700
Message-ID: <153077336359.40830.13007326947037437465.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <153077334130.40830.2714147692560185329.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <153077334130.40830.2714147692560185329.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-2-gc94f

On large / multi-socket persistent memory systems it can potentially take
minutes to initialize the memmap. Even though such systems have multiple
persistent memory namespaces that are registered asynchronously, they
serialize on the mem_hotplug_begin() lock.

The method for hiding memmap initialization in the typical memory case
cannot be directly reused for persistent memory. In the typical /
volatile memory case pages are background freed to the memory allocator
as they become initialized. For persistent memory the aim is to push
everything to the background, but since it is dax mapped there is no way
to redirect applications to limit their usage to the initialized set.
I.e. any address may be directly accessed at any time.

The bulk of the work is memmap_init_zone(). Splitting the work into
threads yields a 1.5x to 2x improvement in the time to initialize a
128GB namespace. However, the work is still serialized when there are
multiple namespaces and the work is ultimately limited by memory-media
write bandwidth. So, this commit is only a preparation step towards
ultimately moving all memmap initialization completely into the
background.
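The partitioning the patch performs — an equal step per thread, with the
division remainder folded into the last thread's range — can be sketched in
userspace C. This is an illustration only; split_pfn_range() and struct chunk
are hypothetical names, not kernel API:

```c
#include <assert.h>

/*
 * Userspace sketch of the pfn-range split: the range is divided into
 * NR_MEMMAP_THREADS equal steps and the division remainder is folded
 * into the last thread's range, so the chunks exactly tile
 * [start_pfn, end_pfn).
 */
#define NR_MEMMAP_THREADS 16

struct chunk {
	unsigned long start_pfn;
	unsigned long end_pfn;	/* exclusive, like end_pfn in memmap_init_zone() */
};

static void split_pfn_range(unsigned long start_pfn, unsigned long end_pfn,
			    struct chunk chunks[NR_MEMMAP_THREADS])
{
	unsigned long size = end_pfn - start_pfn;
	unsigned long step = size / NR_MEMMAP_THREADS;
	unsigned long rem = size % NR_MEMMAP_THREADS;
	int i;

	for (i = 0; i < NR_MEMMAP_THREADS; i++) {
		chunks[i].start_pfn = start_pfn;
		chunks[i].end_pfn = start_pfn + step;
		if (i == NR_MEMMAP_THREADS - 1)
			chunks[i].end_pfn += rem; /* last thread absorbs the remainder */
		start_pfn += step;
	}
}
```

Splitting 1000 pages this way gives fifteen 62-page chunks and a final 70-page
chunk, mirroring how the patch biases the leftover work onto the last
async_schedule_domain() invocation rather than redistributing it.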
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Vlastimil Babka
Signed-off-by: Dan Williams
---
 include/linux/memmap_async.h |   17 +++++
 mm/page_alloc.c              |  145 ++++++++++++++++++++++++++++--------------
 2 files changed, 113 insertions(+), 49 deletions(-)

diff --git a/include/linux/memmap_async.h b/include/linux/memmap_async.h
index 11aa9f3a523e..d2011681a910 100644
--- a/include/linux/memmap_async.h
+++ b/include/linux/memmap_async.h
@@ -2,12 +2,24 @@
 #ifndef __LINUX_MEMMAP_ASYNC_H
 #define __LINUX_MEMMAP_ASYNC_H
 #include <linux/async.h>
+#include <linux/ioport.h>
 
+struct dev_pagemap;
 struct vmem_altmap;
 
+/*
+ * Regardless of how many threads we request here the workqueue core may
+ * limit based on the amount of other concurrent 'async' work in the
+ * system, see WQ_MAX_ACTIVE
+ */
+#define NR_MEMMAP_THREADS 16
+
 struct memmap_init_env {
 	struct vmem_altmap *altmap;
+	struct dev_pagemap *pgmap;
 	bool want_memblock;
+	unsigned long zone;
+	int context;
 	int nid;
 };
 
@@ -19,6 +31,11 @@ struct memmap_init_memmap {
 	int result;
 };
 
+struct memmap_init_pages {
+	struct resource res;
+	struct memmap_init_env *env;
+};
+
 struct memmap_async_state {
 	struct memmap_init_env env;
 	struct memmap_init_memmap memmap;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fb45cfeb4a50..6d0ed17cf305 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include <linux/memmap_async.h>
 #include
 #include
 #include
@@ -5455,6 +5456,68 @@ void __ref build_all_zonelists(pg_data_t *pgdat)
 
 ASYNC_DOMAIN_EXCLUSIVE(memmap_init_domain);
 
+static void __meminit memmap_init_one(unsigned long pfn, unsigned long zone,
+		int nid, enum memmap_context context, struct dev_pagemap *pgmap)
+{
+	struct page *page = pfn_to_page(pfn);
+
+	__init_single_page(page, pfn, zone, nid);
+	if (context == MEMMAP_HOTPLUG)
+		SetPageReserved(page);
+
+	/*
+	 * Mark the block movable so that blocks are reserved for
+	 * movable at startup. This will force kernel allocations to
+	 * reserve their blocks rather than leaking throughout the
+	 * address space during boot when many long-lived kernel
+	 * allocations are made.
+	 *
+	 * bitmap is created for zone's valid pfn range. but memmap can
+	 * be created for invalid pages (for alignment) check here not
+	 * to call set_pageblock_migratetype() against pfn out of zone.
+	 *
+	 * Please note that MEMMAP_HOTPLUG path doesn't clear memmap
+	 * because this is done early in sparse_add_one_section
+	 */
+	if (!(pfn & (pageblock_nr_pages - 1))) {
+		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
+		cond_resched();
+	}
+
+	if (is_zone_device_page(page)) {
+		struct vmem_altmap *altmap;
+
+		if (WARN_ON_ONCE(!pgmap))
+			return;
+		altmap = &pgmap->altmap;
+
+		/* skip invalid device pages */
+		if (pgmap->altmap_valid && (pfn < (altmap->base_pfn
+						+ vmem_altmap_offset(altmap))))
+			return;
+		/*
+		 * ZONE_DEVICE pages union ->lru with a ->pgmap back
+		 * pointer. It is a bug if a ZONE_DEVICE page is ever
+		 * freed or placed on a driver-private list. Seed the
+		 * storage with poison.
+		 */
+		page->lru.prev = LIST_POISON2;
+		page->pgmap = pgmap;
+		percpu_ref_get(pgmap->ref);
+	}
+}
+
+static void __ref memmap_init_async(void *data, async_cookie_t cookie)
+{
+	struct memmap_init_pages *args = data;
+	struct memmap_init_env *env = args->env;
+	struct resource *res = &args->res;
+	unsigned long pfn;
+
+	for (pfn = PHYS_PFN(res->start); pfn < PHYS_PFN(res->end+1); pfn++)
+		memmap_init_one(pfn, env->zone, env->nid, env->context,
+				env->pgmap);
+}
+
 /*
  * Initially all pages are reserved - free ones are freed
  * up by free_all_bootmem() once the early boot process is
@@ -5469,7 +5532,6 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 	struct vmem_altmap *altmap = NULL;
 	unsigned long pfn;
 	unsigned long nr_initialised = 0;
-	struct page *page;
 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 	struct memblock_region *r = NULL, *tmp;
 #endif
@@ -5486,14 +5548,43 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 	if (altmap && start_pfn == altmap->base_pfn)
 		start_pfn += altmap->reserve;
 
-	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
+	if (context != MEMMAP_EARLY) {
 		/*
 		 * There can be holes in boot-time mem_map[]s handed to this
 		 * function. They do not exist on hotplugged memory.
 		 */
-		if (context != MEMMAP_EARLY)
-			goto not_early;
+		ASYNC_DOMAIN_EXCLUSIVE(local);
+		struct memmap_init_pages args[NR_MEMMAP_THREADS];
+		struct memmap_init_env env = {
+			.nid = nid,
+			.zone = zone,
+			.pgmap = pgmap,
+			.context = context,
+		};
+		unsigned long step, rem;
+		int i;
+
+		size = end_pfn - start_pfn;
+		step = size / NR_MEMMAP_THREADS;
+		rem = size % NR_MEMMAP_THREADS;
+		for (i = 0; i < NR_MEMMAP_THREADS; i++) {
+			struct memmap_init_pages *t = &args[i];
+
+			t->env = &env;
+			t->res.start = PFN_PHYS(start_pfn);
+			t->res.end = PFN_PHYS(start_pfn + step) - 1;
+			if (i == NR_MEMMAP_THREADS-1)
+				t->res.end += PFN_PHYS(rem);
+
+			async_schedule_domain(memmap_init_async, t, &local);
+
+			start_pfn += step;
+		}
+		async_synchronize_full_domain(&local);
+		return;
+	}
+
+	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
 		if (!early_pfn_valid(pfn))
 			continue;
 		if (!early_pfn_in_nid(pfn, nid))
@@ -5522,51 +5613,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 			}
 		}
 #endif
-
-not_early:
-		page = pfn_to_page(pfn);
-		__init_single_page(page, pfn, zone, nid);
-		if (context == MEMMAP_HOTPLUG)
-			SetPageReserved(page);
-
-		/*
-		 * Mark the block movable so that blocks are reserved for
-		 * movable at startup. This will force kernel allocations
-		 * to reserve their blocks rather than leaking throughout
-		 * the address space during boot when many long-lived
-		 * kernel allocations are made.
-		 *
-		 * bitmap is created for zone's valid pfn range. but memmap
-		 * can be created for invalid pages (for alignment)
-		 * check here not to call set_pageblock_migratetype() against
-		 * pfn out of zone.
-		 *
-		 * Please note that MEMMAP_HOTPLUG path doesn't clear memmap
-		 * because this is done early in sparse_add_one_section
-		 */
-		if (!(pfn & (pageblock_nr_pages - 1))) {
-			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-			cond_resched();
-		}
-
-		if (is_zone_device_page(page)) {
-			if (WARN_ON_ONCE(!pgmap))
-				continue;
-
-			/* skip invalid device pages */
-			if (altmap && (pfn < (altmap->base_pfn
-					+ vmem_altmap_offset(altmap))))
-				continue;
-			/*
-			 * ZONE_DEVICE pages union ->lru with a ->pgmap back
-			 * pointer. It is a bug if a ZONE_DEVICE page is ever
-			 * freed or placed on a driver-private list. Seed the
-			 * storage with poison.
-			 */
-			page->lru.prev = LIST_POISON2;
-			page->pgmap = pgmap;
-			percpu_ref_get(pgmap->ref);
-		}
+		memmap_init_one(pfn, zone, nid, context, NULL);
 	}
 }
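The ->lru/->pgmap aliasing that memmap_init_one() seeds with poison can be
illustrated with a small userspace sketch. All types below are simplified,
hypothetical stand-ins — the real struct page layout differs:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Simplified sketch of the aliasing: ZONE_DEVICE pages reuse the lru
 * list_head storage for a dev_pagemap back pointer, and the unused
 * lru.prev word is seeded with LIST_POISON2 so that any accidental
 * list manipulation of such a page faults loudly.
 */
#define LIST_POISON2 ((void *)0x200)

struct list_head {
	struct list_head *next, *prev;
};

struct dev_pagemap {
	int dummy;	/* placeholder; real struct carries refs, altmap, ... */
};

struct page_sketch {
	union {
		struct list_head lru;
		struct {
			struct dev_pagemap *pgmap;	/* overlays lru.next */
			void *poison;			/* overlays lru.prev */
		};
	};
};

static void seed_zone_device_page(struct page_sketch *page,
				  struct dev_pagemap *pgmap)
{
	page->lru.prev = LIST_POISON2;	/* trap accidental list use */
	page->pgmap = pgmap;		/* back pointer shares lru storage */
}
```

Because the back pointer occupies the same words as the list_head, a page can
carry either list membership or a pgmap reference, never both — which is why
freeing or list-linking a ZONE_DEVICE page is flagged as a bug.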