From patchwork Thu Nov 29 15:53:15 2018
X-Patchwork-Submitter: Wei Yang
X-Patchwork-Id: 10704837
From: Wei Yang <richard.weiyang@gmail.com>
To: mhocko@suse.com, dave.hansen@intel.com, osalvador@suse.de, david@redhat.com
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, Wei Yang
Subject: [PATCH v3 1/2] mm, sparse: drop pgdat_resize_lock in sparse_add/remove_one_section()
Date: Thu, 29 Nov 2018 23:53:15 +0800
Message-Id: <20181129155316.8174-1-richard.weiyang@gmail.com>
X-Mailer: git-send-email 2.15.1
In-Reply-To: <20181128091243.19249-1-richard.weiyang@gmail.com>
References: <20181128091243.19249-1-richard.weiyang@gmail.com>

pgdat_resize_lock is used to protect pgdat's memory region information,
such as node_start_pfn, node_present_pages, and so on. In
sparse_add_one_section() and sparse_remove_one_section(), however, it is
taken to protect the initialization/release of one mem_section, which is
not what this lock is meant for. Even with the lock removed, mem_section
is still free from contention, because both paths are serialized by the
global mem_hotplug_lock. These are the current call traces of
sparse_add/remove_one_section():

    mem_hotplug_begin()
      arch_add_memory()
        add_pages()
          __add_pages()
            __add_section()
              sparse_add_one_section()
    mem_hotplug_done()

    mem_hotplug_begin()
      arch_remove_memory()
        __remove_pages()
          __remove_section()
            sparse_remove_one_section()
    mem_hotplug_done()

The comment above pgdat_resize_lock also claims "Holding this will also
guarantee that any pfn_valid() stays that way."
But the current implementation does not honor this guarantee: no pfn
walker takes the lock, so the claim looks like a relic from the past.
This patch removes that comment as well.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
v3:
  * adjust the changelog with the reason for this change
  * remove the comment above pgdat_resize_lock
  * split the prototype change of sparse_add_one_section() into a
    separate patch

v2:
  * adjust the changelog to show this procedure is serialized by the
    global mem_hotplug_lock
---
 include/linux/mmzone.h | 2 --
 mm/sparse.c            | 9 +--------
 2 files changed, 1 insertion(+), 10 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 1bb749bee284..0a66085d7ced 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -638,8 +638,6 @@ typedef struct pglist_data {
 	/*
 	 * Must be held any time you expect node_start_pfn,
 	 * node_present_pages, node_spanned_pages or nr_zones stay constant.
-	 * Holding this will also guarantee that any pfn_valid() stays that
-	 * way.
 	 *
 	 * pgdat_resize_lock() and pgdat_resize_unlock() are provided to
 	 * manipulate node_size_lock without checking for CONFIG_MEMORY_HOTPLUG
diff --git a/mm/sparse.c b/mm/sparse.c
index 33307fc05c4d..5825f276485f 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -669,7 +669,6 @@ int __meminit sparse_add_one_section(struct pglist_data *pgdat,
 	struct mem_section *ms;
 	struct page *memmap;
 	unsigned long *usemap;
-	unsigned long flags;
 	int ret;
 
 	/*
@@ -689,8 +688,6 @@ int __meminit sparse_add_one_section(struct pglist_data *pgdat,
 		return -ENOMEM;
 	}
 
-	pgdat_resize_lock(pgdat, &flags);
-
 	ms = __pfn_to_section(start_pfn);
 	if (ms->section_mem_map & SECTION_MARKED_PRESENT) {
 		ret = -EEXIST;
@@ -707,7 +704,6 @@ int __meminit sparse_add_one_section(struct pglist_data *pgdat,
 	sparse_init_one_section(ms, section_nr, memmap, usemap);
 
 out:
-	pgdat_resize_unlock(pgdat, &flags);
 	if (ret < 0) {
 		kfree(usemap);
 		__kfree_section_memmap(memmap, altmap);
@@ -769,10 +765,8 @@ void sparse_remove_one_section(struct zone *zone, struct mem_section *ms,
 		unsigned long map_offset, struct vmem_altmap *altmap)
 {
 	struct page *memmap = NULL;
-	unsigned long *usemap = NULL, flags;
-	struct pglist_data *pgdat = zone->zone_pgdat;
+	unsigned long *usemap = NULL;
 
-	pgdat_resize_lock(pgdat, &flags);
 	if (ms->section_mem_map) {
 		usemap = ms->pageblock_flags;
 		memmap = sparse_decode_mem_map(ms->section_mem_map,
@@ -780,7 +774,6 @@ void sparse_remove_one_section(struct zone *zone, struct mem_section *ms,
 		ms->section_mem_map = 0;
 		ms->pageblock_flags = NULL;
 	}
-	pgdat_resize_unlock(pgdat, &flags);
 
 	clear_hwpoisoned_pages(memmap + map_offset,
 			PAGES_PER_SECTION - map_offset);