From patchwork Mon Mar 29 12:06:48 2021
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 12170013
From: Mel Gorman <mgorman@techsingularity.net>
To: Linux-MM <linux-mm@kvack.org>
Cc: Linux-RT-Users, LKML, Chuck Lever, Jesper Dangaard Brouer, Matthew Wilcox, Mel Gorman
Subject: [PATCH 6/6] mm/page_alloc: Reduce duration that IRQs are disabled for VM counters
Date: Mon, 29 Mar 2021 13:06:48 +0100
Message-Id: <20210329120648.19040-7-mgorman@techsingularity.net>
In-Reply-To: <20210329120648.19040-1-mgorman@techsingularity.net>
References: <20210329120648.19040-1-mgorman@techsingularity.net>

IRQs are left disabled for the zone and node VM event counters. On some
architectures this is unnecessary, and it obscures what the scope of the
locking for the per-cpu lists and VM counters actually is. This patch
reduces the scope over which IRQs are disabled via local_[lock|unlock]
and relies on disabling preemption for the per-cpu counters.
This is not completely free on all architectures: those without
HAVE_CMPXCHG_DOUBLE will disable/enable IRQs again for the
mod_zone_freepage_state call. However, it clarifies what the per-cpu
pages lock protects and that zone stats may need IRQs disabled if they
are ever called from IRQ context.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/page_alloc.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 32c64839c145..25d9351e75d8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3461,11 +3461,17 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 	pcp = this_cpu_ptr(zone->per_cpu_pageset);
 	list = &pcp->lists[migratetype];
 	page = __rmqueue_pcplist(zone, migratetype, alloc_flags, pcp, list);
+	local_unlock_irqrestore(&pagesets.lock, flags);
 	if (page) {
+		/*
+		 * per-cpu counter updates are not preempt-safe but is
+		 * acceptable to race versus interrupts.
+		 */
+		preempt_disable();
 		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1);
 		zone_statistics(preferred_zone, zone, 1);
+		preempt_enable();
 	}
-	local_unlock_irqrestore(&pagesets.lock, flags);
 	return page;
 }
 
@@ -3517,15 +3523,17 @@ struct page *rmqueue(struct zone *preferred_zone,
 		if (!page)
 			page = __rmqueue(zone, order, migratetype, alloc_flags);
 	} while (page && check_new_pages(page, order));
-	spin_unlock(&zone->lock);
+	spin_unlock_irqrestore(&zone->lock, flags);
+
 	if (!page)
 		goto failed;
+
+	preempt_disable();
 	__mod_zone_freepage_state(zone, -(1 << order),
 				  get_pcppage_migratetype(page));
-
 	__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
 	zone_statistics(preferred_zone, zone, 1);
-	local_irq_restore(flags);
+	preempt_enable();
 
 out:
 	/* Separate test+clear to avoid unnecessary atomics */
@@ -5090,10 +5098,12 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 		nr_populated++;
 	}
 
+	local_unlock_irqrestore(&pagesets.lock, flags);
+
+	preempt_disable();
 	__count_zid_vm_events(PGALLOC, zone_idx(zone), nr_account);
 	zone_statistics(ac.preferred_zoneref->zone, zone, nr_account);
-
-	local_unlock_irqrestore(&pagesets.lock, flags);
+	preempt_enable();
 
 	return nr_populated;