From patchwork Fri Jul 8 14:44:06 2022
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 12911246
Date: Fri, 8 Jul 2022 15:44:06 +0100
From: Mel Gorman
To: Andrew Morton
Cc: Nicolas Saenz Julienne, Marcelo Tosatti, Vlastimil Babka, Michal Hocko,
 Hugh Dickins, Yu Zhao, Marek Szyprowski, LKML, Linux-MM
Subject: [PATCH] mm/page_alloc: replace local_lock with normal spinlock -fix -fix
Message-ID: <20220708144406.GJ27531@techsingularity.net>

pcpu_spin_unlock and pcpu_spin_unlock_irqrestore both unlock pcp->lock
and then enable preemption. This lacks symmetry with the pcpu_spin lock
helpers and differs from how local_unlock_* is implemented. While this
is harmless, it is unnecessary, and it is generally better to unwind
locks and preemption state in the reverse of the order in which they
were acquired.

This is a fix on top of the mm-unstable patch
mm-page_alloc-replace-local_lock-with-normal-spinlock-fix.patch.

Signed-off-by: Mel Gorman
---
 mm/page_alloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 934d1b5a5449..d0141e51e613 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -192,14 +192,14 @@ static DEFINE_MUTEX(pcp_batch_high_lock);
 
 #define pcpu_spin_unlock(member, ptr)					\
 ({									\
-	spin_unlock(&ptr->member);					\
 	pcpu_task_unpin();						\
+	spin_unlock(&ptr->member);					\
 })
 
 #define pcpu_spin_unlock_irqrestore(member, ptr, flags)			\
 ({									\
-	spin_unlock_irqrestore(&ptr->member, flags);			\
 	pcpu_task_unpin();						\
+	spin_unlock_irqrestore(&ptr->member, flags);			\
 })
 
 /* struct per_cpu_pages specific helpers. */
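
For reference, this is roughly how the two unlock helpers read once the
hunk above is applied. It is a reconstruction from the diff rather than a
copy from the tree, so the line-continuation alignment is approximate:

/*
 * Reconstructed from the hunk above: with this fix applied, both helpers
 * call pcpu_task_unpin() before dropping pcp->lock, rather than after.
 */
#define pcpu_spin_unlock(member, ptr)					\
({									\
	pcpu_task_unpin();						\
	spin_unlock(&ptr->member);					\
})

#define pcpu_spin_unlock_irqrestore(member, ptr, flags)			\
({									\
	pcpu_task_unpin();						\
	spin_unlock_irqrestore(&ptr->member, flags);			\
})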