From patchwork Wed Nov 17 01:20:53 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12623439
From: Pasha Tatashin <pasha.tatashin@soleen.com>
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-m68k@lists.linux-m68k.org, anshuman.khandual@arm.com, willy@infradead.org, akpm@linux-foundation.org, william.kucharski@oracle.com, mike.kravetz@oracle.com, vbabka@suse.cz, geert@linux-m68k.org, schmitzmic@gmail.com, rostedt@goodmis.org, mingo@redhat.com, hannes@cmpxchg.org, guro@fb.com, songmuchun@bytedance.com, weixugc@google.com, gthelen@google.com, rientjes@google.com, pjt@google.com
Subject: [RFC v2 04/10] mm: remove set_page_count() from page_frag_alloc_align
Date: Wed, 17 Nov 2021 01:20:53 +0000
Message-Id: <20211117012059.141450-5-pasha.tatashin@soleen.com>
In-Reply-To: <20211117012059.141450-1-pasha.tatashin@soleen.com>
References: <20211117012059.141450-1-pasha.tatashin@soleen.com>

set_page_count() unconditionally resets the value of _refcount, which is
dangerous because the reset is never verified programmatically; instead,
we rely on comments such as "OK, page count is 0, we can safely set it".

Add a new refcount function, page_ref_add_return(), that returns the new
refcount value after the addition, and use that return value to verify
that _refcount was indeed the expected one.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 11 +++++++++++
 mm/page_alloc.c          |  6 ++++--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index f3c61dc6344a..27880aca2e2f 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -115,6 +115,17 @@ static inline void init_page_count(struct page *page)
 	set_page_count(page, 1);
 }
 
+static inline int page_ref_add_return(struct page *page, int nr)
+{
+	int old_val = atomic_fetch_add(nr, &page->_refcount);
+	int new_val = old_val + nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, nr, new_val);
+	return new_val;
+}
+
 static inline void page_ref_add(struct page *page, int nr)
 {
 	int old_val = atomic_fetch_add(nr, &page->_refcount);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c5952749ad40..e8e88111028a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5516,6 +5516,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 	unsigned int size = PAGE_SIZE;
 	struct page *page;
 	int offset;
+	int refcnt;
 
 	if (unlikely(!nc->va)) {
 refill:
@@ -5554,8 +5555,9 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 		/* if size can vary use size else just use PAGE_SIZE */
 		size = nc->size;
 #endif
-		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		/* page count is 0, set it to PAGE_FRAG_CACHE_MAX_SIZE + 1 */
+		refcnt = page_ref_add_return(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		VM_BUG_ON_PAGE(refcnt != PAGE_FRAG_CACHE_MAX_SIZE + 1, page);
 
 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
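
For readers following along outside the kernel tree, here is a minimal
user-space C11 sketch of the same fetch-add-and-verify pattern that
page_ref_add_return() introduces. The names frag_ref_add_return() and
struct demo_page are hypothetical stand-ins for illustration only;
assert() stands in for VM_BUG_ON_PAGE(), and note that C11's
atomic_fetch_add() takes (object, value), the reverse argument order of
the kernel's atomic_fetch_add(nr, &atomic).

#include <assert.h>
#include <stdatomic.h>
#include <stdio.h>

struct demo_page {
	atomic_int refcount;		/* stands in for page->_refcount */
};

/* Add nr to the refcount and return the new value, as the patch does. */
static int frag_ref_add_return(struct demo_page *page, int nr)
{
	int old_val = atomic_fetch_add(&page->refcount, nr);
	int new_val = old_val + nr;

	/* Same wraparound check as the patch: compare as unsigned. */
	assert(!((unsigned int)new_val < (unsigned int)old_val));
	return new_val;
}

int main(void)
{
	struct demo_page page = { .refcount = 0 };
	const int max_size = 32768;	/* stands in for PAGE_FRAG_CACHE_MAX_SIZE */

	/* Verify the expected refcount instead of blindly setting it. */
	int refcnt = frag_ref_add_return(&page, max_size + 1);
	assert(refcnt == max_size + 1);
	printf("refcount after add: %d\n", refcnt);
	return 0;
}

The point of returning the post-add value rather than setting the count
outright is that an unexpected starting refcount now trips the
verification instead of being silently overwritten.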