From patchwork Thu Nov 21 16:28:24 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13882134
Date: Thu, 21 Nov 2024 08:28:24 -0800
Message-ID: <20241121162826.987947-1-surenb@google.com>
Subject: [PATCH v2 1/3] seqlock: add raw_seqcount_try_begin
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, andrii@kernel.org, jannh@google.com, Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com, vbabka@suse.cz, mhocko@kernel.org, shakeel.butt@linux.dev, hannes@cmpxchg.org, david@redhat.com, willy@infradead.org, brauner@kernel.org, oleg@redhat.com, arnd@arndb.de, richard.weiyang@gmail.com, zhangpeng.00@bytedance.com, linmiaohe@huawei.com, viro@zeniv.linux.org.uk, hca@linux.ibm.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, surenb@google.com
Add raw_seqcount_try_begin() to open a read critical section of the given
seqcount_t if the counter is even. This enables eliding the critical
section entirely if the counter is odd, instead of doing the speculation
knowing it will fail.

Suggested-by: Peter Zijlstra
Signed-off-by: Suren Baghdasaryan
Reviewed-by: David Hildenbrand
Reviewed-by: Liam R. Howlett
---
Applies over Linus' ToT

 include/linux/seqlock.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

base-commit: 43fb83c17ba2d63dfb798f0be7453ed55ca3f9c2

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 5298765d6ca4..22c2c48b4265 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -318,6 +318,28 @@ SEQCOUNT_LOCKNAME(mutex, struct mutex, true, mutex)
 	__seq;								\
 })
 
+/**
+ * raw_seqcount_try_begin() - begin a seqcount_t read critical section
+ *                            w/o lockdep and w/o counter stabilization
+ * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
+ *
+ * Similar to raw_seqcount_begin(), except it enables eliding the critical
+ * section entirely if odd, instead of doing the speculation knowing it will
+ * fail.
+ *
+ * Useful when counter stabilization is more or less equivalent to taking
+ * the lock and there is a slowpath that does that.
+ *
+ * If true, start will be set to the (even) sequence count read.
+ *
+ * Return: true when a read critical section is started.
+ */
+#define raw_seqcount_try_begin(s, start)				\
+({									\
+	start = raw_read_seqcount(s);					\
+	!(start & 1);							\
+})
+
 /**
  * raw_seqcount_begin() - begin a seqcount_t read critical section w/o
  * lockdep and w/o counter stabilization
From patchwork Thu Nov 21 16:28:25 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13882135
Date: Thu, 21 Nov 2024 08:28:25 -0800
In-Reply-To: <20241121162826.987947-1-surenb@google.com>
References: <20241121162826.987947-1-surenb@google.com>
Message-ID: <20241121162826.987947-2-surenb@google.com>
Subject: [PATCH v2 2/3] mm: convert mm_lock_seq to a proper seqcount
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, andrii@kernel.org, jannh@google.com, Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com, vbabka@suse.cz, mhocko@kernel.org, shakeel.butt@linux.dev, hannes@cmpxchg.org, david@redhat.com, willy@infradead.org, brauner@kernel.org, oleg@redhat.com, arnd@arndb.de, richard.weiyang@gmail.com, zhangpeng.00@bytedance.com, linmiaohe@huawei.com, viro@zeniv.linux.org.uk, hca@linux.ibm.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, surenb@google.com
Convert mm_lock_seq to be seqcount_t and change all mmap_write_lock
variants to increment it, in line with the usual seqcount usage pattern.
This lets us check whether the mmap_lock is write-locked by checking the
mm_lock_seq.sequence counter (odd=locked, even=unlocked). This will be
used when implementing mmap_lock speculation functions.
As a result, vm_lock_seq is also changed to be unsigned to match the type
of mm_lock_seq.sequence.

Suggested-by: Peter Zijlstra
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Liam R. Howlett
---
Changes since v1 [1]
- Added ASSERT_EXCLUSIVE_WRITER() instead of a comment in
vma_end_write_all, per Peter Zijlstra

[1] https://lore.kernel.org/all/20241024205231.1944747-1-surenb@google.com/

 include/linux/mm.h               | 12 +++----
 include/linux/mm_types.h         |  7 ++--
 include/linux/mmap_lock.h        | 55 +++++++++++++++++++++-----------
 kernel/fork.c                    |  5 +--
 mm/init-mm.c                     |  2 +-
 tools/testing/vma/vma.c          |  4 +--
 tools/testing/vma/vma_internal.h |  4 +--
 7 files changed, 53 insertions(+), 36 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index feb5c8021bef..e6de22738ee1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -710,7 +710,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
	 * we don't rely on for anything - the mm_lock_seq read against which we
	 * need ordering is below.
	 */
-	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq))
+	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
		return false;

	if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
@@ -727,7 +727,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
	 * after it has been unlocked.
	 * This pairs with RELEASE semantics in vma_end_write_all().
	 */
-	if (unlikely(vma->vm_lock_seq == smp_load_acquire(&vma->vm_mm->mm_lock_seq))) {
+	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
		up_read(&vma->vm_lock->lock);
		return false;
	}
@@ -742,7 +742,7 @@ static inline void vma_end_read(struct vm_area_struct *vma)
 }

 /* WARNING! Can only be used if mmap_lock is expected to be write-locked */
-static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
+static bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_lock_seq)
 {
	mmap_assert_write_locked(vma->vm_mm);

@@ -750,7 +750,7 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
	 * current task is holding mmap_write_lock, both vma->vm_lock_seq and
	 * mm->mm_lock_seq can't be concurrently modified.
	 */
-	*mm_lock_seq = vma->vm_mm->mm_lock_seq;
+	*mm_lock_seq = vma->vm_mm->mm_lock_seq.sequence;
	return (vma->vm_lock_seq == *mm_lock_seq);
 }

@@ -761,7 +761,7 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
  */
 static inline void vma_start_write(struct vm_area_struct *vma)
 {
-	int mm_lock_seq;
+	unsigned int mm_lock_seq;

	if (__is_vma_write_locked(vma, &mm_lock_seq))
		return;
@@ -779,7 +779,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)

 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 {
-	int mm_lock_seq;
+	unsigned int mm_lock_seq;

	VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
 }
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 381d22eba088..ac72888a54b8 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -715,7 +715,7 @@ struct vm_area_struct {
	 * counter reuse can only lead to occasional unnecessary use of the
	 * slowpath.
	 */
-	int vm_lock_seq;
+	unsigned int vm_lock_seq;
	/* Unstable RCU readers are allowed to read this. */
	struct vma_lock *vm_lock;
 #endif
@@ -909,6 +909,9 @@ struct mm_struct {
		 * Roughly speaking, incrementing the sequence number is
		 * equivalent to releasing locks on VMAs; reading the sequence
		 * number can be part of taking a read lock on a VMA.
+		 * Incremented every time mmap_lock is write-locked/unlocked.
+		 * Initialized to 0, therefore odd values indicate mmap_lock
+		 * is write-locked and even values that it's released.
		 *
		 * Can be modified under write mmap_lock using RELEASE
		 * semantics.
@@ -917,7 +920,7 @@ struct mm_struct {
		 * Can be read with ACQUIRE semantics if not holding write
		 * mmap_lock.
		 */
-		int mm_lock_seq;
+		seqcount_t mm_lock_seq;
 #endif
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index de9dc20b01ba..083b7fa2588e 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -71,39 +71,38 @@ static inline void mmap_assert_write_locked(const struct mm_struct *mm)
 }

 #ifdef CONFIG_PER_VMA_LOCK
-/*
- * Drop all currently-held per-VMA locks.
- * This is called from the mmap_lock implementation directly before releasing
- * a write-locked mmap_lock (or downgrading it to read-locked).
- * This should normally NOT be called manually from other places.
- * If you want to call this manually anyway, keep in mind that this will release
- * *all* VMA write locks, including ones from further up the stack.
- */
-static inline void vma_end_write_all(struct mm_struct *mm)
+static inline void mm_lock_seqcount_init(struct mm_struct *mm)
 {
-	mmap_assert_write_locked(mm);
-	/*
-	 * Nobody can concurrently modify mm->mm_lock_seq due to exclusive
-	 * mmap_lock being held.
-	 * We need RELEASE semantics here to ensure that preceding stores into
-	 * the VMA take effect before we unlock it with this store.
-	 * Pairs with ACQUIRE semantics in vma_start_read().
-	 */
-	smp_store_release(&mm->mm_lock_seq, mm->mm_lock_seq + 1);
+	seqcount_init(&mm->mm_lock_seq);
+}
+
+static inline void mm_lock_seqcount_begin(struct mm_struct *mm)
+{
+	do_raw_write_seqcount_begin(&mm->mm_lock_seq);
+}
+
+static inline void mm_lock_seqcount_end(struct mm_struct *mm)
+{
+	do_raw_write_seqcount_end(&mm->mm_lock_seq);
 }
+
 #else
-static inline void vma_end_write_all(struct mm_struct *mm) {}
+static inline void mm_lock_seqcount_init(struct mm_struct *mm) {}
+static inline void mm_lock_seqcount_begin(struct mm_struct *mm) {}
+static inline void mm_lock_seqcount_end(struct mm_struct *mm) {}
 #endif

 static inline void mmap_init_lock(struct mm_struct *mm)
 {
	init_rwsem(&mm->mmap_lock);
+	mm_lock_seqcount_init(mm);
 }

 static inline void mmap_write_lock(struct mm_struct *mm)
 {
	__mmap_lock_trace_start_locking(mm, true);
	down_write(&mm->mmap_lock);
+	mm_lock_seqcount_begin(mm);
	__mmap_lock_trace_acquire_returned(mm, true, true);
 }

@@ -111,6 +110,7 @@ static inline void mmap_write_lock_nested(struct mm_struct *mm, int subclass)
 {
	__mmap_lock_trace_start_locking(mm, true);
	down_write_nested(&mm->mmap_lock, subclass);
+	mm_lock_seqcount_begin(mm);
	__mmap_lock_trace_acquire_returned(mm, true, true);
 }

@@ -120,10 +120,27 @@ static inline int mmap_write_lock_killable(struct mm_struct *mm)

	__mmap_lock_trace_start_locking(mm, true);
	ret = down_write_killable(&mm->mmap_lock);
+	if (!ret)
+		mm_lock_seqcount_begin(mm);
	__mmap_lock_trace_acquire_returned(mm, true, ret == 0);
	return ret;
 }

+/*
+ * Drop all currently-held per-VMA locks.
+ * This is called from the mmap_lock implementation directly before releasing
+ * a write-locked mmap_lock (or downgrading it to read-locked).
+ * This should normally NOT be called manually from other places.
+ * If you want to call this manually anyway, keep in mind that this will release
+ * *all* VMA write locks, including ones from further up the stack.
+ */
+static inline void vma_end_write_all(struct mm_struct *mm)
+{
+	mmap_assert_write_locked(mm);
+	ASSERT_EXCLUSIVE_WRITER(mm->mm_lock_seq);
+	mm_lock_seqcount_end(mm);
+}
+
 static inline void mmap_write_unlock(struct mm_struct *mm)
 {
	__mmap_lock_trace_released(mm, true);
diff --git a/kernel/fork.c b/kernel/fork.c
index e58d27c05788..8cd36645b9fc 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -449,7 +449,7 @@ static bool vma_lock_alloc(struct vm_area_struct *vma)
		return false;

	init_rwsem(&vma->vm_lock->lock);
-	vma->vm_lock_seq = -1;
+	vma->vm_lock_seq = UINT_MAX;
	return true;
 }

@@ -1262,9 +1262,6 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
	seqcount_init(&mm->write_protect_seq);
	mmap_init_lock(mm);
	INIT_LIST_HEAD(&mm->mmlist);
-#ifdef CONFIG_PER_VMA_LOCK
-	mm->mm_lock_seq = 0;
-#endif
	mm_pgtables_bytes_init(mm);
	mm->map_count = 0;
	mm->locked_vm = 0;
diff --git a/mm/init-mm.c b/mm/init-mm.c
index 24c809379274..6af3ad675930 100644
--- a/mm/init-mm.c
+++ b/mm/init-mm.c
@@ -40,7 +40,7 @@ struct mm_struct init_mm = {
	.arg_lock	= __SPIN_LOCK_UNLOCKED(init_mm.arg_lock),
	.mmlist		= LIST_HEAD_INIT(init_mm.mmlist),
 #ifdef CONFIG_PER_VMA_LOCK
-	.mm_lock_seq	= 0,
+	.mm_lock_seq	= SEQCNT_ZERO(init_mm.mm_lock_seq),
 #endif
	.user_ns	= &init_user_ns,
	.cpu_bitmap	= CPU_BITS_NONE,
diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c
index b33b47342d41..9074aaced9c5 100644
--- a/tools/testing/vma/vma.c
+++ b/tools/testing/vma/vma.c
@@ -87,7 +87,7 @@ static struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
	 * begun. Linking to the tree will have caused this to be incremented,
	 * which means we will get a false positive otherwise.
	 */
-	vma->vm_lock_seq = -1;
+	vma->vm_lock_seq = UINT_MAX;

	return vma;
 }
@@ -212,7 +212,7 @@ static bool vma_write_started(struct vm_area_struct *vma)
	int seq = vma->vm_lock_seq;

	/* We reset after each check. */
-	vma->vm_lock_seq = -1;
+	vma->vm_lock_seq = UINT_MAX;

	/* The vma_start_write() stub simply increments this value. */
	return seq > -1;
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index c5b9da034511..4007ec580f85 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -231,7 +231,7 @@ struct vm_area_struct {
	 * counter reuse can only lead to occasional unnecessary use of the
	 * slowpath.
	 */
-	int vm_lock_seq;
+	unsigned int vm_lock_seq;
	struct vma_lock *vm_lock;
 #endif

@@ -406,7 +406,7 @@ static inline bool vma_lock_alloc(struct vm_area_struct *vma)
		return false;

	init_rwsem(&vma->vm_lock->lock);
-	vma->vm_lock_seq = -1;
+	vma->vm_lock_seq = UINT_MAX;
	return true;
 }
82810631694.20.10873C3 Received: from mail-yb1-f202.google.com (mail-yb1-f202.google.com [209.85.219.202]) by imf03.hostedemail.com (Postfix) with ESMTP id 70DBB20008 for ; Thu, 21 Nov 2024 16:28:09 +0000 (UTC) Authentication-Results: imf03.hostedemail.com; dkim=pass header.d=google.com header.s=20230601 header.b=A0n2Ud+1; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf03.hostedemail.com: domain of 3sF8_ZwYKCK0fheRaOTbbTYR.PbZYVahk-ZZXiNPX.beT@flex--surenb.bounces.google.com designates 209.85.219.202 as permitted sender) smtp.mailfrom=3sF8_ZwYKCK0fheRaOTbbTYR.PbZYVahk-ZZXiNPX.beT@flex--surenb.bounces.google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1732206446; a=rsa-sha256; cv=none; b=EoonnGq3919pTbv8P+lKxDJQLy96wlFmPsTOeXI5SEq336C+tc4uTA7ZlGNsEr5FCwRd7v ZVwtbuq/TD0v31eUe7+gYCw2qI9HPxIMnm78kAENpLGQ4gDgT36HimDZ/UfHE3hZXC9LX/ mhjc7gZnMKcIoHZcbDo8FaKCjLjqBBA= ARC-Authentication-Results: i=1; imf03.hostedemail.com; dkim=pass header.d=google.com header.s=20230601 header.b=A0n2Ud+1; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf03.hostedemail.com: domain of 3sF8_ZwYKCK0fheRaOTbbTYR.PbZYVahk-ZZXiNPX.beT@flex--surenb.bounces.google.com designates 209.85.219.202 as permitted sender) smtp.mailfrom=3sF8_ZwYKCK0fheRaOTbbTYR.PbZYVahk-ZZXiNPX.beT@flex--surenb.bounces.google.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1732206446; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=aR8IuZPS55VJxAfGr69rX+AU9gT7VgeBh1eRZ8ctshE=; b=TlMUyk85cNPN8vzi3v8duIe+bTMNBtOMClX9yFoI14t4zzC4MxxRYhMmKRrieeLEFdDECA zykJutP/8wvIDXAWKFctmMETFGjwJxvVky5WwVGqM2T8FtwOWwnNyA1jwlhLqsE6yU/0J5 eW8tTKoacvl9i1jaYQvlaozb405fv5Y= Received: by mail-yb1-f202.google.com with SMTP id 3f1490d57ef6-e3891b4a68bso1834694276.2 for ; 
Thu, 21 Nov 2024 08:28:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1732206513; x=1732811313; darn=kvack.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=aR8IuZPS55VJxAfGr69rX+AU9gT7VgeBh1eRZ8ctshE=; b=A0n2Ud+18niTYPBJfUttpMMcnYY3LWUiSWGrHdv3ELWfupnbI5XOQd7kHcDwgqsINl q9oddv/WAM6p03IhX//EaB4ZOSHEZTQZxuW5GU8qKSYYeRVV4sYVltpFSPztvuzxUCB0 6kxGWe7OjQoLiAcNT3nNNE5AmKrxDMjjvBbXN1bZQxf5CeTS7wxJugIfvIx5Dzrp51XZ vX0Y/P228aHJScN2IqdA2xAUw1QO7YXZWM5CTWgf61e8dm6YBMazQj5B3Mwiz5qGWmfT do5QdnWSavrHuw1b7J8jiJBCJIjp4rwoKTynyL7NWAlM61YO7zzVhtpQEWJK0f+z1q+2 DuAw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1732206513; x=1732811313; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=aR8IuZPS55VJxAfGr69rX+AU9gT7VgeBh1eRZ8ctshE=; b=e/vZOPs/NhqE/D0PSaDU/bBVOEZbJCvcF8Qi+jq6B35TOzFc+E+A+hF1QNiHSEQZor xMPMsk0Py84Q/ZF36JR1kQM8NF8vN/7KCQA2HiPubcQBa7j4JB9Y7v45QsU4OCkVOavj 1RYV/LYsxwFRLVr9RO5Loyg9+Zu7IVnOfGgOYO8/Tf5kVWC4sLjrOc99Zc9JbjEwuAko z7ABE6dPDPvVTRt1ilkIbSiayhStPPXvy24TypdRrkIdvK1Q+4OVtpg1jBVvS4LufWJt DzZ0LpLcP8Ho9YJ5G6WsgzlfH+tkypWSFTb09gC3o0nJdKi/Fi17Z37pMVtlFCEIJpYU bzaQ== X-Forwarded-Encrypted: i=1; AJvYcCXJFYs+8GptNybmV2LV9JrJ/jKD1RvPxrI5Hn5ZTMIePonMj6478AXezLHkAY4gJ4G3vZhmz4+Tyw==@kvack.org X-Gm-Message-State: AOJu0YwhBpFl5bq6POV4xAEoA/Bsdf0YPHuiiR2hWLoITwFlFpakB/l9 Roqb+AvH+35s6dlzKTL13TQWvIx2HkmPkx9jtlHu1eGl0gCTYHwh+5u+3FcFNGwSe90elK72lDK LVg== X-Google-Smtp-Source: AGHT+IHUqHsAWi+Wxn16ObPiD6xo041RvCrQwuBa4MVJn53Kia7YFyxv9Td6tAJu+eIvJbInkjv+ljJDVwE= X-Received: from surenb-desktop.mtv.corp.google.com ([2a00:79e0:2e3f:8:ab6:ec44:b69c:2388]) (user=surenb job=sendgmr) by 2002:a25:dcc5:0:b0:e30:be64:f7e6 with SMTP id 3f1490d57ef6-e38cb566988mr3028276.3.1732206512791; Thu, 21 Nov 2024 08:28:32 -0800 (PST) Date: Thu, 21 Nov 
2024 08:28:26 -0800
In-Reply-To: <20241121162826.987947-1-surenb@google.com>
References: <20241121162826.987947-1-surenb@google.com>
Message-ID: <20241121162826.987947-3-surenb@google.com>
Subject: [PATCH v2 3/3] mm: introduce mmap_lock_speculate_{try_begin|retry}
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, andrii@kernel.org, jannh@google.com,
 Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com, vbabka@suse.cz,
 mhocko@kernel.org, shakeel.butt@linux.dev, hannes@cmpxchg.org,
 david@redhat.com, willy@infradead.org, brauner@kernel.org, oleg@redhat.com,
 arnd@arndb.de, richard.weiyang@gmail.com, zhangpeng.00@bytedance.com,
 linmiaohe@huawei.com, viro@zeniv.linux.org.uk, hca@linux.ibm.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, surenb@google.com
Add helper functions to speculatively perform
operations without read-locking mmap_lock, expecting that mmap_lock will
not be write-locked and mm will not be modified from under us.

Suggested-by: Peter Zijlstra
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Liam R. Howlett
---
Changes since v1 [1]
- Changed to use new raw_seqcount_try_begin() API, per Peter Zijlstra
- Renamed mmap_lock_speculation_{begin|end} into
  mmap_lock_speculate_{try_begin|retry}, per Peter Zijlstra

Note: the return value of mmap_lock_speculate_retry() is the opposite of
what mmap_lock_speculation_end() used to return: true now means the
speculation failed.

[1] https://lore.kernel.org/all/20241024205231.1944747-2-surenb@google.com/

 include/linux/mmap_lock.h | 33 +++++++++++++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 083b7fa2588e..0b39a0f99a3b 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -71,6 +71,7 @@ static inline void mmap_assert_write_locked(const struct mm_struct *mm)
 }
 
 #ifdef CONFIG_PER_VMA_LOCK
+
 static inline void mm_lock_seqcount_init(struct mm_struct *mm)
 {
 	seqcount_init(&mm->mm_lock_seq);
@@ -86,11 +87,39 @@ static inline void mm_lock_seqcount_end(struct mm_struct *mm)
 	do_raw_write_seqcount_end(&mm->mm_lock_seq);
 }
 
-#else
+static inline bool mmap_lock_speculate_try_begin(struct mm_struct *mm, unsigned int *seq)
+{
+	/*
+	 * Since mmap_lock is a sleeping lock, and waiting for it to become
+	 * unlocked is more or less equivalent with taking it ourselves, don't
+	 * bother with the speculative path if mmap_lock is already write-locked
+	 * and take the slow path, which takes the lock.
+	 */
+	return raw_seqcount_try_begin(&mm->mm_lock_seq, *seq);
+}
+
+static inline bool mmap_lock_speculate_retry(struct mm_struct *mm, unsigned int seq)
+{
+	return do_read_seqcount_retry(&mm->mm_lock_seq, seq);
+}
+
+#else /* CONFIG_PER_VMA_LOCK */
+
 static inline void mm_lock_seqcount_init(struct mm_struct *mm) {}
 static inline void mm_lock_seqcount_begin(struct mm_struct *mm) {}
 static inline void mm_lock_seqcount_end(struct mm_struct *mm) {}
-#endif
+
+static inline bool mmap_lock_speculate_try_begin(struct mm_struct *mm, unsigned int *seq)
+{
+	return false;
+}
+
+static inline bool mmap_lock_speculate_retry(struct mm_struct *mm, unsigned int seq)
+{
+	return true;
+}
+
+#endif /* CONFIG_PER_VMA_LOCK */
 
 static inline void mmap_init_lock(struct mm_struct *mm)
 {