From patchwork Fri Nov 22 17:44:14 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13883494
Date: Fri, 22 Nov 2024 09:44:14 -0800
Message-ID: <20241122174416.1367052-1-surenb@google.com>
Subject: [PATCH v3 1/3] seqlock: add raw_seqcount_try_begin
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, andrii@kernel.org, jannh@google.com,
    Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com, vbabka@suse.cz,
    mhocko@kernel.org, shakeel.butt@linux.dev, hannes@cmpxchg.org,
    david@redhat.com, willy@infradead.org, brauner@kernel.org,
    oleg@redhat.com, arnd@arndb.de, richard.weiyang@gmail.com,
    zhangpeng.00@bytedance.com, linmiaohe@huawei.com,
    viro@zeniv.linux.org.uk, hca@linux.ibm.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, surenb@google.com, "Liam R. Howlett"

Add raw_seqcount_try_begin() to open a read critical section of the given
seqcount_t if the counter is even. This enables eliding the critical
section entirely if the counter is odd, instead of doing the speculation
knowing it will fail.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
---
Applies over Linus' ToT

Changes since v2 [1]
- Added SOB, per David Hildenbrand
- Added SOB, per Liam Howlett

[1] https://lore.kernel.org/all/20241121162826.987947-1-surenb@google.com/

 include/linux/seqlock.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

base-commit: 28eb75e178d389d325f1666e422bc13bbbb9804c

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index 5298765d6ca4..22c2c48b4265 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -318,6 +318,28 @@ SEQCOUNT_LOCKNAME(mutex, struct mutex, true, mutex)
 	__seq;								\
 })
 
+/**
+ * raw_seqcount_try_begin() - begin a seqcount_t read critical section
+ *                            w/o lockdep and w/o counter stabilization
+ * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
+ *
+ * Similar to raw_seqcount_begin(), except it enables eliding the critical
+ * section entirely if odd, instead of doing the speculation knowing it will
+ * fail.
+ *
+ * Useful when counter stabilization is more or less equivalent to taking
+ * the lock and there is a slowpath that does that.
+ *
+ * If true, start will be set to the (even) sequence count read.
+ *
+ * Return: true when a read critical section is started.
+ */
+#define raw_seqcount_try_begin(s, start)				\
+({									\
+	start = raw_read_seqcount(s);					\
+	!(start & 1);							\
+})
+
 /**
  * raw_seqcount_begin() - begin a seqcount_t read critical section w/o
  * lockdep and w/o counter stabilization
From patchwork Fri Nov 22 17:44:15 2024
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13883495
Date: Fri, 22 Nov 2024 09:44:15 -0800
In-Reply-To: <20241122174416.1367052-1-surenb@google.com>
References: <20241122174416.1367052-1-surenb@google.com>
Message-ID: <20241122174416.1367052-2-surenb@google.com>
Subject: [PATCH v3 2/3] mm: convert mm_lock_seq to a proper seqcount
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, andrii@kernel.org, jannh@google.com,
    Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com, vbabka@suse.cz,
    mhocko@kernel.org, shakeel.butt@linux.dev, hannes@cmpxchg.org,
    david@redhat.com, willy@infradead.org, brauner@kernel.org,
    oleg@redhat.com, arnd@arndb.de, richard.weiyang@gmail.com,
    zhangpeng.00@bytedance.com, linmiaohe@huawei.com,
    viro@zeniv.linux.org.uk, hca@linux.ibm.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, surenb@google.com, "Liam R. Howlett"
Convert mm_lock_seq to be seqcount_t and change all mmap_write_lock
variants to increment it, in-line with the usual seqcount usage pattern.
This lets us check whether the mmap_lock is write-locked by checking
mm_lock_seq.sequence counter (odd=locked, even=unlocked). This will be
used when implementing mmap_lock speculation functions.
As a result vm_lock_seq is also changed to be unsigned to match the type
of mm_lock_seq.sequence.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
---
Changes since v2 [1]
- Moved ASSERT_EXCLUSIVE_WRITER() into mm_lock_seqcount_end to fix
!CONFIG_PER_VMA_LOCK, per kernel test bot
- Added SOB, per Liam Howlett

[1] https://lore.kernel.org/all/20241121162826.987947-2-surenb@google.com/

 include/linux/mm.h               | 12 +++----
 include/linux/mm_types.h         |  7 ++--
 include/linux/mmap_lock.h        | 55 +++++++++++++++++++++-----------
 kernel/fork.c                    |  5 +--
 mm/init-mm.c                     |  2 +-
 tools/testing/vma/vma.c          |  4 +--
 tools/testing/vma/vma_internal.h |  4 +--
 7 files changed, 53 insertions(+), 36 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 673771f34674..a553131644e7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -710,7 +710,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * we don't rely on for anything - the mm_lock_seq read against which we
 	 * need ordering is below.
 	 */
-	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq))
+	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
 		return false;
 
 	if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
@@ -727,7 +727,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * after it has been unlocked.
 	 * This pairs with RELEASE semantics in vma_end_write_all().
 	 */
-	if (unlikely(vma->vm_lock_seq == smp_load_acquire(&vma->vm_mm->mm_lock_seq))) {
+	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
 		up_read(&vma->vm_lock->lock);
 		return false;
 	}
@@ -742,7 +742,7 @@ static inline void vma_end_read(struct vm_area_struct *vma)
 }
 
 /* WARNING! Can only be used if mmap_lock is expected to be write-locked */
-static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
+static bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_lock_seq)
 {
 	mmap_assert_write_locked(vma->vm_mm);
 
@@ -750,7 +750,7 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
 	 * current task is holding mmap_write_lock, both vma->vm_lock_seq and
 	 * mm->mm_lock_seq can't be concurrently modified.
 	 */
-	*mm_lock_seq = vma->vm_mm->mm_lock_seq;
+	*mm_lock_seq = vma->vm_mm->mm_lock_seq.sequence;
 	return (vma->vm_lock_seq == *mm_lock_seq);
 }
 
@@ -761,7 +761,7 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
  */
 static inline void vma_start_write(struct vm_area_struct *vma)
 {
-	int mm_lock_seq;
+	unsigned int mm_lock_seq;
 
 	if (__is_vma_write_locked(vma, &mm_lock_seq))
 		return;
@@ -779,7 +779,7 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 {
-	int mm_lock_seq;
+	unsigned int mm_lock_seq;
 
 	VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
 }
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index e85beea1206e..4e06bde7ec36 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -697,7 +697,7 @@ struct vm_area_struct {
 	 * counter reuse can only lead to occasional unnecessary use of the
 	 * slowpath.
 	 */
-	int vm_lock_seq;
+	unsigned int vm_lock_seq;
 	/* Unstable RCU readers are allowed to read this. */
 	struct vma_lock *vm_lock;
 #endif
@@ -891,6 +891,9 @@ struct mm_struct {
 		 * Roughly speaking, incrementing the sequence number is
 		 * equivalent to releasing locks on VMAs; reading the sequence
 		 * number can be part of taking a read lock on a VMA.
+		 * Incremented every time mmap_lock is write-locked/unlocked.
+		 * Initialized to 0, therefore odd values indicate mmap_lock
+		 * is write-locked and even values that it's released.
 		 *
 		 * Can be modified under write mmap_lock using RELEASE
 		 * semantics.
@@ -899,7 +902,7 @@ struct mm_struct {
 		 * Can be read with ACQUIRE semantics if not holding write
 		 * mmap_lock.
 		 */
-		int mm_lock_seq;
+		seqcount_t mm_lock_seq;
 #endif
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index de9dc20b01ba..9715326f5a85 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -71,39 +71,39 @@ static inline void mmap_assert_write_locked(const struct mm_struct *mm)
 }
 
 #ifdef CONFIG_PER_VMA_LOCK
-/*
- * Drop all currently-held per-VMA locks.
- * This is called from the mmap_lock implementation directly before releasing
- * a write-locked mmap_lock (or downgrading it to read-locked).
- * This should normally NOT be called manually from other places.
- * If you want to call this manually anyway, keep in mind that this will release
- * *all* VMA write locks, including ones from further up the stack.
- */
-static inline void vma_end_write_all(struct mm_struct *mm)
+static inline void mm_lock_seqcount_init(struct mm_struct *mm)
 {
-	mmap_assert_write_locked(mm);
-	/*
-	 * Nobody can concurrently modify mm->mm_lock_seq due to exclusive
-	 * mmap_lock being held.
-	 * We need RELEASE semantics here to ensure that preceding stores into
-	 * the VMA take effect before we unlock it with this store.
-	 * Pairs with ACQUIRE semantics in vma_start_read().
-	 */
-	smp_store_release(&mm->mm_lock_seq, mm->mm_lock_seq + 1);
+	seqcount_init(&mm->mm_lock_seq);
+}
+
+static inline void mm_lock_seqcount_begin(struct mm_struct *mm)
+{
+	do_raw_write_seqcount_begin(&mm->mm_lock_seq);
+}
+
+static inline void mm_lock_seqcount_end(struct mm_struct *mm)
+{
+	ASSERT_EXCLUSIVE_WRITER(mm->mm_lock_seq);
+	do_raw_write_seqcount_end(&mm->mm_lock_seq);
 }
+
 #else
-static inline void vma_end_write_all(struct mm_struct *mm) {}
+static inline void mm_lock_seqcount_init(struct mm_struct *mm) {}
+static inline void mm_lock_seqcount_begin(struct mm_struct *mm) {}
+static inline void mm_lock_seqcount_end(struct mm_struct *mm) {}
 #endif
 
 static inline void mmap_init_lock(struct mm_struct *mm)
 {
 	init_rwsem(&mm->mmap_lock);
+	mm_lock_seqcount_init(mm);
 }
 
 static inline void mmap_write_lock(struct mm_struct *mm)
 {
 	__mmap_lock_trace_start_locking(mm, true);
 	down_write(&mm->mmap_lock);
+	mm_lock_seqcount_begin(mm);
 	__mmap_lock_trace_acquire_returned(mm, true, true);
 }
 
@@ -111,6 +111,7 @@ static inline void mmap_write_lock_nested(struct mm_struct *mm, int subclass)
 {
 	__mmap_lock_trace_start_locking(mm, true);
 	down_write_nested(&mm->mmap_lock, subclass);
+	mm_lock_seqcount_begin(mm);
 	__mmap_lock_trace_acquire_returned(mm, true, true);
 }
 
@@ -120,10 +121,26 @@ static inline int mmap_write_lock_killable(struct mm_struct *mm)
 
 	__mmap_lock_trace_start_locking(mm, true);
 	ret = down_write_killable(&mm->mmap_lock);
+	if (!ret)
+		mm_lock_seqcount_begin(mm);
 	__mmap_lock_trace_acquire_returned(mm, true, ret == 0);
 	return ret;
 }
 
+/*
+ * Drop all currently-held per-VMA locks.
+ * This is called from the mmap_lock implementation directly before releasing
+ * a write-locked mmap_lock (or downgrading it to read-locked).
+ * This should normally NOT be called manually from other places.
+ * If you want to call this manually anyway, keep in mind that this will release
+ * *all* VMA write locks, including ones from further up the stack.
+ */
+static inline void vma_end_write_all(struct mm_struct *mm)
+{
+	mmap_assert_write_locked(mm);
+	mm_lock_seqcount_end(mm);
+}
+
 static inline void mmap_write_unlock(struct mm_struct *mm)
 {
 	__mmap_lock_trace_released(mm, true);
diff --git a/kernel/fork.c b/kernel/fork.c
index e58d27c05788..8cd36645b9fc 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -449,7 +449,7 @@ static bool vma_lock_alloc(struct vm_area_struct *vma)
 		return false;
 
 	init_rwsem(&vma->vm_lock->lock);
-	vma->vm_lock_seq = -1;
+	vma->vm_lock_seq = UINT_MAX;
 	return true;
 }
 
@@ -1262,9 +1262,6 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	seqcount_init(&mm->write_protect_seq);
 	mmap_init_lock(mm);
 	INIT_LIST_HEAD(&mm->mmlist);
-#ifdef CONFIG_PER_VMA_LOCK
-	mm->mm_lock_seq = 0;
-#endif
 	mm_pgtables_bytes_init(mm);
 	mm->map_count = 0;
 	mm->locked_vm = 0;
diff --git a/mm/init-mm.c b/mm/init-mm.c
index 24c809379274..6af3ad675930 100644
--- a/mm/init-mm.c
+++ b/mm/init-mm.c
@@ -40,7 +40,7 @@ struct mm_struct init_mm = {
 	.arg_lock	=  __SPIN_LOCK_UNLOCKED(init_mm.arg_lock),
 	.mmlist		= LIST_HEAD_INIT(init_mm.mmlist),
 #ifdef CONFIG_PER_VMA_LOCK
-	.mm_lock_seq	= 0,
+	.mm_lock_seq	= SEQCNT_ZERO(init_mm.mm_lock_seq),
 #endif
 	.user_ns	= &init_user_ns,
 	.cpu_bitmap	= CPU_BITS_NONE,
diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c
index b33b47342d41..9074aaced9c5 100644
--- a/tools/testing/vma/vma.c
+++ b/tools/testing/vma/vma.c
@@ -87,7 +87,7 @@ static struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
 	 * begun. Linking to the tree will have caused this to be incremented,
 	 * which means we will get a false positive otherwise.
 	 */
-	vma->vm_lock_seq = -1;
+	vma->vm_lock_seq = UINT_MAX;
 
 	return vma;
 }
@@ -212,7 +212,7 @@ static bool vma_write_started(struct vm_area_struct *vma)
 	int seq = vma->vm_lock_seq;
 
 	/* We reset after each check. */
-	vma->vm_lock_seq = -1;
+	vma->vm_lock_seq = UINT_MAX;
 
 	/* The vma_start_write() stub simply increments this value. */
 	return seq > -1;
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index c5b9da034511..4007ec580f85 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -231,7 +231,7 @@ struct vm_area_struct {
 	 * counter reuse can only lead to occasional unnecessary use of the
 	 * slowpath.
 	 */
-	int vm_lock_seq;
+	unsigned int vm_lock_seq;
 	struct vma_lock *vm_lock;
 #endif
 
@@ -406,7 +406,7 @@ static inline bool vma_lock_alloc(struct vm_area_struct *vma)
 		return false;
 
 	init_rwsem(&vma->vm_lock->lock);
-	vma->vm_lock_seq = -1;
+	vma->vm_lock_seq = UINT_MAX;
 	return true;
 }
s=20230601; t=1732297463; x=1732902263; darn=kvack.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=JmMtuCkRvyQ52Tg7fGJRZ5Y2Wsu/zSjhmlAZ0eI81A0=; b=rZkHLVREejmHLjD9H++IxLGBJc++fY9giC41/6+E7kgc9S9v4EXitQdaTqVuu/HuyV ci5S2+SRo9y32ZUk1prvpsCjuFHtz/RFKrnnlxMPBARqv9QC7W49+/t+Sw9KqSzXiw0O jFFw9VXvnHPNrjL9thOAopJ8lhdVYh+egDC+FYK3eeWfIcICtzaRVZGpvKhRgqkezSOy VF2Yiz+WNc9b6jtBxL17CYtCcuVvCvwUe0IFwNeiECZY7Tvrgq12Ex4p0fPFkQ9Zn5PE yihNk5UHBqc0BV2CRBCmZo8v6YeqqY84e0DLEfrzLJUnC8k/sIICBMom2BBeLw+DK5TJ ym2w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1732297463; x=1732902263; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=JmMtuCkRvyQ52Tg7fGJRZ5Y2Wsu/zSjhmlAZ0eI81A0=; b=E5KGqqaPoCzUuNRtDYsB1AZNCxzlLl3lWZxNC9kl7/fMkB8jlVkYx4otBlSoncHs9y sGDe3bYolv52xe+nQjI+yhyP30ia+x1NEyJwJZEKRNoDcBFPlR6MuiaUqCW7L3B6du0B 4NFzXDRw0fOjTPqhEBfRv7NCRjs2sqkg+nXcZpHJmYhRnT+YeC+om20hpXzV9tf8kiqs TvseEpNFTVgpKxS95DGojaKQ8aT+WxXSGedvMSeOfkDfFN79zwkYwHSRZnFntUhlkk1+ tKDpivcBS4DUd0fkZvYwlakvGNmY6ioDRiBV1zttXK/+orG5gZyTFN7xNq/1bm0oJKGn 7hLw== X-Forwarded-Encrypted: i=1; AJvYcCUbKwqjCz6Qa/t/oBuFKZt35l9r2Zob1S+JFBU2VFhcR0Bv+naJODdS78UtT5drqiqA9Fa5WJzp1Q==@kvack.org X-Gm-Message-State: AOJu0Yzq3DA4nqABMTamVadMZCVPxhAgOV6CLt4jH42xpQ00GTGyzWHW rWhxeH+WAYd8gmAiC3k1hW1+1LFRGGQ2XQIsOMXD9UvaeAaVgP/o6aMKOuj+8+vTbu2pLeUGZTc Pxw== X-Google-Smtp-Source: AGHT+IEV9E/JH5zB3aH55fu2gyNobb2STJxxHLoQPN4QfW7QVDU8lvjjtSetWq5+AkJvlm8IMcLFXFtwqtU= X-Received: from surenb-desktop.mtv.corp.google.com ([2a00:79e0:2e3f:8:8f07:6a96:7af9:8fe6]) (user=surenb job=sendgmr) by 2002:a05:6902:1745:b0:e33:111b:c6a4 with SMTP id 3f1490d57ef6-e38f8acbe09mr1492276.1.1732297463091; Fri, 22 Nov 2024 09:44:23 -0800 (PST) Date: Fri, 22 Nov 2024 09:44:16 -0800 In-Reply-To: <20241122174416.1367052-1-surenb@google.com> Mime-Version: 1.0 
References: <20241122174416.1367052-1-surenb@google.com>
Message-ID: <20241122174416.1367052-3-surenb@google.com>
Subject: [PATCH v3 3/3] mm: introduce mmap_lock_speculate_{try_begin|retry}
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, andrii@kernel.org, jannh@google.com, Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com, vbabka@suse.cz, mhocko@kernel.org, shakeel.butt@linux.dev, hannes@cmpxchg.org, david@redhat.com, willy@infradead.org, brauner@kernel.org, oleg@redhat.com, arnd@arndb.de, richard.weiyang@gmail.com, zhangpeng.00@bytedance.com, linmiaohe@huawei.com, viro@zeniv.linux.org.uk, hca@linux.ibm.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, surenb@google.com, "Liam R. Howlett"
Add helper functions to speculatively perform operations without
read-locking mmap_lock, expecting that mmap_lock will not be
write-locked and mm is not modified from under us.

Suggested-by: Peter Zijlstra
Signed-off-by: Suren Baghdasaryan
Reviewed-by: Liam R. Howlett
---
Changes since v2 [1]
- Added SOB, per Liam Howlett

[1] https://lore.kernel.org/all/20241121162826.987947-3-surenb@google.com/

 include/linux/mmap_lock.h | 33 +++++++++++++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 9715326f5a85..8ac3041df053 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -71,6 +71,7 @@ static inline void mmap_assert_write_locked(const struct mm_struct *mm)
 }
 
 #ifdef CONFIG_PER_VMA_LOCK
+
 static inline void mm_lock_seqcount_init(struct mm_struct *mm)
 {
 	seqcount_init(&mm->mm_lock_seq);
@@ -87,11 +88,39 @@ static inline void mm_lock_seqcount_end(struct mm_struct *mm)
 	do_raw_write_seqcount_end(&mm->mm_lock_seq);
 }
 
-#else
+static inline bool mmap_lock_speculate_try_begin(struct mm_struct *mm, unsigned int *seq)
+{
+	/*
+	 * Since mmap_lock is a sleeping lock, and waiting for it to become
+	 * unlocked is more or less equivalent with taking it ourselves, don't
+	 * bother with the speculative path if mmap_lock is already write-locked
+	 * and take the slow path, which takes the lock.
+	 */
+	return raw_seqcount_try_begin(&mm->mm_lock_seq, *seq);
+}
+
+static inline bool mmap_lock_speculate_retry(struct mm_struct *mm, unsigned int seq)
+{
+	return do_read_seqcount_retry(&mm->mm_lock_seq, seq);
+}
+
+#else /* CONFIG_PER_VMA_LOCK */
+
 static inline void mm_lock_seqcount_init(struct mm_struct *mm) {}
 static inline void mm_lock_seqcount_begin(struct mm_struct *mm) {}
 static inline void mm_lock_seqcount_end(struct mm_struct *mm) {}
-#endif
+
+static inline bool mmap_lock_speculate_try_begin(struct mm_struct *mm, unsigned int *seq)
+{
+	return false;
+}
+
+static inline bool mmap_lock_speculate_retry(struct mm_struct *mm, unsigned int seq)
+{
+	return true;
+}
+
+#endif /* CONFIG_PER_VMA_LOCK */
 
 static inline void mmap_init_lock(struct mm_struct *mm)
 {