From patchwork Sun Nov 10 15:28:02 2024
X-Patchwork-Submitter: Jens Axboe <axboe@kernel.dk>
X-Patchwork-Id: 13869953
From: Jens Axboe <axboe@kernel.dk>
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Cc: hannes@cmpxchg.org, clm@meta.com, linux-kernel@vger.kernel.org,
	willy@infradead.org, Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 10/15] mm/filemap: make buffered writes work with RWF_UNCACHED
Date: Sun, 10 Nov 2024 08:28:02 -0700
Message-ID: <20241110152906.1747545-11-axboe@kernel.dk>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20241110152906.1747545-1-axboe@kernel.dk>
References: <20241110152906.1747545-1-axboe@kernel.dk>
MIME-Version: 1.0

If RWF_UNCACHED is set for a write, mark the new folios created for that
write as uncached. This is done by passing the fact that it's an uncached
write through the folio pointer. We can only get here if IOCB_UNCACHED was
allowed, which can only happen if the file system opts in. Opting in means
the file system needs to check the folio pointer passed to ->write_begin()
for the uncached marker to know whether this is an uncached write. If it
is, then FGP_UNCACHED should be used when new folios need to be created.
Uncached writes will drop any folios they create upon writeback completion,
but leave folios that already existed in that range alone.

Since ->write_begin() doesn't currently take any flags, and to avoid needing
to change the callback kernel wide, use the foliop being passed in to
->write_begin() to signal whether this is an uncached write or not. File
systems can then use that to mark newly created folios as uncached.

Add a helper, generic_uncached_write(), that generic_file_write_iter() calls
upon successful completion of an uncached write. This provides similar
benefits to using RWF_UNCACHED with reads.

Testing buffered writes on 32 files:

writing bs 65536, uncached 0
  1s: 196035MB/sec, MB=196035
  2s: 132308MB/sec, MB=328147
  3s: 132438MB/sec, MB=460586
  4s: 116528MB/sec, MB=577115
  5s: 103898MB/sec, MB=681014
  6s: 108893MB/sec, MB=789907
  7s: 99678MB/sec, MB=889586
  8s: 106545MB/sec, MB=996132
  9s: 106826MB/sec, MB=1102958
 10s: 101544MB/sec, MB=1204503
 11s: 111044MB/sec, MB=1315548
 12s: 124257MB/sec, MB=1441121
 13s: 116031MB/sec, MB=1557153
 14s: 114540MB/sec, MB=1671694
 15s: 115011MB/sec, MB=1786705
 16s: 115260MB/sec, MB=1901966
 17s: 116068MB/sec, MB=2018034
 18s: 116096MB/sec, MB=2134131

where it's quite obvious when the page cache filled up: performance dropped
to about half of where it started, settling in at around 115GB/sec.
Meanwhile, 32 kswapds were running full steam trying to reclaim pages.

Running the same test with uncached buffered writes:

writing bs 65536, uncached 1
  1s: 198974MB/sec
  2s: 189618MB/sec
  3s: 193601MB/sec
  4s: 188582MB/sec
  5s: 193487MB/sec
  6s: 188341MB/sec
  7s: 194325MB/sec
  8s: 188114MB/sec
  9s: 192740MB/sec
 10s: 189206MB/sec
 11s: 193442MB/sec
 12s: 189659MB/sec
 13s: 191732MB/sec
 14s: 190701MB/sec
 15s: 191789MB/sec
 16s: 191259MB/sec
 17s: 190613MB/sec
 18s: 191951MB/sec

and the behavior is fully predictable, performing the same throughout even
after the page cache would otherwise have filled up with dirty data. It's
also about 65% faster, and uses half the CPU of the system compared to the
normal buffered write.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
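Not part of the patch, just an illustration of the userspace side: a minimal
sketch that issues one buffered write with pwritev2() and RWF_UNCACHED, which
is essentially what the benchmark above does at scale. It assumes a kernel
with this series applied; the RWF_UNCACHED value below is taken from the
series rather than a released uapi header, so treat it as an assumption.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef RWF_UNCACHED
#define RWF_UNCACHED	0x00000080	/* assumed value, check linux/fs.h of the patched tree */
#endif

int main(void)
{
	static char buf[65536];
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	ssize_t ret;

	if (fd < 0)
		return 1;
	memset(buf, 0xaa, sizeof(buf));

	/*
	 * Behaves like a normal buffered write, except that folios
	 * instantiated for this range are marked uncached: writeback is
	 * kicked off when the write returns, and the folios are dropped
	 * again once writeback completes.
	 */
	ret = pwritev2(fd, &iov, 1, 0, RWF_UNCACHED);
	if (ret < 0)
		perror("pwritev2");	/* e.g. EOPNOTSUPP without this series */

	close(fd);
	return ret == sizeof(buf) ? 0 : 1;
}
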
 include/linux/pagemap.h | 29 +++++++++++++++++++++++++++++
 mm/filemap.c            | 26 +++++++++++++++++++++++---
 2 files changed, 52 insertions(+), 3 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 0122b3fbe2ac..5469664f66c3 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -14,6 +14,7 @@
 #include
 #include
 #include	/* for in_interrupt() */
+#include
 #include
 
 struct folio_batch;
 
@@ -70,6 +71,34 @@ static inline int filemap_write_and_wait(struct address_space *mapping)
 	return filemap_write_and_wait_range(mapping, 0, LLONG_MAX);
 }
 
+/*
+ * generic_uncached_write - start uncached writeback
+ * @iocb: the iocb that was written
+ * @written: the number of bytes written
+ *
+ * When writeback has been handled by write_iter, this helper should be called
+ * if the file system supports uncached writes. If %IOCB_UNCACHED is set, it
+ * will kick off writeback for the specified range.
+ */
+static inline void generic_uncached_write(struct kiocb *iocb, ssize_t written)
+{
+	if (iocb->ki_flags & IOCB_UNCACHED) {
+		struct address_space *mapping = iocb->ki_filp->f_mapping;
+
+		/* kick off uncached writeback */
+		__filemap_fdatawrite_range(mapping, iocb->ki_pos,
+					   iocb->ki_pos + written, WB_SYNC_NONE);
+	}
+}
+
+/*
+ * Value passed in to ->write_begin() if IOCB_UNCACHED is set for the write,
+ * and the ->write_begin() handler on a file system supporting FOP_UNCACHED
+ * must check for this and pass FGP_UNCACHED for folio creation.
+ */
+#define foliop_uncached		((struct folio *) 0xfee1c001)
+#define foliop_is_uncached(foliop)	(*(foliop) == foliop_uncached)
+
 /**
  * filemap_set_wb_err - set a writeback error on an address_space
  * @mapping: mapping in which to set writeback error

diff --git a/mm/filemap.c b/mm/filemap.c
index efd02b047541..cfbfc8b14b1f 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -430,6 +430,7 @@ int __filemap_fdatawrite_range(struct address_space *mapping, loff_t start,
 
 	return filemap_fdatawrite_wbc(mapping, &wbc);
 }
+EXPORT_SYMBOL_GPL(__filemap_fdatawrite_range);
 
 static inline int __filemap_fdatawrite(struct address_space *mapping,
 	int sync_mode)
@@ -1609,7 +1610,14 @@ static void folio_end_uncached(struct folio *folio)
 {
 	bool reset = true;
 
-	if (folio_trylock(folio)) {
+	/*
+	 * Hitting !in_task() should not happen off RWF_UNCACHED writeback, but
+	 * can happen if normal writeback just happens to find dirty folios
+	 * that were created as part of uncached writeback, and that writeback
+	 * would otherwise not need non-IRQ handling. Just skip the
+	 * invalidation in that case.
+	 */
+	if (in_task() && folio_trylock(folio)) {
 		reset = !invalidate_complete_folio2(folio->mapping, folio, 0);
 		folio_unlock(folio);
 	}
@@ -4061,7 +4069,7 @@ ssize_t generic_perform_write(struct kiocb *iocb, struct iov_iter *i)
 	ssize_t written = 0;
 
 	do {
-		struct folio *folio;
+		struct folio *folio = NULL;
 		size_t offset;		/* Offset into folio */
 		size_t bytes;		/* Bytes to write to folio */
 		size_t copied;		/* Bytes copied from user */
@@ -4089,6 +4097,16 @@ ssize_t generic_perform_write(struct kiocb *iocb, struct iov_iter *i)
 			break;
 		}
 
+		/*
+		 * If IOCB_UNCACHED is set here, we know the file system
+		 * supports it. And hence it'll know to check foliop for being
+		 * set to this magic value. If so, it's an uncached write.
+		 * Whenever ->write_begin() changes prototypes again, this
+		 * can go away and just pass iocb or iocb flags.
+		 */
+		if (iocb->ki_flags & IOCB_UNCACHED)
+			folio = foliop_uncached;
+
 		status = a_ops->write_begin(file, mapping, pos, bytes,
 						&folio, &fsdata);
 		if (unlikely(status < 0))
@@ -4219,8 +4237,10 @@ ssize_t generic_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
 	ret = __generic_file_write_iter(iocb, from);
 	inode_unlock(inode);
 
-	if (ret > 0)
+	if (ret > 0) {
+		generic_uncached_write(iocb, ret);
 		ret = generic_write_sync(iocb, ret);
+	}
 	return ret;
 }
 EXPORT_SYMBOL(generic_file_write_iter);
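
For the file system side, a minimal sketch of what an opted-in ->write_begin()
might look like follows. myfs_write_begin() is hypothetical, FOP_UNCACHED and
FGP_UNCACHED come from earlier patches in this series, and a real
implementation would still do its usual block mapping and partial-folio
handling; only the foliop_is_uncached() check and the FGP_UNCACHED flag are
specific to uncached writes.

static int myfs_write_begin(struct file *file, struct address_space *mapping,
			    loff_t pos, unsigned len, struct folio **foliop,
			    void **fsdata)
{
	fgf_t fgp = FGP_WRITEBEGIN;
	struct folio *folio;

	/* generic_perform_write() passes a magic foliop for uncached writes */
	if (foliop_is_uncached(foliop))
		fgp |= FGP_UNCACHED;

	/* find or create the folio; FGP_UNCACHED marks new folios uncached */
	folio = __filemap_get_folio(mapping, pos >> PAGE_SHIFT, fgp,
				    mapping_gfp_mask(mapping));
	if (IS_ERR(folio))
		return PTR_ERR(folio);

	*foliop = folio;
	return 0;
}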