From patchwork Wed May 12 13:46:08 2021
X-Patchwork-Submitter: Jan Kara
X-Patchwork-Id: 12253689
From: Jan Kara
Cc: Christoph Hellwig, Dave Chinner, ceph-devel@vger.kernel.org, Chao Yu,
    Damien Le Moal, "Darrick J. Wong", Jaegeuk Kim, Jeff Layton,
    Johannes Thumshirn, linux-cifs@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net, Miklos Szeredi, Steve French,
    Ted Tso, Matthew Wilcox, Jan Kara
Subject: [PATCH 0/11 v5] fs: Hole punch vs page cache filling races
Date: Wed, 12 May 2021 15:46:08 +0200
Message-Id: <20210512101639.22278-1-jack@suse.cz>

Hello,

here is another version of my patches to address races between hole punching
and page cache filling functions for ext4 and other filesystems. The biggest
change since last time is an update of the documentation to reflect the fact,
spotted by Dave Chinner, that some places also use this type of lock to block
changes to existing page cache pages through memory mappings. Out of all
filesystems supporting hole punching, only GFS2 and OCFS2 remain unresolved.
GFS2 people are working on their own solution (cluster locking is involved),
OCFS2 has even bigger issues (maintainers informed, looking into it).

As a next step, I'd like to actually make sure all calls to
truncate_inode_pages() happen under mapping->invalidate_lock, add the assert,
and then we can also get rid of i_size checks in some places (truncate can use
the same serialization scheme as hole punch). But that step is mostly a
cleanup so I'd like to get these functional fixes in first.

Note that the first patch of the series is already in the mm tree but I'm
submitting it here so that the series applies to Linus' tree cleanly.
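As a rough illustration of how the new lock is meant to be used on the hole
punch side (this is only a sketch of mine, not code from the patches:
fs_punch_hole() and fs_remove_blocks() are made-up placeholders, and the
filemap_invalidate_lock()/filemap_invalidate_unlock() wrappers around
mapping->invalidate_lock are assumed names):

#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical filesystem hook that frees the blocks backing the range. */
int fs_remove_blocks(struct inode *inode, loff_t offset, loff_t len);

/*
 * Hole punch side: hold mapping->invalidate_lock exclusively so that no new
 * page cache pages can be created (and no soon-to-be-stale block mapping
 * looked up) in the punched range between truncating the page cache and
 * freeing the blocks.
 */
static int fs_punch_hole(struct inode *inode, loff_t offset, loff_t len)
{
	struct address_space *mapping = inode->i_mapping;
	int ret;

	filemap_invalidate_lock(mapping);	/* exclusive */
	truncate_pagecache_range(inode, offset, offset + len - 1);
	ret = fs_remove_blocks(inode, offset, len);	/* placeholder */
	filemap_invalidate_unlock(mapping);

	return ret;
}

The point is that the lock is held across both the page cache truncation and
the block freeing, so nothing can repopulate the punched range in between.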
Changes since v4:
* Rebased onto 5.13-rc1
* Removed shmfs conversion patches
* Fixed up zonefs changelog
* Fixed up XFS comments
* Added patch fixing up definition of file_operations in Documentation/vfs/
* Updated documentation and comments to explain invalidate_lock is used also
  to prevent changes through memory mappings to existing pages for some VFS
  operations.

Changes since v3:
* Renamed and moved lock to struct address_space
* Added conversions of tmpfs, ceph, cifs, fuse, f2fs
* Fixed error handling path in filemap_read()
* Removed .page_mkwrite() cleanup from the series for now

Changes since v2:
* Added documentation and comments regarding lock ordering and how the lock
  is supposed to be used
* Added conversions of ext2, xfs, zonefs
* Added patch removing i_mapping_sem protection from .page_mkwrite handlers

Changes since v1:
* Moved to using inode->i_mapping_sem instead of aops handler to acquire
  appropriate lock

---
Motivation:

Amir has reported [1] that ext4 has a potential issue where reads can race
with hole punching, possibly exposing stale data from freed blocks or even
corrupting the filesystem when stale mapping data gets used for writeout. The
problem is that during hole punching, new page cache pages can get
instantiated and block mappings looked up in the punched range after
truncate_inode_pages() has run but before the filesystem removes blocks from
the file. In principle any filesystem implementing hole punching thus needs
to implement a mechanism to block instantiating page cache pages during hole
punching to avoid this race. This is further complicated by the fact that
there are multiple places that can instantiate pages in the page cache. We
can have a regular read(2) or a page fault doing this, but fadvise(2) or
madvise(2) can also result in reading in page cache pages through
force_page_cache_readahead().

There are a couple of ways to fix this. The first way (currently implemented
by XFS) is to protect read(2) and *advise(2) calls with i_rwsem so that they
are serialized with hole punching. This is easy to do, but as a result all
reads would then be serialized with writes, and thus mixed read-write
workloads would suffer heavily on ext4. Thus this series introduces a new
rwsem (inode->i_mapping_sem in earlier versions, moved since v4 to struct
address_space as mapping->invalidate_lock) and uses it when creating new
pages in the page cache and looking up their corresponding block mapping. We
also replace EXT4_I(inode)->i_mmap_sem with this new rwsem, which provides
the necessary serialization with hole punching for ext4.

								Honza

[1] https://lore.kernel.org/linux-fsdevel/CAOQ4uxjQNmxqmtA_VbYW0Su9rKRk2zobJmahcyeaEVOFKVQ5dw@mail.gmail.com/

Previous versions:
Link: https://lore.kernel.org/linux-fsdevel/20210208163918.7871-1-jack@suse.cz/
Link: http://lore.kernel.org/r/20210413105205.3093-1-jack@suse.cz
Link: http://lore.kernel.org/r/20210423171010.12-1-jack@suse.cz
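For completeness, here is a sketch of the reader side of the scheme described
in the Motivation above (again only an illustration, not code from the
series: fs_read_new_page() and fs_fill_page() are hypothetical placeholders,
and the filemap_invalidate_lock_shared()/filemap_invalidate_unlock_shared()
wrappers around the new lock are assumed names):

#include <linux/fs.h>
#include <linux/pagemap.h>

/* Hypothetical filesystem hook: look up the block mapping and read the page. */
int fs_fill_page(struct inode *inode, struct page *page);

/*
 * Reader side: read(2), page faults and readahead triggered by fadvise(2)/
 * madvise(2) take the same lock in shared mode around instantiating a page
 * cache page and looking up its block mapping.
 */
static int fs_read_new_page(struct inode *inode, struct page *page)
{
	struct address_space *mapping = inode->i_mapping;
	int ret;

	filemap_invalidate_lock_shared(mapping);
	ret = fs_fill_page(inode, page);	/* placeholder */
	filemap_invalidate_unlock_shared(mapping);

	return ret;
}

Because readers take the rwsem in shared mode, they serialize only against
hole punching (which takes it exclusively), not against each other or against
ordinary writes, which is what avoids the mixed read-write regression
mentioned above.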