[mmotm] mm: warn on deleting redirtied only if accounted

Message ID b5a1106c-7226-a5c6-ad41-ad4832cae1f@google.com (mailing list archive)
State New

Commit Message

Hugh Dickins March 4, 2022, 4:25 a.m. UTC
filemap_unaccount_folio() has a WARN_ON_ONCE(folio_test_dirty(folio)).
It is good to warn of late dirtying on a persistent filesystem, but late
dirtying on tmpfs can only lose data which is expected to be thrown away;
and it's a pity if that warning comes ONCE on tmpfs, then hides others
which really matter.  Make it conditional on mapping_can_writeback().
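
For reference (not part of this patch): mapping_can_writeback() is roughly
the check below, per include/linux/backing-dev.h around this time.  tmpfs
inodes sit on a backing_dev_info without BDI_CAP_WRITEBACK, so the test is
false for them and the warning is skipped.

static inline bool mapping_can_writeback(struct address_space *mapping)
{
	return inode_to_bdi(mapping->host)->capabilities & BDI_CAP_WRITEBACK;
}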

Cleanup: then folio_account_cleaned() no longer needs to check that
for itself, and so no longer needs to know the mapping.

Signed-off-by: Hugh Dickins <hughd@google.com>
---

 include/linux/pagemap.h |  3 +--
 mm/filemap.c            | 14 +++++++++-----
 mm/page-writeback.c     | 18 ++++++++----------
 3 files changed, 18 insertions(+), 17 deletions(-)

Comments

Matthew Wilcox March 4, 2022, 4:38 a.m. UTC | #1
On Thu, Mar 03, 2022 at 08:25:50PM -0800, Hugh Dickins wrote:
> filemap_unaccount_folio() has a WARN_ON_ONCE(folio_test_dirty(folio)).
> It is good to warn of late dirtying on a persistent filesystem, but late
> dirtying on tmpfs can only lose data which is expected to be thrown away;
> and it's a pity if that warning comes ONCE on tmpfs, then hides others
> which really matter.  Make it conditional on mapping_can_writeback().
> 
> Cleanup: then folio_account_cleaned() no longer needs to check that
> for itself, and so no longer needs to know the mapping.

At first blush, I like this a lot.  Will look more in the morning.

Patch

--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -991,8 +991,7 @@  static inline void __set_page_dirty(struct page *page,
 {
 	__folio_mark_dirty(page_folio(page), mapping, warn);
 }
-void folio_account_cleaned(struct folio *folio, struct address_space *mapping,
-			  struct bdi_writeback *wb);
+void folio_account_cleaned(struct folio *folio, struct bdi_writeback *wb);
 void __folio_cancel_dirty(struct folio *folio);
 static inline void folio_cancel_dirty(struct folio *folio)
 {
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -193,16 +193,20 @@  static void filemap_unaccount_folio(struct address_space *mapping,
 	/*
 	 * At this point folio must be either written or cleaned by
 	 * truncate.  Dirty folio here signals a bug and loss of
-	 * unwritten data.
+	 * unwritten data - on ordinary filesystems.
 	 *
-	 * This fixes dirty accounting after removing the folio entirely
+	 * But it's harmless on in-memory filesystems like tmpfs; and can
+	 * occur when a driver which did get_user_pages() sets page dirty
+	 * before putting it, while the inode is being finally evicted.
+	 *
+	 * Below fixes dirty accounting after removing the folio entirely
 	 * but leaves the dirty flag set: it has no effect for truncated
 	 * folio and anyway will be cleared before returning folio to
 	 * buddy allocator.
 	 */
-	if (WARN_ON_ONCE(folio_test_dirty(folio)))
-		folio_account_cleaned(folio, mapping,
-					inode_to_wb(mapping->host));
+	if (WARN_ON_ONCE(folio_test_dirty(folio) &&
+			 mapping_can_writeback(mapping)))
+		folio_account_cleaned(folio, inode_to_wb(mapping->host));
 }
 
 /*
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2548,16 +2548,14 @@  static void folio_account_dirtied(struct folio *folio,
  *
  * Caller must hold lock_page_memcg().
  */
-void folio_account_cleaned(struct folio *folio, struct address_space *mapping,
-			  struct bdi_writeback *wb)
+void folio_account_cleaned(struct folio *folio, struct bdi_writeback *wb)
 {
-	if (mapping_can_writeback(mapping)) {
-		long nr = folio_nr_pages(folio);
-		lruvec_stat_mod_folio(folio, NR_FILE_DIRTY, -nr);
-		zone_stat_mod_folio(folio, NR_ZONE_WRITE_PENDING, -nr);
-		wb_stat_mod(wb, WB_RECLAIMABLE, -nr);
-		task_io_account_cancelled_write(nr * PAGE_SIZE);
-	}
+	long nr = folio_nr_pages(folio);
+
+	lruvec_stat_mod_folio(folio, NR_FILE_DIRTY, -nr);
+	zone_stat_mod_folio(folio, NR_ZONE_WRITE_PENDING, -nr);
+	wb_stat_mod(wb, WB_RECLAIMABLE, -nr);
+	task_io_account_cancelled_write(nr * PAGE_SIZE);
 }
 
 /*
@@ -2768,7 +2766,7 @@  void __folio_cancel_dirty(struct folio *folio)
 		wb = unlocked_inode_to_wb_begin(inode, &cookie);
 
 		if (folio_test_clear_dirty(folio))
-			folio_account_cleaned(folio, mapping, wb);
+			folio_account_cleaned(folio, wb);
 
 		unlocked_inode_to_wb_end(inode, &cookie);
 		folio_memcg_unlock(folio);
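
As a hypothetical illustration (not from the patch or the thread) of the
late-dirtying case described in the new filemap_unaccount_folio() comment:
a driver that pinned user pages with get_user_pages() may mark them dirty
only when its I/O completes, which can race with final eviction of the
inode.  The function and names below are made up for the sketch.

#include <linux/mm.h>

/* Hypothetical completion path: late redirty after pinning user pages. */
static void example_dma_done(struct page **pages, unsigned long nr_pages)
{
	unsigned long i;

	for (i = 0; i < nr_pages; i++) {
		set_page_dirty(pages[i]);	/* dirty set late, perhaps after truncation */
		put_page(pages[i]);		/* may drop the last reference */
	}
}

With the patch applied, that pattern no longer trips the WARN_ON_ONCE on
tmpfs, while filesystems which can write back still warn of lost data.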