From patchwork Fri Nov 22 23:23:55 2024
From: Joanne Koong
To: miklos@szeredi.hu, linux-fsdevel@vger.kernel.org
Cc: shakeel.butt@linux.dev, jefflexu@linux.alibaba.com, josef@toxicpanda.com,
    bernd.schubert@fastmail.fm, linux-mm@kvack.org, kernel-team@meta.com
Subject: [PATCH v6 1/5] mm: add AS_WRITEBACK_INDETERMINATE mapping flag
Date: Fri, 22 Nov 2024 15:23:55 -0800
Message-ID: <20241122232359.429647-2-joannelkoong@gmail.com>
In-Reply-To: <20241122232359.429647-1-joannelkoong@gmail.com>

Add a new mapping flag AS_WRITEBACK_INDETERMINATE which filesystems may set
to indicate that writing back to disk may take an indeterminate amount of
time to complete. Extra caution should be taken when waiting on writeback
for folios belonging to mappings where this flag is set.
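For illustration, a minimal sketch of how a filesystem might opt in to the
new flag and how a caller might consult it before waiting on writeback (the
example_* names are hypothetical; only the two
mapping_*_writeback_indeterminate() helpers come from this patch):

	#include <linux/fs.h>
	#include <linux/pagemap.h>

	/* At inode/mapping setup time, a filesystem opts in: */
	static void example_fs_init_file_inode(struct inode *inode)
	{
		/* Writeback completion time for this mapping is unbounded. */
		mapping_set_writeback_indeterminate(inode->i_mapping);
	}

	/* Before blocking on writeback, core code can check the flag: */
	static bool example_may_wait_on_writeback(struct folio *folio)
	{
		struct address_space *mapping = folio_mapping(folio);

		return !(mapping && mapping_writeback_indeterminate(mapping));
	}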
Signed-off-by: Joanne Koong
Reviewed-by: Shakeel Butt
---
 include/linux/pagemap.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 68a5f1ff3301..fcf7d4dd7e2b 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -210,6 +210,7 @@ enum mapping_flags {
 	AS_STABLE_WRITES = 7,	/* must wait for writeback before modifying
				   folio contents */
 	AS_INACCESSIBLE = 8,	/* Do not attempt direct R/W access to the mapping */
+	AS_WRITEBACK_INDETERMINATE = 9, /* Use caution when waiting on writeback */
 	/* Bits 16-25 are used for FOLIO_ORDER */
 	AS_FOLIO_ORDER_BITS = 5,
 	AS_FOLIO_ORDER_MIN = 16,
@@ -335,6 +336,16 @@ static inline bool mapping_inaccessible(struct address_space *mapping)
 	return test_bit(AS_INACCESSIBLE, &mapping->flags);
 }
 
+static inline void mapping_set_writeback_indeterminate(struct address_space *mapping)
+{
+	set_bit(AS_WRITEBACK_INDETERMINATE, &mapping->flags);
+}
+
+static inline bool mapping_writeback_indeterminate(struct address_space *mapping)
+{
+	return test_bit(AS_WRITEBACK_INDETERMINATE, &mapping->flags);
+}
+
 static inline gfp_t mapping_gfp_mask(struct address_space * mapping)
 {
 	return mapping->gfp_mask;

From patchwork Fri Nov 22 23:23:56 2024
From: Joanne Koong
To: miklos@szeredi.hu, linux-fsdevel@vger.kernel.org
Cc: shakeel.butt@linux.dev, jefflexu@linux.alibaba.com, josef@toxicpanda.com,
    bernd.schubert@fastmail.fm, linux-mm@kvack.org, kernel-team@meta.com
Subject: [PATCH v6 2/5] mm: skip reclaiming folios in legacy memcg writeback indeterminate contexts
Date: Fri, 22 Nov 2024 15:23:56 -0800
Message-ID: <20241122232359.429647-3-joannelkoong@gmail.com>
In-Reply-To: <20241122232359.429647-1-joannelkoong@gmail.com>

Currently in shrink_folio_list(), reclaim for folios under writeback falls
into 3 different cases:

1) Reclaim is encountering an excessive number of folios under writeback
   and this folio has both the writeback and reclaim flags set

2) Dirty throttling is enabled (this happens if reclaim through cgroup is
   not enabled, if reclaim through cgroupv2 memcg is enabled, or if reclaim
   is on the root cgroup), or if the folio is not marked for immediate
   reclaim, or if the caller does not have __GFP_FS (or __GFP_IO if it's
   going to swap) set

3) Legacy cgroupv1 encounters a folio that already has the reclaim flag set
   and the caller did not have __GFP_FS (or __GFP_IO if swap) set

In cases 1) and 2), we activate the folio and skip reclaiming it, while in
case 3), we wait for writeback to finish on the folio and then try to
reclaim the folio again. In case 3), we wait on writeback because cgroupv1
does not have dirty folio throttling; as such, this is a mitigation against
the case where there are too many folios in writeback with nothing else to
reclaim.

For filesystems where writeback may take an indeterminate amount of time to
write to disk, this has the possibility of stalling reclaim.

In this commit, if legacy memcg encounters a folio with the reclaim flag
set (i.e. case 3) and the folio belongs to a mapping that has the
AS_WRITEBACK_INDETERMINATE flag set, the folio will be activated and skip
reclaim (i.e. default to the behavior in case 2) instead.

Signed-off-by: Joanne Koong
Reviewed-by: Shakeel Butt
---
 mm/vmscan.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 749cdc110c74..37ce6b6dac06 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1129,8 +1129,9 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		 * 2) Global or new memcg reclaim encounters a folio that is
 		 *    not marked for immediate reclaim, or the caller does not
 		 *    have __GFP_FS (or __GFP_IO if it's simply going to swap,
-		 *    not to fs). In this case mark the folio for immediate
-		 *    reclaim and continue scanning.
+		 *    not to fs), or the writeback may take an indeterminate
+		 *    amount of time to complete. In this case mark the folio
+		 *    for immediate reclaim and continue scanning.
 		 *
 		 * Require may_enter_fs() because we would wait on fs, which
 		 * may not have submitted I/O yet. And the loop driver might
@@ -1155,6 +1156,8 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		 * takes to write them to disk.
 		 */
 		if (folio_test_writeback(folio)) {
+			mapping = folio_mapping(folio);
+
 			/* Case 1 above */
 			if (current_is_kswapd() &&
 			    folio_test_reclaim(folio) &&
@@ -1165,7 +1168,8 @@
 			/* Case 2 above */
 			} else if (writeback_throttling_sane(sc) ||
 			    !folio_test_reclaim(folio) ||
-			    !may_enter_fs(folio, sc->gfp_mask)) {
+			    !may_enter_fs(folio, sc->gfp_mask) ||
+			    (mapping && mapping_writeback_indeterminate(mapping))) {
 				/*
 				 * This is slightly racy -
 				 * folio_end_writeback() might have

From patchwork Fri Nov 22 23:23:57 2024
From: Joanne Koong
To: miklos@szeredi.hu, linux-fsdevel@vger.kernel.org
Cc: shakeel.butt@linux.dev, jefflexu@linux.alibaba.com, josef@toxicpanda.com,
    bernd.schubert@fastmail.fm, linux-mm@kvack.org, kernel-team@meta.com
Subject: [PATCH v6 3/5] fs/writeback: in wait_sb_inodes(), skip wait for AS_WRITEBACK_INDETERMINATE mappings
Date: Fri, 22 Nov 2024 15:23:57 -0800
Message-ID: <20241122232359.429647-4-joannelkoong@gmail.com>
In-Reply-To: <20241122232359.429647-1-joannelkoong@gmail.com>

For filesystems with the AS_WRITEBACK_INDETERMINATE flag set, writeback
operations may take an indeterminate amount of time to complete. For
example, writing data back to disk in FUSE filesystems depends on the
userspace server successfully completing writeback.

In this commit, wait_sb_inodes() skips waiting on writeback if the inode's
mapping has AS_WRITEBACK_INDETERMINATE set; otherwise, sync(2) may take an
indeterminate amount of time to complete. If the caller wishes to ensure
the data for a mapping with the AS_WRITEBACK_INDETERMINATE flag set has
actually been written back to disk, they should use fsync(2)/fdatasync(2)
instead.
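For a concrete picture of the guidance above, a minimal userspace sketch
(hypothetical paths, error handling trimmed to essentials) that obtains
durability for one file on such a filesystem via fsync(2) rather than
relying on sync(2):

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		/* Hypothetical file on a FUSE (indeterminate-writeback) mount. */
		int fd = open("/mnt/fuse/data.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
		if (fd < 0)
			return 1;
		if (write(fd, "record\n", 7) != 7)
			perror("write");
		/* sync(2) no longer waits on this mapping; fsync(2) still does. */
		if (fsync(fd) != 0)
			perror("fsync");
		return close(fd) != 0;
	}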
Signed-off-by: Joanne Koong
Reviewed-by: Jingbo Xu
---
 fs/fs-writeback.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index d8bec3c1bb1f..ad192db17ce4 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -2659,6 +2659,9 @@ static void wait_sb_inodes(struct super_block *sb)
 		if (!mapping_tagged(mapping, PAGECACHE_TAG_WRITEBACK))
 			continue;
 
+		if (mapping_writeback_indeterminate(mapping))
+			continue;
+
 		spin_unlock_irq(&sb->s_inode_wblist_lock);
 		spin_lock(&inode->i_lock);

From patchwork Fri Nov 22 23:23:58 2024
From: Joanne Koong
To: miklos@szeredi.hu, linux-fsdevel@vger.kernel.org
Cc: shakeel.butt@linux.dev, jefflexu@linux.alibaba.com, josef@toxicpanda.com,
    bernd.schubert@fastmail.fm, linux-mm@kvack.org, kernel-team@meta.com
Subject: [PATCH v6 4/5] mm/migrate: skip migrating folios under writeback with AS_WRITEBACK_INDETERMINATE mappings
Date: Fri, 22 Nov 2024 15:23:58 -0800
Message-ID: <20241122232359.429647-5-joannelkoong@gmail.com>
In-Reply-To: <20241122232359.429647-1-joannelkoong@gmail.com>

For migrations called in MIGRATE_SYNC mode, skip migrating the folio if it
is under writeback and has the AS_WRITEBACK_INDETERMINATE flag set on its
mapping. If the AS_WRITEBACK_INDETERMINATE flag is set on the mapping, the
writeback may take an indeterminate amount of time to complete, and waits
may get stuck.
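A condensed restatement of the policy this patch adds to the writeback case
in migrate_folio_unmap() (simplified, not the literal diff below; the
example_* helper name is illustrative only):

	#include <linux/migrate_mode.h>
	#include <linux/pagemap.h>

	/* Folio is under writeback: may migration block until it completes? */
	static bool example_may_wait_for_writeback(struct folio *src,
						   enum migrate_mode mode)
	{
		/* Only MIGRATE_SYNC callers are allowed to wait at all ... */
		if (mode != MIGRATE_SYNC)
			return false;
		/* ... and only when writeback completion time is bounded. */
		return !(src->mapping &&
			 mapping_writeback_indeterminate(src->mapping));
	}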
Signed-off-by: Joanne Koong
Reviewed-by: Shakeel Butt
---
 mm/migrate.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index df91248755e4..fe73284e5246 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1260,7 +1260,10 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 		 */
 		switch (mode) {
 		case MIGRATE_SYNC:
-			break;
+			if (!src->mapping ||
+			    !mapping_writeback_indeterminate(src->mapping))
+				break;
+			fallthrough;
 		default:
 			rc = -EBUSY;
 			goto out;

From patchwork Fri Nov 22 23:23:59 2024
From: Joanne Koong
To: miklos@szeredi.hu, linux-fsdevel@vger.kernel.org
Cc: shakeel.butt@linux.dev, jefflexu@linux.alibaba.com, josef@toxicpanda.com,
    bernd.schubert@fastmail.fm, linux-mm@kvack.org, kernel-team@meta.com
Subject: [PATCH v6 5/5] fuse: remove tmp folio for writebacks and internal rb tree
Date: Fri, 22 Nov 2024 15:23:59 -0800
Message-ID: <20241122232359.429647-6-joannelkoong@gmail.com>
In-Reply-To: <20241122232359.429647-1-joannelkoong@gmail.com>

In the current FUSE writeback design (see commit 3be5a52b30aa ("fuse:
support writable mmap")), a temp page is allocated for every dirty page to
be written back, the contents of the dirty page are copied over to the temp
page, and the temp page gets handed to the server to write back.

This is done so that writeback may be immediately cleared on the dirty
page, and this in turn is done for two reasons:

a) in order to mitigate the following deadlock scenario that may arise if
   reclaim waits on writeback on the dirty page to complete:
   * a single-threaded FUSE server is in the middle of handling a request
     that needs a memory allocation
   * the memory allocation triggers direct reclaim
   * direct reclaim waits on a folio under writeback
   * the FUSE server can't write back the folio since it's stuck in direct
     reclaim

b) in order to unblock internal (e.g. sync, page compaction) waits on
   writeback without needing the server to complete writing back to disk,
   which may take an indeterminate amount of time.

With the preceding patches adding AS_WRITEBACK_INDETERMINATE and mitigating
the situations described above, FUSE writeback does not need to use temp
pages if it sets AS_WRITEBACK_INDETERMINATE on its inode mappings. This
commit sets AS_WRITEBACK_INDETERMINATE on the inode mappings and removes
the temporary pages, the extra copying, and the internal rb tree.
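The core opt-in, as it appears in the fs/fuse/file.c hunk for
fuse_init_file_inode() near the end of the diff below (condensed excerpt
shown here for readability; the comment is added for context):

	void fuse_init_file_inode(struct inode *inode, unsigned int flags)
	{
		struct fuse_inode *fi = get_fuse_inode(inode);
		struct fuse_conn *fc = get_fuse_conn(inode);

		inode->i_fop = &fuse_file_operations;
		inode->i_data.a_ops = &fuse_file_aops;
		/* Writeback completion depends on the userspace server. */
		if (fc->writeback_cache)
			mapping_set_writeback_indeterminate(&inode->i_data);
		...
	}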
fio benchmarks -- (using averages observed from 10 runs, throwing away
outliers)

Setup:
sudo mount -t tmpfs -o size=30G tmpfs ~/tmp_mount
./libfuse/build/example/passthrough_ll -o writeback -o max_threads=4 -o source=~/tmp_mount ~/fuse_mount

fio --name=writeback --ioengine=sync --rw=write --bs={1k,4k,1M} --size=2G
    --numjobs=2 --ramp_time=30 --group_reporting=1 --directory=/root/fuse_mount

   bs =      1k          4k           1M
   Before    351 MiB/s   1818 MiB/s   1851 MiB/s
   After     341 MiB/s   2246 MiB/s   2685 MiB/s
   % diff    -3%         23%          45%

Signed-off-by: Joanne Koong
Reviewed-by: Jingbo Xu
---
 fs/fuse/file.c   | 360 ++++------------------------------------------
 fs/fuse/fuse_i.h |   3 -
 2 files changed, 28 insertions(+), 335 deletions(-)

diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 88d0946b5bc9..1970d1a699a6 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -415,89 +415,11 @@ u64 fuse_lock_owner_id(struct fuse_conn *fc, fl_owner_t id)
 
 struct fuse_writepage_args {
 	struct fuse_io_args ia;
-	struct rb_node writepages_entry;
 	struct list_head queue_entry;
-	struct fuse_writepage_args *next;
 	struct inode *inode;
 	struct fuse_sync_bucket *bucket;
 };
 
-static struct fuse_writepage_args *fuse_find_writeback(struct fuse_inode *fi,
-					pgoff_t idx_from, pgoff_t idx_to)
-{
-	struct rb_node *n;
-
-	n = fi->writepages.rb_node;
-
-	while (n) {
-		struct fuse_writepage_args *wpa;
-		pgoff_t curr_index;
-
-		wpa = rb_entry(n, struct fuse_writepage_args, writepages_entry);
-		WARN_ON(get_fuse_inode(wpa->inode) != fi);
-		curr_index = wpa->ia.write.in.offset >> PAGE_SHIFT;
-		if (idx_from >= curr_index + wpa->ia.ap.num_folios)
-			n = n->rb_right;
-		else if (idx_to < curr_index)
-			n = n->rb_left;
-		else
-			return wpa;
-	}
-	return NULL;
-}
-
-/*
- * Check if any page in a range is under writeback
- */
-static bool fuse_range_is_writeback(struct inode *inode, pgoff_t idx_from,
-				   pgoff_t idx_to)
-{
-	struct fuse_inode *fi = get_fuse_inode(inode);
-	bool found;
-
-	if (RB_EMPTY_ROOT(&fi->writepages))
-		return false;
-
-	spin_lock(&fi->lock);
-	found = fuse_find_writeback(fi, idx_from, idx_to);
-	spin_unlock(&fi->lock);
-
-	return found;
-}
-
-static inline bool fuse_page_is_writeback(struct inode *inode, pgoff_t index)
-{
-	return fuse_range_is_writeback(inode, index, index);
-}
-
-/*
- * Wait for page writeback to be completed.
- *
- * Since fuse doesn't rely on the VM writeback tracking, this has to
- * use some other means.
- */
-static void fuse_wait_on_page_writeback(struct inode *inode, pgoff_t index)
-{
-	struct fuse_inode *fi = get_fuse_inode(inode);
-
-	wait_event(fi->page_waitq, !fuse_page_is_writeback(inode, index));
-}
-
-static inline bool fuse_folio_is_writeback(struct inode *inode,
-					   struct folio *folio)
-{
-	pgoff_t last = folio_next_index(folio) - 1;
-	return fuse_range_is_writeback(inode, folio_index(folio), last);
-}
-
-static void fuse_wait_on_folio_writeback(struct inode *inode,
-					 struct folio *folio)
-{
-	struct fuse_inode *fi = get_fuse_inode(inode);
-
-	wait_event(fi->page_waitq, !fuse_folio_is_writeback(inode, folio));
-}
-
 /*
  * Wait for all pending writepages on the inode to finish.
  *
@@ -886,13 +808,6 @@ static int fuse_do_readfolio(struct file *file, struct folio *folio)
 	ssize_t res;
 	u64 attr_ver;
 
-	/*
-	 * With the temporary pages that are used to complete writeback, we can
-	 * have writeback that extends beyond the lifetime of the folio. So
-	 * make sure we read a properly synced folio.
- */ - fuse_wait_on_folio_writeback(inode, folio); - attr_ver = fuse_get_attr_version(fm->fc); /* Don't overflow end offset */ @@ -1003,17 +918,12 @@ static void fuse_send_readpages(struct fuse_io_args *ia, struct file *file) static void fuse_readahead(struct readahead_control *rac) { struct inode *inode = rac->mapping->host; - struct fuse_inode *fi = get_fuse_inode(inode); struct fuse_conn *fc = get_fuse_conn(inode); unsigned int max_pages, nr_pages; - pgoff_t first = readahead_index(rac); - pgoff_t last = first + readahead_count(rac) - 1; if (fuse_is_bad(inode)) return; - wait_event(fi->page_waitq, !fuse_range_is_writeback(inode, first, last)); - max_pages = min_t(unsigned int, fc->max_pages, fc->max_read / PAGE_SIZE); @@ -1172,7 +1082,7 @@ static ssize_t fuse_send_write_pages(struct fuse_io_args *ia, int err; for (i = 0; i < ap->num_folios; i++) - fuse_wait_on_folio_writeback(inode, ap->folios[i]); + folio_wait_writeback(ap->folios[i]); fuse_write_args_fill(ia, ff, pos, count); ia->write.in.flags = fuse_write_flags(iocb); @@ -1622,7 +1532,7 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter, return res; } } - if (!cuse && fuse_range_is_writeback(inode, idx_from, idx_to)) { + if (!cuse && filemap_range_has_writeback(mapping, pos, (pos + count - 1))) { if (!write) inode_lock(inode); fuse_sync_writes(inode); @@ -1819,38 +1729,34 @@ static ssize_t fuse_splice_write(struct pipe_inode_info *pipe, struct file *out, static void fuse_writepage_free(struct fuse_writepage_args *wpa) { struct fuse_args_pages *ap = &wpa->ia.ap; - int i; if (wpa->bucket) fuse_sync_bucket_dec(wpa->bucket); - for (i = 0; i < ap->num_folios; i++) - folio_put(ap->folios[i]); - fuse_file_put(wpa->ia.ff, false); kfree(ap->folios); kfree(wpa); } -static void fuse_writepage_finish_stat(struct inode *inode, struct folio *folio) -{ - struct backing_dev_info *bdi = inode_to_bdi(inode); - - dec_wb_stat(&bdi->wb, WB_WRITEBACK); - node_stat_sub_folio(folio, NR_WRITEBACK_TEMP); - wb_writeout_inc(&bdi->wb); -} - static void fuse_writepage_finish(struct fuse_writepage_args *wpa) { struct fuse_args_pages *ap = &wpa->ia.ap; struct inode *inode = wpa->inode; struct fuse_inode *fi = get_fuse_inode(inode); + struct backing_dev_info *bdi = inode_to_bdi(inode); int i; - for (i = 0; i < ap->num_folios; i++) - fuse_writepage_finish_stat(inode, ap->folios[i]); + for (i = 0; i < ap->num_folios; i++) { + /* + * Benchmarks showed that ending writeback within the + * scope of the fi->lock alleviates xarray lock + * contention and noticeably improves performance. 
+ */ + folio_end_writeback(ap->folios[i]); + dec_wb_stat(&bdi->wb, WB_WRITEBACK); + wb_writeout_inc(&bdi->wb); + } wake_up(&fi->page_waitq); } @@ -1861,7 +1767,6 @@ static void fuse_send_writepage(struct fuse_mount *fm, __releases(fi->lock) __acquires(fi->lock) { - struct fuse_writepage_args *aux, *next; struct fuse_inode *fi = get_fuse_inode(wpa->inode); struct fuse_write_in *inarg = &wpa->ia.write.in; struct fuse_args *args = &wpa->ia.ap.args; @@ -1898,19 +1803,8 @@ __acquires(fi->lock) out_free: fi->writectr--; - rb_erase(&wpa->writepages_entry, &fi->writepages); fuse_writepage_finish(wpa); spin_unlock(&fi->lock); - - /* After rb_erase() aux request list is private */ - for (aux = wpa->next; aux; aux = next) { - next = aux->next; - aux->next = NULL; - fuse_writepage_finish_stat(aux->inode, - aux->ia.ap.folios[0]); - fuse_writepage_free(aux); - } - fuse_writepage_free(wpa); spin_lock(&fi->lock); } @@ -1938,43 +1832,6 @@ __acquires(fi->lock) } } -static struct fuse_writepage_args *fuse_insert_writeback(struct rb_root *root, - struct fuse_writepage_args *wpa) -{ - pgoff_t idx_from = wpa->ia.write.in.offset >> PAGE_SHIFT; - pgoff_t idx_to = idx_from + wpa->ia.ap.num_folios - 1; - struct rb_node **p = &root->rb_node; - struct rb_node *parent = NULL; - - WARN_ON(!wpa->ia.ap.num_folios); - while (*p) { - struct fuse_writepage_args *curr; - pgoff_t curr_index; - - parent = *p; - curr = rb_entry(parent, struct fuse_writepage_args, - writepages_entry); - WARN_ON(curr->inode != wpa->inode); - curr_index = curr->ia.write.in.offset >> PAGE_SHIFT; - - if (idx_from >= curr_index + curr->ia.ap.num_folios) - p = &(*p)->rb_right; - else if (idx_to < curr_index) - p = &(*p)->rb_left; - else - return curr; - } - - rb_link_node(&wpa->writepages_entry, parent, p); - rb_insert_color(&wpa->writepages_entry, root); - return NULL; -} - -static void tree_insert(struct rb_root *root, struct fuse_writepage_args *wpa) -{ - WARN_ON(fuse_insert_writeback(root, wpa)); -} - static void fuse_writepage_end(struct fuse_mount *fm, struct fuse_args *args, int error) { @@ -1994,41 +1851,6 @@ static void fuse_writepage_end(struct fuse_mount *fm, struct fuse_args *args, if (!fc->writeback_cache) fuse_invalidate_attr_mask(inode, FUSE_STATX_MODIFY); spin_lock(&fi->lock); - rb_erase(&wpa->writepages_entry, &fi->writepages); - while (wpa->next) { - struct fuse_mount *fm = get_fuse_mount(inode); - struct fuse_write_in *inarg = &wpa->ia.write.in; - struct fuse_writepage_args *next = wpa->next; - - wpa->next = next->next; - next->next = NULL; - tree_insert(&fi->writepages, next); - - /* - * Skip fuse_flush_writepages() to make it easy to crop requests - * based on primary request size. - * - * 1st case (trivial): there are no concurrent activities using - * fuse_set/release_nowrite. Then we're on safe side because - * fuse_flush_writepages() would call fuse_send_writepage() - * anyway. - * - * 2nd case: someone called fuse_set_nowrite and it is waiting - * now for completion of all in-flight requests. This happens - * rarely and no more than once per page, so this should be - * okay. - * - * 3rd case: someone (e.g. fuse_do_setattr()) is in the middle - * of fuse_set_nowrite..fuse_release_nowrite section. The fact - * that fuse_set_nowrite returned implies that all in-flight - * requests were completed along with all of their secondary - * requests. Further primary requests are blocked by negative - * writectr. 
Hence there cannot be any in-flight requests and - * no invocations of fuse_writepage_end() while we're in - * fuse_set_nowrite..fuse_release_nowrite section. - */ - fuse_send_writepage(fm, next, inarg->offset + inarg->size); - } fi->writectr--; fuse_writepage_finish(wpa); spin_unlock(&fi->lock); @@ -2115,19 +1937,16 @@ static void fuse_writepage_add_to_bucket(struct fuse_conn *fc, } static void fuse_writepage_args_page_fill(struct fuse_writepage_args *wpa, struct folio *folio, - struct folio *tmp_folio, uint32_t folio_index) + uint32_t folio_index) { struct inode *inode = folio->mapping->host; struct fuse_args_pages *ap = &wpa->ia.ap; - folio_copy(tmp_folio, folio); - - ap->folios[folio_index] = tmp_folio; + ap->folios[folio_index] = folio; ap->descs[folio_index].offset = 0; ap->descs[folio_index].length = PAGE_SIZE; inc_wb_stat(&inode_to_bdi(inode)->wb, WB_WRITEBACK); - node_stat_add_folio(tmp_folio, NR_WRITEBACK_TEMP); } static struct fuse_writepage_args *fuse_writepage_args_setup(struct folio *folio, @@ -2162,18 +1981,12 @@ static int fuse_writepage_locked(struct folio *folio) struct fuse_inode *fi = get_fuse_inode(inode); struct fuse_writepage_args *wpa; struct fuse_args_pages *ap; - struct folio *tmp_folio; struct fuse_file *ff; - int error = -ENOMEM; + int error = -EIO; - tmp_folio = folio_alloc(GFP_NOFS | __GFP_HIGHMEM, 0); - if (!tmp_folio) - goto err; - - error = -EIO; ff = fuse_write_file_get(fi); if (!ff) - goto err_nofile; + goto err; wpa = fuse_writepage_args_setup(folio, ff); error = -ENOMEM; @@ -2184,22 +1997,17 @@ static int fuse_writepage_locked(struct folio *folio) ap->num_folios = 1; folio_start_writeback(folio); - fuse_writepage_args_page_fill(wpa, folio, tmp_folio, 0); + fuse_writepage_args_page_fill(wpa, folio, 0); spin_lock(&fi->lock); - tree_insert(&fi->writepages, wpa); list_add_tail(&wpa->queue_entry, &fi->queued_writes); fuse_flush_writepages(inode); spin_unlock(&fi->lock); - folio_end_writeback(folio); - return 0; err_writepage_args: fuse_file_put(ff, false); -err_nofile: - folio_put(tmp_folio); err: mapping_set_error(folio->mapping, error); return error; @@ -2209,7 +2017,6 @@ struct fuse_fill_wb_data { struct fuse_writepage_args *wpa; struct fuse_file *ff; struct inode *inode; - struct folio **orig_folios; unsigned int max_folios; }; @@ -2244,69 +2051,11 @@ static void fuse_writepages_send(struct fuse_fill_wb_data *data) struct fuse_writepage_args *wpa = data->wpa; struct inode *inode = data->inode; struct fuse_inode *fi = get_fuse_inode(inode); - int num_folios = wpa->ia.ap.num_folios; - int i; spin_lock(&fi->lock); list_add_tail(&wpa->queue_entry, &fi->queued_writes); fuse_flush_writepages(inode); spin_unlock(&fi->lock); - - for (i = 0; i < num_folios; i++) - folio_end_writeback(data->orig_folios[i]); -} - -/* - * Check under fi->lock if the page is under writeback, and insert it onto the - * rb_tree if not. Otherwise iterate auxiliary write requests, to see if there's - * one already added for a page at this offset. If there's none, then insert - * this new request onto the auxiliary list, otherwise reuse the existing one by - * swapping the new temp page with the old one. 
- */ -static bool fuse_writepage_add(struct fuse_writepage_args *new_wpa, - struct folio *folio) -{ - struct fuse_inode *fi = get_fuse_inode(new_wpa->inode); - struct fuse_writepage_args *tmp; - struct fuse_writepage_args *old_wpa; - struct fuse_args_pages *new_ap = &new_wpa->ia.ap; - - WARN_ON(new_ap->num_folios != 0); - new_ap->num_folios = 1; - - spin_lock(&fi->lock); - old_wpa = fuse_insert_writeback(&fi->writepages, new_wpa); - if (!old_wpa) { - spin_unlock(&fi->lock); - return true; - } - - for (tmp = old_wpa->next; tmp; tmp = tmp->next) { - pgoff_t curr_index; - - WARN_ON(tmp->inode != new_wpa->inode); - curr_index = tmp->ia.write.in.offset >> PAGE_SHIFT; - if (curr_index == folio->index) { - WARN_ON(tmp->ia.ap.num_folios != 1); - swap(tmp->ia.ap.folios[0], new_ap->folios[0]); - break; - } - } - - if (!tmp) { - new_wpa->next = old_wpa->next; - old_wpa->next = new_wpa; - } - - spin_unlock(&fi->lock); - - if (tmp) { - fuse_writepage_finish_stat(new_wpa->inode, - folio); - fuse_writepage_free(new_wpa); - } - - return false; } static bool fuse_writepage_need_send(struct fuse_conn *fc, struct folio *folio, @@ -2315,15 +2064,6 @@ static bool fuse_writepage_need_send(struct fuse_conn *fc, struct folio *folio, { WARN_ON(!ap->num_folios); - /* - * Being under writeback is unlikely but possible. For example direct - * read to an mmaped fuse file will set the page dirty twice; once when - * the pages are faulted with get_user_pages(), and then after the read - * completed. - */ - if (fuse_folio_is_writeback(data->inode, folio)) - return true; - /* Reached max pages */ if (ap->num_folios == fc->max_pages) return true; @@ -2333,7 +2073,7 @@ static bool fuse_writepage_need_send(struct fuse_conn *fc, struct folio *folio, return true; /* Discontinuity */ - if (data->orig_folios[ap->num_folios - 1]->index + 1 != folio_index(folio)) + if (ap->folios[ap->num_folios - 1]->index + 1 != folio_index(folio)) return true; /* Need to grow the pages array? If so, did the expansion fail? */ @@ -2352,7 +2092,6 @@ static int fuse_writepages_fill(struct folio *folio, struct inode *inode = data->inode; struct fuse_inode *fi = get_fuse_inode(inode); struct fuse_conn *fc = get_fuse_conn(inode); - struct folio *tmp_folio; int err; if (!data->ff) { @@ -2367,54 +2106,23 @@ static int fuse_writepages_fill(struct folio *folio, data->wpa = NULL; } - err = -ENOMEM; - tmp_folio = folio_alloc(GFP_NOFS | __GFP_HIGHMEM, 0); - if (!tmp_folio) - goto out_unlock; - - /* - * The page must not be redirtied until the writeout is completed - * (i.e. userspace has sent a reply to the write request). Otherwise - * there could be more than one temporary page instance for each real - * page. - * - * This is ensured by holding the page lock in page_mkwrite() while - * checking fuse_page_is_writeback(). We already hold the page lock - * since clear_page_dirty_for_io() and keep it held until we add the - * request to the fi->writepages list and increment ap->num_folios. - * After this fuse_page_is_writeback() will indicate that the page is - * under writeback, so we can release the page lock. 
- */ if (data->wpa == NULL) { err = -ENOMEM; wpa = fuse_writepage_args_setup(folio, data->ff); - if (!wpa) { - folio_put(tmp_folio); + if (!wpa) goto out_unlock; - } fuse_file_get(wpa->ia.ff); data->max_folios = 1; ap = &wpa->ia.ap; } folio_start_writeback(folio); - fuse_writepage_args_page_fill(wpa, folio, tmp_folio, ap->num_folios); - data->orig_folios[ap->num_folios] = folio; + fuse_writepage_args_page_fill(wpa, folio, ap->num_folios); err = 0; - if (data->wpa) { - /* - * Protected by fi->lock against concurrent access by - * fuse_page_is_writeback(). - */ - spin_lock(&fi->lock); - ap->num_folios++; - spin_unlock(&fi->lock); - } else if (fuse_writepage_add(wpa, folio)) { + ap->num_folios++; + if (!data->wpa) data->wpa = wpa; - } else { - folio_end_writeback(folio); - } out_unlock: folio_unlock(folio); @@ -2441,13 +2149,6 @@ static int fuse_writepages(struct address_space *mapping, data.wpa = NULL; data.ff = NULL; - err = -ENOMEM; - data.orig_folios = kcalloc(fc->max_pages, - sizeof(struct folio *), - GFP_NOFS); - if (!data.orig_folios) - goto out; - err = write_cache_pages(mapping, wbc, fuse_writepages_fill, &data); if (data.wpa) { WARN_ON(!data.wpa->ia.ap.num_folios); @@ -2456,7 +2157,6 @@ static int fuse_writepages(struct address_space *mapping, if (data.ff) fuse_file_put(data.ff, false); - kfree(data.orig_folios); out: return err; } @@ -2481,8 +2181,6 @@ static int fuse_write_begin(struct file *file, struct address_space *mapping, if (IS_ERR(folio)) goto error; - fuse_wait_on_page_writeback(mapping->host, folio->index); - if (folio_test_uptodate(folio) || len >= folio_size(folio)) goto success; /* @@ -2545,13 +2243,9 @@ static int fuse_launder_folio(struct folio *folio) { int err = 0; if (folio_clear_dirty_for_io(folio)) { - struct inode *inode = folio->mapping->host; - - /* Serialize with pending writeback for the same page */ - fuse_wait_on_page_writeback(inode, folio->index); err = fuse_writepage_locked(folio); if (!err) - fuse_wait_on_page_writeback(inode, folio->index); + folio_wait_writeback(folio); } return err; } @@ -2595,7 +2289,7 @@ static vm_fault_t fuse_page_mkwrite(struct vm_fault *vmf) return VM_FAULT_NOPAGE; } - fuse_wait_on_folio_writeback(inode, folio); + folio_wait_writeback(folio); return VM_FAULT_LOCKED; } @@ -3413,9 +3107,12 @@ static const struct address_space_operations fuse_file_aops = { void fuse_init_file_inode(struct inode *inode, unsigned int flags) { struct fuse_inode *fi = get_fuse_inode(inode); + struct fuse_conn *fc = get_fuse_conn(inode); inode->i_fop = &fuse_file_operations; inode->i_data.a_ops = &fuse_file_aops; + if (fc->writeback_cache) + mapping_set_writeback_indeterminate(&inode->i_data); INIT_LIST_HEAD(&fi->write_files); INIT_LIST_HEAD(&fi->queued_writes); @@ -3423,7 +3120,6 @@ void fuse_init_file_inode(struct inode *inode, unsigned int flags) fi->iocachectr = 0; init_waitqueue_head(&fi->page_waitq); init_waitqueue_head(&fi->direct_io_waitq); - fi->writepages = RB_ROOT; if (IS_ENABLED(CONFIG_FUSE_DAX)) fuse_dax_inode_init(inode, flags); diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h index 74744c6f2860..23736c5c64c1 100644 --- a/fs/fuse/fuse_i.h +++ b/fs/fuse/fuse_i.h @@ -141,9 +141,6 @@ struct fuse_inode { /* waitq for direct-io completion */ wait_queue_head_t direct_io_waitq; - - /* List of writepage requestst (pending or sent) */ - struct rb_root writepages; }; /* readdir cache (directory only) */