From patchwork Wed Jun 1 21:01:27 2022
X-Patchwork-Submitter: Stefan Roesch
X-Patchwork-Id: 12867235
From: Stefan Roesch
Subject: [PATCH v7 01/15] mm: Move starting of background writeback into the main balancing loop
Date: Wed, 1 Jun 2022 14:01:27 -0700
Message-ID: <20220601210141.3773402-2-shr@fb.com>
In-Reply-To: <20220601210141.3773402-1-shr@fb.com>
X-Mailing-List: io-uring@vger.kernel.org

From: Jan Kara

We start background writeback if we are over the background threshold after
exiting the main loop in balance_dirty_pages(). This may result in basing the
decision on already stale values (we may have slept for a significant amount
of time), and it is also inconvenient for the refactoring needed for async
dirty throttling. Move the check into the main waiting loop.
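A simplified control-flow sketch of what the move amounts to (pseudocode, not
the actual diff that follows; the names are those used in
balance_dirty_pages()):

	/* Before: the kick ran once, after the throttling loop, on possibly
	 * stale counters. */
	for (;;) {
		/* ... recompute nr_reclaimable and thresholds, maybe sleep ... */
	}
	if (!laptop_mode && nr_reclaimable > gdtc->bg_thresh &&
	    !writeback_in_progress(wb))
		wb_start_background_writeback(wb);

	/* After: the same check runs on every iteration, right after the
	 * counters are recomputed, so it never acts on values from before a
	 * sleep. */
	for (;;) {
		/* ... recompute nr_reclaimable and thresholds ... */
		if (!laptop_mode && nr_reclaimable > gdtc->bg_thresh &&
		    !writeback_in_progress(wb))
			wb_start_background_writeback(wb);
		/* ... throttle / sleep ... */
	}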
Signed-off-by: Jan Kara Signed-off-by: Stefan Roesch --- mm/page-writeback.c | 31 ++++++++++++++----------------- 1 file changed, 14 insertions(+), 17 deletions(-) diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 55c2776ae699..e59c523aed1a 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -1627,6 +1627,19 @@ static void balance_dirty_pages(struct bdi_writeback *wb, } } + /* + * In laptop mode, we wait until hitting the higher threshold + * before starting background writeout, and then write out all + * the way down to the lower threshold. So slow writers cause + * minimal disk activity. + * + * In normal mode, we start background writeout at the lower + * background_thresh, to keep the amount of dirty memory low. + */ + if (!laptop_mode && nr_reclaimable > gdtc->bg_thresh && + !writeback_in_progress(wb)) + wb_start_background_writeback(wb); + /* * Throttle it only when the background writeback cannot * catch-up. This avoids (excessively) small writeouts @@ -1657,6 +1670,7 @@ static void balance_dirty_pages(struct bdi_writeback *wb, break; } + /* Start writeback even when in laptop mode */ if (unlikely(!writeback_in_progress(wb))) wb_start_background_writeback(wb); @@ -1823,23 +1837,6 @@ static void balance_dirty_pages(struct bdi_writeback *wb, if (!dirty_exceeded && wb->dirty_exceeded) wb->dirty_exceeded = 0; - - if (writeback_in_progress(wb)) - return; - - /* - * In laptop mode, we wait until hitting the higher threshold before - * starting background writeout, and then write out all the way down - * to the lower threshold. So slow writers cause minimal disk activity. - * - * In normal mode, we start background writeout at the lower - * background_thresh, to keep the amount of dirty memory low. - */ - if (laptop_mode) - return; - - if (nr_reclaimable > gdtc->bg_thresh) - wb_start_background_writeback(wb); } static DEFINE_PER_CPU(int, bdp_ratelimits); From patchwork Wed Jun 1 21:01:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefan Roesch X-Patchwork-Id: 12867233 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C34E3C43334 for ; Wed, 1 Jun 2022 21:01:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230518AbiFAVBt (ORCPT ); Wed, 1 Jun 2022 17:01:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59180 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230436AbiFAVBt (ORCPT ); Wed, 1 Jun 2022 17:01:49 -0400 Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 790161BF168 for ; Wed, 1 Jun 2022 14:01:48 -0700 (PDT) Received: from pps.filterd (m0148461.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 251K52TX028648 for ; Wed, 1 Jun 2022 14:01:48 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fb.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=facebook; bh=kEVeF1raHjzqCCTZ1PYGpmk3OQwPxUvEougAHhggs/M=; b=MIQLVDM65hkWiJ0lj6EVBtpJJYyYIf98ZkTWo1bMxOEy0y4v9tHTqrLZMDRFnnGQdBNi kxmJQW35KHX88Mf2FMrS70P0sgPJ7aRlnDnVRolIVskefxOf5xtsWa7upTHJNv+N6tCy scjCwY3fbHsQKcFPW1kcApydM3OYFm2c2J4= Received: from 
From: Stefan Roesch
Subject: [PATCH v7 02/15] mm: Move updates of dirty_exceeded into one place
Date: Wed, 1 Jun 2022 14:01:28 -0700
Message-ID: <20220601210141.3773402-3-shr@fb.com>
In-Reply-To: <20220601210141.3773402-1-shr@fb.com>
X-Mailing-List: io-uring@vger.kernel.org

From: Jan Kara

The transition of wb->dirty_exceeded from 0 to 1 happens before we go to
sleep in balance_dirty_pages(), while the transition from 1 to 0 happens when
exiting from balance_dirty_pages(), possibly based on old values. This does
not make a lot of sense, since wb->dirty_exceeded should simply reflect
whether wb is over the dirty limit, and so we should ratelimit entering
balance_dirty_pages() less. Move the two updates together.
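For context on why a timely dirty_exceeded matters (this is a paraphrase of
existing mm/page-writeback.c logic, not part of this patch, and the helper
name is invented for illustration): when the flag is set, the per-task
dirtying ratelimit shrinks, so tasks re-enter balance_dirty_pages() much
sooner while the wb is over its limit.

	/* Illustrative helper, paraphrasing balance_dirty_pages_ratelimited(). */
	static unsigned long task_dirty_ratelimit_sketch(struct bdi_writeback *wb)
	{
		unsigned long ratelimit = current->nr_dirtied_pause;

		if (wb->dirty_exceeded)
			ratelimit = min(ratelimit, 32 >> (PAGE_SHIFT - 10));

		return ratelimit;
	}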
Signed-off-by: Jan Kara Signed-off-by: Stefan Roesch --- mm/page-writeback.c | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git a/mm/page-writeback.c b/mm/page-writeback.c index e59c523aed1a..90b1998c16a1 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -1729,8 +1729,8 @@ static void balance_dirty_pages(struct bdi_writeback *wb, sdtc = mdtc; } - if (dirty_exceeded && !wb->dirty_exceeded) - wb->dirty_exceeded = 1; + if (dirty_exceeded != wb->dirty_exceeded) + wb->dirty_exceeded = dirty_exceeded; if (time_is_before_jiffies(READ_ONCE(wb->bw_time_stamp) + BANDWIDTH_INTERVAL)) @@ -1834,9 +1834,6 @@ static void balance_dirty_pages(struct bdi_writeback *wb, if (fatal_signal_pending(current)) break; } - - if (!dirty_exceeded && wb->dirty_exceeded) - wb->dirty_exceeded = 0; } static DEFINE_PER_CPU(int, bdp_ratelimits); From patchwork Wed Jun 1 21:01:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefan Roesch X-Patchwork-Id: 12867263 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 86FBBCCA479 for ; Wed, 1 Jun 2022 21:04:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230421AbiFAVER (ORCPT ); Wed, 1 Jun 2022 17:04:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38458 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230359AbiFAVEQ (ORCPT ); Wed, 1 Jun 2022 17:04:16 -0400 Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8D4B325292 for ; Wed, 1 Jun 2022 14:04:15 -0700 (PDT) Received: from pps.filterd (m0044012.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 251GNQ6s025924 for ; Wed, 1 Jun 2022 14:04:15 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fb.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=facebook; bh=NdvWe67YNsLgboqtQSfBVdFzVUFCjXf2l75TyQfe1vU=; b=EUysm4El/eOViAMeAH0tM3FLt/zNbYtVZqTxzT4aO3xaxF4my03sLEjraKbXr5TZmdYV IIjIfjpecmC11Sv6AVd5+9B5ghgsViI6Z/HI6ZEzhM+Pnj4tGrb5Zq6uE25Jkg8ylMr9 C+7uMfjRr0cVJT2G9S+7WdXP8wt+wGNjFZQ= Received: from maileast.thefacebook.com ([163.114.130.16]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3ge144nedw-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 01 Jun 2022 14:04:14 -0700 Received: from twshared8508.05.ash9.facebook.com (2620:10d:c0a8:1b::d) by mail.thefacebook.com (2620:10d:c0a8:83::7) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.28; Wed, 1 Jun 2022 14:04:11 -0700 Received: by devvm225.atn0.facebook.com (Postfix, from userid 425415) id C1F7DFEB2398; Wed, 1 Jun 2022 14:01:42 -0700 (PDT) From: Stefan Roesch To: , , , , CC: , , , , , Christoph Hellwig Subject: [PATCH v7 03/15] mm: Add balance_dirty_pages_ratelimited_flags() function Date: Wed, 1 Jun 2022 14:01:29 -0700 Message-ID: <20220601210141.3773402-4-shr@fb.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220601210141.3773402-1-shr@fb.com> References: <20220601210141.3773402-1-shr@fb.com> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: z330cIzNIAyV3w4JBUWjpFgByTEuP4uj 
X-Proofpoint-ORIG-GUID: z330cIzNIAyV3w4JBUWjpFgByTEuP4uj X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.874,Hydra:6.0.517,FMLib:17.11.64.514 definitions=2022-06-01_08,2022-06-01_01,2022-02-23_01 Precedence: bulk List-ID: X-Mailing-List: io-uring@vger.kernel.org From: Jan Kara This adds the helper function balance_dirty_pages_ratelimited_flags(). It adds the parameter flags to balance_dirty_pages_ratelimited(). The flags parameter is passed to balance_dirty_pages(). For async buffered writes the flag value will be BDP_ASYNC. If balance_dirty_pages() gets called for async buffered write, we don't want to wait. Instead we need to indicate to the caller that throttling is needed so that it can stop writing and offload the rest of the write to a context that can block. The new helper function is also used by balance_dirty_pages_ratelimited(). Signed-off-by: Jan Kara Signed-off-by: Stefan Roesch Reviewed-by: Christoph Hellwig --- include/linux/writeback.h | 7 ++++++ mm/page-writeback.c | 48 +++++++++++++++++++++++++-------------- 2 files changed, 38 insertions(+), 17 deletions(-) diff --git a/include/linux/writeback.h b/include/linux/writeback.h index da21d63f70e2..b8c9610c2313 100644 --- a/include/linux/writeback.h +++ b/include/linux/writeback.h @@ -364,7 +364,14 @@ void global_dirty_limits(unsigned long *pbackground, unsigned long *pdirty); unsigned long wb_calc_thresh(struct bdi_writeback *wb, unsigned long thresh); void wb_update_bandwidth(struct bdi_writeback *wb); + +/* Invoke balance dirty pages in async mode. */ +#define BDP_ASYNC 0x0001 + void balance_dirty_pages_ratelimited(struct address_space *mapping); +int balance_dirty_pages_ratelimited_flags(struct address_space *mapping, + unsigned int flags); + bool wb_over_bg_thresh(struct bdi_writeback *wb); typedef int (*writepage_t)(struct page *page, struct writeback_control *wbc, diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 90b1998c16a1..684ab599438a 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -1554,8 +1554,8 @@ static inline void wb_dirty_limits(struct dirty_throttle_control *dtc) * If we're over `background_thresh' then the writeback threads are woken to * perform some writeout. 
*/ -static void balance_dirty_pages(struct bdi_writeback *wb, - unsigned long pages_dirtied) +static int balance_dirty_pages(struct bdi_writeback *wb, + unsigned long pages_dirtied, unsigned int flags) { struct dirty_throttle_control gdtc_stor = { GDTC_INIT(wb) }; struct dirty_throttle_control mdtc_stor = { MDTC_INIT(wb, &gdtc_stor) }; @@ -1575,6 +1575,7 @@ static void balance_dirty_pages(struct bdi_writeback *wb, struct backing_dev_info *bdi = wb->bdi; bool strictlimit = bdi->capabilities & BDI_CAP_STRICTLIMIT; unsigned long start_time = jiffies; + int ret = 0; for (;;) { unsigned long now = jiffies; @@ -1803,6 +1804,10 @@ static void balance_dirty_pages(struct bdi_writeback *wb, period, pause, start_time); + if (flags & BDP_ASYNC) { + ret = -EAGAIN; + break; + } __set_current_state(TASK_KILLABLE); wb->dirty_sleep = now; io_schedule_timeout(pause); @@ -1834,6 +1839,7 @@ static void balance_dirty_pages(struct bdi_writeback *wb, if (fatal_signal_pending(current)) break; } + return ret; } static DEFINE_PER_CPU(int, bdp_ratelimits); @@ -1854,28 +1860,18 @@ static DEFINE_PER_CPU(int, bdp_ratelimits); */ DEFINE_PER_CPU(int, dirty_throttle_leaks) = 0; -/** - * balance_dirty_pages_ratelimited - balance dirty memory state - * @mapping: address_space which was dirtied - * - * Processes which are dirtying memory should call in here once for each page - * which was newly dirtied. The function will periodically check the system's - * dirty state and will initiate writeback if needed. - * - * Once we're over the dirty memory limit we decrease the ratelimiting - * by a lot, to prevent individual processes from overshooting the limit - * by (ratelimit_pages) each. - */ -void balance_dirty_pages_ratelimited(struct address_space *mapping) +int balance_dirty_pages_ratelimited_flags(struct address_space *mapping, + unsigned int flags) { struct inode *inode = mapping->host; struct backing_dev_info *bdi = inode_to_bdi(inode); struct bdi_writeback *wb = NULL; int ratelimit; + int ret = 0; int *p; if (!(bdi->capabilities & BDI_CAP_WRITEBACK)) - return; + return ret; if (inode_cgwb_enabled(inode)) wb = wb_get_create_current(bdi, GFP_KERNEL); @@ -1915,9 +1911,27 @@ void balance_dirty_pages_ratelimited(struct address_space *mapping) preempt_enable(); if (unlikely(current->nr_dirtied >= ratelimit)) - balance_dirty_pages(wb, current->nr_dirtied); + balance_dirty_pages(wb, current->nr_dirtied, flags); wb_put(wb); + return ret; +} + +/** + * balance_dirty_pages_ratelimited - balance dirty memory state + * @mapping: address_space which was dirtied + * + * Processes which are dirtying memory should call in here once for each page + * which was newly dirtied. The function will periodically check the system's + * dirty state and will initiate writeback if needed. + * + * Once we're over the dirty memory limit we decrease the ratelimiting + * by a lot, to prevent individual processes from overshooting the limit + * by (ratelimit_pages) each. 
+ */ +void balance_dirty_pages_ratelimited(struct address_space *mapping) +{ + balance_dirty_pages_ratelimited_flags(mapping, 0); } EXPORT_SYMBOL(balance_dirty_pages_ratelimited); From patchwork Wed Jun 1 21:01:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefan Roesch X-Patchwork-Id: 12867262 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EA1E6C43334 for ; Wed, 1 Jun 2022 21:04:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230444AbiFAVEP (ORCPT ); Wed, 1 Jun 2022 17:04:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38400 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230420AbiFAVEP (ORCPT ); Wed, 1 Jun 2022 17:04:15 -0400 Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2D0F722C4BB for ; Wed, 1 Jun 2022 14:04:14 -0700 (PDT) Received: from pps.filterd (m0109333.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 251JHmTD008868 for ; Wed, 1 Jun 2022 14:04:14 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fb.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=facebook; bh=1DB32oSN1O4JWM49BdHQjWcRikEvvBQGUNBPsacN0tY=; b=FlxIkYikPNJGjN8Qid2xmYNKlAKofMyTYCmNT1YJUhXzHLorJsIA/taVLceRomkb36vW PA/dvJ9d995AlHfBkm4P/CD57tVw9m1Y8khk4ZX4wShzFs9Np78ApemFSQwyPyh91by6 X3HhJN91WiJJCqLAwusJuUliEGsa5pYCVAI= Received: from maileast.thefacebook.com ([163.114.130.16]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3ge5vcm9e1-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 01 Jun 2022 14:04:13 -0700 Received: from twshared10560.18.frc3.facebook.com (2620:10d:c0a8:1b::d) by mail.thefacebook.com (2620:10d:c0a8:83::5) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.28; Wed, 1 Jun 2022 14:04:13 -0700 Received: by devvm225.atn0.facebook.com (Postfix, from userid 425415) id C85A1FEB239A; Wed, 1 Jun 2022 14:01:42 -0700 (PDT) From: Stefan Roesch To: , , , , CC: , , , , , Christoph Hellwig Subject: [PATCH v7 04/15] iomap: Add flags parameter to iomap_page_create() Date: Wed, 1 Jun 2022 14:01:30 -0700 Message-ID: <20220601210141.3773402-5-shr@fb.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220601210141.3773402-1-shr@fb.com> References: <20220601210141.3773402-1-shr@fb.com> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: KOYlAC7bTxdpPjN7901QUSYVKk9-pkr_ X-Proofpoint-ORIG-GUID: KOYlAC7bTxdpPjN7901QUSYVKk9-pkr_ X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.874,Hydra:6.0.517,FMLib:17.11.64.514 definitions=2022-06-01_08,2022-06-01_01,2022-02-23_01 Precedence: bulk List-ID: X-Mailing-List: io-uring@vger.kernel.org Add the kiocb flags parameter to the function iomap_page_create(). Depending on the value of the flags parameter it enables different gfp flags. No intended functional changes in this patch. Signed-off-by: Stefan Roesch Reviewed-by: Jan Kara Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. 
Wong --- fs/iomap/buffered-io.c | 21 +++++++++++++++------ 1 file changed, 15 insertions(+), 6 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index d2a9f699e17e..705f80cd2d4e 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -44,16 +44,23 @@ static inline struct iomap_page *to_iomap_page(struct folio *folio) static struct bio_set iomap_ioend_bioset; static struct iomap_page * -iomap_page_create(struct inode *inode, struct folio *folio) +iomap_page_create(struct inode *inode, struct folio *folio, unsigned int flags) { struct iomap_page *iop = to_iomap_page(folio); unsigned int nr_blocks = i_blocks_per_folio(inode, folio); + gfp_t gfp; if (iop || nr_blocks <= 1) return iop; + if (flags & IOMAP_NOWAIT) + gfp = GFP_NOWAIT; + else + gfp = GFP_NOFS | __GFP_NOFAIL; + iop = kzalloc(struct_size(iop, uptodate, BITS_TO_LONGS(nr_blocks)), - GFP_NOFS | __GFP_NOFAIL); + gfp); + spin_lock_init(&iop->uptodate_lock); if (folio_test_uptodate(folio)) bitmap_fill(iop->uptodate, nr_blocks); @@ -226,7 +233,7 @@ static int iomap_read_inline_data(const struct iomap_iter *iter, if (WARN_ON_ONCE(size > iomap->length)) return -EIO; if (offset > 0) - iop = iomap_page_create(iter->inode, folio); + iop = iomap_page_create(iter->inode, folio, iter->flags); else iop = to_iomap_page(folio); @@ -264,7 +271,7 @@ static loff_t iomap_readpage_iter(const struct iomap_iter *iter, return iomap_read_inline_data(iter, folio); /* zero post-eof blocks as the page may be mapped */ - iop = iomap_page_create(iter->inode, folio); + iop = iomap_page_create(iter->inode, folio, iter->flags); iomap_adjust_read_range(iter->inode, folio, &pos, length, &poff, &plen); if (plen == 0) goto done; @@ -547,7 +554,7 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos, size_t len, struct folio *folio) { const struct iomap *srcmap = iomap_iter_srcmap(iter); - struct iomap_page *iop = iomap_page_create(iter->inode, folio); + struct iomap_page *iop; loff_t block_size = i_blocksize(iter->inode); loff_t block_start = round_down(pos, block_size); loff_t block_end = round_up(pos + len, block_size); @@ -558,6 +565,8 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos, return 0; folio_clear_error(folio); + iop = iomap_page_create(iter->inode, folio, iter->flags); + do { iomap_adjust_read_range(iter->inode, folio, &block_start, block_end - block_start, &poff, &plen); @@ -1329,7 +1338,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc, struct writeback_control *wbc, struct inode *inode, struct folio *folio, u64 end_pos) { - struct iomap_page *iop = iomap_page_create(inode, folio); + struct iomap_page *iop = iomap_page_create(inode, folio, 0); struct iomap_ioend *ioend, *next; unsigned len = i_blocksize(inode); unsigned nblocks = i_blocks_per_folio(inode, folio); From patchwork Wed Jun 1 21:01:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefan Roesch X-Patchwork-Id: 12867264 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0A3A2C433EF for ; Wed, 1 Jun 2022 21:04:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230516AbiFAVE2 (ORCPT ); Wed, 1 Jun 2022 17:04:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38622 "EHLO lindbergh.monkeyblade.net" 
From: Stefan Roesch
Cc: Christoph Hellwig
Subject: [PATCH v7 05/15] iomap: Add async buffered write support
Date: Wed, 1 Jun 2022 14:01:31 -0700
Message-ID: <20220601210141.3773402-6-shr@fb.com>
In-Reply-To: <20220601210141.3773402-1-shr@fb.com>
X-Mailing-List: io-uring@vger.kernel.org

This adds async buffered write support to iomap.

It replaces the call to balance_dirty_pages_ratelimited() with a call to
balance_dirty_pages_ratelimited_flags(), which allows the caller to specify
whether the write request is async or not.

In addition, it moves that call to the beginning of the function. If the call
sits at the end of the function and the decision is made to throttle writes,
there is no request that io-uring can wait on. With the call at the beginning
of the function, the write request is not issued at all and -EAGAIN is
returned instead; io-uring will then punt the request and process it in the
io-worker. As a consequence of moving the call, write throttling happens one
page later than before.
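To make the punt-and-retry flow concrete, here is a small self-contained
userspace illustration; the function names are invented and this is not
io_uring code, only the shape of the fallback that the -EAGAIN return enables:

	#include <errno.h>
	#include <stdio.h>

	/* Stand-in for ->write_iter() invoked with IOCB_NOWAIT set. */
	static int submit_nonblocking(int req)
	{
		return (req % 2) ? -EAGAIN : 0;	/* pretend odd requests would block */
	}

	/* Stand-in for redoing the write from a context that may sleep. */
	static int punt_to_worker(int req)
	{
		printf("req %d: completed from worker context\n", req);
		return 0;
	}

	int main(void)
	{
		for (int req = 0; req < 4; req++) {
			int ret = submit_nonblocking(req);

			if (ret == -EAGAIN)	/* io_uring would hand this to an io-worker */
				ret = punt_to_worker(req);
			else if (ret == 0)
				printf("req %d: completed inline\n", req);
		}
		return 0;
	}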
Signed-off-by: Stefan Roesch Reviewed-by: Jan Kara Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 33 ++++++++++++++++++++++++++++----- 1 file changed, 28 insertions(+), 5 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 705f80cd2d4e..b06a5c24a4db 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -558,6 +558,7 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos, loff_t block_size = i_blocksize(iter->inode); loff_t block_start = round_down(pos, block_size); loff_t block_end = round_up(pos + len, block_size); + unsigned int nr_blocks = i_blocks_per_folio(iter->inode, folio); size_t from = offset_in_folio(folio, pos), to = from + len; size_t poff, plen; @@ -566,6 +567,8 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos, folio_clear_error(folio); iop = iomap_page_create(iter->inode, folio, iter->flags); + if ((iter->flags & IOMAP_NOWAIT) && !iop && nr_blocks > 1) + return -EAGAIN; do { iomap_adjust_read_range(iter->inode, folio, &block_start, @@ -583,7 +586,12 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos, return -EIO; folio_zero_segments(folio, poff, from, to, poff + plen); } else { - int status = iomap_read_folio_sync(block_start, folio, + int status; + + if (iter->flags & IOMAP_NOWAIT) + return -EAGAIN; + + status = iomap_read_folio_sync(block_start, folio, poff, plen, srcmap); if (status) return status; @@ -612,6 +620,9 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos, unsigned fgp = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE | FGP_NOFS; int status = 0; + if (iter->flags & IOMAP_NOWAIT) + fgp |= FGP_NOWAIT; + BUG_ON(pos + len > iter->iomap.offset + iter->iomap.length); if (srcmap != &iter->iomap) BUG_ON(pos + len > srcmap->offset + srcmap->length); @@ -631,7 +642,7 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos, folio = __filemap_get_folio(iter->inode->i_mapping, pos >> PAGE_SHIFT, fgp, mapping_gfp_mask(iter->inode->i_mapping)); if (!folio) { - status = -ENOMEM; + status = (iter->flags & IOMAP_NOWAIT) ? -EAGAIN : -ENOMEM; goto out_no_page; } if (pos + len > folio_pos(folio) + folio_size(folio)) @@ -749,6 +760,8 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) loff_t pos = iter->pos; ssize_t written = 0; long status = 0; + struct address_space *mapping = iter->inode->i_mapping; + unsigned int bdp_flags = (iter->flags & IOMAP_NOWAIT) ? BDP_ASYNC : 0; do { struct folio *folio; @@ -761,6 +774,11 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) bytes = min_t(unsigned long, PAGE_SIZE - offset, iov_iter_count(i)); again: + status = balance_dirty_pages_ratelimited_flags(mapping, + bdp_flags); + if (unlikely(status)) + break; + if (bytes > length) bytes = length; @@ -769,6 +787,10 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) * Otherwise there's a nasty deadlock on copying from the * same page as we're writing to, without it being marked * up-to-date. + * + * For async buffered writes the assumption is that the user + * page has already been faulted in. This can be optimized by + * faulting the user page. 
*/ if (unlikely(fault_in_iov_iter_readable(i, bytes) == bytes)) { status = -EFAULT; @@ -780,7 +802,7 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) break; page = folio_file_page(folio, pos >> PAGE_SHIFT); - if (mapping_writably_mapped(iter->inode->i_mapping)) + if (mapping_writably_mapped(mapping)) flush_dcache_page(page); copied = copy_page_from_iter_atomic(page, offset, bytes, i); @@ -805,8 +827,6 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) pos += status; written += status; length -= status; - - balance_dirty_pages_ratelimited(iter->inode->i_mapping); } while (iov_iter_count(i) && length); return written ? written : status; @@ -824,6 +844,9 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i, }; int ret; + if (iocb->ki_flags & IOCB_NOWAIT) + iter.flags |= IOMAP_NOWAIT; + while ((ret = iomap_iter(&iter, ops)) > 0) iter.processed = iomap_write_iter(&iter, i); if (iter.pos == iocb->ki_pos) From patchwork Wed Jun 1 21:01:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefan Roesch X-Patchwork-Id: 12867236 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 967CCCCA479 for ; Wed, 1 Jun 2022 21:01:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231147AbiFAVB5 (ORCPT ); Wed, 1 Jun 2022 17:01:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59448 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231163AbiFAVBz (ORCPT ); Wed, 1 Jun 2022 17:01:55 -0400 Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 894AC34B9C for ; Wed, 1 Jun 2022 14:01:54 -0700 (PDT) Received: from pps.filterd (m0109333.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 251INehb008370 for ; Wed, 1 Jun 2022 14:01:54 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fb.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=facebook; bh=5yh7bJ43zWawOb4heVTpXC/CGgCWptHJL+luNQUKZk8=; b=Hb4u4VI4+HqIbREptSYISF37ee5WH7G2UhrsgYnaVn9M1VbVf/3oPo+9LG/Hg53XsDHf OFvif8yn9lOYgJsl+13VXhC8j5A5d474Z+JGYTbn+6uvCMHtVn6oPWbXoT/+ZPle1hcl nkXri9gI21BytzWLmgZmB4g6aIUJnXNYuXo= Received: from maileast.thefacebook.com ([163.114.130.16]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3ge5vcm8yc-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 01 Jun 2022 14:01:54 -0700 Received: from twshared10560.18.frc3.facebook.com (2620:10d:c0a8:1b::d) by mail.thefacebook.com (2620:10d:c0a8:82::d) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.28; Wed, 1 Jun 2022 14:01:52 -0700 Received: by devvm225.atn0.facebook.com (Postfix, from userid 425415) id D5A49FEB23A1; Wed, 1 Jun 2022 14:01:42 -0700 (PDT) From: Stefan Roesch To: , , , , CC: , , , , Subject: [PATCH v7 06/15] iomap: Return error code from iomap_write_iter() Date: Wed, 1 Jun 2022 14:01:32 -0700 Message-ID: <20220601210141.3773402-7-shr@fb.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220601210141.3773402-1-shr@fb.com> References: 
<20220601210141.3773402-1-shr@fb.com> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: -W4vK19ln1_hODJ-bJJL43Hqq16Bfjau X-Proofpoint-ORIG-GUID: -W4vK19ln1_hODJ-bJJL43Hqq16Bfjau X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.874,Hydra:6.0.517,FMLib:17.11.64.514 definitions=2022-06-01_08,2022-06-01_01,2022-02-23_01 Precedence: bulk List-ID: X-Mailing-List: io-uring@vger.kernel.org Change the signature of iomap_write_iter() to return an error code. In case we cannot allocate a page in iomap_write_begin(), we will not retry the memory alloction in iomap_write_begin(). Signed-off-by: Stefan Roesch --- fs/iomap/buffered-io.c | 23 ++++++++++++++--------- 1 file changed, 14 insertions(+), 9 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index b06a5c24a4db..e96ab9a3072c 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -754,12 +754,13 @@ static size_t iomap_write_end(struct iomap_iter *iter, loff_t pos, size_t len, return ret; } -static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) +static int iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i, loff_t *processed) { loff_t length = iomap_length(iter); loff_t pos = iter->pos; ssize_t written = 0; long status = 0; + int error = 0; struct address_space *mapping = iter->inode->i_mapping; unsigned int bdp_flags = (iter->flags & IOMAP_NOWAIT) ? BDP_ASYNC : 0; @@ -774,9 +775,9 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) bytes = min_t(unsigned long, PAGE_SIZE - offset, iov_iter_count(i)); again: - status = balance_dirty_pages_ratelimited_flags(mapping, + error = balance_dirty_pages_ratelimited_flags(mapping, bdp_flags); - if (unlikely(status)) + if (unlikely(error)) break; if (bytes > length) @@ -793,12 +794,12 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) * faulting the user page. */ if (unlikely(fault_in_iov_iter_readable(i, bytes) == bytes)) { - status = -EFAULT; + error = -EFAULT; break; } - status = iomap_write_begin(iter, pos, bytes, &folio); - if (unlikely(status)) + error = iomap_write_begin(iter, pos, bytes, &folio); + if (unlikely(error)) break; page = folio_file_page(folio, pos >> PAGE_SHIFT); @@ -829,7 +830,8 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) length -= status; } while (iov_iter_count(i) && length); - return written ? written : status; + *processed = written ? 
written : error; + return error; } ssize_t @@ -843,12 +845,15 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i, .flags = IOMAP_WRITE, }; int ret; + int error = 0; if (iocb->ki_flags & IOCB_NOWAIT) iter.flags |= IOMAP_NOWAIT; - while ((ret = iomap_iter(&iter, ops)) > 0) - iter.processed = iomap_write_iter(&iter, i); + while ((ret = iomap_iter(&iter, ops)) > 0) { + if (error != -EAGAIN) + error = iomap_write_iter(&iter, i, &iter.processed); + } if (iter.pos == iocb->ki_pos) return ret; return iter.pos - iocb->ki_pos; From patchwork Wed Jun 1 21:01:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefan Roesch X-Patchwork-Id: 12867269 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4BD3EC433EF for ; Wed, 1 Jun 2022 21:07:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231263AbiFAVHP (ORCPT ); Wed, 1 Jun 2022 17:07:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46882 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231261AbiFAVHO (ORCPT ); Wed, 1 Jun 2022 17:07:14 -0400 Received: from mx0b-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C9BB46B7DD for ; Wed, 1 Jun 2022 14:07:12 -0700 (PDT) Received: from pps.filterd (m0109331.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 251E8EQb020018 for ; Wed, 1 Jun 2022 14:07:12 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fb.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=facebook; bh=Yt6TEqCrJwA/g0p7FZ8GGE3PH7HsqaIrUF33gFTVMx4=; b=h6jD3sBZFnEGkI/etm+Lugt7k80ZUCeu0gtuZGp8f4QKXCk6kbJPx6hEP8r6h/IMHpOX wzJN+WtbiTYVTCBt6r/LCo3tZ7Q2XWSpeJH73CN9CicjIsp14nZKnqsPWLRaqKUSY8/Y 1jJLjTHIGzAIab7++CunsZfg/LVlVuyXH0M= Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3ge9m2jydc-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 01 Jun 2022 14:07:11 -0700 Received: from twshared10560.18.frc3.facebook.com (2620:10d:c085:208::11) by mail.thefacebook.com (2620:10d:c085:11d::7) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.28; Wed, 1 Jun 2022 14:07:09 -0700 Received: by devvm225.atn0.facebook.com (Postfix, from userid 425415) id DBCE5FEB23A3; Wed, 1 Jun 2022 14:01:42 -0700 (PDT) From: Stefan Roesch To: , , , , CC: , , , , , Christoph Hellwig Subject: [PATCH v7 07/15] fs: Add check for async buffered writes to generic_write_checks Date: Wed, 1 Jun 2022 14:01:33 -0700 Message-ID: <20220601210141.3773402-8-shr@fb.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220601210141.3773402-1-shr@fb.com> References: <20220601210141.3773402-1-shr@fb.com> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: c-1JTo88koYPkKEs1qvTfjn5ynbN0LRK X-Proofpoint-ORIG-GUID: c-1JTo88koYPkKEs1qvTfjn5ynbN0LRK X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.874,Hydra:6.0.517,FMLib:17.11.64.514 definitions=2022-06-01_08,2022-06-01_01,2022-02-23_01 Precedence: bulk List-ID: X-Mailing-List: io-uring@vger.kernel.org This introduces 
the flag FMODE_BUF_WASYNC. If devices support async buffered writes, this flag can be set. It also modifies the check in generic_write_checks to take async buffered writes into consideration. Signed-off-by: Stefan Roesch Reviewed-by: Christoph Hellwig --- fs/read_write.c | 4 +++- include/linux/fs.h | 3 +++ 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/fs/read_write.c b/fs/read_write.c index e643aec2b0ef..175d98713b9a 100644 --- a/fs/read_write.c +++ b/fs/read_write.c @@ -1633,7 +1633,9 @@ int generic_write_checks_count(struct kiocb *iocb, loff_t *count) if (iocb->ki_flags & IOCB_APPEND) iocb->ki_pos = i_size_read(inode); - if ((iocb->ki_flags & IOCB_NOWAIT) && !(iocb->ki_flags & IOCB_DIRECT)) + if ((iocb->ki_flags & IOCB_NOWAIT) && + !((iocb->ki_flags & IOCB_DIRECT) || + (file->f_mode & FMODE_BUF_WASYNC))) return -EINVAL; return generic_write_check_limits(iocb->ki_filp, iocb->ki_pos, count); diff --git a/include/linux/fs.h b/include/linux/fs.h index 01403e637271..bdf1ce48a458 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -180,6 +180,9 @@ typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset, /* File supports async buffered reads */ #define FMODE_BUF_RASYNC ((__force fmode_t)0x40000000) +/* File supports async nowait buffered writes */ +#define FMODE_BUF_WASYNC ((__force fmode_t)0x80000000) + /* * Attribute flags. These should be or-ed together to figure out what * has been changed! From patchwork Wed Jun 1 21:01:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefan Roesch X-Patchwork-Id: 12867270 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3767BC43334 for ; Wed, 1 Jun 2022 21:07:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231269AbiFAVHS (ORCPT ); Wed, 1 Jun 2022 17:07:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46916 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231190AbiFAVHO (ORCPT ); Wed, 1 Jun 2022 17:07:14 -0400 Received: from mx0b-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0824561629 for ; Wed, 1 Jun 2022 14:07:13 -0700 (PDT) Received: from pps.filterd (m0109332.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 251HgMO2011588 for ; Wed, 1 Jun 2022 14:07:13 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fb.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=facebook; bh=iNcYV+f/9Xb9s2DKDhXxqtFLBfqGp5UatKpqDyVtVPI=; b=mbRI2JE8wrEfyVfd+5ioDAnsMcaQEKDqUsnZDez9U0g7Dk3LSDz5C6yZ2cE9abl4VxUg CeB2AOlCYRPYM4ybmU6OX1c8PBeUQnzh4zdAmiWQicu6TL0Ad5N/LgRj0S19bnjUCUir k1wqxVBKVTGUUQT6iCICejr1+gzJuzTkIcc= Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3ge3wk4ub5-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 01 Jun 2022 14:07:13 -0700 Received: from twshared24024.25.frc3.facebook.com (2620:10d:c085:108::8) by mail.thefacebook.com (2620:10d:c085:11d::4) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.28; Wed, 1 Jun 2022 14:07:11 
-0700 Received: by devvm225.atn0.facebook.com (Postfix, from userid 425415) id E1DDBFEB23A5; Wed, 1 Jun 2022 14:01:42 -0700 (PDT) From: Stefan Roesch To: , , , , CC: , , , , , Christoph Hellwig Subject: [PATCH v7 08/15] fs: add __remove_file_privs() with flags parameter Date: Wed, 1 Jun 2022 14:01:34 -0700 Message-ID: <20220601210141.3773402-9-shr@fb.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220601210141.3773402-1-shr@fb.com> References: <20220601210141.3773402-1-shr@fb.com> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: mVckQa25hJLLKoqThBDJmJKbFXGbuN08 X-Proofpoint-ORIG-GUID: mVckQa25hJLLKoqThBDJmJKbFXGbuN08 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.874,Hydra:6.0.517,FMLib:17.11.64.514 definitions=2022-06-01_08,2022-06-01_01,2022-02-23_01 Precedence: bulk List-ID: X-Mailing-List: io-uring@vger.kernel.org This adds the function __remove_file_privs, which allows the caller to pass the kiocb flags parameter. No intended functional changes in this patch. Signed-off-by: Stefan Roesch Reviewed-by: Christoph Hellwig Reviewed-by: Jan Kara --- fs/inode.c | 57 +++++++++++++++++++++++++++++++++++------------------- 1 file changed, 37 insertions(+), 20 deletions(-) diff --git a/fs/inode.c b/fs/inode.c index 9d9b422504d1..ac1cf5aa78c8 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -2010,36 +2010,43 @@ static int __remove_privs(struct user_namespace *mnt_userns, return notify_change(mnt_userns, dentry, &newattrs, NULL); } -/* - * Remove special file priviledges (suid, capabilities) when file is written - * to or truncated. - */ -int file_remove_privs(struct file *file) +static int __file_remove_privs(struct file *file, unsigned int flags) { struct dentry *dentry = file_dentry(file); struct inode *inode = file_inode(file); + int error; int kill; - int error = 0; - /* - * Fast path for nothing security related. - * As well for non-regular files, e.g. blkdev inodes. - * For example, blkdev_write_iter() might get here - * trying to remove privs which it is not allowed to. - */ if (IS_NOSEC(inode) || !S_ISREG(inode->i_mode)) return 0; kill = dentry_needs_remove_privs(dentry); - if (kill < 0) + if (kill <= 0) return kill; - if (kill) - error = __remove_privs(file_mnt_user_ns(file), dentry, kill); + + if (flags & IOCB_NOWAIT) + return -EAGAIN; + + error = __remove_privs(file_mnt_user_ns(file), dentry, kill); if (!error) inode_has_no_xattr(inode); return error; } + +/** + * file_remove_privs - remove special file privileges (suid, capabilities) + * @file: file to remove privileges from + * + * When file is modified by a write or truncation ensure that special + * file privileges are removed. + * + * Return: 0 on success, negative errno on failure. + */ +int file_remove_privs(struct file *file) +{ + return __file_remove_privs(file, 0); +} EXPORT_SYMBOL(file_remove_privs); /** @@ -2090,18 +2097,28 @@ int file_update_time(struct file *file) } EXPORT_SYMBOL(file_update_time); -/* Caller must hold the file's inode lock */ +/** + * file_modified - handle mandated vfs changes when modifying a file + * @file: file that was modified + * + * When file has been modified ensure that special + * file privileges are removed and time settings are updated. + * + * Context: Caller must hold the file's inode lock. + * + * Return: 0 on success, negative errno on failure. + */ int file_modified(struct file *file) { - int err; + int ret; /* * Clear the security bits if the process is not being run by root. * This keeps people from modifying setuid and setgid binaries. 
*/ - err = file_remove_privs(file); - if (err) - return err; + ret = __file_remove_privs(file, 0); + if (ret) + return ret; if (unlikely(file->f_mode & FMODE_NOCMTIME)) return 0; From patchwork Wed Jun 1 21:01:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefan Roesch X-Patchwork-Id: 12867266 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 66959CCA473 for ; Wed, 1 Jun 2022 21:04:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230458AbiFAVEc (ORCPT ); Wed, 1 Jun 2022 17:04:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39038 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230502AbiFAVE2 (ORCPT ); Wed, 1 Jun 2022 17:04:28 -0400 Received: from mx0b-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A42CF2309AD for ; Wed, 1 Jun 2022 14:04:24 -0700 (PDT) Received: from pps.filterd (m0109332.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 251I4COe010631 for ; Wed, 1 Jun 2022 14:04:23 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fb.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=facebook; bh=aWGWWAgH7Bi/du0tMMunpKLV8jXk2Gb/WQnwadv9en0=; b=ikJDvnU3N1AnSdYXSBN/z345AIkwnUQKGe2IkGtbNy90uZ7Cf+kLCIrKcZ6FqaNjrmCG +cddw22iSlX/2CtXGlL2nhyKt8o8OufEuioMQRq0ajdVR7pVyi9rfcUpkhBJxbZytFYz NVuRcI0v8lXGbKILo0lWNk2W85huBbtGMJk= Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3ge3wk4tqh-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 01 Jun 2022 14:04:23 -0700 Received: from twshared5413.23.frc3.facebook.com (2620:10d:c085:208::11) by mail.thefacebook.com (2620:10d:c085:11d::7) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.28; Wed, 1 Jun 2022 14:04:22 -0700 Received: by devvm225.atn0.facebook.com (Postfix, from userid 425415) id E813DFEB23A7; Wed, 1 Jun 2022 14:01:42 -0700 (PDT) From: Stefan Roesch To: , , , , CC: , , , , Subject: [PATCH v7 09/15] fs: Split off inode_needs_update_time and __file_update_time Date: Wed, 1 Jun 2022 14:01:35 -0700 Message-ID: <20220601210141.3773402-10-shr@fb.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220601210141.3773402-1-shr@fb.com> References: <20220601210141.3773402-1-shr@fb.com> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: t_kNE_BiXa0zyJnXW4VmJ8kqaJd2c_XU X-Proofpoint-ORIG-GUID: t_kNE_BiXa0zyJnXW4VmJ8kqaJd2c_XU X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.874,Hydra:6.0.517,FMLib:17.11.64.514 definitions=2022-06-01_08,2022-06-01_01,2022-02-23_01 Precedence: bulk List-ID: X-Mailing-List: io-uring@vger.kernel.org This splits off the functions inode_needs_update_time() and __file_update_time() from the function file_update_time(). This is required to support async buffered writes. No intended functional changes in this patch. 
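The split pays off in the following patch, which composes the two helpers
roughly like this so a nowait caller can bail out before any blocking work;
the function name below is invented and error handling is simplified:

	/* Sketch only: check cheaply first, defer the blocking update on IOCB_NOWAIT. */
	static int file_update_time_flags_sketch(struct file *file, unsigned int flags)
	{
		struct inode *inode = file_inode(file);
		struct timespec64 now = current_time(inode);
		int sync_mode = inode_needs_update_time(inode, &now);

		if (sync_mode <= 0)		/* nothing to update, or an error */
			return sync_mode;
		if (flags & IOCB_NOWAIT)	/* updating might block: let the caller punt */
			return -EAGAIN;

		return __file_update_time(file, &now, sync_mode);
	}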
Signed-off-by: Stefan Roesch Reviewed-by: Jan Kara --- fs/inode.c | 76 +++++++++++++++++++++++++++++++++++------------------- 1 file changed, 50 insertions(+), 26 deletions(-) diff --git a/fs/inode.c b/fs/inode.c index ac1cf5aa78c8..c44573a32c6a 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -2049,35 +2049,18 @@ int file_remove_privs(struct file *file) } EXPORT_SYMBOL(file_remove_privs); -/** - * file_update_time - update mtime and ctime time - * @file: file accessed - * - * Update the mtime and ctime members of an inode and mark the inode - * for writeback. Note that this function is meant exclusively for - * usage in the file write path of filesystems, and filesystems may - * choose to explicitly ignore update via this function with the - * S_NOCMTIME inode flag, e.g. for network filesystem where these - * timestamps are handled by the server. This can return an error for - * file systems who need to allocate space in order to update an inode. - */ - -int file_update_time(struct file *file) +static int inode_needs_update_time(struct inode *inode, struct timespec64 *now) { - struct inode *inode = file_inode(file); - struct timespec64 now; int sync_it = 0; - int ret; /* First try to exhaust all avenues to not sync */ if (IS_NOCMTIME(inode)) return 0; - now = current_time(inode); - if (!timespec64_equal(&inode->i_mtime, &now)) + if (!timespec64_equal(&inode->i_mtime, now)) sync_it = S_MTIME; - if (!timespec64_equal(&inode->i_ctime, &now)) + if (!timespec64_equal(&inode->i_ctime, now)) sync_it |= S_CTIME; if (IS_I_VERSION(inode) && inode_iversion_need_inc(inode)) @@ -2086,15 +2069,50 @@ int file_update_time(struct file *file) if (!sync_it) return 0; - /* Finally allowed to write? Takes lock. */ - if (__mnt_want_write_file(file)) - return 0; + return sync_it; +} + +static int __file_update_time(struct file *file, struct timespec64 *now, + int sync_mode) +{ + int ret = 0; + struct inode *inode = file_inode(file); - ret = inode_update_time(inode, &now, sync_it); - __mnt_drop_write_file(file); + /* try to update time settings */ + if (!__mnt_want_write_file(file)) { + ret = inode_update_time(inode, now, sync_mode); + __mnt_drop_write_file(file); + } return ret; } + + /** + * file_update_time - update mtime and ctime time + * @file: file accessed + * + * Update the mtime and ctime members of an inode and mark the inode for + * writeback. Note that this function is meant exclusively for usage in + * the file write path of filesystems, and filesystems may choose to + * explicitly ignore updates via this function with the _NOCMTIME inode + * flag, e.g. for network filesystem where these imestamps are handled + * by the server. This can return an error for file systems who need to + * allocate space in order to update an inode. + * + * Return: 0 on success, negative errno on failure. + */ +int file_update_time(struct file *file) +{ + int ret; + struct inode *inode = file_inode(file); + struct timespec64 now = current_time(inode); + + ret = inode_needs_update_time(inode, &now); + if (ret <= 0) + return ret; + + return __file_update_time(file, &now, ret); +} EXPORT_SYMBOL(file_update_time); /** @@ -2111,6 +2129,8 @@ EXPORT_SYMBOL(file_update_time); int file_modified(struct file *file) { int ret; + struct inode *inode = file_inode(file); + struct timespec64 now = current_time(inode); /* * Clear the security bits if the process is not being run by root. 
@@ -2123,7 +2143,11 @@ int file_modified(struct file *file) if (unlikely(file->f_mode & FMODE_NOCMTIME)) return 0; - return file_update_time(file); + ret = inode_needs_update_time(inode, &now); + if (ret <= 0) + return ret; + + return __file_update_time(file, &now, ret); } EXPORT_SYMBOL(file_modified); From patchwork Wed Jun 1 21:01:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefan Roesch X-Patchwork-Id: 12867271 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 87A75CCA479 for ; Wed, 1 Jun 2022 21:07:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231133AbiFAVHT (ORCPT ); Wed, 1 Jun 2022 17:07:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47154 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231261AbiFAVHT (ORCPT ); Wed, 1 Jun 2022 17:07:19 -0400 Received: from mx0b-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 87CF96C0C6 for ; Wed, 1 Jun 2022 14:07:17 -0700 (PDT) Received: from pps.filterd (m0109332.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 251HgMOA011588 for ; Wed, 1 Jun 2022 14:07:16 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fb.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=facebook; bh=/c1tXQ+b6bxtOlEa/BmLZu1viU9avcsa0t6EYp9adkM=; b=ahLuLGnBdh3I8IY+HUwXj7x1wIS/tEzXcJcIi+XHlYbF1CeV04fGbd8QYEKjaZPdf1MG U8H/KOmqVjASX+7i0DTKsqV+9QGJM78ryP/0PEoM1DNiaeYwFzXiMitPPBj70/2Ie9Xf HvcugF6DUtMBfnXXFQW9Kho5cnTjN5ZSVR8= Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3ge3wk4ub5-8 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 01 Jun 2022 14:07:16 -0700 Received: from twshared14818.18.frc3.facebook.com (2620:10d:c085:108::8) by mail.thefacebook.com (2620:10d:c085:11d::4) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.28; Wed, 1 Jun 2022 14:07:13 -0700 Received: by devvm225.atn0.facebook.com (Postfix, from userid 425415) id EE9ACFEB23A9; Wed, 1 Jun 2022 14:01:42 -0700 (PDT) From: Stefan Roesch To: , , , , CC: , , , , , Christoph Hellwig Subject: [PATCH v7 10/15] fs: Add async write file modification handling. Date: Wed, 1 Jun 2022 14:01:36 -0700 Message-ID: <20220601210141.3773402-11-shr@fb.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220601210141.3773402-1-shr@fb.com> References: <20220601210141.3773402-1-shr@fb.com> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: tXh92rM2gHNlIwCVU1IbXoSxJVVO2v0t X-Proofpoint-ORIG-GUID: tXh92rM2gHNlIwCVU1IbXoSxJVVO2v0t X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.874,Hydra:6.0.517,FMLib:17.11.64.514 definitions=2022-06-01_08,2022-06-01_01,2022-02-23_01 Precedence: bulk List-ID: X-Mailing-List: io-uring@vger.kernel.org This adds a file_modified_async() function to return -EAGAIN if the request either requires to remove privileges or needs to update the file modification time. This is required for async buffered writes, so the request gets handled in the io worker of io-uring. 
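A sketch of how a filesystem's buffered write path is expected to consume
this; the example_* names are placeholders, locking and the remaining checks
are omitted, and the real wiring for a filesystem comes with the later
patches in the series:

	static const struct iomap_ops example_iomap_ops;	/* filesystem-specific, placeholder */

	static ssize_t example_buffered_write(struct kiocb *iocb, struct iov_iter *from)
	{
		ssize_t ret;

		/*
		 * Returns -EAGAIN for IOCB_NOWAIT requests that would need to
		 * drop privileges or update timestamps, so io_uring can punt
		 * the write to an io-worker instead of blocking here.
		 */
		ret = kiocb_modified(iocb);
		if (ret)
			return ret;

		return iomap_file_buffered_write(iocb, from, &example_iomap_ops);
	}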
Signed-off-by: Stefan Roesch Reviewed-by: Christoph Hellwig Reviewed-by: Jan Kara --- fs/inode.c | 43 +++++++++++++++++++++++++++++++++++++++++-- include/linux/fs.h | 1 + 2 files changed, 42 insertions(+), 2 deletions(-) diff --git a/fs/inode.c b/fs/inode.c index c44573a32c6a..4503bed063e7 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -2116,17 +2116,21 @@ int file_update_time(struct file *file) EXPORT_SYMBOL(file_update_time); /** - * file_modified - handle mandated vfs changes when modifying a file + * file_modified_flags - handle mandated vfs changes when modifying a file * @file: file that was modified + * @flags: kiocb flags * * When file has been modified ensure that special * file privileges are removed and time settings are updated. * + * If IOCB_NOWAIT is set, special file privileges will not be removed and + * time settings will not be updated. It will return -EAGAIN. + * * Context: Caller must hold the file's inode lock. * * Return: 0 on success, negative errno on failure. */ -int file_modified(struct file *file) +static int file_modified_flags(struct file *file, int flags) { int ret; struct inode *inode = file_inode(file); @@ -2146,11 +2150,46 @@ int file_modified(struct file *file) ret = inode_needs_update_time(inode, &now); if (ret <= 0) return ret; + if (flags & IOCB_NOWAIT) + return -EAGAIN; return __file_update_time(file, &now, ret); } + +/** + * file_modified - handle mandated vfs changes when modifying a file + * @file: file that was modified + * + * When file has been modified ensure that special + * file privileges are removed and time settings are updated. + * + * Context: Caller must hold the file's inode lock. + * + * Return: 0 on success, negative errno on failure. + */ +int file_modified(struct file *file) +{ + return file_modified_flags(file, 0); +} EXPORT_SYMBOL(file_modified); +/** + * kiocb_modified - handle mandated vfs changes when modifying a file + * @iocb: iocb that was modified + * + * When file has been modified ensure that special + * file privileges are removed and time settings are updated. + * + * Context: Caller must hold the file's inode lock. + * + * Return: 0 on success, negative errno on failure. 
+ */ +int kiocb_modified(struct kiocb *iocb) +{ + return file_modified_flags(iocb->ki_filp, iocb->ki_flags); +} +EXPORT_SYMBOL_GPL(kiocb_modified); + int inode_needs_sync(struct inode *inode) { if (IS_SYNC(inode)) diff --git a/include/linux/fs.h b/include/linux/fs.h index bdf1ce48a458..553e57ec3efa 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -2392,6 +2392,7 @@ static inline void file_accessed(struct file *file) } extern int file_modified(struct file *file); +int kiocb_modified(struct kiocb *iocb); int sync_inode_metadata(struct inode *inode, int wait); From patchwork Wed Jun 1 21:01:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefan Roesch X-Patchwork-Id: 12867265 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A8C8DCCA473 for ; Wed, 1 Jun 2022 21:04:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230496AbiFAVEb (ORCPT ); Wed, 1 Jun 2022 17:04:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38994 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230497AbiFAVE2 (ORCPT ); Wed, 1 Jun 2022 17:04:28 -0400 Received: from mx0b-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1BDB02309A3 for ; Wed, 1 Jun 2022 14:04:24 -0700 (PDT) Received: from pps.filterd (m0109331.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 251E8EM3020007 for ; Wed, 1 Jun 2022 14:04:23 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fb.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=facebook; bh=zPAffvAaic7LVXqG7td55F9UJCh5+LhD9V8EiIcalbA=; b=aYlWfgwThWLXtZ+0tKdwynpNlwQuFBVkWvC0MpC5VgKrvEfOMsgoeO10HhxCXD1uwkV3 4qLO/zCQQ4oPSvHz64hk7IoHaigbQgwRq13yVVm2W0/1niPv8axakV4KEA+mY91bFiY/ K2waWEOjashEqZmjihrsWf3nLU2/MCV/gsw= Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3ge9m2jxrj-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 01 Jun 2022 14:04:23 -0700 Received: from twshared5413.23.frc3.facebook.com (2620:10d:c085:208::11) by mail.thefacebook.com (2620:10d:c085:11d::4) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.28; Wed, 1 Jun 2022 14:04:21 -0700 Received: by devvm225.atn0.facebook.com (Postfix, from userid 425415) id 03836FEB23AB; Wed, 1 Jun 2022 14:01:43 -0700 (PDT) From: Stefan Roesch To: , , , , CC: , , , , Subject: [PATCH v7 11/15] fs: Optimization for concurrent file time updates. 
Date: Wed, 1 Jun 2022 14:01:37 -0700 Message-ID: <20220601210141.3773402-12-shr@fb.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220601210141.3773402-1-shr@fb.com> References: <20220601210141.3773402-1-shr@fb.com> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: mwaFq_JDiKAl2N6UI6iocFOM5RhhWxap X-Proofpoint-ORIG-GUID: mwaFq_JDiKAl2N6UI6iocFOM5RhhWxap X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.874,Hydra:6.0.517,FMLib:17.11.64.514 definitions=2022-06-01_08,2022-06-01_01,2022-02-23_01 Precedence: bulk List-ID: X-Mailing-List: io-uring@vger.kernel.org This introduces the S_PENDING_TIME flag. If an async buffered write needs to update the time, it cannot be processed in the fast path of io-uring. When a time update is pending this flag is set for async buffered writes. Other concurrent async buffered writes for the same file do not need to wait while this time update is pending. This reduces the number of async buffered writes that need to get punted to the io-workers in io-uring. Signed-off-by: Stefan Roesch --- fs/inode.c | 11 +++++++++-- include/linux/fs.h | 3 +++ 2 files changed, 12 insertions(+), 2 deletions(-) diff --git a/fs/inode.c b/fs/inode.c index 4503bed063e7..7185d860d423 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -2150,10 +2150,17 @@ static int file_modified_flags(struct file *file, int flags) ret = inode_needs_update_time(inode, &now); if (ret <= 0) return ret; - if (flags & IOCB_NOWAIT) + if (flags & IOCB_NOWAIT) { + if (IS_PENDING_TIME(inode)) + return 0; + + inode_set_flags(inode, S_PENDING_TIME, S_PENDING_TIME); return -EAGAIN; + } - return __file_update_time(file, &now, ret); + ret = __file_update_time(file, &now, ret); + inode_set_flags(inode, 0, S_PENDING_TIME); + return ret; } /** diff --git a/include/linux/fs.h b/include/linux/fs.h index 553e57ec3efa..15f9a7beba55 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -2151,6 +2151,8 @@ struct super_operations { #define S_CASEFOLD (1 << 15) /* Casefolded file */ #define S_VERITY (1 << 16) /* Verity file (using fs/verity/) */ #define S_KERNEL_FILE (1 << 17) /* File is in use by the kernel (eg. 
fs/cachefiles) */ +#define S_PENDING_TIME (1 << 18) /* File update time is pending */ + /* * Note that nosuid etc flags are inode-specific: setting some file-system @@ -2193,6 +2195,7 @@ static inline bool sb_rdonly(const struct super_block *sb) { return sb->s_flags #define IS_ENCRYPTED(inode) ((inode)->i_flags & S_ENCRYPTED) #define IS_CASEFOLDED(inode) ((inode)->i_flags & S_CASEFOLD) #define IS_VERITY(inode) ((inode)->i_flags & S_VERITY) +#define IS_PENDING_TIME(inode) ((inode)->i_flags & S_PENDING_TIME) #define IS_WHITEOUT(inode) (S_ISCHR(inode->i_mode) && \ (inode)->i_rdev == WHITEOUT_DEV) From patchwork Wed Jun 1 21:01:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefan Roesch X-Patchwork-Id: 12867267 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 09C37C43334 for ; Wed, 1 Jun 2022 21:04:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231149AbiFAVEr (ORCPT ); Wed, 1 Jun 2022 17:04:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39036 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230454AbiFAVEd (ORCPT ); Wed, 1 Jun 2022 17:04:33 -0400 Received: from mx0b-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 78EE516A27A for ; Wed, 1 Jun 2022 14:04:31 -0700 (PDT) Received: from pps.filterd (m0148460.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 251EG7Sg011679 for ; Wed, 1 Jun 2022 14:04:30 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fb.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=facebook; bh=Yh7+qvp7wia1iDmc7QPLBPdsuYBbsRWXU5abdWcnXVM=; b=jQ95DVp5WTjKWyjOEONHEbsXt+MlD7J/7AFNFuwFuzsA13TA+CGZrmhHC7Ls2qL2Qoo4 aSCdYNQBPtRNwnxUbq+wzwrs4+oUajeZVGwQLE95t+C+fSvoQo+dIuWDvvFJqJe/P4ja jLHAP6trQntmXtT1UALL5So4zIXoZpu81zI= Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3gdv91xp0j-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 01 Jun 2022 14:04:30 -0700 Received: from twshared19572.14.frc2.facebook.com (2620:10d:c085:108::8) by mail.thefacebook.com (2620:10d:c085:11d::5) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.28; Wed, 1 Jun 2022 14:04:29 -0700 Received: by devvm225.atn0.facebook.com (Postfix, from userid 425415) id 0C52DFEB23AD; Wed, 1 Jun 2022 14:01:43 -0700 (PDT) From: Stefan Roesch To: , , , , CC: , , , , Subject: [PATCH v7 12/15] io_uring: Add support for async buffered writes Date: Wed, 1 Jun 2022 14:01:38 -0700 Message-ID: <20220601210141.3773402-13-shr@fb.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220601210141.3773402-1-shr@fb.com> References: <20220601210141.3773402-1-shr@fb.com> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: 1bJC3U20G9wwa-gzudjVTSq-w-OxLCYZ X-Proofpoint-ORIG-GUID: 1bJC3U20G9wwa-gzudjVTSq-w-OxLCYZ X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.874,Hydra:6.0.517,FMLib:17.11.64.514 definitions=2022-06-01_08,2022-06-01_01,2022-02-23_01 Precedence: bulk List-ID: X-Mailing-List: 
io-uring@vger.kernel.org This enables the async buffered writes for the filesystems that support async buffered writes in io-uring. Buffered writes are enabled for blocks that are already in the page cache or can be acquired with noio. Signed-off-by: Stefan Roesch --- fs/io_uring.c | 29 ++++++++++++++++++++++++----- 1 file changed, 24 insertions(+), 5 deletions(-) diff --git a/fs/io_uring.c b/fs/io_uring.c index 9f1c682d7caf..c0771e215669 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -4257,7 +4257,7 @@ static inline int io_iter_do_read(struct io_kiocb *req, struct iov_iter *iter) return -EINVAL; } -static bool need_read_all(struct io_kiocb *req) +static bool need_complete_io(struct io_kiocb *req) { return req->flags & REQ_F_ISREG || S_ISBLK(file_inode(req->file)->i_mode); @@ -4386,7 +4386,7 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags) } else if (ret == -EIOCBQUEUED) { goto out_free; } else if (ret == req->cqe.res || ret <= 0 || !force_nonblock || - (req->flags & REQ_F_NOWAIT) || !need_read_all(req)) { + (req->flags & REQ_F_NOWAIT) || !need_complete_io(req)) { /* read all, failed, already did sync or don't want to retry */ goto done; } @@ -4482,9 +4482,10 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags) if (unlikely(!io_file_supports_nowait(req))) goto copy_iov; - /* file path doesn't support NOWAIT for non-direct_IO */ - if (force_nonblock && !(kiocb->ki_flags & IOCB_DIRECT) && - (req->flags & REQ_F_ISREG)) + /* File path supports NOWAIT for non-direct_IO only for block devices. */ + if (!(kiocb->ki_flags & IOCB_DIRECT) && + !(kiocb->ki_filp->f_mode & FMODE_BUF_WASYNC) && + (req->flags & REQ_F_ISREG)) goto copy_iov; kiocb->ki_flags |= IOCB_NOWAIT; @@ -4538,6 +4539,24 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags) /* IOPOLL retry should happen for io-wq threads */ if (ret2 == -EAGAIN && (req->ctx->flags & IORING_SETUP_IOPOLL)) goto copy_iov; + + if (ret2 != req->cqe.res && ret2 >= 0 && need_complete_io(req)) { + struct io_async_rw *rw; + + /* This is a partial write. The file pos has already been + * updated, setup the async struct to complete the request + * in the worker. Also update bytes_done to account for + * the bytes already written. + */ + iov_iter_save_state(&s->iter, &s->iter_state); + ret = io_setup_async_rw(req, iovec, s, true); + + rw = req->async_data; + if (rw) + rw->bytes_done += ret2; + + return ret ? 
ret : -EAGAIN; + } done: kiocb_done(req, ret2, issue_flags); } else { From patchwork Wed Jun 1 21:01:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefan Roesch X-Patchwork-Id: 12867268 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5AE57C433EF for ; Wed, 1 Jun 2022 21:04:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230526AbiFAVEs (ORCPT ); Wed, 1 Jun 2022 17:04:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38992 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230502AbiFAVEf (ORCPT ); Wed, 1 Jun 2022 17:04:35 -0400 Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 52CEC230949 for ; Wed, 1 Jun 2022 14:04:33 -0700 (PDT) Received: from pps.filterd (m0044010.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 251JDHxj026437 for ; Wed, 1 Jun 2022 14:04:33 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fb.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=facebook; bh=/DrIfFwryF28jpgKYd8aC2BVP4/JsBupN+LpZHKHGXE=; b=SPS4VuOzPAgnOy8HFdjkD/jOGSFDYghVjKrhauRoOaBs29+dgDN2JC4ddTo1b3E0l60p Ra5UNoJv/wXy9o5C3fAnczXeb43+AhVleyC7iiNoC8IgDb74Nk/78cm954Ilq6bACGiU ed3xN4frCI7avB9k1wf+pI8KdVvioPLlXNY= Received: from maileast.thefacebook.com ([163.114.130.16]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3gdbt6cn0q-6 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 01 Jun 2022 14:04:32 -0700 Received: from twshared4937.07.ash9.facebook.com (2620:10d:c0a8:1b::d) by mail.thefacebook.com (2620:10d:c0a8:82::e) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.28; Wed, 1 Jun 2022 14:04:27 -0700 Received: by devvm225.atn0.facebook.com (Postfix, from userid 425415) id 0FEB2FEB23AF; Wed, 1 Jun 2022 14:01:43 -0700 (PDT) From: Stefan Roesch To: , , , , CC: , , , , Subject: [PATCH v7 13/15] io_uring: Add tracepoint for short writes Date: Wed, 1 Jun 2022 14:01:39 -0700 Message-ID: <20220601210141.3773402-14-shr@fb.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220601210141.3773402-1-shr@fb.com> References: <20220601210141.3773402-1-shr@fb.com> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-ORIG-GUID: v2juEhMZoeaofrzRXbUKr8td3jrfmhph X-Proofpoint-GUID: v2juEhMZoeaofrzRXbUKr8td3jrfmhph X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.874,Hydra:6.0.517,FMLib:17.11.64.514 definitions=2022-06-01_08,2022-06-01_01,2022-02-23_01 Precedence: bulk List-ID: X-Mailing-List: io-uring@vger.kernel.org This adds the io_uring_short_write tracepoint to io_uring. A short write is issued if not all pages that are required for a write are in the page cache and the async buffered writes have to return EAGAIN. 
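Below is a hedged sketch of the bookkeeping behind such a short write; the struct and helper names are invented, and only the arithmetic mirrors the io_write() hunk above and the tracepoint arguments in the diff that follows (ki_pos has already been advanced by the bytes written, so the starting offset is recovered by subtracting them again).

#include <linux/errno.h>
#include <linux/types.h>

/* Illustrative only: the values reported for one short async buffered write. */
struct short_write_sample {
	u64 fpos;	/* offset the write started at */
	u64 wanted;	/* bytes the request asked for (req->cqe.res) */
	u64 got;	/* bytes the nonblocking attempt wrote (ret2) */
};

static struct short_write_sample sample_short_write(loff_t ki_pos, u64 res, ssize_t ret2)
{
	struct short_write_sample s = {
		.fpos	= ki_pos - ret2,	/* ki_pos was already advanced */
		.wanted	= res,
		.got	= ret2,
	};

	return s;
}

/*
 * Progress accumulates across attempts: the partial count is added to
 * bytes_done and -EAGAIN punts the remainder to an io worker, so the
 * final completion can still report the full "wanted" size.
 */
static long account_partial_write(u64 *bytes_done, ssize_t ret2, u64 wanted)
{
	if (ret2 > 0)
		*bytes_done += ret2;

	return *bytes_done < wanted ? -EAGAIN : (long)*bytes_done;
}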
Signed-off-by: Stefan Roesch --- fs/io_uring.c | 3 +++ include/trace/events/io_uring.h | 25 +++++++++++++++++++++++++ 2 files changed, 28 insertions(+) diff --git a/fs/io_uring.c b/fs/io_uring.c index c0771e215669..9ab68138f442 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -4543,6 +4543,9 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags) if (ret2 != req->cqe.res && ret2 >= 0 && need_complete_io(req)) { struct io_async_rw *rw; + trace_io_uring_short_write(req->ctx, kiocb->ki_pos - ret2, + req->cqe.res, ret2); + /* This is a partial write. The file pos has already been * updated, setup the async struct to complete the request * in the worker. Also update bytes_done to account for diff --git a/include/trace/events/io_uring.h b/include/trace/events/io_uring.h index 66fcc5a1a5b1..25df513660cc 100644 --- a/include/trace/events/io_uring.h +++ b/include/trace/events/io_uring.h @@ -600,6 +600,31 @@ TRACE_EVENT(io_uring_cqe_overflow, __entry->cflags, __entry->ocqe) ); +TRACE_EVENT(io_uring_short_write, + + TP_PROTO(void *ctx, u64 fpos, u64 wanted, u64 got), + + TP_ARGS(ctx, fpos, wanted, got), + + TP_STRUCT__entry( + __field(void *, ctx) + __field(u64, fpos) + __field(u64, wanted) + __field(u64, got) + ), + + TP_fast_assign( + __entry->ctx = ctx; + __entry->fpos = fpos; + __entry->wanted = wanted; + __entry->got = got; + ), + + TP_printk("ring %p, fpos %lld, wanted %lld, got %lld", + __entry->ctx, __entry->fpos, + __entry->wanted, __entry->got) +); + #endif /* _TRACE_IO_URING_H */ /* This part must be outside protection */ From patchwork Wed Jun 1 21:01:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefan Roesch X-Patchwork-Id: 12867304 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 54E86C433EF for ; Wed, 1 Jun 2022 21:08:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231277AbiFAVIM (ORCPT ); Wed, 1 Jun 2022 17:08:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49548 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231267AbiFAVIM (ORCPT ); Wed, 1 Jun 2022 17:08:12 -0400 Received: from mx0a-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 54C514D603 for ; Wed, 1 Jun 2022 14:08:11 -0700 (PDT) Received: from pps.filterd (m0089730.ppops.net [127.0.0.1]) by m0089730.ppops.net (8.17.1.5/8.17.1.5) with ESMTP id 251IbLN8020383 for ; Wed, 1 Jun 2022 14:08:10 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fb.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=facebook; bh=UYSVIRU3OqHDXp36Z+9UaO8Gk0kt2REYQWE1ANvgX7U=; b=keoA+bCwpvGGQ1fvUj6GcBQX1BHxuhWSDiLo4Tmnx/ljZ8LiieXiVZqgCD2ps9ArT8EY gcqVboNEm4BG1v0IQ6EAs7E8Sr9FPdRDNdPfe5JNLIgMyNSaBfqXZSYN5vuEX4CsDV6s GgNCOmmUeyLvyfJpuPwMC/i53PfXILMmum0= Received: from mail.thefacebook.com ([163.114.132.120]) by m0089730.ppops.net (PPS) with ESMTPS id 3ge5atvexa-4 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 01 Jun 2022 14:08:10 -0700 Received: from twshared19572.14.frc2.facebook.com (2620:10d:c085:108::8) by mail.thefacebook.com (2620:10d:c085:11d::5) with Microsoft SMTP Server 
(version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.28; Wed, 1 Jun 2022 14:08:07 -0700 Received: by devvm225.atn0.facebook.com (Postfix, from userid 425415) id 167ACFEB23B1; Wed, 1 Jun 2022 14:01:43 -0700 (PDT) From: Stefan Roesch To: , , , , CC: , , , , , Christoph Hellwig Subject: [PATCH v7 14/15] xfs: Specify lockmode when calling xfs_ilock_for_iomap() Date: Wed, 1 Jun 2022 14:01:40 -0700 Message-ID: <20220601210141.3773402-15-shr@fb.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220601210141.3773402-1-shr@fb.com> References: <20220601210141.3773402-1-shr@fb.com> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: w1CMOflI-6X2QBXlEz_897CHnb7YdlZb X-Proofpoint-ORIG-GUID: w1CMOflI-6X2QBXlEz_897CHnb7YdlZb X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.874,Hydra:6.0.517,FMLib:17.11.64.514 definitions=2022-06-01_08,2022-06-01_01,2022-02-23_01 Precedence: bulk List-ID: X-Mailing-List: io-uring@vger.kernel.org This patch changes the helper function xfs_ilock_for_iomap such that the lock mode must be passed in. Signed-off-by: Stefan Roesch Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. Wong --- fs/xfs/xfs_iomap.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c index 5a393259a3a3..bcf7c3694290 100644 --- a/fs/xfs/xfs_iomap.c +++ b/fs/xfs/xfs_iomap.c @@ -664,7 +664,7 @@ xfs_ilock_for_iomap( unsigned flags, unsigned *lockmode) { - unsigned mode = XFS_ILOCK_SHARED; + unsigned int mode = *lockmode; bool is_write = flags & (IOMAP_WRITE | IOMAP_ZERO); /* @@ -742,7 +742,7 @@ xfs_direct_write_iomap_begin( int nimaps = 1, error = 0; bool shared = false; u16 iomap_flags = 0; - unsigned lockmode; + unsigned int lockmode = XFS_ILOCK_SHARED; ASSERT(flags & (IOMAP_WRITE | IOMAP_ZERO)); @@ -1172,7 +1172,7 @@ xfs_read_iomap_begin( xfs_fileoff_t end_fsb = xfs_iomap_end_fsb(mp, offset, length); int nimaps = 1, error = 0; bool shared = false; - unsigned lockmode; + unsigned int lockmode = XFS_ILOCK_SHARED; ASSERT(!(flags & (IOMAP_WRITE | IOMAP_ZERO))); From patchwork Wed Jun 1 21:01:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefan Roesch X-Patchwork-Id: 12867303 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9F090C43334 for ; Wed, 1 Jun 2022 21:08:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231272AbiFAVIL (ORCPT ); Wed, 1 Jun 2022 17:08:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49486 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231267AbiFAVIL (ORCPT ); Wed, 1 Jun 2022 17:08:11 -0400 Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2EA394CD6D for ; Wed, 1 Jun 2022 14:08:09 -0700 (PDT) Received: from pps.filterd (m0148461.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 251KQDXj028690 for ; Wed, 1 Jun 2022 14:08:09 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fb.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=facebook; bh=nQH2lmMMGXtJ/jcy4qJSIzXTyuwO7PWxs6HFqy0bsS4=; 
b=lGs5wtkaRUSFoIsEjC46E4CLsggKbpn3Ugw4yPQahhmLhNYdKNW2ovbSiE6EpKoMz/4R wW6nkR5bi8GfWfc56GP4JxT9IrjDDBOkF9mB2Ju5/E0bpwV8aEOsYHoq+uO3RxKZRuCY g7qXpKCKi4r7eq9pHSSNqC6EHfbdmNlqhOs= Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3gdt5jqgua-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 01 Jun 2022 14:08:08 -0700 Received: from twshared19572.14.frc2.facebook.com (2620:10d:c085:108::4) by mail.thefacebook.com (2620:10d:c085:21d::6) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.28; Wed, 1 Jun 2022 14:08:08 -0700 Received: by devvm225.atn0.facebook.com (Postfix, from userid 425415) id 1CA19FEB23B3; Wed, 1 Jun 2022 14:01:43 -0700 (PDT) From: Stefan Roesch To: , , , , CC: , , , , , Christoph Hellwig Subject: [PATCH v7 15/15] xfs: Add async buffered write support Date: Wed, 1 Jun 2022 14:01:41 -0700 Message-ID: <20220601210141.3773402-16-shr@fb.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220601210141.3773402-1-shr@fb.com> References: <20220601210141.3773402-1-shr@fb.com> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: BwEN7fmGkP0uIfgaug9k9g0p3sounaR2 X-Proofpoint-ORIG-GUID: BwEN7fmGkP0uIfgaug9k9g0p3sounaR2 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.874,Hydra:6.0.517,FMLib:17.11.64.514 definitions=2022-06-01_08,2022-06-01_01,2022-02-23_01 Precedence: bulk List-ID: X-Mailing-List: io-uring@vger.kernel.org This adds the async buffered write support to XFS. For async buffered write requests, the request will return -EAGAIN if the ilock cannot be obtained immediately. Signed-off-by: Stefan Roesch Reviewed-by: Christoph Hellwig --- fs/xfs/xfs_file.c | 11 +++++------ fs/xfs/xfs_iomap.c | 5 ++++- 2 files changed, 9 insertions(+), 7 deletions(-) diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c index a60632ecc3f0..4d65ff007c7d 100644 --- a/fs/xfs/xfs_file.c +++ b/fs/xfs/xfs_file.c @@ -410,7 +410,7 @@ xfs_file_write_checks( spin_unlock(&ip->i_flags_lock); out: - return file_modified(file); + return kiocb_modified(iocb); } static int @@ -700,12 +700,11 @@ xfs_file_buffered_write( bool cleared_space = false; unsigned int iolock; - if (iocb->ki_flags & IOCB_NOWAIT) - return -EOPNOTSUPP; - write_retry: iolock = XFS_IOLOCK_EXCL; - xfs_ilock(ip, iolock); + ret = xfs_ilock_iocb(iocb, iolock); + if (ret) + return ret; ret = xfs_file_write_checks(iocb, from, &iolock); if (ret) @@ -1165,7 +1164,7 @@ xfs_file_open( { if (xfs_is_shutdown(XFS_M(inode->i_sb))) return -EIO; - file->f_mode |= FMODE_NOWAIT | FMODE_BUF_RASYNC; + file->f_mode |= FMODE_NOWAIT | FMODE_BUF_RASYNC | FMODE_BUF_WASYNC; return generic_file_open(inode, file); } diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c index bcf7c3694290..5d50fed291b4 100644 --- a/fs/xfs/xfs_iomap.c +++ b/fs/xfs/xfs_iomap.c @@ -886,6 +886,7 @@ xfs_buffered_write_iomap_begin( bool eof = false, cow_eof = false, shared = false; int allocfork = XFS_DATA_FORK; int error = 0; + unsigned int lockmode = XFS_ILOCK_EXCL; if (xfs_is_shutdown(mp)) return -EIO; @@ -897,7 +898,9 @@ xfs_buffered_write_iomap_begin( ASSERT(!XFS_IS_REALTIME_INODE(ip)); - xfs_ilock(ip, XFS_ILOCK_EXCL); + error = xfs_ilock_for_iomap(ip, flags, &lockmode); + if (error) + return error; if (XFS_IS_CORRUPT(mp, !xfs_ifork_has_extents(&ip->i_df)) || XFS_TEST_ERROR(false, mp, XFS_ERRTAG_BMAPIFORMAT)) {