From patchwork Thu Sep 1 13:33:52 2022
X-Patchwork-Submitter: Amir Goldstein
X-Patchwork-Id: 12962656
X-Mailing-List: linux-xfs@vger.kernel.org
From: Amir Goldstein
To: Greg Kroah-Hartman
Cc: Sasha Levin, "Darrick J. Wong", Leah Rumancik, Chandan Babu R,
 Luis Chamberlain, Adam Manzanares, linux-xfs@vger.kernel.org,
 stable@vger.kernel.org, Dave Chinner
Subject: [PATCH 5.10 v3 1/5] xfs: remove infinite loop when reserving free
 block pool
Date: Thu, 1 Sep 2022 16:33:52 +0300
Message-Id: <20220901133356.2473299-2-amir73il@gmail.com>
In-Reply-To: <20220901133356.2473299-1-amir73il@gmail.com>
References: <20220901133356.2473299-1-amir73il@gmail.com>

commit 15f04fdc75aaaa1cccb0b8b3af1be290e118a7bc upstream.

[Added wrapper xfs_fdblocks_unavailable() for 5.10.y backport]

Infinite loops in kernel code are scary.  Calls to xfs_reserve_blocks
should be rare (people should just use the defaults!) so we really don't
need to try so hard.
Simplify the logic here by removing the infinite loop.

Cc: Brian Foster
Signed-off-by: Darrick J. Wong
Reviewed-by: Dave Chinner
Signed-off-by: Amir Goldstein
Acked-by: Darrick J. Wong
---
 fs/xfs/xfs_fsops.c | 52 +++++++++++++++++++---------------------------
 fs/xfs/xfs_mount.h |  8 +++++++
 2 files changed, 29 insertions(+), 31 deletions(-)

diff --git a/fs/xfs/xfs_fsops.c b/fs/xfs/xfs_fsops.c
index ef1d5bb88b93..6d4f4271e7be 100644
--- a/fs/xfs/xfs_fsops.c
+++ b/fs/xfs/xfs_fsops.c
@@ -376,46 +376,36 @@ xfs_reserve_blocks(
 	 * If the request is larger than the current reservation, reserve the
 	 * blocks before we update the reserve counters. Sample m_fdblocks and
 	 * perform a partial reservation if the request exceeds free space.
+	 *
+	 * The code below estimates how many blocks it can request from
+	 * fdblocks to stash in the reserve pool.  This is a classic TOCTOU
+	 * race since fdblocks updates are not always coordinated via
+	 * m_sb_lock.
 	 */
-	error = -ENOSPC;
-	do {
-		free = percpu_counter_sum(&mp->m_fdblocks) -
-						mp->m_alloc_set_aside;
-		if (free <= 0)
-			break;
-
-		delta = request - mp->m_resblks;
-		lcounter = free - delta;
-		if (lcounter < 0)
-			/* We can't satisfy the request, just get what we can */
-			fdblks_delta = free;
-		else
-			fdblks_delta = delta;
-
+	free = percpu_counter_sum(&mp->m_fdblocks) -
+						xfs_fdblocks_unavailable(mp);
+	delta = request - mp->m_resblks;
+	if (delta > 0 && free > 0) {
 		/*
 		 * We'll either succeed in getting space from the free block
-		 * count or we'll get an ENOSPC. If we get a ENOSPC, it means
-		 * things changed while we were calculating fdblks_delta and so
-		 * we should try again to see if there is anything left to
-		 * reserve.
-		 *
-		 * Don't set the reserved flag here - we don't want to reserve
-		 * the extra reserve blocks from the reserve.....
+		 * count or we'll get an ENOSPC.  Don't set the reserved flag
+		 * here - we don't want to reserve the extra reserve blocks
+		 * from the reserve.
 		 */
+		fdblks_delta = min(free, delta);
 		spin_unlock(&mp->m_sb_lock);
 		error = xfs_mod_fdblocks(mp, -fdblks_delta, 0);
 		spin_lock(&mp->m_sb_lock);
-	} while (error == -ENOSPC);

-	/*
-	 * Update the reserve counters if blocks have been successfully
-	 * allocated.
-	 */
-	if (!error && fdblks_delta) {
-		mp->m_resblks += fdblks_delta;
-		mp->m_resblks_avail += fdblks_delta;
+		/*
+		 * Update the reserve counters if blocks have been successfully
+		 * allocated.
+		 */
+		if (!error) {
+			mp->m_resblks += fdblks_delta;
+			mp->m_resblks_avail += fdblks_delta;
+		}
 	}
-out:
 	if (outval) {
 		outval->resblks = mp->m_resblks;

diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h
index dfa429b77ee2..3a6bc9dc11b5 100644
--- a/fs/xfs/xfs_mount.h
+++ b/fs/xfs/xfs_mount.h
@@ -406,6 +406,14 @@ extern int xfs_initialize_perag(xfs_mount_t *mp, xfs_agnumber_t agcount,
 				     xfs_agnumber_t *maxagi);
 extern void xfs_unmountfs(xfs_mount_t *);

+/* Accessor added for 5.10.y backport */
+static inline uint64_t
+xfs_fdblocks_unavailable(
+	struct xfs_mount	*mp)
+{
+	return mp->m_alloc_set_aside;
+}
+
 extern int xfs_mod_fdblocks(struct xfs_mount *mp, int64_t delta,
 				 bool reserved);
 extern int xfs_mod_frextents(struct xfs_mount *mp, int64_t delta);

From patchwork Thu Sep 1 13:33:53 2022
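The single-pass logic that patch 1/5 above introduces is easy to model outside the kernel. Below is a minimal userspace sketch — `struct mount_sim`, `reserve_blocks`, and the plain integer fields are illustrative stand-ins, not the kernel API — showing how one clamped grab of `min(free, delta)` replaces the old `-ENOSPC` retry loop:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the handful of xfs_mount counters the patch
 * touches; illustrative userspace code, not the kernel structures. */
struct mount_sim {
	int64_t fdblocks;        /* free blocks */
	int64_t alloc_set_aside; /* blocks never available to allocations */
	int64_t resblks;         /* reserve pool target size */
	int64_t resblks_avail;   /* blocks currently in the pool */
};

/* Single-pass reservation: grab min(free, delta) once and stop,
 * instead of looping until -ENOSPC goes away. */
static void reserve_blocks(struct mount_sim *mp, int64_t request)
{
	int64_t free_blocks = mp->fdblocks - mp->alloc_set_aside;
	int64_t delta = request - mp->resblks;

	if (delta > 0 && free_blocks > 0) {
		/* Partial reservation if the request exceeds free space. */
		int64_t fdblks_delta = free_blocks < delta ? free_blocks : delta;

		mp->fdblocks -= fdblks_delta;
		mp->resblks += fdblks_delta;
		mp->resblks_avail += fdblks_delta;
	}
}
```

When the request exceeds free space the pool is simply left part-filled — the "just get what we can" outcome the removed loop tried to reach by retrying.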
From: Amir Goldstein
To: Greg Kroah-Hartman
Cc: Sasha Levin, "Darrick J. Wong", Leah Rumancik, Chandan Babu R,
 Luis Chamberlain, Adam Manzanares, linux-xfs@vger.kernel.org,
 stable@vger.kernel.org, Dave Chinner
Subject: [PATCH 5.10 v3 2/5] xfs: always succeed at setting the reserve pool
 size
Date: Thu, 1 Sep 2022 16:33:53 +0300
Message-Id: <20220901133356.2473299-3-amir73il@gmail.com>
In-Reply-To: <20220901133356.2473299-1-amir73il@gmail.com>

From: "Darrick J. Wong"

commit 0baa2657dc4d79202148be79a3dc36c35f425060 upstream.

Nowadays, xfs_mod_fdblocks will always choose to fill the reserve pool
with freed blocks before adding to fdblocks.  Therefore, we can change
the behavior of xfs_reserve_blocks slightly -- setting the target size
of the pool should always succeed, since a deficiency will eventually
be made up as blocks get freed.

Signed-off-by: Darrick J. Wong
Reviewed-by: Dave Chinner
Signed-off-by: Amir Goldstein
Acked-by: Darrick J. Wong
---
 fs/xfs/xfs_fsops.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/fs/xfs/xfs_fsops.c b/fs/xfs/xfs_fsops.c
index 6d4f4271e7be..dacead0d0934 100644
--- a/fs/xfs/xfs_fsops.c
+++ b/fs/xfs/xfs_fsops.c
@@ -380,11 +380,14 @@ xfs_reserve_blocks(
 	 * The code below estimates how many blocks it can request from
 	 * fdblocks to stash in the reserve pool.  This is a classic TOCTOU
 	 * race since fdblocks updates are not always coordinated via
-	 * m_sb_lock.
+	 * m_sb_lock. Set the reserve size even if there's not enough free
+	 * space to fill it because mod_fdblocks will refill an undersized
+	 * reserve when it can.
 	 */
 	free = percpu_counter_sum(&mp->m_fdblocks) -
 						xfs_fdblocks_unavailable(mp);
 	delta = request - mp->m_resblks;
+	mp->m_resblks = request;
 	if (delta > 0 && free > 0) {
 		/*
 		 * We'll either succeed in getting space from the free block
@@ -401,10 +404,8 @@ xfs_reserve_blocks(
 		 * Update the reserve counters if blocks have been successfully
 		 * allocated.
 		 */
-		if (!error) {
-			mp->m_resblks += fdblks_delta;
+		if (!error)
 			mp->m_resblks_avail += fdblks_delta;
-		}
 	}
 out:
 	if (outval) {

From patchwork Thu Sep 1 13:33:54 2022
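Patch 2/5 above decouples the pool *target* from the pool *fill*: the target is always set to the request, while the fill remains limited by free space and is topped up later as blocks are freed. A hedged userspace sketch of that split — the struct, names, and the simplified freeing path are illustrative assumptions, not the kernel code:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative userspace model, not the kernel API. */
struct mount_sim {
	int64_t fdblocks;        /* free blocks */
	int64_t alloc_set_aside; /* blocks never available to allocations */
	int64_t resblks;         /* reserve pool target size */
	int64_t resblks_avail;   /* blocks currently in the pool */
};

/* After patch 2: setting the target always succeeds; only the fill is
 * limited by the free space available right now. */
static void reserve_blocks(struct mount_sim *mp, int64_t request)
{
	int64_t free_blocks = mp->fdblocks - mp->alloc_set_aside;
	int64_t delta = request - mp->resblks;

	mp->resblks = request;	/* target: always set */
	if (delta > 0 && free_blocks > 0) {
		int64_t take = free_blocks < delta ? free_blocks : delta;

		mp->fdblocks -= take;
		mp->resblks_avail += take;
	}
}

/* Freed blocks refill an undersized reserve before rejoining fdblocks,
 * mirroring the xfs_mod_fdblocks behavior the patch relies on. */
static void free_blocks_to_pool(struct mount_sim *mp, int64_t freed)
{
	int64_t shortfall = mp->resblks - mp->resblks_avail;
	int64_t topup = freed < shortfall ? freed : shortfall;

	if (topup < 0)
		topup = 0;
	mp->resblks_avail += topup;
	mp->fdblocks += freed - topup;
}
```

The design point: because the freeing path preferentially repays the reserve, a temporarily unfillable request is a deferred success rather than an error.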
From: Amir Goldstein
To: Greg Kroah-Hartman
Cc: Sasha Levin, "Darrick J. Wong", Leah Rumancik, Chandan Babu R,
 Luis Chamberlain, Adam Manzanares, linux-xfs@vger.kernel.org,
 stable@vger.kernel.org, Dave Chinner
Subject: [PATCH 5.10 v3 3/5] xfs: fix overfilling of reserve pool
Date: Thu, 1 Sep 2022 16:33:54 +0300
Message-Id: <20220901133356.2473299-4-amir73il@gmail.com>
In-Reply-To: <20220901133356.2473299-1-amir73il@gmail.com>

From: "Darrick J. Wong"

commit 82be38bcf8a2e056b4c99ce79a3827fa743df6ec upstream.

Due to cycling of m_sb_lock, it's possible for multiple callers of
xfs_reserve_blocks to race at changing the pool size, subtracting blocks
from fdblocks, and actually putting it in the pool.  The result of all
this is that we can overfill the reserve pool to hilarious levels.

xfs_mod_fdblocks, when called with a positive value, already knows how
to take freed blocks and either fill the reserve until it's full, or put
them in fdblocks.  Use that instead of setting m_resblks_avail directly.

Signed-off-by: Darrick J. Wong
Reviewed-by: Dave Chinner
Signed-off-by: Amir Goldstein
Acked-by: Darrick J. Wong
---
 fs/xfs/xfs_fsops.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/fs/xfs/xfs_fsops.c b/fs/xfs/xfs_fsops.c
index dacead0d0934..775f833146e3 100644
--- a/fs/xfs/xfs_fsops.c
+++ b/fs/xfs/xfs_fsops.c
@@ -394,18 +394,17 @@ xfs_reserve_blocks(
 		 * count or we'll get an ENOSPC.  Don't set the reserved flag
 		 * here - we don't want to reserve the extra reserve blocks
 		 * from the reserve.
+		 *
+		 * The desired reserve size can change after we drop the lock.
+		 * Use mod_fdblocks to put the space into the reserve or into
+		 * fdblocks as appropriate.
 		 */
 		fdblks_delta = min(free, delta);
 		spin_unlock(&mp->m_sb_lock);
 		error = xfs_mod_fdblocks(mp, -fdblks_delta, 0);
-		spin_lock(&mp->m_sb_lock);
-
-		/*
-		 * Update the reserve counters if blocks have been successfully
-		 * allocated.
-		 */
 		if (!error)
-			mp->m_resblks_avail += fdblks_delta;
+			xfs_mod_fdblocks(mp, fdblks_delta, 0);
+		spin_lock(&mp->m_sb_lock);
 	}
 out:
 	if (outval) {

From patchwork Thu Sep 1 13:33:55 2022
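The overfill that patch 3/5 above prevents comes from adding carved-out blocks straight to `m_resblks_avail` after the lock was dropped. Routing them back through the positive `xfs_mod_fdblocks` path caps the refill at the pool's *current* target. A minimal sketch of that capped refill (simulated struct and function names, not the kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model, not the kernel API. */
struct mount_sim {
	int64_t fdblocks;      /* free blocks */
	int64_t resblks;       /* pool target */
	int64_t resblks_avail; /* pool fill */
};

/* Capped refill used by the fix: blocks top the reserve up only to its
 * current target; any surplus returns to the free-space count. */
static void mod_fdblocks_add(struct mount_sim *mp, int64_t delta)
{
	int64_t shortfall = mp->resblks - mp->resblks_avail;
	int64_t topup = delta < shortfall ? delta : shortfall;

	if (topup < 0)
		topup = 0;
	mp->resblks_avail += topup;
	mp->fdblocks += delta - topup;
}
```

Even if a racing caller shrank the target after the blocks were subtracted from fdblocks, the surplus flows back to fdblocks instead of overfilling the pool.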
From: Amir Goldstein
To: Greg Kroah-Hartman
Cc: Sasha Levin, "Darrick J. Wong", Leah Rumancik, Chandan Babu R,
 Luis Chamberlain, Adam Manzanares, linux-xfs@vger.kernel.org,
 stable@vger.kernel.org, Brian Foster, Christoph Hellwig, Dave Chinner
Subject: [PATCH 5.10 v3 4/5] xfs: fix soft lockup via spinning in filestream
 ag selection loop
Date: Thu, 1 Sep 2022 16:33:55 +0300
Message-Id: <20220901133356.2473299-5-amir73il@gmail.com>
In-Reply-To: <20220901133356.2473299-1-amir73il@gmail.com>

From: Brian Foster

commit f650df7171b882dca737ddbbeb414100b31f16af upstream.

The filestream AG selection loop uses pagf data to aid in AG selection,
which depends on pagf initialization.  If the in-core structure is not
initialized, the caller invokes the AGF read path to do so and carries
on.  If another task enters the loop and finds a pagf init already in
progress, the AGF read returns -EAGAIN and the task continues the loop.
This does not increment the current ag index, however, which means the
task spins on the current AGF buffer until unlocked.

If the AGF read I/O submitted by the initial task happens to be delayed
for whatever reason, this results in soft lockup warnings via the
spinning task.  This is reproduced by xfs/170.  To avoid this problem,
fix the AGF trylock failure path to properly iterate to the next AG.  If
a task iterates all AGs without making progress, the trylock behavior is
dropped in favor of blocking locks and thus a soft lockup is no longer
possible.

Fixes: f48e2df8a877ca1c ("xfs: make xfs_*read_agf return EAGAIN to ALLOC_FLAG_TRYLOCK callers")
Signed-off-by: Brian Foster
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
Signed-off-by: Dave Chinner
Signed-off-by: Amir Goldstein
Acked-by: Darrick J. Wong
---
 fs/xfs/xfs_filestream.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/fs/xfs/xfs_filestream.c b/fs/xfs/xfs_filestream.c
index db23e455eb91..bc41ec0c483d 100644
--- a/fs/xfs/xfs_filestream.c
+++ b/fs/xfs/xfs_filestream.c
@@ -128,11 +128,12 @@ xfs_filestream_pick_ag(
 		if (!pag->pagf_init) {
 			err = xfs_alloc_pagf_init(mp, NULL, ag, trylock);
 			if (err) {
-				xfs_perag_put(pag);
-				if (err != -EAGAIN)
+				if (err != -EAGAIN) {
+					xfs_perag_put(pag);
 					return err;
+				}
 				/* Couldn't lock the AGF, skip this AG. */
-				continue;
+				goto next_ag;
 			}
 		}

From patchwork Thu Sep 1 13:33:56 2022
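The two-phase scan that patch 4/5 above restores can be shown with a toy model: a failed AGF trylock advances to the next AG, and after one full fruitless pass the scan switches to blocking locks, so no task can spin on a single busy AGF. The helper below is a hypothetical userspace stand-in (not `xfs_filestream_pick_ag` itself), with `agf_busy[ag]` modeling "trylock would fail":

```c
#include <assert.h>

/* Toy model of the fixed AG scan.  agf_busy[ag] == 1 means the AGF
 * trylock for that AG fails; a blocking lock always succeeds. */
static int pick_ag(int agcount, int startag, const int *agf_busy)
{
	int trylock = 1;
	int ag = startag;
	int scanned = 0;

	for (;;) {
		if (trylock && agf_busy[ag]) {
			/* Pre-fix code retried this same AG forever;
			 * the fix skips to the next one instead. */
			ag = (ag + 1) % agcount;
			if (++scanned == agcount)
				trylock = 0; /* no progress: block instead */
			continue;
		}
		return ag; /* trylock succeeded, or we waited for the lock */
	}
}
```

Termination is structural: `scanned` reaches `agcount` after at most one pass, after which the trylock test is skipped and the loop returns.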
From: Amir Goldstein
To: Greg Kroah-Hartman
Cc: Sasha Levin, "Darrick J. Wong", Leah Rumancik, Chandan Babu R,
 Luis Chamberlain, Adam Manzanares, linux-xfs@vger.kernel.org,
 stable@vger.kernel.org, Eric Sandeen, Dave Chinner
Subject: [PATCH 5.10 v3 5/5] xfs: revert "xfs: actually bump warning counts
 when we send warnings"
Date: Thu, 1 Sep 2022 16:33:56 +0300
Message-Id: <20220901133356.2473299-6-amir73il@gmail.com>
In-Reply-To: <20220901133356.2473299-1-amir73il@gmail.com>

From: Eric Sandeen

commit bc37e4fb5cac2925b2e286b1f1d4fc2b519f7d92 upstream.

This reverts commit 4b8628d57b725b32616965e66975fcdebe008fe7.

XFS quota has had the concept of a "quota warning limit" since the
earliest Irix implementation, but a mechanism for incrementing the
warning counter was never implemented, as documented in the
xfs_quota(8) man page.  We do know from the historical archive that it
was never incremented at runtime during quota reservation operations.

With this commit, the warning counter quickly increments for every
allocation attempt after the user has crossed a quota soft limit
threshold, and this in turn transitions the user to hard quota
failures, rendering soft quota thresholds and timers useless.  This was
reported as a regression by users.

Because the intended behavior of this warning counter has never been
understood or documented, and the result of this change is a regression
in soft quota functionality, revert this commit to make soft quota
limits and timers operable again.

Fixes: 4b8628d57b72 ("xfs: actually bump warning counts when we send warnings")
Signed-off-by: Eric Sandeen
Reviewed-by: Darrick J. Wong
Reviewed-by: Dave Chinner
Signed-off-by: Dave Chinner
Signed-off-by: Amir Goldstein
Acked-by: Darrick J. Wong
---
 fs/xfs/xfs_trans_dquot.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/fs/xfs/xfs_trans_dquot.c b/fs/xfs/xfs_trans_dquot.c
index fe45b0c3970c..288ea38c43ad 100644
--- a/fs/xfs/xfs_trans_dquot.c
+++ b/fs/xfs/xfs_trans_dquot.c
@@ -615,7 +615,6 @@ xfs_dqresv_check(
 			return QUOTA_NL_ISOFTLONGWARN;
 		}

-		res->warnings++;
 		return QUOTA_NL_ISOFTWARN;
 	}
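The regression that patch 5/5 above reverts is visible in a small model of the soft-limit check. In the sketch below — simplified inputs and enum values, not the kernel's `xfs_dqresv_check` signature — the warning counter is only read, never incremented, so a user over the soft limit keeps receiving soft warnings instead of being ratcheted to hard failure on every allocation attempt:

```c
#include <assert.h>
#include <stdint.h>

enum quota_nl { QUOTA_NL_NOWARN, QUOTA_NL_ISOFTWARN, QUOTA_NL_ISOFTLONGWARN };

/* Post-revert behavior: warns is compared against warnlimit but never
 * bumped here; the reverted commit did "warns++" before the soft
 * warning, exhausting warnlimit after a handful of allocations. */
static enum quota_nl dqresv_check(int64_t total, int64_t softlimit,
				  int warns, int warnlimit)
{
	if (softlimit && total > softlimit) {
		if (warns >= warnlimit)
			return QUOTA_NL_ISOFTLONGWARN;
		return QUOTA_NL_ISOFTWARN;
	}
	return QUOTA_NL_NOWARN;
}
```

With the increment in place, repeated calls pushed `warns` past `warnlimit` almost immediately, converting every soft-limit crossing into the long-warning (effectively hard-failure) path; without it, the soft-limit timer governs the transition as intended.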