From patchwork Mon Jan 15 22:59:45 2024
X-Patchwork-Submitter: Dave Chinner
X-Patchwork-Id: 13520260
From: Dave Chinner <david@fromorbit.com>
To: linux-xfs@vger.kernel.org
Cc: willy@infradead.org, linux-mm@kvack.org
Subject: [PATCH 07/12] xfs: use __GFP_NOLOCKDEP instead of GFP_NOFS
Date: Tue, 16 Jan 2024 09:59:45 +1100
Message-ID: <20240115230113.4080105-8-david@fromorbit.com>
In-Reply-To: <20240115230113.4080105-1-david@fromorbit.com>
References: <20240115230113.4080105-1-david@fromorbit.com>
MIME-Version: 1.0

From: Dave Chinner <david@fromorbit.com>

In the past we've had problems with lockdep false positives stemming
from inode locking occurring in memory reclaim contexts (e.g. from
superblock shrinkers).
Lockdep doesn't know that inodes accessed from above memory reclaim
cannot be accessed from below memory reclaim (and vice versa), and there
has never been a good solution to this problem with lockdep annotations.

This situation isn't unique to inode locks - buffers are also locked
above and below memory reclaim, and we have to maintain lock ordering
for them - and against inodes - appropriately. IOWs, the same code paths
and locks are taken both above and below memory reclaim and so we always
need to make sure the lock orders are consistent. We are spared the
lockdep problems this might cause by the fact that semaphores and bit
locks aren't covered by lockdep.

In general, this sort of lockdep false positive is caused by code that
runs GFP_KERNEL memory allocation with an actively referenced inode
locked. When it is run from a transaction, memory allocation is
automatically GFP_NOFS, so we don't have reclaim recursion issues. So
in the places where we do memory allocation with inodes locked outside
of a transaction, we have explicitly set them to use GFP_NOFS
allocations to prevent lockdep false positives from being reported if
the allocation dips into direct memory reclaim.

More recently, __GFP_NOLOCKDEP was added to the memory allocation flags
to tell lockdep not to track that particular allocation for the
purposes of reclaim recursion detection. This is a much better way of
preventing false positives - it allows us to use GFP_KERNEL context
outside of transactions, and allows direct memory reclaim to proceed
normally without throwing out false positive deadlock warnings.

The obvious places that lock inodes and do memory allocation are the
lookup paths and inode extent list initialisation. These occur in
non-transactional GFP_KERNEL contexts, and so can run direct reclaim
and lock inodes.
This patch makes a first pass through all the explicit GFP_NOFS
allocations in XFS and converts the obvious ones to
GFP_KERNEL | __GFP_NOLOCKDEP as a first step towards removing explicit
GFP_NOFS allocations from the XFS code.

Signed-off-by: Dave Chinner
Reviewed-by: Darrick J. Wong
---
 fs/xfs/libxfs/xfs_ag.c         |  2 +-
 fs/xfs/libxfs/xfs_btree.h      |  4 +++-
 fs/xfs/libxfs/xfs_da_btree.c   |  8 +++++---
 fs/xfs/libxfs/xfs_dir2.c       | 14 ++++----------
 fs/xfs/libxfs/xfs_iext_tree.c  | 22 +++++++++++++---------
 fs/xfs/libxfs/xfs_inode_fork.c |  8 +++++---
 fs/xfs/xfs_icache.c            |  5 ++---
 fs/xfs/xfs_qm.c                |  6 +++---
 8 files changed, 36 insertions(+), 33 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_ag.c b/fs/xfs/libxfs/xfs_ag.c
index 937ea48d5cc0..036f4ee43fd3 100644
--- a/fs/xfs/libxfs/xfs_ag.c
+++ b/fs/xfs/libxfs/xfs_ag.c
@@ -389,7 +389,7 @@ xfs_initialize_perag(
 		pag->pag_agno = index;
 		pag->pag_mount = mp;
 
-		error = radix_tree_preload(GFP_NOFS);
+		error = radix_tree_preload(GFP_KERNEL | __GFP_RETRY_MAYFAIL);
 		if (error)
 			goto out_free_pag;
 
diff --git a/fs/xfs/libxfs/xfs_btree.h b/fs/xfs/libxfs/xfs_btree.h
index d906324e25c8..75a0e2c8e115 100644
--- a/fs/xfs/libxfs/xfs_btree.h
+++ b/fs/xfs/libxfs/xfs_btree.h
@@ -725,7 +725,9 @@ xfs_btree_alloc_cursor(
 {
 	struct xfs_btree_cur	*cur;
 
-	cur = kmem_cache_zalloc(cache, GFP_NOFS | __GFP_NOFAIL);
+	/* BMBT allocations can come through from non-transactional context. */
+	cur = kmem_cache_zalloc(cache,
+			GFP_KERNEL | __GFP_NOLOCKDEP | __GFP_NOFAIL);
 	cur->bc_tp = tp;
 	cur->bc_mp = mp;
 	cur->bc_btnum = btnum;
diff --git a/fs/xfs/libxfs/xfs_da_btree.c b/fs/xfs/libxfs/xfs_da_btree.c
index 3383b4525381..444ec1560f43 100644
--- a/fs/xfs/libxfs/xfs_da_btree.c
+++ b/fs/xfs/libxfs/xfs_da_btree.c
@@ -85,7 +85,8 @@ xfs_da_state_alloc(
 {
 	struct xfs_da_state	*state;
 
-	state = kmem_cache_zalloc(xfs_da_state_cache, GFP_NOFS | __GFP_NOFAIL);
+	state = kmem_cache_zalloc(xfs_da_state_cache,
+			GFP_KERNEL | __GFP_NOLOCKDEP | __GFP_NOFAIL);
 	state->args = args;
 	state->mp = args->dp->i_mount;
 	return state;
@@ -2519,7 +2520,8 @@ xfs_dabuf_map(
 	int			error = 0, nirecs, i;
 
 	if (nfsb > 1)
-		irecs = kzalloc(sizeof(irec) * nfsb, GFP_NOFS | __GFP_NOFAIL);
+		irecs = kzalloc(sizeof(irec) * nfsb,
+				GFP_KERNEL | __GFP_NOLOCKDEP | __GFP_NOFAIL);
 
 	nirecs = nfsb;
 	error = xfs_bmapi_read(dp, bno, nfsb, irecs, &nirecs,
@@ -2533,7 +2535,7 @@ xfs_dabuf_map(
 		 */
 		if (nirecs > 1) {
 			map = kzalloc(nirecs * sizeof(struct xfs_buf_map),
-					GFP_NOFS | __GFP_NOFAIL);
+					GFP_KERNEL | __GFP_NOLOCKDEP | __GFP_NOFAIL);
 			if (!map) {
 				error = -ENOMEM;
 				goto out_free_irecs;
diff --git a/fs/xfs/libxfs/xfs_dir2.c b/fs/xfs/libxfs/xfs_dir2.c
index e60aa8f8d0a7..728f72f0d078 100644
--- a/fs/xfs/libxfs/xfs_dir2.c
+++ b/fs/xfs/libxfs/xfs_dir2.c
@@ -333,7 +333,8 @@ xfs_dir_cilookup_result(
 			!(args->op_flags & XFS_DA_OP_CILOOKUP))
 		return -EEXIST;
 
-	args->value = kmalloc(len, GFP_NOFS | __GFP_RETRY_MAYFAIL);
+	args->value = kmalloc(len,
+			GFP_KERNEL | __GFP_NOLOCKDEP | __GFP_RETRY_MAYFAIL);
 	if (!args->value)
 		return -ENOMEM;
 
@@ -364,15 +365,8 @@ xfs_dir_lookup(
 	ASSERT(S_ISDIR(VFS_I(dp)->i_mode));
 	XFS_STATS_INC(dp->i_mount, xs_dir_lookup);
 
-	/*
-	 * We need to use KM_NOFS here so that lockdep will not throw false
-	 * positive deadlock warnings on a non-transactional lookup path. It is
-	 * safe to recurse into inode recalim in that case, but lockdep can't
-	 * easily be taught about it. Hence KM_NOFS avoids having to add more
-	 * lockdep Doing this avoids having to add a bunch of lockdep class
-	 * annotations into the reclaim path for the ilock.
-	 */
-	args = kzalloc(sizeof(*args), GFP_NOFS | __GFP_NOFAIL);
+	args = kzalloc(sizeof(*args),
+			GFP_KERNEL | __GFP_NOLOCKDEP | __GFP_NOFAIL);
 	args->geo = dp->i_mount->m_dir_geo;
 	args->name = name->name;
 	args->namelen = name->len;
diff --git a/fs/xfs/libxfs/xfs_iext_tree.c b/fs/xfs/libxfs/xfs_iext_tree.c
index 16f18b08fe4c..8796f2b3e534 100644
--- a/fs/xfs/libxfs/xfs_iext_tree.c
+++ b/fs/xfs/libxfs/xfs_iext_tree.c
@@ -394,12 +394,18 @@ xfs_iext_leaf_key(
 	return leaf->recs[n].lo & XFS_IEXT_STARTOFF_MASK;
 }
 
+static inline void *
+xfs_iext_alloc_node(
+	int	size)
+{
+	return kzalloc(size, GFP_KERNEL | __GFP_NOLOCKDEP | __GFP_NOFAIL);
+}
+
 static void
 xfs_iext_grow(
 	struct xfs_ifork	*ifp)
 {
-	struct xfs_iext_node	*node = kzalloc(NODE_SIZE,
-					GFP_NOFS | __GFP_NOFAIL);
+	struct xfs_iext_node	*node = xfs_iext_alloc_node(NODE_SIZE);
 	int			i;
 
 	if (ifp->if_height == 1) {
@@ -455,8 +461,7 @@ xfs_iext_split_node(
 	int			*nr_entries)
 {
 	struct xfs_iext_node	*node = *nodep;
-	struct xfs_iext_node	*new = kzalloc(NODE_SIZE,
-					GFP_NOFS | __GFP_NOFAIL);
+	struct xfs_iext_node	*new = xfs_iext_alloc_node(NODE_SIZE);
 	const int		nr_move = KEYS_PER_NODE / 2;
 	int			nr_keep = nr_move + (KEYS_PER_NODE & 1);
 	int			i = 0;
@@ -544,8 +549,7 @@ xfs_iext_split_leaf(
 	int			*nr_entries)
 {
 	struct xfs_iext_leaf	*leaf = cur->leaf;
-	struct xfs_iext_leaf	*new = kzalloc(NODE_SIZE,
-					GFP_NOFS | __GFP_NOFAIL);
+	struct xfs_iext_leaf	*new = xfs_iext_alloc_node(NODE_SIZE);
 	const int		nr_move = RECS_PER_LEAF / 2;
 	int			nr_keep = nr_move + (RECS_PER_LEAF & 1);
 	int			i;
@@ -586,8 +590,7 @@ xfs_iext_alloc_root(
 {
 	ASSERT(ifp->if_bytes == 0);
 
-	ifp->if_data = kzalloc(sizeof(struct xfs_iext_rec),
-			GFP_NOFS | __GFP_NOFAIL);
+	ifp->if_data = xfs_iext_alloc_node(sizeof(struct xfs_iext_rec));
 	ifp->if_height = 1;
 
 	/* now that we have a node step into it */
@@ -607,7 +610,8 @@ xfs_iext_realloc_root(
 	if (new_size / sizeof(struct xfs_iext_rec) == RECS_PER_LEAF)
 		new_size = NODE_SIZE;
 
-	new = krealloc(ifp->if_data, new_size, GFP_NOFS | __GFP_NOFAIL);
+	new = krealloc(ifp->if_data, new_size,
+			GFP_KERNEL | __GFP_NOLOCKDEP | __GFP_NOFAIL);
 	memset(new + ifp->if_bytes, 0, new_size - ifp->if_bytes);
 	ifp->if_data = new;
 	cur->leaf = new;
diff --git a/fs/xfs/libxfs/xfs_inode_fork.c b/fs/xfs/libxfs/xfs_inode_fork.c
index f6d5b86b608d..709fda3d742f 100644
--- a/fs/xfs/libxfs/xfs_inode_fork.c
+++ b/fs/xfs/libxfs/xfs_inode_fork.c
@@ -50,7 +50,8 @@ xfs_init_local_fork(
 		mem_size++;
 
 	if (size) {
-		char *new_data = kmalloc(mem_size, GFP_NOFS | __GFP_NOFAIL);
+		char *new_data = kmalloc(mem_size,
+				GFP_KERNEL | __GFP_NOLOCKDEP | __GFP_NOFAIL);
 
 		memcpy(new_data, data, size);
 		if (zero_terminate)
@@ -205,7 +206,8 @@ xfs_iformat_btree(
 	}
 
 	ifp->if_broot_bytes = size;
-	ifp->if_broot = kmalloc(size, GFP_NOFS | __GFP_NOFAIL);
+	ifp->if_broot = kmalloc(size,
+			GFP_KERNEL | __GFP_NOLOCKDEP | __GFP_NOFAIL);
 	ASSERT(ifp->if_broot != NULL);
 	/*
 	 * Copy and convert from the on-disk structure
@@ -690,7 +692,7 @@ xfs_ifork_init_cow(
 		return;
 
 	ip->i_cowfp = kmem_cache_zalloc(xfs_ifork_cache,
-			GFP_NOFS | __GFP_NOFAIL);
+			GFP_KERNEL | __GFP_NOLOCKDEP | __GFP_NOFAIL);
 	ip->i_cowfp->if_format = XFS_DINODE_FMT_EXTENTS;
 }
 
diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index dba514a2c84d..06046827b5fe 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -659,10 +659,9 @@ xfs_iget_cache_miss(
 	/*
 	 * Preload the radix tree so we can insert safely under the
 	 * write spinlock. Note that we cannot sleep inside the preload
-	 * region. Since we can be called from transaction context, don't
-	 * recurse into the file system.
+	 * region.
 	 */
-	if (radix_tree_preload(GFP_NOFS)) {
+	if (radix_tree_preload(GFP_KERNEL | __GFP_NOLOCKDEP)) {
 		error = -EAGAIN;
 		goto out_destroy;
 	}
diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c
index 46a7fe70e57e..384a5349e696 100644
--- a/fs/xfs/xfs_qm.c
+++ b/fs/xfs/xfs_qm.c
@@ -643,9 +643,9 @@ xfs_qm_init_quotainfo(
 	if (error)
 		goto out_free_lru;
 
-	INIT_RADIX_TREE(&qinf->qi_uquota_tree, GFP_NOFS);
-	INIT_RADIX_TREE(&qinf->qi_gquota_tree, GFP_NOFS);
-	INIT_RADIX_TREE(&qinf->qi_pquota_tree, GFP_NOFS);
+	INIT_RADIX_TREE(&qinf->qi_uquota_tree, GFP_KERNEL);
+	INIT_RADIX_TREE(&qinf->qi_gquota_tree, GFP_KERNEL);
+	INIT_RADIX_TREE(&qinf->qi_pquota_tree, GFP_KERNEL);
 
 	mutex_init(&qinf->qi_tree_lock);
 
 	/* mutex used to serialize quotaoffs */