From patchwork Fri Jun 9 19:46:11 2023
X-Patchwork-Id: 13274337
From: Anna Schumaker
To: linux-nfs@vger.kernel.org, trond.myklebust@hammerspace.com
Cc: anna@kernel.org
Subject: [PATCH v2 1/3] NFSv4.2: Fix READ_PLUS smatch warnings
Date: Fri, 9 Jun 2023 15:46:11 -0400
Message-ID: <20230609194613.848590-1-anna@kernel.org>

Smatch reports:

  fs/nfs/nfs42xdr.c:1131 decode_read_plus() warn: missing error code? 'status'

Dan suggests fixing this with a hardcoded "return 0" from the
"if (segments == 0)" check. Additionally, smatch reports that the
"status = -EIO" assignment is not used. This patch addresses both of
these issues.

Reported-by: kernel test robot
Reported-by: Dan Carpenter
Closes: https://lore.kernel.org/r/202305222209.6l5VM2lL-lkp@intel.com/
Fixes: d3b00a802c845 ("NFS: Replace the READ_PLUS decoding code")
Signed-off-by: Anna Schumaker
---
 fs/nfs/nfs42xdr.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/fs/nfs/nfs42xdr.c b/fs/nfs/nfs42xdr.c
index a6df815a140c..ef3b150970ff 100644
--- a/fs/nfs/nfs42xdr.c
+++ b/fs/nfs/nfs42xdr.c
@@ -1136,13 +1136,12 @@ static int decode_read_plus(struct xdr_stream *xdr, struct nfs_pgio_res *res)
         res->eof = be32_to_cpup(p++);
         segments = be32_to_cpup(p++);
         if (segments == 0)
-                return status;
+                return 0;
 
         segs = kmalloc_array(segments, sizeof(*segs), GFP_KERNEL);
         if (!segs)
                 return -ENOMEM;
 
-        status = -EIO;
         for (i = 0; i < segments; i++) {
                 status = decode_read_plus_segment(xdr, &segs[i]);
                 if (status < 0)
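
A side note on the fix above: once the "segments == 0" case returns a
hardcoded 0, the per-segment loop sets status on its first iteration, so the
up-front "status = -EIO" assignment was effectively dead. A minimal standalone
sketch of the resulting control flow (userspace C, with decode_segment() as a
hypothetical stand-in for decode_read_plus_segment(); this is not the kernel
implementation):

/* Minimal sketch of the decode_read_plus() control flow after this patch.
 * Userspace C; decode_segment() is a hypothetical stand-in. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct segment { unsigned long long offset, length; };

static int decode_segment(struct segment *seg, unsigned int i)
{
    seg->offset = i * 4096ULL;
    seg->length = 4096ULL;
    return 0;                   /* would be < 0 on a decode error */
}

static int decode_read_plus_sketch(unsigned int segments)
{
    struct segment *segs;
    int status = 0;
    unsigned int i;

    if (segments == 0)
        return 0;               /* an empty reply is not an error */

    segs = calloc(segments, sizeof(*segs));
    if (!segs)
        return -ENOMEM;

    for (i = 0; i < segments; i++) {
        /* status is always assigned here, so an initial -EIO would
         * never be read */
        status = decode_segment(&segs[i], i);
        if (status < 0)
            break;
    }

    free(segs);
    return status;
}

int main(void)
{
    printf("0 segments -> %d\n", decode_read_plus_sketch(0));
    printf("2 segments -> %d\n", decode_read_plus_sketch(2));
    return 0;
}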
From patchwork Fri Jun 9 19:46:12 2023
X-Patchwork-Id: 13274339
From: Anna Schumaker
To: linux-nfs@vger.kernel.org, trond.myklebust@hammerspace.com
Cc: anna@kernel.org
Subject: [PATCH v2 2/3] NFSv4.2: Fix READ_PLUS size calculations
Date: Fri, 9 Jun 2023 15:46:12 -0400
Message-ID: <20230609194613.848590-2-anna@kernel.org>
In-Reply-To: <20230609194613.848590-1-anna@kernel.org>
References: <20230609194613.848590-1-anna@kernel.org>

I bump decode_read_plus_maxsz to account for hole segments, but I need to
subtract out this increase when calling rpc_prepare_reply_pages() so that
the common case of a single data segment reply can be placed directly into
the xdr pages without needing to be shifted around.

Reported-by: Chuck Lever
Fixes: d3b00a802c845 ("NFS: Replace the READ_PLUS decoding code")
Signed-off-by: Anna Schumaker
---
 fs/nfs/nfs42xdr.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/fs/nfs/nfs42xdr.c b/fs/nfs/nfs42xdr.c
index ef3b150970ff..75765382cc0e 100644
--- a/fs/nfs/nfs42xdr.c
+++ b/fs/nfs/nfs42xdr.c
@@ -51,10 +51,16 @@
                 (1 /* data_content4 */ + \
                  2 /* data_info4.di_offset */ + \
                  1 /* data_info4.di_length */)
+#define NFS42_READ_PLUS_HOLE_SEGMENT_SIZE \
+                (1 /* data_content4 */ + \
+                 2 /* data_info4.di_offset */ + \
+                 2 /* data_info4.di_length */)
+#define READ_PLUS_SEGMENT_SIZE_DIFF (NFS42_READ_PLUS_HOLE_SEGMENT_SIZE - \
+                 NFS42_READ_PLUS_DATA_SEGMENT_SIZE)
 #define decode_read_plus_maxsz (op_decode_hdr_maxsz + \
                  1 /* rpr_eof */ + \
                  1 /* rpr_contents count */ + \
-                 NFS42_READ_PLUS_DATA_SEGMENT_SIZE)
+                 NFS42_READ_PLUS_HOLE_SEGMENT_SIZE)
 #define encode_seek_maxsz (op_encode_hdr_maxsz + \
                  encode_stateid_maxsz + \
                  2 /* offset */ + \
@@ -781,8 +787,8 @@ static void nfs4_xdr_enc_read_plus(struct rpc_rqst *req,
         encode_putfh(xdr, args->fh, &hdr);
         encode_read_plus(xdr, args, &hdr);
 
-        rpc_prepare_reply_pages(req, args->pages, args->pgbase,
-                                args->count, hdr.replen);
+        rpc_prepare_reply_pages(req, args->pages, args->pgbase, args->count,
+                                hdr.replen - READ_PLUS_SEGMENT_SIZE_DIFF);
         encode_nops(&hdr);
 }
 
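For reference, the size bookkeeping above, counted in 32-bit XDR words: a data
segment header is 1 + 2 + 1 = 4 words, a hole segment header is 1 + 2 + 2 = 5
words, so READ_PLUS_SEGMENT_SIZE_DIFF comes out to 1 word (4 bytes).
decode_read_plus_maxsz now budgets for the larger hole segment, and the encoder
subtracts that difference back out of hdr.replen when calling
rpc_prepare_reply_pages(), so a reply carrying a single data segment still
lands directly in the xdr pages. A standalone check of the constants (plain C;
the macros are copied from the patch):

/* Standalone check of the READ_PLUS segment-size constants (plain C).
 * The 4-bytes-per-word factor is the XDR word size. */
#include <stdio.h>

#define NFS42_READ_PLUS_DATA_SEGMENT_SIZE \
    (1 /* data_content4 */ + \
     2 /* data_info4.di_offset */ + \
     1 /* data_info4.di_length */)
#define NFS42_READ_PLUS_HOLE_SEGMENT_SIZE \
    (1 /* data_content4 */ + \
     2 /* data_info4.di_offset */ + \
     2 /* data_info4.di_length */)
#define READ_PLUS_SEGMENT_SIZE_DIFF (NFS42_READ_PLUS_HOLE_SEGMENT_SIZE - \
                                     NFS42_READ_PLUS_DATA_SEGMENT_SIZE)

int main(void)
{
    printf("data segment header: %d words\n", NFS42_READ_PLUS_DATA_SEGMENT_SIZE);
    printf("hole segment header: %d words\n", NFS42_READ_PLUS_HOLE_SEGMENT_SIZE);
    printf("difference: %d word(s), %d bytes\n", READ_PLUS_SEGMENT_SIZE_DIFF,
           READ_PLUS_SEGMENT_SIZE_DIFF * 4);
    return 0;
}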
From patchwork Fri Jun 9 19:46:13 2023
X-Patchwork-Id: 13274338
From: Anna Schumaker
To: linux-nfs@vger.kernel.org, trond.myklebust@hammerspace.com
Cc: anna@kernel.org
Subject: [PATCH v2 3/3] NFSv4.2: Rework scratch handling for READ_PLUS (again)
Date: Fri, 9 Jun 2023 15:46:13 -0400
Message-ID: <20230609194613.848590-3-anna@kernel.org>
In-Reply-To: <20230609194613.848590-1-anna@kernel.org>
References: <20230609194613.848590-1-anna@kernel.org>

I found that the read code might send multiple requests using the same
nfs_pgio_header, but nfs4_proc_read_setup() is only called once. This is
how we occasionally ended up double-freeing the scratch buffer, and it
also means we could hand the xdr layer a NULL scratch pointer with a
non-zero length. This results in an oops the first time decoding needs to
copy something into the scratch buffer, which frequently happens when
decoding READ_PLUS hole segments.

I fix this by moving scratch handling into the pageio read code. I provide
a function to allocate scratch space for decoding read replies, and free
the scratch buffer when the nfs_pgio_header is freed.

Krzysztof Kozlowski hit a bug a while ago with similar symptoms, and I'm
hopeful that this patch fixes his issue.

Reported-by: Krzysztof Kozlowski
Fixes: fbd2a05f29a9 ("NFSv4.2: Rework scratch handling for READ_PLUS")
Signed-off-by: Anna Schumaker
---
 fs/nfs/internal.h |  1 +
 fs/nfs/nfs42.h    |  1 +
 fs/nfs/nfs42xdr.c |  2 +-
 fs/nfs/nfs4proc.c | 13 +------------
 fs/nfs/read.c     | 10 ++++++++++
 5 files changed, 14 insertions(+), 13 deletions(-)
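
The ownership rule after this patch, made concrete: the scratch buffer is
allocated at most once per nfs_pgio_header (when READ_PLUS is chosen for it),
it survives any resends that reuse the same header, and it is freed exactly
once together with the header. Below is a toy model of that lifecycle
(userspace C with a hypothetical stand-in struct; not the kernel code itself):

/* Toy model of the scratch-buffer lifecycle introduced by this patch.
 * Userspace C, hypothetical stand-in types; not kernel code. */
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

#define READ_PLUS_SCRATCH_SIZE 16

struct pgio_header {            /* stand-in for struct nfs_pgio_header */
    void *scratch;
};

static bool read_alloc_scratch(struct pgio_header *hdr, size_t size)
{
    assert(hdr->scratch == NULL);   /* mirrors the WARN_ON() in
                                       nfs_read_alloc_scratch() below */
    hdr->scratch = malloc(size);
    return hdr->scratch != NULL;
}

static void readhdr_free(struct pgio_header *hdr)
{
    free(hdr->scratch);             /* freed once, with the header */
    hdr->scratch = NULL;
}

int main(void)
{
    struct pgio_header hdr = { .scratch = NULL };

    /* setup: allocate once when READ_PLUS is selected for this header */
    if (!read_alloc_scratch(&hdr, READ_PLUS_SCRATCH_SIZE))
        return 1;

    /* read_done and any resend reuse the same header: no free here, so
     * the xdr layer never sees a NULL scratch with a non-zero length */

    /* teardown: the buffer goes away together with the header */
    readhdr_free(&hdr);
    return 0;
}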
diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
index 3cc027d3bd58..1607c23f68d4 100644
--- a/fs/nfs/internal.h
+++ b/fs/nfs/internal.h
@@ -489,6 +489,7 @@ extern const struct nfs_pgio_completion_ops nfs_async_read_completion_ops;
 extern void nfs_pageio_init_read(struct nfs_pageio_descriptor *pgio,
                         struct inode *inode, bool force_mds,
                         const struct nfs_pgio_completion_ops *compl_ops);
+extern bool nfs_read_alloc_scratch(struct nfs_pgio_header *hdr, size_t size);
 extern int nfs_read_add_folio(struct nfs_pageio_descriptor *pgio,
                         struct nfs_open_context *ctx,
                         struct folio *folio);
diff --git a/fs/nfs/nfs42.h b/fs/nfs/nfs42.h
index 0fe5aacbcfdf..b59876b01a1e 100644
--- a/fs/nfs/nfs42.h
+++ b/fs/nfs/nfs42.h
@@ -13,6 +13,7 @@
  * more? Need to consider not to pre-alloc too much for a compound.
  */
 #define PNFS_LAYOUTSTATS_MAXDEV (4)
+#define READ_PLUS_SCRATCH_SIZE (16)
 
 /* nfs4.2proc.c */
 #ifdef CONFIG_NFS_V4_2
diff --git a/fs/nfs/nfs42xdr.c b/fs/nfs/nfs42xdr.c
index 75765382cc0e..20aa5e746497 100644
--- a/fs/nfs/nfs42xdr.c
+++ b/fs/nfs/nfs42xdr.c
@@ -1351,7 +1351,7 @@ static int nfs4_xdr_dec_read_plus(struct rpc_rqst *rqstp,
         struct compound_hdr hdr;
         int status;
 
-        xdr_set_scratch_buffer(xdr, res->scratch, sizeof(res->scratch));
+        xdr_set_scratch_buffer(xdr, res->scratch, READ_PLUS_SCRATCH_SIZE);
 
         status = decode_compound_hdr(xdr, &hdr);
         if (status)
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index d3665390c4cb..73dc8a793ae9 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -5437,18 +5437,8 @@ static bool nfs4_read_plus_not_supported(struct rpc_task *task,
         return false;
 }
 
-static inline void nfs4_read_plus_scratch_free(struct nfs_pgio_header *hdr)
-{
-        if (hdr->res.scratch) {
-                kfree(hdr->res.scratch);
-                hdr->res.scratch = NULL;
-        }
-}
-
 static int nfs4_read_done(struct rpc_task *task, struct nfs_pgio_header *hdr)
 {
-        nfs4_read_plus_scratch_free(hdr);
-
         if (!nfs4_sequence_done(task, &hdr->res.seq_res))
                 return -EAGAIN;
         if (nfs4_read_stateid_changed(task, &hdr->args))
@@ -5468,8 +5458,7 @@ static bool nfs42_read_plus_support(struct nfs_pgio_header *hdr,
         /* Note: We don't use READ_PLUS with pNFS yet */
         if (nfs_server_capable(hdr->inode, NFS_CAP_READ_PLUS) && !hdr->ds_clp) {
                 msg->rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_READ_PLUS];
-                hdr->res.scratch = kmalloc(32, GFP_KERNEL);
-                return hdr->res.scratch != NULL;
+                return nfs_read_alloc_scratch(hdr, READ_PLUS_SCRATCH_SIZE);
         }
         return false;
 }
diff --git a/fs/nfs/read.c b/fs/nfs/read.c
index f71eeee67e20..7dc21a48e3e7 100644
--- a/fs/nfs/read.c
+++ b/fs/nfs/read.c
@@ -47,6 +47,8 @@ static struct nfs_pgio_header *nfs_readhdr_alloc(void)
 
 static void nfs_readhdr_free(struct nfs_pgio_header *rhdr)
 {
+        if (rhdr->res.scratch != NULL)
+                kfree(rhdr->res.scratch);
         kmem_cache_free(nfs_rdata_cachep, rhdr);
 }
 
@@ -108,6 +110,14 @@ void nfs_pageio_reset_read_mds(struct nfs_pageio_descriptor *pgio)
 }
 EXPORT_SYMBOL_GPL(nfs_pageio_reset_read_mds);
 
+bool nfs_read_alloc_scratch(struct nfs_pgio_header *hdr, size_t size)
+{
+        WARN_ON(hdr->res.scratch != NULL);
+        hdr->res.scratch = kmalloc(size, GFP_KERNEL);
+        return hdr->res.scratch != NULL;
+}
+EXPORT_SYMBOL_GPL(nfs_read_alloc_scratch);
+
 static void nfs_readpage_release(struct nfs_page *req, int error)
 {
         struct folio *folio = nfs_page_to_folio(req);