From patchwork Thu May 18 10:25:28 2023
From: Will Deacon
To: bpf@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, Will Deacon,
    Alexei Starovoitov, Daniel Borkmann, John Fastabend, Krzesimir Nowak,
    Andrey Ignatov, Yonghong Song
Subject: [PATCH v2] bpf: Fix mask generation for 32-bit narrow loads of 64-bit fields
Date: Thu, 18 May 2023 11:25:28 +0100
Message-Id: <20230518102528.1341-1-will@kernel.org>

A narrow load from a 64-bit context field results in a 64-bit load,
potentially followed by a 64-bit right-shift and then a bitwise AND
operation to extract the relevant data.

In the case of a 32-bit access, an immediate mask of 0xffffffff is used
to construct a 64-bit BPF_AND operation, which then sign-extends the
mask value and effectively acts as a glorified no-op. For example:

0:	61 10 00 00 00 00 00 00	r0 = *(u32 *)(r1 + 0)

results in the following code generation for a 64-bit field:

	ldr	x7, [x7]	// 64-bit load
	mov	x10, #0xffffffffffffffff
	and	x7, x7, x10

Fix the mask generation so that narrow loads always perform a 32-bit AND
operation:

	ldr	x7, [x7]	// 64-bit load
	mov	w10, #0xffffffff
	and	w7, w7, w10

Cc: Alexei Starovoitov
Cc: Daniel Borkmann
Cc: John Fastabend
Cc: Krzesimir Nowak
Cc: Andrey Ignatov
Acked-by: Yonghong Song
Fixes: 31fd85816dbe ("bpf: permits narrower load from bpf program context fields")
Signed-off-by: Will Deacon
---
v2: Improve commit message and add Acked-by.

 kernel/bpf/verifier.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index fbcf5a4e2fcd..5871aa78d01a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -17033,7 +17033,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 				insn_buf[cnt++] = BPF_ALU64_IMM(BPF_RSH,
 								insn->dst_reg,
 								shift);
-			insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
+			insn_buf[cnt++] = BPF_ALU32_IMM(BPF_AND, insn->dst_reg,
 							(1ULL << size * 8) - 1);
 		}
 	}
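
For reference (not part of the patch): a minimal userspace C sketch of
the sign-extension behaviour described above, using a made-up sample
value for the 64-bit field. BPF immediates are 32 bits wide, so a
BPF_ALU64 AND sign-extends the 0xffffffff mask to all-ones (a no-op),
whereas a BPF_ALU32 AND masks the low 32 bits and zero-extends the
result:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		int size = 4;	/* 32-bit narrow load */
		/* (1ULL << size * 8) - 1 == 0xffffffff; as an s32 imm this is -1 */
		int32_t imm = (int32_t)((1ULL << size * 8) - 1);
		uint64_t val = 0xdeadbeefcafebabeULL;	/* made-up 64-bit field */

		/* BPF_ALU64_IMM(BPF_AND, ...): imm is sign-extended to 64 bits,
		 * so the mask is 0xffffffffffffffff and the AND changes nothing.
		 */
		uint64_t and64 = val & (uint64_t)(int64_t)imm;

		/* BPF_ALU32_IMM(BPF_AND, ...): the low 32 bits are masked and
		 * the result zero-extended, which is what a narrow load needs.
		 */
		uint64_t and32 = (uint64_t)((uint32_t)val & (uint32_t)imm);

		printf("64-bit AND: 0x%016llx\n", (unsigned long long)and64);
		printf("32-bit AND: 0x%016llx\n", (unsigned long long)and32);
		return 0;
	}

With the sample value, the 64-bit AND prints the field unchanged
(0xdeadbeefcafebabe), while the 32-bit AND prints 0x00000000cafebabe.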