From patchwork Thu Aug 24 13:31:33 2023
X-Patchwork-Submitter: Puranjay Mohan
X-Patchwork-Id: 13364146
From: Puranjay Mohan <puranjay12@gmail.com>
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
	pulehui@huawei.com, conor.dooley@microchip.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	song@kernel.org, yhs@fb.com, kpsingh@kernel.org, bjorn@kernel.org,
	bpf@vger.kernel.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org
Cc: puranjay12@gmail.com
Subject: [PATCH bpf-next v2 1/3] riscv: extend patch_text_nosync() for multiple pages
Date: Thu, 24 Aug 2023 13:31:33 +0000
Message-Id: <20230824133135.1176709-2-puranjay12@gmail.com>
In-Reply-To: <20230824133135.1176709-1-puranjay12@gmail.com>
References: <20230824133135.1176709-1-puranjay12@gmail.com>

The patch_insn_write() function currently doesn't work for multiple pages
of instructions, so patch_text_nosync() will fail with a page fault if
called with a length that spans multiple pages. This commit extends
patch_insn_write() to support multiple pages by copying at most two pages
at a time in a loop. This implementation is similar to the
text_poke_copy() function on x86.

Signed-off-by: Puranjay Mohan
Reviewed-by: Björn Töpel
---
 arch/riscv/kernel/patch.c | 39 ++++++++++++++++++++++++++++++++++-----
 1 file changed, 34 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
index 575e71d6c8ae..465b2eebbc37 100644
--- a/arch/riscv/kernel/patch.c
+++ b/arch/riscv/kernel/patch.c
@@ -53,12 +53,18 @@ static void patch_unmap(int fixmap)
 }
 NOKPROBE_SYMBOL(patch_unmap);
 
-static int patch_insn_write(void *addr, const void *insn, size_t len)
+static int __patch_insn_write(void *addr, const void *insn, size_t len)
 {
 	void *waddr = addr;
 	bool across_pages = (((uintptr_t) addr & ~PAGE_MASK) + len) > PAGE_SIZE;
 	int ret;
 
+	/*
+	 * Only two pages can be mapped at a time for writing.
+	 */
+	if (len > 2 * PAGE_SIZE)
+		return -EINVAL;
+
 	/*
 	 * Before reaching here, it was expected to lock the text_mutex
 	 * already, so we don't need to give another lock here and could
@@ -74,7 +80,7 @@ static int patch_insn_write(void *addr, const void *insn, size_t len)
 	lockdep_assert_held(&text_mutex);
 
 	if (across_pages)
-		patch_map(addr + len, FIX_TEXT_POKE1);
+		patch_map(addr + PAGE_SIZE, FIX_TEXT_POKE1);
 
 	waddr = patch_map(addr, FIX_TEXT_POKE0);
 
@@ -87,15 +93,38 @@ static int patch_insn_write(void *addr, const void *insn, size_t len)
 
 	return ret;
 }
-NOKPROBE_SYMBOL(patch_insn_write);
+NOKPROBE_SYMBOL(__patch_insn_write);
 #else
-static int patch_insn_write(void *addr, const void *insn, size_t len)
+static int __patch_insn_write(void *addr, const void *insn, size_t len)
 {
 	return copy_to_kernel_nofault(addr, insn, len);
 }
-NOKPROBE_SYMBOL(patch_insn_write);
+NOKPROBE_SYMBOL(__patch_insn_write);
 #endif /* CONFIG_MMU */
 
+static int patch_insn_write(void *addr, const void *insn, size_t len)
+{
+	size_t patched = 0;
+	size_t size;
+	int ret = 0;
+
+	/*
+	 * Copy the instructions to the destination address, two pages at a time
+	 * because __patch_insn_write() can only handle len <= 2 * PAGE_SIZE.
+	 */
+	while (patched < len && !ret) {
+		size = min_t(size_t,
+			     PAGE_SIZE * 2 - offset_in_page(addr + patched),
+			     len - patched);
+		ret = __patch_insn_write(addr + patched, insn + patched, size);
+
+		patched += size;
+	}
+
+	return ret;
+}
+NOKPROBE_SYMBOL(patch_insn_write);
+
 int patch_text_nosync(void *addr, const void *insns, size_t len)
 {
 	u32 *tp = addr;
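
For readers following the chunking logic of the new patch_insn_write()
wrapper, here is a minimal user-space sketch of the same size calculation.
It is an illustration only, not kernel code: PAGE_SIZE, offset_in_page()
and write_chunk() below are simplified local stand-ins, not the kernel's
definitions.

/*
 * Illustrative sketch: mimics the chunking done by patch_insn_write()
 * so the chunk-size calculation can be tried out in user space.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

static size_t offset_in_page(uintptr_t addr)
{
	return addr & (PAGE_SIZE - 1);
}

/* Stand-in for __patch_insn_write(): just reports the chunk it was given. */
static int write_chunk(uintptr_t addr, size_t len)
{
	printf("write %zu bytes at 0x%lx\n", len, (unsigned long)addr);
	return len > 2 * PAGE_SIZE ? -1 : 0;	/* mirror the -EINVAL check */
}

static int copy_in_chunks(uintptr_t addr, size_t len)
{
	size_t patched = 0;
	size_t size;
	int ret = 0;

	while (patched < len && !ret) {
		/*
		 * Cap the chunk at the second page boundary after the
		 * current write position, so no single call ever touches
		 * more than two pages.
		 */
		size = 2 * PAGE_SIZE - offset_in_page(addr + patched);
		if (size > len - patched)
			size = len - patched;

		ret = write_chunk(addr + patched, size);
		patched += size;
	}

	return ret;
}

int main(void)
{
	/* Example: a 10000-byte write starting 100 bytes into a page. */
	return copy_in_chunks(0x10000 + 100, 10000);
}

Capping each chunk at 2 * PAGE_SIZE - offset_in_page(addr + patched),
rather than at a flat 2 * PAGE_SIZE, is what keeps every call within the
two fixmap slots even when the write does not start on a page boundary.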