From patchwork Mon Aug 28 16:59:56 2023
X-Patchwork-Submitter: Puranjay Mohan
X-Patchwork-Id: 13368156
From: Puranjay Mohan
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
 pulehui@huawei.com, conor.dooley@microchip.com, ast@kernel.org,
 daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
 song@kernel.org, yhs@fb.com, kpsingh@kernel.org, bjorn@kernel.org,
 bpf@vger.kernel.org, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org
Cc: puranjay12@gmail.com
Subject: [PATCH bpf-next v3 1/3] riscv: extend patch_text_nosync() for multiple pages
Date: Mon, 28 Aug 2023 16:59:56 +0000
Message-Id: <20230828165958.1714079-2-puranjay12@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230828165958.1714079-1-puranjay12@gmail.com>
References: <20230828165958.1714079-1-puranjay12@gmail.com>

The patch_insn_write() function currently doesn't work for multiple pages
of instructions, so patch_text_nosync() will fail with a page fault if
called with a length that spans multiple pages.

This commit extends patch_insn_write() to support multiple pages by
copying at most two pages at a time in a loop. This implementation is
similar to the text_poke_copy() function of x86.

Signed-off-by: Puranjay Mohan
Reviewed-by: Björn Töpel
Reviewed-by: Pu Lehui
---
 arch/riscv/kernel/patch.c | 37 ++++++++++++++++++++++++++++++++-----
 1 file changed, 32 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
index 575e71d6c8ae..2c97e246f4dc 100644
--- a/arch/riscv/kernel/patch.c
+++ b/arch/riscv/kernel/patch.c
@@ -53,12 +53,18 @@ static void patch_unmap(int fixmap)
 }
 NOKPROBE_SYMBOL(patch_unmap);
 
-static int patch_insn_write(void *addr, const void *insn, size_t len)
+static int __patch_insn_write(void *addr, const void *insn, size_t len)
 {
 	void *waddr = addr;
 	bool across_pages = (((uintptr_t) addr & ~PAGE_MASK) + len) > PAGE_SIZE;
 	int ret;
 
+	/*
+	 * Only two pages can be mapped at a time for writing.
+	 */
+	if (len + offset_in_page(addr) > 2 * PAGE_SIZE)
+		return -EINVAL;
+
 	/*
 	 * Before reaching here, it was expected to lock the text_mutex
 	 * already, so we don't need to give another lock here and could
@@ -74,7 +80,7 @@ static int patch_insn_write(void *addr, const void *insn, size_t len)
 	lockdep_assert_held(&text_mutex);
 
 	if (across_pages)
-		patch_map(addr + len, FIX_TEXT_POKE1);
+		patch_map(addr + PAGE_SIZE, FIX_TEXT_POKE1);
 
 	waddr = patch_map(addr, FIX_TEXT_POKE0);
 
@@ -87,15 +93,36 @@ static int patch_insn_write(void *addr, const void *insn, size_t len)
 
 	return ret;
 }
-NOKPROBE_SYMBOL(patch_insn_write);
+NOKPROBE_SYMBOL(__patch_insn_write);
 #else
-static int patch_insn_write(void *addr, const void *insn, size_t len)
+static int __patch_insn_write(void *addr, const void *insn, size_t len)
 {
 	return copy_to_kernel_nofault(addr, insn, len);
 }
-NOKPROBE_SYMBOL(patch_insn_write);
+NOKPROBE_SYMBOL(__patch_insn_write);
 #endif /* CONFIG_MMU */
 
+static int patch_insn_write(void *addr, const void *insn, size_t len)
+{
+	size_t patched = 0;
+	size_t size;
+	int ret = 0;
+
+	/*
+	 * Copy the instructions to the destination address, two pages at a time
+	 * because __patch_insn_write() can only handle len <= 2 * PAGE_SIZE.
+	 */
+	while (patched < len && !ret) {
+		size = min_t(size_t, PAGE_SIZE * 2 - offset_in_page(addr + patched), len - patched);
+		ret = __patch_insn_write(addr + patched, insn + patched, size);
+
+		patched += size;
+	}
+
+	return ret;
+}
+NOKPROBE_SYMBOL(patch_insn_write);
+
 int patch_text_nosync(void *addr, const void *insns, size_t len)
 {
 	u32 *tp = addr;
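For readers who want to see the chunking arithmetic in isolation, below is a
minimal user-space sketch of the loop added in the new patch_insn_write().
It is not kernel code: PAGE_SIZE, offset_in_page(), the min helper, the
show_chunks() name and the address/length used in main() are all local
stand-ins chosen for illustration. It only demonstrates how a copy of
arbitrary length is split so that each chunk fits in the two pages that
__patch_insn_write() can map via FIX_TEXT_POKE0/FIX_TEXT_POKE1.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the kernel definitions used by patch_insn_write(). */
#define PAGE_SIZE 4096UL
#define offset_in_page(p) ((uintptr_t)(p) & (PAGE_SIZE - 1))

static size_t min_size(size_t a, size_t b)
{
	return a < b ? a : b;
}

/*
 * Mirror of the chunking loop: each iteration covers at most the bytes that
 * fit in the two pages starting at the page containing addr + patched.
 */
static void show_chunks(uintptr_t addr, size_t len)
{
	size_t patched = 0;

	while (patched < len) {
		size_t size = min_size(PAGE_SIZE * 2 - offset_in_page(addr + patched),
				       len - patched);

		printf("copy %zu bytes at offset %zu (page offset %zu)\n",
		       size, patched, (size_t)offset_in_page(addr + patched));
		patched += size;
	}
}

int main(void)
{
	/* Hypothetical example: 3 pages + 100 bytes, starting 0x100 into a page. */
	show_chunks(0x1000100, 3 * PAGE_SIZE + 100);
	return 0;
}

With these made-up inputs the sketch prints two chunks: the first (7936
bytes) fills out the two pages reachable from the initial page offset, and
the second (4452 bytes) starts page-aligned, matching the bound enforced by
the new -EINVAL check in __patch_insn_write().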