From patchwork Thu Feb 20 17:44:02 2025
X-Patchwork-Submitter: Oleksii Kurochko
X-Patchwork-Id: 13984349
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
    Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
    Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [PATCH for 4.21 v5 1/3] xen/riscv: implement software page table walking
Date: Thu, 20 Feb 2025 18:44:02 +0100
Message-ID: <5e189ab129463cc81baac69f9e9ea6a65b2fb902.1739985805.git.oleksii.kurochko@gmail.com>

RISC-V doesn't have a hardware feature to ask the MMU to translate a
virtual address to a physical address (as Arm has, for example), so
software page table walking is implemented.

Signed-off-by: Oleksii Kurochko
Reviewed-by: Jan Beulich
---
Changes in v5:
- Update the comment above _pt_walk() about the optional return of the
  level of the found pte.
- Rename the local variable `pte_t *entry` to `ptep` in pt_walk().
- Add Reviewed-by: Jan Beulich.
---
Changes in v4:
- Update the comment above _pt_walk(): add information that `pte_level` is
  optional, and add a note that `table` should be unmapped by the caller.
- Unmap `table` in pt_walk().
---
Changes in v3:
- Remove the circular dependency.
- Move the declaration of pt_walk() to asm/page.h.
- Revert other changes not connected to pt_walk().
- Update the commit message.
- Drop local variables of pt_walk() that are no longer necessary.
- Refactor pt_walk() to use a for() loop instead of a while() loop, as
  suggested by Jan B.
- Introduce _pt_walk(), which returns pte_t *, and update the prototype of
  pt_walk() to return pte_t by value.
---
Changes in v2:
- Update the code of pt_walk() to return pte_t instead of paddr_t.
- Fix typos and drop blanks inside parentheses in the comment.
- Update the `ret` handling; there is no need for the `mfn` calculation
  anymore, as pt_walk() returns either the pte_t of a leaf node or a
  non-present pte_t.
- Drop the comment before unmap_table().
- Drop the local variable `pa`, as pt_walk() is going to return pte_t
  instead of paddr_t.
- Add a comment above pt_walk() to explain what it does and returns.
- Add additional explanation about the possible return values of
  pt_next_level() used inside the while loop in pt_walk().
- Change `pa` to `table` in the comment before the while loop in pt_walk(),
  as this loop actually finds the page table where the page table entry for
  `va` is located.
- After including in , the following compilation error occurs:
    ./arch/riscv/include/asm/cmpxchg.h:56:9: error: implicit declaration of
    function 'GENMASK'
  To resolve this, needs to be included at the top of .
- To avoid an issue with the implicit declaration of map_domain_page() and
  unmap_domain_page() after including in , the implementation of
  flush_page_to_ram() has been moved to mm.c (see the commit message for a
  more detailed explanation). As a result, drop the inclusion of in .
- Update the commit message.
---
 xen/arch/riscv/include/asm/page.h |  2 ++
 xen/arch/riscv/pt.c               | 60 +++++++++++++++++++++++++++++++
 2 files changed, 62 insertions(+)

diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
index 7a6174a109..0439a1a9ee 100644
--- a/xen/arch/riscv/include/asm/page.h
+++ b/xen/arch/riscv/include/asm/page.h
@@ -208,6 +208,8 @@ static inline pte_t pte_from_mfn(mfn_t mfn, unsigned int flags)
     return (pte_t){ .pte = pte };
 }
 
+pte_t pt_walk(vaddr_t va, unsigned int *pte_level);
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* ASM__RISCV__PAGE_H */
diff --git a/xen/arch/riscv/pt.c b/xen/arch/riscv/pt.c
index a703e0f1bd..9c1f8f6b55 100644
--- a/xen/arch/riscv/pt.c
+++ b/xen/arch/riscv/pt.c
@@ -185,6 +185,66 @@ static int pt_next_level(bool alloc_tbl, pte_t **table, unsigned int offset)
     return XEN_TABLE_NORMAL;
 }
 
+/*
+ * _pt_walk() performs software page table walking and returns the pte_t of
+ * a leaf node, or the leaf-most not-present pte_t if no leaf node is found,
+ * for further analysis.
+ *
+ * _pt_walk() can optionally return the level of the found pte. Pass NULL
+ * for `pte_level` if this information isn't needed.
+ *
+ * Note: unmapping of the final `table` should be done by the caller.
+ */
+static pte_t *_pt_walk(vaddr_t va, unsigned int *pte_level)
+{
+    const mfn_t root = get_root_page();
+    unsigned int level;
+    pte_t *table;
+
+    DECLARE_OFFSETS(offsets, va);
+
+    table = map_table(root);
+
+    /*
+     * Find `table` of an entry which corresponds to `va` by iterating for each
+     * page level and checking if the entry points to a next page table or
+     * to a page.
+     *
+     * Two cases are possible:
+     * - ret == XEN_TABLE_SUPER_PAGE means that the entry was found;
+     *   (Despite the name) XEN_TABLE_SUPER_PAGE also covers 4K mappings. If
+     *   pt_next_level() is called for page table level 0, it results in the
+     *   entry being a pointer to a leaf node, thereby returning
+     *   XEN_TABLE_SUPER_PAGE, despite the fact that this leaf covers a 4K
+     *   mapping.
+     * - ret == XEN_TABLE_MAP_NONE means that the requested `va` wasn't
+     *   actually mapped.
+     */
+    for ( level = HYP_PT_ROOT_LEVEL; ; --level )
+    {
+        int ret = pt_next_level(false, &table, offsets[level]);
+
+        if ( ret == XEN_TABLE_MAP_NONE || ret == XEN_TABLE_SUPER_PAGE )
+            break;
+
+        ASSERT(level);
+    }
+
+    if ( pte_level )
+        *pte_level = level;
+
+    return table + offsets[level];
+}
+
+pte_t pt_walk(vaddr_t va, unsigned int *pte_level)
+{
+    pte_t *ptep = _pt_walk(va, pte_level);
+    pte_t pte = *ptep;
+
+    unmap_table(ptep);
+
+    return pte;
+}
+
 /* Update an entry at the level @target. */
 static int pt_update_entry(mfn_t root, vaddr_t virt,
                            mfn_t mfn, unsigned int target,
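[ Note for readers, not part of the patch: a minimal usage sketch of
  pt_walk(). The helper below is hypothetical; it assumes the pte
  accessors used elsewhere in this series (pte_is_mapping(),
  mfn_from_pte(), mfn_to_maddr()) and that XEN_PT_LEVEL_ORDER() gives
  the order of the mapping at the returned level. ]

    /* Hypothetical example: translate a mapped Xen VA to a physical address. */
    static paddr_t va_to_pa_sketch(vaddr_t va)
    {
        unsigned int level;
        pte_t pte = pt_walk(va, &level);

        /* pt_walk() returns a non-present pte if `va` isn't mapped. */
        BUG_ON(!pte_is_mapping(pte));

        /* Frame from the leaf pte plus the offset within the mapping. */
        return mfn_to_maddr(mfn_from_pte(pte)) +
               (va & (BIT(XEN_PT_LEVEL_ORDER(level) + PAGE_SHIFT, UL) - 1));
    }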
From patchwork Thu Feb 20 17:44:03 2025
X-Patchwork-Submitter: Oleksii Kurochko
X-Patchwork-Id: 13984350

From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
    Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
    Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [PATCH for 4.21 v5 2/3] xen/riscv: update definition of vmap_to_mfn()
Date: Thu, 20 Feb 2025 18:44:03 +0100
Message-ID: <2a7119b5276ae5ea5f237a67a25378ec0212462b.1739985805.git.oleksii.kurochko@gmail.com>

vmap_to_mfn() uses virt_to_maddr(), which is designed to work with VAs
from either the direct map region or Xen's linkage region
(XEN_VIRT_START). An assertion failure will occur if it is used with
other regions, in particular with the VMAP region.
Since RISC-V lacks a hardware feature to request the MMU to translate a
VA to a PA (as Arm has, for example), software page table walking
(pt_walk()) is used for the VMAP region to obtain the mfn from the pte_t.

To avoid introducing a circular dependency between asm/mm.h and
asm/page.h (by having them include each other), the static inline
function _vmap_to_mfn() is introduced in asm/page.h, as it uses pte_t
and pte_is_mapping() from asm/page.h. _vmap_to_mfn() is then reused in
the definition of the vmap_to_mfn() macro in asm/mm.h.

Fixes: 7db8d2bd9b ("xen/riscv: add minimal stuff to mm.h to build full Xen")
Signed-off-by: Oleksii Kurochko
Reviewed-by: Jan Beulich
---
Changes in v5:
- Minor code style fixes.
- Add Reviewed-by: Jan Beulich.
---
Changes in v4:
- Convert the _vmap_to_mfn() macro to a static inline function.
- Update the commit message: change "macro" to "static inline function"
  for _vmap_to_mfn().
---
Changes in v3:
- Move vmap_to_mfn_ to asm/page.h to deal with the circular dependency.
- Convert vmap_to_mfn_() to a macro.
- Rename vmap_to_mfn_ to _vmap_to_mfn.
- Update _vmap_to_mfn() to work with pte_t instead of pte_t *.
- Add parentheses around the va argument of the vmap_to_mfn() macro.
- Update the commit message.
---
Changes in v2:
- Update the definition of vmap_to_mfn(), as pt_walk() now returns pte_t
  instead of paddr_t.
- Update the commit message.
---
 xen/arch/riscv/include/asm/mm.h   | 2 +-
 xen/arch/riscv/include/asm/page.h | 9 +++++++++
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 292aa48fc1..4035cd400a 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -23,7 +23,7 @@ extern vaddr_t directmap_virt_start;
 #define gaddr_to_gfn(ga)   _gfn(paddr_to_pfn(ga))
 #define mfn_to_maddr(mfn)  pfn_to_paddr(mfn_x(mfn))
 #define maddr_to_mfn(ma)   _mfn(paddr_to_pfn(ma))
-#define vmap_to_mfn(va)    maddr_to_mfn(virt_to_maddr((vaddr_t)(va)))
+#define vmap_to_mfn(va)    _vmap_to_mfn((vaddr_t)(va))
 #define vmap_to_page(va)   mfn_to_page(vmap_to_mfn(va))
 
 static inline void *maddr_to_virt(paddr_t ma)
diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
index 0439a1a9ee..bf8988f657 100644
--- a/xen/arch/riscv/include/asm/page.h
+++ b/xen/arch/riscv/include/asm/page.h
@@ -210,6 +210,15 @@ static inline pte_t pte_from_mfn(mfn_t mfn, unsigned int flags)
 
 pte_t pt_walk(vaddr_t va, unsigned int *pte_level);
 
+static inline mfn_t _vmap_to_mfn(vaddr_t va)
+{
+    pte_t entry = pt_walk(va, NULL);
+
+    BUG_ON(!pte_is_mapping(entry));
+
+    return mfn_from_pte(entry);
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* ASM__RISCV__PAGE_H */
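[ Note for readers, not part of the patch: with this change, translating
  a VMAP address back to its frame goes through the software walk rather
  than virt_to_maddr(). A usage sketch, assuming a single page mapped via
  the generic vmap() interface; `mfn` here is a hypothetical frame. ]

    mfn_t mfn = ...;                  /* some frame to map */
    void *va = vmap(&mfn, 1);         /* map one page into the VMAP region */

    if ( va )
    {
        /* Now resolved via pt_walk() -> _vmap_to_mfn(), not virt_to_maddr(). */
        ASSERT(mfn_eq(vmap_to_mfn(va), mfn));
        vunmap(va);
    }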
From patchwork Thu Feb 20 17:44:04 2025
X-Patchwork-Submitter: Oleksii Kurochko
X-Patchwork-Id: 13984351
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
    Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
    Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [PATCH for 4.21 v5 3/3] xen/riscv: update mfn calculation in pt_mapping_level()
Date: Thu, 20 Feb 2025 18:44:04 +0100

When pt_update() is called with arguments (..., INVALID_MFN, ..., 0 or 1),
it indicates that a mapping is being destroyed or modified.

When modifying or destroying a mapping, it is necessary to search until a
leaf node is found, instead of searching for a page table entry based on
the precalculated `level` and `order` (see pt_update()). This is because
when `mfn` == INVALID_MFN, the `mask` (in pt_mapping_level()) takes into
account only `vfn`, which could accidentally return an incorrect level,
leading to the discovery of an incorrect page table entry.

For example, if `vfn` is page table level 1 aligned but was mapped at page
table level 0, then pt_mapping_level() will return `level` = 1, since only
`vfn` (which is page table level 1 aligned) is taken into account when
`mfn` == INVALID_MFN (see pt_mapping_level()).

Fixes: c2f1ded524 ("xen/riscv: page table handling")
Signed-off-by: Oleksii Kurochko
---
Changes in v5:
- Rename *entry to *ptep in pt_update_entry().
- Fix a code style issue in the comment.
- Move the NULL check of the `table` pointer inside unmap_table() and then
  drop it in pt_update_entry().
---
Changes in v4:
- Move the definition of the local variable `table` inside the `else` case,
  as it is used only there.
- Change unmap_table(table) to unmap_table(entry) to unify both cases: when
  _pt_walk() is used and when the pte is searched at the specified level.
- Initialize the local variable `entry` to avoid a compilation error caused
  by an uninitialized variable.
---
Changes in v3:
- Drop the ASSERT() for order, as it isn't needed anymore.
- Drop PTE_LEAF_SEARCH and use level = CONFIG_PAGING_LEVELS instead;
  refactor the connected code correspondingly.
- Calculate order once.
- Drop the initializer for the local variable order.
- Drop BUG_ON(!pte_is_mapping(*entry)) for the case when leaf searching
  happens, as there is a similar check in pt_check_entry() (see pt.c:41 and
  pt.c:75).
---
Changes in v2:
- Introduce PTE_LEAF_SEARCH to tell the page table update operation to walk
  down to wherever the leaf entry is.
- Use the introduced PTE_LEAF_SEARCH to avoid searching for the pte_t entry
  twice.
- Update the commit message.
---
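[ Note for readers, not part of the patch: a self-contained illustration of
  the problem described above. The constants assume 4K pages with 512-entry
  tables (Sv39/Sv48 style); pt_mapping_level() is paraphrased, not quoted. ]

    #include <stdio.h>

    int main(void)
    {
        unsigned long va  = 0x200000UL;   /* 2M-aligned VA */
        unsigned long vfn = va >> 12;     /* virtual frame number */

        /*
         * When mfn == INVALID_MFN (destroy/modify), only vfn feeds the
         * alignment check, so a level-1-aligned vfn suggests level 1 ...
         */
        unsigned int guessed_level = (vfn & 0x1ff) ? 0 : 1;

        /*
         * ... yet the page may have been mapped as a 4K page (level 0), so
         * looking up the entry at the guessed level hits a non-leaf entry.
         * Hence this patch walks down to the leaf instead.
         */
        printf("guessed level = %u, actual mapping level could be 0\n",
               guessed_level);
        return 0;
    }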
 xen/arch/riscv/pt.c | 116 +++++++++++++++++++++++++++++---------------
 1 file changed, 78 insertions(+), 38 deletions(-)

diff --git a/xen/arch/riscv/pt.c b/xen/arch/riscv/pt.c
index 9c1f8f6b55..518939b443 100644
--- a/xen/arch/riscv/pt.c
+++ b/xen/arch/riscv/pt.c
@@ -102,6 +102,9 @@ static pte_t *map_table(mfn_t mfn)
 
 static void unmap_table(const pte_t *table)
 {
+    if ( !table )
+        return;
+
     /*
      * During early boot, map_table() will not use map_domain_page()
      * but the PMAP.
@@ -245,14 +248,21 @@ pte_t pt_walk(vaddr_t va, unsigned int *pte_level)
     return pte;
 }
 
-/* Update an entry at the level @target. */
+/*
+ * Update an entry at the level @target.
+ *
+ * If `target` == CONFIG_PAGING_LEVELS, the search will continue until
+ * a leaf node is found.
+ * Otherwise, the page table entry will be searched at the requested
+ * `target` level.
+ * For an example of why this might be needed, see the comment in
+ * pt_update() before pt_update_entry() is called.
+ */
 static int pt_update_entry(mfn_t root, vaddr_t virt,
-                           mfn_t mfn, unsigned int target,
+                           mfn_t mfn, unsigned int *target,
                            unsigned int flags)
 {
     int rc;
-    unsigned int level = HYP_PT_ROOT_LEVEL;
-    pte_t *table;
     /*
      * The intermediate page table shouldn't be allocated when MFN isn't
      * valid and we are not populating page table.
@@ -263,43 +273,50 @@ static int pt_update_entry(mfn_t root, vaddr_t virt,
      * combinations of (mfn, flags).
      */
     bool alloc_tbl = !mfn_eq(mfn, INVALID_MFN) || (flags & PTE_POPULATE);
-    pte_t pte, *entry;
-
-    /* convenience aliases */
-    DECLARE_OFFSETS(offsets, virt);
+    pte_t pte, *ptep = NULL;
 
-    table = map_table(root);
-    for ( ; level > target; level-- )
+    if ( *target == CONFIG_PAGING_LEVELS )
+        ptep = _pt_walk(virt, target);
+    else
     {
-        rc = pt_next_level(alloc_tbl, &table, offsets[level]);
-        if ( rc == XEN_TABLE_MAP_NOMEM )
+        pte_t *table;
+        unsigned int level = HYP_PT_ROOT_LEVEL;
+        /* Convenience aliases */
+        DECLARE_OFFSETS(offsets, virt);
+
+        table = map_table(root);
+        for ( ; level > *target; level-- )
         {
-            rc = -ENOMEM;
-            goto out;
+            rc = pt_next_level(alloc_tbl, &table, offsets[level]);
+            if ( rc == XEN_TABLE_MAP_NOMEM )
+            {
+                rc = -ENOMEM;
+                goto out;
+            }
+
+            if ( rc == XEN_TABLE_MAP_NONE )
+            {
+                rc = 0;
+                goto out;
+            }
+
+            if ( rc != XEN_TABLE_NORMAL )
+                break;
         }
 
-        if ( rc == XEN_TABLE_MAP_NONE )
+        if ( level != *target )
         {
-            rc = 0;
+            dprintk(XENLOG_ERR,
+                    "%s: Shattering superpage is not supported\n", __func__);
+            rc = -EOPNOTSUPP;
             goto out;
         }
 
-        if ( rc != XEN_TABLE_NORMAL )
-            break;
-    }
-
-    if ( level != target )
-    {
-        dprintk(XENLOG_ERR,
-                "%s: Shattering superpage is not supported\n", __func__);
-        rc = -EOPNOTSUPP;
-        goto out;
+        ptep = table + offsets[level];
     }
 
-    entry = table + offsets[level];
-
     rc = -EINVAL;
-    if ( !pt_check_entry(*entry, mfn, flags) )
+    if ( !pt_check_entry(*ptep, mfn, flags) )
         goto out;
 
     /* We are removing the page */
@@ -316,7 +333,7 @@ static int pt_update_entry(mfn_t root, vaddr_t virt,
         pte = pte_from_mfn(mfn, PTE_VALID);
     else /* We are updating the permission => Copy the current pte. */
     {
-        pte = *entry;
+        pte = *ptep;
         pte.pte &= ~PTE_ACCESS_MASK;
     }
 
@@ -324,12 +341,12 @@ static int pt_update_entry(mfn_t root, vaddr_t virt,
         pte.pte |= (flags & PTE_ACCESS_MASK) | PTE_ACCESSED | PTE_DIRTY;
     }
 
-    write_pte(entry, pte);
+    write_pte(ptep, pte);
 
     rc = 0;
 
 out:
-    unmap_table(table);
+    unmap_table(ptep);
 
     return rc;
 }
@@ -422,17 +439,40 @@ static int pt_update(vaddr_t virt, mfn_t mfn,
 
     while ( left )
     {
-        unsigned int order, level;
-
-        level = pt_mapping_level(vfn, mfn, left, flags);
-        order = XEN_PT_LEVEL_ORDER(level);
+        unsigned int order, level = CONFIG_PAGING_LEVELS;
 
-        ASSERT(left >= BIT(order, UL));
+        /*
+         * In the case when modifying or destroying a mapping, it is necessary
+         * to search until a leaf node is found, instead of searching for
+         * a page table entry based on the precalculated `level` and `order`
+         * (look at pt_update()).
+         * This is because when `mfn` == INVALID_MFN, the `mask` (in
+         * pt_mapping_level()) will take into account only `vfn`, which could
+         * accidentally return an incorrect level, leading to the discovery of
+         * an incorrect page table entry.
+         *
+         * For example, if `vfn` is page table level 1 aligned, but it was
+         * mapped as page table level 0, then pt_mapping_level() will return
+         * `level` = 1, since only `vfn` (which is page table level 1 aligned)
+         * is taken into account when `mfn` == INVALID_MFN
+         * (look at pt_mapping_level()).
+         *
+         * To force searching until a leaf node is found, it is necessary to
+         * have `level` == CONFIG_PAGING_LEVELS, which is the default value
+         * for `level`.
+         *
+         * For other cases (when a mapping is not being modified or destroyed),
+         * pt_mapping_level() should be used.
+         */
+        if ( !mfn_eq(mfn, INVALID_MFN) || (flags & PTE_POPULATE) )
+            level = pt_mapping_level(vfn, mfn, left, flags);
 
-        rc = pt_update_entry(root, vfn << PAGE_SHIFT, mfn, level, flags);
+        rc = pt_update_entry(root, vfn << PAGE_SHIFT, mfn, &level, flags);
         if ( rc )
             break;
 
+        order = XEN_PT_LEVEL_ORDER(level);
+
         vfn += 1UL << order;
         if ( !mfn_eq(mfn, INVALID_MFN) )
             mfn = mfn_add(mfn, 1UL << order);