From patchwork Wed Dec 13 20:29:58 2023
X-Patchwork-Submitter: Alexandre Ghiti
X-Patchwork-Id: 13491848
From: Alexandre Ghiti <alexghiti@rivosinc.com>
To: Russell King, Ryan Roberts, Alexander Potapenko, Marco Elver,
    Dmitry Vyukov, Paul Walmsley, Palmer Dabbelt, Albert Ou, Anup Patel,
    Atish Patra, Ard Biesheuvel, Andrey Ryabinin, Andrey Konovalov,
    Vincenzo Frascino, kasan-dev@googlegroups.com,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-efi@vger.kernel.org, linux-mm@kvack.org
Cc: Alexandre Ghiti
Subject: [PATCH v2 1/4] riscv: Use WRITE_ONCE() when setting page table entries
Date: Wed, 13 Dec 2023 21:29:58 +0100
Message-Id: <20231213203001.179237-2-alexghiti@rivosinc.com>
In-Reply-To: <20231213203001.179237-1-alexghiti@rivosinc.com>
References: <20231213203001.179237-1-alexghiti@rivosinc.com>

To avoid any compiler "weirdness" when accessing page table entries which
are concurrently modified by the HW, let's use WRITE_ONCE() macro
(commit 20a004e7b017 ("arm64: mm: Use READ_ONCE/WRITE_ONCE when accessing
page tables") gives a great explanation with more details).

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
 arch/riscv/include/asm/pgtable-64.h | 6 +++---
 arch/riscv/include/asm/pgtable.h    | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index 9a2c780a11e9..5d8431a390dd 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -202,7 +202,7 @@ static inline int pud_user(pud_t pud)
 
 static inline void set_pud(pud_t *pudp, pud_t pud)
 {
-	*pudp = pud;
+	WRITE_ONCE(*pudp, pud);
 }
 
 static inline void pud_clear(pud_t *pudp)
@@ -278,7 +278,7 @@ static inline unsigned long _pmd_pfn(pmd_t pmd)
 static inline void set_p4d(p4d_t *p4dp, p4d_t p4d)
 {
 	if (pgtable_l4_enabled)
-		*p4dp = p4d;
+		WRITE_ONCE(*p4dp, p4d);
 	else
 		set_pud((pud_t *)p4dp, (pud_t){ p4d_val(p4d) });
 }
@@ -351,7 +351,7 @@ static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
 static inline void set_pgd(pgd_t *pgdp, pgd_t pgd)
 {
 	if (pgtable_l5_enabled)
-		*pgdp = pgd;
+		WRITE_ONCE(*pgdp, pgd);
 	else
 		set_p4d((p4d_t *)pgdp, (p4d_t){ pgd_val(pgd) });
 }
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 294044429e8e..c9f4b250b4ee 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -248,7 +248,7 @@ static inline int pmd_leaf(pmd_t pmd)
 
 static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
 {
-	*pmdp = pmd;
+	WRITE_ONCE(*pmdp, pmd);
 }
 
 static inline void pmd_clear(pmd_t *pmdp)
@@ -510,7 +510,7 @@ static inline int pte_same(pte_t pte_a, pte_t pte_b)
  */
 static inline void set_pte(pte_t *ptep, pte_t pteval)
 {
-	*ptep = pteval;
+	WRITE_ONCE(*ptep, pteval);
 }
 
 void flush_icache_pte(pte_t pte);
From patchwork Wed Dec 13 20:29:59 2023
X-Patchwork-Submitter: Alexandre Ghiti
X-Patchwork-Id: 13491849
From: Alexandre Ghiti <alexghiti@rivosinc.com>
Subject: [PATCH v2 2/4] mm: Introduce pudp/p4dp/pgdp_get() functions
Date: Wed, 13 Dec 2023 21:29:59 +0100
Message-Id: <20231213203001.179237-3-alexghiti@rivosinc.com>
In-Reply-To: <20231213203001.179237-1-alexghiti@rivosinc.com>
References: <20231213203001.179237-1-alexghiti@rivosinc.com>

Instead of directly dereferencing page table entries, which can cause
issues (see commit 20a004e7b017 ("arm64: mm: Use READ_ONCE/WRITE_ONCE
when accessing page tables")), let's introduce
new functions to get the pud/p4d/pgd entries (the pte and pmd versions
already exist). Note that arm's pgd_t is actually an array, so pgdp_get()
is defined as a macro to avoid a build error.

Those new functions will be used in subsequent commits by the riscv
architecture.

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
 arch/arm/include/asm/pgtable.h |  2 ++
 include/linux/pgtable.h        | 21 +++++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index 16b02f44c7d3..d657b84b6bf7 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -151,6 +151,8 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 
+#define pgdp_get(pgpd)		READ_ONCE(*pgdp)
+
 #define pud_page(pud)		pmd_page(__pmd(pud_val(pud)))
 #define pud_write(pud)		pmd_write(__pmd(pud_val(pud)))
 
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index af7639c3b0a3..8b7daccd11be 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -292,6 +292,27 @@ static inline pmd_t pmdp_get(pmd_t *pmdp)
 }
 #endif
 
+#ifndef pudp_get
+static inline pud_t pudp_get(pud_t *pudp)
+{
+	return READ_ONCE(*pudp);
+}
+#endif
+
+#ifndef p4dp_get
+static inline p4d_t p4dp_get(p4d_t *p4dp)
+{
+	return READ_ONCE(*p4dp);
+}
+#endif
+
+#ifndef pgdp_get
+static inline pgd_t pgdp_get(pgd_t *pgdp)
+{
+	return READ_ONCE(*pgdp);
+}
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
 static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 					    unsigned long address,
From patchwork Wed Dec 13 20:30:00 2023
X-Patchwork-Submitter: Alexandre Ghiti
X-Patchwork-Id: 13491855
From: Alexandre Ghiti <alexghiti@rivosinc.com>
Subject: [PATCH v2 3/4] riscv: mm: Only compile pgtable.c if MMU
Date: Wed, 13 Dec 2023 21:30:00 +0100
Message-Id: <20231213203001.179237-4-alexghiti@rivosinc.com>
In-Reply-To: <20231213203001.179237-1-alexghiti@rivosinc.com>
References: <20231213203001.179237-1-alexghiti@rivosinc.com>

All functions defined in pgtable.c depend on the MMU, so there is no need
to compile it for !MMU configs.
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
---
 arch/riscv/mm/Makefile | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/riscv/mm/Makefile b/arch/riscv/mm/Makefile
index 3a4dfc8babcf..2c869f8026a8 100644
--- a/arch/riscv/mm/Makefile
+++ b/arch/riscv/mm/Makefile
@@ -13,10 +13,9 @@ endif
 KCOV_INSTRUMENT_init.o := n
 
 obj-y += init.o
-obj-$(CONFIG_MMU) += extable.o fault.o pageattr.o
+obj-$(CONFIG_MMU) += extable.o fault.o pageattr.o pgtable.o
 obj-y += cacheflush.o
 obj-y += context.o
-obj-y += pgtable.o
 obj-y += pmem.o
 
 ifeq ($(CONFIG_MMU),y)
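The Kbuild mechanism this hunk relies on: `obj-$(CONFIG_MMU)` expands to
`obj-y` (build and link the object) when CONFIG_MMU=y, and to `obj-n` (a list
Kbuild ignores) otherwise, so appending pgtable.o to that line is all it takes
to drop it from !MMU builds. A generic sketch of the idiom with hypothetical
object names, not the actual riscv Makefile:

```make
# obj-$(CONFIG_FOO) expands to obj-y when CONFIG_FOO=y, and to obj-n
# (a list Kbuild never builds or links) when CONFIG_FOO is unset.
obj-y             += always.o        # built for every config
obj-$(CONFIG_MMU) += mmu_only.o      # built only when CONFIG_MMU=y
```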
20:34:26 +0000 (UTC) Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=rivosinc-com.20230601.gappssmtp.com header.s=20230601 header.b=hdXC3o67; spf=pass (imf08.hostedemail.com: domain of alexghiti@rivosinc.com designates 209.85.221.54 as permitted sender) smtp.mailfrom=alexghiti@rivosinc.com; dmarc=none ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1702499666; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=h4Aa0IIC2EFp1SndKC1S47KpUUaL4iRIfXKMttfeWcM=; b=q212pjgHXpWyV1z1tgiZBMVMht06P0CvcVEgeKXZMeePhU+/EGPrdD7OMtr5qKZLAaoW0s Nf/g0+/rldZ0U9lBc8xMVlQj/5NloGbvARxX+X0mx2AXM34MtfDIv2vXFhjGorLoPoXlCu vjqZ99Ihxh3q0kPROv/67u9ShXnanoY= ARC-Authentication-Results: i=1; imf08.hostedemail.com; dkim=pass header.d=rivosinc-com.20230601.gappssmtp.com header.s=20230601 header.b=hdXC3o67; spf=pass (imf08.hostedemail.com: domain of alexghiti@rivosinc.com designates 209.85.221.54 as permitted sender) smtp.mailfrom=alexghiti@rivosinc.com; dmarc=none ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1702499666; a=rsa-sha256; cv=none; b=1sOFNwBcI3B0AbZbzsZp2Fv4rgknymOdokQ191pN2A/sK7uewN2/tH2Go+b3nxX5RsNtoU JCXphY8TQ8VGP5M6LY7uT3Cz4h+Db3KhCxWR5pnp++vHdN8mgD4i6j45V6YtW012bFWpM+ 1na4XyO9fFz8woOd9uv9YuAynb8gzEo= Received: by mail-wr1-f54.google.com with SMTP id ffacd0b85a97d-3331752d2b9so4857786f8f.3 for ; Wed, 13 Dec 2023 12:34:26 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=rivosinc-com.20230601.gappssmtp.com; s=20230601; t=1702499665; x=1703104465; darn=kvack.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=h4Aa0IIC2EFp1SndKC1S47KpUUaL4iRIfXKMttfeWcM=; 
b=hdXC3o67vZwHOSjhhZFgnXmNCkcvSKDx5DPmO6g4eEG2qthX+dbTcF4X2ZFa8LSjIk yHNsBUfEGj8SfMoISLx+/sjGiaj8By3uATjuO28y4i/ppIkkxEfkMVtkL0RsO3TbjhaB 1aAzl4keFCOcUxNjidiSEz0CjQ1XTb+Uw/AxkorfeQt/bPSSaSamCVZ+NTdfguFgWcp/ NnjBT6kidQ7vAUvpNP7psBIn8x1FaGZbeL+ZEFMcfhWhmjnHHOUwizPnwiNXhOd78Fwz XVh65H6ox7dp5DvWevVv4aCc2P5a9EGUtkJo11CVSWXV3dOu48fdk9GL8jBpAPC27j4f 8LTw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702499665; x=1703104465; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=h4Aa0IIC2EFp1SndKC1S47KpUUaL4iRIfXKMttfeWcM=; b=DWkf1M4QCy/zARLKuViwKEwO48bM+K1Ny32myZ5leXhy7oeU2j/i2zE+eBBr9fNtjr SO88/s4vQikgisvHXVfYkl/QsN4bl1J2yCQIgQbmR7b0KR49QiYd/nu2YBey4Zr2fslb j1/IMFRMBmcreq3VhO1T9PNXAPcZbojHg248HdR+2e5WJtY8DzFmOvCJCy4uiGZcj8+3 g1QgwRN24/+Ir7j4bi1NshqbuRJwYnkuAbzRJMv8j+stuVHxV3vdUEqFSV58vhm9CndS xfoZheDHRcYV/uSceQenqTlQv0kn9lUNbm8PBK6vPL6P+4E5kZcHpBzWnkzzD824k5+Y ObtQ== X-Gm-Message-State: AOJu0YyU9umokglCUPjUVKhLkWd/53dHVgIdZeJpWgJtURGa51xiIufu QbxJdwhWqbTCjM7xd6UN8z5VMg== X-Google-Smtp-Source: AGHT+IFqcoS3/E2XqpI0zJ2noiha4f1LF5OA5LcG+B8WAoWE6/JfzkFNnRM1BPg3pEXra57LDfKn/g== X-Received: by 2002:adf:9d86:0:b0:336:352f:678c with SMTP id p6-20020adf9d86000000b00336352f678cmr2061360wre.20.1702499664958; Wed, 13 Dec 2023 12:34:24 -0800 (PST) Received: from alex-rivos.ba.rivosinc.com (amontpellier-656-1-456-62.w92-145.abo.wanadoo.fr. 
From: Alexandre Ghiti
To: Russell King, Ryan Roberts, Alexander Potapenko, Marco Elver, Dmitry Vyukov, Paul Walmsley, Palmer Dabbelt, Albert Ou, Anup Patel, Atish Patra, Ard Biesheuvel, Andrey Ryabinin, Andrey Konovalov, Vincenzo Frascino, kasan-dev@googlegroups.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-efi@vger.kernel.org, linux-mm@kvack.org
Cc: Alexandre Ghiti
Subject: [PATCH v2 4/4] riscv: Use accessors to page table entries instead of direct dereference
Date: Wed, 13 Dec 2023 21:30:01 +0100
Message-Id: <20231213203001.179237-5-alexghiti@rivosinc.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20231213203001.179237-1-alexghiti@rivosinc.com>
References: <20231213203001.179237-1-alexghiti@rivosinc.com>
MIME-Version: 1.0
As very well explained in commit 20a004e7b017 ("arm64: mm: Use READ_ONCE/WRITE_ONCE when accessing page tables"), an architecture whose page table walker can modify the PTE in parallel must use the READ_ONCE()/WRITE_ONCE() macros to avoid any
compiler transformation. So apply this to riscv, which is such an architecture.

Signed-off-by: Alexandre Ghiti
Acked-by: Anup Patel
---
 arch/riscv/include/asm/kfence.h     |  4 +--
 arch/riscv/include/asm/pgtable-64.h | 16 ++-------
 arch/riscv/include/asm/pgtable.h    | 29 ++++------------
 arch/riscv/kernel/efi.c             |  2 +-
 arch/riscv/kvm/mmu.c                | 22 ++++++-------
 arch/riscv/mm/fault.c               | 16 ++++-----
 arch/riscv/mm/hugetlbpage.c         | 12 +++----
 arch/riscv/mm/kasan_init.c          | 45 +++++++++++++------------
 arch/riscv/mm/pageattr.c            | 44 ++++++++++++-------------
 arch/riscv/mm/pgtable.c             | 51 ++++++++++++++++++++++++++---
 10 files changed, 128 insertions(+), 113 deletions(-)

diff --git a/arch/riscv/include/asm/kfence.h b/arch/riscv/include/asm/kfence.h
index 0bbffd528096..7388edd88986 100644
--- a/arch/riscv/include/asm/kfence.h
+++ b/arch/riscv/include/asm/kfence.h
@@ -18,9 +18,9 @@ static inline bool kfence_protect_page(unsigned long addr, bool protect)
 	pte_t *pte = virt_to_kpte(addr);
 
 	if (protect)
-		set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
+		set_pte(pte, __pte(pte_val(ptep_get(pte)) & ~_PAGE_PRESENT));
 	else
-		set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));
+		set_pte(pte, __pte(pte_val(ptep_get(pte)) | _PAGE_PRESENT));
 
 	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index 5d8431a390dd..b42017d76924 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -340,13 +340,7 @@ static inline struct page *p4d_page(p4d_t p4d)
 #define pud_index(addr) (((addr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1))
 
 #define pud_offset pud_offset
-static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
-{
-	if (pgtable_l4_enabled)
-		return p4d_pgtable(*p4d) + pud_index(address);
-
-	return (pud_t *)p4d;
-}
+pud_t *pud_offset(p4d_t *p4d, unsigned long address);
 
 static inline void set_pgd(pgd_t *pgdp, pgd_t pgd)
 {
@@ -404,12 +398,6 @@ static inline struct page *pgd_page(pgd_t pgd)
 #define p4d_index(addr) (((addr) >> P4D_SHIFT) & (PTRS_PER_P4D - 1))
 
 #define p4d_offset p4d_offset
-static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
-{
-	if (pgtable_l5_enabled)
-		return pgd_pgtable(*pgd) + p4d_index(address);
-
-	return (p4d_t *)pgd;
-}
+p4d_t *p4d_offset(pgd_t *pgd, unsigned long address);
 
 #endif /* _ASM_RISCV_PGTABLE_64_H */
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index c9f4b250b4ee..3773f454f0fa 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -544,19 +544,12 @@ static inline void pte_clear(struct mm_struct *mm,
 	__set_pte_at(ptep, __pte(0));
 }
 
-#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
-static inline int ptep_set_access_flags(struct vm_area_struct *vma,
-					unsigned long address, pte_t *ptep,
-					pte_t entry, int dirty)
-{
-	if (!pte_same(*ptep, entry))
-		__set_pte_at(ptep, entry);
-	/*
-	 * update_mmu_cache will unconditionally execute, handling both
-	 * the case that the PTE changed and the spurious fault case.
-	 */
-	return true;
-}
+#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS	/* defined in mm/pgtable.c */
+extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
+				 pte_t *ptep, pte_t entry, int dirty);
+#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG	/* defined in mm/pgtable.c */
+extern int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long address,
+				     pte_t *ptep);
 
 #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
@@ -569,16 +562,6 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 	return pte;
 }
 
-#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
-static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
-					    unsigned long address,
-					    pte_t *ptep)
-{
-	if (!pte_young(*ptep))
-		return 0;
-	return test_and_clear_bit(_PAGE_ACCESSED_OFFSET, &pte_val(*ptep));
-}
-
 #define __HAVE_ARCH_PTEP_SET_WRPROTECT
 static inline void ptep_set_wrprotect(struct mm_struct *mm,
 				      unsigned long address, pte_t *ptep)
diff --git a/arch/riscv/kernel/efi.c b/arch/riscv/kernel/efi.c
index aa6209a74c83..b64bf1624a05 100644
--- a/arch/riscv/kernel/efi.c
+++ b/arch/riscv/kernel/efi.c
@@ -60,7 +60,7 @@ int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md)
 static int __init set_permissions(pte_t *ptep, unsigned long addr, void *data)
 {
 	efi_memory_desc_t *md = data;
-	pte_t pte = READ_ONCE(*ptep);
+	pte_t pte = ptep_get(ptep);
 	unsigned long val;
 
 	if (md->attribute & EFI_MEMORY_RO) {
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 068c74593871..a9e2fd7245e1 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -103,7 +103,7 @@ static bool gstage_get_leaf_entry(struct kvm *kvm, gpa_t addr,
 	*ptep_level = current_level;
 	ptep = (pte_t *)kvm->arch.pgd;
 	ptep = &ptep[gstage_pte_index(addr, current_level)];
-	while (ptep && pte_val(*ptep)) {
+	while (ptep && pte_val(ptep_get(ptep))) {
 		if (gstage_pte_leaf(ptep)) {
 			*ptep_level = current_level;
 			*ptepp = ptep;
@@ -113,7 +113,7 @@ static bool gstage_get_leaf_entry(struct kvm *kvm, gpa_t addr,
 		if (current_level) {
 			current_level--;
 			*ptep_level = current_level;
-			ptep = (pte_t *)gstage_pte_page_vaddr(*ptep);
+			ptep = (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep));
 			ptep = &ptep[gstage_pte_index(addr, current_level)];
 		} else {
 			ptep = NULL;
@@ -149,25 +149,25 @@ static int gstage_set_pte(struct kvm *kvm, u32 level,
 		if (gstage_pte_leaf(ptep))
 			return -EEXIST;
 
-		if (!pte_val(*ptep)) {
+		if (!pte_val(ptep_get(ptep))) {
 			if (!pcache)
 				return -ENOMEM;
 			next_ptep = kvm_mmu_memory_cache_alloc(pcache);
 			if (!next_ptep)
 				return -ENOMEM;
-			*ptep = pfn_pte(PFN_DOWN(__pa(next_ptep)),
-					__pgprot(_PAGE_TABLE));
+			set_pte(ptep, pfn_pte(PFN_DOWN(__pa(next_ptep)),
+					      __pgprot(_PAGE_TABLE)));
 		} else {
 			if (gstage_pte_leaf(ptep))
 				return -EEXIST;
-			next_ptep = (pte_t *)gstage_pte_page_vaddr(*ptep);
+			next_ptep = (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep));
 		}
 
 		current_level--;
 		ptep = &next_ptep[gstage_pte_index(addr, current_level)];
 	}
 
-	*ptep = *new_pte;
+	set_pte(ptep, *new_pte);
 	if (gstage_pte_leaf(ptep))
 		gstage_remote_tlb_flush(kvm, current_level, addr);
 
@@ -239,11 +239,11 @@ static void gstage_op_pte(struct kvm *kvm, gpa_t addr,
 	BUG_ON(addr & (page_size - 1));
 
-	if (!pte_val(*ptep))
+	if (!pte_val(ptep_get(ptep)))
 		return;
 
 	if (ptep_level && !gstage_pte_leaf(ptep)) {
-		next_ptep = (pte_t *)gstage_pte_page_vaddr(*ptep);
+		next_ptep = (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep));
 		next_ptep_level = ptep_level - 1;
 		ret = gstage_level_to_page_size(next_ptep_level,
 						&next_page_size);
@@ -261,7 +261,7 @@ static void gstage_op_pte(struct kvm *kvm, gpa_t addr,
 		if (op == GSTAGE_OP_CLEAR)
 			set_pte(ptep, __pte(0));
 		else if (op == GSTAGE_OP_WP)
-			set_pte(ptep, __pte(pte_val(*ptep) & ~_PAGE_WRITE));
+			set_pte(ptep, __pte(pte_val(ptep_get(ptep)) & ~_PAGE_WRITE));
 		gstage_remote_tlb_flush(kvm, ptep_level, addr);
 	}
 }
@@ -603,7 +603,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 				   &ptep, &ptep_level))
 		return false;
 
-	return pte_young(*ptep);
+	return pte_young(ptep_get(ptep));
 }
 
 int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 90d4ba36d1d0..76f1df709a21 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -136,24 +136,24 @@ static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long a
 	pgd = (pgd_t *)pfn_to_virt(pfn) + index;
 	pgd_k = init_mm.pgd + index;
 
-	if (!pgd_present(*pgd_k)) {
+	if (!pgd_present(pgdp_get(pgd_k))) {
 		no_context(regs, addr);
 		return;
 	}
-	set_pgd(pgd, *pgd_k);
+	set_pgd(pgd, pgdp_get(pgd_k));
 
 	p4d_k = p4d_offset(pgd_k, addr);
-	if (!p4d_present(*p4d_k)) {
+	if (!p4d_present(p4dp_get(p4d_k))) {
 		no_context(regs, addr);
 		return;
 	}
 
 	pud_k = pud_offset(p4d_k, addr);
-	if (!pud_present(*pud_k)) {
+	if (!pud_present(pudp_get(pud_k))) {
 		no_context(regs, addr);
 		return;
 	}
-	if (pud_leaf(*pud_k))
+	if (pud_leaf(pudp_get(pud_k)))
 		goto flush_tlb;
 
 	/*
@@ -161,11 +161,11 @@ static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long a
 	 * to copy individual PTEs
 	 */
 	pmd_k = pmd_offset(pud_k, addr);
-	if (!pmd_present(*pmd_k)) {
+	if (!pmd_present(pmdp_get(pmd_k))) {
 		no_context(regs, addr);
 		return;
 	}
-	if (pmd_leaf(*pmd_k))
+	if (pmd_leaf(pmdp_get(pmd_k)))
 		goto flush_tlb;
 
 	/*
@@ -175,7 +175,7 @@ static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long a
 	 * silently loop forever.
 	 */
 	pte_k = pte_offset_kernel(pmd_k, addr);
-	if (!pte_present(*pte_k)) {
+	if (!pte_present(ptep_get(pte_k))) {
 		no_context(regs, addr);
 		return;
 	}
diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
index b52f0210481f..431596c0e20e 100644
--- a/arch/riscv/mm/hugetlbpage.c
+++ b/arch/riscv/mm/hugetlbpage.c
@@ -54,7 +54,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
 	}
 
 	if (sz == PMD_SIZE) {
-		if (want_pmd_share(vma, addr) && pud_none(*pud))
+		if (want_pmd_share(vma, addr) && pud_none(pudp_get(pud)))
 			pte = huge_pmd_share(mm, vma, addr, pud);
 		else
 			pte = (pte_t *)pmd_alloc(mm, pud, addr);
@@ -93,11 +93,11 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
 	pmd_t *pmd;
 
 	pgd = pgd_offset(mm, addr);
-	if (!pgd_present(*pgd))
+	if (!pgd_present(pgdp_get(pgd)))
 		return NULL;
 
 	p4d = p4d_offset(pgd, addr);
-	if (!p4d_present(*p4d))
+	if (!p4d_present(p4dp_get(p4d)))
 		return NULL;
 
 	pud = pud_offset(p4d, addr);
@@ -105,7 +105,7 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
 		/* must be pud huge, non-present or none */
 		return (pte_t *)pud;
 
-	if (!pud_present(*pud))
+	if (!pud_present(pudp_get(pud)))
 		return NULL;
 
 	pmd = pmd_offset(pud, addr);
@@ -113,7 +113,7 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
 		/* must be pmd huge, non-present or none */
 		return (pte_t *)pmd;
 
-	if (!pmd_present(*pmd))
+	if (!pmd_present(pmdp_get(pmd)))
 		return NULL;
 
 	for_each_napot_order(order) {
@@ -293,7 +293,7 @@ void huge_pte_clear(struct mm_struct *mm,
 		    pte_t *ptep, unsigned long sz)
 {
-	pte_t pte = READ_ONCE(*ptep);
+	pte_t pte = ptep_get(ptep);
 	int i, pte_num;
 
 	if (!pte_napot(pte)) {
diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index 5e39dcf23fdb..e96251853037 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -31,7 +31,7 @@ static void __init kasan_populate_pte(pmd_t *pmd, unsigned long vaddr, unsigned
 	phys_addr_t phys_addr;
 	pte_t *ptep, *p;
 
-	if (pmd_none(*pmd)) {
+	if (pmd_none(pmdp_get(pmd))) {
 		p = memblock_alloc(PTRS_PER_PTE * sizeof(pte_t), PAGE_SIZE);
 		set_pmd(pmd, pfn_pmd(PFN_DOWN(__pa(p)), PAGE_TABLE));
 	}
@@ -39,7 +39,7 @@ static void __init kasan_populate_pte(pmd_t *pmd, unsigned long vaddr, unsigned
 	ptep = pte_offset_kernel(pmd, vaddr);
 
 	do {
-		if (pte_none(*ptep)) {
+		if (pte_none(ptep_get(ptep))) {
 			phys_addr = memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
 			set_pte(ptep, pfn_pte(PFN_DOWN(phys_addr), PAGE_KERNEL));
 			memset(__va(phys_addr), KASAN_SHADOW_INIT, PAGE_SIZE);
@@ -53,7 +53,7 @@ static void __init kasan_populate_pmd(pud_t *pud, unsigned long vaddr, unsigned
 	pmd_t *pmdp, *p;
 	unsigned long next;
 
-	if (pud_none(*pud)) {
+	if (pud_none(pudp_get(pud))) {
 		p = memblock_alloc(PTRS_PER_PMD * sizeof(pmd_t), PAGE_SIZE);
 		set_pud(pud, pfn_pud(PFN_DOWN(__pa(p)), PAGE_TABLE));
 	}
@@ -63,7 +63,8 @@ static void __init kasan_populate_pmd(pud_t *pud, unsigned long vaddr, unsigned
 	do {
 		next = pmd_addr_end(vaddr, end);
 
-		if (pmd_none(*pmdp) && IS_ALIGNED(vaddr, PMD_SIZE) && (next - vaddr) >= PMD_SIZE) {
+		if (pmd_none(pmdp_get(pmdp)) && IS_ALIGNED(vaddr, PMD_SIZE) &&
+		    (next - vaddr) >= PMD_SIZE) {
 			phys_addr = memblock_phys_alloc(PMD_SIZE, PMD_SIZE);
 			if (phys_addr) {
 				set_pmd(pmdp, pfn_pmd(PFN_DOWN(phys_addr), PAGE_KERNEL));
@@ -83,7 +84,7 @@ static void __init kasan_populate_pud(p4d_t *p4d,
 	pud_t *pudp, *p;
 	unsigned long next;
 
-	if (p4d_none(*p4d)) {
+	if (p4d_none(p4dp_get(p4d))) {
 		p = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE);
 		set_p4d(p4d, pfn_p4d(PFN_DOWN(__pa(p)), PAGE_TABLE));
 	}
@@ -93,7 +94,8 @@ static void __init kasan_populate_pud(p4d_t *p4d,
 	do {
 		next = pud_addr_end(vaddr, end);
 
-		if (pud_none(*pudp) && IS_ALIGNED(vaddr, PUD_SIZE) && (next - vaddr) >= PUD_SIZE) {
+		if (pud_none(pudp_get(pudp)) && IS_ALIGNED(vaddr, PUD_SIZE) &&
+		    (next - vaddr) >= PUD_SIZE) {
 			phys_addr = memblock_phys_alloc(PUD_SIZE, PUD_SIZE);
 			if (phys_addr) {
 				set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_KERNEL));
@@ -113,7 +115,7 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
 	p4d_t *p4dp, *p;
 	unsigned long next;
 
-	if (pgd_none(*pgd)) {
+	if (pgd_none(pgdp_get(pgd))) {
 		p = memblock_alloc(PTRS_PER_P4D * sizeof(p4d_t), PAGE_SIZE);
 		set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE));
 	}
@@ -123,7 +125,8 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
 	do {
 		next = p4d_addr_end(vaddr, end);
 
-		if (p4d_none(*p4dp) && IS_ALIGNED(vaddr, P4D_SIZE) && (next - vaddr) >= P4D_SIZE) {
+		if (p4d_none(p4dp_get(p4dp)) && IS_ALIGNED(vaddr, P4D_SIZE) &&
+		    (next - vaddr) >= P4D_SIZE) {
 			phys_addr = memblock_phys_alloc(P4D_SIZE, P4D_SIZE);
 			if (phys_addr) {
 				set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_KERNEL));
@@ -145,7 +148,7 @@ static void __init kasan_populate_pgd(pgd_t *pgdp,
 	do {
 		next = pgd_addr_end(vaddr, end);
 
-		if (pgd_none(*pgdp) && IS_ALIGNED(vaddr, PGDIR_SIZE) &&
+		if (pgd_none(pgdp_get(pgdp)) && IS_ALIGNED(vaddr, PGDIR_SIZE) &&
 		    (next - vaddr) >= PGDIR_SIZE) {
 			phys_addr = memblock_phys_alloc(PGDIR_SIZE, PGDIR_SIZE);
 			if (phys_addr) {
@@ -168,7 +171,7 @@ static void __init kasan_early_clear_pud(p4d_t *p4dp,
 	if (!pgtable_l4_enabled) {
 		pudp = (pud_t *)p4dp;
 	} else {
-		base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(*p4dp)));
+		base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(p4dp_get(p4dp))));
 		pudp = base_pud + pud_index(vaddr);
 	}
 
@@ -193,7 +196,7 @@ static void __init kasan_early_clear_p4d(pgd_t *pgdp,
 	if (!pgtable_l5_enabled) {
 		p4dp = (p4d_t *)pgdp;
 	} else {
-		base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgdp)));
+		base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(pgdp_get(pgdp))));
 		p4dp = base_p4d + p4d_index(vaddr);
 	}
 
@@ -239,14 +242,14 @@ static void __init kasan_early_populate_pud(p4d_t *p4dp,
 	if (!pgtable_l4_enabled) {
 		pudp = (pud_t *)p4dp;
 	} else {
-		base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(*p4dp)));
+		base_pud = pt_ops.get_pud_virt(pfn_to_phys(_p4d_pfn(p4dp_get(p4dp))));
 		pudp = base_pud + pud_index(vaddr);
 	}
 
 	do {
 		next = pud_addr_end(vaddr, end);
 
-		if (pud_none(*pudp) && IS_ALIGNED(vaddr, PUD_SIZE) &&
+		if (pud_none(pudp_get(pudp)) && IS_ALIGNED(vaddr, PUD_SIZE) &&
 		    (next - vaddr) >= PUD_SIZE) {
 			phys_addr = __pa((uintptr_t)kasan_early_shadow_pmd);
 			set_pud(pudp, pfn_pud(PFN_DOWN(phys_addr), PAGE_TABLE));
@@ -277,14 +280,14 @@ static void __init kasan_early_populate_p4d(pgd_t *pgdp,
 	if (!pgtable_l5_enabled) {
 		p4dp = (p4d_t *)pgdp;
 	} else {
-		base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(*pgdp)));
+		base_p4d = pt_ops.get_p4d_virt(pfn_to_phys(_pgd_pfn(pgdp_get(pgdp))));
 		p4dp = base_p4d + p4d_index(vaddr);
 	}
 
 	do {
 		next = p4d_addr_end(vaddr, end);
 
-		if (p4d_none(*p4dp) && IS_ALIGNED(vaddr, P4D_SIZE) &&
+		if (p4d_none(p4dp_get(p4dp)) && IS_ALIGNED(vaddr, P4D_SIZE) &&
 		    (next - vaddr) >= P4D_SIZE) {
 			phys_addr = __pa((uintptr_t)kasan_early_shadow_pud);
 			set_p4d(p4dp, pfn_p4d(PFN_DOWN(phys_addr), PAGE_TABLE));
@@ -305,7 +308,7 @@ static void __init kasan_early_populate_pgd(pgd_t *pgdp,
 	do {
 		next = pgd_addr_end(vaddr, end);
 
-		if (pgd_none(*pgdp) && IS_ALIGNED(vaddr, PGDIR_SIZE) &&
+		if (pgd_none(pgdp_get(pgdp)) && IS_ALIGNED(vaddr, PGDIR_SIZE) &&
 		    (next - vaddr) >= PGDIR_SIZE) {
 			phys_addr = __pa((uintptr_t)kasan_early_shadow_p4d);
 			set_pgd(pgdp, pfn_pgd(PFN_DOWN(phys_addr), PAGE_TABLE));
@@ -381,7 +384,7 @@ static void __init kasan_shallow_populate_pud(p4d_t *p4d,
 	do {
 		next = pud_addr_end(vaddr, end);
 
-		if (pud_none(*pud_k)) {
+		if (pud_none(pudp_get(pud_k))) {
 			p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 			set_pud(pud_k, pfn_pud(PFN_DOWN(__pa(p)), PAGE_TABLE));
 			continue;
@@ -401,7 +404,7 @@ static void __init kasan_shallow_populate_p4d(pgd_t *pgd,
 	do {
 		next = p4d_addr_end(vaddr, end);
 
-		if (p4d_none(*p4d_k)) {
+		if (p4d_none(p4dp_get(p4d_k))) {
 			p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 			set_p4d(p4d_k, pfn_p4d(PFN_DOWN(__pa(p)), PAGE_TABLE));
 			continue;
@@ -420,7 +423,7 @@ static void __init kasan_shallow_populate_pgd(unsigned long vaddr, unsigned long
 	do {
 		next = pgd_addr_end(vaddr, end);
 
-		if (pgd_none(*pgd_k)) {
+		if (pgd_none(pgdp_get(pgd_k))) {
 			p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 			set_pgd(pgd_k, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE));
 			continue;
@@ -451,7 +454,7 @@ static void __init create_tmp_mapping(void)
 	/* Copy the last p4d since it is shared with the kernel mapping. */
 	if (pgtable_l5_enabled) {
-		ptr = (p4d_t *)pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_END));
+		ptr = (p4d_t *)pgd_page_vaddr(pgdp_get(pgd_offset_k(KASAN_SHADOW_END)));
 		memcpy(tmp_p4d, ptr, sizeof(p4d_t) * PTRS_PER_P4D);
 		set_pgd(&tmp_pg_dir[pgd_index(KASAN_SHADOW_END)],
 			pfn_pgd(PFN_DOWN(__pa(tmp_p4d)), PAGE_TABLE));
@@ -462,7 +465,7 @@ static void __init create_tmp_mapping(void)
 	/* Copy the last pud since it is shared with the kernel mapping. */
 	if (pgtable_l4_enabled) {
-		ptr = (pud_t *)p4d_page_vaddr(*(base_p4d + p4d_index(KASAN_SHADOW_END)));
+		ptr = (pud_t *)p4d_page_vaddr(p4dp_get(base_p4d + p4d_index(KASAN_SHADOW_END)));
 		memcpy(tmp_pud, ptr, sizeof(pud_t) * PTRS_PER_PUD);
 		set_p4d(&base_p4d[p4d_index(KASAN_SHADOW_END)],
 			pfn_p4d(PFN_DOWN(__pa(tmp_pud)), PAGE_TABLE));
diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index fc5fc4f785c4..0b5e38e018c8 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -29,7 +29,7 @@ static unsigned long set_pageattr_masks(unsigned long val, struct mm_walk *walk)
 static int pageattr_p4d_entry(p4d_t *p4d, unsigned long addr,
 			      unsigned long next, struct mm_walk *walk)
 {
-	p4d_t val = READ_ONCE(*p4d);
+	p4d_t val = p4dp_get(p4d);
 
 	if (p4d_leaf(val)) {
 		val = __p4d(set_pageattr_masks(p4d_val(val), walk));
@@ -42,7 +42,7 @@ static int pageattr_p4d_entry(p4d_t *p4d, unsigned long addr,
 static int pageattr_pud_entry(pud_t *pud, unsigned long addr,
 			      unsigned long next, struct mm_walk *walk)
 {
-	pud_t val = READ_ONCE(*pud);
+	pud_t val = pudp_get(pud);
 
 	if (pud_leaf(val)) {
 		val = __pud(set_pageattr_masks(pud_val(val), walk));
@@ -55,7 +55,7 @@ static int pageattr_pud_entry(pud_t *pud, unsigned long addr,
 static int pageattr_pmd_entry(pmd_t *pmd, unsigned long addr,
 			      unsigned long next, struct mm_walk *walk)
 {
-	pmd_t val = READ_ONCE(*pmd);
+	pmd_t val = pmdp_get(pmd);
 
 	if (pmd_leaf(val)) {
 		val = __pmd(set_pageattr_masks(pmd_val(val), walk));
@@ -68,7 +68,7 @@ static int pageattr_pmd_entry(pmd_t *pmd, unsigned long addr,
 static int pageattr_pte_entry(pte_t *pte, unsigned long addr,
 			      unsigned long next, struct mm_walk *walk)
 {
-	pte_t val = READ_ONCE(*pte);
+	pte_t val = ptep_get(pte);
 
 	val = __pte(set_pageattr_masks(pte_val(val), walk));
 	set_pte(pte, val);
@@ -108,10 +108,10 @@ static int __split_linear_mapping_pmd(pud_t *pudp,
 		    vaddr <= (vaddr & PMD_MASK) && end >= next)
 			continue;
 
-		if (pmd_leaf(*pmdp)) {
+		if (pmd_leaf(pmdp_get(pmdp))) {
 			struct page *pte_page;
-			unsigned long pfn = _pmd_pfn(*pmdp);
-			pgprot_t prot = __pgprot(pmd_val(*pmdp) & ~_PAGE_PFN_MASK);
+			unsigned long pfn = _pmd_pfn(pmdp_get(pmdp));
+			pgprot_t prot = __pgprot(pmd_val(pmdp_get(pmdp)) & ~_PAGE_PFN_MASK);
 			pte_t *ptep_new;
 			int i;
 
@@ -148,10 +148,10 @@ static int __split_linear_mapping_pud(p4d_t *p4dp,
 		    vaddr <= (vaddr & PUD_MASK) && end >= next)
 			continue;
 
-		if (pud_leaf(*pudp)) {
+		if (pud_leaf(pudp_get(pudp))) {
 			struct page *pmd_page;
-			unsigned long pfn = _pud_pfn(*pudp);
-			pgprot_t prot = __pgprot(pud_val(*pudp) & ~_PAGE_PFN_MASK);
+			unsigned long pfn = _pud_pfn(pudp_get(pudp));
+			pgprot_t prot = __pgprot(pud_val(pudp_get(pudp)) & ~_PAGE_PFN_MASK);
 			pmd_t *pmdp_new;
 			int i;
 
@@ -197,10 +197,10 @@ static int __split_linear_mapping_p4d(pgd_t *pgdp,
 		    vaddr <= (vaddr & P4D_MASK) && end >= next)
 			continue;
 
-		if (p4d_leaf(*p4dp)) {
+		if (p4d_leaf(p4dp_get(p4dp))) {
 			struct page *pud_page;
-			unsigned long pfn = _p4d_pfn(*p4dp);
-			pgprot_t prot = __pgprot(p4d_val(*p4dp) & ~_PAGE_PFN_MASK);
+			unsigned long pfn = _p4d_pfn(p4dp_get(p4dp));
+			pgprot_t prot = __pgprot(p4d_val(p4dp_get(p4dp)) & ~_PAGE_PFN_MASK);
 			pud_t *pudp_new;
 			int i;
 
@@ -406,29 +406,29 @@ bool kernel_page_present(struct page *page)
 	pte_t *pte;
 
 	pgd = pgd_offset_k(addr);
-	if (!pgd_present(*pgd))
+	if (!pgd_present(pgdp_get(pgd)))
 		return false;
-	if (pgd_leaf(*pgd))
+	if (pgd_leaf(pgdp_get(pgd)))
 		return true;
 
 	p4d = p4d_offset(pgd, addr);
-	if (!p4d_present(*p4d))
+	if (!p4d_present(p4dp_get(p4d)))
 		return false;
-	if (p4d_leaf(*p4d))
+	if (p4d_leaf(p4dp_get(p4d)))
 		return true;
 
 	pud = pud_offset(p4d, addr);
-	if (!pud_present(*pud))
+	if (!pud_present(pudp_get(pud)))
 		return false;
-	if (pud_leaf(*pud))
+	if (pud_leaf(pudp_get(pud)))
 		return true;
 
 	pmd = pmd_offset(pud, addr);
-	if (!pmd_present(*pmd))
+	if (!pmd_present(pmdp_get(pmd)))
 		return false;
-	if (pmd_leaf(*pmd))
+	if (pmd_leaf(pmdp_get(pmd)))
 		return true;
 
 	pte = pte_offset_kernel(pmd, addr);
-	return pte_present(*pte);
+	return pte_present(ptep_get(pte));
 }
diff --git a/arch/riscv/mm/pgtable.c b/arch/riscv/mm/pgtable.c
index fef4e7328e49..ef887efcb679 100644
--- a/arch/riscv/mm/pgtable.c
+++ b/arch/riscv/mm/pgtable.c
@@ -5,6 +5,47 @@
 #include
 #include
 
+int ptep_set_access_flags(struct vm_area_struct *vma,
+			  unsigned long address, pte_t *ptep,
+			  pte_t entry, int dirty)
+{
+	if (!pte_same(ptep_get(ptep), entry))
+		__set_pte_at(ptep, entry);
+	/*
+	 * update_mmu_cache will unconditionally execute, handling both
+	 * the case that the PTE changed and the spurious fault case.
+	 */
+	return true;
+}
+
+int ptep_test_and_clear_young(struct vm_area_struct *vma,
+			      unsigned long address,
+			      pte_t *ptep)
+{
+	if (!pte_young(ptep_get(ptep)))
+		return 0;
+	return test_and_clear_bit(_PAGE_ACCESSED_OFFSET, &pte_val(*ptep));
+}
+EXPORT_SYMBOL_GPL(ptep_test_and_clear_young);
+
+#ifdef CONFIG_64BIT
+pud_t *pud_offset(p4d_t *p4d, unsigned long address)
+{
+	if (pgtable_l4_enabled)
+		return p4d_pgtable(p4dp_get(p4d)) + pud_index(address);
+
+	return (pud_t *)p4d;
+}
+
+p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
+{
+	if (pgtable_l5_enabled)
+		return pgd_pgtable(pgdp_get(pgd)) + p4d_index(address);
+
+	return (p4d_t *)pgd;
+}
+#endif
+
 #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
 int p4d_set_huge(p4d_t *p4d, phys_addr_t addr, pgprot_t prot)
 {
@@ -25,7 +66,7 @@ int pud_set_huge(pud_t *pud, phys_addr_t phys, pgprot_t prot)
 
 int pud_clear_huge(pud_t *pud)
 {
-	if (!pud_leaf(READ_ONCE(*pud)))
+	if (!pud_leaf(pudp_get(pud)))
 		return 0;
 	pud_clear(pud);
 	return 1;
@@ -33,7 +74,7 @@ int pud_clear_huge(pud_t *pud)
 
 int pud_free_pmd_page(pud_t *pud, unsigned long addr)
 {
-	pmd_t *pmd = pud_pgtable(*pud);
+	pmd_t *pmd = pud_pgtable(pudp_get(pud));
 	int i;
 
 	pud_clear(pud);
@@ -63,7 +104,7 @@ int pmd_set_huge(pmd_t *pmd, phys_addr_t phys, pgprot_t prot)
 
 int pmd_clear_huge(pmd_t *pmd)
 {
-	if (!pmd_leaf(READ_ONCE(*pmd)))
+	if (!pmd_leaf(pmdp_get(pmd)))
 		return 0;
 	pmd_clear(pmd);
 	return 1;
@@ -71,7 +112,7 @@ int pmd_clear_huge(pmd_t *pmd)
 
 int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
 {
-	pte_t *pte = (pte_t *)pmd_page_vaddr(*pmd);
+	pte_t *pte = (pte_t *)pmd_page_vaddr(pmdp_get(pmd));
 
 	pmd_clear(pmd);
 
@@ -88,7 +129,7 @@ pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
 	pmd_t pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
 
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
-	VM_BUG_ON(pmd_trans_huge(*pmdp));
+	VM_BUG_ON(pmd_trans_huge(pmdp_get(pmdp)));
 	/*
 	 * When leaf PTE entries (regular pages) are collapsed into a leaf
	 * PMD entry (huge page), a valid non-leaf PTE is converted into a