From patchwork Mon Jul 31 17:03:04 2023
X-Patchwork-Submitter: Vishal Moola
X-Patchwork-Id: 13335187
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
	Hugh Dickins, "Vishal Moola (Oracle)"
Subject: [PATCH mm-unstable v8 03/31] mm: add utility functions for ptdesc
Date: Mon, 31 Jul 2023 10:03:04 -0700
Message-Id: <20230731170332.69404-4-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230731170332.69404-1-vishal.moola@gmail.com>
References: <20230731170332.69404-1-vishal.moola@gmail.com>
X-Mailing-List: linux-sh@vger.kernel.org

Introduce utility functions setting the foundation for ptdescs. These
will also assist in splitting ptdesc out of struct page.

Functions that focus on the descriptor are prefixed with ptdesc_*, while
functions that focus on the pagetable are prefixed with pagetable_*.

pagetable_alloc() is defined to allocate new ptdesc pages as compound
pages. This standardizes ptdescs by allowing for one allocation function
and one free function, in contrast to two allocation and two free
functions.

Signed-off-by: Vishal Moola (Oracle)
---
 include/asm-generic/tlb.h | 11 +++++++
 include/linux/mm.h        | 61 +++++++++++++++++++++++++++++++++++++++
 include/linux/pgtable.h   | 12 ++++++++
 3 files changed, 84 insertions(+)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index bc32a2284c56..129a3a759976 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -480,6 +480,17 @@ static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
 	return tlb_remove_page_size(tlb, page, PAGE_SIZE);
 }
 
+static inline void tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt)
+{
+	tlb_remove_table(tlb, pt);
+}
+
+/* Like tlb_remove_ptdesc, but for page-like page directories. */
+static inline void tlb_remove_page_ptdesc(struct mmu_gather *tlb, struct ptdesc *pt)
+{
+	tlb_remove_page(tlb, ptdesc_page(pt));
+}
+
 static inline void tlb_change_page_size(struct mmu_gather *tlb,
 					unsigned int page_size)
 {
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2ba73f09ae4a..3fda0ad41cf2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2787,6 +2787,57 @@ static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long a
 }
 #endif /* CONFIG_MMU */
 
+static inline struct ptdesc *virt_to_ptdesc(const void *x)
+{
+	return page_ptdesc(virt_to_page(x));
+}
+
+static inline void *ptdesc_to_virt(const struct ptdesc *pt)
+{
+	return page_to_virt(ptdesc_page(pt));
+}
+
+static inline void *ptdesc_address(const struct ptdesc *pt)
+{
+	return folio_address(ptdesc_folio(pt));
+}
+
+static inline bool pagetable_is_reserved(struct ptdesc *pt)
+{
+	return folio_test_reserved(ptdesc_folio(pt));
+}
+
+/**
+ * pagetable_alloc - Allocate pagetables
+ * @gfp:	GFP flags
+ * @order:	desired pagetable order
+ *
+ * pagetable_alloc allocates memory for page tables as well as a page table
+ * descriptor to describe that memory.
+ *
+ * Return: The ptdesc describing the allocated page tables.
+ */
+static inline struct ptdesc *pagetable_alloc(gfp_t gfp, unsigned int order)
+{
+	struct page *page = alloc_pages(gfp | __GFP_COMP, order);
+
+	return page_ptdesc(page);
+}
+
+/**
+ * pagetable_free - Free pagetables
+ * @pt:	The page table descriptor
+ *
+ * pagetable_free frees the memory of all page tables described by a page
+ * table descriptor and the memory for the descriptor itself.
+ */
+static inline void pagetable_free(struct ptdesc *pt)
+{
+	struct page *page = ptdesc_page(pt);
+
+	__free_pages(page, compound_order(page));
+}
+
 #if USE_SPLIT_PTE_PTLOCKS
 #if ALLOC_SPLIT_PTLOCKS
 void __init ptlock_cache_init(void);
@@ -2913,6 +2964,11 @@ static inline struct page *pmd_pgtable_page(pmd_t *pmd)
 	return virt_to_page((void *)((unsigned long) pmd & mask));
 }
 
+static inline struct ptdesc *pmd_ptdesc(pmd_t *pmd)
+{
+	return page_ptdesc(pmd_pgtable_page(pmd));
+}
+
 static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 {
 	return ptlock_ptr(pmd_pgtable_page(pmd));
@@ -3025,6 +3081,11 @@ static inline void mark_page_reserved(struct page *page)
 	adjust_managed_page_count(page, -1);
 }
 
+static inline void free_reserved_ptdesc(struct ptdesc *pt)
+{
+	free_reserved_page(ptdesc_page(pt));
+}
+
 /*
  * Default method to free all the __init memory into the buddy system.
  * The freed pages will be poisoned with pattern "poison" if it's within
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 1f92514d54b0..250fdeba68f3 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1064,6 +1064,18 @@ TABLE_MATCH(memcg_data, pt_memcg_data);
 #undef TABLE_MATCH
 static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
 
+#define ptdesc_page(pt)			(_Generic((pt),			\
+	const struct ptdesc *:		(const struct page *)(pt),	\
+	struct ptdesc *:		(struct page *)(pt)))
+
+#define ptdesc_folio(pt)		(_Generic((pt),			\
+	const struct ptdesc *:		(const struct folio *)(pt),	\
+	struct ptdesc *:		(struct folio *)(pt)))
+
+#define page_ptdesc(p)			(_Generic((p),			\
+	const struct page *:		(const struct ptdesc *)(p),	\
+	struct page *:			(struct ptdesc *)(p)))
+
 /*
  * No-op macros that just return the current protection value. Defined here
  * because these macros can be used even if CONFIG_MMU is not defined.
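For reviewers unfamiliar with the new helpers, below is a minimal usage
sketch (illustrative only, not part of this patch). The function names are
hypothetical stand-ins for an architecture's PTE allocation hooks, and the
sketch assumes GFP_PGTABLE_KERNEL as provided by asm-generic/pgalloc.h:

	/* Hypothetical example: allocate one kernel PTE page via a ptdesc. */
	static pte_t *example_pte_alloc_one_kernel(struct mm_struct *mm)
	{
		struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL, 0);

		if (!ptdesc)
			return NULL;
		/* The page table memory is reached through its descriptor. */
		return (pte_t *)ptdesc_address(ptdesc);
	}

	/* Hypothetical example: the matching single free path. */
	static void example_pte_free_kernel(struct mm_struct *mm, pte_t *pte)
	{
		pagetable_free(virt_to_ptdesc(pte));
	}

The point of the sketch is that one pagetable_alloc()/pagetable_free() pair
covers both the page table memory and its descriptor, instead of separate
page and descriptor allocation/free paths.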