From patchwork Tue Jun 27 03:13:59 2023
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Subject: [PATCH v6 01/33] mm: Add PAGE_TYPE_OP folio functions
Date: Mon, 26 Jun 2023 20:13:59 -0700
Message-Id: <20230627031431.29653-2-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

No folio equivalents for page type operations have been defined, so
define them for later folio conversions.

This also changes the Page##uname macros to take a const struct page *,
since these operations only read the memory.

Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 include/linux/page-flags.h | 30 +++++++++++++++++++++++-------
 1 file changed, 23 insertions(+), 7 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 92a2063a0a23..9218028caf33 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -908,6 +908,8 @@ static inline bool is_page_hwpoison(struct page *page)

 #define PageType(page, flag) \
 	((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
+#define folio_test_type(folio, flag) \
+	((folio->page.page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)

 static inline int page_type_has_type(unsigned int page_type)
 {
@@ -919,27 +921,41 @@ static inline int page_has_type(struct page *page)
 	return page_type_has_type(page->page_type);
 }

-#define PAGE_TYPE_OPS(uname, lname) \
-static __always_inline int Page##uname(struct page *page) \
+#define PAGE_TYPE_OPS(uname, lname, fname) \
+static __always_inline int Page##uname(const struct page *page) \
 { \
 	return PageType(page, PG_##lname); \
 } \
+static __always_inline int folio_test_##fname(const struct folio *folio)\
+{ \
+	return folio_test_type(folio, PG_##lname); \
+} \
 static __always_inline void __SetPage##uname(struct page *page) \
 { \
 	VM_BUG_ON_PAGE(!PageType(page, 0), page); \
 	page->page_type &= ~PG_##lname; \
 } \
+static __always_inline void __folio_set_##fname(struct folio *folio) \
+{ \
+	VM_BUG_ON_FOLIO(!folio_test_type(folio, 0), folio); \
+	folio->page.page_type &= ~PG_##lname; \
+} \
 static __always_inline void __ClearPage##uname(struct page *page) \
 { \
 	VM_BUG_ON_PAGE(!Page##uname(page), page); \
 	page->page_type |= PG_##lname; \
-}
+} \
+static __always_inline void __folio_clear_##fname(struct folio *folio) \
+{ \
+	VM_BUG_ON_FOLIO(!folio_test_##fname(folio), folio); \
+	folio->page.page_type |= PG_##lname; \
+} \

 /*
  * PageBuddy() indicates that the page is free and in the buddy system
  * (see mm/page_alloc.c).
  */
-PAGE_TYPE_OPS(Buddy, buddy)
+PAGE_TYPE_OPS(Buddy, buddy, buddy)

 /*
  * PageOffline() indicates that the page is logically offline although the
@@ -963,7 +979,7 @@ PAGE_TYPE_OPS(Buddy, buddy)
  * pages should check PageOffline() and synchronize with such drivers using
  * page_offline_freeze()/page_offline_thaw().
  */
-PAGE_TYPE_OPS(Offline, offline)
+PAGE_TYPE_OPS(Offline, offline, offline)

 extern void page_offline_freeze(void);
 extern void page_offline_thaw(void);
@@ -973,12 +989,12 @@ extern void page_offline_end(void);
 /*
  * Marks pages in use as page tables.
  */
-PAGE_TYPE_OPS(Table, table)
+PAGE_TYPE_OPS(Table, table, pgtable)

 /*
  * Marks guardpages used with debug_pagealloc.
  */
-PAGE_TYPE_OPS(Guard, guard)
+PAGE_TYPE_OPS(Guard, guard, guard)

 extern bool is_free_buddy_page(struct page *page);
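For a sense of the API this generates: hand-expanding
PAGE_TYPE_OPS(Buddy, buddy, buddy) for the new folio half gives roughly
the following (a sketch, not verbatim preprocessor output; the page-side
accessors are unchanged apart from the const qualifier):

    static __always_inline int folio_test_buddy(const struct folio *folio)
    {
            return folio_test_type(folio, PG_buddy);
    }

    static __always_inline void __folio_set_buddy(struct folio *folio)
    {
            VM_BUG_ON_FOLIO(!folio_test_type(folio, 0), folio);
            folio->page.page_type &= ~PG_buddy;
    }

    static __always_inline void __folio_clear_buddy(struct folio *folio)
    {
            VM_BUG_ON_FOLIO(!folio_test_buddy(folio), folio);
            folio->page.page_type |= PG_buddy;
    }

Note the inverted encoding visible in the macro body: page_type starts
at PAGE_TYPE_BASE, so setting a type clears its bit and clearing a type
sets it back.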
From patchwork Tue Jun 27 03:14:00 2023
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Subject: [PATCH v6 02/33] s390: Use _pt_s390_gaddr for gmap address tracking
Date: Mon, 26 Jun 2023 20:14:00 -0700
Message-Id: <20230627031431.29653-3-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

s390 uses page->index to keep track of page tables for the guest
address space. In an attempt to consolidate the usage of page fields in
s390, replace _pt_pad_2 with _pt_s390_gaddr, which takes over the role
of page->index in gmap.

Since page->_pt_s390_gaddr aliases with mapping, ensure it is set to
NULL before freeing the pages as well.

This also reverts commit 7e25de77bc5ea ("s390/mm: use pmd_pgtable_page()
helper in __gmap_segment_gaddr()"), which had s390 use
pmd_pgtable_page() to get a gmap page table, as pmd_pgtable_page()
should be reserved for generic process page tables.

Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 arch/s390/mm/gmap.c      | 56 +++++++++++++++++++++++++++-------------
 include/linux/mm_types.h |  2 +-
 2 files changed, 39 insertions(+), 19 deletions(-)

diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index f4b6fc746fce..beb4804d9ca8 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -70,7 +70,7 @@ static struct gmap *gmap_alloc(unsigned long limit)
 	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
 	if (!page)
 		goto out_free;
-	page->index = 0;
+	page->_pt_s390_gaddr = 0;
 	list_add(&page->lru, &gmap->crst_list);
 	table = page_to_virt(page);
 	crst_table_init(table, etype);
@@ -187,16 +187,20 @@ static void gmap_free(struct gmap *gmap)
 	if (!(gmap_is_shadow(gmap) && gmap->removed))
 		gmap_flush_tlb(gmap);
 	/* Free all segment & region tables. */
-	list_for_each_entry_safe(page, next, &gmap->crst_list, lru)
+	list_for_each_entry_safe(page, next, &gmap->crst_list, lru) {
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
+	}
 	gmap_radix_tree_free(&gmap->guest_to_host);
 	gmap_radix_tree_free(&gmap->host_to_guest);

 	/* Free additional data for a shadow gmap */
 	if (gmap_is_shadow(gmap)) {
 		/* Free all page tables. */
-		list_for_each_entry_safe(page, next, &gmap->pt_list, lru)
+		list_for_each_entry_safe(page, next, &gmap->pt_list, lru) {
+			page->_pt_s390_gaddr = 0;
 			page_table_free_pgste(page);
+		}
 		gmap_rmap_radix_tree_free(&gmap->host_to_rmap);
 		/* Release reference to the parent */
 		gmap_put(gmap->parent);
@@ -318,12 +322,14 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
 		list_add(&page->lru, &gmap->crst_list);
 		*table = __pa(new) | _REGION_ENTRY_LENGTH |
 			(*table & _REGION_ENTRY_TYPE_MASK);
-		page->index = gaddr;
+		page->_pt_s390_gaddr = gaddr;
 		page = NULL;
 	}
 	spin_unlock(&gmap->guest_table_lock);
-	if (page)
+	if (page) {
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
+	}
 	return 0;
 }

@@ -336,12 +342,14 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
 static unsigned long __gmap_segment_gaddr(unsigned long *entry)
 {
 	struct page *page;
-	unsigned long offset;
+	unsigned long offset, mask;

 	offset = (unsigned long) entry / sizeof(unsigned long);
 	offset = (offset & (PTRS_PER_PMD - 1)) * PMD_SIZE;
-	page = pmd_pgtable_page((pmd_t *) entry);
-	return page->index + offset;
+	mask = ~(PTRS_PER_PMD * sizeof(pmd_t) - 1);
+	page = virt_to_page((void *)((unsigned long) entry & mask));
+
+	return page->_pt_s390_gaddr + offset;
 }

@@ -1351,6 +1359,7 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr)
 	/* Free page table */
 	page = phys_to_page(pgt);
 	list_del(&page->lru);
+	page->_pt_s390_gaddr = 0;
 	page_table_free_pgste(page);
 }

@@ -1379,6 +1388,7 @@ static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr,
 		/* Free page table */
 		page = phys_to_page(pgt);
 		list_del(&page->lru);
+		page->_pt_s390_gaddr = 0;
 		page_table_free_pgste(page);
 	}
 }
@@ -1409,6 +1419,7 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr)
 	/* Free segment table */
 	page = phys_to_page(sgt);
 	list_del(&page->lru);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 }

@@ -1437,6 +1448,7 @@ static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr,
 		/* Free segment table */
 		page = phys_to_page(sgt);
 		list_del(&page->lru);
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
 	}
 }
@@ -1467,6 +1479,7 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr)
 	/* Free region 3 table */
 	page = phys_to_page(r3t);
 	list_del(&page->lru);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 }

@@ -1495,6 +1508,7 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr,
 		/* Free region 3 table */
 		page = phys_to_page(r3t);
 		list_del(&page->lru);
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
 	}
 }
@@ -1525,6 +1539,7 @@ static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr)
 	/* Free region 2 table */
 	page = phys_to_page(r2t);
 	list_del(&page->lru);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 }

@@ -1557,6 +1572,7 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr,
 		/* Free region 2 table */
 		page = phys_to_page(r2t);
 		list_del(&page->lru);
+		page->_pt_s390_gaddr = 0;
 		__free_pages(page, CRST_ALLOC_ORDER);
 	}
 }
@@ -1762,9 +1778,9 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
 	if (!page)
 		return -ENOMEM;
-	page->index = r2t & _REGION_ENTRY_ORIGIN;
+	page->_pt_s390_gaddr = r2t & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
+		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
 	s_r2t = page_to_phys(page);
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
@@ -1814,6 +1830,7 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 	return rc;
 }
@@ -1846,9 +1863,9 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
 	if (!page)
 		return -ENOMEM;
-	page->index = r3t & _REGION_ENTRY_ORIGIN;
+	page->_pt_s390_gaddr = r3t & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
+		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
 	s_r3t = page_to_phys(page);
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
@@ -1898,6 +1915,7 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 	return rc;
 }
@@ -1930,9 +1948,9 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
 	if (!page)
 		return -ENOMEM;
-	page->index = sgt & _REGION_ENTRY_ORIGIN;
+	page->_pt_s390_gaddr = sgt & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
+		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
 	s_sgt = page_to_phys(page);
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
@@ -1982,6 +2000,7 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
+	page->_pt_s390_gaddr = 0;
 	__free_pages(page, CRST_ALLOC_ORDER);
 	return rc;
 }
@@ -2014,9 +2033,9 @@ int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr,
 	if (table && !(*table & _SEGMENT_ENTRY_INVALID)) {
 		/* Shadow page tables are full pages (pte+pgste) */
 		page = pfn_to_page(*table >> PAGE_SHIFT);
-		*pgt = page->index & ~GMAP_SHADOW_FAKE_TABLE;
+		*pgt = page->_pt_s390_gaddr & ~GMAP_SHADOW_FAKE_TABLE;
 		*dat_protection = !!(*table & _SEGMENT_ENTRY_PROTECT);
-		*fake = !!(page->index & GMAP_SHADOW_FAKE_TABLE);
+		*fake = !!(page->_pt_s390_gaddr & GMAP_SHADOW_FAKE_TABLE);
 		rc = 0;
 	} else  {
 		rc = -EAGAIN;
@@ -2054,9 +2073,9 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	page = page_table_alloc_pgste(sg->mm);
 	if (!page)
 		return -ENOMEM;
-	page->index = pgt & _SEGMENT_ENTRY_ORIGIN;
+	page->_pt_s390_gaddr = pgt & _SEGMENT_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
+		page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE;
 	s_pgt = page_to_phys(page);
 	/* Install shadow page table */
 	spin_lock(&sg->guest_table_lock);
@@ -2101,6 +2120,7 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
+	page->_pt_s390_gaddr = 0;
 	page_table_free_pgste(page);

 	return rc;
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index de10fc797c8e..fbbe4e93a9ba 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -144,7 +144,7 @@ struct page {
 		struct {	/* Page table pages */
 			unsigned long _pt_pad_1;	/* compound_head */
 			pgtable_t pmd_huge_pte;	/* protected by page->ptl */
-			unsigned long _pt_pad_2;	/* mapping */
+			unsigned long _pt_s390_gaddr;	/* mapping */
 			union {
 				struct mm_struct *pt_mm; /* x86 pgds only */
 				atomic_t pt_frag_refcount; /* powerpc */
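A quick illustration (not part of the patch) of the mask arithmetic that
replaces pmd_pgtable_page() in __gmap_segment_gaddr(): entries live
inside a naturally aligned table, so clearing the low bits of an entry's
address yields the table base whose struct page carries _pt_s390_gaddr.
Assuming 2048 eight-byte entries per segment table (16 KiB), as on s390:

    /* hypothetical helper mirroring the open-coded mask above */
    static unsigned long table_base(unsigned long entry)
    {
            unsigned long mask = ~(2048UL * sizeof(unsigned long) - 1);

            return entry & mask;    /* e.g. 0x12345678 -> 0x12344000 */
    }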
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13293769 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5E18FC04E69 for ; Tue, 27 Jun 2023 03:15:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229987AbjF0DPZ (ORCPT ); Mon, 26 Jun 2023 23:15:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35972 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230145AbjF0DPW (ORCPT ); Mon, 26 Jun 2023 23:15:22 -0400 Received: from mail-yw1-x112e.google.com (mail-yw1-x112e.google.com [IPv6:2607:f8b0:4864:20::112e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 26A101991; Mon, 26 Jun 2023 20:15:15 -0700 (PDT) Received: by mail-yw1-x112e.google.com with SMTP id 00721157ae682-5701810884aso32951897b3.0; Mon, 26 Jun 2023 20:15:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1687835714; x=1690427714; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=g10wkISUfh+NWh/zmhT9FX+m3ZttfvdTUqPKBonJP24=; b=YbS6It5/jG4gj9GV1WnqlWfE48J9CdnxNvOxOEYb2jkpFBwkcijgogv5/2FTyBYSJu izIlhC4BKNDwS5s1e4kK9kflroiGbN+V3CM6t7L2EFgtcSkg+ReKLemBNYF5Y3oVXg4d FUK6TJROlYVwfl34r2gL5piZl88lZtnh5mJi4W6rFRki4guA/8D+KKFrkkfHcSu1EJOJ fncA5E4mKOaGZV6UHnLSf+mughQQ2/sG6vdU9OT//BnRvDPbrV7dVSZuLKiGxg9g4JB4 L4rqAG2yhTX1HWLW8k7KKi4wHWH5luhyXPrJQJ+E2hOyRozoqBZDlMJstEk3Y4X/0/Xj 8nVg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1687835714; x=1690427714; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=g10wkISUfh+NWh/zmhT9FX+m3ZttfvdTUqPKBonJP24=; b=R2ChFym9OaXz6+8/v8iXMDP1rSr5DqzfsCCRyS9wetW1xqOaohQv99jKB4DzY/pXsn SulEZ3tp2Cvt6+AczAKqxedBbeTE1T5D5LcKCh9IBLK541U8MZiWkDw2gyo3ks38H6x7 pMnOpa2jMfeSh1kszY5qqA+Hq8JdpQT54k0l07H0I3KaE33izgHVxi1cmHB0+KWCVpzc YHvnKQzGlRKesp/pJE2XSSvkBwZeZ1eWI+/eKFzdo6yI34Rd+sIqrm2r2JMo8YdlCz/o y/6+TsNGUZz79BMLi4jmBkOuhU96jLkLg6zSymkfY8yeKaJFtfbyoXr+K7hxukqosoUi i5aQ== X-Gm-Message-State: AC+VfDzP/rq2WsIyYc7TLt75Nrj6BURmYdHbMu3xMsKbY7MhIprp2GvS y6TexRchzS5yZmFnzyvvZWw= X-Google-Smtp-Source: ACHHUZ4trddfgXzjmNYnvfIX3UyOWVfVwt64/0k1X+1Ib62f54dPN/U9UUyskS0pcHiYwtqCWDYcTQ== X-Received: by 2002:a81:4e44:0:b0:56f:ff55:2b74 with SMTP id c65-20020a814e44000000b0056fff552b74mr21585770ywb.37.1687835714203; Mon, 26 Jun 2023 20:15:14 -0700 (PDT) Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::16]) by smtp.googlemail.com with ESMTPSA id s4-20020a0dd004000000b0057399b3bd26sm1614798ywd.33.2023.06.26.20.15.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 26 Jun 2023 20:15:13 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, 
Subject: [PATCH v6 03/33] pgtable: Create struct ptdesc
Date: Mon, 26 Jun 2023 20:14:01 -0700
Message-Id: <20230627031431.29653-4-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

Currently, page table information is stored within struct page. As part
of simplifying struct page, create struct ptdesc for page table
information.

Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 include/linux/pgtable.h | 68 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 5063b482e34f..d46cb709ce08 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -987,6 +987,74 @@ static inline void ptep_modify_prot_commit(struct vm_area_struct *vma,
 #endif /* __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION */
 #endif /* CONFIG_MMU */

+
+/**
+ * struct ptdesc - Memory descriptor for page tables.
+ * @__page_flags: Same as page flags. Unused for page tables.
+ * @pt_rcu_head: For freeing page table pages.
+ * @pt_list: List of used page tables. Used for s390 and x86.
+ * @_pt_pad_1: Padding that aliases with page's compound head.
+ * @pmd_huge_pte: Protected by ptdesc->ptl, used for THPs.
+ * @_pt_s390_gaddr: Aliases with page's mapping. Used for s390 gmap only.
+ * @pt_mm: Used for x86 pgds.
+ * @pt_frag_refcount: For fragmented page table tracking. Powerpc and s390 only.
+ * @ptl: Lock for the page table.
+ * @__page_type: Same as page->page_type. Unused for page tables.
+ * @_refcount: Same as page refcount. Used for s390 page tables.
+ * @pt_memcg_data: Memcg data. Tracked for page tables here.
+ *
+ * This struct overlays struct page for now. Do not modify without a good
+ * understanding of the issues.
+ */
+struct ptdesc {
+	unsigned long __page_flags;
+
+	union {
+		struct rcu_head pt_rcu_head;
+		struct list_head pt_list;
+		struct {
+			unsigned long _pt_pad_1;
+			pgtable_t pmd_huge_pte;
+		};
+	};
+	unsigned long _pt_s390_gaddr;
+
+	union {
+		struct mm_struct *pt_mm;
+		atomic_t pt_frag_refcount;
+	};
+
+	union {
+		unsigned long _pt_pad_2;
+#if ALLOC_SPLIT_PTLOCKS
+		spinlock_t *ptl;
+#else
+		spinlock_t ptl;
+#endif
+	};
+	unsigned int __page_type;
+	atomic_t _refcount;
+#ifdef CONFIG_MEMCG
+	unsigned long pt_memcg_data;
+#endif
+};
+
+#define TABLE_MATCH(pg, pt) \
+	static_assert(offsetof(struct page, pg) == offsetof(struct ptdesc, pt))
+TABLE_MATCH(flags, __page_flags);
+TABLE_MATCH(compound_head, pt_list);
+TABLE_MATCH(compound_head, _pt_pad_1);
+TABLE_MATCH(pmd_huge_pte, pmd_huge_pte);
+TABLE_MATCH(mapping, _pt_s390_gaddr);
+TABLE_MATCH(pt_mm, pt_mm);
+TABLE_MATCH(ptl, ptl);
+TABLE_MATCH(rcu_head, pt_rcu_head);
+#ifdef CONFIG_MEMCG
+TABLE_MATCH(memcg_data, pt_memcg_data);
+#endif
+#undef TABLE_MATCH
+static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
+
 /*
  * No-op macros that just return the current protection value. Defined here
  * because these macros can be used even if CONFIG_MMU is not defined.
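The TABLE_MATCH pattern is worth a standalone look. A minimal,
self-contained sketch of the same overlay check in plain C11, with
hypothetical struct names:

    #include <assert.h>
    #include <stddef.h>

    /* Two views of the same memory; the asserts catch any layout drift. */
    struct base { unsigned long flags; void *head; };
    struct view { unsigned long __flags; void *list; };

    #define MATCH(a, b) \
            static_assert(offsetof(struct base, a) == \
                          offsetof(struct view, b), #a "/" #b " drifted")
    MATCH(flags, __flags);
    MATCH(head, list);
    #undef MATCH
    static_assert(sizeof(struct view) <= sizeof(struct base), "view too big");

If anyone reorders struct page (or struct ptdesc), the build fails
instead of silently corrupting the aliased fields.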
From patchwork Tue Jun 27 03:14:02 2023
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Subject: [PATCH v6 04/33] mm: add utility functions for ptdesc
Date: Mon, 26 Jun 2023 20:14:02 -0700
Message-Id: <20230627031431.29653-5-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

Introduce utility functions setting the foundation for ptdescs. These
will also assist in the splitting out of ptdesc from struct page.

Functions that focus on the descriptor are prefixed with ptdesc_* while
functions that focus on the pagetable are prefixed with pagetable_*.

pagetable_alloc() is defined to allocate new ptdesc pages as compound
pages. This standardizes ptdescs on one allocation and one free
function, in contrast to the current two allocation and two free
functions.

Signed-off-by: Vishal Moola (Oracle)
---
 include/asm-generic/tlb.h | 11 ++++++++
 include/linux/mm.h        | 56 +++++++++++++++++++++++++++++++++++++++
 include/linux/pgtable.h   | 12 +++++++++
 3 files changed, 79 insertions(+)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index b46617207c93..6bade9e0e799 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -481,6 +481,17 @@ static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
 	return tlb_remove_page_size(tlb, page, PAGE_SIZE);
 }

+static inline void tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt)
+{
+	tlb_remove_table(tlb, pt);
+}
+
+/* Like tlb_remove_ptdesc, but for page-like page directories. */
+static inline void tlb_remove_page_ptdesc(struct mmu_gather *tlb, struct ptdesc *pt)
+{
+	tlb_remove_page(tlb, ptdesc_page(pt));
+}
+
 static inline void tlb_change_page_size(struct mmu_gather *tlb,
 						unsigned int page_size)
 {
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0dad5f40ef96..14d95d494958 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2744,6 +2744,57 @@ static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long a
 }
 #endif /* CONFIG_MMU */

+static inline struct ptdesc *virt_to_ptdesc(const void *x)
+{
+	return page_ptdesc(virt_to_page(x));
+}
+
+static inline void *ptdesc_to_virt(const struct ptdesc *pt)
+{
+	return page_to_virt(ptdesc_page(pt));
+}
+
+static inline void *ptdesc_address(const struct ptdesc *pt)
+{
+	return folio_address(ptdesc_folio(pt));
+}
+
+static inline bool pagetable_is_reserved(struct ptdesc *pt)
+{
+	return folio_test_reserved(ptdesc_folio(pt));
+}
+
+/**
+ * pagetable_alloc - Allocate pagetables
+ * @gfp:    GFP flags
+ * @order:  desired pagetable order
+ *
+ * pagetable_alloc allocates memory for page tables as well as a page table
+ * descriptor to describe that memory.
+ *
+ * Return: The ptdesc describing the allocated page tables.
+ */
+static inline struct ptdesc *pagetable_alloc(gfp_t gfp, unsigned int order)
+{
+	struct page *page = alloc_pages(gfp | __GFP_COMP, order);
+
+	return page_ptdesc(page);
+}
+
+/**
+ * pagetable_free - Free pagetables
+ * @pt:	The page table descriptor
+ *
+ * pagetable_free frees the memory of all page tables described by a page
+ * table descriptor and the memory for the descriptor itself.
+ */
+static inline void pagetable_free(struct ptdesc *pt)
+{
+	struct page *page = ptdesc_page(pt);
+
+	__free_pages(page, compound_order(page));
+}
+
 #if USE_SPLIT_PTE_PTLOCKS
 #if ALLOC_SPLIT_PTLOCKS
 void __init ptlock_cache_init(void);
@@ -2981,6 +3032,11 @@ static inline void mark_page_reserved(struct page *page)
 	adjust_managed_page_count(page, -1);
 }

+static inline void free_reserved_ptdesc(struct ptdesc *pt)
+{
+	free_reserved_page(ptdesc_page(pt));
+}
+
 /*
  * Default method to free all the __init memory into the buddy system.
  * The freed pages will be poisoned with pattern "poison" if it's within
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index d46cb709ce08..e9bb5f18cade 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1055,6 +1055,18 @@ TABLE_MATCH(memcg_data, pt_memcg_data);
 #undef TABLE_MATCH
 static_assert(sizeof(struct ptdesc) <= sizeof(struct page));

+#define ptdesc_page(pt)	(_Generic((pt), \
+	const struct ptdesc *:	(const struct page *)(pt), \
+	struct ptdesc *:	(struct page *)(pt)))
+
+#define ptdesc_folio(pt)	(_Generic((pt), \
+	const struct ptdesc *:	(const struct folio *)(pt), \
+	struct ptdesc *:	(struct folio *)(pt)))
+
+#define page_ptdesc(p)	(_Generic((p), \
+	const struct page *:	(const struct ptdesc *)(p), \
+	struct page *:		(struct ptdesc *)(p)))
+
 /*
  * No-op macros that just return the current protection value. Defined here
  * because these macros can be used even if CONFIG_MMU is not defined.
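Putting the new helpers together: a sketch of the intended calling
convention (not a hunk from this series; only the helper names just
added above are used):

    static void *alloc_one_table(void)
    {
            /* one allocation covers the table pages and their descriptor */
            struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0);

            if (!ptdesc)
                    return NULL;
            return ptdesc_address(ptdesc);
    }

    static void free_one_table(void *table)
    {
            /* one free tears the same thing down */
            pagetable_free(virt_to_ptdesc(table));
    }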
From patchwork Tue Jun 27 03:14:03 2023
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Subject: [PATCH v6 05/33] mm: Convert pmd_pgtable_page() to pmd_ptdesc()
Date: Mon, 26 Jun 2023 20:14:03 -0700
Message-Id: <20230627031431.29653-6-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

Convert pmd_pgtable_page() to pmd_ptdesc() and update all of its
callers. This removes some direct accesses to struct page, working
towards splitting out struct ptdesc from struct page.
Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 include/linux/mm.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 14d95d494958..1511faf0263c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2915,15 +2915,15 @@ pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,

 #if USE_SPLIT_PMD_PTLOCKS

-static inline struct page *pmd_pgtable_page(pmd_t *pmd)
+static inline struct ptdesc *pmd_ptdesc(pmd_t *pmd)
 {
 	unsigned long mask = ~(PTRS_PER_PMD * sizeof(pmd_t) - 1);
-	return virt_to_page((void *)((unsigned long) pmd & mask));
+	return virt_to_ptdesc((void *)((unsigned long) pmd & mask));
 }

 static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 {
-	return ptlock_ptr(pmd_pgtable_page(pmd));
+	return ptlock_ptr(ptdesc_page(pmd_ptdesc(pmd)));
 }

 static inline bool pmd_ptlock_init(struct page *page)
@@ -2942,7 +2942,7 @@ static inline void pmd_ptlock_free(struct page *page)
 	ptlock_free(page);
 }

-#define pmd_huge_pte(mm, pmd) (pmd_pgtable_page(pmd)->pmd_huge_pte)
+#define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte)

 #else
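One detail that makes conversions like this cheap: because the cast
macros added in patch 04/33 use _Generic, they preserve constness across
the round trip, so pmd_lockptr() can cast back to struct page exactly
where one is still needed. Roughly, with hypothetical variables:

    const struct page *cp = /* ... */;
    const struct ptdesc *cpt = page_ptdesc(cp);   /* const in, const out */

    struct ptdesc *pt = /* ... */;
    struct page *pg = ptdesc_page(pt);            /* non-const stays non-const */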
From patchwork Tue Jun 27 03:14:04 2023
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Subject: [PATCH v6 06/33] mm: Convert ptlock_alloc() to use ptdescs
Date: Mon, 26 Jun 2023 20:14:04 -0700
Message-Id: <20230627031431.29653-7-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 include/linux/mm.h | 6 +++---
 mm/memory.c        | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1511faf0263c..39b0a4661e44 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2798,7 +2798,7 @@ static inline void pagetable_free(struct ptdesc *pt)
 #if USE_SPLIT_PTE_PTLOCKS
 #if ALLOC_SPLIT_PTLOCKS
 void __init ptlock_cache_init(void);
-extern bool ptlock_alloc(struct page *page);
+bool ptlock_alloc(struct ptdesc *ptdesc);
 extern void ptlock_free(struct page *page);

 static inline spinlock_t *ptlock_ptr(struct page *page)
@@ -2810,7 +2810,7 @@ static inline void ptlock_cache_init(void)
 {
 }

-static inline bool ptlock_alloc(struct page *page)
+static inline bool ptlock_alloc(struct ptdesc *ptdesc)
 {
 	return true;
 }
@@ -2840,7 +2840,7 @@ static inline bool ptlock_init(struct page *page)
 	 * slab code uses page->slab_cache, which share storage with page->ptl.
 	 */
 	VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
-	if (!ptlock_alloc(page))
+	if (!ptlock_alloc(page_ptdesc(page)))
 		return false;
 	spin_lock_init(ptlock_ptr(page));
 	return true;
diff --git a/mm/memory.c b/mm/memory.c
index 80faf3e76232..2ff14f50c7b3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5920,14 +5920,14 @@ void __init ptlock_cache_init(void)
 			SLAB_PANIC, NULL);
 }

-bool ptlock_alloc(struct page *page)
+bool ptlock_alloc(struct ptdesc *ptdesc)
 {
 	spinlock_t *ptl;

 	ptl = kmem_cache_alloc(page_ptl_cachep, GFP_KERNEL);
 	if (!ptl)
 		return false;
-	page->ptl = ptl;
+	ptdesc->ptl = ptl;
 	return true;
 }
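For orientation, a condensed sketch of ptlock_init() as it stands after
this patch (the VM_BUG_ON_PAGE check is elided; this is not new code):

    static inline bool ptlock_init(struct page *page)
    {
            /* With ALLOC_SPLIT_PTLOCKS, ptlock_alloc() takes a spinlock
             * from a slab cache and stores it in ptdesc->ptl; otherwise
             * it is a no-op returning true and the lock is embedded in
             * the descriptor itself.
             */
            if (!ptlock_alloc(page_ptdesc(page)))
                    return false;
            spin_lock_init(ptlock_ptr(page));  /* still page-based until 07/33 */
            return true;
    }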
From patchwork Tue Jun 27 03:14:05 2023
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Subject: [PATCH v6 07/33] mm: Convert ptlock_ptr() to use ptdescs
Date: Mon, 26 Jun 2023 20:14:05 -0700
Message-Id: <20230627031431.29653-8-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 arch/x86/xen/mmu_pv.c |  2 +-
 include/linux/mm.h    | 14 +++++++-------
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index e0a975165de7..8796ec310483 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -667,7 +667,7 @@ static spinlock_t *xen_pte_lock(struct page *page, struct mm_struct *mm)
 	spinlock_t *ptl = NULL;

 #if USE_SPLIT_PTE_PTLOCKS
-	ptl = ptlock_ptr(page);
+	ptl = ptlock_ptr(page_ptdesc(page));
 	spin_lock_nest_lock(ptl, &mm->page_table_lock);
 #endif

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 39b0a4661e44..0b230d5d229a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2801,9 +2801,9 @@ void __init ptlock_cache_init(void);
 bool ptlock_alloc(struct ptdesc *ptdesc);
 extern void ptlock_free(struct page *page);

-static inline spinlock_t *ptlock_ptr(struct page *page)
+static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
 {
-	return page->ptl;
+	return ptdesc->ptl;
 }
 #else /* ALLOC_SPLIT_PTLOCKS */
 static inline void ptlock_cache_init(void)
@@ -2819,15 +2819,15 @@ static inline void ptlock_free(struct page *page)
 {
 }

-static inline spinlock_t *ptlock_ptr(struct page *page)
+static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
 {
-	return &page->ptl;
+	return &ptdesc->ptl;
 }
 #endif /* ALLOC_SPLIT_PTLOCKS */

 static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
 {
-	return ptlock_ptr(pmd_page(*pmd));
+	return ptlock_ptr(page_ptdesc(pmd_page(*pmd)));
 }

 static inline bool ptlock_init(struct page *page)
@@ -2842,7 +2842,7 @@ static inline bool ptlock_init(struct page *page)
 	VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
 	if (!ptlock_alloc(page_ptdesc(page)))
 		return false;
-	spin_lock_init(ptlock_ptr(page));
+	spin_lock_init(ptlock_ptr(page_ptdesc(page)));
 	return true;
 }

@@ -2923,7 +2923,7 @@ static inline struct ptdesc *pmd_ptdesc(pmd_t *pmd)

 static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 {
-	return ptlock_ptr(ptdesc_page(pmd_ptdesc(pmd)));
+	return ptlock_ptr(pmd_ptdesc(pmd));
 }

 static inline bool pmd_ptlock_init(struct page *page)
From patchwork Tue Jun 27 03:14:06 2023
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Subject: [PATCH v6 08/33] mm: Convert pmd_ptlock_init() to use ptdescs
Date: Mon, 26 Jun 2023 20:14:06 -0700
Message-Id: <20230627031431.29653-9-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>
References: <20230627031431.29653-1-vishal.moola@gmail.com>

This removes some direct accesses to struct page, working towards splitting out struct ptdesc from struct page.

Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 include/linux/mm.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h index 0b230d5d229a..1c4c6a7b69b3 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2926,12 +2926,12 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd) return ptlock_ptr(pmd_ptdesc(pmd)); } -static inline bool pmd_ptlock_init(struct page *page) +static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) { #ifdef CONFIG_TRANSPARENT_HUGEPAGE - page->pmd_huge_pte = NULL; + ptdesc->pmd_huge_pte = NULL; #endif - return ptlock_init(page); + return ptlock_init(ptdesc_page(ptdesc)); } static inline void pmd_ptlock_free(struct page *page) @@ -2951,7 +2951,7 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd) return &mm->page_table_lock; } -static inline bool pmd_ptlock_init(struct page *page) { return true; } +static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) { return true; } static inline void pmd_ptlock_free(struct page *page) {} #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte) @@ -2967,7 +2967,7 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd) static inline bool pgtable_pmd_page_ctor(struct page *page) { - if (!pmd_ptlock_init(page)) + if (!pmd_ptlock_init(page_ptdesc(page))) return false; __SetPageTable(page); inc_lruvec_page_state(page, NR_PAGETABLE);
From patchwork Tue Jun 27 03:14:07 2023
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 09/33] mm: Convert ptlock_init() to use ptdescs
Date: Mon, 26 Jun 2023 20:14:07 -0700
Message-Id: <20230627031431.29653-10-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>
References: <20230627031431.29653-1-vishal.moola@gmail.com>

This removes some direct accesses to struct page, working towards splitting out struct ptdesc from struct page.
Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 include/linux/mm.h | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h index 1c4c6a7b69b3..4af424e4015a 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2830,7 +2830,7 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd) return ptlock_ptr(page_ptdesc(pmd_page(*pmd))); } -static inline bool ptlock_init(struct page *page) +static inline bool ptlock_init(struct ptdesc *ptdesc) { /* * prep_new_page() initialize page->private (and therefore page->ptl) @@ -2839,10 +2839,10 @@ static inline bool ptlock_init(struct page *page) * It can happen if arch try to use slab for page table allocation: * slab code uses page->slab_cache, which share storage with page->ptl. */ - VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page); - if (!ptlock_alloc(page_ptdesc(page))) + VM_BUG_ON_PAGE(*(unsigned long *)&ptdesc->ptl, ptdesc_page(ptdesc)); + if (!ptlock_alloc(ptdesc)) return false; - spin_lock_init(ptlock_ptr(page_ptdesc(page))); + spin_lock_init(ptlock_ptr(ptdesc)); return true; } @@ -2855,13 +2855,13 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd) return &mm->page_table_lock; } static inline void ptlock_cache_init(void) {} -static inline bool ptlock_init(struct page *page) { return true; } +static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; } static inline void ptlock_free(struct page *page) {} #endif /* USE_SPLIT_PTE_PTLOCKS */ static inline bool pgtable_pte_page_ctor(struct page *page) { - if (!ptlock_init(page)) + if (!ptlock_init(page_ptdesc(page))) return false; __SetPageTable(page); inc_lruvec_page_state(page, NR_PAGETABLE); @@ -2931,7 +2931,7 @@ static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) #ifdef CONFIG_TRANSPARENT_HUGEPAGE ptdesc->pmd_huge_pte = NULL; #endif - return ptlock_init(ptdesc_page(ptdesc)); + return ptlock_init(ptdesc); } static inline void pmd_ptlock_free(struct page *page)
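For readers following the locking details, a minimal sketch of the initialization pairing this patch arrives at (hypothetical helper name; assumes USE_SPLIT_PTE_PTLOCKS, where ptlock_alloc() is a no-op returning true unless ALLOC_SPLIT_PTLOCKS gives the ptdesc a separately allocated lock):

static bool example_ptlock_setup(struct ptdesc *ptdesc)
{
	/* Attach the lock if it is split-allocated, then init it. */
	if (!ptlock_alloc(ptdesc))
		return false;
	spin_lock_init(ptlock_ptr(ptdesc));
	return true;
}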
From patchwork Tue Jun 27 03:14:08 2023
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 10/33] mm: Convert pmd_ptlock_free() to use ptdescs
Date: Mon, 26 Jun 2023 20:14:08 -0700
Message-Id: <20230627031431.29653-11-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>
References: <20230627031431.29653-1-vishal.moola@gmail.com>

This removes some direct accesses to struct page, working towards splitting out struct ptdesc from struct page.
Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 include/linux/mm.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h index 4af424e4015a..0221675e4dc5 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2934,12 +2934,12 @@ static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) return ptlock_init(ptdesc); } -static inline void pmd_ptlock_free(struct page *page) +static inline void pmd_ptlock_free(struct ptdesc *ptdesc) { #ifdef CONFIG_TRANSPARENT_HUGEPAGE - VM_BUG_ON_PAGE(page->pmd_huge_pte, page); + VM_BUG_ON_PAGE(ptdesc->pmd_huge_pte, ptdesc_page(ptdesc)); #endif - ptlock_free(page); + ptlock_free(ptdesc_page(ptdesc)); } #define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte) @@ -2952,7 +2952,7 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd) } static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) { return true; } -static inline void pmd_ptlock_free(struct page *page) {} +static inline void pmd_ptlock_free(struct ptdesc *ptdesc) {} #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte) @@ -2976,7 +2976,7 @@ static inline bool pgtable_pmd_page_ctor(struct page *page) static inline void pgtable_pmd_page_dtor(struct page *page) { - pmd_ptlock_free(page); + pmd_ptlock_free(page_ptdesc(page)); __ClearPageTable(page); dec_lruvec_page_state(page, NR_PAGETABLE); }
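The VM_BUG_ON() above guards the THP deposit invariant: with CONFIG_TRANSPARENT_HUGEPAGE and split PMD locks, a PMD table may carry a deposited PTE table in pmd_huge_pte, which must be withdrawn before the table is freed. A sketch of where that field now lives (hypothetical helper; it simply mirrors the split-lock pmd_huge_pte() definition in the hunk above):

static pgtable_t *example_deposit_slot(pmd_t *pmd)
{
	/* The deposit slot hangs off the PMD table's own descriptor. */
	return &pmd_ptdesc(pmd)->pmd_huge_pte;
}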
From patchwork Tue Jun 27 03:14:09 2023
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 11/33] mm: Convert ptlock_free() to use ptdescs
Date: Mon, 26 Jun 2023 20:14:09 -0700
Message-Id: <20230627031431.29653-12-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>
References: <20230627031431.29653-1-vishal.moola@gmail.com>

This removes some direct accesses to struct page, working towards splitting out struct ptdesc from struct page.
Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 include/linux/mm.h | 10 +++++-----
 mm/memory.c | 4 ++--
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h index 0221675e4dc5..69e6d6696c44 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2799,7 +2799,7 @@ static inline void pagetable_free(struct ptdesc *pt) #if ALLOC_SPLIT_PTLOCKS void __init ptlock_cache_init(void); bool ptlock_alloc(struct ptdesc *ptdesc); -extern void ptlock_free(struct page *page); +void ptlock_free(struct ptdesc *ptdesc); static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc) { @@ -2815,7 +2815,7 @@ static inline bool ptlock_alloc(struct ptdesc *ptdesc) return true; } -static inline void ptlock_free(struct page *page) +static inline void ptlock_free(struct ptdesc *ptdesc) { } @@ -2856,7 +2856,7 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd) } static inline void ptlock_cache_init(void) {} static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; } -static inline void ptlock_free(struct page *page) {} +static inline void ptlock_free(struct ptdesc *ptdesc) {} #endif /* USE_SPLIT_PTE_PTLOCKS */ static inline bool pgtable_pte_page_ctor(struct page *page) @@ -2870,7 +2870,7 @@ static inline bool pgtable_pte_page_ctor(struct page *page) static inline void pgtable_pte_page_dtor(struct page *page) { - ptlock_free(page); + ptlock_free(page_ptdesc(page)); __ClearPageTable(page); dec_lruvec_page_state(page, NR_PAGETABLE); } @@ -2939,7 +2939,7 @@ static inline void pmd_ptlock_free(struct ptdesc *ptdesc) #ifdef CONFIG_TRANSPARENT_HUGEPAGE VM_BUG_ON_PAGE(ptdesc->pmd_huge_pte, ptdesc_page(ptdesc)); #endif - ptlock_free(ptdesc_page(ptdesc)); + ptlock_free(ptdesc); } #define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte) diff --git a/mm/memory.c b/mm/memory.c index 2ff14f50c7b3..8743aef6095b 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -5931,8 +5931,8 @@ bool ptlock_alloc(struct ptdesc *ptdesc) return true; } -void ptlock_free(struct page *page) +void ptlock_free(struct ptdesc *ptdesc) { - kmem_cache_free(page_ptl_cachep, page->ptl); + kmem_cache_free(page_ptl_cachep, ptdesc->ptl); } #endif
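With ptlock_free() converted, the legacy page-based entry point is now a thin boundary shim. A sketch of the teardown path as it stands after this patch (example_pte_table_teardown() is hypothetical, and the final __free_page() placement is illustrative only):

static void example_pte_table_teardown(struct page *page)
{
	/* Frees the split lock, clears PG_table, drops the stat. */
	pgtable_pte_page_dtor(page);
	__free_page(page);
}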
From patchwork Tue Jun 27 03:14:10 2023
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 12/33] mm: Create ptdesc equivalents for pgtable_{pte,pmd}_page_{ctor,dtor}
Date: Mon, 26 Jun 2023 20:14:10 -0700
Message-Id: <20230627031431.29653-13-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>
References: <20230627031431.29653-1-vishal.moola@gmail.com>

Create pagetable_pte_ctor(), pagetable_pmd_ctor(), pagetable_pte_dtor(), and pagetable_pmd_dtor() and make the original pgtable constructor/destructors wrappers.
Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 include/linux/mm.h | 56 ++++++++++++++++++++++++++++++++++------------
 1 file changed, 42 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h index 69e6d6696c44..356e79984cf9 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2859,20 +2859,34 @@ static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; } static inline void ptlock_free(struct ptdesc *ptdesc) {} #endif /* USE_SPLIT_PTE_PTLOCKS */ -static inline bool pgtable_pte_page_ctor(struct page *page) +static inline bool pagetable_pte_ctor(struct ptdesc *ptdesc) { - if (!ptlock_init(page_ptdesc(page))) + struct folio *folio = ptdesc_folio(ptdesc); + + if (!ptlock_init(ptdesc)) return false; - __SetPageTable(page); - inc_lruvec_page_state(page, NR_PAGETABLE); + __folio_set_pgtable(folio); + lruvec_stat_add_folio(folio, NR_PAGETABLE); return true; } +static inline bool pgtable_pte_page_ctor(struct page *page) +{ + return pagetable_pte_ctor(page_ptdesc(page)); +} + +static inline void pagetable_pte_dtor(struct ptdesc *ptdesc) +{ + struct folio *folio = ptdesc_folio(ptdesc); + + ptlock_free(ptdesc); + __folio_clear_pgtable(folio); + lruvec_stat_sub_folio(folio, NR_PAGETABLE); +} + static inline void pgtable_pte_page_dtor(struct page *page) { - ptlock_free(page_ptdesc(page)); - __ClearPageTable(page); - dec_lruvec_page_state(page, NR_PAGETABLE); + pagetable_pte_dtor(page_ptdesc(page)); } pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp); @@ -2965,20 +2979,34 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd) return ptl; } -static inline bool pgtable_pmd_page_ctor(struct page *page) +static inline bool pagetable_pmd_ctor(struct ptdesc *ptdesc) { - if (!pmd_ptlock_init(page_ptdesc(page))) + struct folio *folio = ptdesc_folio(ptdesc); + + if (!pmd_ptlock_init(ptdesc)) return false; - __SetPageTable(page); - inc_lruvec_page_state(page, NR_PAGETABLE); + __folio_set_pgtable(folio); + lruvec_stat_add_folio(folio, NR_PAGETABLE); return true; } +static inline bool pgtable_pmd_page_ctor(struct page *page) +{ + return pagetable_pmd_ctor(page_ptdesc(page)); +} + +static inline void pagetable_pmd_dtor(struct ptdesc *ptdesc) +{ + struct folio *folio = ptdesc_folio(ptdesc); + + pmd_ptlock_free(ptdesc); + __folio_clear_pgtable(folio); + lruvec_stat_sub_folio(folio, NR_PAGETABLE); +} + static inline void pgtable_pmd_page_dtor(struct page *page) { - pmd_ptlock_free(page_ptdesc(page)); - __ClearPageTable(page); - dec_lruvec_page_state(page, NR_PAGETABLE); + pagetable_pmd_dtor(page_ptdesc(page)); } /*
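A sketch of how an architecture allocation path can sit directly on the new ctor/dtor pair (hypothetical helper; pagetable_alloc()/pagetable_free() come from earlier in the series):

static struct ptdesc *example_pte_table_new(gfp_t gfp)
{
	struct ptdesc *ptdesc = pagetable_alloc(gfp | __GFP_ZERO, 0);

	if (!ptdesc)
		return NULL;
	/* Sets up the ptlock, the PG_table type and NR_PAGETABLE stat. */
	if (!pagetable_pte_ctor(ptdesc)) {
		pagetable_free(ptdesc);
		return NULL;
	}
	return ptdesc;
}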
From patchwork Tue Jun 27 03:14:11 2023
From: "Vishal Moola (Oracle)"
Cc: Christophe Leroy
Subject: [PATCH v6 13/33] powerpc: Convert various functions to use ptdescs
Date: Mon, 26 Jun 2023 20:14:11 -0700
Message-Id: <20230627031431.29653-14-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>
References: <20230627031431.29653-1-vishal.moola@gmail.com>

In order to split struct ptdesc from struct page, convert various functions to use ptdescs.
Signed-off-by: Vishal Moola (Oracle) Acked-by: Mike Rapoport (IBM) --- arch/powerpc/mm/book3s64/mmu_context.c | 10 +++--- arch/powerpc/mm/book3s64/pgtable.c | 32 +++++++++--------- arch/powerpc/mm/pgtable-frag.c | 46 +++++++++++++------------- 3 files changed, 44 insertions(+), 44 deletions(-) diff --git a/arch/powerpc/mm/book3s64/mmu_context.c b/arch/powerpc/mm/book3s64/mmu_context.c index c766e4c26e42..1715b07c630c 100644 --- a/arch/powerpc/mm/book3s64/mmu_context.c +++ b/arch/powerpc/mm/book3s64/mmu_context.c @@ -246,15 +246,15 @@ static void destroy_contexts(mm_context_t *ctx) static void pmd_frag_destroy(void *pmd_frag) { int count; - struct page *page; + struct ptdesc *ptdesc; - page = virt_to_page(pmd_frag); + ptdesc = virt_to_ptdesc(pmd_frag); /* drop all the pending references */ count = ((unsigned long)pmd_frag & ~PAGE_MASK) >> PMD_FRAG_SIZE_SHIFT; /* We allow PTE_FRAG_NR fragments from a PTE page */ - if (atomic_sub_and_test(PMD_FRAG_NR - count, &page->pt_frag_refcount)) { - pgtable_pmd_page_dtor(page); - __free_page(page); + if (atomic_sub_and_test(PMD_FRAG_NR - count, &ptdesc->pt_frag_refcount)) { + pagetable_pmd_dtor(ptdesc); + pagetable_free(ptdesc); } } diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c index 85c84e89e3ea..1212deeabe15 100644 --- a/arch/powerpc/mm/book3s64/pgtable.c +++ b/arch/powerpc/mm/book3s64/pgtable.c @@ -306,22 +306,22 @@ static pmd_t *get_pmd_from_cache(struct mm_struct *mm) static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm) { void *ret = NULL; - struct page *page; + struct ptdesc *ptdesc; gfp_t gfp = GFP_KERNEL_ACCOUNT | __GFP_ZERO; if (mm == &init_mm) gfp &= ~__GFP_ACCOUNT; - page = alloc_page(gfp); - if (!page) + ptdesc = pagetable_alloc(gfp, 0); + if (!ptdesc) return NULL; - if (!pgtable_pmd_page_ctor(page)) { - __free_pages(page, 0); + if (!pagetable_pmd_ctor(ptdesc)) { + pagetable_free(ptdesc); return NULL; } - atomic_set(&page->pt_frag_refcount, 1); + atomic_set(&ptdesc->pt_frag_refcount, 1); - ret = page_address(page); + ret = ptdesc_address(ptdesc); /* * if we support only one fragment just return the * allocated page. @@ -331,12 +331,12 @@ static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm) spin_lock(&mm->page_table_lock); /* - * If we find pgtable_page set, we return + * If we find ptdesc_page set, we return * the allocated page with single fragment * count. 
*/ if (likely(!mm->context.pmd_frag)) { - atomic_set(&page->pt_frag_refcount, PMD_FRAG_NR); + atomic_set(&ptdesc->pt_frag_refcount, PMD_FRAG_NR); mm->context.pmd_frag = ret + PMD_FRAG_SIZE; } spin_unlock(&mm->page_table_lock); @@ -357,15 +357,15 @@ pmd_t *pmd_fragment_alloc(struct mm_struct *mm, unsigned long vmaddr) void pmd_fragment_free(unsigned long *pmd) { - struct page *page = virt_to_page(pmd); + struct ptdesc *ptdesc = virt_to_ptdesc(pmd); - if (PageReserved(page)) - return free_reserved_page(page); + if (pagetable_is_reserved(ptdesc)) + return free_reserved_ptdesc(ptdesc); - BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0); - if (atomic_dec_and_test(&page->pt_frag_refcount)) { - pgtable_pmd_page_dtor(page); - __free_page(page); + BUG_ON(atomic_read(&ptdesc->pt_frag_refcount) <= 0); + if (atomic_dec_and_test(&ptdesc->pt_frag_refcount)) { + pagetable_pmd_dtor(ptdesc); + pagetable_free(ptdesc); } } diff --git a/arch/powerpc/mm/pgtable-frag.c b/arch/powerpc/mm/pgtable-frag.c index 20652daa1d7e..8961f1540209 100644 --- a/arch/powerpc/mm/pgtable-frag.c +++ b/arch/powerpc/mm/pgtable-frag.c @@ -18,15 +18,15 @@ void pte_frag_destroy(void *pte_frag) { int count; - struct page *page; + struct ptdesc *ptdesc; - page = virt_to_page(pte_frag); + ptdesc = virt_to_ptdesc(pte_frag); /* drop all the pending references */ count = ((unsigned long)pte_frag & ~PAGE_MASK) >> PTE_FRAG_SIZE_SHIFT; /* We allow PTE_FRAG_NR fragments from a PTE page */ - if (atomic_sub_and_test(PTE_FRAG_NR - count, &page->pt_frag_refcount)) { - pgtable_pte_page_dtor(page); - __free_page(page); + if (atomic_sub_and_test(PTE_FRAG_NR - count, &ptdesc->pt_frag_refcount)) { + pagetable_pte_dtor(ptdesc); + pagetable_free(ptdesc); } } @@ -55,25 +55,25 @@ static pte_t *get_pte_from_cache(struct mm_struct *mm) static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel) { void *ret = NULL; - struct page *page; + struct ptdesc *ptdesc; if (!kernel) { - page = alloc_page(PGALLOC_GFP | __GFP_ACCOUNT); - if (!page) + ptdesc = pagetable_alloc(PGALLOC_GFP | __GFP_ACCOUNT, 0); + if (!ptdesc) return NULL; - if (!pgtable_pte_page_ctor(page)) { - __free_page(page); + if (!pagetable_pte_ctor(ptdesc)) { + pagetable_free(ptdesc); return NULL; } } else { - page = alloc_page(PGALLOC_GFP); - if (!page) + ptdesc = pagetable_alloc(PGALLOC_GFP, 0); + if (!ptdesc) return NULL; } - atomic_set(&page->pt_frag_refcount, 1); + atomic_set(&ptdesc->pt_frag_refcount, 1); - ret = page_address(page); + ret = ptdesc_address(ptdesc); /* * if we support only one fragment just return the * allocated page. @@ -82,12 +82,12 @@ static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel) return ret; spin_lock(&mm->page_table_lock); /* - * If we find pgtable_page set, we return + * If we find ptdesc_page set, we return * the allocated page with single fragment * count. 
*/ if (likely(!pte_frag_get(&mm->context))) { - atomic_set(&page->pt_frag_refcount, PTE_FRAG_NR); + atomic_set(&ptdesc->pt_frag_refcount, PTE_FRAG_NR); pte_frag_set(&mm->context, ret + PTE_FRAG_SIZE); } spin_unlock(&mm->page_table_lock); @@ -108,15 +108,15 @@ pte_t *pte_fragment_alloc(struct mm_struct *mm, int kernel) void pte_fragment_free(unsigned long *table, int kernel) { - struct page *page = virt_to_page(table); + struct ptdesc *ptdesc = virt_to_ptdesc(table); - if (PageReserved(page)) - return free_reserved_page(page); + if (pagetable_is_reserved(ptdesc)) + return free_reserved_ptdesc(ptdesc); - BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0); - if (atomic_dec_and_test(&page->pt_frag_refcount)) { + BUG_ON(atomic_read(&ptdesc->pt_frag_refcount) <= 0); + if (atomic_dec_and_test(&ptdesc->pt_frag_refcount)) { if (!kernel) - pgtable_pte_page_dtor(page); - __free_page(page); + pagetable_pte_dtor(ptdesc); + pagetable_free(ptdesc); } }
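The fragment handling above repeats one pattern throughout: the last pt_frag_refcount reference tears the table down. A condensed restatement (hypothetical helper; whether the dtor runs depends on whether the table was constructed as a user page table, as in pte_fragment_free()):

static void example_frag_put(struct ptdesc *ptdesc)
{
	if (pagetable_is_reserved(ptdesc))
		return free_reserved_ptdesc(ptdesc);
	if (atomic_dec_and_test(&ptdesc->pt_frag_refcount)) {
		pagetable_pte_dtor(ptdesc);	/* user tables only */
		pagetable_free(ptdesc);
	}
}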
From patchwork Tue Jun 27 03:14:12 2023
From: "Vishal Moola (Oracle)"
Cc: Dave Hansen
Subject: [PATCH v6 14/33] x86: Convert various functions to use ptdescs
Date: Mon, 26 Jun 2023 20:14:12 -0700
Message-Id: <20230627031431.29653-15-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>
References: <20230627031431.29653-1-vishal.moola@gmail.com>

In order to split struct ptdesc from struct page, convert various functions to use ptdescs. Some of the functions use the *get*page*() helper functions. Convert these to use pagetable_alloc() and ptdesc_address() instead to help standardize page tables further.

Signed-off-by: Vishal Moola (Oracle)
---
 arch/x86/mm/pgtable.c | 47 ++++++++++++++++++++++++++-----------------
 1 file changed, 28 insertions(+), 19 deletions(-)

diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index 15a8009a4480..d3a93e8766ee 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -52,7 +52,7 @@ early_param("userpte", setup_userpte); void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte) { - pgtable_pte_page_dtor(pte); + pagetable_pte_dtor(page_ptdesc(pte)); paravirt_release_pte(page_to_pfn(pte)); paravirt_tlb_remove_table(tlb, pte); } @@ -60,7 +60,7 @@ void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte) #if CONFIG_PGTABLE_LEVELS > 2 void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd) { - struct page *page = virt_to_page(pmd); + struct ptdesc *ptdesc = virt_to_ptdesc(pmd); paravirt_release_pmd(__pa(pmd) >> PAGE_SHIFT); /* * NOTE!
For PAE, any changes to the top page-directory-pointer-table @@ -69,8 +69,8 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd) #ifdef CONFIG_X86_PAE tlb->need_flush_all = 1; #endif - pgtable_pmd_page_dtor(page); - paravirt_tlb_remove_table(tlb, page); + pagetable_pmd_dtor(ptdesc); + paravirt_tlb_remove_table(tlb, ptdesc_page(ptdesc)); } #if CONFIG_PGTABLE_LEVELS > 3 @@ -92,16 +92,16 @@ void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d) static inline void pgd_list_add(pgd_t *pgd) { - struct page *page = virt_to_page(pgd); + struct ptdesc *ptdesc = virt_to_ptdesc(pgd); - list_add(&page->lru, &pgd_list); + list_add(&ptdesc->pt_list, &pgd_list); } static inline void pgd_list_del(pgd_t *pgd) { - struct page *page = virt_to_page(pgd); + struct ptdesc *ptdesc = virt_to_ptdesc(pgd); - list_del(&page->lru); + list_del(&ptdesc->pt_list); } #define UNSHARED_PTRS_PER_PGD \ @@ -112,12 +112,12 @@ static inline void pgd_list_del(pgd_t *pgd) static void pgd_set_mm(pgd_t *pgd, struct mm_struct *mm) { - virt_to_page(pgd)->pt_mm = mm; + virt_to_ptdesc(pgd)->pt_mm = mm; } struct mm_struct *pgd_page_get_mm(struct page *page) { - return page->pt_mm; + return page_ptdesc(page)->pt_mm; } static void pgd_ctor(struct mm_struct *mm, pgd_t *pgd) @@ -213,11 +213,14 @@ void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmd) static void free_pmds(struct mm_struct *mm, pmd_t *pmds[], int count) { int i; + struct ptdesc *ptdesc; for (i = 0; i < count; i++) if (pmds[i]) { - pgtable_pmd_page_dtor(virt_to_page(pmds[i])); - free_page((unsigned long)pmds[i]); + ptdesc = virt_to_ptdesc(pmds[i]); + + pagetable_pmd_dtor(ptdesc); + pagetable_free(ptdesc); mm_dec_nr_pmds(mm); } } @@ -230,18 +233,24 @@ static int preallocate_pmds(struct mm_struct *mm, pmd_t *pmds[], int count) if (mm == &init_mm) gfp &= ~__GFP_ACCOUNT; + gfp &= ~__GFP_HIGHMEM; for (i = 0; i < count; i++) { - pmd_t *pmd = (pmd_t *)__get_free_page(gfp); - if (!pmd) + pmd_t *pmd = NULL; + struct ptdesc *ptdesc = pagetable_alloc(gfp, 0); + + if (!ptdesc) failed = true; - if (pmd && !pgtable_pmd_page_ctor(virt_to_page(pmd))) { - free_page((unsigned long)pmd); - pmd = NULL; + if (ptdesc && !pagetable_pmd_ctor(ptdesc)) { + pagetable_free(ptdesc); + ptdesc = NULL; failed = true; } - if (pmd) + if (ptdesc) { mm_inc_nr_pmds(mm); + pmd = ptdesc_address(ptdesc); + } + pmds[i] = pmd; } @@ -830,7 +839,7 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr) free_page((unsigned long)pmd_sv); - pgtable_pmd_page_dtor(virt_to_page(pmd)); + pagetable_pmd_dtor(virt_to_ptdesc(pmd)); free_page((unsigned long)pmd); return 1;
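Two details of the x86 conversion are worth spelling out. pagetable_alloc() hands back a ptdesc whose payload is reached via ptdesc_address(), so preallocate_pmds() masks off __GFP_HIGHMEM to guarantee a directly mapped kernel address; and the pgd list now threads through ptdesc->pt_list, which overlays the old page->lru. A sketch of the latter (hypothetical helper):

static void example_track_pgd(pgd_t *pgd, struct list_head *head)
{
	struct ptdesc *ptdesc = virt_to_ptdesc(pgd);

	/* pt_list overlays page->lru, so list users convert cleanly. */
	list_add(&ptdesc->pt_list, head);
}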
From patchwork Tue Jun 27 03:14:13 2023
From: "Vishal Moola (Oracle)"
Cc: David Hildenbrand, Claudio Imbrenda
Subject: [PATCH v6 15/33] s390: Convert various gmap functions to use ptdescs
Date: Mon, 26 Jun 2023 20:14:13 -0700
Message-Id: <20230627031431.29653-16-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>
References: <20230627031431.29653-1-vishal.moola@gmail.com>

In order to split struct ptdesc from struct page, convert various functions to use ptdescs. Some of the functions use the *get*page*() helper functions. Convert these to use pagetable_alloc() and ptdesc_address() instead to help standardize page tables further.
Since we're now using pagetable_free(), set _pt_s390_gaddr (which aliases with page->mapping) to NULL in that function instead. Signed-off-by: Vishal Moola (Oracle) Acked-by: Mike Rapoport (IBM) --- arch/s390/mm/gmap.c | 217 +++++++++++++++++++++++--------------------- include/linux/mm.h | 3 + 2 files changed, 117 insertions(+), 103 deletions(-) diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c index beb4804d9ca8..8dbe0fdc0e44 100644 --- a/arch/s390/mm/gmap.c +++ b/arch/s390/mm/gmap.c @@ -34,7 +34,7 @@ static struct gmap *gmap_alloc(unsigned long limit) { struct gmap *gmap; - struct page *page; + struct ptdesc *ptdesc; unsigned long *table; unsigned long etype, atype; @@ -67,12 +67,12 @@ static struct gmap *gmap_alloc(unsigned long limit) spin_lock_init(&gmap->guest_table_lock); spin_lock_init(&gmap->shadow_lock); refcount_set(&gmap->ref_count, 1); - page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); - if (!page) + ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); + if (!ptdesc) goto out_free; - page->_pt_s390_gaddr = 0; - list_add(&page->lru, &gmap->crst_list); - table = page_to_virt(page); + ptdesc->_pt_s390_gaddr = 0; + list_add(&ptdesc->pt_list, &gmap->crst_list); + table = ptdesc_to_virt(ptdesc); crst_table_init(table, etype); gmap->table = table; gmap->asce = atype | _ASCE_TABLE_LENGTH | @@ -181,25 +181,23 @@ static void gmap_rmap_radix_tree_free(struct radix_tree_root *root) */ static void gmap_free(struct gmap *gmap) { - struct page *page, *next; + struct ptdesc *ptdesc, *next; /* Flush tlb of all gmaps (if not already done for shadows) */ if (!(gmap_is_shadow(gmap) && gmap->removed)) gmap_flush_tlb(gmap); /* Free all segment & region tables. */ - list_for_each_entry_safe(page, next, &gmap->crst_list, lru) { - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + list_for_each_entry_safe(ptdesc, next, &gmap->crst_list, pt_list) { + pagetable_free(ptdesc); } gmap_radix_tree_free(&gmap->guest_to_host); gmap_radix_tree_free(&gmap->host_to_guest); /* Free additional data for a shadow gmap */ if (gmap_is_shadow(gmap)) { - /* Free all page tables. */ - list_for_each_entry_safe(page, next, &gmap->pt_list, lru) { - page->_pt_s390_gaddr = 0; - page_table_free_pgste(page); + /* Free all ptdesc tables. 
*/ + list_for_each_entry_safe(ptdesc, next, &gmap->pt_list, pt_list) { + page_table_free_pgste(ptdesc_page(ptdesc)); } gmap_rmap_radix_tree_free(&gmap->host_to_rmap); /* Release reference to the parent */ @@ -308,28 +306,27 @@ EXPORT_SYMBOL_GPL(gmap_get_enabled); static int gmap_alloc_table(struct gmap *gmap, unsigned long *table, unsigned long init, unsigned long gaddr) { - struct page *page; + struct ptdesc *ptdesc; unsigned long *new; /* since we dont free the gmap table until gmap_free we can unlock */ - page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); - if (!page) + ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); + if (!ptdesc) return -ENOMEM; - new = page_to_virt(page); + new = ptdesc_to_virt(ptdesc); crst_table_init(new, init); spin_lock(&gmap->guest_table_lock); if (*table & _REGION_ENTRY_INVALID) { - list_add(&page->lru, &gmap->crst_list); + list_add(&ptdesc->pt_list, &gmap->crst_list); *table = __pa(new) | _REGION_ENTRY_LENGTH | (*table & _REGION_ENTRY_TYPE_MASK); - page->_pt_s390_gaddr = gaddr; - page = NULL; + ptdesc->_pt_s390_gaddr = gaddr; + ptdesc = NULL; } spin_unlock(&gmap->guest_table_lock); - if (page) { - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); - } + if (ptdesc) + pagetable_free(ptdesc); + return 0; } @@ -341,15 +338,15 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table, */ static unsigned long __gmap_segment_gaddr(unsigned long *entry) { - struct page *page; + struct ptdesc *ptdesc; unsigned long offset, mask; offset = (unsigned long) entry / sizeof(unsigned long); offset = (offset & (PTRS_PER_PMD - 1)) * PMD_SIZE; mask = ~(PTRS_PER_PMD * sizeof(pmd_t) - 1); - page = virt_to_page((void *)((unsigned long) entry & mask)); + ptdesc = virt_to_ptdesc((void *)((unsigned long) entry & mask)); - return page->_pt_s390_gaddr + offset; + return ptdesc->_pt_s390_gaddr + offset; } /** @@ -1345,6 +1342,7 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr) unsigned long *ste; phys_addr_t sto, pgt; struct page *page; + struct ptdesc *ptdesc; BUG_ON(!gmap_is_shadow(sg)); ste = gmap_table_walk(sg, raddr, 1); /* get segment pointer */ @@ -1358,9 +1356,10 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr) __gmap_unshadow_pgt(sg, raddr, __va(pgt)); /* Free page table */ page = phys_to_page(pgt); - list_del(&page->lru); - page->_pt_s390_gaddr = 0; - page_table_free_pgste(page); + + ptdesc = page_ptdesc(page); + list_del(&ptdesc->pt_list); + page_table_free_pgste(ptdesc_page(ptdesc)); } /** @@ -1374,9 +1373,10 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr) static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr, unsigned long *sgt) { - struct page *page; phys_addr_t pgt; int i; + struct page *page; + struct ptdesc *ptdesc; BUG_ON(!gmap_is_shadow(sg)); for (i = 0; i < _CRST_ENTRIES; i++, raddr += _SEGMENT_SIZE) { @@ -1387,9 +1387,10 @@ static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr, __gmap_unshadow_pgt(sg, raddr, __va(pgt)); /* Free page table */ page = phys_to_page(pgt); - list_del(&page->lru); - page->_pt_s390_gaddr = 0; - page_table_free_pgste(page); + + ptdesc = page_ptdesc(page); + list_del(&ptdesc->pt_list); + page_table_free_pgste(ptdesc_page(ptdesc)); } } @@ -1405,6 +1406,7 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr) unsigned long r3o, *r3e; phys_addr_t sgt; struct page *page; + struct ptdesc *ptdesc; BUG_ON(!gmap_is_shadow(sg)); r3e = gmap_table_walk(sg, raddr, 2); /* get region-3 pointer */ 
@@ -1418,9 +1420,10 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr) __gmap_unshadow_sgt(sg, raddr, __va(sgt)); /* Free segment table */ page = phys_to_page(sgt); - list_del(&page->lru); - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + + ptdesc = page_ptdesc(page); + list_del(&ptdesc->pt_list); + pagetable_free(ptdesc); } /** @@ -1434,9 +1437,10 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr) static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr, unsigned long *r3t) { - struct page *page; phys_addr_t sgt; int i; + struct page *page; + struct ptdesc *ptdesc; BUG_ON(!gmap_is_shadow(sg)); for (i = 0; i < _CRST_ENTRIES; i++, raddr += _REGION3_SIZE) { @@ -1447,9 +1451,10 @@ static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr, __gmap_unshadow_sgt(sg, raddr, __va(sgt)); /* Free segment table */ page = phys_to_page(sgt); - list_del(&page->lru); - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + + ptdesc = page_ptdesc(page); + list_del(&ptdesc->pt_list); + pagetable_free(ptdesc); } } @@ -1465,6 +1470,7 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr) unsigned long r2o, *r2e; phys_addr_t r3t; struct page *page; + struct ptdesc *ptdesc; BUG_ON(!gmap_is_shadow(sg)); r2e = gmap_table_walk(sg, raddr, 3); /* get region-2 pointer */ @@ -1478,9 +1484,10 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr) __gmap_unshadow_r3t(sg, raddr, __va(r3t)); /* Free region 3 table */ page = phys_to_page(r3t); - list_del(&page->lru); - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + + ptdesc = page_ptdesc(page); + list_del(&ptdesc->pt_list); + pagetable_free(ptdesc); } /** @@ -1495,8 +1502,9 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr, unsigned long *r2t) { phys_addr_t r3t; - struct page *page; int i; + struct page *page; + struct ptdesc *ptdesc; BUG_ON(!gmap_is_shadow(sg)); for (i = 0; i < _CRST_ENTRIES; i++, raddr += _REGION2_SIZE) { @@ -1507,9 +1515,10 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr, __gmap_unshadow_r3t(sg, raddr, __va(r3t)); /* Free region 3 table */ page = phys_to_page(r3t); - list_del(&page->lru); - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + + ptdesc = page_ptdesc(page); + list_del(&ptdesc->pt_list); + pagetable_free(ptdesc); } } @@ -1525,6 +1534,7 @@ static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr) unsigned long r1o, *r1e; struct page *page; phys_addr_t r2t; + struct ptdesc *ptdesc; BUG_ON(!gmap_is_shadow(sg)); r1e = gmap_table_walk(sg, raddr, 4); /* get region-1 pointer */ @@ -1538,9 +1548,10 @@ static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr) __gmap_unshadow_r2t(sg, raddr, __va(r2t)); /* Free region 2 table */ page = phys_to_page(r2t); - list_del(&page->lru); - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + + ptdesc = page_ptdesc(page); + list_del(&ptdesc->pt_list); + pagetable_free(ptdesc); } /** @@ -1558,6 +1569,7 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr, struct page *page; phys_addr_t r2t; int i; + struct ptdesc *ptdesc; BUG_ON(!gmap_is_shadow(sg)); asce = __pa(r1t) | _ASCE_TYPE_REGION1; @@ -1571,9 +1583,10 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr, r1t[i] = _REGION1_ENTRY_EMPTY; /* Free region 2 table */ page = phys_to_page(r2t); - list_del(&page->lru); - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + 
+ ptdesc = page_ptdesc(page); + list_del(&ptdesc->pt_list); + pagetable_free(ptdesc); } } @@ -1770,18 +1783,18 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t, unsigned long raddr, origin, offset, len; unsigned long *table; phys_addr_t s_r2t; - struct page *page; + struct ptdesc *ptdesc; int rc; BUG_ON(!gmap_is_shadow(sg)); /* Allocate a shadow region second table */ - page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); - if (!page) + ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); + if (!ptdesc) return -ENOMEM; - page->_pt_s390_gaddr = r2t & _REGION_ENTRY_ORIGIN; + ptdesc->_pt_s390_gaddr = r2t & _REGION_ENTRY_ORIGIN; if (fake) - page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; - s_r2t = page_to_phys(page); + ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; + s_r2t = page_to_phys(ptdesc_page(ptdesc)); /* Install shadow region second table */ spin_lock(&sg->guest_table_lock); table = gmap_table_walk(sg, saddr, 4); /* get region-1 pointer */ @@ -1802,7 +1815,7 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t, _REGION_ENTRY_TYPE_R1 | _REGION_ENTRY_INVALID; if (sg->edat_level >= 1) *table |= (r2t & _REGION_ENTRY_PROTECT); - list_add(&page->lru, &sg->crst_list); + list_add(&ptdesc->pt_list, &sg->crst_list); if (fake) { /* nothing to protect for fake tables */ *table &= ~_REGION_ENTRY_INVALID; @@ -1830,8 +1843,7 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t, return rc; out_free: spin_unlock(&sg->guest_table_lock); - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + pagetable_free(ptdesc); return rc; } EXPORT_SYMBOL_GPL(gmap_shadow_r2t); @@ -1855,18 +1867,18 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t, unsigned long raddr, origin, offset, len; unsigned long *table; phys_addr_t s_r3t; - struct page *page; + struct ptdesc *ptdesc; int rc; BUG_ON(!gmap_is_shadow(sg)); /* Allocate a shadow region second table */ - page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); - if (!page) + ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); + if (!ptdesc) return -ENOMEM; - page->_pt_s390_gaddr = r3t & _REGION_ENTRY_ORIGIN; + ptdesc->_pt_s390_gaddr = r3t & _REGION_ENTRY_ORIGIN; if (fake) - page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; - s_r3t = page_to_phys(page); + ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; + s_r3t = page_to_phys(ptdesc_page(ptdesc)); /* Install shadow region second table */ spin_lock(&sg->guest_table_lock); table = gmap_table_walk(sg, saddr, 3); /* get region-2 pointer */ @@ -1887,7 +1899,7 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t, _REGION_ENTRY_TYPE_R2 | _REGION_ENTRY_INVALID; if (sg->edat_level >= 1) *table |= (r3t & _REGION_ENTRY_PROTECT); - list_add(&page->lru, &sg->crst_list); + list_add(&ptdesc->pt_list, &sg->crst_list); if (fake) { /* nothing to protect for fake tables */ *table &= ~_REGION_ENTRY_INVALID; @@ -1915,8 +1927,7 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t, return rc; out_free: spin_unlock(&sg->guest_table_lock); - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + pagetable_free(ptdesc); return rc; } EXPORT_SYMBOL_GPL(gmap_shadow_r3t); @@ -1940,18 +1951,18 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt, unsigned long raddr, origin, offset, len; unsigned long *table; phys_addr_t s_sgt; - struct page *page; + struct ptdesc *ptdesc; int rc; 
BUG_ON(!gmap_is_shadow(sg) || (sgt & _REGION3_ENTRY_LARGE)); /* Allocate a shadow segment table */ - page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); - if (!page) + ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); + if (!ptdesc) return -ENOMEM; - page->_pt_s390_gaddr = sgt & _REGION_ENTRY_ORIGIN; + ptdesc->_pt_s390_gaddr = sgt & _REGION_ENTRY_ORIGIN; if (fake) - page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; - s_sgt = page_to_phys(page); + ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; + s_sgt = page_to_phys(ptdesc_page(ptdesc)); /* Install shadow region second table */ spin_lock(&sg->guest_table_lock); table = gmap_table_walk(sg, saddr, 2); /* get region-3 pointer */ @@ -1972,7 +1983,7 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt, _REGION_ENTRY_TYPE_R3 | _REGION_ENTRY_INVALID; if (sg->edat_level >= 1) *table |= sgt & _REGION_ENTRY_PROTECT; - list_add(&page->lru, &sg->crst_list); + list_add(&ptdesc->pt_list, &sg->crst_list); if (fake) { /* nothing to protect for fake tables */ *table &= ~_REGION_ENTRY_INVALID; @@ -2000,8 +2011,7 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt, return rc; out_free: spin_unlock(&sg->guest_table_lock); - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + pagetable_free(ptdesc); return rc; } EXPORT_SYMBOL_GPL(gmap_shadow_sgt); @@ -2024,8 +2034,9 @@ int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr, int *fake) { unsigned long *table; - struct page *page; int rc; + struct page *page; + struct ptdesc *ptdesc; BUG_ON(!gmap_is_shadow(sg)); spin_lock(&sg->guest_table_lock); @@ -2033,9 +2044,10 @@ int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr, if (table && !(*table & _SEGMENT_ENTRY_INVALID)) { /* Shadow page tables are full pages (pte+pgste) */ page = pfn_to_page(*table >> PAGE_SHIFT); - *pgt = page->_pt_s390_gaddr & ~GMAP_SHADOW_FAKE_TABLE; + ptdesc = page_ptdesc(page); + *pgt = ptdesc->_pt_s390_gaddr & ~GMAP_SHADOW_FAKE_TABLE; *dat_protection = !!(*table & _SEGMENT_ENTRY_PROTECT); - *fake = !!(page->_pt_s390_gaddr & GMAP_SHADOW_FAKE_TABLE); + *fake = !!(ptdesc->_pt_s390_gaddr & GMAP_SHADOW_FAKE_TABLE); rc = 0; } else { rc = -EAGAIN; @@ -2064,19 +2076,19 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt, { unsigned long raddr, origin; unsigned long *table; - struct page *page; + struct ptdesc *ptdesc; phys_addr_t s_pgt; int rc; BUG_ON(!gmap_is_shadow(sg) || (pgt & _SEGMENT_ENTRY_LARGE)); /* Allocate a shadow page table */ - page = page_table_alloc_pgste(sg->mm); - if (!page) + ptdesc = page_ptdesc(page_table_alloc_pgste(sg->mm)); + if (!ptdesc) return -ENOMEM; - page->_pt_s390_gaddr = pgt & _SEGMENT_ENTRY_ORIGIN; + ptdesc->_pt_s390_gaddr = pgt & _SEGMENT_ENTRY_ORIGIN; if (fake) - page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; - s_pgt = page_to_phys(page); + ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; + s_pgt = page_to_phys(ptdesc_page(ptdesc)); /* Install shadow page table */ spin_lock(&sg->guest_table_lock); table = gmap_table_walk(sg, saddr, 1); /* get segment pointer */ @@ -2094,7 +2106,7 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt, /* mark as invalid as long as the parent table is not protected */ *table = (unsigned long) s_pgt | _SEGMENT_ENTRY | (pgt & _SEGMENT_ENTRY_PROTECT) | _SEGMENT_ENTRY_INVALID; - list_add(&page->lru, &sg->pt_list); + list_add(&ptdesc->pt_list, &sg->pt_list); if (fake) { /* nothing to protect for fake tables */ *table &= 
~_SEGMENT_ENTRY_INVALID; @@ -2120,8 +2132,7 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt, return rc; out_free: spin_unlock(&sg->guest_table_lock); - page->_pt_s390_gaddr = 0; - page_table_free_pgste(page); + page_table_free_pgste(ptdesc_page(ptdesc)); return rc; } @@ -2821,11 +2832,11 @@ EXPORT_SYMBOL_GPL(__s390_uv_destroy_range); */ void s390_unlist_old_asce(struct gmap *gmap) { - struct page *old; + struct ptdesc *old; - old = virt_to_page(gmap->table); + old = virt_to_ptdesc(gmap->table); spin_lock(&gmap->guest_table_lock); - list_del(&old->lru); + list_del(&old->pt_list); /* * Sometimes the topmost page might need to be "removed" multiple * times, for example if the VM is rebooted into secure mode several @@ -2840,7 +2851,7 @@ void s390_unlist_old_asce(struct gmap *gmap) * pointers, so list_del can work (and do nothing) without * dereferencing stale or invalid pointers. */ - INIT_LIST_HEAD(&old->lru); + INIT_LIST_HEAD(&old->pt_list); spin_unlock(&gmap->guest_table_lock); } EXPORT_SYMBOL_GPL(s390_unlist_old_asce); @@ -2861,7 +2872,7 @@ EXPORT_SYMBOL_GPL(s390_unlist_old_asce); int s390_replace_asce(struct gmap *gmap) { unsigned long asce; - struct page *page; + struct ptdesc *ptdesc; void *table; s390_unlist_old_asce(gmap); @@ -2870,10 +2881,10 @@ int s390_replace_asce(struct gmap *gmap) if ((gmap->asce & _ASCE_TYPE_MASK) == _ASCE_TYPE_SEGMENT) return -EINVAL; - page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); - if (!page) + ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); + if (!ptdesc) return -ENOMEM; - table = page_to_virt(page); + table = ptdesc_to_virt(ptdesc); memcpy(table, gmap->table, 1UL << (CRST_ALLOC_ORDER + PAGE_SHIFT)); /* @@ -2882,7 +2893,7 @@ int s390_replace_asce(struct gmap *gmap) * it will be freed when the VM is torn down. 
*/ spin_lock(&gmap->guest_table_lock); - list_add(&page->lru, &gmap->crst_list); + list_add(&ptdesc->pt_list, &gmap->crst_list); spin_unlock(&gmap->guest_table_lock); /* Set new table origin while preserving existing ASCE control bits */ diff --git a/include/linux/mm.h b/include/linux/mm.h index 356e79984cf9..0e4d5f6d10e5 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2792,6 +2792,9 @@ static inline void pagetable_free(struct ptdesc *pt) { struct page *page = ptdesc_page(pt); + /* set page->mapping to NULL since s390 gmap may have used it */ + pt->_pt_s390_gaddr = 0; + __free_pages(page, compound_order(page)); }

From patchwork Tue Jun 27 03:14:14 2023
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 16/33] s390: Convert various pgalloc functions to use ptdescs
Date: Mon, 26 Jun 2023 20:14:14 -0700
Message-Id: <20230627031431.29653-17-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. Some of the functions use the *get*page*() helper functions. Convert these to use pagetable_alloc() and ptdesc_address() instead to help standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) Acked-by: Mike Rapoport (IBM) --- arch/s390/include/asm/pgalloc.h | 4 +- arch/s390/include/asm/tlb.h | 4 +- arch/s390/mm/pgalloc.c | 108 ++++++++++++++++---------- 3 files changed, 59 insertions(+), 57 deletions(-) diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h index 17eb618f1348..00ad9b88fda9 100644 --- a/arch/s390/include/asm/pgalloc.h +++ b/arch/s390/include/asm/pgalloc.h @@ -86,7 +86,7 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long vmaddr) if (!table) return NULL; crst_table_init(table, _SEGMENT_ENTRY_EMPTY); - if (!pgtable_pmd_page_ctor(virt_to_page(table))) { + if (!pagetable_pmd_ctor(virt_to_ptdesc(table))) { crst_table_free(mm, table); return NULL; } @@ -97,7 +97,7 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) { if (mm_pmd_folded(mm)) return; - pgtable_pmd_page_dtor(virt_to_page(pmd)); + pagetable_pmd_dtor(virt_to_ptdesc(pmd)); crst_table_free(mm, (unsigned long *) pmd); } diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h index b91f4a9b044c..383b1f91442c 100644 --- a/arch/s390/include/asm/tlb.h +++ b/arch/s390/include/asm/tlb.h @@ -89,12 +89,12 @@ static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, { if (mm_pmd_folded(tlb->mm)) return; - pgtable_pmd_page_dtor(virt_to_page(pmd)); + pagetable_pmd_dtor(virt_to_ptdesc(pmd)); __tlb_adjust_range(tlb, address, PAGE_SIZE); tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; tlb->cleared_puds = 1; - tlb_remove_table(tlb, pmd); + tlb_remove_ptdesc(tlb, pmd); } /* diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c index 66ab68db9842..79b1c2458d85 100644 --- a/arch/s390/mm/pgalloc.c +++ b/arch/s390/mm/pgalloc.c @@ -43,17 +43,17 @@ __initcall(page_table_register_sysctl); unsigned long *crst_table_alloc(struct mm_struct *mm) { - struct page *page = alloc_pages(GFP_KERNEL,
CRST_ALLOC_ORDER); + struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL, CRST_ALLOC_ORDER); - if (!page) + if (!ptdesc) return NULL; - arch_set_page_dat(page, CRST_ALLOC_ORDER); - return (unsigned long *) page_to_virt(page); + arch_set_page_dat(ptdesc_page(ptdesc), CRST_ALLOC_ORDER); + return (unsigned long *) ptdesc_to_virt(ptdesc); } void crst_table_free(struct mm_struct *mm, unsigned long *table) { - free_pages((unsigned long)table, CRST_ALLOC_ORDER); + pagetable_free(virt_to_ptdesc(table)); } static void __crst_table_upgrade(void *arg) @@ -140,21 +140,21 @@ static inline unsigned int atomic_xor_bits(atomic_t *v, unsigned int bits) struct page *page_table_alloc_pgste(struct mm_struct *mm) { - struct page *page; + struct ptdesc *ptdesc; u64 *table; - page = alloc_page(GFP_KERNEL); - if (page) { - table = (u64 *)page_to_virt(page); + ptdesc = pagetable_alloc(GFP_KERNEL, 0); + if (ptdesc) { + table = (u64 *)ptdesc_to_virt(ptdesc); memset64(table, _PAGE_INVALID, PTRS_PER_PTE); memset64(table + PTRS_PER_PTE, 0, PTRS_PER_PTE); } - return page; + return ptdesc_page(ptdesc); } void page_table_free_pgste(struct page *page) { - __free_page(page); + pagetable_free(page_ptdesc(page)); } #endif /* CONFIG_PGSTE */ @@ -233,7 +233,7 @@ void page_table_free_pgste(struct page *page) unsigned long *page_table_alloc(struct mm_struct *mm) { unsigned long *table; - struct page *page; + struct ptdesc *ptdesc; unsigned int mask, bit; /* Try to get a fragment of a 4K page as a 2K page table */ @@ -241,9 +241,9 @@ unsigned long *page_table_alloc(struct mm_struct *mm) table = NULL; spin_lock_bh(&mm->context.lock); if (!list_empty(&mm->context.pgtable_list)) { - page = list_first_entry(&mm->context.pgtable_list, - struct page, lru); - mask = atomic_read(&page->_refcount) >> 24; + ptdesc = list_first_entry(&mm->context.pgtable_list, + struct ptdesc, pt_list); + mask = atomic_read(&ptdesc->_refcount) >> 24; /* * The pending removal bits must also be checked. 
* Failure to do so might lead to an impossible @@ -255,13 +255,13 @@ unsigned long *page_table_alloc(struct mm_struct *mm) */ mask = (mask | (mask >> 4)) & 0x03U; if (mask != 0x03U) { - table = (unsigned long *) page_to_virt(page); + table = (unsigned long *) ptdesc_to_virt(ptdesc); bit = mask & 1; /* =1 -> second 2K */ if (bit) table += PTRS_PER_PTE; - atomic_xor_bits(&page->_refcount, + atomic_xor_bits(&ptdesc->_refcount, 0x01U << (bit + 24)); - list_del(&page->lru); + list_del(&ptdesc->pt_list); } } spin_unlock_bh(&mm->context.lock); @@ -269,27 +269,27 @@ unsigned long *page_table_alloc(struct mm_struct *mm) return table; } /* Allocate a fresh page */ - page = alloc_page(GFP_KERNEL); - if (!page) + ptdesc = pagetable_alloc(GFP_KERNEL, 0); + if (!ptdesc) return NULL; - if (!pgtable_pte_page_ctor(page)) { - __free_page(page); + if (!pagetable_pte_ctor(ptdesc)) { + pagetable_free(ptdesc); return NULL; } - arch_set_page_dat(page, 0); + arch_set_page_dat(ptdesc_page(ptdesc), 0); /* Initialize page table */ - table = (unsigned long *) page_to_virt(page); + table = (unsigned long *) ptdesc_to_virt(ptdesc); if (mm_alloc_pgste(mm)) { /* Return 4K page table with PGSTEs */ - atomic_xor_bits(&page->_refcount, 0x03U << 24); + atomic_xor_bits(&ptdesc->_refcount, 0x03U << 24); memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE); memset64((u64 *)table + PTRS_PER_PTE, 0, PTRS_PER_PTE); } else { /* Return the first 2K fragment of the page */ - atomic_xor_bits(&page->_refcount, 0x01U << 24); + atomic_xor_bits(&ptdesc->_refcount, 0x01U << 24); memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE); spin_lock_bh(&mm->context.lock); - list_add(&page->lru, &mm->context.pgtable_list); + list_add(&ptdesc->pt_list, &mm->context.pgtable_list); spin_unlock_bh(&mm->context.lock); } return table; @@ -311,9 +311,8 @@ static void page_table_release_check(struct page *page, void *table, void page_table_free(struct mm_struct *mm, unsigned long *table) { unsigned int mask, bit, half; - struct page *page; + struct ptdesc *ptdesc = virt_to_ptdesc(table); - page = virt_to_page(table); if (!mm_alloc_pgste(mm)) { /* Free 2K page table fragment of a 4K page */ bit = ((unsigned long) table & ~PAGE_MASK)/(PTRS_PER_PTE*sizeof(pte_t)); @@ -323,42 +322,41 @@ void page_table_free(struct mm_struct *mm, unsigned long *table) * will happen outside of the critical section from this * function or from __tlb_remove_table() */ - mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24)); + mask = atomic_xor_bits(&ptdesc->_refcount, 0x11U << (bit + 24)); mask >>= 24; if (mask & 0x03U) - list_add(&page->lru, &mm->context.pgtable_list); + list_add(&ptdesc->pt_list, &mm->context.pgtable_list); else - list_del(&page->lru); + list_del(&ptdesc->pt_list); spin_unlock_bh(&mm->context.lock); - mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24)); + mask = atomic_xor_bits(&ptdesc->_refcount, 0x10U << (bit + 24)); mask >>= 24; if (mask != 0x00U) return; half = 0x01U << bit; } else { half = 0x03U; - mask = atomic_xor_bits(&page->_refcount, 0x03U << 24); + mask = atomic_xor_bits(&ptdesc->_refcount, 0x03U << 24); mask >>= 24; } - page_table_release_check(page, table, half, mask); - pgtable_pte_page_dtor(page); - __free_page(page); + page_table_release_check(ptdesc_page(ptdesc), table, half, mask); + pagetable_pte_dtor(ptdesc); + pagetable_free(ptdesc); } void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table, unsigned long vmaddr) { struct mm_struct *mm; - struct page *page; unsigned int bit, mask; + struct ptdesc *ptdesc = 
virt_to_ptdesc(table); mm = tlb->mm; if (mm_alloc_pgste(mm)) { gmap_unlink(mm, table, vmaddr); table = (unsigned long *) ((unsigned long)table | 0x03U); - tlb_remove_table(tlb, table); + tlb_remove_ptdesc(tlb, table); return; } bit = ((unsigned long) table & ~PAGE_MASK) / (PTRS_PER_PTE*sizeof(pte_t)); @@ -368,12 +366,12 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table, * outside of the critical section from __tlb_remove_table() or from * page_table_free() */ - mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24)); + mask = atomic_xor_bits(&ptdesc->_refcount, 0x11U << (bit + 24)); mask >>= 24; if (mask & 0x03U) - list_add_tail(&page->lru, &mm->context.pgtable_list); + list_add_tail(&ptdesc->pt_list, &mm->context.pgtable_list); else - list_del(&page->lru); + list_del(&ptdesc->pt_list); spin_unlock_bh(&mm->context.lock); table = (unsigned long *) ((unsigned long) table | (0x01U << bit)); tlb_remove_table(tlb, table); @@ -383,7 +381,7 @@ void __tlb_remove_table(void *_table) { unsigned int mask = (unsigned long) _table & 0x03U, half = mask; void *table = (void *)((unsigned long) _table ^ mask); - struct page *page = virt_to_page(table); + struct ptdesc *ptdesc = virt_to_ptdesc(table); switch (half) { case 0x00U: /* pmd, pud, or p4d */ @@ -391,20 +389,20 @@ void __tlb_remove_table(void *_table) return; case 0x01U: /* lower 2K of a 4K page table */ case 0x02U: /* higher 2K of a 4K page table */ - mask = atomic_xor_bits(&page->_refcount, mask << (4 + 24)); + mask = atomic_xor_bits(&ptdesc->_refcount, mask << (4 + 24)); mask >>= 24; if (mask != 0x00U) return; break; case 0x03U: /* 4K page table with pgstes */ - mask = atomic_xor_bits(&page->_refcount, 0x03U << 24); + mask = atomic_xor_bits(&ptdesc->_refcount, 0x03U << 24); mask >>= 24; break; } page_table_release_check(ptdesc_page(ptdesc), table, half, mask); - pgtable_pte_page_dtor(page); - __free_page(page); + pagetable_pte_dtor(ptdesc); + pagetable_free(ptdesc); } /* @@ -432,16 +430,20 @@ static void base_pgt_free(unsigned long *table) static unsigned long *base_crst_alloc(unsigned long val) { unsigned long *table; + struct ptdesc *ptdesc; - table = (unsigned long *)__get_free_pages(GFP_KERNEL, CRST_ALLOC_ORDER); - if (table) - crst_table_init(table, val); + ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, CRST_ALLOC_ORDER); + if (!ptdesc) + return NULL; + table = ptdesc_address(ptdesc); + + crst_table_init(table, val); return table; } static void base_crst_free(unsigned long *table) { - free_pages((unsigned long)table, CRST_ALLOC_ORDER); + pagetable_free(virt_to_ptdesc(table)); } #define BASE_ADDR_END_FUNC(NAME, SIZE) \

From patchwork Tue Jun 27 03:14:15 2023
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 17/33] mm: Remove page table members from struct page
Date: Mon, 26 Jun 2023 20:14:15 -0700
Message-Id: <20230627031431.29653-18-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

The page table members are now split out into their own ptdesc struct. Remove them from struct page.
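To make the overlay contract concrete before the diff: below is a minimal, stand-alone sketch of the idea that the TABLE_MATCH() assertions (adjusted in the pgtable.h hunk that follows) enforce at compile time. Everything suffixed _sketch is an illustrative stand-in, not a kernel definition, and the structs are reduced to three representative words:

/* Sketch only: simplified stand-ins for struct page and struct ptdesc,
 * checked the same way TABLE_MATCH() checks the real structs.
 */
#include <stddef.h>

struct page_sketch {
	unsigned long flags;
	unsigned long compound_head;	/* first word of the lru/list area */
	unsigned long mapping;
};

struct ptdesc_sketch {
	unsigned long __page_flags;	/* must overlay page->flags */
	unsigned long pt_list;		/* must overlay compound_head */
	unsigned long _pt_s390_gaddr;	/* must overlay page->mapping */
};

#define TABLE_MATCH_SKETCH(pg, pt)				\
	_Static_assert(offsetof(struct page_sketch, pg) ==	\
		       offsetof(struct ptdesc_sketch, pt),	\
		       "ptdesc field does not overlay struct page")

TABLE_MATCH_SKETCH(flags, __page_flags);
TABLE_MATCH_SKETCH(compound_head, pt_list);
TABLE_MATCH_SKETCH(mapping, _pt_s390_gaddr);

Because each ptdesc field sits at the offset of its struct page twin, helpers like page_ptdesc()/ptdesc_page() can simply cast between the two views, which is what allows the page-table members to be deleted from struct page here without breaking the users converted earlier in the series.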
Signed-off-by: Vishal Moola (Oracle) Acked-by: Mike Rapoport (IBM) --- include/linux/mm_types.h | 14 -------------- include/linux/pgtable.h | 3 --- 2 files changed, 17 deletions(-) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index fbbe4e93a9ba..434e54440686 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -141,20 +141,6 @@ struct page { struct { /* Tail pages of compound page */ unsigned long compound_head; /* Bit zero is set */ }; - struct { /* Page table pages */ - unsigned long _pt_pad_1; /* compound_head */ - pgtable_t pmd_huge_pte; /* protected by page->ptl */ - unsigned long _pt_s390_gaddr; /* mapping */ - union { - struct mm_struct *pt_mm; /* x86 pgds only */ - atomic_t pt_frag_refcount; /* powerpc */ - }; -#if ALLOC_SPLIT_PTLOCKS - spinlock_t *ptl; -#else - spinlock_t ptl; -#endif - }; struct { /* ZONE_DEVICE pages */ /** @pgmap: Points to the hosting device page map. */ struct dev_pagemap *pgmap; diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index e9bb5f18cade..daeacfe3930d 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -1044,10 +1044,7 @@ struct ptdesc { TABLE_MATCH(flags, __page_flags); TABLE_MATCH(compound_head, pt_list); TABLE_MATCH(compound_head, _pt_pad_1); -TABLE_MATCH(pmd_huge_pte, pmd_huge_pte); TABLE_MATCH(mapping, _pt_s390_gaddr); -TABLE_MATCH(pt_mm, pt_mm); -TABLE_MATCH(ptl, ptl); TABLE_MATCH(rcu_head, pt_rcu_head); #ifdef CONFIG_MEMCG TABLE_MATCH(memcg_data, pt_memcg_data);

From patchwork Tue Jun 27 03:14:16 2023
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 18/33] pgalloc: Convert various functions to use ptdescs
Date: Mon, 26 Jun 2023 20:14:16 -0700
Message-Id: <20230627031431.29653-19-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. Some of the functions use the *get*page*() helper functions. Convert these to use pagetable_alloc() and ptdesc_address() instead to help standardize page tables further.
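Every hunk below follows the same shape; as a rough sketch (the *_sketch functions are illustrative only, while pagetable_alloc(), pagetable_free(), ptdesc_address() and virt_to_ptdesc() are the helpers introduced earlier in this series), a PTE-level kernel table alloc/free pair becomes:

/* Sketch of the conversion pattern; not a kernel API by itself. */
static inline pte_t *pte_alloc_kernel_sketch(void)
{
	/* kernel page tables must not be highmem, so mask it out */
	struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL &
						~__GFP_HIGHMEM, 0);

	if (!ptdesc)
		return NULL;
	/* ptdesc_address() replaces page_address()/__get_free_page() */
	return ptdesc_address(ptdesc);
}

static inline void pte_free_kernel_sketch(pte_t *pte)
{
	pagetable_free(virt_to_ptdesc(pte));
}

User-level tables additionally run pagetable_pte_ctor()/pagetable_pte_dtor() around the same pair, as the diff below shows.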
Signed-off-by: Vishal Moola (Oracle) --- include/asm-generic/pgalloc.h | 88 +++++++++++++++++++++-------------- 1 file changed, 52 insertions(+), 36 deletions(-) diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h index a7cf825befae..c75d4a753849 100644 --- a/include/asm-generic/pgalloc.h +++ b/include/asm-generic/pgalloc.h @@ -8,7 +8,7 @@ #define GFP_PGTABLE_USER (GFP_PGTABLE_KERNEL | __GFP_ACCOUNT) /** - * __pte_alloc_one_kernel - allocate a page for PTE-level kernel page table + * __pte_alloc_one_kernel - allocate memory for a PTE-level kernel page table * @mm: the mm_struct of the current context * * This function is intended for architectures that need @@ -18,12 +18,17 @@ */ static inline pte_t *__pte_alloc_one_kernel(struct mm_struct *mm) { - return (pte_t *)__get_free_page(GFP_PGTABLE_KERNEL); + struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & + ~__GFP_HIGHMEM, 0); + + if (!ptdesc) + return NULL; + return ptdesc_address(ptdesc); } #ifndef __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL /** - * pte_alloc_one_kernel - allocate a page for PTE-level kernel page table + * pte_alloc_one_kernel - allocate memory for a PTE-level kernel page table * @mm: the mm_struct of the current context * * Return: pointer to the allocated memory or %NULL on error @@ -35,40 +40,40 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) #endif /** - * pte_free_kernel - free PTE-level kernel page table page + * pte_free_kernel - free PTE-level kernel page table memory * @mm: the mm_struct of the current context * @pte: pointer to the memory containing the page table */ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) { - free_page((unsigned long)pte); + pagetable_free(virt_to_ptdesc(pte)); } /** - * __pte_alloc_one - allocate a page for PTE-level user page table + * __pte_alloc_one - allocate memory for a PTE-level user page table * @mm: the mm_struct of the current context * @gfp: GFP flags to use for the allocation * - * Allocates a page and runs the pgtable_pte_page_ctor(). + * Allocate memory for a page table and ptdesc and runs pagetable_pte_ctor(). * * This function is intended for architectures that need * anything beyond simple page allocation or must have custom GFP flags. * - * Return: `struct page` initialized as page table or %NULL on error + * Return: `struct page` referencing the ptdesc or %NULL on error */ static inline pgtable_t __pte_alloc_one(struct mm_struct *mm, gfp_t gfp) { - struct page *pte; + struct ptdesc *ptdesc; - pte = alloc_page(gfp); - if (!pte) + ptdesc = pagetable_alloc(gfp, 0); + if (!ptdesc) return NULL; - if (!pgtable_pte_page_ctor(pte)) { - __free_page(pte); + if (!pagetable_pte_ctor(ptdesc)) { + pagetable_free(ptdesc); return NULL; } - return pte; + return ptdesc_page(ptdesc); } #ifndef __HAVE_ARCH_PTE_ALLOC_ONE @@ -76,9 +81,9 @@ static inline pgtable_t __pte_alloc_one(struct mm_struct *mm, gfp_t gfp) * pte_alloc_one - allocate a page for PTE-level user page table * @mm: the mm_struct of the current context * - * Allocates a page and runs the pgtable_pte_page_ctor(). + * Allocate memory for a page table and ptdesc and runs pagetable_pte_ctor(). 
* - * Return: `struct page` initialized as page table or %NULL on error + * Return: `struct page` referencing the ptdesc or %NULL on error */ static inline pgtable_t pte_alloc_one(struct mm_struct *mm) { @@ -92,14 +97,16 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm) */ /** - * pte_free - free PTE-level user page table page + * pte_free - free PTE-level user page table memory * @mm: the mm_struct of the current context - * @pte_page: the `struct page` representing the page table + * @pte_page: the `struct page` referencing the ptdesc */ static inline void pte_free(struct mm_struct *mm, struct page *pte_page) { - pgtable_pte_page_dtor(pte_page); - __free_page(pte_page); + struct ptdesc *ptdesc = page_ptdesc(pte_page); + + pagetable_pte_dtor(ptdesc); + pagetable_free(ptdesc); } @@ -107,10 +114,11 @@ static inline void pte_free(struct mm_struct *mm, struct page *pte_page) #ifndef __HAVE_ARCH_PMD_ALLOC_ONE /** - * pmd_alloc_one - allocate a page for PMD-level page table + * pmd_alloc_one - allocate memory for a PMD-level page table * @mm: the mm_struct of the current context * - * Allocates a page and runs the pgtable_pmd_page_ctor(). + * Allocate memory for a page table and ptdesc and runs pagetable_pmd_ctor(). + * * Allocations use %GFP_PGTABLE_USER in user context and * %GFP_PGTABLE_KERNEL in kernel context. * @@ -118,28 +126,30 @@ static inline void pte_free(struct mm_struct *mm, struct page *pte_page) */ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr) { - struct page *page; + struct ptdesc *ptdesc; gfp_t gfp = GFP_PGTABLE_USER; if (mm == &init_mm) gfp = GFP_PGTABLE_KERNEL; - page = alloc_page(gfp); - if (!page) + ptdesc = pagetable_alloc(gfp, 0); + if (!ptdesc) return NULL; - if (!pgtable_pmd_page_ctor(page)) { - __free_page(page); + if (!pagetable_pmd_ctor(ptdesc)) { + pagetable_free(ptdesc); return NULL; } - return (pmd_t *)page_address(page); + return ptdesc_address(ptdesc); } #endif #ifndef __HAVE_ARCH_PMD_FREE static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) { + struct ptdesc *ptdesc = virt_to_ptdesc(pmd); + BUG_ON((unsigned long)pmd & (PAGE_SIZE-1)); - pgtable_pmd_page_dtor(virt_to_page(pmd)); - free_page((unsigned long)pmd); + pagetable_pmd_dtor(ptdesc); + pagetable_free(ptdesc); } #endif @@ -150,19 +160,25 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) static inline pud_t *__pud_alloc_one(struct mm_struct *mm, unsigned long addr) { gfp_t gfp = GFP_PGTABLE_USER; + struct ptdesc *ptdesc; if (mm == &init_mm) gfp = GFP_PGTABLE_KERNEL; - return (pud_t *)get_zeroed_page(gfp); + gfp &= ~__GFP_HIGHMEM; + + ptdesc = pagetable_alloc(gfp, 0); + if (!ptdesc) + return NULL; + return ptdesc_address(ptdesc); } #ifndef __HAVE_ARCH_PUD_ALLOC_ONE /** - * pud_alloc_one - allocate a page for PUD-level page table + * pud_alloc_one - allocate memory for a PUD-level page table * @mm: the mm_struct of the current context * - * Allocates a page using %GFP_PGTABLE_USER for user context and - * %GFP_PGTABLE_KERNEL for kernel context. + * Allocate memory for a page table using %GFP_PGTABLE_USER for user context + * and %GFP_PGTABLE_KERNEL for kernel context. 
* * Return: pointer to the allocated memory or %NULL on error */ @@ -175,7 +191,7 @@ static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr) static inline void __pud_free(struct mm_struct *mm, pud_t *pud) { BUG_ON((unsigned long)pud & (PAGE_SIZE-1)); - free_page((unsigned long)pud); + pagetable_free(virt_to_ptdesc(pud)); } #ifndef __HAVE_ARCH_PUD_FREE @@ -190,7 +206,7 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud) #ifndef __HAVE_ARCH_PGD_FREE static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) { - free_page((unsigned long)pgd); + pagetable_free(virt_to_ptdesc(pgd)); } #endif

From patchwork Tue Jun 27 03:14:17 2023
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 19/33] arm: Convert various functions to use ptdescs
Date: Mon, 26 Jun 2023 20:14:17 -0700
Message-Id: <20230627031431.29653-20-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. late_alloc() also uses the __get_free_pages() helper function. Convert this to use pagetable_alloc() and ptdesc_address() instead to help standardize page tables further.

Signed-off-by: Vishal Moola (Oracle) Acked-by: Mike Rapoport (IBM) --- arch/arm/include/asm/tlb.h | 12 +++++++----- arch/arm/mm/mmu.c | 7 ++++--- 2 files changed, 11 insertions(+), 8 deletions(-) diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h index b8cbe03ad260..f40d06ad5d2a 100644 --- a/arch/arm/include/asm/tlb.h +++ b/arch/arm/include/asm/tlb.h @@ -39,7 +39,9 @@ static inline void __tlb_remove_table(void *_table) static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) { - pgtable_pte_page_dtor(pte); + struct ptdesc *ptdesc = page_ptdesc(pte); + + pagetable_pte_dtor(ptdesc); #ifndef CONFIG_ARM_LPAE /* @@ -50,17 +52,17 @@ __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) __tlb_adjust_range(tlb, addr - PAGE_SIZE, 2 * PAGE_SIZE); #endif - tlb_remove_table(tlb, pte); + tlb_remove_ptdesc(tlb, ptdesc); } static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr) { #ifdef CONFIG_ARM_LPAE - struct page *page = virt_to_page(pmdp); + struct ptdesc *ptdesc = virt_to_ptdesc(pmdp); - pgtable_pmd_page_dtor(page); - tlb_remove_table(tlb, page); + pagetable_pmd_dtor(ptdesc); + tlb_remove_ptdesc(tlb, ptdesc); #endif } diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c index 13fc4bb5f792..fdeaee30d167 100644 --- a/arch/arm/mm/mmu.c +++ b/arch/arm/mm/mmu.c @@ -737,11 +737,12 @@ static void __init *early_alloc(unsigned long sz) static void *__init late_alloc(unsigned long sz) { - void *ptr = (void *)__get_free_pages(GFP_PGTABLE_KERNEL, get_order(sz)); + void *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_HIGHMEM, + get_order(sz)); - if (!ptr || !pgtable_pte_page_ctor(virt_to_page(ptr))) + if (!ptdesc || !pagetable_pte_ctor(ptdesc)) BUG(); - return ptr; + return ptdesc_to_virt(ptdesc); } static pte_t * __init arm_pte_alloc(pmd_t *pmd, unsigned long addr,

From patchwork Tue Jun 27 03:14:18 2023
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 20/33] arm64: Convert various functions to use ptdescs
Date: Mon, 26 Jun 2023 20:14:18 -0700
Message-Id: <20230627031431.29653-21-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs.

Signed-off-by: Vishal Moola (Oracle) Acked-by: Mike Rapoport (IBM) Acked-by: Catalin Marinas --- arch/arm64/include/asm/tlb.h | 14 ++++++++------ arch/arm64/mm/mmu.c | 7 ++++--- 2 files changed, 12 insertions(+), 9 deletions(-) diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h index c995d1f4594f..2c29239d05c3 100644 --- a/arch/arm64/include/asm/tlb.h +++ b/arch/arm64/include/asm/tlb.h @@ -75,18 +75,20 @@ static inline void tlb_flush(struct mmu_gather *tlb) static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) { - pgtable_pte_page_dtor(pte); - tlb_remove_table(tlb, pte); + struct ptdesc *ptdesc = page_ptdesc(pte); + + pagetable_pte_dtor(ptdesc); + tlb_remove_ptdesc(tlb, ptdesc); } #if CONFIG_PGTABLE_LEVELS > 2 static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr) { - struct page *page = virt_to_page(pmdp); + struct ptdesc *ptdesc = virt_to_ptdesc(pmdp); - pgtable_pmd_page_dtor(page); - tlb_remove_table(tlb, page); + pagetable_pmd_dtor(ptdesc); + tlb_remove_ptdesc(tlb, ptdesc); } #endif @@ -94,7 +96,7 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp, unsigned long addr) { - tlb_remove_table(tlb, virt_to_page(pudp)); + tlb_remove_ptdesc(tlb, virt_to_ptdesc(pudp)); } #endif diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 95d360805f8a..47781bec6171 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -426,6 +426,7 @@ static phys_addr_t __pgd_pgtable_alloc(int shift) static phys_addr_t pgd_pgtable_alloc(int shift) { phys_addr_t pa = __pgd_pgtable_alloc(shift); + struct ptdesc *ptdesc = page_ptdesc(phys_to_page(pa)); /* * Call proper page table ctor in case later we need to @@ -433,12 +434,12 @@ static phys_addr_t pgd_pgtable_alloc(int shift) * this pre-allocated page table. * * We don't select ARCH_ENABLE_SPLIT_PMD_PTLOCK if pmd is - * folded, and if so pgtable_pmd_page_ctor() becomes nop. + * folded, and if so pagetable_pmd_ctor() becomes nop.
*/ if (shift == PAGE_SHIFT) - BUG_ON(!pgtable_pte_page_ctor(phys_to_page(pa))); + BUG_ON(!pagetable_pte_ctor(ptdesc)); else if (shift == PMD_SHIFT) - BUG_ON(!pgtable_pmd_page_ctor(phys_to_page(pa))); + BUG_ON(!pagetable_pmd_ctor(ptdesc)); return pa; } From patchwork Tue Jun 27 03:14:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13293898 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9F1A7C04A94 for ; Tue, 27 Jun 2023 03:19:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231377AbjF0DT2 (ORCPT ); Mon, 26 Jun 2023 23:19:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37308 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230368AbjF0DSO (ORCPT ); Mon, 26 Jun 2023 23:18:14 -0400 Received: from mail-yw1-x112f.google.com (mail-yw1-x112f.google.com [IPv6:2607:f8b0:4864:20::112f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 608852972; Mon, 26 Jun 2023 20:15:53 -0700 (PDT) Received: by mail-yw1-x112f.google.com with SMTP id 00721157ae682-5769e6a6818so30021607b3.1; Mon, 26 Jun 2023 20:15:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1687835752; x=1690427752; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=/CuYJ7liBLmFxtlpmCAIL/qtog0vkZLk1dVk72P32nA=; b=Z2y492L3/D8WlJ5Hq0dqKp0BKJYUW7Eb9G7qdf3vIGRQ+O8SF6z7cY8de1pnrMmDqQ +a96x8kX3opEhwfyHnlNFTWX5OwZauzWBj6C4ouPJiZfrfA0EmnR4gOkOWCXT0Tt5rk7 XHfLS6CI7/qHcQlfDrYckrQudKqfd53TugdHSOAC54JEXTOgwU9ogsxc/TSIilUUybaL /EAuzpcVE6dEZDr0iMV3l0ZL5CrHH5SQ2tSWsVIXTRE6S7JRWuiJGvbxR8kbv1x+D3dj SKo4oZP9xKp5iaeAAmMvP/qF+1bKjojF2HvsjQLLw57QGCxHLxvjF/rVFYkYGLWd7pTO F1LA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1687835752; x=1690427752; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=/CuYJ7liBLmFxtlpmCAIL/qtog0vkZLk1dVk72P32nA=; b=NVg8NzAuVeJn2DZVdMiKsEyLYUaXI6eW8+jXbaMcMZ6fhKIuNGuiYef6BB51xzZkxa EN9YrZpX4ouQQ6dKsMRUxVjo3YkV8ClRvwORqS6IgAhfJ/JNSdnkyoMm+UrfdWQYJyP4 UvSp5In31g9NRVO1z4GESC31L1sqiUVQkAJrgwSGDFTZF4H/4Q+uN1NPFKPx76LH2KE2 TR0kA5X6HTUSvml+OGpxauhvhzxCrMwY96Gdt/KUl0RBRYEIZC2JwAJKaP6L4axd33Jj 9D7Px/8jUnyTuyyAdcfSnoKDycuDV0qqD0mlHz3qYu/87KV0/Eho/DKxEQ87Qptj4np0 4CwQ== X-Gm-Message-State: AC+VfDwZTHNyxOQx5t4c02W5GgCUyfWT3q5nCrnewnyfrcnxnTm0pX1p BDPs0A9z7dXz77NoA7qw3pM= X-Google-Smtp-Source: ACHHUZ5CQK8kRHJUPJM2uDCPFRj7p03JLp7keUZBkce6O+8d/iMl0gg4/oyJxvTRdTEHePf390CKKQ== X-Received: by 2002:a81:4689:0:b0:573:9751:ad15 with SMTP id t131-20020a814689000000b005739751ad15mr18645821ywa.17.1687835752283; Mon, 26 Jun 2023 20:15:52 -0700 (PDT) Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::16]) by smtp.googlemail.com with ESMTPSA id s4-20020a0dd004000000b0057399b3bd26sm1614798ywd.33.2023.06.26.20.15.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 26 Jun 2023 20:15:52 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, 
From patchwork Tue Jun 27 03:14:19 2023
X-Patchwork-Id: 13293898
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 21/33] csky: Convert __pte_free_tlb() to use ptdescs
Date: Mon, 26 Jun 2023 20:14:19 -0700
Message-Id: <20230627031431.29653-22-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

Part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents.

Signed-off-by: Vishal Moola (Oracle)
Acked-by: Guo Ren
Acked-by: Mike Rapoport (IBM)
---
 arch/csky/include/asm/pgalloc.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/csky/include/asm/pgalloc.h b/arch/csky/include/asm/pgalloc.h
index 7d57e5da0914..9c84c9012e53 100644
--- a/arch/csky/include/asm/pgalloc.h
+++ b/arch/csky/include/asm/pgalloc.h
@@ -63,8 +63,8 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)

 #define __pte_free_tlb(tlb, pte, address)               \
 do {                                                    \
-        pgtable_pte_page_dtor(pte);                     \
-        tlb_remove_page(tlb, pte);                      \
+        pagetable_pte_dtor(page_ptdesc(pte));           \
+        tlb_remove_page_ptdesc(tlb, page_ptdesc(pte));  \
 } while (0)

 extern void pagetable_init(void);
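Likewise, pagetable_pte_dtor() is the ptdesc counterpart of pgtable_pte_page_dtor(). A simplified sketch of its assumed shape, based on the helper patches earlier in this series (not the verbatim definition):

static inline void pagetable_pte_dtor(struct ptdesc *ptdesc)
{
        struct folio *folio = ptdesc_folio(ptdesc);

        ptlock_free(ptdesc);                    /* release the split PTE lock, if any */
        __folio_clear_pgtable(folio);           /* drop the PG_table page type */
        lruvec_stat_sub_folio(folio, NR_PAGETABLE);
}

So each arch conversion below is a mechanical rename plus a page_ptdesc()/virt_to_ptdesc() cast at the call site; the accounting and ptlock behavior is unchanged.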
From patchwork Tue Jun 27 03:14:20 2023
X-Patchwork-Id: 13293899
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 22/33] hexagon: Convert __pte_free_tlb() to use ptdescs
Date: Mon, 26 Jun 2023 20:14:20 -0700
Message-Id: <20230627031431.29653-23-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

Part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents.
Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 arch/hexagon/include/asm/pgalloc.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/hexagon/include/asm/pgalloc.h b/arch/hexagon/include/asm/pgalloc.h
index f0c47e6a7427..55988625e6fb 100644
--- a/arch/hexagon/include/asm/pgalloc.h
+++ b/arch/hexagon/include/asm/pgalloc.h
@@ -87,10 +87,10 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
         max_kernel_seg = pmdindex;
 }

-#define __pte_free_tlb(tlb, pte, addr)          \
-do {                                            \
-        pgtable_pte_page_dtor((pte));           \
-        tlb_remove_page((tlb), (pte));          \
+#define __pte_free_tlb(tlb, pte, addr)                          \
+do {                                                            \
+        pagetable_pte_dtor((page_ptdesc(pte)));                 \
+        tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));      \
 } while (0)

 #endif
From patchwork Tue Jun 27 03:14:21 2023
X-Patchwork-Id: 13293900
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 23/33] loongarch: Convert various functions to use ptdescs
Date: Mon, 26 Jun 2023 20:14:21 -0700
Message-Id: <20230627031431.29653-24-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert these
to use pagetable_alloc() and ptdesc_address() instead to help standardize
page tables further.

Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 arch/loongarch/include/asm/pgalloc.h | 27 +++++++++++++++------------
 arch/loongarch/mm/pgtable.c          |  7 ++++---
 2 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/arch/loongarch/include/asm/pgalloc.h b/arch/loongarch/include/asm/pgalloc.h
index af1d1e4a6965..23f5b1107246 100644
--- a/arch/loongarch/include/asm/pgalloc.h
+++ b/arch/loongarch/include/asm/pgalloc.h
@@ -45,9 +45,9 @@ extern void pagetable_init(void);
 extern pgd_t *pgd_alloc(struct mm_struct *mm);

 #define __pte_free_tlb(tlb, pte, address)                       \
-do {                                                            \
-        pgtable_pte_page_dtor(pte);                             \
-        tlb_remove_page((tlb), pte);                            \
+do {                                                            \
+        pagetable_pte_dtor(page_ptdesc(pte));                   \
+        tlb_remove_page_ptdesc((tlb), page_ptdesc(pte));        \
 } while (0)

 #ifndef __PAGETABLE_PMD_FOLDED

@@ -55,18 +55,18 @@ do {                                                            \
 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 {
         pmd_t *pmd;
-        struct page *pg;
+        struct ptdesc *ptdesc;

-        pg = alloc_page(GFP_KERNEL_ACCOUNT);
-        if (!pg)
+        ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, 0);
+        if (!ptdesc)
                 return NULL;

-        if (!pgtable_pmd_page_ctor(pg)) {
-                __free_page(pg);
+        if (!pagetable_pmd_ctor(ptdesc)) {
+                pagetable_free(ptdesc);
                 return NULL;
         }

-        pmd = (pmd_t *)page_address(pg);
+        pmd = ptdesc_address(ptdesc);
         pmd_init(pmd);
         return pmd;
 }

@@ -80,10 +80,13 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)
 {
         pud_t *pud;
+        struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0);

-        pud = (pud_t *) __get_free_page(GFP_KERNEL);
-        if (pud)
-                pud_init(pud);
+        if (!ptdesc)
+                return NULL;
+        pud = ptdesc_address(ptdesc);
+
+        pud_init(pud);
         return pud;
 }
diff --git a/arch/loongarch/mm/pgtable.c b/arch/loongarch/mm/pgtable.c
index 36a6dc0148ae..5bd102b51f7c 100644
--- a/arch/loongarch/mm/pgtable.c
+++ b/arch/loongarch/mm/pgtable.c
@@ -11,10 +11,11 @@
 pgd_t *pgd_alloc(struct mm_struct *mm)
 {
-        pgd_t *ret, *init;
+        pgd_t *init, *ret = NULL;
+        struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0);

-        ret = (pgd_t *) __get_free_page(GFP_KERNEL);
-        if (ret) {
+        if (ptdesc) {
+                ret = (pgd_t *)ptdesc_address(ptdesc);
                 init = pgd_offset(&init_mm, 0UL);
                 pgd_init(ret);
                 memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
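The pmd_alloc_one() hunk above is the canonical shape of the *get*page*() conversion the commit message describes. The same pattern, written out as a free-standing example (hypothetical function name, not code from any particular architecture):

static pmd_t *example_pmd_alloc_one(struct mm_struct *mm)
{
        struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, 0);

        if (!ptdesc)
                return NULL;

        /* the ctor can fail (e.g. ptlock allocation); roll back with pagetable_free() */
        if (!pagetable_pmd_ctor(ptdesc)) {
                pagetable_free(ptdesc);
                return NULL;
        }

        /* ptdesc_address() replaces the page_address()/__get_free_page() casts */
        return ptdesc_address(ptdesc);
}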
From patchwork Tue Jun 27 03:14:22 2023
X-Patchwork-Id: 13293901
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 24/33] m68k: Convert various functions to use ptdescs
Date: Mon, 26 Jun 2023 20:14:22 -0700
Message-Id: <20230627031431.29653-25-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert these
to use pagetable_alloc() and ptdesc_address() instead to help standardize
page tables further.

Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
Acked-by: Geert Uytterhoeven
---
 arch/m68k/include/asm/mcf_pgalloc.h  | 47 ++++++++++++++--------------
 arch/m68k/include/asm/sun3_pgalloc.h |  8 ++---
 arch/m68k/mm/motorola.c              |  4 +--
 3 files changed, 30 insertions(+), 29 deletions(-)

diff --git a/arch/m68k/include/asm/mcf_pgalloc.h b/arch/m68k/include/asm/mcf_pgalloc.h
index 5c2c0a864524..302c5bf67179 100644
--- a/arch/m68k/include/asm/mcf_pgalloc.h
+++ b/arch/m68k/include/asm/mcf_pgalloc.h
@@ -5,22 +5,22 @@
 #include
 #include

-extern inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
+static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
 {
-        free_page((unsigned long) pte);
+        pagetable_free(virt_to_ptdesc(pte));
 }

 extern const char bad_pmd_string[];

-extern inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
+static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
 {
-        unsigned long page = __get_free_page(GFP_DMA);
+        struct ptdesc *ptdesc = pagetable_alloc((GFP_DMA | __GFP_ZERO) &
+                        ~__GFP_HIGHMEM, 0);

-        if (!page)
+        if (!ptdesc)
                 return NULL;

-        memset((void *)page, 0, PAGE_SIZE);
-        return (pte_t *) (page);
+        return ptdesc_address(ptdesc);
 }

 extern inline pmd_t *pmd_alloc_kernel(pgd_t *pgd, unsigned long address)
@@ -35,36 +35,34 @@ extern inline pmd_t *pmd_alloc_kernel(pgd_t *pgd, unsigned long address)
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pgtable,
                                   unsigned long address)
 {
-        struct page *page = virt_to_page(pgtable);
+        struct ptdesc *ptdesc = virt_to_ptdesc(pgtable);

-        pgtable_pte_page_dtor(page);
-        __free_page(page);
+        pagetable_pte_dtor(ptdesc);
+        pagetable_free(ptdesc);
 }

 static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
 {
-        struct page *page = alloc_pages(GFP_DMA, 0);
+        struct ptdesc *ptdesc = pagetable_alloc(GFP_DMA | __GFP_ZERO, 0);
         pte_t *pte;

-        if (!page)
+        if (!ptdesc)
                 return NULL;

-        if (!pgtable_pte_page_ctor(page)) {
-                __free_page(page);
+        if (!pagetable_pte_ctor(ptdesc)) {
+                pagetable_free(ptdesc);
                 return NULL;
         }

-        pte = page_address(page);
-        clear_page(pte);
-
+        pte = ptdesc_address(ptdesc);
         return pte;
 }

 static inline void pte_free(struct mm_struct *mm, pgtable_t pgtable)
 {
-        struct page *page = virt_to_page(pgtable);
+        struct ptdesc *ptdesc = virt_to_ptdesc(pgtable);

-        pgtable_pte_page_dtor(page);
-        __free_page(page);
+        pagetable_pte_dtor(ptdesc);
+        pagetable_free(ptdesc);
 }

 /*
@@ -75,16 +73,19 @@ static inline void pte_free(struct mm_struct *mm, pgtable_t pgtable)

 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
-        free_page((unsigned long) pgd);
+        pagetable_free(virt_to_ptdesc(pgd));
 }

 static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 {
         pgd_t *new_pgd;
+        struct ptdesc *ptdesc = pagetable_alloc((GFP_DMA | __GFP_NOWARN) &
+                        ~__GFP_HIGHMEM, 0);

-        new_pgd = (pgd_t *)__get_free_page(GFP_DMA | __GFP_NOWARN);
-        if (!new_pgd)
+        if (!ptdesc)
                 return NULL;
+        new_pgd = ptdesc_address(ptdesc);
+
         memcpy(new_pgd, swapper_pg_dir, PTRS_PER_PGD * sizeof(pgd_t));
         memset(new_pgd, 0, PAGE_OFFSET >> PGDIR_SHIFT);
         return new_pgd;

diff --git a/arch/m68k/include/asm/sun3_pgalloc.h b/arch/m68k/include/asm/sun3_pgalloc.h
index 198036aff519..ff48573db2c0 100644
--- a/arch/m68k/include/asm/sun3_pgalloc.h
+++ b/arch/m68k/include/asm/sun3_pgalloc.h
@@ -17,10 +17,10 @@

 extern const char bad_pmd_string[];

-#define __pte_free_tlb(tlb,pte,addr)                    \
-do {                                                    \
-        pgtable_pte_page_dtor(pte);                     \
-        tlb_remove_page((tlb), pte);                    \
+#define __pte_free_tlb(tlb, pte, addr)                  \
+do {                                                    \
+        pagetable_pte_dtor(page_ptdesc(pte));           \
+        tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \
 } while (0)

 static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte)

diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index c75984e2d86b..594575a0780c 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -161,7 +161,7 @@ void *get_pointer_table(int type)
                          * m68k doesn't have SPLIT_PTE_PTLOCKS for not having
                          * SMP.
                          */
-                        pgtable_pte_page_ctor(virt_to_page(page));
+                        pagetable_pte_ctor(virt_to_ptdesc(page));
                 }

                 mmu_page_ctor(page);

@@ -201,7 +201,7 @@ int free_pointer_table(void *table, int type)
                         list_del(dp);
                         mmu_page_dtor((void *)page);
                         if (type == TABLE_PTE)
-                                pgtable_pte_page_dtor(virt_to_page((void *)page));
+                                pagetable_pte_dtor(virt_to_ptdesc((void *)page));
                         free_page (page);
                         return 1;
                 } else if (ptable_list[type].next != dp) {
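One detail worth noting in the mcf_pgalloc.h hunks above: the explicit memset()/clear_page() calls disappear because zeroing is folded into the allocation. As a sketch of the before/after:

        /* before: allocate, then zero by hand */
        struct page *page = alloc_pages(GFP_DMA, 0);
        clear_page(page_address(page));

        /* after: __GFP_ZERO zeroes the table at allocation time */
        struct ptdesc *ptdesc = pagetable_alloc(GFP_DMA | __GFP_ZERO, 0);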
From patchwork Tue Jun 27 03:14:23 2023
X-Patchwork-Id: 13293902
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 25/33] mips: Convert various functions to use ptdescs
Date: Mon, 26 Jun 2023 20:14:23 -0700
Message-Id: <20230627031431.29653-26-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert these
to use pagetable_alloc() and ptdesc_address() instead to help standardize
page tables further.

Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 arch/mips/include/asm/pgalloc.h | 32 ++++++++++++++++++--------------
 arch/mips/mm/pgtable.c          |  8 +++++---
 2 files changed, 23 insertions(+), 17 deletions(-)

diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h
index f72e737dda21..40e40a7eb94a 100644
--- a/arch/mips/include/asm/pgalloc.h
+++ b/arch/mips/include/asm/pgalloc.h
@@ -51,13 +51,13 @@ extern pgd_t *pgd_alloc(struct mm_struct *mm);

 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
-        free_pages((unsigned long)pgd, PGD_TABLE_ORDER);
+        pagetable_free(virt_to_ptdesc(pgd));
 }

-#define __pte_free_tlb(tlb,pte,address)                 \
-do {                                                    \
-        pgtable_pte_page_dtor(pte);                     \
-        tlb_remove_page((tlb), pte);                    \
+#define __pte_free_tlb(tlb, pte, address)               \
+do {                                                    \
+        pagetable_pte_dtor(page_ptdesc(pte));           \
+        tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \
 } while (0)

 #ifndef __PAGETABLE_PMD_FOLDED

@@ -65,18 +65,18 @@ do {                                                    \
 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 {
         pmd_t *pmd;
-        struct page *pg;
+        struct ptdesc *ptdesc;

-        pg = alloc_pages(GFP_KERNEL_ACCOUNT, PMD_TABLE_ORDER);
-        if (!pg)
+        ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, PMD_TABLE_ORDER);
+        if (!ptdesc)
                 return NULL;

-        if (!pgtable_pmd_page_ctor(pg)) {
-                __free_pages(pg, PMD_TABLE_ORDER);
+        if (!pagetable_pmd_ctor(ptdesc)) {
+                pagetable_free(ptdesc);
                 return NULL;
         }

-        pmd = (pmd_t *)page_address(pg);
+        pmd = ptdesc_address(ptdesc);
         pmd_init(pmd);
         return pmd;
 }

@@ -90,10 +90,14 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)
 {
         pud_t *pud;
+        struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM,
+                        PUD_TABLE_ORDER);

-        pud = (pud_t *) __get_free_pages(GFP_KERNEL, PUD_TABLE_ORDER);
-        if (pud)
-                pud_init(pud);
+        if (!ptdesc)
+                return NULL;
+        pud = ptdesc_address(ptdesc);
+
+        pud_init(pud);
         return pud;
 }

diff --git a/arch/mips/mm/pgtable.c b/arch/mips/mm/pgtable.c
index b13314be5d0e..1506e458040d 100644
--- a/arch/mips/mm/pgtable.c
+++ b/arch/mips/mm/pgtable.c
@@ -10,10 +10,12 @@
 pgd_t *pgd_alloc(struct mm_struct *mm)
 {
-        pgd_t *ret, *init;
+        pgd_t *init, *ret = NULL;
+        struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM,
+                        PGD_TABLE_ORDER);

-        ret = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_TABLE_ORDER);
-        if (ret) {
+        if (ptdesc) {
+                ret = ptdesc_address(ptdesc);
                 init = pgd_offset(&init_mm, 0UL);
                 pgd_init(ret);
                 memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
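mips is one of the architectures whose upper-level tables span multiple pages; pagetable_alloc() preserves that through its order argument. A sketch of the idea, reusing the mips table-order macro (hypothetical function name):

static pgd_t *example_pgd_table(void)
{
        /* 2^PGD_TABLE_ORDER contiguous pages behind one ptdesc */
        struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM,
                                                PGD_TABLE_ORDER);

        if (!ptdesc)
                return NULL;
        /* a single pagetable_free() later releases the whole table */
        return ptdesc_address(ptdesc);
}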
From patchwork Tue Jun 27 03:14:24 2023
X-Patchwork-Id: 13293907
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 26/33] nios2: Convert __pte_free_tlb() to use ptdescs
Date: Mon, 26 Jun 2023 20:14:24 -0700
Message-Id: <20230627031431.29653-27-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

Part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents.

Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
Acked-by: Dinh Nguyen
---
 arch/nios2/include/asm/pgalloc.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/nios2/include/asm/pgalloc.h b/arch/nios2/include/asm/pgalloc.h
index ecd1657bb2ce..ce6bb8e74271 100644
--- a/arch/nios2/include/asm/pgalloc.h
+++ b/arch/nios2/include/asm/pgalloc.h
@@ -28,10 +28,10 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,

 extern pgd_t *pgd_alloc(struct mm_struct *mm);

-#define __pte_free_tlb(tlb, pte, addr)                          \
-        do {                                                    \
-                pgtable_pte_page_dtor(pte);                     \
-                tlb_remove_page((tlb), (pte));                  \
+#define __pte_free_tlb(tlb, pte, addr)                          \
+        do {                                                    \
+                pagetable_pte_dtor(page_ptdesc(pte));           \
+                tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \
         } while (0)

 #endif /* _ASM_NIOS2_PGALLOC_H */
From patchwork Tue Jun 27 03:14:25 2023
X-Patchwork-Id: 13293904
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 27/33] openrisc: Convert __pte_free_tlb() to use ptdescs
Date: Mon, 26 Jun 2023 20:14:25 -0700
Message-Id: <20230627031431.29653-28-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

Part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents.
Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 arch/openrisc/include/asm/pgalloc.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/openrisc/include/asm/pgalloc.h b/arch/openrisc/include/asm/pgalloc.h
index b7b2b8d16fad..c6a73772a546 100644
--- a/arch/openrisc/include/asm/pgalloc.h
+++ b/arch/openrisc/include/asm/pgalloc.h
@@ -66,10 +66,10 @@ extern inline pgd_t *pgd_alloc(struct mm_struct *mm)

 extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm);

-#define __pte_free_tlb(tlb, pte, addr)  \
-do {                                    \
-        pgtable_pte_page_dtor(pte);     \
-        tlb_remove_page((tlb), (pte));  \
+#define __pte_free_tlb(tlb, pte, addr)                          \
+do {                                                            \
+        pagetable_pte_dtor(page_ptdesc(pte));                   \
+        tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));      \
 } while (0)

 #endif
From patchwork Tue Jun 27 03:14:26 2023
X-Patchwork-Id: 13293906
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 28/33] riscv: Convert alloc_{pmd, pte}_late() to use ptdescs
Date: Mon, 26 Jun 2023 20:14:26 -0700
Message-Id: <20230627031431.29653-29-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert these
to use pagetable_alloc() and ptdesc_address() instead to help standardize
page tables further.

Signed-off-by: Vishal Moola (Oracle)
Acked-by: Palmer Dabbelt
Acked-by: Mike Rapoport (IBM)
---
 arch/riscv/include/asm/pgalloc.h |  8 ++++----
 arch/riscv/mm/init.c             | 16 ++++++----------
 2 files changed, 10 insertions(+), 14 deletions(-)

diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
index 59dc12b5b7e8..d169a4f41a2e 100644
--- a/arch/riscv/include/asm/pgalloc.h
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -153,10 +153,10 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)

 #endif /* __PAGETABLE_PMD_FOLDED */

-#define __pte_free_tlb(tlb, pte, buf)           \
-do {                                            \
-        pgtable_pte_page_dtor(pte);             \
-        tlb_remove_page((tlb), pte);            \
+#define __pte_free_tlb(tlb, pte, buf)                   \
+do {                                                    \
+        pagetable_pte_dtor(page_ptdesc(pte));           \
+        tlb_remove_page_ptdesc((tlb), page_ptdesc(pte));\
 } while (0)

 #endif /* CONFIG_MMU */

diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 4b95d8999120..efff9c752fcf 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -354,12 +354,10 @@ static inline phys_addr_t __init alloc_pte_fixmap(uintptr_t va)

 static phys_addr_t __init alloc_pte_late(uintptr_t va)
 {
-        unsigned long vaddr;
-
-        vaddr = __get_free_page(GFP_KERNEL);
-        BUG_ON(!vaddr || !pgtable_pte_page_ctor(virt_to_page((void *)vaddr)));
+        struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0);

-        return __pa(vaddr);
+        BUG_ON(!ptdesc || !pagetable_pte_ctor(ptdesc));
+        return __pa((pte_t *)ptdesc_address(ptdesc));
 }

 static void __init create_pte_mapping(pte_t *ptep,
@@ -437,12 +435,10 @@ static phys_addr_t __init alloc_pmd_fixmap(uintptr_t va)

 static phys_addr_t __init alloc_pmd_late(uintptr_t va)
 {
-        unsigned long vaddr;
-
-        vaddr = __get_free_page(GFP_KERNEL);
-        BUG_ON(!vaddr || !pgtable_pmd_page_ctor(virt_to_page((void *)vaddr)));
+        struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0);

-        return __pa(vaddr);
+        BUG_ON(!ptdesc || !pagetable_pmd_ctor(ptdesc));
+        return __pa((pmd_t *)ptdesc_address(ptdesc));
 }

 static void __init create_pmd_mapping(pmd_t *pmdp,
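The riscv early-boot allocators want a physical address rather than a pointer; with ptdescs the pattern is ptdesc_address() followed by __pa(). A condensed sketch of that pattern (hypothetical helper name):

static phys_addr_t __init example_alloc_table_late(void)
{
        struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0);

        /* early boot: allocation or ctor failure is fatal, as in the hunks above */
        BUG_ON(!ptdesc || !pagetable_pte_ctor(ptdesc));
        return __pa(ptdesc_address(ptdesc));
}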
From patchwork Tue Jun 27 03:14:27 2023
X-Patchwork-Id: 13293905
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 29/33] sh: Convert pte_free_tlb() to use ptdescs
Date: Mon, 26 Jun 2023 20:14:27 -0700
Message-Id: <20230627031431.29653-30-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

Part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents. Also cleans up some spacing issues.

Signed-off-by: Vishal Moola (Oracle)
Reviewed-by: Geert Uytterhoeven
Acked-by: John Paul Adrian Glaubitz
Acked-by: Mike Rapoport (IBM)
---
 arch/sh/include/asm/pgalloc.h | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/sh/include/asm/pgalloc.h b/arch/sh/include/asm/pgalloc.h
index a9e98233c4d4..5d8577ab1591 100644
--- a/arch/sh/include/asm/pgalloc.h
+++ b/arch/sh/include/asm/pgalloc.h
@@ -2,6 +2,7 @@
 #ifndef __ASM_SH_PGALLOC_H
 #define __ASM_SH_PGALLOC_H

+#include
 #include

 #define __HAVE_ARCH_PMD_ALLOC_ONE

@@ -31,10 +32,10 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
         set_pmd(pmd, __pmd((unsigned long)page_address(pte)));
 }

-#define __pte_free_tlb(tlb,pte,addr)                    \
-do {                                                    \
-        pgtable_pte_page_dtor(pte);                     \
-        tlb_remove_page((tlb), (pte));                  \
+#define __pte_free_tlb(tlb, pte, addr)                  \
+do {                                                    \
+        pagetable_pte_dtor(page_ptdesc(pte));           \
+        tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \
 } while (0)

 #endif /* __ASM_SH_PGALLOC_H */
From patchwork Tue Jun 27 03:14:28 2023
X-Patchwork-Id: 13293903
From: "Vishal Moola (Oracle)"
Subject: [PATCH v6 30/33] sparc64: Convert various functions to use ptdescs
Date: Mon, 26 Jun 2023 20:14:28 -0700
Message-Id: <20230627031431.29653-31-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.
Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 arch/sparc/mm/init_64.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 04f9db0c3111..105915cd2eee 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -2893,14 +2893,15 @@ pte_t *pte_alloc_one_kernel(struct mm_struct *mm)

 pgtable_t pte_alloc_one(struct mm_struct *mm)
 {
-        struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
-        if (!page)
+        struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0);
+
+        if (!ptdesc)
                 return NULL;
-        if (!pgtable_pte_page_ctor(page)) {
-                __free_page(page);
+        if (!pagetable_pte_ctor(ptdesc)) {
+                pagetable_free(ptdesc);
                 return NULL;
         }
-        return (pte_t *) page_address(page);
+        return ptdesc_address(ptdesc);
 }

 void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
@@ -2910,10 +2911,10 @@ void pte_free_kernel(struct mm_struct *mm, pte_t *pte)

 static void __pte_free(pgtable_t pte)
 {
-        struct page *page = virt_to_page(pte);
+        struct ptdesc *ptdesc = virt_to_ptdesc(pte);

-        pgtable_pte_page_dtor(page);
-        __free_page(page);
+        pagetable_pte_dtor(ptdesc);
+        pagetable_free(ptdesc);
 }

 void pte_free(struct mm_struct *mm, pgtable_t pte)
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins, "Vishal Moola (Oracle)", "David S. Miller", Mike Rapoport
Subject: [PATCH v6 31/33] sparc: Convert pgtable_pte_page_{ctor, dtor}() to ptdesc equivalents
Date: Mon, 26 Jun 2023 20:14:29 -0700
Message-Id: <20230627031431.29653-32-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>
References: <20230627031431.29653-1-vishal.moola@gmail.com>

Part of the conversions to replace the pgtable PTE constructor/destructors
with ptdesc equivalents.
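The wrappers being replaced are thin shims around the ptdesc functions;
their bodies, quoted here from patch 33/33 (which deletes them), show that
the substitution is purely mechanical:

/* The shims removed in patch 33/33 -- each only converts page to ptdesc: */
static inline bool pgtable_pte_page_ctor(struct page *page)
{
        return pagetable_pte_ctor(page_ptdesc(page));
}

static inline void pgtable_pte_page_dtor(struct page *page)
{
        pagetable_pte_dtor(page_ptdesc(page));
}

Open-coding page_ptdesc() at the call sites lets srmmu keep its
pfn_to_page()-based refcounting of the shared nocache page.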
Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 arch/sparc/mm/srmmu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
index 13f027afc875..8393faa3e596 100644
--- a/arch/sparc/mm/srmmu.c
+++ b/arch/sparc/mm/srmmu.c
@@ -355,7 +355,8 @@ pgtable_t pte_alloc_one(struct mm_struct *mm)
                 return NULL;
         page = pfn_to_page(__nocache_pa((unsigned long)ptep) >> PAGE_SHIFT);
         spin_lock(&mm->page_table_lock);
-        if (page_ref_inc_return(page) == 2 && !pgtable_pte_page_ctor(page)) {
+        if (page_ref_inc_return(page) == 2 &&
+            !pagetable_pte_ctor(page_ptdesc(page))) {
                 page_ref_dec(page);
                 ptep = NULL;
         }
@@ -371,7 +372,7 @@ void pte_free(struct mm_struct *mm, pgtable_t ptep)
         page = pfn_to_page(__nocache_pa((unsigned long)ptep) >> PAGE_SHIFT);
         spin_lock(&mm->page_table_lock);
         if (page_ref_dec_return(page) == 1)
-                pgtable_pte_page_dtor(page);
+                pagetable_pte_dtor(page_ptdesc(page));
         spin_unlock(&mm->page_table_lock);
 
         srmmu_free_nocache(ptep, SRMMU_PTE_TABLE_SIZE);

From patchwork Tue Jun 27 03:14:30 2023
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins, "Vishal Moola (Oracle)", Richard Weinberger, Mike Rapoport
Subject: [PATCH v6 32/33] um: Convert {pmd, pte}_free_tlb() to use ptdescs
Date: Mon, 26 Jun 2023 20:14:30 -0700
Message-Id: <20230627031431.29653-33-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>
References: <20230627031431.29653-1-vishal.moola@gmail.com>

Part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents. Also cleans up some spacing issues.
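For clarity, the converted __pte_free_tlb() expands to roughly the
following (an illustrative sketch, assuming pgtable_t is struct page * on
um; tlb_remove_page_ptdesc() is the ptdesc-aware helper introduced earlier
in this series, and the _sketch name is made up):

/* Rough expansion of the converted __pte_free_tlb() macro (sketch only). */
static inline void __pte_free_tlb_sketch(struct mmu_gather *tlb, pgtable_t pte)
{
        struct ptdesc *ptdesc = page_ptdesc(pte);

        /* undo the PTE table constructor: split-ptl state and accounting */
        pagetable_pte_dtor(ptdesc);
        /* queue the table page so it is freed only after the TLB flush */
        tlb_remove_page_ptdesc(tlb, ptdesc);
}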
Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 arch/um/include/asm/pgalloc.h | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/um/include/asm/pgalloc.h b/arch/um/include/asm/pgalloc.h
index 8ec7cd46dd96..de5e31c64793 100644
--- a/arch/um/include/asm/pgalloc.h
+++ b/arch/um/include/asm/pgalloc.h
@@ -25,19 +25,19 @@
  */
 extern pgd_t *pgd_alloc(struct mm_struct *);
 
-#define __pte_free_tlb(tlb,pte, address)                \
-do {                                                    \
-        pgtable_pte_page_dtor(pte);                     \
-        tlb_remove_page((tlb),(pte));                   \
+#define __pte_free_tlb(tlb, pte, address)               \
+do {                                                    \
+        pagetable_pte_dtor(page_ptdesc(pte));           \
+        tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \
 } while (0)
 
 #ifdef CONFIG_3_LEVEL_PGTABLES
 
-#define __pmd_free_tlb(tlb, pmd, address)               \
-do {                                                    \
-        pgtable_pmd_page_dtor(virt_to_page(pmd));       \
-        tlb_remove_page((tlb),virt_to_page(pmd));       \
-} while (0)                                             \
+#define __pmd_free_tlb(tlb, pmd, address)               \
+do {                                                    \
+        pagetable_pmd_dtor(virt_to_ptdesc(pmd));        \
+        tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pmd)); \
+} while (0)
 
 #endif

From patchwork Tue Jun 27 03:14:31 2023
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins, "Vishal Moola (Oracle)", Mike Rapoport
Subject: [PATCH v6 33/33] mm: Remove pgtable_{pmd, pte}_page_{ctor, dtor}() wrappers
Date: Mon, 26 Jun 2023 20:14:31 -0700
Message-Id: <20230627031431.29653-34-vishal.moola@gmail.com>
In-Reply-To: <20230627031431.29653-1-vishal.moola@gmail.com>
References: <20230627031431.29653-1-vishal.moola@gmail.com>

These functions are no longer necessary. Remove them and clean up the
Documentation that references them.

Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 Documentation/mm/split_page_table_lock.rst | 12 +++++------
 .../zh_CN/mm/split_page_table_lock.rst     | 14 ++++++-------
 include/linux/mm.h                         | 20 -------------------
 3 files changed, 13 insertions(+), 33 deletions(-)

diff --git a/Documentation/mm/split_page_table_lock.rst b/Documentation/mm/split_page_table_lock.rst
index a834fad9de12..e4f6972eb6c0 100644
--- a/Documentation/mm/split_page_table_lock.rst
+++ b/Documentation/mm/split_page_table_lock.rst
@@ -58,7 +58,7 @@ Support of split page table lock by an architecture
 ===================================================
 
 There's no need in special enabling of PTE split page table lock: everything
-required is done by pgtable_pte_page_ctor() and pgtable_pte_page_dtor(), which
+required is done by pagetable_pte_ctor() and pagetable_pte_dtor(), which
 must be called on PTE table allocation / freeing.
 
 Make sure the architecture doesn't use slab allocator for page table
@@ -68,8 +68,8 @@ This field shares storage with page->ptl.
 PMD split lock only makes sense if you have more than two page table
 levels.
 
-PMD split lock enabling requires pgtable_pmd_page_ctor() call on PMD table
-allocation and pgtable_pmd_page_dtor() on freeing.
+PMD split lock enabling requires pagetable_pmd_ctor() call on PMD table
+allocation and pagetable_pmd_dtor() on freeing.
 
 Allocation usually happens in pmd_alloc_one(), freeing in pmd_free() and
 pmd_free_tlb(), but make sure you cover all PMD table allocation / freeing
@@ -77,7 +77,7 @@ paths: i.e X86_PAE preallocate few PMDs on pgd_alloc().
 
 With everything in place you can set CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK.
-NOTE: pgtable_pte_page_ctor() and pgtable_pmd_page_ctor() can fail -- it must
+NOTE: pagetable_pte_ctor() and pagetable_pmd_ctor() can fail -- it must
 be handled properly.
 
 page->ptl
@@ -97,7 +97,7 @@ trick:
    split lock with enabled DEBUG_SPINLOCK or DEBUG_LOCK_ALLOC, but costs
    one more cache line for indirect access;
 
-The spinlock_t allocated in pgtable_pte_page_ctor() for PTE table and in
-pgtable_pmd_page_ctor() for PMD table.
+The spinlock_t allocated in pagetable_pte_ctor() for PTE table and in
+pagetable_pmd_ctor() for PMD table.
 
 Please, never access page->ptl directly -- use appropriate helper.

diff --git a/Documentation/translations/zh_CN/mm/split_page_table_lock.rst b/Documentation/translations/zh_CN/mm/split_page_table_lock.rst
index 4fb7aa666037..a2c288670a24 100644
--- a/Documentation/translations/zh_CN/mm/split_page_table_lock.rst
+++ b/Documentation/translations/zh_CN/mm/split_page_table_lock.rst
@@ -56,16 +56,16 @@ Hugetlb特定的辅助函数:
 架构对分页表锁的支持
 ====================
 
-没有必要特别启用PTE分页表锁:所有需要的东西都由pgtable_pte_page_ctor()
-和pgtable_pte_page_dtor()完成,它们必须在PTE表分配/释放时被调用。
+没有必要特别启用PTE分页表锁:所有需要的东西都由pagetable_pte_ctor()
+和pagetable_pte_dtor()完成,它们必须在PTE表分配/释放时被调用。
 
 确保架构不使用slab分配器来分配页表:slab使用page->slab_cache来分配其页
 面。这个区域与page->ptl共享存储。
 
 PMD分页锁只有在你有两个以上的页表级别时才有意义。
 
-启用PMD分页锁需要在PMD表分配时调用pgtable_pmd_page_ctor(),在释放时调
-用pgtable_pmd_page_dtor()。
+启用PMD分页锁需要在PMD表分配时调用pagetable_pmd_ctor(),在释放时调
+用pagetable_pmd_dtor()。
 
 分配通常发生在pmd_alloc_one()中,释放发生在pmd_free()和pmd_free_tlb()
 中,但要确保覆盖所有的PMD表分配/释放路径:即X86_PAE在pgd_alloc()中预先
@@ -73,7 +73,7 @@ PMD分页锁只有在你有两个以上的页表级别时才有意义。
 
 一切就绪后,你可以设置CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK。
 
-注意:pgtable_pte_page_ctor()和pgtable_pmd_page_ctor()可能失败--必
+注意:pagetable_pte_ctor()和pagetable_pmd_ctor()可能失败--必
 须正确处理。
 
 page->ptl
@@ -90,7 +90,7 @@ page->ptl用于访问分割页表锁,其中'page'是包含该表的页面struct
 的指针并动态分配它。这允许在启用DEBUG_SPINLOCK或DEBUG_LOCK_ALLOC的
 情况下使用分页锁,但由于间接访问而多花了一个缓存行。
 
-PTE表的spinlock_t分配在pgtable_pte_page_ctor()中,PMD表的spinlock_t
-分配在pgtable_pmd_page_ctor()中。
+PTE表的spinlock_t分配在pagetable_pte_ctor()中,PMD表的spinlock_t
+分配在pagetable_pmd_ctor()中。
 
 请不要直接访问page->ptl - -使用适当的辅助函数。

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0e4d5f6d10e5..dc0f19f35424 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2873,11 +2873,6 @@ static inline bool pagetable_pte_ctor(struct ptdesc *ptdesc)
         return true;
 }
 
-static inline bool pgtable_pte_page_ctor(struct page *page)
-{
-        return pagetable_pte_ctor(page_ptdesc(page));
-}
-
 static inline void pagetable_pte_dtor(struct ptdesc *ptdesc)
 {
         struct folio *folio = ptdesc_folio(ptdesc);
@@ -2887,11 +2882,6 @@ static inline void pagetable_pte_dtor(struct ptdesc *ptdesc)
         lruvec_stat_sub_folio(folio, NR_PAGETABLE);
 }
 
-static inline void pgtable_pte_page_dtor(struct page *page)
-{
-        pagetable_pte_dtor(page_ptdesc(page));
-}
-
 pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp);
 static inline pte_t *pte_offset_map(pmd_t *pmd, unsigned long addr)
 {
@@ -2993,11 +2983,6 @@ static inline bool pagetable_pmd_ctor(struct ptdesc *ptdesc)
         return true;
 }
 
-static inline bool pgtable_pmd_page_ctor(struct page *page)
-{
-        return pagetable_pmd_ctor(page_ptdesc(page));
-}
-
 static inline void pagetable_pmd_dtor(struct ptdesc *ptdesc)
 {
         struct folio *folio = ptdesc_folio(ptdesc);
@@ -3007,11 +2992,6 @@ static inline void pagetable_pmd_dtor(struct ptdesc *ptdesc)
         lruvec_stat_sub_folio(folio, NR_PAGETABLE);
 }
 
-static inline void pgtable_pmd_page_dtor(struct page *page)
-{
-        pagetable_pmd_dtor(page_ptdesc(page));
-}
-
 /*
  * No scalability reason to split PUD locks yet, but follow the same pattern
 * as the PMD locks to make it easier if we decide to. The VM should not be