From patchwork Tue Jun 1 19:50:46 2021
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12292133
Date: Tue, 1 Jun 2021 12:50:46 -0700
In-Reply-To: <20210601195049.2695657-1-pcc@google.com>
Message-Id: <20210601195049.2695657-2-pcc@google.com>
References: <20210601195049.2695657-1-pcc@google.com>
Subject: [PATCH v5 1/4] mm: arch: remove indirection level in alloc_zeroed_user_highpage_movable()
From: Peter Collingbourne
To: Andrey Konovalov, Alexander Potapenko, Catalin Marinas, Vincenzo Frascino, Andrew Morton, Jann Horn
Cc: Peter Collingbourne, Evgenii Stepanov, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org

In an upcoming change we would like to add a flag to GFP_HIGHUSER_MOVABLE
so that it would no longer be an OR of GFP_HIGHUSER and __GFP_MOVABLE.
This poses a problem for alloc_zeroed_user_highpage_movable(), which
passes __GFP_MOVABLE into an arch-specific __alloc_zeroed_user_highpage()
hook that ORs in GFP_HIGHUSER.

Since __alloc_zeroed_user_highpage() is only ever called from
alloc_zeroed_user_highpage_movable(), we can remove one level of
indirection here. Remove __alloc_zeroed_user_highpage(), make
alloc_zeroed_user_highpage_movable() the hook, and use
GFP_HIGHUSER_MOVABLE in the hook implementations so that they will pick
up the new flag that we are going to add.
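To see why the indirection gets in the way, here is a condensed sketch of
the hook shape before and after this patch (simplified from the diff
below; in the generic header the "before" wrapper is really a static
inline rather than a second macro):

/* Before: the arch hook ORs in GFP_HIGHUSER itself, so __GFP_MOVABLE
 * has to be threaded through an extra movableflags parameter. */
#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
	__alloc_zeroed_user_highpage(__GFP_MOVABLE, vma, vaddr)

/* After: the movable variant is the arch hook itself and spells out
 * GFP_HIGHUSER_MOVABLE, so a flag later added to GFP_HIGHUSER_MOVABLE
 * is picked up without touching each architecture again. */
#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)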
Signed-off-by: Peter Collingbourne
Link: https://linux-review.googlesource.com/id/Ic6361c657b2cdcd896adbe0cf7cb5a7fbb1ed7bf
Reported-by: kernel test robot
---
v5:
- fix s390 build error

 arch/alpha/include/asm/page.h   |  6 +++---
 arch/arm64/include/asm/page.h   |  6 +++---
 arch/ia64/include/asm/page.h    |  6 +++---
 arch/m68k/include/asm/page_no.h |  6 +++---
 arch/s390/include/asm/page.h    |  6 +++---
 arch/x86/include/asm/page.h     |  6 +++---
 include/linux/highmem.h         | 35 ++++++++-------------------------
 7 files changed, 26 insertions(+), 45 deletions(-)

diff --git a/arch/alpha/include/asm/page.h b/arch/alpha/include/asm/page.h
index 268f99b4602b..18f48a6f2ff6 100644
--- a/arch/alpha/include/asm/page.h
+++ b/arch/alpha/include/asm/page.h
@@ -17,9 +17,9 @@ extern void clear_page(void *page);
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vmaddr)
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
+	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vmaddr)
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 extern void copy_page(void * _to, void * _from);
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 012cffc574e8..0cfe4f7e7055 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -28,9 +28,9 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
+	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
diff --git a/arch/ia64/include/asm/page.h b/arch/ia64/include/asm/page.h
index f4dc81fa7146..1b990466d540 100644
--- a/arch/ia64/include/asm/page.h
+++ b/arch/ia64/include/asm/page.h
@@ -82,16 +82,16 @@ do {						\
 } while (0)
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr)		\
+#define alloc_zeroed_user_highpage_movable(vma, vaddr)			\
 ({									\
 	struct page *page = alloc_page_vma(				\
-		GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr);	\
+		GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr);		\
 	if (page)							\
 		flush_dcache_page(page);				\
 	page;								\
 })
 
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 #define virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
 
diff --git a/arch/m68k/include/asm/page_no.h b/arch/m68k/include/asm/page_no.h
index 8d0f862ee9d7..c9d0d84158a4 100644
--- a/arch/m68k/include/asm/page_no.h
+++ b/arch/m68k/include/asm/page_no.h
@@ -13,9 +13,9 @@ extern unsigned long memory_end;
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
+	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 #define __pa(vaddr)		((unsigned long)(vaddr))
 #define __va(paddr)		((void *)((unsigned long)(paddr)))
diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h
index cc98f9b78fd4..479dc76e0eca 100644
--- a/arch/s390/include/asm/page.h
+++ b/arch/s390/include/asm/page.h
@@ -68,9 +68,9 @@ static inline void copy_page(void *to, void *from)
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
+	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 /*
  * These are used to make use of C type-checking..
diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index 7555b48803a8..4d5810c8fab7 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -34,9 +34,9 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
 	copy_page(to, from);
 }
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
-#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#define alloc_zeroed_user_highpage_movable(vma, vaddr) \
+	alloc_page_vma(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, vma, vaddr)
+#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 
 #ifndef __pa
 #define __pa(x)	__phys_addr((unsigned long)(x))
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 832b49b50c7b..54d0643b8fcf 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -152,28 +152,24 @@ static inline void clear_user_highpage(struct page *page, unsigned long vaddr)
 }
 #endif
 
-#ifndef __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#ifndef __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
 /**
- * __alloc_zeroed_user_highpage - Allocate a zeroed HIGHMEM page for a VMA with caller-specified movable GFP flags
- * @movableflags: The GFP flags related to the pages future ability to move like __GFP_MOVABLE
+ * alloc_zeroed_user_highpage_movable - Allocate a zeroed HIGHMEM page for a VMA that the caller knows can move
  * @vma: The VMA the page is to be allocated for
  * @vaddr: The virtual address the page will be inserted into
  *
- * This function will allocate a page for a VMA but the caller is expected
- * to specify via movableflags whether the page will be movable in the
- * future or not
+ * This function will allocate a page for a VMA that the caller knows will
+ * be able to migrate in the future using move_pages() or reclaimed
  *
  * An architecture may override this function by defining
- * __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE and providing their own
+ * __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE and providing their own
  * implementation.
  */
 static inline struct page *
-__alloc_zeroed_user_highpage(gfp_t movableflags,
-			struct vm_area_struct *vma,
-			unsigned long vaddr)
+alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
+				   unsigned long vaddr)
 {
-	struct page *page = alloc_page_vma(GFP_HIGHUSER | movableflags,
-			vma, vaddr);
+	struct page *page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vaddr);
 
 	if (page)
 		clear_user_highpage(page, vaddr);
@@ -182,21 +178,6 @@ __alloc_zeroed_user_highpage(gfp_t movableflags,
 }
 #endif
 
-/**
- * alloc_zeroed_user_highpage_movable - Allocate a zeroed HIGHMEM page for a VMA that the caller knows can move
- * @vma: The VMA the page is to be allocated for
- * @vaddr: The virtual address the page will be inserted into
- *
- * This function will allocate a page for a VMA that the caller knows will
- * be able to migrate in the future using move_pages() or reclaimed
- */
-static inline struct page *
-alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
-				   unsigned long vaddr)
-{
-	return __alloc_zeroed_user_highpage(__GFP_MOVABLE, vma, vaddr);
-}
-
 static inline void clear_highpage(struct page *page)
 {
 	void *kaddr = kmap_atomic(page);
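For reference, a typical caller of this hook is the anonymous page fault
path; the following is an illustrative sketch modeled on
do_anonymous_page() in mm/memory.c, not part of this patch:

/* Allocate a zeroed, movable page for an anonymous write fault. The
 * page comes back already cleared, so no separate
 * clear_user_highpage() call is needed. */
struct page *page = alloc_zeroed_user_highpage_movable(vma, vmf->address);
if (!page)
	return VM_FAULT_OOM;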