From patchwork Wed Feb 27 17:05:53 2019
X-Patchwork-Submitter: Steven Price
X-Patchwork-Id: 10831959
From: Steven Price
To: linux-mm@kvack.org
Cc: Steven Price, Andy Lutomirski, Ard Biesheuvel, Arnd Bergmann,
    Borislav Petkov, Catalin Marinas, Dave Hansen, Ingo Molnar,
    James Morse, Jérôme Glisse, Peter Zijlstra, Thomas Gleixner,
    Will Deacon, x86@kernel.org, "H. Peter Anvin",
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Mark Rutland, "Liang, Kan", Yoshinori Sato, Rich Felker,
    linux-sh@vger.kernel.org
Subject: [PATCH v3 19/34] sh: mm: Add p?d_large() definitions
Date: Wed, 27 Feb 2019 17:05:53 +0000
Message-Id: <20190227170608.27963-20-steven.price@arm.com>
In-Reply-To: <20190227170608.27963-1-steven.price@arm.com>
References: <20190227170608.27963-1-steven.price@arm.com>
X-Mailing-List: linux-sh@vger.kernel.org

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_large() functions/macros.

For sh, we don't support large pages, so add stubs returning 0.
CC: Yoshinori Sato
CC: Rich Felker
CC: linux-sh@vger.kernel.org
Signed-off-by: Steven Price
---
 arch/sh/include/asm/pgtable-3level.h | 1 +
 arch/sh/include/asm/pgtable_32.h     | 1 +
 arch/sh/include/asm/pgtable_64.h     | 1 +
 3 files changed, 3 insertions(+)

diff --git a/arch/sh/include/asm/pgtable-3level.h b/arch/sh/include/asm/pgtable-3level.h
index 7d8587eb65ff..9d8b2b002582 100644
--- a/arch/sh/include/asm/pgtable-3level.h
+++ b/arch/sh/include/asm/pgtable-3level.h
@@ -48,6 +48,7 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
 #define pud_present(x)	(pud_val(x))
 #define pud_clear(xp)	do { set_pud(xp, __pud(0)); } while (0)
 #define pud_bad(x)	(pud_val(x) & ~PAGE_MASK)
+#define pud_large(x)	(0)
 
 /*
  * (puds are folded into pgds so this doesn't get actually called,
diff --git a/arch/sh/include/asm/pgtable_32.h b/arch/sh/include/asm/pgtable_32.h
index 29274f0e428e..61186aa11021 100644
--- a/arch/sh/include/asm/pgtable_32.h
+++ b/arch/sh/include/asm/pgtable_32.h
@@ -329,6 +329,7 @@ static inline void set_pte(pte_t *ptep, pte_t pte)
 #define pmd_present(x)	(pmd_val(x))
 #define pmd_clear(xp)	do { set_pmd(xp, __pmd(0)); } while (0)
 #define pmd_bad(x)	(pmd_val(x) & ~PAGE_MASK)
+#define pmd_large(x)	(0)
 
 #define pages_to_mb(x)	((x) >> (20-PAGE_SHIFT))
 #define pte_page(x)	pfn_to_page(pte_pfn(x))
diff --git a/arch/sh/include/asm/pgtable_64.h b/arch/sh/include/asm/pgtable_64.h
index 1778bc5971e7..80fe9264babf 100644
--- a/arch/sh/include/asm/pgtable_64.h
+++ b/arch/sh/include/asm/pgtable_64.h
@@ -64,6 +64,7 @@ static __inline__ void set_pte(pte_t *pteptr, pte_t pteval)
 #define pmd_clear(pmd_entry_p)	(set_pmd((pmd_entry_p), __pmd(_PMD_EMPTY)))
 #define pmd_none(pmd_entry)	(pmd_val((pmd_entry)) == _PMD_EMPTY)
 #define pmd_bad(pmd_entry)	((pmd_val(pmd_entry) & (~PAGE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)
+#define pmd_large(pmd_entry)	(0)
 #define pmd_page_vaddr(pmd_entry) \
 	((unsigned long) __va(pmd_val(pmd_entry) & PAGE_MASK))
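
[Editorial note, not part of the patch: the commit message says walk_page_range() uses p?d_large() to detect 'leaf' entries. The following is a minimal user-space sketch of that idea under stated assumptions: the struct names, the walk_pmd() helper, and the toy pmd_large() here are invented for illustration and are not the kernel's walk_page_range() implementation. On sh the real stub always returns 0, so the walker always descends to the next level.]

#include <stdio.h>

/* Toy next-level (PTE) entry. */
struct pte_entry {
	unsigned long val;
};

/*
 * Toy PMD entry: is_large models pmd_large() returning non-zero,
 * ptes is the next-level table, only meaningful when !is_large.
 */
struct pmd_entry {
	unsigned long val;
	int is_large;
	const struct pte_entry *ptes;
};

/* Stand-in for pmd_large(); the sh stubs in this patch always return 0. */
static int pmd_large(const struct pmd_entry *pmd)
{
	return pmd->is_large;
}

/* A walker descends only when the entry is not a leaf ("large") mapping. */
static void walk_pmd(const struct pmd_entry *pmd, int nr_ptes)
{
	if (pmd_large(pmd)) {
		/* Leaf: the PMD itself maps a huge region, stop here. */
		printf("leaf PMD: %#lx\n", pmd->val);
		return;
	}
	for (int i = 0; i < nr_ptes; i++)
		printf("PTE[%d]: %#lx\n", i, pmd->ptes[i].val);
}

int main(void)
{
	struct pte_entry ptes[2] = { { 0x1000 }, { 0x2000 } };
	struct pmd_entry pmd = { .val = 0xdead, .is_large = 0, .ptes = ptes };

	walk_pmd(&pmd, 2);	/* descends because pmd_large() == 0 */
	return 0;
}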