From patchwork Fri Jun 24 17:36:37 2022
X-Patchwork-Submitter: James Houghton
X-Patchwork-Id: 12894932
Date: Fri, 24 Jun 2022 17:36:37 +0000
In-Reply-To: <20220624173656.2033256-1-jthoughton@google.com>
Message-Id: <20220624173656.2033256-8-jthoughton@google.com>
References: <20220624173656.2033256-1-jthoughton@google.com>
Subject: [RFC PATCH 07/26] hugetlb: add hugetlb_pte to track HugeTLB page table entries
From: James Houghton
To: Mike Kravetz, Muchun Song, Peter Xu
Cc: David Hildenbrand, David Rientjes, Axel Rasmussen, Mina Almasry,
 Jue Wang, Manish Mishra, "Dr. David Alan Gilbert",
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, James Houghton
After high-granularity mapping, page table entries for HugeTLB pages can
be of any size/type. (For example, we can have a 1G page mapped with a
mix of PMDs and PTEs.) This struct helps keep track of a HugeTLB PTE
after we have done a page table walk. Without it, we'd have to pass
around the "size" of the PTE everywhere. We effectively did this before:
it could be fetched from the hstate, which we pass around pretty much
everywhere.

This commit includes definitions for some basic helper functions that
are used later. These helper functions wrap existing PTE
inspection/modification functions, where the correct version is picked
depending on whether the HugeTLB PTE is actually "huge" or not.
(Previously, all HugeTLB PTEs were "huge".) For example,
hugetlb_ptep_get wraps huge_ptep_get and ptep_get, where ptep_get is
used when the HugeTLB PTE is PAGE_SIZE, and huge_ptep_get is used in
all other cases.
Signed-off-by: James Houghton
---
 include/linux/hugetlb.h | 84 +++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb.c            | 57 ++++++++++++++++++++++++++++
 2 files changed, 141 insertions(+)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 5fe1db46d8c9..1d4ec9dfdebf 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -46,6 +46,68 @@ enum {
 	__NR_USED_SUBPAGE,
 };
 
+struct hugetlb_pte {
+	pte_t *ptep;
+	unsigned int shift;
+};
+
+static inline
+void hugetlb_pte_init(struct hugetlb_pte *hpte)
+{
+	hpte->ptep = NULL;
+}
+
+static inline
+void hugetlb_pte_populate(struct hugetlb_pte *hpte, pte_t *ptep,
+			  unsigned int shift)
+{
+	BUG_ON(!ptep);
+	hpte->ptep = ptep;
+	hpte->shift = shift;
+}
+
+static inline
+unsigned long hugetlb_pte_size(const struct hugetlb_pte *hpte)
+{
+	BUG_ON(!hpte->ptep);
+	return 1UL << hpte->shift;
+}
+
+static inline
+unsigned long hugetlb_pte_mask(const struct hugetlb_pte *hpte)
+{
+	BUG_ON(!hpte->ptep);
+	return ~(hugetlb_pte_size(hpte) - 1);
+}
+
+static inline
+unsigned int hugetlb_pte_shift(const struct hugetlb_pte *hpte)
+{
+	BUG_ON(!hpte->ptep);
+	return hpte->shift;
+}
+
+static inline
+bool hugetlb_pte_huge(const struct hugetlb_pte *hpte)
+{
+	return !IS_ENABLED(CONFIG_HUGETLB_HIGH_GRANULARITY_MAPPING) ||
+		hugetlb_pte_shift(hpte) > PAGE_SHIFT;
+}
+
+static inline
+void hugetlb_pte_copy(struct hugetlb_pte *dest, const struct hugetlb_pte *src)
+{
+	dest->ptep = src->ptep;
+	dest->shift = src->shift;
+}
+
+bool hugetlb_pte_present_leaf(const struct hugetlb_pte *hpte);
+bool hugetlb_pte_none(const struct hugetlb_pte *hpte);
+bool hugetlb_pte_none_mostly(const struct hugetlb_pte *hpte);
+pte_t hugetlb_ptep_get(const struct hugetlb_pte *hpte);
+void hugetlb_pte_clear(struct mm_struct *mm, const struct hugetlb_pte *hpte,
+		       unsigned long address);
+
 struct hugepage_subpool {
 	spinlock_t lock;
 	long count;
@@ -1130,6 +1192,28 @@ static inline spinlock_t *huge_pte_lock_shift(unsigned int shift,
 	return ptl;
 }
+static inline
+spinlock_t *hugetlb_pte_lockptr(struct mm_struct *mm, struct hugetlb_pte *hpte)
+{
+	BUG_ON(!hpte->ptep);
+	// Only use huge_pte_lockptr if we are at leaf-level. Otherwise use
+	// the regular page table lock.
+	if (hugetlb_pte_none(hpte) || hugetlb_pte_present_leaf(hpte))
+		return huge_pte_lockptr(hugetlb_pte_shift(hpte),
+					mm, hpte->ptep);
+	return &mm->page_table_lock;
+}
+
+static inline
+spinlock_t *hugetlb_pte_lock(struct mm_struct *mm, struct hugetlb_pte *hpte)
+{
+	spinlock_t *ptl = hugetlb_pte_lockptr(mm, hpte);
+
+	spin_lock(ptl);
+	return ptl;
+}
+
 #if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_CMA)
 extern void __init hugetlb_cma_reserve(int order);
 extern void __init hugetlb_cma_check(void);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d6d0d4c03def..1a1434e29740 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1120,6 +1120,63 @@ static bool vma_has_reserves(struct vm_area_struct *vma, long chg)
 	return false;
 }
 
+bool hugetlb_pte_present_leaf(const struct hugetlb_pte *hpte)
+{
+	pgd_t pgd;
+	p4d_t p4d;
+	pud_t pud;
+	pmd_t pmd;
+
+	BUG_ON(!hpte->ptep);
+	if (hugetlb_pte_size(hpte) >= PGDIR_SIZE) {
+		pgd = *(pgd_t *)hpte->ptep;
+		return pgd_present(pgd) && pgd_leaf(pgd);
+	} else if (hugetlb_pte_size(hpte) >= P4D_SIZE) {
+		p4d = *(p4d_t *)hpte->ptep;
+		return p4d_present(p4d) && p4d_leaf(p4d);
+	} else if (hugetlb_pte_size(hpte) >= PUD_SIZE) {
+		pud = *(pud_t *)hpte->ptep;
+		return pud_present(pud) && pud_leaf(pud);
+	} else if (hugetlb_pte_size(hpte) >= PMD_SIZE) {
+		pmd = *(pmd_t *)hpte->ptep;
+		return pmd_present(pmd) && pmd_leaf(pmd);
+	} else if (hugetlb_pte_size(hpte) >= PAGE_SIZE)
+		return pte_present(*hpte->ptep);
+	BUG();
+}
+
+bool hugetlb_pte_none(const struct hugetlb_pte *hpte)
+{
+	if (hugetlb_pte_huge(hpte))
+		return huge_pte_none(huge_ptep_get(hpte->ptep));
+	return pte_none(ptep_get(hpte->ptep));
+}
+
+bool hugetlb_pte_none_mostly(const struct hugetlb_pte *hpte)
+{
+	if (hugetlb_pte_huge(hpte))
+		return huge_pte_none_mostly(huge_ptep_get(hpte->ptep));
+	return pte_none_mostly(ptep_get(hpte->ptep));
+}
+
+pte_t hugetlb_ptep_get(const struct hugetlb_pte *hpte)
+{
+	if (hugetlb_pte_huge(hpte))
+		return huge_ptep_get(hpte->ptep);
+	return ptep_get(hpte->ptep);
+}
+
+void hugetlb_pte_clear(struct mm_struct *mm, const struct hugetlb_pte *hpte,
+		       unsigned long address)
+{
+	unsigned long sz = hugetlb_pte_size(hpte);
+
+	BUG_ON(!hpte->ptep);
+	if (sz > PAGE_SIZE)
+		return huge_pte_clear(mm, address, hpte->ptep, sz);
+	return pte_clear(mm, address, hpte->ptep);
+}
+
 static void enqueue_huge_page(struct hstate *h, struct page *page)
 {
 	int nid = page_to_nid(page);