From patchwork Fri Jun 24 17:36:46 2022
X-Patchwork-Submitter: James Houghton
X-Patchwork-Id: 12894941
Date: Fri, 24 Jun 2022 17:36:46 +0000
In-Reply-To: <20220624173656.2033256-1-jthoughton@google.com>
Message-Id: <20220624173656.2033256-17-jthoughton@google.com>
References: <20220624173656.2033256-1-jthoughton@google.com>
Subject: [RFC PATCH 16/26] hugetlb: make hugetlb_change_protection compatible with HGM
From: James Houghton
To: Mike Kravetz, Muchun Song, Peter Xu
Cc: David Hildenbrand, David Rientjes, Axel Rasmussen, Mina Almasry,
 Jue Wang, Manish Mishra, "Dr. David Alan Gilbert", linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, James Houghton
HugeTLB is now able to change the protection of hugepages that are
mapped at high granularity.

I need to add more of the HugeTLB PTE wrapper functions to clean up
this patch. I'll do this in the next version.

Signed-off-by: James Houghton
---
 mm/hugetlb.c | 91 +++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 62 insertions(+), 29 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 51fc1d3f122f..f9c7daa6c090 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6476,14 +6476,15 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long start = address;
-	pte_t *ptep;
 	pte_t pte;
 	struct hstate *h = hstate_vma(vma);
-	unsigned long pages = 0, psize = huge_page_size(h);
+	unsigned long base_pages = 0, psize = huge_page_size(h);
 	bool shared_pmd = false;
 	struct mmu_notifier_range range;
 	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
 	bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
+	struct hugetlb_pte hpte;
+	bool hgm_enabled = hugetlb_hgm_enabled(vma);
 
 	/*
 	 * In the case of shared PMDs, the area to flush could be beyond
@@ -6499,28 +6500,38 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 	mmu_notifier_invalidate_range_start(&range);
 	i_mmap_lock_write(vma->vm_file->f_mapping);
-	for (; address < end; address += psize) {
+	while (address < end) {
 		spinlock_t *ptl;
-		ptep = huge_pte_offset(mm, address, psize);
-		if (!ptep)
+		pte_t *ptep = huge_pte_offset(mm, address, huge_page_size(h));
+
+		if (!ptep) {
+			address += huge_page_size(h);
 			continue;
-		ptl = huge_pte_lock(h, mm, ptep);
-		if (huge_pmd_unshare(mm, vma, &address, ptep)) {
+		}
+		hugetlb_pte_populate(&hpte, ptep, huge_page_shift(h));
+		if (hgm_enabled) {
+			int ret = hugetlb_walk_to(mm, &hpte, address, PAGE_SIZE,
+						  /*stop_at_none=*/true);
+			BUG_ON(ret);
+		}
+
+		ptl = hugetlb_pte_lock(mm, &hpte);
+		if (huge_pmd_unshare(mm, vma, &address, hpte.ptep)) {
 			/*
 			 * When uffd-wp is enabled on the vma, unshare
 			 * shouldn't happen at all.  Warn about it if it
 			 * happened due to some reason.
 			 */
 			WARN_ON_ONCE(uffd_wp || uffd_wp_resolve);
-			pages++;
+			base_pages += hugetlb_pte_size(&hpte) / PAGE_SIZE;
 			spin_unlock(ptl);
 			shared_pmd = true;
-			continue;
+			goto next_hpte;
 		}
-		pte = huge_ptep_get(ptep);
+		pte = hugetlb_ptep_get(&hpte);
 		if (unlikely(is_hugetlb_entry_hwpoisoned(pte))) {
 			spin_unlock(ptl);
-			continue;
+			goto next_hpte;
 		}
 		if (unlikely(is_hugetlb_entry_migration(pte))) {
 			swp_entry_t entry = pte_to_swp_entry(pte);
@@ -6540,12 +6551,13 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 					newpte = pte_swp_mkuffd_wp(newpte);
 				else if (uffd_wp_resolve)
 					newpte = pte_swp_clear_uffd_wp(newpte);
-				set_huge_swap_pte_at(mm, address, ptep,
-						     newpte, psize);
-				pages++;
+				set_huge_swap_pte_at(mm, address, hpte.ptep,
+						     newpte,
+						     hugetlb_pte_size(&hpte));
+				base_pages += hugetlb_pte_size(&hpte) / PAGE_SIZE;
 			}
 			spin_unlock(ptl);
-			continue;
+			goto next_hpte;
 		}
 		if (unlikely(pte_marker_uffd_wp(pte))) {
 			/*
@@ -6553,21 +6565,40 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 			 * no need for huge_ptep_modify_prot_start/commit().
 			 */
 			if (uffd_wp_resolve)
-				huge_pte_clear(mm, address, ptep, psize);
+				huge_pte_clear(mm, address, hpte.ptep, psize);
 		}
-		if (!huge_pte_none(pte)) {
+		if (!hugetlb_pte_none(&hpte)) {
 			pte_t old_pte;
-			unsigned int shift = huge_page_shift(hstate_vma(vma));
-
-			old_pte = huge_ptep_modify_prot_start(vma, address, ptep);
-			pte = huge_pte_modify(old_pte, newprot);
-			pte = arch_make_huge_pte(pte, shift, vma->vm_flags);
-			if (uffd_wp)
-				pte = huge_pte_mkuffd_wp(huge_pte_wrprotect(pte));
-			else if (uffd_wp_resolve)
-				pte = huge_pte_clear_uffd_wp(pte);
-			huge_ptep_modify_prot_commit(vma, address, ptep, old_pte, pte);
-			pages++;
+			unsigned int shift = hpte.shift;
+			/*
+			 * This is ugly. This will be cleaned up in a future
+			 * version of this series.
+			 */
+			if (shift > PAGE_SHIFT) {
+				old_pte = huge_ptep_modify_prot_start(
+						vma, address, hpte.ptep);
+				pte = huge_pte_modify(old_pte, newprot);
+				pte = arch_make_huge_pte(
+						pte, shift, vma->vm_flags);
+				if (uffd_wp)
+					pte = huge_pte_mkuffd_wp(huge_pte_wrprotect(pte));
+				else if (uffd_wp_resolve)
+					pte = huge_pte_clear_uffd_wp(pte);
+				huge_ptep_modify_prot_commit(
+						vma, address, hpte.ptep,
+						old_pte, pte);
+			} else {
+				old_pte = ptep_modify_prot_start(
+						vma, address, hpte.ptep);
+				pte = pte_modify(old_pte, newprot);
+				if (uffd_wp)
+					pte = pte_mkuffd_wp(pte_wrprotect(pte));
+				else if (uffd_wp_resolve)
+					pte = pte_clear_uffd_wp(pte);
+				ptep_modify_prot_commit(
+						vma, address, hpte.ptep, old_pte, pte);
+			}
+			base_pages += hugetlb_pte_size(&hpte) / PAGE_SIZE;
 		} else {
 			/* None pte */
 			if (unlikely(uffd_wp))
@@ -6576,6 +6607,8 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 					  make_pte_marker(PTE_MARKER_UFFD_WP));
 		}
 		spin_unlock(ptl);
+next_hpte:
+		address += hugetlb_pte_size(&hpte);
 	}
 	/*
 	 * Must flush TLB before releasing i_mmap_rwsem: x86's huge_pmd_unshare
@@ -6597,7 +6630,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 	i_mmap_unlock_write(vma->vm_file->f_mapping);
 	mmu_notifier_invalidate_range_end(&range);
 
-	return pages << h->order;
+	return base_pages;
 }
 
 /* Return true if reservation was successful, false otherwise. */