From patchwork Fri Jul 30 22:15:22 2021
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 12412071
Date: Fri, 30 Jul 2021 15:15:22 -0700
Message-Id: <20210730221522.524256-1-almasrymina@google.com>
Subject: [PATCH v1] mm, hugepages: add mremap() support for hugepage backed vma
From: Mina Almasry
Cc: Ken Chen, Mina Almasry, Mike Kravetz, Andrew Morton,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Chris Kennelly

From: Ken Chen

Support mremap() for hugepage backed vma segments by simply repositioning
the page table entries: on mremap(), the page table entries are moved to
the new virtual address.

Hugetlb mremap() support is generic; my motivating use case is a library
(hugepage_text) which reloads the ELF text of executables into hugepages,
significantly improving the execution performance of those executables.

The mremap operation on hugepages is restricted to at most the size of the
original mapping, as the underlying hugetlb reservation is not yet capable
of handling remapping to a larger size.

Tested with a simple mmap/mremap test case, roughly:

  void *haddr = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_ANONYMOUS | MAP_SHARED, -1, 0);

  void *taddr = mmap(NULL, size, PROT_NONE,
                     MAP_HUGETLB | MAP_ANONYMOUS | MAP_SHARED, -1, 0);

  void *raddr = mremap(haddr, size, size,
                       MREMAP_MAYMOVE | MREMAP_FIXED, taddr);
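For illustration, a more complete standalone sketch of the test fragment
above (not part of the patch: the 2MB size constant, the error handling and
the final printf are additions made here, and the system is assumed to have
free hugepages reserved, e.g. via /proc/sys/vm/nr_hugepages):

  /*
   * Illustrative standalone version of the test fragment above; not part
   * of the patch. Assumes a 2MB default hugepage size and that hugepages
   * have been reserved beforehand.
   */
  #define _GNU_SOURCE
  #include <stdio.h>
  #include <sys/mman.h>

  #define SIZE (2UL * 1024 * 1024)  /* one default-sized hugepage (assumed) */

  int main(void)
  {
          /* Source mapping backed by normal pages. */
          void *haddr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE | PROT_EXEC,
                             MAP_ANONYMOUS | MAP_SHARED, -1, 0);
          if (haddr == MAP_FAILED) {
                  perror("mmap");
                  return 1;
          }

          /* Destination range, backed by hugepages until it is replaced. */
          void *taddr = mmap(NULL, SIZE, PROT_NONE,
                             MAP_HUGETLB | MAP_ANONYMOUS | MAP_SHARED, -1, 0);
          if (taddr == MAP_FAILED) {
                  perror("mmap(MAP_HUGETLB)");
                  return 1;
          }

          /* Move the first mapping to the address of the hugetlb mapping. */
          void *raddr = mremap(haddr, SIZE, SIZE,
                               MREMAP_MAYMOVE | MREMAP_FIXED, taddr);
          if (raddr == MAP_FAILED) {
                  perror("mremap");
                  return 1;
          }

          printf("remapped to %p\n", raddr);
          return 0;
  }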
Signed-off-by: Mina Almasry
Cc: Mike Kravetz
Cc: Andrew Morton
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Cc: Ken Chen
Cc: Chris Kennelly

---
 include/linux/hugetlb.h | 13 ++++++
 mm/hugetlb.c            | 89 +++++++++++++++++++++++++++++++++++++++++
 mm/mremap.c             | 75 ++++++++++++++++++++++++++++++++--
 3 files changed, 174 insertions(+), 3 deletions(-)

--
2.32.0.554.ge1b32706d8-goog

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index f7ca1a3870ea5..685a289b58401 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -124,6 +124,7 @@ struct hugepage_subpool *hugepage_new_subpool(struct hstate *h, long max_hpages,
 void hugepage_put_subpool(struct hugepage_subpool *spool);
 
 void reset_vma_resv_huge_pages(struct vm_area_struct *vma);
+void clear_vma_resv_huge_pages(struct vm_area_struct *vma);
 int hugetlb_sysctl_handler(struct ctl_table *, int, void *, size_t *, loff_t *);
 int hugetlb_overcommit_handler(struct ctl_table *, int, void *, size_t *,
 		loff_t *);
@@ -132,6 +133,8 @@ int hugetlb_treat_movable_handler(struct ctl_table *, int, void *, size_t *,
 int hugetlb_mempolicy_sysctl_handler(struct ctl_table *, int, void *, size_t *,
 		loff_t *);
 
+int move_hugetlb_page_tables(struct vm_area_struct *vma, unsigned long old_addr,
+			     unsigned long new_addr, unsigned long len);
 int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *,
 			    struct vm_area_struct *);
 long follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *,
 			 struct page **, struct vm_area_struct **,
@@ -218,6 +221,10 @@ static inline void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
 {
 }
 
+static inline void clear_vma_resv_huge_pages(struct vm_area_struct *vma)
+{
+}
+
 static inline unsigned long hugetlb_total_pages(void)
 {
 	return 0;
@@ -265,6 +272,12 @@ static inline int copy_hugetlb_page_range(struct mm_struct *dst,
 	return 0;
 }
 
+#define move_hugetlb_page_tables(vma, old_addr, new_addr, len)	\
+	({							\
+		BUG();						\
+		0;						\
+	})
+
 static inline void hugetlb_report_meminfo(struct seq_file *m)
 {
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 528947da65c8f..bd26b00caf3cf 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1004,6 +1004,23 @@ void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
 		vma->vm_private_data = (void *)0;
 }
 
+/*
+ * Reset and decrement one ref on hugepage private reservation.
+ * Called with mm->mmap_sem writer semaphore held.
+ * This function should be only used by move_vma() and operate on
+ * same sized vma. It should never come here with last ref on the
+ * reservation.
+ */
+void clear_vma_resv_huge_pages(struct vm_area_struct *vma)
+{
+	struct resv_map *reservations = vma_resv_map(vma);
+
+	if (reservations && is_vma_resv_set(vma, HPAGE_RESV_OWNER))
+		kref_put(&reservations->refs, resv_map_release);
+
+	reset_vma_resv_huge_pages(vma);
+}
+
 /* Returns true if the VMA has associated reserve pages */
 static bool vma_has_reserves(struct vm_area_struct *vma, long chg)
 {
@@ -4429,6 +4446,73 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 	return ret;
 }
 
+static bool vma_shareable(struct vm_area_struct *vma, unsigned long addr);
+
+static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
+			  unsigned long new_addr, pte_t *src_pte)
+{
+	struct address_space *mapping = vma->vm_file->f_mapping;
+	struct hstate *h = hstate_vma(vma);
+	struct mm_struct *mm = vma->vm_mm;
+	pte_t *dst_pte, pte;
+	spinlock_t *src_ptl, *dst_ptl;
+
+	/* Shared pagetables need more thought here if we re-enable them */
+	BUG_ON(vma_shareable(vma, old_addr));
+
+	/* Prevent race with file truncation */
+	i_mmap_lock_write(mapping);
+
+	dst_pte = huge_pte_offset(mm, new_addr, huge_page_size(h));
+	dst_ptl = huge_pte_lock(h, mm, dst_pte);
+	src_ptl = huge_pte_lockptr(h, mm, src_pte);
+	/*
+	 * We don't have to worry about the ordering of src and dst ptlocks
+	 * because exclusive mmap_sem (or the i_mmap_lock) prevents deadlock.
+	 */
+	if (src_ptl != dst_ptl)
+		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
+
+	pte = huge_ptep_get_and_clear(mm, old_addr, src_pte);
+	set_huge_pte_at(mm, new_addr, dst_pte, pte);
+
+	if (src_ptl != dst_ptl)
+		spin_unlock(src_ptl);
+	spin_unlock(dst_ptl);
+	i_mmap_unlock_write(mapping);
+}
+
+int move_hugetlb_page_tables(struct vm_area_struct *vma, unsigned long old_addr,
+			     unsigned long new_addr, unsigned long len)
+{
+	struct hstate *h = hstate_vma(vma);
+	unsigned long sz = huge_page_size(h);
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long old_end = old_addr + len;
+	pte_t *src_pte, *dst_pte;
+	struct mmu_notifier_range range;
+
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm, old_addr,
+				old_end);
+	mmu_notifier_invalidate_range_start(&range);
+	for (; old_addr < old_end; old_addr += sz, new_addr += sz) {
+		src_pte = huge_pte_offset(mm, old_addr, sz);
+		if (!src_pte)
+			continue;
+		if (huge_pte_none(huge_ptep_get(src_pte)))
+			continue;
+		dst_pte = huge_pte_alloc(mm, vma, new_addr, sz);
+		if (!dst_pte)
+			break;
+
+		move_huge_pte(vma, old_addr, new_addr, src_pte);
+	}
+	flush_tlb_range(vma, old_end - len, old_end);
+	mmu_notifier_invalidate_range_end(&range);
+
+	return len + old_addr - old_end;
+}
+
 void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			    unsigned long start, unsigned long end,
 			    struct page *ref_page)
@@ -6043,6 +6127,11 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
 }
 #else /* !CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
 
+static bool vma_shareable(struct vm_area_struct *vma, unsigned long addr)
+{
+	return false;
+}
+
 pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
 		      unsigned long addr, pud_t *pud)
 {
diff --git a/mm/mremap.c b/mm/mremap.c
index badfe17ade1f0..3c0ee2bb9c439 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -489,6 +489,9 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 	old_end = old_addr + len;
 	flush_cache_range(vma, old_addr, old_end);
 
+	if (is_vm_hugetlb_page(vma))
+		return move_hugetlb_page_tables(vma, old_addr, new_addr, len);
+
 	mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma,
 				vma->vm_mm, old_addr, old_end);
 	mmu_notifier_invalidate_range_start(&range);
 
@@ -642,6 +645,57 @@ static unsigned long move_vma(struct vm_area_struct *vma,
 		mremap_userfaultfd_prep(new_vma, uf);
 	}
 
+	if (is_vm_hugetlb_page(vma)) {
+		/*
+		 * Clear the old hugetlb private page reservation.
+		 * It has already been transferred to new_vma.
+		 *
+		 * The reservation tracking for hugetlb private mapping is
+		 * done in two places:
+		 * 1. implicit vma size, e.g. vma->vm_end - vma->vm_start
+		 * 2. tracking of hugepages that has been faulted in already,
+		 *    this is done via a linked list hanging off
+		 *    vma_resv_map(vma).
+		 *
+		 * Each hugepage vma also has hugepage specific vm_ops method
+		 * and there is an imbalance in the open() and close method.
+		 *
+		 * In the open method (hugetlb_vm_op_open), a ref count is
+		 * obtained on the structure that tracks faulted in pages.
+		 *
+		 * In the close method, it unconditionally returns pending
+		 * reservation on the vma as well as release a kref count and
+		 * calls release function upon last reference.
+		 *
+		 * Because of this unbalanced operation in the open/close
+		 * method, this code runs into trouble in the mremap() path:
+		 * copy_vma will copy the pointer to the reservation structure,
+		 * then calls vma->vm_ops->open() method, which only increments
+		 * ref count on the tracking structure and does not do actual
+		 * reservation. In the same code sequence from move_vma(), the
+		 * close() method is called as a result of cleaning up original
+		 * vma segment from a call to do_munmap(). At this stage, the
+		 * tracking and reservation is out of balance, e.g. the
+		 * reservation is returned, however there is an active ref on
+		 * the tracking structure.
+		 *
+		 * When the remap'ed vma unmaps (either implicit at process
+		 * exit or explicit munmap), the reservation will be returned
+		 * again because hugetlb_vm_op_close calculate pending
+		 * reservation unconditionally based on size of vma. This
+		 * cause h->resv_huge_pages. to underflow and no more hugepages
+		 * can be allocated to application in certain situation.
+		 *
+		 * We need to reset and clear the tracking reservation, such
+		 * that we don't prematurely returns hugepage reservation at
+		 * mremap time. The reservation should only be returned at
+		 * munmap() time. This is totally undesired, however, we
+		 * don't want to re-factor hugepage reservation code at this
+		 * stage for prod kernel. Resetting is the least risky method.
+		 */
+		clear_vma_resv_huge_pages(vma);
+	}
+
 	/* Conceal VM_ACCOUNT so old reservation is not undone */
 	if (vm_flags & VM_ACCOUNT && !(flags & MREMAP_DONTUNMAP)) {
 		vma->vm_flags &= ~VM_ACCOUNT;
@@ -736,9 +790,6 @@ static struct vm_area_struct *vma_to_resize(unsigned long addr,
 	    (vma->vm_flags & (VM_DONTEXPAND | VM_PFNMAP)))
 		return ERR_PTR(-EINVAL);
 
-	if (is_vm_hugetlb_page(vma))
-		return ERR_PTR(-EINVAL);
-
 	/* We can't remap across vm area boundaries */
 	if (old_len > vma->vm_end - addr)
 		return ERR_PTR(-EFAULT);
@@ -949,6 +1000,24 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
 	if (mmap_write_lock_killable(current->mm))
 		return -EINTR;
 
+	vma = find_vma(mm, addr);
+	if (!vma || vma->vm_start > addr)
+		goto out;
+
+	if (is_vm_hugetlb_page(vma)) {
+		struct hstate *h __maybe_unused = hstate_vma(vma);
+
+		if (old_len & ~huge_page_mask(h) ||
+		    new_len & ~huge_page_mask(h))
+			goto out;
+
+		/*
+		 * Don't allow remap expansion, because the underlying hugetlb
+		 * reservation is not yet capable to handle split reservation.
+		 */
+		if (new_len > old_len)
+			goto out;
+	}
+
 	if (flags & (MREMAP_FIXED | MREMAP_DONTUNMAP)) {
 		ret = mremap_to(addr, old_len, new_addr, new_len,
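
Not part of the patch, but for illustration of the syscall-level checks
added above: with this change, asking mremap() to grow a hugetlb backed
mapping is expected to fail with EINVAL. A hedged sketch (the 2MB hugepage
size, the mapping flags and the error handling below are assumptions, not
taken from the patch):

  /*
   * Hypothetical illustration: growing a hugetlb backed mapping via
   * mremap() should be rejected under the restriction added above.
   * Assumes a 2MB default hugepage size and enough reserved hugepages.
   */
  #define _GNU_SOURCE
  #include <stdio.h>
  #include <sys/mman.h>

  #define HPAGE (2UL * 1024 * 1024)  /* assumed default hugepage size */

  int main(void)
  {
          void *addr = mmap(NULL, HPAGE, PROT_READ | PROT_WRITE,
                            MAP_HUGETLB | MAP_ANONYMOUS | MAP_SHARED, -1, 0);
          if (addr == MAP_FAILED) {
                  perror("mmap(MAP_HUGETLB)");
                  return 1;
          }

          /* Growing the hugetlb mapping past its original size should fail. */
          void *grown = mremap(addr, HPAGE, 2 * HPAGE, MREMAP_MAYMOVE);
          if (grown == MAP_FAILED)
                  perror("mremap(grow)"); /* expected to fail with EINVAL */

          return 0;
  }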