From patchwork Fri Jun 24 17:36:47 2022
X-Patchwork-Submitter: James Houghton
X-Patchwork-Id: 12894942
Date: Fri, 24 Jun 2022 17:36:47 +0000
In-Reply-To: <20220624173656.2033256-1-jthoughton@google.com>
Message-Id: <20220624173656.2033256-18-jthoughton@google.com>
References: <20220624173656.2033256-1-jthoughton@google.com>
Subject: [RFC PATCH 17/26]
 hugetlb: update follow_hugetlb_page to support HGM
From: James Houghton
To: Mike Kravetz, Muchun Song, Peter Xu
Cc: David Hildenbrand, David Rientjes, Axel Rasmussen, Mina Almasry,
    Jue Wang, Manish Mishra, "Dr . David Alan Gilbert",
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, James Houghton

This enables support for GUP, and it is needed for the KVM demand paging
self-test to work. One important change here is that, whereas we never
needed to grab the i_mmap_sem before, we now grab it for reading when doing
high-granularity page table walks, to prevent someone from collapsing the
page tables out from under us.
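
To illustrate the new locking rule, here is a minimal sketch (not part of the
diff below) of the lock-then-walk pattern this patch adds to
follow_hugetlb_page(). It reuses the hugetlb_pte helpers used in the diff
(hugetlb_pte_populate, hugetlb_walk_to, hugetlb_pte_lock,
hugetlb_hgm_enabled), which are introduced earlier in this series, and it
elides the fault/retry handling that follow_hugetlb_page() still performs:

	struct hugetlb_pte hpte;
	spinlock_t *ptl;
	pte_t *pte;

	pte = huge_pte_offset(mm, vaddr & huge_page_mask(h),
			      huge_page_size(h));
	if (pte) {
		/* Start from the hstate-level PTE. */
		hugetlb_pte_populate(&hpte, pte, huge_page_shift(h));
		if (hugetlb_hgm_enabled(vma)) {
			/*
			 * Hold the mapping rwsem for reading so the page
			 * tables cannot be collapsed out from under us
			 * while we walk below the hstate-level PTE.
			 */
			i_mmap_lock_read(vma->vm_file->f_mapping);
			hugetlb_walk_to(mm, &hpte, vaddr, PAGE_SIZE,
					/*stop_at_none=*/true);
		}
		ptl = hugetlb_pte_lock(mm, &hpte);
		/* ... operate on the mapping at hugetlb_pte_size(&hpte) ... */
		spin_unlock(ptl);
		if (hugetlb_hgm_enabled(vma))
			i_mmap_unlock_read(vma->vm_file->f_mapping);
	}

The diff below follows exactly this shape, with a has_i_mmap_sem flag to
make sure the rwsem is dropped on every exit path.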
Signed-off-by: James Houghton
---
 mm/hugetlb.c | 70 ++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 57 insertions(+), 13 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f9c7daa6c090..aadfcee947cf 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6298,14 +6298,18 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	unsigned long vaddr = *position;
 	unsigned long remainder = *nr_pages;
 	struct hstate *h = hstate_vma(vma);
+	struct address_space *mapping = vma->vm_file->f_mapping;
 	int err = -EFAULT, refs;
+	bool has_i_mmap_sem = false;
 
 	while (vaddr < vma->vm_end && remainder) {
 		pte_t *pte;
 		spinlock_t *ptl = NULL;
 		bool unshare = false;
 		int absent;
+		unsigned long pages_per_hpte;
 		struct page *page;
+		struct hugetlb_pte hpte;
 
 		/*
 		 * If we have a pending SIGKILL, don't keep faulting pages and
@@ -6325,9 +6329,23 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		 */
 		pte = huge_pte_offset(mm, vaddr & huge_page_mask(h),
 				      huge_page_size(h));
-		if (pte)
-			ptl = huge_pte_lock(h, mm, pte);
-		absent = !pte || huge_pte_none(huge_ptep_get(pte));
+		if (pte) {
+			hugetlb_pte_populate(&hpte, pte, huge_page_shift(h));
+			if (hugetlb_hgm_enabled(vma)) {
+				BUG_ON(has_i_mmap_sem);
+				i_mmap_lock_read(mapping);
+				/*
+				 * Need to hold the mapping semaphore for
+				 * reading to do a HGM walk.
+				 */
+				has_i_mmap_sem = true;
+				hugetlb_walk_to(mm, &hpte, vaddr, PAGE_SIZE,
+						/*stop_at_none=*/true);
+			}
+			ptl = hugetlb_pte_lock(mm, &hpte);
+		}
+
+		absent = !pte || hugetlb_pte_none(&hpte);
 
 		/*
 		 * When coredumping, it suits get_dump_page if we just return
@@ -6338,8 +6356,13 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		 */
 		if (absent && (flags & FOLL_DUMP) &&
 		    !hugetlbfs_pagecache_present(h, vma, vaddr)) {
-			if (pte)
+			if (pte) {
+				if (has_i_mmap_sem) {
+					i_mmap_unlock_read(mapping);
+					has_i_mmap_sem = false;
+				}
 				spin_unlock(ptl);
+			}
 			remainder = 0;
 			break;
 		}
@@ -6359,8 +6382,13 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			vm_fault_t ret;
 			unsigned int fault_flags = 0;
 
-			if (pte)
+			if (pte) {
+				if (has_i_mmap_sem) {
+					i_mmap_unlock_read(mapping);
+					has_i_mmap_sem = false;
+				}
 				spin_unlock(ptl);
+			}
 			if (flags & FOLL_WRITE)
 				fault_flags |= FAULT_FLAG_WRITE;
 			else if (unshare)
@@ -6403,8 +6431,11 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			continue;
 		}
 
-		pfn_offset = (vaddr & ~huge_page_mask(h)) >> PAGE_SHIFT;
-		page = pte_page(huge_ptep_get(pte));
+		pfn_offset = (vaddr & ~hugetlb_pte_mask(&hpte)) >> PAGE_SHIFT;
+		page = pte_page(hugetlb_ptep_get(&hpte));
+		pages_per_hpte = hugetlb_pte_size(&hpte) / PAGE_SIZE;
+		if (hugetlb_hgm_enabled(vma))
+			page = compound_head(page);
 
 		VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
 			       !PageAnonExclusive(page), page);
@@ -6414,17 +6445,21 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		 * and skip the same_page loop below.
 		 */
 		if (!pages && !vmas && !pfn_offset &&
-		    (vaddr + huge_page_size(h) < vma->vm_end) &&
-		    (remainder >= pages_per_huge_page(h))) {
-			vaddr += huge_page_size(h);
-			remainder -= pages_per_huge_page(h);
-			i += pages_per_huge_page(h);
+		    (vaddr + pages_per_hpte < vma->vm_end) &&
+		    (remainder >= pages_per_hpte)) {
+			vaddr += pages_per_hpte;
+			remainder -= pages_per_hpte;
+			i += pages_per_hpte;
 			spin_unlock(ptl);
+			if (has_i_mmap_sem) {
+				has_i_mmap_sem = false;
+				i_mmap_unlock_read(mapping);
+			}
 			continue;
 		}
 
 		/* vaddr may not be aligned to PAGE_SIZE */
-		refs = min3(pages_per_huge_page(h) - pfn_offset, remainder,
+		refs = min3(pages_per_hpte - pfn_offset, remainder,
 			(vma->vm_end - ALIGN_DOWN(vaddr, PAGE_SIZE)) >> PAGE_SHIFT);
 
 		if (pages || vmas)
@@ -6447,6 +6482,10 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			if (WARN_ON_ONCE(!try_grab_folio(pages[i], refs,
 							 flags))) {
 				spin_unlock(ptl);
+				if (has_i_mmap_sem) {
+					has_i_mmap_sem = false;
+					i_mmap_unlock_read(mapping);
+				}
 				remainder = 0;
 				err = -ENOMEM;
 				break;
@@ -6458,8 +6497,13 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		i += refs;
 
 		spin_unlock(ptl);
+		if (has_i_mmap_sem) {
+			has_i_mmap_sem = false;
+			i_mmap_unlock_read(mapping);
+		}
 	}
 	*nr_pages = remainder;
+	BUG_ON(has_i_mmap_sem);
 	/*
 	 * setting position is actually required only if remainder is
 	 * not zero but it's faster not to add a "if (remainder)"