From patchwork Fri Dec 6 21:28:36 2024
X-Patchwork-Submitter: Guillaume Morin <guillaume@morinfr.org>
X-Patchwork-Id: 13897808
Date: Fri, 6 Dec 2024 22:28:36 +0100
From: Guillaume Morin <guillaume@morinfr.org>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, guillaume@morinfr.org, Muchun Song,
 Andrew Morton, Peter Xu, David Hildenbrand, Eric Hagberg
Subject: [PATCH v5] mm/hugetlb: support FOLL_FORCE|FOLL_WRITE
Eric reported that PTRACE_POKETEXT fails when applications use hugetlb for
mapping text using huge pages. Before commit 1d8d14641fd9 ("mm/hugetlb:
support write-faults in shared mappings"), PTRACE_POKETEXT worked by
accident, but it was buggy and silently ended up mapping pages writable
into the page tables even though VM_WRITE was not set.

In general, FOLL_FORCE|FOLL_WRITE currently does not work with hugetlb.
Let's implement FOLL_FORCE|FOLL_WRITE properly for hugetlb, such that what
used to work in the past by accident now properly works, allowing
applications using hugetlb for text etc. to get properly debugged.

This change might also be required to implement uprobes support for
hugetlb [1].

[1] https://lore.kernel.org/lkml/ZiK50qob9yl5e0Xz@bender.morinfr.org/

Cc: Muchun Song
Cc: Andrew Morton
Cc: Peter Xu
Cc: David Hildenbrand
Cc: Eric Hagberg
Signed-off-by: Guillaume Morin
---
Changes in v2:
 - Improved commit message
Changes in v3:
 - Fix potential uninitialized mem access in follow_huge_pud
 - define pud_soft_dirty when soft dirty is not enabled
Changes in v4:
 - Remove the soft dirty pud check
 - Remove the pud_soft_dirty added in v3
Changes in v5:
 - Remove unnecessary hugetlb hunk after David's changes
 - properly get the page in follow_huge_pud to call
   can_follow_write_common()

 mm/gup.c     | 91 +++++++++++++++++++++++++---------------------------
 mm/hugetlb.c | 17 +++++-----
 2 files changed, 53 insertions(+), 55 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 746070a1d8bf..425a084079d8 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -587,6 +587,33 @@ static struct folio *try_grab_folio_fast(struct page *page, int refs,
 }
 #endif	/* CONFIG_HAVE_GUP_FAST */
 
+/* Common code for can_follow_write_* */
+static inline bool can_follow_write_common(struct page *page,
+		struct vm_area_struct *vma, unsigned int flags)
+{
+	/* Maybe FOLL_FORCE is set to override it? */
+	if (!(flags & FOLL_FORCE))
+		return false;
+
+	/* But FOLL_FORCE has no effect on shared mappings */
+	if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
+		return false;
+
+	/* ... or read-only private ones */
+	if (!(vma->vm_flags & VM_MAYWRITE))
+		return false;
+
+	/* ... or already writable ones that just need to take a write fault */
+	if (vma->vm_flags & VM_WRITE)
+		return false;
+
+	/*
+	 * See can_change_pte_writable(): we broke COW and could map the page
+	 * writable if we have an exclusive anonymous page ...
+	 */
+	return page && PageAnon(page) && PageAnonExclusive(page);
+}
+
 static struct page *no_page_table(struct vm_area_struct *vma,
 		unsigned int flags, unsigned long address)
 {
@@ -613,6 +640,18 @@ static struct page *no_page_table(struct vm_area_struct *vma,
 }
 
 #ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
+/* FOLL_FORCE can write to even unwritable PUDs in COW mappings. */
+static inline bool can_follow_write_pud(pud_t pud, struct page *page,
+					struct vm_area_struct *vma,
+					unsigned int flags)
+{
+	/* If the pud is writable, we can write to the page. */
+	if (pud_write(pud))
+		return true;
+
+	return can_follow_write_common(page, vma, flags);
+}
+
 static struct page *follow_huge_pud(struct vm_area_struct *vma,
 				    unsigned long addr, pud_t *pudp,
 				    int flags, struct follow_page_context *ctx)
@@ -625,10 +664,11 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 
 	assert_spin_locked(pud_lockptr(mm, pudp));
 
-	if ((flags & FOLL_WRITE) && !pud_write(pud))
+	if (!pud_present(pud))
 		return NULL;
 
-	if (!pud_present(pud))
+	if ((flags & FOLL_WRITE) &&
+	    !can_follow_write_pud(pud, pfn_to_page(pfn), vma, flags))
 		return NULL;
 
 	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
@@ -677,27 +717,7 @@ static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
 	if (pmd_write(pmd))
 		return true;
 
-	/* Maybe FOLL_FORCE is set to override it? */
-	if (!(flags & FOLL_FORCE))
-		return false;
-
-	/* But FOLL_FORCE has no effect on shared mappings */
-	if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
-		return false;
-
-	/* ... or read-only private ones */
-	if (!(vma->vm_flags & VM_MAYWRITE))
-		return false;
-
-	/* ... or already writable ones that just need to take a write fault */
-	if (vma->vm_flags & VM_WRITE)
-		return false;
-
-	/*
-	 * See can_change_pte_writable(): we broke COW and could map the page
-	 * writable if we have an exclusive anonymous page ...
-	 */
-	if (!page || !PageAnon(page) || !PageAnonExclusive(page))
+	if (!can_follow_write_common(page, vma, flags))
 		return false;
 
 	/* ... and a write-fault isn't required for other reasons. */
@@ -798,27 +818,7 @@ static inline bool can_follow_write_pte(pte_t pte, struct page *page,
 	if (pte_write(pte))
 		return true;
 
-	/* Maybe FOLL_FORCE is set to override it? */
-	if (!(flags & FOLL_FORCE))
-		return false;
-
-	/* But FOLL_FORCE has no effect on shared mappings */
-	if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
-		return false;
-
-	/* ... or read-only private ones */
-	if (!(vma->vm_flags & VM_MAYWRITE))
-		return false;
-
-	/* ... or already writable ones that just need to take a write fault */
-	if (vma->vm_flags & VM_WRITE)
-		return false;
-
-	/*
-	 * See can_change_pte_writable(): we broke COW and could map the page
-	 * writable if we have an exclusive anonymous page ...
-	 */
-	if (!page || !PageAnon(page) || !PageAnonExclusive(page))
+	if (!can_follow_write_common(page, vma, flags))
 		return false;
 
 	/* ... and a write-fault isn't required for other reasons. */
@@ -1285,9 +1285,6 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 	if (!(vm_flags & VM_WRITE) || (vm_flags & VM_SHADOW_STACK)) {
 		if (!(gup_flags & FOLL_FORCE))
 			return -EFAULT;
-		/* hugetlb does not support FOLL_FORCE|FOLL_WRITE. */
-		if (is_vm_hugetlb_page(vma))
-			return -EFAULT;
 		/*
 		 * We used to let the write,force case do COW in a
 		 * VM_MAYWRITE VM_SHARED !VM_WRITE vma, so ptrace could
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ea2ed8e301ef..2438dcc0c03a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5169,6 +5169,13 @@ static void set_huge_ptep_writable(struct vm_area_struct *vma,
 	update_mmu_cache(vma, address, ptep);
 }
 
+static void set_huge_ptep_maybe_writable(struct vm_area_struct *vma,
+					 unsigned long address, pte_t *ptep)
+{
+	if (vma->vm_flags & VM_WRITE)
+		set_huge_ptep_writable(vma, address, ptep);
+}
+
 bool is_hugetlb_entry_migration(pte_t pte)
 {
 	swp_entry_t swp;
@@ -5802,13 +5809,6 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
 	if (!unshare && huge_pte_uffd_wp(pte))
 		return 0;
 
-	/*
-	 * hugetlb does not support FOLL_FORCE-style write faults that keep the
-	 * PTE mapped R/O such as maybe_mkwrite() would do.
-	 */
-	if (WARN_ON_ONCE(!unshare && !(vma->vm_flags & VM_WRITE)))
-		return VM_FAULT_SIGSEGV;
-
 	/* Let's take out MAP_SHARED mappings first. */
 	if (vma->vm_flags & VM_MAYSHARE) {
 		set_huge_ptep_writable(vma, vmf->address, vmf->pte);
@@ -5837,7 +5837,8 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
 			SetPageAnonExclusive(&old_folio->page);
 	}
 	if (likely(!unshare))
-		set_huge_ptep_writable(vma, vmf->address, vmf->pte);
+		set_huge_ptep_maybe_writable(vma, vmf->address,
+					     vmf->pte);
 
 	delayacct_wpcopy_end();
 	return 0;