From patchwork Wed Jun 29 09:53:49 2022
From: Will Deacon
To: linux-arm-kernel@lists.infradead.org
Cc: kernel-team@android.com, Will Deacon, Ard Biesheuvel, Steve Capper,
    Anshuman Khandual, Mike Kravetz, Catalin Marinas, Marc Zyngier
Subject: [PATCH] arm64: hugetlb: Restore TLB invalidation for BBM on contiguous ptes
Date: Wed, 29 Jun 2022 10:53:49 +0100
Message-Id: <20220629095349.25748-1-will@kernel.org>
X-Mailer: git-send-email 2.20.1
Commit fb396bb459c1 ("arm64/hugetlb: Drop TLB flush from get_clear_flush()")
removed TLB invalidation from get_clear_flush() [now get_clear_contig()]
on the basis that the core TLB invalidation code is aware of hugetlb
mappings backed by contiguous page-table entries and will cover the
correct virtual address range.

However, this change also resulted in the TLB invalidation being removed
from the "break" step in the break-before-make (BBM) sequence used
internally by huge_ptep_set_{access_flags,wrprotect}(), therefore making
the BBM sequence unsafe irrespective of later invalidation.

Although the architecture is desperately unclear about how exactly
contiguous ptes should be updated in a live page-table, restore TLB
invalidation to our BBM sequence under the assumption that BBM is the
right thing to be doing in the first place.

Cc: Ard Biesheuvel
Cc: Steve Capper
Cc: Anshuman Khandual
Cc: Mike Kravetz
Cc: Catalin Marinas
Cc: Marc Zyngier
Signed-off-by: Will Deacon
Reviewed-by: Catalin Marinas
Reviewed-by: Anshuman Khandual
---
Found by inspection.

 arch/arm64/mm/hugetlbpage.c | 30 +++++++++++++++++++++---------
 1 file changed, 21 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index e2a5ec9fdc0d..3618ef3f6d81 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -214,6 +214,19 @@ static pte_t get_clear_contig(struct mm_struct *mm,
 	return orig_pte;
 }
 
+static pte_t get_clear_contig_flush(struct mm_struct *mm,
+				    unsigned long addr,
+				    pte_t *ptep,
+				    unsigned long pgsize,
+				    unsigned long ncontig)
+{
+	pte_t orig_pte = get_clear_contig(mm, addr, ptep, pgsize, ncontig);
+	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
+
+	flush_tlb_range(&vma, addr, addr + (pgsize * ncontig));
+	return orig_pte;
+}
+
 /*
  * Changing some bits of contiguous entries requires us to follow a
  * Break-Before-Make approach, breaking the whole contiguous set
@@ -447,19 +460,20 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 	int ncontig, i;
 	size_t pgsize = 0;
 	unsigned long pfn = pte_pfn(pte), dpfn;
+	struct mm_struct *mm = vma->vm_mm;
 	pgprot_t hugeprot;
 	pte_t orig_pte;
 
 	if (!pte_cont(pte))
 		return ptep_set_access_flags(vma, addr, ptep, pte, dirty);
 
-	ncontig = find_num_contig(vma->vm_mm, addr, ptep, &pgsize);
+	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
 	dpfn = pgsize >> PAGE_SHIFT;
 
 	if (!__cont_access_flags_changed(ptep, pte, ncontig))
 		return 0;
 
-	orig_pte = get_clear_contig(vma->vm_mm, addr, ptep, pgsize, ncontig);
+	orig_pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
 
 	/* Make sure we don't lose the dirty or young state */
 	if (pte_dirty(orig_pte))
@@ -470,7 +484,7 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 
 	hugeprot = pte_pgprot(pte);
 	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
-		set_pte_at(vma->vm_mm, addr, ptep, pfn_pte(pfn, hugeprot));
+		set_pte_at(mm, addr, ptep, pfn_pte(pfn, hugeprot));
 
 	return 1;
 }
@@ -492,7 +506,7 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
 	dpfn = pgsize >> PAGE_SHIFT;
 
-	pte = get_clear_contig(mm, addr, ptep, pgsize, ncontig);
+	pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
 	pte = pte_wrprotect(pte);
 
 	hugeprot = pte_pgprot(pte);
@@ -505,17 +519,15 @@
 pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
 			    unsigned long addr, pte_t *ptep)
 {
+	struct mm_struct *mm = vma->vm_mm;
 	size_t pgsize;
 	int ncontig;
-	pte_t orig_pte;
 
 	if (!pte_cont(READ_ONCE(*ptep)))
 		return ptep_clear_flush(vma, addr, ptep);
 
-	ncontig = find_num_contig(vma->vm_mm, addr, ptep, &pgsize);
-	orig_pte = get_clear_contig(vma->vm_mm, addr, ptep, pgsize, ncontig);
-	flush_tlb_range(vma, addr, addr + pgsize * ncontig);
-	return orig_pte;
+	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
+	return get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
 }
 
 static int __init hugetlbpage_init(void)
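
For readers unfamiliar with the pattern, below is a simplified sketch of
the break-before-make sequence that get_clear_contig_flush() restores for
a set of contiguous ptes. It is illustrative only, not the kernel code
as-is: the loop structure and the 'prot'/'pfn' locals are invented for
the example, 'vma' stands for the dummy TLB_FLUSH_VMA(mm, 0) built by the
helper, and the real get_clear_contig() additionally folds the dirty and
young bits of the old entries into the pte it returns:

	/* 1. Break: make every pte in the contiguous set invalid. */
	for (i = 0; i < ncontig; i++)
		pte_clear(mm, addr + i * pgsize, ptep + i);

	/*
	 * 2. Invalidate: flush the whole virtual range, since the TLB may
	 * cache the contiguous set as a single entry. This is the step
	 * that commit fb396bb459c1 dropped and this patch brings back.
	 */
	flush_tlb_range(&vma, addr, addr + ncontig * pgsize);

	/* 3. Make: only now is it safe to publish the new entries. */
	for (i = 0; i < ncontig; i++, pfn += pgsize >> PAGE_SHIFT)
		set_pte_at(mm, addr + i * pgsize, ptep + i,
			   pfn_pte(pfn, prot));

Without step 2 between the clears and the writes, a CPU could still hold
a stale translation covering part (or all) of the range while the new
entries are written, which is precisely the window BBM is meant to close.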