From patchwork Mon Mar 27 09:27:26 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Punit Agrawal
X-Patchwork-Id: 9645777
From: Punit Agrawal
To: will.deacon@arm.com, catalin.marinas@arm.com
Subject: [PATCH v2 7/7] arm64: hugetlb: Add break-before-make logic for contiguous entries
Date: Mon, 27 Mar 2017 10:27:26 +0100
Message-Id: <20170327092726.31730-8-punit.agrawal@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170327092726.31730-1-punit.agrawal@arm.com>
References: <20170327092726.31730-1-punit.agrawal@arm.com>
Cc: mark.rutland@arm.com, David Woods, Punit Agrawal,
 linux-arm-kernel@lists.infradead.org, Steve Capper

From: Steve Capper

It has become apparent that one has to take special care when
modifying attributes of memory mappings that employ the contiguous
bit.

Both the requirement and the architecturally correct
"Break-Before-Make" technique of updating contiguous entries are
described in ARM DDI 0487A.k_iss10775, "Misprogramming of the
Contiguous bit", page D4-1762.

The huge pte accessors currently replace the attributes of contiguous
pte entries in place, which can, on certain platforms, lead to TLB
conflict aborts or even erroneous results being returned from TLB
lookups.

This patch adds a helper function, get_clear_flush(), that clears a
contiguous range of entries and returns the head pte, whilst taking
care to retain any dirty bit information that could have been set by
hardware DBM. A TLB invalidation is then performed to ensure that no
stale TLB entries remain for the region.

Cc: David Woods
Signed-off-by: Steve Capper
(Fixed indentation and some comments cleanup)
Signed-off-by: Punit Agrawal
Reviewed-by: Mark Rutland
---
 arch/arm64/mm/hugetlbpage.c | 81 +++++++++++++++++++++++++++++++++++----------
 1 file changed, 64 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index ef85d0656039..e2106932daa0 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -68,6 +68,47 @@ static int find_num_contig(struct mm_struct *mm, unsigned long addr,
 	return CONT_PTES;
 }
 
+/*
+ * Changing some bits of contiguous entries requires us to follow a
+ * Break-Before-Make approach, breaking the whole contiguous set
+ * before we can change any entries.  See ARM DDI 0487A.k_iss10775,
+ * "Misprogramming of the Contiguous bit", page D4-1762.
+ *
+ * This helper performs the break step.
+ */
+static pte_t get_clear_flush(struct mm_struct *mm,
+			     unsigned long addr,
+			     pte_t *ptep,
+			     unsigned long pgsize,
+			     unsigned long ncontig)
+{
+	unsigned long i, saddr = addr;
+	struct vm_area_struct vma = { .vm_mm = mm };
+	pte_t orig_pte = huge_ptep_get(ptep);
+
+	/*
+	 * If we already have a faulting entry then we don't need
+	 * to break before make (there won't be a tlb entry cached).
+	 */
+	if (!pte_present(orig_pte))
+		return orig_pte;
+
+	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
+		pte_t pte = ptep_get_and_clear(mm, addr, ptep);
+
+		/*
+		 * If HW_AFDBM is enabled, then the HW could turn on
+		 * the dirty bit for any page in the set, so check
+		 * them all.  All hugetlb entries are already young.
+		 */
+		if (IS_ENABLED(CONFIG_ARM64_HW_AFDBM) && pte_dirty(pte))
+			orig_pte = pte_mkdirty(orig_pte);
+	}
+
+	flush_tlb_range(&vma, saddr, addr);
+	return orig_pte;
+}
+
 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 		     pte_t *ptep, pte_t pte)
 {
@@ -93,6 +134,8 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 	dpfn = pgsize >> PAGE_SHIFT;
 	hugeprot = pte_pgprot(pte);
 
+	get_clear_flush(mm, addr, ptep, pgsize, ncontig);
+
 	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn) {
 		pr_debug("%s: set pte %p to 0x%llx\n", __func__, ptep,
 			 pte_val(pfn_pte(pfn, hugeprot)));
@@ -202,7 +245,7 @@ pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
 pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 			      unsigned long addr, pte_t *ptep)
 {
-	int ncontig, i;
+	int ncontig;
 	size_t pgsize;
 	pte_t orig_pte = huge_ptep_get(ptep);
 
@@ -210,17 +253,8 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 		return ptep_get_and_clear(mm, addr, ptep);
 
 	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
-	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
-		/*
-		 * If HW_AFDBM is enabled, then the HW could
-		 * turn on the dirty bit for any of the page
-		 * in the set, so check them all.
-		 */
-		if (pte_dirty(ptep_get_and_clear(mm, addr, ptep)))
-			orig_pte = pte_mkdirty(orig_pte);
-	}
-
-	return orig_pte;
+
+	return get_clear_flush(mm, addr, ptep, pgsize, ncontig);
 }
 
 int huge_ptep_set_access_flags(struct vm_area_struct *vma,
@@ -230,6 +264,7 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 	int ncontig, i, changed = 0;
 	size_t pgsize = 0;
 	unsigned long pfn = pte_pfn(pte), dpfn;
+	pte_t orig_pte;
 	pgprot_t hugeprot;
 
 	if (!pte_cont(pte))
@@ -239,10 +274,12 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 	dpfn = pgsize >> PAGE_SHIFT;
 	hugeprot = pte_pgprot(pte);
 
-	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn) {
-		changed |= ptep_set_access_flags(vma, addr, ptep,
-				pfn_pte(pfn, hugeprot), dirty);
-	}
+	orig_pte = get_clear_flush(vma->vm_mm, addr, ptep, pgsize, ncontig);
+	if (!pte_same(orig_pte, pte))
+		changed = 1;
+
+	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
+		set_pte_at(vma->vm_mm, addr, ptep, pfn_pte(pfn, hugeprot));
 
 	return changed;
 }
@@ -252,6 +289,9 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 {
 	int ncontig, i;
 	size_t pgsize;
+	pte_t pte = pte_wrprotect(huge_ptep_get(ptep)), orig_pte;
+	unsigned long pfn = pte_pfn(pte), dpfn;
+	pgprot_t hugeprot;
 
 	if (!pte_cont(*ptep)) {
 		ptep_set_wrprotect(mm, addr, ptep);
@@ -259,8 +299,15 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 	}
 
 	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
-	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
-		ptep_set_wrprotect(mm, addr, ptep);
+	dpfn = pgsize >> PAGE_SHIFT;
+
+	orig_pte = get_clear_flush(mm, addr, ptep, pgsize, ncontig);
+	if (pte_dirty(orig_pte))
+		pte = pte_mkdirty(pte);
+
+	hugeprot = pte_pgprot(pte);
+	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
+		set_pte_at(mm, addr, ptep, pfn_pte(pfn, hugeprot));
 }
 
 void huge_ptep_clear_flush(struct vm_area_struct *vma,
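For readers new to this area, the ordering that the new helper enforces
can be summarised with a short illustrative sketch. The function below
is hypothetical (it is not part of this patch; it simply mirrors the
new huge_ptep_set_wrprotect() body) and assumes a kernel build context
with the usual arm64 pgtable helpers available.

/*
 * Illustrative only: update the attributes of a contiguous set of
 * huge ptes with break-before-make.  "Break" clears every entry and
 * invalidates the TLB for the range (get_clear_flush() above), so no
 * stale or conflicting TLB entries can exist for it; "make" then
 * writes the new entries.
 */
static void example_set_cont_attrs(struct mm_struct *mm, unsigned long addr,
				   pte_t *ptep, pte_t newpte,
				   size_t pgsize, int ncontig)
{
	unsigned long pfn = pte_pfn(newpte);
	unsigned long dpfn = pgsize >> PAGE_SHIFT;
	pgprot_t hugeprot;
	pte_t orig_pte;
	int i;

	/* Break: clear the whole set and flush the TLB for the range. */
	orig_pte = get_clear_flush(mm, addr, ptep, pgsize, ncontig);

	/* Preserve a dirty bit that hardware DBM may have set meanwhile. */
	if (pte_dirty(orig_pte))
		newpte = pte_mkdirty(newpte);

	/* Make: write the new entries only after the invalidation. */
	hugeprot = pte_pgprot(newpte);
	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
		set_pte_at(mm, addr, ptep, pfn_pte(pfn, hugeprot));
}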