From patchwork Tue Mar 30 06:07:49 2021
X-Patchwork-Submitter: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
X-Patchwork-Id: 12171717
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org, kaleshsingh@google.com,
 npiggin@gmail.com, joel@joelfernandes.org,
 "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Subject: [PATCH v3 6/9] mm/mremap: Use range flush that does TLB and page walk cache flush
Date: Tue, 30 Mar 2021 11:37:49 +0530
Message-Id: <20210330060752.592769-7-aneesh.kumar@linux.ibm.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210330060752.592769-1-aneesh.kumar@linux.ibm.com>
References: <20210330060752.592769-1-aneesh.kumar@linux.ibm.com>
MIME-Version: 1.0

Some architectures have the concept of a page walk cache, which needs to be
flushed when updating higher levels of the page tables. A fast mremap that
moves page table pages instead of copying pte entries should flush the page
walk cache, since the old translation cache is no longer valid. Add a new
helper, flush_pte_tlb_pwc_range(), which invalidates both the TLB and the
page walk cache for a range whose TLB entries are mapped with page size
PAGE_SIZE.
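The generic fallback relies on a standard kernel override idiom: an
architecture header defines a macro with the same name as its inline helper,
and generic code compiles its fallback only when that macro is absent. Below
is a minimal stand-alone sketch of that idiom; the name arch_flush_range is
purely illustrative, not a kernel function:

#include <stdio.h>

/*
 * Stand-in for an architecture header: defining a macro with the same
 * name as the function marks the override as present.
 */
#define arch_flush_range arch_flush_range
static void arch_flush_range(unsigned long start, unsigned long end)
{
	printf("arch override: flush %#lx-%#lx (TLB + page walk cache)\n",
	       start, end);
}

/*
 * Stand-in for generic code: this fallback is compiled only when no
 * architecture override was defined above.
 */
#ifndef arch_flush_range
#define arch_flush_range arch_flush_range
static void arch_flush_range(unsigned long start, unsigned long end)
{
	printf("generic fallback: flush %#lx-%#lx (TLB only)\n", start, end);
}
#endif

int main(void)
{
	arch_flush_range(0x1000, 0x2000);	/* resolves to the arch override */
	return 0;
}

The same pattern lets the powerpc definition of flush_pte_tlb_pwc_range()
shadow the generic version added to mm/mremap.c without any Kconfig plumbing.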
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/tlbflush.h | 11 +++++++++++
 mm/mremap.c                                   | 15 +++++++++++++--
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush.h b/arch/powerpc/include/asm/book3s/64/tlbflush.h
index efe5336e2b6f..b9022eb9f20e 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush.h
@@ -80,6 +80,17 @@ static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
 	return flush_hugetlb_tlb_pwc_range(vma, start, end, false);
 }
 
+#define flush_pte_tlb_pwc_range flush_pte_tlb_pwc_range
+static inline void flush_pte_tlb_pwc_range(struct vm_area_struct *vma,
+					   unsigned long start, unsigned long end,
+					   bool also_pwc)
+{
+	if (radix_enabled())
+		return radix__flush_tlb_pwc_range_psize(vma->vm_mm, start,
+							end, mmu_virtual_psize, also_pwc);
+	return hash__flush_tlb_range(vma, start, end);
+}
+
 static inline void flush_tlb_range(struct vm_area_struct *vma,
 				   unsigned long start, unsigned long end)
 {
diff --git a/mm/mremap.c b/mm/mremap.c
index 574287f9bb39..0e7b11daafee 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -210,6 +210,17 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
 		drop_rmap_locks(vma);
 }
 
+#ifndef flush_pte_tlb_pwc_range
+#define flush_pte_tlb_pwc_range flush_pte_tlb_pwc_range
+static inline void flush_pte_tlb_pwc_range(struct vm_area_struct *vma,
+					   unsigned long start,
+					   unsigned long end,
+					   bool also_pwc)
+{
+	return flush_tlb_range(vma, start, end);
+}
+#endif
+
 #ifdef CONFIG_HAVE_MOVE_PMD
 static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 		  unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
@@ -260,7 +271,7 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 	VM_BUG_ON(!pmd_none(*new_pmd));
 
 	pmd_populate(mm, new_pmd, (pgtable_t)pmd_page_vaddr(pmd));
-	flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
+	flush_pte_tlb_pwc_range(vma, old_addr, old_addr + PMD_SIZE, true);
 	if (new_ptl != old_ptl)
 		spin_unlock(new_ptl);
 	spin_unlock(old_ptl);
@@ -307,7 +318,7 @@ static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
 	VM_BUG_ON(!pud_none(*new_pud));
 
 	pud_populate(mm, new_pud, (pmd_t *)pud_page_vaddr(pud));
-	flush_tlb_range(vma, old_addr, old_addr + PUD_SIZE);
+	flush_pte_tlb_pwc_range(vma, old_addr, old_addr + PUD_SIZE, true);
 	if (new_ptl != old_ptl)
 		spin_unlock(new_ptl);
 	spin_unlock(old_ptl);
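Both converted call sites pass also_pwc=true: moving a PMD or PUD entry
relocates an entire page table page, so translations cached for the old table
hierarchy, not just leaf TLB entries, must be invalidated. The generic
fallback deliberately ignores the flag and keeps the existing
flush_tlb_range() call, so behaviour is unchanged on architectures whose TLB
range flush is already expected to invalidate any page-walk
(paging-structure) caches.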