From patchwork Sun Jan 31 00:11:24 2021
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 12057463
From: Nadav Amit
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Nadav Amit, Andrea Arcangeli, Andrew Morton, Andy Lutomirski,
    Dave Hansen, Peter Zijlstra, Thomas Gleixner, Will Deacon,
    Yu Zhao, Nick Piggin, x86@kernel.org
Subject: [RFC 12/20] mm/tlb: save the VMA that is flushed during tlb_start_vma()
Date: Sat, 30 Jan 2021 16:11:24 -0800
Message-Id: <20210131001132.3368247-13-namit@vmware.com>
In-Reply-To: <20210131001132.3368247-1-namit@vmware.com>
References: <20210131001132.3368247-1-namit@vmware.com>
X-Mailer: git-send-email 2.25.1

From: Nadav Amit

Certain architectures need information about the vma that is about to be
flushed. Currently, an artificial vma is constructed using the original
vma information. Instead of saving the flags, record the vma during
tlb_start_vma() and use this vma when calling flush_tlb_range().

Record the vma unconditionally, as it will be needed for per-VMA deferred
TLB flush tracking, and the overhead of tracking it unconditionally
should be negligible.

Signed-off-by: Nadav Amit
Cc: Andrea Arcangeli
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yu Zhao
Cc: Nick Piggin
Cc: x86@kernel.org
---
 include/asm-generic/tlb.h | 56 +++++++++++++--------------------------
 1 file changed, 19 insertions(+), 37 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index b97136b7010b..041be2ef4426 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -252,6 +252,13 @@ extern bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page,
 struct mmu_gather {
 	struct mm_struct	*mm;
 
+	/*
+	 * The current vma. This information changes upon tlb_start_vma() and
+	 * is therefore only valid between tlb_start_vma() and tlb_end_vma()
+	 * calls.
+	 */
+	struct vm_area_struct	*vma;
+
 #ifdef CONFIG_MMU_GATHER_TABLE_FREE
 	struct mmu_table_batch	*batch;
 #endif
@@ -283,12 +290,6 @@ struct mmu_gather {
 	unsigned int		cleared_puds : 1;
 	unsigned int		cleared_p4ds : 1;
 
-	/*
-	 * tracks VM_EXEC | VM_HUGETLB in tlb_start_vma
-	 */
-	unsigned int		vma_exec : 1;
-	unsigned int		vma_huge : 1;
-
 	unsigned int		batch_count;
 
 #ifndef CONFIG_MMU_GATHER_NO_GATHER
@@ -352,10 +353,6 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 		flush_tlb_mm(tlb->mm);
 }
 
-static inline void
-tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
-
-#define tlb_end_vma tlb_end_vma
 static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
 
 #else /* CONFIG_MMU_GATHER_NO_RANGE */
@@ -364,7 +361,7 @@ static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vm
 
 /*
  * When an architecture does not provide its own tlb_flush() implementation
- * but does have a reasonably efficient flush_vma_range() implementation
+ * but does have a reasonably efficient flush_tlb_range() implementation
  * use that.
  */
 static inline void tlb_flush(struct mmu_gather *tlb)
@@ -372,38 +369,20 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 	if (tlb->fullmm || tlb->need_flush_all) {
 		flush_tlb_mm(tlb->mm);
 	} else if (tlb->end) {
-		struct vm_area_struct vma = {
-			.vm_mm = tlb->mm,
-			.vm_flags = (tlb->vma_exec ? VM_EXEC : 0) |
-				    (tlb->vma_huge ? VM_HUGETLB : 0),
-		};
-
-		flush_tlb_range(&vma, tlb->start, tlb->end);
+		VM_BUG_ON(!tlb->vma);
+		flush_tlb_range(tlb->vma, tlb->start, tlb->end);
 	}
 }
 
 static inline void
-tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma)
+tlb_update_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
-	/*
-	 * flush_tlb_range() implementations that look at VM_HUGETLB (tile,
-	 * mips-4k) flush only large pages.
-	 *
-	 * flush_tlb_range() implementations that flush I-TLB also flush D-TLB
-	 * (tile, xtensa, arm), so it's ok to just add VM_EXEC to an existing
-	 * range.
-	 *
-	 * We rely on tlb_end_vma() to issue a flush, such that when we reset
-	 * these values the batch is empty.
-	 */
-	tlb->vma_huge = is_vm_hugetlb_page(vma);
-	tlb->vma_exec = !!(vma->vm_flags & VM_EXEC);
+	tlb->vma = vma;
 }
-
 #else
 
 static inline void
-tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
+tlb_update_vma(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
 
 #endif
 
@@ -487,17 +466,17 @@ static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *
 	if (tlb->fullmm)
 		return;
 
-	tlb_update_vma_flags(tlb, vma);
+	tlb_update_vma(tlb, vma);
 	flush_cache_range(vma, vma->vm_start, vma->vm_end);
 }
 
 static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
 	if (tlb->fullmm)
-		return;
+		goto out;
 
 	if (IS_ENABLED(CONFIG_ARCH_WANT_AGGRESSIVE_TLB_FLUSH_BATCHING))
-		return;
+		goto out;
 
 	/*
 	 * Do a TLB flush and reset the range at VMA boundaries; this avoids
@@ -506,6 +485,9 @@ static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vm
 	 * this.
 	 */
 	tlb_flush_mmu_tlbonly(tlb);
+out:
+	/* Reset the VMA as a precaution. */
+	tlb_update_vma(tlb, NULL);
 }
 
 #ifdef CONFIG_ARCH_HAS_TLB_GENERATIONS
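
[Editor's note] The lifetime rule the patch enforces is: tlb->vma is recorded by
tlb_start_vma() via tlb_update_vma(), consumed by tlb_flush() through
flush_tlb_range(), and cleared again in tlb_end_vma(). Below is a minimal,
hypothetical user-space C model of that flow. It is only a sketch: the struct
and function names deliberately mirror the kernel ones but are stand-ins, not
the actual kernel API, and the flush is simulated with a printf.

	/*
	 * Hypothetical user-space model of the tlb->vma lifetime introduced by
	 * this patch; not kernel code, just an illustration of the invariant.
	 */
	#include <assert.h>
	#include <stddef.h>
	#include <stdio.h>

	struct vma {                       /* stand-in for struct vm_area_struct */
		unsigned long start, end;
	};

	struct gather {                    /* stand-in for struct mmu_gather */
		struct vma *vma;           /* valid only between start/end_vma() */
		unsigned long start, end;
	};

	static void start_vma(struct gather *tlb, struct vma *vma)
	{
		tlb->vma = vma;            /* mirrors tlb_update_vma(tlb, vma) */
	}

	static void flush(struct gather *tlb)
	{
		assert(tlb->vma != NULL);  /* mirrors VM_BUG_ON(!tlb->vma) */
		printf("flush [%lx, %lx) of vma [%lx, %lx)\n",
		       tlb->start, tlb->end, tlb->vma->start, tlb->vma->end);
	}

	static void end_vma(struct gather *tlb)
	{
		flush(tlb);                /* flush at the VMA boundary */
		tlb->vma = NULL;           /* mirrors tlb_update_vma(tlb, NULL) */
	}

	int main(void)
	{
		struct vma v = { 0x1000, 0x5000 };
		struct gather tlb = { NULL, 0x1000, 0x3000 };

		start_vma(&tlb, &v);       /* tlb.vma becomes valid here */
		end_vma(&tlb);             /* flushes, then resets tlb.vma */
		return 0;
	}

As the commit message notes, recording the vma pointer avoids constructing an
artificial on-stack vma from saved flags, at the cost of requiring the pointer
to remain valid until the flush at tlb_end_vma().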