From patchwork Sun Jan 31 00:11:18 2021
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 12057451
From: Nadav Amit
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Nadav Amit, Andrea Arcangeli, Andrew Morton, Andy Lutomirski,
 Dave Hansen, Peter Zijlstra, Thomas Gleixner, Will Deacon, Yu Zhao,
 Nick Piggin, x86@kernel.org
Subject: [RFC 06/20] fs/task_mmu: use mmu_gather interface of clear-soft-dirty
Date: Sat, 30 Jan 2021 16:11:18 -0800
Message-Id: <20210131001132.3368247-7-namit@vmware.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210131001132.3368247-1-namit@vmware.com>
References: <20210131001132.3368247-1-namit@vmware.com>

From: Nadav Amit

Use the mmu_gather interface in task_mmu instead of
{inc|dec}_tlb_flush_pending(). This allows the code to be consolidated
and avoids potential bugs. (A short sketch of the usage pattern follows
the diff.)

Signed-off-by: Nadav Amit
Cc: Andrea Arcangeli
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yu Zhao
Cc: Nick Piggin
Cc: x86@kernel.org
---
 fs/proc/task_mmu.c | 27 ++++++++++++++++++++++++---
 1 file changed, 24 insertions(+), 3 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 3cec6fbef725..4cd048ffa0f6 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1032,8 +1032,25 @@ enum clear_refs_types {
 
 struct clear_refs_private {
 	enum clear_refs_types type;
+	struct mmu_gather tlb;
 };
 
+static int tlb_pre_vma(unsigned long start, unsigned long end,
+		       struct mm_walk *walk)
+{
+	struct clear_refs_private *cp = walk->private;
+
+	tlb_start_vma(&cp->tlb, walk->vma);
+	return 0;
+}
+
+static void tlb_post_vma(struct mm_walk *walk)
+{
+	struct clear_refs_private *cp = walk->private;
+
+	tlb_end_vma(&cp->tlb, walk->vma);
+}
+
 #ifdef CONFIG_MEM_SOFT_DIRTY
 
 #define is_cow_mapping(flags) (((flags) & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE)
@@ -1140,6 +1157,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 		/* Clear accessed and referenced bits. */
 		pmdp_test_and_clear_young(vma, addr, pmd);
 		test_and_clear_page_young(page);
+		tlb_flush_pmd_range(&cp->tlb, addr, HPAGE_PMD_SIZE);
 		ClearPageReferenced(page);
 out:
 		spin_unlock(ptl);
@@ -1155,6 +1173,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 
 		if (cp->type == CLEAR_REFS_SOFT_DIRTY) {
 			clear_soft_dirty(vma, addr, pte);
+			tlb_flush_pte_range(&cp->tlb, addr, PAGE_SIZE);
 			continue;
 		}
 
@@ -1168,6 +1187,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 		/* Clear accessed and referenced bits. */
 		ptep_test_and_clear_young(vma, addr, pte);
 		test_and_clear_page_young(page);
+		tlb_flush_pte_range(&cp->tlb, addr, PAGE_SIZE);
 		ClearPageReferenced(page);
 	}
 	pte_unmap_unlock(pte - 1, ptl);
@@ -1198,6 +1218,8 @@ static int clear_refs_test_walk(unsigned long start, unsigned long end,
 }
 
 static const struct mm_walk_ops clear_refs_walk_ops = {
+	.pre_vma		= tlb_pre_vma,
+	.post_vma		= tlb_post_vma,
 	.pmd_entry		= clear_refs_pte_range,
 	.test_walk		= clear_refs_test_walk,
 };
@@ -1248,6 +1270,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 		goto out_unlock;
 	}
 
+	tlb_gather_mmu(&cp.tlb, mm);
 	if (type == CLEAR_REFS_SOFT_DIRTY) {
 		for (vma = mm->mmap; vma; vma = vma->vm_next) {
 			if (!(vma->vm_flags & VM_SOFTDIRTY))
@@ -1256,7 +1279,6 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 			vma_set_page_prot(vma);
 		}
 
-		inc_tlb_flush_pending(mm);
 		mmu_notifier_range_init(&range, MMU_NOTIFY_SOFT_DIRTY,
 					0, NULL, mm, 0, -1UL);
 		mmu_notifier_invalidate_range_start(&range);
@@ -1265,10 +1287,9 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 			&cp);
 	if (type == CLEAR_REFS_SOFT_DIRTY) {
 		mmu_notifier_invalidate_range_end(&range);
-		flush_tlb_mm(mm);
-		dec_tlb_flush_pending(mm);
 	}
 out_unlock:
+	tlb_finish_mmu(&cp.tlb);
 	mmap_write_unlock(mm);
 out_mm:
 	mmput(mm);
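
A note for reviewers unfamiliar with the interface: below is a minimal
sketch of the mmu_gather batching pattern this patch adopts. It is not
part of the patch; example_clear_range() is a hypothetical helper, and
the sketch assumes the tlb_gather_mmu()/tlb_finish_mmu() signatures
used in the diff above (the variants that take no address range).

#include <linux/mm.h>
#include <asm/tlb.h>

static void example_clear_range(struct mm_struct *mm,
				struct vm_area_struct *vma,
				unsigned long start, unsigned long end)
{
	struct mmu_gather tlb;
	unsigned long addr;

	tlb_gather_mmu(&tlb, mm);	/* start batching TLB invalidations */
	tlb_start_vma(&tlb, vma);

	for (addr = start; addr < end; addr += PAGE_SIZE) {
		/* ... clear a PTE bit here, e.g. soft-dirty ... */

		/* record that this range has a pending TLB flush */
		tlb_flush_pte_range(&tlb, addr, PAGE_SIZE);
	}

	tlb_end_vma(&tlb, vma);
	tlb_finish_mmu(&tlb);		/* flush anything still pending */
}

The upshot of the conversion is that pending flushes are tracked
per-range in the mmu_gather, so the final invalidation can be limited
to what was actually touched instead of relying on a bare
inc_tlb_flush_pending()/flush_tlb_mm()/dec_tlb_flush_pending() sequence
spread across the caller.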