From patchwork Sun Jan 31 00:11:26 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 12057467
From: Nadav Amit
X-Google-Original-From: Nadav Amit
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Nadav Amit, Andrea Arcangeli, Andrew Morton, Andy Lutomirski,
    Dave Hansen, Peter Zijlstra, Thomas Gleixner, Will Deacon,
    Yu Zhao, Nick Piggin, x86@kernel.org
Subject: [RFC 14/20] mm: move inc/dec_tlb_flush_pending() to mmu_gather.c
Date: Sat, 30 Jan 2021 16:11:26 -0800
Message-Id: <20210131001132.3368247-15-namit@vmware.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210131001132.3368247-1-namit@vmware.com>
References: <20210131001132.3368247-1-namit@vmware.com>

From: Nadav Amit

Reduce the chances that inc/dec_tlb_flush_pending() will be abused by
moving them into mmu_gather.c, which is a more natural place for them.
This also reduces the clutter in mm_types.h.

Signed-off-by: Nadav Amit
Cc: Andrea Arcangeli
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yu Zhao
Cc: Nick Piggin
Cc: x86@kernel.org
---
 include/linux/mm_types.h | 54 ----------------------------------------
 mm/mmu_gather.c          | 54 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 54 insertions(+), 54 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 812ee0fd4c35..676795dfd5d4 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -615,60 +615,6 @@ static inline void init_tlb_flush_pending(struct mm_struct *mm)
 	atomic_set(&mm->tlb_flush_pending, 0);
 }
 
-static inline void inc_tlb_flush_pending(struct mm_struct *mm)
-{
-	atomic_inc(&mm->tlb_flush_pending);
-	/*
-	 * The only time this value is relevant is when there are indeed pages
-	 * to flush. And we'll only flush pages after changing them, which
-	 * requires the PTL.
-	 *
-	 * So the ordering here is:
-	 *
-	 *	atomic_inc(&mm->tlb_flush_pending);
-	 *	spin_lock(&ptl);
-	 *	...
-	 *	set_pte_at();
-	 *	spin_unlock(&ptl);
-	 *
-	 *				spin_lock(&ptl)
-	 *				mm_tlb_flush_pending();
-	 *				....
-	 *				spin_unlock(&ptl);
-	 *
-	 *				flush_tlb_range();
-	 *				atomic_dec(&mm->tlb_flush_pending);
-	 *
-	 * Where the increment if constrained by the PTL unlock, it thus
-	 * ensures that the increment is visible if the PTE modification is
-	 * visible. After all, if there is no PTE modification, nobody cares
-	 * about TLB flushes either.
-	 *
-	 * This very much relies on users (mm_tlb_flush_pending() and
-	 * mm_tlb_flush_nested()) only caring about _specific_ PTEs (and
-	 * therefore specific PTLs), because with SPLIT_PTE_PTLOCKS and RCpc
-	 * locks (PPC) the unlock of one doesn't order against the lock of
-	 * another PTL.
-	 *
-	 * The decrement is ordered by the flush_tlb_range(), such that
-	 * mm_tlb_flush_pending() will not return false unless all flushes have
-	 * completed.
-	 */
-}
-
-static inline void dec_tlb_flush_pending(struct mm_struct *mm)
-{
-	/*
-	 * See inc_tlb_flush_pending().
-	 *
-	 * This cannot be smp_mb__before_atomic() because smp_mb() simply does
-	 * not order against TLB invalidate completion, which is what we need.
-	 *
-	 * Therefore we must rely on tlb_flush_*() to guarantee order.
-	 */
-	atomic_dec(&mm->tlb_flush_pending);
-}
-
 static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
 {
 	/*
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 5a659d4e59eb..13338c096cc6 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -249,6 +249,60 @@ void tlb_flush_mmu(struct mmu_gather *tlb)
 	tlb_flush_mmu_free(tlb);
 }
 
+static inline void inc_tlb_flush_pending(struct mm_struct *mm)
+{
+	atomic_inc(&mm->tlb_flush_pending);
+	/*
+	 * The only time this value is relevant is when there are indeed pages
+	 * to flush. And we'll only flush pages after changing them, which
+	 * requires the PTL.
+	 *
+	 * So the ordering here is:
+	 *
+	 *	atomic_inc(&mm->tlb_flush_pending);
+	 *	spin_lock(&ptl);
+	 *	...
+	 *	set_pte_at();
+	 *	spin_unlock(&ptl);
+	 *
+	 *				spin_lock(&ptl)
+	 *				mm_tlb_flush_pending();
+	 *				....
+	 *				spin_unlock(&ptl);
+	 *
+	 *				flush_tlb_range();
+	 *				atomic_dec(&mm->tlb_flush_pending);
+	 *
+	 * Where the increment if constrained by the PTL unlock, it thus
+	 * ensures that the increment is visible if the PTE modification is
+	 * visible. After all, if there is no PTE modification, nobody cares
+	 * about TLB flushes either.
+	 *
+	 * This very much relies on users (mm_tlb_flush_pending() and
+	 * mm_tlb_flush_nested()) only caring about _specific_ PTEs (and
+	 * therefore specific PTLs), because with SPLIT_PTE_PTLOCKS and RCpc
+	 * locks (PPC) the unlock of one doesn't order against the lock of
+	 * another PTL.
+	 *
+	 * The decrement is ordered by the flush_tlb_range(), such that
+	 * mm_tlb_flush_pending() will not return false unless all flushes have
+	 * completed.
+	 */
+}
+
+static inline void dec_tlb_flush_pending(struct mm_struct *mm)
+{
+	/*
+	 * See inc_tlb_flush_pending().
+	 *
+	 * This cannot be smp_mb__before_atomic() because smp_mb() simply does
+	 * not order against TLB invalidate completion, which is what we need.
+	 *
+	 * Therefore we must rely on tlb_flush_*() to guarantee order.
+	 */
+	atomic_dec(&mm->tlb_flush_pending);
+}
+
 /**
  * tlb_gather_mmu - initialize an mmu_gather structure for page-table tear-down
  * @tlb: the mmu_gather structure to initialize
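
[Editorial note, not part of the patch] For readers following the ordering
argument in the moved comment, below is a minimal sketch of the intended
usage pattern. The functions example_clear_ptes() and example_check_pending()
are hypothetical stand-ins for the real callers: in the kernel the counter is
raised and lowered via tlb_gather_mmu()/tlb_finish_mmu(), and readers consult
mm_tlb_flush_pending() while holding the relevant PTL.

#include <linux/mm.h>		/* struct mm_struct, pte_clear(), mm_tlb_flush_pending() */
#include <linux/spinlock.h>
#include <asm/tlbflush.h>	/* flush_tlb_range() */

/* Hypothetical writer: unmaps a range and flushes, as in the comment's diagram. */
static void example_clear_ptes(struct mm_struct *mm, struct vm_area_struct *vma,
			       unsigned long addr, unsigned long end,
			       pte_t *ptep, spinlock_t *ptl)
{
	/* Raise the counter before taking the PTL ... */
	inc_tlb_flush_pending(mm);

	spin_lock(ptl);
	/* ... so the PTL unlock publishes the increment along with the PTE change. */
	pte_clear(mm, addr, ptep);
	spin_unlock(ptl);

	/* The architecture flush primitive is what orders the decrement. */
	flush_tlb_range(vma, addr, end);
	dec_tlb_flush_pending(mm);
}

/* Hypothetical reader: checks for an in-flight flush under the same PTL. */
static bool example_check_pending(struct mm_struct *mm, spinlock_t *ptl)
{
	bool pending;

	spin_lock(ptl);
	/*
	 * If the PTE change above is visible here, the increment is visible
	 * too, so a not-yet-completed flush cannot be missed.
	 */
	pending = mm_tlb_flush_pending(mm);
	spin_unlock(ptl);

	return pending;
}

The point of the sketch is that neither side needs an extra barrier: the PTL
pairs the increment with the PTE update, and the flush itself orders the
decrement, which is why dec_tlb_flush_pending() cannot simply use
smp_mb__before_atomic().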