From patchwork Mon Aug 22 08:21:18 2022
X-Patchwork-Submitter: Yicong Yang
X-Patchwork-Id: 12950326
From: Yicong Yang
Cc: Barry Song <21cnbao@gmail.com>, Anshuman Khandual
Subject: [PATCH v3 2/4] mm/tlbbatch: Introduce arch_tlbbatch_should_defer()
Date: Mon, 22 Aug 2022 16:21:18 +0800
Message-ID: <20220822082120.8347-3-yangyicong@huawei.com>
In-Reply-To: <20220822082120.8347-1-yangyicong@huawei.com>
References: <20220822082120.8347-1-yangyicong@huawei.com>
From: Anshuman Khandual

The entire scheme of deferred TLB flush in the reclaim path rests on the
fact that the cost of refilling TLB entries is less than that of flushing
out individual entries by sending IPIs to remote CPUs. But architectures
can have different ways to evaluate that. Hence, apart from checking
TTU_BATCH_FLUSH in the TTU flags, the rest of the decision should be
architecture specific.

Signed-off-by: Anshuman Khandual
[https://lore.kernel.org/linuxppc-dev/20171101101735.2318-2-khandual@linux.vnet.ibm.com/]
Signed-off-by: Yicong Yang
[Rebase and fix incorrect return value type]
Reviewed-by: Kefeng Wang
Reviewed-by: Anshuman Khandual
---
 arch/x86/include/asm/tlbflush.h | 12 ++++++++++++
 mm/rmap.c                       |  9 +--------
 2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index cda3118f3b27..8a497d902c16 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -240,6 +240,18 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
 	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
 }
 
+static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
+{
+	bool should_defer = false;
+
+	/* If remote CPUs need to be flushed then defer batch the flush */
+	if (cpumask_any_but(mm_cpumask(mm), get_cpu()) < nr_cpu_ids)
+		should_defer = true;
+	put_cpu();
+
+	return should_defer;
+}
+
 static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
 {
 	/*
diff --git a/mm/rmap.c b/mm/rmap.c
index edc06c52bc82..a17a004550c6 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -687,17 +687,10 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
  */
 static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
 {
-	bool should_defer = false;
-
 	if (!(flags & TTU_BATCH_FLUSH))
 		return false;
 
-	/* If remote CPUs need to be flushed then defer batch the flush */
-	if (cpumask_any_but(mm_cpumask(mm), get_cpu()) < nr_cpu_ids)
-		should_defer = true;
-	put_cpu();
-
-	return should_defer;
+	return arch_tlbbatch_should_defer(mm);
 }
 
 /*
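A note on how this hook slots in: architectures that do not select
CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH keep the stub
should_defer_flush() in mm/rmap.c that always returns false, so only
batching-capable architectures need to provide
arch_tlbbatch_should_defer(). Below is a minimal sketch, not part of
this patch, of what a hypothetical architecture whose hardware
broadcasts TLB invalidations to all CPUs might put in its
asm/tlbflush.h; the unconditional-defer policy is an illustrative
assumption, since such hardware never pays the remote-IPI cost that
the x86 version weighs against TLB refills.

/*
 * Hypothetical <asm/tlbflush.h> fragment for an architecture with
 * broadcast TLB invalidation. Illustrative only.
 */
static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
{
	/*
	 * No IPIs are ever sent to flush remote TLB entries, so there
	 * is no per-CPU cost to weigh against the TLB refill cost and
	 * checking mm_cpumask() as x86 does would be unnecessary:
	 * batch the flush unconditionally.
	 */
	return true;
}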