From patchwork Tue Nov 15 03:14:24 2022
X-Patchwork-Submitter: Yicong Yang
X-Patchwork-Id: 13043184
From: Yicong Yang
Cc: Barry Song <21cnbao@gmail.com>, Anshuman Khandual, Barry Song
Subject: [PATCH v6 1/2] mm/tlbbatch: Introduce arch_tlbbatch_should_defer()
Date: Tue, 15 Nov 2022 11:14:24 +0800
Message-ID: <20221115031425.44640-2-yangyicong@huawei.com>
In-Reply-To: <20221115031425.44640-1-yangyicong@huawei.com>
References: <20221115031425.44640-1-yangyicong@huawei.com>
From: Anshuman Khandual

The entire scheme of deferred TLB flush in the reclaim path rests on the
fact that the cost of refilling TLB entries is less than that of flushing
out individual entries by sending IPIs to remote CPUs. But an architecture
can have different ways to evaluate that. Hence, apart from checking
TTU_BATCH_FLUSH in the TTU flags, the rest of the decision should be
architecture specific.

Signed-off-by: Anshuman Khandual
[https://lore.kernel.org/linuxppc-dev/20171101101735.2318-2-khandual@linux.vnet.ibm.com/]
Signed-off-by: Yicong Yang
[Rebase and fix incorrect return value type]
Reviewed-by: Kefeng Wang
Reviewed-by: Anshuman Khandual
Reviewed-by: Barry Song
Tested-by: Punit Agrawal
Reviewed-by: Xin Hao
---
 arch/x86/include/asm/tlbflush.h | 12 ++++++++++++
 mm/rmap.c                       |  9 +--------
 2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index cda3118f3b27..8a497d902c16 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -240,6 +240,18 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
 	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
 }
 
+static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
+{
+	bool should_defer = false;
+
+	/* If remote CPUs need to be flushed then defer batch the flush */
+	if (cpumask_any_but(mm_cpumask(mm), get_cpu()) < nr_cpu_ids)
+		should_defer = true;
+	put_cpu();
+
+	return should_defer;
+}
+
 static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
 {
 	/*
diff --git a/mm/rmap.c b/mm/rmap.c
index 2ec925e5fa6a..a9ab10bc0144 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -685,17 +685,10 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
  */
 static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
 {
-	bool should_defer = false;
-
 	if (!(flags & TTU_BATCH_FLUSH))
 		return false;
 
-	/* If remote CPUs need to be flushed then defer batch the flush */
-	if (cpumask_any_but(mm_cpumask(mm), get_cpu()) < nr_cpu_ids)
-		should_defer = true;
-	put_cpu();
-
-	return should_defer;
+	return arch_tlbbatch_should_defer(mm);
 }
 
 /*
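
A note on the x86 heuristic being moved into the new hook:
cpumask_any_but() returns a CPU number below nr_cpu_ids if the mask
contains any CPU other than the one excluded, and nr_cpu_ids (or larger)
otherwise, so the comparison asks "is this mm active on any remote CPU?".
The get_cpu()/put_cpu() pair disables preemption around the check so the
current CPU cannot change mid-test. A commented restatement of the same
logic follows; the helper name x86_mm_has_remote_users is hypothetical,
used here only for illustration:

static inline bool x86_mm_has_remote_users(struct mm_struct *mm)
{
	/* get_cpu() disables preemption and returns the current CPU id. */
	int cpu = get_cpu();

	/*
	 * cpumask_any_but() picks any CPU set in mm_cpumask(mm) other
	 * than 'cpu'; it returns >= nr_cpu_ids when there is none.
	 */
	bool remote = cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids;

	put_cpu();	/* re-enable preemption */
	return remote;
}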
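
As for the architecture-specific side that motivates the hook: on a
platform whose TLB invalidation is broadcast in hardware rather than
driven by IPIs (the arm64 case targeted by patch 2/2 of this series),
there is no IPI cost to weigh, so a minimal implementation could simply
opt in. The sketch below is illustrative only and is not the actual 2/2
patch:

static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
{
	/*
	 * Hardware-broadcast invalidation (e.g. TLBI ...IS on arm64)
	 * sends no IPIs, so there is no per-CPU flush cost to avoid;
	 * batching is essentially always a win, so always defer.
	 */
	return true;
}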