From patchwork Sun Jul 10 23:28:37 2022
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 12912838
From: Nadav Amit
X-Google-Original-From: Nadav Amit
To: linux-kernel@vger.kernel.org, Hugh Dickins, Thomas Gleixner, Dave Hansen
Cc: Ingo Molnar, Borislav Petkov, x86@kernel.org, Linux MM, Nadav Amit
, Peter Zijlstra, Andy Lutomirski
Subject: [PATCH v2] x86/mm/tlb: ignore f->new_tlb_gen when zero
Date: Sun, 10 Jul 2022 16:28:37 -0700
Message-Id: <20220710232837.3618-1-namit@vmware.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
From: Nadav Amit

Commit aa44284960d5 ("x86/mm/tlb: Avoid reading mm_tlb_gen when possible")
introduced an optimization that skips the flush if the TLB generation to be
flushed (as provided in flush_tlb_info) has already been flushed. However,
arch_tlbbatch_flush() does not provide any generation in flush_tlb_info. As a
result, try_to_unmap_one() would not perform any TLB flushes.

Fix it by checking whether f->new_tlb_gen is nonzero. A zero value is an
invalid generation value anyway. To avoid future confusion, introduce a
TLB_GENERATION_INVALID constant and use it consistently.

Add assertions to check that no partial flush is done with
TLB_GENERATION_INVALID or when f->mm is NULL, since that does not make any
sense. In addition, add the missing unlikely().

Fixes: aa44284960d5 ("x86/mm/tlb: Avoid reading mm_tlb_gen when possible")
Reported-by: Hugh Dickins
Tested-by: Hugh Dickins
Cc: Dave Hansen
Cc: Peter Zijlstra (Intel)
Cc: Andy Lutomirski
Signed-off-by: Nadav Amit

---

v1 -> v2:
* Introduce TLB_GENERATION_INVALID to clarify intent.
* Leave the early return and do not "goto out".
* Add some assertions to check and document in code the relationship
  between TLB_GENERATION_INVALID and TLB_FLUSH_ALL.
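[Not part of the patch: the skip condition being fixed can be illustrated as a
small stand-alone sketch. Names are simplified and this is not the kernel
code itself; it only demonstrates why a generation of zero, as passed by
arch_tlbbatch_flush() before this fix, satisfied "new_tlb_gen <= local_tlb_gen"
and wrongly suppressed the flush.]

```c
#include <stdbool.h>

/* Matches the constant this patch adds: zero is never a valid TLB
 * generation, so it can mark "no generation provided". */
#define TLB_GENERATION_INVALID 0ULL

/* Simplified stand-in for the check in flush_tlb_func(): a flush may be
 * skipped only when the request carries a *valid* generation that the
 * local CPU has already reached.  Without the first clause, a request
 * with generation 0 compares as 0 <= local_tlb_gen and is skipped. */
static bool can_skip_flush(unsigned long long new_tlb_gen,
			   unsigned long long local_tlb_gen)
{
	return new_tlb_gen != TLB_GENERATION_INVALID &&
	       new_tlb_gen <= local_tlb_gen;
}
```

With this helper, a batched-unmap request carrying no generation
(TLB_GENERATION_INVALID) is never skipped, an already-reached generation
may be skipped, and a newer generation still forces a flush.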
---
 arch/x86/include/asm/tlbflush.h |  1 +
 arch/x86/mm/tlb.c               | 15 ++++++++++++---
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 4af5579c7ef7..cda3118f3b27 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -16,6 +16,7 @@ void __flush_tlb_all(void);
 
 #define TLB_FLUSH_ALL			-1UL
+#define TLB_GENERATION_INVALID		0
 
 void cr4_update_irqsoff(unsigned long set, unsigned long clear);
 unsigned long cr4_read_shadow(void);
 
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index d9314cc8b81f..0f346c51dd99 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -771,7 +771,8 @@ static void flush_tlb_func(void *info)
 		return;
 	}
 
-	if (f->new_tlb_gen <= local_tlb_gen) {
+	if (unlikely(f->new_tlb_gen != TLB_GENERATION_INVALID &&
+		     f->new_tlb_gen <= local_tlb_gen)) {
 		/*
 		 * The TLB is already up to date in respect to f->new_tlb_gen.
 		 * While the core might be still behind mm_tlb_gen, checking
@@ -843,6 +844,12 @@ static void flush_tlb_func(void *info)
 		/* Partial flush */
 		unsigned long addr = f->start;
 
+		/* Partial flush cannot have invalid generations */
+		VM_BUG_ON(f->new_tlb_gen == TLB_GENERATION_INVALID);
+
+		/* Partial flush must have valid mm */
+		VM_BUG_ON(f->mm == NULL);
+
 		nr_invalidate = (f->end - f->start) >> f->stride_shift;
 
 		while (addr < f->end) {
@@ -1045,7 +1052,8 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 		struct flush_tlb_info *info;
 
 		preempt_disable();
-		info = get_flush_tlb_info(NULL, start, end, 0, false, 0);
+		info = get_flush_tlb_info(NULL, start, end, 0, false,
+					  TLB_GENERATION_INVALID);
 
 		on_each_cpu(do_kernel_range_flush, info, 1);
 
@@ -1214,7 +1222,8 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 	int cpu = get_cpu();
 
-	info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, 0, false, 0);
+	info = get_flush_tlb_info(NULL, 0, TLB_FLUSH_ALL, 0, false,
+				  TLB_GENERATION_INVALID);
 
 	/*
 	 * flush_tlb_multi() is not optimized for the common case in which only
 	 * a local TLB flush is needed. Optimize this use-case by calling