From patchwork Thu Aug 27 11:40:38 2020
X-Patchwork-Submitter: "Ahmed S. Darwish" <a.darwish@linutronix.de>
X-Patchwork-Id: 11740575
From: "Ahmed S. Darwish" <a.darwish@linutronix.de>
Darwish" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020; t=1598528434; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=LMqgdQtrL9KkVo4R+kGlzmmWtYDHEwOz0TcrSWf2Bzs=; b=Y1mV2kVwChCqd5pR9WedpTE72GW/HIHzwg+yD0BWSpblxs2ic0azaxKuXUy3gyC+Oa2XXX Si2RUYXhhI/T3i2o2B4QacLg/UnxS8sdKxf8LlKn570IiIyQdg9ng4Y9Heb2p0TPvZVL6n jI+4fMqZm+gBOJr0fxFym6xjXi5XB3AKyBiYchYjFx/r829PO9iU8Ji4c7lvjguPWkUP9z CgCeJu3FFdCeBrakJ01HgHdJGdEnrpLXQpBndrURaITvAC18HfmEdlL+FxjRngx5ZPW8Gy 3Mdis3lAU1hIgXUkEykndkqBrGaHCFfbpoqAvkG15S3VrXioqYBED2fi+UAK8g== DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de; s=2020e; t=1598528434; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=LMqgdQtrL9KkVo4R+kGlzmmWtYDHEwOz0TcrSWf2Bzs=; b=BY9gNkBfUXyrwK+3tyeSP9m9sFYi/gXpQxizhLtUR5Fn7mSlRDFRSOq9wdVeULrT8igA3H 0PQoXUDyQu1yDKAA== To: Peter Zijlstra , Ingo Molnar , Will Deacon , Andrew Morton , Konstantin Khlebnikov , linux-mm@kvack.org Cc: Thomas Gleixner , "Sebastian A. Siewior" , LKML , "Ahmed S. Darwish" Subject: [PATCH v1 2/8] mm/swap: Do not abuse the seqcount_t latching API Date: Thu, 27 Aug 2020 13:40:38 +0200 Message-Id: <20200827114044.11173-3-a.darwish@linutronix.de> In-Reply-To: <20200827114044.11173-1-a.darwish@linutronix.de> References: <20200519214547.352050-1-a.darwish@linutronix.de> <20200827114044.11173-1-a.darwish@linutronix.de> MIME-Version: 1.0 X-Rspamd-Queue-Id: E321318229818 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam04 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Commit eef1a429f234 ("mm/swap.c: piggyback lru_add_drain_all() calls") implemented an optimization mechanism to exit the to-be-started LRU drain operation (name it A) if another drain operation *started and finished* while (A) was blocked on the LRU draining mutex. This was done through a seqcount_t latch, which is an abuse of its semantics: 1. seqcount_t latching should be used for the purpose of switching between two storage places with sequence protection to allow interruptible, preemptible, writer sections. The referenced optimization mechanism has absolutely nothing to do with that. 2. The used raw_write_seqcount_latch() has two SMP write memory barriers to insure one consistent storage place out of the two storage places available. A full memory barrier is required instead: to guarantee that the pagevec counter stores visible by local CPU are visible to other CPUs -- before loading the current drain generation. Beside the seqcount_t API abuse, the semantics of a latch sequence counter was force-fitted into the referenced optimization. What was meant is to track "generations" of LRU draining operations, where "global lru draining generation = x" implies that all generations 0 < n <= x are already *scheduled* for draining -- thus nothing needs to be done if the current generation number n <= x. Remove the conceptually-inappropriate seqcount_t latch usage. Manually implement the referenced optimization using a counter and SMP memory barriers. 
Link: https://lkml.kernel.org/r/CALYGNiPSr-cxV9MX9czaVh6Wz_gzSv3H_8KPvgjBTGbJywUJpA@mail.gmail.com
Link: https://lkml.kernel.org/r/87y2pg9erj.fsf@vostro.fn.ogness.net
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
---
 mm/swap.c | 65 +++++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 54 insertions(+), 11 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index d16d65d9b4e0..a1ec807e325d 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -763,10 +763,20 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)
  */
 void lru_add_drain_all(void)
 {
-	static seqcount_t seqcount = SEQCNT_ZERO(seqcount);
-	static DEFINE_MUTEX(lock);
+	/*
+	 * lru_drain_gen - Global pages generation number
+	 *
+	 * (A) Definition: global lru_drain_gen = x implies that all generations
+	 *     0 < n <= x are already *scheduled* for draining.
+	 *
+	 * This is an optimization for the highly-contended use case where a
+	 * user space workload keeps constantly generating a flow of pages for
+	 * each CPU.
+	 */
+	static unsigned int lru_drain_gen;
 	static struct cpumask has_work;
-	int cpu, seq;
+	static DEFINE_MUTEX(lock);
+	unsigned cpu, this_gen;
 
 	/*
 	 * Make sure nobody triggers this path before mm_percpu_wq is fully
@@ -775,21 +785,54 @@ void lru_add_drain_all(void)
 	if (WARN_ON(!mm_percpu_wq))
 		return;
 
-	seq = raw_read_seqcount_latch(&seqcount);
+	/*
+	 * Guarantee pagevec counter stores visible by this CPU are visible to
+	 * other CPUs before loading the current drain generation.
+	 */
+	smp_mb();
+
+	/*
+	 * (B) Locally cache global LRU draining generation number
+	 *
+	 * The read barrier ensures that the counter is loaded before the mutex
+	 * is taken. It pairs with smp_mb() inside the mutex critical section
+	 * at (D).
+	 */
+	this_gen = smp_load_acquire(&lru_drain_gen);
 
 	mutex_lock(&lock);
 
 	/*
-	 * Piggyback on drain started and finished while we waited for lock:
-	 * all pages pended at the time of our enter were drained from vectors.
+	 * (C) Exit the draining operation if a newer generation, from another
+	 * lru_add_drain_all(), was already scheduled for draining. Check (A).
 	 */
-	if (__read_seqcount_retry(&seqcount, seq))
+	if (unlikely(this_gen != lru_drain_gen))
 		goto done;
 
-	raw_write_seqcount_latch(&seqcount);
+	/*
+	 * (D) Increment global generation number
+	 *
+	 * Pairs with smp_load_acquire() at (B), outside of the critical
+	 * section. Use a full memory barrier to guarantee that the new global
+	 * drain generation number is stored before loading pagevec counters.
+	 *
+	 * This pairing must be done here, before the for_each_online_cpu loop
+	 * below which drains the page vectors.
+	 *
+	 * Let x, y, and z represent some system CPU numbers, where x < y < z.
+	 * Assume CPU #z is in the middle of the for_each_online_cpu loop
+	 * below and has already reached CPU #y's per-cpu data. CPU #x comes
+	 * along, adds some pages to its per-cpu vectors, then calls
+	 * lru_add_drain_all().
+	 *
+	 * If the paired barrier is done at any later step, e.g. after the
+	 * loop, CPU #x will just exit at (C) and miss flushing out all of its
+	 * added pages.
+	 */
+	WRITE_ONCE(lru_drain_gen, lru_drain_gen + 1);
+	smp_mb();
 
 	cpumask_clear(&has_work);
-
 	for_each_online_cpu(cpu) {
 		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
 
@@ -801,7 +844,7 @@ void lru_add_drain_all(void)
 		    need_activate_page_drain(cpu)) {
 			INIT_WORK(work, lru_add_drain_per_cpu);
 			queue_work_on(cpu, mm_percpu_wq, work);
-			cpumask_set_cpu(cpu, &has_work);
+			__cpumask_set_cpu(cpu, &has_work);
 		}
 	}
 
@@ -816,7 +859,7 @@ void lru_add_drain_all(void)
 {
 	lru_add_drain();
 }
-#endif
+#endif /* CONFIG_SMP */
 
 /**
  * release_pages - batched put_page()
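
As a side note for readers following along outside the kernel tree, the
generation piggybacking implemented above can be modeled in plain C11.
The sketch below is a hypothetical userspace analogue: drain_all(),
drain_gen, drain_lock and do_drain_work() are made-up names, and
sequentially-consistent atomics stand in for the kernel's smp_mb() /
smp_load_acquire() pairing:

        #include <pthread.h>
        #include <stdatomic.h>

        static atomic_uint drain_gen;   /* analogue of lru_drain_gen */
        static pthread_mutex_t drain_lock = PTHREAD_MUTEX_INITIALIZER;

        static void do_drain_work(void)
        {
                /* Stand-in for queueing and flushing per-CPU drain work. */
        }

        static void drain_all(void)
        {
                /* (B) Snapshot the generation before acquiring the mutex. */
                unsigned int this_gen = atomic_load(&drain_gen);

                pthread_mutex_lock(&drain_lock);

                /*
                 * (C) If another drain_all() bumped the generation while we
                 * slept on the mutex, everything pending at our entry has
                 * already been scheduled for draining: nothing left to do.
                 */
                if (this_gen != atomic_load(&drain_gen))
                        goto done;

                /* (D) Publish the new generation, then do the real work. */
                atomic_fetch_add(&drain_gen, 1);
                do_drain_work();

        done:
                pthread_mutex_unlock(&drain_lock);
        }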