From patchwork Thu Aug 13 03:17:00 2020
X-Patchwork-Submitter: Hugh Dickins <hughd@google.com>
X-Patchwork-Id: 11711565
Date: Wed, 12 Aug 2020 20:17:00 -0700 (PDT)
From: Hugh Dickins <hughd@google.com>
To: Christoph Hellwig
cc: Linus Torvalds, Andrew Morton, Dan Williams, Eric Dumazet,
    Hugh Dickins, iommu@lists.linux-foundation.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH] dma-debug: fix debug_dma_assert_idle(), use rcu_read_lock()

Since commit 2a9127fcf229 ("mm: rewrite wait_on_page_bit_common() logic")
improved unlock_page(), it has become more noticeable how cow_user_page()
in a kernel with CONFIG_DMA_API_DEBUG=y can create and suffer from heavy
contention on DMA debug's radix_lock in debug_dma_assert_idle().

It is only doing a lookup: use rcu_read_lock() and rcu_read_unlock()
instead; though that does require the static ents[] to be moved
onstack...

...but, hold on, isn't that radix_tree_gang_lookup() and loop doing
quite the wrong thing: searching CACHELINES_PER_PAGE entries for an
exact match with the first cacheline of the page in question?
radix_tree_gang_lookup() is the right tool for the job, but we need
nothing more than to check the first entry it can find, reporting if
that falls anywhere within the page.

(Is RCU safe here?  As safe as using the spinlock was.  The entries are
never freed, so don't need to be freed by RCU.  They may be reused, and
there is a faint chance of a race, with an offending entry reused while
printing its error info; but the spinlock did not prevent that either,
and I agree that it's not worth worrying about.)
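To make the lookup-logic point concrete, here is a small userspace
sketch (my illustration, not kernel code): a sorted array stands in
for the dma_active_cacheline radix tree, and gang_lookup() mimics
radix_tree_gang_lookup()'s "first entries at or after this index"
behaviour.  All names and numbers below are made up for illustration.
The old loop only reports the page when its *first* cacheline has an
active entry; the new test takes the first entry found at or after the
page's first cacheline and asks whether it falls anywhere within the
page (the patch compares entry->pfn with the page's pfn, which amounts
to the same test).

/* Illustration only: a userspace stand-in for the dma-debug lookup. */
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

#define CACHELINES_PER_PAGE 64UL  /* e.g. 4K pages, 64-byte cachelines */

/* active cacheline numbers, kept sorted as the radix tree would return them */
static const unsigned long active[] = { 3, 100, 131, 5000 };

/* return up to max entries from active[] at or after start, like a gang lookup */
static size_t gang_lookup(unsigned long start, unsigned long max,
                          const unsigned long **results)
{
        size_t i, n = sizeof(active) / sizeof(active[0]);

        for (i = 0; i < n; i++) {
                if (active[i] >= start) {
                        *results = &active[i];
                        return (n - i < max) ? n - i : max;
                }
        }
        return 0;
}

/* old logic: only an exact match with the page's first cacheline is reported */
static bool page_busy_old(unsigned long page_cln)
{
        const unsigned long *ents;
        size_t i, nents = gang_lookup(page_cln, CACHELINES_PER_PAGE, &ents);

        for (i = 0; i < nents; i++) {
                if (ents[i] == page_cln)
                        return true;
                if (ents[i] >= page_cln + CACHELINES_PER_PAGE)
                        break;
        }
        return false;
}

/* new logic: the first entry at or after page_cln just has to lie in the page */
static bool page_busy_new(unsigned long page_cln)
{
        const unsigned long *ent;

        if (!gang_lookup(page_cln, 1, &ent))
                return false;
        return *ent < page_cln + CACHELINES_PER_PAGE;
}

int main(void)
{
        /* page covering cachelines [128, 192): 131 is active inside it */
        unsigned long page_cln = 2 * CACHELINES_PER_PAGE;

        printf("old logic reports page: %d\n", page_busy_old(page_cln)); /* 0: missed */
        printf("new logic reports page: %d\n", page_busy_new(page_cln)); /* 1: caught */
        return 0;
}

The actual patch follows.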
Fixes: 3b7a6418c749 ("dma debug: account for cachelines and read-only mappings in overlap tracking")
Signed-off-by: Hugh Dickins <hughd@google.com>
---
 kernel/dma/debug.c |   27 +++++++++------------------
 1 file changed, 9 insertions(+), 18 deletions(-)

--- v5.9-rc/kernel/dma/debug.c	2020-08-05 18:17:57.544203766 -0700
+++ linux/kernel/dma/debug.c	2020-08-12 19:53:33.159070245 -0700
@@ -565,11 +565,8 @@ static void active_cacheline_remove(stru
  */
 void debug_dma_assert_idle(struct page *page)
 {
-	static struct dma_debug_entry *ents[CACHELINES_PER_PAGE];
-	struct dma_debug_entry *entry = NULL;
-	void **results = (void **) &ents;
-	unsigned int nents, i;
-	unsigned long flags;
+	struct dma_debug_entry *entry;
+	unsigned long pfn;
 	phys_addr_t cln;
 
 	if (dma_debug_disabled())
@@ -578,20 +575,14 @@ void debug_dma_assert_idle(struct page *
 	if (!page)
 		return;
 
-	cln = (phys_addr_t) page_to_pfn(page) << CACHELINE_PER_PAGE_SHIFT;
-	spin_lock_irqsave(&radix_lock, flags);
-	nents = radix_tree_gang_lookup(&dma_active_cacheline, results, cln,
-				       CACHELINES_PER_PAGE);
-	for (i = 0; i < nents; i++) {
-		phys_addr_t ent_cln = to_cacheline_number(ents[i]);
+	pfn = page_to_pfn(page);
+	cln = (phys_addr_t) pfn << CACHELINE_PER_PAGE_SHIFT;
 
-		if (ent_cln == cln) {
-			entry = ents[i];
-			break;
-		} else if (ent_cln >= cln + CACHELINES_PER_PAGE)
-			break;
-	}
-	spin_unlock_irqrestore(&radix_lock, flags);
+	rcu_read_lock();
+	if (!radix_tree_gang_lookup(&dma_active_cacheline, (void **) &entry,
+				    cln, 1) || entry->pfn != pfn)
+		entry = NULL;
+	rcu_read_unlock();
 
 	if (!entry)
 		return;