From patchwork Wed Aug 31 08:30:24 2022
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 12960537
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Jason Gunthorpe,
    John Hubbard, Andrea Arcangeli, Hugh Dickins, Peter Xu
Subject: [PATCH v1] mm/ksm: update stale comment in write_protect_page()
Date: Wed, 31 Aug 2022 10:30:24 +0200
Message-Id: <20220831083024.37138-1-david@redhat.com>

The comment is stale because a TLB flush is no longer sufficient (nor required)
to synchronize against concurrent GUP-fast. This used to be true in the past,
when a TLB flush implied an IPI on architectures that support GUP-fast, so any
concurrent GUP-fast (which runs with local interrupts disabled) was guaranteed
to have completed before the flush did. However, ever since general RCU GUP-fast
was introduced in commit 2667f50e8b81 ("mm: introduce a general RCU
get_user_pages_fast()"), this no longer holds in general: RCU primarily prevents
page tables from getting freed while they might still be walked by GUP-fast, but
we can race with GUP-fast even after clearing the PTE and flushing the TLB.

Nowadays, we can see a refcount change from GUP-fast at any point in time.
However, GUP-fast detects concurrent PTE changes by looking up the PTE,
temporarily grabbing a reference, and dropping that reference again if the PTE
changed in the meantime (a simplified sketch of that pattern is included after
the diff below).

Jason Gunthorpe explains at [1] why the existing memory barriers suffice to
ensure that concurrent GUP-fast cannot succeed in grabbing a reference with
write permissions after we cleared the PTE and flushed the TLB.

Note that clearing PageAnonExclusive via page_try_share_anon_rmap() might still
need explicit memory barriers to rule out any races with RCU GUP-fast.

[1] https://lkml.kernel.org/r/Yw5rwIUPm49XtqOB@nvidia.com

Cc: Andrew Morton
Cc: Jason Gunthorpe
Cc: John Hubbard
Cc: Andrea Arcangeli
Cc: Hugh Dickins
Cc: Peter Xu
Signed-off-by: David Hildenbrand
---
 mm/ksm.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 42ab153335a2..e88291f63461 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1072,23 +1072,20 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 	swapped = PageSwapCache(page);
 	flush_cache_page(vma, pvmw.address, page_to_pfn(page));
 	/*
-	 * Ok this is tricky, when get_user_pages_fast() run it doesn't
-	 * take any lock, therefore the check that we are going to make
-	 * with the pagecount against the mapcount is racy and
-	 * O_DIRECT can happen right after the check.
-	 * So we clear the pte and flush the tlb before the check
-	 * this assure us that no O_DIRECT can happen after the check
-	 * or in the middle of the check.
-	 *
-	 * No need to notify as we are downgrading page table to read
-	 * only not changing it to point to a new page.
+	 * Especially if we're downgrading protection, make sure to
+	 * flush the TLB now. No need to notify as we are not changing
+	 * the PTE to point at a different page.
 	 *
 	 * See Documentation/mm/mmu_notifier.rst
 	 */
 	entry = ptep_clear_flush(vma, pvmw.address, pvmw.pte);
+
 	/*
-	 * Check that no O_DIRECT or similar I/O is in progress on the
-	 * page
+	 * Make sure that there are no unexpected references (e.g.,
+	 * concurrent O_DIRECT). Note that while concurrent GUP-fast
+	 * could raise the refcount temporarily to grab a write
+	 * reference, it will observe the changed PTE and drop that
+	 * temporary reference again.
 	 */
 	if (page_mapcount(page) + 1 + swapped != page_count(page)) {
 		set_pte_at(mm, pvmw.address, pvmw.pte, entry);
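
Illustrative aside (not part of the patch): the following is a minimal userspace
sketch of the lockless pattern described in the commit message above: a
GUP-fast-style reader grabs a temporary reference and backs off when it observes
a changed PTE, while a write_protect_page()-style writer revokes write permission
and then checks for unexpected references. Everything here is made up for
illustration (the names gup_fast_like/write_protect_like, the fake PTE bit
layout, the expected refcount of 1); the real kernel code walks page tables,
uses its own reference-grabbing helpers, and relies on the memory barriers
discussed at [1] rather than on C11 seq_cst atomics.

/* gup_fast_sketch.c: build with `cc -pthread gup_fast_sketch.c` (illustration only). */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define PTE_WRITE 0x2UL                              /* made-up permission bit */

static _Atomic unsigned long pte = 0x1000UL | PTE_WRITE; /* fake "PTE": pfn | WRITE */
static atomic_int page_refcount = 1;                      /* 1 == mapped exactly once */

/* Reader side: modelled after the GUP-fast pattern from the commit message. */
static void *gup_fast_like(void *arg)
{
	unsigned long old;

	(void)arg;
	old = atomic_load(&pte);

	if (!(old & PTE_WRITE))                  /* no write permission: give up */
		return NULL;

	atomic_fetch_add(&page_refcount, 1);     /* temporarily grab a reference */

	if (atomic_load(&pte) != old) {
		/* The PTE changed under us (e.g., got write-protected): back off. */
		atomic_fetch_sub(&page_refcount, 1);
		return NULL;
	}

	/* A real GUP-fast would now return the page; drop the ref for the demo. */
	atomic_fetch_sub(&page_refcount, 1);
	return NULL;
}

/* Writer side: modelled after the write_protect_page() flow. */
static void *write_protect_like(void *arg)
{
	(void)arg;

	/* Stand-in for ptep_clear_flush(): revoke write permission atomically. */
	atomic_fetch_and(&pte, ~PTE_WRITE);

	/* Stand-in for the mapcount/refcount check: expect only the mapping ref. */
	if (atomic_load(&page_refcount) != 1)
		printf("unexpected reference: would restore the PTE and bail out\n");
	else
		printf("no unexpected references: safe to keep the page write-protected\n");
	return NULL;
}

int main(void)
{
	pthread_t reader, writer;

	pthread_create(&reader, NULL, gup_fast_like, NULL);
	pthread_create(&writer, NULL, write_protect_like, NULL);
	pthread_join(reader, NULL);
	pthread_join(writer, NULL);
	return 0;
}

Depending on scheduling, the writer may observe the reader's temporary reference
and back off; that mirrors what write_protect_page() does when the
mapcount/refcount check fails, while the reader side mirrors how GUP-fast drops
its temporary reference once it notices that the PTE changed.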