From patchwork Fri Nov 5 20:45:00 2021
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12605795
Date: Fri, 05 Nov 2021 13:45:00 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, apopple@nvidia.com, jglisse@redhat.com,
 jhubbard@nvidia.com, linux-mm@kvack.org, mm-commits@vger.kernel.org,
 rcampbell@nvidia.com, torvalds@linux-foundation.org
Subject: [patch 197/262] mm/rmap.c: avoid double faults migrating device private pages
Message-ID: <20211105204500.E7gZ9uZtc%akpm@linux-foundation.org>
In-Reply-To: <20211105133408.cccbb98b71a77d5e8430aba1@linux-foundation.org>

From: Alistair Popple
Subject: mm/rmap.c: avoid double faults migrating device private pages

During migration, special page table entries are
installed for each page being migrated.  These entries store the pfn and
associated permissions of the ptes mapping the page being migrated.

Device-private pages use special swap pte entries to distinguish
read-only vs. writeable pages, which the migration code checks when
creating migration entries.  Normally this follows a fast path in
migrate_vma_collect_pmd() which correctly copies the permissions of
device-private pages over to migration entries when migrating pages back
to the CPU.

However, the slow path falls back to using try_to_migrate(), which
unconditionally creates read-only migration entries for device-private
pages.  This leads to unnecessary double faults on the CPU, as the new
pages are always mapped read-only even when they could be mapped
writeable.  Fix this by correctly copying device-private permissions in
try_to_migrate_one().

Link: https://lkml.kernel.org/r/20211018045247.3128058-1-apopple@nvidia.com
Signed-off-by: Alistair Popple
Reported-by: Ralph Campbell
Reviewed-by: John Hubbard
Cc: Jerome Glisse
Signed-off-by: Andrew Morton
---

 mm/rmap.c |    8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

--- a/mm/rmap.c~mm-rmapc-avoid-double-faults-migrating-device-private-pages
+++ a/mm/rmap.c
@@ -1807,6 +1807,7 @@ static bool try_to_migrate_one(struct pa
 		update_hiwater_rss(mm);
 
 		if (is_zone_device_page(page)) {
+			unsigned long pfn = page_to_pfn(page);
 			swp_entry_t entry;
 			pte_t swp_pte;
 
@@ -1815,8 +1816,11 @@ static bool try_to_migrate_one(struct pa
 			 * pte. do_swap_page() will wait until the migration
 			 * pte is removed and then restart fault handling.
 			 */
-			entry = make_readable_migration_entry(
-						page_to_pfn(page));
+			entry = pte_to_swp_entry(pteval);
+			if (is_writable_device_private_entry(entry))
+				entry = make_writable_migration_entry(pfn);
+			else
+				entry = make_readable_migration_entry(pfn);
 			swp_pte = swp_entry_to_pte(entry);
 
 			/*
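
As an illustration (not part of the patch): below is a minimal,
standalone userspace C sketch of the permission-copying decision the fix
adds to try_to_migrate_one().  The swp_entry_t layout, the SWP_* values
and the helper bodies here are mock stand-ins that merely mirror the
kernel identifiers so the control flow is easy to follow; the real
encodings live in the kernel's swap entry code.  Only the if/else branch
matches the patch.

/*
 * Minimal userspace mock of the logic added above.  Not kernel code:
 * swp_entry_t and the helpers below only mirror the kernel names.
 * Build with: cc -o mig-demo mig-demo.c
 */
#include <stdbool.h>
#include <stdio.h>

enum swp_type { SWP_DEVICE_READ, SWP_DEVICE_WRITE,
		SWP_MIGRATION_READ, SWP_MIGRATION_WRITE };

typedef struct { enum swp_type type; unsigned long pfn; } swp_entry_t;

static bool is_writable_device_private_entry(swp_entry_t entry)
{
	return entry.type == SWP_DEVICE_WRITE;
}

static swp_entry_t make_readable_migration_entry(unsigned long pfn)
{
	return (swp_entry_t){ SWP_MIGRATION_READ, pfn };
}

static swp_entry_t make_writable_migration_entry(unsigned long pfn)
{
	return (swp_entry_t){ SWP_MIGRATION_WRITE, pfn };
}

int main(void)
{
	/* A device-private pte that was originally mapped writeable. */
	swp_entry_t pteval = { SWP_DEVICE_WRITE, 0x1234 };
	swp_entry_t entry;

	/*
	 * Old behaviour: always read-only, so the first CPU write after
	 * migration takes a second, avoidable fault.
	 */
	entry = make_readable_migration_entry(pteval.pfn);
	printf("before fix: writable migration entry? %s\n",
	       entry.type == SWP_MIGRATION_WRITE ? "yes" : "no");

	/* New behaviour: carry the original permission across. */
	if (is_writable_device_private_entry(pteval))
		entry = make_writable_migration_entry(pteval.pfn);
	else
		entry = make_readable_migration_entry(pteval.pfn);
	printf("after fix:  writable migration entry? %s\n",
	       entry.type == SWP_MIGRATION_WRITE ? "yes" : "no");
	return 0;
}

For a writeable device-private page this prints "no" then "yes": the old
path always produced a read-only migration entry, which is exactly the
source of the second fault the patch removes.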