From patchwork Wed Oct 21 19:13:35 2020
X-Patchwork-Submitter: Ralph Campbell
X-Patchwork-Id: 11849659
From: Ralph Campbell <rcampbell@nvidia.com>
To: linux-mm@kvack.org
CC: Jerome Glisse, John Hubbard, Alistair Popple, Christoph Hellwig, Jason Gunthorpe, Andrew Morton, Ralph Campbell
Subject: [PATCH] mm: optimize migrate_vma_pages() mmu notifier
Date: Wed, 21 Oct 2020 12:13:35 -0700
Message-ID: <20201021191335.10916-1-rcampbell@nvidia.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
When migrating a zero page or a pte_none() anonymous page to device private memory, migrate_vma_setup() initializes the src[] array with a NULL PFN. This lets the device driver allocate device private memory and clear it instead of DMAing a page of zeros over the device bus.

Since the source page didn't exist at that time, no struct page was locked nor was a migration PTE inserted into the CPU page tables. The actual PTE insertion happens in migrate_vma_pages() when it tries to insert the device private struct page's PTE into the CPU page tables. migrate_vma_pages() has to call the mmu notifiers again since another device could fault on the same page before the page table locks are acquired.

Allow device drivers to optimize the invalidation the same way migrate_vma_setup() does, by calling mmu_notifier_range_init_migrate(), which sets the struct mmu_notifier_range event type to MMU_NOTIFY_MIGRATE and fills in the migrate_pgmap_owner field.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
This is for Andrew Morton's mm tree after the merge window.
 mm/migrate.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 5ca5842df5db..560b57dde960 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2999,11 +2999,10 @@ void migrate_vma_pages(struct migrate_vma *migrate)
 		if (!notified) {
 			notified = true;
-			mmu_notifier_range_init(&range,
-						MMU_NOTIFY_CLEAR, 0,
-						NULL,
-						migrate->vma->vm_mm,
-						addr, migrate->end);
+			mmu_notifier_range_init_migrate(&range, 0,
+				migrate->vma, migrate->vma->vm_mm,
+				addr, migrate->end,
+				migrate->pgmap_owner);
 			mmu_notifier_invalidate_range_start(&range);
 		}
 		migrate_vma_insert_page(migrate, addr, newpage,