From patchwork Mon Jul 13 17:21:49 2020
X-Patchwork-Submitter: Ralph Campbell <rcampbell@nvidia.com>
X-Patchwork-Id: 11660779
From: Ralph Campbell <rcampbell@nvidia.com>
CC: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
 Andrew Morton, Shuah Khan, Ben Skeggs, Bharata B Rao, Ralph Campbell
Subject: [PATCH v2 5/5] mm/hmm/test: use the new migration invalidation
Date: Mon, 13 Jul 2020 10:21:49 -0700
Message-ID: <20200713172149.2310-6-rcampbell@nvidia.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200713172149.2310-1-rcampbell@nvidia.com>
References: <20200713172149.2310-1-rcampbell@nvidia.com>

Use the new MMU_NOTIFY_MIGRATE event to skip MMU invalidations of device
private memory and handle the invalidation in the driver as part of
migrating device private memory.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
 lib/test_hmm.c | 31 ++++++++++++++++++-------------
 1 file changed, 18 insertions(+), 13 deletions(-)

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 1bd60cfb5a25..77875fc4e7c1 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -214,6 +214,14 @@ static bool dmirror_interval_invalidate(struct mmu_interval_notifier *mni,
 {
 	struct dmirror *dmirror = container_of(mni, struct dmirror, notifier);
 
+	/*
+	 * Ignore invalidation callbacks for device private pages since
+	 * the invalidation is handled as part of the migration process.
+	 */
+	if (range->event == MMU_NOTIFY_MIGRATE &&
+	    range->migrate_pgmap_owner == dmirror->mdevice)
+		return true;
+
 	if (mmu_notifier_range_blockable(range))
 		mutex_lock(&dmirror->mutex);
 	else if (!mutex_trylock(&dmirror->mutex))
@@ -702,7 +710,7 @@ static int dmirror_migrate(struct dmirror *dmirror,
 		args.dst = dst_pfns;
 		args.start = addr;
 		args.end = next;
-		args.src_owner = NULL;
+		args.src_owner = dmirror->mdevice;
 		args.dir = MIGRATE_VMA_FROM_SYSTEM;
 		ret = migrate_vma_setup(&args);
 		if (ret)
@@ -992,7 +1000,7 @@ static void dmirror_devmem_free(struct page *page)
 }
 
 static vm_fault_t dmirror_devmem_fault_alloc_and_copy(struct migrate_vma *args,
-						      struct dmirror_device *mdevice)
+						      struct dmirror *dmirror)
 {
 	const unsigned long *src = args->src;
 	unsigned long *dst = args->dst;
@@ -1014,6 +1022,7 @@ static vm_fault_t dmirror_devmem_fault_alloc_and_copy(struct migrate_vma *args,
 			continue;
 
 		lock_page(dpage);
+		xa_erase(&dmirror->pt, addr >> PAGE_SHIFT);
 		copy_highpage(dpage, spage);
 		*dst = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
 		if (*src & MIGRATE_PFN_WRITE)
@@ -1022,15 +1031,6 @@ static vm_fault_t dmirror_devmem_fault_alloc_and_copy(struct migrate_vma *args,
 	return 0;
 }
 
-static void dmirror_devmem_fault_finalize_and_map(struct migrate_vma *args,
-						  struct dmirror *dmirror)
-{
-	/* Invalidate the device's page table mapping. */
-	mutex_lock(&dmirror->mutex);
-	dmirror_do_update(dmirror, args->start, args->end);
-	mutex_unlock(&dmirror->mutex);
-}
-
 static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
 {
 	struct migrate_vma args;
@@ -1060,11 +1060,16 @@ static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
 	if (migrate_vma_setup(&args))
 		return VM_FAULT_SIGBUS;
 
-	ret = dmirror_devmem_fault_alloc_and_copy(&args, dmirror->mdevice);
+	ret = dmirror_devmem_fault_alloc_and_copy(&args, dmirror);
 	if (ret)
 		return ret;
 	migrate_vma_pages(&args);
-	dmirror_devmem_fault_finalize_and_map(&args, dmirror);
+	/*
+	 * No device finalize step is needed since
+	 * dmirror_devmem_fault_alloc_and_copy() will have already
+	 * invalidated the device page table. We could reinstate device MMU
+	 * entries for pages that didn't migrate but that should be rare.
+	 */
 	migrate_vma_finalize(&args);
 	return 0;
 }
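
As background for driver writers, below is a rough sketch of the same
pattern in a hypothetical driver, not part of the patch: the interval
notifier callback returns early for migration invalidations that the
driver itself initiated, identified by the pgmap owner it passed to
migrate_vma_setup(). The my_mirror names are invented for illustration,
and it assumes a kernel with the MMU_NOTIFY_MIGRATE event and the
migrate_pgmap_owner range field from this series applied.

#include <linux/kernel.h>
#include <linux/mutex.h>
#include <linux/mmu_notifier.h>

/* Hypothetical per-process mirror state for an imaginary driver. */
struct my_mirror {
	struct mmu_interval_notifier notifier;
	struct mutex mutex;
	void *pgmap_owner;	/* same owner passed to migrate_vma_setup() */
};

static bool my_mirror_invalidate(struct mmu_interval_notifier *mni,
				 const struct mmu_notifier_range *range,
				 unsigned long cur_seq)
{
	struct my_mirror *mirror = container_of(mni, struct my_mirror,
						notifier);

	/*
	 * Invalidations caused by this driver's own device private
	 * migrations are already handled in its migration path, so
	 * skip them here, as lib/test_hmm.c does above.
	 */
	if (range->event == MMU_NOTIFY_MIGRATE &&
	    range->migrate_pgmap_owner == mirror->pgmap_owner)
		return true;

	if (mmu_notifier_range_blockable(range))
		mutex_lock(&mirror->mutex);
	else if (!mutex_trylock(&mirror->mutex))
		return false;

	mmu_interval_set_seq(mni, cur_seq);
	/* Tear down the device's mappings for [range->start, range->end). */
	mutex_unlock(&mirror->mutex);
	return true;
}

static const struct mmu_interval_notifier_ops my_mirror_ops = {
	.invalidate = my_mirror_invalidate,
};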