From patchwork Mon Jul 13 17:21:45 2020
From: Ralph Campbell
Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
    Andrew Morton, Shuah Khan, Ben Skeggs, Bharata B Rao, Ralph Campbell
Subject: [PATCH v2 1/5] nouveau: fix storing invalid ptes
Date: Mon, 13 Jul 2020 10:21:45 -0700
Message-ID: <20200713172149.2310-2-rcampbell@nvidia.com>
In-Reply-To: <20200713172149.2310-1-rcampbell@nvidia.com>

When migrating a range of system memory to device private memory, some of
the pages in the address range may not be migrating.
In this case, the non-migrating pages have no new GPU MMU entry to store,
but the nvif_object_ioctl() NVIF_VMM_V0_PFNMAP method doesn't check its
input and stores a bad entry that is nonetheless marked valid in the GPU
page table. Fix this by skipping the invalid input PTEs when updating the
GPU page tables.

Signed-off-by: Ralph Campbell
---
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
index ed37fddd063f..7eabe9fe0d2b 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
@@ -79,8 +79,12 @@ gp100_vmm_pgt_pfn(struct nvkm_vmm *vmm, struct nvkm_mmu_pt *pt,
 	dma_addr_t addr;

 	nvkm_kmap(pt->memory);
-	while (ptes--) {
+	for (; ptes; ptes--, map->pfn++) {
 		u64 data = 0;
+
+		if (!(*map->pfn & NVKM_VMM_PFN_V))
+			continue;
+
 		if (!(*map->pfn & NVKM_VMM_PFN_W))
 			data |= BIT_ULL(6); /* RO. */
@@ -100,7 +104,6 @@ gp100_vmm_pgt_pfn(struct nvkm_vmm *vmm, struct nvkm_mmu_pt *pt,
 		}

 		VMM_WO064(pt, vmm, ptei++ * 8, data);
-		map->pfn++;
 	}
 	nvkm_done(pt->memory);
 }
@@ -310,9 +313,12 @@ gp100_vmm_pd0_pfn(struct nvkm_vmm *vmm, struct nvkm_mmu_pt *pt,
 	dma_addr_t addr;

 	nvkm_kmap(pt->memory);
-	while (ptes--) {
+	for (; ptes; ptes--, map->pfn++) {
 		u64 data = 0;

+		if (!(*map->pfn & NVKM_VMM_PFN_V))
+			continue;
+
 		if (!(*map->pfn & NVKM_VMM_PFN_W))
 			data |= BIT_ULL(6); /* RO. */
@@ -332,7 +338,6 @@ gp100_vmm_pd0_pfn(struct nvkm_vmm *vmm, struct nvkm_mmu_pt *pt,
 		}

 		VMM_WO064(pt, vmm, ptei++ * 16, data);
-		map->pfn++;
 	}
 	nvkm_done(pt->memory);
 }
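
[Editorial illustration, not part of the patch.] To make the failure mode concrete, here is a small user-space sketch of the consumer-side pattern the fix above adopts; the PFN_V/PFN_W flag names and write_ptes() are stand-ins for the real NVKM_VMM_PFN_* bits and page-table writer. A migration can leave some slots of the input PFN array without the valid bit set, and a consumer that writes entries without checking that bit stores garbage entries marked valid.

#include <stdint.h>
#include <stdio.h>

#define PFN_V (1ULL << 0)       /* illustrative "entry is valid" bit */
#define PFN_W (1ULL << 1)       /* illustrative "page is writable" bit */

/* Consume a PFN array the way the fixed gp100 code does: skip invalid slots. */
static void write_ptes(const uint64_t *pfns, unsigned int npte)
{
        for (unsigned int i = 0; i < npte; i++) {
                if (!(pfns[i] & PFN_V)) {
                        printf("PTE %u: page did not migrate, skipped\n", i);
                        continue;
                }
                printf("PTE %u: store entry 0x%llx%s\n", i,
                       (unsigned long long)pfns[i],
                       (pfns[i] & PFN_W) ? "" : " (read-only)");
        }
}

int main(void)
{
        /* Slot 1 represents a page that was not migrated. */
        uint64_t pfns[3] = { PFN_V | PFN_W, 0, PFN_V };

        write_ptes(pfns, 3);
        return 0;
}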

From patchwork Mon Jul 13 17:21:46 2020
From: Ralph Campbell
Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
    Andrew Morton, Shuah Khan, Ben Skeggs, Bharata B Rao, Ralph Campbell
Subject: [PATCH v2 2/5] mm/migrate: add a direction parameter to migrate_vma
Date: Mon, 13 Jul 2020 10:21:46 -0700
Message-ID: <20200713172149.2310-3-rcampbell@nvidia.com>
In-Reply-To: <20200713172149.2310-1-rcampbell@nvidia.com>

The src_owner field in struct migrate_vma is currently used for two
purposes: it implies the direction of the migration, and it identifies
device private pages owned by the caller. Split these into separate
parameters so that the src_owner field is used only to identify device
private pages owned by the caller of migrate_vma_setup().

Signed-off-by: Ralph Campbell
Reviewed-by: Bharata B Rao
---
 arch/powerpc/kvm/book3s_hv_uvmem.c     |  2 ++
 drivers/gpu/drm/nouveau/nouveau_dmem.c |  2 ++
 include/linux/migrate.h                | 12 +++++++++---
 lib/test_hmm.c                         |  2 ++
 mm/migrate.c                           |  5 +++--
 5 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 09d8119024db..acbf14cd2d72 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -400,6 +400,7 @@ kvmppc_svm_page_in(struct vm_area_struct *vma, unsigned long start,
 	mig.end = end;
 	mig.src = &src_pfn;
 	mig.dst = &dst_pfn;
+	mig.dir = MIGRATE_VMA_FROM_SYSTEM;

 	/*
 	 * We come here with mmap_lock write lock held just for
@@ -578,6 +579,7 @@ kvmppc_svm_page_out(struct vm_area_struct *vma, unsigned long start,
 	mig.src = &src_pfn;
 	mig.dst = &dst_pfn;
 	mig.src_owner = &kvmppc_uvmem_pgmap;
+	mig.dir = MIGRATE_VMA_FROM_DEVICE_PRIVATE;

 	mutex_lock(&kvm->arch.uvmem_lock);
 	/* The requested page is already paged-out, nothing to do */
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index e5c230d9ae24..e5c83b8ee82e 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -183,6 +183,7 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
 		.src		= &src,
 		.dst		= &dst,
 		.src_owner	= drm->dev,
+		.dir		= MIGRATE_VMA_FROM_DEVICE_PRIVATE,
 	};

 	/*
@@ -615,6 +616,7 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
 	struct migrate_vma args = {
 		.vma		= vma,
 		.start		= start,
+		.dir		= MIGRATE_VMA_FROM_SYSTEM,
 	};
 	unsigned long i;
 	u64 *pfns;
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 3e546cbf03dd..620f2235d7d4 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -180,6 +180,11 @@ static inline unsigned long migrate_pfn(unsigned long pfn)
 	return (pfn << MIGRATE_PFN_SHIFT) | MIGRATE_PFN_VALID;
 }

+enum migrate_vma_direction {
+	MIGRATE_VMA_FROM_SYSTEM,
+	MIGRATE_VMA_FROM_DEVICE_PRIVATE,
+};
+
 struct migrate_vma {
 	struct vm_area_struct	*vma;
 	/*
@@ -199,11 +204,12 @@ struct migrate_vma {

 	/*
 	 * Set to the owner value also stored in page->pgmap->owner for
-	 * migrating out of device private memory. If set only device
-	 * private pages with this owner are migrated. If not set
-	 * device private pages are not migrated at all.
+	 * migrating device private memory. The direction also needs to
+	 * be set to MIGRATE_VMA_FROM_DEVICE_PRIVATE.
 	 */
 	void			*src_owner;
+
+	enum migrate_vma_direction dir;
 };

 int migrate_vma_setup(struct migrate_vma *args);
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 9aa577afc269..1bd60cfb5a25 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -703,6 +703,7 @@ static int dmirror_migrate(struct dmirror *dmirror,
 		args.start = addr;
 		args.end = next;
 		args.src_owner = NULL;
+		args.dir = MIGRATE_VMA_FROM_SYSTEM;
 		ret = migrate_vma_setup(&args);
 		if (ret)
 			goto out;
@@ -1054,6 +1055,7 @@ static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
 	args.src = &src_pfns;
 	args.dst = &dst_pfns;
 	args.src_owner = dmirror->mdevice;
+	args.dir = MIGRATE_VMA_FROM_DEVICE_PRIVATE;

 	if (migrate_vma_setup(&args))
 		return VM_FAULT_SIGBUS;
diff --git a/mm/migrate.c b/mm/migrate.c
index f37729673558..2bbc5c4c672e 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2287,7 +2287,8 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 				goto next;

 			page = device_private_entry_to_page(entry);
-			if (page->pgmap->owner != migrate->src_owner)
+			if (migrate->dir != MIGRATE_VMA_FROM_DEVICE_PRIVATE ||
+			    page->pgmap->owner != migrate->src_owner)
 				goto next;

 			mpfn = migrate_pfn(page_to_pfn(page)) |
@@ -2295,7 +2296,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			if (is_write_device_private_entry(entry))
 				mpfn |= MIGRATE_PFN_WRITE;
 		} else {
-			if (migrate->src_owner)
+			if (migrate->dir != MIGRATE_VMA_FROM_SYSTEM)
 				goto next;
 			pfn = pte_pfn(pte);
 			if (is_zero_pfn(pfn)) {
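
[Editorial illustration, not part of the patch.] As a usage sketch of the interface this patch creates: a device driver now states the migration direction explicitly and supplies src_owner only for the device-private direction. The my_* names below are placeholders for driver-specific code; only migrate_vma_setup() and the struct migrate_vma fields shown come from this series, and error handling is omitted.

/* Hypothetical driver helpers built on the new 'dir' field. */
static int my_migrate_to_device(struct vm_area_struct *vma,
				unsigned long start, unsigned long end,
				unsigned long *src, unsigned long *dst)
{
	struct migrate_vma args = {
		.vma	= vma,
		.start	= start,
		.end	= end,
		.src	= src,
		.dst	= dst,
		/* System RAM -> device private: src_owner is not needed. */
		.dir	= MIGRATE_VMA_FROM_SYSTEM,
	};

	return migrate_vma_setup(&args);
}

static int my_migrate_to_ram(struct vm_area_struct *vma,
			     unsigned long start, unsigned long end,
			     unsigned long *src, unsigned long *dst,
			     void *my_pgmap_owner)
{
	struct migrate_vma args = {
		.vma		= vma,
		.start		= start,
		.end		= end,
		.src		= src,
		.dst		= dst,
		/* Device private -> system RAM: only pages whose
		 * page->pgmap->owner matches src_owner are collected. */
		.src_owner	= my_pgmap_owner,
		.dir		= MIGRATE_VMA_FROM_DEVICE_PRIVATE,
	};

	return migrate_vma_setup(&args);
}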

From patchwork Mon Jul 13 17:21:47 2020
From: Ralph Campbell
Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
    Andrew Morton, Shuah Khan, Ben Skeggs, Bharata B Rao, Ralph Campbell
Subject: [PATCH v2 3/5] mm/notifier: add migration invalidation type
Date: Mon, 13 Jul 2020 10:21:47 -0700
Message-ID: <20200713172149.2310-4-rcampbell@nvidia.com>
In-Reply-To: <20200713172149.2310-1-rcampbell@nvidia.com>

Currently migrate_vma_setup() calls mmu_notifier_invalidate_range_start(),
which flushes all device private page mappings whether or not a page is
being migrated to/from device private memory. To avoid disrupting device
mappings that are not being migrated, shift the responsibility for
clearing device private mappings to the device driver and leave the CPU
page table unmapping handled by migrate_vma_setup(). To support this, the
caller of migrate_vma_setup() should always set struct
migrate_vma::src_owner to a non-NULL value that matches the device private
page->pgmap->owner. This value is then passed in the struct
mmu_notifier_range with a new event type which the driver's invalidation
function can use to avoid device MMU invalidations.

Signed-off-by: Ralph Campbell
---
 include/linux/mmu_notifier.h | 7 +++++++
 mm/migrate.c                 | 8 +++++++-
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index fc68f3570e19..1921fcf6be5b 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -38,6 +38,10 @@ struct mmu_interval_notifier;
 *
 * @MMU_NOTIFY_RELEASE: used during mmu_interval_notifier invalidate to signal
 * that the mm refcount is zero and the range is no longer accessible.
+ *
+ * @MMU_NOTIFY_MIGRATE: used during migrate_vma_collect() invalidate to signal
+ * a device driver to possibly ignore the invalidation if the
+ * migrate_pgmap_owner field matches the driver's device private pgmap owner.
 */
enum mmu_notifier_event {
	MMU_NOTIFY_UNMAP = 0,
@@ -46,6 +50,7 @@ enum mmu_notifier_event {
	MMU_NOTIFY_PROTECTION_PAGE,
	MMU_NOTIFY_SOFT_DIRTY,
	MMU_NOTIFY_RELEASE,
+	MMU_NOTIFY_MIGRATE,
 };

 #define MMU_NOTIFIER_RANGE_BLOCKABLE (1 << 0)
@@ -264,6 +269,7 @@ struct mmu_notifier_range {
 	unsigned long end;
 	unsigned flags;
 	enum mmu_notifier_event event;
+	void *migrate_pgmap_owner;
 };

 static inline int mm_has_notifiers(struct mm_struct *mm)
@@ -513,6 +519,7 @@ static inline void mmu_notifier_range_init(struct mmu_notifier_range *range,
 	range->start = start;
 	range->end = end;
 	range->flags = flags;
+	range->migrate_pgmap_owner = NULL;
 }

 #define ptep_clear_flush_young_notify(__vma, __address, __ptep)	\
diff --git a/mm/migrate.c b/mm/migrate.c
index 2bbc5c4c672e..9b3dcb81be5f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2391,8 +2391,14 @@ static void migrate_vma_collect(struct migrate_vma *migrate)
 {
 	struct mmu_notifier_range range;

-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, NULL,
+	/*
+	 * Note that the src_owner is passed to the mmu notifier callback so
+	 * that the registered device driver can skip invalidating device
+	 * private page mappings that won't be migrated.
+	 */
+	mmu_notifier_range_init(&range, MMU_NOTIFY_MIGRATE, 0, migrate->vma,
 		migrate->vma->vm_mm, migrate->start, migrate->end);
+	range.migrate_pgmap_owner = migrate->src_owner;
 	mmu_notifier_invalidate_range_start(&range);

 	walk_page_range(migrate->vma->vm_mm, migrate->start, migrate->end,
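
[Editorial illustration, not part of the patch.] To show the notifier-side half of this contract, here is a hedged sketch of how a driver's range invalidation callback can use the new event type and owner field. The struct and helper names are placeholders; only MMU_NOTIFY_MIGRATE, the migrate_pgmap_owner field, and the mmu_interval_notifier API are real. Patches 4 and 5 below do the same thing in nouveau and the HMM self-test driver.

/* Hypothetical driver mirror state. */
struct my_mirror {
	struct mmu_interval_notifier notifier;
	void *my_device;	/* same pointer stored in pgmap->owner */
};

static void my_device_unmap_range(struct my_mirror *mirror,
				  unsigned long start, unsigned long end);

static bool my_interval_invalidate(struct mmu_interval_notifier *mni,
				   const struct mmu_notifier_range *range,
				   unsigned long cur_seq)
{
	struct my_mirror *mirror = container_of(mni, struct my_mirror, notifier);

	/*
	 * This invalidation was raised by migrate_vma_setup() for our own
	 * device private pages; the device MMU update is handled by the
	 * driver as part of the migration, so skip it here.
	 */
	if (range->event == MMU_NOTIFY_MIGRATE &&
	    range->migrate_pgmap_owner == mirror->my_device)
		return true;

	mmu_interval_set_seq(mni, cur_seq);
	my_device_unmap_range(mirror, range->start, range->end);
	return true;
}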

From patchwork Mon Jul 13 17:21:48 2020
From: Ralph Campbell
Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
    Andrew Morton, Shuah Khan, Ben Skeggs, Bharata B Rao, Ralph Campbell
Subject: [PATCH v2 4/5] nouveau/svm: use the new migration invalidation
Date: Mon, 13 Jul 2020 10:21:48 -0700
Message-ID: <20200713172149.2310-5-rcampbell@nvidia.com>
In-Reply-To: <20200713172149.2310-1-rcampbell@nvidia.com>

Use the new MMU_NOTIFY_MIGRATE event to skip GPU MMU invalidations of
device private memory and handle the invalidation in the driver as part of
migrating device private memory.

Signed-off-by: Ralph Campbell
---
 drivers/gpu/drm/nouveau/nouveau_dmem.c | 11 ++++++++---
 drivers/gpu/drm/nouveau/nouveau_svm.c  | 10 +++++++++-
 drivers/gpu/drm/nouveau/nouveau_svm.h  |  1 +
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index e5c83b8ee82e..8f2683ebd8c0 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -154,6 +154,8 @@ static vm_fault_t nouveau_dmem_fault_copy_one(struct nouveau_drm *drm,
 	if (dma_mapping_error(dev, *dma_addr))
 		goto error_free_page;

+	nouveau_svmm_invalidate(spage->zone_device_data, args->start,
+			args->end);
 	if (drm->dmem->migrate.copy_func(drm, 1, NOUVEAU_APER_HOST, *dma_addr,
 			NOUVEAU_APER_VRAM, nouveau_dmem_page_addr(spage)))
 		goto error_dma_unmap;
@@ -531,7 +533,8 @@ nouveau_dmem_init(struct nouveau_drm *drm)
 }

 static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
-		unsigned long src, dma_addr_t *dma_addr, u64 *pfn)
+		struct nouveau_svmm *svmm, unsigned long src,
+		dma_addr_t *dma_addr, u64 *pfn)
 {
 	struct device *dev = drm->dev->dev;
 	struct page *dpage, *spage;
@@ -561,6 +564,7 @@ static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
 		goto out_free_page;
 	}

+	dpage->zone_device_data = svmm;
 	*pfn = NVIF_VMM_PFNMAP_V0_V | NVIF_VMM_PFNMAP_V0_VRAM |
 		((paddr >> PAGE_SHIFT) << NVIF_VMM_PFNMAP_V0_ADDR_SHIFT);
 	if (src & MIGRATE_PFN_WRITE)
@@ -584,8 +588,8 @@ static void nouveau_dmem_migrate_chunk(struct nouveau_drm *drm,
 	unsigned long addr = args->start, nr_dma = 0, i;

 	for (i = 0; addr < args->end; i++) {
-		args->dst[i] = nouveau_dmem_migrate_copy_one(drm, args->src[i],
-				dma_addrs + nr_dma, pfns + i);
+		args->dst[i] = nouveau_dmem_migrate_copy_one(drm, svmm,
+				args->src[i], dma_addrs + nr_dma, pfns + i);
 		if (!dma_mapping_error(drm->dev->dev, dma_addrs[nr_dma]))
 			nr_dma++;
 		addr += PAGE_SIZE;
@@ -616,6 +620,7 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
 	struct migrate_vma args = {
 		.vma		= vma,
 		.start		= start,
+		.src_owner	= drm->dev,
 		.dir		= MIGRATE_VMA_FROM_SYSTEM,
 	};
 	unsigned long i;
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index c5f8ca6fb2e3..2ba7a8a2348c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -246,7 +246,7 @@ nouveau_svmm_join(struct nouveau_svmm *svmm, u64 inst)
 }

 /* Invalidate SVMM address-range on GPU. */
-static void
+void
 nouveau_svmm_invalidate(struct nouveau_svmm *svmm, u64 start, u64 limit)
 {
 	if (limit > start) {
@@ -279,6 +279,14 @@ nouveau_svmm_invalidate_range_start(struct mmu_notifier *mn,
 	if (unlikely(!svmm->vmm))
 		goto out;

+	/*
+	 * Ignore invalidation callbacks for device private pages since
+	 * the invalidation is handled as part of the migration process.
+	 */
+	if (update->event == MMU_NOTIFY_MIGRATE &&
+	    update->migrate_pgmap_owner == svmm->vmm->cli->drm->dev)
+		goto out;
+
 	if (limit > svmm->unmanaged.start && start < svmm->unmanaged.limit) {
 		if (start < svmm->unmanaged.start) {
 			nouveau_svmm_invalidate(svmm, start,
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.h b/drivers/gpu/drm/nouveau/nouveau_svm.h
index f0fcd1b72e8b..bb2d56e50e0c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.h
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.h
@@ -19,6 +19,7 @@ int nouveau_svmm_join(struct nouveau_svmm *, u64 inst);
 void nouveau_svmm_part(struct nouveau_svmm *, u64 inst);
 int nouveau_svmm_bind(struct drm_device *, void *, struct drm_file *);

+void nouveau_svmm_invalidate(struct nouveau_svmm *svmm, u64 start, u64 limit);
 u64 *nouveau_pfns_alloc(unsigned long npages);
 void nouveau_pfns_free(u64 *pfns);
 void nouveau_pfns_map(struct nouveau_svmm *svmm, struct mm_struct *mm,
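
[Editorial illustration, not part of the patch.] One design point in the nouveau changes above is the handoff through zone_device_data: the per-process SVMM pointer is stashed in the device page at migrate-in time so that the later migrate-to-ram path knows which GPU address space to invalidate. A hedged, driver-agnostic sketch of that pattern follows; struct my_ctx and the helpers are placeholders, not nouveau code, and page->zone_device_data is the real field used.

/* Hypothetical per-process device mirror context. */
struct my_ctx;

static void my_ctx_unmap_range(struct my_ctx *ctx,
			       unsigned long start, unsigned long end);

/* Migrate-in: remember which context owns this device private page. */
static void my_setup_device_page(struct page *dpage, struct my_ctx *ctx)
{
	dpage->zone_device_data = ctx;
}

/* Migrate-out (CPU fault on a device private page): tear down the device
 * mapping here, inside the migration, instead of in the MMU notifier
 * callback, which now ignores MMU_NOTIFY_MIGRATE events for our own pages. */
static void my_copy_back_one(struct page *spage,
			     unsigned long start, unsigned long end)
{
	struct my_ctx *ctx = spage->zone_device_data;

	my_ctx_unmap_range(ctx, start, end);
	/* ...then copy the data back to the system page... */
}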

From patchwork Mon Jul 13 17:21:49 2020
From: Ralph Campbell
Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
    Andrew Morton, Shuah Khan, Ben Skeggs, Bharata B Rao, Ralph Campbell
Subject: [PATCH v2 5/5] mm/hmm/test: use the new migration invalidation
Date: Mon, 13 Jul 2020 10:21:49 -0700
Message-ID: <20200713172149.2310-6-rcampbell@nvidia.com>
In-Reply-To: <20200713172149.2310-1-rcampbell@nvidia.com>

Use the new MMU_NOTIFY_MIGRATE event to skip MMU invalidations of device
private memory and handle the invalidation in the driver as part of
migrating device private memory.

Signed-off-by: Ralph Campbell
---
 lib/test_hmm.c | 31 ++++++++++++++++++-------------
 1 file changed, 18 insertions(+), 13 deletions(-)

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 1bd60cfb5a25..77875fc4e7c1 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -214,6 +214,14 @@ static bool dmirror_interval_invalidate(struct mmu_interval_notifier *mni,
 {
 	struct dmirror *dmirror = container_of(mni, struct dmirror, notifier);

+	/*
+	 * Ignore invalidation callbacks for device private pages since
+	 * the invalidation is handled as part of the migration process.
+	 */
+	if (range->event == MMU_NOTIFY_MIGRATE &&
+	    range->migrate_pgmap_owner == dmirror->mdevice)
+		return true;
+
 	if (mmu_notifier_range_blockable(range))
 		mutex_lock(&dmirror->mutex);
 	else if (!mutex_trylock(&dmirror->mutex))
@@ -702,7 +710,7 @@ static int dmirror_migrate(struct dmirror *dmirror,
 		args.dst = dst_pfns;
 		args.start = addr;
 		args.end = next;
-		args.src_owner = NULL;
+		args.src_owner = dmirror->mdevice;
 		args.dir = MIGRATE_VMA_FROM_SYSTEM;
 		ret = migrate_vma_setup(&args);
 		if (ret)
@@ -992,7 +1000,7 @@ static void dmirror_devmem_free(struct page *page)
 }

 static vm_fault_t dmirror_devmem_fault_alloc_and_copy(struct migrate_vma *args,
-						      struct dmirror_device *mdevice)
+						      struct dmirror *dmirror)
 {
 	const unsigned long *src = args->src;
 	unsigned long *dst = args->dst;
@@ -1014,6 +1022,7 @@ static vm_fault_t dmirror_devmem_fault_alloc_and_copy(struct migrate_vma *args,
 			continue;

 		lock_page(dpage);
+		xa_erase(&dmirror->pt, addr >> PAGE_SHIFT);
 		copy_highpage(dpage, spage);
 		*dst = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED;
 		if (*src & MIGRATE_PFN_WRITE)
@@ -1022,15 +1031,6 @@ static vm_fault_t dmirror_devmem_fault_alloc_and_copy(struct migrate_vma *args,
 	return 0;
 }

-static void dmirror_devmem_fault_finalize_and_map(struct migrate_vma *args,
-						   struct dmirror *dmirror)
-{
-	/* Invalidate the device's page table mapping. */
-	mutex_lock(&dmirror->mutex);
-	dmirror_do_update(dmirror, args->start, args->end);
-	mutex_unlock(&dmirror->mutex);
-}
-
 static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
 {
 	struct migrate_vma args;
@@ -1060,11 +1060,16 @@ static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
 	if (migrate_vma_setup(&args))
 		return VM_FAULT_SIGBUS;

-	ret = dmirror_devmem_fault_alloc_and_copy(&args, dmirror->mdevice);
+	ret = dmirror_devmem_fault_alloc_and_copy(&args, dmirror);
 	if (ret)
 		return ret;
 	migrate_vma_pages(&args);
-	dmirror_devmem_fault_finalize_and_map(&args, dmirror);
+	/*
+	 * No device finalize step is needed since
+	 * dmirror_devmem_fault_alloc_and_copy() will have already
+	 * invalidated the device page table. We could reinstate device MMU
+	 * entries for pages that didn't migrate but that should be rare.
+	 */
 	migrate_vma_finalize(&args);
 	return 0;
 }
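
[Editorial illustration, not part of the patch.] Taken together with patch 3, the device-private fault path in the test driver now follows roughly this shape; the my_* names are placeholders and only the migrate_vma_* calls are the real API used by the series. The device mapping is dropped inside the alloc-and-copy step, so no separate finalize-and-map pass runs before migrate_vma_finalize().

/* Placeholder mirror state and helper. */
struct my_mirror;

static int my_alloc_copy_and_unmap(struct my_mirror *mirror,
				   struct migrate_vma *args);

static vm_fault_t my_devmem_fault(struct my_mirror *mirror,
				  struct migrate_vma *args)
{
	/* Collect and unmap the CPU side; the driver's notifier callback
	 * ignores the MMU_NOTIFY_MIGRATE event this raises for its own
	 * pages (see patch 3). */
	if (migrate_vma_setup(args))
		return VM_FAULT_SIGBUS;

	/* Allocate system pages, copy the data back, and remove the device
	 * mapping for each migrating page. */
	if (my_alloc_copy_and_unmap(mirror, args))
		return VM_FAULT_SIGBUS;

	migrate_vma_pages(args);
	migrate_vma_finalize(args);
	return 0;
}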