From patchwork Wed Sep 28 12:01:15 2022
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 12992185
From: Alistair Popple
To: Andrew Morton, linux-mm@kvack.org
Subject: [PATCH v2 1/8] mm/memory.c: Fix race when faulting a device private page
Date: Wed, 28 Sep 2022 22:01:15 +1000
X-Mailer: git-send-email 2.35.1
Cc: Ralph Campbell, Michael Ellerman, nouveau@lists.freedesktop.org,
    Felix Kuehling, linux-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
    Jason Gunthorpe, John Hubbard

When the CPU tries to access a device private page, the migrate_to_ram()
callback associated with the page's pgmap is called. However, no reference
is taken on the faulting page. Therefore a concurrent migration of the
device private page can free the page, and possibly the underlying pgmap.
This results in a race which can crash the kernel because the
migrate_to_ram() function pointer becomes invalid. It also means drivers
can't reliably read the zone_device_data field, because the page may have
been freed with memunmap_pages().

Close the race by taking a reference on the page while holding the ptl, to
ensure it has not been freed. Unfortunately the elevated reference count
will cause the migration required to handle the fault to fail. To avoid
this failure, pass the faulting page into the migrate_vma functions so
that, if an elevated reference count is found, it can be checked to see
whether it is expected.
Signed-off-by: Alistair Popple
Cc: Jason Gunthorpe
Cc: John Hubbard
Cc: Ralph Campbell
Cc: Michael Ellerman
Cc: Felix Kuehling
Cc: Lyude Paul
Acked-by: Felix Kuehling

---
 arch/powerpc/kvm/book3s_hv_uvmem.c       | 15 ++++++-----
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 17 +++++++------
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.h |  2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_svm.c     | 11 +++++---
 include/linux/migrate.h                  |  8 ++++++-
 lib/test_hmm.c                           |  7 ++---
 mm/memory.c                              | 16 +++++++++++-
 mm/migrate.c                             | 34 ++++++++++++++-----------
 mm/migrate_device.c                      | 18 +++++++++----
 9 files changed, 87 insertions(+), 41 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 5980063..d4eacf4 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -508,10 +508,10 @@ unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
 static int __kvmppc_svm_page_out(struct vm_area_struct *vma,
 		unsigned long start,
 		unsigned long end, unsigned long page_shift,
-		struct kvm *kvm, unsigned long gpa)
+		struct kvm *kvm, unsigned long gpa, struct page *fault_page)
 {
 	unsigned long src_pfn, dst_pfn = 0;
-	struct migrate_vma mig;
+	struct migrate_vma mig = { 0 };
 	struct page *dpage, *spage;
 	struct kvmppc_uvmem_page_pvt *pvt;
 	unsigned long pfn;
@@ -525,6 +525,7 @@ static int __kvmppc_svm_page_out(struct vm_area_struct *vma,
 	mig.dst = &dst_pfn;
 	mig.pgmap_owner = &kvmppc_uvmem_pgmap;
 	mig.flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE;
+	mig.fault_page = fault_page;
 
 	/* The requested page is already paged-out, nothing to do */
 	if (!kvmppc_gfn_is_uvmem_pfn(gpa >> page_shift, kvm, NULL))
@@ -580,12 +581,14 @@ static int __kvmppc_svm_page_out(struct vm_area_struct *vma,
 static inline int kvmppc_svm_page_out(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end,
 		unsigned long page_shift,
-		struct kvm *kvm, unsigned long gpa)
+		struct kvm *kvm, unsigned long gpa,
+		struct page *fault_page)
 {
 	int ret;
 
 	mutex_lock(&kvm->arch.uvmem_lock);
-	ret = __kvmppc_svm_page_out(vma, start, end, page_shift, kvm, gpa);
+	ret = __kvmppc_svm_page_out(vma, start, end, page_shift, kvm, gpa,
+				    fault_page);
 	mutex_unlock(&kvm->arch.uvmem_lock);
 
 	return ret;
@@ -736,7 +739,7 @@ static int kvmppc_svm_page_in(struct vm_area_struct *vma,
 		bool pagein)
 {
 	unsigned long src_pfn, dst_pfn = 0;
-	struct migrate_vma mig;
+	struct migrate_vma mig = { 0 };
 	struct page *spage;
 	unsigned long pfn;
 	struct page *dpage;
@@ -994,7 +997,7 @@ static vm_fault_t kvmppc_uvmem_migrate_to_ram(struct vm_fault *vmf)
 
 	if (kvmppc_svm_page_out(vmf->vma, vmf->address,
 				vmf->address + PAGE_SIZE, PAGE_SHIFT,
-				pvt->kvm, pvt->gpa))
+				pvt->kvm, pvt->gpa, vmf->page))
 		return VM_FAULT_SIGBUS;
 	else
 		return 0;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
index b059a77..776448b 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
@@ -409,7 +409,7 @@ svm_migrate_vma_to_vram(struct amdgpu_device *adev, struct svm_range *prange,
 	uint64_t npages = (end - start) >> PAGE_SHIFT;
 	struct kfd_process_device *pdd;
 	struct dma_fence *mfence = NULL;
-	struct migrate_vma migrate;
+	struct migrate_vma migrate = { 0 };
 	unsigned long cpages = 0;
 	dma_addr_t *scratch;
 	void *buf;
@@ -668,7 +668,7 @@ svm_migrate_copy_to_ram(struct amdgpu_device *adev, struct svm_range *prange,
 static long
 svm_migrate_vma_to_ram(struct amdgpu_device *adev, struct svm_range *prange,
 		       struct vm_area_struct *vma, uint64_t start, uint64_t end,
-		       uint32_t trigger)
+		       uint32_t trigger, struct page *fault_page)
 {
 	struct kfd_process *p = container_of(prange->svms, struct kfd_process, svms);
 	uint64_t npages = (end - start) >> PAGE_SHIFT;
@@ -676,7 +676,7 @@ svm_migrate_vma_to_ram(struct amdgpu_device *adev, struct svm_range *prange,
 	unsigned long cpages = 0;
 	struct kfd_process_device *pdd;
 	struct dma_fence *mfence = NULL;
-	struct migrate_vma migrate;
+	struct migrate_vma migrate = { 0 };
 	dma_addr_t *scratch;
 	void *buf;
 	int r = -ENOMEM;
@@ -699,6 +699,7 @@ svm_migrate_vma_to_ram(struct amdgpu_device *adev, struct svm_range *prange,
 
 	migrate.src = buf;
 	migrate.dst = migrate.src + npages;
+	migrate.fault_page = fault_page;
 	scratch = (dma_addr_t *)(migrate.dst + npages);
 
 	kfd_smi_event_migration_start(adev->kfd.dev, p->lead_thread->pid,
@@ -766,7 +767,7 @@ svm_migrate_vma_to_ram(struct amdgpu_device *adev, struct svm_range *prange,
  * 0 - OK, otherwise error code
  */
 int svm_migrate_vram_to_ram(struct svm_range *prange, struct mm_struct *mm,
-			    uint32_t trigger)
+			    uint32_t trigger, struct page *fault_page)
 {
 	struct amdgpu_device *adev;
 	struct vm_area_struct *vma;
@@ -807,7 +808,8 @@ int svm_migrate_vram_to_ram(struct svm_range *prange, struct mm_struct *mm,
 		}
 
 		next = min(vma->vm_end, end);
-		r = svm_migrate_vma_to_ram(adev, prange, vma, addr, next, trigger);
+		r = svm_migrate_vma_to_ram(adev, prange, vma, addr, next, trigger,
+					   fault_page);
 		if (r < 0) {
 			pr_debug("failed %ld to migrate prange %p\n", r, prange);
 			break;
@@ -851,7 +853,7 @@ svm_migrate_vram_to_vram(struct svm_range *prange, uint32_t best_loc,
 	pr_debug("from gpu 0x%x to gpu 0x%x\n", prange->actual_loc, best_loc);
 
 	do {
-		r = svm_migrate_vram_to_ram(prange, mm, trigger);
+		r = svm_migrate_vram_to_ram(prange, mm, trigger, NULL);
 		if (r)
 			return r;
 	} while (prange->actual_loc && --retries);
@@ -938,7 +940,8 @@ static vm_fault_t svm_migrate_to_ram(struct vm_fault *vmf)
 		goto out_unlock_prange;
 	}
 
-	r = svm_migrate_vram_to_ram(prange, mm, KFD_MIGRATE_TRIGGER_PAGEFAULT_CPU);
+	r = svm_migrate_vram_to_ram(prange, mm, KFD_MIGRATE_TRIGGER_PAGEFAULT_CPU,
+				    vmf->page);
 	if (r)
 		pr_debug("failed %d migrate 0x%p [0x%lx 0x%lx] to ram\n", r,
 			 prange, prange->start, prange->last);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.h b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.h
index b3f0754..a5d7e6d 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.h
@@ -43,7 +43,7 @@ enum MIGRATION_COPY_DIR {
 int svm_migrate_to_vram(struct svm_range *prange, uint32_t best_loc,
 			struct mm_struct *mm, uint32_t trigger);
 int svm_migrate_vram_to_ram(struct svm_range *prange, struct mm_struct *mm,
-			    uint32_t trigger);
+			    uint32_t trigger, struct page *fault_page);
 unsigned long
 svm_migrate_addr_to_pfn(struct amdgpu_device *adev, unsigned long addr);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index 11074cc..9139e5a 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -2913,13 +2913,15 @@ svm_range_restore_pages(struct amdgpu_device *adev, unsigned int pasid,
 			 */
 			if (prange->actual_loc)
 				r = svm_migrate_vram_to_ram(prange, mm,
-					KFD_MIGRATE_TRIGGER_PAGEFAULT_GPU);
+					KFD_MIGRATE_TRIGGER_PAGEFAULT_GPU,
+					NULL);
 			else
 				r = 0;
 		}
 	} else {
 		r = svm_migrate_vram_to_ram(prange, mm,
-					KFD_MIGRATE_TRIGGER_PAGEFAULT_GPU);
+					KFD_MIGRATE_TRIGGER_PAGEFAULT_GPU,
+					NULL);
 	}
 	if (r) {
 		pr_debug("failed %d to migrate svms %p [0x%lx 0x%lx]\n",
@@ -3242,7 +3244,8 @@ svm_range_trigger_migration(struct mm_struct *mm, struct svm_range *prange,
 		return 0;
 
 	if (!best_loc) {
-		r = svm_migrate_vram_to_ram(prange, mm, KFD_MIGRATE_TRIGGER_PREFETCH);
+		r = svm_migrate_vram_to_ram(prange, mm,
+					KFD_MIGRATE_TRIGGER_PREFETCH, NULL);
 		*migrated = !r;
 		return r;
 	}
@@ -3303,7 +3306,7 @@ static void svm_range_evict_svm_bo_worker(struct work_struct *work)
 		mutex_lock(&prange->migrate_mutex);
 		do {
 			r = svm_migrate_vram_to_ram(prange, mm,
-					KFD_MIGRATE_TRIGGER_TTM_EVICTION);
+					KFD_MIGRATE_TRIGGER_TTM_EVICTION, NULL);
 		} while (!r && prange->actual_loc && --retries);
 
 		if (!r && prange->actual_loc)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 22c0a0c..82ffa47 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -62,6 +62,8 @@ extern const char *migrate_reason_names[MR_TYPES];
 #ifdef CONFIG_MIGRATION
 
 extern void putback_movable_pages(struct list_head *l);
+int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
+		struct folio *src, enum migrate_mode mode, int extra_count);
 int migrate_folio(struct address_space *mapping, struct folio *dst,
 		struct folio *src, enum migrate_mode mode);
 extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
@@ -212,6 +214,12 @@ struct migrate_vma {
 	 */
 	void			*pgmap_owner;
 	unsigned long		flags;
+
+	/*
+	 * Set to vmf->page if this is being called to migrate a page as part of
+	 * a migrate_to_ram() callback.
+	 */
+	struct page		*fault_page;
 };
 
 int migrate_vma_setup(struct migrate_vma *args);
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index e3965ca..89463ff 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -907,7 +907,7 @@ static int dmirror_migrate_to_system(struct dmirror *dmirror,
 	struct vm_area_struct *vma;
 	unsigned long src_pfns[64] = { 0 };
 	unsigned long dst_pfns[64] = { 0 };
-	struct migrate_vma args;
+	struct migrate_vma args = { 0 };
 	unsigned long next;
 	int ret;
@@ -968,7 +968,7 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 	unsigned long src_pfns[64] = { 0 };
 	unsigned long dst_pfns[64] = { 0 };
 	struct dmirror_bounce bounce;
-	struct migrate_vma args;
+	struct migrate_vma args = { 0 };
 	unsigned long next;
 	int ret;
@@ -1334,7 +1334,7 @@ static void dmirror_devmem_free(struct page *page)
 
 static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
 {
-	struct migrate_vma args;
+	struct migrate_vma args = { 0 };
 	unsigned long src_pfns = 0;
 	unsigned long dst_pfns = 0;
 	struct page *rpage;
@@ -1357,6 +1357,7 @@ static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
 	args.dst = &dst_pfns;
 	args.pgmap_owner = dmirror->mdevice;
 	args.flags = dmirror_select_device(dmirror);
+	args.fault_page = vmf->page;
 
 	if (migrate_vma_setup(&args))
 		return VM_FAULT_SIGBUS;
diff --git a/mm/memory.c b/mm/memory.c
index b994784..65d3977 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3742,7 +3742,21 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			ret = remove_device_exclusive_entry(vmf);
 		} else if (is_device_private_entry(entry)) {
 			vmf->page = pfn_swap_entry_to_page(entry);
-			ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);
+			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
+					vmf->address, &vmf->ptl);
+			if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte))) {
+				spin_unlock(vmf->ptl);
+				goto out;
+			}
+
+			/*
+			 * Get a page reference while we know the page can't be
+			 * freed.
+			 */
+			get_page(vmf->page);
+			pte_unmap_unlock(vmf->pte, vmf->ptl);
+			vmf->page->pgmap->ops->migrate_to_ram(vmf);
+			put_page(vmf->page);
 		} else if (is_hwpoison_entry(entry)) {
 			ret = VM_FAULT_HWPOISON;
 		} else if (is_swapin_error_entry(entry)) {
diff --git a/mm/migrate.c b/mm/migrate.c
index ce6a58f..e3f78a7 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -620,6 +620,25 @@ EXPORT_SYMBOL(folio_migrate_copy);
  *                    Migration functions
  ***********************************************************/
 
+int migrate_folio_extra(struct address_space *mapping, struct folio *dst,
+		struct folio *src, enum migrate_mode mode, int extra_count)
+{
+	int rc;
+
+	BUG_ON(folio_test_writeback(src));	/* Writeback must be complete */
+
+	rc = folio_migrate_mapping(mapping, dst, src, extra_count);
+
+	if (rc != MIGRATEPAGE_SUCCESS)
+		return rc;
+
+	if (mode != MIGRATE_SYNC_NO_COPY)
+		folio_migrate_copy(dst, src);
+	else
+		folio_migrate_flags(dst, src);
+	return MIGRATEPAGE_SUCCESS;
+}
+
 /**
  * migrate_folio() - Simple folio migration.
  * @mapping: The address_space containing the folio.
@@ -635,20 +654,7 @@ EXPORT_SYMBOL(folio_migrate_copy);
 int migrate_folio(struct address_space *mapping, struct folio *dst,
 		struct folio *src, enum migrate_mode mode)
 {
-	int rc;
-
-	BUG_ON(folio_test_writeback(src));	/* Writeback must be complete */
-
-	rc = folio_migrate_mapping(mapping, dst, src, 0);
-
-	if (rc != MIGRATEPAGE_SUCCESS)
-		return rc;
-
-	if (mode != MIGRATE_SYNC_NO_COPY)
-		folio_migrate_copy(dst, src);
-	else
-		folio_migrate_flags(dst, src);
-	return MIGRATEPAGE_SUCCESS;
+	return migrate_folio_extra(mapping, dst, src, mode, 0);
 }
 EXPORT_SYMBOL(migrate_folio);
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 7235424..f756c00 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -313,14 +313,14 @@ static void migrate_vma_collect(struct migrate_vma *migrate)
  * folio_migrate_mapping(), except that here we allow migration of a
  * ZONE_DEVICE page.
  */
-static bool migrate_vma_check_page(struct page *page)
+static bool migrate_vma_check_page(struct page *page, struct page *fault_page)
 {
 	/*
 	 * One extra ref because caller holds an extra reference, either from
 	 * isolate_lru_page() for a regular page, or migrate_vma_collect() for
 	 * a device page.
 	 */
-	int extra = 1;
+	int extra = 1 + (page == fault_page);
 
 	/*
 	 * FIXME support THP (transparent huge page), it is bit more complex to
@@ -393,7 +393,8 @@ static void migrate_vma_unmap(struct migrate_vma *migrate)
 		if (folio_mapped(folio))
 			try_to_migrate(folio, 0);
 
-		if (page_mapped(page) || !migrate_vma_check_page(page)) {
+		if (page_mapped(page) ||
+		    !migrate_vma_check_page(page, migrate->fault_page)) {
 			if (!is_zone_device_page(page)) {
 				get_page(page);
 				putback_lru_page(page);
@@ -505,6 +506,8 @@ int migrate_vma_setup(struct migrate_vma *args)
 		return -EINVAL;
 	if (!args->src || !args->dst)
 		return -EINVAL;
+	if (args->fault_page && !is_device_private_page(args->fault_page))
+		return -EINVAL;
 
 	memset(args->src, 0, sizeof(*args->src) * nr_pages);
 	args->cpages = 0;
@@ -735,8 +738,13 @@ void migrate_vma_pages(struct migrate_vma *migrate)
 			continue;
 		}
 
-		r = migrate_folio(mapping, page_folio(newpage),
-				page_folio(page), MIGRATE_SYNC_NO_COPY);
+		if (migrate->fault_page == page)
+			r = migrate_folio_extra(mapping, page_folio(newpage),
+						page_folio(page),
+						MIGRATE_SYNC_NO_COPY, 1);
+		else
+			r = migrate_folio(mapping, page_folio(newpage),
+					  page_folio(page), MIGRATE_SYNC_NO_COPY);
 		if (r != MIGRATEPAGE_SUCCESS)
 			migrate->src[i] &= ~MIGRATE_PFN_MIGRATE;
 	}

From patchwork Wed Sep 28 12:01:16 2022
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 12992186
From: Alistair Popple
To: Andrew Morton, linux-mm@kvack.org
Subject: [PATCH v2 2/8] mm: Free device private pages have zero refcount
Date: Wed, 28 Sep 2022 22:01:16 +1000
X-Mailer: git-send-email 2.35.1
Cc: Alex Sierra, Ralph Campbell, John Hubbard, nouveau@lists.freedesktop.org,
    Felix Kuehling, linux-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
    Jason Gunthorpe, Michael Ellerman, Alex Deucher, Dan Williams,
    Christian König, Ben Skeggs

Since 27674ef6c73f ("mm: remove the extra ZONE_DEVICE struct page
refcount"), device private pages no longer have an extra reference count
while the page is in use. However, before handing them back to the owning
device driver we add an extra reference count, so that free pages have a
reference count of one. This makes it difficult to tell whether a page is
free, because both free and in-use pages have a non-zero refcount.

Instead, return pages to the driver's page allocator with a zero reference
count. Kernel code can then safely use kernel functions such as
get_page_unless_zero().

Signed-off-by: Alistair Popple
Cc: Jason Gunthorpe
Cc: Michael Ellerman
Cc: Felix Kuehling
Cc: Alex Deucher
Cc: Christian König
Cc: Ben Skeggs
Cc: Lyude Paul
Cc: Ralph Campbell
Cc: Alex Sierra
Cc: John Hubbard
Cc: Dan Williams
Acked-by: Felix Kuehling

---

This will conflict with Dan's series to fix reference counts for DAX [1].
At the moment this only makes changes for device private and coherent
pages; however, if DAX is fixed to remove the extra refcount then we
should be able to drop the checks for private/coherent pages and treat
them the same.
[1] - https://lore.kernel.org/linux-mm/166329930818.2786261.6086109734008025807.stgit@dwillia2-xfh.jf.intel.com/

---
 arch/powerpc/kvm/book3s_hv_uvmem.c       | 2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 2 +-
 drivers/gpu/drm/nouveau/nouveau_dmem.c   | 2 +-
 include/linux/memremap.h                 | 1 +
 lib/test_hmm.c                           | 2 +-
 mm/memremap.c                            | 9 +++++++++
 mm/page_alloc.c                          | 8 ++++++++
 7 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index d4eacf4..9d8de68 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -718,7 +718,7 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
 	dpage = pfn_to_page(uvmem_pfn);
 	dpage->zone_device_data = pvt;
-	lock_page(dpage);
+	zone_device_page_init(dpage);
 	return dpage;
 out_clear:
 	spin_lock(&kvmppc_uvmem_bitmap_lock);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
index 776448b..97a6845 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
@@ -223,7 +223,7 @@ svm_migrate_get_vram_page(struct svm_range *prange, unsigned long pfn)
 	page = pfn_to_page(pfn);
 	svm_range_bo_ref(prange->svm_bo);
 	page->zone_device_data = prange->svm_bo;
-	lock_page(page);
+	zone_device_page_init(page);
 }

 static void
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 1635661..b092988 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -326,7 +326,7 @@ nouveau_dmem_page_alloc_locked(struct nouveau_drm *drm)
 		return NULL;
 	}

-	lock_page(page);
+	zone_device_page_init(page);
 	return page;
 }

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 1901049..f68bf6d 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -182,6 +182,7 @@ static inline bool folio_is_device_coherent(const struct folio *folio)
 }

 #ifdef CONFIG_ZONE_DEVICE
+void zone_device_page_init(struct page *page);
 void *memremap_pages(struct dev_pagemap *pgmap, int nid);
 void memunmap_pages(struct dev_pagemap *pgmap);
 void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 89463ff..688c15d 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -627,8 +627,8 @@ static struct page *dmirror_devmem_alloc_page(struct dmirror_device *mdevice)
 		goto error;
 	}

+	zone_device_page_init(dpage);
 	dpage->zone_device_data = rpage;
-	lock_page(dpage);
 	return dpage;

 error:
diff --git a/mm/memremap.c b/mm/memremap.c
index 25029a4..1c2c038 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -505,8 +505,17 @@ void free_zone_device_page(struct page *page)
 	/*
 	 * Reset the page count to 1 to prepare for handing out the page again.
 	 */
+	if (page->pgmap->type != MEMORY_DEVICE_PRIVATE &&
+	    page->pgmap->type != MEMORY_DEVICE_COHERENT)
+		set_page_count(page, 1);
+}
+
+void zone_device_page_init(struct page *page)
+{
 	set_page_count(page, 1);
+	lock_page(page);
 }
+EXPORT_SYMBOL_GPL(zone_device_page_init);

 #ifdef CONFIG_FS_DAX
 bool __put_devmap_managed_page_refs(struct page *page, int refs)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9d49803..4df1e43 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6744,6 +6744,14 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
 		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
 		cond_resched();
 	}
+
+	/*
+	 * ZONE_DEVICE pages are released directly to the driver page allocator
+	 * which will set the page count to 1 when allocating the page.
+	 */
+	if (pgmap->type == MEMORY_DEVICE_PRIVATE ||
+	    pgmap->type == MEMORY_DEVICE_COHERENT)
+		set_page_count(page, 0);
 }

From patchwork Wed Sep 28 12:01:17 2022
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 12992184
From: Alistair Popple
To: Andrew Morton, linux-mm@kvack.org
Subject: [PATCH v2 3/8] mm/memremap.c: Take a pgmap reference on page allocation
Date: Wed, 28 Sep 2022 22:01:17 +1000
Message-Id: <12d155ec727935ebfbb4d639a03ab374917ea51b.1664366292.git-series.apopple@nvidia.com>
X-Mailer: git-send-email 2.35.1
ZONE_DEVICE pages have a struct dev_pagemap which is allocated by a driver. When the struct page is first allocated by the kernel in memremap_pages() a reference is taken on the associated pagemap to ensure it is not freed prior to the pages being freed.

Prior to 27674ef6c73f ("mm: remove the extra ZONE_DEVICE struct page refcount") pages were considered free and returned to the driver when the reference count dropped to one. However, the pagemap reference was not dropped until the page reference count hit zero. This would occur as part of the final put_page() in memunmap_pages(), which would wait for all pages to be freed prior to returning.

When the extra refcount was removed, the pagemap reference was no longer being dropped in put_page(). Instead memunmap_pages() was changed to explicitly drop the pagemap references. This means that memunmap_pages() can complete even though pages are still mapped by the kernel, which can lead to kernel crashes, particularly if a driver frees the pagemap.

To fix this, drivers should take a pagemap reference when allocating the page. This reference can then be returned when the page is freed.

Signed-off-by: Alistair Popple
Fixes: 27674ef6c73f ("mm: remove the extra ZONE_DEVICE struct page refcount")
Cc: Jason Gunthorpe
Cc: Felix Kuehling
Cc: Alex Deucher
Cc: Christian König
Cc: Ben Skeggs
Cc: Lyude Paul
Cc: Ralph Campbell
Cc: Alex Sierra
Cc: John Hubbard
Cc: Dan Williams

---
Again I expect this will conflict with Dan's series. This implements the first suggestion from Jason at https://lore.kernel.org/linux-mm/YzLy5jJOF0jdlrJK@nvidia.com/ so whatever we end up doing for DAX we should do the same here.
---
 mm/memremap.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/mm/memremap.c b/mm/memremap.c
index 1c2c038..421bec3 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -138,8 +138,11 @@ void memunmap_pages(struct dev_pagemap *pgmap)
 	int i;

 	percpu_ref_kill(&pgmap->ref);
-	for (i = 0; i < pgmap->nr_range; i++)
-		percpu_ref_put_many(&pgmap->ref, pfn_len(pgmap, i));
+	if (pgmap->type != MEMORY_DEVICE_PRIVATE &&
+	    pgmap->type != MEMORY_DEVICE_COHERENT)
+		for (i = 0; i < pgmap->nr_range; i++)
+			percpu_ref_put_many(&pgmap->ref, pfn_len(pgmap, i));
+
 	wait_for_completion(&pgmap->done);

 	for (i = 0; i < pgmap->nr_range; i++)
@@ -264,7 +267,9 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
 	memmap_init_zone_device(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
 				PHYS_PFN(range->start),
 				PHYS_PFN(range_len(range)), pgmap);
-	percpu_ref_get_many(&pgmap->ref, pfn_len(pgmap, range_id));
+	if (pgmap->type != MEMORY_DEVICE_PRIVATE &&
+	    pgmap->type != MEMORY_DEVICE_COHERENT)
+		percpu_ref_get_many(&pgmap->ref, pfn_len(pgmap, range_id));
 	return 0;

 err_add_memory:
@@ -502,16 +507,24 @@ void free_zone_device_page(struct page *page)
 	page->mapping = NULL;
 	page->pgmap->ops->page_free(page);

-	/*
-	 * Reset the page count to 1 to prepare for handing out the page again.
-	 */
 	if (page->pgmap->type != MEMORY_DEVICE_PRIVATE &&
 	    page->pgmap->type != MEMORY_DEVICE_COHERENT)
+		/*
+		 * Reset the page count to 1 to prepare for handing out the page
+		 * again.
+		 */
 		set_page_count(page, 1);
+	else
+		put_dev_pagemap(page->pgmap);
 }

 void zone_device_page_init(struct page *page)
 {
+	/*
+	 * Drivers shouldn't be allocating pages after calling
+	 * memunmap_pages().
+	 */
+	WARN_ON_ONCE(!percpu_ref_tryget_live(&page->pgmap->ref));
 	set_page_count(page, 1);
 	lock_page(page);
 }

From patchwork Wed Sep 28 12:01:18 2022
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 12992187
From: Alistair Popple
To: Andrew Morton, linux-mm@kvack.org
Subject: [PATCH v2 4/8] mm/migrate_device.c: Refactor migrate_vma and migrate_device_coherent_page()
Date: Wed, 28 Sep 2022 22:01:18 +1000
Cc: Ralph Campbell, Matthew Wilcox, John Hubbard, David Hildenbrand, nouveau@lists.freedesktop.org, Yang Shi, Alistair Popple, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org, Zi Yan, "Huang, Ying"

migrate_device_coherent_page() reuses the existing migrate_vma family of functions to migrate a specific page without providing a valid mapping or vma. This looks a bit odd because it means we are calling migrate_vma_*() without setting a valid vma; however, it was considered acceptable at the time because the details were internal to migrate_device.c and there was only a single user.

One of the reasons the details could be kept internal was that this was strictly for migrating device coherent memory. Such memory can be copied directly by the CPU without intervention from a driver. However, this isn't true for device private memory, and a future change requires similar functionality for device private memory.
So refactor the code into something more sensible for migrating device memory without a vma. Signed-off-by: Alistair Popple Cc: "Huang, Ying" Cc: Zi Yan Cc: Matthew Wilcox Cc: Yang Shi Cc: David Hildenbrand Cc: Ralph Campbell Cc: John Hubbard --- mm/migrate_device.c | 150 +++++++++++++++++++++++++-------------------- 1 file changed, 85 insertions(+), 65 deletions(-) diff --git a/mm/migrate_device.c b/mm/migrate_device.c index f756c00..ba479b5 100644 --- a/mm/migrate_device.c +++ b/mm/migrate_device.c @@ -345,26 +345,20 @@ static bool migrate_vma_check_page(struct page *page, struct page *fault_page) } /* - * migrate_vma_unmap() - replace page mapping with special migration pte entry - * @migrate: migrate struct containing all migration information - * - * Isolate pages from the LRU and replace mappings (CPU page table pte) with a - * special migration pte entry and check if it has been pinned. Pinned pages are - * restored because we cannot migrate them. - * - * This is the last step before we call the device driver callback to allocate - * destination memory and copy contents of original page over to new page. + * Unmaps pages for migration. Returns number of unmapped pages. 
*/ -static void migrate_vma_unmap(struct migrate_vma *migrate) +static unsigned long migrate_device_unmap(unsigned long *src_pfns, + unsigned long npages, + struct page *fault_page) { - const unsigned long npages = migrate->npages; unsigned long i, restore = 0; bool allow_drain = true; + unsigned long unmapped = 0; lru_add_drain(); for (i = 0; i < npages; i++) { - struct page *page = migrate_pfn_to_page(migrate->src[i]); + struct page *page = migrate_pfn_to_page(src_pfns[i]); struct folio *folio; if (!page) @@ -379,8 +373,7 @@ static void migrate_vma_unmap(struct migrate_vma *migrate) } if (isolate_lru_page(page)) { - migrate->src[i] &= ~MIGRATE_PFN_MIGRATE; - migrate->cpages--; + src_pfns[i] &= ~MIGRATE_PFN_MIGRATE; restore++; continue; } @@ -394,34 +387,54 @@ static void migrate_vma_unmap(struct migrate_vma *migrate) try_to_migrate(folio, 0); if (page_mapped(page) || - !migrate_vma_check_page(page, migrate->fault_page)) { + !migrate_vma_check_page(page, fault_page)) { if (!is_zone_device_page(page)) { get_page(page); putback_lru_page(page); } - migrate->src[i] &= ~MIGRATE_PFN_MIGRATE; - migrate->cpages--; + src_pfns[i] &= ~MIGRATE_PFN_MIGRATE; restore++; continue; } + + unmapped++; } for (i = 0; i < npages && restore; i++) { - struct page *page = migrate_pfn_to_page(migrate->src[i]); + struct page *page = migrate_pfn_to_page(src_pfns[i]); struct folio *folio; - if (!page || (migrate->src[i] & MIGRATE_PFN_MIGRATE)) + if (!page || (src_pfns[i] & MIGRATE_PFN_MIGRATE)) continue; folio = page_folio(page); remove_migration_ptes(folio, folio, false); - migrate->src[i] = 0; + src_pfns[i] = 0; folio_unlock(folio); folio_put(folio); restore--; } + + return unmapped; +} + +/* + * migrate_vma_unmap() - replace page mapping with special migration pte entry + * @migrate: migrate struct containing all migration information + * + * Isolate pages from the LRU and replace mappings (CPU page table pte) with a + * special migration pte entry and check if it has been pinned. 
Pinned pages are + * restored because we cannot migrate them. + * + * This is the last step before we call the device driver callback to allocate + * destination memory and copy contents of original page over to new page. + */ +static void migrate_vma_unmap(struct migrate_vma *migrate) +{ + migrate->cpages = migrate_device_unmap(migrate->src, migrate->npages, + migrate->fault_page); } /** @@ -668,41 +681,36 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate, *src &= ~MIGRATE_PFN_MIGRATE; } -/** - * migrate_vma_pages() - migrate meta-data from src page to dst page - * @migrate: migrate struct containing all migration information - * - * This migrates struct page meta-data from source struct page to destination - * struct page. This effectively finishes the migration from source page to the - * destination page. - */ -void migrate_vma_pages(struct migrate_vma *migrate) +static void migrate_device_pages(unsigned long *src_pfns, + unsigned long *dst_pfns, unsigned long npages, + struct migrate_vma *migrate) { - const unsigned long npages = migrate->npages; - const unsigned long start = migrate->start; struct mmu_notifier_range range; - unsigned long addr, i; + unsigned long i; bool notified = false; - for (i = 0, addr = start; i < npages; addr += PAGE_SIZE, i++) { - struct page *newpage = migrate_pfn_to_page(migrate->dst[i]); - struct page *page = migrate_pfn_to_page(migrate->src[i]); + for (i = 0; i < npages; i++) { + struct page *newpage = migrate_pfn_to_page(dst_pfns[i]); + struct page *page = migrate_pfn_to_page(src_pfns[i]); struct address_space *mapping; int r; if (!newpage) { - migrate->src[i] &= ~MIGRATE_PFN_MIGRATE; + src_pfns[i] &= ~MIGRATE_PFN_MIGRATE; continue; } if (!page) { + unsigned long addr; + /* * The only time there is no vma is when called from * migrate_device_coherent_page(). However this isn't * called if the page could not be unmapped. 
*/ - VM_BUG_ON(!migrate->vma); - if (!(migrate->src[i] & MIGRATE_PFN_MIGRATE)) + VM_BUG_ON(!migrate); + addr = migrate->start + i*PAGE_SIZE; + if (!(src_pfns[i] & MIGRATE_PFN_MIGRATE)) continue; if (!notified) { notified = true; @@ -714,7 +722,7 @@ void migrate_vma_pages(struct migrate_vma *migrate) mmu_notifier_invalidate_range_start(&range); } migrate_vma_insert_page(migrate, addr, newpage, - &migrate->src[i]); + &src_pfns[i]); continue; } @@ -727,18 +735,18 @@ void migrate_vma_pages(struct migrate_vma *migrate) * device private or coherent memory. */ if (mapping) { - migrate->src[i] &= ~MIGRATE_PFN_MIGRATE; + src_pfns[i] &= ~MIGRATE_PFN_MIGRATE; continue; } } else if (is_zone_device_page(newpage)) { /* * Other types of ZONE_DEVICE page are not supported. */ - migrate->src[i] &= ~MIGRATE_PFN_MIGRATE; + src_pfns[i] &= ~MIGRATE_PFN_MIGRATE; continue; } - if (migrate->fault_page == page) + if (migrate && migrate->fault_page == page) r = migrate_folio_extra(mapping, page_folio(newpage), page_folio(page), MIGRATE_SYNC_NO_COPY, 1); @@ -746,7 +754,7 @@ void migrate_vma_pages(struct migrate_vma *migrate) r = migrate_folio(mapping, page_folio(newpage), page_folio(page), MIGRATE_SYNC_NO_COPY); if (r != MIGRATEPAGE_SUCCESS) - migrate->src[i] &= ~MIGRATE_PFN_MIGRATE; + src_pfns[i] &= ~MIGRATE_PFN_MIGRATE; } /* @@ -757,28 +765,30 @@ void migrate_vma_pages(struct migrate_vma *migrate) if (notified) mmu_notifier_invalidate_range_only_end(&range); } -EXPORT_SYMBOL(migrate_vma_pages); /** - * migrate_vma_finalize() - restore CPU page table entry + * migrate_vma_pages() - migrate meta-data from src page to dst page * @migrate: migrate struct containing all migration information * - * This replaces the special migration pte entry with either a mapping to the - * new page if migration was successful for that page, or to the original page - * otherwise. - * - * This also unlocks the pages and puts them back on the lru, or drops the extra - * refcount, for device pages. 
+ * This migrates struct page meta-data from source struct page to destination
+ * struct page. This effectively finishes the migration from source page to the
+ * destination page.
  */
-void migrate_vma_finalize(struct migrate_vma *migrate)
+void migrate_vma_pages(struct migrate_vma *migrate)
+{
+	migrate_device_pages(migrate->src, migrate->dst, migrate->npages, migrate);
+}
+EXPORT_SYMBOL(migrate_vma_pages);
+
+static void migrate_device_finalize(unsigned long *src_pfns,
+		unsigned long *dst_pfns, unsigned long npages)
 {
-	const unsigned long npages = migrate->npages;
 	unsigned long i;
 
 	for (i = 0; i < npages; i++) {
 		struct folio *dst, *src;
-		struct page *newpage = migrate_pfn_to_page(migrate->dst[i]);
-		struct page *page = migrate_pfn_to_page(migrate->src[i]);
+		struct page *newpage = migrate_pfn_to_page(dst_pfns[i]);
+		struct page *page = migrate_pfn_to_page(src_pfns[i]);
 
 		if (!page) {
 			if (newpage) {
@@ -788,7 +798,7 @@ void migrate_vma_finalize(struct migrate_vma *migrate)
 			continue;
 		}
 
-		if (!(migrate->src[i] & MIGRATE_PFN_MIGRATE) || !newpage) {
+		if (!(src_pfns[i] & MIGRATE_PFN_MIGRATE) || !newpage) {
 			if (newpage) {
 				unlock_page(newpage);
 				put_page(newpage);
@@ -815,6 +825,22 @@ void migrate_vma_finalize(struct migrate_vma *migrate)
 		}
 	}
 }
+
+/**
+ * migrate_vma_finalize() - restore CPU page table entry
+ * @migrate: migrate struct containing all migration information
+ *
+ * This replaces the special migration pte entry with either a mapping to the
+ * new page if migration was successful for that page, or to the original page
+ * otherwise.
+ *
+ * This also unlocks the pages and puts them back on the lru, or drops the extra
+ * refcount, for device pages.
+ */
+void migrate_vma_finalize(struct migrate_vma *migrate)
+{
+	migrate_device_finalize(migrate->src, migrate->dst, migrate->npages);
+}
 EXPORT_SYMBOL(migrate_vma_finalize);
 
 /*
@@ -825,25 +851,19 @@ EXPORT_SYMBOL(migrate_vma_finalize);
 int migrate_device_coherent_page(struct page *page)
 {
 	unsigned long src_pfn, dst_pfn = 0;
-	struct migrate_vma args;
 	struct page *dpage;
 
 	WARN_ON_ONCE(PageCompound(page));
 
 	lock_page(page);
 	src_pfn = migrate_pfn(page_to_pfn(page)) | MIGRATE_PFN_MIGRATE;
-	args.src = &src_pfn;
-	args.dst = &dst_pfn;
-	args.cpages = 1;
-	args.npages = 1;
-	args.vma = NULL;
 
 	/*
 	 * We don't have a VMA and don't need to walk the page tables to find
 	 * the source page. So call migrate_vma_unmap() directly to unmap the
 	 * page as migrate_vma_setup() will fail if args.vma == NULL.
 	 */
-	migrate_vma_unmap(&args);
+	migrate_device_unmap(&src_pfn, 1, NULL);
 	if (!(src_pfn & MIGRATE_PFN_MIGRATE))
 		return -EBUSY;
 
@@ -853,10 +873,10 @@ int migrate_device_coherent_page(struct page *page)
 		dst_pfn = migrate_pfn(page_to_pfn(dpage));
 	}
 
-	migrate_vma_pages(&args);
+	migrate_device_pages(&src_pfn, &dst_pfn, 1, NULL);
 	if (src_pfn & MIGRATE_PFN_MIGRATE)
 		copy_highpage(dpage, page);
-	migrate_vma_finalize(&args);
+	migrate_device_finalize(&src_pfn, &dst_pfn, 1);
 
 	if (src_pfn & MIGRATE_PFN_MIGRATE)
 		return 0;

From patchwork Wed Sep 28 12:01:19 2022
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 12992188
From: Alistair Popple
To: Andrew Morton, linux-mm@kvack.org
Subject: [PATCH v2 5/8] mm/migrate_device.c: Add migrate_device_range()
Date: Wed, 28 Sep 2022 22:01:19 +1000
Message-Id: <868116aab70b0c8ee467d62498bb2cf0ef907295.1664366292.git-series.apopple@nvidia.com>
Cc: Ralph Campbell, Matthew Wilcox, John Hubbard, David Hildenbrand,
    nouveau@lists.freedesktop.org, Yang Shi, Alistair Popple,
    linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
    amd-gfx@lists.freedesktop.org, Zi Yan, "Huang, Ying"

Device drivers can use the migrate_vma family of functions to migrate existing
private anonymous mappings to device private pages. These pages are backed by
memory on the device with drivers being responsible for copying data to and
from device memory.

Device private pages are freed via the pgmap->page_free() callback when they
are unmapped and their refcount drops to zero. Alternatively they may be freed
indirectly via migration back to CPU memory in response to a
pgmap->migrate_to_ram() callback called whenever the CPU accesses an address
mapped to a device private page.

In other words drivers cannot control the lifetime of data allocated on the
devices and must wait until these pages are freed from userspace. This causes
issues when memory needs to be reclaimed on the device, either because the
device is going away due to a ->release() callback or because another user
needs to use the memory.

Drivers could use the existing migrate_vma functions to migrate data off the
device. However this would require them to track the mappings of each page,
which is both complicated and not always possible.
Instead, drivers need to be able to migrate device pages directly so they can
free up device memory. To allow that, this patch introduces the migrate_device
family of functions, which are functionally similar to migrate_vma but which
skip the initial lookup based on mapping.

Signed-off-by: Alistair Popple
Cc: "Huang, Ying"
Cc: Zi Yan
Cc: Matthew Wilcox
Cc: Yang Shi
Cc: David Hildenbrand
Cc: Ralph Campbell
Cc: John Hubbard
---
 include/linux/migrate.h |  7 +++-
 mm/migrate_device.c     | 89 ++++++++++++++++++++++++++++++++++++++----
 2 files changed, 89 insertions(+), 7 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 82ffa47..582cdc7 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -225,6 +225,13 @@ struct migrate_vma {
 int migrate_vma_setup(struct migrate_vma *args);
 void migrate_vma_pages(struct migrate_vma *migrate);
 void migrate_vma_finalize(struct migrate_vma *migrate);
+int migrate_device_range(unsigned long *src_pfns, unsigned long start,
+			unsigned long npages);
+void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns,
+			unsigned long npages);
+void migrate_device_finalize(unsigned long *src_pfns,
+			unsigned long *dst_pfns, unsigned long npages);
+
 #endif /* CONFIG_MIGRATION */
 #endif /* _LINUX_MIGRATE_H */

diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index ba479b5..824860a 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -681,7 +681,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 	*src &= ~MIGRATE_PFN_MIGRATE;
 }
 
-static void migrate_device_pages(unsigned long *src_pfns,
+static void __migrate_device_pages(unsigned long *src_pfns,
 		unsigned long *dst_pfns, unsigned long npages,
 		struct migrate_vma *migrate)
 {
@@ -703,6 +703,9 @@ static void migrate_device_pages(unsigned long *src_pfns,
 		if (!page) {
 			unsigned long addr;
 
+			if (!(src_pfns[i] & MIGRATE_PFN_MIGRATE))
+				continue;
+
 			/*
 			 * The only time there is no vma is when called from
 			 * migrate_device_coherent_page(). However this isn't
@@ -710,8 +713,6 @@ static void migrate_device_pages(unsigned long *src_pfns,
 			 */
 			VM_BUG_ON(!migrate);
 			addr = migrate->start + i*PAGE_SIZE;
-			if (!(src_pfns[i] & MIGRATE_PFN_MIGRATE))
-				continue;
 
 			if (!notified) {
 				notified = true;
@@ -767,6 +768,22 @@ static void migrate_device_pages(unsigned long *src_pfns,
 }
 
 /**
+ * migrate_device_pages() - migrate meta-data from src page to dst page
+ * @src_pfns: src_pfns returned from migrate_device_range()
+ * @dst_pfns: array of pfns allocated by the driver to migrate memory to
+ * @npages: number of pages in the range
+ *
+ * Equivalent to migrate_vma_pages(). This is called to migrate struct page
+ * meta-data from source struct page to destination.
+ */
+void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns,
+			unsigned long npages)
+{
+	__migrate_device_pages(src_pfns, dst_pfns, npages, NULL);
+}
+EXPORT_SYMBOL(migrate_device_pages);
+
+/**
  * migrate_vma_pages() - migrate meta-data from src page to dst page
  * @migrate: migrate struct containing all migration information
  *
@@ -776,12 +793,22 @@ static void migrate_device_pages(unsigned long *src_pfns,
  */
 void migrate_vma_pages(struct migrate_vma *migrate)
 {
-	migrate_device_pages(migrate->src, migrate->dst, migrate->npages, migrate);
+	__migrate_device_pages(migrate->src, migrate->dst, migrate->npages, migrate);
 }
 EXPORT_SYMBOL(migrate_vma_pages);
 
-static void migrate_device_finalize(unsigned long *src_pfns,
-		unsigned long *dst_pfns, unsigned long npages)
+/*
+ * migrate_device_finalize() - complete page migration
+ * @src_pfns: src_pfns returned from migrate_device_range()
+ * @dst_pfns: array of pfns allocated by the driver to migrate memory to
+ * @npages: number of pages in the range
+ *
+ * Completes migration of the page by removing special migration entries.
+ * Drivers must ensure copying of page data is complete and visible to the CPU
+ * before calling this.
+ */
+void migrate_device_finalize(unsigned long *src_pfns,
+		unsigned long *dst_pfns, unsigned long npages)
 {
 	unsigned long i;
 
@@ -825,6 +852,7 @@ static void migrate_device_finalize(unsigned long *src_pfns,
 		}
 	}
 }
+EXPORT_SYMBOL(migrate_device_finalize);
 
 /**
  * migrate_vma_finalize() - restore CPU page table entry
@@ -843,6 +871,53 @@ void migrate_vma_finalize(struct migrate_vma *migrate)
 }
 EXPORT_SYMBOL(migrate_vma_finalize);
 
+/**
+ * migrate_device_range() - migrate device private pfns to normal memory.
+ * @src_pfns: array large enough to hold migrating source device private pfns.
+ * @start: starting pfn in the range to migrate.
+ * @npages: number of pages to migrate.
+ *
+ * migrate_device_range() is similar in concept to migrate_vma_setup() except
+ * that instead of looking up pages based on virtual address mappings a range
+ * of device pfns that should be migrated to system memory is used instead.
+ *
+ * This is useful when a driver needs to free device memory but doesn't know the
+ * virtual mappings of every page that may be in device memory. For example this
+ * is often the case when a driver is being unloaded or unbound from a device.
+ *
+ * Like migrate_vma_setup() this function will take a reference and lock any
+ * migrating pages that aren't free before unmapping them. Drivers may then
+ * allocate destination pages and start copying data from the device to CPU
+ * memory before calling migrate_device_pages().
+ */
+int migrate_device_range(unsigned long *src_pfns, unsigned long start,
+			unsigned long npages)
+{
+	unsigned long i, pfn;
+
+	for (pfn = start, i = 0; i < npages; pfn++, i++) {
+		struct page *page = pfn_to_page(pfn);
+
+		if (!get_page_unless_zero(page)) {
+			src_pfns[i] = 0;
+			continue;
+		}
+
+		if (!trylock_page(page)) {
+			src_pfns[i] = 0;
+			put_page(page);
+			continue;
+		}
+
+		src_pfns[i] = migrate_pfn(pfn) | MIGRATE_PFN_MIGRATE;
+	}
+
+	migrate_device_unmap(src_pfns, npages, NULL);
+
+	return 0;
+}
+EXPORT_SYMBOL(migrate_device_range);
+
 /*
  * Migrate a device coherent page back to normal memory. The caller should have
  * a reference on page which will be copied to the new page if migration is
@@ -873,7 +948,7 @@ int migrate_device_coherent_page(struct page *page)
 		dst_pfn = migrate_pfn(page_to_pfn(dpage));
 	}
 
-	migrate_device_pages(&src_pfn, &dst_pfn, 1, NULL);
+	migrate_device_pages(&src_pfn, &dst_pfn, 1);
 	if (src_pfn & MIGRATE_PFN_MIGRATE)
 		copy_highpage(dpage, page);
 	migrate_device_finalize(&src_pfn, &dst_pfn, 1);

From patchwork Wed Sep 28 12:01:20 2022
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 12992189
From: Alistair Popple
To: Andrew Morton, linux-mm@kvack.org
Subject: [PATCH v2 6/8] nouveau/dmem: Refactor nouveau_dmem_fault_copy_one()
Date: Wed, 28 Sep 2022 22:01:20 +1000
Message-Id: <20573d7b4e641a78fde9935f948e64e71c9e709e.1664366292.git-series.apopple@nvidia.com>
Cc: Ralph Campbell, nouveau@lists.freedesktop.org, Alistair Popple,
    linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
    amd-gfx@lists.freedesktop.org, John Hubbard, Ben Skeggs

nouveau_dmem_fault_copy_one() is used during handling of CPU faults via the
migrate_to_ram() callback and is used to copy data from GPU to CPU memory. It
is currently specific to fault handling; however, a future patch implementing
eviction of data during teardown needs similar functionality.

Refactor out the core functionality so that it is not specific to fault
handling.
Signed-off-by: Alistair Popple
Reviewed-by: Lyude Paul
Cc: Ben Skeggs
Cc: Ralph Campbell
Cc: John Hubbard
---
 drivers/gpu/drm/nouveau/nouveau_dmem.c | 58 +++++++++++++--------------
 1 file changed, 28 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index b092988..65f51fb 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -139,44 +139,24 @@ static void nouveau_dmem_fence_done(struct nouveau_fence **fence)
 	}
 }
 
-static vm_fault_t nouveau_dmem_fault_copy_one(struct nouveau_drm *drm,
-		struct vm_fault *vmf, struct migrate_vma *args,
-		dma_addr_t *dma_addr)
+static int nouveau_dmem_copy_one(struct nouveau_drm *drm, struct page *spage,
+		struct page *dpage, dma_addr_t *dma_addr)
 {
 	struct device *dev = drm->dev->dev;
-	struct page *dpage, *spage;
-	struct nouveau_svmm *svmm;
-
-	spage = migrate_pfn_to_page(args->src[0]);
-	if (!spage || !(args->src[0] & MIGRATE_PFN_MIGRATE))
-		return 0;
 
-	dpage = alloc_page_vma(GFP_HIGHUSER, vmf->vma, vmf->address);
-	if (!dpage)
-		return VM_FAULT_SIGBUS;
 	lock_page(dpage);
 
 	*dma_addr = dma_map_page(dev, dpage, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
 	if (dma_mapping_error(dev, *dma_addr))
-		goto error_free_page;
+		return -EIO;
 
-	svmm = spage->zone_device_data;
-	mutex_lock(&svmm->mutex);
-	nouveau_svmm_invalidate(svmm, args->start, args->end);
 	if (drm->dmem->migrate.copy_func(drm, 1, NOUVEAU_APER_HOST, *dma_addr,
-			NOUVEAU_APER_VRAM, nouveau_dmem_page_addr(spage)))
-		goto error_dma_unmap;
-	mutex_unlock(&svmm->mutex);
+			NOUVEAU_APER_VRAM, nouveau_dmem_page_addr(spage))) {
+		dma_unmap_page(dev, *dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
+		return -EIO;
+	}
 
-	args->dst[0] = migrate_pfn(page_to_pfn(dpage));
 	return 0;
-
-error_dma_unmap:
-	mutex_unlock(&svmm->mutex);
-	dma_unmap_page(dev, *dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
-error_free_page:
-	__free_page(dpage);
-	return VM_FAULT_SIGBUS;
 }
 
 static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
@@ -184,9 +164,11 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
 	struct nouveau_drm *drm = page_to_drm(vmf->page);
 	struct nouveau_dmem *dmem = drm->dmem;
 	struct nouveau_fence *fence;
+	struct nouveau_svmm *svmm;
+	struct page *spage, *dpage;
 	unsigned long src = 0, dst = 0;
 	dma_addr_t dma_addr = 0;
-	vm_fault_t ret;
+	vm_fault_t ret = 0;
 	struct migrate_vma args = {
 		.vma = vmf->vma,
 		.start = vmf->address,
@@ -207,9 +189,25 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
 	if (!args.cpages)
 		return 0;
 
-	ret = nouveau_dmem_fault_copy_one(drm, vmf, &args, &dma_addr);
-	if (ret || dst == 0)
+	spage = migrate_pfn_to_page(src);
+	if (!spage || !(src & MIGRATE_PFN_MIGRATE))
+		goto done;
+
+	dpage = alloc_page_vma(GFP_HIGHUSER, vmf->vma, vmf->address);
+	if (!dpage)
+		goto done;
+
+	dst = migrate_pfn(page_to_pfn(dpage));
+
+	svmm = spage->zone_device_data;
+	mutex_lock(&svmm->mutex);
+	nouveau_svmm_invalidate(svmm, args.start, args.end);
+	ret = nouveau_dmem_copy_one(drm, spage, dpage, &dma_addr);
+	mutex_unlock(&svmm->mutex);
+	if (ret) {
+		ret = VM_FAULT_SIGBUS;
 		goto done;
+	}
 
 	nouveau_fence_new(dmem->migrate.chan, false, &fence);
 	migrate_vma_pages(&args);

From patchwork Wed Sep 28 12:01:21 2022
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 12992190
From: Alistair Popple
To: Andrew Morton, linux-mm@kvack.org
Subject: [PATCH v2 7/8] nouveau/dmem: Evict device private memory during release
Date: Wed, 28 Sep 2022 22:01:21 +1000
Message-Id: <66277601fb8fda9af408b33da9887192bf895bda.1664366292.git-series.apopple@nvidia.com>
X-Mailer: git-send-email 2.35.1
Cc: Ralph Campbell, nouveau@lists.freedesktop.org, Alistair Popple,
 linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
 amd-gfx@lists.freedesktop.org, John Hubbard, Ben Skeggs

When the module is unloaded or a GPU is unbound from the module it is
possible for device private pages to still be mapped in currently running
processes. This can lead to hangs and RCU stall warnings when unbinding
the device, as memunmap_pages() will wait in an uninterruptible state
until all device pages have been freed, which may never happen.

Fix this by migrating device mappings back to normal CPU memory prior to
freeing the GPU memory chunks and associated device private pages.

Signed-off-by: Alistair Popple
Cc: Lyude Paul
Cc: Ben Skeggs
Cc: Ralph Campbell
Cc: John Hubbard
Reviewed-by: Lyude Paul
---
 drivers/gpu/drm/nouveau/nouveau_dmem.c | 48 +++++++++++++++++++++++++++-
 1 file changed, 48 insertions(+)

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 65f51fb..5fe2091 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -367,6 +367,52 @@ nouveau_dmem_suspend(struct nouveau_drm *drm)
 	mutex_unlock(&drm->dmem->mutex);
 }

+/*
+ * Evict all pages mapping a chunk.
+ */
+static void
+nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
+{
+	unsigned long i, npages = range_len(&chunk->pagemap.range) >> PAGE_SHIFT;
+	unsigned long *src_pfns, *dst_pfns;
+	dma_addr_t *dma_addrs;
+	struct nouveau_fence *fence;
+
+	src_pfns = kcalloc(npages, sizeof(*src_pfns), GFP_KERNEL);
+	dst_pfns = kcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL);
+	dma_addrs = kcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL);
+
+	migrate_device_range(src_pfns, chunk->pagemap.range.start >> PAGE_SHIFT,
+			npages);
+
+	for (i = 0; i < npages; i++) {
+		if (src_pfns[i] & MIGRATE_PFN_MIGRATE) {
+			struct page *dpage;
+
+			/*
+			 * __GFP_NOFAIL because the GPU is going away and there
+			 * is nothing sensible we can do if we can't copy the
+			 * data back.
+			 */
+			dpage = alloc_page(GFP_HIGHUSER | __GFP_NOFAIL);
+			dst_pfns[i] = migrate_pfn(page_to_pfn(dpage));
+			nouveau_dmem_copy_one(chunk->drm,
+					migrate_pfn_to_page(src_pfns[i]), dpage,
+					&dma_addrs[i]);
+		}
+	}
+
+	nouveau_fence_new(chunk->drm->dmem->migrate.chan, false, &fence);
+	migrate_device_pages(src_pfns, dst_pfns, npages);
+	nouveau_dmem_fence_done(&fence);
+	migrate_device_finalize(src_pfns, dst_pfns, npages);
+	kfree(src_pfns);
+	kfree(dst_pfns);
+	for (i = 0; i < npages; i++)
+		dma_unmap_page(chunk->drm->dev->dev, dma_addrs[i], PAGE_SIZE, DMA_BIDIRECTIONAL);
+	kfree(dma_addrs);
+}
+
 void
 nouveau_dmem_fini(struct nouveau_drm *drm)
 {
@@ -378,8 +424,10 @@ nouveau_dmem_fini(struct nouveau_drm *drm)
 	mutex_lock(&drm->dmem->mutex);
 	list_for_each_entry_safe(chunk, tmp, &drm->dmem->chunks, list) {
+		nouveau_dmem_evict_chunk(chunk);
 		nouveau_bo_unpin(chunk->bo);
 		nouveau_bo_ref(NULL, &chunk->bo);
+		WARN_ON(chunk->callocated);
 		list_del(&chunk->list);
 		memunmap_pages(&chunk->pagemap);
 		release_mem_region(chunk->pagemap.range.start,

From patchwork Wed Sep 28 12:01:22 2022
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 12992191
From: Alistair Popple
To: Andrew Morton, linux-mm@kvack.org
Subject: [PATCH v2 8/8] hmm-tests: Add test for migrate_device_range()
Date: Wed, 28 Sep 2022 22:01:22 +1000
X-Mailer: git-send-email 2.35.1
Cc: Alex Sierra, Ralph Campbell, nouveau@lists.freedesktop.org,
 Felix Kuehling, Alistair Popple, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
 Jason Gunthorpe, John Hubbard

Signed-off-by: Alistair Popple
Cc: Jason Gunthorpe
Cc: Ralph Campbell
Cc: John Hubbard
Cc: Alex Sierra
Cc: Felix Kuehling
---
 lib/test_hmm.c                         | 120 +++++++++++++++++++++-----
 lib/test_hmm_uapi.h                    |   1 +-
 tools/testing/selftests/vm/hmm-tests.c |  49 +++++++++++-
 3 files changed, 149 insertions(+), 21 deletions(-)

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 688c15d..6c2fc85 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -100,6 +100,7 @@ struct dmirror {
 struct dmirror_chunk {
 	struct dev_pagemap	pagemap;
 	struct dmirror_device	*mdevice;
+	bool remove;
 };

 /*
@@ -192,11 +193,15 @@ static int dmirror_fops_release(struct inode *inode, struct file *filp)
 	return 0;
 }

+static struct dmirror_chunk *dmirror_page_to_chunk(struct page *page)
+{
+	return container_of(page->pgmap, struct dmirror_chunk, pagemap);
+}
+
 static struct dmirror_device *dmirror_page_to_device(struct page *page)
 {
-	return container_of(page->pgmap, struct dmirror_chunk,
-			    pagemap)->mdevice;
+	return dmirror_page_to_chunk(page)->mdevice;
 }

 static int dmirror_do_fault(struct dmirror *dmirror, struct hmm_range *range)
@@ -1218,6 +1223,85 @@ static int dmirror_snapshot(struct dmirror *dmirror,
 	return ret;
 }

+static void dmirror_device_evict_chunk(struct dmirror_chunk *chunk)
+{
+	unsigned long start_pfn = chunk->pagemap.range.start >> PAGE_SHIFT;
+	unsigned long end_pfn = chunk->pagemap.range.end >> PAGE_SHIFT;
+	unsigned long npages = end_pfn - start_pfn + 1;
+	unsigned long i;
+	unsigned long *src_pfns;
+	unsigned long *dst_pfns;
+
+	src_pfns = kcalloc(npages, sizeof(*src_pfns), GFP_KERNEL);
+	dst_pfns = kcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL);
+
+	migrate_device_range(src_pfns, start_pfn, npages);
+	for (i = 0; i < npages; i++) {
+		struct page *dpage, *spage;
+
+		spage = migrate_pfn_to_page(src_pfns[i]);
+		if (!spage || !(src_pfns[i] & MIGRATE_PFN_MIGRATE))
+			continue;
+
+		if (WARN_ON(!is_device_private_page(spage) &&
+			    !is_device_coherent_page(spage)))
+			continue;
+		spage = BACKING_PAGE(spage);
+		dpage = alloc_page(GFP_HIGHUSER_MOVABLE | __GFP_NOFAIL);
+		lock_page(dpage);
+		copy_highpage(dpage, spage);
+		dst_pfns[i] = migrate_pfn(page_to_pfn(dpage));
+		if (src_pfns[i] & MIGRATE_PFN_WRITE)
+			dst_pfns[i] |= MIGRATE_PFN_WRITE;
+	}
+	migrate_device_pages(src_pfns, dst_pfns, npages);
+	migrate_device_finalize(src_pfns, dst_pfns, npages);
+	kfree(src_pfns);
+	kfree(dst_pfns);
+}
+
+/* Removes free pages from the free list so they can't be re-allocated */
+static void dmirror_remove_free_pages(struct dmirror_chunk *devmem)
+{
+	struct dmirror_device *mdevice = devmem->mdevice;
+	struct page *page;
+
+	for (page = mdevice->free_pages; page; page = page->zone_device_data)
+		if (dmirror_page_to_chunk(page) == devmem)
+			mdevice->free_pages = page->zone_device_data;
+}
+
+static void dmirror_device_remove_chunks(struct dmirror_device *mdevice)
+{
+	unsigned int i;
+
+	mutex_lock(&mdevice->devmem_lock);
+	if (mdevice->devmem_chunks) {
+		for (i = 0; i < mdevice->devmem_count; i++) {
+			struct dmirror_chunk *devmem =
+				mdevice->devmem_chunks[i];
+
+			spin_lock(&mdevice->lock);
+			devmem->remove = true;
+			dmirror_remove_free_pages(devmem);
+			spin_unlock(&mdevice->lock);
+
+			dmirror_device_evict_chunk(devmem);
+			memunmap_pages(&devmem->pagemap);
+			if (devmem->pagemap.type == MEMORY_DEVICE_PRIVATE)
+				release_mem_region(devmem->pagemap.range.start,
+						   range_len(&devmem->pagemap.range));
+			kfree(devmem);
+		}
+		mdevice->devmem_count = 0;
+		mdevice->devmem_capacity = 0;
+		mdevice->free_pages = NULL;
+		kfree(mdevice->devmem_chunks);
+		mdevice->devmem_chunks = NULL;
+	}
+	mutex_unlock(&mdevice->devmem_lock);
+}
+
 static long dmirror_fops_unlocked_ioctl(struct file *filp,
 					unsigned int command,
 					unsigned long arg)
@@ -1272,6 +1356,11 @@ static long dmirror_fops_unlocked_ioctl(struct file *filp,
 		ret = dmirror_snapshot(dmirror, &cmd);
 		break;

+	case HMM_DMIRROR_RELEASE:
+		dmirror_device_remove_chunks(dmirror->mdevice);
+		ret = 0;
+		break;
+
 	default:
 		return -EINVAL;
 	}
@@ -1326,9 +1415,13 @@ static void dmirror_devmem_free(struct page *page)
 	mdevice = dmirror_page_to_device(page);

 	spin_lock(&mdevice->lock);
-	mdevice->cfree++;
-	page->zone_device_data = mdevice->free_pages;
-	mdevice->free_pages = page;
+
+	/* Return page to our allocator if not freeing the chunk */
+	if (!dmirror_page_to_chunk(page)->remove) {
+		mdevice->cfree++;
+		page->zone_device_data = mdevice->free_pages;
+		mdevice->free_pages = page;
+	}
 	spin_unlock(&mdevice->lock);
 }

@@ -1401,22 +1494,7 @@ static int dmirror_device_init(struct dmirror_device *mdevice, int id)

 static void dmirror_device_remove(struct dmirror_device *mdevice)
 {
-	unsigned int i;
-
-	if (mdevice->devmem_chunks) {
-		for (i = 0; i < mdevice->devmem_count; i++) {
-			struct dmirror_chunk *devmem =
-				mdevice->devmem_chunks[i];
-
-			memunmap_pages(&devmem->pagemap);
-			if (devmem->pagemap.type == MEMORY_DEVICE_PRIVATE)
-				release_mem_region(devmem->pagemap.range.start,
-						   range_len(&devmem->pagemap.range));
-			kfree(devmem);
-		}
-		kfree(mdevice->devmem_chunks);
-	}
-
+	dmirror_device_remove_chunks(mdevice);
 	cdev_del(&mdevice->cdevice);
 }

diff --git a/lib/test_hmm_uapi.h b/lib/test_hmm_uapi.h
index e31d58c..8c818a2 100644
--- a/lib/test_hmm_uapi.h
+++ b/lib/test_hmm_uapi.h
@@ -36,6 +36,7 @@ struct hmm_dmirror_cmd {
 #define HMM_DMIRROR_SNAPSHOT		_IOWR('H', 0x04, struct hmm_dmirror_cmd)
 #define HMM_DMIRROR_EXCLUSIVE		_IOWR('H', 0x05, struct hmm_dmirror_cmd)
 #define HMM_DMIRROR_CHECK_EXCLUSIVE	_IOWR('H', 0x06, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_RELEASE		_IOWR('H', 0x07, struct hmm_dmirror_cmd)

 /*
  * Values returned in hmm_dmirror_cmd.ptr for HMM_DMIRROR_SNAPSHOT.
diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
index f2c2c97..28232ad 100644
--- a/tools/testing/selftests/vm/hmm-tests.c
+++ b/tools/testing/selftests/vm/hmm-tests.c
@@ -1054,6 +1054,55 @@ TEST_F(hmm, migrate_fault)
 	hmm_buffer_free(buffer);
 }

+TEST_F(hmm, migrate_release)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	int *ptr;
+	int ret;
+
+	npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift;
+	ASSERT_NE(npages, 0);
+	size = npages << self->page_shift;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+
+	buffer->ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
+			   MAP_PRIVATE | MAP_ANONYMOUS, buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	/* Initialize buffer in system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ptr[i] = i;
+
+	/* Migrate memory to device. */
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i);
+
+	/* Release device memory. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_RELEASE, buffer, npages);
+	ASSERT_EQ(ret, 0);
+
+	/* Fault pages back to system memory and check them. */
+	for (i = 0, ptr = buffer->ptr; i < size / (2 * sizeof(*ptr)); ++i)
+		ASSERT_EQ(ptr[i], i);
+
+	hmm_buffer_free(buffer);
+}
+
 /*
  * Migrate anonymous shared memory to device private memory.
  */