From patchwork Fri Apr 12 16:04:25 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Thomas Hellstrom
X-Patchwork-Id: 10898667
From: Thomas Hellstrom
To: "dri-devel@lists.freedesktop.org",
	Linux-graphics-maintainer,
	"linux-kernel@vger.kernel.org"
Subject: [PATCH 5/9] drm/ttm: TTM fault handler helpers
Date: Fri, 12 Apr 2019 16:04:25 +0000
Message-ID: <20190412160338.64994-6-thellstrom@vmware.com>
References: <20190412160338.64994-1-thellstrom@vmware.com>
In-Reply-To: <20190412160338.64994-1-thellstrom@vmware.com>
X-Mailer: git-send-email 2.20.1
Cc: Thomas Hellstrom, "Christian König"

With the vmwgfx dirty tracking, the default TTM fault handler is not
completely sufficient (vmwgfx needs to modify the vma->vm_flags member,
and also needs to restrict the number of prefaults).

We also want to replicate the new ttm_bo_vm_reserve() functionality, so
start turning the TTM vm code into helpers: ttm_bo_vm_fault_reserved()
and ttm_bo_vm_reserve(), and provide a default TTM fault handler for
other drivers to use.
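As an illustration of the intended use (not part of this patch; the real
vmwgfx handler comes later in this series), a driver-private fault
handler built on the two helpers could look roughly like the sketch
below. The my_driver_fault() name and the specific vm_flags tweak are
hypothetical:

static vm_fault_t my_driver_fault(struct vm_fault *vmf)
{
	struct vm_area_struct cvma = *vmf->vma;
	struct ttm_buffer_object *bo = vmf->vma->vm_private_data;
	pgoff_t num_prefault;
	vm_fault_t ret;

	/* Trylock the reservation, or ask the vm system to retry. */
	ret = ttm_bo_vm_reserve(bo, vmf);
	if (ret)
		return ret;

	/* Honor madvise(MADV_RANDOM) by prefaulting a single page only. */
	num_prefault = (vmf->vma->vm_flags & VM_RAND_READ) ? 1 :
		TTM_BO_VM_NUM_PREFAULT;

	/*
	 * Tweak the vma copy, not the real vma: e.g. drop VM_WRITE so
	 * PTEs are inserted read-only and later writes fault into
	 * mkwrite() for dirty tracking (hypothetical tweak).
	 */
	cvma.vm_flags &= ~VM_WRITE;

	ret = ttm_bo_vm_fault_reserved(vmf, &cvma, num_prefault);
	if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
		return ret; /* The helper already dropped the reservation. */

	reservation_object_unlock(bo->resv);
	return ret;
}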
Cc: "Christian König" Signed-off-by: Thomas Hellstrom --- drivers/gpu/drm/ttm/ttm_bo_vm.c | 170 ++++++++++++++++++++------------ include/drm/ttm/ttm_bo_api.h | 10 ++ 2 files changed, 116 insertions(+), 64 deletions(-) diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c index bfb25b81fed7..3bd28fb97124 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c @@ -42,8 +42,6 @@ #include #include -#define TTM_BO_VM_NUM_PREFAULT 16 - static vm_fault_t ttm_bo_vm_fault_idle(struct ttm_buffer_object *bo, struct vm_fault *vmf) { @@ -106,31 +104,30 @@ static unsigned long ttm_bo_io_mem_pfn(struct ttm_buffer_object *bo, + page_offset; } -static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf) +/** + * ttm_bo_vm_reserve - Reserve a buffer object in a retryable vm callback + * @bo: The buffer object + * @vmf: The fault structure handed to the callback + * + * vm callbacks like fault() and *_mkwrite() allow for the mm_sem to be dropped + * during long waits, and after the wait the callback will be restarted. This + * is to allow other threads using the same virtual memory space concurrent + * access to map(), unmap() completely unrelated buffer objects. TTM buffer + * object reservations sometimes wait for GPU and should therefore be + * considered long waits. This function reserves the buffer object interruptibly + * taking this into account. Starvation is avoided by the vm system not + * allowing too many repeated restarts. + * This function is intended to be used in customized fault() and _mkwrite() + * handlers. + * + * Return: + * 0 on success and the bo was reserved. + * VM_FAULT_RETRY if blocking wait. + * VM_FAULT_NOPAGE if blocking wait and retrying was not allowed. + */ +vm_fault_t ttm_bo_vm_reserve(struct ttm_buffer_object *bo, + struct vm_fault *vmf) { - struct vm_area_struct *vma = vmf->vma; - struct ttm_buffer_object *bo = (struct ttm_buffer_object *) - vma->vm_private_data; - struct ttm_bo_device *bdev = bo->bdev; - unsigned long page_offset; - unsigned long page_last; - unsigned long pfn; - struct ttm_tt *ttm = NULL; - struct page *page; - int err; - int i; - vm_fault_t ret = VM_FAULT_NOPAGE; - unsigned long address = vmf->address; - struct ttm_mem_type_manager *man = - &bdev->man[bo->mem.mem_type]; - struct vm_area_struct cvma; - - /* - * Work around locking order reversal in fault / nopfn - * between mmap_sem and bo_reserve: Perform a trylock operation - * for reserve, and if it fails, retry the fault after waiting - * for the buffer to become unreserved. - */ if (unlikely(!reservation_object_trylock(bo->resv))) { if (vmf->flags & FAULT_FLAG_ALLOW_RETRY) { if (!(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) { @@ -151,14 +148,56 @@ static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf) return VM_FAULT_NOPAGE; } + return 0; +} +EXPORT_SYMBOL(ttm_bo_vm_reserve); + +/** + * ttm_bo_vm_fault_reserved - TTM fault helper + * @vmf: The struct vm_fault given as argument to the fault callback + * @cvma: The struct vmw_area_struct affected. Note that this may be a + * copy of the real vma object if the caller needs, for example, VM + * flags to be temporarily altered while determining the page protection. + * @num_prefault: Maximum number of prefault pages. The caller may want to + * specify this based on madvice settings and the size of the GPU object + * backed by the memory. 
+ *
+ * This function inserts one or more page table entries pointing to the
+ * memory backing the buffer object, and then returns a return code
+ * instructing the caller to retry the page access.
+ *
+ * Return:
+ *   VM_FAULT_NOPAGE on success or pending signal
+ *   VM_FAULT_SIGBUS on unspecified error
+ *   VM_FAULT_OOM on out-of-memory
+ *   VM_FAULT_RETRY if retryable wait
+ */
+vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
+				    struct vm_area_struct *cvma,
+				    pgoff_t num_prefault)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct ttm_buffer_object *bo = (struct ttm_buffer_object *)
+	    vma->vm_private_data;
+	struct ttm_bo_device *bdev = bo->bdev;
+	unsigned long page_offset;
+	unsigned long page_last;
+	unsigned long pfn;
+	struct ttm_tt *ttm = NULL;
+	struct page *page;
+	int err;
+	pgoff_t i;
+	vm_fault_t ret = VM_FAULT_NOPAGE;
+	unsigned long address = vmf->address;
+	struct ttm_mem_type_manager *man =
+	    &bdev->man[bo->mem.mem_type];
+
 	/*
 	 * Refuse to fault imported pages. This should be handled
 	 * (if at all) by redirecting mmap to the exporter.
 	 */
-	if (bo->ttm && (bo->ttm->page_flags & TTM_PAGE_FLAG_SG)) {
-		ret = VM_FAULT_SIGBUS;
-		goto out_unlock;
-	}
+	if (bo->ttm && (bo->ttm->page_flags & TTM_PAGE_FLAG_SG))
+		return VM_FAULT_SIGBUS;

 	if (bdev->driver->fault_reserve_notify) {
 		struct dma_fence *moving = dma_fence_get(bo->moving);
@@ -169,11 +208,9 @@ static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
 			break;
 		case -EBUSY:
 		case -ERESTARTSYS:
-			ret = VM_FAULT_NOPAGE;
-			goto out_unlock;
+			return VM_FAULT_NOPAGE;
 		default:
-			ret = VM_FAULT_SIGBUS;
-			goto out_unlock;
+			return VM_FAULT_SIGBUS;
 		}

 		if (bo->moving != moving) {
@@ -189,24 +226,15 @@ static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
 	 * move.
 	 */
 	ret = ttm_bo_vm_fault_idle(bo, vmf);
-	if (unlikely(ret != 0)) {
-		if (ret == VM_FAULT_RETRY &&
-		    !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) {
-			/* The BO has already been unreserved. */
-			return ret;
-		}
-
-		goto out_unlock;
-	}
+	if (unlikely(ret != 0))
+		return ret;

 	err = ttm_mem_io_lock(man, true);
-	if (unlikely(err != 0)) {
-		ret = VM_FAULT_NOPAGE;
-		goto out_unlock;
-	}
+	if (unlikely(err != 0))
+		return VM_FAULT_NOPAGE;
 	err = ttm_mem_io_reserve_vm(bo);
 	if (unlikely(err != 0)) {
 		ret = VM_FAULT_SIGBUS;
 		goto out_io_unlock;
 	}

@@ -220,17 +248,11 @@ static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
 		goto out_io_unlock;
 	}

-	/*
-	 * Make a local vma copy to modify the page_prot member
-	 * and vm_flags if necessary. The vma parameter is protected
-	 * by mmap_sem in write mode.
-	 */
-	cvma = *vma;
-	cvma.vm_page_prot = vm_get_page_prot(cvma.vm_flags);
+	cvma->vm_page_prot = vm_get_page_prot(cvma->vm_flags);

 	if (bo->mem.bus.is_iomem) {
-		cvma.vm_page_prot = ttm_io_prot(bo->mem.placement,
-						cvma.vm_page_prot);
+		cvma->vm_page_prot = ttm_io_prot(bo->mem.placement,
+						 cvma->vm_page_prot);
 	} else {
 		struct ttm_operation_ctx ctx = {
 			.interruptible = false,
@@ -240,8 +262,8 @@ static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
 		};

 		ttm = bo->ttm;
-		cvma.vm_page_prot = ttm_io_prot(bo->mem.placement,
-						cvma.vm_page_prot);
+		cvma->vm_page_prot = ttm_io_prot(bo->mem.placement,
+						 cvma->vm_page_prot);

 		/* Allocate all page at once, most common usage */
 		if (ttm_tt_populate(ttm, &ctx)) {
@@ -254,10 +276,11 @@ static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
 	 * Speculatively prefault a number of pages. Only error on
 	 * first page.
 	 */
-	for (i = 0; i < TTM_BO_VM_NUM_PREFAULT; ++i) {
+	for (i = 0; i < num_prefault; ++i) {
 		if (bo->mem.bus.is_iomem) {
 			/* Iomem should not be marked encrypted */
-			cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);
+			cvma->vm_page_prot =
+				pgprot_decrypted(cvma->vm_page_prot);
 			pfn = ttm_bo_io_mem_pfn(bo, page_offset);
 		} else {
 			page = ttm->pages[page_offset];
@@ -273,10 +296,10 @@ static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
 		}

 		if (vma->vm_flags & VM_MIXEDMAP)
-			ret = vmf_insert_mixed(&cvma, address,
+			ret = vmf_insert_mixed(cvma, address,
 					       __pfn_to_pfn_t(pfn, PFN_DEV));
 		else
-			ret = vmf_insert_pfn(&cvma, address, pfn);
+			ret = vmf_insert_pfn(cvma, address, pfn);

 		/*
 		 * Somebody beat us to this PTE or prefaulting to
@@ -295,7 +318,26 @@ static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
 		ret = VM_FAULT_NOPAGE;
 out_io_unlock:
 	ttm_mem_io_unlock(man);
-out_unlock:
+	return ret;
+}
+EXPORT_SYMBOL(ttm_bo_vm_fault_reserved);
+
+static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct vm_area_struct cvma = *vma;
+	struct ttm_buffer_object *bo = (struct ttm_buffer_object *)
+	    vma->vm_private_data;
+	vm_fault_t ret;
+
+	ret = ttm_bo_vm_reserve(bo, vmf);
+	if (ret)
+		return ret;
+
+	ret = ttm_bo_vm_fault_reserved(vmf, &cvma, TTM_BO_VM_NUM_PREFAULT);
+	if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
+		return ret;
+
 	reservation_object_unlock(bo->resv);
 	return ret;
 }

diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 49d9cdfc58f2..bebfa16426ca 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -768,4 +768,14 @@ int ttm_bo_swapout(struct ttm_bo_global *glob,
 			struct ttm_operation_ctx *ctx);
 void ttm_bo_swapout_all(struct ttm_bo_device *bdev);
 int ttm_bo_wait_unreserved(struct ttm_buffer_object *bo);
+
+/* Default number of pre-faulted pages in the TTM fault handler */
+#define TTM_BO_VM_NUM_PREFAULT 16
+
+vm_fault_t ttm_bo_vm_reserve(struct ttm_buffer_object *bo,
+			     struct vm_fault *vmf);
+
+vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
+				    struct vm_area_struct *cvma,
+				    pgoff_t num_prefault);
 #endif
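For completeness: the ttm_bo_vm_reserve() kerneldoc also names customized
mkwrite() callbacks as intended users, which this patch does not add. A
minimal sketch of such a callback, assuming a hypothetical
my_bo_mark_dirty() helper for the driver-side dirty bookkeeping:

static vm_fault_t my_driver_pfn_mkwrite(struct vm_fault *vmf)
{
	struct ttm_buffer_object *bo = vmf->vma->vm_private_data;
	vm_fault_t ret;

	/* Same trylock-or-retry dance as in the fault() path. */
	ret = ttm_bo_vm_reserve(bo, vmf);
	if (ret)
		return ret;

	my_bo_mark_dirty(bo, vmf->pgoff);	/* hypothetical helper */

	reservation_object_unlock(bo->resv);
	return 0;	/* let the core proceed and make the PTE writable */
}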