Message ID: 1589050310-19666-1-git-send-email-andrey.grodzovsky@amd.com (mailing list archive)
Series: RFC Support hot device unplug in amdgpu
On Sat, 9 May 2020 14:51:44 -0400 Andrey Grodzovsky <andrey.grodzovsky@amd.com> wrote: > This RFC is more of a proof of concept than a fully working > solution, as there are a few unresolved issues we are hoping to get > advice on from people on the mailing list. Until now, extracting a > card either by physical extraction (e.g. an eGPU with a Thunderbolt > connection, or by emulation through sysfs > -> /sys/bus/pci/devices/device_id/remove) would cause random crashes > in user apps. The random crashes in apps were mostly due to the app > still trying to access a device-backed BO it had mapped into its > address space after the backing device was gone. To answer > this first problem, Christian suggested fixing the handling of mapped > memory in the clients when the device goes away by forcibly unmapping all > buffers the user processes hold, i.e. clearing their respective VMAs > mapping the device BOs. Then, when the VMAs try to fill in the page > tables again, we check in the fault handler whether the device has been > removed and, if so, return an error. This will generate a SIGBUS to the > application, which can then cleanly terminate. This was indeed done, > but it in turn created a problem of kernel OOPSes: while the app was > terminating because of the SIGBUS, it would trigger a use-after-free in > the driver by accessing device structures that had already been released > in the PCI remove sequence. This we handled by introducing a 'flush' sequence > during device removal, where we wait for the drm file reference to drop to > 0, meaning all user clients directly using this device have terminated. > With this I was able to cleanly emulate device unplug with X and > glxgears running, and later emulate device plug-back and restart of X > and glxgears. > > But this use case is only partial; as I see it, all the use cases > and the questions they raise are as follows. > > 1) Application accesses a BO by opening a drm file > 1.1) BO is mapped into the application's address space (BO is CPU visible) - this one we have a solution for: invalidating the BO's CPU mapping, causing SIGBUS > and termination, and waiting for the drm file refcount to drop to 0 before releasing the device > 1.2) BO is not mapped into the application's address space (BO is CPU invisible) - no solution yet, because how do we force the application to terminate in this case? > > 2) Application accesses a BO by importing a DMA-BUF > 2.1) BO is mapped into the application's address space (BO is CPU visible) - the solution is the same as 1.1, but instead of waiting for the drm file release we wait for the > imported dma-buf's file release > 2.2) BO is not mapped into the application's address space (BO is CPU invisible) - our solution is to invalidate the GPUVM page tables and destroy the backing storage for > all exported BOs, which will in turn cause VM faults in the importing device; then, when the importing driver tries to re-attach the imported BO to > update mappings, we return -ENODEV in the import hook, which hopefully will cause the user app to terminate. > > 3) Application opens a drm file or imports a dma-buf and holds a reference but never accesses any BO, or does access one but never again after the device was unplugged - how would we > force this application to terminate before proceeding with the device removal code? Otherwise the wait in pci remove just hangs forever. 
> > The attached patches address 1.1, 2.1 and 2.2; for now only 1.1 is fully tested and I am still testing the others, but I would be happy for any advice on all the > described use cases, and maybe some alternative and better (more generic) approach to this, like obtaining the PIDs of the relevant processes through some reverse > mapping from the device file and exported dma-buf files and sending them SIGKILL - would this make more sense, or is there another method? > > Patches 1-3 address 1.1 > Patch 4 addresses 2.1 > Patches 5-6 address 2.2 > > Reference: https://gitlab.freedesktop.org/drm/amd/-/issues/1081 Hi, how did you come up with the goal "make applications terminate"? Is that your end goal, or is it just step 1 of many on the road to supporting device hot-unplug? Why do you also want to terminate applications that don't "need" to terminate? Why hunt them down? I'm referring to your points 1.2, 2.2 and 3. From an end user perspective, I believe making applications terminate is not helpful at all. Your display server still disappears, which means all your apps are forced to quit, and you lose your desktop. I do understand that a graceful termination is better than a hard lockup, but not by much. When I've talked about DRM device hot-unplug with Daniel Vetter, our shared opinion seems to be that the unplug should not outright kill any programs that are prepared to handle errors; that is, functions or ioctls that return a success code can return an error, and then it is up to the application to decide how to handle that. The end goal must not be to terminate all applications that had something to do with the device. At the very least the display server must survive. The rough idea of how that should work is that DRM ioctls start returning errors and all mmaps are replaced with something harmless that does not cause a SIGBUS. Userspace can handle the errors if it wants to, and display servers will react to the device-removed uevent if not earlier. Why deliberately avoid raising SIGBUS? Because it is such a huge pain to handle due to the archaic design of how signals are delivered. Most existing userspace is also not prepared to handle SIGBUS anywhere. The problem with handling SIGBUS at all is that a process can only have a single signal handler per signal, but the process may be composed of multiple components that cannot cooperate on signal catching: Mesa GPU drivers, GUI toolkits, and the application itself may all do things that would require handling SIGBUS if removing a DRM device raised it. For Mesa to cooperate on SIGBUS handling with the other components in the process, we'd need some whole new APIs, an EGL extension and maybe a Vulkan extension too. The process may also have threads, which are really painful with signals. What if you need to handle the SIGBUS differently in different threads? Hence, mmaps should be replaced with something harmless, maybe something that reads back all zeros and ignores writes. The application will learn later that the DRM device is gone. Sending it a SIGBUS on the spot when it accesses an mmap does not help: the memory is gone already - if you didn't have a backup of the contents, you're not going to make one now. My point here is, are you designing things specifically to only terminate processes, or will you leave room in the design to improve the implementation towards a proper handling of DRM device hot-unplug? 
Thanks, pq > > Andrey Grodzovsky (6): > drm/ttm: Add unampping of the entire device address space > drm/amdgpu: Force unmap all user VMAs on device removal. > drm/amdgpu: Wait for all user clients > drm/amdgpu: Wait for all clients importing out dma-bufs. > drm/ttm: Add destroy flag in TTM BO eviction interface > drm/amdgpu: Use TTM MMs destroy interface > > drivers/gpu/drm/amd/amdgpu/amdgpu.h | 3 ++ > drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +- > drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 7 +++- > drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 27 ++++++++++++- > drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 22 ++++++++-- > drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 9 +++++ > drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 4 ++ > drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 17 +++++++- > drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 + > drivers/gpu/drm/nouveau/nouveau_drm.c | 2 +- > drivers/gpu/drm/qxl/qxl_object.c | 4 +- > drivers/gpu/drm/radeon/radeon_object.c | 2 +- > drivers/gpu/drm/ttm/ttm_bo.c | 63 +++++++++++++++++++++-------- > drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 6 +-- > include/drm/ttm/ttm_bo_api.h | 2 +- > include/drm/ttm/ttm_bo_driver.h | 2 + > 16 files changed, 139 insertions(+), 34 deletions(-) >
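Pekka's suggestion that display servers react to the device-removed uevent rather than to a signal can be sketched with libudev; the snippet below only illustrates the mechanism (it is not taken from any particular compositor) and leaves out error handling:

    #include <libudev.h>
    #include <poll.h>
    #include <stdio.h>
    #include <string.h>

    /* Watch the "drm" subsystem and react to "remove" uevents. */
    int main(void)
    {
            struct udev *udev = udev_new();
            struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");
            struct pollfd pfd;

            udev_monitor_filter_add_match_subsystem_devtype(mon, "drm", NULL);
            udev_monitor_enable_receiving(mon);
            pfd.fd = udev_monitor_get_fd(mon);
            pfd.events = POLLIN;

            while (poll(&pfd, 1, -1) > 0) {
                    struct udev_device *dev = udev_monitor_receive_device(mon);
                    const char *action = dev ? udev_device_get_action(dev) : NULL;

                    if (action && !strcmp(action, "remove"))
                            printf("DRM device %s is gone, tear down its outputs\n",
                                   udev_device_get_sysname(dev));
                    if (dev)
                            udev_device_unref(dev);
            }
            return 0;
    }

A compositor would typically plug the monitor fd into its existing event loop instead of a blocking poll() loop.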
On Sat, May 09, 2020 at 02:51:44PM -0400, Andrey Grodzovsky wrote: > This RFC is a more of a proof of concept then a fully working solution as there are a few unresolved issues we are hopping to get advise on from people on the mailing list. > Until now extracting a card either by physical extraction (e.g. eGPU with thunderbold connection or by emulation through syfs -> /sys/bus/pci/devices/device_id/remove) > would cause random crashes in user apps. The random crashes in apps were mostly due to the app having mapped a device backed BO into it's adress space was still > trying to access the BO while the backing device was gone. > To answer this first problem Christian suggested to fix the handling of mapped memory in the clients when the device goes away by forcibly unmap all buffers > the user processes has by clearing their respective VMAs mapping the device BOs. Then when the VMAs try to fill in the page tables again we check in the fault handler > if the device is removed and if so, return an error. This will generate a SIGBUS to the application which can then cleanly terminate. > This indeed was done but this in turn created a problem of kernel OOPs were the OOPSes were due to the fact that while the app was terminating because of the SIGBUS > it would trigger use after free in the driver by calling to accesses device structures that were already released from the pci remove sequence. > This we handled by introducing a 'flush' seqence during device removal were we wait for drm file reference to drop to 0 meaning all user clients directly using this device terminated. > With this I was able to cleanly emulate device unplug with X and glxgears running and later emulate device plug back and restart of X and glxgears. > > But this use case is only partial and as I see it all the use cases are as follwing and the questions it raises. > > 1) Application accesses a BO by opening drm file > 1.1) BO is mapped into applications address space (BO is CPU visible) - this one we have a solution for by invaldating BO's CPU mapping casuing SIGBUS > and termination and waiting for drm file refcound to drop to 0 before releasing the device > 1.2) BO is not mapped into applcation address space (BO is CPU invisible) - no solution yet because how we force the application to terminate in this case ? > > 2) Application accesses a BO by importing a DMA-BUF > 2.1) BO is mapped into applications address space (BO is CPU visible) - solution is same as 1.1 but instead of waiting for drm file release we wait for the > imported dma-buf's file release > 2.2) BO is not mapped into applcation address space (BO is CPU invisible) - our solution is to invalidate GPUVM page tables and destroy backing storage for > all exported BOs which will in turn casue VM faults in the importing device and then when the importing driver will try to re-attach the imported BO to > update mappings we return -ENODEV in the import hook which hopeffuly will cause the user app to terminate. > > 3) Applcation opens a drm file or imports a dma-bud and holds a reference but never access any BO or does access but never more after device was unplug - how would we > force this applcation to termiante before proceeding with device removal code ? Otherwise the wait in pci remove just hangs for ever. 
> > The attached patches adress 1.1, 2.1 and 2.2, for now only 1.1 fully tested and I am still testing the others but I will be happy for any advise on all the > described use cases and maybe some alternative and better (more generic) approach to this like maybe obtaining PIDs of relevant processes through some revere > mapping from device file and exported dma-buf files and send them SIGKILL - would this make more sense or any other method ? > > Patches 1-3 address 1.1 > Patch 4 addresses 2.1 > Pathces 5-6 address 2.2 > > Reference: https://gitlab.freedesktop.org/drm/amd/-/issues/1081 So we've been working on this problem for a few years already (but it's still not solved), I think you could have saved yourselves some typing. Bunch of things: - we can't wait for userspace in the hotunplug handlers, that might never happen. The correct way is to untangle the lifetime of your hw driver for a specific struct pci_device from the drm_device lifetime. Infrastructure is all there now, see drm_dev_get/put, drm_dev_unplug and drm_dev_enter/exit. A bunch of usb/spi drivers use this 100% correctly now, so there are examples. Plus kerneldoc explains stuff. - for a big driver like amdgpu doing this split up is going to be horrendously complex. I know, we've done it for i915, at least partially. I strongly recommend using devm_ for managing hw-related resources (iomap, irq, ...) as much as possible. For drm_device resources (mostly structures and everything related to that) we've just merged the drmm_ managed resources framework. There's some more work to be done there for various kms objects, but you can at least somewhat avoid tedious handrolling for everything internal already. Don't ever use devm_kzalloc and friends, I've looked at hundreds of uses of this in drm, they're all wrong. - dma-buf is hilarious (and atm unfixed), dma-fence is even worse. In theory they're already refcounted and all and so should work, in practice I think we need to refcount the underlying drm_device with drm_dev_get/put to avoid the worst fall-out. - One unfortunate thing with drm_dev_unplug is that the driver core is very opinionated and doesn't tell you whether it's a hotunplug or a driver unload. In the former case trying to shut down hw just wastes time (and might hit driver bugs), in the latter case driver engineers very much expect everything to be shut down. Right now you can only have one or the other, so this needs a module option hack or similar (default to the correct hotunplug behaviour for users). - SIGBUS is better than crashing the kernel, but it's not even close for users. They still lose everything, because everything crashes - in my experience, in practice, no one ever handles errors. There are a few more things on top: - sighandlers are global, which means only the app can use them. You can't use them in e.g. mesa. They're also not composable, so if you have one sighandler for gpu1 and a 2nd one for gpu2 (could be a different vendor) it's all sadness. Hence "userspace will handle SIGBUS" won't work. 
On i915 the model (also for terminally wedged gpu hangs) is that all ioctl keep working, mmaps keep working, and execbuf gives you an -EIO (which mesa eats silently iirc for arb_robustness). Conclusion is that SIGBUS is imo a no-go, and the only option we have is that a) mmaps fully keep working, doable for shmem or b) we put some fake memory in there (for vram or whatever), maybe even only a single page for all fake memory. - you probably want arb_robustness and similar stuff in userspace as a first step. tldr; - refcounting, not waiting for userspace - nothing can fail because userspace wont handle it That's at least my take on this mess, and what we've been pushing for over the past few years. For kms-only drm_driver we should have achieved that by now (plus/minus maybe some issues for dma-buf/fences, but kms-only dma-buf/fences are simple enough that maybe we don't go boom yet). For big gpus with rendering I think best next step would be to type up a reasonable Gran Plan (into Documentation/gpu/todo.rst) with all the issues and likely solutions. And then bikeshed that, since the above is just my take on all this. Cheers, Daniel > > Andrey Grodzovsky (6): > drm/ttm: Add unampping of the entire device address space > drm/amdgpu: Force unmap all user VMAs on device removal. > drm/amdgpu: Wait for all user clients > drm/amdgpu: Wait for all clients importing out dma-bufs. > drm/ttm: Add destroy flag in TTM BO eviction interface > drm/amdgpu: Use TTM MMs destroy interface > > drivers/gpu/drm/amd/amdgpu/amdgpu.h | 3 ++ > drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +- > drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 7 +++- > drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 27 ++++++++++++- > drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 22 ++++++++-- > drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 9 +++++ > drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 4 ++ > drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 17 +++++++- > drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 + > drivers/gpu/drm/nouveau/nouveau_drm.c | 2 +- > drivers/gpu/drm/qxl/qxl_object.c | 4 +- > drivers/gpu/drm/radeon/radeon_object.c | 2 +- > drivers/gpu/drm/ttm/ttm_bo.c | 63 +++++++++++++++++++++-------- > drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 6 +-- > include/drm/ttm/ttm_bo_api.h | 2 +- > include/drm/ttm/ttm_bo_driver.h | 2 + > 16 files changed, 139 insertions(+), 34 deletions(-) > > -- > 2.7.4 >
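A minimal sketch of the drm_dev_enter()/drm_dev_exit() pattern Daniel points at, with hypothetical foo_* names rather than actual amdgpu code: every path that touches the hardware gets bracketed, and once the PCI remove callback has called drm_dev_unplug() the bracket fails and the path bails out with -ENODEV instead of poking dead hardware.

    #include <drm/drm_drv.h>
    #include <drm/drm_file.h>

    /* Hypothetical ioctl handler; foo_program_registers() stands in for
     * whatever hardware access the real driver would do. */
    static int foo_ioctl_touch_hw(struct drm_device *drm, void *data,
                                  struct drm_file *file)
    {
            int idx, ret;

            if (!drm_dev_enter(drm, &idx))
                    return -ENODEV;         /* device already unplugged */

            ret = foo_program_registers(drm, data);

            drm_dev_exit(idx);
            return ret;
    }

drm_dev_enter()/drm_dev_exit() are SRCU based, so the fast path stays cheap; drm_dev_unplug() flips the unplugged state and synchronizes against all sections currently inside the bracket.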
Quoting Daniel Vetter (2020-05-11 10:54:33) > - worse, neither vk nor gl (to my knowledge) have a concept of events > for when the gpu died. The only stuff you have is things like > arb_robustness which says a) everything continues as if nothing > happened b) there's a function where you can ask whether your gl > context and all the textures/buffers are toast. Vulkan/DX12 arrived after eGPU, and there is at least the concept of VK_ERROR_DEVICE_LOST. Mainly used at the moment after a GPU hang and loss of context. https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.html#devsandqueues-lost-device -Chris
On Mon, May 11, 2020 at 11:19:30AM +0100, Chris Wilson wrote: > Quoting Daniel Vetter (2020-05-11 10:54:33) > > - worse, neither vk nor gl (to my knowledge) have a concept of events > > for when the gpu died. The only stuff you have is things like > > arb_robustness which says a) everything continues as if nothing > > happened b) there's a function where you can ask whether your gl > > context and all the textures/buffers are toast. > > Vulkan/DX12 arrived after eGPU, and there is at least the concept of > VK_ERROR_DEVICE_LOST. Mainly used at the moment after a GPU hang and > loss of context. > > https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.html#devsandqueues-lost-device Ah cool, so -EIO on some ioctls - silencing it in the gl driver and passing it on for the vk driver - should be ok. Assuming vk frameworks bother to implement the *may* thing. I'm assuming that if the validation midlayer doesn't inject this, it's untested and fireworks will ensue. But then a more direct path to fireworks is what vk is all about :-) -Daniel
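From the application's side, the "passing it on for the vk driver" part is just a VkResult to check; a sketch of the usual pattern, where queue, submit_info, fence, device and recreate_device_and_resources() are assumed to already exist in the application:

    VkResult res = vkQueueSubmit(queue, 1, &submit_info, fence);
    if (res == VK_ERROR_DEVICE_LOST) {
            /* The VkDevice and everything created from it is toast:
             * recreate the device (possibly on another GPU) or quit cleanly. */
            recreate_device_and_resources();
    }

    /* Waits can report the same condition. */
    res = vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
    if (res == VK_ERROR_DEVICE_LOST)
            recreate_device_and_resources();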
On Mon, May 11, 2020 at 11:54:33AM +0200, Daniel Vetter wrote: > On Sat, May 09, 2020 at 02:51:44PM -0400, Andrey Grodzovsky wrote: > > This RFC is a more of a proof of concept then a fully working solution as there are a few unresolved issues we are hopping to get advise on from people on the mailing list. > > Until now extracting a card either by physical extraction (e.g. eGPU with thunderbold connection or by emulation through syfs -> /sys/bus/pci/devices/device_id/remove) > > would cause random crashes in user apps. The random crashes in apps were mostly due to the app having mapped a device backed BO into it's adress space was still > > trying to access the BO while the backing device was gone. > > To answer this first problem Christian suggested to fix the handling of mapped memory in the clients when the device goes away by forcibly unmap all buffers > > the user processes has by clearing their respective VMAs mapping the device BOs. Then when the VMAs try to fill in the page tables again we check in the fault handler > > if the device is removed and if so, return an error. This will generate a SIGBUS to the application which can then cleanly terminate. > > This indeed was done but this in turn created a problem of kernel OOPs were the OOPSes were due to the fact that while the app was terminating because of the SIGBUS > > it would trigger use after free in the driver by calling to accesses device structures that were already released from the pci remove sequence. > > This we handled by introducing a 'flush' seqence during device removal were we wait for drm file reference to drop to 0 meaning all user clients directly using this device terminated. > > With this I was able to cleanly emulate device unplug with X and glxgears running and later emulate device plug back and restart of X and glxgears. > > > > But this use case is only partial and as I see it all the use cases are as follwing and the questions it raises. > > > > 1) Application accesses a BO by opening drm file > > 1.1) BO is mapped into applications address space (BO is CPU visible) - this one we have a solution for by invaldating BO's CPU mapping casuing SIGBUS > > and termination and waiting for drm file refcound to drop to 0 before releasing the device > > 1.2) BO is not mapped into applcation address space (BO is CPU invisible) - no solution yet because how we force the application to terminate in this case ? > > > > 2) Application accesses a BO by importing a DMA-BUF > > 2.1) BO is mapped into applications address space (BO is CPU visible) - solution is same as 1.1 but instead of waiting for drm file release we wait for the > > imported dma-buf's file release > > 2.2) BO is not mapped into applcation address space (BO is CPU invisible) - our solution is to invalidate GPUVM page tables and destroy backing storage for > > all exported BOs which will in turn casue VM faults in the importing device and then when the importing driver will try to re-attach the imported BO to > > update mappings we return -ENODEV in the import hook which hopeffuly will cause the user app to terminate. > > > > 3) Applcation opens a drm file or imports a dma-bud and holds a reference but never access any BO or does access but never more after device was unplug - how would we > > force this applcation to termiante before proceeding with device removal code ? Otherwise the wait in pci remove just hangs for ever. 
> > > > The attached patches adress 1.1, 2.1 and 2.2, for now only 1.1 fully tested and I am still testing the others but I will be happy for any advise on all the > > described use cases and maybe some alternative and better (more generic) approach to this like maybe obtaining PIDs of relevant processes through some revere > > mapping from device file and exported dma-buf files and send them SIGKILL - would this make more sense or any other method ? > > > > Patches 1-3 address 1.1 > > Patch 4 addresses 2.1 > > Pathces 5-6 address 2.2 > > > > Reference: https://gitlab.freedesktop.org/drm/amd/-/issues/1081 > > So we've been working on this problem for a few years already (but it's > still not solved), I think you could have saved yourselfs some typing. > > Bunch of things: > - we can't wait for userspace in the hotunplug handlers, that might never > happen. The correct way is to untangle the lifetime of your hw driver > for a specific struct pci_device from the drm_device lifetime. > Infrastructure is all there now, see drm_dev_get/put, drm_dev_unplug and > drm_dev_enter/exit. A bunch of usb/spi drivers use this 100% correctly > now, so there's examples. Plus kerneldoc explains stuff. > > - for a big driver like amdgpu doing this split up is going to be > horrendously complex. I know, we've done it for i915, at least > partially. I strongly recommend that you're using devm_ for managing hw > related resources (iomap, irq, ...) as much as possible. > > For drm_device resources (mostly structures and everything related to > that) we've just merged the drmm_ managed resources framework. There's > some more work to be done there for various kms objects, but you can at > least somewhat avoid tedious handrolling for everything internal > already. > > Don't ever use devm_kzalloc and friends, I've looked at hundreds of uses > of this in drm, they're all wrong. > > - dma-buf is hilarious (and atm unfixed), dma-fence is even worse. In > theory they're already refcounted and all and so should work, in > practice I think we need to refcount the underlying drm_device with > drm_dev_get/put to avoid the worst fall-out. oh I forgot one, since it's new since the last time we've seriously discussed this: p2p dma-buf But that /should/ be handleable with the move_notify callback. Assuming we don't have any bugs anywhere, and the importer can indeed get rid of all its mapping, always. But for completeness probably need this one, just to keep it noted. -Daniel > > - One unfortunate thing with drm_dev_unplug is that the driver core is > very opinionated and doesn't tell you whether it's a hotunplug or a > driver unload. In the former case trying to shut down hw just wastes > time (and might hit driver bugs), in the latter case driver engineers > very much expect everything to be shut down. > > Right now you can only have one or the other, so this needs a module > option hack or similar (default to the correct hotunplug behaviour for > users). > > - SIGBUS is better than crashing the kernel, but it's not even close for > users. They still lose everything because everything crashes because in > my experience, in practice, no one ever handles errors. There's a few > more things on top: > > - sighandlers are global, which means only the app can use it. You can't > use it in e.g. mesa. They're also not composable, so if you have on > sighandler for gpu1 and a 2nd one for gpu2 (could be different vendor) > it's all sadness. Hence "usersapce will handle SIGBUS" wont work. 
> > - worse, neither vk nor gl (to my knowledge) have a concept of events > for when the gpu died. The only stuff you have is things like > arb_robustness which says a) everything continues as if nothing > happened b) there's a function where you can ask whether your gl > context and all the textures/buffers are toast. > > I think that's about the only hotunplug application model we can > realistically expect applications to support. That means _all_ errors > need to be silently eaten by either mesa or the kernel. On i915 the > model (also for terminally wedged gpu hangs) is that all ioctl keep > working, mmaps keep working, and execbuf gives you an -EIO (which mesa > eats silently iirc for arb_robustness). > > Conclusion is that SIGBUS is imo a no-go, and the only option we have is > that a) mmaps fully keep working, doable for shmem or b) we put some > fake memory in there (for vram or whatever), maybe even only a single > page for all fake memory. > > - you probably want arb_robustness and similar stuff in userspace as a > first step. > > tldr; > - refcounting, not waiting for userspace > - nothing can fail because userspace wont handle it > > That's at least my take on this mess, and what we've been pushing for over > the past few years. For kms-only drm_driver we should have achieved that > by now (plus/minus maybe some issues for dma-buf/fences, but kms-only > dma-buf/fences are simple enough that maybe we don't go boom yet). > > For big gpus with rendering I think best next step would be to type up a > reasonable Gran Plan (into Documentation/gpu/todo.rst) with all the issues > and likely solutions. And then bikeshed that, since the above is just my > take on all this. > > Cheers, Daniel > > > > > Andrey Grodzovsky (6): > > drm/ttm: Add unampping of the entire device address space > > drm/amdgpu: Force unmap all user VMAs on device removal. > > drm/amdgpu: Wait for all user clients > > drm/amdgpu: Wait for all clients importing out dma-bufs. > > drm/ttm: Add destroy flag in TTM BO eviction interface > > drm/amdgpu: Use TTM MMs destroy interface > > > > drivers/gpu/drm/amd/amdgpu/amdgpu.h | 3 ++ > > drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +- > > drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 7 +++- > > drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 27 ++++++++++++- > > drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 22 ++++++++-- > > drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 9 +++++ > > drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 4 ++ > > drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 17 +++++++- > > drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 + > > drivers/gpu/drm/nouveau/nouveau_drm.c | 2 +- > > drivers/gpu/drm/qxl/qxl_object.c | 4 +- > > drivers/gpu/drm/radeon/radeon_object.c | 2 +- > > drivers/gpu/drm/ttm/ttm_bo.c | 63 +++++++++++++++++++++-------- > > drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 6 +-- > > include/drm/ttm/ttm_bo_api.h | 2 +- > > include/drm/ttm/ttm_bo_driver.h | 2 + > > 16 files changed, 139 insertions(+), 34 deletions(-) > > > > -- > > 2.7.4 > > > > -- > Daniel Vetter > Software Engineer, Intel Corporation > http://blog.ffwll.ch
On Mon, May 11, 2020 at 11:54:33AM +0200, Daniel Vetter wrote: > - One unfortunate thing with drm_dev_unplug is that the driver core is > very opinionated and doesn't tell you whether it's a hotunplug or a > driver unload. In the former case trying to shut down hw just wastes > time (and might hit driver bugs), in the latter case driver engineers > very much expect everything to be shut down. You can get that information at the PCI bus level with pci_dev_is_disconnected(). The flag queried by this function is set upon hot removal. Be aware however that the device is guaranteed to be unreachable if the function returns true, but the converse is NOT guaranteed, i.e. the function may return false even though the device has just gone away. Those somewhat difficult semantics are one of the reasons why some people are skeptical of the function's merits (notably Greg KH). See this LWN article for more information: https://lwn.net/Articles/767885/ (scroll down to the "Surprise removal" section) I've suggested to Greg a few years back that we should have a flag at the device level to indicate whether it's gone, not just at the bus level. That way the property could be expressed regardless of the bus used. It would facilitate the feature you're missing, that the driver core tells you whether it's a surprise removal or not. Unfortunately Greg rejected the idea. Thanks, Lukas
On Mon, May 11, 2020 at 1:43 PM Lukas Wunner <lukas@wunner.de> wrote: > > On Mon, May 11, 2020 at 11:54:33AM +0200, Daniel Vetter wrote: > > - One unfortunate thing with drm_dev_unplug is that the driver core is > > very opinionated and doesn't tell you whether it's a hotunplug or a > > driver unload. In the former case trying to shut down hw just wastes > > time (and might hit driver bugs), in the latter case driver engineers > > very much expect everything to be shut down. > > You can get that information at the PCI bus level with > pci_dev_is_disconnected(). The flag queried by this function is set > upon hot removal. Be aware however that the device is guaranteed to > be unreachable if the function returns true, but the converse is NOT > guaranteed, i.e. the function may return false even though the device > has just gone away. > > Those somewhat difficult semantics are one of the reasons why some > people are skeptical of the function's merits (notably Greg KH). > See this LWN article for more information: > > https://lwn.net/Articles/767885/ > (scroll down to the "Surprise removal" section) > > I've suggested to Greg a few years back that we should have a flag > at the device level to indicate whether it's gone, not just at the bus > level. That way the property could be expressed regardless of the bus > used. It would facilitate the feature you're missing, that the driver > core tells you whether it's a surprise removal or not. Unfortunately > Greg rejected the idea. Ok, so at least for pci devices you could do something like if (pci_dev_is_disconnected()) drm_dev_unplug(); else drm_dev_unregister(); in the ->remove callback, and both users and developers should be happy. I guess for other drivers like usb/spi just yanking the cable during driver hacking is good enough - loss of power should also reset the device :-) -Daniel
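Spelled out a bit more, that ->remove sketch could look like the following (hypothetical foo_* names again). Note that pci_dev_is_disconnected() currently lives in the PCI core's private header, so a driver would either need it exported or an equivalent check such as pci_device_is_present(), which reads the vendor ID and sees all-ones once the device is gone:

    static void foo_pci_remove(struct pci_dev *pdev)
    {
            struct drm_device *drm = pci_get_drvdata(pdev);

            if (!pci_device_is_present(pdev)) {
                    /* Surprise removal: the hw is gone, don't waste time
                     * (or hit driver bugs) trying to shut it down. */
                    drm_dev_unplug(drm);
            } else {
                    /* Normal driver unload: shut the hardware down properly. */
                    foo_hw_fini(drm);
                    drm_dev_unregister(drm);
            }
            drm_dev_put(drm);
    }

As Lukas notes, the negative answer is not authoritative: the device can disappear right after the check, so the drm_dev_enter()/exit() protection in the I/O paths is still needed either way.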
Am 11.05.20 um 11:26 schrieb Pekka Paalanen: > On Sat, 9 May 2020 14:51:44 -0400 > Andrey Grodzovsky <andrey.grodzovsky@amd.com> wrote: > >> This RFC is a more of a proof of concept then a fully working >> solution as there are a few unresolved issues we are hopping to get >> advise on from people on the mailing list. Until now extracting a >> card either by physical extraction (e.g. eGPU with thunderbold >> connection or by emulation through syfs >> -> /sys/bus/pci/devices/device_id/remove) would cause random crashes >> in user apps. The random crashes in apps were mostly due to the app >> having mapped a device backed BO into it's adress space was still >> trying to access the BO while the backing device was gone. To answer >> this first problem Christian suggested to fix the handling of mapped >> memory in the clients when the device goes away by forcibly unmap all >> buffers the user processes has by clearing their respective VMAs >> mapping the device BOs. Then when the VMAs try to fill in the page >> tables again we check in the fault handler if the device is removed >> and if so, return an error. This will generate a SIGBUS to the >> application which can then cleanly terminate. This indeed was done >> but this in turn created a problem of kernel OOPs were the OOPSes >> were due to the fact that while the app was terminating because of >> the SIGBUS it would trigger use after free in the driver by calling >> to accesses device structures that were already released from the pci >> remove sequence. This we handled by introducing a 'flush' seqence >> during device removal were we wait for drm file reference to drop to >> 0 meaning all user clients directly using this device terminated. >> With this I was able to cleanly emulate device unplug with X and >> glxgears running and later emulate device plug back and restart of X >> and glxgears. >> >> But this use case is only partial and as I see it all the use cases >> are as follwing and the questions it raises. >> >> 1) Application accesses a BO by opening drm file >> 1.1) BO is mapped into applications address space (BO is CPU visible) - this one we have a solution for by invaldating BO's CPU mapping casuing SIGBUS >> and termination and waiting for drm file refcound to drop to 0 before releasing the device >> 1.2) BO is not mapped into applcation address space (BO is CPU invisible) - no solution yet because how we force the application to terminate in this case ? >> >> 2) Application accesses a BO by importing a DMA-BUF >> 2.1) BO is mapped into applications address space (BO is CPU visible) - solution is same as 1.1 but instead of waiting for drm file release we wait for the >> imported dma-buf's file release >> 2.2) BO is not mapped into applcation address space (BO is CPU invisible) - our solution is to invalidate GPUVM page tables and destroy backing storage for >> all exported BOs which will in turn casue VM faults in the importing device and then when the importing driver will try to re-attach the imported BO to >> update mappings we return -ENODEV in the import hook which hopeffuly will cause the user app to terminate. >> >> 3) Applcation opens a drm file or imports a dma-bud and holds a reference but never access any BO or does access but never more after device was unplug - how would we >> force this applcation to termiante before proceeding with device removal code ? Otherwise the wait in pci remove just hangs for ever. 
>> >> The attached patches adress 1.1, 2.1 and 2.2, for now only 1.1 fully tested and I am still testing the others but I will be happy for any advise on all the >> described use cases and maybe some alternative and better (more generic) approach to this like maybe obtaining PIDs of relevant processes through some revere >> mapping from device file and exported dma-buf files and send them SIGKILL - would this make more sense or any other method ? >> >> Patches 1-3 address 1.1 >> Patch 4 addresses 2.1 >> Pathces 5-6 address 2.2 >> >> Reference: https://gitlab.freedesktop.org/drm/amd/-/issues/1081 > Hi, > > how did you come up with the goal "make applications terminate"? Is > that your end goal, or is it just step 1 of many on the road of > supporting device hot-unplug? > > Why do you want to terminate also applications that don't "need" to > terminate? Why hunt them down? I'm referring to your points 1.2, 2.2 > and 3. Yeah, that is a known limitation. For now the whole idea is to terminate the programs using the device as soon as possible. > From an end user perspective, I believe making applications terminate > is not helpful at all. Your display server still disappears, which > means all your apps are forced to quit, and you lose your desktop. I do > understand that a graceful termination is better than a hard lockup, > but not much. This is not for a desktop use case at all. Regards, Christian. > > When I've talked about DRM device hot-unplug with Daniel Vetter, our > shared opinion seems to be that the unplug should not outright kill any > programs that are prepared to handle errors, that is, functions or > ioctls that return a success code can return an error, and then it is > up for the application to decide how to handle that. The end goal must > not be to terminate all applications that had something to do with the > device. At the very least the display server must survive. > > The rough idea on how that should work is that DRM ioctls start > returning errors and all mmaps are replaced with something harmless > that does not cause a SIGBUS. Userspace can handle the errors if it > wants to, and display servers will react to the device removed uevent > if not earlier. > > Why deliberately avoid raising SIGBUS? Because it is such a huge pain > to handle due to the archaic design of how signals are delivered. Most > of existing userspace is also not prepared to handle SIGBUS anywhere. > > The problem of handling SIGBUS at all is that a process can only have a > single signal handler per signal, but the process may comprise of > multiple components that cannot cooperate on signal catching: Mesa GPU > drivers, GUI toolkits, and the application itself may all do some > things that would require handling SIGBUS if removing a DRM device > raised it. For Mesa to cooperate with SIGBUS handling with the other > components in the process, we'd need some whole new APIs, an EGL > extension and maybe Vulkan extension too. The process may also have > threads, which are really painful with signals. What if you need to > handle the SIGBUS differently in different threads? > > Hence, mmaps should be replaced with something harmless, maybe > something that reads back all zeros and ignores writes. The application > will learn later that the DRM device is gone. Sending it a SIGBUS on > the spot when it accesses an mmap does not help: the memory is gone > already - if you didn't have a backup of the contents, you're not going > to make one now. 
> > My point here is, are you designing things to specifically only > terminate processes, or will you leave room in the design to improve the > implementation towards a proper handling of DRM device hot-unplug? > > > Thanks, > pq > > >> Andrey Grodzovsky (6): >> drm/ttm: Add unampping of the entire device address space >> drm/amdgpu: Force unmap all user VMAs on device removal. >> drm/amdgpu: Wait for all user clients >> drm/amdgpu: Wait for all clients importing out dma-bufs. >> drm/ttm: Add destroy flag in TTM BO eviction interface >> drm/amdgpu: Use TTM MMs destroy interface >> >> drivers/gpu/drm/amd/amdgpu/amdgpu.h | 3 ++ >> drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +- >> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 7 +++- >> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 27 ++++++++++++- >> drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 22 ++++++++-- >> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 9 +++++ >> drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 4 ++ >> drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 17 +++++++- >> drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 + >> drivers/gpu/drm/nouveau/nouveau_drm.c | 2 +- >> drivers/gpu/drm/qxl/qxl_object.c | 4 +- >> drivers/gpu/drm/radeon/radeon_object.c | 2 +- >> drivers/gpu/drm/ttm/ttm_bo.c | 63 +++++++++++++++++++++-------- >> drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 6 +-- >> include/drm/ttm/ttm_bo_api.h | 2 +- >> include/drm/ttm/ttm_bo_driver.h | 2 + >> 16 files changed, 139 insertions(+), 34 deletions(-) >>
Am 11.05.20 um 13:19 schrieb Daniel Vetter: > On Mon, May 11, 2020 at 11:54:33AM +0200, Daniel Vetter wrote: >> On Sat, May 09, 2020 at 02:51:44PM -0400, Andrey Grodzovsky wrote: >>> This RFC is a more of a proof of concept then a fully working solution as there are a few unresolved issues we are hopping to get advise on from people on the mailing list. >>> Until now extracting a card either by physical extraction (e.g. eGPU with thunderbold connection or by emulation through syfs -> /sys/bus/pci/devices/device_id/remove) >>> would cause random crashes in user apps. The random crashes in apps were mostly due to the app having mapped a device backed BO into it's adress space was still >>> trying to access the BO while the backing device was gone. >>> To answer this first problem Christian suggested to fix the handling of mapped memory in the clients when the device goes away by forcibly unmap all buffers >>> the user processes has by clearing their respective VMAs mapping the device BOs. Then when the VMAs try to fill in the page tables again we check in the fault handler >>> if the device is removed and if so, return an error. This will generate a SIGBUS to the application which can then cleanly terminate. >>> This indeed was done but this in turn created a problem of kernel OOPs were the OOPSes were due to the fact that while the app was terminating because of the SIGBUS >>> it would trigger use after free in the driver by calling to accesses device structures that were already released from the pci remove sequence. >>> This we handled by introducing a 'flush' seqence during device removal were we wait for drm file reference to drop to 0 meaning all user clients directly using this device terminated. >>> With this I was able to cleanly emulate device unplug with X and glxgears running and later emulate device plug back and restart of X and glxgears. >>> >>> But this use case is only partial and as I see it all the use cases are as follwing and the questions it raises. >>> >>> 1) Application accesses a BO by opening drm file >>> 1.1) BO is mapped into applications address space (BO is CPU visible) - this one we have a solution for by invaldating BO's CPU mapping casuing SIGBUS >>> and termination and waiting for drm file refcound to drop to 0 before releasing the device >>> 1.2) BO is not mapped into applcation address space (BO is CPU invisible) - no solution yet because how we force the application to terminate in this case ? >>> >>> 2) Application accesses a BO by importing a DMA-BUF >>> 2.1) BO is mapped into applications address space (BO is CPU visible) - solution is same as 1.1 but instead of waiting for drm file release we wait for the >>> imported dma-buf's file release >>> 2.2) BO is not mapped into applcation address space (BO is CPU invisible) - our solution is to invalidate GPUVM page tables and destroy backing storage for >>> all exported BOs which will in turn casue VM faults in the importing device and then when the importing driver will try to re-attach the imported BO to >>> update mappings we return -ENODEV in the import hook which hopeffuly will cause the user app to terminate. >>> >>> 3) Applcation opens a drm file or imports a dma-bud and holds a reference but never access any BO or does access but never more after device was unplug - how would we >>> force this applcation to termiante before proceeding with device removal code ? Otherwise the wait in pci remove just hangs for ever. 
>>> >>> The attached patches adress 1.1, 2.1 and 2.2, for now only 1.1 fully tested and I am still testing the others but I will be happy for any advise on all the >>> described use cases and maybe some alternative and better (more generic) approach to this like maybe obtaining PIDs of relevant processes through some revere >>> mapping from device file and exported dma-buf files and send them SIGKILL - would this make more sense or any other method ? >>> >>> Patches 1-3 address 1.1 >>> Patch 4 addresses 2.1 >>> Pathces 5-6 address 2.2 >>> >>> Reference: https://gitlab.freedesktop.org/drm/amd/-/issues/1081 >> So we've been working on this problem for a few years already (but it's >> still not solved), I think you could have saved yourselfs some typing. >> >> Bunch of things: >> - we can't wait for userspace in the hotunplug handlers, that might never >> happen. The correct way is to untangle the lifetime of your hw driver >> for a specific struct pci_device from the drm_device lifetime. >> Infrastructure is all there now, see drm_dev_get/put, drm_dev_unplug and >> drm_dev_enter/exit. A bunch of usb/spi drivers use this 100% correctly >> now, so there's examples. Plus kerneldoc explains stuff. That's exactly what we tried first and I expected that this is a necessity. Ok so back to the drawing board for this. >> >> - for a big driver like amdgpu doing this split up is going to be >> horrendously complex. I know, we've done it for i915, at least >> partially. I strongly recommend that you're using devm_ for managing hw >> related resources (iomap, irq, ...) as much as possible. >> >> For drm_device resources (mostly structures and everything related to >> that) we've just merged the drmm_ managed resources framework. There's >> some more work to be done there for various kms objects, but you can at >> least somewhat avoid tedious handrolling for everything internal >> already. >> >> Don't ever use devm_kzalloc and friends, I've looked at hundreds of uses >> of this in drm, they're all wrong. >> >> - dma-buf is hilarious (and atm unfixed), dma-fence is even worse. In >> theory they're already refcounted and all and so should work, in >> practice I think we need to refcount the underlying drm_device with >> drm_dev_get/put to avoid the worst fall-out. > oh I forgot one, since it's new since the last time we've seriously > discussed this: p2p dma-buf > > But that /should/ be handleable with the move_notify callback. Assuming we > don't have any bugs anywhere, and the importer can indeed get rid of all > its mapping, always. Yeah, already noted that as well in the internal discussion. > > But for completeness probably need this one, just to keep it noted. > -Daniel > >> - One unfortunate thing with drm_dev_unplug is that the driver core is >> very opinionated and doesn't tell you whether it's a hotunplug or a >> driver unload. In the former case trying to shut down hw just wastes >> time (and might hit driver bugs), in the latter case driver engineers >> very much expect everything to be shut down. >> >> Right now you can only have one or the other, so this needs a module >> option hack or similar (default to the correct hotunplug behaviour for >> users). >> >> - SIGBUS is better than crashing the kernel, but it's not even close for >> users. They still lose everything because everything crashes because in >> my experience, in practice, no one ever handles errors. There's a few >> more things on top: >> >> - sighandlers are global, which means only the app can use it. You can't >> use it in e.g. 
mesa. They're also not composable, so if you have on >> sighandler for gpu1 and a 2nd one for gpu2 (could be different vendor) >> it's all sadness. Hence "usersapce will handle SIGBUS" wont work. >> >> - worse, neither vk nor gl (to my knowledge) have a concept of events >> for when the gpu died. The only stuff you have is things like >> arb_robustness which says a) everything continues as if nothing >> happened b) there's a function where you can ask whether your gl >> context and all the textures/buffers are toast. >> >> I think that's about the only hotunplug application model we can >> realistically expect applications to support. That means _all_ errors >> need to be silently eaten by either mesa or the kernel. On i915 the >> model (also for terminally wedged gpu hangs) is that all ioctl keep >> working, mmaps keep working, and execbuf gives you an -EIO (which mesa >> eats silently iirc for arb_robustness). >> >> Conclusion is that SIGBUS is imo a no-go, and the only option we have is >> that a) mmaps fully keep working, doable for shmem or b) we put some >> fake memory in there (for vram or whatever), maybe even only a single >> page for all fake memory. Ok, good to know. So to summarize no application termination, but instead redirect all memory access to a dummy page. From the IOCTLs we return -ENODEV instead of -EIO. Is that a problem? Thanks for the comments, Christian. >> >> - you probably want arb_robustness and similar stuff in userspace as a >> first step. >> >> tldr; >> - refcounting, not waiting for userspace >> - nothing can fail because userspace wont handle it >> >> That's at least my take on this mess, and what we've been pushing for over >> the past few years. For kms-only drm_driver we should have achieved that >> by now (plus/minus maybe some issues for dma-buf/fences, but kms-only >> dma-buf/fences are simple enough that maybe we don't go boom yet). >> >> For big gpus with rendering I think best next step would be to type up a >> reasonable Gran Plan (into Documentation/gpu/todo.rst) with all the issues >> and likely solutions. And then bikeshed that, since the above is just my >> take on all this. >> >> Cheers, Daniel >> >>> Andrey Grodzovsky (6): >>> drm/ttm: Add unampping of the entire device address space >>> drm/amdgpu: Force unmap all user VMAs on device removal. >>> drm/amdgpu: Wait for all user clients >>> drm/amdgpu: Wait for all clients importing out dma-bufs. 
>>> drm/ttm: Add destroy flag in TTM BO eviction interface >>> drm/amdgpu: Use TTM MMs destroy interface >>> >>> drivers/gpu/drm/amd/amdgpu/amdgpu.h | 3 ++ >>> drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +- >>> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 7 +++- >>> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 27 ++++++++++++- >>> drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 22 ++++++++-- >>> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 9 +++++ >>> drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 4 ++ >>> drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 17 +++++++- >>> drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 + >>> drivers/gpu/drm/nouveau/nouveau_drm.c | 2 +- >>> drivers/gpu/drm/qxl/qxl_object.c | 4 +- >>> drivers/gpu/drm/radeon/radeon_object.c | 2 +- >>> drivers/gpu/drm/ttm/ttm_bo.c | 63 +++++++++++++++++++++-------- >>> drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 6 +-- >>> include/drm/ttm/ttm_bo_api.h | 2 +- >>> include/drm/ttm/ttm_bo_driver.h | 2 + >>> 16 files changed, 139 insertions(+), 34 deletions(-) >>> >>> -- >>> 2.7.4 >>> >> -- >> Daniel Vetter >> Software Engineer, Intel Corporation >> http://blog.ffwll.ch
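A sketch of the "dummy page instead of SIGBUS" idea Christian summarizes above, again with made-up foo_* names; dummy_page is assumed to be a page the driver allocated up front for this purpose, and the real code would live in the TTM/driver fault path rather than in a generic GEM handler:

    static vm_fault_t foo_gem_vm_fault(struct vm_fault *vmf)
    {
            struct drm_gem_object *obj = vmf->vma->vm_private_data;
            struct drm_device *dev = obj->dev;
            vm_fault_t ret;
            int idx;

            if (drm_dev_enter(dev, &idx)) {
                    /* Device still present: normal fault handling. */
                    ret = foo_gem_vm_fault_real(vmf);
                    drm_dev_exit(idx);
                    return ret;
            }

            /* Device is gone: back the mapping with a harmless dummy page so
             * the application keeps running instead of getting SIGBUS. */
            return vmf_insert_pfn(vmf->vma, vmf->address,
                                  page_to_pfn(dummy_page));
    }

Daniel's "maybe even only a single page for all fake memory" corresponds to sharing one dummy_page for every dead mapping; the trade-off is that writes through one stale mapping become visible through the others.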
On Mon, May 11, 2020 at 2:34 PM Christian König <ckoenig.leichtzumerken@gmail.com> wrote: > > Am 11.05.20 um 13:19 schrieb Daniel Vetter: > > On Mon, May 11, 2020 at 11:54:33AM +0200, Daniel Vetter wrote: > >> On Sat, May 09, 2020 at 02:51:44PM -0400, Andrey Grodzovsky wrote: > >>> This RFC is a more of a proof of concept then a fully working solution as there are a few unresolved issues we are hopping to get advise on from people on the mailing list. > >>> Until now extracting a card either by physical extraction (e.g. eGPU with thunderbold connection or by emulation through syfs -> /sys/bus/pci/devices/device_id/remove) > >>> would cause random crashes in user apps. The random crashes in apps were mostly due to the app having mapped a device backed BO into it's adress space was still > >>> trying to access the BO while the backing device was gone. > >>> To answer this first problem Christian suggested to fix the handling of mapped memory in the clients when the device goes away by forcibly unmap all buffers > >>> the user processes has by clearing their respective VMAs mapping the device BOs. Then when the VMAs try to fill in the page tables again we check in the fault handler > >>> if the device is removed and if so, return an error. This will generate a SIGBUS to the application which can then cleanly terminate. > >>> This indeed was done but this in turn created a problem of kernel OOPs were the OOPSes were due to the fact that while the app was terminating because of the SIGBUS > >>> it would trigger use after free in the driver by calling to accesses device structures that were already released from the pci remove sequence. > >>> This we handled by introducing a 'flush' seqence during device removal were we wait for drm file reference to drop to 0 meaning all user clients directly using this device terminated. > >>> With this I was able to cleanly emulate device unplug with X and glxgears running and later emulate device plug back and restart of X and glxgears. > >>> > >>> But this use case is only partial and as I see it all the use cases are as follwing and the questions it raises. > >>> > >>> 1) Application accesses a BO by opening drm file > >>> 1.1) BO is mapped into applications address space (BO is CPU visible) - this one we have a solution for by invaldating BO's CPU mapping casuing SIGBUS > >>> and termination and waiting for drm file refcound to drop to 0 before releasing the device > >>> 1.2) BO is not mapped into applcation address space (BO is CPU invisible) - no solution yet because how we force the application to terminate in this case ? > >>> > >>> 2) Application accesses a BO by importing a DMA-BUF > >>> 2.1) BO is mapped into applications address space (BO is CPU visible) - solution is same as 1.1 but instead of waiting for drm file release we wait for the > >>> imported dma-buf's file release > >>> 2.2) BO is not mapped into applcation address space (BO is CPU invisible) - our solution is to invalidate GPUVM page tables and destroy backing storage for > >>> all exported BOs which will in turn casue VM faults in the importing device and then when the importing driver will try to re-attach the imported BO to > >>> update mappings we return -ENODEV in the import hook which hopeffuly will cause the user app to terminate. 
> >>> > >>> 3) Applcation opens a drm file or imports a dma-bud and holds a reference but never access any BO or does access but never more after device was unplug - how would we > >>> force this applcation to termiante before proceeding with device removal code ? Otherwise the wait in pci remove just hangs for ever. > >>> > >>> The attached patches adress 1.1, 2.1 and 2.2, for now only 1.1 fully tested and I am still testing the others but I will be happy for any advise on all the > >>> described use cases and maybe some alternative and better (more generic) approach to this like maybe obtaining PIDs of relevant processes through some revere > >>> mapping from device file and exported dma-buf files and send them SIGKILL - would this make more sense or any other method ? > >>> > >>> Patches 1-3 address 1.1 > >>> Patch 4 addresses 2.1 > >>> Pathces 5-6 address 2.2 > >>> > >>> Reference: https://gitlab.freedesktop.org/drm/amd/-/issues/1081 > >> So we've been working on this problem for a few years already (but it's > >> still not solved), I think you could have saved yourselfs some typing. > >> > >> Bunch of things: > >> - we can't wait for userspace in the hotunplug handlers, that might never > >> happen. The correct way is to untangle the lifetime of your hw driver > >> for a specific struct pci_device from the drm_device lifetime. > >> Infrastructure is all there now, see drm_dev_get/put, drm_dev_unplug and > >> drm_dev_enter/exit. A bunch of usb/spi drivers use this 100% correctly > >> now, so there's examples. Plus kerneldoc explains stuff. > > That's exactly what we tried first and I expected that this is a > necessity. Ok so back to the drawing board for this. > > >> > >> - for a big driver like amdgpu doing this split up is going to be > >> horrendously complex. I know, we've done it for i915, at least > >> partially. I strongly recommend that you're using devm_ for managing hw > >> related resources (iomap, irq, ...) as much as possible. > >> > >> For drm_device resources (mostly structures and everything related to > >> that) we've just merged the drmm_ managed resources framework. There's > >> some more work to be done there for various kms objects, but you can at > >> least somewhat avoid tedious handrolling for everything internal > >> already. > >> > >> Don't ever use devm_kzalloc and friends, I've looked at hundreds of uses > >> of this in drm, they're all wrong. > >> > >> - dma-buf is hilarious (and atm unfixed), dma-fence is even worse. In > >> theory they're already refcounted and all and so should work, in > >> practice I think we need to refcount the underlying drm_device with > >> drm_dev_get/put to avoid the worst fall-out. > > oh I forgot one, since it's new since the last time we've seriously > > discussed this: p2p dma-buf > > > > But that /should/ be handleable with the move_notify callback. Assuming we > > don't have any bugs anywhere, and the importer can indeed get rid of all > > its mapping, always. > > Yeah, already noted that as well in the internal discussion. > > > > > But for completeness probably need this one, just to keep it noted. > > -Daniel > > > >> - One unfortunate thing with drm_dev_unplug is that the driver core is > >> very opinionated and doesn't tell you whether it's a hotunplug or a > >> driver unload. In the former case trying to shut down hw just wastes > >> time (and might hit driver bugs), in the latter case driver engineers > >> very much expect everything to be shut down. 
> >> > >> Right now you can only have one or the other, so this needs a module > >> option hack or similar (default to the correct hotunplug behaviour for > >> users). > >> > >> - SIGBUS is better than crashing the kernel, but it's not even close for > >> users. They still lose everything because everything crashes because in > >> my experience, in practice, no one ever handles errors. There's a few > >> more things on top: > >> > >> - sighandlers are global, which means only the app can use them. You can't > >> use them in e.g. mesa. They're also not composable, so if you have one > >> sighandler for gpu1 and a 2nd one for gpu2 (could be different vendor) > >> it's all sadness. Hence "userspace will handle SIGBUS" won't work. > >> > >> - worse, neither vk nor gl (to my knowledge) have a concept of events > >> for when the gpu died. The only stuff you have is things like > >> arb_robustness which says a) everything continues as if nothing > >> happened b) there's a function where you can ask whether your gl > >> context and all the textures/buffers are toast. > >> > >> I think that's about the only hotunplug application model we can > >> realistically expect applications to support. That means _all_ errors > >> need to be silently eaten by either mesa or the kernel. On i915 the > >> model (also for terminally wedged gpu hangs) is that all ioctls keep > >> working, mmaps keep working, and execbuf gives you an -EIO (which mesa > >> eats silently iirc for arb_robustness). > >> > >> Conclusion is that SIGBUS is imo a no-go, and the only option we have is > >> that a) mmaps fully keep working, doable for shmem or b) we put some > >> fake memory in there (for vram or whatever), maybe even only a single > >> page for all fake memory. > > Ok, good to know. > > So to summarize: no application termination, but instead redirect all > memory access to a dummy page. > > From the IOCTLs we return -ENODEV instead of -EIO. Is that a problem? For atomic I think it'd be good if we're consistent across drivers. For render ioctls all that matters is that you end up implementing arb_robustness/vk device removal/whatever else custom interface HSA has/... correctly. So it's entirely up to a discussion between amdgpu and radeonsi folks I'd say. -Daniel > Thanks for the comments, > Christian. > > >> > >> - you probably want arb_robustness and similar stuff in userspace as a > >> first step. > >> > >> tldr; > >> - refcounting, not waiting for userspace > >> - nothing can fail because userspace won't handle it > >> > >> That's at least my take on this mess, and what we've been pushing for over > >> the past few years. For kms-only drm_driver we should have achieved that > >> by now (plus/minus maybe some issues for dma-buf/fences, but kms-only > >> dma-buf/fences are simple enough that maybe we don't go boom yet). > >> > >> For big gpus with rendering I think the best next step would be to type up a > >> reasonable Grand Plan (into Documentation/gpu/todo.rst) with all the issues > >> and likely solutions. And then bikeshed that, since the above is just my > >> take on all this. > >> > >> Cheers, Daniel > >> > >>> Andrey Grodzovsky (6): > >>> drm/ttm: Add unampping of the entire device address space > >>> drm/amdgpu: Force unmap all user VMAs on device removal. > >>> drm/amdgpu: Wait for all user clients > >>> drm/amdgpu: Wait for all clients importing out dma-bufs.
> >>> drm/ttm: Add destroy flag in TTM BO eviction interface > >>> drm/amdgpu: Use TTM MMs destroy interface > >>> > >>> drivers/gpu/drm/amd/amdgpu/amdgpu.h | 3 ++ > >>> drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +- > >>> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 7 +++- > >>> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 27 ++++++++++++- > >>> drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 22 ++++++++-- > >>> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 9 +++++ > >>> drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 4 ++ > >>> drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 17 +++++++- > >>> drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 + > >>> drivers/gpu/drm/nouveau/nouveau_drm.c | 2 +- > >>> drivers/gpu/drm/qxl/qxl_object.c | 4 +- > >>> drivers/gpu/drm/radeon/radeon_object.c | 2 +- > >>> drivers/gpu/drm/ttm/ttm_bo.c | 63 +++++++++++++++++++++-------- > >>> drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 6 +-- > >>> include/drm/ttm/ttm_bo_api.h | 2 +- > >>> include/drm/ttm/ttm_bo_driver.h | 2 + > >>> 16 files changed, 139 insertions(+), 34 deletions(-) > >>> > >>> -- > >>> 2.7.4 > >>> > >> -- > >> Daniel Vetter > >> Software Engineer, Intel Corporation > >> http://blog.ffwll.ch >
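A minimal sketch of the "ioctls keep working but fail cleanly" model discussed above, using the drm_dev_enter()/drm_dev_exit() infrastructure Daniel points to; the submit helper and the exact error code are placeholders rather than real amdgpu code, and (if I read the core right) drm_ioctl() already short-circuits most ioctls with -ENODEV once drm_dev_unplug() has been called, so a per-driver guard like this mostly matters for paths that race with the unplug:

    /* Illustrative command-submission ioctl that bails out cleanly once the
     * device has been unplugged, instead of touching dead hardware.
     */
    static int mydrv_cs_ioctl(struct drm_device *dev, void *data,
                              struct drm_file *file_priv)
    {
            int ret, idx;

            /* drm_dev_enter() returns false after drm_dev_unplug(). */
            if (!drm_dev_enter(dev, &idx))
                    return -ENODEV; /* or -EIO, whichever userspace expects */

            ret = mydrv_submit_job(dev, data, file_priv); /* hypothetical helper */

            drm_dev_exit(idx);
            return ret;
    }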
On Mon, May 11, 2020 at 02:21:57PM +0200, Daniel Vetter wrote: > On Mon, May 11, 2020 at 1:43 PM Lukas Wunner <lukas@wunner.de> wrote: > > On Mon, May 11, 2020 at 11:54:33AM +0200, Daniel Vetter wrote: > > > - One unfortunate thing with drm_dev_unplug is that the driver core is > > > very opinionated and doesn't tell you whether it's a hotunplug or a > > > driver unload. In the former case trying to shut down hw just wastes > > > time (and might hit driver bugs), in the latter case driver engineers > > > very much expect everything to be shut down. > > > > You can get that information at the PCI bus level with > > pci_dev_is_disconnected(). > > Ok, so at least for pci devices you could do something like > > if (pci_dev_is_disconnected()) > drm_dev_unplug(); > else > drm_dev_unregister(); > > In the ->remove callback and both users and developers should be > happy. Basically yes. But if the driver is unbound e.g. via sysfs and the device is hot-removed while it is being unbound, that approach fails. So you'll need checks for pci_dev_is_disconnected() further below in the call stack as well to avoid unpleasant side effects such as unduly delaying unbinding or ending up in infinite loops when reading "all ones" from PCI BARs, etc. It may also be worth checking for pci_dev_is_disconnected() in ioctls as well and directly returning -ENODEV, though of course that suffers from the same race. (The device may disappear after the check for pci_dev_is_disconnected(), or it may have already disappeared but pciehp hasn't updated the device's channel state yet.) Thanks, Lukas
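A sketch of the kind of lower-level bail-out Lukas describes, so a polling loop doesn't spin or wait out a long timeout on the all-ones value a removed PCI device returns; the register names and the helper are invented, and pci_dev_is_disconnected() currently lives in the private drivers/pci/pci.h header, so take this as the idea rather than a ready-made API:

    /* Give up quickly when the device is gone instead of waiting for a
     * timeout on reads that can only ever return 0xffffffff.
     */
    static int mydrv_wait_engine_idle(struct mydrv_device *mdev)
    {
            unsigned int tries = 1000;
            u32 status;

            while (tries--) {
                    if (pci_dev_is_disconnected(mdev->pdev))
                            return -ENODEV;

                    status = readl(mdev->mmio + MYDRV_STATUS); /* made-up register */
                    if (status == 0xffffffff) /* all ones: device likely gone */
                            return -ENODEV;
                    if (status & MYDRV_STATUS_IDLE)
                            return 0;

                    usleep_range(10, 20);
            }
            return -ETIMEDOUT;
    }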
On Mon, May 11, 2020 at 4:08 PM Lukas Wunner <lukas@wunner.de> wrote: > > On Mon, May 11, 2020 at 02:21:57PM +0200, Daniel Vetter wrote: > > On Mon, May 11, 2020 at 1:43 PM Lukas Wunner <lukas@wunner.de> wrote: > > > On Mon, May 11, 2020 at 11:54:33AM +0200, Daniel Vetter wrote: > > > > - One unfortunate thing with drm_dev_unplug is that the driver core is > > > > very opinionated and doesn't tell you whether it's a hotunplug or a > > > > driver unload. In the former case trying to shut down hw just wastes > > > > time (and might hit driver bugs), in the latter case driver engineers > > > > very much expect everything to be shut down. > > > > > > You can get that information at the PCI bus level with > > > pci_dev_is_disconnected(). > > > > Ok, so at least for pci devices you could do something like > > > > if (pci_dev_is_disconnected()) > > drm_dev_unplug(); > > else > > drm_dev_unregister(); > > > > In the ->remove callback and both users and developers should be > > happy. > > Basically yes. But if the driver is unbound e.g. via sysfs and the > device is hot-removed while it is being unbound, that approach fails. > > So you'll need checks for pci_dev_is_disconnected() further below in > the call stack as well to avoid unpleasant side effects such as unduly > delaying unbinding or ending up in infinite loops when reading "all ones" > from PCI BARs, etc. > > It may also be worth checking for pci_dev_is_disconnected() in ioctls > as well and directly returning -ENODEV, though of course that suffers > from the same race. (The device may disappear after the check for > pci_dev_is_disconnected(), or it may have already disappeared but > pciehp hasn't updated the device's channel state yet.) I guess we could do a drm_pci_dev_enter which combines drm_dev_enter with a pci_dev_is_disconnected() check. Not perfect, but otherwise the only real solution is to just unconditionally call drm_dev_unplug in ->remove. I think if we add a developer_mode module parameter, and when it's not explicitly set we ignore pci_dev_is_disconnected() and just always call drm_dev_unplug(), that would be about as good as it gets. -Daniel
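Putting the pci_dev_is_disconnected() check and the module-parameter idea together, the ->remove callback being discussed would look roughly like this; the parameter name and the teardown helper are made up, and whether pci_dev_is_disconnected() can be used outside the PCI core is still an open question:

    static bool developer_mode; /* hypothetical knob, off by default for users */
    module_param(developer_mode, bool, 0444);
    MODULE_PARM_DESC(developer_mode,
                     "Full hardware shutdown on unbind instead of treating it as unplug");

    static void mydrv_pci_remove(struct pci_dev *pdev)
    {
            struct drm_device *ddev = pci_get_drvdata(pdev);

            if (developer_mode && !pci_dev_is_disconnected(pdev)) {
                    /* Plain unbind/unload while testing: shut the hw down properly. */
                    drm_dev_unregister(ddev);
                    mydrv_hw_fini(ddev); /* hypothetical teardown helper */
            } else {
                    /* Default, and real hot-unplug: don't touch the (possibly dead) hw. */
                    drm_dev_unplug(ddev);
            }

            drm_dev_put(ddev);
    }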
On 5/11/20 5:26 AM, Pekka Paalanen wrote: > On Sat, 9 May 2020 14:51:44 -0400 > Andrey Grodzovsky <andrey.grodzovsky@amd.com> wrote: > >> This RFC is a more of a proof of concept then a fully working >> solution as there are a few unresolved issues we are hopping to get >> advise on from people on the mailing list. Until now extracting a >> card either by physical extraction (e.g. eGPU with thunderbold >> connection or by emulation through syfs >> -> /sys/bus/pci/devices/device_id/remove) would cause random crashes >> in user apps. The random crashes in apps were mostly due to the app >> having mapped a device backed BO into it's adress space was still >> trying to access the BO while the backing device was gone. To answer >> this first problem Christian suggested to fix the handling of mapped >> memory in the clients when the device goes away by forcibly unmap all >> buffers the user processes has by clearing their respective VMAs >> mapping the device BOs. Then when the VMAs try to fill in the page >> tables again we check in the fault handler if the device is removed >> and if so, return an error. This will generate a SIGBUS to the >> application which can then cleanly terminate. This indeed was done >> but this in turn created a problem of kernel OOPs were the OOPSes >> were due to the fact that while the app was terminating because of >> the SIGBUS it would trigger use after free in the driver by calling >> to accesses device structures that were already released from the pci >> remove sequence. This we handled by introducing a 'flush' seqence >> during device removal were we wait for drm file reference to drop to >> 0 meaning all user clients directly using this device terminated. >> With this I was able to cleanly emulate device unplug with X and >> glxgears running and later emulate device plug back and restart of X >> and glxgears. >> >> But this use case is only partial and as I see it all the use cases >> are as follwing and the questions it raises. >> >> 1) Application accesses a BO by opening drm file >> 1.1) BO is mapped into applications address space (BO is CPU visible) - this one we have a solution for by invaldating BO's CPU mapping casuing SIGBUS >> and termination and waiting for drm file refcound to drop to 0 before releasing the device >> 1.2) BO is not mapped into applcation address space (BO is CPU invisible) - no solution yet because how we force the application to terminate in this case ? >> >> 2) Application accesses a BO by importing a DMA-BUF >> 2.1) BO is mapped into applications address space (BO is CPU visible) - solution is same as 1.1 but instead of waiting for drm file release we wait for the >> imported dma-buf's file release >> 2.2) BO is not mapped into applcation address space (BO is CPU invisible) - our solution is to invalidate GPUVM page tables and destroy backing storage for >> all exported BOs which will in turn casue VM faults in the importing device and then when the importing driver will try to re-attach the imported BO to >> update mappings we return -ENODEV in the import hook which hopeffuly will cause the user app to terminate. >> >> 3) Applcation opens a drm file or imports a dma-bud and holds a reference but never access any BO or does access but never more after device was unplug - how would we >> force this applcation to termiante before proceeding with device removal code ? Otherwise the wait in pci remove just hangs for ever. 
>> >> The attached patches adress 1.1, 2.1 and 2.2, for now only 1.1 fully tested and I am still testing the others but I will be happy for any advise on all the >> described use cases and maybe some alternative and better (more generic) approach to this like maybe obtaining PIDs of relevant processes through some revere >> mapping from device file and exported dma-buf files and send them SIGKILL - would this make more sense or any other method ? >> >> Patches 1-3 address 1.1 >> Patch 4 addresses 2.1 >> Pathces 5-6 address 2.2 >> >> Reference: https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitlab.freedesktop.org%2Fdrm%2Famd%2F-%2Fissues%2F1081&data=02%7C01%7Candrey.grodzovsky%40amd.com%7C6f92386d0dd444de4fe608d7f58d5ae9%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637247860388177520&sdata=xg3zrilEwSCR7icmkKVVzZwiI11XvmGR%2Bca8nOWBiDM%3D&reserved=0 > Hi, > > how did you come up with the goal "make applications terminate"? Is > that your end goal, or is it just step 1 of many on the road of > supporting device hot-unplug? Just as an effort to improve the current situation where we have unexpected random crashes following device removal. > > Why do you want to terminate also applications that don't "need" to > terminate? Why hunt them down? I'm referring to your points 1.2, 2.2 > and 3. Because when those applications do exit and since they hold a reference to drm device through their open device file descriptor or dma-buf file descriptor we end up in use after free situation where during pci remove we already released everything but now last drm_dev_put release callback is trying to access those released structures. Any way, as you and Daniel pointed out forcing termination is a bad approach. Seems we need to actually keep all the drm structures around until the very last device reference is dropped while in the meatime returning error code for any new IOCTLs and rerouting any page fault to zero page. Thanks for the detailed response. Andrey > > From an end user perspective, I believe making applications terminate > is not helpful at all. Your display server still disappears, which > means all your apps are forced to quit, and you lose your desktop. I do > understand that a graceful termination is better than a hard lockup, > but not much. > > When I've talked about DRM device hot-unplug with Daniel Vetter, our > shared opinion seems to be that the unplug should not outright kill any > programs that are prepared to handle errors, that is, functions or > ioctls that return a success code can return an error, and then it is > up for the application to decide how to handle that. The end goal must > not be to terminate all applications that had something to do with the > device. At the very least the display server must survive. > > The rough idea on how that should work is that DRM ioctls start > returning errors and all mmaps are replaced with something harmless > that does not cause a SIGBUS. Userspace can handle the errors if it > wants to, and display servers will react to the device removed uevent > if not earlier. > > Why deliberately avoid raising SIGBUS? Because it is such a huge pain > to handle due to the archaic design of how signals are delivered. Most > of existing userspace is also not prepared to handle SIGBUS anywhere. 
> > The problem of handling SIGBUS at all is that a process can only have a > single signal handler per signal, but the process may comprise of > multiple components that cannot cooperate on signal catching: Mesa GPU > drivers, GUI toolkits, and the application itself may all do some > things that would require handling SIGBUS if removing a DRM device > raised it. For Mesa to cooperate with SIGBUS handling with the other > components in the process, we'd need some whole new APIs, an EGL > extension and maybe Vulkan extension too. The process may also have > threads, which are really painful with signals. What if you need to > handle the SIGBUS differently in different threads? > > Hence, mmaps should be replaced with something harmless, maybe > something that reads back all zeros and ignores writes. The application > will learn later that the DRM device is gone. Sending it a SIGBUS on > the spot when it accesses an mmap does not help: the memory is gone > already - if you didn't have a backup of the contents, you're not going > to make one now. > > My point here is, are you designing things to specifically only > terminate processes, or will you leave room in the design to improve the > implementation towards a proper handling of DRM device hot-unplug? > > > Thanks, > pq > > >> Andrey Grodzovsky (6): >> drm/ttm: Add unampping of the entire device address space >> drm/amdgpu: Force unmap all user VMAs on device removal. >> drm/amdgpu: Wait for all user clients >> drm/amdgpu: Wait for all clients importing out dma-bufs. >> drm/ttm: Add destroy flag in TTM BO eviction interface >> drm/amdgpu: Use TTM MMs destroy interface >> >> drivers/gpu/drm/amd/amdgpu/amdgpu.h | 3 ++ >> drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +- >> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 7 +++- >> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 27 ++++++++++++- >> drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 22 ++++++++-- >> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 9 +++++ >> drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 4 ++ >> drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 17 +++++++- >> drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 + >> drivers/gpu/drm/nouveau/nouveau_drm.c | 2 +- >> drivers/gpu/drm/qxl/qxl_object.c | 4 +- >> drivers/gpu/drm/radeon/radeon_object.c | 2 +- >> drivers/gpu/drm/ttm/ttm_bo.c | 63 +++++++++++++++++++++-------- >> drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 6 +-- >> include/drm/ttm/ttm_bo_api.h | 2 +- >> include/drm/ttm/ttm_bo_driver.h | 2 + >> 16 files changed, 139 insertions(+), 34 deletions(-) >> > > _______________________________________________ > amd-gfx mailing list > amd-gfx@lists.freedesktop.org > https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Candrey.grodzovsky%40amd.com%7C6f92386d0dd444de4fe608d7f58d5ae9%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637247860388197434&sdata=Unqh9pySrEsPeAFLxzmI0deAlPF29%2FfXLMdSl8Jsvgo%3D&reserved=0
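As a rough illustration of the "reroute page faults to a dummy page" direction Andrey mentions above (this is not what the posted patches do, they still raise SIGBUS), a TTM-based fault handler could be structured along these lines; mydrv_dummy_page is an assumed preallocated page and ttm_bo_vm_fault() is the generic helper from ttm_bo_vm.c, so this is only a sketch of the idea, not amdgpu's actual handler:

    static vm_fault_t mydrv_gem_fault(struct vm_fault *vmf)
    {
            struct ttm_buffer_object *bo = vmf->vma->vm_private_data;
            struct drm_device *ddev = bo->base.dev;
            vm_fault_t ret;
            int idx;

            if (drm_dev_enter(ddev, &idx)) {
                    /* Device still present: normal TTM fault handling. */
                    ret = ttm_bo_vm_fault(vmf);
                    drm_dev_exit(idx);
                    return ret;
            }

            /*
             * Device is gone: back the mapping with a harmless dummy page
             * instead of returning VM_FAULT_SIGBUS and killing the process.
             * The application learns about the removal later, via the uevent
             * or failing ioctls.
             */
            return vmf_insert_pfn(vmf->vma, vmf->address,
                                  page_to_pfn(mydrv_dummy_page));
    }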
On 5/11/20 5:54 AM, Daniel Vetter wrote: > On Sat, May 09, 2020 at 02:51:44PM -0400, Andrey Grodzovsky wrote: >> This RFC is a more of a proof of concept then a fully working solution as there are a few unresolved issues we are hopping to get advise on from people on the mailing list. >> Until now extracting a card either by physical extraction (e.g. eGPU with thunderbold connection or by emulation through syfs -> /sys/bus/pci/devices/device_id/remove) >> would cause random crashes in user apps. The random crashes in apps were mostly due to the app having mapped a device backed BO into it's adress space was still >> trying to access the BO while the backing device was gone. >> To answer this first problem Christian suggested to fix the handling of mapped memory in the clients when the device goes away by forcibly unmap all buffers >> the user processes has by clearing their respective VMAs mapping the device BOs. Then when the VMAs try to fill in the page tables again we check in the fault handler >> if the device is removed and if so, return an error. This will generate a SIGBUS to the application which can then cleanly terminate. >> This indeed was done but this in turn created a problem of kernel OOPs were the OOPSes were due to the fact that while the app was terminating because of the SIGBUS >> it would trigger use after free in the driver by calling to accesses device structures that were already released from the pci remove sequence. >> This we handled by introducing a 'flush' seqence during device removal were we wait for drm file reference to drop to 0 meaning all user clients directly using this device terminated. >> With this I was able to cleanly emulate device unplug with X and glxgears running and later emulate device plug back and restart of X and glxgears. >> >> But this use case is only partial and as I see it all the use cases are as follwing and the questions it raises. >> >> 1) Application accesses a BO by opening drm file >> 1.1) BO is mapped into applications address space (BO is CPU visible) - this one we have a solution for by invaldating BO's CPU mapping casuing SIGBUS >> and termination and waiting for drm file refcound to drop to 0 before releasing the device >> 1.2) BO is not mapped into applcation address space (BO is CPU invisible) - no solution yet because how we force the application to terminate in this case ? >> >> 2) Application accesses a BO by importing a DMA-BUF >> 2.1) BO is mapped into applications address space (BO is CPU visible) - solution is same as 1.1 but instead of waiting for drm file release we wait for the >> imported dma-buf's file release >> 2.2) BO is not mapped into applcation address space (BO is CPU invisible) - our solution is to invalidate GPUVM page tables and destroy backing storage for >> all exported BOs which will in turn casue VM faults in the importing device and then when the importing driver will try to re-attach the imported BO to >> update mappings we return -ENODEV in the import hook which hopeffuly will cause the user app to terminate. >> >> 3) Applcation opens a drm file or imports a dma-bud and holds a reference but never access any BO or does access but never more after device was unplug - how would we >> force this applcation to termiante before proceeding with device removal code ? Otherwise the wait in pci remove just hangs for ever. 
>> >> The attached patches adress 1.1, 2.1 and 2.2, for now only 1.1 fully tested and I am still testing the others but I will be happy for any advise on all the >> described use cases and maybe some alternative and better (more generic) approach to this like maybe obtaining PIDs of relevant processes through some revere >> mapping from device file and exported dma-buf files and send them SIGKILL - would this make more sense or any other method ? >> >> Patches 1-3 address 1.1 >> Patch 4 addresses 2.1 >> Pathces 5-6 address 2.2 >> >> Reference: https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitlab.freedesktop.org%2Fdrm%2Famd%2F-%2Fissues%2F1081&data=02%7C01%7Candrey.grodzovsky%40amd.com%7Cf6eec90e9da144cb772a08d7f5921ec2%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637247880251844517&sdata=QBGIbm1KLysglvRAvoiek8jBcNLE%2B4J7gVGDAbZD5Jw%3D&reserved=0 > So we've been working on this problem for a few years already (but it's > still not solved), I think you could have saved yourselfs some typing. > > Bunch of things: > - we can't wait for userspace in the hotunplug handlers, that might never > happen. The correct way is to untangle the lifetime of your hw driver > for a specific struct pci_device from the drm_device lifetime. > Infrastructure is all there now, see drm_dev_get/put, drm_dev_unplug and > drm_dev_enter/exit. this To be sure I understood you - do you mean that we should disable/shutdown any HW related stuff such as interrupts disable, any shutdown related device registers programming and io regions unmapping during pci remove sequence (in our case amdgpu_pci_remove) while keeping all the drm/amdgpu structures around in memory until drm_dev_put refocunt drop to 0 and &drm_driver.release is called thus avoiding any user after free oopses when last user reference is dropped ? Is there any point in doing any HW programming to shutdown device if device is already removed anyway (i assume that if driver hook for pci remove is called and it's a physical remove the device is already gone, no ?) What happens if drm_dev_put doesn't drop to 0 before the device is plugged back into the system ? In this case i have duplicates of all device structures in the system. Do you expect this to be not a problem or if it is it's up to me to resolve i guess ? > A bunch of usb/spi drivers use this 100% correctly > now, so there's examples. Plus kerneldoc explains stuff. Would you say tiny drm drivers are a good example ? > > - for a big driver like amdgpu doing this split up is going to be > horrendously complex. I know, we've done it for i915, at least > partially. Can you point me to relevant code/commits for i915 ? > I strongly recommend that you're using devm_ for managing hw > related resources (iomap, irq, ...) as much as possible. From what i saw, in DRM devres implementation amounts to using devm_drm_dev_init/devm_drm_dev_init_release - is that what you mean ? If so i see that devm_drm_dev_init_release just calls drm_dev_put, drm_dev_unplug ends up calling devm_drm_dev_init_release through the devres infrastructure - We already call drm_dev_unplug in amdgpu_pci_remove, we also directly call drm_dev_put there so i am not clear what's the added value of using devm here ? > > For drm_device resources (mostly structures and everything related to > that) we've just merged the drmm_ managed resources framework. There's > some more work to be done there for various kms objects, but you can at > least somewhat avoid tedious handrolling for everything internal > already. 
I can't find drmm in the code, can you point me please ? > > Don't ever use devm_kzalloc and friends, I've looked at hundreds of uses > of this in drm, they're all wrong. > > - dma-buf is hilarious (and atm unfixed), dma-fence is even worse. In > theory they're already refcounted and all and so should work, in > practice I think we need to refcount the underlying drm_device with > drm_dev_get/put to avoid the worst fall-out. > > - One unfortunate thing with drm_dev_unplug is that the driver core is > very opinionated and doesn't tell you whether it's a hotunplug or a > driver unload. In the former case trying to shut down hw just wastes > time (and might hit driver bugs), in the latter case driver engineers > very much expect everything to be shut down. > > Right now you can only have one or the other, so this needs a module > option hack or similar (default to the correct hotunplug behaviour for > users). > > - SIGBUS is better than crashing the kernel, but it's not even close for > users. They still lose everything because everything crashes because in > my experience, in practice, no one ever handles errors. There's a few > more things on top: > > - sighandlers are global, which means only the app can use it. You can't > use it in e.g. mesa. They're also not composable, so if you have on > sighandler for gpu1 and a 2nd one for gpu2 (could be different vendor) > it's all sadness. Hence "usersapce will handle SIGBUS" wont work. > > - worse, neither vk nor gl (to my knowledge) have a concept of events > for when the gpu died. The only stuff you have is things like > arb_robustness which says a) everything continues as if nothing > happened b) there's a function where you can ask whether your gl > context and all the textures/buffers are toast. > > I think that's about the only hotunplug application model we can > realistically expect applications to support. That means _all_ errors > need to be silently eaten by either mesa or the kernel. On i915 the > model (also for terminally wedged gpu hangs) is that all ioctl keep > working, mmaps keep working, and execbuf gives you an -EIO (which mesa > eats silently iirc for arb_robustness). > > Conclusion is that SIGBUS is imo a no-go, and the only option we have is > that a) mmaps fully keep working, doable for shmem or b) we put some > fake memory in there (for vram or whatever), maybe even only a single > page for all fake memory. > > - you probably want arb_robustness and similar stuff in userspace as a > first step. > > tldr; > - refcounting, not waiting for userspace > - nothing can fail because userspace wont handle it For nothing can fail i see in tiny drm driver examples (e.g. ili9225_pipe_enable) that for any function which is about to do HW programming they check for drm_dev_enter and silently return if device is not present - is that what you mean, that I should pepper all of amdgpu code such that any function that ends up doing some HW programming be guarded with drm_dev_enter/exit silently returning in case of device is gone ? Thanks a lot for your detailed response. Andrey > > That's at least my take on this mess, and what we've been pushing for over > the past few years. For kms-only drm_driver we should have achieved that > by now (plus/minus maybe some issues for dma-buf/fences, but kms-only > dma-buf/fences are simple enough that maybe we don't go boom yet). 
> > For big gpus with rendering I think best next step would be to type up a > reasonable Gran Plan (into Documentation/gpu/todo.rst) with all the issues > and likely solutions. And then bikeshed that, since the above is just my > take on all this. > > Cheers, Daniel > >> Andrey Grodzovsky (6): >> drm/ttm: Add unampping of the entire device address space >> drm/amdgpu: Force unmap all user VMAs on device removal. >> drm/amdgpu: Wait for all user clients >> drm/amdgpu: Wait for all clients importing out dma-bufs. >> drm/ttm: Add destroy flag in TTM BO eviction interface >> drm/amdgpu: Use TTM MMs destroy interface >> >> drivers/gpu/drm/amd/amdgpu/amdgpu.h | 3 ++ >> drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +- >> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 7 +++- >> drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 27 ++++++++++++- >> drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 22 ++++++++-- >> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 9 +++++ >> drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 4 ++ >> drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 17 +++++++- >> drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 + >> drivers/gpu/drm/nouveau/nouveau_drm.c | 2 +- >> drivers/gpu/drm/qxl/qxl_object.c | 4 +- >> drivers/gpu/drm/radeon/radeon_object.c | 2 +- >> drivers/gpu/drm/ttm/ttm_bo.c | 63 +++++++++++++++++++++-------- >> drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 6 +-- >> include/drm/ttm/ttm_bo_api.h | 2 +- >> include/drm/ttm/ttm_bo_driver.h | 2 + >> 16 files changed, 139 insertions(+), 34 deletions(-) >> >> -- >> 2.7.4 >>
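If I read Daniel's suggestion correctly, the split Andrey is asking about above looks roughly like this: the PCI ->remove callback only quiesces the hardware-facing side and drops its reference, while freeing of the software state waits for drm_driver.release. The helper names are placeholders, not real amdgpu functions:

    static void mydrv_pci_remove(struct pci_dev *pdev)
    {
            struct drm_device *ddev = pci_get_drvdata(pdev);

            /* Mark the device gone so new ioctls fail, stop hw-facing activity. */
            drm_dev_unplug(ddev);
            mydrv_irq_fini(ddev);    /* placeholder: disable interrupts   */
            mydrv_unmap_mmio(ddev);  /* placeholder: release BAR mappings */

            /* Software state stays around until the last reference is gone. */
            drm_dev_put(ddev);
    }

    static void mydrv_release(struct drm_device *ddev)
    {
            /*
             * Called from drm_dev_put() when the refcount finally reaches zero,
             * possibly long after ->remove, once the last open file or imported
             * dma-buf has been closed. Only free memory here, never touch hw.
             */
            mydrv_free_device_structures(ddev); /* placeholder */
    }

    static struct drm_driver mydrv_driver = {
            /* ... */
            .release = mydrv_release,
    };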
On Wed, May 13, 2020 at 10:32:56AM -0400, Andrey Grodzovsky wrote: > > On 5/11/20 5:54 AM, Daniel Vetter wrote: > > On Sat, May 09, 2020 at 02:51:44PM -0400, Andrey Grodzovsky wrote: > > > This RFC is a more of a proof of concept then a fully working solution as there are a few unresolved issues we are hopping to get advise on from people on the mailing list. > > > Until now extracting a card either by physical extraction (e.g. eGPU with thunderbold connection or by emulation through syfs -> /sys/bus/pci/devices/device_id/remove) > > > would cause random crashes in user apps. The random crashes in apps were mostly due to the app having mapped a device backed BO into it's adress space was still > > > trying to access the BO while the backing device was gone. > > > To answer this first problem Christian suggested to fix the handling of mapped memory in the clients when the device goes away by forcibly unmap all buffers > > > the user processes has by clearing their respective VMAs mapping the device BOs. Then when the VMAs try to fill in the page tables again we check in the fault handler > > > if the device is removed and if so, return an error. This will generate a SIGBUS to the application which can then cleanly terminate. > > > This indeed was done but this in turn created a problem of kernel OOPs were the OOPSes were due to the fact that while the app was terminating because of the SIGBUS > > > it would trigger use after free in the driver by calling to accesses device structures that were already released from the pci remove sequence. > > > This we handled by introducing a 'flush' seqence during device removal were we wait for drm file reference to drop to 0 meaning all user clients directly using this device terminated. > > > With this I was able to cleanly emulate device unplug with X and glxgears running and later emulate device plug back and restart of X and glxgears. > > > > > > But this use case is only partial and as I see it all the use cases are as follwing and the questions it raises. > > > > > > 1) Application accesses a BO by opening drm file > > > 1.1) BO is mapped into applications address space (BO is CPU visible) - this one we have a solution for by invaldating BO's CPU mapping casuing SIGBUS > > > and termination and waiting for drm file refcound to drop to 0 before releasing the device > > > 1.2) BO is not mapped into applcation address space (BO is CPU invisible) - no solution yet because how we force the application to terminate in this case ? > > > > > > 2) Application accesses a BO by importing a DMA-BUF > > > 2.1) BO is mapped into applications address space (BO is CPU visible) - solution is same as 1.1 but instead of waiting for drm file release we wait for the > > > imported dma-buf's file release > > > 2.2) BO is not mapped into applcation address space (BO is CPU invisible) - our solution is to invalidate GPUVM page tables and destroy backing storage for > > > all exported BOs which will in turn casue VM faults in the importing device and then when the importing driver will try to re-attach the imported BO to > > > update mappings we return -ENODEV in the import hook which hopeffuly will cause the user app to terminate. > > > > > > 3) Applcation opens a drm file or imports a dma-bud and holds a reference but never access any BO or does access but never more after device was unplug - how would we > > > force this applcation to termiante before proceeding with device removal code ? Otherwise the wait in pci remove just hangs for ever. 
> > > > > > The attached patches adress 1.1, 2.1 and 2.2, for now only 1.1 fully tested and I am still testing the others but I will be happy for any advise on all the > > > described use cases and maybe some alternative and better (more generic) approach to this like maybe obtaining PIDs of relevant processes through some revere > > > mapping from device file and exported dma-buf files and send them SIGKILL - would this make more sense or any other method ? > > > > > > Patches 1-3 address 1.1 > > > Patch 4 addresses 2.1 > > > Pathces 5-6 address 2.2 > > > > > > Reference: https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitlab.freedesktop.org%2Fdrm%2Famd%2F-%2Fissues%2F1081&data=02%7C01%7Candrey.grodzovsky%40amd.com%7Cf6eec90e9da144cb772a08d7f5921ec2%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637247880251844517&sdata=QBGIbm1KLysglvRAvoiek8jBcNLE%2B4J7gVGDAbZD5Jw%3D&reserved=0 > > So we've been working on this problem for a few years already (but it's > > still not solved), I think you could have saved yourselfs some typing. > > > > Bunch of things: > > - we can't wait for userspace in the hotunplug handlers, that might never > > happen. The correct way is to untangle the lifetime of your hw driver > > for a specific struct pci_device from the drm_device lifetime. > > Infrastructure is all there now, see drm_dev_get/put, drm_dev_unplug and > > drm_dev_enter/exit. > > this > > To be sure I understood you - do you mean that we should disable/shutdown > any HW related stuff such as interrupts disable, any shutdown related device > registers programming and io regions unmapping during pci remove sequence > (in our case amdgpu_pci_remove) while keeping all the drm/amdgpu structures > around in memory until drm_dev_put refocunt drop to 0 and > &drm_driver.release is called thus avoiding any user after free oopses when > last user reference is dropped ? Yes. > Is there any point in doing any HW programming to shutdown device if device > is already removed anyway (i assume that if driver hook for pci remove is > called and it's a physical remove the device is already gone, no ?) No, all that does is result in bus timeouts, which take forever. Plus increased chances that the driver gets confused about the values it reads (for pci all you get is 0xffffffff, no error value, usb is a lot better here because it's explicit packets and streams where you can get explicit errors and bail out). The trouble is that developers still expect you to shut down hw when they unload the driver for testing, but I think we've discussed a reasonable solution for that for pci drivers somewhere in this thread. > What happens if drm_dev_put doesn't drop to 0 before the device is plugged > back into the system ? In this case i have duplicates of all device > structures in the system. Do you expect this to be not a problem or if it is > it's up to me to resolve i guess ? You get another one. Same way you get duplicates if there's actually 2 devices plugged in, so if your driver supports multiple gpus already you should be fine. We should also not hang onto chardev minor numbers, so I think you should be getting the same minor number again. But not 100% sure, maybe something we might need to fix ... > > A bunch of usb/spi drivers use this 100% correctly > > now, so there's examples. Plus kerneldoc explains stuff. > > > Would you say tiny drm drivers are a good example ? Yup. But also, they're tiny, so lots more complexity in amdgpu that they don't even cover. 
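For reference, the core pattern those drivers use around hardware access is small enough to sketch here; the register write and all names are invented rather than taken from any particular driver:

    static void mydrv_enable_pipe(struct mydrv_device *mdev)
    {
            struct drm_device *ddev = &mdev->ddev;
            int idx;

            /* After drm_dev_unplug() this returns false and we silently skip
             * the hardware access, the same way the tiny drivers do it.
             */
            if (!drm_dev_enter(ddev, &idx))
                    return;

            writel(MYDRV_PIPE_ENABLE, mdev->mmio + MYDRV_PIPE_CTL); /* made up */

            drm_dev_exit(idx);
    }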
But for the basic flow of using drmm and devm and the functions I mentioned above, they should be the most bug-free drivers we have. > > - for a big driver like amdgpu doing this split up is going to be > > horrendously complex. I know, we've done it for i915, at least > > partially. > > > Can you point me to relevant code/commits for i915 ? Anything touching the i915 drm_driver.release function. Or any function called from there. > > I strongly recommend that you're using devm_ for managing hw > > related resources (iomap, irq, ...) as much as possible. > > > From what i saw, in DRM devres implementation amounts to using > devm_drm_dev_init/devm_drm_dev_init_release - is that what you mean ? If so > i see that devm_drm_dev_init_release just calls drm_dev_put, drm_dev_unplug > ends up calling devm_drm_dev_init_release through the devres infrastructure > - We already call drm_dev_unplug in amdgpu_pci_remove, we also directly call > drm_dev_put there so i am not clear what's the added value of using devm > here ? There's a lot more to devres, but yes that's the drm_device one. For the full list of what can all be managed with devres see https://dri.freedesktop.org/docs/drm/driver-api/driver-model/devres.html There's lots of example usage in drm, especially tiny drivers and anything that runs on arm. Only caveat is that any usage of devm_kzalloc is buggy (at least in drm, as far as I've checked them). > > For drm_device resources (mostly structures and everything related to > > that) we've just merged the drmm_ managed resources framework. There's > > some more work to be done there for various kms objects, but you can at > > least somewhat avoid tedious handrolling for everything internal > > already. > > > I can't find drmm in the code, can you point me please ? drm_managed.c, you need latest drm-next I think. Or linux-next. Docs here: https://dri.freedesktop.org/docs/drm/gpu/drm-internals.html#managed-resources > > > > > Don't ever use devm_kzalloc and friends, I've looked at hundreds of uses > > of this in drm, they're all wrong. > > > > - dma-buf is hilarious (and atm unfixed), dma-fence is even worse. In > > theory they're already refcounted and all and so should work, in > > practice I think we need to refcount the underlying drm_device with > > drm_dev_get/put to avoid the worst fall-out. > > > > - One unfortunate thing with drm_dev_unplug is that the driver core is > > very opinionated and doesn't tell you whether it's a hotunplug or a > > driver unload. In the former case trying to shut down hw just wastes > > time (and might hit driver bugs), in the latter case driver engineers > > very much expect everything to be shut down. > > > > Right now you can only have one or the other, so this needs a module > > option hack or similar (default to the correct hotunplug behaviour for > > users). > > > > - SIGBUS is better than crashing the kernel, but it's not even close for > > users. They still lose everything because everything crashes because in > > my experience, in practice, no one ever handles errors. There's a few > > more things on top: > > > > - sighandlers are global, which means only the app can use it. You can't > > use it in e.g. mesa. They're also not composable, so if you have on > > sighandler for gpu1 and a 2nd one for gpu2 (could be different vendor) > > it's all sadness. Hence "usersapce will handle SIGBUS" wont work. > > > > - worse, neither vk nor gl (to my knowledge) have a concept of events > > for when the gpu died. 
The only stuff you have is things like > > arb_robustness which says a) everything continues as if nothing > > happened b) there's a function where you can ask whether your gl > > context and all the textures/buffers are toast. > > > > I think that's about the only hotunplug application model we can > > realistically expect applications to support. That means _all_ errors > > need to be silently eaten by either mesa or the kernel. On i915 the > > model (also for terminally wedged gpu hangs) is that all ioctl keep > > working, mmaps keep working, and execbuf gives you an -EIO (which mesa > > eats silently iirc for arb_robustness). > > > > Conclusion is that SIGBUS is imo a no-go, and the only option we have is > > that a) mmaps fully keep working, doable for shmem or b) we put some > > fake memory in there (for vram or whatever), maybe even only a single > > page for all fake memory. > > > > - you probably want arb_robustness and similar stuff in userspace as a > > first step. > > > > tldr; > > - refcounting, not waiting for userspace > > - nothing can fail because userspace wont handle it > > > For nothing can fail i see in tiny drm driver examples (e.g. > ili9225_pipe_enable) that for any function which is about to do HW > programming they check for drm_dev_enter and silently return if device is > not present - is that what you mean, that I should pepper all of amdgpu code > such that any function that ends up doing some HW programming be guarded > with drm_dev_enter/exit silently returning in case of device is gone ? Yup. Note that e.g. usb, because it's a packet/stream bus, gives you explicit errors, so those drivers don't need the drm_dev_enter/exit. But for pci we need them to make sure we're not wasting time on everything timing out first when the hw is gone. > Thanks a lot for your detailed response. np. Cheers, Daniel > > Andrey > > > > > > That's at least my take on this mess, and what we've been pushing for over > > the past few years. For kms-only drm_driver we should have achieved that > > by now (plus/minus maybe some issues for dma-buf/fences, but kms-only > > dma-buf/fences are simple enough that maybe we don't go boom yet). > > > > For big gpus with rendering I think best next step would be to type up a > > reasonable Gran Plan (into Documentation/gpu/todo.rst) with all the issues > > and likely solutions. And then bikeshed that, since the above is just my > > take on all this. > > > > Cheers, Daniel > > > > > Andrey Grodzovsky (6): > > > drm/ttm: Add unampping of the entire device address space > > > drm/amdgpu: Force unmap all user VMAs on device removal. > > > drm/amdgpu: Wait for all user clients > > > drm/amdgpu: Wait for all clients importing out dma-bufs.
> > > drm/ttm: Add destroy flag in TTM BO eviction interface > > > drm/amdgpu: Use TTM MMs destroy interface > > > > > > drivers/gpu/drm/amd/amdgpu/amdgpu.h | 3 ++ > > > drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +- > > > drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 7 +++- > > > drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 27 ++++++++++++- > > > drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 22 ++++++++-- > > > drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 9 +++++ > > > drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 4 ++ > > > drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 17 +++++++- > > > drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 + > > > drivers/gpu/drm/nouveau/nouveau_drm.c | 2 +- > > > drivers/gpu/drm/qxl/qxl_object.c | 4 +- > > > drivers/gpu/drm/radeon/radeon_object.c | 2 +- > > > drivers/gpu/drm/ttm/ttm_bo.c | 63 +++++++++++++++++++++-------- > > > drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 6 +-- > > > include/drm/ttm/ttm_bo_api.h | 2 +- > > > include/drm/ttm/ttm_bo_driver.h | 2 + > > > 16 files changed, 139 insertions(+), 34 deletions(-) > > > > > > -- > > > 2.7.4 > > >
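To give the devm_/drmm_ pointers earlier in the thread something concrete, a probe function in the spirit Daniel describes might look roughly like this; it is a sketch against the drm_managed.c API in current drm-next, error unwinding is trimmed, and all mydrv_* names are invented:

    struct mydrv_device {
            struct drm_device ddev;
            void __iomem *mmio;
            struct mydrv_vm_mgr *vm_mgr; /* hypothetical per-device state */
    };

    static int mydrv_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
    {
            struct mydrv_device *mdev;
            struct drm_device *ddev;
            int ret;

            /* devm: PCI enable and MMIO mappings are released automatically. */
            ret = pcim_enable_device(pdev);
            if (ret)
                    return ret;

            mdev = kzalloc(sizeof(*mdev), GFP_KERNEL);
            if (!mdev)
                    return -ENOMEM;
            ddev = &mdev->ddev;

            ret = drm_dev_init(ddev, &mydrv_driver, &pdev->dev);
            if (ret) {
                    kfree(mdev);
                    return ret;
            }
            drmm_add_final_kfree(ddev, mdev); /* container freed with the drm_device */

            mdev->mmio = pcim_iomap(pdev, 0, 0);
            if (!mdev->mmio)
                    return -ENOMEM; /* real code would drm_dev_put() here */

            /* drmm: lives exactly as long as the drm_device, i.e. until the
             * last drm_dev_put(), not merely until ->remove.
             */
            mdev->vm_mgr = drmm_kzalloc(ddev, sizeof(*mdev->vm_mgr), GFP_KERNEL);
            if (!mdev->vm_mgr)
                    return -ENOMEM; /* likewise */

            pci_set_drvdata(pdev, ddev);
            return drm_dev_register(ddev, 0);
    }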