Message ID: 20240830070351.2855919-1-jens.wiklander@linaro.org
Series: Linaro restricted heap
Hi,

On Fri, Aug 30, 2024 at 09:03:47AM GMT, Jens Wiklander wrote:
> Hi,
>
> This patch set is based on top of Yong Wu's restricted heap patch set [1].
> It's also a continuation of Olivier's Add dma-buf secure-heap patch set [2].
>
> The Linaro restricted heap uses genalloc in the kernel to manage the heap
> carveout. This is a difference from the Mediatek restricted heap, which
> relies on the secure world to manage the carveout.
>
> I've tried to address the comments on [2], but [1] introduces changes, so
> I'm afraid I've had to skip some comments.

I know I have raised the same question during LPC (in connection with
Qualcomm's dma-heap implementation). Is there any reason why we are using
generic heaps instead of allocating the dma-bufs on the device side?

In your case you already have a TEE device; you can use it to allocate and
export dma-bufs, which then get imported by the V4L and DRM drivers.

I have a feeling (I might be completely wrong here) that by using generic
dma-buf heaps we can easily end up in a situation where userspace depends
heavily on the actual platform being used (to map the platform to heap
names). I think we should instead depend on the existing devices (e.g. if
there is a TEE device, use an IOCTL to allocate a secured DMA-BUF from it;
otherwise check for a QTEE device; otherwise check for some other vendor
device).

The mental experiment to check whether the API is correct is really simple:
can you use exactly the same rootfs on several devices without any
additional tuning (e.g. your QEMU, HiKey, a Mediatek board, a Qualcomm
laptop, etc.)?

> This can be tested on QEMU with the following steps:
> repo init -u https://github.com/jenswi-linaro/manifest.git -m qemu_v8.xml \
>         -b prototype/sdp-v1
> repo sync -j8
> cd build
> make toolchains -j4
> make all -j$(nproc)
> make run-only
> # login and at the prompt:
> xtest --sdp-basic
>
> https://optee.readthedocs.io/en/latest/building/prerequisites.html
> lists dependencies needed to build the above.
>
> The tests are pretty basic, mostly checking that a Trusted Application in
> the secure world can access and manipulate the memory.

- Can we test that the system doesn't crash badly if the user provides
  non-secured memory to the users which expect a secure buffer?

- At the same time, the corresponding entities shouldn't decode data to
  buffers accessible to the rest of the system.
> Cheers,
> Jens
>
> [1] https://lore.kernel.org/dri-devel/20240515112308.10171-1-yong.wu@mediatek.com/
> [2] https://lore.kernel.org/lkml/20220805135330.970-1-olivier.masse@nxp.com/
>
> Changes since Olivier's post [2]:
> * Based on Yong Wu's post [1] where much of the dma-buf handling is done
>   in the generic restricted heap
> * Simplifications and cleanup
> * New commit message for "dma-buf: heaps: add Linaro restricted dmabuf
>   heap support"
> * Replaced the word "secure" with "restricted" where applicable
>
> Etienne Carriere (1):
>   tee: new ioctl to a register tee_shm from a dmabuf file descriptor
>
> Jens Wiklander (2):
>   dma-buf: heaps: restricted_heap: add no_map attribute
>   dma-buf: heaps: add Linaro restricted dmabuf heap support
>
> Olivier Masse (1):
>   dt-bindings: reserved-memory: add linaro,restricted-heap
>
>  .../linaro,restricted-heap.yaml            |  56 ++++++
>  drivers/dma-buf/heaps/Kconfig              |  10 ++
>  drivers/dma-buf/heaps/Makefile             |   1 +
>  drivers/dma-buf/heaps/restricted_heap.c    |  17 +-
>  drivers/dma-buf/heaps/restricted_heap.h    |   2 +
>  .../dma-buf/heaps/restricted_heap_linaro.c | 165 ++++++++++++++++++
>  drivers/tee/tee_core.c                     |  38 ++++
>  drivers/tee/tee_shm.c                      | 104 ++++++++++-
>  include/linux/tee_drv.h                    |  11 ++
>  include/uapi/linux/tee.h                   |  29 +++
>  10 files changed, 426 insertions(+), 7 deletions(-)
>  create mode 100644 Documentation/devicetree/bindings/reserved-memory/linaro,restricted-heap.yaml
>  create mode 100644 drivers/dma-buf/heaps/restricted_heap_linaro.c
>
> --
> 2.34.1
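For concreteness, the device-side flow Dmitry describes would look roughly
like the sketch below in userspace. The ioctl and struct here are
hypothetical, invented only to illustrate the shape of the proposal;
nothing like TEE_IOC_RSTMEM_ALLOC exists in mainline at this point, and
only TEE_IOC_MAGIC comes from the real <linux/tee.h>.

/*
 * Hypothetical sketch of allocating a restricted dma-buf directly from
 * the TEE device instead of from a named heap. TEE_IOC_RSTMEM_ALLOC
 * and struct tee_ioctl_rstmem_alloc_data are NOT real UAPI; they are
 * illustrative only.
 */
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/tee.h>

struct tee_ioctl_rstmem_alloc_data {	/* hypothetical */
	uint64_t size;
	uint32_t flags;
	uint32_t use_case;	/* e.g. secure video playback */
	int32_t fd;		/* out: dma-buf file descriptor */
};

/* hypothetical ioctl number, only TEE_IOC_MAGIC is real */
#define TEE_IOC_RSTMEM_ALLOC \
	_IOWR(TEE_IOC_MAGIC, 10, struct tee_ioctl_rstmem_alloc_data)

int alloc_restricted_buf(size_t size)
{
	struct tee_ioctl_rstmem_alloc_data data = { .size = size };
	int tee_fd = open("/dev/tee0", O_RDWR);

	if (tee_fd < 0)
		return -1;
	if (ioctl(tee_fd, TEE_IOC_RSTMEM_ALLOC, &data) < 0) {
		close(tee_fd);
		return -1;
	}
	close(tee_fd);
	/* importable by V4L2 and DRM like any other dma-buf */
	return data.fd;
}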
On 9/23/24 1:33 AM, Dmitry Baryshkov wrote:
> Hi,
>
> On Fri, Aug 30, 2024 at 09:03:47AM GMT, Jens Wiklander wrote:
>> Hi,
>>
>> This patch set is based on top of Yong Wu's restricted heap patch set [1].
>> It's also a continuation of Olivier's Add dma-buf secure-heap patch set [2].
>> [...]
>
> I know I have raised the same question during LPC (in connection with
> Qualcomm's dma-heap implementation). Is there any reason why we are using
> generic heaps instead of allocating the dma-bufs on the device side?
>
> In your case you already have a TEE device; you can use it to allocate and
> export dma-bufs, which then get imported by the V4L and DRM drivers.
>

This goes to the heart of why we have dma-heaps in the first place.
We don't want to burden userspace with having to figure out the right
place to get a dma-buf for a given use-case on a given hardware.
That would be very non-portable, and fail at the core purpose of
a kernel: to abstract hardware specifics away.

Worse, the actual interface for dma-buf exporting changes from
framework to framework (getting a dma-buf from DRM is different
than from V4L, and there would be yet another API for TEE, etc.).

Most subsystems don't need an allocator; they work just fine
simply being dma-buf importers. A recent example is the
IIO subsystem[0], for which some early postings included an
allocator, but in the end all that was needed was to consume
buffers.

For devices that don't actually contain memory there is no
reason to be an exporter. What most want is just to consume
normal system memory, or system memory with some constraints
(e.g. contiguous, coherent, restricted, etc.).

> I have a feeling (I might be completely wrong here) that by using
> generic dma-buf heaps we can easily end up in a situation where
> userspace depends heavily on the actual platform being used (to map the
> platform to heap names). I think we should instead depend on the
> existing devices (e.g. if there is a TEE device, use an IOCTL to
> allocate a secured DMA-BUF from it; otherwise check for a QTEE device;
> otherwise check for some other vendor device).
>
> The mental experiment to check whether the API is correct is really
> simple: can you use exactly the same rootfs on several devices without
> any additional tuning (e.g. your QEMU, HiKey, a Mediatek board, a
> Qualcomm laptop, etc.)?
>

This is a great north star to follow. And exactly the reason we should
*not* be exposing device-specific constraints to userspace. The
constraints change based on the platform, so userspace would have to
also pick a different set of constraints for each platform.

Userspace knows which subsystems it will attach a buffer to, and the
kernel knows what constraints those devices have on a given platform.
The ideal case is then to allocate from the one exporter, attach to
various devices, and have the constraints solved at map time by the
exporter based on the set of attached devices.

For example, on one platform the display needs contiguous buffers,
but on a different platform the display can scatter-gather. So
what heap should our generic application allocate from when it
wants a buffer consumable by the display, CMA or System?
The answer *should* be: always use the generic exporter, and that
exporter then picks the right backing type based on the platform.

Userspace shouldn't be dealing with any of these constraints
(looking back, adding the CMA heap was probably incorrect,
and the System heap should have been the only one. The idea back
then was that a userspace helper would show up to do the constraint
solving and pick the right heap. That has yet to materialize, and
folks are still just hardcoding which heap to use..).

Same for this restricted heap: I'd like to explore whether we can
enhance the System heap such that when attached to the TEE framework,
the backing memory is either made restricted by fire-walling, or
allocated from a TEE carveout (based on platform).

This will mean more inter-subsystem coordination, but we can
iterate on these in-kernel interfaces. We cannot iterate on
userspace interfaces; those have to be correct the first time.

Andrew

[0] https://www.kernel.org/doc/html/next/iio/iio_dmabuf_api.html

>> [...]
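For reference, the "one generic exporter" model Andrew describes is what
the existing DMA-heap UAPI already provides; a minimal allocation from a
named heap looks like the sketch below. The "system" heap exists in
mainline today, while a restricted heap name is only what this series
would add.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

/* Allocate a dma-buf of 'len' bytes from /dev/dma_heap/<heap>. */
int heap_alloc(const char *heap, size_t len)
{
	struct dma_heap_allocation_data data = {
		.len = len,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	char path[64];
	int heap_fd, ret;

	snprintf(path, sizeof(path), "/dev/dma_heap/%s", heap);
	heap_fd = open(path, O_RDONLY | O_CLOEXEC);
	if (heap_fd < 0)
		return -1;
	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
	close(heap_fd);
	if (ret < 0)
		return -1;
	/* a dma-buf fd, attachable to DRM, V4L2, ... */
	return data.fd;
}

The open question in this thread is exactly the first argument: with
heap names like "system" vs "restricted" vs vendor-specific ones,
something still has to decide which name to pass on a given platform.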
Hi Jens,

On Fri, 30 Aug 2024 at 08:04, Jens Wiklander <jens.wiklander@linaro.org> wrote:
> This patch set is based on top of Yong Wu's restricted heap patch set [1].
> It's also a continuation of Olivier's Add dma-buf secure-heap patch set [2].
>
> The Linaro restricted heap uses genalloc in the kernel to manage the heap
> carveout. This is a difference from the Mediatek restricted heap, which
> relies on the secure world to manage the carveout.

Calling this the 'genalloc heap' would be much clearer.

Cheers,
Daniel
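For readers unfamiliar with the mechanism being named here: the
genalloc-managed carveout from the cover letter boils down to a gen_pool
over the reserved-memory region. Below is a rough kernel-side sketch, with
the base and size assumed to come from the linaro,restricted-heap node;
the function names are illustrative, not the actual patch code.

#include <linux/genalloc.h>
#include <linux/mm.h>

static struct gen_pool *rheap_pool;

/* Set up the pool over the carveout described by the DT node. */
static int rheap_init(unsigned long base, size_t size)
{
	/* PAGE_SHIFT minimum allocation order, any NUMA node (-1) */
	rheap_pool = gen_pool_create(PAGE_SHIFT, -1);
	if (!rheap_pool)
		return -ENOMEM;
	/* hand the whole carveout to the pool; the kernel never maps it */
	return gen_pool_add(rheap_pool, base, size, -1);
}

static unsigned long rheap_alloc(size_t size)
{
	return gen_pool_alloc(rheap_pool, size);	/* 0 on failure */
}

static void rheap_free(unsigned long addr, size_t size)
{
	gen_pool_free(rheap_pool, addr, size);
}

This is the difference from the Mediatek series: the allocator state lives
in the kernel, and the secure world only firewalls the region, rather than
the secure world owning the allocator.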
On Tue, Sep 24, 2024 at 01:13:18PM GMT, Andrew Davis wrote:
> On 9/23/24 1:33 AM, Dmitry Baryshkov wrote:
> > [...]
> >
> > In your case you already have a TEE device; you can use it to allocate
> > and export dma-bufs, which then get imported by the V4L and DRM drivers.
>
> This goes to the heart of why we have dma-heaps in the first place.
> We don't want to burden userspace with having to figure out the right
> place to get a dma-buf for a given use-case on a given hardware.
> That would be very non-portable, and fail at the core purpose of
> a kernel: to abstract hardware specifics away.

Unfortunately, all proposals to use dma-buf heaps were moving in the
described direction: let the app select (somehow) from a platform- and
vendor-specific list of dma-buf heaps. In the kernel we at least know
the platform on which the system is running. Userspace generally doesn't
(and shouldn't). As such, it seems better to me to keep the knowledge in
the kernel and let userspace do its job by calling into existing
device drivers.

> Worse, the actual interface for dma-buf exporting changes from
> framework to framework (getting a dma-buf from DRM is different
> than from V4L, and there would be yet another API for TEE, etc.).

But if the app is working with a particular subsystem, then it already
talks its language. Allocating a dma-buf is just another part of the
interface, which the app already has to support.

> Most subsystems don't need an allocator; they work just fine
> simply being dma-buf importers. A recent example is the
> IIO subsystem[0], for which some early postings included an
> allocator, but in the end all that was needed was to consume
> buffers.
>
> For devices that don't actually contain memory there is no
> reason to be an exporter. What most want is just to consume
> normal system memory, or system memory with some constraints
> (e.g. contiguous, coherent, restricted, etc.).

... secure, accessible only to the camera and video encoder, ... or
accessible only to the video decoder and the display unit. Who specifies
those restrictions? Can we express them in a platform-neutral way?

> > I have a feeling (I might be completely wrong here) that by using
> > generic dma-buf heaps we can easily end up in a situation where
> > userspace depends heavily on the actual platform being used (to map
> > the platform to heap names). I think we should instead depend on the
> > existing devices (e.g. if there is a TEE device, use an IOCTL to
> > allocate a secured DMA-BUF from it; otherwise check for a QTEE device;
> > otherwise check for some other vendor device).
> >
> > The mental experiment to check whether the API is correct is really
> > simple: can you use exactly the same rootfs on several devices without
> > any additional tuning (e.g. your QEMU, HiKey, a Mediatek board, a
> > Qualcomm laptop, etc.)?
>
> This is a great north star to follow. And exactly the reason we should
> *not* be exposing device-specific constraints to userspace. The
> constraints change based on the platform, so userspace would have to
> also pick a different set of constraints for each platform.

Great, I totally agree here.

> Userspace knows which subsystems it will attach a buffer to, and the
> kernel knows what constraints those devices have on a given platform.
> The ideal case is then to allocate from the one exporter, attach to
> various devices, and have the constraints solved at map time by the
> exporter based on the set of attached devices.
>
> For example, on one platform the display needs contiguous buffers,
> but on a different platform the display can scatter-gather. So
> what heap should our generic application allocate from when it
> wants a buffer consumable by the display, CMA or System?
> The answer *should* be: always use the generic exporter, and that
> exporter then picks the right backing type based on the platform.

The display can support scatter-gather, the GPU needs a bigger stride for
this particular format, and the video encoder/decoder cannot support SG.
Which set of constraints and which buffer size should the generic
exporter select?

> Userspace shouldn't be dealing with any of these constraints
> (looking back, adding the CMA heap was probably incorrect,
> and the System heap should have been the only one. The idea back
> then was that a userspace helper would show up to do the constraint
> solving and pick the right heap. That has yet to materialize, and
> folks are still just hardcoding which heap to use..).
>
> Same for this restricted heap: I'd like to explore whether we can
> enhance the System heap such that when attached to the TEE framework,
> the backing memory is either made restricted by fire-walling, or
> allocated from a TEE carveout (based on platform).

Firewalling from which devices? Or rather, allowing access from which
devices? Is it possible to specify that somehow?

> This will mean more inter-subsystem coordination, but we can
> iterate on these in-kernel interfaces. We cannot iterate on
> userspace interfaces; those have to be correct the first time.
>
> Andrew
>
> [0] https://www.kernel.org/doc/html/next/iio/iio_dmabuf_api.html
> > [...]
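For context on the map-time constraint solving debated above: the
attach/map split already exists in the dma-buf core, so an exporter can
in principle defer placement until it has seen all attached devices. A
minimal importer-side sketch using the in-tree kernel API (error
unwinding kept, device-specific use of the sg_table omitted):

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/err.h>

static struct sg_table *import_restricted_buf(int fd, struct device *dev)
{
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	dmabuf = dma_buf_get(fd);	/* takes a reference on the buffer */
	if (IS_ERR(dmabuf))
		return ERR_CAST(dmabuf);

	/* the exporter learns about this device's constraints here */
	attach = dma_buf_attach(dmabuf, dev);
	if (IS_ERR(attach)) {
		dma_buf_put(dmabuf);
		return ERR_CAST(attach);
	}

	/* placement must satisfy every attached device by this point */
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_detach(dmabuf, attach);
		dma_buf_put(dmabuf);
	}
	return sgt;
}

Dmitry's objection is that buffer size and layout (stride, etc.) are
fixed at allocation time, before all attachments exist, which is why the
"solve it at map time" answer does not cover every case.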
On Mon, Sep 23, 2024 at 09:33:29AM +0300, Dmitry Baryshkov wrote:
> Hi,
>
> On Fri, Aug 30, 2024 at 09:03:47AM GMT, Jens Wiklander wrote:
> > [...]
>
> I know I have raised the same question during LPC (in connection with
> Qualcomm's dma-heap implementation). Is there any reason why we are
> using generic heaps instead of allocating the dma-bufs on the device
> side?
>
> In your case you already have a TEE device; you can use it to allocate
> and export dma-bufs, which then get imported by the V4L and DRM drivers.
>
> I have a feeling (I might be completely wrong here) that by using
> generic dma-buf heaps we can easily end up in a situation where
> userspace depends heavily on the actual platform being used (to map
> the platform to heap names). I think we should instead depend on the
> existing devices (e.g. if there is a TEE device, use an IOCTL to
> allocate a secured DMA-BUF from it; otherwise check for a QTEE device;
> otherwise check for some other vendor device).

That makes sense; it's similar to what we do with TEE_IOC_SHM_ALLOC,
where we allocate from a carveout reserved for shared memory with the
secure world. It was even based on dma-buf until commit dfd0743f1d9e
("tee: handle lookup of shm with reference count 0").

We should use a new TEE_IOC_*_ALLOC for these new dma-bufs to avoid
confusion and to have more freedom when designing the interface.

> The mental experiment to check whether the API is correct is really
> simple: can you use exactly the same rootfs on several devices without
> any additional tuning (e.g. your QEMU, HiKey, a Mediatek board, a
> Qualcomm laptop, etc.)?

No, I don't think so.

> > [...]
>
> - Can we test that the system doesn't crash badly if the user provides
>   non-secured memory to the users which expect a secure buffer?
>
> - At the same time, the corresponding entities shouldn't decode data to
>   buffers accessible to the rest of the system.

I'll add a few tests along those lines.
Thanks,
Jens

> [...]
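The TEE_IOC_SHM_ALLOC precedent Jens refers to is already part of the TEE
UAPI: the ioctl returns a file descriptor for the shared-memory object,
which the caller then mmap()s. A new restricted-buffer ioctl could keep
the same shape but hand back a dma-buf fd instead. A minimal sketch of
the existing call:

#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/tee.h>

/* Allocate TEE shared memory and map it; *shm_fd receives the fd. */
void *tee_shm_alloc(int tee_fd, size_t size, int *shm_fd)
{
	struct tee_ioctl_shm_alloc_data data = { .size = size };

	*shm_fd = ioctl(tee_fd, TEE_IOC_SHM_ALLOC, &data);
	if (*shm_fd < 0)
		return MAP_FAILED;
	/* data.size may have been rounded up by the driver */
	return mmap(NULL, data.size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    *shm_fd, 0);
}

For a restricted buffer the mmap() step would of course be absent; the
whole point is that the CPU cannot map the memory.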
Hi,

On Tue, Sep 24, 2024 at 01:13:18PM -0500, Andrew Davis wrote:
> [...]
> For example, on one platform the display needs contiguous buffers,
> but on a different platform the display can scatter-gather. So
> what heap should our generic application allocate from when it
> wants a buffer consumable by the display, CMA or System?
> The answer *should* be: always use the generic exporter, and that
> exporter then picks the right backing type based on the platform.
>
> Userspace shouldn't be dealing with any of these constraints
> [...]
>
> Same for this restricted heap: I'd like to explore whether we can
> enhance the System heap such that when attached to the TEE framework,
> the backing memory is either made restricted by fire-walling, or
> allocated from a TEE carveout (based on platform).

So the exporter (you mentioned the System heap) will somehow know how to
interact with the TEE subsystem to allocate suitable memory? I suppose
the memory could come from a static carveout, from a dynamic
restricted-memory allocation, or from turning normal memory into
restricted memory (fire-walling), depending on the platform.

> This will mean more inter-subsystem coordination, but we can
> iterate on these in-kernel interfaces. We cannot iterate on
> userspace interfaces; those have to be correct the first time.

Good point, this approach should make it easier for userspace.

Thanks,
Jens

> [...]
Am 25.09.24 um 01:05 schrieb Dmitry Baryshkov:
> On Tue, Sep 24, 2024 at 01:13:18PM GMT, Andrew Davis wrote:
>> On 9/23/24 1:33 AM, Dmitry Baryshkov wrote:
>>> [...]
>>
>> This goes to the heart of why we have dma-heaps in the first place.
>> We don't want to burden userspace with having to figure out the right
>> place to get a dma-buf for a given use-case on a given hardware.
>> That would be very non-portable, and fail at the core purpose of
>> a kernel: to abstract hardware specifics away.
> Unfortunately, all proposals to use dma-buf heaps were moving in the
> described direction: let the app select (somehow) from a platform- and
> vendor-specific list of dma-buf heaps. In the kernel we at least know
> the platform on which the system is running. Userspace generally
> doesn't (and shouldn't). As such, it seems better to me to keep the
> knowledge in the kernel and let userspace do its job by calling into
> existing device drivers.

The idea of letting the kernel fully abstract away the complexity of
inter-device data exchange is a completely failed design. There has been
plenty of evidence for that over the years.

Because of this it's an intentional design decision in DMA-buf that
userspace, and *not* the kernel, decides where and what to allocate from.

What the kernel should provide is the necessary information about what
type of memory a device can work with and whether certain memory is
accessible or not. This is the part which is unfortunately still not
well defined nor implemented at the moment.

Apart from that, there are a whole bunch of intentional design decisions
which should prevent developers from moving allocation decisions inside
the kernel. For example, DMA-buf doesn't know what the content of a
buffer is (except for its total size) nor which use cases a buffer will
be used with.

So the question whether memory should be exposed through DMA-heaps or a
driver-specific allocator is not a question of abstraction, but rather
one of the physical location and accessibility of the memory.

If the memory is attached to some physical device, e.g. local memory on
a dGPU, an FPGA PCIe BAR, RDMA, camera-internal memory, etc., then
expose the memory through a device-specific allocator.

If the memory is not physically attached to any device, but rather just
memory attached to the CPU or a system-wide memory controller, then
expose the memory as a DMA-heap with specific requirements (e.g.
certain-sized pages, contiguous, restricted, encrypted, ...).
>> Worse, the actual interface for dma-buf exporting changes from
>> framework to framework (getting a dma-buf from DRM is different
>> than from V4L, and there would be yet another API for TEE, etc.).
> But if the app is working with a particular subsystem, then it already
> talks its language. Allocating a dma-buf is just another part of the
> interface, which the app already has to support.

Correct, yes.

>> Most subsystems don't need an allocator; they work just fine
>> simply being dma-buf importers. A recent example is the
>> IIO subsystem[0], for which some early postings included an
>> allocator, but in the end all that was needed was to consume
>> buffers.
>>
>> For devices that don't actually contain memory there is no
>> reason to be an exporter. What most want is just to consume
>> normal system memory, or system memory with some constraints
>> (e.g. contiguous, coherent, restricted, etc.).
> ... secure, accessible only to the camera and video encoder, ... or
> accessible only to the video decoder and the display unit. Who
> specifies those restrictions? Can we express them in a platform-neutral
> way?

I once created a prototype for letting kernel drivers expose hints about
which DMA-heap they want to work with. The problem is that there are
tons of different use cases, and you need specific allocations for
specific use cases.

>>> The mental experiment to check whether the API is correct is really
>>> simple: can you use exactly the same rootfs on several devices
>>> without any additional tuning (e.g. your QEMU, HiKey, a Mediatek
>>> board, a Qualcomm laptop, etc.)?
>>>
>> This is a great north star to follow. And exactly the reason we should
>> *not* be exposing device-specific constraints to userspace. The
>> constraints change based on the platform, so userspace would have to
>> also pick a different set of constraints for each platform.
> Great, I totally agree here.

That sounds reasonable, but it depends on the restriction. For example,
a lot of GPUs can work with any imported memory as long as it is DMA
accessible for them, but they can scan out a picture to display on a
monitor only from their local memory.

>> Userspace knows which subsystems it will attach a buffer to, and the
>> kernel knows what constraints those devices have on a given platform.
>> The ideal case is then to allocate from the one exporter, attach to
>> various devices, and have the constraints solved at map time by the
>> exporter based on the set of attached devices.

That approach doesn't work. We have already tried stuff like that
multiple times.

>> For example, on one platform the display needs contiguous buffers,
>> but on a different platform the display can scatter-gather. So
>> what heap should our generic application allocate from when it
>> wants a buffer consumable by the display, CMA or System?
>> The answer *should* be: always use the generic exporter, and that
>> exporter then picks the right backing type based on the platform.
> The display can support scatter-gather, the GPU needs a bigger stride
> for this particular format, and the video encoder/decoder cannot
> support SG. Which set of constraints and which buffer size should the
> generic exporter select?

Yeah, exactly that's the problem. The kernel doesn't know all the
necessary information to make an informed allocation decision. Sometimes
you even have to insert format-conversion steps, and doing that
transparently for userspace inside the kernel is really a no-go from the
design side.

>> Userspace shouldn't be dealing with any of these constraints
>> (looking back, adding the CMA heap was probably incorrect,
>> and the System heap should have been the only one. The idea back
>> then was that a userspace helper would show up to do the constraint
>> solving and pick the right heap. That has yet to materialize, and
>> folks are still just hardcoding which heap to use..).
>>
>> Same for this restricted heap: I'd like to explore whether we can
>> enhance the System heap such that when attached to the TEE framework,
>> the backing memory is either made restricted by fire-walling, or
>> allocated from a TEE carveout (based on platform).

Clearly a NAK from my side to that design.

Regards,
Christian.

> Firewalling from which devices? Or rather, allowing access from which
> devices? Is it possible to specify that somehow?
>
>> This will mean more inter-subsystem coordination, but we can
>> iterate on these in-kernel interfaces. We cannot iterate on
>> userspace interfaces; those have to be correct the first time.
>>
>> Andrew
>>
>> [0] https://www.kernel.org/doc/html/next/iio/iio_dmabuf_api.html
>>
>>>> [...]
On Wed, Sep 25, 2024 at 09:15:04AM GMT, Jens Wiklander wrote:
> On Mon, Sep 23, 2024 at 09:33:29AM +0300, Dmitry Baryshkov wrote:
> > [...]
> >
> > I have a feeling (I might be completely wrong here) that by using
> > generic dma-buf heaps we can easily end up in a situation where
> > userspace depends heavily on the actual platform being used (to map
> > the platform to heap names). I think we should instead depend on the
> > existing devices (e.g. if there is a TEE device, use an IOCTL to
> > allocate a secured DMA-BUF from it; otherwise check for a QTEE
> > device; otherwise check for some other vendor device).
>
> That makes sense; it's similar to what we do with TEE_IOC_SHM_ALLOC,
> where we allocate from a carveout reserved for shared memory with the
> secure world. It was even based on dma-buf until commit dfd0743f1d9e
> ("tee: handle lookup of shm with reference count 0").
>
> We should use a new TEE_IOC_*_ALLOC for these new dma-bufs to avoid
> confusion and to have more freedom when designing the interface.
>
> > The mental experiment to check whether the API is correct is really
> > simple: can you use exactly the same rootfs on several devices
> > without any additional tuning (e.g. your QEMU, HiKey, a Mediatek
> > board, a Qualcomm laptop, etc.)?
>
> No, I don't think so.

Then the API needs to be modified. Or the userspace needs to be
modified, in a way similar to Vulkan / OpenCL / glvnd / VA / VDPAU:
platform-specific backends coexisting on a single rootfs.

It is more or less fine to have a platform-specific rootfs when we are
talking about embedded, resource-limited devices. But for end-user
devices we must be able to install a generic distro with no
device-specific packages being selected.

> > > [...]
--
With best wishes
Dmitry
On Wed, Sep 25, 2024 at 10:51:15AM GMT, Christian König wrote:
> Am 25.09.24 um 01:05 schrieb Dmitry Baryshkov:
> > [...]
>
> The idea of letting the kernel fully abstract away the complexity of
> inter-device data exchange is a completely failed design. There has
> been plenty of evidence for that over the years.
>
> Because of this it's an intentional design decision in DMA-buf that
> userspace, and *not* the kernel, decides where and what to allocate
> from.

Hmm, ok.

> What the kernel should provide is the necessary information about what
> type of memory a device can work with and whether certain memory is
> accessible or not. This is the part which is unfortunately still not
> well defined nor implemented at the moment.
>
> Apart from that, there are a whole bunch of intentional design
> decisions which should prevent developers from moving allocation
> decisions inside the kernel. For example, DMA-buf doesn't know what the
> content of a buffer is (except for its total size) nor which use cases
> a buffer will be used with.
>
> So the question whether memory should be exposed through DMA-heaps or
> a driver-specific allocator is not a question of abstraction, but
> rather one of the physical location and accessibility of the memory.
>
> If the memory is attached to some physical device, e.g. local memory
> on a dGPU, an FPGA PCIe BAR, RDMA, camera-internal memory, etc., then
> expose the memory through a device-specific allocator.

So, for embedded systems with unified memory all buffers (maybe except
PCIe BARs) should come from DMA-BUF heaps, correct?
> If the memory is not physically attached to any device, but rather
> just memory attached to the CPU or a system-wide memory controller,
> then expose the memory as a DMA-heap with specific requirements (e.g.
> certain-sized pages, contiguous, restricted, encrypted, ...).

Is encrypted / protected a part of the allocation contract, or should it
be enforced separately via a call to TEE / SCM / anything else?
On Wed, Sep 25, 2024 at 1:41 PM Dmitry Baryshkov
<dmitry.baryshkov@linaro.org> wrote:
>
> On Wed, Sep 25, 2024 at 09:15:04AM GMT, Jens Wiklander wrote:
> > [...]
> >
> > > The mental experiment to check whether the API is correct is really
> > > simple: can you use exactly the same rootfs on several devices
> > > without any additional tuning (e.g. your QEMU, HiKey, a Mediatek
> > > board, a Qualcomm laptop, etc.)?
> >
> > No, I don't think so.
>
> Then the API needs to be modified.

I don't think that is enough. I would have answered no even without the
secure data path in mind. Communication with the secure world is still
too fragmented.

> Or the userspace needs to be modified, in a way similar to Vulkan /
> OpenCL / glvnd / VA / VDPAU: platform-specific backends coexisting on
> a single rootfs.

Yes, that's likely a needed step. But the first step is to have
something to relate to upstream; without that there's only an
ever-changing downstream ABI.

> It is more or less fine to have a platform-specific rootfs when we are
> talking about embedded, resource-limited devices. But for end-user
> devices we must be able to install a generic distro with no
> device-specific packages being selected.

I'm not sure we can solve that problem here, but we should of course not
make matters worse. In the restricted heap patch set which this patch
set builds on, we define a way to allocate memory from a restricted
heap, but we leave the problem of finding the right heap to userspace.
Thanks,
Jens

> [...]
Am 25.09.24 um 14:51 schrieb Dmitry Baryshkov: > On Wed, Sep 25, 2024 at 10:51:15AM GMT, Christian König wrote: >> Am 25.09.24 um 01:05 schrieb Dmitry Baryshkov: >>> On Tue, Sep 24, 2024 at 01:13:18PM GMT, Andrew Davis wrote: >>>> On 9/23/24 1:33 AM, Dmitry Baryshkov wrote: >>>>> Hi, >>>>> >>>>> On Fri, Aug 30, 2024 at 09:03:47AM GMT, Jens Wiklander wrote: >>>>>> Hi, >>>>>> >>>>>> This patch set is based on top of Yong Wu's restricted heap patch set [1]. >>>>>> It's also a continuation on Olivier's Add dma-buf secure-heap patch set [2]. >>>>>> >>>>>> The Linaro restricted heap uses genalloc in the kernel to manage the heap >>>>>> carvout. This is a difference from the Mediatek restricted heap which >>>>>> relies on the secure world to manage the carveout. >>>>>> >>>>>> I've tried to adress the comments on [2], but [1] introduces changes so I'm >>>>>> afraid I've had to skip some comments. >>>>> I know I have raised the same question during LPC (in connection to >>>>> Qualcomm's dma-heap implementation). Is there any reason why we are >>>>> using generic heaps instead of allocating the dma-bufs on the device >>>>> side? >>>>> >>>>> In your case you already have TEE device, you can use it to allocate and >>>>> export dma-bufs, which then get imported by the V4L and DRM drivers. >>>>> >>>> This goes to the heart of why we have dma-heaps in the first place. >>>> We don't want to burden userspace with having to figure out the right >>>> place to get a dma-buf for a given use-case on a given hardware. >>>> That would be very non-portable, and fail at the core purpose of >>>> a kernel: to abstract hardware specifics away. >>> Unfortunately all proposals to use dma-buf heaps were moving in the >>> described direction: let app select (somehow) from a platform- and >>> vendor- specific list of dma-buf heaps. In the kernel we at least know >>> the platform on which the system is running. Userspace generally doesn't >>> (and shouldn't). As such, it seems better to me to keep the knowledge in >>> the kernel and allow userspace do its job by calling into existing >>> device drivers. >> The idea of letting the kernel fully abstract away the complexity of inter >> device data exchange is a completely failed design. There has been plenty of >> evidence for that over the years. >> >> Because of this in DMA-buf it's an intentional design decision that >> userspace and *not* the kernel decides where and what to allocate from. > Hmm, ok. > >> What the kernel should provide are the necessary information what type of >> memory a device can work with and if certain memory is accessible or not. >> This is the part which is unfortunately still not well defined nor >> implemented at the moment. >> >> Apart from that there are a whole bunch of intentional design decision which >> should prevent developers to move allocation decision inside the kernel. For >> example DMA-buf doesn't know what the content of the buffer is (except for >> it's total size) and which use cases a buffer will be used with. >> >> So the question if memory should be exposed through DMA-heaps or a driver >> specific allocator is not a question of abstraction, but rather one of the >> physical location and accessibility of the memory. >> >> If the memory is attached to any physical device, e.g. local memory on a >> dGPU, FPGA PCIe BAR, RDMA, camera internal memory etc, then expose the >> memory as device specific allocator. 
> So, for embedded systems with unified memory all buffers (maybe except > PCIe BARs) should come from DMA-BUF heaps, correct? From what I know that is correct, yes. The question is really whether it will stay this way. Neural accelerators look a lot like stripped-down FPGAs these days, and the benefit of local memory for GPUs has been known for decades. Could be that designs with local specialized memory see a revival any time, who knows. >> If the memory is not physically attached to any device, but rather just >> memory attached to the CPU or a system wide memory controller then expose >> the memory as DMA-heap with specific requirements (e.g. certain sized pages, >> contiguous, restricted, encrypted, ...). > Is encrypted / protected a part of the allocation contract or should it > be enforced separately via a call to TEE / SCM / anything else? Well, that is a really good question I can't fully answer either. From what I know now I would say it depends on the design. For the content encryption used by AMD and some other vendors it's clearly a data property which isn't related in any way to something the kernel deals with. When it's not encryption but rather some special protected area of memory which only certain devices have DMA access to, then having a separate heap might make sense for that. As a rule of thumb I would say it's the kernel's responsibility to manage the physical interconnection between two devices, e.g. come up with DMA addresses which work. And it's userspace's responsibility to negotiate the actual data format of the bytes transferred, e.g. things like width, height, stride, pixel format, tiling, encryption, etc. The tricky part is all those special cases, e.g. that a GPU can only scanout from local memory, that atomic operations work only on system memory, that devices might have different coherency constraints, etc. Nobody has really figured out all the requirements, and we basically just go from one use case to another. Regards, Christian.
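To make the heap-versus-device-allocator debate concrete, this is roughly what allocation from a named DMA-BUF heap looks like in userspace today. The heap name is exactly the platform-specific detail the thread keeps circling around: "system" is the generic one, while a restricted heap would carry whatever name the platform exposes. A sketch with error handling trimmed:

    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/dma-heap.h>

    /* Allocate a dma-buf from the heap device at the given path and return
     * the dma-buf fd, or -1 on failure. The caller must know the heap name,
     * which is where platform knowledge leaks into userspace. */
    static int heap_alloc(const char *heap_path, size_t len)
    {
            struct dma_heap_allocation_data data = {
                    .len = len,
                    .fd_flags = O_RDWR | O_CLOEXEC,
            };
            int heap_fd = open(heap_path, O_RDONLY | O_CLOEXEC);
            int ret;

            if (heap_fd < 0)
                    return -1;
            ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
            close(heap_fd);
            return ret < 0 ? -1 : (int)data.fd;
    }

    /* e.g. heap_alloc("/dev/dma_heap/system", 4096) */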
On 9/25/24 19:31, Christian König wrote: > Am 25.09.24 um 14:51 schrieb Dmitry Baryshkov: >> On Wed, Sep 25, 2024 at 10:51:15AM GMT, Christian König wrote: >>> Am 25.09.24 um 01:05 schrieb Dmitry Baryshkov: >>>> On Tue, Sep 24, 2024 at 01:13:18PM GMT, Andrew Davis wrote: >>>>> On 9/23/24 1:33 AM, Dmitry Baryshkov wrote: >>>>>> Hi, >>>>>> >>>>>> On Fri, Aug 30, 2024 at 09:03:47AM GMT, Jens Wiklander wrote: >>>>>>> Hi, >>>>>>> >>>>>>> This patch set is based on top of Yong Wu's restricted heap patch set [1]. >>>>>>> It's also a continuation on Olivier's Add dma-buf secure-heap patch set [2]. >>>>>>> >>>>>>> The Linaro restricted heap uses genalloc in the kernel to manage the heap >>>>>>> carvout. This is a difference from the Mediatek restricted heap which >>>>>>> relies on the secure world to manage the carveout. >>>>>>> >>>>>>> I've tried to adress the comments on [2], but [1] introduces changes so I'm >>>>>>> afraid I've had to skip some comments. >>>>>> I know I have raised the same question during LPC (in connection to >>>>>> Qualcomm's dma-heap implementation). Is there any reason why we are >>>>>> using generic heaps instead of allocating the dma-bufs on the device >>>>>> side? >>>>>> >>>>>> In your case you already have TEE device, you can use it to allocate and >>>>>> export dma-bufs, which then get imported by the V4L and DRM drivers. >>>>>> >>>>> This goes to the heart of why we have dma-heaps in the first place. >>>>> We don't want to burden userspace with having to figure out the right >>>>> place to get a dma-buf for a given use-case on a given hardware. >>>>> That would be very non-portable, and fail at the core purpose of >>>>> a kernel: to abstract hardware specifics away. >>>> Unfortunately all proposals to use dma-buf heaps were moving in the >>>> described direction: let app select (somehow) from a platform- and >>>> vendor- specific list of dma-buf heaps. In the kernel we at least know >>>> the platform on which the system is running. Userspace generally doesn't >>>> (and shouldn't). As such, it seems better to me to keep the knowledge in >>>> the kernel and allow userspace do its job by calling into existing >>>> device drivers. >>> The idea of letting the kernel fully abstract away the complexity of inter >>> device data exchange is a completely failed design. There has been plenty of >>> evidence for that over the years. >>> >>> Because of this in DMA-buf it's an intentional design decision that >>> userspace and *not* the kernel decides where and what to allocate from. >> Hmm, ok. >> >>> What the kernel should provide are the necessary information what type of >>> memory a device can work with and if certain memory is accessible or not. >>> This is the part which is unfortunately still not well defined nor >>> implemented at the moment. >>> >>> Apart from that there are a whole bunch of intentional design decision which >>> should prevent developers to move allocation decision inside the kernel. For >>> example DMA-buf doesn't know what the content of the buffer is (except for >>> it's total size) and which use cases a buffer will be used with. >>> >>> So the question if memory should be exposed through DMA-heaps or a driver >>> specific allocator is not a question of abstraction, but rather one of the >>> physical location and accessibility of the memory. >>> >>> If the memory is attached to any physical device, e.g. local memory on a >>> dGPU, FPGA PCIe BAR, RDMA, camera internal memory etc, then expose the >>> memory as device specific allocator. 
>> So, for embedded systems with unified memory all buffers (maybe except >> PCIe BARs) should come from DMA-BUF heaps, correct? > > From what I know that is correct, yes. Question is really if that will > stay this way. > > Neural accelerators look a lot stripped down FPGAs these days and the > benefit of local memory for GPUs is known for decades. > > Could be that designs with local specialized memory see a revival any > time, who knows. > >>> If the memory is not physically attached to any device, but rather just >>> memory attached to the CPU or a system wide memory controller then expose >>> the memory as DMA-heap with specific requirements (e.g. certain sized pages, >>> contiguous, restricted, encrypted, ...). >> Is encrypted / protected a part of the allocation contract or should it >> be enforced separately via a call to TEE / SCM / anything else? > > Well that is a really good question I can't fully answer either. From > what I know now I would say it depends on the design. > IMHO, I think Dmitry's proposal to rather allow TEE device being allocator and exporter of DMA-bufs related to restricted memory makes sense to me. Since it's really the TEE implementation (OP-TEE, AMD-TEE, TS-TEE or future QTEE) which sets up the restrictions on a particular piece of allocated memory. AFAIK, that happens after the DMA-buf gets allocated and then user-space calls into TEE to setup which media pipeline is going to access that particular DMA-buf. It can also be a static contract depending on a particular platform design. As Jens noted in the other thread, we already manage shared memory allocations (from a static carve-out or dynamically mapped) for communications among Linux and TEE that were based on DMA-bufs earlier but since we didn't required them to be shared with other devices, so we rather switched to anonymous memory. From user-space perspective, it's cleaner to use TEE device IOCTLs for DMA-buf allocations since it already know to which underlying TEE implementation it's communicating with rather than first figuring out which DMA heap to use for allocation and then communicating with TEE implementation. -Sumit
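A rough kernel-side sketch of the direction Sumit describes, i.e. the TEE subsystem allocating restricted memory and exporting it as a dma-buf. Everything named tee_rstmem_* below is invented for illustration; the real dma_buf_ops would have to map the restricted pages for attached devices while refusing any CPU mapping.

    #include <linux/dma-buf.h>
    #include <linux/err.h>
    #include <linux/tee_drv.h>

    /* Hypothetical ioctl backend: carve a restricted buffer out of a pool
     * managed by the TEE driver and hand it to userspace as a dma-buf fd. */
    static int tee_ioctl_rstmem_alloc(struct tee_context *ctx, size_t size)
    {
            DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
            struct tee_rstmem *rstmem;                  /* invented type */
            struct dma_buf *dmabuf;

            rstmem = tee_rstmem_pool_alloc(ctx, size);  /* invented helper */
            if (IS_ERR(rstmem))
                    return PTR_ERR(rstmem);

            exp_info.ops = &tee_rstmem_dma_buf_ops;     /* invented ops table */
            exp_info.size = size;
            exp_info.flags = O_RDWR | O_CLOEXEC;
            exp_info.priv = rstmem;

            dmabuf = dma_buf_export(&exp_info);
            if (IS_ERR(dmabuf)) {
                    tee_rstmem_pool_free(rstmem);       /* invented helper */
                    return PTR_ERR(dmabuf);
            }

            /* The returned fd can then be imported by V4L2 or DRM as usual */
            return dma_buf_fd(dmabuf, O_CLOEXEC);
    }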
[Resend in plain text format as my earlier message was rejected by some mailing lists] On Thu, 26 Sept 2024 at 19:17, Sumit Garg <sumit.garg@linaro.org> wrote: > > On 9/25/24 19:31, Christian König wrote: > > Am 25.09.24 um 14:51 schrieb Dmitry Baryshkov: > > On Wed, Sep 25, 2024 at 10:51:15AM GMT, Christian König wrote: > > Am 25.09.24 um 01:05 schrieb Dmitry Baryshkov: > > On Tue, Sep 24, 2024 at 01:13:18PM GMT, Andrew Davis wrote: > > On 9/23/24 1:33 AM, Dmitry Baryshkov wrote: > > Hi, > > On Fri, Aug 30, 2024 at 09:03:47AM GMT, Jens Wiklander wrote: > > Hi, > > This patch set is based on top of Yong Wu's restricted heap patch set [1]. > It's also a continuation on Olivier's Add dma-buf secure-heap patch set [2]. > > The Linaro restricted heap uses genalloc in the kernel to manage the heap > carvout. This is a difference from the Mediatek restricted heap which > relies on the secure world to manage the carveout. > > I've tried to adress the comments on [2], but [1] introduces changes so I'm > afraid I've had to skip some comments. > > I know I have raised the same question during LPC (in connection to > Qualcomm's dma-heap implementation). Is there any reason why we are > using generic heaps instead of allocating the dma-bufs on the device > side? > > In your case you already have TEE device, you can use it to allocate and > export dma-bufs, which then get imported by the V4L and DRM drivers. > > This goes to the heart of why we have dma-heaps in the first place. > We don't want to burden userspace with having to figure out the right > place to get a dma-buf for a given use-case on a given hardware. > That would be very non-portable, and fail at the core purpose of > a kernel: to abstract hardware specifics away. > > Unfortunately all proposals to use dma-buf heaps were moving in the > described direction: let app select (somehow) from a platform- and > vendor- specific list of dma-buf heaps. In the kernel we at least know > the platform on which the system is running. Userspace generally doesn't > (and shouldn't). As such, it seems better to me to keep the knowledge in > the kernel and allow userspace do its job by calling into existing > device drivers. > > The idea of letting the kernel fully abstract away the complexity of inter > device data exchange is a completely failed design. There has been plenty of > evidence for that over the years. > > Because of this in DMA-buf it's an intentional design decision that > userspace and *not* the kernel decides where and what to allocate from. > > Hmm, ok. > > What the kernel should provide are the necessary information what type of > memory a device can work with and if certain memory is accessible or not. > This is the part which is unfortunately still not well defined nor > implemented at the moment. > > Apart from that there are a whole bunch of intentional design decision which > should prevent developers to move allocation decision inside the kernel. For > example DMA-buf doesn't know what the content of the buffer is (except for > it's total size) and which use cases a buffer will be used with. > > So the question if memory should be exposed through DMA-heaps or a driver > specific allocator is not a question of abstraction, but rather one of the > physical location and accessibility of the memory. > > If the memory is attached to any physical device, e.g. local memory on a > dGPU, FPGA PCIe BAR, RDMA, camera internal memory etc, then expose the > memory as device specific allocator. 
> > So, for embedded systems with unified memory all buffers (maybe except > PCIe BARs) should come from DMA-BUF heaps, correct? > > > From what I know that is correct, yes. Question is really if that will stay this way. > > Neural accelerators look a lot stripped down FPGAs these days and the benefit of local memory for GPUs is known for decades. > > Could be that designs with local specialized memory see a revival any time, who knows. > > If the memory is not physically attached to any device, but rather just > memory attached to the CPU or a system wide memory controller then expose > the memory as DMA-heap with specific requirements (e.g. certain sized pages, > contiguous, restricted, encrypted, ...). > > Is encrypted / protected a part of the allocation contract or should it > be enforced separately via a call to TEE / SCM / anything else? > > > Well that is a really good question I can't fully answer either. From what I know now I would say it depends on the design. > IMHO, I think Dmitry's proposal to rather allow the TEE device to be the allocator and exporter of DMA-bufs related to restricted memory makes sense to me, since it's really the TEE implementation (OP-TEE, AMD-TEE, TS-TEE or future QTEE) which sets up the restrictions on a particular piece of allocated memory. AFAIK, that happens after the DMA-buf gets allocated and then user-space calls into TEE to set up which media pipeline is going to access that particular DMA-buf. It can also be a static contract depending on a particular platform design. As Jens noted in the other thread, we already manage shared memory allocations (from a static carve-out or dynamically mapped) for communications among Linux and TEE that were based on DMA-bufs earlier, but since we didn't require them to be shared with other devices, we switched to anonymous memory. From a user-space perspective, it's cleaner to use TEE device IOCTLs for DMA-buf allocations since it already knows which underlying TEE implementation it's communicating with, rather than first figuring out which DMA heap to use for allocation and then communicating with the TEE implementation. -Sumit
Am 26.09.24 um 15:52 schrieb Sumit Garg: > [Resend in plain text format as my earlier message was rejected by > some mailing lists] > > On Thu, 26 Sept 2024 at 19:17, Sumit Garg <sumit.garg@linaro.org> wrote: >> On 9/25/24 19:31, Christian König wrote: >> >> Am 25.09.24 um 14:51 schrieb Dmitry Baryshkov: >> >> On Wed, Sep 25, 2024 at 10:51:15AM GMT, Christian König wrote: >> >> Am 25.09.24 um 01:05 schrieb Dmitry Baryshkov: >> >> On Tue, Sep 24, 2024 at 01:13:18PM GMT, Andrew Davis wrote: >> >> On 9/23/24 1:33 AM, Dmitry Baryshkov wrote: >> >> Hi, >> >> On Fri, Aug 30, 2024 at 09:03:47AM GMT, Jens Wiklander wrote: >> >> Hi, >> >> This patch set is based on top of Yong Wu's restricted heap patch set [1]. >> It's also a continuation on Olivier's Add dma-buf secure-heap patch set [2]. >> >> The Linaro restricted heap uses genalloc in the kernel to manage the heap >> carvout. This is a difference from the Mediatek restricted heap which >> relies on the secure world to manage the carveout. >> >> I've tried to adress the comments on [2], but [1] introduces changes so I'm >> afraid I've had to skip some comments. >> >> I know I have raised the same question during LPC (in connection to >> Qualcomm's dma-heap implementation). Is there any reason why we are >> using generic heaps instead of allocating the dma-bufs on the device >> side? >> >> In your case you already have TEE device, you can use it to allocate and >> export dma-bufs, which then get imported by the V4L and DRM drivers. >> >> This goes to the heart of why we have dma-heaps in the first place. >> We don't want to burden userspace with having to figure out the right >> place to get a dma-buf for a given use-case on a given hardware. >> That would be very non-portable, and fail at the core purpose of >> a kernel: to abstract hardware specifics away. >> >> Unfortunately all proposals to use dma-buf heaps were moving in the >> described direction: let app select (somehow) from a platform- and >> vendor- specific list of dma-buf heaps. In the kernel we at least know >> the platform on which the system is running. Userspace generally doesn't >> (and shouldn't). As such, it seems better to me to keep the knowledge in >> the kernel and allow userspace do its job by calling into existing >> device drivers. >> >> The idea of letting the kernel fully abstract away the complexity of inter >> device data exchange is a completely failed design. There has been plenty of >> evidence for that over the years. >> >> Because of this in DMA-buf it's an intentional design decision that >> userspace and *not* the kernel decides where and what to allocate from. >> >> Hmm, ok. >> >> What the kernel should provide are the necessary information what type of >> memory a device can work with and if certain memory is accessible or not. >> This is the part which is unfortunately still not well defined nor >> implemented at the moment. >> >> Apart from that there are a whole bunch of intentional design decision which >> should prevent developers to move allocation decision inside the kernel. For >> example DMA-buf doesn't know what the content of the buffer is (except for >> it's total size) and which use cases a buffer will be used with. >> >> So the question if memory should be exposed through DMA-heaps or a driver >> specific allocator is not a question of abstraction, but rather one of the >> physical location and accessibility of the memory. >> >> If the memory is attached to any physical device, e.g. 
local memory on a >> dGPU, FPGA PCIe BAR, RDMA, camera internal memory etc, then expose the >> memory as device specific allocator. >> >> So, for embedded systems with unified memory all buffers (maybe except >> PCIe BARs) should come from DMA-BUF heaps, correct? >> >> >> From what I know that is correct, yes. Question is really if that will stay this way. >> >> Neural accelerators look a lot stripped down FPGAs these days and the benefit of local memory for GPUs is known for decades. >> >> Could be that designs with local specialized memory see a revival any time, who knows. >> >> If the memory is not physically attached to any device, but rather just >> memory attached to the CPU or a system wide memory controller then expose >> the memory as DMA-heap with specific requirements (e.g. certain sized pages, >> contiguous, restricted, encrypted, ...). >> >> Is encrypted / protected a part of the allocation contract or should it >> be enforced separately via a call to TEE / SCM / anything else? >> >> >> Well that is a really good question I can't fully answer either. From what I know now I would say it depends on the design. >> > IMHO, I think Dmitry's proposal to rather allow the TEE device to be > the allocator and exporter of DMA-bufs related to restricted memory > makes sense to me. Since it's really the TEE implementation (OP-TEE, > AMD-TEE, TS-TEE or future QTEE) which sets up the restrictions on a > particular piece of allocated memory. AFAIK, that happens after the > DMA-buf gets allocated and then user-space calls into TEE to set up > which media pipeline is going to access that particular DMA-buf. It > can also be a static contract depending on a particular platform > design. > > As Jens noted in the other thread, we already manage shared memory > allocations (from a static carve-out or dynamically mapped) for > communications among Linux and TEE that were based on DMA-bufs earlier > but since we didn't required them to be shared with other devices, so > we rather switched to anonymous memory. > > From user-space perspective, it's cleaner to use TEE device IOCTLs for > DMA-buf allocations since it already knows which underlying TEE > implementation it's communicating with rather than first figuring out > which DMA heap to use for allocation and then communicating with TEE > implementation. +1 I'm not that deeply into the functionality the TEE device IOCTLs expose, so can't judge if what's said above is correct or not. But in general building on top of existing infrastructure and information is a really strong argument for a design. So from my 10 mile high point of view that sounds like the way to go. Regards, Christian. > > -Sumit
On 9/25/24 3:51 AM, Christian König wrote: > Am 25.09.24 um 01:05 schrieb Dmitry Baryshkov: >> On Tue, Sep 24, 2024 at 01:13:18PM GMT, Andrew Davis wrote: >>> On 9/23/24 1:33 AM, Dmitry Baryshkov wrote: >>>> Hi, >>>> >>>> On Fri, Aug 30, 2024 at 09:03:47AM GMT, Jens Wiklander wrote: >>>>> Hi, >>>>> >>>>> This patch set is based on top of Yong Wu's restricted heap patch set [1]. >>>>> It's also a continuation on Olivier's Add dma-buf secure-heap patch set [2]. >>>>> >>>>> The Linaro restricted heap uses genalloc in the kernel to manage the heap >>>>> carvout. This is a difference from the Mediatek restricted heap which >>>>> relies on the secure world to manage the carveout. >>>>> >>>>> I've tried to adress the comments on [2], but [1] introduces changes so I'm >>>>> afraid I've had to skip some comments. >>>> I know I have raised the same question during LPC (in connection to >>>> Qualcomm's dma-heap implementation). Is there any reason why we are >>>> using generic heaps instead of allocating the dma-bufs on the device >>>> side? >>>> >>>> In your case you already have TEE device, you can use it to allocate and >>>> export dma-bufs, which then get imported by the V4L and DRM drivers. >>>> >>> This goes to the heart of why we have dma-heaps in the first place. >>> We don't want to burden userspace with having to figure out the right >>> place to get a dma-buf for a given use-case on a given hardware. >>> That would be very non-portable, and fail at the core purpose of >>> a kernel: to abstract hardware specifics away. >> Unfortunately all proposals to use dma-buf heaps were moving in the >> described direction: let app select (somehow) from a platform- and >> vendor- specific list of dma-buf heaps. In the kernel we at least know >> the platform on which the system is running. Userspace generally doesn't >> (and shouldn't). As such, it seems better to me to keep the knowledge in >> the kernel and allow userspace do its job by calling into existing >> device drivers. > > The idea of letting the kernel fully abstract away the complexity of inter device data exchange is a completely failed design. There has been plenty of evidence for that over the years. > And forcing userspace to figure it all out is also an unsolved problem after all these years. Neither side wants to get their hands dirty, but it is fundamentally a kernel problem to handle these device complexities. > Because of this in DMA-buf it's an intentional design decision that userspace and *not* the kernel decides where and what to allocate from. > DMA-buf attach and map stages are split from each other; to me, this indicates the design intended the actual backing allocation to be chosen based on attached devices at map time, not at allocation time. Meaning userspace doesn't really get to choose the backing storage. > What the kernel should provide are the necessary information what type of memory a device can work with and if certain memory is accessible or not. This is the part which is unfortunately still not well defined nor implemented at the moment. > This sounds like the kernel-provided "hints" solution. Given enough hints, the correct heap would become obvious, and so a complete hint-based solution is one step away from just having the kernel simply make that one correct selection for you. > Apart from that there are a whole bunch of intentional design decision which should prevent developers to move allocation decision inside the kernel. 
For example DMA-buf doesn't know what the content of the buffer is (except for it's total size) and which use cases a buffer will be used with. > > So the question if memory should be exposed through DMA-heaps or a driver specific allocator is not a question of abstraction, but rather one of the physical location and accessibility of the memory. > > If the memory is attached to any physical device, e.g. local memory on a dGPU, FPGA PCIe BAR, RDMA, camera internal memory etc, then expose the memory as device specific allocator. > > If the memory is not physically attached to any device, but rather just memory attached to the CPU or a system wide memory controller then expose the memory as DMA-heap with specific requirements (e.g. certain sized pages, contiguous, restricted, encrypted, ...). > Agree with the first part; some exporters are just giving out CPU memory but with some specific constraint (some subsystems get a pass like V4L2 as it came out before DMA-heaps, but no reason to keep making new allocators for each and every subsystem just because they may have some memory constraint on the consumption side). >>> Worse, the actual interface for dma-buf exporting changes from >>> framework to framework (getting a dma-buf from DRM is different >>> than V4L, and there would be yet another API for TEE, etc..) >> But if the app is working with the particular subsystem, then it already >> talks its language. Allocating a dma-buf is just another part of the >> interface, which the app already has to support. > > Correct, yes. > Importing the buffer to a given subsystem will be subsystem specific, yes, but I don't see how that means allocating the buffer should have to be too. For instance, if I need one of these new "restricted" heaps to pass to V4L2, I'll need to know how to use V4L2, sure, but why should I have to deal with the TEE subsystem to get that buffer as proposed? The common allocator could have handled this. >>> Most subsystem don't need an allocator, they work just fine >>> simply being only dma-bufs importers. Recent example being the >>> IIO subsystem[0], for which some early posting included an >>> allocator, but in the end, all that was needed was to consume >>> buffers. >>> >>> For devices that don't actually contain memory there is no >>> reason to be an exporter. What most want is just to consume >>> normal system memory. Or system memory with some constraints >>> (e.g. contiguous, coherent, restricted, etc..). >> ... secure, accessible only to the camera and video encoder, ... or >> accessible only to the video decoder and the display unit. Who specifies >> those restrictions? Can we express them in a platform-neutral way? > > I once created a prototype for letting kernel drivers expose hints to which DMA-heap they want to work with. > > The problem is that there are tons of different use cases and you need to use specific allocations for specific use cases. > >>>> I have a feeling (I might be completely wrong here) that by using >>>> generic dma-buf heaps we can easily end up in a situation when the >>>> userspace depends heavily on the actual platform being used (to map the >>>> platform to heap names). I think we should instead depend on the >>>> existing devices (e.g. if there is a TEE device, use an IOCTL to >>>> allocate secured DMA BUF from it, otherwise check for QTEE device, >>>> otherwise check for some other vendor device). 
>>>> >>>> The mental experiment to check if the API is correct is really simple: >>>> Can you use exactly the same rootfs on several devices without >>>> any additional tuning (e.g. your QEMU, HiKey, a Mediatek board, Qualcomm >>>> laptop, etc)? >>>> >>> This is a great north star to follow. And exactly the reason we should >>> *not* be exposing device specific constraints to userspace. The constraints >>> change based on the platform. So a userspace would have to also pick >>> a different set of constraints based on each platform. >> Great, I totally agree here. > > That sounds reasonable, but depends on the restriction. > > For example a lot of GPUs can work with any imported memory as long as it is DMA accessible for them, but they can scanout a picture to display on a monitor only from their local memory. > And in this case, attaching the DMA-buf to that monitor for scanout should fail at attach time, if only we had a way to properly communicate constraints between importer and exporter. >>> Userspace knows which subsystems it will attach a buffer, and the >>> kernel knows what constraints those devices have on a given platform. >>> Ideal case is then allocate from the one exporter, attach to various >>> devices, and have the constraints solved at map time by the exporter >>> based on the set of attached devices. > > That approach doesn't work. We have already tried stuff like that multiple times. > Past failures don't guarantee future failure :) >>> For example, on one platform the display needs contiguous buffers, >>> but on a different platform the display can scatter-gather. So >>> what heap should our generic application allocate from when it >>> wants a buffer consumable by the display, CMA or System? >>> Answer *should* be always use the generic exporter, and that >>> exporter then picks the right backing type based on the platform. >> The display can support scatter-gather, the GPU needs bigger stride for >> this particular format and the video encoder decoder can not support SG. >> Which set of constraints and which buffer size should generic exporter >> select? > > Yeah, exactly that's the problem. The kernel doesn't know all the necessary information to make an informed allocation decision. > Why not? For instance, when we attach a buffer for scanout, we provide all the format/stride/etc. info. What we are missing is a backchannel to provide these to the exporter as a set of constraints that those formats impose on a memory area for a given device. > Sometimes you even have to insert format conversion steps and doing that transparently for userspace inside the kernel is really a no-go from the design side. > Agree, we cannot be doing conversions like that in-kernel. We need the format and constraints solved at an earlier step. Andrew >>> Userspace shouldn't be dealing with any of these constraints >>> (looking back, adding the CMA heap was probably incorrect, >>> and the System heap should have been the only one. Idea back >>> then was a userspace helper would show up to do the constraint >>> solving and pick the right heap. That has yet to materialize and >>> folks are still just hardcoding which heap to use..). >>> >>> Same for this restricted heap, I'd like to explore if we can >>> enhance the System heap such that when attached to the TEE framework, >>> the backing memory is either made restricted by fire-walling, >>> or allocating from a TEE carveout (based on platform). > > Clearly NAK from my side to that design. > > Regards, > Christian. > >> Firewalling from which devices? 
Or rather allowing access from which >> devices? Is it possible to specify that somehow? >> >>> This will mean more inter-subsystem coordination, but we can >>> iterate on these in kernel interfaces. We cannot iterate on >>> userspace interfaces, those have to be correct the first time. >>> >>> Andrew >>> >>> [0] https://www.kernel.org/doc/html/next/iio/iio_dmabuf_api.html >>> >>>>> This can be tested on QEMU with the following steps: >>>>> repo init -u https://github.com/jenswi-linaro/manifest.git -m qemu_v8.xml \ >>>>> -b prototype/sdp-v1 >>>>> repo sync -j8 >>>>> cd build >>>>> make toolchains -j4 >>>>> make all -j$(nproc) >>>>> make run-only >>>>> # login and at the prompt: >>>>> xtest --sdp-basic >>>>> >>>>> https://optee.readthedocs.io/en/latest/building/prerequisites.html >>>>> list dependencies needed to build the above. >>>>> >>>>> The tests are pretty basic, mostly checking that a Trusted Application in >>>>> the secure world can access and manipulate the memory. >>>> - Can we test that the system doesn't crash badly if user provides >>>> non-secured memory to the users which expect a secure buffer? >>>> >>>> - At the same time corresponding entities shouldn't decode data to the >>>> buffers accessible to the rest of the sytem. >>>> >>>>> Cheers, >>>>> Jens >>>>> >>>>> [1] https://lore.kernel.org/dri-devel/20240515112308.10171-1-yong.wu@mediatek.com/ >>>>> [2] https://lore.kernel.org/lkml/20220805135330.970-1-olivier.masse@nxp.com/ >>>>> >>>>> Changes since Olivier's post [2]: >>>>> * Based on Yong Wu's post [1] where much of dma-buf handling is done in >>>>> the generic restricted heap >>>>> * Simplifications and cleanup >>>>> * New commit message for "dma-buf: heaps: add Linaro restricted dmabuf heap >>>>> support" >>>>> * Replaced the word "secure" with "restricted" where applicable >>>>> >>>>> Etienne Carriere (1): >>>>> tee: new ioctl to a register tee_shm from a dmabuf file descriptor >>>>> >>>>> Jens Wiklander (2): >>>>> dma-buf: heaps: restricted_heap: add no_map attribute >>>>> dma-buf: heaps: add Linaro restricted dmabuf heap support >>>>> >>>>> Olivier Masse (1): >>>>> dt-bindings: reserved-memory: add linaro,restricted-heap >>>>> >>>>> .../linaro,restricted-heap.yaml | 56 ++++++ >>>>> drivers/dma-buf/heaps/Kconfig | 10 ++ >>>>> drivers/dma-buf/heaps/Makefile | 1 + >>>>> drivers/dma-buf/heaps/restricted_heap.c | 17 +- >>>>> drivers/dma-buf/heaps/restricted_heap.h | 2 + >>>>> .../dma-buf/heaps/restricted_heap_linaro.c | 165 ++++++++++++++++++ >>>>> drivers/tee/tee_core.c | 38 ++++ >>>>> drivers/tee/tee_shm.c | 104 ++++++++++- >>>>> include/linux/tee_drv.h | 11 ++ >>>>> include/uapi/linux/tee.h | 29 +++ >>>>> 10 files changed, 426 insertions(+), 7 deletions(-) >>>>> create mode 100644 Documentation/devicetree/bindings/reserved-memory/linaro,restricted-heap.yaml >>>>> create mode 100644 drivers/dma-buf/heaps/restricted_heap_linaro.c >>>>> >>>>> -- >>>>> 2.34.1 >>>>> >
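The attach/map split Andrew refers to in this exchange looks like this from an importing driver's point of view; in principle an exporter sees every attached device before the first map and could pick backing storage that satisfies all of them. A sketch with error unwinding omitted:

    #include <linux/dma-buf.h>
    #include <linux/dma-direction.h>

    static int import_dmabuf(struct device *dev, int fd)
    {
            struct dma_buf *dmabuf;
            struct dma_buf_attachment *attach;
            struct sg_table *sgt;

            dmabuf = dma_buf_get(fd);
            if (IS_ERR(dmabuf))
                    return PTR_ERR(dmabuf);

            /* The exporter learns about this device here; a constraint-aware
             * exporter could already refuse an impossible combination. */
            attach = dma_buf_attach(dmabuf, dev);
            if (IS_ERR(attach))
                    return PTR_ERR(attach);

            /* Backing storage has to be pinned down no later than this point */
            sgt = dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL);
            if (IS_ERR(sgt))
                    return PTR_ERR(sgt);

            /* ... program the device with the addresses in sgt ... */

            dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL);
            dma_buf_detach(dmabuf, attach);
            dma_buf_put(dmabuf);
            return 0;
    }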
Hi, On Thu, Sep 26, 2024 at 4:03 PM Christian König <christian.koenig@amd.com> wrote: > > Am 26.09.24 um 15:52 schrieb Sumit Garg: > > [Resend in plain text format as my earlier message was rejected by > > some mailing lists] > > > > On Thu, 26 Sept 2024 at 19:17, Sumit Garg <sumit.garg@linaro.org> wrote: > >> On 9/25/24 19:31, Christian König wrote: > >> > >> Am 25.09.24 um 14:51 schrieb Dmitry Baryshkov: > >> > >> On Wed, Sep 25, 2024 at 10:51:15AM GMT, Christian König wrote: > >> > >> Am 25.09.24 um 01:05 schrieb Dmitry Baryshkov: > >> > >> On Tue, Sep 24, 2024 at 01:13:18PM GMT, Andrew Davis wrote: > >> > >> On 9/23/24 1:33 AM, Dmitry Baryshkov wrote: > >> > >> Hi, > >> > >> On Fri, Aug 30, 2024 at 09:03:47AM GMT, Jens Wiklander wrote: > >> > >> Hi, > >> > >> This patch set is based on top of Yong Wu's restricted heap patch set [1]. > >> It's also a continuation on Olivier's Add dma-buf secure-heap patch set [2]. > >> > >> The Linaro restricted heap uses genalloc in the kernel to manage the heap > >> carvout. This is a difference from the Mediatek restricted heap which > >> relies on the secure world to manage the carveout. > >> > >> I've tried to adress the comments on [2], but [1] introduces changes so I'm > >> afraid I've had to skip some comments. > >> > >> I know I have raised the same question during LPC (in connection to > >> Qualcomm's dma-heap implementation). Is there any reason why we are > >> using generic heaps instead of allocating the dma-bufs on the device > >> side? > >> > >> In your case you already have TEE device, you can use it to allocate and > >> export dma-bufs, which then get imported by the V4L and DRM drivers. > >> > >> This goes to the heart of why we have dma-heaps in the first place. > >> We don't want to burden userspace with having to figure out the right > >> place to get a dma-buf for a given use-case on a given hardware. > >> That would be very non-portable, and fail at the core purpose of > >> a kernel: to abstract hardware specifics away. > >> > >> Unfortunately all proposals to use dma-buf heaps were moving in the > >> described direction: let app select (somehow) from a platform- and > >> vendor- specific list of dma-buf heaps. In the kernel we at least know > >> the platform on which the system is running. Userspace generally doesn't > >> (and shouldn't). As such, it seems better to me to keep the knowledge in > >> the kernel and allow userspace do its job by calling into existing > >> device drivers. > >> > >> The idea of letting the kernel fully abstract away the complexity of inter > >> device data exchange is a completely failed design. There has been plenty of > >> evidence for that over the years. > >> > >> Because of this in DMA-buf it's an intentional design decision that > >> userspace and *not* the kernel decides where and what to allocate from. > >> > >> Hmm, ok. > >> > >> What the kernel should provide are the necessary information what type of > >> memory a device can work with and if certain memory is accessible or not. > >> This is the part which is unfortunately still not well defined nor > >> implemented at the moment. > >> > >> Apart from that there are a whole bunch of intentional design decision which > >> should prevent developers to move allocation decision inside the kernel. For > >> example DMA-buf doesn't know what the content of the buffer is (except for > >> it's total size) and which use cases a buffer will be used with. 
> >> > >> So the question if memory should be exposed through DMA-heaps or a driver > >> specific allocator is not a question of abstraction, but rather one of the > >> physical location and accessibility of the memory. > >> > >> If the memory is attached to any physical device, e.g. local memory on a > >> dGPU, FPGA PCIe BAR, RDMA, camera internal memory etc, then expose the > >> memory as device specific allocator. > >> > >> So, for embedded systems with unified memory all buffers (maybe except > >> PCIe BARs) should come from DMA-BUF heaps, correct? > >> > >> > >> From what I know that is correct, yes. Question is really if that will stay this way. > >> > >> Neural accelerators look a lot stripped down FPGAs these days and the benefit of local memory for GPUs is known for decades. > >> > >> Could be that designs with local specialized memory see a revival any time, who knows. > >> > >> If the memory is not physically attached to any device, but rather just > >> memory attached to the CPU or a system wide memory controller then expose > >> the memory as DMA-heap with specific requirements (e.g. certain sized pages, > >> contiguous, restricted, encrypted, ...). > >> > >> Is encrypted / protected a part of the allocation contract or should it > >> be enforced separately via a call to TEE / SCM / anything else? > >> > >> > >> Well that is a really good question I can't fully answer either. From what I know now I would say it depends on the design. > >> > > IMHO, I think Dmitry's proposal to rather allow the TEE device to be > > the allocator and exporter of DMA-bufs related to restricted memory > > makes sense to me. Since it's really the TEE implementation (OP-TEE, > > AMD-TEE, TS-TEE or future QTEE) which sets up the restrictions on a > > particular piece of allocated memory. AFAIK, that happens after the > > DMA-buf gets allocated and then user-space calls into TEE to set up > > which media pipeline is going to access that particular DMA-buf. It > > can also be a static contract depending on a particular platform > > design. > > > > As Jens noted in the other thread, we already manage shared memory > > allocations (from a static carve-out or dynamically mapped) for > > communications among Linux and TEE that were based on DMA-bufs earlier > > but since we didn't required them to be shared with other devices, so > > we rather switched to anonymous memory. > > > > From user-space perspective, it's cleaner to use TEE device IOCTLs for > > DMA-buf allocations since it already knows which underlying TEE > > implementation it's communicating with rather than first figuring out > > which DMA heap to use for allocation and then communicating with TEE > > implementation. > > +1 > > I'm not that deeply into the functionality the TEE device IOCTLs expose, > so can't judge if what's said above is correct or not. > > But in general building on top of existing infrastructure and > information is a really strong argument for a design. > > So from my 10 mile high point of view that sounds like the way to go. That sounds good, I'll prepare another patch set based on that approach so we can see all the details. Thanks, Jens
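Purely as a speculative illustration of what the follow-up interface Jens mentions might look like, modeled on the existing TEE_IOC_SHM_ALLOC uapi. Every name, field and ioctl number below is invented and carries no weight until the actual patch set appears:

    /* Hypothetical addition to include/uapi/linux/tee.h */
    #include <linux/ioctl.h>
    #include <linux/types.h>

    struct tee_ioctl_restricted_alloc_data {
            __u64 size;       /* requested buffer size */
            __u32 flags;      /* reserved for future use */
            __u32 use_case;   /* e.g. secure video playback vs. trusted UI */
    };

    /* Would return a dma-buf fd on success; the buffer is never CPU-mappable */
    #define TEE_IOC_RESTRICTED_ALLOC _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 10, \
                                           struct tee_ioctl_restricted_alloc_data)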
Le jeudi 26 septembre 2024 à 19:22 +0530, Sumit Garg a écrit : > [Resend in plain text format as my earlier message was rejected by > some mailing lists] > > On Thu, 26 Sept 2024 at 19:17, Sumit Garg <sumit.garg@linaro.org> wrote: > > > > On 9/25/24 19:31, Christian König wrote: > > > > Am 25.09.24 um 14:51 schrieb Dmitry Baryshkov: > > > > On Wed, Sep 25, 2024 at 10:51:15AM GMT, Christian König wrote: > > > > Am 25.09.24 um 01:05 schrieb Dmitry Baryshkov: > > > > On Tue, Sep 24, 2024 at 01:13:18PM GMT, Andrew Davis wrote: > > > > On 9/23/24 1:33 AM, Dmitry Baryshkov wrote: > > > > Hi, > > > > On Fri, Aug 30, 2024 at 09:03:47AM GMT, Jens Wiklander wrote: > > > > Hi, > > > > This patch set is based on top of Yong Wu's restricted heap patch set [1]. > > It's also a continuation on Olivier's Add dma-buf secure-heap patch set [2]. > > > > The Linaro restricted heap uses genalloc in the kernel to manage the heap > > carvout. This is a difference from the Mediatek restricted heap which > > relies on the secure world to manage the carveout. > > > > I've tried to adress the comments on [2], but [1] introduces changes so I'm > > afraid I've had to skip some comments. > > > > I know I have raised the same question during LPC (in connection to > > Qualcomm's dma-heap implementation). Is there any reason why we are > > using generic heaps instead of allocating the dma-bufs on the device > > side? > > > > In your case you already have TEE device, you can use it to allocate and > > export dma-bufs, which then get imported by the V4L and DRM drivers. > > > > This goes to the heart of why we have dma-heaps in the first place. > > We don't want to burden userspace with having to figure out the right > > place to get a dma-buf for a given use-case on a given hardware. > > That would be very non-portable, and fail at the core purpose of > > a kernel: to abstract hardware specifics away. > > > > Unfortunately all proposals to use dma-buf heaps were moving in the > > described direction: let app select (somehow) from a platform- and > > vendor- specific list of dma-buf heaps. In the kernel we at least know > > the platform on which the system is running. Userspace generally doesn't > > (and shouldn't). As such, it seems better to me to keep the knowledge in > > the kernel and allow userspace do its job by calling into existing > > device drivers. > > > > The idea of letting the kernel fully abstract away the complexity of inter > > device data exchange is a completely failed design. There has been plenty of > > evidence for that over the years. > > > > Because of this in DMA-buf it's an intentional design decision that > > userspace and *not* the kernel decides where and what to allocate from. > > > > Hmm, ok. > > > > What the kernel should provide are the necessary information what type of > > memory a device can work with and if certain memory is accessible or not. > > This is the part which is unfortunately still not well defined nor > > implemented at the moment. > > > > Apart from that there are a whole bunch of intentional design decision which > > should prevent developers to move allocation decision inside the kernel. For > > example DMA-buf doesn't know what the content of the buffer is (except for > > it's total size) and which use cases a buffer will be used with. > > > > So the question if memory should be exposed through DMA-heaps or a driver > > specific allocator is not a question of abstraction, but rather one of the > > physical location and accessibility of the memory. 
> > > > If the memory is attached to any physical device, e.g. local memory on a > > dGPU, FPGA PCIe BAR, RDMA, camera internal memory etc, then expose the > > memory as device specific allocator. > > > > So, for embedded systems with unified memory all buffers (maybe except > > PCIe BARs) should come from DMA-BUF heaps, correct? > > > > > > From what I know that is correct, yes. Question is really if that will stay this way. > > > > Neural accelerators look a lot stripped down FPGAs these days and the benefit of local memory for GPUs is known for decades. > > > > Could be that designs with local specialized memory see a revival any time, who knows. > > > > If the memory is not physically attached to any device, but rather just > > memory attached to the CPU or a system wide memory controller then expose > > the memory as DMA-heap with specific requirements (e.g. certain sized pages, > > contiguous, restricted, encrypted, ...). > > > > Is encrypted / protected a part of the allocation contract or should it > > be enforced separately via a call to TEE / SCM / anything else? > > > > > > Well that is a really good question I can't fully answer either. From what I know now I would say it depends on the design. > > > > IMHO, I think Dmitry's proposal to rather allow the TEE device to be > the allocator and exporter of DMA-bufs related to restricted memory > makes sense to me. Since it's really the TEE implementation (OP-TEE, > AMD-TEE, TS-TEE or future QTEE) which sets up the restrictions on a > particular piece of allocated memory. AFAIK, that happens after the > DMA-buf gets allocated and then user-space calls into TEE to set up > which media pipeline is going to access that particular DMA-buf. It > can also be a static contract depending on a particular platform > design. When the memory gets its protection is hardware specific. Otherwise the design would be really straightforward: allocate from a heap or any random driver API and protect that memory through a call into the TEE. A clear separation would be amazingly better, but this is not how hardware and firmware designers have seen it. In some implementations, there is a carve-out of memory that is protected before the kernel is booted. I believe (but I'm not affiliated with them) that MTK has hardware restrictions making that design the only usable method. In general, the handling of secure memory is bound to the TEE application for the specific platform; it has to be separated from the generic part of the tee drivers anyway, and dma-buf heaps are in my opinion the right API for the task. On MTK, if you have followed, when the SCP (their co-processor) is handling restricted video, you can't even call into it anymore directly. So to drive the CODECs, everything has to be routed through the TEE. Would you say that because of that this should not be a V4L2 driver anymore? > > As Jens noted in the other thread, we already manage shared memory > allocations (from a static carve-out or dynamically mapped) for > communications among Linux and TEE that were based on DMA-bufs earlier > but since we didn't required them to be shared with other devices, so > we rather switched to anonymous memory. > > From user-space perspective, it's cleaner to use TEE device IOCTLs for > DMA-buf allocations since it already knows which underlying TEE > implementation it's communicating with rather than first figuring out > which DMA heap to use for allocation and then communicating with TEE > implementation. 
As a user-space developer for the majority of my time, I find that adding common code to handle dma heaps is a lot easier and more straightforward than having to glue together all the different allocators implemented in various subsystems. Communicating which heap to work with can be generic and simple. Nicolas
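A sketch of the generic userspace handling Nicolas describes: heap discovery is just directory enumeration, so the only platform-specific input is the heap name, which can come from configuration rather than compiled-in knowledge. The helper below is illustrative only:

    #include <dirent.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Resolve a heap name to its device path by scanning /dev/dma_heap.
     * Returns 0 and fills 'path' on success, -1 if the heap is absent. */
    static int find_heap(const char *name, char *path, size_t len)
    {
            DIR *d = opendir("/dev/dma_heap");
            struct dirent *e;

            if (!d)
                    return -1;
            while ((e = readdir(d))) {
                    if (!strcmp(e->d_name, name)) {
                            snprintf(path, len, "/dev/dma_heap/%s", e->d_name);
                            closedir(d);
                            return 0;
                    }
            }
            closedir(d);
            return -1;
    }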
On Sat, 28 Sept 2024 at 01:20, Nicolas Dufresne <nicolas@ndufresne.ca> wrote: > > Le jeudi 26 septembre 2024 à 19:22 +0530, Sumit Garg a écrit : > > [Resend in plain text format as my earlier message was rejected by > > some mailing lists] > > > > On Thu, 26 Sept 2024 at 19:17, Sumit Garg <sumit.garg@linaro.org> wrote: > > > > > > On 9/25/24 19:31, Christian König wrote: > > > > > > Am 25.09.24 um 14:51 schrieb Dmitry Baryshkov: > > > > > > On Wed, Sep 25, 2024 at 10:51:15AM GMT, Christian König wrote: > > > > > > Am 25.09.24 um 01:05 schrieb Dmitry Baryshkov: > > > > > > On Tue, Sep 24, 2024 at 01:13:18PM GMT, Andrew Davis wrote: > > > > > > On 9/23/24 1:33 AM, Dmitry Baryshkov wrote: > > > > > > Hi, > > > > > > On Fri, Aug 30, 2024 at 09:03:47AM GMT, Jens Wiklander wrote: > > > > > > Hi, > > > > > > This patch set is based on top of Yong Wu's restricted heap patch set [1]. > > > It's also a continuation on Olivier's Add dma-buf secure-heap patch set [2]. > > > > > > The Linaro restricted heap uses genalloc in the kernel to manage the heap > > > carvout. This is a difference from the Mediatek restricted heap which > > > relies on the secure world to manage the carveout. > > > > > > I've tried to adress the comments on [2], but [1] introduces changes so I'm > > > afraid I've had to skip some comments. > > > > > > I know I have raised the same question during LPC (in connection to > > > Qualcomm's dma-heap implementation). Is there any reason why we are > > > using generic heaps instead of allocating the dma-bufs on the device > > > side? > > > > > > In your case you already have TEE device, you can use it to allocate and > > > export dma-bufs, which then get imported by the V4L and DRM drivers. > > > > > > This goes to the heart of why we have dma-heaps in the first place. > > > We don't want to burden userspace with having to figure out the right > > > place to get a dma-buf for a given use-case on a given hardware. > > > That would be very non-portable, and fail at the core purpose of > > > a kernel: to abstract hardware specifics away. > > > > > > Unfortunately all proposals to use dma-buf heaps were moving in the > > > described direction: let app select (somehow) from a platform- and > > > vendor- specific list of dma-buf heaps. In the kernel we at least know > > > the platform on which the system is running. Userspace generally doesn't > > > (and shouldn't). As such, it seems better to me to keep the knowledge in > > > the kernel and allow userspace do its job by calling into existing > > > device drivers. > > > > > > The idea of letting the kernel fully abstract away the complexity of inter > > > device data exchange is a completely failed design. There has been plenty of > > > evidence for that over the years. > > > > > > Because of this in DMA-buf it's an intentional design decision that > > > userspace and *not* the kernel decides where and what to allocate from. > > > > > > Hmm, ok. > > > > > > What the kernel should provide are the necessary information what type of > > > memory a device can work with and if certain memory is accessible or not. > > > This is the part which is unfortunately still not well defined nor > > > implemented at the moment. > > > > > > Apart from that there are a whole bunch of intentional design decision which > > > should prevent developers to move allocation decision inside the kernel. For > > > example DMA-buf doesn't know what the content of the buffer is (except for > > > it's total size) and which use cases a buffer will be used with. 
> > > > > > So the question if memory should be exposed through DMA-heaps or a driver > > > specific allocator is not a question of abstraction, but rather one of the > > > physical location and accessibility of the memory. > > > > > > If the memory is attached to any physical device, e.g. local memory on a > > > dGPU, FPGA PCIe BAR, RDMA, camera internal memory etc, then expose the > > > memory as device specific allocator. > > > > > > So, for embedded systems with unified memory all buffers (maybe except > > > PCIe BARs) should come from DMA-BUF heaps, correct? > > > > > > > > > From what I know that is correct, yes. Question is really if that will stay this way. > > > > > > Neural accelerators look a lot stripped down FPGAs these days and the benefit of local memory for GPUs is known for decades. > > > > > > Could be that designs with local specialized memory see a revival any time, who knows. > > > > > > If the memory is not physically attached to any device, but rather just > > > memory attached to the CPU or a system wide memory controller then expose > > > the memory as DMA-heap with specific requirements (e.g. certain sized pages, > > > contiguous, restricted, encrypted, ...). > > > > > > Is encrypted / protected a part of the allocation contract or should it > > > be enforced separately via a call to TEE / SCM / anything else? > > > > > > > > > Well that is a really good question I can't fully answer either. From what I know now I would say it depends on the design. > > > > > > > IMHO, I think Dmitry's proposal to rather allow the TEE device to be > > the allocator and exporter of DMA-bufs related to restricted memory > > makes sense to me. Since it's really the TEE implementation (OP-TEE, > > AMD-TEE, TS-TEE or future QTEE) which sets up the restrictions on a > > particular piece of allocated memory. AFAIK, that happens after the > > DMA-buf gets allocated and then user-space calls into TEE to set up > > which media pipeline is going to access that particular DMA-buf. It > > can also be a static contract depending on a particular platform > > design. > > When the memory get the protection is hardware specific. Otherwise the design > would be really straightforward, allocate from the a heap or any random driver > API and protect that memory through an call into the TEE. Clear seperation would > be amazingly better, but this is not how hardware and firmware designer have > seen it. > > In some implementation, there is a carving of memory that be protected before > the kernel is booted. I believe (but I'm not affiliated with them) that MTK has > hardware restriction making that design the only usable method. Yeah, I agree with that. The point I am making here is that the TEE subsystem can abstract away all those platform/vendor-specific methods for user-space to allocate restricted memory. We already have similar infrastructure for shared memory between Linux and the TEE implementation. User-space only uses TEE_IOC_SHM_ALLOC [1], where underneath it can either allocate from a static carveout of shared memory (as a reserved memory region) OR simply allocate from the kernel heap which is dynamically mapped into the TEE implementation. The choice here depends on the platform/TEE implementation capability. 
[1] https://docs.kernel.org/userspace-api/tee.html > > In general, the handling of secure memory is bound to the TEE application for > the specific platform, it has to be separated from the generic part of tee > drivers anyway, It is really the TEE implementation core which has the privileges to mark a piece of memory as restricted/secure. The TEE application in MTK is likely a pseudo TA (terminology similar to Linux kernel modules in the TEE world). So it is rather easier for TEE implementation drivers to abstract out the communication with the vendor-specific TEE core implementation. > and dmabuf heaps is in my opinion the right API for the task. Do you really think it is better for user-space to deal with vendor-specific dmabuf heaps? > > On MTK, if you have followed, when the SCP (their co-processor) is handling > restricted video, you can't even call into it anymore directly. So to drive the > CODECs, everything has to be routed through the TEE. Would you say that because > of that this should not be a V4L2 driver anymore ? I am not conversant with the MTK hardware/firmware implementation. But my point is that the kernel shouldn't be exposing tens of vendor-specific DMA-buf heaps for user-space to choose from, when this can rather be a single TEE device IOCTL used to allocate restricted memory. > > > > > As Jens noted in the other thread, we already manage shared memory > > allocations (from a static carve-out or dynamically mapped) for > > communications among Linux and TEE that were based on DMA-bufs earlier > > but since we didn't required them to be shared with other devices, so > > we rather switched to anonymous memory. > > > > From user-space perspective, it's cleaner to use TEE device IOCTLs for > > DMA-buf allocations since it already knows which underlying TEE > > implementation it's communicating with rather than first figuring out > > which DMA heap to use for allocation and then communicating with TEE > > implementation. > > As a user-space developer in the majority of my time, adding common code to > handle dma heaps is a lot easier and straight forward then having to glue all > the different allocators implement in various subsystems. Communicating which > heap to work can be generic and simple. Yeah, I agree with that notion, but IMHO having ifdeffery to select vendor-specific DMA heaps isn't something user-space should be dealing with. -Sumit > > Nicolas >
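For illustration, the userspace pattern Sumit is arguing against would look something like the probe loop below. Every heap path except /dev/dma_heap/system-style generic names is made up here, which is rather the point: each new platform grows the list.

    #include <fcntl.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Illustrative only: probing vendor-specific heap names until one
     * matches the running platform. A single TEE ioctl that allocates
     * restricted memory would replace this whole loop. */
    static const char *restricted_heap_paths[] = {
            "/dev/dma_heap/restricted",             /* hypothetical generic name */
            "/dev/dma_heap/linaro,restricted-heap", /* hypothetical, this series */
            "/dev/dma_heap/qcom,secure-video",      /* made up */
            "/dev/dma_heap/mtk,svp",                /* made up */
    };

    static int open_restricted_heap(void)
    {
            size_t n = sizeof(restricted_heap_paths) /
                       sizeof(restricted_heap_paths[0]);

            for (size_t i = 0; i < n; i++) {
                    int fd = open(restricted_heap_paths[i], O_RDONLY | O_CLOEXEC);

                    if (fd >= 0)
                            return fd; /* then allocate with DMA_HEAP_IOCTL_ALLOC */
            }
            fprintf(stderr, "no known restricted heap on this platform\n");
            return -1;
    }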