Message ID | 20230306165143.1671-1-afd@ti.com (mailing list archive) |
---|---|
State | New, archived |
Headers | show |
Series | [v3] dma-buf: cma_heap: Check for device max segment size when attaching | expand |
On Mon, Mar 6, 2023 at 8:52 AM Andrew Davis <afd@ti.com> wrote:
>
> Although there is usually not such a limitation (and when there is it is
> often only because the driver forgot to change the super small default),
> it is still correct here to break scatterlist element into chunks of
> dma_max_mapping_size().

Hey Andrew!
Thanks for sending this out!

So *why* is it "correct here to break scatterlist element into chunks
of dma_max_mapping_size()."?

> This might cause some issues for users with misbehaving drivers. If
> bisecting has landed you on this commit, make sure your drivers both set
> dma_set_max_seg_size() and are checking for contiguousness correctly.

Why is this change worth the risk? (If this is really likely to break
folks, should we maybe provide warnings initially instead? Maybe
falling back to the old code if we can catch the failure?)

I don't really object to the change, just want to make sure the commit
message is more clear on why we should make this change, what the
benefit will be along with the potential downsides.

thanks
-john
On 3/6/23 8:48 PM, John Stultz wrote:
> On Mon, Mar 6, 2023 at 8:52 AM Andrew Davis <afd@ti.com> wrote:
>>
>> Although there is usually not such a limitation (and when there is it is
>> often only because the driver forgot to change the super small default),
>> it is still correct here to break scatterlist element into chunks of
>> dma_max_mapping_size().
>
> Hey Andrew!
> Thanks for sending this out!
>
> So *why* is it "correct here to break scatterlist element into chunks
> of dma_max_mapping_size()." ?
>

Good question, I'm not 100% sure on the background myself. It seems that
since some devices have restrictions on how large a mapping they can
handle in a single run, we should not hand out single scatterlist elements
longer than that. The buffer is still contiguous, but some drivers forget
to set their mapping limits and also check only for the number of elements
== 1 to determine if an sg is contiguous. That check is not correct, as
there is no rule that contiguous runs must be merged into a single
scatterlist element. For those drivers this would be an issue (I've only
found one such case in-tree and sent a fix:
https://lore.kernel.org/lkml/20220825162609.14076-1-afd@ti.com/)

>> This might cause some issues for users with misbehaving drivers. If
>> bisecting has landed you on this commit, make sure your drivers both set
>> dma_set_max_seg_size() and are checking for contiguousness correctly.
>
> Why is this change worth the risk? (If this is really likely to break
> folks, should we maybe provide warnings initially instead? Maybe
> falling back to the old code if we can catch the failure?)
>
> I don't really object to the change, just want to make sure the commit
> message is more clear on why we should make this change, what the
> benefit will be along with the potential downsides.
>

I'm not sure it is worth the risk today either, but figured that this
being a young enough exporter, it could be a good spot to start with for
exposing misbehaving drivers, vs some legacy GPU driver exporter.

Plus it is better to make this change now rather than later in any case.
I don't have any strong reason for this yet though, so I'm fine with just
considering this patch an RFC for now.

Thanks,
Andrew

> thanks
> -john
diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index 1131fb943992..579261a46fa3 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -53,16 +53,18 @@ static int cma_heap_attach(struct dma_buf *dmabuf,
 {
 	struct cma_heap_buffer *buffer = dmabuf->priv;
 	struct dma_heap_attachment *a;
+	size_t max_segment;
 	int ret;
 
 	a = kzalloc(sizeof(*a), GFP_KERNEL);
 	if (!a)
 		return -ENOMEM;
 
-	ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
-					buffer->pagecount, 0,
-					buffer->pagecount << PAGE_SHIFT,
-					GFP_KERNEL);
+	max_segment = dma_get_max_seg_size(attachment->dev);
+	ret = sg_alloc_table_from_pages_segment(&a->table, buffer->pages,
+						buffer->pagecount, 0,
+						buffer->pagecount << PAGE_SHIFT,
+						max_segment, GFP_KERNEL);
 	if (ret) {
 		kfree(a);
 		return ret;
Although there is usually not such a limitation (and when there is it is
often only because the driver forgot to change the super small default),
it is still correct here to break scatterlist element into chunks of
dma_max_mapping_size().

This might cause some issues for users with misbehaving drivers. If
bisecting has landed you on this commit, make sure your drivers both set
dma_set_max_seg_size() and are checking for contiguousness correctly.

Signed-off-by: Andrew Davis <afd@ti.com>
---
Changes from v2:
 - Rebase v6.3-rc1

Changes from v1:
 - Fixed mixed declarations and code warning

 drivers/dma-buf/heaps/cma_heap.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)