Message ID: 1582223216-23459-1-git-send-email-jcrouse@codeaurora.org (mailing list archive)
Series: msm/gpu/a6xx: use the DMA-API for GMU memory allocations
On Thu, Feb 20, 2020 at 10:27 AM Jordan Crouse <jcrouse@codeaurora.org> wrote:
> When CONFIG_INIT_ON_ALLOC_DEFAULT_ON is set, the GMU memory allocator runs
> afoul of cache coherency issues because the memory is mapped as write-combine
> without clearing the cache after it was zeroed.
>
> Rather than duplicate the hacky workaround we use in the GEM allocator for the
> same reason, it turns out that we don't need a bespoke memory allocator for
> the GMU anyway. It uses a flat, global address space and there are only two
> relatively minor allocations. In short, this is essentially what the DMA API
> was created for, so replace a bunch of memory management code with two calls
> to allocate and free DMA memory and we're fine.
>
> The only wrinkle is that the memory allocations need to be in a very specific
> location in the GMU virtual address space, so in order to get the iova
> allocator to do the right thing we need to specify the dma-ranges property in
> the device tree for the GMU node. Since we've not yet converted the GMU
> bindings over to YAML, two patches quickly turn into four, but at the end of
> it we have at least one bindings file converted to YAML and 99 fewer lines of
> code to worry about.
>
> v2: Fix the example bindings for dma-ranges - the third item is the size.
>     Pass false to of_dma_configure so that it fails probe if the DMA region
>     is not set up.

This set still works for me as well. Thanks so much!

Tested-by: John Stultz <john.stultz@linaro.org>

thanks
-john
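
For readers skimming the archive, below is a minimal sketch of the shape of
change the cover letter describes: drop the bespoke allocator in favour of
write-combine DMA-API allocations, and let the dma-ranges property on the GMU
node constrain where the iova allocator places the buffers. The names here
(gmu_bo, gmu_memory_alloc, gmu_memory_free, gmu_dma_init) are illustrative
assumptions, not necessarily what the series itself uses.

    /*
     * Illustrative sketch only; struct and function names are hypothetical
     * and do not claim to match the actual patches.
     */
    #include <linux/dma-mapping.h>
    #include <linux/errno.h>
    #include <linux/mm.h>
    #include <linux/of_device.h>

    struct gmu_bo {
            void *virt;             /* CPU mapping (write-combine) */
            dma_addr_t iova;        /* GMU-visible address */
            size_t size;
    };

    /* One call replaces the bespoke allocator: zeroed, write-combine memory. */
    static int gmu_memory_alloc(struct device *dev, struct gmu_bo *bo,
                                size_t size)
    {
            bo->size = PAGE_ALIGN(size);

            bo->virt = dma_alloc_wc(dev, bo->size, &bo->iova, GFP_KERNEL);
            if (!bo->virt)
                    return -ENOMEM;

            return 0;
    }

    /* ...and one call frees it again. */
    static void gmu_memory_free(struct device *dev, struct gmu_bo *bo)
    {
            dma_free_wc(dev, bo->size, bo->virt, bo->iova);
    }

    /*
     * Per the v2 note above, force_dma is false so that, if the DMA region
     * described by the GMU node's dma-ranges is not set up, the error
     * propagates and probe fails instead of silently continuing.
     */
    static int gmu_dma_init(struct device *dev)
    {
            return of_dma_configure(dev, dev->of_node, false);
    }

Because the GMU makes only two small allocations into a flat address space,
the placement constraint is handled entirely by the dma-ranges entry in the
device tree rather than by allocator code in the driver.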