Message ID: 20220909191916.16013-3-Sergey.Semin@baikalelectronics.ru (mailing list archive)
State: New, archived
Series: block/nvme: Fix DMA-noncoherent platforms support
On Fri, Sep 09, 2022 at 10:19:16PM +0300, Serge Semin wrote:
> In accordance with [1] the DMA-able memory buffers must be
> cacheline-aligned otherwise the cache writing-back and invalidation
> performed during the mapping may cause the adjacent data being lost. It's
> specifically required for the DMA-noncoherent platforms. Seeing the
> opal_dev.{cmd,resp} buffers are used for DMAs in the NVME and SCSI/SD
> drivers in framework of the nvme_sec_submit() and sd_sec_submit() methods
> respectively we must make sure the passed buffers are cacheline-aligned to
> prevent the denoted problem.

Same comment as for the previous one: this should work, but I think
separate allocations for the DMA-able buffers would document the intent
much better. Given that the opal initialization isn't a fast path, I
don't think the overhead should matter either.
Hello Christoph

On Sat, Sep 10, 2022 at 07:32:03AM +0200, Christoph Hellwig wrote:
> On Fri, Sep 09, 2022 at 10:19:16PM +0300, Serge Semin wrote:
> > In accordance with [1] the DMA-able memory buffers must be
> > cacheline-aligned otherwise the cache writing-back and invalidation
> > performed during the mapping may cause the adjacent data being lost. It's
> > specifically required for the DMA-noncoherent platforms. Seeing the
> > opal_dev.{cmd,resp} buffers are used for DMAs in the NVME and SCSI/SD
> > drivers in framework of the nvme_sec_submit() and sd_sec_submit() methods
> > respectively we must make sure the passed buffers are cacheline-aligned to
> > prevent the denoted problem.
>
> Same comment as for the previous one, this should work, but I think
> separate allocations for the DMAable buffers would document the intent
> much better. Given that the opal initialization isn't a fast path
> I don't think that the overhead should matter either.

Thanks for the comment. I see your point. Let's hear the subsystem
maintainers out for their opinion on the most suitable solution in this
case. If they agree with you, I'll resend the series with altered fixes.

-Sergey
@Jens, @Revanth, @Jonathan, do you have anything to say regarding the
patch and what @Christoph suggested?

On Sun, Sep 11, 2022 at 07:28:57PM +0300, Serge Semin wrote:
> Hello Christoph
>
> On Sat, Sep 10, 2022 at 07:32:03AM +0200, Christoph Hellwig wrote:
> > On Fri, Sep 09, 2022 at 10:19:16PM +0300, Serge Semin wrote:
> > > In accordance with [1] the DMA-able memory buffers must be
> > > cacheline-aligned otherwise the cache writing-back and invalidation
> > > performed during the mapping may cause the adjacent data being lost. It's
> > > specifically required for the DMA-noncoherent platforms. Seeing the
> > > opal_dev.{cmd,resp} buffers are used for DMAs in the NVME and SCSI/SD
> > > drivers in framework of the nvme_sec_submit() and sd_sec_submit() methods
> > > respectively we must make sure the passed buffers are cacheline-aligned to
> > > prevent the denoted problem.
> >
> > Same comment as for the previous one, this should work, but I think
> > separate allocations for the DMAable buffers would document the intent
> > much better. Given that the opal initialization isn't a fast path
> > I don't think that the overhead should matter either.
>
> Thanks for the comment. I see your point. Let's hear the subsystem
> maintainers out for their opinion regarding the most suitable solution
> in this case. If they get to agree with you I'll resend the series
> with altered fixes.
>
> -Sergey
diff --git a/block/sed-opal.c b/block/sed-opal.c
index 9700197000f2..222acbd1f03a 100644
--- a/block/sed-opal.c
+++ b/block/sed-opal.c
@@ -73,6 +73,7 @@ struct parsed_resp {
 	struct opal_resp_tok toks[MAX_TOKS];
 };
 
+/* Presumably DMA-able buffers must be cache-aligned */
 struct opal_dev {
 	bool supported;
 	bool mbr_enabled;
@@ -88,8 +89,8 @@ struct opal_dev {
 	u64 lowest_lba;
 	size_t pos;
 
-	u8 cmd[IO_BUFFER_LENGTH];
-	u8 resp[IO_BUFFER_LENGTH];
+	u8 cmd[IO_BUFFER_LENGTH] ____cacheline_aligned;
+	u8 resp[IO_BUFFER_LENGTH] ____cacheline_aligned;
 
 	struct parsed_resp parsed;
 	size_t prev_d_len;
In accordance with [1], DMA-able memory buffers must be
cacheline-aligned; otherwise the cache write-back and invalidation
performed during the mapping may cause adjacent data to be lost. This is
specifically required on DMA-noncoherent platforms. Since the
opal_dev.{cmd,resp} buffers are used for DMA by the NVMe and SCSI/SD
drivers via the nvme_sec_submit() and sd_sec_submit() methods
respectively, we must make sure the passed buffers are cacheline-aligned
to prevent the denoted problem.

[1] Documentation/core-api/dma-api.rst

Fixes: 455a7b238cd6 ("block: Add Sed-opal library")
Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
 block/sed-opal.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)