Message ID | 1395131520-30207-1-git-send-email-xuelin.shi@freescale.com (mailing list archive)
---|---
State | Changes Requested
Delegated to: | Dan Williams
On Tue, Mar 18, 2014 at 1:32 AM, <xuelin.shi@freescale.com> wrote:
> From: Xuelin Shi <xuelin.shi@freescale.com>
>
> The count that is used to get_unmap_data may not be the same as the
> count computed in dmaengine_unmap, which causes data to be freed into
> the wrong pool.
>
> This patch fixes the issue by keeping the pool in the unmap_data
> structure.

Won't this free the entire count anyway?  In what scenario is the
count different at unmap?
Hi Dan,

In async_mult(...) of async_raid6_recov.c, the count 3 is used to request an unmap.
However, to_cnt and bidi_cnt are both set to 1 and from_cnt to 0.
Then, while trying to do the unmap, we get the wrong "unmap" from a different mempool.

In this patch, the mempool is associated with the unmap structure instead of being computed again.
This way, the pool is guaranteed to be the same when we get and put the unmap data.

BTW: the mempool is just used to manage the struct unmap, not the pages.

Thanks,
Xuelin Shi

-----Original Message-----
From: Dan Williams [mailto:dan.j.williams@intel.com]
Sent: March 19, 2014 1:14
To: Shi Xuelin-B29237
Cc: Vinod Koul; linuxppc-dev; dmaengine@vger.kernel.org
Subject: Re: [PATCH] fix dmaengine_unmap failure.

On Tue, Mar 18, 2014 at 1:32 AM, <xuelin.shi@freescale.com> wrote:
> From: Xuelin Shi <xuelin.shi@freescale.com>
>
> The count that is used to get_unmap_data may not be the same as the
> count computed in dmaengine_unmap, which causes data to be freed into
> the wrong pool.
>
> This patch fixes the issue by keeping the pool in the unmap_data
> structure.

Won't this free the entire count anyway?  In what scenario is the count different at unmap?
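For reference, a simplified sketch of the pool lookup that makes the two counts diverge (modeled on __get_unmap_pool() and the unmap_pool[] array in drivers/dma/dmaengine.c of that era; treat the case ranges and pool sizes as an approximation of the mainline source, not a verbatim copy):

	/*
	 * Simplified model of the pool lookup in drivers/dma/dmaengine.c.
	 * The four pools hold unmap structs with room for 2, 16, 128 and
	 * 256 addresses; get_count_order(nr) rounds nr up to a power of two.
	 */
	static struct dmaengine_unmap_pool *__get_unmap_pool(int nr)
	{
		switch (get_count_order(nr)) {
		case 0 ... 1:
			return &unmap_pool[0];	/* nr <= 2 */
		case 2 ... 4:
			return &unmap_pool[1];	/* nr <= 16 */
		case 5 ... 7:
			return &unmap_pool[2];	/* nr <= 128 */
		case 8:
			return &unmap_pool[3];	/* nr <= 256 */
		default:
			BUG();
			return NULL;
		}
	}

	/*
	 * The async_mult() failure described above:
	 *
	 *   alloc: dmaengine_get_unmap_data(dev, 3, ...) ->
	 *          __get_unmap_pool(3) -> unmap_pool[1]   (order 2)
	 *   free:  cnt = to_cnt + from_cnt + bidi_cnt = 1 + 0 + 1 = 2
	 *          __get_unmap_pool(2) -> unmap_pool[0]   (order 1)
	 *
	 * mempool_free() then returns the object to a pool it was never
	 * allocated from, corrupting both pools.
	 */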
On Tue, Mar 18, 2014 at 11:39 PM, Xuelin Shi <xuelin.shi@freescale.com> wrote:
> Hi Dan,
>
> In async_mult(...) of async_raid6_recov.c, the count 3 is used to request an unmap.
> However, to_cnt and bidi_cnt are both set to 1 and from_cnt to 0.
> Then, while trying to do the unmap, we get the wrong "unmap" from a different mempool.
>
> In this patch, the mempool is associated with the unmap structure instead of being computed again.
> This way, the pool is guaranteed to be the same when we get and put the unmap data.
>
> BTW: the mempool is just used to manage the struct unmap, not the pages.

I see.  What about just storing the map_cnt at allocation time?  It
could be another byte in struct dmaengine_unmap_data rather than an
8-byte pointer.
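A rough sketch of that suggestion, assuming a new map_cnt byte next to the existing to_cnt/from_cnt/bidi_cnt fields (placement and naming are illustrative of what a v2 would need to do, not the final patch):

	/*
	 * Sketch of the suggested approach: remember the count used at
	 * allocation time instead of storing an 8-byte pool pointer.
	 */
	struct dmaengine_unmap_data {
		u8 map_cnt;	/* count passed to dmaengine_get_unmap_data() */
		u8 to_cnt;
		u8 from_cnt;
		u8 bidi_cnt;
		struct device *dev;
		struct kref kref;
		size_t len;
		dma_addr_t addr[0];
	};

	static void dmaengine_unmap(struct kref *kref)
	{
		struct dmaengine_unmap_data *unmap =
			container_of(kref, typeof(*unmap), kref);

		/* ... dma_unmap_page() loops elided ... */

		/* free to the pool chosen at allocation, not a recomputed one */
		mempool_free(unmap, __get_unmap_pool(unmap->map_cnt)->pool);
	}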
Hi Dan,

I'm OK to save memory here.  I'd like to send v2.

Thanks,
Xuelin Shi

-----Original Message-----
From: Dan Williams [mailto:dan.j.williams@intel.com]
Sent: March 19, 2014 23:44
To: Shi Xuelin-B29237
Cc: Vinod Koul; linuxppc-dev; dmaengine@vger.kernel.org
Subject: Re: [PATCH] fix dmaengine_unmap failure.

On Tue, Mar 18, 2014 at 11:39 PM, Xuelin Shi <xuelin.shi@freescale.com> wrote:
> Hi Dan,
>
> In async_mult(...) of async_raid6_recov.c, the count 3 is used to request an unmap.
> However, to_cnt and bidi_cnt are both set to 1 and from_cnt to 0.
> Then, while trying to do the unmap, we get the wrong "unmap" from a different mempool.
>
> In this patch, the mempool is associated with the unmap structure instead of being computed again.
> This way, the pool is guaranteed to be the same when we get and put the unmap data.
>
> BTW: the mempool is just used to manage the struct unmap, not the pages.

I see.  What about just storing the map_cnt at allocation time?  It could be another byte in struct dmaengine_unmap_data rather than an 8-byte pointer.
diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index ed610b4..2977eee 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -1014,7 +1014,7 @@ static void dmaengine_unmap(struct kref *kref)
 			dma_unmap_page(dev, unmap->addr[i], unmap->len,
 				       DMA_BIDIRECTIONAL);
 	}
-	mempool_free(unmap, __get_unmap_pool(cnt)->pool);
+	mempool_free(unmap, unmap->unmap_pool->pool);
 }
 
 void dmaengine_unmap_put(struct dmaengine_unmap_data *unmap)
@@ -1071,14 +1071,17 @@ struct dmaengine_unmap_data *
 dmaengine_get_unmap_data(struct device *dev, int nr, gfp_t flags)
 {
 	struct dmaengine_unmap_data *unmap;
+	struct dmaengine_unmap_pool *unmap_pool;
 
-	unmap = mempool_alloc(__get_unmap_pool(nr)->pool, flags);
+	unmap_pool = __get_unmap_pool(nr);
+	unmap = mempool_alloc(unmap_pool->pool, flags);
 	if (!unmap)
 		return NULL;
 
 	memset(unmap, 0, sizeof(*unmap));
 	kref_init(&unmap->kref);
 	unmap->dev = dev;
+	unmap->unmap_pool = unmap_pool;
 
 	return unmap;
 }
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index c5c92d5..6a25635 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -439,6 +439,7 @@ struct dmaengine_unmap_data {
 	struct device *dev;
 	struct kref kref;
 	size_t len;
+	struct dmaengine_unmap_pool *unmap_pool;
 	dma_addr_t addr[0];
 };
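For context, a sketch of the allocation/release pairing this patch protects, in the async_mult() scenario described above (the dma_dev variable and the GFP_NOWAIT flag are assumptions in the style of async_raid6_recov.c; error paths trimmed):

	/*
	 * Illustration of the failing pattern from async_mult(): three
	 * slots are requested, but only two end up mapped.  With this
	 * patch the object is freed back to the pool recorded at
	 * allocation time even though the populated count (2) differs
	 * from the requested count (3).
	 */
	struct dmaengine_unmap_data *unmap;

	unmap = dmaengine_get_unmap_data(dma_dev, 3, GFP_NOWAIT);
	if (!unmap)
		return NULL;	/* fall back to the synchronous path */

	unmap->to_cnt = 1;	/* one DMA_TO_DEVICE page */
	unmap->bidi_cnt = 1;	/* one DMA_BIDIRECTIONAL page */
	/* from_cnt stays 0, so to_cnt + from_cnt + bidi_cnt == 2, not 3 */

	/* ... map the pages, submit the descriptor ... */

	dmaengine_unmap_put(unmap);	/* drops the kref; frees to unmap->unmap_pool */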