Message ID | 1467039249-7816-2-git-send-email-fcooper@ti.com (mailing list archive) |
---|---|
State | New, archived |
On Mon, Jun 27, 2016 at 09:54:07AM -0500, Franklin S Cooper Jr wrote:

> Occasionally there are times you need to tweak a chained S/G list while
> maintaining the original list. This function will duplicate the passed
> in chained S/G list and return a pointer to the cloned copy.

This looks reasonable to me and there's not been any other comment over a
fairly extended period (this wasn't the first posting) so unless someone
screams I'll probably go ahead and apply this soon.
On Tue, Jul 5, 2016 at 5:49 PM, Mark Brown <broonie@kernel.org> wrote:
> On Mon, Jun 27, 2016 at 09:54:07AM -0500, Franklin S Cooper Jr wrote:
>> Occasionally there are times you need to tweak a chained S/G list while
>> maintaining the original list. This function will duplicate the passed
>> in chained S/G list and return a pointer to the cloned copy.
>
> This looks reasonable to me and there's not been any other comment over
> a fairly extended period (this wasn't the first posting) so unless
> someone screams I'll probably go ahead and apply this soon.

It has style issues, I suppose. Extra empty lines, for example. I will
comment more.
On Mon, Jun 27, 2016 at 5:54 PM, Franklin S Cooper Jr <fcooper@ti.com> wrote:
> Occasionally there are times you need to tweak a chained S/G list while
> maintaining the original list. This function will duplicate the passed
> in chained S/G list and return a pointer to the cloned copy.
>
> The function also supports passing in a length that can be equal or less
> than the original chained S/G list length. Reducing the length can result
> in less entries of the chained S/G list being created.

My comments below.

> +struct sg_table *sg_table_clone(struct sg_table *orig_table, u64 len,
> +                               gfp_t gfp_mask);

size_t len? Or it would be commented somewhere why u64.

> +/*
> + * sg_clone - Duplicate an existing chained sgl
> + * @orig_sgl: Original sg list to be duplicated
> + * @len: Total length of sg while taking chaining into account
> + * @gfp_mask: GFP allocation mask
> + *
> + * Description:
> + * Clone a chained sgl. This cloned copy may be modified in some ways while
> + * keeping the original sgl in tact. Also allow the cloned copy to have
> + * a smaller length than the original which may reduce the sgl total
> + * sg entries.
> + *
> + * Returns:
> + * Pointer to new kmalloced sg list, ERR_PTR() on error

Maybe "...new allocated SG list using kmalloc()..."?

> + *

Extra line.

> + */
> +static struct scatterlist *sg_clone(struct scatterlist *orig_sgl, u64 len,
> +                                   gfp_t gfp_mask)
> +{
> +       unsigned int nents;
> +       bool last_entry;
> +       struct scatterlist *sgl, *head;
> +
> +       nents = sg_nents_for_len(orig_sgl, len);
> +

Ditto.

> +       if (nents < 0)
> +               return ERR_PTR(-EINVAL);

return ERR_PTR(nents);

> +
> +       head = sg_kmalloc(nents, gfp_mask);
> +

Extra line.

> +       if (!head)
> +               return ERR_PTR(-ENOMEM);
> +
> +       sgl = head;
> +
> +       sg_init_table(sgl, nents);
> +
> +       for (; sgl; orig_sgl = sg_next(orig_sgl), sgl = sg_next(sgl)) {

Maybe

        sg_init_table(head, nents);

        for (sgl = head; sgl; ...

> +
> +               last_entry = sg_is_last(sgl);
> +               *sgl = *orig_sgl;
> +
> +               /*
> +                * If page_link is pointing to a chained sgl then set it to
> +                * zero since we already compensated for chained sgl. If
> +                * page_link is pointing to a page then clear bits 1 and 0.
> +                */
> +               if (sg_is_chain(orig_sgl))
> +                       sgl->page_link = 0x0;
> +               else
> +                       sgl->page_link &= ~0x03;
> +
> +               if (last_entry) {
> +                       sg_dma_len(sgl) = len;

Hmm... If it's not DMA mapped and CONFIG_NEED_SG_DMA_LENGTH=y, you will
set dma_length field and len otherwise. Is it on purpose?

> +                       /* Set bit 1 to indicate end of sgl */
> +                       sgl->page_link |= 0x02;
> +               } else {
> +                       len -= sg_dma_len(sgl);

Ditto.

> +               }
> +       }
> +
> +       return head;
> +}
> +
> +/*
> + * sg_table_clone - Duplicate an existing sg_table including chained sgl
> + * @orig_table: Original sg_table to be duplicated
> + * @len: Total length of sg while taking chaining into account
> + * @gfp_mask: GFP allocation mask
> + *
> + * Description:
> + * Clone a sg_table along with chained sgl. This cloned copy may be
> + * modified in some ways while keeping the original table and sgl in tact.
> + * Also allow the cloned sgl copy to have a smaller length than the original
> + * which may reduce the sgl total sg entries.
> + *
> + * Returns:
> + * Pointer to new kmalloced sg_table, ERR_PTR() on error
> + *
> + */
> +struct sg_table *sg_table_clone(struct sg_table *orig_table, u64 len,
> +                               gfp_t gfp_mask)
> +{
> +       struct sg_table *table;
> +
> +       table = kmalloc(sizeof(struct sg_table), gfp_mask);
> +

Extra line.

> +       if (!table)
> +               return ERR_PTR(-ENOMEM);
> +
> +       table->sgl = sg_clone(orig_table->sgl, len, gfp_mask);
> +

Ditto.

> +       if (IS_ERR(table->sgl)) {
> +               kfree(table);
> +               return ERR_PTR(-ENOMEM);
> +       }
> +
> +       table->nents = table->orig_nents = sg_nents(table->sgl);
> +
> +       return table;
> +}
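For reference, here is a minimal sketch of what sg_clone() might look like
with the review comments above folded in (propagating the sg_nents_for_len()
error, taking size_t, and starting the loop from head). This is an
illustration only, not code from any posted revision; note that nents must be
signed for the error check to work at all, and the dma_length question raised
later in the thread is only flagged in a comment here:

static struct scatterlist *sg_clone(struct scatterlist *orig_sgl, size_t len,
                                    gfp_t gfp_mask)
{
        struct scatterlist *head, *sgl;
        int nents;      /* must be signed, or the < 0 check below is dead */

        nents = sg_nents_for_len(orig_sgl, len);
        if (nents < 0)
                return ERR_PTR(nents);  /* propagate the real error code */

        head = kmalloc_array(nents, sizeof(*head), gfp_mask);
        if (!head)
                return ERR_PTR(-ENOMEM);

        sg_init_table(head, nents);
        for (sgl = head; sgl; orig_sgl = sg_next(orig_sgl), sgl = sg_next(sgl)) {
                bool last = sg_is_last(sgl);    /* last entry of the new list */

                *sgl = *orig_sgl;

                /* Drop chain/end markers inherited from the original entry */
                if (sg_is_chain(orig_sgl))
                        sgl->page_link = 0;
                else
                        sgl->page_link &= ~0x03;

                if (last) {
                        sgl->length = len;  /* CPU length only; dma_length is debated below */
                        sg_mark_end(sgl);
                } else {
                        len -= sgl->length;
                }
        }

        return head;
}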
On Monday 27 June 2016 08:24 PM, Franklin S Cooper Jr wrote:

> +/*
> + * sg_table_clone - Duplicate an existing sg_table including chained sgl

This function should probably be called sg_clone_table() to be
consistent with sg_alloc_table(), sg_free_table() etc.

> + * @orig_table: Original sg_table to be duplicated
> + * @len: Total length of sg while taking chaining into account
> + * @gfp_mask: GFP allocation mask
> + *
> + * Description:
> + * Clone a sg_table along with chained sgl. This cloned copy may be
> + * modified in some ways while keeping the original table and sgl in tact.
> + * Also allow the cloned sgl copy to have a smaller length than the original
> + * which may reduce the sgl total sg entries.
> + *
> + * Returns:
> + * Pointer to new kmalloced sg_table, ERR_PTR() on error
> + *
> + */
> +struct sg_table *sg_table_clone(struct sg_table *orig_table, u64 len,
> +                               gfp_t gfp_mask)
> +{
> +       struct sg_table *table;
> +
> +       table = kmalloc(sizeof(struct sg_table), gfp_mask);

Can you use sg_alloc_table() to allocate the new table? The way you have
it now, it looks like users will need to use kfree() to free a cloned
table and use sg_free_table() otherwise. It will be nice if
sg_free_table() can be used consistently.

Regards,
Sekhar
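To make the freeing asymmetry concrete: with the patch as posted, tearing down
a cloned table looks different from tearing down a table built with
sg_alloc_table(). A rough sketch, assuming the clone's sgl is a single
kmalloc'ed array (nents below SG_MAX_SINGLE_ALLOC):

/* Table built with sg_alloc_table(): */
sg_free_table(table);

/* Table returned by sg_table_clone() as posted: */
kfree(clone->sgl);      /* one kmalloc'ed scatterlist array from sg_clone() */
kfree(clone);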
On 07/05/2016 12:10 PM, Andy Shevchenko wrote: > On Mon, Jun 27, 2016 at 5:54 PM, Franklin S Cooper Jr <fcooper@ti.com> wrote: >> Occasionally there are times you need to tweak a chained S/G list while >> maintaining the original list. This function will duplicate the passed >> in chained S/G list and return a pointer to the cloned copy. >> >> The function also supports passing in a length that can be equal or less >> than the original chained S/G list length. Reducing the length can result >> in less entries of the chained S/G list being created. > > My comments below. > >> +struct sg_table *sg_table_clone(struct sg_table *orig_table, u64 len, >> + gfp_t gfp_mask); > > size_t len ? > Or it would be commented somewhere why u64. No reason other than sg_nents_for_len required u64. But I see other users of that function using size_t so I'll fix this. > >> +/* >> + * sg_clone - Duplicate an existing chained sgl >> + * @orig_sgl: Original sg list to be duplicated >> + * @len: Total length of sg while taking chaining into account >> + * @gfp_mask: GFP allocation mask >> + * >> + * Description: >> + * Clone a chained sgl. This cloned copy may be modified in some ways while >> + * keeping the original sgl in tact. Also allow the cloned copy to have >> + * a smaller length than the original which may reduce the sgl total >> + * sg entries. >> + * >> + * Returns: >> + * Pointer to new kmalloced sg list, ERR_PTR() on error > > Maybe "...new allocated SG list using kmalloc()..." ? Will fix > >> + * > > Extra line. Will fix along with all the other extra line comments. > >> + */ >> +static struct scatterlist *sg_clone(struct scatterlist *orig_sgl, u64 len, >> + gfp_t gfp_mask) >> +{ >> + unsigned int nents; >> + bool last_entry; >> + struct scatterlist *sgl, *head; >> + >> + nents = sg_nents_for_len(orig_sgl, len); >> + > > Ditto. > >> + if (nents < 0) > >> + return ERR_PTR(-EINVAL); > > return ERR_PTR(nents); Will fix > >> + >> + head = sg_kmalloc(nents, gfp_mask); >> + > > Extra line. > >> + if (!head) >> + return ERR_PTR(-ENOMEM); >> + > >> + sgl = head; >> + >> + sg_init_table(sgl, nents); >> + >> + for (; sgl; orig_sgl = sg_next(orig_sgl), sgl = sg_next(sgl)) { > > Maybe > > sg_init_table(head, nents); > > for (sgl = head; sgl; ... Yeah that will probably look better. Will fix. > >> + >> + last_entry = sg_is_last(sgl); >> + *sgl = *orig_sgl; >> + >> + /* >> + * If page_link is pointing to a chained sgl then set it to >> + * zero since we already compensated for chained sgl. If >> + * page_link is pointing to a page then clear bits 1 and 0. >> + */ >> + if (sg_is_chain(orig_sgl)) >> + sgl->page_link = 0x0; >> + else >> + sgl->page_link &= ~0x03; >> + > >> + if (last_entry) { >> + sg_dma_len(sgl) = len; > > Hmm... If it's not DMA mapped and CONFIG_NEED_SG_DMA_LENGTH=y, you > will set dma_length field and len otherwise. Is it on purpose? Thanks for pointing this out. I didn't take into consideration the above scenario. After looking at this more it seems like on occasion dma_length != length. I see this occurring in various map_sg functions for several architectures. I'm simply trying to reduce the overall sgl length by a certain amount. I'm not sure what the proper approach would be since dma_length and length can be set to different values. Simply setting sgl->dma_length to sgl->length or vice a versa seems like it can be problematic in some scenarios. What confuses me even more is if dma_length != length then how can sg_nents_for_len only use the sgl length property. 
Any thoughts on what would be the best way to handle this? How about something like the following to address your original question? sgl->length = len if (sgl->dma_address) sg_dma_len(sgl) = sgl->length > >> + /* Set bit 1 to indicate end of sgl */ >> + sgl->page_link |= 0x02; >> + } else { > >> + len -= sg_dma_len(sgl); len -= sgl->length > > Ditto. > >> + } >> + } >> + >> + return head; >> +} >> + >> +/* >> + * sg_table_clone - Duplicate an existing sg_table including chained sgl >> + * @orig_table: Original sg_table to be duplicated >> + * @len: Total length of sg while taking chaining into account >> + * @gfp_mask: GFP allocation mask >> + * >> + * Description: >> + * Clone a sg_table along with chained sgl. This cloned copy may be >> + * modified in some ways while keeping the original table and sgl in tact. >> + * Also allow the cloned sgl copy to have a smaller length than the original >> + * which may reduce the sgl total sg entries. >> + * >> + * Returns: >> + * Pointer to new kmalloced sg_table, ERR_PTR() on error >> + * >> + */ >> +struct sg_table *sg_table_clone(struct sg_table *orig_table, u64 len, >> + gfp_t gfp_mask) >> +{ >> + struct sg_table *table; >> + >> + table = kmalloc(sizeof(struct sg_table), gfp_mask); >> + > > Extra line. > >> + if (!table) >> + return ERR_PTR(-ENOMEM); >> + >> + table->sgl = sg_clone(orig_table->sgl, len, gfp_mask); >> + > > Ditto. > >> + if (IS_ERR(table->sgl)) { >> + kfree(table); >> + return ERR_PTR(-ENOMEM); >> + } >> + >> + table->nents = table->orig_nents = sg_nents(table->sgl); >> + >> + return table; >> +} > -- To unsubscribe from this list: send the line "unsubscribe linux-spi" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
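Spelled out, the proposal above for the final entry would look roughly like
the sketch below. The sgl->dma_address test is only the heuristic suggested
in this mail for "already mapped"; whether that test is reliable is exactly
the open question:

if (last_entry) {
        sgl->length = len;
        /* Heuristic from the mail above: non-zero dma_address == "mapped" */
        if (sgl->dma_address)
                sg_dma_len(sgl) = sgl->length;
        /* Mark the end of the cloned list */
        sg_mark_end(sgl);
} else {
        len -= sgl->length;
}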
On 07/06/2016 05:15 AM, Sekhar Nori wrote:
> On Monday 27 June 2016 08:24 PM, Franklin S Cooper Jr wrote:
>
>> +/*
>> + * sg_table_clone - Duplicate an existing sg_table including chained sgl
>
> This function should probably be called sg_clone_table() to be
> consistent with sg_alloc_table(), sg_free_table() etc.

Will fix.

>
>> + * @orig_table: Original sg_table to be duplicated
>> + * @len: Total length of sg while taking chaining into account
>> + * @gfp_mask: GFP allocation mask
>> + *
>> + * Description:
>> + * Clone a sg_table along with chained sgl. This cloned copy may be
>> + * modified in some ways while keeping the original table and sgl in tact.
>> + * Also allow the cloned sgl copy to have a smaller length than the original
>> + * which may reduce the sgl total sg entries.
>> + *
>> + * Returns:
>> + * Pointer to new kmalloced sg_table, ERR_PTR() on error
>> + *
>> + */
>> +struct sg_table *sg_table_clone(struct sg_table *orig_table, u64 len,
>> +                               gfp_t gfp_mask)
>> +{
>> +       struct sg_table *table;
>> +
>> +       table = kmalloc(sizeof(struct sg_table), gfp_mask);
>
> Can you use sg_alloc_table() to allocate the new table? The way you have
> it now, it looks like users will need to use kfree() to free a cloned
> table and use sg_free_table() otherwise. It will be nice if
> sg_free_table() can be used consistently.

sg_alloc_table() doesn't actually allocate a struct sg_table. It allocates a
single sgl or chained sgl. Drivers that use sg_table manually allocate it via
kmalloc.

One change that might make sense is to not kmalloc the sg_table but instead
have the user pass a pointer to a preallocated instance of it. This way it
will mimic how sg_table is typically handled.
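One possible shape for the "caller provides the sg_table" variant mentioned
above. The name sg_clone_table() and this signature are hypothetical, purely
to illustrate the idea, and are not part of the posted patch:

/* Hypothetical variant: the caller owns the struct sg_table itself */
int sg_clone_table(struct sg_table *orig_table, struct sg_table *new_table,
                   size_t len, gfp_t gfp_mask)
{
        new_table->sgl = sg_clone(orig_table->sgl, len, gfp_mask);
        if (IS_ERR(new_table->sgl))
                return PTR_ERR(new_table->sgl);

        new_table->nents = new_table->orig_nents = sg_nents(new_table->sgl);
        return 0;
}

Usage would then mirror how sg_table is normally handled: the caller declares
a struct sg_table on its own and calls sg_clone_table(orig, &clone, len,
GFP_KERNEL).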
On Wed, Jul 6, 2016 at 8:09 PM, Franklin S Cooper Jr. <fcooper@ti.com> wrote:

>>> +               last_entry = sg_is_last(sgl);
>>> +               *sgl = *orig_sgl;
>>> +
>>> +               /*
>>> +                * If page_link is pointing to a chained sgl then set it to
>>> +                * zero since we already compensated for chained sgl. If
>>> +                * page_link is pointing to a page then clear bits 1 and 0.
>>> +                */
>>> +               if (sg_is_chain(orig_sgl))
>>> +                       sgl->page_link = 0x0;
>>> +               else
>>> +                       sgl->page_link &= ~0x03;
>>> +
>>
>>> +               if (last_entry) {
>>> +                       sg_dma_len(sgl) = len;
>>
>> Hmm... If it's not DMA mapped and CONFIG_NEED_SG_DMA_LENGTH=y, you
>> will set dma_length field and len otherwise. Is it on purpose?
>
> Thanks for pointing this out. I didn't take into consideration the above
> scenario.
>
> After looking at this more it seems like on occasion dma_length !=
> length. I see this occurring in various map_sg functions for several
> architectures.
>
> I'm simply trying to reduce the overall sgl length by a certain amount.
> I'm not sure what the proper approach would be since dma_length and
> length can be set to different values. Simply setting sgl->dma_length to
> sgl->length or vice versa seems like it can be problematic in some
> scenarios. What confuses me even more is if dma_length != length then
> how can sg_nents_for_len only use the sgl length property.
>
> Any thoughts on what would be the best way to handle this?

The first thing that comes to my mind is to refuse to clone if the SG table
is DMA mapped.

Imagine the scenario where you have, for example, a last entry which your
caller asked to split as:

  total_entry_len   = 37235 (input)
  dma_length        = 65536 (64k alignment)
  asked_len         = 32768 (split chunks: 32768 + 4467)
  resulting_dma_len = ?

What do you put here? Initially it perhaps means the same 64k. But how to
distinguish? (I couldn't come up with a better one, but there are many alike
scenarios.)

The only hack we have is to supply the struct device of the DMA device to
this SG table, where you can get the max segment size (but I dunno about
alignment).

P.S. I'm not familiar with the details of all these.
On 07/06/2016 12:46 PM, Andy Shevchenko wrote: > On Wed, Jul 6, 2016 at 8:09 PM, Franklin S Cooper Jr. <fcooper@ti.com> wrote: > >>>> + last_entry = sg_is_last(sgl); >>>> + *sgl = *orig_sgl; >>>> + >>>> + /* >>>> + * If page_link is pointing to a chained sgl then set it to >>>> + * zero since we already compensated for chained sgl. If >>>> + * page_link is pointing to a page then clear bits 1 and 0. >>>> + */ >>>> + if (sg_is_chain(orig_sgl)) >>>> + sgl->page_link = 0x0; >>>> + else >>>> + sgl->page_link &= ~0x03; >>>> + >>> >>>> + if (last_entry) { >>>> + sg_dma_len(sgl) = len; >>> >>> Hmm... If it's not DMA mapped and CONFIG_NEED_SG_DMA_LENGTH=y, you >>> will set dma_length field and len otherwise. Is it on purpose? >> >> Thanks for pointing this out. I didn't take into consideration the above >> scenario. >> >> After looking at this more it seems like on occasion dma_length != >> length. I see this occurring in various map_sg functions for several >> architectures. >> >> I'm simply trying to reduce the overall sgl length by a certain amount. >> I'm not sure what the proper approach would be since dma_length and >> length can be set to different values. Simply setting sgl->dma_length to >> sgl->length or vice a versa seems like it can be problematic in some >> scenarios. What confuses me even more is if dma_length != length then >> how can sg_nents_for_len only use the sgl length property. >> >> Any thoughts on what would be the best way to handle this? > > First come to my mind is to refuse to clone if SG table is DMA mapped. > > Imagine the scenario where you have for example last entry which your > caller asked to split as > > total_entry_len = 37235 (input) > dma_length = 65536 (64k alignment) Do you know of an example where dma_length will be set to some kind of alignment size rather than based on the number of bytes you want transfered? Or maybe I'm miss understanding this. > > asked_len = 32768 (Split chunks: 32768 + 4467) > resulting_dma_len = ? > > What do you put here? Initially it perhaps means the same 64k. But how > to distinguish? > (I couldn't come with better one, but there are many alike scenarios) > > Only hack we have is to supply struct device of the DMA device to this > SG table where you can get max segment size (but I dunno about > alignment). > > P.S. I'm not familiar of the details of all these. > Unfortunately, for the original purpose of this series the scatterlist I want to clone is indeed DMA mapped. The times I've seen dma_length != length is when the dma_map_sg is trying to combine contiguous sgl (ex ppc_iommu_map_sg and dma_map_cont). So maybe it is better to subtract a value from length and dma_length rather than explicitly setting it to a value. This way it wouldn't matter that dma_length and length aren't equal. This looks like a similar approach used by sg_split_mapped. Although sg_split skips first x number of bytes while I want to skip the last x number of bytes. Maybe Robert can add his thoughts since he created sg_split. -- To unsubscribe from this list: send the line "unsubscribe linux-spi" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
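As a sketch, the "subtract from both" idea for trimming the tail of a mapped
list might look like the fragment below. The names trim, orig_len and new_len
are hypothetical, and this still does not answer which entry the trim should
land on once entries have been coalesced, which is the problem Robert
describes next:

/* Hypothetical: drop 'trim' bytes from the end of the cloned last entry */
size_t trim = orig_len - new_len;

if (last_entry) {
        sgl->length -= trim;
        sg_dma_len(sgl) -= trim;  /* shrink the DMA view by the same amount */
        sg_mark_end(sgl);
}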
"Franklin S Cooper Jr." <fcooper@ti.com> writes: > Unfortunately, for the original purpose of this series the scatterlist I > want to clone is indeed DMA mapped. > > The times I've seen dma_length != length is when the dma_map_sg is > trying to combine contiguous sgl (ex ppc_iommu_map_sg and dma_map_cont). > So maybe it is better to subtract a value from length and dma_length > rather than explicitly setting it to a value. This way it wouldn't > matter that dma_length and length aren't equal. > > This looks like a similar approach used by sg_split_mapped. Although > sg_split skips first x number of bytes while I want to skip the last x > number of bytes. > > Maybe Robert can add his thoughts since he created sg_split. Hi, I've not read it all, but the above chapter is right : dma_length might be different from length when a dma mapping operation coallesced 2 sg entries into a "DMA contiguous one". This coallescing might happen for at least for 2 reasons as far as I know : - 2 sg entries are physically contiguous, ie : sg_phys(sga) + sga.length == sg_phys(sgb) - 2 sg entries are DMA contiguous when an IOMMU maps them contiguously, ie : sg_dma_address(sga) + sga.length == sg_dma_address(sgb) Moreover, this 2 coallescing cases imply a "shift" between (page_link, length) and (dma_address and dma_length). For example a mapped sglist might look like : -sg0: page_link=>page@0k, length=4096, dma_address=0, dma_length=8192 -sg1: page_link=>page@4k, length=4096, dma_address=8192, dma_length=16 \=> see the shift here coming from sg2 -sg2: page_link=>page@8k, length=16, dma_address=0, dma_length=0 For these "tricky" cases, at the time I created sg_split I had done a tester as well. It's very basic, doesn't cover all the corner cases, is a bit dumb, but you might have a look, and the brain cost you'll pay to adapt it to test what you want will hopefully pay off by the knowledge gained on scatterlist. It is appended at the end of the mail. Cheers.
On Thu, Jul 07, 2016 at 12:04:33AM +0200, Robert Jarzmik wrote:

> For these "tricky" cases, at the time I created sg_split I had done a tester
> as well. It's very basic, doesn't cover all the corner cases, and is a bit
> dumb, but you might have a look, and the brain cost you'll pay to adapt it to
> test what you want will hopefully pay off through the knowledge gained on
> scatterlists. It is appended at the end of the mail.

Might be worth getting this into the kernel source, under tools/testing
perhaps?
On 07/06/2016 05:04 PM, Robert Jarzmik wrote:
> "Franklin S Cooper Jr." <fcooper@ti.com> writes:
>
>> Unfortunately, for the original purpose of this series the scatterlist I
>> want to clone is indeed DMA mapped.
>>
>> The times I've seen dma_length != length is when the dma_map_sg is
>> trying to combine contiguous sgl (ex ppc_iommu_map_sg and dma_map_cont).
>> So maybe it is better to subtract a value from length and dma_length
>> rather than explicitly setting it to a value. This way it wouldn't
>> matter that dma_length and length aren't equal.
>>
>> This looks like a similar approach used by sg_split_mapped. Although
>> sg_split skips first x number of bytes while I want to skip the last x
>> number of bytes.
>>
>> Maybe Robert can add his thoughts since he created sg_split.
>
> Hi,
>
> I've not read it all, but the above chapter is right: dma_length might be
> different from length when a DMA mapping operation coalesced two sg entries
> into a "DMA contiguous" one. This coalescing can happen for at least two
> reasons as far as I know:
>  - two sg entries are physically contiguous, ie:
>    sg_phys(sga) + sga.length == sg_phys(sgb)
>  - two sg entries are DMA contiguous when an IOMMU maps them contiguously, ie:
>    sg_dma_address(sga) + sga.length == sg_dma_address(sgb)
>
> Moreover, these two coalescing cases imply a "shift" between (page_link,
> length) and (dma_address, dma_length). For example a mapped sglist might look
> like:
>  - sg0: page_link => page@0k, length=4096, dma_address=0,    dma_length=8192
>  - sg1: page_link => page@4k, length=4096, dma_address=8192, dma_length=16
>         \=> see the shift here, coming from sg2
>  - sg2: page_link => page@8k, length=16,   dma_address=0,    dma_length=0
>
> For these "tricky" cases, at the time I created sg_split I had done a tester
> as well. It's very basic, doesn't cover all the corner cases, and is a bit
> dumb, but you might have a look, and the brain cost you'll pay to adapt it to
> test what you want will hopefully pay off through the knowledge gained on
> scatterlists. It is appended at the end of the mail.

Thanks Robert for your input and for sharing your test program. After looking
at sg_split and the test program I realized that I can use it instead of
creating my own clone function. It seems like you and your patch reviewers
already considered and dealt with the above complex scenario.

I updated my patchset to use sg_split rather than my custom function since it
seems to solve my original problem. I plan on sending out a v3 that
incorporates this change.

> Cheers.
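For readers following the switch to sg_split(): its prototype is visible in
the context lines of the diff below, and a hedged sketch of using it to keep
only the first part of an already-mapped list might look like this (names
such as table, mapped_nents and first_chunk_len are placeholders, and error
handling is minimal):

/* Keep only the first 'first_chunk_len' bytes of a mapped list */
const size_t split_sizes[1] = { first_chunk_len };
struct scatterlist *out_sgl;
int out_mapped_nents;
int ret;

ret = sg_split(table->sgl, mapped_nents, 0 /* skip */, 1 /* nb_splits */,
               split_sizes, &out_sgl, &out_mapped_nents, GFP_KERNEL);
if (ret)
        return ret;

/* out_sgl describes the first chunk in both CPU and DMA views; the split
 * list itself is a plain allocation and is released with kfree(out_sgl).
 */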
Mark Brown <broonie@kernel.org> writes:

> On Thu, Jul 07, 2016 at 12:04:33AM +0200, Robert Jarzmik wrote:
>
>> For these "tricky" cases, at the time I created sg_split I had done a tester
>> as well. It's very basic, doesn't cover all the corner cases, and is a bit
>> dumb, but you might have a look, and the brain cost you'll pay to adapt it to
>> test what you want will hopefully pay off through the knowledge gained on
>> scatterlists. It is appended at the end of the mail.
>
> Might be worth getting this into the kernel source, under tools/testing
> perhaps?

Maybe so. I'll try, but I don't trust much my chances of success, given that
this tester:
 - should compile and link in $(TOP)/lib/scatterlist.c, as this is where
   sg_split() is defined
 - this implies all its includes
 - this implies at least these ones: bug.h, mm.h, scatterlist.h, string.h,
   types.h
 - this implies having page_to_phys and co. defined somewhere without
   draining the whole include/linux and include/asm* trees

For the tester, I had created a separate include/linux tree where all the
includes were _manually_ filled in with minimal content.

I don't know if an existing selftest has already had this kind of problem,
i.e. having to compile and link a kernel .c file, and that makes me feel this
might be difficult to keep as a nice standalone tester.

Cheers.
On Thu, Jul 07, 2016 at 07:43:25PM +0200, Robert Jarzmik wrote:

> I'll try, but I don't trust much my chances of success, given that this
> tester:
>  - should compile and link in $(TOP)/lib/scatterlist.c, as this is where
>    sg_split() is defined
>  - this implies all its includes
>  - this implies at least these ones: bug.h, mm.h, scatterlist.h, string.h,
>    types.h
>  - this implies having page_to_phys and co. defined somewhere without
>    draining the whole include/linux and include/asm* trees

> For the tester, I had created a separate include/linux tree where all the
> includes were _manually_ filled in with minimal content.

> I don't know if an existing selftest has already had this kind of problem,
> i.e. having to compile and link a kernel .c file, and that makes me feel
> this might be difficult to keep as a nice standalone tester.

Right, that's messy :( Could it be refactored as a boot/module load time
test so it could be built in the kernel environment? Less convenient to use
(though KVM/UML help) but easier to build.
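A boot/module-load time test along the lines suggested here would follow the
usual pattern of the in-kernel lib/test_*.c modules. The skeleton below is
purely hypothetical (no such file exists in the tree at this point) and only
shows the shape, splitting an unmapped three-entry list:

#include <linux/module.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static int __init sg_split_test_init(void)
{
        struct scatterlist in[3], *out;
        const size_t split_sizes[1] = { 6000 };
        int out_nents, ret;
        char *buf;

        buf = kmalloc(3 * PAGE_SIZE, GFP_KERNEL);
        if (!buf)
                return -ENOMEM;

        /* Build a small 4096 + 4096 + 16 byte list out of one kmalloc buffer */
        sg_init_table(in, 3);
        sg_set_buf(&in[0], buf, 4096);
        sg_set_buf(&in[1], buf + 4096, 4096);
        sg_set_buf(&in[2], buf + 8192, 16);

        /* Unmapped list: in_mapped_nents == 0, so only CPU lengths are split */
        ret = sg_split(in, 0, 0, 1, split_sizes, &out, &out_nents, GFP_KERNEL);
        pr_info("sg_split test: ret=%d\n", ret);
        if (!ret) {
                pr_info("sg_split test: split list has %d entries\n",
                        sg_nents(out));
                kfree(out);
        }

        kfree(buf);
        return ret;
}
module_init(sg_split_test_init);

static void __exit sg_split_test_exit(void)
{
}
module_exit(sg_split_test_exit);

MODULE_LICENSE("GPL");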
Mark Brown <broonie@kernel.org> writes:

> On Thu, Jul 07, 2016 at 07:43:25PM +0200, Robert Jarzmik wrote:
>
>> I'll try, but I don't trust much my chances of success, given that this
>> tester:
>>  - should compile and link in $(TOP)/lib/scatterlist.c, as this is where
>>    sg_split() is defined
>>  - this implies all its includes
>>  - this implies at least these ones: bug.h, mm.h, scatterlist.h, string.h,
>>    types.h
>>  - this implies having page_to_phys and co. defined somewhere without
>>    draining the whole include/linux and include/asm* trees
>>
>> For the tester, I had created a separate include/linux tree where all the
>> includes were _manually_ filled in with minimal content.
>>
>> I don't know if an existing selftest has already had this kind of problem,
>> i.e. having to compile and link a kernel .c file, and that makes me feel
>> this might be difficult to keep as a nice standalone tester.
>
> Right, that's messy :( Could it be refactored as a boot/module load
> time test so it could be built in the kernel environment? Less
> convenient to use (though KVM/UML help) but easier to build.

Actually I thought a bit about it and I got a "messy" idea. In order to keep
things in userspace (for tests it really is more convenient), I'm going to
try this approach:
 - make tools/testing/selftests/lib/sg_split.c, based on the tester
 - make the Makefile generate a scatterlist_generated.c out of
   lib/scatterlist.c, where all the includes will be grepped out and replaced
   by a single "#include sg_split.h" containing all the necessary defines
 - create the sg_split.h as a concatenation of all the .h files I had to use
   for the tester (I will reuse existing .h files where applicable)

We'll see if that's feasible or not. As long as the "mess" is contained
within sg_split.h, and if it doesn't become a too ugly beast, it might work.

Cheers.
diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index cb3c8fe..9a109da 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -247,6 +247,8 @@ struct scatterlist *sg_next(struct scatterlist *);
 struct scatterlist *sg_last(struct scatterlist *s, unsigned int);
 void sg_init_table(struct scatterlist *, unsigned int);
 void sg_init_one(struct scatterlist *, const void *, unsigned int);
+struct sg_table *sg_table_clone(struct sg_table *orig_table, u64 len,
+                               gfp_t gfp_mask);
 int sg_split(struct scatterlist *in, const int in_mapped_nents,
              const off_t skip, const int nb_splits,
              const size_t *split_sizes,
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 004fc70..b15f648 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -180,6 +180,109 @@ static struct scatterlist *sg_kmalloc(unsigned int nents, gfp_t gfp_mask)
 	return kmalloc(nents * sizeof(struct scatterlist), gfp_mask);
 }
 
+/*
+ * sg_clone - Duplicate an existing chained sgl
+ * @orig_sgl: Original sg list to be duplicated
+ * @len: Total length of sg while taking chaining into account
+ * @gfp_mask: GFP allocation mask
+ *
+ * Description:
+ * Clone a chained sgl. This cloned copy may be modified in some ways while
+ * keeping the original sgl in tact. Also allow the cloned copy to have
+ * a smaller length than the original which may reduce the sgl total
+ * sg entries.
+ *
+ * Returns:
+ * Pointer to new kmalloced sg list, ERR_PTR() on error
+ *
+ */
+static struct scatterlist *sg_clone(struct scatterlist *orig_sgl, u64 len,
+				    gfp_t gfp_mask)
+{
+	unsigned int nents;
+	bool last_entry;
+	struct scatterlist *sgl, *head;
+
+	nents = sg_nents_for_len(orig_sgl, len);
+
+	if (nents < 0)
+		return ERR_PTR(-EINVAL);
+
+	head = sg_kmalloc(nents, gfp_mask);
+
+	if (!head)
+		return ERR_PTR(-ENOMEM);
+
+	sgl = head;
+
+	sg_init_table(sgl, nents);
+
+	for (; sgl; orig_sgl = sg_next(orig_sgl), sgl = sg_next(sgl)) {
+
+		last_entry = sg_is_last(sgl);
+		*sgl = *orig_sgl;
+
+		/*
+		 * If page_link is pointing to a chained sgl then set it to
+		 * zero since we already compensated for chained sgl. If
+		 * page_link is pointing to a page then clear bits 1 and 0.
+		 */
+		if (sg_is_chain(orig_sgl))
+			sgl->page_link = 0x0;
+		else
+			sgl->page_link &= ~0x03;
+
+		if (last_entry) {
+			sg_dma_len(sgl) = len;
+			/* Set bit 1 to indicate end of sgl */
+			sgl->page_link |= 0x02;
+		} else {
+			len -= sg_dma_len(sgl);
+		}
+	}
+
+	return head;
+}
+
+/*
+ * sg_table_clone - Duplicate an existing sg_table including chained sgl
+ * @orig_table: Original sg_table to be duplicated
+ * @len: Total length of sg while taking chaining into account
+ * @gfp_mask: GFP allocation mask
+ *
+ * Description:
+ * Clone a sg_table along with chained sgl. This cloned copy may be
+ * modified in some ways while keeping the original table and sgl in tact.
+ * Also allow the cloned sgl copy to have a smaller length than the original
+ * which may reduce the sgl total sg entries.
+ *
+ * Returns:
+ * Pointer to new kmalloced sg_table, ERR_PTR() on error
+ *
+ */
+struct sg_table *sg_table_clone(struct sg_table *orig_table, u64 len,
+				gfp_t gfp_mask)
+{
+	struct sg_table *table;
+
+	table = kmalloc(sizeof(struct sg_table), gfp_mask);
+
+	if (!table)
+		return ERR_PTR(-ENOMEM);
+
+	table->sgl = sg_clone(orig_table->sgl, len, gfp_mask);
+
+	if (IS_ERR(table->sgl)) {
+		kfree(table);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	table->nents = table->orig_nents = sg_nents(table->sgl);
+
+	return table;
+}
+EXPORT_SYMBOL(sg_table_clone);
+
 static void sg_kfree(struct scatterlist *sg, unsigned int nents)
 {
 	if (nents == SG_MAX_SINGLE_ALLOC) {
Occasionally there are times you need to tweak a chained S/G list while
maintaining the original list. This function will duplicate the passed in
chained S/G list and return a pointer to the cloned copy.

The function also supports passing in a length that can be equal or less
than the original chained S/G list length. Reducing the length can result
in less entries of the chained S/G list being created.

Signed-off-by: Franklin S Cooper Jr <fcooper@ti.com>
---
 include/linux/scatterlist.h |   2 +
 lib/scatterlist.c           | 103 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 105 insertions(+)
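For context, a caller of the interface exactly as posted above would look
roughly like this sketch (the SPI code that motivated the series is not
shown, and error handling is minimal):

struct sg_table *clone;

/* Clone the first 'len' bytes of an existing, possibly chained, table */
clone = sg_table_clone(orig_table, len, GFP_KERNEL);
if (IS_ERR(clone))
        return PTR_ERR(clone);

/* ... hand clone->sgl to the DMA engine or other consumer ... */

/* With the patch as posted the clone is freed by hand, not via
 * sg_free_table().
 */
kfree(clone->sgl);
kfree(clone);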