Message ID | 20240704070357.1993-5-kundan.kumar@samsung.com (mailing list archive)
State      | New, archived
Series     | block: add larger order folio instead of pages
> 		nr_pages = (fi.offset + fi.length - 1) / PAGE_SIZE -
> 			   fi.offset / PAGE_SIZE + 1;
> +		bio_release_folio(bio, fi.folio, nr_pages);
> 	}
> }
> EXPORT_SYMBOL_GPL(__bio_release_pages);
> diff --git a/block/blk.h b/block/blk.h
> index 0c8857fe4079..18520b05c6ce 100644
> --- a/block/blk.h
> +++ b/block/blk.h
> @@ -548,6 +548,13 @@ static inline void bio_release_page(struct bio *bio, struct page *page)
>  		unpin_user_page(page);
>  }
>  
> +static inline void bio_release_folio(struct bio *bio, struct folio *folio,
> +				     unsigned long npages)
> +{
> +	if (bio_flagged(bio, BIO_PAGE_PINNED))
> +		unpin_user_folio(folio, npages);
> +}

This is only used in __bio_release_pages, and given that
__bio_release_pages is only called when BIO_PAGE_PINNED is set there is
no need to check it inside the loop again.

Also this means we know the loop doesn't do anything if mark_dirty is
false, which is another trivial check that can move into
bio_release_pages.  As this optimization already applies as-is I'll
send a prep patch for it.

so that we can avoid the npages calculation for the !BIO_PAGE_PINNED
case.  Moreover having the BIO_PAGE_PINNED knowledge there means we
can skip the entire loop for !BIO_PAGE_PINNED &&

> +/**
> + * unpin_user_folio() - release pages of a folio
> + * @folio: pointer to folio to be released
> + * @npages: number of pages of same folio
> + *
> + * Release npages of the folio
> + */
> +void unpin_user_folio(struct folio *folio, unsigned long npages)
> +{
> +	gup_put_folio(folio, npages, FOLL_PIN);
> +}
> +EXPORT_SYMBOL(unpin_user_folio);

Please don't hide a new MM API inside a block patch, but split it out
with a mm prefix.
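For context on the first point: the only path into __bio_release_pages()
is the bio_release_pages() wrapper, which already tests the flag before
entering the loop. Quoted approximately (from include/linux/bio.h of
that era, not from this patch):

static inline void bio_release_pages(struct bio *bio, bool mark_dirty)
{
	/* the loop in __bio_release_pages() only runs for pinned pages */
	if (bio_flagged(bio, BIO_PAGE_PINNED))
		__bio_release_pages(bio, mark_dirty);
}

so a per-folio BIO_PAGE_PINNED test inside the loop can never see the
flag clear.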
On Sat, Jul 06, 2024 at 10:10:21AM +0200, Christoph Hellwig wrote:
> Also this means we know the loop doesn't do anything if mark_dirty is
> false, which is another trivial check that can move into
> bio_release_pages.  As this optimization already applies as-is I'll
> send a prep patch for it.
> 
> so that we can avoid the npages calculation for the !BIO_PAGE_PINNED
> case.  Moreover having the BIO_PAGE_PINNED knowledge there means we
> can skip the entire loop for !BIO_PAGE_PINNED &&

Braino - we're obviously already skipping it for all !BIO_PAGE_PINNED
cases.  So discard most of this except for the fact that we should skip
the wrapper doing the extra BIO_PAGE_PINNED checks.
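Folding the surviving suggestion into the patched function gives roughly
the following; a minimal sketch assuming, as above, that
bio_release_pages() has already checked BIO_PAGE_PINNED, so
unpin_user_folio() can be called directly with no bio_release_folio()
wrapper:

void __bio_release_pages(struct bio *bio, bool mark_dirty)
{
	struct folio_iter fi;

	bio_for_each_folio_all(fi, bio) {
		size_t nr_pages;

		if (mark_dirty) {
			folio_lock(fi.folio);
			folio_mark_dirty(fi.folio);
			folio_unlock(fi.folio);
		}
		/* pages of this folio covered by the bio segment */
		nr_pages = (fi.offset + fi.length - 1) / PAGE_SIZE -
			   fi.offset / PAGE_SIZE + 1;
		/* drop all of the segment's FOLL_PIN refs in one call */
		unpin_user_folio(fi.folio, nr_pages);
	}
}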
diff --git a/block/bio.c b/block/bio.c
index 32c9c6d80384..0d923140006e 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1205,20 +1205,15 @@ void __bio_release_pages(struct bio *bio, bool mark_dirty)
 	struct folio_iter fi;
 
 	bio_for_each_folio_all(fi, bio) {
-		struct page *page;
 		size_t nr_pages;
-
 		if (mark_dirty) {
 			folio_lock(fi.folio);
 			folio_mark_dirty(fi.folio);
 			folio_unlock(fi.folio);
 		}
-		page = folio_page(fi.folio, fi.offset / PAGE_SIZE);
 		nr_pages = (fi.offset + fi.length - 1) / PAGE_SIZE -
 			   fi.offset / PAGE_SIZE + 1;
-		do {
-			bio_release_page(bio, page++);
-		} while (--nr_pages != 0);
+		bio_release_folio(bio, fi.folio, nr_pages);
 	}
 }
 EXPORT_SYMBOL_GPL(__bio_release_pages);
diff --git a/block/blk.h b/block/blk.h
index 0c8857fe4079..18520b05c6ce 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -548,6 +548,13 @@ static inline void bio_release_page(struct bio *bio, struct page *page)
 		unpin_user_page(page);
 }
 
+static inline void bio_release_folio(struct bio *bio, struct folio *folio,
+				     unsigned long npages)
+{
+	if (bio_flagged(bio, BIO_PAGE_PINNED))
+		unpin_user_folio(folio, npages);
+}
+
 struct request_queue *blk_alloc_queue(struct queue_limits *lim, int node_id);
 
 int disk_scan_partitions(struct gendisk *disk, blk_mode_t mode);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9849dfda44d4..b902c6c39e2b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1618,6 +1618,7 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
 void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages,
 				      bool make_dirty);
 void unpin_user_pages(struct page **pages, unsigned long npages);
+void unpin_user_folio(struct folio *folio, unsigned long npages);
 
 static inline bool is_cow_mapping(vm_flags_t flags)
 {
diff --git a/mm/gup.c b/mm/gup.c
index ca0f5cedce9b..bc96efa43d1b 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -488,6 +488,19 @@ void unpin_user_pages(struct page **pages, unsigned long npages)
 }
 EXPORT_SYMBOL(unpin_user_pages);
 
+/**
+ * unpin_user_folio() - release pages of a folio
+ * @folio: pointer to folio to be released
+ * @npages: number of pages of same folio
+ *
+ * Release npages of the folio
+ */
+void unpin_user_folio(struct folio *folio, unsigned long npages)
+{
+	gup_put_folio(folio, npages, FOLL_PIN);
+}
+EXPORT_SYMBOL(unpin_user_folio);
+
 /*
  * Set the MMF_HAS_PINNED if not set yet; after set it'll be there for the mm's
  * lifecycle.  Avoid setting the bit unless necessary, or it might cause write
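To make the nr_pages arithmetic above concrete, take hypothetical
values with PAGE_SIZE == 4096: a segment with fi.offset = 1024 and
fi.length = 8192 spans bytes 1024..9215 of the folio, i.e. folio pages
0, 1 and 2:

	nr_pages = (1024 + 8192 - 1) / 4096 - 1024 / 4096 + 1
	         = 9215 / 4096 - 1024 / 4096 + 1
	         = 2 - 0 + 1
	         = 3

so three page pins on the folio are released for this segment.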
Add a new function bio_release_folio() and use it to put refs by
npages count.

Signed-off-by: Kundan Kumar <kundan.kumar@samsung.com>
---
 block/bio.c        |  7 +------
 block/blk.h        |  7 +++++++
 include/linux/mm.h |  1 +
 mm/gup.c           | 13 +++++++++++++
 4 files changed, 22 insertions(+), 6 deletions(-)
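To illustrate what unpin_user_folio() is for (a hypothetical sketch,
not code from this series): each page pinned with FOLL_PIN holds one
pin reference on its folio, so when several pinned pages belong to the
same large folio the refs can be dropped in one call rather than via
unpin_user_page() per page. uaddr and the single-folio assumption below
are made up for the example:

	struct page *pages[16];
	long n;

	/* pin 16 user pages, e.g. for direct I/O */
	n = pin_user_pages(uaddr, 16, FOLL_WRITE, pages);
	if (n > 0) {
		/* assuming all n pages landed in one large folio */
		struct folio *folio = page_folio(pages[0]);

		/* drop all n FOLL_PIN references at once */
		unpin_user_folio(folio, n);
	}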