[03/18] block/buffer_head: introduce block_{index_to_sector,sector_to_index}

Message ID 20230918110510.66470-4-hare@suse.de (mailing list archive)
State New, archived
Series block: update buffer_head for Large-block I/O

Commit Message

Hannes Reinecke Sept. 18, 2023, 11:04 a.m. UTC
Introduce accessor functions block_index_to_sector() and block_sector_to_index()
to convert the page index into the corresponding sector and vice versa.

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 include/linux/buffer_head.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

Comments

Matthew Wilcox (Oracle) Sept. 18, 2023, 4:36 p.m. UTC | #1
On Mon, Sep 18, 2023 at 01:04:55PM +0200, Hannes Reinecke wrote:
> @@ -449,6 +450,22 @@ __bread(struct block_device *bdev, sector_t block, unsigned size)
>  
>  bool block_dirty_folio(struct address_space *mapping, struct folio *folio);
>  
> +static inline sector_t block_index_to_sector(pgoff_t index, unsigned int blkbits)
> +{
> +	if (PAGE_SHIFT < blkbits)
> +		return (sector_t)index >> (blkbits - PAGE_SHIFT);
> +	else
> +		return (sector_t)index << (PAGE_SHIFT - blkbits);
> +}

Is this actually more efficient than ...

	loff_t pos = (loff_t)index * PAGE_SIZE;
	return pos >> blkbits;

It feels like we're going to be doing this a lot, so we should find out
what's actually faster.
Hannes Reinecke Sept. 18, 2023, 5:42 p.m. UTC | #2
On 9/18/23 18:36, Matthew Wilcox wrote:
> On Mon, Sep 18, 2023 at 01:04:55PM +0200, Hannes Reinecke wrote:
>> @@ -449,6 +450,22 @@ __bread(struct block_device *bdev, sector_t block, unsigned size)
>>   
>>   bool block_dirty_folio(struct address_space *mapping, struct folio *folio);
>>   
>> +static inline sector_t block_index_to_sector(pgoff_t index, unsigned int blkbits)
>> +{
>> +	if (PAGE_SHIFT < blkbits)
>> +		return (sector_t)index >> (blkbits - PAGE_SHIFT);
>> +	else
>> +		return (sector_t)index << (PAGE_SHIFT - blkbits);
>> +}
> 
> Is this actually more efficient than ...
> 
> 	loff_t pos = (loff_t)index * PAGE_SIZE;
> 	return pos >> blkbits;
> 
> It feels like we're going to be doing this a lot, so we should find out
> what's actually faster.
> 
I fear that's my numerical computation background chiming in again.
One always worries about numerical stability, and increasing a number
always risks running into an overflow.
But yeah, I guess your version is simpler, and we can always lean on
the compiler folks to have the compiler arrive at the same assembler
code as my version.

Cheers,

Hannes
Matthew Wilcox (Oracle) Sept. 18, 2023, 9:01 p.m. UTC | #3
On Mon, Sep 18, 2023 at 07:42:51PM +0200, Hannes Reinecke wrote:
> On 9/18/23 18:36, Matthew Wilcox wrote:
> > On Mon, Sep 18, 2023 at 01:04:55PM +0200, Hannes Reinecke wrote:
> > > @@ -449,6 +450,22 @@ __bread(struct block_device *bdev, sector_t block, unsigned size)
> > >   bool block_dirty_folio(struct address_space *mapping, struct folio *folio);
> > > +static inline sector_t block_index_to_sector(pgoff_t index, unsigned int blkbits)
> > > +{
> > > +	if (PAGE_SHIFT < blkbits)
> > > +		return (sector_t)index >> (blkbits - PAGE_SHIFT);
> > > +	else
> > > +		return (sector_t)index << (PAGE_SHIFT - blkbits);
> > > +}
> > 
> > Is this actually more efficient than ...
> > 
> > 	loff_t pos = (loff_t)index * PAGE_SIZE;
> > 	return pos >> blkbits;
> > 
> > It feels like we're going to be doing this a lot, so we should find out
> > what's actually faster.
> > 
> I fear that's my numerical computation background chiming in again.
> One always worries about numerical stability, and increasing a number
> always risks running into an overflow.
> But yeah, I guess your version is simpler, and we can always lean on the
> compiler folks to have the compiler arrive at the same assembler code as
> my version.

I actually don't mind the additional complexity -- if it's faster.
Yours is a conditional, two subtractions and two shifts (dependent on
the result of the subtractions).  Mine is two shifts, the second
dependent on the first.

I would say mine is safe because we're talking about a file (or a bdev).
By definition, the byte offset into one of those fits into an loff_t,
although maybe not an unsigned long.
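The two formulations under discussion can be compared in a userspace sketch. The types, `PAGE_SHIFT`, and `PAGE_SIZE` below are stand-ins for the kernel definitions (4 KiB pages are assumed); for any sensible blkbits the branch-and-shift version and the multiply-then-shift version produce the same result:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-ins for the kernel types and constants. */
typedef uint64_t sector_t;
typedef uint64_t pgoff_t;
typedef int64_t loff_t;
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* The branch-and-shift version from the patch. */
static sector_t index_to_sector_shift(pgoff_t index, unsigned int blkbits)
{
	if (PAGE_SHIFT < blkbits)
		return (sector_t)index >> (blkbits - PAGE_SHIFT);
	else
		return (sector_t)index << (PAGE_SHIFT - blkbits);
}

/* The multiply-then-shift alternative suggested in the thread. */
static sector_t index_to_sector_mul(pgoff_t index, unsigned int blkbits)
{
	loff_t pos = (loff_t)index * PAGE_SIZE;
	return pos >> blkbits;
}
```

Both round down when blkbits exceeds PAGE_SHIFT, so they agree on unaligned indices as well; as noted above, the multiply cannot overflow as long as the byte offset fits in an loff_t.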

Patch

diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 4ede47649a81..55a3032f8375 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -277,6 +277,7 @@  int generic_cont_expand_simple(struct inode *inode, loff_t size);
 void block_commit_write(struct page *page, unsigned int from, unsigned int to);
 int block_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf,
 				get_block_t get_block);
+
 sector_t generic_block_bmap(struct address_space *, sector_t, get_block_t *);
 int block_truncate_page(struct address_space *, loff_t, get_block_t *);
 
@@ -449,6 +450,22 @@  __bread(struct block_device *bdev, sector_t block, unsigned size)
 
 bool block_dirty_folio(struct address_space *mapping, struct folio *folio);
 
+static inline sector_t block_index_to_sector(pgoff_t index, unsigned int blkbits)
+{
+	if (PAGE_SHIFT < blkbits)
+		return (sector_t)index >> (blkbits - PAGE_SHIFT);
+	else
+		return (sector_t)index << (PAGE_SHIFT - blkbits);
+}
+
+static inline pgoff_t block_sector_to_index(sector_t block, unsigned int blkbits)
+{
+	if (PAGE_SHIFT < blkbits)
+		return (pgoff_t)block << (blkbits - PAGE_SHIFT);
+	else
+		return (pgoff_t)block >> (PAGE_SHIFT - blkbits);
+}
+
 #ifdef CONFIG_BUFFER_HEAD
 
 void buffer_init(void);
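The pair of helpers can be exercised the same way in a userspace sketch (again with stand-in types and an assumed PAGE_SHIFT of 12): block_sector_to_index() inverts block_index_to_sector() exactly when blkbits <= PAGE_SHIFT, and when blkbits > PAGE_SHIFT it rounds the page index down to the boundary of the containing block:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-ins for the kernel types and constants. */
typedef uint64_t sector_t;
typedef uint64_t pgoff_t;
#define PAGE_SHIFT 12

/* The two helpers from the patch, compilable outside the kernel. */
static sector_t block_index_to_sector(pgoff_t index, unsigned int blkbits)
{
	if (PAGE_SHIFT < blkbits)
		return (sector_t)index >> (blkbits - PAGE_SHIFT);
	else
		return (sector_t)index << (PAGE_SHIFT - blkbits);
}

static pgoff_t block_sector_to_index(sector_t block, unsigned int blkbits)
{
	if (PAGE_SHIFT < blkbits)
		return (pgoff_t)block << (blkbits - PAGE_SHIFT);
	else
		return (pgoff_t)block >> (PAGE_SHIFT - blkbits);
}
```

For 512-byte blocks (blkbits == 9) page 2 maps to block 16 and back; for 8 KiB blocks (blkbits == 13) page 5 maps to block 2, which maps back to page 4, the first page of that block.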