
[3/4] Btrfs: handle unaligned tail of data ranges more efficiently

Message ID 20171003150604.19596-4-nefelim4ag@gmail.com (mailing list archive)
State New, archived

Commit Message

Timofey Titovets Oct. 3, 2017, 3:06 p.m. UTC
Currently, when switching page bits on data ranges,
we always handle one extra page to cover the case
where the end of the data range is not page aligned.

Let's handle that case more explicitly and efficiently:
check the end alignment directly and touch the extra page
only when needed.

Signed-off-by: Timofey Titovets <nefelim4ag@gmail.com>
---
 fs/btrfs/extent_io.c | 12 ++++++++++--
 fs/btrfs/inode.c     |  6 +++++-
 2 files changed, 15 insertions(+), 3 deletions(-)

--
2.14.2

Comments

David Sterba Oct. 10, 2017, 4:37 p.m. UTC | #1
On Tue, Oct 03, 2017 at 06:06:03PM +0300, Timofey Titovets wrote:
> Currently, when switching page bits on data ranges,
> we always handle one extra page to cover the case
> where the end of the data range is not page aligned.

The 'end' is inclusive and thus not aligned in most cases, i.e. it's
at offset 4095 in the page, so the !IS_ALIGNED(end, PAGE_SIZE) check is
always true and the code is equivalent to the existing condition
(index <= end_index).
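
A minimal userspace sketch of this point (not kernel code; it assumes
4 KiB pages and callers passing ranges that cover whole pages, and the
macro and variable names are made up for the illustration):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1ULL << PAGE_SHIFT)
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

int main(void)
{
	uint64_t start = 0;
	uint64_t npages;

	for (npages = 1; npages <= 8; npages++) {
		/* 'end' is inclusive: the last byte of the range. */
		uint64_t end = start + npages * PAGE_SIZE - 1;
		uint64_t old_end_index = end >> PAGE_SHIFT;
		uint64_t new_end_index = end >> PAGE_SHIFT;

		/* The last byte sits at offset 4095 within its page,
		 * so the alignment check fires every time... */
		assert(!IS_ALIGNED(end, PAGE_SIZE));
		if (!IS_ALIGNED(end, PAGE_SIZE))
			new_end_index++;

		/* ...and "index < new_end_index" visits exactly the same
		 * pages as the existing "index <= old_end_index". */
		assert(new_end_index == old_end_index + 1);
	}
	printf("old and new loop bounds cover the same pages\n");
	return 0;
}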
Timofey Titovets Oct. 15, 2017, 10:09 p.m. UTC | #2
Maybe then just add a comment to at least one of those functions?
Like:
/*
 * Handle the unaligned end; end is inclusive, so it is always unaligned
 */

Or something like:
/*
 * The kernel uses paging, so a range almost always has an aligned
 * start like 0 and an unaligned end like 8192 - 1
 */

Or do we assume that everybody who looks at kernel code must understand
these basic things?

Thanks


2017-10-10 19:37 GMT+03:00 David Sterba <dsterba@suse.cz>:
> On Tue, Oct 03, 2017 at 06:06:03PM +0300, Timofey Titovets wrote:
>> Currently, when switching page bits on data ranges,
>> we always handle one extra page to cover the case
>> where the end of the data range is not page aligned.
>
> The 'end' is inclusive and thus not aligned in most cases, i.e. it's
> at offset 4095 in the page, so the !IS_ALIGNED(end, PAGE_SIZE) check is
> always true and the code is equivalent to the existing condition
> (index <= end_index).
David Sterba Oct. 17, 2017, 3:52 p.m. UTC | #3
On Mon, Oct 16, 2017 at 01:09:04AM +0300, Timofey Titovets wrote:
> Maybe then just add a comment to at least one of those functions?
> Like:
> /*
>  * Handle the unaligned end; end is inclusive, so it is always unaligned
>  */

This would be best documented in the data structure definition, so we
don't have to comment each and every test that compares something
against the range.
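
A hypothetical sketch of what such a note could look like next to a
range definition (the structure and field names here are illustrative,
not the actual btrfs definitions):

/* Illustrative only; not the actual btrfs structure. */
struct byte_range {
	u64 start;	/* offset of the first byte in the range */
	u64 end;	/* last byte, inclusive: for whole pages this is
			 * one byte short of a PAGE_SIZE multiple */
};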

> Or something like:
> /*
>  * The kernel uses paging, so a range almost always has an aligned
>  * start like 0 and an unaligned end like 8192 - 1
>  */
> 
> Or do we assume that everybody who looks at kernel code must understand
> these basic things?

Some level of understanding is required of course, but it's different
for everybody. If the semantics or constraints of code/structures are not
discoverable, then they should be documented.  Adding comment-only patches
is ok, but adding trivial comments or commenting on the obvious is
pointless. IME the sense of what is/is not trivial comes after reading a
lot of code.

Patch

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 0538bf85adc3..131b7d1df9f7 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1359,7 +1359,11 @@  void extent_range_clear_dirty_for_io(struct inode *inode, u64 start, u64 end)
 	unsigned long end_index = end >> PAGE_SHIFT;
 	struct page *page;

-	while (index <= end_index) {
+	/* Don't miss unaligned end */
+	if (!IS_ALIGNED(end, PAGE_SIZE))
+		end_index++;
+
+	while (index < end_index) {
 		page = find_get_page(inode->i_mapping, index);
 		BUG_ON(!page); /* Pages should be in the extent_io_tree */
 		clear_page_dirty_for_io(page);
@@ -1374,7 +1378,11 @@  void extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end)
 	unsigned long end_index = end >> PAGE_SHIFT;
 	struct page *page;

-	while (index <= end_index) {
+	/* Don't miss unaligned end */
+	if (!IS_ALIGNED(end, PAGE_SIZE))
+		end_index++;
+
+	while (index < end_index) {
 		page = find_get_page(inode->i_mapping, index);
 		BUG_ON(!page); /* Pages should be in the extent_io_tree */
 		__set_page_dirty_nobuffers(page);
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index b6e81bd650ea..b4974d969f67 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -10799,7 +10799,11 @@  void btrfs_set_range_writeback(void *private_data, u64 start, u64 end)
 	unsigned long end_index = end >> PAGE_SHIFT;
 	struct page *page;

-	while (index <= end_index) {
+	/* Don't miss unaligned end */
+	if (!IS_ALIGNED(end, PAGE_SIZE))
+		end_index++;
+
+	while (index < end_index) {
 		page = find_get_page(inode->i_mapping, index);
 		ASSERT(page); /* Pages should be in the extent_io_tree */
 		set_page_writeback(page);