Message ID | 1389787258-10865-1-git-send-email-miaox@cn.fujitsu.com (mailing list archive)
---|---
State | Accepted |
On Wed, Jan 15, 2014 at 08:00:54PM +0800, Miao Xie wrote:
> It is better to place a lock close to the data it protects, because they
> may share a cache line and we will then load fewer cache lines when we
> access them. So rearrange the members of the btrfs_space_info structure
> to move the lock closer to its data.

I like that.

Reviewed-by: David Sterba <dsterba@suse.cz>

New structure layout according to pahole looks reasonable. We can try to
split 'groups_sem' and 'wait' into 2 cachelines, but this would need more
shuffling and maybe some measurement to see if this wins any perf percents.

struct btrfs_space_info {
	spinlock_t                 lock;                 /*     0     4 */

	/* XXX 4 bytes hole, try to pack */

	u64                        total_bytes;          /*     8     8 */
	u64                        bytes_used;           /*    16     8 */
	u64                        bytes_pinned;         /*    24     8 */
	u64                        bytes_reserved;       /*    32     8 */
	u64                        bytes_may_use;        /*    40     8 */
	u64                        bytes_readonly;       /*    48     8 */
	unsigned int               full:1;               /*    56:31  4 */
	unsigned int               chunk_alloc:1;        /*    56:30  4 */
	unsigned int               flush:1;              /*    56:29  4 */

	/* XXX 29 bits hole, try to pack */

	unsigned int               force_alloc;          /*    60     4 */
	/* --- cacheline 1 boundary (64 bytes) --- */
	u64                        disk_used;            /*    64     8 */
	u64                        disk_total;           /*    72     8 */
	u64                        flags;                /*    80     8 */
	struct percpu_counter      total_bytes_pinned;   /*    88    40 */
	/* --- cacheline 2 boundary (128 bytes) --- */
	struct list_head           list;                 /*   128    16 */
	struct rw_semaphore        groups_sem;           /*   144    32 */
	struct list_head           block_groups[7];      /*   176   112 */
	/* --- cacheline 4 boundary (256 bytes) was 32 bytes ago --- */
	wait_queue_head_t          wait;                 /*   288    24 */

	/* size: 312, cachelines: 5, members: 19 */
	/* sum members: 308, holes: 1, sum holes: 4 */
	/* bit holes: 1, sum bit holes: 29 bits */
	/* last cacheline: 56 bytes */
};
diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 54ab861..1667c9a 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -1094,7 +1094,7 @@ struct btrfs_qgroup_limit_item {
 } __attribute__ ((__packed__));
 
 struct btrfs_space_info {
-	u64 flags;
+	spinlock_t lock;
 
 	u64 total_bytes;	/* total bytes in the space,
 				   this doesn't take mirrors into account */
@@ -1104,14 +1104,25 @@ struct btrfs_space_info {
 				   transaction finishes */
 	u64 bytes_reserved;	/* total bytes the allocator has reserved for
 				   current allocations */
-	u64 bytes_readonly;	/* total bytes that are read only */
-
 	u64 bytes_may_use;	/* number of bytes that may be used for
 				   delalloc/allocations */
+	u64 bytes_readonly;	/* total bytes that are read only */
+
+	unsigned int full:1;	/* indicates that we cannot allocate any more
+				   chunks for this space */
+	unsigned int chunk_alloc:1;	/* set if we are allocating a chunk */
+
+	unsigned int flush:1;		/* set if we are trying to make space */
+
+	unsigned int force_alloc;	/* set if we need to force a chunk
+					   alloc for this space */
+
 	u64 disk_used;		/* total bytes used on disk */
 	u64 disk_total;		/* total bytes on disk, takes mirrors into
 				   account */
 
+	u64 flags;
+
 	/*
 	 * bytes_pinned is kept in line with what is actually pinned, as in
 	 * we've called update_block_group and dropped the bytes_used counter
@@ -1124,21 +1135,11 @@ struct btrfs_space_info {
 	 */
 	struct percpu_counter total_bytes_pinned;
 
-	unsigned int full:1;	/* indicates that we cannot allocate any more
-				   chunks for this space */
-	unsigned int chunk_alloc:1;	/* set if we are allocating a chunk */
-
-	unsigned int flush:1;		/* set if we are trying to make space */
-
-	unsigned int force_alloc;	/* set if we need to force a chunk
-					   alloc for this space */
-
 	struct list_head list;
+	struct rw_semaphore groups_sem;
 
 	/* for block groups in our same type */
 	struct list_head block_groups[BTRFS_NR_RAID_TYPES];
-	spinlock_t lock;
-	struct rw_semaphore groups_sem;
 
 	wait_queue_head_t wait;
 };
It is better to place a lock close to the data it protects, because they may
share a cache line and we will then load fewer cache lines when we access
them. So rearrange the members of the btrfs_space_info structure to move the
lock closer to its data.

Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
---
 fs/btrfs/ctree.h | 29 +++++++++++++++--------------
 1 file changed, 15 insertions(+), 14 deletions(-)