Message ID: 20220521101416.29793-2-heming.zhao@suse.com
State: New, archived
Series: [1/2] ocfs2: fix jbd2 assertion in defragment path
On 5/21/22 6:14 PM, Heming Zhao wrote: > When la state is ENABLE, ocfs2_recalc_la_window restores la window > unconditionally. The logic is wrong. > > Let's image below path. > > 1. la state (->local_alloc_state) is set THROTTLED or DISABLED. > > 2. About 30s (OCFS2_LA_ENABLE_INTERVAL), delayed work is triggered, > ocfs2_la_enable_worker set la state to ENABLED directly. > > 3. a write IOs thread run: > > ``` > ocfs2_write_begin > ... > ocfs2_lock_allocators > ocfs2_reserve_clusters > ocfs2_reserve_clusters_with_limit > ocfs2_reserve_local_alloc_bits > ocfs2_local_alloc_slide_window // [1] > + ocfs2_recalc_la_window(osb, OCFS2_LA_EVENT_SLIDE) // [2] > + ... > + ocfs2_local_alloc_new_window > ocfs2_claim_clusters // [3] > ``` > > [1]: will be called when la window bits used up. > [2]: under la state is ENABLED (eg OCFS2_LA_ENABLE_INTERVAL delayed work > happened), it unconditionally restores la window to default value. > [3]: will use default la window size to search clusters. IMO the timing > is O(n^4). The timing O(n^4) will cost huge time to scan global > bitmap. It makes write IOs (eg user space 'dd') become dramatically > slow. > > i.e. > an ocfs2 partition size: 1.45TB, cluster size: 4KB, > la window default size: 106MB. > The partition is fragmentation by creating & deleting huge mount of > small file. > > the timing should be (the number got from real world): > - la window size change order (size: MB): > 106, 53, 26.5, 13, 6.5, 3.25, 1.6, 0.8 > only 0.8MB succeed, 0.8MB also triggers la window to disable. > ocfs2_local_alloc_new_window retries 8 times, first 7 times totally > runs in worst case. > - group chain number: 242 > ocfs2_claim_suballoc_bits calls for-loop 242 times > - each chain has 49 block group > ocfs2_search_chain calls while-loop 49 times > - each bg has 32256 blocks > ocfs2_block_group_find_clear_bits calls while-loop for 32256 bits. > for ocfs2_find_next_zero_bit uses ffz() to find zero bit, let's use > (32256/64) for timing calucation. > > So the loop times: 7*242*49*(32256/64) = 41835024 (~42 million times) > > In the worst case, user space writes 100MB data will trigger 42M scanning > times, and if the write can't finish within 30s (OCFS2_LA_ENABLE_INTERVAL), > the write IO will suffer another 42M scanning times. It makes the ocfs2 > partition keep pool performance all the time. > The scenario makes sense. I have to spend more time to dig into the code and then get back to you. Thanks, Joseph
On 5/21/22 6:14 PM, Heming Zhao wrote: > When la state is ENABLE, ocfs2_recalc_la_window restores la window > unconditionally. The logic is wrong. > > Let's image below path. > > 1. la state (->local_alloc_state) is set THROTTLED or DISABLED. > > 2. About 30s (OCFS2_LA_ENABLE_INTERVAL), delayed work is triggered, > ocfs2_la_enable_worker set la state to ENABLED directly. > > 3. a write IOs thread run: > > ``` > ocfs2_write_begin > ... > ocfs2_lock_allocators > ocfs2_reserve_clusters > ocfs2_reserve_clusters_with_limit > ocfs2_reserve_local_alloc_bits > ocfs2_local_alloc_slide_window // [1] > + ocfs2_recalc_la_window(osb, OCFS2_LA_EVENT_SLIDE) // [2] > + ... > + ocfs2_local_alloc_new_window > ocfs2_claim_clusters // [3] > ``` > > [1]: will be called when la window bits used up. > [2]: under la state is ENABLED (eg OCFS2_LA_ENABLE_INTERVAL delayed work > happened), it unconditionally restores la window to default value. > [3]: will use default la window size to search clusters. IMO the timing > is O(n^4). The timing O(n^4) will cost huge time to scan global > bitmap. It makes write IOs (eg user space 'dd') become dramatically > slow. > > i.e. > an ocfs2 partition size: 1.45TB, cluster size: 4KB, > la window default size: 106MB. > The partition is fragmentation by creating & deleting huge mount of > small file. > > the timing should be (the number got from real world): > - la window size change order (size: MB): > 106, 53, 26.5, 13, 6.5, 3.25, 1.6, 0.8 > only 0.8MB succeed, 0.8MB also triggers la window to disable. > ocfs2_local_alloc_new_window retries 8 times, first 7 times totally > runs in worst case. > - group chain number: 242 > ocfs2_claim_suballoc_bits calls for-loop 242 times > - each chain has 49 block group > ocfs2_search_chain calls while-loop 49 times > - each bg has 32256 blocks > ocfs2_block_group_find_clear_bits calls while-loop for 32256 bits. > for ocfs2_find_next_zero_bit uses ffz() to find zero bit, let's use > (32256/64) for timing calucation. > > So the loop times: 7*242*49*(32256/64) = 41835024 (~42 million times) > > In the worst case, user space writes 100MB data will trigger 42M scanning > times, and if the write can't finish within 30s (OCFS2_LA_ENABLE_INTERVAL), > the write IO will suffer another 42M scanning times. It makes the ocfs2 > partition keep pool performance all the time. > > The fix method: > > 1. la restores double la size once. > > current code logic decrease la window with half size once, but directly > restores default_bits one time. It bounces the la window between '<1M' > and default_bits. This patch makes restoring process more smoothly. > eg. > la default window is 106MB, current la window is 13MB. > when there is a free action to release one block group space, la should > roll back la size to 26MB (by 13*2). > if there are many free actions to release many block group space, la > will smoothly roll back to default window (106MB). > > 2. introduced a new state: OCFS2_LA_RESTORE. > > Current code uses OCFS2_LA_ENABLED to mark a new big space available. > the state overwrite OCFS2_LA_THROTTLED, it makes la window forget > it's already in throttled status. > '->local_alloc_state' should keep OCFS2_LA_THROTTLED until la window > restore to default_bits. Since now we have enough free space, why not restore to default la window? This is an issue happened in a corner case, which blames current restore window is too large. I agree with your method that restoring double la size once instead of default directly. 
So why not just change the logic of ocfs2_recalc_la_window() to do this? Thanks, Joseph > > Signed-off-by: Heming Zhao <heming.zhao@suse.com> > --- > fs/ocfs2/localalloc.c | 30 +++++++++++++++++++++--------- > fs/ocfs2/ocfs2.h | 18 +++++++++++------- > fs/ocfs2/suballoc.c | 2 +- > 3 files changed, 33 insertions(+), 17 deletions(-) > > diff --git a/fs/ocfs2/localalloc.c b/fs/ocfs2/localalloc.c > index c4426d12a2ad..28acea717d7f 100644 > --- a/fs/ocfs2/localalloc.c > +++ b/fs/ocfs2/localalloc.c > @@ -205,20 +205,21 @@ void ocfs2_la_set_sizes(struct ocfs2_super *osb, int requested_mb) > > static inline int ocfs2_la_state_enabled(struct ocfs2_super *osb) > { > - return (osb->local_alloc_state == || > - osb->local_alloc_state == OCFS2_LA_ENABLED); > + return osb->local_alloc_state & OCFS2_LA_ACTIVE; > } > > void ocfs2_local_alloc_seen_free_bits(struct ocfs2_super *osb, > unsigned int num_clusters) > { > spin_lock(&osb->osb_lock); > - if (osb->local_alloc_state == OCFS2_LA_DISABLED || > - osb->local_alloc_state == OCFS2_LA_THROTTLED) > + if (osb->local_alloc_state & (OCFS2_LA_DISABLED | > + OCFS2_LA_THROTTLED | OCFS2_LA_RESTORE)) { > if (num_clusters >= osb->local_alloc_default_bits) { > cancel_delayed_work(&osb->la_enable_wq); > - osb->local_alloc_state = OCFS2_LA_ENABLED; > + osb->local_alloc_state &= ~OCFS2_LA_DISABLED; > + osb->local_alloc_state |= OCFS2_LA_RESTORE; > } > + } > spin_unlock(&osb->osb_lock); > } > > @@ -228,7 +229,10 @@ void ocfs2_la_enable_worker(struct work_struct *work) > container_of(work, struct ocfs2_super, > la_enable_wq.work); > spin_lock(&osb->osb_lock); > - osb->local_alloc_state = OCFS2_LA_ENABLED; > + if (osb->local_alloc_state & OCFS2_LA_DISABLED) { > + osb->local_alloc_state &= ~OCFS2_LA_DISABLED; > + osb->local_alloc_state |= OCFS2_LA_ENABLED; > + } > spin_unlock(&osb->osb_lock); > } > > @@ -1067,7 +1071,7 @@ static int ocfs2_recalc_la_window(struct ocfs2_super *osb, > * reason to assume the bitmap situation might > * have changed. > */ > - osb->local_alloc_state = OCFS2_LA_THROTTLED; > + osb->local_alloc_state |= OCFS2_LA_THROTTLED; > osb->local_alloc_bits = bits; > } else { > osb->local_alloc_state = OCFS2_LA_DISABLED; > @@ -1083,8 +1087,16 @@ static int ocfs2_recalc_la_window(struct ocfs2_super *osb, > * risk bouncing around the global bitmap during periods of > * low space. > */ > - if (osb->local_alloc_state != OCFS2_LA_THROTTLED) > - osb->local_alloc_bits = osb->local_alloc_default_bits; > + if (osb->local_alloc_state & OCFS2_LA_RESTORE) { > + bits = osb->local_alloc_bits * 2; > + if (bits > osb->local_alloc_default_bits) { > + osb->local_alloc_bits = osb->local_alloc_default_bits; > + osb->local_alloc_state = OCFS2_LA_ENABLED; > + } else { > + /* keep RESTORE state & set new bits */ > + osb->local_alloc_bits = bits; > + } > + } > > out_unlock: > state = osb->local_alloc_state; > diff --git a/fs/ocfs2/ocfs2.h b/fs/ocfs2/ocfs2.h > index 337527571461..1764077e3229 100644 > --- a/fs/ocfs2/ocfs2.h > +++ b/fs/ocfs2/ocfs2.h > @@ -245,14 +245,18 @@ struct ocfs2_alloc_stats > > enum ocfs2_local_alloc_state > { > - OCFS2_LA_UNUSED = 0, /* Local alloc will never be used for > - * this mountpoint. */ > - OCFS2_LA_ENABLED, /* Local alloc is in use. */ > - OCFS2_LA_THROTTLED, /* Local alloc is in use, but number > - * of bits has been reduced. */ > - OCFS2_LA_DISABLED /* Local alloc has temporarily been > - * disabled. */ > + /* Local alloc will never be used for this mountpoint. */ > + OCFS2_LA_UNUSED = 1 << 0, > + /* Local alloc is in use. 
*/ > + OCFS2_LA_ENABLED = 1 << 1, > + /* Local alloc is in use, but number of bits has been reduced. */ > + OCFS2_LA_THROTTLED = 1 << 2, > + /* In throttle state, Local alloc meets contig big space. */ > + OCFS2_LA_RESTORE = 1 << 3, > + /* Local alloc has temporarily been disabled. */ > + OCFS2_LA_DISABLED = 1 << 4, > }; > +#define OCFS2_LA_ACTIVE (OCFS2_LA_ENABLED | OCFS2_LA_THROTTLED | OCFS2_LA_RESTORE) > > enum ocfs2_mount_options > { > diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c > index 166c8918c825..b0df1ab2d6dd 100644 > --- a/fs/ocfs2/suballoc.c > +++ b/fs/ocfs2/suballoc.c > @@ -1530,7 +1530,7 @@ static int ocfs2_cluster_group_search(struct inode *inode, > * of bits. */ > if (min_bits <= res->sr_bits) > search = 0; /* success */ > - else if (res->sr_bits) { > + if (res->sr_bits) { > /* > * Don't show bits which we'll be returning > * for allocation to the local alloc bitmap.
On 6/12/22 10:57, Joseph Qi wrote: > > > On 5/21/22 6:14 PM, Heming Zhao wrote: >> When la state is ENABLE, ocfs2_recalc_la_window restores la window >> unconditionally. The logic is wrong. >> >> Let's image below path. >> >> 1. la state (->local_alloc_state) is set THROTTLED or DISABLED. >> >> 2. About 30s (OCFS2_LA_ENABLE_INTERVAL), delayed work is triggered, >> ocfs2_la_enable_worker set la state to ENABLED directly. >> >> 3. a write IOs thread run: >> >> ``` >> ocfs2_write_begin >> ... >> ocfs2_lock_allocators >> ocfs2_reserve_clusters >> ocfs2_reserve_clusters_with_limit >> ocfs2_reserve_local_alloc_bits >> ocfs2_local_alloc_slide_window // [1] >> + ocfs2_recalc_la_window(osb, OCFS2_LA_EVENT_SLIDE) // [2] >> + ... >> + ocfs2_local_alloc_new_window >> ocfs2_claim_clusters // [3] >> ``` >> >> [1]: will be called when la window bits used up. >> [2]: under la state is ENABLED (eg OCFS2_LA_ENABLE_INTERVAL delayed work >> happened), it unconditionally restores la window to default value. >> [3]: will use default la window size to search clusters. IMO the timing >> is O(n^4). The timing O(n^4) will cost huge time to scan global >> bitmap. It makes write IOs (eg user space 'dd') become dramatically >> slow. >> >> i.e. >> an ocfs2 partition size: 1.45TB, cluster size: 4KB, >> la window default size: 106MB. >> The partition is fragmentation by creating & deleting huge mount of >> small file. >> >> the timing should be (the number got from real world): >> - la window size change order (size: MB): >> 106, 53, 26.5, 13, 6.5, 3.25, 1.6, 0.8 >> only 0.8MB succeed, 0.8MB also triggers la window to disable. >> ocfs2_local_alloc_new_window retries 8 times, first 7 times totally >> runs in worst case. >> - group chain number: 242 >> ocfs2_claim_suballoc_bits calls for-loop 242 times >> - each chain has 49 block group >> ocfs2_search_chain calls while-loop 49 times >> - each bg has 32256 blocks >> ocfs2_block_group_find_clear_bits calls while-loop for 32256 bits. >> for ocfs2_find_next_zero_bit uses ffz() to find zero bit, let's use >> (32256/64) for timing calucation. >> >> So the loop times: 7*242*49*(32256/64) = 41835024 (~42 million times) >> >> In the worst case, user space writes 100MB data will trigger 42M scanning >> times, and if the write can't finish within 30s (OCFS2_LA_ENABLE_INTERVAL), >> the write IO will suffer another 42M scanning times. It makes the ocfs2 >> partition keep pool performance all the time. >> >> The fix method: >> >> 1. la restores double la size once. >> >> current code logic decrease la window with half size once, but directly >> restores default_bits one time. It bounces the la window between '<1M' >> and default_bits. This patch makes restoring process more smoothly. >> eg. >> la default window is 106MB, current la window is 13MB. >> when there is a free action to release one block group space, la should >> roll back la size to 26MB (by 13*2). >> if there are many free actions to release many block group space, la >> will smoothly roll back to default window (106MB). >> >> 2. introduced a new state: OCFS2_LA_RESTORE. >> >> Current code uses OCFS2_LA_ENABLED to mark a new big space available. >> the state overwrite OCFS2_LA_THROTTLED, it makes la window forget >> it's already in throttled status. >> '->local_alloc_state' should keep OCFS2_LA_THROTTLED until la window >> restore to default_bits. > > Since now we have enough free space, why not restore to default la > window? The key is: the decrease speed is not same with increase speed. 
The la window only decreases (by halving) when the current window size can
no longer be found, but the current restore action happens as soon as any
single default_bits-sized space appears.
e.g:
The default la window is 100MB, and the system currently has only about
20MB of contiguous space. The la window drops to half size (10MB) once even
20MB is no longer available, but the current restore logic restores it
straight back to 100MB, and the allocation path suffers the O(n^4) timing
again.

From my understanding, most ocfs2 users use ocfs2 to manage big files, not
a huge number of files. But the scenario my patch responds to is an ocfs2
volume containing a huge number of small files, where the use case is
creating/deleting/moving these small files all the time. It makes the fs
totally fragmented.

>
> This is an issue happened in a corner case, which blames current restore
> window is too large. I agree with your method that restoring double la
> size once instead of default directly. So why not just change the logic
> of ocfs2_recalc_la_window() to do this?

This patch is related to two issues:
- the restore speed is quicker than the decrease speed.
- the la window is restored unconditionally.

Only changing ocfs2_recalc_la_window() can't avoid the unconditional
restore of the la window, so I introduced the new state OCFS2_LA_RESTORE,
which helps ocfs2 keep the throttled state.

Btw, while investigating this bug, I found two other issues/pieces of work
to do:
- the current allocation algorithm very easily generates fragmentation.
- the O(n^4) timing. Even with this patch, the timing only becomes O(n^3):
  <group chain number> * <block group per chain> * <bg bitmap size>
We need to improve the allocation algorithm to make ocfs2 more powerful.

Thanks,
Heming
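To make the asymmetry above concrete, here is a small user-space sketch
(illustrative only, not kernel code; the 106MB default window and the 13MB
throttled window are the values from the patch description, everything else
is made up) comparing the old jump-to-default restore with the doubling
restore:

```
/*
 * Toy user-space model of the asymmetry described above -- not kernel
 * code.  The 106MB default window and the 13MB throttled window are the
 * values from the patch description; everything else is illustrative.
 */
#include <stdio.h>

#define LA_DEFAULT_MB 106

/* Old behaviour: one default_bits-sized free region restores the window. */
static int restore_old(int cur_mb)
{
	(void)cur_mb;
	return LA_DEFAULT_MB;		/* unconditional jump back to default */
}

/* Patched behaviour: grow back gradually, doubling per window slide. */
static int restore_new(int cur_mb)
{
	int next = cur_mb * 2;
	return next > LA_DEFAULT_MB ? LA_DEFAULT_MB : next;
}

int main(void)
{
	int mb = 13;	/* throttled window size from the example */

	printf("old: %dMB -> %dMB in a single step\n", mb, restore_old(mb));

	printf("new: %dMB", mb);
	while (mb < LA_DEFAULT_MB) {
		mb = restore_new(mb);
		printf(" -> %dMB", mb);
	}
	printf("\n");	/* 13MB -> 26MB -> 52MB -> 104MB -> 106MB */
	return 0;
}
```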
On 6/12/22 3:45 PM, heming.zhao@suse.com wrote: > On 6/12/22 10:57, Joseph Qi wrote: >> >> >> On 5/21/22 6:14 PM, Heming Zhao wrote: >>> When la state is ENABLE, ocfs2_recalc_la_window restores la window >>> unconditionally. The logic is wrong. >>> >>> Let's image below path. >>> >>> 1. la state (->local_alloc_state) is set THROTTLED or DISABLED. >>> >>> 2. About 30s (OCFS2_LA_ENABLE_INTERVAL), delayed work is triggered, >>> ocfs2_la_enable_worker set la state to ENABLED directly. >>> >>> 3. a write IOs thread run: >>> >>> ``` >>> ocfs2_write_begin >>> ... >>> ocfs2_lock_allocators >>> ocfs2_reserve_clusters >>> ocfs2_reserve_clusters_with_limit >>> ocfs2_reserve_local_alloc_bits >>> ocfs2_local_alloc_slide_window // [1] >>> + ocfs2_recalc_la_window(osb, OCFS2_LA_EVENT_SLIDE) // [2] >>> + ... >>> + ocfs2_local_alloc_new_window >>> ocfs2_claim_clusters // [3] >>> ``` >>> >>> [1]: will be called when la window bits used up. >>> [2]: under la state is ENABLED (eg OCFS2_LA_ENABLE_INTERVAL delayed work >>> happened), it unconditionally restores la window to default value. >>> [3]: will use default la window size to search clusters. IMO the timing >>> is O(n^4). The timing O(n^4) will cost huge time to scan global >>> bitmap. It makes write IOs (eg user space 'dd') become dramatically >>> slow. >>> >>> i.e. >>> an ocfs2 partition size: 1.45TB, cluster size: 4KB, >>> la window default size: 106MB. >>> The partition is fragmentation by creating & deleting huge mount of >>> small file. >>> >>> the timing should be (the number got from real world): >>> - la window size change order (size: MB): >>> 106, 53, 26.5, 13, 6.5, 3.25, 1.6, 0.8 >>> only 0.8MB succeed, 0.8MB also triggers la window to disable. >>> ocfs2_local_alloc_new_window retries 8 times, first 7 times totally >>> runs in worst case. >>> - group chain number: 242 >>> ocfs2_claim_suballoc_bits calls for-loop 242 times >>> - each chain has 49 block group >>> ocfs2_search_chain calls while-loop 49 times >>> - each bg has 32256 blocks >>> ocfs2_block_group_find_clear_bits calls while-loop for 32256 bits. >>> for ocfs2_find_next_zero_bit uses ffz() to find zero bit, let's use >>> (32256/64) for timing calucation. >>> >>> So the loop times: 7*242*49*(32256/64) = 41835024 (~42 million times) >>> >>> In the worst case, user space writes 100MB data will trigger 42M scanning >>> times, and if the write can't finish within 30s (OCFS2_LA_ENABLE_INTERVAL), >>> the write IO will suffer another 42M scanning times. It makes the ocfs2 >>> partition keep pool performance all the time. >>> >>> The fix method: >>> >>> 1. la restores double la size once. >>> >>> current code logic decrease la window with half size once, but directly >>> restores default_bits one time. It bounces the la window between '<1M' >>> and default_bits. This patch makes restoring process more smoothly. >>> eg. >>> la default window is 106MB, current la window is 13MB. >>> when there is a free action to release one block group space, la should >>> roll back la size to 26MB (by 13*2). >>> if there are many free actions to release many block group space, la >>> will smoothly roll back to default window (106MB). >>> >>> 2. introduced a new state: OCFS2_LA_RESTORE. >>> >>> Current code uses OCFS2_LA_ENABLED to mark a new big space available. >>> the state overwrite OCFS2_LA_THROTTLED, it makes la window forget >>> it's already in throttled status. >>> '->local_alloc_state' should keep OCFS2_LA_THROTTLED until la window >>> restore to default_bits. 
>>
>> Since now we have enough free space, why not restore to default la
>> window?
>
> The key is: the decrease speed is not same with increase speed.
> The decrease la window happens on the current la window size is not
> available any more.
> But current restore action happens on there appears any one of
> default_bits space.
> e.g:
> the default la window is 100MB, current system only has some 20MB
> contiguous space.
> la window change to half size (10MB) when there is no 20MB space any more.
> but current code restore logic will restore to 100MB. and allocation path
> will suffer the O(n^4) timing.
>
> From my understanding, most of the ocfs2 users use ocfs2 to manage big &
> not huge number of files. But my patch response scenario: ocfs2 volume
> contains huge number of small files. user case is creating/deleting/moving
> the small files all the time. It makes the fs fragmentation totally.
>

Yes, the typical scenario is VM images with a 1MB cluster size.
This is a corny talk and I'm afraid it still cannot resolve the above case
even with an optimized local alloc window.

>>
>> This is an issue happened in a corner case, which blames current restore
>> window is too large. I agree with your method that restoring double la
>> size once instead of default directly. So why not just change the logic
>> of ocfs2_recalc_la_window() to do this?
>
> This path is related with two issues:
> - restore speed more quick than decrease speed.
> - unconditionally restore la window
>
> only change ocfs2_recalc_la_window() can't avoid unconditionally restore
> la window.

Seems the following would restore double each time?

	if (osb->local_alloc_state != OCFS2_LA_THROTTLED)
		osb->local_alloc_bits <<= 1;

This may introduce another issue, where the complaint becomes that the
restore is too slow. So it's a balance.

Thanks,
Joseph

> so I introduced new state OCFS2_LA_RESTORE, which will help ocfs2 to keep
> throttled state.
>
> btw, during I investigating this bug, I found other two issues/works need
> to do:
> - current allocation algorithm is very easy to generate fragment.
> - there is O(n^4) timing. even after with this patch, the timing only
>   becomes O(n^3):
>   <group chain number> * <block group per chain> * <bg bitmap size>
> we needs to improve the allocation algorithm to make ocfs2 more power.
>
> Thanks,
> Heming
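As a rough user-space model of the alternative quoted above (hypothetical,
for discussion only: just the "not throttled, then shift left by one"
condition comes from the mail; the struct, the cap at default_bits, and the
numbers are assumptions), growing the window by doubling on each slide
could look like this:

```
/*
 * Rough user-space model of the one-line alternative quoted above --
 * hypothetical code for discussion only, not ocfs2 source.  Only the
 * "!= THROTTLED then <<= 1" condition comes from the mail; the struct,
 * the cap at default_bits and the numbers are assumptions.
 */
#include <stdio.h>

enum la_state { LA_ENABLED, LA_THROTTLED, LA_DISABLED };

struct la_window {
	enum la_state state;
	unsigned int bits;		/* current window, in clusters */
	unsigned int default_bits;	/* mount-time default window */
};

/* Candidate slide-event logic: grow by doubling unless throttled. */
static void recalc_on_slide(struct la_window *la)
{
	if (la->state != LA_THROTTLED) {
		la->bits <<= 1;
		if (la->bits > la->default_bits)
			la->bits = la->default_bits;	/* assumed cap */
	}
}

int main(void)
{
	/* 4KB clusters: 6.25MB current window, 106MB default window */
	struct la_window la = { LA_ENABLED, 1600, 27136 };
	int slides = 0;

	while (la.bits < la.default_bits) {
		recalc_on_slide(&la);
		slides++;
	}
	printf("slides needed to reach the default window: %d\n", slides); /* 5 */
	return 0;
}
```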
On 6/12/22 20:38, Joseph Qi wrote: > > > On 6/12/22 3:45 PM, heming.zhao@suse.com wrote: >> On 6/12/22 10:57, Joseph Qi wrote: >>> >>> >>> On 5/21/22 6:14 PM, Heming Zhao wrote: >>>> When la state is ENABLE, ocfs2_recalc_la_window restores la window >>>> unconditionally. The logic is wrong. >>>> >>>> Let's image below path. >>>> >>>> 1. la state (->local_alloc_state) is set THROTTLED or DISABLED. >>>> >>>> 2. About 30s (OCFS2_LA_ENABLE_INTERVAL), delayed work is triggered, >>>> ocfs2_la_enable_worker set la state to ENABLED directly. >>>> >>>> 3. a write IOs thread run: >>>> >>>> ``` >>>> ocfs2_write_begin >>>> ... >>>> ocfs2_lock_allocators >>>> ocfs2_reserve_clusters >>>> ocfs2_reserve_clusters_with_limit >>>> ocfs2_reserve_local_alloc_bits >>>> ocfs2_local_alloc_slide_window // [1] >>>> + ocfs2_recalc_la_window(osb, OCFS2_LA_EVENT_SLIDE) // [2] >>>> + ... >>>> + ocfs2_local_alloc_new_window >>>> ocfs2_claim_clusters // [3] >>>> ``` >>>> >>>> [1]: will be called when la window bits used up. >>>> [2]: under la state is ENABLED (eg OCFS2_LA_ENABLE_INTERVAL delayed work >>>> happened), it unconditionally restores la window to default value. >>>> [3]: will use default la window size to search clusters. IMO the timing >>>> is O(n^4). The timing O(n^4) will cost huge time to scan global >>>> bitmap. It makes write IOs (eg user space 'dd') become dramatically >>>> slow. >>>> >>>> i.e. >>>> an ocfs2 partition size: 1.45TB, cluster size: 4KB, >>>> la window default size: 106MB. >>>> The partition is fragmentation by creating & deleting huge mount of >>>> small file. >>>> >>>> the timing should be (the number got from real world): >>>> - la window size change order (size: MB): >>>> 106, 53, 26.5, 13, 6.5, 3.25, 1.6, 0.8 >>>> only 0.8MB succeed, 0.8MB also triggers la window to disable. >>>> ocfs2_local_alloc_new_window retries 8 times, first 7 times totally >>>> runs in worst case. >>>> - group chain number: 242 >>>> ocfs2_claim_suballoc_bits calls for-loop 242 times >>>> - each chain has 49 block group >>>> ocfs2_search_chain calls while-loop 49 times >>>> - each bg has 32256 blocks >>>> ocfs2_block_group_find_clear_bits calls while-loop for 32256 bits. >>>> for ocfs2_find_next_zero_bit uses ffz() to find zero bit, let's use >>>> (32256/64) for timing calucation. >>>> >>>> So the loop times: 7*242*49*(32256/64) = 41835024 (~42 million times) >>>> >>>> In the worst case, user space writes 100MB data will trigger 42M scanning >>>> times, and if the write can't finish within 30s (OCFS2_LA_ENABLE_INTERVAL), >>>> the write IO will suffer another 42M scanning times. It makes the ocfs2 >>>> partition keep pool performance all the time. >>>> >>>> The fix method: >>>> >>>> 1. la restores double la size once. >>>> >>>> current code logic decrease la window with half size once, but directly >>>> restores default_bits one time. It bounces the la window between '<1M' >>>> and default_bits. This patch makes restoring process more smoothly. >>>> eg. >>>> la default window is 106MB, current la window is 13MB. >>>> when there is a free action to release one block group space, la should >>>> roll back la size to 26MB (by 13*2). >>>> if there are many free actions to release many block group space, la >>>> will smoothly roll back to default window (106MB). >>>> >>>> 2. introduced a new state: OCFS2_LA_RESTORE. >>>> >>>> Current code uses OCFS2_LA_ENABLED to mark a new big space available. >>>> the state overwrite OCFS2_LA_THROTTLED, it makes la window forget >>>> it's already in throttled status. 
>>>> '->local_alloc_state' should keep OCFS2_LA_THROTTLED until la window
>>>> restore to default_bits.
>>>
>>> Since now we have enough free space, why not restore to default la
>>> window?
>>
>> The key is: the decrease speed is not same with increase speed.
>> The decrease la window happens on the current la window size is not
>> available any more.
>> But current restore action happens on there appears any one of
>> default_bits space.
>> e.g:
>> the default la window is 100MB, current system only has some 20MB
>> contiguous space.
>> la window change to half size (10MB) when there is no 20MB space any more.
>> but current code restore logic will restore to 100MB. and allocation path
>> will suffer the O(n^4) timing.
>>
>> From my understanding, most of the ocfs2 users use ocfs2 to manage big &
>> not huge number of files. But my patch response scenario: ocfs2 volume
>> contains huge number of small files. user case is creating/deleting/moving
>> the small files all the time. It makes the fs fragmentation totally.
>>
>
> Yes, typically scenario is vm images with cluster size 1M.
> This is a corny talk and I'm afraid it still cannot resolve above case
> with optimized local alloc window.

You're right. The optimized la window could help the small-files case but
can't fully resolve it. The complete solution needs a redesign of
something, e.g. the allocation algorithm.

This issue was triggered by a SUSE customer; they had a bad experience with
frequently recurring poor performance in their production environment.

In my view, if ocfs2 only works well for managing VM images (big & not too
many files), that is a waste & ridiculous. A powerful fs should work well
for files of any size. ocfs2 should support the small-files use case, but
there is a long way to go.

>
>>>
>>> This is an issue happened in a corner case, which blames current restore
>>> window is too large. I agree with your method that restoring double la
>>> size once instead of default directly. So why not just change the logic
>>> of ocfs2_recalc_la_window() to do this?
>>
>> This path is related with two issues:
>> - restore speed more quick than decrease speed.
>> - unconditionally restore la window
>>
>> only change ocfs2_recalc_la_window() can't avoid unconditionally restore
>> la window.
>
> Seems the following will restore double each time?

YES.

>
> 	if (osb->local_alloc_state != OCFS2_LA_THROTTLED)
> 		osb->local_alloc_bits <<= 1;
>
> This may introduce another issue which blames restore too slow. So it's a
> balance.

Yes, I agree it's a balance. But I believe my patch works better than the
existing restore method. (IIUC) there are some reasons:

1. The truncate feature may delay the release of freed space, so there is a
   time gap between restoring the la window and the space actually becoming
   available.

2. The existing method suits the big-file use case: restoring to the
   default la window gives a chance to grab a big space at once. A VM file
   may be tens of GB; a user may free one VM file and then create a new VM
   file. That free action very likely releases enough space for the later
   create action, so ocfs2 can benefit from restoring the la window to the
   default size.

3. The slowed-down restore introduced by this patch gives significant help
   to small-file allocation, and does not cause a regression in overall
   ocfs2 performance.

4. The scenario is described in this patch's commit log: if the la window
   is restored to the default size unconditionally from the previous
   DISABLED state, ocfs2 will be busy waiting for the la window scan.

For 3, I give an example below.
(The analysis below is based on the patch code as if already merged.)
(The ocfs2 volume size is 1TB.)

3.1> csize: 1M, group: 32G, la: 31G
(these numbers come from the OCFS2_LA_MAX_DEFAULT_MB source code comment in
fs/ocfs2/localalloc.c)

For this case, the ocfs2 volume is used for saving big files. The worst
case is described below (please tell me if this case is not enough):

Current la window: 2MB; release a 40GB file; the la window changes to 4MB.
The user wants to create a 40GB file. Because the la window is 4MB, it can
successfully get a 4MB contiguous space from the just-released 40GB space.
The scanning timing is the same as before, but it only gets 4MB from the
global bitmap. Later slide-window actions will be sped up by the saved
'hint' in ocfs2_claim_suballoc_bits(). The timing is not O(n^4), maybe O(n)
here.

In this case, the la window doubles in size on every la window slide:

ocfs2_local_alloc_slide_window
 ocfs2_recalc_la_window(osb, OCFS2_LA_EVENT_SLIDE)
  if (osb->local_alloc_state & OCFS2_LA_RESTORE) // set 'la window' * 2

So the la window restore sequence (14 steps):
4MB, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384,
default_window (31GB)

It costs 13 extra la window slides to allocate the space. But in my view,
the current ocfs2 hot spot should be the new file's write IOs, not being
busy waiting for the la window scan. The user will very likely never feel
that allocation is slower than before.

3.2> csize: 4K, group: 126M, la: 121M
(see fs/ocfs2/localalloc.c for the above numbers)

The worst case is like <3.1>: allocating a big file.
Current la window: 2MB; release a 40GB file; the la window changes to 4MB.
The user wants to create a 40GB file.

In this case, the la window restore sequence (6 steps):
4MB, 8, 16, 32, 64, default_window (121MB)

It costs 5 extra slides vs. before. My point is the same: waiting for the
la window scan is not the hot spot.

/Heming

>
> Thanks,
> Joseph
>
>> so I introduced new state OCFS2_LA_RESTORE, which will help ocfs2 to keep
>> throttled state.
>>
>> btw, during I investigating this bug, I found other two issues/works need
>> to do:
>> - current allocation algorithm is very easy to generate fragment.
>> - there is O(n^4) timing. even after with this patch, the timing only
>>   becomes O(n^3):
>>   <group chain number> * <block group per chain> * <bg bitmap size>
>> we needs to improve the allocation algorithm to make ocfs2 more power.
>>
>> Thanks,
>> Heming
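The step counts in 3.1 and 3.2 can be reproduced with a trivial sketch
(illustrative only; the window sizes are the ones assumed in the mail
above):

```
/*
 * Tiny check of the restore sequences in 3.1 and 3.2 above -- illustrative
 * only; the window sizes are the ones assumed in this mail.
 */
#include <stdio.h>

static int extra_slides(double cur_mb, double default_mb)
{
	int n = 0;

	while (cur_mb < default_mb) {	/* double per slide, capped at default */
		cur_mb *= 2;
		n++;
	}
	return n;
}

int main(void)
{
	/* 3.1: csize 1M, default window 31GB (31744MB), start from 4MB */
	printf("csize 1M: %d extra slides\n", extra_slides(4, 31744));	/* 13 */

	/* 3.2: csize 4K, default window 121MB, start from 4MB */
	printf("csize 4K: %d extra slides\n", extra_slides(4, 121));	/* 5 */
	return 0;
}
```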
diff --git a/fs/ocfs2/localalloc.c b/fs/ocfs2/localalloc.c
index c4426d12a2ad..28acea717d7f 100644
--- a/fs/ocfs2/localalloc.c
+++ b/fs/ocfs2/localalloc.c
@@ -205,20 +205,21 @@ void ocfs2_la_set_sizes(struct ocfs2_super *osb, int requested_mb)
 
 static inline int ocfs2_la_state_enabled(struct ocfs2_super *osb)
 {
-	return (osb->local_alloc_state == OCFS2_LA_THROTTLED ||
-		osb->local_alloc_state == OCFS2_LA_ENABLED);
+	return osb->local_alloc_state & OCFS2_LA_ACTIVE;
 }
 
 void ocfs2_local_alloc_seen_free_bits(struct ocfs2_super *osb,
 				      unsigned int num_clusters)
 {
 	spin_lock(&osb->osb_lock);
-	if (osb->local_alloc_state == OCFS2_LA_DISABLED ||
-	    osb->local_alloc_state == OCFS2_LA_THROTTLED)
+	if (osb->local_alloc_state & (OCFS2_LA_DISABLED |
+			OCFS2_LA_THROTTLED | OCFS2_LA_RESTORE)) {
 		if (num_clusters >= osb->local_alloc_default_bits) {
 			cancel_delayed_work(&osb->la_enable_wq);
-			osb->local_alloc_state = OCFS2_LA_ENABLED;
+			osb->local_alloc_state &= ~OCFS2_LA_DISABLED;
+			osb->local_alloc_state |= OCFS2_LA_RESTORE;
 		}
+	}
 	spin_unlock(&osb->osb_lock);
 }
 
@@ -228,7 +229,10 @@ void ocfs2_la_enable_worker(struct work_struct *work)
 			container_of(work, struct ocfs2_super,
 				     la_enable_wq.work);
 	spin_lock(&osb->osb_lock);
-	osb->local_alloc_state = OCFS2_LA_ENABLED;
+	if (osb->local_alloc_state & OCFS2_LA_DISABLED) {
+		osb->local_alloc_state &= ~OCFS2_LA_DISABLED;
+		osb->local_alloc_state |= OCFS2_LA_ENABLED;
+	}
 	spin_unlock(&osb->osb_lock);
 }
 
@@ -1067,7 +1071,7 @@ static int ocfs2_recalc_la_window(struct ocfs2_super *osb,
 			 * reason to assume the bitmap situation might
 			 * have changed.
 			 */
-			osb->local_alloc_state = OCFS2_LA_THROTTLED;
+			osb->local_alloc_state |= OCFS2_LA_THROTTLED;
 			osb->local_alloc_bits = bits;
 		} else {
 			osb->local_alloc_state = OCFS2_LA_DISABLED;
@@ -1083,8 +1087,16 @@ static int ocfs2_recalc_la_window(struct ocfs2_super *osb,
 	 * risk bouncing around the global bitmap during periods of
 	 * low space.
 	 */
-	if (osb->local_alloc_state != OCFS2_LA_THROTTLED)
-		osb->local_alloc_bits = osb->local_alloc_default_bits;
+	if (osb->local_alloc_state & OCFS2_LA_RESTORE) {
+		bits = osb->local_alloc_bits * 2;
+		if (bits > osb->local_alloc_default_bits) {
+			osb->local_alloc_bits = osb->local_alloc_default_bits;
+			osb->local_alloc_state = OCFS2_LA_ENABLED;
+		} else {
+			/* keep RESTORE state & set new bits */
+			osb->local_alloc_bits = bits;
+		}
+	}
 
 out_unlock:
 	state = osb->local_alloc_state;
diff --git a/fs/ocfs2/ocfs2.h b/fs/ocfs2/ocfs2.h
index 337527571461..1764077e3229 100644
--- a/fs/ocfs2/ocfs2.h
+++ b/fs/ocfs2/ocfs2.h
@@ -245,14 +245,18 @@ struct ocfs2_alloc_stats
 
 enum ocfs2_local_alloc_state
 {
-	OCFS2_LA_UNUSED = 0,	/* Local alloc will never be used for
-				 * this mountpoint. */
-	OCFS2_LA_ENABLED,	/* Local alloc is in use. */
-	OCFS2_LA_THROTTLED,	/* Local alloc is in use, but number
-				 * of bits has been reduced. */
-	OCFS2_LA_DISABLED	/* Local alloc has temporarily been
-				 * disabled. */
+	/* Local alloc will never be used for this mountpoint. */
+	OCFS2_LA_UNUSED = 1 << 0,
+	/* Local alloc is in use. */
+	OCFS2_LA_ENABLED = 1 << 1,
+	/* Local alloc is in use, but number of bits has been reduced. */
+	OCFS2_LA_THROTTLED = 1 << 2,
+	/* In throttle state, Local alloc meets contig big space. */
+	OCFS2_LA_RESTORE = 1 << 3,
+	/* Local alloc has temporarily been disabled. */
+	OCFS2_LA_DISABLED = 1 << 4,
 };
+#define OCFS2_LA_ACTIVE	(OCFS2_LA_ENABLED | OCFS2_LA_THROTTLED | OCFS2_LA_RESTORE)
 
 enum ocfs2_mount_options
 {
diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c
index 166c8918c825..b0df1ab2d6dd 100644
--- a/fs/ocfs2/suballoc.c
+++ b/fs/ocfs2/suballoc.c
@@ -1530,7 +1530,7 @@ static int ocfs2_cluster_group_search(struct inode *inode,
 			 * of bits.  */
 		if (min_bits <= res->sr_bits)
 			search = 0; /* success */
-		else if (res->sr_bits) {
+		if (res->sr_bits) {
 			/*
 			 * Don't show bits which we'll be returning
 			 * for allocation to the local alloc bitmap.
When the la state is ENABLED, ocfs2_recalc_la_window restores the la window
unconditionally. The logic is wrong.

Let's imagine the path below.

1. The la state (->local_alloc_state) is set THROTTLED or DISABLED.

2. About 30s later (OCFS2_LA_ENABLE_INTERVAL), the delayed work is
   triggered, and ocfs2_la_enable_worker sets the la state to ENABLED
   directly.

3. A write IO thread runs:

```
ocfs2_write_begin
...
ocfs2_lock_allocators
ocfs2_reserve_clusters
ocfs2_reserve_clusters_with_limit
ocfs2_reserve_local_alloc_bits
ocfs2_local_alloc_slide_window // [1]
+ ocfs2_recalc_la_window(osb, OCFS2_LA_EVENT_SLIDE) // [2]
+ ...
+ ocfs2_local_alloc_new_window
ocfs2_claim_clusters // [3]
```

[1]: called when the la window bits are used up.
[2]: while the la state is ENABLED (e.g. the OCFS2_LA_ENABLE_INTERVAL
     delayed work has happened), it unconditionally restores the la window
     to the default value.
[3]: uses the default la window size to search for clusters. IMO the timing
     is O(n^4), and that O(n^4) timing costs huge time scanning the global
     bitmap. It makes write IOs (e.g. user space 'dd') dramatically slow.

For example:
an ocfs2 partition size: 1.45TB, cluster size: 4KB,
la window default size: 106MB.
The partition is fragmented by creating & deleting a huge number of small
files.

The timing works out as follows (numbers taken from the real world):
- la window size change order (size: MB):
  106, 53, 26.5, 13, 6.5, 3.25, 1.6, 0.8
  Only 0.8MB succeeds, and 0.8MB also triggers the la window to be disabled.
  ocfs2_local_alloc_new_window retries 8 times; the first 7 times all run
  the worst case.
- group chain number: 242
  ocfs2_claim_suballoc_bits runs its for-loop 242 times.
- each chain has 49 block groups
  ocfs2_search_chain runs its while-loop 49 times.
- each bg has 32256 blocks
  ocfs2_block_group_find_clear_bits runs its while-loop over 32256 bits.
  Since ocfs2_find_next_zero_bit uses ffz() to find a zero bit, let's use
  (32256/64) for the timing calculation.

So the loop count is: 7*242*49*(32256/64) = 41835024 (~42 million times)

In the worst case, a user space write of 100MB of data triggers 42M
scanning steps, and if the write can't finish within 30s
(OCFS2_LA_ENABLE_INTERVAL), the write IO suffers another 42M scanning
steps. It keeps the ocfs2 partition at poor performance all the time.

The fix method:

1. la restores double the la size at a time.

The current code halves the la window each time it shrinks, but restores it
to default_bits in a single step. That bounces the la window between '<1M'
and default_bits. This patch makes the restoring process smoother.
e.g.
The la default window is 106MB, and the current la window is 13MB.
When one free action releases one block group of space, la should roll back
the la size to 26MB (13*2).
If there are many free actions releasing many block groups of space, la
will smoothly roll back to the default window (106MB).

2. Introduce a new state: OCFS2_LA_RESTORE.

The current code uses OCFS2_LA_ENABLED to mark that a new big space is
available. That state overwrites OCFS2_LA_THROTTLED and makes the la window
forget that it is already in throttled status.
'->local_alloc_state' should keep OCFS2_LA_THROTTLED until the la window
restores to default_bits.

Signed-off-by: Heming Zhao <heming.zhao@suse.com>
---
 fs/ocfs2/localalloc.c | 30 +++++++++++++++++++++---------
 fs/ocfs2/ocfs2.h      | 18 +++++++++++-------
 fs/ocfs2/suballoc.c   |  2 +-
 3 files changed, 33 insertions(+), 17 deletions(-)
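For reference, the worst-case scan count from the description above can be
spelled out in a trivial sketch (the figures are the real-world numbers
quoted in the commit message; the variable names are only for
illustration):

```
/*
 * The worst-case scan count from the description above, spelled out.
 * The figures (7 failed window sizes, 242 chains, 49 block groups per
 * chain, 32256 bits per group scanned 64 bits at a time) are the
 * real-world numbers quoted in this commit message.
 */
#include <stdio.h>

int main(void)
{
	long failed_window_sizes = 7;		/* 106MB ... 1.6MB all fail */
	long chains = 242;			/* group chains in the global bitmap */
	long groups_per_chain = 49;		/* block groups per chain */
	long words_per_group = 32256 / 64;	/* ffz() checks 64 bits per step */
	long scans;

	scans = failed_window_sizes * chains * groups_per_chain * words_per_group;
	printf("worst-case scan steps: %ld\n", scans);	/* 41835024 */
	return 0;
}
```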