diff mbox series

[net] net/smc: delete buf_desc from buffer list under lock protection

Message ID 20240731093102.130154-1-guwen@linux.alibaba.com (mailing list archive)
State Changes Requested
Delegated to: Netdev Maintainers
Series [net] net/smc: delete buf_desc from buffer list under lock protection

Checks

Context Check Description
netdev/series_format success Single patches do not need cover letters
netdev/tree_selection success Clearly marked for net
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag present in non-next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 42 this patch: 42
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers success CCed 10 of 11 maintainers
netdev/build_clang success Errors and warnings before: 43 this patch: 43
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success Fixes tag looks correct
netdev/build_allmodconfig_warn success Errors and warnings before: 43 this patch: 43
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 26 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
netdev/contest success net-next-2024-07-31--12-00 (tests: 707)

Commit Message

Wen Gu July 31, 2024, 9:31 a.m. UTC
Operations on the link group buffer lists should be protected by
sndbufs_lock or rmbs_lock, so take the proper lock before deleting
buf_desc entries from the lists.

Fixes: 3e034725c0d8 ("net/smc: common functions for RMBs and send buffers")
Signed-off-by: Wen Gu <guwen@linux.alibaba.com>
---
 net/smc/smc_core.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

Comments

shaozhengchao July 31, 2024, 10:32 a.m. UTC | #1
Hi Wen Gu:
   "The operations to link group buffer list should be protected by
sndbufs_lock or rmbs_lock": the logic looks sound, but can this race
really happen? Since no process is still using the link group at this
point, doesn't that mean there is no concurrent scenario?

Thank you

Zhengchao Shao

On 2024/7/31 17:31, Wen Gu wrote:
> The operations to link group buffer list should be protected by
> sndbufs_lock or rmbs_lock. So fix it.
> 
> Fixes: 3e034725c0d8 ("net/smc: common functions for RMBs and send buffers")
> Signed-off-by: Wen Gu <guwen@linux.alibaba.com>
> ---
>   net/smc/smc_core.c | 10 ++++++++--
>   1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
> index 3b95828d9976..ecfea8c38da9 100644
> --- a/net/smc/smc_core.c
> +++ b/net/smc/smc_core.c
> @@ -1368,18 +1368,24 @@ static void __smc_lgr_free_bufs(struct smc_link_group *lgr, bool is_rmb)
>   {
>   	struct smc_buf_desc *buf_desc, *bf_desc;
>   	struct list_head *buf_list;
> +	struct rw_semaphore *lock;
>   	int i;
>   
>   	for (i = 0; i < SMC_RMBE_SIZES; i++) {
> -		if (is_rmb)
> +		if (is_rmb) {
>   			buf_list = &lgr->rmbs[i];
> -		else
> +			lock = &lgr->rmbs_lock;
> +		} else {
>   			buf_list = &lgr->sndbufs[i];
> +			lock = &lgr->sndbufs_lock;
> +		}
> +		down_write(lock);
>   		list_for_each_entry_safe(buf_desc, bf_desc, buf_list,
>   					 list) {
>   			list_del(&buf_desc->list);
>   			smc_buf_free(lgr, is_rmb, buf_desc);
>   		}
> +		up_write(lock);
>   	}
>   }
>
Wen Gu Aug. 2, 2024, 1:55 a.m. UTC | #2
On 2024/7/31 18:32, shaozhengchao wrote:
> Hi Wen Gu:
>    "The operations to link group buffer list should be protected by
> sndbufs_lock or rmbs_lock" It seems that the logic is smooth. But will
> this really happen? Because no process is in use with the link group,
> does this mean that there is no concurrent scenario?
> 

Hi Zhengchao,

Yes, I am also very conflicted about whether to add lock protection.
From the code, it appears that when __smc_lgr_free_bufs is called, the
link group has already been removed from the lgr_list, so theoretically
there should be no contention (e.g. a concurrent add to buf_list).
However, in order to maintain consistency with the other lgr buf_list
operations and to guard against unforeseen or future changes, I have
added lock protection here as well.

Thanks!

> Thank you
> 
> Zhengchao Shao
> 
> On 2024/7/31 17:31, Wen Gu wrote:
>> The operations to link group buffer list should be protected by
>> sndbufs_lock or rmbs_lock. So fix it.
>>
>> Fixes: 3e034725c0d8 ("net/smc: common functions for RMBs and send buffers")
>> Signed-off-by: Wen Gu <guwen@linux.alibaba.com>
>> ---
>>   net/smc/smc_core.c | 10 ++++++++--
>>   1 file changed, 8 insertions(+), 2 deletions(-)
>>
>> diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
>> index 3b95828d9976..ecfea8c38da9 100644
>> --- a/net/smc/smc_core.c
>> +++ b/net/smc/smc_core.c
>> @@ -1368,18 +1368,24 @@ static void __smc_lgr_free_bufs(struct smc_link_group *lgr, bool is_rmb)
>>   {
>>       struct smc_buf_desc *buf_desc, *bf_desc;
>>       struct list_head *buf_list;
>> +    struct rw_semaphore *lock;
>>       int i;
>>       for (i = 0; i < SMC_RMBE_SIZES; i++) {
>> -        if (is_rmb)
>> +        if (is_rmb) {
>>               buf_list = &lgr->rmbs[i];
>> -        else
>> +            lock = &lgr->rmbs_lock;
>> +        } else {
>>               buf_list = &lgr->sndbufs[i];
>> +            lock = &lgr->sndbufs_lock;
>> +        }
>> +        down_write(lock);
>>           list_for_each_entry_safe(buf_desc, bf_desc, buf_list,
>>                        list) {
>>               list_del(&buf_desc->list);
>>               smc_buf_free(lgr, is_rmb, buf_desc);
>>           }
>> +        up_write(lock);
>>       }
>>   }
Wenjia Zhang Aug. 5, 2024, 8:23 a.m. UTC | #3
On 02.08.24 03:55, Wen Gu wrote:
> 
> 
> On 2024/7/31 18:32, shaozhengchao wrote:
>> Hi Wen Gu:
>>    "The operations to link group buffer list should be protected by
>> sndbufs_lock or rmbs_lock" It seems that the logic is smooth. But will
>> this really happen? Because no process is in use with the link group,
>> does this mean that there is no concurrent scenario?
>>
> 
> Hi Zhengchao,
> 
> Yes, I am also very conflicted about whether to add lock protection.
>  From the code, it appears that when __smc_lgr_free_bufs is called, the
> link group has already been removed from the lgr_list, so theoretically
> there should be no contention (e.g. add to buf_list). However, in order
> to maintain consistency with other lgr buf_list operations and to guard
> against unforeseen or future changes, I have added lock protection here
> as well.
> 
> Thanks!
> 

I'm indeed in two minds about whether to give you my reviewed-by,
especially for the reason of guarding against unforeseen bugs. However,
most of the locking bugs we have previously met in our code were
deadlocks caused by introducing too many different locks. Thus, I don't
think this patch is necessary, at least for now.

Thanks,
Wenjia
Wen Gu Aug. 5, 2024, 9:31 a.m. UTC | #4
On 2024/8/5 16:23, Wenjia Zhang wrote:
> 
> 
> On 02.08.24 03:55, Wen Gu wrote:
>>
>>
>> On 2024/7/31 18:32, shaozhengchao wrote:
>>> Hi Wen Gu:
>>>    "The operations to link group buffer list should be protected by
>>> sndbufs_lock or rmbs_lock" It seems that the logic is smooth. But will
>>> this really happen? Because no process is in use with the link group,
>>> does this mean that there is no concurrent scenario?
>>>
>>
>> Hi Zhengchao,
>>
>> Yes, I am also very conflicted about whether to add lock protection.
>>  From the code, it appears that when __smc_lgr_free_bufs is called, the
>> link group has already been removed from the lgr_list, so theoretically
>> there should be no contention (e.g. add to buf_list). However, in order
>> to maintain consistency with other lgr buf_list operations and to guard
>> against unforeseen or future changes, I have added lock protection here
>> as well.
>>
>> Thanks!
>>
> 
> I'm indeed in two minds about whether to give you my reviewed-by, especially for the reason of guarding against unforeseen bugs. However, most of the locking bugs we have previously met in our code were deadlocks caused by introducing too many different locks. Thus, I don't think this patch is necessary,
> at least for now.
> 

OK, let's handle it when it causes actual problems. Thanks!

> Thanks,
> Wenjia

Patch

diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
index 3b95828d9976..ecfea8c38da9 100644
--- a/net/smc/smc_core.c
+++ b/net/smc/smc_core.c
@@ -1368,18 +1368,24 @@  static void __smc_lgr_free_bufs(struct smc_link_group *lgr, bool is_rmb)
 {
 	struct smc_buf_desc *buf_desc, *bf_desc;
 	struct list_head *buf_list;
+	struct rw_semaphore *lock;
 	int i;
 
 	for (i = 0; i < SMC_RMBE_SIZES; i++) {
-		if (is_rmb)
+		if (is_rmb) {
 			buf_list = &lgr->rmbs[i];
-		else
+			lock = &lgr->rmbs_lock;
+		} else {
 			buf_list = &lgr->sndbufs[i];
+			lock = &lgr->sndbufs_lock;
+		}
+		down_write(lock);
 		list_for_each_entry_safe(buf_desc, bf_desc, buf_list,
 					 list) {
 			list_del(&buf_desc->list);
 			smc_buf_free(lgr, is_rmb, buf_desc);
 		}
+		up_write(lock);
 	}
 }