Message ID | 1654857639-12346-1-git-send-email-quic_srirrama@quicinc.com (mailing list archive)
Series     | Mesh Fast xmit support
>
> Sriram R (3):
>   cfg80211: increase mesh config attribute bitmask size
>   cfg80211: Add provision for changing mesh header cache size
>

Is there really that much point in making that configurable? I have no
idea how a user could possibly set this to a reasonable value?

Maybe it would make more sense to auto-size it somehow depending on
memory? Or just pick a reasonable upper bound and leave it at that?

johannes
>-----Original Message-----
>From: Johannes Berg <johannes@sipsolutions.net>
>Sent: Friday, July 1, 2022 2:19 PM
>To: Sriram R (QUIC) <quic_srirrama@quicinc.com>
>Cc: linux-wireless@vger.kernel.org
>Subject: Re: [PATCH 0/3] Mesh Fast xmit support
>
>> Sriram R (3):
>>   cfg80211: increase mesh config attribute bitmask size
>>   cfg80211: Add provision for changing mesh header cache size
>
>Is there really that much point in making that configurable? I have no
>idea how a user could possibly set this to a reasonable value?

Hi Johannes,

Initially it was set to a default size of 50 when the RFC was sent.
There was a suggestion to make it configurable so that users could set
the cache size in proportion to the required/anticipated network
capacity.

>Maybe it would make more sense to auto-size it somehow depending on
>memory? Or just pick a reasonable upper bound and leave it at that?

Right, setting a good upper bound should be a relatively easy option if
we don't need this to be configurable.

Thanks,
Sriram.R
> Initially it was set to a default size of 50 when the RFC was sent.
> There was a suggestion to make it configurable so that users could
> set the cache size in proportion to the required/anticipated network
> capacity.

Oh, right, I missed that this was in the discussion earlier.

The question is what are you afraid of? I mean, even setting it to 500
wouldn't be a huge amount of memory use (~50k), and probably mostly
sufficient regardless of the network? And if you never see all those
nodes, then it wouldn't use all that memory either.

Timing out old entries will also keep memory usage down.

So are you worried about worst-case behaviour in attacks, e.g. somebody
attempting to join the mesh? But then if you're worried about that I
guess you have bigger problems (and should be using secure mesh), such
as the number of station entries?

Or an attacker mutating their Ethernet address behind some gateway? But
they still need to convince the station to even want to send traffic
there...

But even then, setting a much higher limit than 50 should cope with
these cases, while giving enough breathing room for the real usage?

johannes
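A quick back-of-the-envelope check of that ~50k figure. Both numbers
below are assumptions inferred from the thread, not values from the
patch: roughly 100 bytes per cached header entry, 500 entries maximum.

#include <stdio.h>

int main(void)
{
        /* Assumed figures, not from the patch: ~100 bytes per cached
         * header entry, 500 entries maximum. */
        const unsigned long entry_size = 100;
        const unsigned long max_entries = 500;

        printf("worst-case cache memory: %lu bytes (~%lu KiB)\n",
               entry_size * max_entries,
               entry_size * max_entries / 1024);
        return 0;
}

This prints "50000 bytes (~48 KiB)", matching the ~50k estimate above.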
>-----Original Message-----
>From: Johannes Berg <johannes@sipsolutions.net>
>Sent: Friday, July 1, 2022 2:57 PM
>To: Sriram R (QUIC) <quic_srirrama@quicinc.com>; nbd@nbd.name
>Cc: linux-wireless@vger.kernel.org
>Subject: Re: [PATCH 0/3] Mesh Fast xmit support
>
>> Initially it was set to a default size of 50 when the RFC was sent.
>> There was a suggestion to make it configurable so that users could
>> set the cache size in proportion to the required/anticipated network
>> capacity.
>
>Oh, right, I missed that this was in the discussion earlier.
>
>The question is what are you afraid of? I mean, even setting it to 500
>wouldn't be a huge amount of memory use (~50k), and probably mostly
>sufficient regardless of the network? And if you never see all those
>nodes, then it wouldn't use all that memory either.
>
>Timing out old entries will also keep memory usage down.
>
>So are you worried about worst-case behaviour in attacks, e.g. somebody
>attempting to join the mesh? But then if you're worried about that I
>guess you have bigger problems (and should be using secure mesh), such
>as the number of station entries?
>
>Or an attacker mutating their Ethernet address behind some gateway? But
>they still need to convince the station to even want to send traffic
>there...
>
>But even then, setting a much higher limit than 50 should cope with
>these cases, while giving enough breathing room for the real usage?

Hi Johannes,

The only concern was to avoid silently increasing the memory
requirement of mesh support with this patch, so I was skeptical about a
higher cache size (like 250 or 500 max). Hence I used a value of 50 and
left the configuration option for devices that needed a larger cache.
But as you mentioned, this is only the runtime maximum and not the
default, so we should be fine setting some high limit. If the above is
not a concern, could we stick to an upper limit of ~150-200?

Apart from that, though the scenarios you mentioned are quite possible,
the cache management logic will clean up stale entries, and in the
worst case the regular header generation path is used when the cache is
full. So that should ensure things work as normal even under attack.

Thanks,
Sriram.R
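A minimal sketch of the behaviour described above: stale entries are
expired on lookup, and a miss (including a full cache) sends the caller
back to the regular header-build path. All names, sizes, and timeouts
here are illustrative assumptions, not the patch's actual API.

#include <stdbool.h>
#include <stddef.h>
#include <string.h>
#include <time.h>

#define MESH_HDR_CACHE_MAX      500     /* illustrative upper bound */
#define MESH_HDR_CACHE_AGE      30      /* seconds before an entry is stale */

struct mesh_hdr_entry {
        unsigned char dst[6];           /* destination ethernet address */
        unsigned char hdr[64];          /* precomputed header bytes */
        time_t last_used;
        bool valid;
};

static struct mesh_hdr_entry cache[MESH_HDR_CACHE_MAX];

/*
 * Look up a cached header for dst. A NULL return means the caller must
 * build the header via the regular (slow-path) code, which is exactly
 * the fallback behaviour described in the mail above.
 */
static struct mesh_hdr_entry *mesh_hdr_cache_lookup(const unsigned char *dst)
{
        time_t now = time(NULL);

        for (int i = 0; i < MESH_HDR_CACHE_MAX; i++) {
                struct mesh_hdr_entry *e = &cache[i];

                /* Expire stale entries so churned addresses cannot
                 * pin cache memory forever. */
                if (e->valid && now - e->last_used > MESH_HDR_CACHE_AGE)
                        e->valid = false;

                if (e->valid && !memcmp(e->dst, dst, sizeof(e->dst))) {
                        e->last_used = now;
                        return e;       /* fast path: reuse cached header */
                }
        }

        return NULL;    /* miss or cache full: regular header generation */
}

int main(void)
{
        unsigned char dst[6] = { 0x02, 0, 0, 0, 0, 0x01 };

        /* Empty cache: this misses, so the caller would build the
         * header normally and then insert it (insertion omitted). */
        return mesh_hdr_cache_lookup(dst) ? 0 : 1;
}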
Hi,

> The only concern was to avoid silently increasing the memory
> requirement of mesh support with this patch,

OK.

> so I was skeptical about a higher cache size (like 250 or 500 max).
> Hence I used a value of 50 and left the configuration option for
> devices that needed a larger cache. But as you mentioned, this is
> only the runtime maximum and not the default, so we should be fine
> setting some high limit. If the above is not a concern, could we
> stick to an upper limit of ~150-200?

Right, I'm fine with that. I was just throwing out 500 as a random
number to show that it's not really a huge memory requirement.

> Apart from that, though the scenarios you mentioned are quite
> possible, the cache management logic will clean up stale entries,
> and in the worst case the regular header generation path is used
> when the cache is full. So that should ensure things work as normal
> even under attack.

Right.

johannes
On Fri, 2022-07-01 at 11:59 +0200, Johannes Berg wrote:
>
> > So I was skeptical about a higher cache size (like 250 or 500 max).
> > Hence I used a value of 50 and left the configuration option for
> > devices that needed a larger cache. But as you mentioned, this is
> > only the runtime maximum and not the default, so we should be fine
> > setting some high limit. If the above is not a concern, could we
> > stick to an upper limit of ~150-200?
>
> Right, I'm fine with that. I was just throwing out 500 as a random
> number to show that it's not really a huge memory requirement.
>

But maybe Felix wants to comment? Felix?

johannes
>-----Original Message-----
>From: Johannes Berg <johannes@sipsolutions.net>
>Sent: Friday, July 1, 2022 3:30 PM
>To: Sriram R (QUIC) <quic_srirrama@quicinc.com>; nbd@nbd.name
>Cc: linux-wireless@vger.kernel.org
>Subject: Re: [PATCH 0/3] Mesh Fast xmit support
>
>On Fri, 2022-07-01 at 11:59 +0200, Johannes Berg wrote:
>>
>> > So I was skeptical about a higher cache size (like 250 or 500 max).
>> > Hence I used a value of 50 and left the configuration option for
>> > devices that needed a larger cache. But as you mentioned, this is
>> > only the runtime maximum and not the default, so we should be fine
>> > setting some high limit. If the above is not a concern, could we
>> > stick to an upper limit of ~150-200?
>>
>> Right, I'm fine with that. I was just throwing out 500 as a random
>> number to show that it's not really a huge memory requirement.
>>
>
>But maybe Felix wants to comment? Felix?

Hi Felix,

Could you kindly share your comments on this?

Thanks,
Sriram.R
On 15.07.22 04:16, Sriram R (QUIC) wrote:
>> On Fri, 2022-07-01 at 11:59 +0200, Johannes Berg wrote:
>>>
>>> > So I was skeptical about a higher cache size (like 250 or 500 max).
>>> > Hence I used a value of 50 and left the configuration option for
>>> > devices that needed a larger cache. But as you mentioned, this is
>>> > only the runtime maximum and not the default, so we should be fine
>>> > setting some high limit. If the above is not a concern, could we
>>> > stick to an upper limit of ~150-200?
>>>
>>> Right, I'm fine with that. I was just throwing out 500 as a random
>>> number to show that it's not really a huge memory requirement.
>>>
>>
>> But maybe Felix wants to comment? Felix?
> Hi Felix,
>
> Could you kindly share your comments on this?

I agree with making it big enough so that almost nobody has to tune it.
I think 512 would be a reasonable default.

By the way, if I'm counting correctly, you might be able to reduce the
size of the cache entries a bit by moving the 'key' field below the
'band' field, getting rid of some padding.

- Felix
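To illustrate the padding point: on a typical LP64 target, an
8-byte-aligned pointer placed right after a 6-byte address array forces
a 2-byte hole that a small field like 'band' could fill instead, and a
trailing short then adds tail padding too. The structs below are a
hypothetical sketch, not the actual cache entry from the patch.

#include <stdio.h>

/* Hypothetical layouts only; the real cache entry struct has more
 * fields, but the padding effect is the same. */
struct entry_before {
        unsigned char addr[6];  /* offsets 0..5 */
        void *key;              /* 8-byte aligned: 2 padding bytes first */
        unsigned short band;    /* 2 bytes, then 6 bytes of tail padding */
};

struct entry_after {
        unsigned char addr[6];  /* offsets 0..5 */
        unsigned short band;    /* fills the hole at offsets 6..7 */
        void *key;              /* lands naturally aligned at offset 8 */
};

int main(void)
{
        /* Typically prints "before: 24, after: 16" on an LP64 target. */
        printf("before: %zu, after: %zu\n",
               sizeof(struct entry_before), sizeof(struct entry_after));
        return 0;
}

The kernel's pahole tool reports such holes automatically, which is the
usual way to verify this kind of repacking.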
>>>> > So I was skeptical about a higher cache size (like 250 or 500
>>>> > max). Hence I used a value of 50 and left the configuration
>>>> > option for devices that needed a larger cache. But as you
>>>> > mentioned, this is only the runtime maximum and not the default,
>>>> > so we should be fine setting some high limit. If the above is
>>>> > not a concern, could we stick to an upper limit of ~150-200?
>>>>
>>>> Right, I'm fine with that. I was just throwing out 500 as a random
>>>> number to show that it's not really a huge memory requirement.
>>>>
>>>
>>> But maybe Felix wants to comment? Felix?
>> Hi Felix,
>>
>> Could you kindly share your comments on this?
>
>I agree with making it big enough so that almost nobody has to tune it.
>I think 512 would be a reasonable default.

Sure.

>By the way, if I'm counting correctly, you might be able to reduce the
>size of the cache entries a bit by moving the 'key' field below the
>'band' field, getting rid of some padding.

Oh okay, thanks for checking. Let me revisit this packing.

Regards,
Sriram.R

>
>- Felix