Message ID | 20230927082918.197030-1-k.kahurani@gmail.com (mailing list archive)
---|---
State | New, archived
Series | net/xen-netback: Break build if netback slots > max_skbs + 1
On 27/09/2023 09:29, David Kahurani wrote:
> If XEN_NETBK_LEGACY_SLOTS_MAX and MAX_SKB_FRAGS have a difference of
> more than 1, with MAX_SKB_FRAGS being the lesser value, it opens up a
> path for null-dereference. It was also noted that some distributions
> were modifying upstream behaviour in that direction which necessitates
> this patch.
>
> Signed-off-by: David Kahurani <k.kahurani@gmail.com>

Acked-by: Paul Durrant <paul@xen.org>
On Wed, 27 Sep 2023 11:29:18 +0300 David Kahurani wrote:
> If XEN_NETBK_LEGACY_SLOTS_MAX and MAX_SKB_FRAGS have a difference of
> more than 1, with MAX_SKB_FRAGS being the lesser value, it opens up a
> path for null-dereference. It was also noted that some distributions
> were modifying upstream behaviour in that direction which necessitates
> this patch.

MAX_SKB_FRAGS can now be set via Kconfig, which allows us to create
larger super-packets. Can XEN_NETBK_LEGACY_SLOTS_MAX be made relative
to MAX_SKB_FRAGS, or does the number have to match between guest and
host?

Option #2 would be to add a Kconfig dependency for the driver to make
sure a high MAX_SKB_FRAGS is incompatible with it.

Breaking the build will make build bots very sad.

We'll also need a Fixes tag; I presume this is a fix?
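For concreteness, option #1 could look like the one-line sketch below. This is purely hypothetical (the `_SKETCH` suffix marks an invented name): upstream pins the limit to the guest-visible protocol constant XEN_NETIF_NR_SLOTS_MIN (18), and whether a relative definition is even viable is exactly the guest/host ABI question raised above.

```c
/*
 * Hypothetical sketch of option #1: derive the legacy slot limit from
 * MAX_SKB_FRAGS rather than hard-coding 18. Not the driver's actual
 * definition; viable only if the slot count need not stay fixed as
 * part of the guest/host netif ABI.
 */
#define XEN_NETBK_LEGACY_SLOTS_MAX_SKETCH (MAX_SKB_FRAGS + 1)
```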
This change was suggested by Juergen and looked okay and
straightforward to me.

On Wed, Oct 4, 2023 at 9:48 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Wed, 27 Sep 2023 11:29:18 +0300 David Kahurani wrote:
> > If XEN_NETBK_LEGACY_SLOTS_MAX and MAX_SKB_FRAGS have a difference of
> > more than 1, with MAX_SKB_FRAGS being the lesser value, it opens up a
> > path for null-dereference. It was also noted that some distributions
> > were modifying upstream behaviour in that direction which necessitates
> > this patch.
>
> MAX_SKB_FRAGS can now be set via Kconfig, which allows us to create
> larger super-packets. Can XEN_NETBK_LEGACY_SLOTS_MAX be made relative
> to MAX_SKB_FRAGS, or does the number have to match between guest and
> host?

Historically, the netback driver allows for a maximum of 18 fragments.
With recent changes, it also relies on the assumption that the
difference between MAX_SKB_FRAGS and XEN_NETBK_LEGACY_SLOTS_MAX is one,
with MAX_SKB_FRAGS being the lesser value.

Now, look at the Ubuntu kernel, for instance (a change has been made
and, presumably, with good reason, so we have reason to assume that the
change will persist in future releases):

/* To allow 64K frame to be packed as single skb without frag_list we
 * require 64K/PAGE_SIZE pages plus 1 additional page to allow for
 * buffers which do not start on a page boundary.
 *
 * Since GRO uses frags we allocate at least 16 regardless of page
 * size.
 */
#if (65536/PAGE_SIZE + 1) < 16
#define MAX_SKB_FRAGS 16UL
#else
#define MAX_SKB_FRAGS (65536/PAGE_SIZE + 1)
#endif

So, MAX_SKB_FRAGS can sometimes be 16. This is exactly what we're
trying to avoid with this patch. A host running with this change is
vulnerable to attack by the guest (though this will only happen when
PAGE_SIZE > 4096).

> Option #2 would be to add a Kconfig dependency for the driver to make
> sure a high MAX_SKB_FRAGS is incompatible with it.

netback doesn't support larger super-packets, at least as of now. The
maximum number of fragments in a packet is 18, and any packet from the
guest with more fragments than that is dropped. I would assume that
support for super-packets is probably something that should be worked
on, or maybe it is already being worked on. However, that is not the
issue we are trying to fix in this patch.

> Breaking the build will make build bots very sad.

This patch should not break the build for upstream; it will only break
for those patching upstream behaviour. My intent is not to break build
bots but to alert whoever is building that netback doesn't work with
that particular MAX_SKB_FRAGS value. Seeing as they have modified
upstream behaviour, they might as well take a look at the issue and
make a decision themselves. And since this issue will hit the distros
before it reaches users, I don't think it should be a problem for
users.

> We'll also need a Fixes tag; I presume this is a fix?

Yeah, I guess that would be needed too.

> --
> pw-bot: cr
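To make the page-size arithmetic above concrete, the following stand-alone sketch evaluates the quoted distro-style MAX_SKB_FRAGS formula for a few page sizes and checks it against netback's fixed 18-slot limit. This is plain user-space C for illustration only; the helper name and the table of page sizes are ours, not kernel code.

```c
#include <stdio.h>

/* Mirrors the MAX_SKB_FRAGS definition quoted above:
 * max(65536/PAGE_SIZE + 1, 16). Illustrative helper, not kernel code. */
static unsigned long max_skb_frags(unsigned long page_size)
{
	unsigned long frags = 65536 / page_size + 1;

	return frags < 16 ? 16 : frags;
}

int main(void)
{
	const unsigned long slots_max = 18; /* XEN_NETBK_LEGACY_SLOTS_MAX */
	const unsigned long page_sizes[] = { 4096, 16384, 65536 };

	for (int i = 0; i < 3; i++) {
		unsigned long frags = max_skb_frags(page_sizes[i]);

		/* The danger case is slots_max > frags + 1: netback may
		 * accept more slots than an skb has frag entries. */
		printf("PAGE_SIZE=%-6lu MAX_SKB_FRAGS=%-3lu -> %s\n",
		       page_sizes[i], frags,
		       slots_max > frags + 1 ? "unsafe" : "ok");
	}
	return 0;
}
```

A 4K page gives 17 fragments ("ok", since 18 == 17 + 1), while 16K and 64K pages both hit the floor of 16 ("unsafe", 18 > 17) — matching the claim that the problem only appears when PAGE_SIZE > 4096.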
On Thu, 5 Oct 2023 18:39:51 +0300 David Kahurani wrote:
> > MAX_SKB_FRAGS can now be set via Kconfig, which allows us to create
> > larger super-packets. Can XEN_NETBK_LEGACY_SLOTS_MAX be made relative
> > to MAX_SKB_FRAGS, or does the number have to match between guest and
> > host?
>
> Historically, the netback driver allows for a maximum of 18 fragments.
> With recent changes, it also relies on the assumption that the
> difference between MAX_SKB_FRAGS and XEN_NETBK_LEGACY_SLOTS_MAX is one,
> with MAX_SKB_FRAGS being the lesser value.
>
> Now, look at the Ubuntu kernel, for instance (a change has been made
> and, presumably, with good reason, so we have reason to assume that the
> change will persist in future releases):
>
> /* To allow 64K frame to be packed as single skb without frag_list we
>  * require 64K/PAGE_SIZE pages plus 1 additional page to allow for
>  * buffers which do not start on a page boundary.
>  *
>  * Since GRO uses frags we allocate at least 16 regardless of page
>  * size.
>  */
> #if (65536/PAGE_SIZE + 1) < 16
> #define MAX_SKB_FRAGS 16UL
> #else
> #define MAX_SKB_FRAGS (65536/PAGE_SIZE + 1)
> #endif
>
> So, MAX_SKB_FRAGS can sometimes be 16. This is exactly what we're
> trying to avoid with this patch. A host running with this change is
> vulnerable to attack by the guest (though this will only happen when
> PAGE_SIZE > 4096).

My bad, you're protecting against the inverse of the condition I
thought you were.

But to be clear, the code you're quoting (the defines for
MAX_SKB_FRAGS) is what had been there upstream forever, until 3948b059
was merged. I'm not 100% sure why 3948b059 switched the minimum from 16
to 17; I think it was just to keep consistency between builds.

If this change gets backported to 6.1 stable it will break the ppc
build of stable, right? Since ppc has 64k pages.
On Thu, Oct 5, 2023 at 7:03 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Thu, 5 Oct 2023 18:39:51 +0300 David Kahurani wrote:
> > > MAX_SKB_FRAGS can now be set via Kconfig, which allows us to create
> > > larger super-packets. Can XEN_NETBK_LEGACY_SLOTS_MAX be made relative
> > > to MAX_SKB_FRAGS, or does the number have to match between guest and
> > > host?
> >
> > Historically, the netback driver allows for a maximum of 18 fragments.
> > With recent changes, it also relies on the assumption that the
> > difference between MAX_SKB_FRAGS and XEN_NETBK_LEGACY_SLOTS_MAX is one,
> > with MAX_SKB_FRAGS being the lesser value.
> >
> > Now, look at the Ubuntu kernel, for instance (a change has been made
> > and, presumably, with good reason, so we have reason to assume that the
> > change will persist in future releases):
> >
> > /* To allow 64K frame to be packed as single skb without frag_list we
> >  * require 64K/PAGE_SIZE pages plus 1 additional page to allow for
> >  * buffers which do not start on a page boundary.
> >  *
> >  * Since GRO uses frags we allocate at least 16 regardless of page
> >  * size.
> >  */
> > #if (65536/PAGE_SIZE + 1) < 16
> > #define MAX_SKB_FRAGS 16UL
> > #else
> > #define MAX_SKB_FRAGS (65536/PAGE_SIZE + 1)
> > #endif
> >
> > So, MAX_SKB_FRAGS can sometimes be 16. This is exactly what we're
> > trying to avoid with this patch. A host running with this change is
> > vulnerable to attack by the guest (though this will only happen when
> > PAGE_SIZE > 4096).
>
> My bad, you're protecting against the inverse of the condition I
> thought you were.
>
> But to be clear, the code you're quoting (the defines for
> MAX_SKB_FRAGS) is what had been there upstream forever, until 3948b059
> was merged. I'm not 100% sure why 3948b059 switched the minimum from 16
> to 17; I think it was just to keep consistency between builds.

Okay, now that might change everything, because the patch was made with
the assumption that Ubuntu (and probably others) have code modifying
the default values for MAX_SKB_FRAGS. If this was upstream, then maybe
when the time comes they will grab 3948b059.

I consider this solved at this point :-)

> If this change gets backported to 6.1 stable it will break the ppc
> build of stable, right? Since ppc has 64k pages.
```diff
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 88f760a7cbc3..df032e33787f 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -1005,6 +1005,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 			break;
 		}
 
+		BUILD_BUG_ON(XEN_NETBK_LEGACY_SLOTS_MAX > MAX_SKB_FRAGS + 1);
 		if (ret >= XEN_NETBK_LEGACY_SLOTS_MAX - 1 && data_len < txreq.size)
 			data_len = txreq.size;
```
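The one-liner relies on the kernel's BUILD_BUG_ON() (from <linux/build_bug.h>), which turns a true condition into a compile error. As a rough illustration of the mechanism only, here is a minimal user-space sketch using the classic negative-array-size trick, with the unsafe distro-style values plugged in; the macro name and the sample values are ours, not the kernel's:

```c
/* Minimal user-space sketch of what BUILD_BUG_ON() achieves: if the
 * condition is true, the array size below is negative and compilation
 * fails. The kernel's real macro (in <linux/build_bug.h>) is more
 * elaborate but has the same effect. */
#define BUILD_BUG_ON_SKETCH(cond) \
	((void)sizeof(char[1 - 2 * !!(cond)]))

/* Sample values: a distro/64K-page build where MAX_SKB_FRAGS floors
 * at 16 while netback still assumes 18 slots. */
#define MAX_SKB_FRAGS              16
#define XEN_NETBK_LEGACY_SLOTS_MAX 18

int main(void)
{
	/* 18 > 16 + 1 is true, so this line fails to compile with
	 * something like "error: size of array is negative" -- the
	 * intended build break. With upstream's 17/18 pairing the
	 * condition is false and the check compiles away to nothing. */
	BUILD_BUG_ON_SKETCH(XEN_NETBK_LEGACY_SLOTS_MAX > MAX_SKB_FRAGS + 1);
	return 0;
}
```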
If XEN_NETBK_LEGACY_SLOTS_MAX and MAX_SKB_FRAGS have a difference of
more than 1, with MAX_SKB_FRAGS being the lesser value, it opens up a
path for null-dereference. It was also noted that some distributions
were modifying upstream behaviour in that direction which necessitates
this patch.

Signed-off-by: David Kahurani <k.kahurani@gmail.com>
---
 drivers/net/xen-netback/netback.c | 1 +
 1 file changed, 1 insertion(+)