Message ID: 20240701224421.1432654-3-edgar.iglesias@gmail.com (mailing list archive)
State: New, archived
Series: xen: mapcache: Fix unmapping of the first entry in a bucket
On Tue, Jul 02, 2024 at 12:44:21AM +0200, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
>
> This fixes the clobbering of the entry->next pointer when
> unmapping the first entry in a bucket of a mapcache.
>
> Fixes: 123acd816d ("xen: mapcache: Unmap first entries in buckets")
> Reported-by: Anthony PERARD <anthony.perard@vates.tech>
> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
> ---
>  hw/xen/xen-mapcache.c | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
> index 5f23b0adbe..18ba7b1d8f 100644
> --- a/hw/xen/xen-mapcache.c
> +++ b/hw/xen/xen-mapcache.c
> @@ -597,7 +597,17 @@ static void xen_invalidate_map_cache_entry_unlocked(MapCache *mc,
>          pentry->next = entry->next;
>          g_free(entry);
>      } else {
> -        memset(entry, 0, sizeof *entry);
> +        /*
> +         * Invalidate mapping but keep entry->next pointing to the rest
> +         * of the list.
> +         *
> +         * Note that lock is already zero here, otherwise we don't unmap.
> +         */
> +        entry->paddr_index = 0;
> +        entry->vaddr_base = NULL;
> +        entry->valid_mapping = NULL;
> +        entry->flags = 0;
> +        entry->size = 0;

This kind of feels like mc->entry should be an array of pointers rather
than an array of MapCacheEntry, but that seems to work well enough, and
it is not the first time entries have been cleared like that.

Reviewed-by: Anthony PERARD <anthony.perard@vates.tech>

Thanks,
Anthony PERARD <anthony.perard@vates.tech> writes:

> On Tue, Jul 02, 2024 at 12:44:21AM +0200, Edgar E. Iglesias wrote:
>> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
>>
>> This fixes the clobbering of the entry->next pointer when
>> unmapping the first entry in a bucket of a mapcache.
>>
>> Fixes: 123acd816d ("xen: mapcache: Unmap first entries in buckets")
>> Reported-by: Anthony PERARD <anthony.perard@vates.tech>
>> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
>> ---
>>  hw/xen/xen-mapcache.c | 12 +++++++++++-
>>  1 file changed, 11 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
>> index 5f23b0adbe..18ba7b1d8f 100644
>> --- a/hw/xen/xen-mapcache.c
>> +++ b/hw/xen/xen-mapcache.c
>> @@ -597,7 +597,17 @@ static void xen_invalidate_map_cache_entry_unlocked(MapCache *mc,
>>          pentry->next = entry->next;
>>          g_free(entry);
>>      } else {
>> -        memset(entry, 0, sizeof *entry);
>> +        /*
>> +         * Invalidate mapping but keep entry->next pointing to the rest
>> +         * of the list.
>> +         *
>> +         * Note that lock is already zero here, otherwise we don't unmap.
>> +         */
>> +        entry->paddr_index = 0;
>> +        entry->vaddr_base = NULL;
>> +        entry->valid_mapping = NULL;
>> +        entry->flags = 0;
>> +        entry->size = 0;
>
> This kind of feels like mc->entry should be an array of pointer rather
> than an array of MapCacheEntry but that seems to work well enough and
> not the first time entries are been cleared like that.

The use of a hand rolled list is a bit of a concern considering QEMU and
Glib both provide various abstractions used around the rest of the code
base. The original patch that introduces the mapcache doesn't tell me
much about access patterns for the cache, just that it is trying to
solve memory exhaustion issues with lots of dynamic small mappings.

Maybe a simpler structure is desirable?

We also have an interval tree implementation ("qemu/interval-tree.h") if
what we really want is a sorted tree of memory that can be iterated
locklessly.
On Thu, Jul 04, 2024 at 05:44:52PM +0100, Alex Bennée wrote:
> Anthony PERARD <anthony.perard@vates.tech> writes:
>
> > On Tue, Jul 02, 2024 at 12:44:21AM +0200, Edgar E. Iglesias wrote:
> >> This fixes the clobbering of the entry->next pointer when
> >> unmapping the first entry in a bucket of a mapcache.
> >> [...]
> >
> > This kind of feels like mc->entry should be an array of pointer rather
> > than an array of MapCacheEntry but that seems to work well enough and
> > not the first time entries are been cleared like that.
>
> The use of a hand rolled list is a bit of a concern considering QEMU and
> Glib both provide various abstractions used around the rest of the code
> base. The original patch that introduces the mapcache doesn't tell me
> much about access patterns for the cache, just that it is trying to
> solve memory exhaustion issues with lots of dynamic small mappings.
>
> Maybe a simpler structure is desirable?
>
> We also have an interval tree implementation ("qemu/interval-tree.h") if
> what we really want is a sorted tree of memory that can be iterated
> locklessly.

Yes, it would be interesting to benchmark other options.
I agree that we should at minimum reuse existing lists/hash tables.

We've also had some discussions around removing it partially or
altogether, but there are some concerns around that. We're going to need
something to keep track of grants. For 32-bit hosts, it's a problem to
exhaust virtual address space if mapping all of the guest (are folks
still using 32-bit hosts?). There may be other issues as well.

Some benefits are that we'll remove some of the complexity and latency
for mapping and unmapping stuff continuously.

Cheers,
Edgar
On Thu, Jul 4, 2024 at 9:48 PM Edgar E. Iglesias <edgar.iglesias@amd.com> wrote:
> On Thu, Jul 04, 2024 at 05:44:52PM +0100, Alex Bennée wrote:
> > Anthony PERARD <anthony.perard@vates.tech> writes:
> > [...]
> >
> > The use of a hand rolled list is a bit of a concern considering QEMU and
> > Glib both provide various abstractions used around the rest of the code
> > base. The original patch that introduces the mapcache doesn't tell me
> > much about access patterns for the cache, just that it is trying to
> > solve memory exhaustion issues with lots of dynamic small mappings.
> >
> > Maybe a simpler structure is desirable?
> >
> > We also have an interval tree implementation ("qemu/interval-tree.h") if
> > what we really want is a sorted tree of memory that can be iterated
> > locklessly.
>
> Yes, it would be interesting to benchmark other options.
> I agree that we should at minimum reuse existing lists/hash tables.
>
> We've also had some discussions around removing it partially or
> altogether, but there are some concerns around that. We're going to need
> something to keep track of grants. For 32-bit hosts, it's a problem to
> exhaust virtual address space if mapping all of the guest (are folks
> still using 32-bit hosts?). There may be other issues as well.
>
> Some benefits are that we'll remove some of the complexity and latency
> for mapping and unmapping stuff continuously.

One more thing I forgot to add is that IMO, these larger longer-term
changes should not block this tiny bugfix...

Cheers,
Edgar
"Edgar E. Iglesias" <edgar.iglesias@gmail.com> writes: > On Thu, Jul 4, 2024 at 9:48 PM Edgar E. Iglesias <edgar.iglesias@amd.com> wrote: > > On Thu, Jul 04, 2024 at 05:44:52PM +0100, Alex Bennée wrote: > > Anthony PERARD <anthony.perard@vates.tech> writes: > > > > > On Tue, Jul 02, 2024 at 12:44:21AM +0200, Edgar E. Iglesias wrote: > > >> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com> > > >> > > >> This fixes the clobbering of the entry->next pointer when > > >> unmapping the first entry in a bucket of a mapcache. > > >> > > >> Fixes: 123acd816d ("xen: mapcache: Unmap first entries in buckets") > > >> Reported-by: Anthony PERARD <anthony.perard@vates.tech> > > >> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com> > > >> --- > > >> hw/xen/xen-mapcache.c | 12 +++++++++++- > > >> 1 file changed, 11 insertions(+), 1 deletion(-) > > >> > > >> diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c > > >> index 5f23b0adbe..18ba7b1d8f 100644 > > >> --- a/hw/xen/xen-mapcache.c > > >> +++ b/hw/xen/xen-mapcache.c > > >> @@ -597,7 +597,17 @@ static void xen_invalidate_map_cache_entry_unlocked(MapCache *mc, > > >> pentry->next = entry->next; > > >> g_free(entry); > > >> } else { > > >> - memset(entry, 0, sizeof *entry); > > >> + /* > > >> + * Invalidate mapping but keep entry->next pointing to the rest > > >> + * of the list. > > >> + * > > >> + * Note that lock is already zero here, otherwise we don't unmap. > > >> + */ > > >> + entry->paddr_index = 0; > > >> + entry->vaddr_base = NULL; > > >> + entry->valid_mapping = NULL; > > >> + entry->flags = 0; > > >> + entry->size = 0; > > > > > > This kind of feels like mc->entry should be an array of pointer rather > > > than an array of MapCacheEntry but that seems to work well enough and > > > not the first time entries are been cleared like that. > > > > The use of a hand rolled list is a bit of a concern considering QEMU and > > Glib both provide various abstractions used around the rest of the code > > base. 
The original patch that introduces the mapcache doesn't tell me > > much about access patterns for the cache, just that it is trying to > > solve memory exhaustion issues with lots of dynamic small mappings. > > > > Maybe a simpler structure is desirable? > > > > We also have an interval tree implementation ("qemu/interval-tree.h") if > > what we really want is a sorted tree of memory that can be iterated > > locklessly. > > > > Yes, it would be interesting to benchmark other options. > I agree that we should at minimum reuse existing lists/hash tables. > > We've also had some discussions around removing it partially or alltogether but > there are some concerns around that. We're going to need something to > keep track of grants. For 32-bit hosts, it's a problem to exhaust virtual > address-space if mapping all of the guest (are folks still using 32-bit hosts?). > There may be other issues aswell. > > Some benefits are that we'll remove some of the complexity and latency for mapping > and unmapping stuff continously. > > One more thing I forgot to add is that IMO, these larger longer term > changes should not block this tiny bugfix... Agreed. > > Cheers, > Edgar
On Tue, 2 Jul 2024, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
>
> This fixes the clobbering of the entry->next pointer when
> unmapping the first entry in a bucket of a mapcache.
>
> Fixes: 123acd816d ("xen: mapcache: Unmap first entries in buckets")
> Reported-by: Anthony PERARD <anthony.perard@vates.tech>
> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
>  hw/xen/xen-mapcache.c | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> [...]
>
> --
> 2.43.0
diff --git a/hw/xen/xen-mapcache.c b/hw/xen/xen-mapcache.c
index 5f23b0adbe..18ba7b1d8f 100644
--- a/hw/xen/xen-mapcache.c
+++ b/hw/xen/xen-mapcache.c
@@ -597,7 +597,17 @@ static void xen_invalidate_map_cache_entry_unlocked(MapCache *mc,
         pentry->next = entry->next;
         g_free(entry);
     } else {
-        memset(entry, 0, sizeof *entry);
+        /*
+         * Invalidate mapping but keep entry->next pointing to the rest
+         * of the list.
+         *
+         * Note that lock is already zero here, otherwise we don't unmap.
+         */
+        entry->paddr_index = 0;
+        entry->vaddr_base = NULL;
+        entry->valid_mapping = NULL;
+        entry->flags = 0;
+        entry->size = 0;
     }
 }