Message ID | 0dec8285-a348-53d4-f9fa-552c7c1c405f@suse.com (mailing list archive)
---|---
State | Superseded
Series | IOMMU: superpage support when not sharing pagetables
On 25/04/2022 09:30, Jan Beulich wrote:
> Recent changes (likely 5fafa6cf529a ["AMD/IOMMU: have callers specify
> the target level for page table walks"]) have made Coverity notice a
> shift count in iommu_pde_from_dfn() which might in theory grow too
> large. While this isn't a problem in practice, address the concern
> nevertheless to not leave dangling breakage in case very large
> superpages would be enabled at some point.
>
> Coverity ID: 1504264
>
> While there also address a similar issue in set_iommu_ptes_present().
> It's not clear to me why Coverity hasn't spotted that one.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v4: New.
>
> --- a/xen/drivers/passthrough/amd/iommu_map.c
> +++ b/xen/drivers/passthrough/amd/iommu_map.c
> @@ -89,11 +89,11 @@ static unsigned int set_iommu_ptes_prese
>                                             bool iw, bool ir)
>  {
>      union amd_iommu_pte *table, *pde;
> -    unsigned int page_sz, flush_flags = 0;
> +    unsigned long page_sz = 1UL << (PTE_PER_TABLE_SHIFT * (pde_level - 1));

There's an off-by-12 error somewhere here.

Judging by its use, it should be named mapping_frames (or similar)
instead.

With that fixed, Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
On 27.04.2022 15:08, Andrew Cooper wrote:
> On 25/04/2022 09:30, Jan Beulich wrote:
>> Recent changes (likely 5fafa6cf529a ["AMD/IOMMU: have callers specify
>> the target level for page table walks"]) have made Coverity notice a
>> shift count in iommu_pde_from_dfn() which might in theory grow too
>> large. While this isn't a problem in practice, address the concern
>> nevertheless to not leave dangling breakage in case very large
>> superpages would be enabled at some point.
>>
>> Coverity ID: 1504264
>>
>> While there also address a similar issue in set_iommu_ptes_present().
>> It's not clear to me why Coverity hasn't spotted that one.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> v4: New.
>>
>> --- a/xen/drivers/passthrough/amd/iommu_map.c
>> +++ b/xen/drivers/passthrough/amd/iommu_map.c
>> @@ -89,11 +89,11 @@ static unsigned int set_iommu_ptes_prese
>>                                             bool iw, bool ir)
>>  {
>>      union amd_iommu_pte *table, *pde;
>> -    unsigned int page_sz, flush_flags = 0;
>> +    unsigned long page_sz = 1UL << (PTE_PER_TABLE_SHIFT * (pde_level - 1));
>
> There's an off-by-12 error somewhere here.
>
> Judging by its use, it should be named mapping_frames (or similar)
> instead.

Hmm, I think the author meant "size of the (potentially large) page in
units of 4k (base) pages". That's still some form of "page size".

> With that fixed, Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

If anything there could be another patch renaming the variable; that's
certainly not the goal here. But as said, I don't think the variable
name is strictly wrong. And with that it also doesn't feel entirely
right that I would be on the hook for renaming it. I also think that
"mapping_frames" isn't much better; it would need to be something like
"nr_frames_per_pte", which is starting to get longish.

So for the moment thanks for the R-b, but I will only apply it once
we've sorted the condition you provided it under.

Jan
On Mon, Apr 25, 2022 at 10:30:33AM +0200, Jan Beulich wrote:
> Recent changes (likely 5fafa6cf529a ["AMD/IOMMU: have callers specify
> the target level for page table walks"]) have made Coverity notice a
> shift count in iommu_pde_from_dfn() which might in theory grow too
> large. While this isn't a problem in practice, address the concern
> nevertheless to not leave dangling breakage in case very large
> superpages would be enabled at some point.
>
> Coverity ID: 1504264
>
> While there also address a similar issue in set_iommu_ptes_present().
> It's not clear to me why Coverity hasn't spotted that one.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> v4: New.
>
> --- a/xen/drivers/passthrough/amd/iommu_map.c
> +++ b/xen/drivers/passthrough/amd/iommu_map.c
> @@ -89,11 +89,11 @@ static unsigned int set_iommu_ptes_prese
>                                             bool iw, bool ir)
>  {
>      union amd_iommu_pte *table, *pde;
> -    unsigned int page_sz, flush_flags = 0;
> +    unsigned long page_sz = 1UL << (PTE_PER_TABLE_SHIFT * (pde_level - 1));

Seeing the discussion from Andrew's reply, nr_pages might be more
appropriate while still quite short. I'm not making my Rb conditional
on that change though.

Thanks, Roger.
On 03.05.2022 12:10, Roger Pau Monné wrote:
> On Mon, Apr 25, 2022 at 10:30:33AM +0200, Jan Beulich wrote:
>> Recent changes (likely 5fafa6cf529a ["AMD/IOMMU: have callers specify
>> the target level for page table walks"]) have made Coverity notice a
>> shift count in iommu_pde_from_dfn() which might in theory grow too
>> large. While this isn't a problem in practice, address the concern
>> nevertheless to not leave dangling breakage in case very large
>> superpages would be enabled at some point.
>>
>> Coverity ID: 1504264
>>
>> While there also address a similar issue in set_iommu_ptes_present().
>> It's not clear to me why Coverity hasn't spotted that one.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> --- a/xen/drivers/passthrough/amd/iommu_map.c
>> +++ b/xen/drivers/passthrough/amd/iommu_map.c
>> @@ -89,11 +89,11 @@ static unsigned int set_iommu_ptes_prese
>>                                             bool iw, bool ir)
>>  {
>>      union amd_iommu_pte *table, *pde;
>> -    unsigned int page_sz, flush_flags = 0;
>> +    unsigned long page_sz = 1UL << (PTE_PER_TABLE_SHIFT * (pde_level - 1));
>
> Seeing the discussion from Andrew's reply, nr_pages might be more
> appropriate while still quite short.

Yes and no - it then would be ambiguous as to what size pages are meant.

> I'm not making my Rb conditional on that change though.

Good, thanks. But I guess I'm still somewhat stuck unless hearing back
from Andrew (although one might not count a conditional R-b as a
"pending objection"). I'll give him a few more days, but I continue to
think this ought to be a separate change (if renaming is really needed
in the 1st place) ...

Jan
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -89,11 +89,11 @@ static unsigned int set_iommu_ptes_prese
                                            bool iw, bool ir)
 {
     union amd_iommu_pte *table, *pde;
-    unsigned int page_sz, flush_flags = 0;
+    unsigned long page_sz = 1UL << (PTE_PER_TABLE_SHIFT * (pde_level - 1));
+    unsigned int flush_flags = 0;
 
     table = map_domain_page(_mfn(pt_mfn));
     pde = &table[pfn_to_pde_idx(dfn, pde_level)];
-    page_sz = 1U << (PTE_PER_TABLE_SHIFT * (pde_level - 1));
 
     if ( (void *)(pde + nr_ptes) > (void *)table + PAGE_SIZE )
     {
@@ -281,7 +281,7 @@ static int iommu_pde_from_dfn(struct dom
     {
         unsigned long mfn, pfn;
 
-        pfn = dfn & ~((1 << (PTE_PER_TABLE_SHIFT * next_level)) - 1);
+        pfn = dfn & ~((1UL << (PTE_PER_TABLE_SHIFT * next_level)) - 1);
         mfn = next_table_mfn;
 
         /* allocate lower level page table */
Recent changes (likely 5fafa6cf529a ["AMD/IOMMU: have callers specify
the target level for page table walks"]) have made Coverity notice a
shift count in iommu_pde_from_dfn() which might in theory grow too
large. While this isn't a problem in practice, address the concern
nevertheless to not leave dangling breakage in case very large
superpages would be enabled at some point.

Coverity ID: 1504264

While there also address a similar issue in set_iommu_ptes_present().
It's not clear to me why Coverity hasn't spotted that one.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v4: New.