Message ID: 20190717071439.14261-3-joro@8bytes.org (mailing list archive)
State: New, archived
Series: Sync unmappings in vmalloc/ioremap areas
On 7/17/19 12:14 AM, Joerg Roedel wrote:
>
> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
> index 4a4049f6d458..d71e167662c3 100644
> --- a/arch/x86/mm/fault.c
> +++ b/arch/x86/mm/fault.c
> @@ -194,11 +194,12 @@ static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
>
>  	pmd = pmd_offset(pud, address);
>  	pmd_k = pmd_offset(pud_k, address);
> -	if (!pmd_present(*pmd_k))
> -		return NULL;
>
> -	if (!pmd_present(*pmd))
> +	if (pmd_present(*pmd) ^ pmd_present(*pmd_k))
>  		set_pmd(pmd, *pmd_k);

Wouldn't:

	if (pmd_present(*pmd) != pmd_present(*pmd_k))
		set_pmd(pmd, *pmd_k);

be a bit more intuitive?  But, either way, these look fine.  For the
series:

Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
On Wed, 17 Jul 2019, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@suse.de>
>
> With huge-page ioremap areas the unmappings also need to be
> synced between all page-tables. Otherwise it can cause data
> corruption when a region is unmapped and later re-used.
>
> Make the vmalloc_sync_one() function ready to sync
> unmappings.
>
> Fixes: 5d72b4fba40ef ('x86, mm: support huge I/O mapping capability I/F')
> Signed-off-by: Joerg Roedel <jroedel@suse.de>
> ---
>  arch/x86/mm/fault.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
> index 4a4049f6d458..d71e167662c3 100644
> --- a/arch/x86/mm/fault.c
> +++ b/arch/x86/mm/fault.c
> @@ -194,11 +194,12 @@ static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
>
>  	pmd = pmd_offset(pud, address);
>  	pmd_k = pmd_offset(pud_k, address);
> -	if (!pmd_present(*pmd_k))
> -		return NULL;
>
> -	if (!pmd_present(*pmd))
> +	if (pmd_present(*pmd) ^ pmd_present(*pmd_k))
>  		set_pmd(pmd, *pmd_k);
> +
> +	if (!pmd_present(*pmd_k))
> +		return NULL;
>  	else
>  		BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));

So in case of unmap, this updates only the first entry in the pgd_list
because vmalloc_sync_all() will break out of the iteration over pgd_list
when NULL is returned from vmalloc_sync_one().

I'm surely missing something, but how is that supposed to sync _all_ page
tables on unmap as the changelog claims?

Thanks,

	tglx
Hi Dave,

On Wed, Jul 17, 2019 at 02:06:01PM -0700, Dave Hansen wrote:
> On 7/17/19 12:14 AM, Joerg Roedel wrote:
> > -	if (!pmd_present(*pmd))
> > +	if (pmd_present(*pmd) ^ pmd_present(*pmd_k))
> >  		set_pmd(pmd, *pmd_k);
>
> Wouldn't:
>
> 	if (pmd_present(*pmd) != pmd_present(*pmd_k))
> 		set_pmd(pmd, *pmd_k);
>
> be a bit more intuitive?

Yes, right. That is much better, I changed it in the patch.

> But, either way, these look fine.  For the series:
>
> Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>

Thanks!

	Joerg
Hi Thomas,

On Wed, Jul 17, 2019 at 11:43:43PM +0200, Thomas Gleixner wrote:
> On Wed, 17 Jul 2019, Joerg Roedel wrote:
> > +
> > +	if (!pmd_present(*pmd_k))
> > +		return NULL;
> >  	else
> >  		BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));
>
> So in case of unmap, this updates only the first entry in the pgd_list
> because vmalloc_sync_all() will break out of the iteration over pgd_list
> when NULL is returned from vmalloc_sync_one().
>
> I'm surely missing something, but how is that supposed to sync _all_ page
> tables on unmap as the changelog claims?

No, you are right, I missed that. It is a bug in this patch, the code
that breaks out of the loop in vmalloc_sync_all() needs to be removed as
well. Will do that in the next version.

Thanks,

	Joerg
Joerg,

On Thu, 18 Jul 2019, Joerg Roedel wrote:
> On Wed, Jul 17, 2019 at 11:43:43PM +0200, Thomas Gleixner wrote:
> > On Wed, 17 Jul 2019, Joerg Roedel wrote:
> > > +
> > > +	if (!pmd_present(*pmd_k))
> > > +		return NULL;
> > >  	else
> > >  		BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));
> >
> > So in case of unmap, this updates only the first entry in the pgd_list
> > because vmalloc_sync_all() will break out of the iteration over pgd_list
> > when NULL is returned from vmalloc_sync_one().
> >
> > I'm surely missing something, but how is that supposed to sync _all_ page
> > tables on unmap as the changelog claims?
>
> No, you are right, I missed that. It is a bug in this patch, the code
> that breaks out of the loop in vmalloc_sync_all() needs to be removed as
> well. Will do that in the next version.

I assume that p4d/pud do not need the pmd treatment, but a comment
explaining why would be appreciated.

Thanks,

	tglx
On Thu, Jul 18, 2019 at 11:04:57AM +0200, Thomas Gleixner wrote:
> On Thu, 18 Jul 2019, Joerg Roedel wrote:
> > No, you are right, I missed that. It is a bug in this patch, the code
> > that breaks out of the loop in vmalloc_sync_all() needs to be removed as
> > well. Will do that in the next version.
>
> I assume that p4d/pud do not need the pmd treatment, but a comment
> explaining why would be appreciated.

Yes, p4d and pud don't need to be handled here, as the code is 32-bit
only and there p4d is folded anyway. The pud is only relevant for PAE
and will already be mapped when the page-table is created (for
performance reasons, because pud is the top level with PAE and mapping
it later would require a TLB flush). The pud with PAE also never changes
during the life-time of the page-table, because we can't map a huge-page
there.

I will put that into a comment.

Thanks,

	Joerg
On Thu, Jul 18, 2019 at 11:04:57AM +0200, Thomas Gleixner wrote:
> Joerg,
>
> On Thu, 18 Jul 2019, Joerg Roedel wrote:
> > On Wed, Jul 17, 2019 at 11:43:43PM +0200, Thomas Gleixner wrote:
> > > On Wed, 17 Jul 2019, Joerg Roedel wrote:
> > > > +
> > > > +	if (!pmd_present(*pmd_k))
> > > > +		return NULL;
> > > >  	else
> > > >  		BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));
> > >
> > > So in case of unmap, this updates only the first entry in the pgd_list
> > > because vmalloc_sync_all() will break out of the iteration over pgd_list
> > > when NULL is returned from vmalloc_sync_one().
> > >
> > > I'm surely missing something, but how is that supposed to sync _all_ page
> > > tables on unmap as the changelog claims?
> >
> > No, you are right, I missed that. It is a bug in this patch, the code
> > that breaks out of the loop in vmalloc_sync_all() needs to be removed as
> > well. Will do that in the next version.
>
> I assume that p4d/pud do not need the pmd treatment, but a comment
> explaining why would be appreciated.

Actually there is already a comment in this function explaining why p4d
and pud don't need any treatment:

	/*
	 * set_pgd(pgd, *pgd_k); here would be useless on PAE
	 * and redundant with the set_pmd() on non-PAE. As would
	 * set_p4d/set_pud.
	 */

I couldn't say it with less words :)

Regards,

	Joerg
On Fri, 19 Jul 2019, Joerg Roedel wrote:
> On Thu, Jul 18, 2019 at 11:04:57AM +0200, Thomas Gleixner wrote:
> > I assume that p4d/pud do not need the pmd treatment, but a comment
> > explaining why would be appreciated.
>
> Actually there is already a comment in this function explaining why p4d
> and pud don't need any treatment:
>
> 	/*
> 	 * set_pgd(pgd, *pgd_k); here would be useless on PAE
> 	 * and redundant with the set_pmd() on non-PAE. As would
> 	 * set_p4d/set_pud.
> 	 */

Indeed. Why did I think there was none?

> I couldn't say it with less words :)

It's perfectly fine.

Thanks,

	tglx
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 4a4049f6d458..d71e167662c3 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -194,11 +194,12 @@ static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
 
 	pmd = pmd_offset(pud, address);
 	pmd_k = pmd_offset(pud_k, address);
-	if (!pmd_present(*pmd_k))
-		return NULL;
 
-	if (!pmd_present(*pmd))
+	if (pmd_present(*pmd) ^ pmd_present(*pmd_k))
 		set_pmd(pmd, *pmd_k);
+
+	if (!pmd_present(*pmd_k))
+		return NULL;
 	else
 		BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));