Message ID | 20170411174233.21902-9-oohall@gmail.com (mailing list archive) |
---|---|
State | New, archived |
Hi Oliver,

> From: Rashmica Gupta <rashmica.g@gmail.com>
>
> Adds support for removing bolted (i.e kernel linear mapping) mappings
> on powernv. This is needed to support memory hot unplug operations
> which are required for the teardown of DAX/PMEM devices.
>
> Cc: Rashmica Gupta <rashmica.g@gmail.com>
> Cc: Anton Blanchard <anton@samba.org>
> Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
> ---
> Could the original author of this add their S-o-b? I pulled it out of
> Rashmica's memtrace patch, but I remember someone saying Anton wrote
> it originally.

I did.

Signed-off-by: Anton Blanchard <anton@samba.org>

Anton

> ---
> arch/powerpc/mm/hash_native_64.c | 31 +++++++++++++++++++++++++++++++
> 1 file changed, 31 insertions(+)
>
> diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
> index 65bb8f33b399..9ba91d4905a4 100644
> --- a/arch/powerpc/mm/hash_native_64.c
> +++ b/arch/powerpc/mm/hash_native_64.c
> @@ -407,6 +407,36 @@ static void native_hpte_updateboltedpp(unsigned long newpp, unsigned long ea,
>  	tlbie(vpn, psize, psize, ssize, 0);
>  }
>
> +/*
> + * Remove a bolted kernel entry. Memory hotplug uses this.
> + *
> + * No need to lock here because we should be the only user.
> + */
> +static int native_hpte_removebolted(unsigned long ea, int psize, int ssize)
> +{
> +	unsigned long vpn;
> +	unsigned long vsid;
> +	long slot;
> +	struct hash_pte *hptep;
> +
> +	vsid = get_kernel_vsid(ea, ssize);
> +	vpn = hpt_vpn(ea, vsid, ssize);
> +
> +	slot = native_hpte_find(vpn, psize, ssize);
> +	if (slot == -1)
> +		return -ENOENT;
> +
> +	hptep = htab_address + slot;
> +
> +	/* Invalidate the hpte */
> +	hptep->v = 0;
> +
> +	/* Invalidate the TLB */
> +	tlbie(vpn, psize, psize, ssize, 0);
> +	return 0;
> +}
> +
> +
> static void native_hpte_invalidate(unsigned long slot, unsigned long vpn,
> 			int bpsize, int apsize, int ssize, int local)
> {
> @@ -725,6 +755,7 @@ void __init hpte_init_native(void)
> 	mmu_hash_ops.hpte_invalidate = native_hpte_invalidate;
> 	mmu_hash_ops.hpte_updatepp = native_hpte_updatepp;
> 	mmu_hash_ops.hpte_updateboltedpp = native_hpte_updateboltedpp;
> +	mmu_hash_ops.hpte_removebolted = native_hpte_removebolted;
> 	mmu_hash_ops.hpte_insert = native_hpte_insert;
> 	mmu_hash_ops.hpte_remove = native_hpte_remove;
> 	mmu_hash_ops.hpte_clear_all = native_hpte_clear;
Hi Oliver,

On Wed, 12 Apr 2017 08:50:56 +1000 Anton Blanchard <anton@samba.org> wrote:
>
> > From: Rashmica Gupta <rashmica.g@gmail.com>
> >
> > Adds support for removing bolted (i.e kernel linear mapping) mappings
> > on powernv. This is needed to support memory hot unplug operations
> > which are required for the teardown of DAX/PMEM devices.
> >
> > Cc: Rashmica Gupta <rashmica.g@gmail.com>
> > Cc: Anton Blanchard <anton@samba.org>
> > Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
> > ---
> > Could the original author of this add their S-o-b? I pulled it out of
> > Rashmica's memtrace patch, but I remember someone saying Anton wrote
> > it originally.
>
> I did.
>
> Signed-off-by: Anton Blanchard <anton@samba.org>

If you are going to claim that Rashmica authored this patch (and you do
with the From: line above), then you need her Signed-off-by as well.
On Wed, 2017-04-12 at 03:42 +1000, Oliver O'Halloran wrote:
> From: Rashmica Gupta <rashmica.g@gmail.com>
>
> Adds support for removing bolted (i.e kernel linear mapping) mappings on
> powernv. This is needed to support memory hot unplug operations which
> are required for the teardown of DAX/PMEM devices.
>
> Cc: Rashmica Gupta <rashmica.g@gmail.com>
> Cc: Anton Blanchard <anton@samba.org>
> Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
> ---
> Could the original author of this add their S-o-b? I pulled it out of
> Rashmica's memtrace patch, but I remember someone saying Anton wrote
> it originally.
> ---
> arch/powerpc/mm/hash_native_64.c | 31 +++++++++++++++++++++++++++++++
> 1 file changed, 31 insertions(+)
>
> diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
> index 65bb8f33b399..9ba91d4905a4 100644
> --- a/arch/powerpc/mm/hash_native_64.c
> +++ b/arch/powerpc/mm/hash_native_64.c
> @@ -407,6 +407,36 @@ static void native_hpte_updateboltedpp(unsigned long newpp, unsigned long ea,
>  	tlbie(vpn, psize, psize, ssize, 0);
>  }
>
> +/*
> + * Remove a bolted kernel entry. Memory hotplug uses this.
> + *
> + * No need to lock here because we should be the only user.

As long as this is after the necessary isolation and is called from
arch_remove_memory(), I think we should be fine.

> + */
> +static int native_hpte_removebolted(unsigned long ea, int psize, int ssize)
> +{
> +	unsigned long vpn;
> +	unsigned long vsid;
> +	long slot;
> +	struct hash_pte *hptep;
> +
> +	vsid = get_kernel_vsid(ea, ssize);
> +	vpn = hpt_vpn(ea, vsid, ssize);
> +
> +	slot = native_hpte_find(vpn, psize, ssize);
> +	if (slot == -1)
> +		return -ENOENT;

If slot == -1, it means someone else removed the HPTE entry? Are we racing?
I suspect we should never hit this situation during hotunplug, specifically
since this is bolted.

> +
> +	hptep = htab_address + slot;
> +
> +	/* Invalidate the hpte */
> +	hptep->v = 0;

Under DEBUG or otherwise, I would add more checks, like:

1. Was hpte_v & HPTE_V_VALID and BOLTED set? If not, we've already
   invalidated that hpte and we can skip the tlbie. Since this was bolted
   you might be right that it is always valid and bolted.

> +
> +	/* Invalidate the TLB */
> +	tlbie(vpn, psize, psize, ssize, 0);

The API also does not clear linear_map_hash_slots[] under DEBUG_PAGEALLOC.

> +	return 0;
> +}
> +
> +

Balbir Singh.
On 12/04/17 10:18, Stephen Rothwell wrote:
> Hi Oliver,
>
> On Wed, 12 Apr 2017 08:50:56 +1000 Anton Blanchard <anton@samba.org> wrote:
>>> From: Rashmica Gupta <rashmica.g@gmail.com>
>>>
>>> Adds support for removing bolted (i.e kernel linear mapping) mappings
>>> on powernv. This is needed to support memory hot unplug operations
>>> which are required for the teardown of DAX/PMEM devices.
>>>
>>> Cc: Rashmica Gupta <rashmica.g@gmail.com>
>>> Cc: Anton Blanchard <anton@samba.org>
>>> Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
>>> ---
>>> Could the original author of this add their S-o-b? I pulled it out of
>>> Rashmica's memtrace patch, but I remember someone saying Anton wrote
>>> it originally.
>>
>> I did.
>>
>> Signed-off-by: Anton Blanchard <anton@samba.org>
>
> If you are going to claim that Rashmica authored this patch (and you do
> with the From: line above), then you need her Signed-off-by as well.

Oliver, can you change the 'From' to a 'Reviewed-by'?
On Wed, Apr 12, 2017 at 11:53 AM, Balbir Singh <bsingharora@gmail.com> wrote:
> On Wed, 2017-04-12 at 03:42 +1000, Oliver O'Halloran wrote:
>> From: Rashmica Gupta <rashmica.g@gmail.com>
>>
>> Adds support for removing bolted (i.e kernel linear mapping) mappings on
>> powernv. This is needed to support memory hot unplug operations which
>> are required for the teardown of DAX/PMEM devices.
>>
>> Cc: Rashmica Gupta <rashmica.g@gmail.com>
>> Cc: Anton Blanchard <anton@samba.org>
>> Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
>> ---
>> Could the original author of this add their S-o-b? I pulled it out of
>> Rashmica's memtrace patch, but I remember someone saying Anton wrote
>> it originally.
>> ---
>> arch/powerpc/mm/hash_native_64.c | 31 +++++++++++++++++++++++++++++++
>> 1 file changed, 31 insertions(+)
>>
>> diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
>> index 65bb8f33b399..9ba91d4905a4 100644
>> --- a/arch/powerpc/mm/hash_native_64.c
>> +++ b/arch/powerpc/mm/hash_native_64.c
>> @@ -407,6 +407,36 @@ static void native_hpte_updateboltedpp(unsigned long newpp, unsigned long ea,
>>  	tlbie(vpn, psize, psize, ssize, 0);
>>  }
>>
>> +/*
>> + * Remove a bolted kernel entry. Memory hotplug uses this.
>> + *
>> + * No need to lock here because we should be the only user.
>
> As long as this is after the necessary isolation and is called from
> arch_remove_memory(), I think we should be fine
>
>> + */
>> +static int native_hpte_removebolted(unsigned long ea, int psize, int ssize)
>> +{
>> +	unsigned long vpn;
>> +	unsigned long vsid;
>> +	long slot;
>> +	struct hash_pte *hptep;
>> +
>> +	vsid = get_kernel_vsid(ea, ssize);
>> +	vpn = hpt_vpn(ea, vsid, ssize);
>> +
>> +	slot = native_hpte_find(vpn, psize, ssize);
>> +	if (slot == -1)
>> +		return -ENOENT;
>
> If slot == -1, it means someone else removed the HPTE entry? Are we racing?
> I suspect we should never hit this situation during hotunplug, specifically
> since this is bolted.

Or the slot was never populated in the first place. I'd rather keep the
current behaviour since it aligns with the behaviour of
pSeries_lpar_hpte_removebolted and we might hit these situations in the
future if the sub-section hotplug patches are merged (big if...).

>> +
>> +	hptep = htab_address + slot;
>> +
>> +	/* Invalidate the hpte */
>> +	hptep->v = 0;
>
> Under DEBUG or otherwise, I would add more checks, like:
>
> 1. Was hpte_v & HPTE_V_VALID and BOLTED set? If not, we've already
>    invalidated that hpte and we can skip the tlbie. Since this was bolted
>    you might be right that it is always valid and bolted.

A VM_WARN_ON() if the bolted bit is clear might be appropriate. We don't
need to check the valid bit since native_hpte_find() will fail if it's
cleared.

>> +
>> +	/* Invalidate the TLB */
>> +	tlbie(vpn, psize, psize, ssize, 0);
>
> The API also does not clear linear_map_hash_slots[] under DEBUG_PAGEALLOC.

I'm not sure what API you're referring to here. The tracking for
linear_map_hash_slots[] is agnostic of mmu_hash_ops so we shouldn't be
touching it here. It also looks like DEBUG_PAGEALLOC is a bit broken
with hotplugged memory anyway so I think that's a fix for a different
patch.

>> +	return 0;
>> +}
>> +
>> +
>
> Balbir Singh.
Oliver O'Halloran <oohall@gmail.com> writes:
> On Wed, Apr 12, 2017 at 11:53 AM, Balbir Singh <bsingharora@gmail.com> wrote:
>>
>> The API also does not clear linear_map_hash_slots[] under DEBUG_PAGEALLOC.
>
> I'm not sure what API you're referring to here. The tracking for
> linear_map_hash_slots[] is agnostic of mmu_hash_ops so we shouldn't be
> touching it here. It also looks like DEBUG_PAGEALLOC is a bit broken
> with hotplugged memory anyway so I think that's a fix for a different
> patch.

It's old and has probably bit-rotted, so I'm happy for it to be fixed up
later if necessary.

cheers
diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
index 65bb8f33b399..9ba91d4905a4 100644
--- a/arch/powerpc/mm/hash_native_64.c
+++ b/arch/powerpc/mm/hash_native_64.c
@@ -407,6 +407,36 @@ static void native_hpte_updateboltedpp(unsigned long newpp, unsigned long ea,
 	tlbie(vpn, psize, psize, ssize, 0);
 }
 
+/*
+ * Remove a bolted kernel entry. Memory hotplug uses this.
+ *
+ * No need to lock here because we should be the only user.
+ */
+static int native_hpte_removebolted(unsigned long ea, int psize, int ssize)
+{
+	unsigned long vpn;
+	unsigned long vsid;
+	long slot;
+	struct hash_pte *hptep;
+
+	vsid = get_kernel_vsid(ea, ssize);
+	vpn = hpt_vpn(ea, vsid, ssize);
+
+	slot = native_hpte_find(vpn, psize, ssize);
+	if (slot == -1)
+		return -ENOENT;
+
+	hptep = htab_address + slot;
+
+	/* Invalidate the hpte */
+	hptep->v = 0;
+
+	/* Invalidate the TLB */
+	tlbie(vpn, psize, psize, ssize, 0);
+	return 0;
+}
+
+
 static void native_hpte_invalidate(unsigned long slot, unsigned long vpn,
 			int bpsize, int apsize, int ssize, int local)
 {
@@ -725,6 +755,7 @@ void __init hpte_init_native(void)
 	mmu_hash_ops.hpte_invalidate = native_hpte_invalidate;
 	mmu_hash_ops.hpte_updatepp = native_hpte_updatepp;
 	mmu_hash_ops.hpte_updateboltedpp = native_hpte_updateboltedpp;
+	mmu_hash_ops.hpte_removebolted = native_hpte_removebolted;
 	mmu_hash_ops.hpte_insert = native_hpte_insert;
 	mmu_hash_ops.hpte_remove = native_hpte_remove;
 	mmu_hash_ops.hpte_clear_all = native_hpte_clear;