| Message ID | BLU0-SMTP6386DBEA3C99F954FD63F7972F0@phx.gbl (mailing list archive) |
|---|---|
| State | Superseded |
On 13.01.2013 22:52, John David Anglin wrote:
> This patch goes a long way toward fixing the minifail bug, and it
> significantly improves the stability of SMP machines such as the rp3440.
> When write protecting a page for COW, we need to purge the existing
> translation. Otherwise, the COW break doesn't occur as expected.
>
> The patch assumes the kernel will flush the page when it does the copy.
>
> Signed-off-by: John David Anglin <dave.anglin@bell.net>

CC stable?

--
To unsubscribe from this list: send the line "unsubscribe linux-parisc" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Sun, 2013-01-13 at 16:52 -0500, John David Anglin wrote:
> +void purge_tlb_entries(struct mm_struct *mm, unsigned long addr)
> +{
> +	unsigned long flags, sid;
> +
> +	/* Note: purge_tlb_entries can be called at startup with
> +	   no context. */
> +
> +	/* Disable preemption while we play with %sr1. */
> +	preempt_disable();
> +	sid = mfsp(1);

There's no need at all to save and restore %sr1, is there? It's defined
to be a volatile register. As long as you make sure nothing gets in to
change its value, you never need to restore the previous one.

James

> +	mtsp(mm->context, 1);
> +	purge_tlb_start(flags);
> +	pdtlb(addr);
> +	pitlb(addr);
> +	purge_tlb_end(flags);
> +	mtsp(sid, 1);
> +	preempt_enable();
> +}
diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
index ee99f23..2c6dedb 100644
--- a/arch/parisc/include/asm/pgtable.h
+++ b/arch/parisc/include/asm/pgtable.h
@@ -12,11 +12,10 @@
 #include <linux/bitops.h>
 #include <linux/spinlock.h>
+#include <linux/mm_types.h>
 #include <asm/processor.h>
 #include <asm/cache.h>
 
-struct vm_area_struct;
-
 /*
  * kern_addr_valid(ADDR) tests if ADDR is pointing to valid kernel
  * memory.  For the return value to be meaningful, ADDR must be >=
@@ -40,7 +39,14 @@ struct vm_area_struct;
 	do{						\
 		*(pteptr) = (pteval);			\
 	} while(0)
-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
+
+extern void purge_tlb_entries(struct mm_struct *, unsigned long);
+
+#define set_pte_at(mm,addr,ptep, pteval)		\
+	do{						\
+		set_pte(ptep,pteval);			\
+		purge_tlb_entries(mm,addr);		\
+	} while(0)
 
 #endif /* !__ASSEMBLY__ */
 
@@ -466,6 +472,7 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
 		old = pte_val(*ptep);
 		new = pte_val(pte_wrprotect(__pte (old)));
 	} while (cmpxchg((unsigned long *) ptep, old, new) != old);
+	purge_tlb_entries(mm, addr);
 #else
 	pte_t old_pte = *ptep;
 	set_pte_at(mm, addr, ptep, pte_wrprotect(old_pte));
diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
index 48e16dc..467d902 100644
--- a/arch/parisc/kernel/cache.c
+++ b/arch/parisc/kernel/cache.c
@@ -419,6 +419,26 @@ void kunmap_parisc(void *addr)
 EXPORT_SYMBOL(kunmap_parisc);
 #endif
 
+void purge_tlb_entries(struct mm_struct *mm, unsigned long addr)
+{
+	unsigned long flags, sid;
+
+	/* Note: purge_tlb_entries can be called at startup with
+	   no context. */
+
+	/* Disable preemption while we play with %sr1. */
+	preempt_disable();
+	sid = mfsp(1);
+	mtsp(mm->context,1);
+	purge_tlb_start(flags);
+	pdtlb(addr);
+	pitlb(addr);
+	purge_tlb_end(flags);
+	mtsp(sid,1);
+	preempt_enable();
+}
+EXPORT_SYMBOL(purge_tlb_entries);
+
 void __flush_tlb_range(unsigned long sid, unsigned long start,
 		       unsigned long end)
 {
This patch goes a long way toward fixing the minifail bug, and it
significantly improves the stability of SMP machines such as the rp3440.
When write protecting a page for COW, we need to purge the existing
translation. Otherwise, the COW break doesn't occur as expected.

The patch assumes the kernel will flush the page when it does the copy.

Signed-off-by: John David Anglin <dave.anglin@bell.net>

Dave

--
John David Anglin	dave.anglin@bell.net