From patchwork Sat Feb 11 20:33:41 2017
X-Patchwork-Submitter: John David Anglin
X-Patchwork-Id: 9568175
From: John David Anglin
Date: Sat, 11 Feb 2017 15:33:41 -0500
Subject: [PATCH] parisc: Fix random faults caused by inequivalent aliases
To: "linux-parisc@vger.kernel.org List"
Cc: Helge Deller, James Bottomley

The attached patch fixes various random faults, mainly observed during
gcc compilations.  These faults are not reproducible and are only seen
on machines with PA8800 and PA8900 processors.  This strongly suggests
that they are caused by inequivalent aliases.

The kernel sets up non-equivalent vmap memory regions to do I/O.  These
regions are not equivalently mapped to the offset-map pages, so they
are a likely source of the random memory corruption.

There are two routines, flush_kernel_vmap_range() and
invalidate_kernel_vmap_range(), that flush and invalidate kernel vmap
ranges.  After a lot of testing, I found the following:

1) PG_dcache_dirty is never set on the offset-map pages used by
   invalidate_kernel_vmap_range(), so the for loop never flushes pages
   in the offset map.  PG_dcache_dirty does not really indicate a dirty
   page; it indicates that the flush for the page has been deferred
   (the deferred-flush convention is sketched after the patch).

2) vmalloc_to_page() can return NULL, but in practice this never
   happens here.
3) We need to flush the offset map in both flush_kernel_vmap_range()
   and invalidate_kernel_vmap_range().

I moved the routines from cacheflush.h to cache.c.  This provides
access to parisc_cache_flush_threshold and flush_data_cache().  The
routines now flush the entire data cache when the size of the vmap
region exceeds an appropriate threshold; this should speed them up on
machines with small caches.  On machines that require coherency
(PA8800/PA8900), the threshold is halved because both the offset map
and the vmap range must be flushed.  (A sketch of the calling
convention the routines expect is given after the patch.)

Signed-off-by: John David Anglin <dave.anglin@bell.net>
---
John David Anglin	dave.anglin@bell.net

diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index 7bd69bd43a01..1d8c24dc04d4 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -45,28 +45,9 @@ static inline void flush_kernel_dcache_page(struct page *page)
 #define flush_kernel_dcache_range(start,size) \
 	flush_kernel_dcache_range_asm((start), (start)+(size));
 
-/* vmap range flushes and invalidates.  Architecturally, we don't need
- * the invalidate, because the CPU should refuse to speculate once an
- * area has been flushed, so invalidate is left empty */
-static inline void flush_kernel_vmap_range(void *vaddr, int size)
-{
-	unsigned long start = (unsigned long)vaddr;
-
-	flush_kernel_dcache_range_asm(start, start + size);
-}
-static inline void invalidate_kernel_vmap_range(void *vaddr, int size)
-{
-	unsigned long start = (unsigned long)vaddr;
-	void *cursor = vaddr;
-
-	for ( ; cursor < vaddr + size; cursor += PAGE_SIZE) {
-		struct page *page = vmalloc_to_page(cursor);
-
-		if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
-			flush_kernel_dcache_page(page);
-	}
-	flush_kernel_dcache_range_asm(start, start + size);
-}
+void flush_kernel_vmap_range(void *vaddr, int size);
+void invalidate_kernel_vmap_range(void *vaddr, int size);
 
 #define flush_cache_vmap(start, end)		flush_cache_all()
 #define flush_cache_vunmap(start, end)		flush_cache_all()

diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
index 977f0a4f5ecf..91e594492d19 100644
--- a/arch/parisc/kernel/cache.c
+++ b/arch/parisc/kernel/cache.c
@@ -633,3 +633,54 @@ flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long pfn)
 		__flush_cache_page(vma, vmaddr, PFN_PHYS(pfn));
 	}
 }
+
+/* Nominally, the caller is responsible for flushing the offset-map
+   alias of the vmap area before performing I/O.  This is important
+   on PA8800/PA8900 machines that only support equivalent aliases.
+   Failure to flush the offset map leads to random segmentation faults
+   in user space.  Testing has shown that we need to flush the offset
+   map as well as the vmap range.  Once an area has been flushed,
+   the CPU will not speculate until there is an explicit access.
+*/
+void flush_kernel_vmap_range(void *vaddr, int size)
+{
+	unsigned long threshold = parisc_cache_flush_threshold;
+	unsigned long start = (unsigned long)vaddr;
+	void *cursor = vaddr;
+
+	if (parisc_requires_coherency())
+		threshold >>= 1;
+	if ((unsigned long)size > threshold) {
+		flush_data_cache();
+		return;
+	}
+	if (parisc_requires_coherency()) {
+		for ( ; cursor < vaddr + size; cursor += PAGE_SIZE) {
+			struct page *page = vmalloc_to_page(cursor);
+			flush_kernel_dcache_page(page);
+		}
+	}
+	flush_kernel_dcache_range_asm(start, start + size);
+}
+EXPORT_SYMBOL(flush_kernel_vmap_range);
+
+void invalidate_kernel_vmap_range(void *vaddr, int size)
+{
+	unsigned long threshold = parisc_cache_flush_threshold;
+	unsigned long start = (unsigned long)vaddr;
+	void *cursor = vaddr;
+
+	if (parisc_requires_coherency())
+		threshold >>= 1;
+	if ((unsigned long)size > threshold) {
+		flush_data_cache();
+		return;
+	}
+	if (parisc_requires_coherency()) {
+		for ( ; cursor < vaddr + size; cursor += PAGE_SIZE) {
+			struct page *page = vmalloc_to_page(cursor);
+			flush_kernel_dcache_page(page);
+		}
+	}
+	flush_kernel_dcache_range_asm(start, start + size);
+}
+EXPORT_SYMBOL(invalidate_kernel_vmap_range);
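
For background on finding 1: a minimal, illustrative sketch of the
deferred-flush convention follows.  It is modeled loosely on the
pattern used in arch/parisc/kernel/cache.c (where PG_dcache_dirty is an
alias for the arch-private PG_arch_1 page flag), but it is simplified,
and the sketch_* names are invented for illustration, not actual kernel
symbols.

#include <linux/mm.h>
#include <linux/fs.h>
#include <asm/cacheflush.h>	/* on parisc, PG_dcache_dirty is PG_arch_1 */

/* Dirtying path: if the page has no user mappings yet, record that a
 * flush is owed instead of paying for it immediately. */
static void sketch_flush_dcache_page(struct page *page)
{
	struct address_space *mapping = page_mapping(page);

	if (mapping && !mapping_mapped(mapping)) {
		set_bit(PG_dcache_dirty, &page->flags);	/* flush deferred */
		return;
	}
	flush_kernel_dcache_page(page);	/* flush through the offset map now */
}

/* Mapping path: when a user translation is later inserted, perform the
 * deferred flush exactly once. */
static void sketch_update_mmu_cache(struct page *page)
{
	if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
		flush_kernel_dcache_page(page);
}

The offset-map pages behind an I/O vmap area never take the dirtying
path above, which is why the old invalidate_kernel_vmap_range() loop
never found PG_dcache_dirty set.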
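
For reference, the calling convention the two routines expect from I/O
paths looks roughly like the fragment below.  The do_dma_to_device()
and do_dma_from_device() helpers are hypothetical placeholders for
whatever actually moves the data; only vmalloc(), vfree(),
flush_kernel_vmap_range(), and invalidate_kernel_vmap_range() are real
kernel APIs here.

#include <linux/vmalloc.h>
#include <linux/highmem.h>	/* flush/invalidate_kernel_vmap_range() */
#include <linux/string.h>
#include <linux/errno.h>

static int example_io_on_vmap_buffer(int len)
{
	void *buf = vmalloc(len);

	if (!buf)
		return -ENOMEM;

	memset(buf, 0xA5, len);			/* CPU dirties the vmap alias */
	flush_kernel_vmap_range(buf, len);	/* push dirty lines before the device reads */
	do_dma_to_device(buf, len);		/* hypothetical */

	do_dma_from_device(buf, len);		/* hypothetical: device writes memory */
	invalidate_kernel_vmap_range(buf, len);	/* drop stale lines before the CPU reads */

	vfree(buf);
	return 0;
}

This mirrors the contract in Documentation/cachetlb.txt: flush the vmap
alias before the device reads the buffer, and invalidate it before the
CPU reads data the device wrote.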