From patchwork Wed Sep 9 03:17:06 2009
X-Patchwork-Submitter: James Bottomley
X-Patchwork-Id: 46312
Subject: [PATCH 1/5] mm: add coherence API for DMA to vmalloc/vmap areas
From: James Bottomley
To: Russell King
Cc: Parisc List, Linux Filesystem Mailing List,
	linux-arch@vger.kernel.org, Christoph Hellwig
In-Reply-To: <1252466070.13003.365.camel@mulgrave.site>
References: <1252434469.13003.3.camel@mulgrave.site>
	 <20090908190031.GF6538@flint.arm.linux.org.uk>
	 <1252437112.13003.39.camel@mulgrave.site>
	 <20090908201619.GG6538@flint.arm.linux.org.uk>
	 <1252442352.13003.132.camel@mulgrave.site>
	 <20090908213910.GH6538@flint.arm.linux.org.uk>
	 <1252466070.13003.365.camel@mulgrave.site>
Date: Wed, 09 Sep 2009 03:17:06 +0000
Message-Id: <1252466226.13003.367.camel@mulgrave.site>
X-Mailing-List: linux-parisc@vger.kernel.org

On Virtually Indexed architectures (which don't do automatic alias
resolution in their caches), we have to
flush via the correct virtual address to prepare pages for DMA.  On some
architectures (like arm) we cannot prevent the CPU from doing data movein
along the alias (and thus giving stale read data), so we not only have to
introduce a flush API to push dirty cache lines out, but also an
invalidate API to kill inconsistent cache lines that may have moved in
before the DMA changed the data.

Signed-off-by: James Bottomley
---
 include/linux/highmem.h |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 211ff44..9719952 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -17,6 +17,12 @@ static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page
 static inline void flush_kernel_dcache_page(struct page *page)
 {
 }
+static inline void flush_kernel_dcache_addr(void *vaddr)
+{
+}
+static inline void invalidate_kernel_dcache_addr(void *vaddr)
+{
+}
 #endif
 
 #include