From patchwork Thu Sep 17 23:07:00 2009
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: James Bottomley
X-Patchwork-Id: 48422
X-Patchwork-Delegate: kyle@mcmartin.ca
Received: from vger.kernel.org (vger.kernel.org [209.132.176.167])
	by demeter.kernel.org (8.14.2/8.14.2) with ESMTP id n8HN7pdK003022
	for ; Thu, 17 Sep 2009 23:07:51 GMT
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753995AbZIQXHb (ORCPT );
	Thu, 17 Sep 2009 19:07:31 -0400
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id S1753786AbZIQXHa (ORCPT );
	Thu, 17 Sep 2009 19:07:30 -0400
Received: from cantor.suse.de ([195.135.220.2]:40316 "EHLO mx1.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1753847AbZIQXH1 (ORCPT );
	Thu, 17 Sep 2009 19:07:27 -0400
Received: from relay1.suse.de (mail2.suse.de [195.135.221.8])
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mx1.suse.de (Postfix) with ESMTP id 7DEB693F19;
	Fri, 18 Sep 2009 01:07:30 +0200 (CEST)
From: James Bottomley
To: linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-parisc@vger.kernel.org
Cc: Russell King , Christoph Hellwig , Paul Mundt ,
	James Bottomley , James Bottomley
Subject: [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages
Date: Thu, 17 Sep 2009 18:07:00 -0500
Message-Id: <1253228821-4700-6-git-send-email-James.Bottomley@suse.de>
X-Mailer: git-send-email 1.6.3.3
In-Reply-To: <1253228821-4700-5-git-send-email-James.Bottomley@suse.de>
References: <1253228821-4700-1-git-send-email-James.Bottomley@suse.de>
	<1253228821-4700-2-git-send-email-James.Bottomley@suse.de>
	<1253228821-4700-3-git-send-email-James.Bottomley@suse.de>
	<1253228821-4700-4-git-send-email-James.Bottomley@suse.de>
	<1253228821-4700-5-git-send-email-James.Bottomley@suse.de>
Sender: linux-parisc-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-parisc@vger.kernel.org

From: James Bottomley

This updates bio_map_kern() to check for pages in the vmalloc address
range and call the new kernel flushing APIs if they are.  This should
allow any kernel user to pass a vmalloc/vmap area to the block layer.

Signed-off-by: James Bottomley
---
 fs/bio.c |   20 ++++++++++++++++++--
 1 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/fs/bio.c b/fs/bio.c
index 7673800..0cf7b79 100644
--- a/fs/bio.c
+++ b/fs/bio.c
@@ -1120,6 +1120,14 @@ void bio_unmap_user(struct bio *bio)
 
 static void bio_map_kern_endio(struct bio *bio, int err)
 {
+	void *kaddr = bio->bi_private;
+
+	if (is_vmalloc_addr(kaddr)) {
+		int i;
+
+		for (i = 0; i < bio->bi_vcnt; i++)
+			invalidate_kernel_dcache_addr(kaddr + i * PAGE_SIZE);
+	}
 	bio_put(bio);
 }
 
@@ -1138,9 +1146,12 @@ static struct bio *__bio_map_kern(struct request_queue *q, void *data,
 	if (!bio)
 		return ERR_PTR(-ENOMEM);
 
+	bio->bi_private = data;
+
 	offset = offset_in_page(kaddr);
 	for (i = 0; i < nr_pages; i++) {
 		unsigned int bytes = PAGE_SIZE - offset;
+		struct page *page;
 
 		if (len <= 0)
 			break;
@@ -1148,8 +1159,13 @@ static struct bio *__bio_map_kern(struct request_queue *q, void *data,
 		if (bytes > len)
 			bytes = len;
 
-		if (bio_add_pc_page(q, bio, virt_to_page(data), bytes,
-				    offset) < bytes)
+		if (is_vmalloc_addr(data)) {
+			flush_kernel_dcache_addr(data);
+			page = vmalloc_to_page(data);
+		} else
+			page = virt_to_page(data);
+
+		if (bio_add_pc_page(q, bio, page, bytes, offset) < bytes)
 			break;
 
 		data += bytes;
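
[Not part of the patch: a minimal caller sketch for anyone who wants to poke
at the change.  It assumes a valid struct request_queue *q from a real block
device; the function name try_map_vmalloc() and the buffer size are made up
for illustration.  It only exercises the mapping path and does not submit any
I/O, so the bio is simply dropped again.]

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/err.h>

/*
 * Hypothetical sketch, not part of this patch: hand a vmalloc'd buffer
 * to bio_map_kern().  With the change above the helper picks the backing
 * pages via vmalloc_to_page() and flushes each page, instead of relying
 * on virt_to_page(), which is only valid for the linear kernel mapping.
 */
static int try_map_vmalloc(struct request_queue *q)
{
	unsigned int len = 4 * PAGE_SIZE;
	void *buf;
	struct bio *bio;

	buf = vmalloc(len);		/* pages are not physically contiguous */
	if (!buf)
		return -ENOMEM;

	bio = bio_map_kern(q, buf, len, GFP_KERNEL);
	if (IS_ERR(bio)) {
		vfree(buf);
		return PTR_ERR(bio);
	}

	printk(KERN_INFO "vmalloc buffer mapped into %d bio segments\n",
	       bio->bi_vcnt);

	/* mapping only -- no submit_bio(), so just drop the reference */
	bio_put(bio);
	vfree(buf);
	return 0;
}

For the write-out direction the cache aliases are flushed at mapping time via
flush_kernel_dcache_addr(); for reads, bio_map_kern_endio() invalidates them
when the I/O completes, before the caller looks at the data through its
vmalloc mapping.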