From patchwork Tue May 12 04:30:17 2015
X-Patchwork-Id: 6385611
Subject: [PATCH v3 09/11] block: convert kmap helpers to kmap_atomic_pfn_t()
From: Dan Williams
To: linux-kernel@vger.kernel.org
Cc: linux-arch@vger.kernel.org, axboe@kernel.dk, riel@redhat.com,
    linux-nvdimm@lists.01.org, david@fromorbit.com, mingo@kernel.org,
    linux-fsdevel@vger.kernel.org, mgorman@suse.de, j.glisse@gmail.com,
    akpm@linux-foundation.org, hch@lst.de
Date: Tue, 12 May 2015 00:30:17 -0400
Message-ID: <20150512043017.11521.98110.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <20150512042629.11521.70356.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <20150512042629.11521.70356.stgit@dwillia2-desk3.amr.corp.intel.com>

Convert the generic helpers to the __pfn_t version of kmap_atomic() in
support of generically enabling "page-less" block i/o.
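For context, a minimal sketch of how an existing user of these helpers keeps
working once the mapping is done by __pfn_t rather than struct page. Only
bvec_kmap_irq()/bvec_kunmap_irq() come from the patch below; zero_bvec() is a
hypothetical example caller, not code from this series:

#include <linux/bio.h>
#include <linux/string.h>

/* Hypothetical caller: the page-vs-pfn detail stays hidden in the helper. */
static void zero_bvec(struct bio_vec *bvec)
{
	unsigned long flags;
	char *buf;

	buf = bvec_kmap_irq(bvec, &flags);	/* address of the bvec data */
	memset(buf, 0, bvec->bv_len);
	bvec_kunmap_irq(buf, &flags);
}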
Signed-off-by: Dan Williams
---
 include/linux/bio.h |   11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/include/linux/bio.h b/include/linux/bio.h
index a569e6ea1cd2..6537d78e78b3 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -161,7 +161,7 @@ static inline void *bio_data(struct bio *bio)
  * I/O completely on that queue (see ide-dma for example)
  */
 #define __bio_kmap_atomic(bio, iter)				\
-	(kmap_atomic(bvec_page(bio_iter_iovec((bio), (iter)))) +	\
+	(kmap_atomic_pfn_t(bio_iter_iovec((bio), (iter)).bv_pfn) +	\
 		bio_iter_iovec((bio), (iter)).bv_offset)
 
 #define __bio_kunmap_atomic(addr)	kunmap_atomic(addr)
@@ -491,7 +491,7 @@ static inline char *bvec_kmap_irq(struct bio_vec *bvec, unsigned long *flags)
 	 * balancing is a lot nicer this way
 	 */
 	local_irq_save(*flags);
-	addr = (unsigned long) kmap_atomic(bvec_page(bvec));
+	addr = (unsigned long) kmap_atomic_pfn_t(bvec->bv_pfn);
 
 	BUG_ON(addr & ~PAGE_MASK);
 
@@ -502,18 +502,21 @@ static inline void bvec_kunmap_irq(char *buffer, unsigned long *flags)
 {
 	unsigned long ptr = (unsigned long) buffer & PAGE_MASK;
 
-	kunmap_atomic((void *) ptr);
+	kunmap_atomic_pfn_t((void *) ptr);
 	local_irq_restore(*flags);
 }
 
 #else
 static inline char *bvec_kmap_irq(struct bio_vec *bvec, unsigned long *flags)
 {
-	return page_address(bvec_page(bvec)) + bvec->bv_offset;
+	return kmap_atomic_pfn_t(bvec->bv_pfn) + bvec->bv_offset;
 }
 
 static inline void bvec_kunmap_irq(char *buffer, unsigned long *flags)
 {
+	unsigned long ptr = (unsigned long) buffer & PAGE_MASK;
+
+	kunmap_atomic_pfn_t((void *) ptr);
 	*flags = 0;
 }
 #endif
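For illustration only, a sketch of the converted __bio_kmap_atomic() /
__bio_kunmap_atomic() pair in use while walking a bio. sum_bio_bytes() is a
hypothetical function, not part of this series; it assumes the mainline
bio_for_each_segment() iterator and simply shows that callers still pass
(bio, iter) and get back a kernel virtual address, whether the segment is
page-backed or pfn-only:

#include <linux/bio.h>

/* Hypothetical walker: map each segment, accumulate its bytes, unmap. */
static void sum_bio_bytes(struct bio *bio, u8 *sum)
{
	struct bvec_iter iter;
	struct bio_vec bvec;

	bio_for_each_segment(bvec, bio, iter) {
		char *p = __bio_kmap_atomic(bio, iter);
		unsigned int i;

		for (i = 0; i < bvec.bv_len; i++)
			*sum += p[i];
		__bio_kunmap_atomic(p);
	}
}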