From patchwork Tue Mar 31 13:25:35 2015
X-Patchwork-Submitter: Boaz Harrosh
X-Patchwork-Id: 6129901
Message-ID: <551AA04F.60209@plexistor.com>
Date: Tue, 31 Mar 2015 16:25:35 +0300
From: Boaz Harrosh
To: Christoph Hellwig
Cc: axboe@kernel.dk, linux-nvdimm@ml01.01.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
In-Reply-To: <551A9EB3.8000605@plexistor.com>
References: <1427358764-6126-1-git-send-email-hch@lst.de>
 <55143A8B.2060304@plexistor.com> <20150331092526.GA25958@lst.de>
 <551A9EB3.8000605@plexistor.com>
Subject: [Linux-nvdimm] [PATCH 3/6] SQUASHME: pmem: Streamline pmem driver
Remove 89 lines of code and do a single memcpy instead. The reason it was
done this way in brd (badly, BTW) is that the destination memory there is
addressed page by page. With pmem the destination is contiguous, so we can
copy any size in one go.

[v2] Remove the BUG_ON checks on out-of-range IO. The core already does
these checks, and I did not see them done in other drivers.

Signed-off-by: Boaz Harrosh
---
 drivers/block/pmem.c | 112 ++++++++++----------------------------------------
 1 file changed, 22 insertions(+), 90 deletions(-)
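Note (illustrative, not part of the patch): the page-crossing case that the
removed helpers handled can be seen in a few lines of userspace C. All names
below are made up for the example; it assumes 4K pages, 512-byte sectors, and
transfers of at most one page, exactly as the old helpers did.

/*
 * Userspace sketch (illustrative only, not kernel code): a transfer that
 * crosses a page boundary must be split in two when the destination is
 * reached page by page, but collapses to one memcpy() when the whole
 * destination is a single contiguous mapping, as it is for pmem.
 */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define SECTOR_SHIFT		9
#define PAGE_SHIFT		12
#define PAGE_SIZE		(1UL << PAGE_SHIFT)
#define PAGE_SECTORS_SHIFT	(PAGE_SHIFT - SECTOR_SHIFT)
#define PAGE_SECTORS		(1UL << PAGE_SECTORS_SHIFT)

/* brd-style: locate the page, split the copy at the page boundary.
 * Assumes n <= PAGE_SIZE, as the removed helpers asserted. */
static void copy_split(char *base, const char *src, size_t sector, size_t n)
{
	char *page = base + ((sector >> PAGE_SECTORS_SHIFT) << PAGE_SHIFT);
	size_t off = (sector & (PAGE_SECTORS - 1)) << SECTOR_SHIFT;
	size_t first = n < PAGE_SIZE - off ? n : PAGE_SIZE - off;

	memcpy(page + off, src, first);
	if (first < n)			/* spilled into the next page */
		memcpy(page + PAGE_SIZE, src + first, n - first);
}

/* pmem-style: the mapping is contiguous, one memcpy covers any length */
static void copy_contig(char *base, const char *src, size_t sector, size_t n)
{
	memcpy(base + (sector << SECTOR_SHIFT), src, n);
}

int main(void)
{
	char *a = calloc(1, 2 * PAGE_SIZE);
	char *b = calloc(1, 2 * PAGE_SIZE);
	char src[1024];

	if (!a || !b)
		return 1;
	memset(src, 0xab, sizeof(src));
	/* sector 7 = byte 3584; 1024 bytes cross the 4K page boundary */
	copy_split(a, src, 7, sizeof(src));
	copy_contig(b, src, 7, sizeof(src));
	assert(!memcmp(a, b, 2 * PAGE_SIZE));

	free(a);
	free(b);
	return 0;
}

With a contiguous destination the split disappears entirely, which is the
whole point of the patch.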
diff --git a/drivers/block/pmem.c b/drivers/block/pmem.c
index dcb524f..6a45fd5 100644
--- a/drivers/block/pmem.c
+++ b/drivers/block/pmem.c
@@ -44,91 +44,15 @@ struct pmem_device {
 static int pmem_major;
 static atomic_t pmem_index;
 
-/*
- * direct translation from (pmem,sector) => void*
- * We do not require that sector be page aligned.
- * The return value will point to the beginning of the page containing the
- * given sector, not to the sector itself.
- */
-static void *pmem_lookup_pg_addr(struct pmem_device *pmem, sector_t sector)
-{
-	size_t page_offset = sector >> PAGE_SECTORS_SHIFT;
-	size_t offset = page_offset << PAGE_SHIFT;
-
-	BUG_ON(offset >= pmem->size);
-	return pmem->virt_addr + offset;
-}
-
-/* sector must be page aligned */
-static unsigned long pmem_lookup_pfn(struct pmem_device *pmem, sector_t sector)
-{
-	size_t page_offset = sector >> PAGE_SECTORS_SHIFT;
-
-	BUG_ON(sector & (PAGE_SECTORS - 1));
-	return (pmem->phys_addr >> PAGE_SHIFT) + page_offset;
-}
-
-/*
- * sector is not required to be page aligned.
- * n is at most a single page, but could be less.
- */
-static void copy_to_pmem(struct pmem_device *pmem, const void *src,
-			 sector_t sector, size_t n)
-{
-	void *dst;
-	unsigned int offset = (sector & (PAGE_SECTORS - 1)) << SECTOR_SHIFT;
-	size_t copy;
-
-	BUG_ON(n > PAGE_SIZE);
-
-	copy = min_t(size_t, n, PAGE_SIZE - offset);
-	dst = pmem_lookup_pg_addr(pmem, sector);
-	memcpy(dst + offset, src, copy);
-
-	if (copy < n) {
-		src += copy;
-		sector += copy >> SECTOR_SHIFT;
-		copy = n - copy;
-		dst = pmem_lookup_pg_addr(pmem, sector);
-		memcpy(dst, src, copy);
-	}
-}
-
-/*
- * sector is not required to be page aligned.
- * n is at most a single page, but could be less.
- */
-static void copy_from_pmem(void *dst, struct pmem_device *pmem,
-			   sector_t sector, size_t n)
-{
-	void *src;
-	unsigned int offset = (sector & (PAGE_SECTORS - 1)) << SECTOR_SHIFT;
-	size_t copy;
-
-	BUG_ON(n > PAGE_SIZE);
-
-	copy = min_t(size_t, n, PAGE_SIZE - offset);
-	src = pmem_lookup_pg_addr(pmem, sector);
-
-	memcpy(dst, src + offset, copy);
-
-	if (copy < n) {
-		dst += copy;
-		sector += copy >> SECTOR_SHIFT;
-		copy = n - copy;
-		src = pmem_lookup_pg_addr(pmem, sector);
-		memcpy(dst, src, copy);
-	}
-}
-
 static void pmem_do_bvec(struct pmem_device *pmem, struct page *page,
 			unsigned int len, unsigned int off, int rw,
 			sector_t sector)
 {
 	void *mem = kmap_atomic(page);
+	size_t pmem_off = sector << 9;
 
 	if (rw == READ) {
-		copy_from_pmem(mem + off, pmem, sector, len);
+		memcpy(mem + off, pmem->virt_addr + pmem_off, len);
 		flush_dcache_page(page);
 	} else {
 		/*
@@ -136,7 +60,7 @@ static void pmem_do_bvec(struct pmem_device *pmem, struct page *page,
 		 * NVDIMMs are actually durable before returning.
 		 */
 		flush_dcache_page(page);
-		copy_to_pmem(pmem, mem + off, sector, len);
+		memcpy(pmem->virt_addr + pmem_off, mem + off, len);
 	}
 
 	kunmap_atomic(mem);
@@ -152,25 +76,32 @@ static void pmem_make_request(struct request_queue *q, struct bio *bio)
 	struct bvec_iter iter;
 	int err = 0;
 
-	sector = bio->bi_iter.bi_sector;
 	if (bio_end_sector(bio) > get_capacity(bdev->bd_disk)) {
 		err = -EIO;
 		goto out;
 	}
 
-	BUG_ON(bio->bi_rw & REQ_DISCARD);
+	if (WARN_ON(bio->bi_rw & REQ_DISCARD)) {
+		err = -EINVAL;
+		goto out;
+	}
 
 	rw = bio_rw(bio);
 	if (rw == READA)
 		rw = READ;
 
+	sector = bio->bi_iter.bi_sector;
 	bio_for_each_segment(bvec, bio, iter) {
-		unsigned int len = bvec.bv_len;
-
-		BUG_ON(len > PAGE_SIZE);
-		pmem_do_bvec(pmem, bvec.bv_page, len,
-			    bvec.bv_offset, rw, sector);
-		sector += len >> SECTOR_SHIFT;
+		/* NOTE: There is a legend saying that bv_len might be
+		 * bigger than PAGE_SIZE in the case that bv_page points to
+		 * a physical contiguous PFN set. But for us it is fine because
+		 * it means the Kernel virtual mapping is also contiguous. And
+		 * on the pmem side we are always contiguous both virtual and
+		 * physical
+		 */
+		pmem_do_bvec(pmem, bvec.bv_page, bvec.bv_len, bvec.bv_offset,
+			     rw, sector);
+		sector += bvec.bv_len >> 9;
 	}
 
 out:
@@ -191,14 +122,15 @@ static long pmem_direct_access(struct block_device *bdev, sector_t sector,
 			      void **kaddr, unsigned long *pfn, long size)
 {
 	struct pmem_device *pmem = bdev->bd_disk->private_data;
+	size_t offset = sector << 9;
 
 	if (!pmem)
 		return -ENODEV;
 
-	*kaddr = pmem_lookup_pg_addr(pmem, sector);
-	*pfn = pmem_lookup_pfn(pmem, sector);
+	*kaddr = pmem->virt_addr + offset;
+	*pfn = (pmem->phys_addr + offset) >> PAGE_SHIFT;
 
-	return pmem->size - (sector * 512);
+	return pmem->size - offset;
 }
 
 static const struct block_device_operations pmem_fops = {
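Note (illustrative, not part of the patch): for readers unfamiliar with the
->direct_access() contract touched in the last hunk, the driver hands back a
kernel virtual address plus a pfn, and the return value is how many bytes
remain valid from the given sector to the end of the device, so a caller can
clamp or reject the access. A hypothetical consumer might look roughly like
the sketch below; example_dax_read() is a made-up name, not an existing
kernel function.

#include <linux/blkdev.h>
#include <linux/string.h>

/*
 * Hypothetical caller of ->direct_access() (illustrative only): read
 * `len' bytes at `sector' straight through the persistent-memory
 * mapping, bypassing the page cache and the bio path entirely.
 */
static int example_dax_read(struct block_device *bdev, sector_t sector,
			    void *buf, size_t len)
{
	const struct block_device_operations *ops = bdev->bd_disk->fops;
	void *kaddr;
	unsigned long pfn;
	long avail;

	if (!ops->direct_access)
		return -EOPNOTSUPP;

	avail = ops->direct_access(bdev, sector, &kaddr, &pfn, len);
	if (avail < 0)
		return avail;
	if (avail < (long)len)	/* would run past the end of the device */
		return -ERANGE;

	memcpy(buf, kaddr, len);
	return 0;
}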