From patchwork Fri Feb 28 16:34:55 2020
X-Patchwork-Submitter: Vivek Goyal <vgoyal@redhat.com>
X-Patchwork-Id: 11412793
X-Patchwork-Delegate: snitzer@redhat.com
From: Vivek Goyal <vgoyal@redhat.com>
To: linux-fsdevel@vger.kernel.org, linux-nvdimm@lists.01.org, hch@infradead.org,
	dan.j.williams@intel.com
Date: Fri, 28 Feb 2020 11:34:55 -0500
Message-Id: <20200228163456.1587-6-vgoyal@redhat.com>
In-Reply-To: <20200228163456.1587-1-vgoyal@redhat.com>
References: <20200228163456.1587-1-vgoyal@redhat.com>
Cc: jmoyer@redhat.com, david@fromorbit.com, dm-devel@redhat.com, vgoyal@redhat.com
Subject: [dm-devel] [PATCH v6 5/6] dax: Use new dax zero page method for zeroing a page

Use the new dax native zero page method for zeroing a page if the I/O is
page aligned. Otherwise fall back to direct_access() + memset().

This gets rid of one of the dependencies on the block device in the dax
path.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 fs/dax.c | 53 +++++++++++++++++++++++------------------------------
 1 file changed, 23 insertions(+), 30 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 35da144375a0..98ba3756163a 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1038,47 +1038,40 @@ static vm_fault_t dax_load_hole(struct xa_state *xas,
 	return ret;
 }
 
-static bool dax_range_is_aligned(struct block_device *bdev,
-				 unsigned int offset, unsigned int length)
-{
-	unsigned short sector_size = bdev_logical_block_size(bdev);
-
-	if (!IS_ALIGNED(offset, sector_size))
-		return false;
-	if (!IS_ALIGNED(length, sector_size))
-		return false;
-
-	return true;
-}
-
 int __dax_zero_page_range(struct block_device *bdev,
 		struct dax_device *dax_dev, sector_t sector,
 		unsigned int offset, unsigned int size)
 {
-	if (dax_range_is_aligned(bdev, offset, size)) {
-		sector_t start_sector = sector + (offset >> 9);
+	pgoff_t pgoff;
+	long rc, id;
+	void *kaddr;
+	bool page_aligned = false;
 
-		return blkdev_issue_zeroout(bdev, start_sector,
-				size >> 9, GFP_NOFS, 0);
-	} else {
-		pgoff_t pgoff;
-		long rc, id;
-		void *kaddr;
-		rc = bdev_dax_pgoff(bdev, sector, PAGE_SIZE, &pgoff);
-		if (rc)
-			return rc;
+	if (IS_ALIGNED(sector << SECTOR_SHIFT, PAGE_SIZE) &&
+	    IS_ALIGNED(size, PAGE_SIZE))
+		page_aligned = true;
+
+	rc = bdev_dax_pgoff(bdev, sector, PAGE_SIZE, &pgoff);
+	if (rc)
+		return rc;
 
-		id = dax_read_lock();
+	id = dax_read_lock();
+
+	if (page_aligned)
+		rc = dax_zero_page_range(dax_dev, pgoff, size >> PAGE_SHIFT);
+	else
 		rc = dax_direct_access(dax_dev, pgoff, 1, &kaddr, NULL);
-		if (rc < 0) {
-			dax_read_unlock(id);
-			return rc;
-		}
+	if (rc < 0) {
+		dax_read_unlock(id);
+		return rc;
+	}
+
+	if (!page_aligned) {
 		memset(kaddr + offset, 0, size);
 		dax_flush(dax_dev, kaddr + offset, size);
-		dax_read_unlock(id);
 	}
+	dax_read_unlock(id);
 	return 0;
 }
 EXPORT_SYMBOL_GPL(__dax_zero_page_range);
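
For reviewers who want to poke at the decision logic in isolation, here is a
standalone userspace sketch (not part of this patch; 4K pages and 512-byte
sectors are assumed, and use_native_zero_path() is a made-up helper name) of
the alignment check that selects between dax_zero_page_range() and the
direct_access() + memset() fallback:

/* Standalone sketch, not kernel code: mirrors the two IS_ALIGNED() checks
 * the patch uses to pick the native zero path. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SECTOR_SHIFT	9
#define PAGE_SIZE	4096UL
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

static bool use_native_zero_path(uint64_t sector, unsigned int size)
{
	/* Both the byte offset of the starting sector and the length must
	 * be page aligned; otherwise the memset() fallback is used. */
	return IS_ALIGNED(sector << SECTOR_SHIFT, PAGE_SIZE) &&
	       IS_ALIGNED(size, PAGE_SIZE);
}

int main(void)
{
	printf("%d\n", use_native_zero_path(8, 4096));	/* byte 4096, full page -> 1 */
	printf("%d\n", use_native_zero_path(8, 512));	/* sub-page length -> 0 */
	printf("%d\n", use_native_zero_path(3, 4096));	/* unaligned start -> 0 */
	return 0;
}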