From patchwork Wed Oct 28 22:09:36 2015
X-Patchwork-Submitter: Ross Zwisler
X-Patchwork-Id: 7514621
From: Ross Zwisler <ross.zwisler@linux.intel.com>
To: linux-kernel@vger.kernel.org
Cc: linux-nvdimm@lists.01.org, Dave Chinner, x86@kernel.org,
 Ingo Molnar, "H. Peter Anvin", Thomas Gleixner, Jan Kara
Subject: [PATCH 2/2] pmem: Add simple and slow fsync/msync support
Date: Wed, 28 Oct 2015 16:09:36 -0600
Message-Id: <1446070176-14568-3-git-send-email-ross.zwisler@linux.intel.com>
In-Reply-To: <1446070176-14568-1-git-send-email-ross.zwisler@linux.intel.com>
References: <1446070176-14568-1-git-send-email-ross.zwisler@linux.intel.com>

Make blkdev_issue_flush() behave correctly according to its required
semantics: all volatile cached data is flushed to stable storage.

Eventually this needs to be replaced with something much more precise by
tracking dirty DAX entries via the radix tree in struct address_space,
but for now this gives us correctness even if the performance is quite
bad.
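(For illustration only: a rough userspace analogue of the
wb_cache_pmem() + wmb_pmem() sequence this patch relies on, assuming
x86 with 64-byte cache lines. flush_range() and CACHELINE_SIZE are
hypothetical names used for this sketch, not the kernel helpers.)

#include <stddef.h>
#include <stdint.h>
#include <emmintrin.h>	/* _mm_clflush(), _mm_sfence() */

#define CACHELINE_SIZE	64	/* assumed line size for this sketch */

static void flush_range(const void *addr, size_t size)
{
	uintptr_t p = (uintptr_t)addr & ~((uintptr_t)CACHELINE_SIZE - 1);
	uintptr_t end = (uintptr_t)addr + size;

	/* write back every cache line covering [addr, addr + size) */
	for (; p < end; p += CACHELINE_SIZE)
		_mm_clflush((const void *)p);

	/* order the flushes before later stores, analogous to wmb_pmem() */
	_mm_sfence();
}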
Userspace applications looking to avoid the fsync/msync penalty should
consider more fine-grained flushing via the NVML library (sketched
below, after the patch):

https://github.com/pmem/nvml

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
 drivers/nvdimm/pmem.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 0ba6a97..eea7997 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -80,7 +80,14 @@ static void pmem_make_request(struct request_queue *q, struct bio *bio)
 	if (do_acct)
 		nd_iostat_end(bio, start);
 
-	if (bio_data_dir(bio))
+	if (bio->bi_rw & REQ_FLUSH) {
+		void __pmem *addr = pmem->virt_addr + pmem->data_offset;
+		size_t size = pmem->size - pmem->data_offset;
+
+		wb_cache_pmem(addr, size);
+	}
+
+	if (bio_data_dir(bio) || (bio->bi_rw & REQ_FLUSH))
 		wmb_pmem();
 
 	bio_endio(bio);
@@ -189,6 +196,7 @@ static int pmem_attach_disk(struct device *dev,
 	blk_queue_physical_block_size(pmem->pmem_queue, PAGE_SIZE);
 	blk_queue_max_hw_sectors(pmem->pmem_queue, UINT_MAX);
 	blk_queue_bounce_limit(pmem->pmem_queue, BLK_BOUNCE_ANY);
+	blk_queue_flush(pmem->pmem_queue, REQ_FLUSH);
 	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, pmem->pmem_queue);
 
 	disk = alloc_disk(0);
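(For reference, a minimal sketch of the fine-grained userspace flushing
suggested above, based on NVML's libpmem. It assumes "/pmem/file" is a
file on a DAX-capable filesystem; the path and mapping length are
placeholders and error handling is abbreviated.)

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>
#include <libpmem.h>	/* pmem_is_pmem(), pmem_persist(), pmem_msync() */

#define LEN 4096	/* placeholder mapping length */

int main(void)
{
	int fd = open("/pmem/file", O_RDWR);
	char *addr = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
			  MAP_SHARED, fd, 0);

	strcpy(addr, "hello, persistent memory");

	if (pmem_is_pmem(addr, LEN))
		/* flush just the bytes we dirtied, not the whole device */
		pmem_persist(addr, strlen(addr) + 1);
	else
		/* non-pmem mapping: fall back to msync() semantics */
		pmem_msync(addr, strlen(addr) + 1);

	munmap(addr, LEN);
	close(fd);
	return 0;
}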