From patchwork Sat Nov 14 00:06:42 2015
From: Ross Zwisler <ross.zwisler@linux.intel.com>
To: linux-kernel@vger.kernel.org
Cc: Dave Hansen, Dave Chinner, "J. Bruce Fields", linux-mm@kvack.org,
 Andreas Dilger, "H. Peter Anvin", Jeff Layton, linux-nvdimm@lists.01.org,
 x86@kernel.org, Ingo Molnar, linux-ext4@vger.kernel.org, xfs@oss.sgi.com,
 Alexander Viro, Thomas Gleixner, Theodore Ts'o, Jan Kara,
 linux-fsdevel@vger.kernel.org, Andrew Morton, Matthew Wilcox
Subject: [PATCH v2 03/11] pmem: enable REQ_FUA/REQ_FLUSH handling
Date: Fri, 13 Nov 2015 17:06:42 -0700
Message-Id: <1447459610-14259-4-git-send-email-ross.zwisler@linux.intel.com>
In-Reply-To: <1447459610-14259-1-git-send-email-ross.zwisler@linux.intel.com>
References: <1447459610-14259-1-git-send-email-ross.zwisler@linux.intel.com>

Currently the PMEM driver doesn't accept REQ_FLUSH or REQ_FUA bios.
These are sent down via blkdev_issue_flush() in response to an fsync()
or msync() and are used by filesystems to order their metadata, among
other things.

When we get an msync() or fsync() it is the responsibility of the DAX
code to flush all dirty pages to media.  The PMEM driver then just has
to issue a wmb_pmem() in response to the REQ_FLUSH to ensure that all
of the flushed data has been durably stored on the media before we
return.
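For context, the REQ_FLUSH bio that this patch handles is generated by
the block layer roughly as follows.  This is a simplified sketch of the
4.x-era blkdev_issue_flush(), not the exact kernel code; the queue
checks and error_sector handling are omitted:

	int blkdev_issue_flush(struct block_device *bdev, gfp_t gfp_mask,
			sector_t *error_sector)
	{
		struct bio *bio;
		int ret;

		/* Allocate an empty bio: no data pages, just the flush. */
		bio = bio_alloc(gfp_mask, 0);
		bio->bi_bdev = bdev;

		/*
		 * WRITE_FLUSH carries REQ_FLUSH, so the driver's
		 * make_request function sees the bit in bio->bi_rw.
		 */
		ret = submit_bio_wait(WRITE_FLUSH, bio);

		bio_put(bio);
		return ret;
	}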
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
 drivers/nvdimm/pmem.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 0ba6a97..b914d66 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -80,7 +80,7 @@ static void pmem_make_request(struct request_queue *q, struct bio *bio)
 	if (do_acct)
 		nd_iostat_end(bio, start);
 
-	if (bio_data_dir(bio))
+	if (bio_data_dir(bio) || (bio->bi_rw & (REQ_FLUSH|REQ_FUA)))
 		wmb_pmem();
 
 	bio_endio(bio);
@@ -189,6 +189,7 @@ static int pmem_attach_disk(struct device *dev,
 	blk_queue_physical_block_size(pmem->pmem_queue, PAGE_SIZE);
 	blk_queue_max_hw_sectors(pmem->pmem_queue, UINT_MAX);
 	blk_queue_bounce_limit(pmem->pmem_queue, BLK_BOUNCE_ANY);
+	blk_queue_flush(pmem->pmem_queue, REQ_FLUSH|REQ_FUA);
 	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, pmem->pmem_queue);
 
 	disk = alloc_disk(0);
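Note that the blk_queue_flush() call is what makes the new test
reachable at all: if a queue does not advertise flush/FUA support, the
block layer's generic_make_request_checks() strips REQ_FLUSH and
REQ_FUA from incoming bios (and completes empty flush bios early), so
pmem_make_request() would never see the flags.

For readability, here is the end of pmem_make_request() as it looks
with this patch applied, reassembled from the first hunk above with
added comments:

	if (do_acct)
		nd_iostat_end(bio, start);

	/*
	 * wmb_pmem() ensures that previously written data is durable
	 * on the media.  Writes need this so that bio completion
	 * implies durability; REQ_FLUSH/REQ_FUA bios need it because
	 * the DAX code has already flushed the dirty pages and only
	 * this final ordering fence remains.
	 */
	if (bio_data_dir(bio) || (bio->bi_rw & (REQ_FLUSH|REQ_FUA)))
		wmb_pmem();

	bio_endio(bio);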