From patchwork Sat Oct 17 00:49:41 2015
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 7421621
Subject: [RFC PATCH 2/2] block: enable dax for raw block devices
From: Dan Williams
To: linux-nvdimm@lists.01.org
Cc: Xiao Guangrong, kvm@vger.kernel.org, Jeff Moyer, Al Viro,
 Ross Zwisler, Andrew Morton, Christoph Hellwig
Date: Fri, 16 Oct 2015 20:49:41 -0400
Message-ID: <20151017004702.2742.82530.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <20151017004653.2742.36299.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <20151017004653.2742.36299.stgit@dwillia2-desk3.amr.corp.intel.com>

If an application wants exclusive access to all of the persistent
memory provided by an NVDIMM namespace, it can use this raw-block-dax
facility to forgo establishing a filesystem.  This capability is
targeted primarily at hypervisors wanting to provision persistent
memory for guests.

Cc: Jeff Moyer
Cc: Christoph Hellwig
Cc: Al Viro
Cc: Andrew Morton
Cc: Ross Zwisler
Cc: Xiao Guangrong
Signed-off-by: Dan Williams
---
Only lightly tested so far, but it seems to work, is the shortest path
to a DAX mapping, and makes it easier to trigger the pmd_fault path (no
fs-block-allocator interactions).
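For context, a minimal sketch of the userspace usage this enables: an
application (or hypervisor) opens the raw block device and mmaps it
directly, with no filesystem in between; with this patch applied and
CONFIG_FS_DAX enabled, the faults are serviced by the new
blkdev_dax_vm_ops below.  The device path /dev/pmem0 and the 1 GiB
length are assumptions for illustration only.

/*
 * Hypothetical usage sketch, not part of the patch: map an NVDIMM
 * namespace's raw block device directly.  /dev/pmem0 and the 1 GiB
 * mapping length are assumed values.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 1UL << 30;			/* assume a 1 GiB namespace */
	int fd = open("/dev/pmem0", O_RDWR);	/* raw pmem block device */
	void *pmem;

	if (fd < 0) {
		perror("open");
		return EXIT_FAILURE;
	}

	/*
	 * With raw-block-dax this mapping is DAX-backed: loads and
	 * stores reach persistent memory without the page cache.
	 */
	pmem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (pmem == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return EXIT_FAILURE;
	}

	memset(pmem, 0, 4096);			/* touch the first page */

	munmap(pmem, len);
	close(fd);
	return EXIT_SUCCESS;
}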
 fs/block_dev.c |   84 ++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 83 insertions(+), 1 deletion(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 5277dd83d254..498b71455570 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1687,13 +1687,95 @@ static const struct address_space_operations def_blk_aops = {
 	.is_dirty_writeback = buffer_check_dirty_writeback,
 };
 
+#ifdef CONFIG_FS_DAX
+static int blkdev_dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	struct inode *bd_inode = file_bd_inode(vma->vm_file);
+	struct block_device *bdev = I_BDEV(bd_inode);
+	int ret;
+
+	mutex_lock(&bdev->bd_mutex);
+	ret = __dax_fault(vma, vmf, blkdev_get_block, NULL);
+	mutex_unlock(&bdev->bd_mutex);
+
+	return ret;
+}
+
+static int blkdev_dax_pmd_fault(struct vm_area_struct *vma, unsigned long addr,
+		pmd_t *pmd, unsigned int flags)
+{
+	struct inode *bd_inode = file_bd_inode(vma->vm_file);
+	struct block_device *bdev = I_BDEV(bd_inode);
+	int ret;
+
+	mutex_lock(&bdev->bd_mutex);
+	ret = __dax_pmd_fault(vma, addr, pmd, flags, blkdev_get_block, NULL);
+	mutex_unlock(&bdev->bd_mutex);
+
+	return ret;
+}
+
+static int blkdev_dax_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	struct inode *bd_inode = file_bd_inode(vma->vm_file);
+	struct block_device *bdev = I_BDEV(bd_inode);
+	int ret;
+
+	mutex_lock(&bdev->bd_mutex);
+	ret = __dax_mkwrite(vma, vmf, blkdev_get_block, NULL);
+	mutex_unlock(&bdev->bd_mutex);
+
+	return ret;
+}
+
+static int blkdev_dax_pfn_mkwrite(struct vm_area_struct *vma,
+		struct vm_fault *vmf)
+{
+	struct inode *bd_inode = file_bd_inode(vma->vm_file);
+	struct block_device *bdev = I_BDEV(bd_inode);
+	int ret = VM_FAULT_NOPAGE;
+	loff_t size;
+
+	/* check that the faulting page hasn't raced with bdev resize */
+	mutex_lock(&bdev->bd_mutex);
+	size = (i_size_read(bd_inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
+	if (vmf->pgoff >= size)
+		ret = VM_FAULT_SIGBUS;
+	mutex_unlock(&bdev->bd_mutex);
+
+	return ret;
+}
+
+static const struct vm_operations_struct blkdev_dax_vm_ops = {
+	.fault		= blkdev_dax_fault,
+	.pmd_fault	= blkdev_dax_pmd_fault,
+	.page_mkwrite	= blkdev_dax_mkwrite,
+	.pfn_mkwrite	= blkdev_dax_pfn_mkwrite,
+};
+
+static int blkdev_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	struct inode *bd_inode = file_bd_inode(file);
+
+	if (!IS_DAX(bd_inode))
+		return generic_file_mmap(file, vma);
+
+	file_accessed(file);
+	vma->vm_ops = &blkdev_dax_vm_ops;
+	vma->vm_flags |= VM_MIXEDMAP | VM_HUGEPAGE;
+	return 0;
+}
+#else
+#define blkdev_mmap generic_file_mmap
+#endif
+
 const struct file_operations def_blk_fops = {
 	.open		= blkdev_open,
 	.release	= blkdev_close,
 	.llseek		= block_llseek,
 	.read_iter	= blkdev_read_iter,
 	.write_iter	= blkdev_write_iter,
-	.mmap		= generic_file_mmap,
+	.mmap		= blkdev_mmap,
 	.fsync		= blkdev_fsync,
 	.unlocked_ioctl	= block_ioctl,
 #ifdef CONFIG_COMPAT