From patchwork Wed Nov 23 18:44:22 2016
X-Patchwork-Submitter: Ross Zwisler
X-Patchwork-Id: 9444105
From: Ross Zwisler
To: linux-kernel@vger.kernel.org
Cc: Jan Kara, Andrew Morton, Matthew Wilcox, linux-nvdimm@lists.01.org,
    Dave Chinner, Steven Rostedt, Christoph Hellwig, linux-mm@kvack.org,
    Ingo Molnar, Alexander Viro, linux-fsdevel@vger.kernel.org,
    linux-ext4@vger.kernel.org
Subject: [PATCH 6/6] dax: add tracepoints to dax_pmd_insert_mapping()
Date: Wed, 23 Nov 2016 11:44:22 -0700
Message-Id: <1479926662-21718-7-git-send-email-ross.zwisler@linux.intel.com>
In-Reply-To: <1479926662-21718-1-git-send-email-ross.zwisler@linux.intel.com>
References: <1479926662-21718-1-git-send-email-ross.zwisler@linux.intel.com>
X-Mailer: git-send-email 2.7.4
List-Id: "Linux-nvdimm developer list."

Add tracepoints to dax_pmd_insert_mapping(), following the same logging
conventions as the tracepoints in dax_iomap_pmd_fault().

Here is an example PMD fault showing the new tracepoints:

big-1544    [006] ....    48.153479: dax_pmd_fault: shared mapping write
    address 0x10505000 vm_start 0x10200000 vm_end 0x10700000 pgoff 0x200
    max_pgoff 0x1400

big-1544    [006] ....    48.155230: dax_pmd_insert_mapping: shared mapping
    write address 0x10505000 length 0x200000 pfn 0x100600 DEV|MAP
    radix_entry 0xc000e

big-1544    [006] ....    48.155266: dax_pmd_fault_done: shared mapping write
    address 0x10505000 vm_start 0x10200000 vm_end 0x10700000 pgoff 0x200
    max_pgoff 0x1400 NOPAGE
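
The "DEV|MAP" column above is the pfn flag decode done by __print_flags()
with the new PFN_FLAGS_TRACE table.  For reference only (not part of the
patch), here is a minimal userspace sketch of the equivalent decode; the bit
positions assume a 64-bit build (BITS_PER_LONG_LONG == 64), matching the
PFN_* definitions in include/linux/pfn_t.h:

#include <stdint.h>
#include <stdio.h>

#define PFN_SG_CHAIN    (1ULL << 63)
#define PFN_SG_LAST     (1ULL << 62)
#define PFN_DEV         (1ULL << 61)
#define PFN_MAP         (1ULL << 60)

/* Print the flag bits of a pfn_t value in "DEV|MAP" style. */
static void print_pfn_flags(uint64_t pfn_val)
{
        static const struct {
                uint64_t bit;
                const char *name;
        } trace_flags[] = {             /* same pairs as PFN_FLAGS_TRACE */
                { PFN_SG_CHAIN, "SG_CHAIN" },
                { PFN_SG_LAST,  "SG_LAST"  },
                { PFN_DEV,      "DEV"      },
                { PFN_MAP,      "MAP"      },
        };
        const char *sep = "";
        size_t i;

        for (i = 0; i < sizeof(trace_flags) / sizeof(trace_flags[0]); i++) {
                if (pfn_val & trace_flags[i].bit) {
                        printf("%s%s", sep, trace_flags[i].name);
                        sep = "|";
                }
        }
        putchar('\n');
}

int main(void)
{
        /* pfn 0x100600 with PFN_DEV and PFN_MAP set prints "DEV|MAP". */
        print_pfn_flags(0x100600ULL | PFN_DEV | PFN_MAP);
        return 0;
}
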
Signed-off-by: Ross Zwisler
Reviewed-by: Jan Kara
---
 fs/dax.c                      | 10 +++++++---
 include/linux/pfn_t.h         |  6 ++++++
 include/trace/events/fs_dax.h | 42 ++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 55 insertions(+), 3 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 2824414..d6ba4a3 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1236,10 +1236,10 @@ static int dax_pmd_insert_mapping(struct vm_area_struct *vma, pmd_t *pmd,
 		.size = PMD_SIZE,
 	};
 	long length = dax_map_atomic(bdev, &dax);
-	void *ret;
+	void *ret = NULL;
 
 	if (length < 0) /* dax_map_atomic() failed */
-		return VM_FAULT_FALLBACK;
+		goto fallback;
 	if (length < PMD_SIZE)
 		goto unmap_fallback;
 	if (pfn_t_to_pfn(dax.pfn) & PG_PMD_COLOUR)
@@ -1252,13 +1252,17 @@ static int dax_pmd_insert_mapping(struct vm_area_struct *vma, pmd_t *pmd,
 	ret = dax_insert_mapping_entry(mapping, vmf, *entryp, dax.sector,
 			RADIX_DAX_PMD);
 	if (IS_ERR(ret))
-		return VM_FAULT_FALLBACK;
+		goto fallback;
 	*entryp = ret;
 
+	trace_dax_pmd_insert_mapping(vma, address, write, length, dax.pfn, ret);
 	return vmf_insert_pfn_pmd(vma, address, pmd, dax.pfn, write);
 
  unmap_fallback:
 	dax_unmap_atomic(bdev, &dax);
+fallback:
+	trace_dax_pmd_insert_mapping_fallback(vma, address, write, length,
+			dax.pfn, ret);
 	return VM_FAULT_FALLBACK;
 }
 
diff --git a/include/linux/pfn_t.h b/include/linux/pfn_t.h
index a3d90b9..033fc7b 100644
--- a/include/linux/pfn_t.h
+++ b/include/linux/pfn_t.h
@@ -15,6 +15,12 @@
 #define PFN_DEV (1ULL << (BITS_PER_LONG_LONG - 3))
 #define PFN_MAP (1ULL << (BITS_PER_LONG_LONG - 4))
 
+#define PFN_FLAGS_TRACE \
+	{ PFN_SG_CHAIN,	"SG_CHAIN" }, \
+	{ PFN_SG_LAST,	"SG_LAST" }, \
+	{ PFN_DEV,	"DEV" }, \
+	{ PFN_MAP,	"MAP" }
+
 static inline pfn_t __pfn_to_pfn_t(unsigned long pfn, u64 flags)
 {
 	pfn_t pfn_t = { .val = pfn | (flags & PFN_FLAGS_MASK), };
diff --git a/include/trace/events/fs_dax.h b/include/trace/events/fs_dax.h
index 8814b1a..a03f820 100644
--- a/include/trace/events/fs_dax.h
+++ b/include/trace/events/fs_dax.h
@@ -87,6 +87,48 @@ DEFINE_EVENT(dax_pmd_load_hole_class, name, \
 DEFINE_PMD_LOAD_HOLE_EVENT(dax_pmd_load_hole);
 DEFINE_PMD_LOAD_HOLE_EVENT(dax_pmd_load_hole_fallback);
 
+DECLARE_EVENT_CLASS(dax_pmd_insert_mapping_class,
+	TP_PROTO(struct vm_area_struct *vma, unsigned long address, int write,
+		long length, pfn_t pfn, void *radix_entry),
+	TP_ARGS(vma, address, write, length, pfn, radix_entry),
+	TP_STRUCT__entry(
+		__field(unsigned long, vm_flags)
+		__field(unsigned long, address)
+		__field(int, write)
+		__field(long, length)
+		__field(u64, pfn_val)
+		__field(void *, radix_entry)
+	),
+	TP_fast_assign(
+		__entry->vm_flags = vma->vm_flags;
+		__entry->address = address;
+		__entry->write = write;
+		__entry->length = length;
+		__entry->pfn_val = pfn.val;
+		__entry->radix_entry = radix_entry;
+	),
+	TP_printk("%s mapping %s address %#lx length %#lx pfn %#llx %s"
+			" radix_entry %#lx",
+		__entry->vm_flags & VM_SHARED ? "shared" : "private",
+		__entry->write ? "write" : "read",
"write" : "read", + __entry->address, + __entry->length, + __entry->pfn_val & ~PFN_FLAGS_MASK, + __print_flags(__entry->pfn_val & PFN_FLAGS_MASK, "|", + PFN_FLAGS_TRACE), + (unsigned long)__entry->radix_entry + ) +) + +#define DEFINE_PMD_INSERT_MAPPING_EVENT(name) \ +DEFINE_EVENT(dax_pmd_insert_mapping_class, name, \ + TP_PROTO(struct vm_area_struct *vma, unsigned long address, \ + int write, long length, pfn_t pfn, void *radix_entry), \ + TP_ARGS(vma, address, write, length, pfn, radix_entry)) + +DEFINE_PMD_INSERT_MAPPING_EVENT(dax_pmd_insert_mapping); +DEFINE_PMD_INSERT_MAPPING_EVENT(dax_pmd_insert_mapping_fallback); + #endif /* _TRACE_FS_DAX_H */ /* This part must be outside protection */