From patchwork Tue Jul 14 07:02:15 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 11661753
From: ira.weiny@intel.com
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski,
	Peter Zijlstra
Cc: Ira Weiny, Dan Williams, x86@kernel.org, Dave Hansen, Vishal Verma,
	Andrew Morton, Fenghua Yu, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org
Subject: [RFC PATCH 10/15] drivers/dax: Expand lock scope to cover the use of addresses
Date: Tue, 14 Jul 2020 00:02:15 -0700
Message-Id: <20200714070220.3500839-11-ira.weiny@intel.com>
In-Reply-To: <20200714070220.3500839-1-ira.weiny@intel.com>
References: <20200714070220.3500839-1-ira.weiny@intel.com>
X-Mailing-List: linux-kselftest@vger.kernel.org

From: Ira Weiny

The addition of PKS protection to the dax read lock/unlock will require
that the address returned by dax_direct_access() be protected by this
lock.

While not technically necessary for this series, this corrects the
locking by ensuring that the use of kaddr and end_kaddr is covered by
the dax read lock/unlock.

Change the lock scope to cover the use of kaddr and end_kaddr.
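
For illustration, the pattern this enforces is roughly the following (a
minimal sketch only; the memcmp() use of kaddr is a hypothetical
placeholder and is not code from this patch):

	id = dax_read_lock();
	len = dax_direct_access(dax_dev, pgoff, 1, &kaddr, &pfn);
	if (len < 1) {
		/* error path: drop the lock before returning */
		dax_read_unlock(id);
		return false;
	}
	/* any dereference of kaddr must happen while the read lock is held */
	same = !memcmp(kaddr, buf, len * PAGE_SIZE);	/* hypothetical use */
	dax_read_unlock(id);	/* unlock only once kaddr is no longer used */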
Reviewed-by: Dan Williams
Signed-off-by: Ira Weiny
---
 drivers/dax/super.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/dax/super.c b/drivers/dax/super.c
index 8e32345be0f7..021739768093 100644
--- a/drivers/dax/super.c
+++ b/drivers/dax/super.c
@@ -103,11 +103,11 @@ bool __generic_fsdax_supported(struct dax_device *dax_dev,
 	id = dax_read_lock();
 	len = dax_direct_access(dax_dev, pgoff, 1, &kaddr, &pfn);
 	len2 = dax_direct_access(dax_dev, pgoff_end, 1, &end_kaddr, &end_pfn);
-	dax_read_unlock(id);
 	if (len < 1 || len2 < 1) {
 		pr_debug("%s: error: dax access failed (%ld)\n",
 				bdevname(bdev, buf), len < 1 ? len : len2);
+		dax_read_unlock(id);
 		return false;
 	}
 
@@ -137,6 +137,7 @@ bool __generic_fsdax_supported(struct dax_device *dax_dev,
 		put_dev_pagemap(end_pgmap);
 	}
 
+	dax_read_unlock(id);
 	if (!dax_enabled) {
 		pr_debug("%s: error: dax support not enabled\n",