From patchwork Tue Dec 19 12:17:54 2017
X-Patchwork-Submitter: "Wang, Wei W"
X-Patchwork-Id: 10123183
From: Wei Wang
To: virtio-dev@lists.oasis-open.org, linux-kernel@vger.kernel.org,
	qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
	kvm@vger.kernel.org, linux-mm@kvack.org, mst@redhat.com,
	mhocko@kernel.org, akpm@linux-foundation.org, mawilcox@microsoft.com
Date: Tue, 19 Dec 2017 20:17:54 +0800
Message-Id: <1513685879-21823-3-git-send-email-wei.w.wang@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1513685879-21823-1-git-send-email-wei.w.wang@intel.com>
References: <1513685879-21823-1-git-send-email-wei.w.wang@intel.com>
Subject: [Qemu-devel] [PATCH v20 2/7] xbitmap: potential improvement
Cc: aarcange@redhat.com, yang.zhang.wz@gmail.com, quan.xu0@gmail.com,
	david@redhat.com, penguin-kernel@I-love.SAKURA.ne.jp,
	liliang.opensource@gmail.com, willy@infradead.org, amit.shah@redhat.com,
	wei.w.wang@intel.com, cornelia.huck@de.ibm.com, pbonzini@redhat.com,
	nilal@redhat.com, mgorman@techsingularity.net

This patch makes some changes to the original xbitmap implementation
from the linux-dax tree:

- xb_set_bit: delete the newly inserted radix_tree_node when the per-CPU
  ida bitmap cannot be obtained. This avoids leaking a radix tree node
  that would otherwise be left unused in the tree.

- xb_preload: with the original implementation, the CPU that successfully
  does __radix_tree_preload() may go to sleep in kmalloc(), so the caller
  of xb_preload() risks being scheduled to another CPU after it wakes up,
  and the new CPU may not have a radix_tree_node pre-allocated. That
  becomes a problem when a node is inserted into the tree later. This
  patch moves __radix_tree_preload() after kmalloc() and returns an error
  code to indicate success or failure. Also, add the __must_check
  annotation to xb_preload() for prudence.

Signed-off-by: Wei Wang
Cc: Matthew Wilcox
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Michael S. Tsirkin
Cc: Tetsuo Handa
---
 include/linux/xbitmap.h |  2 +-
 lib/radix-tree.c        | 21 ++++++++++++++++++---
 lib/xbitmap.c           |  4 +++-
 3 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/include/linux/xbitmap.h b/include/linux/xbitmap.h
index 4ac2b8d..108f929 100644
--- a/include/linux/xbitmap.h
+++ b/include/linux/xbitmap.h
@@ -41,7 +41,7 @@ static inline bool xb_empty(const struct xb *xb)
 	return radix_tree_empty(&xb->xbrt);
 }
 
-void xb_preload(gfp_t);
+int xb_preload(gfp_t);
 
 static inline void xb_preload_end(void)
 {
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 2650e9e..f30347a 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -2142,17 +2142,32 @@ int ida_pre_get(struct ida *ida, gfp_t gfp)
 }
 EXPORT_SYMBOL(ida_pre_get);
 
-void xb_preload(gfp_t gfp)
+/**
+ * xb_preload - preload for xb_set_bit()
+ * @gfp_mask: allocation mask to use for preloading
+ *
+ * Preallocate memory to use for the next call to xb_set_bit(). On success,
+ * return zero, with preemption disabled. On error, return -ENOMEM with
+ * preemption not disabled.
+ */
+__must_check int xb_preload(gfp_t gfp)
 {
-	__radix_tree_preload(gfp, XB_PRELOAD_SIZE);
 	if (!this_cpu_read(ida_bitmap)) {
 		struct ida_bitmap *bitmap = kmalloc(sizeof(*bitmap), gfp);
 
 		if (!bitmap)
-			return;
+			return -ENOMEM;
+		/*
+		 * The per-CPU variable is updated with preemption enabled.
+		 * If the calling task is unlucky to be scheduled to another
+		 * CPU which has no ida_bitmap allocation, it will be detected
+		 * when setting a bit (i.e. xb_set_bit()).
+		 */
 		bitmap = this_cpu_cmpxchg(ida_bitmap, NULL, bitmap);
 		kfree(bitmap);
 	}
+
+	return __radix_tree_preload(gfp, XB_PRELOAD_SIZE);
 }
 EXPORT_SYMBOL(xb_preload);
 
diff --git a/lib/xbitmap.c b/lib/xbitmap.c
index 236afa9..2dcfad5 100644
--- a/lib/xbitmap.c
+++ b/lib/xbitmap.c
@@ -29,8 +29,10 @@ int xb_set_bit(struct xb *xb, unsigned long bit)
 	bitmap = rcu_dereference_raw(*slot);
 	if (!bitmap) {
 		bitmap = this_cpu_xchg(ida_bitmap, NULL);
-		if (!bitmap)
+		if (!bitmap) {
+			__radix_tree_delete(root, node, slot);
 			return -EAGAIN;
+		}
 		memset(bitmap, 0, sizeof(*bitmap));
 		__radix_tree_replace(root, node, slot, bitmap, NULL);
 	}
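
For illustration, below is a minimal caller sketch (not part of the patch;
the function name record_pfn and its use of GFP_KERNEL are made up) showing
how the reworked API is intended to be consumed: the int return of
xb_preload() must now be checked, xb_preload_end() is only reached after a
successful preload (preemption is disabled in between), and xb_set_bit()
can still return -EAGAIN when the per-CPU ida_bitmap turns out to be
missing on the current CPU.

#include <linux/xbitmap.h>

/*
 * Hypothetical caller sketch, not part of this patch: record a pfn in an
 * xbitmap using the reworked preload API.
 */
static int record_pfn(struct xb *xb, unsigned long pfn)
{
	int ret;

	/* xb_preload() now returns 0 or -ENOMEM and is __must_check. */
	ret = xb_preload(GFP_KERNEL);
	if (ret)
		return ret;

	/*
	 * On success preemption is disabled, so the radix_tree_node(s)
	 * preloaded above normally remain available on this CPU for the
	 * xb_set_bit() call.
	 */
	ret = xb_set_bit(xb, pfn);

	xb_preload_end();	/* re-enables preemption */

	/*
	 * -EAGAIN means the per-CPU ida_bitmap was not available here
	 * (e.g. the task migrated before preemption was disabled); the
	 * caller may retry with another xb_preload()/xb_set_bit() round.
	 */
	return ret;
}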