From patchwork Thu Oct 10 22:33:18 2019
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 11184539
From: "Uladzislau Rezki (Sony)"
To: Andrew Morton
Cc: Daniel Wagner, Sebastian Andrzej Siewior, Thomas Gleixner,
    linux-mm@kvack.org, LKML, Peter Zijlstra, Uladzislau Rezki,
    Hillf Danton, Michal Hocko, Matthew Wilcox,
    Oleksiy Avramchenko, Steven Rostedt
Subject: [PATCH v2 1/1] mm/vmalloc: remove preempt_disable/enable when doing preloading
Date: Fri, 11 Oct 2019 00:33:18 +0200
Message-Id: <20191010223318.28115-1-urezki@gmail.com>
X-Mailer: git-send-email 2.20.1

Get rid of preempt_disable() and preempt_enable() when the preload is
done for splitting purposes. The reason is that calling spin_lock()
with preemption disabled is forbidden in a CONFIG_PREEMPT_RT kernel.

Therefore, we no longer guarantee that a CPU is preloaded; instead,
this change minimizes the cases when it is not. For example, I ran a
special test case that follows the preload pattern and path. 20
"unbind" threads ran it and each did 1000000 allocations. Only 3.5
times out of 1000000 was a CPU not preloaded. So it can happen, but
the number is negligible.

V1 -> V2:
  - move the __this_cpu_cmpxchg() check to after the spin_lock is
    taken, as proposed by Andrew Morton
  - add more explanation in regard to preloading
  - adjust and move some comments

Fixes: 82dd23e84be3 ("mm/vmalloc.c: preload a CPU with one object for split purpose")
Reviewed-by: Steven Rostedt (VMware)
Signed-off-by: Uladzislau Rezki (Sony)
Acked-by: Sebastian Andrzej Siewior
Acked-by: Daniel Wagner
---
 mm/vmalloc.c | 50 +++++++++++++++++++++++++++++++++-----------------
 1 file changed, 33 insertions(+), 17 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e92ff5f7dd8b..f48cd0711478 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -969,6 +969,19 @@ adjust_va_to_fit_type(struct vmap_area *va,
 			 * There are a few exceptions though, as an example it is
 			 * a first allocation (early boot up) when we have "one"
 			 * big free space that has to be split.
+			 *
+			 * Also we can hit this path in case of regular "vmap"
+			 * allocations, if "this" current CPU was not preloaded.
+			 * See the comment in alloc_vmap_area() why. If so, then
+			 * GFP_NOWAIT is used instead to get an extra object for
+			 * split purpose. That is rare and most time does not
+			 * occur.
+			 *
+			 * What happens if an allocation gets failed. Basically,
+			 * an "overflow" path is triggered to purge lazily freed
+			 * areas to free some memory, then, the "retry" path is
+			 * triggered to repeat one more time. See more details
+			 * in alloc_vmap_area() function.
 			 */
 			lva = kmem_cache_alloc(vmap_area_cachep, GFP_NOWAIT);
 			if (!lva)
@@ -1078,31 +1091,34 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 
 retry:
 	/*
-	 * Preload this CPU with one extra vmap_area object to ensure
-	 * that we have it available when fit type of free area is
-	 * NE_FIT_TYPE.
+	 * Preload this CPU with one extra vmap_area object. It is used
+	 * when fit type of free area is NE_FIT_TYPE. Please note, it
+	 * does not guarantee that an allocation occurs on a CPU that
+	 * is preloaded, instead we minimize the case when it is not.
+	 * It can happen because of migration, because there is a race
+	 * until the below spinlock is taken.
 	 *
 	 * The preload is done in non-atomic context, thus it allows us
 	 * to use more permissive allocation masks to be more stable under
-	 * low memory condition and high memory pressure.
+	 * low memory condition and high memory pressure. In rare case,
+	 * if not preloaded, GFP_NOWAIT is used.
 	 *
-	 * Even if it fails we do not really care about that. Just proceed
-	 * as it is. "overflow" path will refill the cache we allocate from.
+	 * Set "pva" to NULL here, because of "retry" path.
 	 */
-	preempt_disable();
-	if (!__this_cpu_read(ne_fit_preload_node)) {
-		preempt_enable();
-		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
-		preempt_disable();
+	pva = NULL;
 
-		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
-			if (pva)
-				kmem_cache_free(vmap_area_cachep, pva);
-		}
-	}
+	if (!this_cpu_read(ne_fit_preload_node))
+		/*
+		 * Even if it fails we do not really care about that.
+		 * Just proceed as it is. If needed "overflow" path
+		 * will refill the cache we allocate from.
+		 */
+		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
 
 	spin_lock(&vmap_area_lock);
-	preempt_enable();
+
+	if (pva && __this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva))
+		kmem_cache_free(vmap_area_cachep, pva);
 
 	/*
 	 * If an allocation fails, the "vend" address is
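
As an aside for readers following along outside the kernel tree, below is a
minimal userspace sketch of the preload pattern the patch switches to:
allocate the spare object in a sleepable context before taking the lock, then
publish it into the preload slot once the lock is held, freeing the extra
object if the slot is already occupied. This is not kernel code and not part
of the patch; thread-local storage stands in for the per-CPU
ne_fit_preload_node, a pthread mutex for vmap_area_lock, and malloc()/free()
for the vmap_area kmem_cache. The names big_lock, preload_node and alloc_path
are invented for the sketch.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER; /* stand-in for vmap_area_lock */
static __thread void *preload_node;                          /* stand-in for ne_fit_preload_node */

static void alloc_path(void)
{
	void *pva = NULL;

	/* Preload outside the lock, where a sleeping allocation is fine. */
	if (!preload_node)
		pva = malloc(64);

	pthread_mutex_lock(&big_lock);

	/*
	 * Publish the spare object. In the kernel, __this_cpu_cmpxchg()
	 * also covers the case where the task migrated and the new CPU
	 * is already preloaded; the extra object is then freed.
	 */
	if (pva) {
		if (!preload_node)
			preload_node = pva;
		else
			free(pva);
	}

	/*
	 * ... the real code would carve out the vmap_area here,
	 * consuming preload_node on an NE_FIT_TYPE split ...
	 */

	pthread_mutex_unlock(&big_lock);
}

int main(void)
{
	alloc_path();
	printf("preloaded: %s\n", preload_node ? "yes" : "no");
	free(preload_node);
	return 0;
}

Built with e.g. "cc -pthread sketch.c", this prints "preloaded: yes", since
nothing competes for the thread-local slot; in the kernel the interesting case
is a task migrating between the preload and the spin_lock(), which is exactly
what the __this_cpu_cmpxchg() under the lock handles in the patch above.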