From patchwork Wed Oct  9 16:49:34 2019
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 11181613
From: "Uladzislau Rezki (Sony)"
To: Andrew Morton
Cc: Daniel Wagner, Sebastian Andrzej Siewior, Thomas Gleixner,
    linux-mm@kvack.org, LKML, Peter Zijlstra, Uladzislau Rezki,
    Hillf Danton, Michal Hocko, Matthew Wilcox, Oleksiy Avramchenko,
    Steven Rostedt
Subject: [PATCH 1/1] mm/vmalloc: remove preempt_disable/enable when doing preloading
Date: Wed, 9 Oct 2019 18:49:34 +0200
Message-Id: <20191009164934.10166-1-urezki@gmail.com>
X-Mailer: git-send-email 2.20.1

Get rid of preempt_disable() and preempt_enable() when the preload is
done for splitting purposes. The reason is that calling spin_lock() with
preemption disabled is forbidden in a CONFIG_PREEMPT_RT kernel.

As a result, we no longer guarantee that a CPU is preloaded; instead,
this change minimizes the cases when it is not. For example, I ran a
test case that follows the preload pattern and path: 20 "unbound"
threads each performed 1,000,000 allocations. On average, a CPU turned
out not to be preloaded only 3.5 times per 1,000,000 allocations. So it
can happen, but the number is negligible.

Fixes: 82dd23e84be3 ("mm/vmalloc.c: preload a CPU with one object for split purpose")
Signed-off-by: Uladzislau Rezki (Sony)
Reviewed-by: Steven Rostedt (VMware)
---
 mm/vmalloc.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e92ff5f7dd8b..2ed6fef86950 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1078,9 +1078,12 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 
 retry:
 	/*
-	 * Preload this CPU with one extra vmap_area object to ensure
-	 * that we have it available when fit type of free area is
-	 * NE_FIT_TYPE.
+	 * Preload this CPU with one extra vmap_area object. It is used
+	 * when fit type of free area is NE_FIT_TYPE. Please note, it
+	 * does not guarantee that an allocation occurs on a CPU that
+	 * is preloaded, instead we minimize the case when it is not.
+	 * It can happen because of migration, because there is a race
+	 * until the below spinlock is taken.
 	 *
 	 * The preload is done in non-atomic context, thus it allows us
 	 * to use more permissive allocation masks to be more stable under
@@ -1089,20 +1092,16 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	 * Even if it fails we do not really care about that. Just proceed
 	 * as it is. "overflow" path will refill the cache we allocate from.
 	 */
-	preempt_disable();
-	if (!__this_cpu_read(ne_fit_preload_node)) {
-		preempt_enable();
+	if (!this_cpu_read(ne_fit_preload_node)) {
 		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
-		preempt_disable();
 
-		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
+		if (this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
 			if (pva)
 				kmem_cache_free(vmap_area_cachep, pva);
 		}
 	}
 
 	spin_lock(&vmap_area_lock);
-	preempt_enable();
 
 	/*
 	 * If an allocation fails, the "vend" address is
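
[Editor's note] For readers outside the kernel tree, the following is a
minimal userspace C sketch of the publish-or-free pattern the patch relies
on: read the per-CPU slot without disabling preemption, allocate outside
any lock if the slot is empty, and install the object with a single
compare-and-swap so a racing (or migrated) context never leaks it. This is
not the kernel code itself; names such as preload_slot and alloc_object()
are invented for illustration, and a plain atomic pointer stands in for the
per-CPU variable ne_fit_preload_node.

/* Build with: gcc -std=c11 -o preload_sketch preload_sketch.c */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

/* Stands in for the per-CPU ne_fit_preload_node slot. */
static _Atomic(void *) preload_slot;

/* Stands in for kmem_cache_alloc_node(); may sleep or fail. */
static void *alloc_object(void)
{
	return malloc(64);
}

static void preload(void)
{
	void *obj, *expected = NULL;

	if (atomic_load(&preload_slot))
		return;			/* already preloaded, nothing to do */

	/* Allocation happens outside any lock, so it may block safely. */
	obj = alloc_object();
	if (!obj)
		return;			/* best effort: a slow path refills later */

	/*
	 * Publish the object with one atomic compare-and-swap. If another
	 * context (or this one after migrating) already filled the slot,
	 * free our copy instead of leaking it. No preempt_disable() needed.
	 */
	if (!atomic_compare_exchange_strong(&preload_slot, &expected, obj))
		free(obj);
}

int main(void)
{
	preload();
	printf("slot is %spreloaded\n",
	       atomic_load(&preload_slot) ? "" : "not ");
	return 0;
}

The single compare-and-swap is what makes the preempt_disable()/enable()
pair unnecessary: the worst case after a migration is a wasted allocation
that is immediately freed, which matches the "best effort" behaviour the
commit message describes.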