From patchwork Wed Oct 16 09:54:36 2019
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 11192793
From: "Uladzislau Rezki (Sony)"
To: Andrew Morton
Cc: Daniel Wagner, Sebastian Andrzej Siewior, Thomas Gleixner, linux-mm@kvack.org, LKML, Peter Zijlstra, Uladzislau Rezki, Hillf Danton, Michal Hocko, Matthew Wilcox, Oleksiy Avramchenko, Steven Rostedt
Subject: [PATCH v3 1/3] mm/vmalloc: remove preempt_disable/enable when doing preloading
Date: Wed, 16 Oct 2019 11:54:36 +0200
Message-Id: <20191016095438.12391-1-urezki@gmail.com>

Some background: preemption used to be disabled to guarantee that a preloaded
object is available for the CPU it was stored for. The aim was to avoid
allocating in atomic context when the spinlock is taken later on, for regular
vmap allocations.
But that approach conflicts with the CONFIG_PREEMPT_RT philosophy: calling
spin_lock() with preemption disabled is forbidden in a CONFIG_PREEMPT_RT
kernel. Therefore, get rid of preempt_disable() and preempt_enable() around
the preload that is done for split purposes. As a result we no longer
guarantee that a CPU is preloaded; instead, with this change, we minimize
the cases when it is not.

For example, I ran a special test case that follows the preload pattern and
path. Twenty "unbind" threads ran it and each did 1000000 allocations. Only
about 3.5 times out of 1000000 was a CPU not preloaded. So it can happen,
but the number is negligible.

V2 -> V3:
    - update the commit message

V1 -> V2:
    - move the __this_cpu_cmpxchg() check to where spin_lock() is taken,
      as proposed by Andrew Morton
    - add more explanation in regard to preloading
    - adjust and move some comments

Fixes: 82dd23e84be3 ("mm/vmalloc.c: preload a CPU with one object for split purpose")
Reviewed-by: Steven Rostedt (VMware)
Acked-by: Sebastian Andrzej Siewior
Acked-by: Daniel Wagner
Signed-off-by: Uladzislau Rezki (Sony)
Acked-by: Michal Hocko
---
 mm/vmalloc.c | 37 ++++++++++++++++++++-----------------
 1 file changed, 20 insertions(+), 17 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e92ff5f7dd8b..b7b443bfdd92 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1078,31 +1078,34 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 retry:
 	/*
-	 * Preload this CPU with one extra vmap_area object to ensure
-	 * that we have it available when fit type of free area is
-	 * NE_FIT_TYPE.
+	 * Preload this CPU with one extra vmap_area object. It is used
+	 * when fit type of free area is NE_FIT_TYPE. Please note, it
+	 * does not guarantee that an allocation occurs on a CPU that
+	 * is preloaded, instead we minimize the case when it is not.
+	 * It can happen because of cpu migration, because there is a
+	 * race until the below spinlock is taken.
 	 *
 	 * The preload is done in non-atomic context, thus it allows us
 	 * to use more permissive allocation masks to be more stable under
-	 * low memory condition and high memory pressure.
+	 * low memory condition and high memory pressure. In rare case,
+	 * if not preloaded, GFP_NOWAIT is used.
 	 *
-	 * Even if it fails we do not really care about that. Just proceed
-	 * as it is. "overflow" path will refill the cache we allocate from.
+	 * Set "pva" to NULL here, because of "retry" path.
 	 */
-	preempt_disable();
-	if (!__this_cpu_read(ne_fit_preload_node)) {
-		preempt_enable();
-		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
-		preempt_disable();
+	pva = NULL;
 
-		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
-			if (pva)
-				kmem_cache_free(vmap_area_cachep, pva);
-		}
-	}
+	if (!this_cpu_read(ne_fit_preload_node))
+		/*
+		 * Even if it fails we do not really care about that.
+		 * Just proceed as it is. If needed "overflow" path
+		 * will refill the cache we allocate from.
+		 */
+		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
 
 	spin_lock(&vmap_area_lock);
-	preempt_enable();
+
+	if (pva && __this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva))
+		kmem_cache_free(vmap_area_cachep, pva);
 
 	/*
 	 * If an allocation fails, the "vend" address is

From patchwork Wed Oct 16 09:54:37 2019
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 11192795
From: "Uladzislau Rezki (Sony)"
To: Andrew Morton
Cc: Daniel Wagner, Sebastian Andrzej Siewior, Thomas Gleixner, linux-mm@kvack.org, LKML, Peter Zijlstra, Uladzislau Rezki, Hillf Danton, Michal Hocko, Matthew Wilcox, Oleksiy Avramchenko, Steven Rostedt
Subject: [PATCH v3 2/3] mm/vmalloc: respect passed gfp_mask when doing preloading
Date: Wed, 16 Oct 2019 11:54:37 +0200
Message-Id: <20191016095438.12391-2-urezki@gmail.com>
In-Reply-To: <20191016095438.12391-1-urezki@gmail.com>
References: <20191016095438.12391-1-urezki@gmail.com>

alloc_vmap_area() is given a gfp_mask for the page allocator. Let's respect
that mask and consider it even when doing regular CPU preloading, i.e.
where a context can sleep.

Signed-off-by: Uladzislau Rezki (Sony)
Acked-by: Michal Hocko
Signed-off-by: Andrew Morton
---
 mm/vmalloc.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b7b443bfdd92..593bf554518d 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1064,9 +1064,9 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 		return ERR_PTR(-EBUSY);
 
 	might_sleep();
+	gfp_mask = gfp_mask & GFP_RECLAIM_MASK;
 
-	va = kmem_cache_alloc_node(vmap_area_cachep,
-			gfp_mask & GFP_RECLAIM_MASK, node);
+	va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
 	if (unlikely(!va))
 		return ERR_PTR(-ENOMEM);
 
@@ -1074,7 +1074,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	 * Only scan the relevant parts containing pointers to other objects
 	 * to avoid false negatives.
 	 */
-	kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask & GFP_RECLAIM_MASK);
+	kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
 
 retry:
 	/*
@@ -1100,7 +1100,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 		 * Just proceed as it is. If needed "overflow" path
 		 * will refill the cache we allocate from.
 		 */
-		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
+		pva = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
 
 	spin_lock(&vmap_area_lock);

From patchwork Wed Oct 16 09:54:38 2019
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 11192797
From: "Uladzislau Rezki (Sony)"
To: Andrew Morton
Cc: Daniel Wagner, Sebastian Andrzej Siewior, Thomas Gleixner, linux-mm@kvack.org, LKML, Peter Zijlstra, Uladzislau Rezki, Hillf Danton, Michal Hocko, Matthew Wilcox, Oleksiy Avramchenko, Steven Rostedt
Subject: [PATCH v3 3/3] mm/vmalloc: add more comments to the adjust_va_to_fit_type()
Date: Wed, 16 Oct 2019 11:54:38 +0200
Message-Id: <20191016095438.12391-3-urezki@gmail.com>
In-Reply-To: <20191016095438.12391-1-urezki@gmail.com>
References: <20191016095438.12391-1-urezki@gmail.com>
When the fit type is NE_FIT_TYPE, one extra object is needed. Usually the
"ne_fit_preload_node" per-CPU variable holds it, so there is no need for a
GFP_NOWAIT allocation, but there are exceptions. This commit just adds more
explanations, answering questions such as when this can occur, how often,
under which conditions, and what happens if the GFP_NOWAIT allocation fails.

Signed-off-by: Uladzislau Rezki (Sony)
Acked-by: Michal Hocko
---
 mm/vmalloc.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 593bf554518d..2290a0d270e4 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -969,6 +969,19 @@ adjust_va_to_fit_type(struct vmap_area *va,
 		 * There are a few exceptions though, as an example it is
 		 * a first allocation (early boot up) when we have "one"
 		 * big free space that has to be split.
+		 *
+		 * Also we can hit this path in case of regular "vmap"
+		 * allocations, if "this" current CPU was not preloaded.
+		 * See the comment in alloc_vmap_area() why. If so, then
+		 * GFP_NOWAIT is used instead to get an extra object for
+		 * split purpose. That is rare and most time does not
+		 * occur.
+		 *
+		 * What happens if an allocation gets failed. Basically,
+		 * an "overflow" path is triggered to purge lazily freed
+		 * areas to free some memory, then, the "retry" path is
+		 * triggered to repeat one more time. See more details
+		 * in alloc_vmap_area() function.
 		 */
 		lva = kmem_cache_alloc(vmap_area_cachep, GFP_NOWAIT);
 		if (!lva)