From patchwork Mon Mar 30 02:32:32 2020
X-Patchwork-Submitter: Joel Fernandes
X-Patchwork-Id: 11464467
From: "Joel Fernandes (Google)"
To: linux-kernel@vger.kernel.org
Cc: "Uladzislau Rezki (Sony)", Joel Fernandes, Andrew Morton, Ingo Molnar,
    Josh Triplett, Lai Jiangshan, linux-mm@kvack.org, Mathieu Desnoyers,
    "Paul E. McKenney", "Rafael J. Wysocki", rcu@vger.kernel.org,
    Steven Rostedt
Subject: [PATCH 02/18] rcu: Introduce kvfree_rcu() interface
Date: Sun, 29 Mar 2020 22:32:32 -0400
Message-Id: <20200330023248.164994-3-joel@joelfernandes.org>
In-Reply-To: <20200330023248.164994-1-joel@joelfernandes.org>
References: <20200330023248.164994-1-joel@joelfernandes.org>

From: "Uladzislau Rezki (Sony)"

kvfree_rcu() can handle memory obtained via kvmalloc(). kvmalloc() can
return two types of pointers: one belonging to the regular SLAB
allocator and one coming from vmalloc(), depending on the requested
size and on memory pressure. Reclaim is therefore split into two
streams: if a pointer belongs to the vmalloc() allocator it is queued
onto a list, otherwise the SLAB pointer is queued into a "bulk array"
for further processing. The main reasons for this split are:

a) kmalloc() and vmalloc() pointers have to be distinguished;
b) there is no vmalloc_bulk() interface.

As of now, list_lru.c needs such an interface, and more users are
expected. Apart from that, this is preparation for a head-less variant
to be added later.

Signed-off-by: Uladzislau Rezki (Sony)
Reviewed-by: Joel Fernandes (Google)
Signed-off-by: Joel Fernandes (Google)
---
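A minimal sketch of how a caller might use the new interface; the
struct and function names below are illustrative only and do not
appear in this patch:

	struct foo {
		int data;
		struct rcu_head rhf;	/* consumed by kvfree_rcu() */
	};

	static void release_foo(struct foo *fp)
	{
		/*
		 * Free fp after a grace period. Works whether fp came
		 * from kmalloc() or vmalloc(), since the reclaim path
		 * now goes through kvfree().
		 */
		kvfree_rcu(fp, rhf);
	}

An object allocated with kvmalloc(sizeof(struct foo), GFP_KERNEL) can
thus be released without the caller knowing which allocator backed it.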
 include/linux/rcupdate.h |  9 +++++++++
 kernel/rcu/tiny.c        |  3 ++-
 kernel/rcu/tree.c        | 17 ++++++++++++-----
 3 files changed, 23 insertions(+), 6 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 3598bbb5ff407..8b7128d0860e2 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -870,6 +870,15 @@ do { \
 	__kfree_rcu(&((___p)->rhf), offsetof(typeof(*(ptr)), rhf)); \
 } while (0)
 
+/**
+ * kvfree_rcu() - kvfree an object after a grace period.
+ * @ptr: pointer to kvfree
+ * @rhf: the name of the struct rcu_head within the type of @ptr.
+ *
+ * Same as kfree_rcu(); it is simply an alias.
+ */
+#define kvfree_rcu(ptr, rhf) kfree_rcu(ptr, rhf)
+
 /*
  * Place this after a lock-acquisition primitive to guarantee that
  * an UNLOCK+LOCK pair acts as a full barrier. This guarantee applies
diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c
index dd572ce7c7479..4b99f7b88beec 100644
--- a/kernel/rcu/tiny.c
+++ b/kernel/rcu/tiny.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include <linux/mm.h>
 
 #include "rcu.h"
 
@@ -86,7 +87,7 @@ static inline bool rcu_reclaim_tiny(struct rcu_head *head)
 	rcu_lock_acquire(&rcu_callback_map);
 	if (__is_kfree_rcu_offset(offset)) {
 		trace_rcu_invoke_kfree_callback("", head, offset);
-		kfree((void *)head - offset);
+		kvfree((void *)head - offset);
 		rcu_lock_release(&rcu_callback_map);
 		return true;
 	}
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 4eb424eb44acb..2d10c50621c38 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2925,9 +2925,9 @@ static void kfree_rcu_work(struct work_struct *work)
 	}
 
 	/*
-	 * Emergency case only. It can happen under low memory
-	 * condition when an allocation gets failed, so the "bulk"
-	 * path can not be temporary maintained.
+	 * vmalloc() pointers end up here, as does the emergency case:
+	 * under low-memory conditions an allocation can fail, so the
+	 * "bulk" path cannot be maintained temporarily.
 	 */
 	for (; head; head = next) {
 		unsigned long offset = (unsigned long)head->func;
@@ -2938,7 +2938,7 @@
 
 		trace_rcu_invoke_kfree_callback(rcu_state.name, head, offset);
 		if (!WARN_ON_ONCE(!__is_kfree_rcu_offset(offset)))
-			kfree((void *)head - offset);
+			kvfree((void *)head - offset);
 
 		rcu_lock_release(&rcu_callback_map);
 		cond_resched_tasks_rcu_qs();
@@ -3112,10 +3112,17 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
 	}
 
 	/*
+	 * vmalloc() pointers are not queued into the array;
+	 * they are queued onto the list instead, because:
+	 * a) kmalloc()/vmalloc() pointers need to be
+	 *    distinguished;
+	 * b) there is no vmalloc_bulk() interface.
+	 *
 	 * Under high memory pressure GFP_NOWAIT can fail,
 	 * in that case the emergency path is maintained.
 	 */
-	if (unlikely(!kfree_call_rcu_add_ptr_to_bulk(krcp, head, func))) {
+	if (is_vmalloc_addr((void *) head - (unsigned long) func) ||
+	    !kfree_call_rcu_add_ptr_to_bulk(krcp, head, func)) {
 		head->func = func;
 		head->next = krcp->head;
 		krcp->head = head;
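
For reference, kvfree() itself dispatches on the pointer type, roughly
as follows (a simplified sketch of the mm/util.c helper, not the
verbatim kernel source):

	void kvfree(const void *addr)
	{
		if (is_vmalloc_addr(addr))
			vfree(addr);	/* vmalloc()-backed memory */
		else
			kfree(addr);	/* SLAB-backed memory */
	}

This is why switching the reclaim paths above from kfree() to kvfree()
is sufficient to cover both allocator backends.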