From patchwork Tue Mar 26 10:37:38 2024
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13603884
From: Vlastimil Babka
Date: Tue, 26 Mar 2024 11:37:38 +0100
Subject: [PATCH mm-unstable v3 1/2] mm, slab: move memcg charging to post-alloc hook
Message-Id: <20240326-slab-memcg-v3-1-d85d2563287a@suse.cz>
References: <20240326-slab-memcg-v3-0-d85d2563287a@suse.cz>
In-Reply-To: <20240326-slab-memcg-v3-0-d85d2563287a@suse.cz>
To: Linus Torvalds, Josh Poimboeuf, Jeff Layton, Chuck Lever, Kees Cook,
    Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    Johannes Weiner, Michal Hocko, Muchun Song, Alexander Viro,
    Christian Brauner, Jan Kara, Shakeel Butt
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, Vlastimil Babka, Chengming Zhou

The MEMCG_KMEM integration with slab currently relies on two hooks during
allocation. memcg_slab_pre_alloc_hook() determines the objcg and charges it,
and memcg_slab_post_alloc_hook() assigns the objcg pointer to the allocated
object(s).

As Linus pointed out, this is unnecessarily complex. Failing to charge due to
memcg limits should be rare, so we can optimistically allocate the object(s)
and do the charging together with assigning the objcg pointer in a single
post_alloc hook. In the rare case the charging fails, we can free the
object(s) back.

This simplifies the code (no need to pass around the objcg pointer) and
potentially allows separating charging from allocation in cases where it's
common that the allocation would be immediately freed, and the memcg handling
overhead could be saved.
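
To illustrate the ordering change, here is a minimal stand-alone C sketch.
It only models the control flow (allocate first, charge in a post-alloc
step, free the object back if charging fails); try_charge(), the fixed
limit and alloc_accounted() are made-up stand-ins for illustration and are
not kernel APIs:

  #include <stdbool.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Made-up stand-ins for the memcg charge state and limit. */
  static long charged, limit = 2;

  static bool try_charge(size_t size)
  {
  	if (charged + (long)size > limit)
  		return false;		/* over the (made-up) memcg limit */
  	charged += size;
  	return true;
  }

  /*
   * Old scheme: charge first (pre_alloc hook), then allocate.
   * New scheme: allocate first, charge in the post_alloc hook, and free
   * the object back in the rare case the charging fails.
   */
  static void *alloc_accounted(size_t size)
  {
  	void *p = malloc(size);		/* optimistic allocation */

  	if (!p)
  		return NULL;
  	if (!try_charge(size)) {	/* post-alloc charging failed */
  		free(p);		/* free the object back */
  		return NULL;
  	}
  	return p;
  }

  int main(void)
  {
  	void *a = alloc_accounted(1);
  	void *b = alloc_accounted(4);	/* exceeds the made-up limit */

  	printf("first: %s, second: %s\n", a ? "ok" : "failed",
  	       b ? "ok" : "failed");
  	free(a);
  	return 0;
  }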
Suggested-by: Linus Torvalds
Link: https://lore.kernel.org/all/CAHk-=whYOOdM7jWy5jdrAm8LxcgCMFyk2bt8fYYvZzM4U-zAQA@mail.gmail.com/
Reviewed-by: Roman Gushchin
Reviewed-by: Chengming Zhou
Signed-off-by: Vlastimil Babka
---
 mm/slub.c | 180 +++++++++++++++++++++++++++-----------------------------------
 1 file changed, 77 insertions(+), 103 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 7b68a3451eb9..263ff2a9f251 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2098,23 +2098,36 @@ static inline size_t obj_full_size(struct kmem_cache *s)
 	return s->size + sizeof(struct obj_cgroup *);
 }
 
-/*
- * Returns false if the allocation should fail.
- */
-static bool __memcg_slab_pre_alloc_hook(struct kmem_cache *s,
-					struct list_lru *lru,
-					struct obj_cgroup **objcgp,
-					size_t objects, gfp_t flags)
+static bool __memcg_slab_post_alloc_hook(struct kmem_cache *s,
+					 struct list_lru *lru,
+					 gfp_t flags, size_t size,
+					 void **p)
 {
+	struct obj_cgroup *objcg;
+	struct slab *slab;
+	unsigned long off;
+	size_t i;
+
 	/*
 	 * The obtained objcg pointer is safe to use within the current scope,
 	 * defined by current task or set_active_memcg() pair.
 	 * obj_cgroup_get() is used to get a permanent reference.
 	 */
-	struct obj_cgroup *objcg = current_obj_cgroup();
+	objcg = current_obj_cgroup();
 	if (!objcg)
 		return true;
 
+	/*
+	 * slab_alloc_node() avoids the NULL check, so we might be called with a
+	 * single NULL object. kmem_cache_alloc_bulk() aborts if it can't fill
+	 * the whole requested size.
+	 * return success as there's nothing to free back
+	 */
+	if (unlikely(*p == NULL))
+		return true;
+
+	flags &= gfp_allowed_mask;
+
 	if (lru) {
 		int ret;
 		struct mem_cgroup *memcg;
@@ -2127,71 +2140,51 @@ static bool __memcg_slab_pre_alloc_hook(struct kmem_cache *s,
 			return false;
 	}
 
-	if (obj_cgroup_charge(objcg, flags, objects * obj_full_size(s)))
+	if (obj_cgroup_charge(objcg, flags, size * obj_full_size(s)))
 		return false;
 
-	*objcgp = objcg;
+	for (i = 0; i < size; i++) {
+		slab = virt_to_slab(p[i]);
+
+		if (!slab_obj_exts(slab) &&
+		    alloc_slab_obj_exts(slab, s, flags, false)) {
+			obj_cgroup_uncharge(objcg, obj_full_size(s));
+			continue;
+		}
+
+		off = obj_to_index(s, slab, p[i]);
+		obj_cgroup_get(objcg);
+		slab_obj_exts(slab)[off].objcg = objcg;
+		mod_objcg_state(objcg, slab_pgdat(slab),
+				cache_vmstat_idx(s), obj_full_size(s));
+	}
+
 	return true;
 }
 
-/*
- * Returns false if the allocation should fail.
- */
+static void memcg_alloc_abort_single(struct kmem_cache *s, void *object);
+
 static __fastpath_inline
-bool memcg_slab_pre_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
-			       struct obj_cgroup **objcgp, size_t objects,
-			       gfp_t flags)
+bool memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
+				gfp_t flags, size_t size, void **p)
 {
-	if (!memcg_kmem_online())
+	if (likely(!memcg_kmem_online()))
 		return true;
 
 	if (likely(!(flags & __GFP_ACCOUNT) && !(s->flags & SLAB_ACCOUNT)))
 		return true;
 
-	return likely(__memcg_slab_pre_alloc_hook(s, lru, objcgp, objects,
-						  flags));
-}
-
-static void __memcg_slab_post_alloc_hook(struct kmem_cache *s,
-					 struct obj_cgroup *objcg,
-					 gfp_t flags, size_t size,
-					 void **p)
-{
-	struct slab *slab;
-	unsigned long off;
-	size_t i;
-
-	flags &= gfp_allowed_mask;
-
-	for (i = 0; i < size; i++) {
-		if (likely(p[i])) {
-			slab = virt_to_slab(p[i]);
-
-			if (!slab_obj_exts(slab) &&
-			    alloc_slab_obj_exts(slab, s, flags, false)) {
-				obj_cgroup_uncharge(objcg, obj_full_size(s));
-				continue;
-			}
+	if (likely(__memcg_slab_post_alloc_hook(s, lru, flags, size, p)))
 		return true;
 
-			off = obj_to_index(s, slab, p[i]);
-			obj_cgroup_get(objcg);
-			slab_obj_exts(slab)[off].objcg = objcg;
-			mod_objcg_state(objcg, slab_pgdat(slab),
-					cache_vmstat_idx(s), obj_full_size(s));
-		} else {
-			obj_cgroup_uncharge(objcg, obj_full_size(s));
-		}
+	if (likely(size == 1)) {
+		memcg_alloc_abort_single(s, p);
+		*p = NULL;
+	} else {
+		kmem_cache_free_bulk(s, size, p);
 	}
-}
-
-static __fastpath_inline
-void memcg_slab_post_alloc_hook(struct kmem_cache *s, struct obj_cgroup *objcg,
-				gfp_t flags, size_t size, void **p)
-{
-	if (likely(!memcg_kmem_online() || !objcg))
-		return;
-
-	return __memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
+	return false;
 }
 
 static void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
@@ -2230,40 +2223,19 @@ void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p,
 	__memcg_slab_free_hook(s, slab, p, objects, obj_exts);
 }
 
-
-static inline
-void memcg_slab_alloc_error_hook(struct kmem_cache *s, int objects,
-				 struct obj_cgroup *objcg)
-{
-	if (objcg)
-		obj_cgroup_uncharge(objcg, objects * obj_full_size(s));
-}
 #else /* CONFIG_MEMCG_KMEM */
-static inline bool memcg_slab_pre_alloc_hook(struct kmem_cache *s,
-					     struct list_lru *lru,
-					     struct obj_cgroup **objcgp,
-					     size_t objects, gfp_t flags)
-{
-	return true;
-}
-
-static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
-					      struct obj_cgroup *objcg,
+static inline bool memcg_slab_post_alloc_hook(struct kmem_cache *s,
+					      struct list_lru *lru,
 					      gfp_t flags, size_t size,
 					      void **p)
 {
+	return true;
 }
 
 static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
 					void **p, int objects)
 {
 }
-
-static inline
-void memcg_slab_alloc_error_hook(struct kmem_cache *s, int objects,
-				 struct obj_cgroup *objcg)
-{
-}
 #endif /* CONFIG_MEMCG_KMEM */
 
 /*
@@ -3943,10 +3915,7 @@ noinline int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
 ALLOW_ERROR_INJECTION(should_failslab, ERRNO);
 
 static __fastpath_inline
-struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
-				       struct list_lru *lru,
-				       struct obj_cgroup **objcgp,
-				       size_t size, gfp_t flags)
+struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
 {
 	flags &= gfp_allowed_mask;
 
@@ -3955,14 +3924,11 @@ struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
 	if (unlikely(should_failslab(s, flags)))
 		return NULL;
 
-	if (unlikely(!memcg_slab_pre_alloc_hook(s, lru, objcgp, size,
-						flags)))
-		return NULL;
-
 	return s;
 }
 
 static __fastpath_inline
-void slab_post_alloc_hook(struct kmem_cache *s, struct obj_cgroup *objcg,
+bool slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 			  gfp_t flags, size_t size, void **p, bool init,
 			  unsigned int orig_size)
 {
@@ -4024,7 +3990,7 @@ void slab_post_alloc_hook(struct kmem_cache *s, struct obj_cgroup *objcg,
 		}
 	}
 
-	memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
+	return memcg_slab_post_alloc_hook(s, lru, flags, size, p);
 }
 
 /*
@@ -4041,10 +4007,9 @@ static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struct list
 		gfp_t gfpflags, int node, unsigned long addr, size_t orig_size)
 {
 	void *object;
-	struct obj_cgroup *objcg = NULL;
 	bool init = false;
 
-	s = slab_pre_alloc_hook(s, lru, &objcg, 1, gfpflags);
+	s = slab_pre_alloc_hook(s, gfpflags);
 	if (unlikely(!s))
 		return NULL;
 
@@ -4061,8 +4026,10 @@ static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struct list
 	/*
 	 * When init equals 'true', like for kzalloc() family, only
 	 * @orig_size bytes might be zeroed instead of s->object_size
+	 * In case this fails due to memcg_slab_post_alloc_hook(),
+	 * object is set to NULL
 	 */
-	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init, orig_size);
+	slab_post_alloc_hook(s, lru, gfpflags, 1, &object, init, orig_size);
 
 	return object;
 }
@@ -4502,6 +4469,16 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
 	do_slab_free(s, slab, object, object, 1, addr);
 }
 
+#ifdef CONFIG_MEMCG_KMEM
+/* Do not inline the rare memcg charging failed path into the allocation path */
+static noinline
+void memcg_alloc_abort_single(struct kmem_cache *s, void *object)
+{
+	if (likely(slab_free_hook(s, object, slab_want_init_on_free(s))))
+		do_slab_free(s, virt_to_slab(object), object, object, 1, _RET_IP_);
+}
+#endif
+
 static __fastpath_inline
 void slab_free_bulk(struct kmem_cache *s, struct slab *slab, void *head,
 		    void *tail, void **p, int cnt, unsigned long addr)
@@ -4838,29 +4815,26 @@ int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags,
 			  size_t size, void **p)
 {
 	int i;
-	struct obj_cgroup *objcg = NULL;
 
 	if (!size)
 		return 0;
 
-	/* memcg and kmem_cache debug support */
-	s = slab_pre_alloc_hook(s, NULL, &objcg, size, flags);
+	s = slab_pre_alloc_hook(s, flags);
 	if (unlikely(!s))
 		return 0;
 
 	i = __kmem_cache_alloc_bulk(s, flags, size, p);
+	if (unlikely(i == 0))
+		return 0;
 
 	/*
 	 * memcg and kmem_cache debug support and memory initialization.
 	 * Done outside of the IRQ disabled fastpath loop.
 	 */
-	if (likely(i != 0)) {
-		slab_post_alloc_hook(s, objcg, flags, size, p,
-			slab_want_init_on_alloc(flags, s), s->object_size);
-	} else {
-		memcg_slab_alloc_error_hook(s, size, objcg);
+	if (unlikely(!slab_post_alloc_hook(s, NULL, flags, size, p,
+		    slab_want_init_on_alloc(flags, s), s->object_size))) {
+		return 0;
 	}
-
 	return i;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_bulk_noprof);

From patchwork Tue Mar 26 10:37:39 2024
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13603885
From: Vlastimil Babka
Date: Tue, 26 Mar 2024 11:37:39 +0100
Subject: [PATCH mm-unstable v3 2/2] mm, slab: move slab_memcg hooks to mm/memcontrol.c
Message-Id: <20240326-slab-memcg-v3-2-d85d2563287a@suse.cz>
References: <20240326-slab-memcg-v3-0-d85d2563287a@suse.cz>
In-Reply-To: <20240326-slab-memcg-v3-0-d85d2563287a@suse.cz>
To: Linus Torvalds, Josh Poimboeuf, Jeff Layton, Chuck Lever, Kees Cook,
    Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Roman Gushchin, Hyeonggon Yoo
    <42.hyeyoo@gmail.com>, Johannes Weiner, Michal Hocko, Muchun Song,
    Alexander Viro, Christian Brauner, Jan Kara, Shakeel Butt
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, Vlastimil Babka

The hooks make multiple calls to functions in mm/memcontrol.c, including to
the current_obj_cgroup() marked __always_inline. It might be faster to make a
single call to the hook in mm/memcontrol.c instead. The hooks also use almost
nothing from mm/slub.c.
obj_full_size() can move with the hooks, and cache_vmstat_idx() can move to
the internal mm/slab.h.

Reviewed-by: Roman Gushchin
Signed-off-by: Vlastimil Babka
---
 mm/memcontrol.c |  90 +++++++++++++++++++++++++++++++++++++++++++++++++
 mm/slab.h       |  13 +++++++
 mm/slub.c       | 103 ++------------------------------------------------------
 3 files changed, 105 insertions(+), 101 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 0a0720858ddb..1b3c3394a2ba 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3558,6 +3558,96 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 	refill_obj_stock(objcg, size, true);
 }
 
+static inline size_t obj_full_size(struct kmem_cache *s)
+{
+	/*
+	 * For each accounted object there is an extra space which is used
+	 * to store obj_cgroup membership. Charge it too.
+	 */
+	return s->size + sizeof(struct obj_cgroup *);
+}
+
+bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
+				  gfp_t flags, size_t size, void **p)
+{
+	struct obj_cgroup *objcg;
+	struct slab *slab;
+	unsigned long off;
+	size_t i;
+
+	/*
+	 * The obtained objcg pointer is safe to use within the current scope,
+	 * defined by current task or set_active_memcg() pair.
+	 * obj_cgroup_get() is used to get a permanent reference.
+	 */
+	objcg = current_obj_cgroup();
+	if (!objcg)
+		return true;
+
+	/*
+	 * slab_alloc_node() avoids the NULL check, so we might be called with a
+	 * single NULL object. kmem_cache_alloc_bulk() aborts if it can't fill
+	 * the whole requested size.
+	 * return success as there's nothing to free back
+	 */
+	if (unlikely(*p == NULL))
+		return true;
+
+	flags &= gfp_allowed_mask;
+
+	if (lru) {
+		int ret;
+		struct mem_cgroup *memcg;
+
+		memcg = get_mem_cgroup_from_objcg(objcg);
+		ret = memcg_list_lru_alloc(memcg, lru, flags);
+		css_put(&memcg->css);
+
+		if (ret)
+			return false;
+	}
+
+	if (obj_cgroup_charge(objcg, flags, size * obj_full_size(s)))
+		return false;
+
+	for (i = 0; i < size; i++) {
+		slab = virt_to_slab(p[i]);
+
+		if (!slab_obj_exts(slab) &&
+		    alloc_slab_obj_exts(slab, s, flags, false)) {
+			obj_cgroup_uncharge(objcg, obj_full_size(s));
+			continue;
+		}
+
+		off = obj_to_index(s, slab, p[i]);
+		obj_cgroup_get(objcg);
+		slab_obj_exts(slab)[off].objcg = objcg;
+		mod_objcg_state(objcg, slab_pgdat(slab),
+				cache_vmstat_idx(s), obj_full_size(s));
+	}
+
+	return true;
+}
+
+void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
+			    void **p, int objects, struct slabobj_ext *obj_exts)
+{
+	for (int i = 0; i < objects; i++) {
+		struct obj_cgroup *objcg;
+		unsigned int off;
+
+		off = obj_to_index(s, slab, p[i]);
+		objcg = obj_exts[off].objcg;
+		if (!objcg)
+			continue;
+
+		obj_exts[off].objcg = NULL;
+		obj_cgroup_uncharge(objcg, obj_full_size(s));
+		mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s),
+				-obj_full_size(s));
+		obj_cgroup_put(objcg);
+	}
+}
 #endif /* CONFIG_MEMCG_KMEM */
 
 /*
diff --git a/mm/slab.h b/mm/slab.h
index 1343bfa12cee..411251b9bdd1 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -558,6 +558,9 @@ static inline struct slabobj_ext *slab_obj_exts(struct slab *slab)
 	return (struct slabobj_ext *)(obj_exts & ~OBJEXTS_FLAGS_MASK);
 }
 
+int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
+			gfp_t gfp, bool new_slab);
+
 #else /* CONFIG_SLAB_OBJ_EXT */
 
 static inline struct slabobj_ext *slab_obj_exts(struct slab *slab)
@@ -567,7 +570,17 @@ static inline struct slabobj_ext *slab_obj_exts(struct slab *slab)
 
 #endif /* CONFIG_SLAB_OBJ_EXT */
 
+static inline enum node_stat_item cache_vmstat_idx(struct kmem_cache *s)
+{
+	return (s->flags & SLAB_RECLAIM_ACCOUNT) ?
+		NR_SLAB_RECLAIMABLE_B : NR_SLAB_UNRECLAIMABLE_B;
+}
+
 #ifdef CONFIG_MEMCG_KMEM
+bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
+				  gfp_t flags, size_t size, void **p);
+void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
+			    void **p, int objects, struct slabobj_ext *obj_exts);
 void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
 		     enum node_stat_item idx, int nr);
 #endif
diff --git a/mm/slub.c b/mm/slub.c
index 263ff2a9f251..f5b151a58b7d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1865,12 +1865,6 @@ static bool freelist_corrupted(struct kmem_cache *s, struct slab *slab,
 #endif
 #endif /* CONFIG_SLUB_DEBUG */
 
-static inline enum node_stat_item cache_vmstat_idx(struct kmem_cache *s)
-{
-	return (s->flags & SLAB_RECLAIM_ACCOUNT) ?
-		NR_SLAB_RECLAIMABLE_B : NR_SLAB_UNRECLAIMABLE_B;
-}
-
 #ifdef CONFIG_SLAB_OBJ_EXT
 
 #ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
@@ -1929,8 +1923,8 @@ static inline void handle_failed_objexts_alloc(unsigned long obj_exts,
 #define OBJCGS_CLEAR_MASK	(__GFP_DMA | __GFP_RECLAIMABLE | \
 				__GFP_ACCOUNT | __GFP_NOFAIL)
 
-static int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
-			       gfp_t gfp, bool new_slab)
+int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
+			gfp_t gfp, bool new_slab)
 {
 	unsigned int objects = objs_per_slab(s, slab);
 	unsigned long new_exts;
@@ -2089,78 +2083,6 @@ alloc_tagging_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p,
 #endif /* CONFIG_SLAB_OBJ_EXT */
 
 #ifdef CONFIG_MEMCG_KMEM
-static inline size_t obj_full_size(struct kmem_cache *s)
-{
-	/*
-	 * For each accounted object there is an extra space which is used
-	 * to store obj_cgroup membership. Charge it too.
-	 */
-	return s->size + sizeof(struct obj_cgroup *);
-}
-
-static bool __memcg_slab_post_alloc_hook(struct kmem_cache *s,
-					 struct list_lru *lru,
-					 gfp_t flags, size_t size,
-					 void **p)
-{
-	struct obj_cgroup *objcg;
-	struct slab *slab;
-	unsigned long off;
-	size_t i;
-
-	/*
-	 * The obtained objcg pointer is safe to use within the current scope,
-	 * defined by current task or set_active_memcg() pair.
-	 * obj_cgroup_get() is used to get a permanent reference.
-	 */
-	objcg = current_obj_cgroup();
-	if (!objcg)
-		return true;
-
-	/*
-	 * slab_alloc_node() avoids the NULL check, so we might be called with a
-	 * single NULL object. kmem_cache_alloc_bulk() aborts if it can't fill
-	 * the whole requested size.
-	 * return success as there's nothing to free back
-	 */
-	if (unlikely(*p == NULL))
-		return true;
-
-	flags &= gfp_allowed_mask;
-
-	if (lru) {
-		int ret;
-		struct mem_cgroup *memcg;
-
-		memcg = get_mem_cgroup_from_objcg(objcg);
-		ret = memcg_list_lru_alloc(memcg, lru, flags);
-		css_put(&memcg->css);
-
-		if (ret)
-			return false;
-	}
-
-	if (obj_cgroup_charge(objcg, flags, size * obj_full_size(s)))
-		return false;
-
-	for (i = 0; i < size; i++) {
-		slab = virt_to_slab(p[i]);
-
-		if (!slab_obj_exts(slab) &&
-		    alloc_slab_obj_exts(slab, s, flags, false)) {
-			obj_cgroup_uncharge(objcg, obj_full_size(s));
-			continue;
-		}
-
-		off = obj_to_index(s, slab, p[i]);
-		obj_cgroup_get(objcg);
-		slab_obj_exts(slab)[off].objcg = objcg;
-		mod_objcg_state(objcg, slab_pgdat(slab),
-				cache_vmstat_idx(s), obj_full_size(s));
-	}
-
-	return true;
-}
 
 static void memcg_alloc_abort_single(struct kmem_cache *s, void *object);
 
@@ -2187,27 +2109,6 @@ bool memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 	return false;
 }
 
-static void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
-				   void **p, int objects,
-				   struct slabobj_ext *obj_exts)
-{
-	for (int i = 0; i < objects; i++) {
-		struct obj_cgroup *objcg;
-		unsigned int off;
-
-		off = obj_to_index(s, slab, p[i]);
-		objcg = obj_exts[off].objcg;
-		if (!objcg)
-			continue;
-
-		obj_exts[off].objcg = NULL;
-		obj_cgroup_uncharge(objcg, obj_full_size(s));
-		mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s),
-				-obj_full_size(s));
-		obj_cgroup_put(objcg);
-	}
-}
-
 static __fastpath_inline
 void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p,
 			  int objects)