From patchwork Wed Aug 31 03:19:45 2022
X-Patchwork-Submitter: Qi Zheng <zhengqi.arch@bytedance.com>
X-Patchwork-Id: 12960292
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, shy828301@gmail.com,
	willy@infradead.org, vbabka@suse.cz, hannes@cmpxchg.org,
	minchan@kernel.org, rppt@kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH v2 1/7] mm: introduce common struct mm_slot
Date: Wed, 31 Aug 2022 11:19:45 +0800
Message-Id: <20220831031951.43152-2-zhengqi.arch@bytedance.com>
In-Reply-To: <20220831031951.43152-1-zhengqi.arch@bytedance.com>
References: <20220831031951.43152-1-zhengqi.arch@bytedance.com>

At present, both the THP and KSM modules have a similar mm_slot structure for
organizing and recording the information required for scanning an mm, and each
defines the following identical helper functions:

 - alloc_mm_slot
 - free_mm_slot
 - get_mm_slot
 - insert_to_mm_slots_hash

To deduplicate this code, this patch introduces a common struct mm_slot;
subsequent patches will convert THP and KSM to use it.
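For illustration only (not part of this patch): a minimal sketch of how a
scanner might embed the common struct mm_slot and use the helpers added below.
The "foo" names, the hash table size and the kmem_cache setup are made up for
the example; it assumes mm_slot.h plus <linux/hashtable.h> and <linux/slab.h>.

	/* Hypothetical user of the common mm_slot API (illustration only). */
	struct foo_mm_slot {
		struct mm_slot slot;	/* embedded, recovered via mm_slot_entry() */
		int nr_scanned;		/* example per-mm private state */
	};

	static struct kmem_cache *foo_slot_cache;	/* assumed created at init time */
	static DEFINE_HASHTABLE(foo_slots_hash, 10);

	static void foo_track_mm(struct mm_struct *mm)
	{
		struct foo_mm_slot *foo_slot = mm_slot_alloc(foo_slot_cache);
		struct mm_slot *slot;

		if (!foo_slot)
			return;
		slot = &foo_slot->slot;
		/* sets slot->mm = mm and hashes the slot by mm */
		mm_slot_insert(foo_slots_hash, mm, slot);
	}

	static struct foo_mm_slot *foo_find_mm(struct mm_struct *mm)
	{
		struct mm_slot *slot = mm_slot_lookup(foo_slots_hash, mm);

		/* container_of() back to the embedding structure */
		return slot ? mm_slot_entry(slot, struct foo_mm_slot, slot) : NULL;
	}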
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 mm/mm_slot.h | 55 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)
 create mode 100644 mm/mm_slot.h

diff --git a/mm/mm_slot.h b/mm/mm_slot.h
new file mode 100644
index 000000000000..83f18ed1c4bd
--- /dev/null
+++ b/mm/mm_slot.h
@@ -0,0 +1,55 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#ifndef _LINUX_MM_SLOT_H
+#define _LINUX_MM_SLOT_H
+
+#include <linux/hashtable.h>
+#include <linux/slab.h>
+
+/*
+ * struct mm_slot - hash lookup from mm to mm_slot
+ * @hash: link to the mm_slots hash list
+ * @mm_node: link into the mm_slots list
+ * @mm: the mm that this information is valid for
+ */
+struct mm_slot {
+	struct hlist_node hash;
+	struct list_head mm_node;
+	struct mm_struct *mm;
+};
+
+#define mm_slot_entry(ptr, type, member) \
+	container_of(ptr, type, member)
+
+static inline void *mm_slot_alloc(struct kmem_cache *cache)
+{
+	if (!cache)	/* initialization failed */
+		return NULL;
+	return kmem_cache_zalloc(cache, GFP_KERNEL);
+}
+
+static inline void mm_slot_free(struct kmem_cache *cache, void *objp)
+{
+	kmem_cache_free(cache, objp);
+}
+
+#define mm_slot_lookup(_hashtable, _mm)					\
+({									\
+	struct mm_slot *tmp_slot, *mm_slot = NULL;			\
+									\
+	hash_for_each_possible(_hashtable, tmp_slot, hash, (unsigned long)_mm)	\
+		if (_mm == tmp_slot->mm) {				\
+			mm_slot = tmp_slot;				\
+			break;						\
+		}							\
+									\
+	mm_slot;							\
+})
+
+#define mm_slot_insert(_hashtable, _mm, _mm_slot)			\
+({									\
+	_mm_slot->mm = _mm;						\
+	hash_add(_hashtable, &_mm_slot->hash, (unsigned long)_mm);	\
+})
+
+#endif /* _LINUX_MM_SLOT_H */

From patchwork Wed Aug 31 03:19:46 2022
X-Patchwork-Submitter: Qi Zheng <zhengqi.arch@bytedance.com>
X-Patchwork-Id: 12960293
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, shy828301@gmail.com, willy@infradead.org,
	vbabka@suse.cz, hannes@cmpxchg.org, minchan@kernel.org, rppt@kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH v2 2/7] mm: thp: convert to use common struct mm_slot
Date: Wed, 31 Aug 2022 11:19:46 +0800
Message-Id: <20220831031951.43152-3-zhengqi.arch@bytedance.com>
In-Reply-To: <20220831031951.43152-1-zhengqi.arch@bytedance.com>
References: <20220831031951.43152-1-zhengqi.arch@bytedance.com>
Authentication-Results: imf31.hostedemail.com; spf=pass
	(imf31.hostedemail.com: domain of zhengqi.arch@bytedance.com
	designates 209.85.216.42 as permitted sender)
smtp.mailfrom=zhengqi.arch@bytedance.com; dmarc=pass (policy=none) header.from=bytedance.com X-Stat-Signature: 4w35uwaisb5ax49e48x3fs6w47oa53ck X-Rspamd-Queue-Id: 9B1FC20051 X-Rspam-User: X-Rspamd-Server: rspam01 X-HE-Tag: 1661916019-192811 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Rename private struct mm_slot to struct khugepaged_mm_slot and convert to use common struct mm_slot with no functional change. Signed-off-by: Qi Zheng --- mm/khugepaged.c | 121 ++++++++++++++++++++---------------------------- 1 file changed, 51 insertions(+), 70 deletions(-) diff --git a/mm/khugepaged.c b/mm/khugepaged.c index d8e388106322..2d74cf01f694 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -23,6 +23,7 @@ #include #include #include "internal.h" +#include "mm_slot.h" enum scan_result { SCAN_FAIL, @@ -104,17 +105,13 @@ struct collapse_control { }; /** - * struct mm_slot - hash lookup from mm to mm_slot - * @hash: hash collision list - * @mm_node: khugepaged scan list headed in khugepaged_scan.mm_head - * @mm: the mm that this information is valid for + * struct khugepaged_mm_slot - khugepaged information per mm that is being scanned + * @slot: hash lookup from mm to mm_slot * @nr_pte_mapped_thp: number of pte mapped THP * @pte_mapped_thp: address array corresponding pte mapped THP */ -struct mm_slot { - struct hlist_node hash; - struct list_head mm_node; - struct mm_struct *mm; +struct khugepaged_mm_slot { + struct mm_slot slot; /* pte-mapped THP in this mm */ int nr_pte_mapped_thp; @@ -131,7 +128,7 @@ struct mm_slot { */ struct khugepaged_scan { struct list_head mm_head; - struct mm_slot *mm_slot; + struct khugepaged_mm_slot *mm_slot; unsigned long address; }; @@ -395,8 +392,9 @@ int hugepage_madvise(struct vm_area_struct *vma, int __init khugepaged_init(void) { mm_slot_cache = kmem_cache_create("khugepaged_mm_slot", - sizeof(struct mm_slot), - __alignof__(struct mm_slot), 0, NULL); + sizeof(struct khugepaged_mm_slot), + __alignof__(struct khugepaged_mm_slot), + 0, NULL); if (!mm_slot_cache) return -ENOMEM; @@ -413,36 +411,6 @@ void __init khugepaged_destroy(void) kmem_cache_destroy(mm_slot_cache); } -static inline struct mm_slot *alloc_mm_slot(void) -{ - if (!mm_slot_cache) /* initialization failed */ - return NULL; - return kmem_cache_zalloc(mm_slot_cache, GFP_KERNEL); -} - -static inline void free_mm_slot(struct mm_slot *mm_slot) -{ - kmem_cache_free(mm_slot_cache, mm_slot); -} - -static struct mm_slot *get_mm_slot(struct mm_struct *mm) -{ - struct mm_slot *mm_slot; - - hash_for_each_possible(mm_slots_hash, mm_slot, hash, (unsigned long)mm) - if (mm == mm_slot->mm) - return mm_slot; - - return NULL; -} - -static void insert_to_mm_slots_hash(struct mm_struct *mm, - struct mm_slot *mm_slot) -{ - mm_slot->mm = mm; - hash_add(mm_slots_hash, &mm_slot->hash, (long)mm); -} - static inline int hpage_collapse_test_exit(struct mm_struct *mm) { return atomic_read(&mm->mm_users) == 0; @@ -450,28 +418,31 @@ static inline int hpage_collapse_test_exit(struct mm_struct *mm) void __khugepaged_enter(struct mm_struct *mm) { - struct mm_slot *mm_slot; + struct khugepaged_mm_slot *mm_slot; + struct mm_slot *slot; int wakeup; - mm_slot = alloc_mm_slot(); + mm_slot = mm_slot_alloc(mm_slot_cache); if (!mm_slot) return; + slot = &mm_slot->slot; + /* __khugepaged_exit() must not run from under us */ VM_BUG_ON_MM(hpage_collapse_test_exit(mm), mm); if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, 
&mm->flags))) { - free_mm_slot(mm_slot); + mm_slot_free(mm_slot_cache, mm_slot); return; } spin_lock(&khugepaged_mm_lock); - insert_to_mm_slots_hash(mm, mm_slot); + mm_slot_insert(mm_slots_hash, mm, slot); /* * Insert just behind the scanning cursor, to let the area settle * down a little. */ wakeup = list_empty(&khugepaged_scan.mm_head); - list_add_tail(&mm_slot->mm_node, &khugepaged_scan.mm_head); + list_add_tail(&slot->mm_node, &khugepaged_scan.mm_head); spin_unlock(&khugepaged_mm_lock); mmgrab(mm); @@ -491,21 +462,23 @@ void khugepaged_enter_vma(struct vm_area_struct *vma, void __khugepaged_exit(struct mm_struct *mm) { - struct mm_slot *mm_slot; + struct khugepaged_mm_slot *mm_slot; + struct mm_slot *slot; int free = 0; spin_lock(&khugepaged_mm_lock); - mm_slot = get_mm_slot(mm); + slot = mm_slot_lookup(mm_slots_hash, mm); + mm_slot = mm_slot_entry(slot, struct khugepaged_mm_slot, slot); if (mm_slot && khugepaged_scan.mm_slot != mm_slot) { - hash_del(&mm_slot->hash); - list_del(&mm_slot->mm_node); + hash_del(&slot->hash); + list_del(&slot->mm_node); free = 1; } spin_unlock(&khugepaged_mm_lock); if (free) { clear_bit(MMF_VM_HUGEPAGE, &mm->flags); - free_mm_slot(mm_slot); + mm_slot_free(mm_slot_cache, mm_slot); mmdrop(mm); } else if (mm_slot) { /* @@ -1321,16 +1294,17 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm, return result; } -static void collect_mm_slot(struct mm_slot *mm_slot) +static void collect_mm_slot(struct khugepaged_mm_slot *mm_slot) { - struct mm_struct *mm = mm_slot->mm; + struct mm_slot *slot = &mm_slot->slot; + struct mm_struct *mm = slot->mm; lockdep_assert_held(&khugepaged_mm_lock); if (hpage_collapse_test_exit(mm)) { /* free mm_slot */ - hash_del(&mm_slot->hash); - list_del(&mm_slot->mm_node); + hash_del(&slot->hash); + list_del(&slot->mm_node); /* * Not strictly needed because the mm exited already. 
@@ -1339,7 +1313,7 @@ static void collect_mm_slot(struct mm_slot *mm_slot) */ /* khugepaged_mm_lock actually not necessary for the below */ - free_mm_slot(mm_slot); + mm_slot_free(mm_slot_cache, mm_slot); mmdrop(mm); } } @@ -1352,12 +1326,14 @@ static void collect_mm_slot(struct mm_slot *mm_slot) static void khugepaged_add_pte_mapped_thp(struct mm_struct *mm, unsigned long addr) { - struct mm_slot *mm_slot; + struct khugepaged_mm_slot *mm_slot; + struct mm_slot *slot; VM_BUG_ON(addr & ~HPAGE_PMD_MASK); spin_lock(&khugepaged_mm_lock); - mm_slot = get_mm_slot(mm); + slot = mm_slot_lookup(mm_slots_hash, mm); + mm_slot = mm_slot_entry(slot, struct khugepaged_mm_slot, slot); if (likely(mm_slot && mm_slot->nr_pte_mapped_thp < MAX_PTE_MAPPED_THP)) mm_slot->pte_mapped_thp[mm_slot->nr_pte_mapped_thp++] = addr; spin_unlock(&khugepaged_mm_lock); @@ -1489,9 +1465,10 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr) goto drop_hpage; } -static void khugepaged_collapse_pte_mapped_thps(struct mm_slot *mm_slot) +static void khugepaged_collapse_pte_mapped_thps(struct khugepaged_mm_slot *mm_slot) { - struct mm_struct *mm = mm_slot->mm; + struct mm_slot *slot = &mm_slot->slot; + struct mm_struct *mm = slot->mm; int i; if (likely(mm_slot->nr_pte_mapped_thp == 0)) @@ -2054,7 +2031,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result, __acquires(&khugepaged_mm_lock) { struct vma_iterator vmi; - struct mm_slot *mm_slot; + struct khugepaged_mm_slot *mm_slot; + struct mm_slot *slot; struct mm_struct *mm; struct vm_area_struct *vma; int progress = 0; @@ -2064,18 +2042,20 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result, lockdep_assert_held(&khugepaged_mm_lock); *result = SCAN_FAIL; - if (khugepaged_scan.mm_slot) + if (khugepaged_scan.mm_slot) { mm_slot = khugepaged_scan.mm_slot; - else { - mm_slot = list_entry(khugepaged_scan.mm_head.next, + slot = &mm_slot->slot; + } else { + slot = list_entry(khugepaged_scan.mm_head.next, struct mm_slot, mm_node); + mm_slot = mm_slot_entry(slot, struct khugepaged_mm_slot, slot); khugepaged_scan.address = 0; khugepaged_scan.mm_slot = mm_slot; } spin_unlock(&khugepaged_mm_lock); khugepaged_collapse_pte_mapped_thps(mm_slot); - mm = mm_slot->mm; + mm = slot->mm; /* * Don't wait for semaphore (to avoid long wait times). Just move to * the next mm on the list. @@ -2171,10 +2151,11 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result, * khugepaged runs here, khugepaged_exit will find * mm_slot not pointing to the exiting mm. 
 	 */
-	if (mm_slot->mm_node.next != &khugepaged_scan.mm_head) {
-		khugepaged_scan.mm_slot = list_entry(
-			mm_slot->mm_node.next,
-			struct mm_slot, mm_node);
+	if (slot->mm_node.next != &khugepaged_scan.mm_head) {
+		slot = list_entry(slot->mm_node.next,
+				  struct mm_slot, mm_node);
+		khugepaged_scan.mm_slot =
+			mm_slot_entry(slot, struct khugepaged_mm_slot, slot);
 		khugepaged_scan.address = 0;
 	} else {
 		khugepaged_scan.mm_slot = NULL;
@@ -2269,7 +2250,7 @@ static void khugepaged_wait_work(void)
 
 static int khugepaged(void *none)
 {
-	struct mm_slot *mm_slot;
+	struct khugepaged_mm_slot *mm_slot;
 
 	set_freezable();
 	set_user_nice(current, MAX_NICE);

From patchwork Wed Aug 31 03:19:47 2022
X-Patchwork-Submitter: Qi Zheng <zhengqi.arch@bytedance.com>
X-Patchwork-Id: 12960294
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, shy828301@gmail.com, willy@infradead.org,
	vbabka@suse.cz, hannes@cmpxchg.org, minchan@kernel.org, rppt@kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH v2 3/7] ksm: remove redundant declarations in ksm.h
Date: Wed, 31 Aug 2022 11:19:47 +0800
Message-Id: <20220831031951.43152-4-zhengqi.arch@bytedance.com>
In-Reply-To: <20220831031951.43152-1-zhengqi.arch@bytedance.com>
References: <20220831031951.43152-1-zhengqi.arch@bytedance.com>

Currently, struct stable_node is not used in include/linux/ksm.h itself, nor do
the files that include ksm.h rely on this forward declaration. The same is true
of struct mem_cgroup, which is also not used in ksm.h. Both declarations are
redundant, so just remove them.
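As a small illustration (not from the patch): a forward declaration in a header
only earns its keep when the header itself refers to the type, for example in a
prototype that takes a pointer. The identifiers below are hypothetical; ksm.h
has no such remaining use for either structure, so the declarations are dead.

	struct stable_node;				/* needed only if the header also said e.g. ... */
	void example_walk(struct stable_node *node);	/* ...this; otherwise the declaration is unused */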
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 include/linux/ksm.h | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 0b4f17418f64..7e232ba59b86 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -15,9 +15,6 @@
 #include <linux/sched.h>
 #include <linux/sched/coredump.h>
 
-struct stable_node;
-struct mem_cgroup;
-
 #ifdef CONFIG_KSM
 int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 		unsigned long end, int advice, unsigned long *vm_flags);

From patchwork Wed Aug 31 03:19:48 2022
X-Patchwork-Submitter: Qi Zheng <zhengqi.arch@bytedance.com>
X-Patchwork-Id: 12960295
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, shy828301@gmail.com, willy@infradead.org,
	vbabka@suse.cz, hannes@cmpxchg.org, minchan@kernel.org, rppt@kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH v2 4/7] ksm: add the ksm prefix to the names of the ksm private structures
Date: Wed, 31 Aug 2022 11:19:48 +0800
Message-Id: <20220831031951.43152-5-zhengqi.arch@bytedance.com>
In-Reply-To: <20220831031951.43152-1-zhengqi.arch@bytedance.com>
References: <20220831031951.43152-1-zhengqi.arch@bytedance.com>

To prevent the names of KSM's private structures from colliding with the name
of the common structure introduced in this series, prefix them with ksm_ in
advance.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 Documentation/mm/ksm.rst |   2 +-
 mm/ksm.c                 | 216 +++++++++++++++++++--------------------
 2 files changed, 109 insertions(+), 109 deletions(-)

diff --git a/Documentation/mm/ksm.rst b/Documentation/mm/ksm.rst
index 9e37add068e6..f83cfbc12f4c 100644
--- a/Documentation/mm/ksm.rst
+++ b/Documentation/mm/ksm.rst
@@ -26,7 +26,7 @@ tree.
If a KSM page is shared between less than ``max_page_sharing`` VMAs, the node of the stable tree that represents such KSM page points to a -list of struct rmap_item and the ``page->mapping`` of the +list of struct ksm_rmap_item and the ``page->mapping`` of the KSM page points to the stable tree node. When the sharing passes this threshold, KSM adds a second dimension to diff --git a/mm/ksm.c b/mm/ksm.c index e34cc21d5556..3937111f9ab8 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -82,7 +82,7 @@ * different KSM page copy of that content * * Internally, the regular nodes, "dups" and "chains" are represented - * using the same struct stable_node structure. + * using the same struct ksm_stable_node structure. * * In addition to the stable tree, KSM uses a second data structure called the * unstable tree: this tree holds pointers to pages which have been found to @@ -112,16 +112,16 @@ */ /** - * struct mm_slot - ksm information per mm that is being scanned + * struct ksm_mm_slot - ksm information per mm that is being scanned * @link: link to the mm_slots hash list * @mm_list: link into the mm_slots list, rooted in ksm_mm_head * @rmap_list: head for this mm_slot's singly-linked list of rmap_items * @mm: the mm that this information is valid for */ -struct mm_slot { +struct ksm_mm_slot { struct hlist_node link; struct list_head mm_list; - struct rmap_item *rmap_list; + struct ksm_rmap_item *rmap_list; struct mm_struct *mm; }; @@ -135,14 +135,14 @@ struct mm_slot { * There is only the one ksm_scan instance of this cursor structure. */ struct ksm_scan { - struct mm_slot *mm_slot; + struct ksm_mm_slot *mm_slot; unsigned long address; - struct rmap_item **rmap_list; + struct ksm_rmap_item **rmap_list; unsigned long seqnr; }; /** - * struct stable_node - node of the stable rbtree + * struct ksm_stable_node - node of the stable rbtree * @node: rb node of this ksm page in the stable tree * @head: (overlaying parent) &migrate_nodes indicates temporarily on that list * @hlist_dup: linked into the stable_node->hlist with a stable_node chain @@ -153,7 +153,7 @@ struct ksm_scan { * @rmap_hlist_len: number of rmap_item entries in hlist or STABLE_NODE_CHAIN * @nid: NUMA node id of stable tree in which linked (may not match kpfn) */ -struct stable_node { +struct ksm_stable_node { union { struct rb_node node; /* when node of stable tree */ struct { /* when listed for migration */ @@ -182,7 +182,7 @@ struct stable_node { }; /** - * struct rmap_item - reverse mapping item for virtual addresses + * struct ksm_rmap_item - reverse mapping item for virtual addresses * @rmap_list: next rmap_item in mm_slot's singly-linked rmap_list * @anon_vma: pointer to anon_vma for this mm,address, when in stable tree * @nid: NUMA node id of unstable tree in which linked (may not match page) @@ -193,8 +193,8 @@ struct stable_node { * @head: pointer to stable_node heading this list in the stable tree * @hlist: link into hlist of rmap_items hanging off that stable_node */ -struct rmap_item { - struct rmap_item *rmap_list; +struct ksm_rmap_item { + struct ksm_rmap_item *rmap_list; union { struct anon_vma *anon_vma; /* when stable */ #ifdef CONFIG_NUMA @@ -207,7 +207,7 @@ struct rmap_item { union { struct rb_node node; /* when node of unstable tree */ struct { /* when listed from stable tree */ - struct stable_node *head; + struct ksm_stable_node *head; struct hlist_node hlist; }; }; @@ -230,7 +230,7 @@ static LIST_HEAD(migrate_nodes); #define MM_SLOTS_HASH_BITS 10 static DEFINE_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS); -static struct 
mm_slot ksm_mm_head = { +static struct ksm_mm_slot ksm_mm_head = { .mm_list = LIST_HEAD_INIT(ksm_mm_head.mm_list), }; static struct ksm_scan ksm_scan = { @@ -298,21 +298,21 @@ static DECLARE_WAIT_QUEUE_HEAD(ksm_iter_wait); static DEFINE_MUTEX(ksm_thread_mutex); static DEFINE_SPINLOCK(ksm_mmlist_lock); -#define KSM_KMEM_CACHE(__struct, __flags) kmem_cache_create("ksm_"#__struct,\ +#define KSM_KMEM_CACHE(__struct, __flags) kmem_cache_create(#__struct,\ sizeof(struct __struct), __alignof__(struct __struct),\ (__flags), NULL) static int __init ksm_slab_init(void) { - rmap_item_cache = KSM_KMEM_CACHE(rmap_item, 0); + rmap_item_cache = KSM_KMEM_CACHE(ksm_rmap_item, 0); if (!rmap_item_cache) goto out; - stable_node_cache = KSM_KMEM_CACHE(stable_node, 0); + stable_node_cache = KSM_KMEM_CACHE(ksm_stable_node, 0); if (!stable_node_cache) goto out_free1; - mm_slot_cache = KSM_KMEM_CACHE(mm_slot, 0); + mm_slot_cache = KSM_KMEM_CACHE(ksm_mm_slot, 0); if (!mm_slot_cache) goto out_free2; @@ -334,18 +334,18 @@ static void __init ksm_slab_free(void) mm_slot_cache = NULL; } -static __always_inline bool is_stable_node_chain(struct stable_node *chain) +static __always_inline bool is_stable_node_chain(struct ksm_stable_node *chain) { return chain->rmap_hlist_len == STABLE_NODE_CHAIN; } -static __always_inline bool is_stable_node_dup(struct stable_node *dup) +static __always_inline bool is_stable_node_dup(struct ksm_stable_node *dup) { return dup->head == STABLE_NODE_DUP_HEAD; } -static inline void stable_node_chain_add_dup(struct stable_node *dup, - struct stable_node *chain) +static inline void stable_node_chain_add_dup(struct ksm_stable_node *dup, + struct ksm_stable_node *chain) { VM_BUG_ON(is_stable_node_dup(dup)); dup->head = STABLE_NODE_DUP_HEAD; @@ -354,14 +354,14 @@ static inline void stable_node_chain_add_dup(struct stable_node *dup, ksm_stable_node_dups++; } -static inline void __stable_node_dup_del(struct stable_node *dup) +static inline void __stable_node_dup_del(struct ksm_stable_node *dup) { VM_BUG_ON(!is_stable_node_dup(dup)); hlist_del(&dup->hlist_dup); ksm_stable_node_dups--; } -static inline void stable_node_dup_del(struct stable_node *dup) +static inline void stable_node_dup_del(struct ksm_stable_node *dup) { VM_BUG_ON(is_stable_node_chain(dup)); if (is_stable_node_dup(dup)) @@ -373,9 +373,9 @@ static inline void stable_node_dup_del(struct stable_node *dup) #endif } -static inline struct rmap_item *alloc_rmap_item(void) +static inline struct ksm_rmap_item *alloc_rmap_item(void) { - struct rmap_item *rmap_item; + struct ksm_rmap_item *rmap_item; rmap_item = kmem_cache_zalloc(rmap_item_cache, GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN); @@ -384,14 +384,14 @@ static inline struct rmap_item *alloc_rmap_item(void) return rmap_item; } -static inline void free_rmap_item(struct rmap_item *rmap_item) +static inline void free_rmap_item(struct ksm_rmap_item *rmap_item) { ksm_rmap_items--; rmap_item->mm = NULL; /* debug safety */ kmem_cache_free(rmap_item_cache, rmap_item); } -static inline struct stable_node *alloc_stable_node(void) +static inline struct ksm_stable_node *alloc_stable_node(void) { /* * The allocation can take too long with GFP_KERNEL when memory is under @@ -401,28 +401,28 @@ static inline struct stable_node *alloc_stable_node(void) return kmem_cache_alloc(stable_node_cache, GFP_KERNEL | __GFP_HIGH); } -static inline void free_stable_node(struct stable_node *stable_node) +static inline void free_stable_node(struct ksm_stable_node *stable_node) { VM_BUG_ON(stable_node->rmap_hlist_len && 
!is_stable_node_chain(stable_node)); kmem_cache_free(stable_node_cache, stable_node); } -static inline struct mm_slot *alloc_mm_slot(void) +static inline struct ksm_mm_slot *alloc_mm_slot(void) { if (!mm_slot_cache) /* initialization failed */ return NULL; return kmem_cache_zalloc(mm_slot_cache, GFP_KERNEL); } -static inline void free_mm_slot(struct mm_slot *mm_slot) +static inline void free_mm_slot(struct ksm_mm_slot *mm_slot) { kmem_cache_free(mm_slot_cache, mm_slot); } -static struct mm_slot *get_mm_slot(struct mm_struct *mm) +static struct ksm_mm_slot *get_mm_slot(struct mm_struct *mm) { - struct mm_slot *slot; + struct ksm_mm_slot *slot; hash_for_each_possible(mm_slots_hash, slot, link, (unsigned long)mm) if (slot->mm == mm) @@ -432,7 +432,7 @@ static struct mm_slot *get_mm_slot(struct mm_struct *mm) } static void insert_to_mm_slots_hash(struct mm_struct *mm, - struct mm_slot *mm_slot) + struct ksm_mm_slot *mm_slot) { mm_slot->mm = mm; hash_add(mm_slots_hash, &mm_slot->link, (unsigned long)mm); @@ -528,7 +528,7 @@ static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm, return vma; } -static void break_cow(struct rmap_item *rmap_item) +static void break_cow(struct ksm_rmap_item *rmap_item) { struct mm_struct *mm = rmap_item->mm; unsigned long addr = rmap_item->address; @@ -547,7 +547,7 @@ static void break_cow(struct rmap_item *rmap_item) mmap_read_unlock(mm); } -static struct page *get_mergeable_page(struct rmap_item *rmap_item) +static struct page *get_mergeable_page(struct ksm_rmap_item *rmap_item) { struct mm_struct *mm = rmap_item->mm; unsigned long addr = rmap_item->address; @@ -588,10 +588,10 @@ static inline int get_kpfn_nid(unsigned long kpfn) return ksm_merge_across_nodes ? 0 : NUMA(pfn_to_nid(kpfn)); } -static struct stable_node *alloc_stable_node_chain(struct stable_node *dup, +static struct ksm_stable_node *alloc_stable_node_chain(struct ksm_stable_node *dup, struct rb_root *root) { - struct stable_node *chain = alloc_stable_node(); + struct ksm_stable_node *chain = alloc_stable_node(); VM_BUG_ON(is_stable_node_chain(dup)); if (likely(chain)) { INIT_HLIST_HEAD(&chain->hlist); @@ -621,7 +621,7 @@ static struct stable_node *alloc_stable_node_chain(struct stable_node *dup, return chain; } -static inline void free_stable_node_chain(struct stable_node *chain, +static inline void free_stable_node_chain(struct ksm_stable_node *chain, struct rb_root *root) { rb_erase(&chain->node, root); @@ -629,9 +629,9 @@ static inline void free_stable_node_chain(struct stable_node *chain, ksm_stable_node_chains--; } -static void remove_node_from_stable_tree(struct stable_node *stable_node) +static void remove_node_from_stable_tree(struct ksm_stable_node *stable_node) { - struct rmap_item *rmap_item; + struct ksm_rmap_item *rmap_item; /* check it's not STABLE_NODE_CHAIN or negative */ BUG_ON(stable_node->rmap_hlist_len < 0); @@ -693,7 +693,7 @@ enum get_ksm_page_flags { * a page to put something that might look like our key in page->mapping. * is on its way to being freed; but it is an anomaly to bear in mind. */ -static struct page *get_ksm_page(struct stable_node *stable_node, +static struct page *get_ksm_page(struct ksm_stable_node *stable_node, enum get_ksm_page_flags flags) { struct page *page; @@ -772,10 +772,10 @@ static struct page *get_ksm_page(struct stable_node *stable_node, * Removing rmap_item from stable or unstable tree. * This function will clean the information from the stable/unstable tree. 
*/ -static void remove_rmap_item_from_tree(struct rmap_item *rmap_item) +static void remove_rmap_item_from_tree(struct ksm_rmap_item *rmap_item) { if (rmap_item->address & STABLE_FLAG) { - struct stable_node *stable_node; + struct ksm_stable_node *stable_node; struct page *page; stable_node = rmap_item->head; @@ -822,10 +822,10 @@ static void remove_rmap_item_from_tree(struct rmap_item *rmap_item) cond_resched(); /* we're called from many long loops */ } -static void remove_trailing_rmap_items(struct rmap_item **rmap_list) +static void remove_trailing_rmap_items(struct ksm_rmap_item **rmap_list) { while (*rmap_list) { - struct rmap_item *rmap_item = *rmap_list; + struct ksm_rmap_item *rmap_item = *rmap_list; *rmap_list = rmap_item->rmap_list; remove_rmap_item_from_tree(rmap_item); free_rmap_item(rmap_item); @@ -862,18 +862,18 @@ static int unmerge_ksm_pages(struct vm_area_struct *vma, return err; } -static inline struct stable_node *folio_stable_node(struct folio *folio) +static inline struct ksm_stable_node *folio_stable_node(struct folio *folio) { return folio_test_ksm(folio) ? folio_raw_mapping(folio) : NULL; } -static inline struct stable_node *page_stable_node(struct page *page) +static inline struct ksm_stable_node *page_stable_node(struct page *page) { return folio_stable_node(page_folio(page)); } static inline void set_page_stable_node(struct page *page, - struct stable_node *stable_node) + struct ksm_stable_node *stable_node) { VM_BUG_ON_PAGE(PageAnon(page) && PageAnonExclusive(page), page); page->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM); @@ -883,7 +883,7 @@ static inline void set_page_stable_node(struct page *page, /* * Only called through the sysfs control interface: */ -static int remove_stable_node(struct stable_node *stable_node) +static int remove_stable_node(struct ksm_stable_node *stable_node) { struct page *page; int err; @@ -921,10 +921,10 @@ static int remove_stable_node(struct stable_node *stable_node) return err; } -static int remove_stable_node_chain(struct stable_node *stable_node, +static int remove_stable_node_chain(struct ksm_stable_node *stable_node, struct rb_root *root) { - struct stable_node *dup; + struct ksm_stable_node *dup; struct hlist_node *hlist_safe; if (!is_stable_node_chain(stable_node)) { @@ -948,14 +948,14 @@ static int remove_stable_node_chain(struct stable_node *stable_node, static int remove_all_stable_nodes(void) { - struct stable_node *stable_node, *next; + struct ksm_stable_node *stable_node, *next; int nid; int err = 0; for (nid = 0; nid < ksm_nr_node_ids; nid++) { while (root_stable_tree[nid].rb_node) { stable_node = rb_entry(root_stable_tree[nid].rb_node, - struct stable_node, node); + struct ksm_stable_node, node); if (remove_stable_node_chain(stable_node, root_stable_tree + nid)) { err = -EBUSY; @@ -974,14 +974,14 @@ static int remove_all_stable_nodes(void) static int unmerge_and_remove_all_rmap_items(void) { - struct mm_slot *mm_slot; + struct ksm_mm_slot *mm_slot; struct mm_struct *mm; struct vm_area_struct *vma; int err = 0; spin_lock(&ksm_mmlist_lock); ksm_scan.mm_slot = list_entry(ksm_mm_head.mm_list.next, - struct mm_slot, mm_list); + struct ksm_mm_slot, mm_list); spin_unlock(&ksm_mmlist_lock); for (mm_slot = ksm_scan.mm_slot; mm_slot != &ksm_mm_head; @@ -1006,7 +1006,7 @@ static int unmerge_and_remove_all_rmap_items(void) spin_lock(&ksm_mmlist_lock); ksm_scan.mm_slot = list_entry(mm_slot->mm_list.next, - struct mm_slot, mm_list); + struct ksm_mm_slot, mm_list); if (ksm_test_exit(mm)) { 
hash_del(&mm_slot->link); list_del(&mm_slot->mm_list); @@ -1293,7 +1293,7 @@ static int try_to_merge_one_page(struct vm_area_struct *vma, * * This function returns 0 if the pages were merged, -EFAULT otherwise. */ -static int try_to_merge_with_ksm_page(struct rmap_item *rmap_item, +static int try_to_merge_with_ksm_page(struct ksm_rmap_item *rmap_item, struct page *page, struct page *kpage) { struct mm_struct *mm = rmap_item->mm; @@ -1330,9 +1330,9 @@ static int try_to_merge_with_ksm_page(struct rmap_item *rmap_item, * Note that this function upgrades page to ksm page: if one of the pages * is already a ksm page, try_to_merge_with_ksm_page should be used. */ -static struct page *try_to_merge_two_pages(struct rmap_item *rmap_item, +static struct page *try_to_merge_two_pages(struct ksm_rmap_item *rmap_item, struct page *page, - struct rmap_item *tree_rmap_item, + struct ksm_rmap_item *tree_rmap_item, struct page *tree_page) { int err; @@ -1352,7 +1352,7 @@ static struct page *try_to_merge_two_pages(struct rmap_item *rmap_item, } static __always_inline -bool __is_page_sharing_candidate(struct stable_node *stable_node, int offset) +bool __is_page_sharing_candidate(struct ksm_stable_node *stable_node, int offset) { VM_BUG_ON(stable_node->rmap_hlist_len < 0); /* @@ -1366,17 +1366,17 @@ bool __is_page_sharing_candidate(struct stable_node *stable_node, int offset) } static __always_inline -bool is_page_sharing_candidate(struct stable_node *stable_node) +bool is_page_sharing_candidate(struct ksm_stable_node *stable_node) { return __is_page_sharing_candidate(stable_node, 0); } -static struct page *stable_node_dup(struct stable_node **_stable_node_dup, - struct stable_node **_stable_node, +static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup, + struct ksm_stable_node **_stable_node, struct rb_root *root, bool prune_stale_stable_nodes) { - struct stable_node *dup, *found = NULL, *stable_node = *_stable_node; + struct ksm_stable_node *dup, *found = NULL, *stable_node = *_stable_node; struct hlist_node *hlist_safe; struct page *_tree_page, *tree_page = NULL; int nr = 0; @@ -1490,7 +1490,7 @@ static struct page *stable_node_dup(struct stable_node **_stable_node_dup, return tree_page; } -static struct stable_node *stable_node_dup_any(struct stable_node *stable_node, +static struct ksm_stable_node *stable_node_dup_any(struct ksm_stable_node *stable_node, struct rb_root *root) { if (!is_stable_node_chain(stable_node)) @@ -1517,12 +1517,12 @@ static struct stable_node *stable_node_dup_any(struct stable_node *stable_node, * function and will be overwritten in all cases, the caller doesn't * need to initialize it. 
*/ -static struct page *__stable_node_chain(struct stable_node **_stable_node_dup, - struct stable_node **_stable_node, +static struct page *__stable_node_chain(struct ksm_stable_node **_stable_node_dup, + struct ksm_stable_node **_stable_node, struct rb_root *root, bool prune_stale_stable_nodes) { - struct stable_node *stable_node = *_stable_node; + struct ksm_stable_node *stable_node = *_stable_node; if (!is_stable_node_chain(stable_node)) { if (is_page_sharing_candidate(stable_node)) { *_stable_node_dup = stable_node; @@ -1539,18 +1539,18 @@ static struct page *__stable_node_chain(struct stable_node **_stable_node_dup, prune_stale_stable_nodes); } -static __always_inline struct page *chain_prune(struct stable_node **s_n_d, - struct stable_node **s_n, +static __always_inline struct page *chain_prune(struct ksm_stable_node **s_n_d, + struct ksm_stable_node **s_n, struct rb_root *root) { return __stable_node_chain(s_n_d, s_n, root, true); } -static __always_inline struct page *chain(struct stable_node **s_n_d, - struct stable_node *s_n, +static __always_inline struct page *chain(struct ksm_stable_node **s_n_d, + struct ksm_stable_node *s_n, struct rb_root *root) { - struct stable_node *old_stable_node = s_n; + struct ksm_stable_node *old_stable_node = s_n; struct page *tree_page; tree_page = __stable_node_chain(s_n_d, &s_n, root, false); @@ -1574,8 +1574,8 @@ static struct page *stable_tree_search(struct page *page) struct rb_root *root; struct rb_node **new; struct rb_node *parent; - struct stable_node *stable_node, *stable_node_dup, *stable_node_any; - struct stable_node *page_node; + struct ksm_stable_node *stable_node, *stable_node_dup, *stable_node_any; + struct ksm_stable_node *page_node; page_node = page_stable_node(page); if (page_node && page_node->head != &migrate_nodes) { @@ -1595,7 +1595,7 @@ static struct page *stable_tree_search(struct page *page) int ret; cond_resched(); - stable_node = rb_entry(*new, struct stable_node, node); + stable_node = rb_entry(*new, struct ksm_stable_node, node); stable_node_any = NULL; tree_page = chain_prune(&stable_node_dup, &stable_node, root); /* @@ -1818,14 +1818,14 @@ static struct page *stable_tree_search(struct page *page) * This function returns the stable tree node just allocated on success, * NULL otherwise. 
 */
-static struct stable_node *stable_tree_insert(struct page *kpage)
+static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
 {
 	int nid;
 	unsigned long kpfn;
 	struct rb_root *root;
 	struct rb_node **new;
 	struct rb_node *parent;
-	struct stable_node *stable_node, *stable_node_dup, *stable_node_any;
+	struct ksm_stable_node *stable_node, *stable_node_dup, *stable_node_any;
 	bool need_chain = false;
 
 	kpfn = page_to_pfn(kpage);
@@ -1840,7 +1840,7 @@ static struct stable_node *stable_tree_insert(struct page *kpage)
 		int ret;
 
 		cond_resched();
-		stable_node = rb_entry(*new, struct stable_node, node);
+		stable_node = rb_entry(*new, struct ksm_stable_node, node);
 		stable_node_any = NULL;
 		tree_page = chain(&stable_node_dup, stable_node, root);
 		if (!stable_node_dup) {
@@ -1909,7 +1909,7 @@ static struct stable_node *stable_tree_insert(struct page *kpage)
 		rb_insert_color(&stable_node_dup->node, root);
 	} else {
 		if (!is_stable_node_chain(stable_node)) {
-			struct stable_node *orig = stable_node;
+			struct ksm_stable_node *orig = stable_node;
 			/* chain is missing so create it */
 			stable_node = alloc_stable_node_chain(orig, root);
 			if (!stable_node) {
@@ -1938,7 +1938,7 @@ static struct stable_node *stable_tree_insert(struct page *kpage)
  * the same walking algorithm in an rbtree.
  */
 static
-struct rmap_item *unstable_tree_search_insert(struct rmap_item *rmap_item,
+struct ksm_rmap_item *unstable_tree_search_insert(struct ksm_rmap_item *rmap_item,
 					      struct page *page,
 					      struct page **tree_pagep)
 {
@@ -1952,12 +1952,12 @@ struct rmap_item *unstable_tree_search_insert(struct rmap_item *rmap_item,
 	new = &root->rb_node;
 
 	while (*new) {
-		struct rmap_item *tree_rmap_item;
+		struct ksm_rmap_item *tree_rmap_item;
 		struct page *tree_page;
 		int ret;
 
 		cond_resched();
-		tree_rmap_item = rb_entry(*new, struct rmap_item, node);
+		tree_rmap_item = rb_entry(*new, struct ksm_rmap_item, node);
 		tree_page = get_mergeable_page(tree_rmap_item);
 		if (!tree_page)
 			return NULL;
@@ -2009,8 +2009,8 @@ struct rmap_item *unstable_tree_search_insert(struct rmap_item *rmap_item,
  * rmap_items hanging off a given node of the stable tree, all sharing
  * the same ksm page.
  */
-static void stable_tree_append(struct rmap_item *rmap_item,
-			       struct stable_node *stable_node,
+static void stable_tree_append(struct ksm_rmap_item *rmap_item,
+			       struct ksm_stable_node *stable_node,
 			       bool max_page_sharing_bypass)
 {
 	/*
@@ -2052,12 +2052,12 @@ static void stable_tree_append(struct rmap_item *rmap_item,
  * @page: the page that we are searching identical page to.
  * @rmap_item: the reverse mapping into the virtual address of this page
 */
-static void cmp_and_merge_page(struct page *page, struct rmap_item *rmap_item)
+static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_item)
 {
 	struct mm_struct *mm = rmap_item->mm;
-	struct rmap_item *tree_rmap_item;
+	struct ksm_rmap_item *tree_rmap_item;
 	struct page *tree_page = NULL;
-	struct stable_node *stable_node;
+	struct ksm_stable_node *stable_node;
 	struct page *kpage;
 	unsigned int checksum;
 	int err;
@@ -2213,11 +2213,11 @@ static void cmp_and_merge_page(struct page *page, struct rmap_item *rmap_item)
 	}
 }
 
-static struct rmap_item *get_next_rmap_item(struct mm_slot *mm_slot,
-					    struct rmap_item **rmap_list,
+static struct ksm_rmap_item *get_next_rmap_item(struct ksm_mm_slot *mm_slot,
+					    struct ksm_rmap_item **rmap_list,
 					    unsigned long addr)
 {
-	struct rmap_item *rmap_item;
+	struct ksm_rmap_item *rmap_item;
 
 	while (*rmap_list) {
 		rmap_item = *rmap_list;
@@ -2241,12 +2241,12 @@ static struct rmap_item *get_next_rmap_item(struct mm_slot *mm_slot,
 	return rmap_item;
 }
 
-static struct rmap_item *scan_get_next_rmap_item(struct page **page)
+static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 {
 	struct mm_struct *mm;
-	struct mm_slot *slot;
+	struct ksm_mm_slot *slot;
 	struct vm_area_struct *vma;
-	struct rmap_item *rmap_item;
+	struct ksm_rmap_item *rmap_item;
 	struct vma_iterator vmi;
 	int nid;
 
@@ -2274,7 +2274,7 @@ static struct rmap_item *scan_get_next_rmap_item(struct page **page)
 		 * so prune them once before each full scan.
 		 */
 		if (!ksm_merge_across_nodes) {
-			struct stable_node *stable_node, *next;
+			struct ksm_stable_node *stable_node, *next;
 			struct page *page;
 
 			list_for_each_entry_safe(stable_node, next,
@@ -2291,7 +2291,7 @@ static struct rmap_item *scan_get_next_rmap_item(struct page **page)
 			root_unstable_tree[nid] = RB_ROOT;
 
 		spin_lock(&ksm_mmlist_lock);
-		slot = list_entry(slot->mm_list.next, struct mm_slot, mm_list);
+		slot = list_entry(slot->mm_list.next, struct ksm_mm_slot, mm_list);
 		ksm_scan.mm_slot = slot;
 		spin_unlock(&ksm_mmlist_lock);
 		/*
@@ -2365,7 +2365,7 @@ static struct rmap_item *scan_get_next_rmap_item(struct page **page)
 	spin_lock(&ksm_mmlist_lock);
 	ksm_scan.mm_slot = list_entry(slot->mm_list.next,
-						struct mm_slot, mm_list);
+						struct ksm_mm_slot, mm_list);
 	if (ksm_scan.address == 0) {
 		/*
 		 * We've completed a full scan of all vmas, holding mmap_lock
@@ -2411,7 +2411,7 @@ static struct rmap_item *scan_get_next_rmap_item(struct page **page)
  */
 static void ksm_do_scan(unsigned int scan_npages)
 {
-	struct rmap_item *rmap_item;
+	struct ksm_rmap_item *rmap_item;
 	struct page *page;
 
 	while (scan_npages-- && likely(!freezing(current))) {
@@ -2515,7 +2515,7 @@ EXPORT_SYMBOL_GPL(ksm_madvise);
 
 int __ksm_enter(struct mm_struct *mm)
 {
-	struct mm_slot *mm_slot;
+	struct ksm_mm_slot *mm_slot;
 	int needs_wakeup;
 
 	mm_slot = alloc_mm_slot();
@@ -2554,7 +2554,7 @@ int __ksm_enter(struct mm_struct *mm)
 
 void __ksm_exit(struct mm_struct *mm)
 {
-	struct mm_slot *mm_slot;
+	struct ksm_mm_slot *mm_slot;
 	int easy_to_free = 0;
 
 	/*
@@ -2632,8 +2632,8 @@ struct page *ksm_might_need_to_copy(struct page *page,
 
 void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
 {
-	struct stable_node *stable_node;
-	struct rmap_item *rmap_item;
+	struct ksm_stable_node *stable_node;
+	struct ksm_rmap_item *rmap_item;
 	int search_new_forks = 0;
 
 	VM_BUG_ON_FOLIO(!folio_test_ksm(folio), folio);
@@ -2703,7 +2703,7 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
 
 #ifdef CONFIG_MIGRATION
 void folio_migrate_ksm(struct folio *newfolio, struct folio *folio)
 {
-	struct stable_node *stable_node;
+	struct ksm_stable_node *stable_node;
 
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_BUG_ON_FOLIO(!folio_test_locked(newfolio), newfolio);
@@ -2736,7 +2736,7 @@ static void wait_while_offlining(void)
 	}
 }
 
-static bool stable_node_dup_remove_range(struct stable_node *stable_node,
+static bool stable_node_dup_remove_range(struct ksm_stable_node *stable_node,
 					 unsigned long start_pfn,
 					 unsigned long end_pfn)
 {
@@ -2752,12 +2752,12 @@ static bool stable_node_dup_remove_range(struct stable_node *stable_node,
 	return false;
 }
 
-static bool stable_node_chain_remove_range(struct stable_node *stable_node,
+static bool stable_node_chain_remove_range(struct ksm_stable_node *stable_node,
 					   unsigned long start_pfn,
 					   unsigned long end_pfn,
 					   struct rb_root *root)
 {
-	struct stable_node *dup;
+	struct ksm_stable_node *dup;
 	struct hlist_node *hlist_safe;
 
 	if (!is_stable_node_chain(stable_node)) {
@@ -2781,14 +2781,14 @@ static bool stable_node_chain_remove_range(struct stable_node *stable_node,
 static void ksm_check_stable_tree(unsigned long start_pfn,
 				  unsigned long end_pfn)
 {
-	struct stable_node *stable_node, *next;
+	struct ksm_stable_node *stable_node, *next;
 	struct rb_node *node;
 	int nid;
 
 	for (nid = 0; nid < ksm_nr_node_ids; nid++) {
 		node = rb_first(root_stable_tree + nid);
 		while (node) {
-			stable_node = rb_entry(node, struct stable_node, node);
+			stable_node = rb_entry(node, struct ksm_stable_node, node);
 			if (stable_node_chain_remove_range(stable_node,
 							   start_pfn, end_pfn,
 							   root_stable_tree +
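[Annotation: the hunks above have to touch every tree and list lookup because rb_entry() and list_entry() are container_of()-style macros that take the containing type by name. A minimal illustration of the pattern being updated; rb_entry() really is defined this way in <linux/rbtree.h>, and the before/after lines are taken from the hunks above:]

	/* #define rb_entry(ptr, type, member) container_of(ptr, type, member) */

	/* before the rename */
	stable_node = rb_entry(*new, struct stable_node, node);
	/* after the rename */
	stable_node = rb_entry(*new, struct ksm_stable_node, node);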
From patchwork Wed Aug 31 03:19:49 2022
X-Patchwork-Submitter: Qi Zheng <zhengqi.arch@bytedance.com>
X-Patchwork-Id: 12960296
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, shy828301@gmail.com, willy@infradead.org,
 vbabka@suse.cz, hannes@cmpxchg.org, minchan@kernel.org, rppt@kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH v2 5/7] ksm: convert ksm_mm_slot.mm_list to ksm_mm_slot.mm_node
Date: Wed, 31 Aug 2022 11:19:49 +0800
Message-Id: <20220831031951.43152-6-zhengqi.arch@bytedance.com>
In-Reply-To: <20220831031951.43152-1-zhengqi.arch@bytedance.com>
References: <20220831031951.43152-1-zhengqi.arch@bytedance.com>

In order to use common struct mm_slot, convert ksm_mm_slot.mm_list to
ksm_mm_slot.mm_node in advance, no functional change.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 mm/ksm.c | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 3937111f9ab8..8c52aa7e0a02 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -114,13 +114,13 @@
 /**
  * struct ksm_mm_slot - ksm information per mm that is being scanned
  * @link: link to the mm_slots hash list
- * @mm_list: link into the mm_slots list, rooted in ksm_mm_head
+ * @mm_node: link into the mm_slots list, rooted in ksm_mm_head
  * @rmap_list: head for this mm_slot's singly-linked list of rmap_items
  * @mm: the mm that this information is valid for
  */
 struct ksm_mm_slot {
 	struct hlist_node link;
-	struct list_head mm_list;
+	struct list_head mm_node;
 	struct ksm_rmap_item *rmap_list;
 	struct mm_struct *mm;
 };
@@ -231,7 +231,7 @@ static LIST_HEAD(migrate_nodes);
 static DEFINE_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
 
 static struct ksm_mm_slot ksm_mm_head = {
-	.mm_list = LIST_HEAD_INIT(ksm_mm_head.mm_list),
+	.mm_node = LIST_HEAD_INIT(ksm_mm_head.mm_node),
 };
 static struct ksm_scan ksm_scan = {
 	.mm_slot = &ksm_mm_head,
@@ -980,8 +980,8 @@ static int unmerge_and_remove_all_rmap_items(void)
 	int err = 0;
 
 	spin_lock(&ksm_mmlist_lock);
-	ksm_scan.mm_slot = list_entry(ksm_mm_head.mm_list.next,
-						struct ksm_mm_slot, mm_list);
+	ksm_scan.mm_slot = list_entry(ksm_mm_head.mm_node.next,
+						struct ksm_mm_slot, mm_node);
 	spin_unlock(&ksm_mmlist_lock);
 
 	for (mm_slot = ksm_scan.mm_slot; mm_slot != &ksm_mm_head;
@@ -1005,11 +1005,11 @@ static int unmerge_and_remove_all_rmap_items(void)
 		mmap_read_unlock(mm);
 
 		spin_lock(&ksm_mmlist_lock);
-		ksm_scan.mm_slot = list_entry(mm_slot->mm_list.next,
-					struct ksm_mm_slot, mm_list);
+		ksm_scan.mm_slot = list_entry(mm_slot->mm_node.next,
+					struct ksm_mm_slot, mm_node);
 		if (ksm_test_exit(mm)) {
 			hash_del(&mm_slot->link);
-			list_del(&mm_slot->mm_list);
+			list_del(&mm_slot->mm_node);
 			spin_unlock(&ksm_mmlist_lock);
 
 			free_mm_slot(mm_slot);
@@ -2250,7 +2250,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 	struct vma_iterator vmi;
 	int nid;
 
-	if (list_empty(&ksm_mm_head.mm_list))
+	if (list_empty(&ksm_mm_head.mm_node))
 		return NULL;
 
 	slot = ksm_scan.mm_slot;
@@ -2291,7 +2291,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 			root_unstable_tree[nid] = RB_ROOT;
 
 		spin_lock(&ksm_mmlist_lock);
-		slot = list_entry(slot->mm_list.next, struct ksm_mm_slot, mm_list);
+		slot = list_entry(slot->mm_node.next, struct ksm_mm_slot, mm_node);
 		ksm_scan.mm_slot = slot;
 		spin_unlock(&ksm_mmlist_lock);
 		/*
@@ -2364,8 +2364,8 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 	remove_trailing_rmap_items(ksm_scan.rmap_list);
 
 	spin_lock(&ksm_mmlist_lock);
-	ksm_scan.mm_slot = list_entry(slot->mm_list.next,
-						struct ksm_mm_slot, mm_list);
+	ksm_scan.mm_slot = list_entry(slot->mm_node.next,
+						struct ksm_mm_slot, mm_node);
 	if (ksm_scan.address == 0) {
 		/*
 		 * We've completed a full scan of all vmas, holding mmap_lock
@@ -2377,7 +2377,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 		 * mmap_lock then protects against race with MADV_MERGEABLE).
 		 */
 		hash_del(&slot->link);
-		list_del(&slot->mm_list);
+		list_del(&slot->mm_node);
 		spin_unlock(&ksm_mmlist_lock);
 
 		free_mm_slot(slot);
@@ -2426,7 +2426,7 @@ static void ksm_do_scan(unsigned int scan_npages)
 
 static int ksmd_should_run(void)
 {
-	return (ksm_run & KSM_RUN_MERGE) && !list_empty(&ksm_mm_head.mm_list);
+	return (ksm_run & KSM_RUN_MERGE) && !list_empty(&ksm_mm_head.mm_node);
 }
 
 static int ksm_scan_thread(void *nothing)
@@ -2523,7 +2523,7 @@ int __ksm_enter(struct mm_struct *mm)
 		return -ENOMEM;
 
 	/* Check ksm_run too? Would need tighter locking */
-	needs_wakeup = list_empty(&ksm_mm_head.mm_list);
+	needs_wakeup = list_empty(&ksm_mm_head.mm_node);
 
 	spin_lock(&ksm_mmlist_lock);
 	insert_to_mm_slots_hash(mm, mm_slot);
@@ -2538,9 +2538,9 @@ int __ksm_enter(struct mm_struct *mm)
 	 * missed: then we might as well insert at the end of the list.
 	 */
 	if (ksm_run & KSM_RUN_UNMERGE)
-		list_add_tail(&mm_slot->mm_list, &ksm_mm_head.mm_list);
+		list_add_tail(&mm_slot->mm_node, &ksm_mm_head.mm_node);
 	else
-		list_add_tail(&mm_slot->mm_list, &ksm_scan.mm_slot->mm_list);
+		list_add_tail(&mm_slot->mm_node, &ksm_scan.mm_slot->mm_node);
 	spin_unlock(&ksm_mmlist_lock);
 
 	set_bit(MMF_VM_MERGEABLE, &mm->flags);
@@ -2571,11 +2571,11 @@ void __ksm_exit(struct mm_struct *mm)
 	if (mm_slot && ksm_scan.mm_slot != mm_slot) {
 		if (!mm_slot->rmap_list) {
 			hash_del(&mm_slot->link);
-			list_del(&mm_slot->mm_list);
+			list_del(&mm_slot->mm_node);
 			easy_to_free = 1;
 		} else {
-			list_move(&mm_slot->mm_list,
-				  &ksm_scan.mm_slot->mm_list);
+			list_move(&mm_slot->mm_node,
+				  &ksm_scan.mm_slot->mm_node);
 		}
 	}
 	spin_unlock(&ksm_mmlist_lock);

From patchwork Wed Aug 31 03:19:50 2022
X-Patchwork-Submitter: Qi Zheng <zhengqi.arch@bytedance.com>
X-Patchwork-Id: 12960297
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, shy828301@gmail.com, willy@infradead.org,
 vbabka@suse.cz, hannes@cmpxchg.org, minchan@kernel.org, rppt@kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH v2 6/7] ksm: convert ksm_mm_slot.link to ksm_mm_slot.hash
Date: Wed, 31 Aug 2022 11:19:50 +0800
Message-Id: <20220831031951.43152-7-zhengqi.arch@bytedance.com>
In-Reply-To: <20220831031951.43152-1-zhengqi.arch@bytedance.com>
References: <20220831031951.43152-1-zhengqi.arch@bytedance.com>

In order to use common struct mm_slot, convert ksm_mm_slot.link to
ksm_mm_slot.hash in advance, no functional change.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 mm/ksm.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 8c52aa7e0a02..667efca75b0d 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -113,13 +113,13 @@
 /**
  * struct ksm_mm_slot - ksm information per mm that is being scanned
- * @link: link to the mm_slots hash list
+ * @hash: link to the mm_slots hash list
  * @mm_node: link into the mm_slots list, rooted in ksm_mm_head
  * @rmap_list: head for this mm_slot's singly-linked list of rmap_items
  * @mm: the mm that this information is valid for
  */
 struct ksm_mm_slot {
-	struct hlist_node link;
+	struct hlist_node hash;
 	struct list_head mm_node;
 	struct ksm_rmap_item *rmap_list;
 	struct mm_struct *mm;
@@ -424,7 +424,7 @@ static struct ksm_mm_slot *get_mm_slot(struct mm_struct *mm)
 {
 	struct ksm_mm_slot *slot;
 
-	hash_for_each_possible(mm_slots_hash, slot, link, (unsigned long)mm)
+	hash_for_each_possible(mm_slots_hash, slot, hash, (unsigned long)mm)
 		if (slot->mm == mm)
 			return slot;
 
@@ -435,7 +435,7 @@ static void insert_to_mm_slots_hash(struct mm_struct *mm,
 				    struct ksm_mm_slot *mm_slot)
 {
 	mm_slot->mm = mm;
-	hash_add(mm_slots_hash, &mm_slot->link, (unsigned long)mm);
+	hash_add(mm_slots_hash, &mm_slot->hash, (unsigned long)mm);
 }
 
 /*
@@ -1008,7 +1008,7 @@ static int unmerge_and_remove_all_rmap_items(void)
 		ksm_scan.mm_slot = list_entry(mm_slot->mm_node.next,
 					struct ksm_mm_slot, mm_node);
 		if (ksm_test_exit(mm)) {
-			hash_del(&mm_slot->link);
+			hash_del(&mm_slot->hash);
 			list_del(&mm_slot->mm_node);
 			spin_unlock(&ksm_mmlist_lock);
 
@@ -2376,7 +2376,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 		 * or when all VM_MERGEABLE areas have been unmapped (and
 		 * mmap_lock then protects against race with MADV_MERGEABLE).
 		 */
-		hash_del(&slot->link);
+		hash_del(&slot->hash);
 		list_del(&slot->mm_node);
 		spin_unlock(&ksm_mmlist_lock);
 
@@ -2570,7 +2570,7 @@ void __ksm_exit(struct mm_struct *mm)
 	mm_slot = get_mm_slot(mm);
 	if (mm_slot && ksm_scan.mm_slot != mm_slot) {
 		if (!mm_slot->rmap_list) {
-			hash_del(&mm_slot->link);
+			hash_del(&mm_slot->hash);
 			list_del(&mm_slot->mm_node);
 			easy_to_free = 1;
 		} else {

From patchwork Wed Aug 31 03:19:51 2022
X-Patchwork-Submitter: Qi Zheng <zhengqi.arch@bytedance.com>
X-Patchwork-Id: 12960298
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, shy828301@gmail.com, willy@infradead.org,
 vbabka@suse.cz, hannes@cmpxchg.org, minchan@kernel.org, rppt@kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH v2 7/7] ksm: convert to use common struct mm_slot
Date: Wed, 31 Aug 2022 11:19:51 +0800
Message-Id: <20220831031951.43152-8-zhengqi.arch@bytedance.com>
In-Reply-To: <20220831031951.43152-1-zhengqi.arch@bytedance.com>
References: <20220831031951.43152-1-zhengqi.arch@bytedance.com>

Convert to use common struct mm_slot, no functional change.
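[Annotation: the mm_slot_*() helpers and the embedded struct mm_slot used throughout the diff below come from the common mm_slot header introduced at the start of this series, which is not quoted in this excerpt. What follows is only a minimal sketch of what such a header could provide, reconstructed from the call sites visible in this patch (mm_slot_entry(), mm_slot_alloc(), mm_slot_free(), mm_slot_lookup(), mm_slot_insert()); the series' actual mm/mm_slot.h may differ in detail.]

/* Sketch only: inferred from the call sites below, not the authoritative header. */
#ifndef _LINUX_MM_SLOT_H
#define _LINUX_MM_SLOT_H

#include <linux/container_of.h>
#include <linux/hashtable.h>
#include <linux/list.h>
#include <linux/slab.h>

/*
 * struct mm_slot - hash lookup from mm to mm_slot
 * @hash: link to the mm_slots hash list
 * @mm_node: link into the mm_slots list
 * @mm: the mm that this information is valid for
 */
struct mm_slot {
	struct hlist_node hash;
	struct list_head mm_node;
	struct mm_struct *mm;
};

/* Convert an embedded struct mm_slot back to its wrapper (e.g. struct ksm_mm_slot). */
#define mm_slot_entry(ptr, type, member) \
	container_of(ptr, type, member)

/* Allocate a zeroed wrapper object from the caller's dedicated slab cache. */
#define mm_slot_alloc(cache)	((void *)kmem_cache_zalloc(cache, GFP_KERNEL))

/* Return a wrapper object to the cache it was allocated from. */
#define mm_slot_free(cache, mm_slot)	kmem_cache_free(cache, mm_slot)

/* Find the mm_slot recorded for @_mm in @_hashtable, or NULL if absent. */
#define mm_slot_lookup(_hashtable, _mm)					\
({									\
	struct mm_slot *tmp_slot, *mm_slot = NULL;			\
									\
	hash_for_each_possible(_hashtable, tmp_slot, hash, (unsigned long)_mm) \
		if (_mm == tmp_slot->mm) {				\
			mm_slot = tmp_slot;				\
			break;						\
		}							\
	mm_slot;							\
})

/* Record @_mm in @_mm_slot and add the slot to @_hashtable. */
#define mm_slot_insert(_hashtable, _mm, _mm_slot)			\
({									\
	(_mm_slot)->mm = (_mm);						\
	hash_add(_hashtable, &(_mm_slot)->hash, (unsigned long)(_mm));	\
})

#endif /* _LINUX_MM_SLOT_H */

[Note that the diff below keeps struct mm_slot as the first member of struct ksm_mm_slot, so mm_slot_entry() applied to a NULL lookup result still evaluates to NULL, which appears to be why the __ksm_exit() hunk can test mm_slot directly after the lookup.]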
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 mm/ksm.c | 132 +++++++++++++++++++++++--------------------------------
 1 file changed, 56 insertions(+), 76 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 667efca75b0d..d8c4819d81eb 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -42,6 +42,7 @@
 #include
 
 #include "internal.h"
+#include "mm_slot.h"
 
 #ifdef CONFIG_NUMA
 #define NUMA(x)		(x)
@@ -113,16 +114,12 @@
 /**
  * struct ksm_mm_slot - ksm information per mm that is being scanned
- * @hash: link to the mm_slots hash list
- * @mm_node: link into the mm_slots list, rooted in ksm_mm_head
+ * @slot: hash lookup from mm to mm_slot
  * @rmap_list: head for this mm_slot's singly-linked list of rmap_items
- * @mm: the mm that this information is valid for
  */
 struct ksm_mm_slot {
-	struct hlist_node hash;
-	struct list_head mm_node;
+	struct mm_slot slot;
 	struct ksm_rmap_item *rmap_list;
-	struct mm_struct *mm;
 };
 
 /**
@@ -231,7 +228,7 @@ static LIST_HEAD(migrate_nodes);
 static DEFINE_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
 
 static struct ksm_mm_slot ksm_mm_head = {
-	.mm_node = LIST_HEAD_INIT(ksm_mm_head.mm_node),
+	.slot.mm_node = LIST_HEAD_INIT(ksm_mm_head.slot.mm_node),
 };
 static struct ksm_scan ksm_scan = {
 	.mm_slot = &ksm_mm_head,
@@ -408,36 +405,6 @@ static inline void free_stable_node(struct ksm_stable_node *stable_node)
 	kmem_cache_free(stable_node_cache, stable_node);
 }
 
-static inline struct ksm_mm_slot *alloc_mm_slot(void)
-{
-	if (!mm_slot_cache)	/* initialization failed */
-		return NULL;
-	return kmem_cache_zalloc(mm_slot_cache, GFP_KERNEL);
-}
-
-static inline void free_mm_slot(struct ksm_mm_slot *mm_slot)
-{
-	kmem_cache_free(mm_slot_cache, mm_slot);
-}
-
-static struct ksm_mm_slot *get_mm_slot(struct mm_struct *mm)
-{
-	struct ksm_mm_slot *slot;
-
-	hash_for_each_possible(mm_slots_hash, slot, hash, (unsigned long)mm)
-		if (slot->mm == mm)
-			return slot;
-
-	return NULL;
-}
-
-static void insert_to_mm_slots_hash(struct mm_struct *mm,
-				    struct ksm_mm_slot *mm_slot)
-{
-	mm_slot->mm = mm;
-	hash_add(mm_slots_hash, &mm_slot->hash, (unsigned long)mm);
-}
-
 /*
  * ksmd, and unmerge_and_remove_all_rmap_items(), must not touch an mm's
  * page tables after it has passed through ksm_exit() - which, if necessary,
@@ -975,20 +942,22 @@ static int remove_all_stable_nodes(void)
 static int unmerge_and_remove_all_rmap_items(void)
 {
 	struct ksm_mm_slot *mm_slot;
+	struct mm_slot *slot;
 	struct mm_struct *mm;
 	struct vm_area_struct *vma;
 	int err = 0;
 
 	spin_lock(&ksm_mmlist_lock);
-	ksm_scan.mm_slot = list_entry(ksm_mm_head.mm_node.next,
-						struct ksm_mm_slot, mm_node);
+	slot = list_entry(ksm_mm_head.slot.mm_node.next,
+			  struct mm_slot, mm_node);
+	ksm_scan.mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
 	spin_unlock(&ksm_mmlist_lock);
 
 	for (mm_slot = ksm_scan.mm_slot; mm_slot != &ksm_mm_head;
 	     mm_slot = ksm_scan.mm_slot) {
-		VMA_ITERATOR(vmi, mm_slot->mm, 0);
+		VMA_ITERATOR(vmi, mm_slot->slot.mm, 0);
 
-		mm = mm_slot->mm;
+		mm = mm_slot->slot.mm;
 		mmap_read_lock(mm);
 		for_each_vma(vmi, vma) {
 			if (ksm_test_exit(mm))
@@ -1005,14 +974,15 @@ static int unmerge_and_remove_all_rmap_items(void)
 		mmap_read_unlock(mm);
 
 		spin_lock(&ksm_mmlist_lock);
-		ksm_scan.mm_slot = list_entry(mm_slot->mm_node.next,
-					struct ksm_mm_slot, mm_node);
+		slot = list_entry(mm_slot->slot.mm_node.next,
+				  struct mm_slot, mm_node);
+		ksm_scan.mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
 		if (ksm_test_exit(mm)) {
-			hash_del(&mm_slot->hash);
-			list_del(&mm_slot->mm_node);
+			hash_del(&mm_slot->slot.hash);
+			list_del(&mm_slot->slot.mm_node);
 			spin_unlock(&ksm_mmlist_lock);
 
-			free_mm_slot(mm_slot);
+			mm_slot_free(mm_slot_cache, mm_slot);
 			clear_bit(MMF_VM_MERGEABLE, &mm->flags);
 			mmdrop(mm);
 		} else
@@ -2233,7 +2203,7 @@ static struct ksm_rmap_item *get_next_rmap_item(struct ksm_mm_slot *mm_slot,
 	rmap_item = alloc_rmap_item();
 	if (rmap_item) {
 		/* It has already been zeroed */
-		rmap_item->mm = mm_slot->mm;
+		rmap_item->mm = mm_slot->slot.mm;
 		rmap_item->address = addr;
 		rmap_item->rmap_list = *rmap_list;
 		*rmap_list = rmap_item;
@@ -2244,17 +2214,18 @@ static struct ksm_rmap_item *get_next_rmap_item(struct ksm_mm_slot *mm_slot,
 static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 {
 	struct mm_struct *mm;
-	struct ksm_mm_slot *slot;
+	struct ksm_mm_slot *mm_slot;
+	struct mm_slot *slot;
 	struct vm_area_struct *vma;
 	struct ksm_rmap_item *rmap_item;
 	struct vma_iterator vmi;
 	int nid;
 
-	if (list_empty(&ksm_mm_head.mm_node))
+	if (list_empty(&ksm_mm_head.slot.mm_node))
 		return NULL;
 
-	slot = ksm_scan.mm_slot;
-	if (slot == &ksm_mm_head) {
+	mm_slot = ksm_scan.mm_slot;
+	if (mm_slot == &ksm_mm_head) {
 		/*
 		 * A number of pages can hang around indefinitely on per-cpu
 		 * pagevecs, raised page count preventing write_protect_page
@@ -2291,20 +2262,23 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 			root_unstable_tree[nid] = RB_ROOT;
 
 		spin_lock(&ksm_mmlist_lock);
-		slot = list_entry(slot->mm_node.next, struct ksm_mm_slot, mm_node);
-		ksm_scan.mm_slot = slot;
+		slot = list_entry(mm_slot->slot.mm_node.next,
+				  struct mm_slot, mm_node);
+		mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
+		ksm_scan.mm_slot = mm_slot;
 		spin_unlock(&ksm_mmlist_lock);
 		/*
 		 * Although we tested list_empty() above, a racing __ksm_exit
 		 * of the last mm on the list may have removed it since then.
 		 */
-		if (slot == &ksm_mm_head)
+		if (mm_slot == &ksm_mm_head)
 			return NULL;
 next_mm:
 		ksm_scan.address = 0;
-		ksm_scan.rmap_list = &slot->rmap_list;
+		ksm_scan.rmap_list = &mm_slot->rmap_list;
 	}
 
+	slot = &mm_slot->slot;
 	mm = slot->mm;
 	vma_iter_init(&vmi, mm, ksm_scan.address);
 
@@ -2334,7 +2308,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 		if (PageAnon(*page)) {
 			flush_anon_page(vma, *page, ksm_scan.address);
 			flush_dcache_page(*page);
-			rmap_item = get_next_rmap_item(slot,
+			rmap_item = get_next_rmap_item(mm_slot,
 				ksm_scan.rmap_list, ksm_scan.address);
 			if (rmap_item) {
 				ksm_scan.rmap_list =
@@ -2355,7 +2329,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 	if (ksm_test_exit(mm)) {
 no_vmas:
 		ksm_scan.address = 0;
-		ksm_scan.rmap_list = &slot->rmap_list;
+		ksm_scan.rmap_list = &mm_slot->rmap_list;
 	}
 	/*
 	 * Nuke all the rmap_items that are above this current rmap:
@@ -2364,8 +2338,9 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 	remove_trailing_rmap_items(ksm_scan.rmap_list);
 
 	spin_lock(&ksm_mmlist_lock);
-	ksm_scan.mm_slot = list_entry(slot->mm_node.next,
-						struct ksm_mm_slot, mm_node);
+	slot = list_entry(mm_slot->slot.mm_node.next,
+			  struct mm_slot, mm_node);
+	ksm_scan.mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
 	if (ksm_scan.address == 0) {
 		/*
 		 * We've completed a full scan of all vmas, holding mmap_lock
@@ -2376,11 +2351,11 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 		 * or when all VM_MERGEABLE areas have been unmapped (and
 		 * mmap_lock then protects against race with MADV_MERGEABLE).
 		 */
-		hash_del(&slot->hash);
-		list_del(&slot->mm_node);
+		hash_del(&mm_slot->slot.hash);
+		list_del(&mm_slot->slot.mm_node);
 		spin_unlock(&ksm_mmlist_lock);
 
-		free_mm_slot(slot);
+		mm_slot_free(mm_slot_cache, mm_slot);
 		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
 		mmap_read_unlock(mm);
 		mmdrop(mm);
@@ -2397,8 +2372,8 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 	}
 
 	/* Repeat until we've completed scanning the whole list */
-	slot = ksm_scan.mm_slot;
-	if (slot != &ksm_mm_head)
+	mm_slot = ksm_scan.mm_slot;
+	if (mm_slot != &ksm_mm_head)
 		goto next_mm;
 
 	ksm_scan.seqnr++;
@@ -2426,7 +2401,7 @@ static void ksm_do_scan(unsigned int scan_npages)
 
 static int ksmd_should_run(void)
 {
-	return (ksm_run & KSM_RUN_MERGE) && !list_empty(&ksm_mm_head.mm_node);
+	return (ksm_run & KSM_RUN_MERGE) && !list_empty(&ksm_mm_head.slot.mm_node);
 }
 
 static int ksm_scan_thread(void *nothing)
@@ -2516,17 +2491,20 @@ EXPORT_SYMBOL_GPL(ksm_madvise);
 
 int __ksm_enter(struct mm_struct *mm)
 {
 	struct ksm_mm_slot *mm_slot;
+	struct mm_slot *slot;
 	int needs_wakeup;
 
-	mm_slot = alloc_mm_slot();
+	mm_slot = mm_slot_alloc(mm_slot_cache);
 	if (!mm_slot)
 		return -ENOMEM;
 
+	slot = &mm_slot->slot;
+
 	/* Check ksm_run too? Would need tighter locking */
-	needs_wakeup = list_empty(&ksm_mm_head.mm_node);
+	needs_wakeup = list_empty(&ksm_mm_head.slot.mm_node);
 
 	spin_lock(&ksm_mmlist_lock);
-	insert_to_mm_slots_hash(mm, mm_slot);
+	mm_slot_insert(mm_slots_hash, mm, slot);
 	/*
 	 * When KSM_RUN_MERGE (or KSM_RUN_STOP),
 	 * insert just behind the scanning cursor, to let the area settle
@@ -2538,9 +2516,9 @@ int __ksm_enter(struct mm_struct *mm)
 	 * missed: then we might as well insert at the end of the list.
 	 */
 	if (ksm_run & KSM_RUN_UNMERGE)
-		list_add_tail(&mm_slot->mm_node, &ksm_mm_head.mm_node);
+		list_add_tail(&slot->mm_node, &ksm_mm_head.slot.mm_node);
 	else
-		list_add_tail(&mm_slot->mm_node, &ksm_scan.mm_slot->mm_node);
+		list_add_tail(&slot->mm_node, &ksm_scan.mm_slot->slot.mm_node);
 	spin_unlock(&ksm_mmlist_lock);
 
 	set_bit(MMF_VM_MERGEABLE, &mm->flags);
@@ -2555,6 +2533,7 @@ int __ksm_enter(struct mm_struct *mm)
 void __ksm_exit(struct mm_struct *mm)
 {
 	struct ksm_mm_slot *mm_slot;
+	struct mm_slot *slot;
 	int easy_to_free = 0;
 
 	/*
@@ -2567,21 +2546,22 @@ void __ksm_exit(struct mm_struct *mm)
 	 */
 
 	spin_lock(&ksm_mmlist_lock);
-	mm_slot = get_mm_slot(mm);
+	slot = mm_slot_lookup(mm_slots_hash, mm);
+	mm_slot = mm_slot_entry(slot, struct ksm_mm_slot, slot);
 	if (mm_slot && ksm_scan.mm_slot != mm_slot) {
 		if (!mm_slot->rmap_list) {
-			hash_del(&mm_slot->hash);
-			list_del(&mm_slot->mm_node);
+			hash_del(&slot->hash);
+			list_del(&slot->mm_node);
 			easy_to_free = 1;
 		} else {
-			list_move(&mm_slot->mm_node,
-				  &ksm_scan.mm_slot->mm_node);
+			list_move(&slot->mm_node,
+				  &ksm_scan.mm_slot->slot.mm_node);
 		}
 	}
 	spin_unlock(&ksm_mmlist_lock);
 
 	if (easy_to_free) {
-		free_mm_slot(mm_slot);
+		mm_slot_free(mm_slot_cache, mm_slot);
 		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
 		mmdrop(mm);
 	} else if (mm_slot) {