From patchwork Mon Nov 15 18:59:01 2021
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 12620237
From: Minchan Kim
To: Andrew Morton
Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim
Subject: [PATCH v2 1/9] zsmalloc: introduce some helper functions
Date: Mon, 15 Nov 2021 10:59:01 -0800
Message-Id: <20211115185909.3949505-2-minchan@kernel.org>
In-Reply-To: <20211115185909.3949505-1-minchan@kernel.org>
References: <20211115185909.3949505-1-minchan@kernel.org>

get_zspage_mapping returns fullness as well as class_idx. However, the
fullness is usually not used, since it can be stale in some contexts.
Returning it anyway is misleading and generates unnecessary instructions,
so this patch introduces zspage_class, which looks up the size_class
directly. Likewise, obj_to_location produces both the page and the index,
but callers do not always need the index, so this patch introduces
obj_to_page.

Signed-off-by: Minchan Kim
---
 mm/zsmalloc.c | 54 ++++++++++++++++++++++-----------------------------
 1 file changed, 23 insertions(+), 31 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b897ce3b399a..f8c63bacd22e 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -517,6 +517,12 @@ static void get_zspage_mapping(struct zspage *zspage,
 	*class_idx = zspage->class;
 }
 
+static struct size_class *zspage_class(struct zs_pool *pool,
+				       struct zspage *zspage)
+{
+	return pool->size_class[zspage->class];
+}
+
 static void set_zspage_mapping(struct zspage *zspage,
 				unsigned int class_idx,
 				enum fullness_group fullness)
@@ -844,6 +850,12 @@ static void obj_to_location(unsigned long obj, struct page **page,
 	*obj_idx = (obj & OBJ_INDEX_MASK);
 }
 
+static void obj_to_page(unsigned long obj, struct page **page)
+{
+	obj >>= OBJ_TAG_BITS;
+	*page = pfn_to_page(obj >> OBJ_INDEX_BITS);
+}
+
 /**
  * location_to_obj - get obj value encoded from (<page>, <obj_idx>)
  * @page: page object resides in zspage
@@ -1246,8 +1258,6 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	unsigned long obj, off;
 	unsigned int obj_idx;
 
-	unsigned int class_idx;
-	enum fullness_group fg;
 	struct size_class *class;
 	struct mapping_area *area;
 	struct page *pages[2];
@@ -1270,8 +1280,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	/* migration cannot move any subpage in this zspage */
 	migrate_read_lock(zspage);
 
-	get_zspage_mapping(zspage, &class_idx, &fg);
-	class = pool->size_class[class_idx];
+	class = zspage_class(pool, zspage);
 	off = (class->size * obj_idx) & ~PAGE_MASK;
 
 	area = &get_cpu_var(zs_map_area);
@@ -1304,16 +1313,13 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 	unsigned long obj, off;
 	unsigned int obj_idx;
 
-	unsigned int class_idx;
-	enum fullness_group fg;
 	struct size_class *class;
 	struct mapping_area *area;
 
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &page, &obj_idx);
 	zspage = get_zspage(page);
-	get_zspage_mapping(zspage, &class_idx, &fg);
-	class = pool->size_class[class_idx];
+	class = zspage_class(pool, zspage);
 	off = (class->size * obj_idx) & ~PAGE_MASK;
 
 	area = this_cpu_ptr(&zs_map_area);
@@ -1491,8 +1497,6 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	struct zspage *zspage;
 	struct page *f_page;
 	unsigned long obj;
-	unsigned int f_objidx;
-	int class_idx;
 	struct size_class *class;
 	enum fullness_group fullness;
 	bool isolated;
@@ -1502,13 +1506,11 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 
 	pin_tag(handle);
 	obj = handle_to_obj(handle);
-	obj_to_location(obj, &f_page, &f_objidx);
+	obj_to_page(obj, &f_page);
 	zspage = get_zspage(f_page);
 
 	migrate_read_lock(zspage);
-
-	get_zspage_mapping(zspage, &class_idx, &fullness);
-	class = pool->size_class[class_idx];
+	class = zspage_class(pool, zspage);
 
 	spin_lock(&class->lock);
 	obj_free(class, obj);
@@ -1866,8 +1868,6 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 {
 	struct zs_pool *pool;
 	struct size_class *class;
-	int class_idx;
-	enum fullness_group fullness;
 	struct zspage *zspage;
 	struct address_space *mapping;
 
@@ -1880,15 +1880,10 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 
 	zspage = get_zspage(page);
 
-	/*
-	 * Without class lock, fullness could be stale while class_idx is okay
-	 * because class_idx is constant unless page is freed so we should get
-	 * fullness again under class lock.
-	 */
-	get_zspage_mapping(zspage, &class_idx, &fullness);
 	mapping = page_mapping(page);
 	pool = mapping->private_data;
-	class = pool->size_class[class_idx];
+
+	class = zspage_class(pool, zspage);
 
 	spin_lock(&class->lock);
 	if (get_zspage_inuse(zspage) == 0) {
@@ -1907,6 +1902,9 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 	 * size_class to prevent further object allocation from the zspage.
 	 */
 	if (!list_empty(&zspage->list) && !is_zspage_isolated(zspage)) {
+		enum fullness_group fullness;
+		unsigned int class_idx;
+
+		get_zspage_mapping(zspage, &class_idx, &fullness);
 		atomic_long_inc(&pool->isolated_pages);
 		remove_zspage(class, zspage, fullness);
@@ -1923,8 +1921,6 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 {
 	struct zs_pool *pool;
 	struct size_class *class;
-	int class_idx;
-	enum fullness_group fullness;
 	struct zspage *zspage;
 	struct page *dummy;
 	void *s_addr, *d_addr, *addr;
@@ -1949,9 +1945,8 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	/* Concurrent compactor cannot migrate any subpage in zspage */
 	migrate_write_lock(zspage);
 
-	get_zspage_mapping(zspage, &class_idx, &fullness);
 	pool = mapping->private_data;
-	class = pool->size_class[class_idx];
+	class = zspage_class(pool, zspage);
 	offset = get_first_obj_offset(page);
 
 	spin_lock(&class->lock);
@@ -2049,8 +2044,6 @@ static void zs_page_putback(struct page *page)
 {
 	struct zs_pool *pool;
 	struct size_class *class;
-	int class_idx;
-	enum fullness_group fg;
 	struct address_space *mapping;
 	struct zspage *zspage;
 
@@ -2058,10 +2051,9 @@ static void zs_page_putback(struct page *page)
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
 	zspage = get_zspage(page);
-	get_zspage_mapping(zspage, &class_idx, &fg);
 	mapping = page_mapping(page);
 	pool = mapping->private_data;
-	class = pool->size_class[class_idx];
+	class = zspage_class(pool, zspage);
 
 	spin_lock(&class->lock);
 	dec_zspage_isolation(zspage);
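
A minimal userspace sketch of what zspage_class() buys (the struct layouts
here are simplified stand-ins for the kernel definitions, not the real ones):
the class index stored in the zspage is stable for the zspage's lifetime, so
the size_class can be looked up directly, without also fetching a
possibly-stale fullness value.

#include <stdio.h>

struct size_class { int size; };
struct zspage { unsigned int class; };
struct zs_pool { struct size_class *size_class[255]; };

/* Same shape as the helper added in the patch above. */
static struct size_class *zspage_class(struct zs_pool *pool,
				       struct zspage *zspage)
{
	return pool->size_class[zspage->class];
}

int main(void)
{
	struct size_class c32 = { .size = 32 };
	struct zs_pool pool = { .size_class = { [3] = &c32 } };
	struct zspage zspage = { .class = 3 };

	/* One call replaces get_zspage_mapping() + pool->size_class[idx]. */
	printf("class size = %d\n", zspage_class(&pool, &zspage)->size);
	return 0;
}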
From patchwork Mon Nov 15 18:59:02 2021
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 12620225
From: Minchan Kim
To: Andrew Morton
Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim
Subject: [PATCH v2 2/9] zsmalloc: rename zs_stat_type to class_stat_type
Date: Mon, 15 Nov 2021 10:59:02 -0800
Message-Id: <20211115185909.3949505-3-minchan@kernel.org>
In-Reply-To: <20211115185909.3949505-1-minchan@kernel.org>
References: <20211115185909.3949505-1-minchan@kernel.org>

The stat tracks per-class statistics, not per-zspage ones, so rename it
accordingly.
Signed-off-by: Minchan Kim
---
 mm/zsmalloc.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index f8c63bacd22e..c149ccf734ba 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -158,7 +158,7 @@ enum fullness_group {
 	NR_ZS_FULLNESS,
 };
 
-enum zs_stat_type {
+enum class_stat_type {
 	CLASS_EMPTY,
 	CLASS_ALMOST_EMPTY,
 	CLASS_ALMOST_FULL,
@@ -549,21 +549,21 @@ static int get_size_class_index(int size)
 	return min_t(int, ZS_SIZE_CLASSES - 1, idx);
 }
 
-/* type can be of enum type zs_stat_type or fullness_group */
-static inline void zs_stat_inc(struct size_class *class,
+/* type can be of enum type class_stat_type or fullness_group */
+static inline void class_stat_inc(struct size_class *class,
 				int type, unsigned long cnt)
 {
 	class->stats.objs[type] += cnt;
 }
 
-/* type can be of enum type zs_stat_type or fullness_group */
-static inline void zs_stat_dec(struct size_class *class,
+/* type can be of enum type class_stat_type or fullness_group */
+static inline void class_stat_dec(struct size_class *class,
 				int type, unsigned long cnt)
 {
 	class->stats.objs[type] -= cnt;
 }
 
-/* type can be of enum type zs_stat_type or fullness_group */
+/* type can be of enum type class_stat_type or fullness_group */
 static inline unsigned long zs_stat_get(struct size_class *class, int type)
 {
@@ -725,7 +725,7 @@ static void insert_zspage(struct size_class *class,
 {
 	struct zspage *head;
 
-	zs_stat_inc(class, fullness, 1);
+	class_stat_inc(class, fullness, 1);
 	head = list_first_entry_or_null(&class->fullness_list[fullness],
 					struct zspage, list);
 	/*
@@ -750,7 +750,7 @@ static void remove_zspage(struct size_class *class,
 	VM_BUG_ON(is_zspage_isolated(zspage));
 
 	list_del_init(&zspage->list);
-	zs_stat_dec(class, fullness, 1);
+	class_stat_dec(class, fullness, 1);
 }
 
 /*
@@ -964,7 +964,7 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
 
 	cache_free_zspage(pool, zspage);
 
-	zs_stat_dec(class, OBJ_ALLOCATED, class->objs_per_zspage);
+	class_stat_dec(class, OBJ_ALLOCATED, class->objs_per_zspage);
 	atomic_long_sub(class->pages_per_zspage,
 			&pool->pages_allocated);
 }
@@ -1394,7 +1394,7 @@ static unsigned long obj_malloc(struct size_class *class,
 	kunmap_atomic(vaddr);
 	mod_zspage_inuse(zspage, 1);
-	zs_stat_inc(class, OBJ_USED, 1);
+	class_stat_inc(class, OBJ_USED, 1);
 
 	obj = location_to_obj(m_page, obj);
 
@@ -1458,7 +1458,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	record_obj(handle, obj);
 	atomic_long_add(class->pages_per_zspage,
 				&pool->pages_allocated);
-	zs_stat_inc(class, OBJ_ALLOCATED, class->objs_per_zspage);
+	class_stat_inc(class, OBJ_ALLOCATED, class->objs_per_zspage);
 
 	/* We completely set up zspage so mark them as movable */
 	SetZsPageMovable(pool, zspage);
@@ -1489,7 +1489,7 @@ static void obj_free(struct size_class *class, unsigned long obj)
 	kunmap_atomic(vaddr);
 	set_freeobj(zspage, f_objidx);
 	mod_zspage_inuse(zspage, -1);
-	zs_stat_dec(class, OBJ_USED, 1);
+	class_stat_dec(class, OBJ_USED, 1);
 }
 
 void zs_free(struct zs_pool *pool, unsigned long handle)
From patchwork Mon Nov 15 18:59:03 2021
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 12620247
From: Minchan Kim
To: Andrew Morton
Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim
Subject: [PATCH v2 3/9] zsmalloc: decouple class actions from zspage works
Date: Mon, 15 Nov 2021 10:59:03 -0800
Message-Id: <20211115185909.3949505-4-minchan@kernel.org>
In-Reply-To: <20211115185909.3949505-1-minchan@kernel.org>
References: <20211115185909.3949505-1-minchan@kernel.org>

This patch moves the class stat update out of obj_malloc, since it is not
related to the zspage operation itself. This is preparation for introducing
a new locking scheme in the next patch.

Signed-off-by: Minchan Kim
---
 mm/zsmalloc.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index c149ccf734ba..7a14090e4a53 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1360,17 +1360,19 @@ size_t zs_huge_class_size(struct zs_pool *pool)
 }
 EXPORT_SYMBOL_GPL(zs_huge_class_size);
 
-static unsigned long obj_malloc(struct size_class *class,
+static unsigned long obj_malloc(struct zs_pool *pool,
 				struct zspage *zspage, unsigned long handle)
 {
 	int i, nr_page, offset;
 	unsigned long obj;
 	struct link_free *link;
+	struct size_class *class;
 
 	struct page *m_page;
 	unsigned long m_offset;
 	void *vaddr;
 
+	class = pool->size_class[zspage->class];
 	handle |= OBJ_ALLOCATED_TAG;
 	obj = get_freeobj(zspage);
 
@@ -1394,7 +1396,6 @@ static unsigned long obj_malloc(struct size_class *class,
 	kunmap_atomic(vaddr);
 	mod_zspage_inuse(zspage, 1);
-	class_stat_inc(class, OBJ_USED, 1);
 
 	obj = location_to_obj(m_page, obj);
 
@@ -1433,10 +1434,11 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	spin_lock(&class->lock);
 	zspage = find_get_zspage(class);
 	if (likely(zspage)) {
-		obj = obj_malloc(class, zspage, handle);
+		obj = obj_malloc(pool, zspage, handle);
 		/* Now move the zspage to another fullness group, if required */
 		fix_fullness_group(class, zspage);
 		record_obj(handle, obj);
+		class_stat_inc(class, OBJ_USED, 1);
 		spin_unlock(&class->lock);
 
 		return handle;
@@ -1451,7 +1453,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	}
 
 	spin_lock(&class->lock);
-	obj = obj_malloc(class, zspage, handle);
+	obj = obj_malloc(pool, zspage, handle);
 	newfg = get_fullness_group(class, zspage);
 	insert_zspage(class, zspage, newfg);
 	set_zspage_mapping(zspage, class->index, newfg);
@@ -1459,6 +1461,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	atomic_long_add(class->pages_per_zspage,
 				&pool->pages_allocated);
 	class_stat_inc(class, OBJ_ALLOCATED, class->objs_per_zspage);
+	class_stat_inc(class, OBJ_USED, 1);
 
 	/* We completely set up zspage so mark them as movable */
 	SetZsPageMovable(pool, zspage);
@@ -1468,7 +1471,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 }
 EXPORT_SYMBOL_GPL(zs_malloc);
 
-static void obj_free(struct size_class *class, unsigned long obj)
+static void obj_free(int class_size, unsigned long obj)
 {
 	struct link_free *link;
 	struct zspage *zspage;
@@ -1478,7 +1481,7 @@ static void obj_free(struct size_class *class, unsigned long obj)
 	void *vaddr;
 
 	obj_to_location(obj, &f_page, &f_objidx);
-	f_offset = (class->size * f_objidx) & ~PAGE_MASK;
+	f_offset = (class_size * f_objidx) & ~PAGE_MASK;
 	zspage = get_zspage(f_page);
 
 	vaddr = kmap_atomic(f_page);
@@ -1489,7 +1492,6 @@ static void obj_free(struct size_class *class, unsigned long obj)
 	kunmap_atomic(vaddr);
 	set_freeobj(zspage, f_objidx);
 	mod_zspage_inuse(zspage, -1);
-	class_stat_dec(class, OBJ_USED, 1);
 }
 
 void zs_free(struct zs_pool *pool, unsigned long handle)
@@ -1513,7 +1515,8 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	class = zspage_class(pool, zspage);
 
 	spin_lock(&class->lock);
-	obj_free(class, obj);
+	obj_free(class->size, obj);
+	class_stat_dec(class, OBJ_USED, 1);
 	fullness = fix_fullness_group(class, zspage);
 	if (fullness != ZS_EMPTY) {
 		migrate_read_unlock(zspage);
@@ -1671,7 +1674,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		}
 
 		used_obj = handle_to_obj(handle);
-		free_obj = obj_malloc(class, get_zspage(d_page), handle);
+		free_obj = obj_malloc(pool, get_zspage(d_page), handle);
 		zs_object_copy(class, free_obj, used_obj);
 		obj_idx++;
 		/*
@@ -1683,7 +1686,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		free_obj |= BIT(HANDLE_PIN_BIT);
 		record_obj(handle, free_obj);
 		unpin_tag(handle);
-		obj_free(class, used_obj);
+		obj_free(class->size, used_obj);
 	}
 
 	/* Remember last position in this iteration */
From patchwork Mon Nov 15 18:59:04 2021
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 12620227
From: Minchan Kim
To: Andrew Morton
Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim
Subject: [PATCH v2 4/9] zsmalloc: introduce obj_allocated
Date: Mon, 15 Nov 2021 10:59:04 -0800
Message-Id: <20211115185909.3949505-5-minchan@kernel.org>
In-Reply-To: <20211115185909.3949505-1-minchan@kernel.org>
References: <20211115185909.3949505-1-minchan@kernel.org>

The usage pattern for obj_to_head is to check whether the object is
allocated or not. Thus, introduce obj_allocated, which performs the check
and returns the untagged handle in one step.
Signed-off-by: Minchan Kim
---
 mm/zsmalloc.c | 33 ++++++++++++++++-----------------
 1 file changed, 16 insertions(+), 17 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 7a14090e4a53..6ca130c0f7dc 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -877,13 +877,21 @@ static unsigned long handle_to_obj(unsigned long handle)
 	return *(unsigned long *)handle;
 }
 
-static unsigned long obj_to_head(struct page *page, void *obj)
+static bool obj_allocated(struct page *page, void *obj, unsigned long *phandle)
 {
+	unsigned long handle;
+
 	if (unlikely(PageHugeObject(page))) {
 		VM_BUG_ON_PAGE(!is_first_page(page), page);
-		return page->index;
+		handle = page->index;
 	} else
-		return *(unsigned long *)obj;
+		handle = *(unsigned long *)obj;
+
+	if (!(handle & OBJ_ALLOCATED_TAG))
+		return false;
+
+	*phandle = handle & ~OBJ_ALLOCATED_TAG;
+	return true;
 }
 
 static inline int testpin_tag(unsigned long handle)
@@ -1606,7 +1614,6 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
 static unsigned long find_alloced_obj(struct size_class *class,
 					struct page *page, int *obj_idx)
 {
-	unsigned long head;
 	int offset = 0;
 	int index = *obj_idx;
 	unsigned long handle = 0;
@@ -1616,9 +1623,7 @@ static unsigned long find_alloced_obj(struct size_class *class,
 	offset += class->size * index;
 
 	while (offset < PAGE_SIZE) {
-		head = obj_to_head(page, addr + offset);
-		if (head & OBJ_ALLOCATED_TAG) {
-			handle = head & ~OBJ_ALLOCATED_TAG;
+		if (obj_allocated(page, addr + offset, &handle)) {
 			if (trypin_tag(handle))
 				break;
 			handle = 0;
@@ -1928,7 +1933,7 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	struct page *dummy;
 	void *s_addr, *d_addr, *addr;
 	int offset, pos;
-	unsigned long handle, head;
+	unsigned long handle;
 	unsigned long old_obj, new_obj;
 	unsigned int obj_idx;
 	int ret = -EAGAIN;
@@ -1964,9 +1969,7 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	pos = offset;
 	s_addr = kmap_atomic(page);
 	while (pos < PAGE_SIZE) {
-		head = obj_to_head(page, s_addr + pos);
-		if (head & OBJ_ALLOCATED_TAG) {
-			handle = head & ~OBJ_ALLOCATED_TAG;
+		if (obj_allocated(page, s_addr + pos, &handle)) {
 			if (!trypin_tag(handle))
 				goto unpin_objects;
 		}
@@ -1982,9 +1985,7 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 
 	for (addr = s_addr + offset; addr < s_addr + pos;
 						addr += class->size) {
-		head = obj_to_head(page, addr);
-		if (head & OBJ_ALLOCATED_TAG) {
-			handle = head & ~OBJ_ALLOCATED_TAG;
+		if (obj_allocated(page, addr, &handle)) {
 			BUG_ON(!testpin_tag(handle));
 
 			old_obj = handle_to_obj(handle);
@@ -2029,9 +2030,7 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 unpin_objects:
 	for (addr = s_addr + offset; addr < s_addr + pos;
 						addr += class->size) {
-		head = obj_to_head(page, addr);
-		if (head & OBJ_ALLOCATED_TAG) {
-			handle = head & ~OBJ_ALLOCATED_TAG;
+		if (obj_allocated(page, addr, &handle)) {
 			BUG_ON(!testpin_tag(handle));
 			unpin_tag(handle);
 		}
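
A minimal sketch of the tagged-handle idiom obj_allocated() wraps (the
OBJ_ALLOCATED_TAG constant and helper body mirror the diff above; the
surrounding program is a simplified userspace stand-in for the kernel
context): the low bit of the stored word says "allocated", and the helper
tests the tag and hands back the untagged handle in one step.

#include <stdbool.h>
#include <stdio.h>

#define OBJ_ALLOCATED_TAG 1UL

static bool obj_allocated(unsigned long word, unsigned long *phandle)
{
	if (!(word & OBJ_ALLOCATED_TAG))
		return false;

	*phandle = word & ~OBJ_ALLOCATED_TAG;
	return true;
}

int main(void)
{
	unsigned long handle;
	unsigned long slot = 0xdead0 | OBJ_ALLOCATED_TAG; /* allocated object */

	if (obj_allocated(slot, &handle))
		printf("allocated, handle=%#lx\n", handle);
	return 0;
}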
From patchwork Mon Nov 15 18:59:05 2021
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 12620229
From: Minchan Kim
To: Andrew Morton
Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim
Subject: [PATCH v2 5/9] zsmalloc: move huge compressed obj from page to zspage
Date: Mon, 15 Nov 2021 10:59:05 -0800
Message-Id: <20211115185909.3949505-6-minchan@kernel.org>
In-Reply-To: <20211115185909.3949505-1-minchan@kernel.org>
References: <20211115185909.3949505-1-minchan@kernel.org>
The huge-object flag describes the zspage as a whole, not an individual
page, so let's move it from struct page to struct zspage.

Signed-off-by: Minchan Kim
---
 mm/zsmalloc.c | 50 ++++++++++++++++++++++++++------------------------
 1 file changed, 26 insertions(+), 24 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 6ca130c0f7dc..26e571cc354e 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -121,6 +121,7 @@
 #define OBJ_INDEX_BITS	(BITS_PER_LONG - _PFN_BITS - OBJ_TAG_BITS)
 #define OBJ_INDEX_MASK	((_AC(1, UL) << OBJ_INDEX_BITS) - 1)
 
+#define HUGE_BITS	1
 #define FULLNESS_BITS	2
 #define CLASS_BITS	8
 #define ISOLATED_BITS	3
@@ -213,22 +214,6 @@ struct size_class {
 	struct zs_size_stat stats;
 };
 
-/* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
-static void SetPageHugeObject(struct page *page)
-{
-	SetPageOwnerPriv1(page);
-}
-
-static void ClearPageHugeObject(struct page *page)
-{
-	ClearPageOwnerPriv1(page);
-}
-
-static int PageHugeObject(struct page *page)
-{
-	return PageOwnerPriv1(page);
-}
-
 /*
  * Placed within free objects to form a singly linked list.
  * For every zspage, zspage->freeobj gives head of this list.
@@ -278,6 +263,7 @@ struct zs_pool {
 
 struct zspage {
 	struct {
+		unsigned int huge:HUGE_BITS;
 		unsigned int fullness:FULLNESS_BITS;
 		unsigned int class:CLASS_BITS + 1;
 		unsigned int isolated:ISOLATED_BITS;
@@ -298,6 +284,17 @@ struct mapping_area {
 	enum zs_mapmode vm_mm; /* mapping mode */
 };
 
+/* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
+static void SetZsHugePage(struct zspage *zspage)
+{
+	zspage->huge = 1;
+}
+
+static bool ZsHugePage(struct zspage *zspage)
+{
+	return zspage->huge;
+}
+
 #ifdef CONFIG_COMPACTION
 static int zs_register_migration(struct zs_pool *pool);
 static void zs_unregister_migration(struct zs_pool *pool);
@@ -830,7 +827,9 @@ static struct zspage *get_zspage(struct page *page)
 
 static struct page *get_next_page(struct page *page)
 {
-	if (unlikely(PageHugeObject(page)))
+	struct zspage *zspage = get_zspage(page);
+
+	if (unlikely(ZsHugePage(zspage)))
 		return NULL;
 
 	return page->freelist;
@@ -880,8 +879,9 @@ static unsigned long handle_to_obj(unsigned long handle)
 static bool obj_allocated(struct page *page, void *obj, unsigned long *phandle)
 {
 	unsigned long handle;
+	struct zspage *zspage = get_zspage(page);
 
-	if (unlikely(PageHugeObject(page))) {
+	if (unlikely(ZsHugePage(zspage))) {
 		VM_BUG_ON_PAGE(!is_first_page(page), page);
 		handle = page->index;
 	} else
@@ -920,7 +920,6 @@ static void reset_page(struct page *page)
 	ClearPagePrivate(page);
 	set_page_private(page, 0);
 	page_mapcount_reset(page);
-	ClearPageHugeObject(page);
 	page->freelist = NULL;
 }
 
@@ -1062,7 +1061,7 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage,
 			SetPagePrivate(page);
 			if (unlikely(class->objs_per_zspage == 1 &&
 					class->pages_per_zspage == 1))
-				SetPageHugeObject(page);
+				SetZsHugePage(zspage);
 		} else {
 			prev_page->freelist = page;
 		}
@@ -1307,7 +1306,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 
 	ret = __zs_map_object(area, pages, off, class->size);
 out:
-	if (likely(!PageHugeObject(page)))
+	if (likely(!ZsHugePage(zspage)))
 		ret += ZS_HANDLE_SIZE;
 
 	return ret;
@@ -1395,7 +1394,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
 	vaddr = kmap_atomic(m_page);
 	link = (struct link_free *)vaddr + m_offset / sizeof(*link);
 	set_freeobj(zspage, link->next >> OBJ_TAG_BITS);
-	if (likely(!PageHugeObject(m_page)))
+	if (likely(!ZsHugePage(zspage)))
 		/* record handle in the header of allocated chunk */
 		link->handle = handle;
 	else
@@ -1496,7 +1495,10 @@ static void obj_free(int class_size, unsigned long obj)
 
 	/* Insert this object in containing zspage's freelist */
 	link = (struct link_free *)(vaddr + f_offset);
-	link->next = get_freeobj(zspage) << OBJ_TAG_BITS;
+	if (likely(!ZsHugePage(zspage)))
+		link->next = get_freeobj(zspage) << OBJ_TAG_BITS;
+	else
+		f_page->index = 0;
 	kunmap_atomic(vaddr);
 	set_freeobj(zspage, f_objidx);
 	mod_zspage_inuse(zspage, -1);
@@ -1867,7 +1869,7 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 
 	create_page_chain(class, zspage, pages);
 	set_first_obj_offset(newpage, get_first_obj_offset(oldpage));
-	if (unlikely(PageHugeObject(oldpage)))
+	if (unlikely(ZsHugePage(zspage)))
 		newpage->index = oldpage->index;
 	__SetPageMovable(newpage, page_mapping(oldpage));
 }
From patchwork Mon Nov 15 18:59:06 2021
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 12620231
From: Minchan Kim
To: Andrew Morton
Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim
Subject: [PATCH v2 6/9] zsmalloc: remove zspage isolation for migration
zsmalloc: remove zspage isolation for migration Date: Mon, 15 Nov 2021 10:59:06 -0800 Message-Id: <20211115185909.3949505-7-minchan@kernel.org> X-Mailer: git-send-email 2.34.0.rc1.387.gb447b232ab-goog In-Reply-To: <20211115185909.3949505-1-minchan@kernel.org> References: <20211115185909.3949505-1-minchan@kernel.org> MIME-Version: 1.0 X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: 6D46A300009B X-Stat-Signature: rn7d88sdikukzipzxse8f7fx17bobcja Authentication-Results: imf03.hostedemail.com; dkim=pass header.d=gmail.com header.s=20210112 header.b=GWM+qeAv; spf=pass (imf03.hostedemail.com: domain of minchan.kim@gmail.com designates 209.85.215.175 as permitted sender) smtp.mailfrom=minchan.kim@gmail.com; dmarc=fail reason="SPF not aligned (relaxed), DKIM not aligned (relaxed)" header.from=kernel.org (policy=none) X-HE-Tag: 1637002751-451442 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: zspage isolation for migration introduced additional exceptions to be dealt with since the zspage was isolated from class list. The reason why I isolated zspage from class list was to prevent race between obj_malloc and page migration via allocating zpage from the zspage further. However, it couldn't prevent object freeing from zspage so it needed corner case handling. This patch removes the whole mess. Now, we are fine since class->lock and zspage->lock can prevent the race. Signed-off-by: Minchan Kim --- mm/zsmalloc.c | 157 +++----------------------------------------------- 1 file changed, 8 insertions(+), 149 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 26e571cc354e..b8b098be92fa 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -254,10 +254,6 @@ struct zs_pool { #ifdef CONFIG_COMPACTION struct inode *inode; struct work_struct free_work; - /* A wait queue for when migration races with async_free_zspage() */ - struct wait_queue_head migration_wait; - atomic_long_t isolated_pages; - bool destroying; #endif }; @@ -454,11 +450,6 @@ MODULE_ALIAS("zpool-zsmalloc"); /* per-cpu VM mapping areas for zspage accesses that cross page boundaries */ static DEFINE_PER_CPU(struct mapping_area, zs_map_area); -static bool is_zspage_isolated(struct zspage *zspage) -{ - return zspage->isolated; -} - static __maybe_unused int is_first_page(struct page *page) { return PagePrivate(page); @@ -744,7 +735,6 @@ static void remove_zspage(struct size_class *class, enum fullness_group fullness) { VM_BUG_ON(list_empty(&class->fullness_list[fullness])); - VM_BUG_ON(is_zspage_isolated(zspage)); list_del_init(&zspage->list); class_stat_dec(class, fullness, 1); @@ -770,13 +760,9 @@ static enum fullness_group fix_fullness_group(struct size_class *class, if (newfg == currfg) goto out; - if (!is_zspage_isolated(zspage)) { - remove_zspage(class, zspage, currfg); - insert_zspage(class, zspage, newfg); - } - + remove_zspage(class, zspage, currfg); + insert_zspage(class, zspage, newfg); set_zspage_mapping(zspage, class_idx, newfg); - out: return newfg; } @@ -1511,7 +1497,6 @@ void zs_free(struct zs_pool *pool, unsigned long handle) unsigned long obj; struct size_class *class; enum fullness_group fullness; - bool isolated; if (unlikely(!handle)) return; @@ -1533,11 +1518,9 @@ void zs_free(struct zs_pool *pool, unsigned long handle) goto out; } - isolated = is_zspage_isolated(zspage); migrate_read_unlock(zspage); /* If zspage is isolated, zs_page_putback will free the zspage */ - if (likely(!isolated)) - free_zspage(pool, 
class, zspage); + free_zspage(pool, class, zspage); out: spin_unlock(&class->lock); @@ -1718,7 +1701,6 @@ static struct zspage *isolate_zspage(struct size_class *class, bool source) zspage = list_first_entry_or_null(&class->fullness_list[fg[i]], struct zspage, list); if (zspage) { - VM_BUG_ON(is_zspage_isolated(zspage)); remove_zspage(class, zspage, fg[i]); return zspage; } @@ -1739,8 +1721,6 @@ static enum fullness_group putback_zspage(struct size_class *class, { enum fullness_group fullness; - VM_BUG_ON(is_zspage_isolated(zspage)); - fullness = get_fullness_group(class, zspage); insert_zspage(class, zspage, fullness); set_zspage_mapping(zspage, class->index, fullness); @@ -1822,35 +1802,10 @@ static void inc_zspage_isolation(struct zspage *zspage) static void dec_zspage_isolation(struct zspage *zspage) { + VM_BUG_ON(zspage->isolated == 0); zspage->isolated--; } -static void putback_zspage_deferred(struct zs_pool *pool, - struct size_class *class, - struct zspage *zspage) -{ - enum fullness_group fg; - - fg = putback_zspage(class, zspage); - if (fg == ZS_EMPTY) - schedule_work(&pool->free_work); - -} - -static inline void zs_pool_dec_isolated(struct zs_pool *pool) -{ - VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0); - atomic_long_dec(&pool->isolated_pages); - /* - * Checking pool->destroying must happen after atomic_long_dec() - * for pool->isolated_pages above. Paired with the smp_mb() in - * zs_unregister_migration(). - */ - smp_mb__after_atomic(); - if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying) - wake_up_all(&pool->migration_wait); -} - static void replace_sub_page(struct size_class *class, struct zspage *zspage, struct page *newpage, struct page *oldpage) { @@ -1876,10 +1831,7 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage, static bool zs_page_isolate(struct page *page, isolate_mode_t mode) { - struct zs_pool *pool; - struct size_class *class; struct zspage *zspage; - struct address_space *mapping; /* * Page is locked so zspage couldn't be destroyed. For detail, look at @@ -1889,39 +1841,9 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode) VM_BUG_ON_PAGE(PageIsolated(page), page); zspage = get_zspage(page); - - mapping = page_mapping(page); - pool = mapping->private_data; - - class = zspage_class(pool, zspage); - - spin_lock(&class->lock); - if (get_zspage_inuse(zspage) == 0) { - spin_unlock(&class->lock); - return false; - } - - /* zspage is isolated for object migration */ - if (list_empty(&zspage->list) && !is_zspage_isolated(zspage)) { - spin_unlock(&class->lock); - return false; - } - - /* - * If this is first time isolation for the zspage, isolate zspage from - * size_class to prevent further object allocation from the zspage. - */ - if (!list_empty(&zspage->list) && !is_zspage_isolated(zspage)) { - enum fullness_group fullness; - unsigned int class_idx; - - get_zspage_mapping(zspage, &class_idx, &fullness); - atomic_long_inc(&pool->isolated_pages); - remove_zspage(class, zspage, fullness); - } - + migrate_write_lock(zspage); inc_zspage_isolation(zspage); - spin_unlock(&class->lock); + migrate_write_unlock(zspage); return true; } @@ -2004,21 +1926,6 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage, dec_zspage_isolation(zspage); - /* - * Page migration is done so let's putback isolated zspage to - * the list if @page is final isolated subpage in the zspage. 
- */ - if (!is_zspage_isolated(zspage)) { - /* - * We cannot race with zs_destroy_pool() here because we wait - * for isolation to hit zero before we start destroying. - * Also, we ensure that everyone can see pool->destroying before - * we start waiting. - */ - putback_zspage_deferred(pool, class, zspage); - zs_pool_dec_isolated(pool); - } - if (page_zone(newpage) != page_zone(page)) { dec_zone_page_state(page, NR_ZSPAGES); inc_zone_page_state(newpage, NR_ZSPAGES); @@ -2046,30 +1953,15 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage, static void zs_page_putback(struct page *page) { - struct zs_pool *pool; - struct size_class *class; - struct address_space *mapping; struct zspage *zspage; VM_BUG_ON_PAGE(!PageMovable(page), page); VM_BUG_ON_PAGE(!PageIsolated(page), page); zspage = get_zspage(page); - mapping = page_mapping(page); - pool = mapping->private_data; - class = zspage_class(pool, zspage); - - spin_lock(&class->lock); + migrate_write_lock(zspage); dec_zspage_isolation(zspage); - if (!is_zspage_isolated(zspage)) { - /* - * Due to page_lock, we cannot free zspage immediately - * so let's defer. - */ - putback_zspage_deferred(pool, class, zspage); - zs_pool_dec_isolated(pool); - } - spin_unlock(&class->lock); + migrate_write_unlock(zspage); } static const struct address_space_operations zsmalloc_aops = { @@ -2091,36 +1983,8 @@ static int zs_register_migration(struct zs_pool *pool) return 0; } -static bool pool_isolated_are_drained(struct zs_pool *pool) -{ - return atomic_long_read(&pool->isolated_pages) == 0; -} - -/* Function for resolving migration */ -static void wait_for_isolated_drain(struct zs_pool *pool) -{ - - /* - * We're in the process of destroying the pool, so there are no - * active allocations. zs_page_isolate() fails for completely free - * zspages, so we need only wait for the zs_pool's isolated - * count to hit zero. - */ - wait_event(pool->migration_wait, - pool_isolated_are_drained(pool)); -} - static void zs_unregister_migration(struct zs_pool *pool) { - pool->destroying = true; - /* - * We need a memory barrier here to ensure global visibility of - * pool->destroying. Thus pool->isolated pages will either be 0 in which - * case we don't care, or it will be > 0 and pool->destroying will - * ensure that we wake up once isolation hits 0. 
- */ - smp_mb(); - wait_for_isolated_drain(pool); /* This can block */ flush_work(&pool->free_work); iput(pool->inode); } @@ -2150,7 +2014,6 @@ static void async_free_zspage(struct work_struct *work) spin_unlock(&class->lock); } - list_for_each_entry_safe(zspage, tmp, &free_pages, list) { list_del(&zspage->list); lock_zspage(zspage); @@ -2363,10 +2226,6 @@ struct zs_pool *zs_create_pool(const char *name) if (!pool->name) goto err; -#ifdef CONFIG_COMPACTION - init_waitqueue_head(&pool->migration_wait); -#endif - if (create_cache(pool)) goto err;

From patchwork Mon Nov 15 18:59:07 2021
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 12620233
From: Minchan Kim
To: Andrew Morton
Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim, Peter Zijlstra
Subject: [PATCH v2 7/9] locking/rwlocks: introduce write_lock_nested
Date: Mon, 15 Nov 2021 10:59:07 -0800
Message-Id: <20211115185909.3949505-8-minchan@kernel.org>
In-Reply-To: <20211115185909.3949505-1-minchan@kernel.org>
References: <20211115185909.3949505-1-minchan@kernel.org>

In preparation for converting bit_spin_lock to rwlock in zsmalloc, multiple writers of zspages will be able to run at the same time, but those zspages are guaranteed to be different zspage instances, so there is no deadlock. This patch adds write_lock_nested to support this case for LOCKDEP.
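To make the new annotation concrete, here is a minimal editor's sketch (not part of the patch; lock_a, lock_b and move_objects are made-up names) of a caller taking the write side of two rwlocks of the same lock class, which is the zsmalloc compaction situation described above:

#include <linux/spinlock.h>

static DEFINE_RWLOCK(lock_a);
static DEFINE_RWLOCK(lock_b);

static void move_objects(void)
{
        /* First lock of the class: a plain write_lock(). */
        write_lock(&lock_a);
        /*
         * Second lock of the same class: SINGLE_DEPTH_NESTING tells
         * LOCKDEP this is an intended nested acquisition of a distinct
         * lock instance, not a recursive self-deadlock.
         */
        write_lock_nested(&lock_b, SINGLE_DEPTH_NESTING);

        /* ... move data from the lock_a side to the lock_b side ... */

        write_unlock(&lock_b);
        write_unlock(&lock_a);
}

Note that with CONFIG_DEBUG_LOCK_ALLOC disabled, write_lock_nested() simply falls back to _raw_write_lock(), as the first hunk below shows.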
Cc: Peter Zijlstra (Intel)
Signed-off-by: Minchan Kim
Acked-by: Peter Zijlstra (Intel)
Reported-by: kernel test robot
Acked-by: Sebastian Andrzej Siewior
--- include/linux/rwlock.h | 6 ++++++ include/linux/rwlock_api_smp.h | 9 +++++++++ include/linux/rwlock_rt.h | 6 ++++++ include/linux/spinlock_api_up.h | 1 + kernel/locking/spinlock.c | 6 ++++++ kernel/locking/spinlock_rt.c | 12 ++++++++++++ 6 files changed, 40 insertions(+) diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h index 2c0ad417ce3c..8f416c5e929e 100644 --- a/include/linux/rwlock.h +++ b/include/linux/rwlock.h @@ -55,6 +55,12 @@ do { \ #define write_lock(lock) _raw_write_lock(lock) #define read_lock(lock) _raw_read_lock(lock) +#ifdef CONFIG_DEBUG_LOCK_ALLOC +#define write_lock_nested(lock, subclass) _raw_write_lock_nested(lock, subclass) +#else +#define write_lock_nested(lock, subclass) _raw_write_lock(lock) +#endif + #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK) #define read_lock_irqsave(lock, flags) \ diff --git a/include/linux/rwlock_api_smp.h b/include/linux/rwlock_api_smp.h index f1db6f17c4fb..f0c535ec4e65 100644 --- a/include/linux/rwlock_api_smp.h +++ b/include/linux/rwlock_api_smp.h @@ -17,6 +17,7 @@ void __lockfunc _raw_read_lock(rwlock_t *lock) __acquires(lock); void __lockfunc _raw_write_lock(rwlock_t *lock) __acquires(lock); +void __lockfunc _raw_write_lock_nested(rwlock_t *lock, int subclass) __acquires(lock); void __lockfunc _raw_read_lock_bh(rwlock_t *lock) __acquires(lock); void __lockfunc _raw_write_lock_bh(rwlock_t *lock) __acquires(lock); void __lockfunc _raw_read_lock_irq(rwlock_t *lock) __acquires(lock); @@ -46,6 +47,7 @@ _raw_write_unlock_irqrestore(rwlock_t *lock, unsigned long flags) #ifdef CONFIG_INLINE_WRITE_LOCK #define _raw_write_lock(lock) __raw_write_lock(lock) +#define _raw_write_lock_nested(lock, subclass) __raw_write_lock_nested(lock, subclass) #endif #ifdef CONFIG_INLINE_READ_LOCK_BH @@ -209,6 +211,13 @@ static inline void __raw_write_lock(rwlock_t *lock) LOCK_CONTENDED(lock, do_raw_write_trylock, do_raw_write_lock); } +static inline void __raw_write_lock_nested(rwlock_t *lock, int subclass) +{ + preempt_disable(); + rwlock_acquire(&lock->dep_map, subclass, 0, _RET_IP_); + LOCK_CONTENDED(lock, do_raw_write_trylock, do_raw_write_lock); +} + #endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_DEBUG_LOCK_ALLOC */ static inline void __raw_write_unlock(rwlock_t *lock) diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h index 49c1f3842ed5..efd6da62c893 100644 --- a/include/linux/rwlock_rt.h +++ b/include/linux/rwlock_rt.h @@ -28,6 +28,7 @@ extern void rt_read_lock(rwlock_t *rwlock); extern int rt_read_trylock(rwlock_t *rwlock); extern void rt_read_unlock(rwlock_t *rwlock); extern void rt_write_lock(rwlock_t *rwlock); +extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass); extern int rt_write_trylock(rwlock_t *rwlock); extern void rt_write_unlock(rwlock_t *rwlock); @@ -83,6 +84,11 @@ static __always_inline void write_lock(rwlock_t *rwlock) rt_write_lock(rwlock); } +static __always_inline void write_lock_nested(rwlock_t *rwlock, int subclass) +{ + rt_write_lock_nested(rwlock, subclass); +} + static __always_inline void write_lock_bh(rwlock_t *rwlock) { local_bh_disable(); diff --git a/include/linux/spinlock_api_up.h b/include/linux/spinlock_api_up.h index d0d188861ad6..b8ba00ccccde 100644 --- a/include/linux/spinlock_api_up.h +++ b/include/linux/spinlock_api_up.h @@ -59,6 +59,7 @@ #define _raw_spin_lock_nested(lock, subclass)
__LOCK(lock) #define _raw_read_lock(lock) __LOCK(lock) #define _raw_write_lock(lock) __LOCK(lock) +#define _raw_write_lock_nested(lock, subclass) __LOCK(lock) #define _raw_spin_lock_bh(lock) __LOCK_BH(lock) #define _raw_read_lock_bh(lock) __LOCK_BH(lock) #define _raw_write_lock_bh(lock) __LOCK_BH(lock) diff --git a/kernel/locking/spinlock.c b/kernel/locking/spinlock.c index b562f9289372..996811efa6d6 100644 --- a/kernel/locking/spinlock.c +++ b/kernel/locking/spinlock.c @@ -300,6 +300,12 @@ void __lockfunc _raw_write_lock(rwlock_t *lock) __raw_write_lock(lock); } EXPORT_SYMBOL(_raw_write_lock); + +void __lockfunc _raw_write_lock_nested(rwlock_t *lock, int subclass) +{ + __raw_write_lock_nested(lock, subclass); +} +EXPORT_SYMBOL(_raw_write_lock_nested); #endif #ifndef CONFIG_INLINE_WRITE_LOCK_IRQSAVE diff --git a/kernel/locking/spinlock_rt.c b/kernel/locking/spinlock_rt.c index b2e553f9255b..b82d346f1e00 100644 --- a/kernel/locking/spinlock_rt.c +++ b/kernel/locking/spinlock_rt.c @@ -239,6 +239,18 @@ void __sched rt_write_lock(rwlock_t *rwlock) } EXPORT_SYMBOL(rt_write_lock); +#ifdef CONFIG_DEBUG_LOCK_ALLOC +void __sched rt_write_lock_nested(rwlock_t *rwlock, int subclass) +{ + ___might_sleep(__FILE__, __LINE__, 0); + rwlock_acquire(&rwlock->dep_map, subclass, 0, _RET_IP_); + rwbase_write_lock(&rwlock->rwbase, TASK_RTLOCK_WAIT); + rcu_read_lock(); + migrate_disable(); +} +EXPORT_SYMBOL(rt_write_lock_nested); +#endif + void __sched rt_read_unlock(rwlock_t *rwlock) { rwlock_release(&rwlock->dep_map, _RET_IP_);

From patchwork Mon Nov 15 18:59:08 2021
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 12620243
From: Minchan Kim
To: Andrew Morton
Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim
Subject: [PATCH v2 8/9] zsmalloc: replace per zpage lock with pool->migrate_lock
Date: Mon, 15 Nov 2021 10:59:08 -0800
Message-Id: <20211115185909.3949505-9-minchan@kernel.org>
In-Reply-To: <20211115185909.3949505-1-minchan@kernel.org>
References: <20211115185909.3949505-1-minchan@kernel.org>

zsmalloc has used a bit spin_lock in the zpage handle to keep the zpage object alive during several operations. However, that is a problem for PREEMPT_RT and makes the code overly complicated. This patch replaces the bit spin_lock with the pool->migrate_lock rwlock, which makes the code simpler and lets zsmalloc work under PREEMPT_RT. A simplified sketch of the new reader/writer roles follows; the cost of the change is discussed after it.
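A rough sketch of the scheme (an editor's illustration; demo_pool, demo_io_path and demo_migrate_path are made-up names, not code from the patch, and the class/zspage locks are omitted): the IO paths become readers of pool->migrate_lock, so they can run concurrently with each other, while migration takes the write side:

#include <linux/spinlock.h>

struct demo_pool {
        rwlock_t migrate_lock;  /* mirrors the new zs_pool->migrate_lock */
};

static void demo_io_path(struct demo_pool *pool, unsigned long handle)
{
        /*
         * Read side: pins the handle->zspage mapping so migration
         * cannot move the object while we resolve it. Many IO callers
         * (zs_malloc/zs_free/zs_map_object) may run here in parallel.
         */
        read_lock(&pool->migrate_lock);
        /* ... resolve handle to object, page, and size class ... */
        read_unlock(&pool->migrate_lock);
}

static void demo_migrate_path(struct demo_pool *pool)
{
        /*
         * Write side: excludes every IO reader while page contents
         * are copied and handles are rewritten.
         */
        write_lock(&pool->migrate_lock);
        /* ... migrate the zspage ... */
        write_unlock(&pool->migrate_lock);
}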
The drawback is that pool->migrate_lock has coarser granularity than the per-zpage lock, so contention could be higher than before when IO-related operations (i.e., zs_malloc, zs_free, zs_[map|unmap]) and compaction (page/zpage migration) run in parallel (note that the migrate_lock is a rwlock and the IO-related functions all take the read side, so there is no contention among them). However, the write side is fast enough (the dominant overhead is just the page copy), so it shouldn't hurt much. If the lock granularity becomes more of a problem later, we could introduce table locks based on the handle as a hash value.

Signed-off-by: Minchan Kim
--- mm/zsmalloc.c | 205 +++++++++++++++++++++++--------------------- 1 file changed, 96 insertions(+), 109 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index b8b098be92fa..5d4c4d254679 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -30,6 +30,14 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt +/* + * lock ordering: + * page_lock + * pool->migrate_lock + * class->lock + * zspage->lock + */ + #include <linux/module.h> #include <linux/kernel.h> #include <linux/sched.h> @@ -100,15 +108,6 @@ #define _PFN_BITS (MAX_POSSIBLE_PHYSMEM_BITS - PAGE_SHIFT) -/* - * Memory for allocating for handle keeps object position by - * encoding and the encoded value has a room - * in least bit(ie, look at obj_to_location). - * We use the bit to synchronize between object access by - * user and migration. - */ -#define HANDLE_PIN_BIT 0 - /* * Head in allocated object should have OBJ_ALLOCATED_TAG * to identify the object was allocated or not. @@ -255,6 +254,8 @@ struct zs_pool { struct inode *inode; struct work_struct free_work; #endif + /* protect page/zspage migration */ + rwlock_t migrate_lock; }; struct zspage { @@ -297,6 +298,9 @@ static void zs_unregister_migration(struct zs_pool *pool); static void migrate_lock_init(struct zspage *zspage); static void migrate_read_lock(struct zspage *zspage); static void migrate_read_unlock(struct zspage *zspage); +static void migrate_write_lock(struct zspage *zspage); +static void migrate_write_lock_nested(struct zspage *zspage); +static void migrate_write_unlock(struct zspage *zspage); static void kick_deferred_free(struct zs_pool *pool); static void init_deferred_free(struct zs_pool *pool); static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage); @@ -308,6 +312,9 @@ static void zs_unregister_migration(struct zs_pool *pool) {} static void migrate_lock_init(struct zspage *zspage) {} static void migrate_read_lock(struct zspage *zspage) {} static void migrate_read_unlock(struct zspage *zspage) {} +static void migrate_write_lock(struct zspage *zspage) {} +static void migrate_write_lock_nested(struct zspage *zspage) {} +static void migrate_write_unlock(struct zspage *zspage) {} static void kick_deferred_free(struct zs_pool *pool) {} static void init_deferred_free(struct zs_pool *pool) {} static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage) {} @@ -359,14 +366,10 @@ static void cache_free_zspage(struct zs_pool *pool, struct zspage *zspage) kmem_cache_free(pool->zspage_cachep, zspage); } +/* class->lock(which owns the handle) synchronizes races */ static void record_obj(unsigned long handle, unsigned long obj) { - /* - * lsb of @obj represents handle lock while other bits - * represent object value the handle is pointing so - * updating shouldn't do store tearing.
- */ - WRITE_ONCE(*(unsigned long *)handle, obj); + *(unsigned long *)handle = obj; } /* zpool driver */ @@ -880,26 +883,6 @@ static bool obj_allocated(struct page *page, void *obj, unsigned long *phandle) return true; } -static inline int testpin_tag(unsigned long handle) -{ - return bit_spin_is_locked(HANDLE_PIN_BIT, (unsigned long *)handle); -} - -static inline int trypin_tag(unsigned long handle) -{ - return bit_spin_trylock(HANDLE_PIN_BIT, (unsigned long *)handle); -} - -static void pin_tag(unsigned long handle) __acquires(bitlock) -{ - bit_spin_lock(HANDLE_PIN_BIT, (unsigned long *)handle); -} - -static void unpin_tag(unsigned long handle) __releases(bitlock) -{ - bit_spin_unlock(HANDLE_PIN_BIT, (unsigned long *)handle); -} - static void reset_page(struct page *page) { __ClearPageMovable(page); @@ -968,6 +951,11 @@ static void free_zspage(struct zs_pool *pool, struct size_class *class, VM_BUG_ON(get_zspage_inuse(zspage)); VM_BUG_ON(list_empty(&zspage->list)); + /* + * Since zs_free couldn't be sleepable, this function cannot call + * lock_page. The page locks trylock_zspage got will be released + * by __free_zspage. + */ if (!trylock_zspage(zspage)) { kick_deferred_free(pool); return; @@ -1263,15 +1251,20 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle, */ BUG_ON(in_interrupt()); - /* From now on, migration cannot move the object */ - pin_tag(handle); - + /* It guarantees it can get zspage from handle safely */ + read_lock(&pool->migrate_lock); obj = handle_to_obj(handle); obj_to_location(obj, &page, &obj_idx); zspage = get_zspage(page); - /* migration cannot move any subpage in this zspage */ + /* + * migration cannot move any zpages in this zspage. Here, class->lock + * is too heavy since callers would take some time until they calls + * zs_unmap_object API so delegate the locking from class to zspage + * which is smaller granularity. + */ migrate_read_lock(zspage); + read_unlock(&pool->migrate_lock); class = zspage_class(pool, zspage); off = (class->size * obj_idx) & ~PAGE_MASK; @@ -1330,7 +1323,6 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle) put_cpu_var(zs_map_area); migrate_read_unlock(zspage); - unpin_tag(handle); } EXPORT_SYMBOL_GPL(zs_unmap_object); @@ -1424,6 +1416,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp) size += ZS_HANDLE_SIZE; class = pool->size_class[get_size_class_index(size)]; + /* class->lock effectively protects the zpage migration */ spin_lock(&class->lock); zspage = find_get_zspage(class); if (likely(zspage)) { @@ -1501,30 +1494,27 @@ void zs_free(struct zs_pool *pool, unsigned long handle) if (unlikely(!handle)) return; - pin_tag(handle); + /* + * The pool->migrate_lock protects the race with zpage's migration + * so it's safe to get the page from handle. 
+ */ + read_lock(&pool->migrate_lock); obj = handle_to_obj(handle); obj_to_page(obj, &f_page); zspage = get_zspage(f_page); - - migrate_read_lock(zspage); class = zspage_class(pool, zspage); - spin_lock(&class->lock); + read_unlock(&pool->migrate_lock); + obj_free(class->size, obj); class_stat_dec(class, OBJ_USED, 1); fullness = fix_fullness_group(class, zspage); - if (fullness != ZS_EMPTY) { - migrate_read_unlock(zspage); + if (fullness != ZS_EMPTY) goto out; - } - migrate_read_unlock(zspage); - /* If zspage is isolated, zs_page_putback will free the zspage */ free_zspage(pool, class, zspage); out: - spin_unlock(&class->lock); - unpin_tag(handle); cache_free_handle(pool, handle); } EXPORT_SYMBOL_GPL(zs_free); @@ -1608,11 +1598,8 @@ static unsigned long find_alloced_obj(struct size_class *class, offset += class->size * index; while (offset < PAGE_SIZE) { - if (obj_allocated(page, addr + offset, &handle)) { - if (trypin_tag(handle)) - break; - handle = 0; - } + if (obj_allocated(page, addr + offset, &handle)) + break; offset += class->size; index++; @@ -1658,7 +1645,6 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class, /* Stop if there is no more space */ if (zspage_full(class, get_zspage(d_page))) { - unpin_tag(handle); ret = -ENOMEM; break; } @@ -1667,15 +1653,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class, free_obj = obj_malloc(pool, get_zspage(d_page), handle); zs_object_copy(class, free_obj, used_obj); obj_idx++; - /* - * record_obj updates handle's value to free_obj and it will - * invalidate lock bit(ie, HANDLE_PIN_BIT) of handle, which - * breaks synchronization using pin_tag(e,g, zs_free) so - * let's keep the lock bit. - */ - free_obj |= BIT(HANDLE_PIN_BIT); record_obj(handle, free_obj); - unpin_tag(handle); obj_free(class->size, used_obj); } @@ -1789,6 +1767,11 @@ static void migrate_write_lock(struct zspage *zspage) write_lock(&zspage->lock); } +static void migrate_write_lock_nested(struct zspage *zspage) +{ + write_lock_nested(&zspage->lock, SINGLE_DEPTH_NESTING); +} + static void migrate_write_unlock(struct zspage *zspage) { write_unlock(&zspage->lock); @@ -1856,11 +1839,10 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage, struct zspage *zspage; struct page *dummy; void *s_addr, *d_addr, *addr; - int offset, pos; + int offset; unsigned long handle; unsigned long old_obj, new_obj; unsigned int obj_idx; - int ret = -EAGAIN; /* * We cannot support the _NO_COPY case here, because copy needs to @@ -1873,32 +1855,25 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage, VM_BUG_ON_PAGE(!PageMovable(page), page); VM_BUG_ON_PAGE(!PageIsolated(page), page); - zspage = get_zspage(page); - - /* Concurrent compactor cannot migrate any subpage in zspage */ - migrate_write_lock(zspage); pool = mapping->private_data; + + /* + * The pool migrate_lock protects the race between zpage migration + * and zs_free. + */ + write_lock(&pool->migrate_lock); + zspage = get_zspage(page); class = zspage_class(pool, zspage); - offset = get_first_obj_offset(page); + /* + * the class lock protects zpage alloc/free in the zspage. + */ spin_lock(&class->lock); - if (!get_zspage_inuse(zspage)) { - /* - * Set "offset" to end of the page so that every loops - * skips unnecessary object scanning. 
- */ - offset = PAGE_SIZE; - } + /* the migrate_write_lock protects zpage access via zs_map_object */ + migrate_write_lock(zspage); - pos = offset; + offset = get_first_obj_offset(page); s_addr = kmap_atomic(page); - while (pos < PAGE_SIZE) { - if (obj_allocated(page, s_addr + pos, &handle)) { - if (!trypin_tag(handle)) - goto unpin_objects; - } - pos += class->size; - } /* * Here, any user cannot access all objects in the zspage so let's move. @@ -1907,25 +1882,30 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage, memcpy(d_addr, s_addr, PAGE_SIZE); kunmap_atomic(d_addr); - for (addr = s_addr + offset; addr < s_addr + pos; + for (addr = s_addr + offset; addr < s_addr + PAGE_SIZE; addr += class->size) { if (obj_allocated(page, addr, &handle)) { - BUG_ON(!testpin_tag(handle)); old_obj = handle_to_obj(handle); obj_to_location(old_obj, &dummy, &obj_idx); new_obj = (unsigned long)location_to_obj(newpage, obj_idx); - new_obj |= BIT(HANDLE_PIN_BIT); record_obj(handle, new_obj); } } + kunmap_atomic(s_addr); replace_sub_page(class, zspage, newpage, page); - get_page(newpage); - + /* + * Since we complete the data copy and set up new zspage structure, + * it's okay to release migration_lock. + */ + write_unlock(&pool->migrate_lock); + spin_unlock(&class->lock); dec_zspage_isolation(zspage); + migrate_write_unlock(zspage); + get_page(newpage); if (page_zone(newpage) != page_zone(page)) { dec_zone_page_state(page, NR_ZSPAGES); inc_zone_page_state(newpage, NR_ZSPAGES); @@ -1933,22 +1913,8 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage, reset_page(page); put_page(page); - page = newpage; - - ret = MIGRATEPAGE_SUCCESS; -unpin_objects: - for (addr = s_addr + offset; addr < s_addr + pos; - addr += class->size) { - if (obj_allocated(page, addr, &handle)) { - BUG_ON(!testpin_tag(handle)); - unpin_tag(handle); - } - } - kunmap_atomic(s_addr); - spin_unlock(&class->lock); - migrate_write_unlock(zspage); - return ret; + return MIGRATEPAGE_SUCCESS; } static void zs_page_putback(struct page *page) @@ -2077,8 +2043,13 @@ static unsigned long __zs_compact(struct zs_pool *pool, struct zspage *dst_zspage = NULL; unsigned long pages_freed = 0; + /* protect the race between zpage migration and zs_free */ + write_lock(&pool->migrate_lock); + /* protect zpage allocation/free */ spin_lock(&class->lock); while ((src_zspage = isolate_zspage(class, true))) { + /* protect someone accessing the zspage(i.e., zs_map_object) */ + migrate_write_lock(src_zspage); if (!zs_can_compact(class)) break; @@ -2087,6 +2058,8 @@ static unsigned long __zs_compact(struct zs_pool *pool, cc.s_page = get_first_page(src_zspage); while ((dst_zspage = isolate_zspage(class, false))) { + migrate_write_lock_nested(dst_zspage); + cc.d_page = get_first_page(dst_zspage); /* * If there is no more space in dst_page, resched @@ -2096,6 +2069,10 @@ static unsigned long __zs_compact(struct zs_pool *pool, break; putback_zspage(class, dst_zspage); + migrate_write_unlock(dst_zspage); + dst_zspage = NULL; + if (rwlock_is_contended(&pool->migrate_lock)) + break; } /* Stop if we couldn't find slot */ @@ -2103,19 +2080,28 @@ static unsigned long __zs_compact(struct zs_pool *pool, break; putback_zspage(class, dst_zspage); + migrate_write_unlock(dst_zspage); + if (putback_zspage(class, src_zspage) == ZS_EMPTY) { + migrate_write_unlock(src_zspage); free_zspage(pool, class, src_zspage); pages_freed += class->pages_per_zspage; - } + } else + migrate_write_unlock(src_zspage); spin_unlock(&class->lock); + 
write_unlock(&pool->migrate_lock); cond_resched(); + write_lock(&pool->migrate_lock); spin_lock(&class->lock); } - if (src_zspage) + if (src_zspage) { putback_zspage(class, src_zspage); + migrate_write_unlock(src_zspage); + } spin_unlock(&class->lock); + write_unlock(&pool->migrate_lock); return pages_freed; } @@ -2221,6 +2207,7 @@ struct zs_pool *zs_create_pool(const char *name) return NULL; init_deferred_free(pool); + rwlock_init(&pool->migrate_lock); pool->name = kstrdup(name, GFP_KERNEL); if (!pool->name)

From patchwork Mon Nov 15 18:59:09 2021
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 12620235
From: Minchan Kim
To: Andrew Morton
Cc: Sergey Senozhatsky, linux-mm, LKML, Minchan Kim, Mike Galbraith, Thomas Gleixner, Sebastian Andrzej Siewior
Subject: [PATCH v2 9/9] zsmalloc: replace get_cpu_var with local_lock
Date: Mon, 15 Nov 2021 10:59:09 -0800
Message-Id: <20211115185909.3949505-10-minchan@kernel.org>
In-Reply-To: <20211115185909.3949505-1-minchan@kernel.org>
References: <20211115185909.3949505-1-minchan@kernel.org>

From: Mike Galbraith

The usage of get_cpu_var() in zs_map_object() is problematic because it disables preemption and makes it impossible to acquire any sleeping lock on PREEMPT_RT, such as a spinlock_t. Replace the get_cpu_var() usage with a local_lock_t which is embedded in struct mapping_area. It ensures that access to the struct is synchronized against all users on the same CPU.
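A minimal sketch of that pattern (an editor's illustration; demo_area and demo_map are made-up names, and the real change is in the diff below): a local_lock_t embedded in the per-CPU struct replaces the get_cpu_var()/put_cpu_var() pair:

#include <linux/local_lock.h>
#include <linux/percpu.h>

struct demo_area {
        local_lock_t lock;      /* protects this CPU's instance */
        char *vm_buf;
};

static DEFINE_PER_CPU(struct demo_area, demo_area) = {
        .lock = INIT_LOCAL_LOCK(lock),
};

static void demo_map(void)
{
        struct demo_area *area;

        /*
         * On !PREEMPT_RT, local_lock() disables preemption just like
         * get_cpu_var() did; on PREEMPT_RT it is a per-CPU sleeping
         * lock, so a spinlock_t may still be acquired inside the
         * critical section.
         */
        local_lock(&demo_area.lock);
        area = this_cpu_ptr(&demo_area);
        /* ... use area->vm_buf ... */
        local_unlock(&demo_area.lock);
}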
Signed-off-by: Mike Galbraith
Signed-off-by: Thomas Gleixner
Signed-off-by: Sebastian Andrzej Siewior
[minchan: remove the bit_spin_lock part and change the title]
Signed-off-by: Minchan Kim
--- mm/zsmalloc.c | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 5d4c4d254679..7e03cc9363bb 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -65,6 +65,7 @@ #include <linux/wait.h> #include <linux/pagemap.h> #include <linux/fs.h> +#include <linux/local_lock.h> #define ZSPAGE_MAGIC 0x58 @@ -276,6 +277,7 @@ struct zspage { }; struct mapping_area { + local_lock_t lock; char *vm_buf; /* copy buffer for objects that span pages */ char *vm_addr; /* address of kmap_atomic()'ed pages */ enum zs_mapmode vm_mm; /* mapping mode */ @@ -451,7 +453,9 @@ MODULE_ALIAS("zpool-zsmalloc"); #endif /* CONFIG_ZPOOL */ /* per-cpu VM mapping areas for zspage accesses that cross page boundaries */ -static DEFINE_PER_CPU(struct mapping_area, zs_map_area); +static DEFINE_PER_CPU(struct mapping_area, zs_map_area) = { + .lock = INIT_LOCAL_LOCK(lock), +}; static __maybe_unused int is_first_page(struct page *page) { @@ -1269,7 +1273,8 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle, class = zspage_class(pool, zspage); off = (class->size * obj_idx) & ~PAGE_MASK; - area = &get_cpu_var(zs_map_area); + local_lock(&zs_map_area.lock); + area = this_cpu_ptr(&zs_map_area); area->vm_mm = mm; if (off + class->size <= PAGE_SIZE) { /* this object is contained entirely within a page */ @@ -1320,7 +1325,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle) __zs_unmap_object(area, pages, off, class->size); } - put_cpu_var(zs_map_area); + local_unlock(&zs_map_area.lock); migrate_read_unlock(zspage); }
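As a closing recap, the lock ordering that patch 8/9 documents at the top of mm/zsmalloc.c shapes the migration write path once the whole series is applied. The following is an editor's schematic of that ordering (not the literal zs_page_migrate() body; the page lock is already held by the migration core when this path is entered):

/* lock ordering: page_lock -> pool->migrate_lock -> class->lock -> zspage->lock */
write_lock(&pool->migrate_lock);        /* vs. handle lookup in zs_free()/zs_map_object() */
spin_lock(&class->lock);                /* vs. object alloc/free within the size class */
migrate_write_lock(zspage);             /* vs. readers currently mapping objects */

/* ... copy page contents, rewrite handles, replace the sub-page ... */

write_unlock(&pool->migrate_lock);      /* released in the order zs_page_migrate() uses */
spin_unlock(&class->lock);
migrate_write_unlock(zspage);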