From patchwork Tue Jul 31 09:06:47 2018
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 10550495
From: Vlastimil Babka <vbabka@suse.cz>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-api@vger.kernel.org,
	Roman Gushchin, Michal Hocko, Johannes Weiner, Christoph Lameter,
	David Rientjes, Joonsoo Kim, Mel Gorman, Matthew Wilcox,
	Vlastimil Babka, Vijayanand Jitta, Laura Abbott, Sumit Semwal
Subject: [PATCH v4 4/6] mm: rename and change semantics of nr_indirectly_reclaimable_bytes
Date: Tue, 31 Jul 2018 11:06:47 +0200
Message-Id: <20180731090649.16028-5-vbabka@suse.cz>
In-Reply-To: <20180731090649.16028-1-vbabka@suse.cz>
References: <20180731090649.16028-1-vbabka@suse.cz>
The vmstat counter NR_INDIRECTLY_RECLAIMABLE_BYTES was introduced by
commit eb59254608bc ("mm: introduce NR_INDIRECTLY_RECLAIMABLE_BYTES")
with the goal of accounting objects that can be reclaimed, but cannot
be allocated via a SLAB_RECLAIM_ACCOUNT cache. This is now possible
via kmalloc() with __GFP_RECLAIMABLE flag, and the dcache external
names user is converted.

The counter is however still useful for accounting direct page
allocations (i.e. not slab) with a shrinker, such as the ION page
pool. So keep it, and:

- change granularity to pages to be more like other counters; sub-page
  allocations should be able to use kmalloc
- rename the counter to NR_KERNEL_MISC_RECLAIMABLE
- expose the counter again in vmstat as "nr_kernel_misc_reclaimable";
  we can again remove the check for not printing "hidden" counters

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Vijayanand Jitta
Cc: Laura Abbott
Cc: Sumit Semwal
Acked-by: Christoph Lameter
Acked-by: Roman Gushchin
---
 drivers/staging/android/ion/ion_page_pool.c |  8 ++++----
 include/linux/mmzone.h                      |  2 +-
 mm/page_alloc.c                             | 19 +++++++------------
 mm/util.c                                   |  3 +--
 mm/vmstat.c                                 |  6 +-----
 5 files changed, 14 insertions(+), 24 deletions(-)

diff --git a/drivers/staging/android/ion/ion_page_pool.c b/drivers/staging/android/ion/ion_page_pool.c
index 9bc56eb48d2a..0d2a95957ee8 100644
--- a/drivers/staging/android/ion/ion_page_pool.c
+++ b/drivers/staging/android/ion/ion_page_pool.c
@@ -33,8 +33,8 @@ static void ion_page_pool_add(struct ion_page_pool *pool, struct page *page)
 		pool->low_count++;
 	}
 
-	mod_node_page_state(page_pgdat(page), NR_INDIRECTLY_RECLAIMABLE_BYTES,
-			    (1 << (PAGE_SHIFT + pool->order)));
+	mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
+			    1 << pool->order);
 	mutex_unlock(&pool->mutex);
 }
 
@@ -53,8 +53,8 @@ static struct page *ion_page_pool_remove(struct ion_page_pool *pool, bool high)
 	}
 
 	list_del(&page->lru);
-	mod_node_page_state(page_pgdat(page), NR_INDIRECTLY_RECLAIMABLE_BYTES,
-			    -(1 << (PAGE_SHIFT + pool->order)));
+	mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
+			    -(1 << pool->order));
 	return page;
 }

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 32699b2dc52a..c2f6bc4c9e8a 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -180,7 +180,7 @@ enum node_stat_item {
 	NR_VMSCAN_IMMEDIATE,	/* Prioritise for reclaim when writeback ends */
 	NR_DIRTIED,		/* page dirtyings since bootup */
 	NR_WRITTEN,		/* page writings since bootup */
-	NR_INDIRECTLY_RECLAIMABLE_BYTES, /* measured in bytes */
+	NR_KERNEL_MISC_RECLAIMABLE,	/* reclaimable non-slab kernel pages */
 	NR_VM_NODE_STAT_ITEMS
 };

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5d800d61ddb7..91f75bf4404d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4704,6 +4704,7 @@ long si_mem_available(void)
 	unsigned long pagecache;
 	unsigned long wmark_low = 0;
 	unsigned long pages[NR_LRU_LISTS];
+	unsigned long reclaimable;
 	struct zone *zone;
 	int lru;
 
@@ -4729,19 +4730,13 @@ long si_mem_available(void)
 	available += pagecache;
 
 	/*
-	 * Part of the reclaimable slab consists of items that are in use,
-	 * and cannot be freed. Cap this estimate at the low watermark.
+	 * Part of the reclaimable slab and other kernel memory consists of
+	 * items that are in use, and cannot be freed. Cap this estimate at the
+	 * low watermark.
 	 */
-	available += global_node_page_state(NR_SLAB_RECLAIMABLE) -
-		     min(global_node_page_state(NR_SLAB_RECLAIMABLE) / 2,
-			 wmark_low);
-
-	/*
-	 * Part of the kernel memory, which can be released under memory
-	 * pressure.
-	 */
-	available += global_node_page_state(NR_INDIRECTLY_RECLAIMABLE_BYTES) >>
-		PAGE_SHIFT;
+	reclaimable = global_node_page_state(NR_SLAB_RECLAIMABLE) +
+		global_node_page_state(NR_KERNEL_MISC_RECLAIMABLE);
+	available += reclaimable - min(reclaimable / 2, wmark_low);
 
 	if (available < 0)
 		available = 0;

diff --git a/mm/util.c b/mm/util.c
index 3351659200e6..891f0654e7b5 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -675,8 +675,7 @@ int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin)
 		 * Part of the kernel memory, which can be released
 		 * under memory pressure.
 		 */
-		free += global_node_page_state(
-			NR_INDIRECTLY_RECLAIMABLE_BYTES) >> PAGE_SHIFT;
+		free += global_node_page_state(NR_KERNEL_MISC_RECLAIMABLE);
 
 		/*
 		 * Leave reserved pages. The pages are not for anonymous pages.

diff --git a/mm/vmstat.c b/mm/vmstat.c
index 8ba0870ecddd..c5e52f94ba5f 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1161,7 +1161,7 @@ const char * const vmstat_text[] = {
 	"nr_vmscan_immediate_reclaim",
 	"nr_dirtied",
 	"nr_written",
-	"", /* nr_indirectly_reclaimable */
+	"nr_kernel_misc_reclaimable",
 
 	/* enum writeback_stat_item counters */
 	"nr_dirty_threshold",
@@ -1704,10 +1704,6 @@ static int vmstat_show(struct seq_file *m, void *arg)
 	unsigned long *l = arg;
 	unsigned long off = l - (unsigned long *)m->private;
 
-	/* Skip hidden vmstat items. */
-	if (*vmstat_text[off] == '\0')
-		return 0;
-
 	seq_puts(m, vmstat_text[off]);
 	seq_put_decimal_ull(m, " ", *l);
 	seq_putc(m, '\n');