From patchwork Wed Jun 24 09:28:43 2020
From: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
To: Andrew Morton
Cc: Srikar Dronamraju, linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Michal Hocko, Mel Gorman, Vlastimil Babka,
    "Kirill A. Shutemov", Christopher Lameter, Michael Ellerman,
    Linus Torvalds, Gautham R Shenoy, Satheesh Rajendran, David Hildenbrand
Subject: [PATCH v5 0/3] Offline memoryless cpuless node 0
Date: Wed, 24 Jun 2020 14:58:43 +0530
Message-Id: <20200624092846.9194-1-srikar@linux.vnet.ibm.com>

Changelog v4 -> v5:
- Rebased to v5.8-rc2
Link v4: http://lore.kernel.org/lkml/20200512132937.19295-1-srikar@linux.vnet.ibm.com/t/#u

Changelog v3 -> v4:
- Resolved comments from Christopher.
Link v3: http://lore.kernel.org/lkml/20200501031128.19584-1-srikar@linux.vnet.ibm.com/t/#u

Changelog v2 -> v3:
- Resolved comments from Gautham.
Link v2: https://lore.kernel.org/linuxppc-dev/20200428093836.27190-1-srikar@linux.vnet.ibm.com/t/#u

Changelog v1 -> v2:
- Rebased to v5.7-rc3
- Updated the changelog.
Link v1: https://lore.kernel.org/linuxppc-dev/20200311110237.5731-1-srikar@linux.vnet.ibm.com/t/#u

A Linux kernel configured with CONFIG_NUMA, on a system with multiple
possible nodes, marks node 0 as online at boot. In practice, however,
there are systems where node 0 is memoryless and cpuless. This can cause:

1. numa_balancing to be enabled on systems that really have only one
   node with cpus and memory.
2. A dummy (cpuless and memoryless) node to exist, which can confuse
   users/scripts looking at the output of lscpu / numactl.

This patchset corrects this anomaly. It should only affect systems that
have CONFIG_MEMORYLESS_NODES; currently only two architectures, ia64 and
powerpc, have this config.

Note: Patch 3 in this series depends on patches 1 and 2. Without patches
1 and 2, patch 3 might crash powerpc.

v5.8-rc2
--------
available: 2 nodes (0,2)
node 0 cpus:
node 0 size: 0 MB
node 0 free: 0 MB
node 2 cpus: 0 1 2 3 4 5 6 7
node 2 size: 32625 MB
node 2 free: 31490 MB
node distances:
node   0   2
  0:  10  20
  2:  20  10

proc and sys files
------------------
/sys/devices/system/node/online:            0,2
/proc/sys/kernel/numa_balancing:            1
/sys/devices/system/node/has_cpu:           2
/sys/devices/system/node/has_memory:        2
/sys/devices/system/node/has_normal_memory: 2
/sys/devices/system/node/possible:          0-31

v5.8-rc2 + patches
------------------
available: 1 nodes (2)
node 2 cpus: 0 1 2 3 4 5 6 7
node 2 size: 32625 MB
node 2 free: 31487 MB
node distances:
node   2
  2:  10

proc and sys files
------------------
/sys/devices/system/node/online:            2
/proc/sys/kernel/numa_balancing:            0
/sys/devices/system/node/has_cpu:           2
/sys/devices/system/node/has_memory:        2
/sys/devices/system/node/has_normal_memory: 2
/sys/devices/system/node/possible:          0-31

Problems caused by the dummy node:

1. User space applications like numactl and lscpu that parse sysfs
   believe there is an extra online node. This tends to confuse users
   and applications: they start believing the system was unable to use
   all of its resources (i.e. resources are missing), or that the system
   was not set up correctly.
2. The dummy node leads to inconsistent information: the number of
   online nodes disagrees with the information in the device tree and
   resource dump.
3. With the dummy node present, single-node non-NUMA systems end up
   showing up as NUMA systems, and numa_balancing gets enabled.
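Whether balancing turns on hinges on the online-node count reported by
the sysfs files above. As a quick sanity check, here is a small
hypothetical helper (not part of this patchset; `count_nodes` is an
illustrative name) that counts the nodes in a sysfs-style node list such
as the contents of /sys/devices/system/node/online:

```shell
# Hypothetical helper, not part of this patchset: count the nodes in a
# sysfs-style node list ("0,2", "2", "0-31", ...). NUMA balancing is
# only worthwhile when this count is greater than 1.
count_nodes() {
    local total=0 part lo hi
    IFS=',' read -ra parts <<< "$1"
    for part in "${parts[@]}"; do
        if [[ $part == *-* ]]; then
            # A range like "0-31" contributes (hi - lo + 1) nodes.
            lo=${part%-*}; hi=${part#*-}
            total=$(( total + hi - lo + 1 ))
        else
            total=$(( total + 1 ))
        fi
    done
    echo "$total"
}

count_nodes "0,2"   # v5.8-rc2: the dummy node 0 makes this 2
count_nodes "2"     # with the patches applied: 1
```

On the affected machine, feeding it `$(cat /sys/devices/system/node/online)`
would report 2 nodes before the patches and 1 after.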
This means we take the hit from unnecessary NUMA hinting faults. On a
machine with just one node, whose node number is not 0, the current
setup ends up showing 2 online nodes; and whenever there is more than
one online node, numa_balancing gets enabled.

Without patch
-------------
$ grep numa /proc/vmstat
numa_hit 95179
numa_miss 0
numa_foreign 0
numa_interleave 3764
numa_local 95179
numa_other 0
numa_pte_updates 1206973        <----------
numa_huge_pte_updates 4654      <----------
numa_hint_faults 19560          <----------
numa_hint_faults_local 19560    <----------
numa_pages_migrated 0

With patch
----------
$ grep numa /proc/vmstat
numa_hit 322338756
numa_miss 0
numa_foreign 0
numa_interleave 3790
numa_local 322338756
numa_other 0
numa_pte_updates 0              <----------
numa_huge_pte_updates 0         <----------
numa_hint_faults 0              <----------
numa_hint_faults_local 0        <----------
numa_pages_migrated 0

Here are 2 sample NUMA benchmarks. numa01.sh runs a set of 2 processes,
each with as many threads as there are cpus; each thread does 50 loops
of operations on 3GB of process-shared memory. numa02.sh runs a single
process with as many threads as cpus; each thread does 800 loops of
operations on 32MB of thread-local memory.
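The benchmark scripts themselves are not included in this cover letter.
The following is only a rough, scaled-down sketch of the numa02.sh-style
workload described above (one worker per CPU, each looping over private
memory); the loop count and buffer size are stand-ins so it finishes
quickly anywhere:

```shell
# Rough sketch only -- the real numa02.sh is not part of this cover
# letter. One background job per online CPU, each doing LOOPS passes
# over a private buffer (the described workload uses 800 loops on 32MB
# of thread-local memory).
NCPUS=$(getconf _NPROCESSORS_ONLN)
LOOPS=2      # stand-in for 800
BUF_KB=64    # stand-in for 32MB

for cpu in $(seq 1 "$NCPUS"); do
    (
        for i in $(seq 1 "$LOOPS"); do
            # Touch BUF_KB of data per pass in this worker.
            dd if=/dev/zero of=/dev/null bs=1K count="$BUF_KB" 2>/dev/null
        done
    ) &
done
wait
echo "done: $NCPUS workers x $LOOPS loops each"
```

On a dummy-node system, a workload like this is what generates the NUMA
hinting faults counted in the tables below.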
Without patch
-------------
Testcase     Time:     Min      Max      Avg    StdDev
./numa01.sh  Real:  149.62   149.66   149.64     0.02
./numa01.sh  Sys:     3.21     3.71     3.46     0.25
./numa01.sh  User: 4755.13  4758.15  4756.64     1.51
./numa02.sh  Real:   24.98    25.02    25.00     0.02
./numa02.sh  Sys:     0.51     0.59     0.55     0.04
./numa02.sh  User:  790.28   790.88   790.58     0.30

With patch
----------
Testcase     Time:     Min      Max      Avg    StdDev   %Change
./numa01.sh  Real:  149.44   149.46   149.45     0.01    0.127133%
./numa01.sh  Sys:     0.71     0.89     0.80     0.09    332.5%
./numa01.sh  User: 4754.19  4754.48  4754.33     0.15    0.0485873%
./numa02.sh  Real:   24.97    24.98    24.98     0.00    0.0800641%
./numa02.sh  Sys:     0.26     0.41     0.33     0.08    66.6667%
./numa02.sh  User:  789.75   790.28   790.01     0.27    0.072151%

numa01.sh
param                    no_patch   with_patch   %Change
-----                    --------   ----------   -------
numa_hint_faults          1131164            0   -100%
numa_hint_faults_local    1131164            0   -100%
numa_hit                   213696       214244   0.256439%
numa_local                 213696       214244   0.256439%
numa_pte_updates          1131294            0   -100%
pgfault                   1380845       241424   -82.5162%
pgmajfault                     75           60   -20%

numa02.sh
param                    no_patch   with_patch   %Change
-----                    --------   ----------   -------
numa_hint_faults           111878            0   -100%
numa_hint_faults_local     111878            0   -100%
numa_hit                    41854        43220   3.26373%
numa_local                  41854        43220   3.26373%
numa_pte_updates           113926            0   -100%
pgfault                    163662        51210   -68.7099%
pgmajfault                     56           52   -7.14286%

Observations: Real time and user time barely change, but system time
changes noticeably. The reason is the number of NUMA hinting faults:
with the patch applied, we no longer see any NUMA hinting faults.

Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Cc: Michal Hocko
Cc: Mel Gorman
Cc: Vlastimil Babka
Cc: "Kirill A. Shutemov"
Cc: Christopher Lameter
Cc: Michael Ellerman
Cc: Andrew Morton
Cc: Linus Torvalds
Cc: Gautham R Shenoy
Cc: Satheesh Rajendran
Cc: David Hildenbrand

Srikar Dronamraju (3):
  powerpc/numa: Set numa_node for all possible cpus
  powerpc/numa: Prefer node id queried from vphn
  mm/page_alloc: Keep memoryless cpuless node 0 offline

 arch/powerpc/mm/numa.c | 35 +++++++++++++++++++++++++----------
 mm/page_alloc.c        |  4 +++-
 2 files changed, 28 insertions(+), 11 deletions(-)
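A note on reading the %Change columns above: they appear to use two
conventions. The time tables report (old - new) / new * 100 (so a drop
in numa01.sh Sys time from 3.46 to 0.80 shows as 332.5%), while the
vmstat tables report (new - old) / old * 100. A small hypothetical
helper (`pct_change` is an illustrative name) reproduces the first
convention:

```shell
# Hypothetical helper matching the %Change column of the time tables:
# the improvement is expressed relative to the new (patched) value.
pct_change() {
    awk -v old="$1" -v new="$2" \
        'BEGIN { printf "%.6g%%\n", (old - new) / new * 100 }'
}

pct_change 3.46 0.80       # numa01.sh Sys avg  -> 332.5%
pct_change 149.64 149.45   # numa01.sh Real avg -> 0.127133%
```

Swapping the operands and the denominator gives the vmstat convention,
e.g. (214244 - 213696) / 213696 * 100 = 0.256439% for numa_hit.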