From patchwork Tue Sep  1 02:50:10 2020
X-Patchwork-Submitter: Alex Shi
X-Patchwork-Id: 11747255
From: Alex Shi
To: Anshuman Khandual, David Hildenbrand, Matthew Wilcox
Cc: Andrew Morton, Mel Gorman, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 1/3] mm/pageblock: mitigate cmpxchg false sharing in pageblock flags
Date: Tue, 1 Sep 2020 10:50:10 +0800
Message-Id: <1598928612-68996-1-git-send-email-alex.shi@linux.alibaba.com>

pageblock_flags is currently stored as unsigned long. Since each pageblock
needs only 4 flag bits, one long carries the flags of 8 (on 32-bit machines)
or 16 pageblocks, so setting one pageblock's flags has to cmpxchg against the
flags of 7 or 15 unrelated pageblocks. That can mean a long wait for the
cmpxchg to succeed.

If we store pageblock_flags as unsigned char instead, we can use a char-sized
cmpxchg, so each update shares its byte with only one other pageblock's flags.
This relieves much of the false sharing in the cmpxchg.

With this and the next patch, mmtests/thpscale gets slightly faster on my
4-core box, and the number of cmpxchg retries is reduced:

                   pageblock  pageblock  pageblock   rc2      rc2      rc2
                   16         16-2       16-3        a        b        c
Duration User          14.81      15.24      14.55    14.76    14.97    14.38
Duration System        84.44      88.38      90.64   100.43    89.15    88.89
Duration Elapsed       98.83      99.06      99.81   100.30    99.24    99.14

rc2 is the 5.9-rc2 kernel; pageblock is 5.9-rc2 plus this patchset.

Signed-off-by: Alex Shi
Cc: Andrew Morton
Cc: Mel Gorman
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 include/linux/mmzone.h          |  6 +++---
 include/linux/pageblock-flags.h |  2 +-
 mm/page_alloc.c                 | 38 +++++++++++++++++++-------------------
 3 files changed, 23 insertions(+), 23 deletions(-)
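[Editor's note: to make the false-sharing argument concrete, below is a minimal
stand-alone sketch of the two update schemes. It is ordinary user-space C, not
kernel code; the helper names and the GCC __sync_val_compare_and_swap builtin
merely stand in for the kernel's cmpxchg().]

/* Illustrative sketch: word-wide vs byte-wide CAS over packed 4-bit flags. */
#include <stdio.h>

#define FLAG_BITS	4			/* bits per pageblock, like NR_PAGEBLOCK_BITS */
#define FLAG_MASK	((1UL << FLAG_BITS) - 1)

/* Word-wide update: one CAS covers 16 pageblocks on a 64-bit machine. */
static void set_block_flags_word(unsigned long *bitmap, unsigned long blockidx,
				 unsigned long flags)
{
	unsigned long bitidx = blockidx * FLAG_BITS;
	unsigned long word_idx = bitidx / (8 * sizeof(unsigned long));
	unsigned long shift = bitidx % (8 * sizeof(unsigned long));
	unsigned long old, desired, cur = bitmap[word_idx];

	do {
		old = cur;
		desired = (old & ~(FLAG_MASK << shift)) | (flags << shift);
		/* A concurrent update to any of the other 15 blocks in this
		 * word forces a retry. */
		cur = __sync_val_compare_and_swap(&bitmap[word_idx], old, desired);
	} while (cur != old);
}

/* Byte-wide update: one CAS covers only 2 pageblocks. */
static void set_block_flags_byte(unsigned char *bitmap, unsigned long blockidx,
				 unsigned char flags)
{
	unsigned long bitidx = blockidx * FLAG_BITS;
	unsigned long byte_idx = bitidx / 8;
	unsigned long shift = bitidx % 8;
	unsigned char old, desired, cur = bitmap[byte_idx];

	do {
		old = cur;
		desired = (old & ~(FLAG_MASK << shift)) | (flags << shift);
		/* Only the single neighbouring block sharing this byte can
		 * force a retry. */
		cur = __sync_val_compare_and_swap(&bitmap[byte_idx], old, desired);
	} while (cur != old);
}

int main(void)
{
	unsigned long words[2] = { 0 };
	unsigned char bytes[16] = { 0 };

	set_block_flags_word(words, 5, 0x3);	/* blocks 0..15 contend on words[0] */
	set_block_flags_byte(bytes, 5, 0x3);	/* only blocks 4 and 5 share bytes[2] */

	printf("word layout: %#lx, byte layout: %#x\n", words[0], bytes[2]);
	return 0;
}

The retry loop is the same in both cases; only the width of the memory word
covered by the CAS, and therefore the number of unrelated writers that can
invalidate it, changes.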
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 8379432f4f2f..be676e659fb7 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -437,7 +437,7 @@ struct zone {
 	 * Flags for a pageblock_nr_pages block. See pageblock-flags.h.
 	 * In SPARSEMEM, this map is stored in struct mem_section
 	 */
-	unsigned long		*pageblock_flags;
+	unsigned char		*pageblock_flags;
 #endif /* CONFIG_SPARSEMEM */
 
 	/* zone_start_pfn == zone_start_paddr >> PAGE_SHIFT */
@@ -1159,7 +1159,7 @@ struct mem_section_usage {
 	DECLARE_BITMAP(subsection_map, SUBSECTIONS_PER_SECTION);
 #endif
 	/* See declaration of similar field in struct zone */
-	unsigned long pageblock_flags[0];
+	unsigned char pageblock_flags[0];
 };
 
 void subsection_map_init(unsigned long pfn, unsigned long nr_pages);
@@ -1212,7 +1212,7 @@ struct mem_section {
 extern struct mem_section mem_section[NR_SECTION_ROOTS][SECTIONS_PER_ROOT];
 #endif
 
-static inline unsigned long *section_to_usemap(struct mem_section *ms)
+static inline unsigned char *section_to_usemap(struct mem_section *ms)
 {
 	return ms->usage->pageblock_flags;
 }
diff --git a/include/linux/pageblock-flags.h b/include/linux/pageblock-flags.h
index fff52ad370c1..d189441568eb 100644
--- a/include/linux/pageblock-flags.h
+++ b/include/linux/pageblock-flags.h
@@ -54,7 +54,7 @@ enum pageblock_bits {
 /* Forward declaration */
 struct page;
 
-unsigned long get_pfnblock_flags_mask(struct page *page,
+unsigned char get_pfnblock_flags_mask(struct page *page,
 				unsigned long pfn,
 				unsigned long mask);
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fab5e97dc9ca..81e96d4d9c42 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -445,7 +445,7 @@ static inline bool defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 #endif
 
 /* Return a pointer to the bitmap storing bits affecting a block of pages */
-static inline unsigned long *get_pageblock_bitmap(struct page *page,
+static inline unsigned char *get_pageblock_bitmap(struct page *page,
 							unsigned long pfn)
 {
 #ifdef CONFIG_SPARSEMEM
@@ -474,24 +474,24 @@ static inline int pfn_to_bitidx(struct page *page, unsigned long pfn)
  * Return: pageblock_bits flags
  */
 static __always_inline
-unsigned long __get_pfnblock_flags_mask(struct page *page,
+unsigned char __get_pfnblock_flags_mask(struct page *page,
 					unsigned long pfn,
 					unsigned long mask)
 {
-	unsigned long *bitmap;
-	unsigned long bitidx, word_bitidx;
-	unsigned long word;
+	unsigned char *bitmap;
+	unsigned long bitidx, byte_bitidx;
+	unsigned char byte;
 
 	bitmap = get_pageblock_bitmap(page, pfn);
 	bitidx = pfn_to_bitidx(page, pfn);
-	word_bitidx = bitidx / BITS_PER_LONG;
-	bitidx &= (BITS_PER_LONG-1);
+	byte_bitidx = bitidx / BITS_PER_BYTE;
+	bitidx &= (BITS_PER_BYTE-1);
 
-	word = bitmap[word_bitidx];
-	return (word >> bitidx) & mask;
+	byte = bitmap[byte_bitidx];
+	return (byte >> bitidx) & mask;
 }
 
-unsigned long get_pfnblock_flags_mask(struct page *page, unsigned long pfn,
+unsigned char get_pfnblock_flags_mask(struct page *page, unsigned long pfn,
 					unsigned long mask)
 {
 	return __get_pfnblock_flags_mask(page, pfn, mask);
@@ -513,29 +513,29 @@ void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
 					unsigned long pfn,
 					unsigned long mask)
 {
-	unsigned long *bitmap;
-	unsigned long bitidx, word_bitidx;
-	unsigned long old_word, word;
+	unsigned char *bitmap;
+	unsigned long bitidx, byte_bitidx;
+	unsigned char old_byte, byte;
 
 	BUILD_BUG_ON(NR_PAGEBLOCK_BITS != 4);
 	BUILD_BUG_ON(MIGRATE_TYPES > (1 << PB_migratetype_bits));
 
 	bitmap = get_pageblock_bitmap(page, pfn);
 	bitidx = pfn_to_bitidx(page, pfn);
-	word_bitidx = bitidx / BITS_PER_LONG;
-	bitidx &= (BITS_PER_LONG-1);
+	byte_bitidx = bitidx / BITS_PER_BYTE;
+	bitidx &= (BITS_PER_BYTE-1);
 
 	VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page);
 
 	mask <<= bitidx;
 	flags <<= bitidx;
 
-	word = READ_ONCE(bitmap[word_bitidx]);
+	byte = READ_ONCE(bitmap[byte_bitidx]);
 	for (;;) {
-		old_word = cmpxchg(&bitmap[word_bitidx], word, (word & ~mask) | flags);
-		if (word == old_word)
+		old_byte = cmpxchg(&bitmap[byte_bitidx], byte, (byte & ~mask) | flags);
+		if (byte == old_byte)
 			break;
-		word = old_word;
+		byte = old_byte;
 	}
 }

From patchwork Tue Sep  1 02:50:11 2020
X-Patchwork-Submitter: Alex Shi
X-Patchwork-Id: 11747257
From: Alex Shi
To: Anshuman Khandual, David Hildenbrand, Matthew Wilcox
Cc: Andrew Morton, Mel Gorman, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 2/3] mm/pageblock: remove false sharing in pageblock_flags
Date: Tue, 1 Sep 2020 10:50:11 +0800
Message-Id: <1598928612-68996-2-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1598928612-68996-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1598928612-68996-1-git-send-email-alex.shi@linux.alibaba.com>

Each pageblock currently uses only 4 flag bits, so two pageblocks still have
to share one char in the cmpxchg when their flags are set, and that false
sharing costs performance. If we increase the flag bits to 8, the false
sharing in the cmpxchg is gone entirely. The only cost is half a char of
bitmap per pageblock, which is half a char per 128MB on x86, or 4 chars per
1GB.

Signed-off-by: Alex Shi
Cc: Andrew Morton
Cc: Mel Gorman
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/pageblock-flags.h | 2 +-
 mm/page_alloc.c                 | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
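[Editor's note: the effect of going from 4 to 8 bits per pageblock is easiest
to see in the bitmap index arithmetic. Below is a minimal user-space sketch;
show_layout is a made-up helper, and only the divide/modulo mirrors the
kernel's pfn_to_bitidx()/byte_bitidx computation.]

/* Stand-alone illustration of how many pageblocks land in the same bitmap
 * byte for 4-bit versus 8-bit flag fields. */
#include <stdio.h>

static void show_layout(unsigned int bits_per_block)
{
	printf("NR_PAGEBLOCK_BITS = %u:\n", bits_per_block);
	for (unsigned int block = 0; block < 4; block++) {
		unsigned int bitidx = block * bits_per_block;
		printf("  pageblock %u -> byte %u, bit offset %u\n",
		       block, bitidx / 8, bitidx % 8);
	}
}

int main(void)
{
	show_layout(4);	/* blocks 0/1 and 2/3 share a byte: cmpxchg can still collide */
	show_layout(8);	/* every block owns a full byte: no sharing left */
	return 0;
}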
diff --git a/include/linux/pageblock-flags.h b/include/linux/pageblock-flags.h
index d189441568eb..f785c9d6d68c 100644
--- a/include/linux/pageblock-flags.h
+++ b/include/linux/pageblock-flags.h
@@ -25,7 +25,7 @@ enum pageblock_bits {
 	 * Assume the bits will always align on a word. If this assumption
 	 * changes then get/set pageblock needs updating.
 	 */
-	NR_PAGEBLOCK_BITS
+	NR_PAGEBLOCK_BITS = BITS_PER_BYTE
 };
 
 #ifdef CONFIG_HUGETLB_PAGE
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 81e96d4d9c42..60342e764090 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -517,7 +517,7 @@ void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
 	unsigned long bitidx, byte_bitidx;
 	unsigned char old_byte, byte;
 
-	BUILD_BUG_ON(NR_PAGEBLOCK_BITS != 4);
+	BUILD_BUG_ON(NR_PAGEBLOCK_BITS != BITS_PER_BYTE);
 	BUILD_BUG_ON(MIGRATE_TYPES > (1 << PB_migratetype_bits));
 
 	bitmap = get_pageblock_bitmap(page, pfn);

From patchwork Tue Sep  1 02:50:12 2020
X-Patchwork-Submitter: Alex Shi
X-Patchwork-Id: 11747253
From: Alex Shi
To: Anshuman Khandual, David Hildenbrand, Matthew Wilcox
Cc: Andrew Morton, Baolin Wang, Russell King, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 3/3] mm/armv6: work around armv6 cmpxchg support issue
Date: Tue, 1 Sep 2020 10:50:12 +0800
Message-Id: <1598928612-68996-3-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1598928612-68996-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1598928612-68996-1-git-send-email-alex.shi@linux.alibaba.com>

ARMv6 cannot provide a 1-byte cmpxchg (cmpxchg1), so we have to fall back to
the 4-byte cmpxchg (cmpxchg4) there. Otherwise the build fails to link:

  arm-linux-gnueabi-ld: mm/page_alloc.o: in function `set_pfnblock_flags_mask':
  (.text+0x32b4): undefined reference to `__bad_cmpxchg'
  arm-linux-gnueabi-ld: (.text+0x32e0): undefined reference to `__bad_cmpxchg'

Reported-by: kernel test robot
Signed-off-by: Alex Shi
Cc: Andrew Morton
Cc: Baolin Wang
Cc: Russell King
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
---
 include/linux/mmzone.h | 15 ++++++++++++---
 mm/page_alloc.c        | 24 ++++++++++++------------
 2 files changed, 24 insertions(+), 15 deletions(-)
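[Editor's note: for context, the `__bad_cmpxchg` link error comes from a
common kernel trick: the architecture's size-dispatching cmpxchg helper routes
unsupported operand sizes to a function that is declared but never defined, so
misuse is caught at link time instead of being miscompiled. Below is a rough,
simplified model of that pattern; it is not the actual arch/arm code, and
my_cmpxchg/__unsupported_cmpxchg are made-up names.]

/* Simplified model of the "link error on unsupported size" trick. */

/* Declared but deliberately never defined: any surviving reference to it
 * breaks the final link, like __bad_cmpxchg in the report above. */
extern unsigned long __unsupported_cmpxchg(volatile void *ptr);

static inline unsigned long my_cmpxchg(volatile void *ptr, unsigned long old,
				       unsigned long newval, int size)
{
	switch (size) {
	case 4:
		/* Pretend only the 32-bit variant exists, as on ARMv6. */
		return __sync_val_compare_and_swap((volatile unsigned int *)ptr,
						   (unsigned int)old,
						   (unsigned int)newval);
	default:
		/* 1- and 2-byte operands fall through here -> link failure. */
		return __unsupported_cmpxchg(ptr);
	}
}

int main(void)
{
	volatile unsigned int v = 1;

	/* Builds and links when compiled with optimization (the dead default
	 * branch is discarded); a size-1 call would reproduce the failure. */
	my_cmpxchg(&v, 1, 2, 4);
	return 0;
}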
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index be676e659fb7..4c91ca807473 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -406,6 +406,15 @@ enum zone_type {
 
 #ifndef __GENERATING_BOUNDS_H
 
+/* cmpxchg only supports 32-bit operands on ARMv6. */
+#ifdef CONFIG_CPU_V6
+#define BITS_PER_FLAGS	BITS_PER_LONG
+typedef unsigned long	pageblockflags_t;
+#else
+#define BITS_PER_FLAGS	BITS_PER_BYTE
+typedef unsigned char	pageblockflags_t;
+#endif
+
 struct zone {
 	/* Read-mostly fields */
 
@@ -437,7 +446,7 @@ struct zone {
 	 * Flags for a pageblock_nr_pages block. See pageblock-flags.h.
 	 * In SPARSEMEM, this map is stored in struct mem_section
 	 */
-	unsigned char		*pageblock_flags;
+	pageblockflags_t	*pageblock_flags;
 #endif /* CONFIG_SPARSEMEM */
 
 	/* zone_start_pfn == zone_start_paddr >> PAGE_SHIFT */
@@ -1159,7 +1168,7 @@ struct mem_section_usage {
 	DECLARE_BITMAP(subsection_map, SUBSECTIONS_PER_SECTION);
 #endif
 	/* See declaration of similar field in struct zone */
-	unsigned char pageblock_flags[0];
+	pageblockflags_t pageblock_flags[0];
 };
 
 void subsection_map_init(unsigned long pfn, unsigned long nr_pages);
@@ -1212,7 +1221,7 @@ struct mem_section {
 extern struct mem_section mem_section[NR_SECTION_ROOTS][SECTIONS_PER_ROOT];
 #endif
 
-static inline unsigned char *section_to_usemap(struct mem_section *ms)
+static inline pageblockflags_t *section_to_usemap(struct mem_section *ms)
 {
 	return ms->usage->pageblock_flags;
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 60342e764090..9a41c5dc78eb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -445,7 +445,7 @@ static inline bool defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 #endif
 
 /* Return a pointer to the bitmap storing bits affecting a block of pages */
-static inline unsigned char *get_pageblock_bitmap(struct page *page,
+static inline pageblockflags_t *get_pageblock_bitmap(struct page *page,
 							unsigned long pfn)
 {
 #ifdef CONFIG_SPARSEMEM
@@ -474,24 +474,24 @@ static inline int pfn_to_bitidx(struct page *page, unsigned long pfn)
  * Return: pageblock_bits flags
  */
 static __always_inline
-unsigned char __get_pfnblock_flags_mask(struct page *page,
+pageblockflags_t __get_pfnblock_flags_mask(struct page *page,
 					unsigned long pfn,
 					unsigned long mask)
 {
-	unsigned char *bitmap;
+	pageblockflags_t *bitmap;
 	unsigned long bitidx, byte_bitidx;
-	unsigned char byte;
+	pageblockflags_t byte;
 
 	bitmap = get_pageblock_bitmap(page, pfn);
 	bitidx = pfn_to_bitidx(page, pfn);
-	byte_bitidx = bitidx / BITS_PER_BYTE;
-	bitidx &= (BITS_PER_BYTE-1);
+	byte_bitidx = bitidx / BITS_PER_FLAGS;
+	bitidx &= (BITS_PER_FLAGS - 1);
 
 	byte = bitmap[byte_bitidx];
 	return (byte >> bitidx) & mask;
 }
 
-unsigned char get_pfnblock_flags_mask(struct page *page, unsigned long pfn,
+pageblockflags_t get_pfnblock_flags_mask(struct page *page, unsigned long pfn,
 					unsigned long mask)
 {
 	return __get_pfnblock_flags_mask(page, pfn, mask);
@@ -513,17 +513,17 @@ void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
 					unsigned long pfn,
 					unsigned long mask)
 {
-	unsigned char *bitmap;
+	pageblockflags_t *bitmap;
 	unsigned long bitidx, byte_bitidx;
-	unsigned char old_byte, byte;
+	pageblockflags_t old_byte, byte;
 
-	BUILD_BUG_ON(NR_PAGEBLOCK_BITS != BITS_PER_BYTE);
+	BUILD_BUG_ON(NR_PAGEBLOCK_BITS != BITS_PER_FLAGS);
 	BUILD_BUG_ON(MIGRATE_TYPES > (1 << PB_migratetype_bits));
 
 	bitmap = get_pageblock_bitmap(page, pfn);
 	bitidx = pfn_to_bitidx(page, pfn);
-	byte_bitidx = bitidx / BITS_PER_BYTE;
-	bitidx &= (BITS_PER_BYTE-1);
+	byte_bitidx = bitidx / BITS_PER_FLAGS;
+	bitidx &= (BITS_PER_FLAGS - 1);
 
 	VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page);