From patchwork Tue Nov 19 09:26:42 2019
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 11251605
From: Michal Hocko
To: Andrew Morton
Cc: Pavel Tatashin, Vincent Whitchurch, Oscar Salvador,
    David Hildenbrand, linux-mm, LKML, Michal Hocko
Subject: [PATCH] mm, sparse: do not waste pre allocated memmap space
Date: Tue, 19 Nov 2019 10:26:42 +0100
Message-Id: <20191119092642.31799-1-mhocko@kernel.org>
X-Mailer: git-send-email 2.20.1

From: Michal Hocko

Vincent has noticed [1] that there is something unusual with the memmap
allocations going on on his platform

: I noticed this because on my ARM64 platform, with 1 GiB of memory the
: first [and only] section is allocated from the zeroing path while with
: 2 GiB of memory the first 1 GiB section is allocated from the
: non-zeroing path.

The underlying problem is that although sparse_buffer_init allocates
enough memory for all sections on the node, sparse_buffer_alloc is not
able to consume it due to a mismatch in the expected allocation
alignment. While the sparse_buffer_init preallocation uses PAGE_SIZE
alignment, the real memmap has to be aligned to section_map_size().
This results in a wasted initial chunk of the preallocated memmap and
an unnecessary fallback allocation for a section.

While we are at it, also change __populate_section_memmap to align to
the requested size, because at least VMEMMAP has constraints requiring
the memmap to be properly aligned.

[1] http://lkml.kernel.org/r/20191030131122.8256-1-vincent.whitchurch@axis.com

Reported-and-debugged-by: Vincent Whitchurch
Fixes: 35fd1eb1e821 ("mm/sparse: abstract sparse buffer allocations")
Signed-off-by: Michal Hocko
Acked-by: David Hildenbrand
---
 mm/sparse.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index f6891c1992b1..079f3e3c4cab 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -458,8 +458,7 @@ struct page __init *__populate_section_memmap(unsigned long pfn,
 	if (map)
 		return map;
 
-	map = memblock_alloc_try_nid(size,
-					  PAGE_SIZE, addr,
+	map = memblock_alloc_try_nid(size, size, addr,
 					  MEMBLOCK_ALLOC_ACCESSIBLE, nid);
 	if (!map)
 		panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%pa\n",
@@ -482,8 +481,13 @@ static void __init sparse_buffer_init(unsigned long size, int nid)
 {
 	phys_addr_t addr = __pa(MAX_DMA_ADDRESS);
 	WARN_ON(sparsemap_buf);	/* forgot to call sparse_buffer_fini()? */
+	/*
+	 * Pre-allocated buffer is mainly used by __populate_section_memmap
+	 * and we want it to be properly aligned to the section size - this is
+	 * especially the case for VMEMMAP which maps memmap to PMDs
+	 */
 	sparsemap_buf =
-		memblock_alloc_try_nid_raw(size, PAGE_SIZE,
+		memblock_alloc_try_nid_raw(size, section_map_size(),
 						addr,
 						MEMBLOCK_ALLOC_ACCESSIBLE, nid);
 	sparsemap_buf_end = sparsemap_buf + size;
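
A quick illustration of the waste described in the changelog (an
editor's sketch in plain userspace C, not part of the patch;
SECTION_MAP_SIZE and align_up are illustrative stand-ins, assuming 4K
pages and a 2 MiB memmap per 128 MiB section): when the preallocated
buffer is only PAGE_SIZE aligned, the first section_map_size()-aligned
allocation has to skip, and therefore waste, the leading part of the
buffer, which then forces a fallback allocation for one section.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE		4096UL		/* example: 4K pages */
#define SECTION_MAP_SIZE	(2UL << 20)	/* stand-in for section_map_size(): 2 MiB */

/* round addr up to the next multiple of align (align must be a power of two) */
static uintptr_t align_up(uintptr_t addr, uintptr_t align)
{
	return (addr + align - 1) & ~(align - 1);
}

int main(void)
{
	/* buffer start as a PAGE_SIZE-aligned preallocation might return it */
	uintptr_t buf = 0x40000000UL + PAGE_SIZE;	/* 4K aligned, not 2 MiB aligned */
	/* the first memmap chunk has to start at the next 2 MiB boundary */
	uintptr_t first_map = align_up(buf, SECTION_MAP_SIZE);

	printf("bytes wasted at the start of the buffer: %lu\n",
	       (unsigned long)(first_map - buf));	/* 2093056, i.e. 2 MiB - 4K */
	return 0;
}

With the patch applied the preallocation itself is section_map_size()
aligned, so first_map would coincide with buf and nothing is skipped.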