From patchwork Wed Jul 1 08:38:36 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 11635645
From: Joerg Roedel
To: x86@kernel.org
Cc: hpa@zytor.com, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
    Andrew Morton, Steven Rostedt, joro@8bytes.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Joerg Roedel
Subject: [PATCH v2 1/3] x86/mm/64: Pre-allocate p4d/pud pages for vmalloc area
Date: Wed, 1 Jul 2020 10:38:36 +0200
Message-Id: <20200701083839.19193-2-joro@8bytes.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200701083839.19193-1-joro@8bytes.org>
References: <20200701083839.19193-1-joro@8bytes.org>

From: Joerg Roedel

Pre-allocate the page-table pages for the vmalloc area at the
level which needs synchronization on x86. This is P4D for
5-level and PUD for 4-level paging.

Doing this at boot makes sure all page-tables in the system have
these pages already and do not need to be synchronized at
runtime. The runtime synchronization takes the pgd_lock and
iterates over all page-tables in the system, so it can take quite
long and is better avoided.

Signed-off-by: Joerg Roedel <joro@8bytes.org>
---
 arch/x86/mm/init_64.c | 52 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index dbae185511cd..e76bdb001460 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1238,6 +1238,56 @@ static void __init register_page_bootmem_info(void)
 #endif
 }
 
+/*
+ * Pre-allocates page-table pages for the vmalloc area in the kernel page-table.
+ * Only the level which needs to be synchronized between all page-tables is
+ * allocated because the synchronization can be expensive.
+ */
+static void __init preallocate_vmalloc_pages(void)
+{
+	unsigned long addr;
+	const char *lvl;
+
+	for (addr = VMALLOC_START; addr <= VMALLOC_END; addr = ALIGN(addr + 1, PGDIR_SIZE)) {
+		pgd_t *pgd = pgd_offset_k(addr);
+		p4d_t *p4d;
+		pud_t *pud;
+
+		p4d = p4d_offset(pgd, addr);
+		if (p4d_none(*p4d)) {
+			/* Can only happen with 5-level paging */
+			p4d = p4d_alloc(&init_mm, pgd, addr);
+			if (!p4d) {
+				lvl = "p4d";
+				goto failed;
+			}
+		}
+
+		if (pgtable_l5_enabled())
+			continue;
+
+		pud = pud_offset(p4d, addr);
+		if (pud_none(*pud)) {
+			/* Ends up here only with 4-level paging */
+			pud = pud_alloc(&init_mm, p4d, addr);
+			if (!pud) {
+				lvl = "pud";
+				goto failed;
+			}
+		}
+	}
+
+	return;
+
+failed:
+
+	/*
+	 * The pages have to be there now or they will be missing in
+	 * process page-tables later.
+	 */
+	panic("Failed to pre-allocate %s pages for vmalloc area\n", lvl);
+}
+
 void __init mem_init(void)
 {
 	pci_iommu_alloc();
@@ -1261,6 +1311,8 @@ void __init mem_init(void)
 	if (get_gate_vma(&init_mm))
 		kclist_add(&kcore_vsyscall, (void *)VSYSCALL_ADDR, PAGE_SIZE, KCORE_USER);
 
+	preallocate_vmalloc_pages();
+
 	mem_init_print_info(NULL);
 }