From patchwork Fri Mar 21 19:15:17 2014
X-Patchwork-Submitter: Chirantan Ekbote
X-Patchwork-Id: 3876091
From: Chirantan Ekbote
To: linux@arm.linux.org.uk
Subject: [PATCH] ARM: mm: Speed up page list initialization during boot
Date: Fri, 21 Mar 2014 12:15:17 -0700
Message-Id: <1395429317-10084-1-git-send-email-chirantan@chromium.org>
Cc: Chirantan Ekbote, linux-kernel@vger.kernel.org, dianders@chromium.org,
    linux-mm@kvack.org,
    sonnyrao@chromium.org, linux-arm-kernel@lists.infradead.org

During boot we populate the free page lists by running the ordinary
page-freeing path on every individual page.  This is very inefficient
because the memory manager spends a lot of time coalescing pairs of
adjacent free pages into bigger blocks.  Since we already know at this
point that all of these pages are free, we can skip most of that work
by releasing them in large, naturally aligned blocks rather than one
order-0 page at a time.

Signed-off-by: Chirantan Ekbote
---
 arch/arm/mm/init.c  | 19 +++++++++++++++++--
 include/linux/gfp.h |  1 +
 mm/internal.h       |  1 -
 3 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 97c293e..c7fc2d8 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -469,8 +470,22 @@ static void __init free_unused_memmap(struct meminfo *mi)
 
 #ifdef CONFIG_HIGHMEM
 static inline void free_area_high(unsigned long pfn, unsigned long end)
 {
-	for (; pfn < end; pfn++)
-		free_highmem_page(pfn_to_page(pfn));
+	while (pfn < end) {
+		struct page *page = pfn_to_page(pfn);
+		unsigned long order = min(__ffs(pfn), MAX_ORDER - 1);
+		unsigned long nr_pages = 1 << order;
+		unsigned long rem = end - pfn;
+
+		if (nr_pages > rem) {
+			order = __fls(rem);
+			nr_pages = 1 << order;
+		}
+
+		__free_pages_bootmem(page, order);
+		totalram_pages += nr_pages;
+		totalhigh_pages += nr_pages;
+		pfn += nr_pages;
+	}
 }
 #endif

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 39b81dc..a63d666 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -367,6 +367,7 @@ void *alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask);
 #define __get_dma_pages(gfp_mask, order) \
 		__get_free_pages((gfp_mask) | GFP_DMA, (order))
 
+extern void __free_pages_bootmem(struct page *page, unsigned int order);
 extern void __free_pages(struct page *page, unsigned int order);
 extern void free_pages(unsigned long addr, unsigned int order);
 extern void free_hot_cold_page(struct page *page, int cold);

diff --git a/mm/internal.h b/mm/internal.h
index 29e1e76..d2b8738 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -93,7 +93,6 @@ extern pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
 /*
  * in mm/page_alloc.c
  */
-extern void __free_pages_bootmem(struct page *page, unsigned int order);
 extern void prep_compound_page(struct page *page, unsigned long order);
 #ifdef CONFIG_MEMORY_FAILURE
 extern bool is_free_buddy_page(struct page *page);
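
For anyone who wants to poke at the block-selection math without booting a
kernel, below is a small stand-alone user-space sketch of the order
computation in the reworked free_area_high() loop.  It is only an
approximation, not kernel code: __ffs()/__fls() are modelled with GCC
builtins, MAX_ORDER is assumed to be the default value of 11, and the pfn
range in main() is a made-up example.  Like the kernel loop, it assumes
pfn is never zero (true for highmem) and pfn < end.

/*
 * Stand-alone sketch of the order selection in free_area_high() above.
 * Assumptions: __ffs()/__fls() modelled with GCC builtins, MAX_ORDER = 11,
 * pfn != 0 so __builtin_ctzl() is well defined.
 */
#include <stdio.h>

#define MAX_ORDER 11UL

static unsigned long pick_order(unsigned long pfn, unsigned long end)
{
	unsigned long rem = end - pfn;
	/* Largest power-of-two block that pfn is naturally aligned to,
	 * capped at the largest order the buddy allocator accepts. */
	unsigned long order = (unsigned long)__builtin_ctzl(pfn);

	if (order > MAX_ORDER - 1)
		order = MAX_ORDER - 1;

	/* Do not run past 'end': fall back to the largest power of two
	 * that still fits in the remaining pages (the __fls(rem) case). */
	if ((1UL << order) > rem)
		order = sizeof(long) * 8 - 1 - (unsigned long)__builtin_clzl(rem);

	return order;
}

int main(void)
{
	unsigned long pfn = 0x60000UL;	/* made-up highmem start pfn */
	unsigned long end = pfn + 1000;	/* made-up range: 1000 pages */

	while (pfn < end) {
		unsigned long order = pick_order(pfn, end);

		printf("free pfn %#lx as order %lu (%lu pages)\n",
		       pfn, order, 1UL << order);
		pfn += 1UL << order;
	}
	return 0;
}

The chosen order is bounded by the natural alignment of the current pfn,
by MAX_ORDER - 1, and by the number of pages remaining, so a range like
the one above is handed to the allocator in a handful of large blocks
instead of a thousand individual order-0 frees.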