From patchwork Mon Dec 3 15:47:15 2018
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 10709913
From: Mike Rapoport
To: Andrew Morton
Cc: Arnd Bergmann, Benjamin Herrenschmidt, "David S. Miller", Guan Xuetao,
    Greentime Hu, Jonas Bonn, Michael Ellerman, Michal Hocko, Michal Simek,
    Mark Salter, Paul Mackerras, Rich Felker, Russell King,
    Stefan Kristiansson, Stafford Horne, Vincent Chen, Yoshinori Sato,
    linux-arm-kernel@lists.infradead.org, linux-c6x-dev@linux-c6x.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-sh@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    openrisc@lists.librecores.org, sparclinux@vger.kernel.org,
    Mike Rapoport
Subject: [PATCH v2 6/6] arm, unicore32: remove early_alloc*() wrappers
Date: Mon, 3 Dec 2018 17:47:15 +0200
Message-Id: <1543852035-26634-7-git-send-email-rppt@linux.ibm.com>
In-Reply-To: <1543852035-26634-1-git-send-email-rppt@linux.ibm.com>
References: <1543852035-26634-1-git-send-email-rppt@linux.ibm.com>
X-Mailer: git-send-email 2.7.4

On arm and unicore32, early_alloc_aligned() and early_alloc() are one-line
wrappers for memblock_alloc(). Replace their usage with direct calls to
memblock_alloc().

Suggested-by: Christoph Hellwig
Signed-off-by: Mike Rapoport
---
 arch/arm/mm/mmu.c       | 11 +++--------
 arch/unicore32/mm/mmu.c | 12 ++++--------
 2 files changed, 7 insertions(+), 16 deletions(-)

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 0a04c9a5..57de0dd 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -719,14 +719,9 @@ EXPORT_SYMBOL(phys_mem_access_prot);
 
 #define vectors_base()	(vectors_high() ? 0xffff0000 : 0)
 
-static void __init *early_alloc_aligned(unsigned long sz, unsigned long align)
-{
-	return memblock_alloc(sz, align);
-}
-
 static void __init *early_alloc(unsigned long sz)
 {
-	return early_alloc_aligned(sz, sz);
+	return memblock_alloc(sz, sz);
 }
 
 static void *__init late_alloc(unsigned long sz)
@@ -998,7 +993,7 @@ void __init iotable_init(struct map_desc *io_desc, int nr)
 	if (!nr)
 		return;
 
-	svm = early_alloc_aligned(sizeof(*svm) * nr, __alignof__(*svm));
+	svm = memblock_alloc(sizeof(*svm) * nr, __alignof__(*svm));
 
 	for (md = io_desc; nr; md++, nr--) {
 		create_mapping(md);
@@ -1020,7 +1015,7 @@ void __init vm_reserve_area_early(unsigned long addr, unsigned long size,
 	struct vm_struct *vm;
 	struct static_vm *svm;
 
-	svm = early_alloc_aligned(sizeof(*svm), __alignof__(*svm));
+	svm = memblock_alloc(sizeof(*svm), __alignof__(*svm));
 
 	vm = &svm->vm;
 	vm->addr = (void *)addr;
diff --git a/arch/unicore32/mm/mmu.c b/arch/unicore32/mm/mmu.c
index 50d8c1a..a402192 100644
--- a/arch/unicore32/mm/mmu.c
+++ b/arch/unicore32/mm/mmu.c
@@ -141,16 +141,12 @@ static void __init build_mem_type_table(void)
 
 #define vectors_base()	(vectors_high() ? 0xffff0000 : 0)
 
-static void __init *early_alloc(unsigned long sz)
-{
-	return memblock_alloc(sz, sz);
-}
-
 static pte_t * __init early_pte_alloc(pmd_t *pmd, unsigned long addr,
 				      unsigned long prot)
 {
 	if (pmd_none(*pmd)) {
-		pte_t *pte = early_alloc(PTRS_PER_PTE * sizeof(pte_t));
+		pte_t *pte = memblock_alloc(PTRS_PER_PTE * sizeof(pte_t),
+					    PTRS_PER_PTE * sizeof(pte_t));
 		__pmd_populate(pmd, __pa(pte) | prot);
 	}
 	BUG_ON(pmd_bad(*pmd));
@@ -352,7 +348,7 @@ static void __init devicemaps_init(void)
 	/*
 	 * Allocate the vector page early.
 	 */
-	vectors = early_alloc(PAGE_SIZE);
+	vectors = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 
 	for (addr = VMALLOC_END; addr; addr += PGDIR_SIZE)
 		pmd_clear(pmd_off_k(addr));
@@ -429,7 +425,7 @@ void __init paging_init(void)
 	top_pmd = pmd_off_k(0xffff0000);
 
 	/* allocate the zero page. */
-	zero_page = early_alloc(PAGE_SIZE);
+	zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 
 	bootmem_init();
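
Not part of the patch: a minimal sketch of the transformation the hunks above
apply, shown with hypothetical callers (alloc_zero_page_old()/alloc_zero_page_new()
are illustrative names, not functions from the tree). It assumes only what the
diff itself shows: memblock_alloc(size, align) is the boot-time allocator the
removed wrappers delegated to.

#include <linux/memblock.h>	/* memblock_alloc() */
#include <asm/page.h>		/* PAGE_SIZE */

/* Before: a one-line wrapper; the alignment is implicitly the allocation size. */
static void __init *early_alloc(unsigned long sz)
{
	return memblock_alloc(sz, sz);
}

/* Hypothetical caller using the wrapper. */
static void __init alloc_zero_page_old(void)
{
	void *zero_page = early_alloc(PAGE_SIZE);
	(void)zero_page;
}

/* After: the same allocation, with the size-alignment spelled out at the call site. */
static void __init alloc_zero_page_new(void)
{
	void *zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
	(void)zero_page;
}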