From patchwork Tue Oct 21 11:00:37 2014
X-Patchwork-Submitter: Jungseung Lee
X-Patchwork-Id: 5113551
From: Jungseung Lee <js07.lee@gmail.com>
To: Russell King, Laura Abbott, Catalin Marinas, Will Deacon,
	linux-arm-kernel@lists.infradead.org
Cc: Jungseung Lee
Subject: [PATCH 2/2] arm: mm: Refine set_memory_* functions
Date: Tue, 21 Oct 2014 20:00:37 +0900
Message-Id: <1413889237-6933-2-git-send-email-js07.lee@gmail.com>
In-Reply-To: <1413889237-6933-1-git-send-email-js07.lee@gmail.com>
References: <1413889237-6933-1-git-send-email-js07.lee@gmail.com>
X-Mailer: git-send-email 1.9.1

The set_memory_* functions share the same implementation except for the
memory attribute they change. Make them use a common helper and pull them
out into arch/arm/mm/pageattr.c, as arm64 did. This reduces code size and
improves readability.

Signed-off-by: Jungseung Lee <js07.lee@gmail.com>
---
 arch/arm/mm/Makefile   |  2 +-
 arch/arm/mm/mmu.c      | 38 ---------------------
 arch/arm/mm/pageattr.c | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 92 insertions(+), 39 deletions(-)
 create mode 100644 arch/arm/mm/pageattr.c

diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 91da64d..d3afdf9 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -6,7 +6,7 @@ obj-y				:= dma-mapping.o extable.o fault.o init.o \
 				   iomap.o
 
 obj-$(CONFIG_MMU)		+= fault-armv.o flush.o idmap.o ioremap.o \
-				   mmap.o pgd.o mmu.o
+				   mmap.o pgd.o mmu.o pageattr.o
 
 ifneq ($(CONFIG_MMU),y)
 obj-y				+= nommu.o
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 9f98cec..06817e1 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -354,44 +354,6 @@ const struct mem_type *get_mem_type(unsigned int type)
 }
 EXPORT_SYMBOL(get_mem_type);
 
-#define PTE_SET_FN(_name, pteop) \
-static int pte_set_##_name(pte_t *ptep, pgtable_t token, unsigned long addr, \
-			void *data) \
-{ \
-	pte_t pte = pteop(*ptep); \
-\
-	set_pte_ext(ptep, pte, 0); \
-	return 0; \
-} \
-
-#define SET_MEMORY_FN(_name, callback) \
-int set_memory_##_name(unsigned long addr, int numpages) \
-{ \
-	unsigned long start = addr; \
-	unsigned long size = PAGE_SIZE*numpages; \
-	unsigned end = start + size; \
-\
-	if (start < MODULES_VADDR || start >= MODULES_END) \
-		return -EINVAL;\
-\
-	if (end < MODULES_VADDR || end >= MODULES_END) \
-		return -EINVAL; \
-\
-	apply_to_page_range(&init_mm, start, size, callback, NULL); \
-	flush_tlb_kernel_range(start, end); \
-	return 0;\
-}
-
-PTE_SET_FN(ro, pte_wrprotect)
-PTE_SET_FN(rw, pte_mkwrite)
-PTE_SET_FN(x, pte_mkexec)
-PTE_SET_FN(nx, pte_mknexec)
-
-SET_MEMORY_FN(ro, pte_set_ro)
-SET_MEMORY_FN(rw, pte_set_rw)
-SET_MEMORY_FN(x, pte_set_x)
-SET_MEMORY_FN(nx, pte_set_nx)
-
 /*
  * Adjust the PMD section entries according to the CPU in use.
  */
diff --git a/arch/arm/mm/pageattr.c b/arch/arm/mm/pageattr.c
new file mode 100644
index 0000000..004e35c
--- /dev/null
+++ b/arch/arm/mm/pageattr.c
@@ -0,0 +1,91 @@
+/*
+ * Copyright (c) 2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+#include <linux/mm.h>
+#include <linux/module.h>
+
+#include <asm/pgtable.h>
+#include <asm/tlbflush.h>
+
+struct page_change_data {
+	pgprot_t set_mask;
+	pgprot_t clear_mask;
+};
+
+static int change_page_range(pte_t *ptep, pgtable_t token, unsigned long addr,
+			void *data)
+{
+	struct page_change_data *cdata = data;
+	pte_t pte = *ptep;
+
+	pte = clear_pte_bit(pte, cdata->clear_mask);
+	pte = set_pte_bit(pte, cdata->set_mask);
+
+	set_pte_ext(ptep, pte, 0);
+	return 0;
+}
+
+static int change_memory_common(unsigned long addr, int numpages,
+				pgprot_t set_mask, pgprot_t clear_mask)
+{
+	unsigned long start = addr;
+	unsigned long size = PAGE_SIZE*numpages;
+	unsigned long end = start + size;
+	int ret;
+	struct page_change_data data;
+
+	if (!IS_ALIGNED(addr, PAGE_SIZE)) {
+		start &= PAGE_MASK;
+		end = start + size;
+		WARN_ON_ONCE(1);
+	}
+
+	if (!is_module_address(start) || !is_module_address(end - 1))
+		return -EINVAL;
+
+	data.set_mask = set_mask;
+	data.clear_mask = clear_mask;
+
+	ret = apply_to_page_range(&init_mm, start, size, change_page_range,
+					&data);
+
+	flush_tlb_kernel_range(start, end);
+	return ret;
+}
+
+int set_memory_ro(unsigned long addr, int numpages)
+{
+	return change_memory_common(addr, numpages,
+					__pgprot(L_PTE_RDONLY),
+					__pgprot(0));
+}
+
+int set_memory_rw(unsigned long addr, int numpages)
+{
+	return change_memory_common(addr, numpages,
+					__pgprot(0),
+					__pgprot(L_PTE_RDONLY));
+}
+
+int set_memory_nx(unsigned long addr, int numpages)
+{
+	return change_memory_common(addr, numpages,
+					__pgprot(L_PTE_XN),
+					__pgprot(0));
+}
+
+int set_memory_x(unsigned long addr, int numpages)
+{
+	return change_memory_common(addr, numpages,
+					__pgprot(0),
+					__pgprot(L_PTE_XN));
+}
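
For reference only, and not part of the patch: below is a minimal sketch of how
a module might call the reworked API. It assumes the set_memory_* prototypes
are picked up through asm/cacheflush.h, where ARM declares them at this point,
and the module and symbol names (protect_example_*, protected_buf) are made up
for the example. Because change_memory_common() rejects anything that fails
is_module_address(), the buffer has to live in the module image itself rather
than come from kmalloc().

/*
 * Illustrative module: write-protect one of its own data pages with
 * set_memory_ro() and restore it with set_memory_rw() on unload.
 */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/mm.h>		/* PAGE_SIZE */
#include <linux/module.h>
#include <asm/cacheflush.h>	/* set_memory_ro()/set_memory_rw() prototypes */

/* Static module data sits in the module area, so is_module_address() is true. */
static char protected_buf[PAGE_SIZE] __aligned(PAGE_SIZE);

static int __init protect_example_init(void)
{
	int ret;

	protected_buf[0] = 'A';		/* still writable at this point */

	/* Clear write permission on the single page backing protected_buf. */
	ret = set_memory_ro((unsigned long)protected_buf, 1);
	if (ret)
		return ret;

	/* From here on, any write to protected_buf faults. */
	pr_info("protect_example: page is now read-only\n");
	return 0;
}

static void __exit protect_example_exit(void)
{
	/* Make the page writable again before the module memory is freed. */
	set_memory_rw((unsigned long)protected_buf, 1);
}

module_init(protect_example_init);
module_exit(protect_example_exit);
MODULE_LICENSE("GPL");

Note that the address passed in should be page aligned: change_memory_common()
only warns (WARN_ON_ONCE) and rounds the start down to a page boundary if it is
not, and the count argument is in pages, not bytes.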