| Message ID | 20220606074815.139265-6-wangkefeng.wang@huawei.com (mailing list archive) |
|---|---|
| State | New |
| Series | arm64: Cleanup ioremap() and support ioremap_prot() |
> +#define ioremap_wc(addr, size)	ioremap_prot((addr), (size), PROT_NORMAL_NC)
> +#define ioremap_np(addr, size)	ioremap_prot((addr), (size), PROT_DEVICE_nGnRnE)

Please avoid the overly long lines here. Independent of that, having
a non-trivial body on a separate line tends to generally be a lot more
readable anyway.

> +#define ioremap_cache(addr, size) ({ \
> +	pfn_is_map_memory(__phys_to_pfn(addr)) ? \
> +	(void __iomem *)__phys_to_virt(addr) : ioremap_prot(addr, size, PROT_NORMAL); \
> +})

And this really should be an inline function.

> +int iounmap_allowed(void __iomem *addr)
>  {
>  	/*
>  	 * We could get an address outside vmalloc range in case
>  	 * of ioremap_cache() reusing a RAM mapping.
>  	 */
> +	return is_vmalloc_addr(addr) ? 0 : -EINVAL;

As the generic ioremap only returns vmalloc addresses, this check
really should go into common code.
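For illustration, one way the two defines could be wrapped onto continuation
lines as suggested above; a sketch only, not the final patch:

#define ioremap_wc(addr, size)	\
	ioremap_prot((addr), (size), PROT_NORMAL_NC)
#define ioremap_np(addr, size)	\
	ioremap_prot((addr), (size), PROT_DEVICE_nGnRnE)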
On 2022/6/6 15:54, Christoph Hellwig wrote:
>> +#define ioremap_wc(addr, size)	ioremap_prot((addr), (size), PROT_NORMAL_NC)
>> +#define ioremap_np(addr, size)	ioremap_prot((addr), (size), PROT_DEVICE_nGnRnE)
> Please avoid the overly long lines here. Independent of that, having
> a non-trivial body on a separate line tends to generally be a lot more
> readable anyway.

Hi Christoph,

As commit bdc48fa11e46 ("checkpatch/coding-style: deprecate 80-column
warning") increased the limit to 100 columns, I don't get a warning from
checkpatch, and it is not a hard limit. But if this is a mandatory
requirement, I will resend with the lines broken up.

>> +#define ioremap_cache(addr, size) ({ \
>> +	pfn_is_map_memory(__phys_to_pfn(addr)) ? \
>> +	(void __iomem *)__phys_to_virt(addr) : ioremap_prot(addr, size, PROT_NORMAL); \
>> +})
> And this really should be an inline function.

We still need a define, see kernel/iomem.c:

#ifndef ioremap_cache
__weak void __iomem *ioremap_cache(resource_size_t offset, unsigned long size)
{
	return ioremap(offset, size);
}
#endif

>> +int iounmap_allowed(void __iomem *addr)
>>  {
>>  	/*
>>  	 * We could get an address outside vmalloc range in case
>>  	 * of ioremap_cache() reusing a RAM mapping.
>>  	 */
>> +	return is_vmalloc_addr(addr) ? 0 : -EINVAL;
> As the generic ioremap only returns vmalloc addresses, this check
> really should go into common code.

Good point, will move it into the generic ioremap code, thanks.
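For context, a minimal sketch of what folding the check into the generic
iounmap() in mm/ioremap.c could look like (an illustration assuming the
generic implementation stays otherwise unchanged, not the actual follow-up
patch):

void iounmap(volatile void __iomem *addr)
{
	void *vaddr = (void *)((unsigned long)addr & PAGE_MASK);

	/*
	 * Architectures such as arm64 may hand out linear-map addresses from
	 * ioremap_cache(); only vmalloc addresses were actually mapped by the
	 * generic ioremap() and need to be torn down here.
	 */
	if (is_vmalloc_addr(vaddr))
		vunmap(vaddr);
}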
On Mon, Jun 06, 2022 at 09:28:22PM +0800, Kefeng Wang wrote:
> As commit bdc48fa11e46 ("checkpatch/coding-style: deprecate 80-column
> warning") increased the limit to 100 columns, I don't get a warning from
> checkpatch, and it is not a hard limit.

No, it did not increase the general limit, just for exception cases.
Block comments certainly don't fall under that, and while the mileage
on the defines may vary, it generally is more readable to have them on
the next line.

checkpatch is unfortunately totally broken these days :(

> But if this is a mandatory requirement, I will resend with the lines
> broken up.
>
> >> +#define ioremap_cache(addr, size) ({ \
> >> +	pfn_is_map_memory(__phys_to_pfn(addr)) ? \
> >> +	(void __iomem *)__phys_to_virt(addr) : ioremap_prot(addr, size, PROT_NORMAL); \
> >> +})
> > And this really should be an inline function.
>
> We still need a define, see kernel/iomem.c:

You can just define it to the same name after the inline.
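As an illustration of the "define it to the same name after the inline"
pattern (a sketch of how arm64's <asm/io.h> could look, not the final patch):

static inline void __iomem *ioremap_cache(phys_addr_t addr, size_t size)
{
	/* Normal memory already has a cacheable linear mapping. */
	if (pfn_is_map_memory(__phys_to_pfn(addr)))
		return (void __iomem *)__phys_to_virt(addr);

	return ioremap_prot(addr, size, PROT_NORMAL);
}
#define ioremap_cache ioremap_cache

The trailing define keeps the #ifndef ioremap_cache __weak fallback in
kernel/iomem.c from being used, while letting the implementation stay an
inline function.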
Hi Kefeng,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[also build test WARNING on v5.19-rc1 next-20220606]
[cannot apply to arm64/for-next/core arnd-asm-generic/master akpm-mm/mm-everything]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:      https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/arm64-Cleanup-ioremap-and-support-ioremap_prot/20220606-154131
base:     https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git f2906aa863381afb0015a9eb7fefad885d4e5a56
config:   arm64-randconfig-s032-20220605 (https://download.01.org/0day-ci/archive/20220607/202206070159.mnbdPS9W-lkp@intel.com/config)
compiler: aarch64-linux-gcc (GCC) 11.3.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # apt-get install sparse
        # sparse version: v0.6.4-18-g56afb504-dirty
        # https://github.com/intel-lab-lkp/linux/commit/3fd9e961045f81f0ee188501228f8425c3d868a8
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Kefeng-Wang/arm64-Cleanup-ioremap-and-support-ioremap_prot/20220606-154131
        git checkout 3fd9e961045f81f0ee188501228f8425c3d868a8
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.3.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=arm64 SHELL=/bin/bash arch/arm64/mm/

If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>

sparse warnings: (new ones prefixed by >>)
>> arch/arm64/mm/ioremap.c:28:32: sparse: sparse: incorrect type in argument 1 (different address spaces) @@ expected void const *x @@ got void [noderef] __iomem *addr @@
   arch/arm64/mm/ioremap.c:28:32: sparse:     expected void const *x
   arch/arm64/mm/ioremap.c:28:32: sparse:     got void [noderef] __iomem *addr

vim +28 arch/arm64/mm/ioremap.c

    21	
    22	int iounmap_allowed(void __iomem *addr)
    23	{
    24		/*
    25		 * We could get an address outside vmalloc range in case
    26		 * of ioremap_cache() reusing a RAM mapping.
    27		 */
  > 28		return is_vmalloc_addr(addr) ? 0 : -EINVAL;
    29	}
    30	
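The sparse warning stems from passing a __iomem-annotated pointer to
is_vmalloc_addr(), whose parameter is a plain const void *. Shown purely as
an illustration of one common way to silence such warnings (assuming the
check stayed in this arm64 helper rather than moving to common code), an
explicit __force cast drops the address-space annotation:

	return is_vmalloc_addr((void __force *)addr) ? 0 : -EINVAL;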
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1652a9800ebe..ac160aa26126 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -126,6 +126,7 @@ config ARM64
 	select GENERIC_CPU_VULNERABILITIES
 	select GENERIC_EARLY_IOREMAP
 	select GENERIC_IDLE_POLL_SETUP
+	select GENERIC_IOREMAP
 	select GENERIC_IRQ_IPI
 	select GENERIC_IRQ_PROBE
 	select GENERIC_IRQ_SHOW
diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
index 3995652daf81..02aa5a5e9190 100644
--- a/arch/arm64/include/asm/io.h
+++ b/arch/arm64/include/asm/io.h
@@ -163,13 +163,21 @@ extern void __memset_io(volatile void __iomem *, int, size_t);
 
 /*
  * I/O memory mapping functions.
  */
-extern void __iomem *__ioremap(phys_addr_t phys_addr, size_t size, pgprot_t prot);
-extern void iounmap(volatile void __iomem *addr);
-extern void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size);
-#define ioremap(addr, size)	__ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE))
-#define ioremap_wc(addr, size)	__ioremap((addr), (size), __pgprot(PROT_NORMAL_NC))
-#define ioremap_np(addr, size)	__ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRnE))
+int ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot);
+#define ioremap_allowed ioremap_allowed
+
+int iounmap_allowed(void __iomem *addr);
+#define iounmap_allowed iounmap_allowed
+
+#define _PAGE_IOREMAP PROT_DEVICE_nGnRE
+
+#define ioremap_wc(addr, size)	ioremap_prot((addr), (size), PROT_NORMAL_NC)
+#define ioremap_np(addr, size)	ioremap_prot((addr), (size), PROT_DEVICE_nGnRnE)
+#define ioremap_cache(addr, size) ({ \
+	pfn_is_map_memory(__phys_to_pfn(addr)) ? \
+	(void __iomem *)__phys_to_virt(addr) : ioremap_prot(addr, size, PROT_NORMAL); \
+})
 
 /*
  * io{read,write}{16,32,64}be() macros
diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
index e4dea8db6924..a5a256e3f9fe 100644
--- a/arch/arm64/kernel/acpi.c
+++ b/arch/arm64/kernel/acpi.c
@@ -351,7 +351,7 @@ void __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size)
 			prot = __acpi_get_writethrough_mem_attribute();
 		}
 	}
-	return __ioremap(phys, size, prot);
+	return ioremap_prot(phys, size, pgprot_val(prot));
 }
 
 /*
diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
index b21f91cd830d..4a3f526c6057 100644
--- a/arch/arm64/mm/ioremap.c
+++ b/arch/arm64/mm/ioremap.c
@@ -1,96 +1,32 @@
 // SPDX-License-Identifier: GPL-2.0-only
-/*
- * Based on arch/arm/mm/ioremap.c
- *
- * (C) Copyright 1995 1996 Linus Torvalds
- * Hacked for ARM by Phil Blundell <philb@gnu.org>
- * Hacked to allow all architectures to build, and various cleanups
- * by Russell King
- * Copyright (C) 2012 ARM Ltd.
- */
 
-#include <linux/export.h>
 #include <linux/mm.h>
 #include <linux/vmalloc.h>
 #include <linux/io.h>
 
-#include <asm/fixmap.h>
-#include <asm/tlbflush.h>
-
-static void __iomem *__ioremap_caller(phys_addr_t phys_addr, size_t size,
-				      pgprot_t prot, void *caller)
+int ioremap_allowed(phys_addr_t phys_addr, size_t size, unsigned long prot)
 {
-	unsigned long last_addr;
-	unsigned long offset = phys_addr & ~PAGE_MASK;
-	int err;
-	unsigned long addr;
-	struct vm_struct *area;
-
-	/*
-	 * Page align the mapping address and size, taking account of any
-	 * offset.
-	 */
-	phys_addr &= PAGE_MASK;
-	size = PAGE_ALIGN(size + offset);
+	unsigned long last_addr = phys_addr + size - 1;
 
-	/*
-	 * Don't allow wraparound, zero size or outside PHYS_MASK.
-	 */
-	last_addr = phys_addr + size - 1;
-	if (!size || last_addr < phys_addr || (last_addr & ~PHYS_MASK))
-		return NULL;
+	/* Don't allow outside PHYS_MASK */
+	if (last_addr & ~PHYS_MASK)
+		return -EINVAL;
 
-	/*
-	 * Don't allow RAM to be mapped.
-	 */
+	/* Don't allow RAM to be mapped. */
 	if (WARN_ON(pfn_is_map_memory(__phys_to_pfn(phys_addr))))
-		return NULL;
+		return -EINVAL;
 
-	area = get_vm_area_caller(size, VM_IOREMAP, caller);
-	if (!area)
-		return NULL;
-	addr = (unsigned long)area->addr;
-	area->phys_addr = phys_addr;
-
-	err = ioremap_page_range(addr, addr + size, phys_addr, prot);
-	if (err) {
-		vunmap((void *)addr);
-		return NULL;
-	}
-
-	return (void __iomem *)(offset + addr);
-}
-
-void __iomem *__ioremap(phys_addr_t phys_addr, size_t size, pgprot_t prot)
-{
-	return __ioremap_caller(phys_addr, size, prot,
-				__builtin_return_address(0));
+	return 0;
 }
-EXPORT_SYMBOL(__ioremap);
 
-void iounmap(volatile void __iomem *io_addr)
+int iounmap_allowed(void __iomem *addr)
 {
-	unsigned long addr = (unsigned long)io_addr & PAGE_MASK;
-
 	/*
 	 * We could get an address outside vmalloc range in case
 	 * of ioremap_cache() reusing a RAM mapping.
 	 */
-	if (is_vmalloc_addr((void *)addr))
-		vunmap((void *)addr);
-}
-EXPORT_SYMBOL(iounmap);
-
-void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
-{
-	/* For normal memory we already have a cacheable mapping. */
-	if (pfn_is_map_memory(__phys_to_pfn(phys_addr)))
-		return (void __iomem *)__phys_to_virt(phys_addr);
-
-	return __ioremap_caller(phys_addr, size, __pgprot(PROT_NORMAL),
-				__builtin_return_address(0));
+	return is_vmalloc_addr(addr) ? 0 : -EINVAL;
 }
-EXPORT_SYMBOL(ioremap_cache);
 
 /*
  * Must be called after early_fixmap_init