Message ID | 20230809-virt-to-phys-powerpc-v1-1-12e912a7d439@linaro.org (mailing list archive)
---|---
State | New, archived
Series | powerpc: Make virt_to_pfn() a static inline
Linus Walleij <linus.walleij@linaro.org> writes:
> Making virt_to_pfn() a static inline taking a strongly typed
> (const void *) makes the contract of passing a pointer of that
> type to the function explicit and exposes any misuse of the
> macro virt_to_pfn() acting polymorphic and accepting many types
> such as (void *), (uintptr_t) or (unsigned long) as arguments
> without warnings.
...
> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index f2b6bf5687d0..9ee4b6d4a82a 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -9,6 +9,7 @@
>  #ifndef __ASSEMBLY__
>  #include <linux/types.h>
>  #include <linux/kernel.h>
> +#include <linux/bug.h>
>  #else
>  #include <asm/types.h>
>  #endif
> @@ -119,16 +120,6 @@ extern long long virt_phys_offset;
>  #define ARCH_PFN_OFFSET	((unsigned long)(MEMORY_START >> PAGE_SHIFT))
>  #endif
>
> -#define virt_to_pfn(kaddr)	(__pa(kaddr) >> PAGE_SHIFT)
> -#define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
> -#define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
> -
> -#define virt_addr_valid(vaddr)	({					\
> -	unsigned long _addr = (unsigned long)vaddr;			\
> -	_addr >= PAGE_OFFSET && _addr < (unsigned long)high_memory &&	\
> -	pfn_valid(virt_to_pfn(_addr));					\
> -})
> -
>  /*
>   * On Book-E parts we need __va to parse the device tree and we can't
>   * determine MEMORY_START until then. However we can determine PHYSICAL_START
> @@ -233,6 +224,25 @@ extern long long virt_phys_offset;
>  #endif
>  #endif
>
> +#ifndef __ASSEMBLY__
> +static inline unsigned long virt_to_pfn(const void *kaddr)
> +{
> +	return __pa(kaddr) >> PAGE_SHIFT;
> +}
> +
> +static inline const void *pfn_to_kaddr(unsigned long pfn)
> +{
> +	return (const void *)(((unsigned long)__va(pfn)) << PAGE_SHIFT);

Any reason to do it this way rather than:

+	return __va(pfn << PAGE_SHIFT);

Seems to be equivalent and much cleaner?

cheers
On 14/08/2023 at 14:37, Michael Ellerman wrote:
> Linus Walleij <linus.walleij@linaro.org> writes:
>> Making virt_to_pfn() a static inline taking a strongly typed
>> (const void *) makes the contract of passing a pointer of that
>> type to the function explicit and exposes any misuse of the
>> macro virt_to_pfn() acting polymorphic and accepting many types
>> such as (void *), (uintptr_t) or (unsigned long) as arguments
>> without warnings.
> ...
>> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
>> index f2b6bf5687d0..9ee4b6d4a82a 100644
>> --- a/arch/powerpc/include/asm/page.h
>> +++ b/arch/powerpc/include/asm/page.h
>> @@ -9,6 +9,7 @@
>>  #ifndef __ASSEMBLY__
>>  #include <linux/types.h>
>>  #include <linux/kernel.h>
>> +#include <linux/bug.h>
>>  #else
>>  #include <asm/types.h>
>>  #endif
>> @@ -119,16 +120,6 @@ extern long long virt_phys_offset;
>>  #define ARCH_PFN_OFFSET	((unsigned long)(MEMORY_START >> PAGE_SHIFT))
>>  #endif
>>
>> -#define virt_to_pfn(kaddr)	(__pa(kaddr) >> PAGE_SHIFT)
>> -#define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
>> -#define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
>> -
>> -#define virt_addr_valid(vaddr)	({					\
>> -	unsigned long _addr = (unsigned long)vaddr;			\
>> -	_addr >= PAGE_OFFSET && _addr < (unsigned long)high_memory &&	\
>> -	pfn_valid(virt_to_pfn(_addr));					\
>> -})
>> -
>>  /*
>>   * On Book-E parts we need __va to parse the device tree and we can't
>>   * determine MEMORY_START until then. However we can determine PHYSICAL_START
>> @@ -233,6 +224,25 @@ extern long long virt_phys_offset;
>>  #endif
>>  #endif
>>
>> +#ifndef __ASSEMBLY__
>> +static inline unsigned long virt_to_pfn(const void *kaddr)
>> +{
>> +	return __pa(kaddr) >> PAGE_SHIFT;
>> +}
>> +
>> +static inline const void *pfn_to_kaddr(unsigned long pfn)
>> +{
>> +	return (const void *)(((unsigned long)__va(pfn)) << PAGE_SHIFT);
>
> Any reason to do it this way rather than:
>
> +	return __va(pfn << PAGE_SHIFT);

Even cleaner:

	return __va(PFN_PHYS(pfn));

>
> Seems to be equivalent and much cleaner?
>
> cheers
On Mon, Aug 14, 2023 at 2:37 PM Michael Ellerman <mpe@ellerman.id.au> wrote:

> > +static inline const void *pfn_to_kaddr(unsigned long pfn)
> > +{
> > +	return (const void *)(((unsigned long)__va(pfn)) << PAGE_SHIFT);
>
> Any reason to do it this way rather than:
>
> +	return __va(pfn << PAGE_SHIFT);
>
> Seems to be equivalent and much cleaner?

I was afraid of changing the semantics of the original macro, which converts to a virtual address before shifting instead of shifting first, but you're right, I'm too cautious. I'll propose the elegant solution from you & Christophe instead!

Yours, Linus Walleij
Christophe Leroy <christophe.leroy@csgroup.eu> writes:
> On 14/08/2023 at 14:37, Michael Ellerman wrote:
>> Linus Walleij <linus.walleij@linaro.org> writes:
>>> Making virt_to_pfn() a static inline taking a strongly typed
>>> (const void *) makes the contract of passing a pointer of that
>>> type to the function explicit and exposes any misuse of the
>>> macro virt_to_pfn() acting polymorphic and accepting many types
>>> such as (void *), (uintptr_t) or (unsigned long) as arguments
>>> without warnings.
>> ...
>>> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
>>> index f2b6bf5687d0..9ee4b6d4a82a 100644
>>> --- a/arch/powerpc/include/asm/page.h
>>> +++ b/arch/powerpc/include/asm/page.h
>>> @@ -233,6 +224,25 @@ extern long long virt_phys_offset;
>>>  #endif
>>>  #endif
>>>
>>> +#ifndef __ASSEMBLY__
>>> +static inline unsigned long virt_to_pfn(const void *kaddr)
>>> +{
>>> +	return __pa(kaddr) >> PAGE_SHIFT;
>>> +}
>>> +
>>> +static inline const void *pfn_to_kaddr(unsigned long pfn)
>>> +{
>>> +	return (const void *)(((unsigned long)__va(pfn)) << PAGE_SHIFT);
>>
>> Any reason to do it this way rather than:
>>
>> +	return __va(pfn << PAGE_SHIFT);
>
> Even cleaner:
>
> 	return __va(PFN_PHYS(pfn));

PFN_PHYS() includes a cast to phys_addr_t before shifting, so it's not entirely equivalent. But if phys_addr_t is larger than unsigned long then that cast is important.

Which makes me wonder how/if pfn_to_kaddr() has been working until now for CONFIG_PHYS_ADDR_T_64BIT=y.

cheers
Linus Walleij <linus.walleij@linaro.org> writes:
> Making virt_to_pfn() a static inline taking a strongly typed
> (const void *) makes the contract of passing a pointer of that
> type to the function explicit and exposes any misuse of the
> macro virt_to_pfn() acting polymorphic and accepting many types
> such as (void *), (uintptr_t) or (unsigned long) as arguments
> without warnings.
>
> Move the virt_to_pfn() and related functions below the
> declaration of __pa() so it compiles.
>
> For symmetry do the same with pfn_to_kaddr().
>
> As the file is included right into the linker file, we need
> to surround the functions with ifndef __ASSEMBLY__ so we
> don't cause compilation errors.
>
> The conversion moreover exposes the fact that pmd_page_vaddr()
> was returning an unsigned long rather than a const void * as
> could be expected, so all the sites defining pmd_page_vaddr()
> had to be augmented as well.
...
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index 6a88bfdaa69b..a9515d3d7831 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -60,9 +60,9 @@ static inline pgprot_t pte_pgprot(pte_t pte)
>  }
>
>  #ifndef pmd_page_vaddr
> -static inline unsigned long pmd_page_vaddr(pmd_t pmd)
> +static inline const void *pmd_page_vaddr(pmd_t pmd)
>  {
> -	return ((unsigned long)__va(pmd_val(pmd) & ~PMD_MASKED_BITS));
> +	return (const void *)((unsigned long)__va(pmd_val(pmd) & ~PMD_MASKED_BITS));

This can also just be:

	return __va(pmd_val(pmd) & ~PMD_MASKED_BITS);

I've squashed that in.

cheers
On Tue, Aug 15, 2023 at 9:30 AM Michael Ellerman <mpe@ellerman.id.au> wrote:
> Linus Walleij <linus.walleij@linaro.org> writes:
>
> > -	return ((unsigned long)__va(pmd_val(pmd) & ~PMD_MASKED_BITS));
> > +	return (const void *)((unsigned long)__va(pmd_val(pmd) & ~PMD_MASKED_BITS));
>
> This can also just be:
>
> 	return __va(pmd_val(pmd) & ~PMD_MASKED_BITS);
>
> I've squashed that in.

Oh you applied it, then I don't need to send revised versions, thanks Michael!

Yours, Linus Walleij
Linus Walleij <linus.walleij@linaro.org> writes:
> On Tue, Aug 15, 2023 at 9:30 AM Michael Ellerman <mpe@ellerman.id.au> wrote:
>> Linus Walleij <linus.walleij@linaro.org> writes:
>
>> > -	return ((unsigned long)__va(pmd_val(pmd) & ~PMD_MASKED_BITS));
>> > +	return (const void *)((unsigned long)__va(pmd_val(pmd) & ~PMD_MASKED_BITS));
>>
>> This can also just be:
>>
>> 	return __va(pmd_val(pmd) & ~PMD_MASKED_BITS);
>>
>> I've squashed that in.
>
> Oh you applied it, then I don't need to send revised versions, thanks Michael!

Yeah, it's in my next-test, so I can still change it if needed for a day or two. But if you're happy with me squashing those changes in then that's easy, no need to send a v2.

cheers
On Wed, 09 Aug 2023 10:07:13 +0200, Linus Walleij wrote:
> Making virt_to_pfn() a static inline taking a strongly typed
> (const void *) makes the contract of passing a pointer of that
> type to the function explicit and exposes any misuse of the
> macro virt_to_pfn() acting polymorphic and accepting many types
> such as (void *), (uintptr_t) or (unsigned long) as arguments
> without warnings.
>
> [...]

Applied to powerpc/next.

[1/1] powerpc: Make virt_to_pfn() a static inline
      https://git.kernel.org/powerpc/c/58b6fed89ab0f602de0d143c617c29c3d4c67429

cheers
diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h b/arch/powerpc/include/asm/nohash/32/pgtable.h
index fec56d965f00..d6201b5096b8 100644
--- a/arch/powerpc/include/asm/nohash/32/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/32/pgtable.h
@@ -355,7 +355,7 @@ static inline int pte_young(pte_t pte)
 #define pmd_pfn(pmd)		(pmd_val(pmd) >> PAGE_SHIFT)
 #else
 #define pmd_page_vaddr(pmd)	\
-	((unsigned long)(pmd_val(pmd) & ~(PTE_TABLE_SIZE - 1)))
+	((const void *)(pmd_val(pmd) & ~(PTE_TABLE_SIZE - 1)))
 #define pmd_pfn(pmd)		(__pa(pmd_val(pmd)) >> PAGE_SHIFT)
 #endif
diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
index 287e25864ffa..81c801880933 100644
--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
@@ -127,7 +127,7 @@ static inline pte_t pmd_pte(pmd_t pmd)
 #define pmd_bad(pmd)		(!is_kernel_addr(pmd_val(pmd)) \
 				|| (pmd_val(pmd) & PMD_BAD_BITS))
 #define pmd_present(pmd)	(!pmd_none(pmd))
-#define pmd_page_vaddr(pmd)	(pmd_val(pmd) & ~PMD_MASKED_BITS)
+#define pmd_page_vaddr(pmd)	((const void *)(pmd_val(pmd) & ~PMD_MASKED_BITS))
 extern struct page *pmd_page(pmd_t pmd);
 #define pmd_pfn(pmd)		(page_to_pfn(pmd_page(pmd)))
diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index f2b6bf5687d0..9ee4b6d4a82a 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -9,6 +9,7 @@
 #ifndef __ASSEMBLY__
 #include <linux/types.h>
 #include <linux/kernel.h>
+#include <linux/bug.h>
 #else
 #include <asm/types.h>
 #endif
@@ -119,16 +120,6 @@ extern long long virt_phys_offset;
 #define ARCH_PFN_OFFSET	((unsigned long)(MEMORY_START >> PAGE_SHIFT))
 #endif
 
-#define virt_to_pfn(kaddr)	(__pa(kaddr) >> PAGE_SHIFT)
-#define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
-#define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
-
-#define virt_addr_valid(vaddr)	({					\
-	unsigned long _addr = (unsigned long)vaddr;			\
-	_addr >= PAGE_OFFSET && _addr < (unsigned long)high_memory &&	\
-	pfn_valid(virt_to_pfn(_addr));					\
-})
-
 /*
  * On Book-E parts we need __va to parse the device tree and we can't
  * determine MEMORY_START until then. However we can determine PHYSICAL_START
@@ -233,6 +224,25 @@ extern long long virt_phys_offset;
 #endif
 #endif
 
+#ifndef __ASSEMBLY__
+static inline unsigned long virt_to_pfn(const void *kaddr)
+{
+	return __pa(kaddr) >> PAGE_SHIFT;
+}
+
+static inline const void *pfn_to_kaddr(unsigned long pfn)
+{
+	return (const void *)(((unsigned long)__va(pfn)) << PAGE_SHIFT);
+}
+#endif
+
+#define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
+#define virt_addr_valid(vaddr)	({					\
+	unsigned long _addr = (unsigned long)vaddr;			\
+	_addr >= PAGE_OFFSET && _addr < (unsigned long)high_memory &&	\
+	pfn_valid(virt_to_pfn((void *)_addr));				\
+})
+
 /*
  * Unfortunately the PLT is in the BSS in the PPC32 ELF ABI,
  * and needs to be executable. This means the whole heap ends
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 6a88bfdaa69b..a9515d3d7831 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -60,9 +60,9 @@ static inline pgprot_t pte_pgprot(pte_t pte)
 }
 
 #ifndef pmd_page_vaddr
-static inline unsigned long pmd_page_vaddr(pmd_t pmd)
+static inline const void *pmd_page_vaddr(pmd_t pmd)
 {
-	return ((unsigned long)__va(pmd_val(pmd) & ~PMD_MASKED_BITS));
+	return (const void *)((unsigned long)__va(pmd_val(pmd) & ~PMD_MASKED_BITS));
 }
 #define pmd_page_vaddr pmd_page_vaddr
 #endif
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 7f765d5ad436..efd0ebf70a5e 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -182,7 +182,7 @@ void kvmppc_free_hpt(struct kvm_hpt_info *info)
 	vfree(info->rev);
 	info->rev = NULL;
 	if (info->cma)
-		kvm_free_hpt_cma(virt_to_page(info->virt),
+		kvm_free_hpt_cma(virt_to_page((void *)info->virt),
 				 1 << (info->order - PAGE_SHIFT));
 	else if (info->virt)
 		free_pages(info->virt, info->order - PAGE_SHIFT);
Making virt_to_pfn() a static inline taking a strongly typed
(const void *) makes the contract of passing a pointer of that
type to the function explicit and exposes any misuse of the
macro virt_to_pfn() acting polymorphic and accepting many types
such as (void *), (uintptr_t) or (unsigned long) as arguments
without warnings.

Move the virt_to_pfn() and related functions below the
declaration of __pa() so it compiles.

For symmetry do the same with pfn_to_kaddr().

As the file is included right into the linker file, we need
to surround the functions with ifndef __ASSEMBLY__ so we
don't cause compilation errors.

The conversion moreover exposes the fact that pmd_page_vaddr()
was returning an unsigned long rather than a const void * as
could be expected, so all the sites defining pmd_page_vaddr()
had to be augmented as well.

Finally the KVM code in book3s_64_mmu_hv.c was passing an
unsigned long to virt_to_page() so fix that up with a cast
so the result compiles.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
 arch/powerpc/include/asm/nohash/32/pgtable.h |  2 +-
 arch/powerpc/include/asm/nohash/64/pgtable.h |  2 +-
 arch/powerpc/include/asm/page.h              | 30 ++++++++++++++++++----------
 arch/powerpc/include/asm/pgtable.h           |  4 ++--
 arch/powerpc/kvm/book3s_64_mmu_hv.c          |  2 +-
 5 files changed, 25 insertions(+), 15 deletions(-)
---
base-commit: 06c2afb862f9da8dc5efa4b6076a0e48c3fbaaa5
change-id: 20230808-virt-to-phys-powerpc-634cf7acd39a

Best regards,