| Message ID | 20180510162347.3858-2-steve.capper@arm.com (mailing list archive) |
|---|---|
| State | New, archived |
[+Christoffer]

Hi Steve,

On 10/05/18 17:23, Steve Capper wrote:
> We assume that the direct linear map ends at ~0 in the KVM HYP map
> intersection checking code. This assumption will become invalid later on
> for arm64 when the address space of the kernel is re-arranged.
>
> This patch introduces a new constant PAGE_OFFSET_END for both arm and
> arm64 and defines it to be ~0UL.
>
> Signed-off-by: Steve Capper <steve.capper@arm.com>
> ---
>  arch/arm/include/asm/memory.h   | 1 +
>  arch/arm64/include/asm/memory.h | 1 +
>  virt/kvm/arm/mmu.c              | 4 ++--
>  3 files changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
> index ed8fd0d19a3e..45c211fd50da 100644
> --- a/arch/arm/include/asm/memory.h
> +++ b/arch/arm/include/asm/memory.h
> @@ -24,6 +24,7 @@
>
>  /* PAGE_OFFSET - the virtual address of the start of the kernel image */
>  #define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)
> +#define PAGE_OFFSET_END		(~0UL)
>
>  #ifdef CONFIG_MMU
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 49d99214f43c..c5617cbbf1ff 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -61,6 +61,7 @@
>  					(UL(1) << VA_BITS) + 1)
>  #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
>  					(UL(1) << (VA_BITS - 1)) + 1)
> +#define PAGE_OFFSET_END		(~0UL)
>  #define KIMAGE_VADDR		(MODULES_END)
>  #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
>  #define MODULES_VADDR		(VA_START + KASAN_SHADOW_SIZE)
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 7f6a944db23d..22af347d65f1 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1927,10 +1927,10 @@ int kvm_mmu_init(void)
>  	kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
>  	kvm_debug("HYP VA range: %lx:%lx\n",
>  		  kern_hyp_va(PAGE_OFFSET),
> -		  kern_hyp_va((unsigned long)high_memory - 1));
> +		  kern_hyp_va(PAGE_OFFSET_END));
>
>  	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
> -	    hyp_idmap_start < kern_hyp_va((unsigned long)high_memory - 1) &&
> +	    hyp_idmap_start < kern_hyp_va(PAGE_OFFSET_END) &&

This doesn't feel right to me now that we have the HYP randomization
code merged. The way kern_hyp_va works now is only valid for addresses
between VA(memblock_start_of_DRAM()) and high_memory.

I fear that you could trigger the failing condition below as you
evaluate the idmap address against something that is now not a HYP VA.

>  	    hyp_idmap_start != (unsigned long)__hyp_idmap_text_start) {
>  		/*
>  		 * The idmap page is intersecting with the VA space,

I'd appreciate it if you could keep me cc'd on this series.

Thanks,

	M.
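To illustrate Marc's concern, here is a minimal userspace sketch. It is an
assumption-laden model, not the kernel's kern_hyp_va (which patches its mask
and offset at boot): the mask width, the hyp_va_offset value, and the toy_*
names are invented for the example, and a 64-bit unsigned long is assumed.
The point it demonstrates is that the translation is only meaningful for
linear-map addresses backed by RAM, so applying it to PAGE_OFFSET_END (~0UL),
which lies above high_memory, yields a value with no defined relationship to
the HYP layout:

	#include <stdio.h>

	/* Illustrative model only; NOT the kernel's implementation. */
	#define VA_BITS		48
	#define HYP_VA_MASK	((1UL << (VA_BITS - 1)) - 1)

	/* Stand-in for the randomized offset patched in at boot. */
	static const unsigned long hyp_va_offset = 0x2000000000UL;

	/* Only meaningful for kernel VAs between
	 * VA(memblock_start_of_DRAM()) and high_memory; anything else
	 * produces an arbitrary number. */
	static unsigned long toy_kern_hyp_va(unsigned long kva)
	{
		return (kva & HYP_VA_MASK) + hyp_va_offset;
	}

	int main(void)
	{
		/* ~0UL lies far above high_memory, so this result is not
		 * a HYP VA; comparing hyp_idmap_start against it can make
		 * the intersection check fire (or not) at random. */
		printf("toy_kern_hyp_va(~0UL) = 0x%lx\n",
		       toy_kern_hyp_va(~0UL));
		return 0;
	}

With this model, the garbage upper bound in the quoted mmu.c hunk can land
anywhere relative to hyp_idmap_start, which is exactly the spurious failure
Marc describes.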
On Thu, May 10, 2018 at 06:11:35PM +0100, Marc Zyngier wrote:
> [+Christoffer]
>
> Hi Steve,

Hi Marc,

> On 10/05/18 17:23, Steve Capper wrote:

[...]

> >  	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
> > -	    hyp_idmap_start < kern_hyp_va((unsigned long)high_memory - 1) &&
> > +	    hyp_idmap_start < kern_hyp_va(PAGE_OFFSET_END) &&
>
> This doesn't feel right to me now that we have the HYP randomization
> code merged. The way kern_hyp_va works now is only valid for addresses
> between VA(memblock_start_of_DRAM()) and high_memory.
>
> I fear that you could trigger the failing condition below as you
> evaluate the idmap address against something that is now not a HYP VA.

Thanks! Yes, this patch is completely spurious; apologies, I think I made
a mistake rebasing my V2 series on top of HASLR (originally I replaced
~0LL with PAGE_OFFSET_END in V1). I will drop this patch from the next
version of the series.

> I'd appreciate it if you could keep me cc'd on this series.

Apologies, I'll be much more careful with git send-email.

Cheers,
--
Steve

IMPORTANT NOTICE: The contents of this email and any attachments are
confidential and may also be privileged. If you are not the intended
recipient, please notify the sender immediately and do not disclose the
contents to any other person, use it for any purpose, or store or copy
the information in any medium. Thank you.
On Fri, May 11, 2018 at 10:46:28AM +0100, Steve Capper wrote:
> On Thu, May 10, 2018 at 06:11:35PM +0100, Marc Zyngier wrote:
> > [+Christoffer]
> >
> > Hi Steve,
>
> Hi Marc,
>
> > On 10/05/18 17:23, Steve Capper wrote:

[...]

> > I'd appreciate it if you could keep me cc'd on this series.
>
> Apologies, I'll be much more careful with git send-email.
>
> Cheers,
> --
> Steve
> IMPORTANT NOTICE: The contents of this email and any attachments are
> confidential and may also be privileged. If you are not the intended
> recipient, please notify the sender immediately and do not disclose the
> contents to any other person, use it for any purpose, or store or copy
> the information in any medium. Thank you.

I will also be more careful with my email client; please ignore this
disclaimer.
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index ed8fd0d19a3e..45c211fd50da 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -24,6 +24,7 @@

 /* PAGE_OFFSET - the virtual address of the start of the kernel image */
 #define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)
+#define PAGE_OFFSET_END		(~0UL)

 #ifdef CONFIG_MMU

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 49d99214f43c..c5617cbbf1ff 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -61,6 +61,7 @@
 					(UL(1) << VA_BITS) + 1)
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 					(UL(1) << (VA_BITS - 1)) + 1)
+#define PAGE_OFFSET_END		(~0UL)
 #define KIMAGE_VADDR		(MODULES_END)
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
 #define MODULES_VADDR		(VA_START + KASAN_SHADOW_SIZE)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 7f6a944db23d..22af347d65f1 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1927,10 +1927,10 @@ int kvm_mmu_init(void)
 	kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
 	kvm_debug("HYP VA range: %lx:%lx\n",
 		  kern_hyp_va(PAGE_OFFSET),
-		  kern_hyp_va((unsigned long)high_memory - 1));
+		  kern_hyp_va(PAGE_OFFSET_END));

 	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
-	    hyp_idmap_start < kern_hyp_va((unsigned long)high_memory - 1) &&
+	    hyp_idmap_start < kern_hyp_va(PAGE_OFFSET_END) &&
 	    hyp_idmap_start != (unsigned long)__hyp_idmap_text_start) {
 		/*
 		 * The idmap page is intersecting with the VA space,
We assume that the direct linear map ends at ~0 in the KVM HYP map
intersection checking code. This assumption will become invalid later on
for arm64 when the address space of the kernel is re-arranged.

This patch introduces a new constant PAGE_OFFSET_END for both arm and
arm64 and defines it to be ~0UL.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm/include/asm/memory.h   | 1 +
 arch/arm64/include/asm/memory.h | 1 +
 virt/kvm/arm/mmu.c              | 4 ++--
 3 files changed, 4 insertions(+), 2 deletions(-)
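As a footnote to the commit message: what the symbolic constant buys is that
range checks against the linear map no longer hard-code where it ends. A
minimal userspace sketch, assuming VA_BITS = 48 (the PAGE_OFFSET value below
is an example, and in_linear_map is a hypothetical helper mirroring the shape
of the kvm_mmu_init() check, not kernel code):

	#include <stdbool.h>
	#include <stdio.h>

	/* Example values; the kernel derives these from its config. */
	#define PAGE_OFFSET	0xffff800000000000UL
	#define PAGE_OFFSET_END	(~0UL)	/* today: top of the VA space */

	/* Hypothetical helper with the shape of the intersection check. */
	static bool in_linear_map(unsigned long va)
	{
		return va >= PAGE_OFFSET && va < PAGE_OFFSET_END;
	}

	int main(void)
	{
		printf("%d\n", in_linear_map(0xffffc00000000000UL)); /* 1 */
		printf("%d\n", in_linear_map(0x0000000000001000UL)); /* 0 */
		return 0;
	}

A later VA-layout rework would then only redefine PAGE_OFFSET_END, leaving
callers of such checks untouched; that was the intent here, even though this
particular patch was dropped.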