Message ID | 20250228102530.1229089-2-vdonnefort@google.com (mailing list archive)
---|---
State | New
Series | Stage-2 huge mappings for pKVM np-guests
On Friday 28 Feb 2025 at 10:25:17 (+0000), Vincent Donnefort wrote:
> diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> index 19c3c631708c..a796e257c41f 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> @@ -219,14 +219,24 @@ static void guest_s2_put_page(void *addr)
>
>  static void clean_dcache_guest_page(void *va, size_t size)
>  {
> -	__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
> -	hyp_fixmap_unmap();
> +	while (size) {

Nit: not a problem at the moment, but this makes me mildly worried: if size
ever became non-page-aligned, could we make the code robust to that?

> +		__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
> +					  PAGE_SIZE);
> +		hyp_fixmap_unmap();
> +		va += PAGE_SIZE;
> +		size -= PAGE_SIZE;
> +	}
>  }
>
>  static void invalidate_icache_guest_page(void *va, size_t size)
>  {
> -	__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
> -	hyp_fixmap_unmap();
> +	while (size) {
> +		__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
> +					       PAGE_SIZE);
> +		hyp_fixmap_unmap();
> +		va += PAGE_SIZE;
> +		size -= PAGE_SIZE;
> +	}
>  }
>
>  int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
> --
> 2.48.1.711.g2feabab25a-goog
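For the nit above, a minimal sketch of one way the loop could tolerate a non-page-aligned size (or va): clamp each iteration to the end of the current page. This is illustration only, not part of the posted patch; it assumes min_t() and offset_in_page() are usable from this file, and that hyp_fixmap_map() returns a pointer at the page offset of the physical address it is given.

static void clean_dcache_guest_page(void *va, size_t size)
{
	while (size) {
		/* Never cross a page boundary, even for unaligned va/size */
		size_t len = min_t(size_t, size,
				   PAGE_SIZE - offset_in_page(va));

		__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)), len);
		hyp_fixmap_unmap();
		va += len;
		size -= len;
	}
}

The same shape would apply to invalidate_icache_guest_page(); with page-aligned inputs it degenerates to the per-PAGE_SIZE loop of the patch.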
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 19c3c631708c..a796e257c41f 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -219,14 +219,24 @@ static void guest_s2_put_page(void *addr)
 
 static void clean_dcache_guest_page(void *va, size_t size)
 {
-	__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
-	hyp_fixmap_unmap();
+	while (size) {
+		__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
+					  PAGE_SIZE);
+		hyp_fixmap_unmap();
+		va += PAGE_SIZE;
+		size -= PAGE_SIZE;
+	}
 }
 
 static void invalidate_icache_guest_page(void *va, size_t size)
 {
-	__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
-	hyp_fixmap_unmap();
+	while (size) {
+		__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
+					       PAGE_SIZE);
+		hyp_fixmap_unmap();
+		va += PAGE_SIZE;
+		size -= PAGE_SIZE;
+	}
 }
 
 int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
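As a side note on shape (a hypothetical sketch, not something the patch proposes): the per-page fixmap walk is now duplicated in both callbacks, so it could be factored into a common helper taking the CMO as a function pointer. guest_page_cmo() below is a made-up name; it assumes both __clean_dcache_guest_page() and __invalidate_icache_guest_page() keep the (void *va, size_t size) signature and can be passed by address.

static void guest_page_cmo(void *va, size_t size,
			   void (*cmo)(void *va, size_t size))
{
	while (size) {
		/* Map one page at a time through the hyp fixmap */
		cmo(hyp_fixmap_map(__hyp_pa(va)), PAGE_SIZE);
		hyp_fixmap_unmap();
		va += PAGE_SIZE;
		size -= PAGE_SIZE;
	}
}

static void clean_dcache_guest_page(void *va, size_t size)
{
	guest_page_cmo(va, size, __clean_dcache_guest_page);
}

static void invalidate_icache_guest_page(void *va, size_t size)
{
	guest_page_cmo(va, size, __invalidate_icache_guest_page);
}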
clean_dcache_guest_page() and invalidate_icache_guest_page() accept a size
as an argument. But they also rely on the fixmap, which can only map a
single PAGE_SIZE page at a time.

With the upcoming stage-2 huge mappings for pKVM np-guests, those callbacks
will get a size > PAGE_SIZE. Loop the CMOs on a PAGE_SIZE basis until the
whole range is done.

Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
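To put a number on "size > PAGE_SIZE" (illustrative values, assuming 4K pages and a PMD-level block mapping; not stated by the patch): a 2 MiB block means one callback invocation now walks 2 MiB / 4 KiB = 512 fixmap map/CMO/unmap cycles.

/*
 * Illustrative arithmetic only (assumed 4K pages, not from the patch):
 * one PMD-level block is 2 MiB, so a single callback invocation
 * performs 2 MiB / 4 KiB = 512 map/CMO/unmap iterations.
 */
#define EXAMPLE_BLOCK_SIZE	SZ_2M
#define EXAMPLE_CMO_ITERS	(EXAMPLE_BLOCK_SIZE / PAGE_SIZE)	/* 512 */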