| Message ID | 20220902090811.2430228-1-zhao1.liu@linux.intel.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | KVM: SVM: Replace kmap_atomic() with kmap_local_page() |
On 9/2/22 02:08, Zhao Liu wrote:
> From: Zhao Liu <zhao1.liu@intel.com>
>
> The use of kmap_atomic() is being deprecated in favor of
> kmap_local_page()[1].
>
> In arch/x86/kvm/svm/sev.c, the function sev_clflush_pages() doesn't
> need to disable pagefaults and preemption in kmap_atomic(). It can
> simply use kmap_local_page() / kunmap_local() that can instead do the
> mapping / unmapping regardless of the context.
>
> With kmap_local_page(), the mapping is per thread, CPU local and not
> globally visible. Therefore, sev_clflush_pages() is a function where
> the use of kmap_local_page() in place of kmap_atomic() is correctly
> suited.

This changelog is a little on the weak side. You could literally take
any arbitrary call-site and file for kmap_atomic() and slap that
changelog on it. For instance:

	In drivers/target/target_core_sbc.c, the function
	sbc_dif_copy_prot() doesn't need to disable pagefaults and
	preemption in kmap_atomic(). It can simply use
	kmap_local_page() / kunmap_local() that can instead do the
	mapping / unmapping regardless of the context.

	With kmap_local_page(), the mapping is per thread, CPU local
	and not globally visible. Therefore, sbc_dif_copy_prot() is a
	function where the use of kmap_local_page() in place of
	kmap_atomic() is correctly suited.

That's all valid English and there's nothing incorrect in it. But, it
doesn't indicate that any actual analysis was performed. It's utterly
generic. It could literally have been generated by a pretty trivial
script.

It would be great to add at least a small, call-site-specific human
touch to these changelogs.

In this case, saying something about how global the cache flush is
would be a great addition.
On Fri, Sep 02, 2022, Zhao Liu wrote:
> From: Zhao Liu <zhao1.liu@intel.com>
>
> The use of kmap_atomic() is being deprecated in favor of
> kmap_local_page()[1].
>
> In arch/x86/kvm/svm/sev.c, the function sev_clflush_pages() doesn't
> need to disable pagefaults and preemption in kmap_atomic(). It can
> simply use kmap_local_page() / kunmap_local() that can instead do the
> mapping / unmapping regardless of the context.
>
> With kmap_local_page(), the mapping is per thread, CPU local and not
> globally visible. Therefore, sev_clflush_pages() is a function where
> the use of kmap_local_page() in place of kmap_atomic() is correctly
> suited.
>
> Convert the calls of kmap_atomic() / kunmap_atomic() to
> kmap_local_page() / kunmap_local().
>
> [1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com
>
> Suggested-by: Ira Weiny <ira.weiny@intel.com>
> Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
> Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
> ---
> Suggested-by credits:
> Ira: Referred to his task document and review comments.
> Fabio: Referred to his boiler plate commit message.
> ---
>  arch/x86/kvm/svm/sev.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 28064060413a..12747c7bda4e 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -465,9 +465,9 @@ static void sev_clflush_pages(struct page *pages[], unsigned long npages)
>  		return;
>
>  	for (i = 0; i < npages; i++) {
> -		page_virtual = kmap_atomic(pages[i]);
> +		page_virtual = kmap_local_page(pages[i]);
>  		clflush_cache_range(page_virtual, PAGE_SIZE);

SEV is 64-bit only, any reason not to go straight to page_address()?
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 28064060413a..aaf39e3c7bb5 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -457,7 +457,6 @@ static void sev_unpin_memory(struct kvm *kvm, struct page **pages,
 static void sev_clflush_pages(struct page *pages[], unsigned long npages)
 {
-	uint8_t *page_virtual;
 	unsigned long i;

 	if (this_cpu_has(X86_FEATURE_SME_COHERENT) || npages == 0 ||
@@ -465,9 +464,7 @@ static void sev_clflush_pages(struct page *pages[], unsigned long npages)
 		return;

 	for (i = 0; i < npages; i++) {
-		page_virtual = kmap_atomic(pages[i]);
-		clflush_cache_range(page_virtual, PAGE_SIZE);
-		kunmap_atomic(page_virtual);
+		clflush_cache_range(page_address(pages[i]), PAGE_SIZE);
 		cond_resched();
 	}
 }

config KVM_AMD_SEV
	def_bool y
	bool "AMD Secure Encrypted Virtualization (SEV) support"
	depends on KVM_AMD && X86_64   <==================================

> -		kunmap_atomic(page_virtual);
> +		kunmap_local(page_virtual);
>  		cond_resched();
>  	}
>  }
> --
> 2.34.1
>
On 9/2/22 08:15, Sean Christopherson wrote:
>>  	for (i = 0; i < npages; i++) {
>> -		page_virtual = kmap_atomic(pages[i]);
>> +		page_virtual = kmap_local_page(pages[i]);
>>  		clflush_cache_range(page_virtual, PAGE_SIZE);
> SEV is 64-bit only, any reason not to go straight to page_address()?

Yes. page_address() is a hack. People get away with using it, but they
really shouldn't, especially when it is used on pages you didn't
allocate yourself. IOW:

	page = alloc_page(GFP_KERNEL);
	ptr = page_address(page);

is fine. But:

	page = alloc_page(GFP_HIGHUSER);
	ptr = page_address(page);

even on something that's Kconfig'd 64-bit only is a no-no in my book.
The same goes for a generic-looking function like sev_clflush_pages()
where the pages come from who-knows-where.

It's incredibly useful for kernel accesses to random pages to be
bounded explicitly. Keeping the kmap() *API* in place means it can be
used for things other than highmem mappings (like protection keys).
The kmap*() family is a pretty thin wrapper around page_address() on
64-bit most of the time anyway.
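[Editor's note: Dave's contrast above can be sketched as a single kernel-style fragment. This is illustrative pseudocode, not code from the patch; the function name `touch_some_page()` is hypothetical, and the fragment only frames the two access patterns he describes.]

```c
/* Hypothetical helper illustrating the two patterns Dave contrasts. */
static void touch_some_page(void)
{
	/*
	 * Fine: a GFP_KERNEL page always has a permanent kernel
	 * mapping, so page_address() on a page you allocated
	 * yourself is well-defined.
	 */
	struct page *kpage = alloc_page(GFP_KERNEL);
	void *kptr = page_address(kpage);

	memset(kptr, 0, PAGE_SIZE);
	__free_page(kpage);

	/*
	 * Not fine in general: a GFP_HIGHUSER page may have no
	 * permanent kernel mapping on 32-bit. The kmap_local_page()
	 * form works everywhere and keeps the kernel's access to the
	 * page explicitly bounded between map and unmap, which is
	 * the property Dave wants for pages from "who-knows-where".
	 */
	struct page *upage = alloc_page(GFP_HIGHUSER);
	void *uptr = kmap_local_page(upage);	/* thread-local mapping */

	memset(uptr, 0, PAGE_SIZE);		/* bounded access window */
	kunmap_local(uptr);			/* window closed here */
	__free_page(upage);
}
```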
On Fri, Sep 02, 2022 at 08:09:29AM -0700, Dave Hansen wrote:
> This changelog is a little on the weak side. You could literally take
> any arbitrary call-site and file for kmap_atomic() and slap that
> changelog on it. For instance:
>
> 	In drivers/target/target_core_sbc.c, the function
> 	sbc_dif_copy_prot() doesn't need to disable pagefaults and
> 	preemption in kmap_atomic(). It can simply use
> 	kmap_local_page() / kunmap_local() that can instead do the
> 	mapping / unmapping regardless of the context.
>
> 	With kmap_local_page(), the mapping is per thread, CPU local
> 	and not globally visible. Therefore, sbc_dif_copy_prot() is a
> 	function where the use of kmap_local_page() in place of
> 	kmap_atomic() is correctly suited.
>
> That's all valid English and there's nothing incorrect in it. But, it
> doesn't indicate that any actual analysis was performed. It's utterly
> generic. It could literally have been generated by a pretty trivial
> script.
>
> It would be great to add at least a small, call-site-specific human
> touch to these changelogs.
>
> In this case, saying something about how global the cache flush is
> would be a great addition.

Thanks Dave! I'll add more detail to the commit message and avoid
overly templated changelogs.

Zhao
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 28064060413a..12747c7bda4e 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -465,9 +465,9 @@ static void sev_clflush_pages(struct page *pages[], unsigned long npages)
 		return;

 	for (i = 0; i < npages; i++) {
-		page_virtual = kmap_atomic(pages[i]);
+		page_virtual = kmap_local_page(pages[i]);
 		clflush_cache_range(page_virtual, PAGE_SIZE);
-		kunmap_atomic(page_virtual);
+		kunmap_local(page_virtual);
 		cond_resched();
 	}
 }
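[Editor's note: one property of the converted code worth keeping in mind is that kmap_local_page() mappings are stack-like per thread: nested mappings must be released in reverse (LIFO) order of acquisition. The patch above maps only one page at a time, so it is trivially correct; this illustrative sketch (not from the patch) shows what a nested use would have to look like.]

```c
/* Hypothetical nested use of kmap_local_page(); page_a and page_b
 * are assumed to be valid struct page pointers. */
void *a = kmap_local_page(page_a);
void *b = kmap_local_page(page_b);

memcpy(b, a, PAGE_SIZE);	/* use both mappings */

kunmap_local(b);		/* must unmap b before a: LIFO order */
kunmap_local(a);
```

This LIFO constraint is one of the differences from kmap(), whose mappings could be released in any order; kmap_atomic() had the same stack-like rule, so straightforward conversions like this one preserve it naturally.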