
[v2] KVM: SVM: Replace kmap_atomic() with kmap_local_page()

Message ID 20220928092748.463631-1-zhao1.liu@linux.intel.com (mailing list archive)
State New, archived

Commit Message

Zhao Liu Sept. 28, 2022, 9:27 a.m. UTC
From: Zhao Liu <zhao1.liu@intel.com>

The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1].

The main difference between kmap_atomic() and kmap_local_page() is that
the latter doesn't disable page faults or preemption.

There are two reasons kmap_local_page() can be used here:
1. SEV is 64-bit only, and on 64-bit kmap_local_page() doesn't disable
migration. That is fine here because clflush_cache_range() flushes with
the CLFLUSHOPT instruction, and on x86 CLFLUSHOPT is not CPU-local: it
flushes the page out of the entire cache hierarchy on all CPUs (APM
volume 3, chapter 3, CLFLUSHOPT). So there's no need to disable
preemption to keep the flush CPU-local.
2. clflush_cache_range() doesn't need page faults disabled, and the
mapping remains valid even if the task sleeps. The same holds across
sched out/in when the task is preempted.

In addition, though kmap_local_page() is a thin wrapper around
page_address() on 64-bit, kmap_local_page() should still be used here in
preference to page_address(), since page_address() isn't suitable in a
generic function like sev_clflush_pages(), where the allocation source
of the passed-in page can't easily be determined. Keeping the kmap* API
in place also means it can be used for things other than highmem
mappings[2].

Therefore, sev_clflush_pages() should use kmap_local_page() in place of
kmap_atomic().

Convert the calls of kmap_atomic() / kunmap_atomic() to kmap_local_page() /
kunmap_local().

[1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com
[2]: https://lore.kernel.org/lkml/5d667258-b58b-3d28-3609-e7914c99b31b@intel.com/

Suggested-by: Dave Hansen <dave.hansen@intel.com>
Suggested-by: Ira Weiny <ira.weiny@intel.com>
Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
---
Suggested by credits:
  Dave: Referred to his explanation about cache flush and usage of
        page_address().
  Ira: Referred to his task document, review comments and explanation about
       cache flush.
  Fabio: Referred to his boilerplate commit message.
---
Changes since v1:
  * Add the explanation of global cache flush for sev_clflush_pages() in commit
    message.
  * Add the explanation why not use page_address() directly.
---
 arch/x86/kvm/svm/sev.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Sean Christopherson Oct. 6, 2022, 12:14 a.m. UTC | #1
On Wed, Sep 28, 2022, Zhao Liu wrote:
> From: Zhao Liu <zhao1.liu@intel.com>
> 
> The use of kmap_atomic() is being deprecated in favor of
> kmap_local_page()[1].
> 
> The main difference between kmap_atomic() and kmap_local_page() is the
> latter allows pagefaults and preemption.

Uber nit, I would phrase this as saying that local mappings don't disable
page faults and preemption, which is slightly different than stating that they
allow pagefaults/preemption.  E.g. if preemption is already disabled.

> There're 2 reasons we can use kmap_local_page() here:
> 1. SEV is 64-bit only and kmap_locla_page() doesn't disable migration in

Nit, s/kmap_locla_page/kmap_local_page

For future reference, even better would be to use human language after "introducing"
the functions, e.g.

  The main difference between atomic and local mappings is that local
  mappings don't disable page faults or preemption.

Obviously that doesn't magically prevent typos, but it does make the changelog
easier to read (IMO).

> this case, but here the function clflush_cache_range() uses CLFLUSHOPT
> instruction to flush, and on x86 CLFLUSHOPT is not CPU-local and flushes
> the page out of the entire cache hierarchy on all CPUs (APM volume 3,
> chapter 3, CLFLUSHOPT). So there's no need to disable preemption to ensure
> CPU-local.
> 2. clflush_cache_range() doesn't need to disable pagefault and the mapping
> is still valid even if sleeps. This is also true for sched out/in when
> preempted.
> 
> In addition, though kmap_local_page() is a thin wrapper around
> page_address() on 64-bit, kmap_local_page() should still be used here in
> preference to page_address() since page_address() isn't suitable to be used
> in a generic function (like sev_clflush_pages()) where the page passed in
> is not easy to determine the source of allocation. Keeping the kmap* API in
> place means it can be used for things other than highmem mappings[2].
> 
> Therefore, sev_clflush_pages() is a function that should use
> kmap_local_page() in place of kmap_atomic().
> 
> Convert the calls of kmap_atomic() / kunmap_atomic() to kmap_local_page() /
> kunmap_local().
> 
> [1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com
> [2]: https://lore.kernel.org/lkml/5d667258-b58b-3d28-3609-e7914c99b31b@intel.com/
> 
> Suggested-by: Dave Hansen <dave.hansen@intel.com>
> Suggested-by: Ira Weiny <ira.weiny@intel.com>
> Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
> Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
> ---

No need to send a v3, the above are all the nittiest of nits.

Reviewed-by: Sean Christopherson <seanjc@google.com>
Zhao Liu Oct. 6, 2022, 12:34 p.m. UTC | #2
On Thu, Oct 06, 2022 at 12:14:17AM +0000, Sean Christopherson wrote:
> Date: Thu, 6 Oct 2022 00:14:17 +0000
> From: Sean Christopherson <seanjc@google.com>
> Subject: Re: [PATCH v2] KVM: SVM: Replace kmap_atomic() with
>  kmap_local_page()
> 
> On Wed, Sep 28, 2022, Zhao Liu wrote:
> > From: Zhao Liu <zhao1.liu@intel.com>
> > 
> > The use of kmap_atomic() is being deprecated in favor of
> > kmap_local_page()[1].
> > 
> > The main difference between kmap_atomic() and kmap_local_page() is the
> > latter allows pagefaults and preemption.
> 
> Uber nit, I would phrase this as saying that local mappings don't disable
> page faults and preemption, which is slightly different than stating that they
> allow pagefaults/preemption.  E.g. if preemption is already disabled.
> 
> > There're 2 reasons we can use kmap_local_page() here:
> > 1. SEV is 64-bit only and kmap_locla_page() doesn't disable migration in
> 
> Nit, s/kmap_locla_page/kmap_local_page
> 
> For future reference, even better would be to use human language after "introducing"
> the functions, e.g.
> 
>   The main difference between atomic and local mappings is that local
>   mappings don't disable page faults or preemption.
> 
> Obviously that doesn't magically prevent typos, but it does make the changelog
> easier to read (IMO).
> 
> > this case, but here the function clflush_cache_range() uses CLFLUSHOPT
> > instruction to flush, and on x86 CLFLUSHOPT is not CPU-local and flushes
> > the page out of the entire cache hierarchy on all CPUs (APM volume 3,
> > chapter 3, CLFLUSHOPT). So there's no need to disable preemption to ensure
> > CPU-local.
> > 2. clflush_cache_range() doesn't need to disable pagefault and the mapping
> > is still valid even if sleeps. This is also true for sched out/in when
> > preempted.
> > 
> > In addition, though kmap_local_page() is a thin wrapper around
> > page_address() on 64-bit, kmap_local_page() should still be used here in
> > preference to page_address() since page_address() isn't suitable to be used
> > in a generic function (like sev_clflush_pages()) where the page passed in
> > is not easy to determine the source of allocation. Keeping the kmap* API in
> > place means it can be used for things other than highmem mappings[2].
> > 
> > Therefore, sev_clflush_pages() is a function that should use
> > kmap_local_page() in place of kmap_atomic().
> > 
> > Convert the calls of kmap_atomic() / kunmap_atomic() to kmap_local_page() /
> > kunmap_local().
> > 
> > [1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com
> > [2]: https://lore.kernel.org/lkml/5d667258-b58b-3d28-3609-e7914c99b31b@intel.com/
> > 
> > Suggested-by: Dave Hansen <dave.hansen@intel.com>
> > Suggested-by: Ira Weiny <ira.weiny@intel.com>
> > Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
> > Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
> > ---
> 
> No need to send a v3, the above are all the nittiest of nits.
Thanks Sean! I'll pay more attention to these next time.

Zhao
>
> Reviewed-by: Sean Christopherson <seanjc@google.com>
Sean Christopherson Dec. 2, 2022, 7:21 p.m. UTC | #3
On Wed, Sep 28, 2022, Zhao Liu wrote:
> From: Zhao Liu <zhao1.liu@intel.com>
> 
> The use of kmap_atomic() is being deprecated in favor of
> kmap_local_page()[1].
> 
> The main difference between kmap_atomic() and kmap_local_page() is the
> latter allows pagefaults and preemption.
> 
> There're 2 reasons we can use kmap_local_page() here:
> 1. SEV is 64-bit only and kmap_locla_page() doesn't disable migration in
> this case, but here the function clflush_cache_range() uses CLFLUSHOPT
> instruction to flush, and on x86 CLFLUSHOPT is not CPU-local and flushes
> the page out of the entire cache hierarchy on all CPUs (APM volume 3,
> chapter 3, CLFLUSHOPT). So there's no need to disable preemption to ensure
> CPU-local.
> 2. clflush_cache_range() doesn't need to disable pagefault and the mapping
> is still valid even if sleeps. This is also true for sched out/in when
> preempted.
> 
> In addition, though kmap_local_page() is a thin wrapper around
> page_address() on 64-bit, kmap_local_page() should still be used here in
> preference to page_address() since page_address() isn't suitable to be used
> in a generic function (like sev_clflush_pages()) where the page passed in
> is not easy to determine the source of allocation. Keeping the kmap* API in
> place means it can be used for things other than highmem mappings[2].
> 
> Therefore, sev_clflush_pages() is a function that should use
> kmap_local_page() in place of kmap_atomic().
> 
> Convert the calls of kmap_atomic() / kunmap_atomic() to kmap_local_page() /
> kunmap_local().
> 
> [1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com
> [2]: https://lore.kernel.org/lkml/5d667258-b58b-3d28-3609-e7914c99b31b@intel.com/
> 
> Suggested-by: Dave Hansen <dave.hansen@intel.com>
> Suggested-by: Ira Weiny <ira.weiny@intel.com>
> Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
> Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
> ---

Merged to kvm/queue, thanks!

https://lore.kernel.org/all/Y4lHxds8pvBhxXFX@google.com

Patch

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 28064060413a..12747c7bda4e 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -465,9 +465,9 @@  static void sev_clflush_pages(struct page *pages[], unsigned long npages)
 		return;
 
 	for (i = 0; i < npages; i++) {
-		page_virtual = kmap_atomic(pages[i]);
+		page_virtual = kmap_local_page(pages[i]);
 		clflush_cache_range(page_virtual, PAGE_SIZE);
-		kunmap_atomic(page_virtual);
+		kunmap_local(page_virtual);
 		cond_resched();
 	}
 }