| Message ID | 20210114003708.3798992-7-seanjc@google.com (mailing list archive) |
| --- | --- |
| State | New, archived |
| Series | KVM: SVM: Misc SEV cleanups |
On 1/13/21 6:37 PM, Sean Christopherson wrote:
> Drop the sev_enabled flag and switch its one user over to sev_active().
> sev_enabled was made redundant with the introduction of sev_status in
> commit b57de6cd1639 ("x86/sev-es: Add SEV-ES Feature Detection").
> sev_enabled and sev_active() are guaranteed to be equivalent, as each is
> true iff 'sev_status & MSR_AMD64_SEV_ENABLED' is true, and are only ever
> written in tandem (ignoring compressed boot's version of sev_status).
>
> Removing sev_enabled avoids confusion over whether it refers to the guest
> or the host, and will also allow KVM to usurp "sev_enabled" for its own
> purposes.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>

> ---
>  arch/x86/include/asm/mem_encrypt.h |  1 -
>  arch/x86/mm/mem_encrypt.c          | 12 +++++-------
>  arch/x86/mm/mem_encrypt_identity.c |  1 -
>  3 files changed, 5 insertions(+), 9 deletions(-)
On 1/13/21 6:37 PM, Sean Christopherson wrote:
> Drop the sev_enabled flag and switch its one user over to sev_active().
> sev_enabled was made redundant with the introduction of sev_status in
> commit b57de6cd1639 ("x86/sev-es: Add SEV-ES Feature Detection").
> sev_enabled and sev_active() are guaranteed to be equivalent, as each is
> true iff 'sev_status & MSR_AMD64_SEV_ENABLED' is true, and are only ever
> written in tandem (ignoring compressed boot's version of sev_status).
>
> Removing sev_enabled avoids confusion over whether it refers to the guest
> or the host, and will also allow KVM to usurp "sev_enabled" for its own
> purposes.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/include/asm/mem_encrypt.h |  1 -
>  arch/x86/mm/mem_encrypt.c          | 12 +++++-------
>  arch/x86/mm/mem_encrypt_identity.c |  1 -
>  3 files changed, 5 insertions(+), 9 deletions(-)

Thanks

Reviewed-by: Brijesh Singh <brijesh.singh@amd.com>
```diff
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 2f62bbdd9d12..88d624499411 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -20,7 +20,6 @@
 
 extern u64 sme_me_mask;
 extern u64 sev_status;
-extern bool sev_enabled;
 
 void sme_encrypt_execute(unsigned long encrypted_kernel_vaddr,
 			 unsigned long decrypted_kernel_vaddr,
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index bc0833713be9..b89bc03c63a2 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -44,8 +44,6 @@ EXPORT_SYMBOL(sme_me_mask);
 DEFINE_STATIC_KEY_FALSE(sev_enable_key);
 EXPORT_SYMBOL_GPL(sev_enable_key);
 
-bool sev_enabled __section(".data");
-
 /* Buffer used for early in-place encryption by BSP, no locking needed */
 static char sme_early_buffer[PAGE_SIZE] __initdata __aligned(PAGE_SIZE);
 
@@ -342,16 +340,16 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
  * up under SME the trampoline area cannot be encrypted, whereas under SEV
  * the trampoline area must be encrypted.
  */
-bool sme_active(void)
-{
-	return sme_me_mask && !sev_enabled;
-}
-
 bool sev_active(void)
 {
 	return sev_status & MSR_AMD64_SEV_ENABLED;
 }
 
+bool sme_active(void)
+{
+	return sme_me_mask && !sev_active();
+}
+
 /* Needs to be called from non-instrumentable code */
 bool noinstr sev_es_active(void)
 {
diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index 6c5eb6f3f14f..0c2759b7f03a 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -545,7 +545,6 @@ void __init sme_enable(struct boot_params *bp)
 
 	/* SEV state cannot be controlled by a command line option */
 	sme_me_mask = me_mask;
-	sev_enabled = true;
 	physical_mask &= ~sme_me_mask;
 	return;
 }
```
Drop the sev_enabled flag and switch its one user over to sev_active().
sev_enabled was made redundant with the introduction of sev_status in
commit b57de6cd1639 ("x86/sev-es: Add SEV-ES Feature Detection").
sev_enabled and sev_active() are guaranteed to be equivalent, as each is
true iff 'sev_status & MSR_AMD64_SEV_ENABLED' is true, and are only ever
written in tandem (ignoring compressed boot's version of sev_status).

Removing sev_enabled avoids confusion over whether it refers to the guest
or the host, and will also allow KVM to usurp "sev_enabled" for its own
purposes.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/mem_encrypt.h |  1 -
 arch/x86/mm/mem_encrypt.c          | 12 +++++-------
 arch/x86/mm/mem_encrypt_identity.c |  1 -
 3 files changed, 5 insertions(+), 9 deletions(-)
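The equivalence the commit message relies on can be seen in a standalone sketch. The snippet below is not kernel code and is not part of the patch; it simply models the post-patch helpers in a self-contained userspace program, assuming MSR_AMD64_SEV_ENABLED is bit 0 of the SEV status MSR (as in the kernel) and using an illustrative, made-up C-bit position for sme_me_mask.

```c
/*
 * Standalone sketch (userspace, not kernel code) modeling the invariant
 * behind this patch: sev_active() is true exactly when the
 * MSR_AMD64_SEV_ENABLED bit is set in sev_status, which was the only
 * condition under which the removed sev_enabled flag was ever set.
 * The values assigned in main() are illustrative, not read from hardware.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MSR_AMD64_SEV_ENABLED	(1ULL << 0)	/* bit 0 of the SEV status MSR */

static uint64_t sme_me_mask;	/* non-zero once memory encryption is enabled */
static uint64_t sev_status;	/* models the cached SEV status MSR value */

static bool sev_active(void)
{
	return sev_status & MSR_AMD64_SEV_ENABLED;
}

static bool sme_active(void)
{
	/* SME is active only when encryption is on and we are not a SEV guest */
	return sme_me_mask && !sev_active();
}

int main(void)
{
	/* Case 1: bare-metal SME host -- mask set, SEV status bit clear */
	sme_me_mask = 1ULL << 47;	/* illustrative C-bit position */
	sev_status = 0;
	printf("SME host:  sme_active=%d sev_active=%d\n", sme_active(), sev_active());

	/* Case 2: SEV guest -- mask set and SEV status bit set */
	sev_status = MSR_AMD64_SEV_ENABLED;
	printf("SEV guest: sme_active=%d sev_active=%d\n", sme_active(), sev_active());

	return 0;
}
```

Run on its own, the first case prints sme_active=1 sev_active=0 and the second prints sme_active=0 sev_active=1, which is why sme_active() can be expressed in terms of sev_active() rather than the dropped sev_enabled flag with no functional change.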