Message ID: 20180612143915.68065-10-kirill.shutemov@linux.intel.com (mailing list archive)
State: New, archived
> +int page_keyid(const struct page *page)
> +{
> +	if (mktme_status != MKTME_ENABLED)
> +		return 0;
> +
> +	return lookup_page_ext(page)->keyid;
> +}
> +EXPORT_SYMBOL(page_keyid);

Please start using a proper X86_FEATURE_* flag for this.  It will give
you all the fancy static patching that you are missing by doing it
this way.
On Wed, Jun 13, 2018 at 06:20:10PM +0000, Dave Hansen wrote:
> > +int page_keyid(const struct page *page)
> > +{
> > +	if (mktme_status != MKTME_ENABLED)
> > +		return 0;
> > +
> > +	return lookup_page_ext(page)->keyid;
> > +}
> > +EXPORT_SYMBOL(page_keyid);
>
> Please start using a proper X86_FEATURE_* flag for this.  It will give
> you all the fancy static patching that you are missing by doing it
> this way.

There's no MKTME CPU feature.

Well, I guess we can invent syntactic one or just use static key
directly.

Let's see how it behaves performance-wise before optimizing this.
On 06/18/2018 03:07 AM, Kirill A. Shutemov wrote:
> On Wed, Jun 13, 2018 at 06:20:10PM +0000, Dave Hansen wrote:
>>> +int page_keyid(const struct page *page)
>>> +{
>>> +	if (mktme_status != MKTME_ENABLED)
>>> +		return 0;
>>> +
>>> +	return lookup_page_ext(page)->keyid;
>>> +}
>>> +EXPORT_SYMBOL(page_keyid);
>> Please start using a proper X86_FEATURE_* flag for this.  It will give
>> you all the fancy static patching that you are missing by doing it
>> this way.
> There's no MKTME CPU feature.

Right.  We have tons of synthetic features that have no basis in the
hardware CPUID feature.

> Well, I guess we can invent syntactic one or just use static key directly.

Did you mean synthetic?

> Let's see how it behaves performance-wise before optimizing this.

It's not an optimization, it's how we do things in arch/x86, and it has
a *ton* of optimization infrastructure behind it that you get for free
if you use it.

I'm just trying to save Thomas's tired fingers from having to say the
same thing in a week or two when he looks at this.
On Mon, Jun 18, 2018 at 12:54:29PM +0000, Dave Hansen wrote:
> On 06/18/2018 03:07 AM, Kirill A. Shutemov wrote:
> > On Wed, Jun 13, 2018 at 06:20:10PM +0000, Dave Hansen wrote:
> >>> +int page_keyid(const struct page *page)
> >>> +{
> >>> +	if (mktme_status != MKTME_ENABLED)
> >>> +		return 0;
> >>> +
> >>> +	return lookup_page_ext(page)->keyid;
> >>> +}
> >>> +EXPORT_SYMBOL(page_keyid);
> >> Please start using a proper X86_FEATURE_* flag for this.  It will give
> >> you all the fancy static patching that you are missing by doing it
> >> this way.
> > There's no MKTME CPU feature.
>
> Right.  We have tons of synthetic features that have no basis in the
> hardware CPUID feature.
>
> > Well, I guess we can invent syntactic one or just use static key directly.
>
> Did you mean synthetic?

Right.

> > Let's see how it behaves performance-wise before optimizing this.
>
> It's not an optimization, it's how we do things in arch/x86, and it has
> a *ton* of optimization infrastructure behind it that you get for free
> if you use it.
>
> I'm just trying to save Thomas's tired fingers from having to say the
> same thing in a week or two when he looks at this.

Okay, I'll look into this.
On Mon, Jun 18, 2018 at 12:54:29PM +0000, Dave Hansen wrote:
> On 06/18/2018 03:07 AM, Kirill A. Shutemov wrote:
> > On Wed, Jun 13, 2018 at 06:20:10PM +0000, Dave Hansen wrote:
> >>> +int page_keyid(const struct page *page)
> >>> +{
> >>> +	if (mktme_status != MKTME_ENABLED)
> >>> +		return 0;
> >>> +
> >>> +	return lookup_page_ext(page)->keyid;
> >>> +}
> >>> +EXPORT_SYMBOL(page_keyid);
> >> Please start using a proper X86_FEATURE_* flag for this.  It will give
> >> you all the fancy static patching that you are missing by doing it
> >> this way.
> > There's no MKTME CPU feature.
>
> Right.  We have tons of synthetic features that have no basis in the
> hardware CPUID feature.

I've tried the approach, but it doesn't fit here.

We enable MKTME relatively late in the boot process -- after page_ext,
since page_keyid() depends on it.  Enabling it earlier would make
page_keyid() return garbage.  By the time page_ext is initialized, CPU
feature setup is already done and setup_force_cpu_cap() doesn't do
anything.

I've implemented the enabling with a static key instead.
diff --git a/arch/x86/include/asm/mktme.h b/arch/x86/include/asm/mktme.h
index 08f613953207..0fe0db424e48 100644
--- a/arch/x86/include/asm/mktme.h
+++ b/arch/x86/include/asm/mktme.h
@@ -2,6 +2,7 @@
 #define _ASM_X86_MKTME_H

 #include <linux/types.h>
+#include <linux/page_ext.h>

 struct vm_area_struct;

@@ -16,6 +17,11 @@ bool vma_is_encrypted(struct vm_area_struct *vma);
 #define vma_keyid vma_keyid
 int vma_keyid(struct vm_area_struct *vma);

+extern struct page_ext_operations page_mktme_ops;
+
+#define page_keyid page_keyid
+int page_keyid(const struct page *page);
+
 #else
 #define mktme_keyid_mask	((phys_addr_t)0)
 #define mktme_nr_keyids	0
diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index 7555b48803a8..39af59487d5f 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -19,6 +19,7 @@ struct page;

 #include <linux/range.h>
+#include <asm/mktme.h>
 extern struct range pfn_mapped[];
 extern int nr_pfn_mapped;

diff --git a/arch/x86/mm/mktme.c b/arch/x86/mm/mktme.c
index 3b2f28a21d99..b02d5b9d4339 100644
--- a/arch/x86/mm/mktme.c
+++ b/arch/x86/mm/mktme.c
@@ -5,6 +5,15 @@ phys_addr_t mktme_keyid_mask;
 int mktme_nr_keyids;
 int mktme_keyid_shift;

+int page_keyid(const struct page *page)
+{
+	if (mktme_status != MKTME_ENABLED)
+		return 0;
+
+	return lookup_page_ext(page)->keyid;
+}
+EXPORT_SYMBOL(page_keyid);
+
 bool vma_is_encrypted(struct vm_area_struct *vma)
 {
 	return pgprot_val(vma->vm_page_prot) & mktme_keyid_mask;
@@ -20,3 +29,15 @@ int vma_keyid(struct vm_area_struct *vma)
 	prot = pgprot_val(vma->vm_page_prot);
 	return (prot & mktme_keyid_mask) >> mktme_keyid_shift;
 }
+
+static bool need_page_mktme(void)
+{
+	/* Make sure keyid doesn't collide with extended page flags */
+	BUILD_BUG_ON(__NR_PAGE_EXT_FLAGS > 16);
+
+	return true;
+}
+
+struct page_ext_operations page_mktme_ops = {
+	.need = need_page_mktme,
+};
diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index f84f167ec04c..d9c5aae9523f 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -23,6 +23,7 @@ enum page_ext_flags {
 	PAGE_EXT_YOUNG,
 	PAGE_EXT_IDLE,
 #endif
+	__NR_PAGE_EXT_FLAGS
 };

 /*
@@ -33,7 +34,15 @@ enum page_ext_flags {
  * then the page_ext for pfn always exists.
  */
 struct page_ext {
-	unsigned long flags;
+	union {
+		unsigned long flags;
+#ifdef CONFIG_X86_INTEL_MKTME
+		struct {
+			unsigned short __pad;
+			unsigned short keyid;
+		};
+#endif
+	};
 };

 extern void pgdat_page_ext_init(struct pglist_data *pgdat);
diff --git a/mm/page_ext.c b/mm/page_ext.c
index a9826da84ccb..036658229842 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -68,6 +68,9 @@ static struct page_ext_operations *page_ext_ops[] = {
 #if defined(CONFIG_IDLE_PAGE_TRACKING) && !defined(CONFIG_64BIT)
 	&page_idle_ops,
 #endif
+#ifdef CONFIG_X86_INTEL_MKTME
+	&page_mktme_ops,
+#endif
 };

 static unsigned long total_usage;
Store KeyID in bits 31:16 of extended page flags. These bits are unused.

We don't yet set KeyID for the page. It will come in the following
patch that implements prep_encrypted_page(). All pages have KeyID-0 for
now.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/include/asm/mktme.h |  6 ++++++
 arch/x86/include/asm/page.h  |  1 +
 arch/x86/mm/mktme.c          | 21 +++++++++++++++++++++
 include/linux/page_ext.h     | 11 ++++++++++-
 mm/page_ext.c                |  3 +++
 5 files changed, 41 insertions(+), 1 deletion(-)