
[v5,02/10] x86/mm: Avoid hard-coding PAT in get_page_from_l1e()

Message ID 0026e56a0c91cb0dde9fe19200f473d720a9a950.1671497984.git.demi@invisiblethingslab.com (mailing list archive)
State Superseded
Series: Make PAT handling less brittle

Commit Message

Demi Marie Obenour Dec. 20, 2022, 1:07 a.m. UTC
get_page_from_l1e() relied on Xen's choice of PAT, which is brittle in
the face of future PAT changes.  Instead, compute the actual cacheability
used by the CPU and switch on that, as this will work no matter what PAT
Xen uses.

No functional change intended.  This code is itself questionable and may
be removed in the future, but removing it would be an observable
behavior change and so is out of scope for this patch series.

Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
---
Changes since v4:
- Do not add new pte_flags_to_cacheability() helper, as this code may be
  removed in the near future and so adding a new helper for it is a bad
  idea.
- Do not BUG() in the event of an unexpected cacheability.  This cannot
  happen, but it is simpler to force such types to UC than to prove that
  the BUG() is not reachable.

Changes since v3:
- Compute and use the actual cacheability as seen by the processor.

Changes since v2:
- Improve commit message.
---
 xen/arch/x86/mm.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)
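
As background for the new switch expression in the patch below: a PTE's
PWT/PCD/PAT bits form a 3-bit index into the 8-byte PAT MSR, and the byte at
that index is the memory type the CPU actually uses.  The following
stand-alone C sketch (illustrative only, not part of the patch) mimics that
lookup.  It uses the architectural reset-default PAT value rather than Xen's
XEN_MSR_PAT, and the pte_flags_to_cacheattr()/X86_MT_* names merely mirror
those used in the patch.

/* Illustrative sketch: how PWT/PCD/PAT bits select a memory type from PAT.
 * EXAMPLE_MSR_PAT is the architectural reset default, not necessarily the
 * value Xen programs into XEN_MSR_PAT. */
#include <stdint.h>
#include <stdio.h>

#define _PAGE_PWT  (1u << 3)    /* page-level write-through */
#define _PAGE_PCD  (1u << 4)    /* page-level cache disable */
#define _PAGE_PAT  (1u << 7)    /* PAT bit in a 4K PTE */

/* x86 memory-type encodings as stored in PAT entries. */
#define X86_MT_UC   0x00
#define X86_MT_WC   0x01
#define X86_MT_WT   0x04
#define X86_MT_WP   0x05
#define X86_MT_WB   0x06
#define X86_MT_UCM  0x07        /* UC- */

/* PAT after reset: WB, WT, UC-, UC, WB, WT, UC-, UC. */
static const uint64_t EXAMPLE_MSR_PAT = 0x0007040600070406ULL;

/* PAT index = (PAT << 2) | (PCD << 1) | PWT, extracted from the PTE flags. */
static unsigned int pte_flags_to_cacheattr(unsigned int flags)
{
    return ((flags >> 5) & 4) | ((flags >> 3) & 3);
}

/* The lookup the patch performs: pick the PAT byte for this PTE. */
static unsigned int effective_memory_type(unsigned int l1f)
{
    unsigned int idx = pte_flags_to_cacheattr(l1f);

    return 0xFF & (unsigned int)(EXAMPLE_MSR_PAT >> (8 * idx));
}

int main(void)
{
    /* PWT alone selects PAT entry 1, i.e. WT (0x04) with the default PAT. */
    printf("memory type = %#x\n", effective_memory_type(_PAGE_PWT));
    return 0;
}

With Xen's real XEN_MSR_PAT substituted, the same expression yields the
memory type the CPU will use for the mapping, which is exactly what the new
switch statement cases on instead of the raw PTE bits.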

Comments

Jan Beulich Dec. 20, 2022, 8:19 a.m. UTC | #1
On 20.12.2022 02:07, Demi Marie Obenour wrote:
> get_page_from_l1e() relied on Xen's choice of PAT, which is brittle in
> the face of future PAT changes.  Instead, compute the actual cacheability
> used by the CPU and switch on that, as this will work no matter what PAT
> Xen uses.
> 
> No functional change intended.  This code is itself questionable and may
> be removed in the future, but removing it would be an observable
> behavior change and so is out of scope for this patch series.
> 
> Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
> ---
> Changes since v4:
> - Do not add new pte_flags_to_cacheability() helper, as this code may be
>   removed in the near future and so adding a new helper for it is a bad
>   idea.
> - Do not BUG() in the event of an unexpected cacheability.  This cannot
>   happen, but it is simpler to force such types to UC than to prove that
>   the BUG() is not reachable.
> 
> Changes since v3:
> - Compute and use the actual cacheability as seen by the processor.
> 
> Changes since v2:
> - Improve commit message.
> ---
>  xen/arch/x86/mm.c | 14 ++++++++------
>  1 file changed, 8 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 78b1972e4170cacccc9c37c6e64e76e66a7da87f..dba6c77ef2f7ed7fcb7f7e526583ccadd35e62cc 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -959,14 +959,16 @@ get_page_from_l1e(
>              flip = _PAGE_RW;
>          }
>  
> -        switch ( l1f & PAGE_CACHE_ATTRS )
> +        switch ( 0xFF & (XEN_MSR_PAT >> (8 * pte_flags_to_cacheattr(l1f))) )
>          {
> -        case 0: /* WB */
> -            flip |= _PAGE_PWT | _PAGE_PCD;
> +        case X86_MT_UC:
> +        case X86_MT_UCM:
> +        case X86_MT_WC:
> +            /* not cacheable */
>              break;
> -        case _PAGE_PWT: /* WT */
> -        case _PAGE_PWT | _PAGE_PAT: /* WP */
> -            flip |= _PAGE_PCD | (l1f & _PAGE_PAT);
> +        default:
> +            /* cacheable */
> +            flip |= ((l1f & PAGE_CACHE_ATTRS) ^ _PAGE_UC);
>              break;

In v4 the comment here was "cacheable, force to UC". The latter aspect is
quite relevant (and iirc it is also what Andrew had asked to have as a
comment). But with this now being the default case, the comment (in either
this or the earlier form) would become stale. A forward-compatible wording
would be, e.g., "force any other type to UC". With an adjustment along
these lines (which I think could be done while committing):
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan
Jan Beulich Dec. 22, 2022, 9:51 a.m. UTC | #2
On 20.12.2022 09:19, Jan Beulich wrote:
> On 20.12.2022 02:07, Demi Marie Obenour wrote:
>> get_page_from_l1e() relied on Xen's choice of PAT, which is brittle in
>> the face of future PAT changes.  Instead, compute the actual cacheability
>> used by the CPU and switch on that, as this will work no matter what PAT
>> Xen uses.
>>
>> No functional change intended.  This code is itself questionable and may
>> be removed in the future, but removing it would be an observable
>> behavior change and so is out of scope for this patch series.
>>
>> Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
>> ---
>> Changes since v4:
>> - Do not add new pte_flags_to_cacheability() helper, as this code may be
>>   removed in the near future and so adding a new helper for it is a bad
>>   idea.
>> - Do not BUG() in the event of an unexpected cacheability.  This cannot
>>   happen, but it is simpler to force such types to UC than to prove that
>>   the BUG() is not reachable.
>>
>> Changes since v3:
>> - Compute and use the actual cacheability as seen by the processor.
>>
>> Changes since v2:
>> - Improve commit message.
>> ---
>>  xen/arch/x86/mm.c | 14 ++++++++------
>>  1 file changed, 8 insertions(+), 6 deletions(-)
>>
>> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>> index 78b1972e4170cacccc9c37c6e64e76e66a7da87f..dba6c77ef2f7ed7fcb7f7e526583ccadd35e62cc 100644
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -959,14 +959,16 @@ get_page_from_l1e(
>>              flip = _PAGE_RW;
>>          }
>>  
>> -        switch ( l1f & PAGE_CACHE_ATTRS )
>> +        switch ( 0xFF & (XEN_MSR_PAT >> (8 * pte_flags_to_cacheattr(l1f))) )
>>          {
>> -        case 0: /* WB */
>> -            flip |= _PAGE_PWT | _PAGE_PCD;
>> +        case X86_MT_UC:
>> +        case X86_MT_UCM:
>> +        case X86_MT_WC:
>> +            /* not cacheable */
>>              break;
>> -        case _PAGE_PWT: /* WT */
>> -        case _PAGE_PWT | _PAGE_PAT: /* WP */
>> -            flip |= _PAGE_PCD | (l1f & _PAGE_PAT);
>> +        default:
>> +            /* cacheable */
>> +            flip |= ((l1f & PAGE_CACHE_ATTRS) ^ _PAGE_UC);
>>              break;
> 
> In v4 the comment here was "cacheable, force to UC". The latter aspect is
> quite relevant (and iirc it is also what Andrew had asked to have as a
> comment). But with this now being the default case, the comment (in either
> this or the earlier form) would become stale. A forward-compatible wording
> would be, e.g., "force any other type to UC". With an adjustment along
> these lines (which I think could be done while committing):
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

If you had replied signaling your consent (and perhaps your preferred
wording), I would have committed this. Now it's going to be v6 afaic ...

Jan
Demi Marie Obenour Dec. 22, 2022, 9:58 a.m. UTC | #3
On Thu, Dec 22, 2022 at 10:51:08AM +0100, Jan Beulich wrote:
> On 20.12.2022 09:19, Jan Beulich wrote:
> > On 20.12.2022 02:07, Demi Marie Obenour wrote:
> >> get_page_from_l1e() relied on Xen's choice of PAT, which is brittle in
> >> the face of future PAT changes.  Instead, compute the actual cacheability
> >> used by the CPU and switch on that, as this will work no matter what PAT
> >> Xen uses.
> >>
> >> No functional change intended.  This code is itself questionable and may
> >> be removed in the future, but removing it would be an observable
> >> behavior change and so is out of scope for this patch series.
> >>
> >> Signed-off-by: Demi Marie Obenour <demi@invisiblethingslab.com>
> >> ---
> >> Changes since v4:
> >> - Do not add new pte_flags_to_cacheability() helper, as this code may be
> >>   removed in the near future and so adding a new helper for it is a bad
> >>   idea.
> >> - Do not BUG() in the event of an unexpected cacheability.  This cannot
> >>   happen, but it is simpler to force such types to UC than to prove that
> >>   the BUG() is not reachable.
> >>
> >> Changes since v3:
> >> - Compute and use the actual cacheability as seen by the processor.
> >>
> >> Changes since v2:
> >> - Improve commit message.
> >> ---
> >>  xen/arch/x86/mm.c | 14 ++++++++------
> >>  1 file changed, 8 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> >> index 78b1972e4170cacccc9c37c6e64e76e66a7da87f..dba6c77ef2f7ed7fcb7f7e526583ccadd35e62cc 100644
> >> --- a/xen/arch/x86/mm.c
> >> +++ b/xen/arch/x86/mm.c
> >> @@ -959,14 +959,16 @@ get_page_from_l1e(
> >>              flip = _PAGE_RW;
> >>          }
> >>  
> >> -        switch ( l1f & PAGE_CACHE_ATTRS )
> >> +        switch ( 0xFF & (XEN_MSR_PAT >> (8 * pte_flags_to_cacheattr(l1f))) )
> >>          {
> >> -        case 0: /* WB */
> >> -            flip |= _PAGE_PWT | _PAGE_PCD;
> >> +        case X86_MT_UC:
> >> +        case X86_MT_UCM:
> >> +        case X86_MT_WC:
> >> +            /* not cacheable */
> >>              break;
> >> -        case _PAGE_PWT: /* WT */
> >> -        case _PAGE_PWT | _PAGE_PAT: /* WP */
> >> -            flip |= _PAGE_PCD | (l1f & _PAGE_PAT);
> >> +        default:
> >> +            /* cacheable */
> >> +            flip |= ((l1f & PAGE_CACHE_ATTRS) ^ _PAGE_UC);
> >>              break;
> > 
> > In v4 the comment here was "cacheable, force to UC". The latter aspect is
> > quite relevant (and iirc it is also what Andrew had asked to have as a
> > comment). But with this now being the default case, the comment (in either
> > this or the earlier form) would become stale. A forward-compatible wording
> > would be, e.g., "force any other type to UC". With an adjustment along
> > these lines (which I think could be done while committing):
> > Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> If you had replied signaling your consent (and perhaps your preferred
> wording), I would have committed this. Now it's going to be v6 afaic ...
> 
> Jan

Sorry about that.  "potentially cacheable, force to UC" is the wording
I have planned for v6, along with "not cacheable, allow" in the other
case.  Feel free to go ahead and commit if you want.
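
For clarity on what the default case in the patch does: XOR-ing the PTE's
current cache-attribute bits against the combination that selects UC yields
exactly the bits that differ, and OR-ing those into "flip" marks them for
toggling, so the mapping ends up UC no matter which attributes it started
with.  A minimal sketch of just that computation follows; it assumes, as
under Xen's historical PAT layout, that the UC combination is PWT|PCD with
the PAT bit clear.

/* Sketch of the "force to UC" flip -- not the actual Xen code.
 * _PAGE_UC is assumed to be PWT|PCD (PAT clear), per Xen's historical PAT. */
#define _PAGE_PWT        (1u << 3)
#define _PAGE_PCD        (1u << 4)
#define _PAGE_PAT        (1u << 7)
#define PAGE_CACHE_ATTRS (_PAGE_PAT | _PAGE_PCD | _PAGE_PWT)
#define _PAGE_UC         (_PAGE_PWT | _PAGE_PCD)   /* assumed UC combination */

/* Bits to toggle so the PTE's cache attributes become the UC combination. */
static unsigned int uc_flip_bits(unsigned int l1f)
{
    return (l1f & PAGE_CACHE_ATTRS) ^ _PAGE_UC;
}

For example, a WB PTE (all three bits clear) yields PWT|PCD, while a PTE
that already uses PWT|PCD yields 0 and is left untouched.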

Patch

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 78b1972e4170cacccc9c37c6e64e76e66a7da87f..dba6c77ef2f7ed7fcb7f7e526583ccadd35e62cc 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -959,14 +959,16 @@ get_page_from_l1e(
             flip = _PAGE_RW;
         }
 
-        switch ( l1f & PAGE_CACHE_ATTRS )
+        switch ( 0xFF & (XEN_MSR_PAT >> (8 * pte_flags_to_cacheattr(l1f))) )
         {
-        case 0: /* WB */
-            flip |= _PAGE_PWT | _PAGE_PCD;
+        case X86_MT_UC:
+        case X86_MT_UCM:
+        case X86_MT_WC:
+            /* not cacheable */
             break;
-        case _PAGE_PWT: /* WT */
-        case _PAGE_PWT | _PAGE_PAT: /* WP */
-            flip |= _PAGE_PCD | (l1f & _PAGE_PAT);
+        default:
+            /* cacheable */
+            flip |= ((l1f & PAGE_CACHE_ATTRS) ^ _PAGE_UC);
             break;
         }