x86/HVM: tidy state on hvmemul_map_linear_addr()'s error path

Message ID c4c1d2b3-591e-403f-879b-bbb897f7ff25@suse.com (mailing list archive)
State New, archived

Commit Message

Jan Beulich Feb. 6, 2024, 12:06 p.m. UTC
While in the vast majority of cases failure of the function will not
be followed by re-invocation with the same emulation context, a few
very specific insns - involving multiple independent writes, e.g. ENTER
and PUSHA - exist where this can happen. Since failure of the function
only signals to the caller that it ought to try an MMIO write instead,
such failure also cannot be assumed to result in wholesale failure of
emulation of the current insn. Instead we have to maintain internal
state such that another invocation of the function with the same
emulation context remains possible. To achieve that we need to reset MFN
slots after putting page references on the error path.

Note that all of this affects debugging code only, in causing an
assertion to trigger (higher up in the function). There's otherwise no
misbehavior - such a "leftover" slot would simply be overwritten by new
contents in a release build.

Also extend the related unmap() assertion, to further check for MFN 0.

Fixes: 8cbd4fb0b7ea ("x86/hvm: implement hvmemul_write() using real mappings")
Reported-by: Manuel Andreas <manuel.andreas@tum.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
While I could probably be convinced to omit the #ifndef, I'm really
considering extending the one in hvmemul_unmap_linear_addr(), to
eliminate the zapping from release builds: Leaving MFN 0 in place is not
much better than leaving a (presently) guest-owned one there. And we
can't really put/leave INVALID_MFN there, as that would conflict with
other debug checking.
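
To make the failure mode concrete, here is a minimal, self-contained
sketch in plain C - not Xen code; every name in it is invented for
illustration - of the pattern being fixed: map() asserts on entry that
each slot is clean, and its error path drops the references already
taken, but only the added zapping keeps a later invocation with the
same context from tripping that assertion.

/* Build: cc -o stale-slot stale-slot.c (add -DNDEBUG to mimic a release build). */
#include <assert.h>
#include <stdio.h>

#define NR_SLOTS 2

static unsigned long slots[NR_SLOTS];   /* stand-in for hvmemul_ctxt->mfn[] */

/* Stand-in for hvmemul_map_linear_addr(): claim slots, possibly fail midway. */
static int map(int fail_at)
{
    for ( int i = 0; i < NR_SLOTS; i++ )
    {
        assert(slots[i] == 0);          /* the "slot is clean" debug check */

        if ( i == fail_at )
        {
            while ( i-- > 0 )           /* error path: put references taken so far */
                slots[i] = 0;           /* ...and zap the slot (the fix at issue) */
            return -1;                  /* caller is expected to retry as MMIO */
        }

        slots[i] = 0x1234 + i;          /* pretend a page reference was taken */
    }
    return 0;
}

int main(void)
{
    map(1);   /* first write of a multi-write insn (think ENTER): fails on slot 1 */

    /*
     * A second, independent write with the same context.  Without the
     * zapping in map()'s error path, slots[0] would still hold its stale
     * value and the clean-slot assertion would fire here in a debug build.
     */
    if ( map(-1) == 0 )
        puts("second map() succeeded with a clean context");
    return 0;
}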

Comments

Jan Beulich Feb. 8, 2024, 3:59 p.m. UTC | #1
On 06.02.2024 13:06, Jan Beulich wrote:
> While in the vast majority of cases failure of the function will not
> be followed by re-invocation with the same emulation context, a few
> very specific insns - involving multiple independent writes, e.g. ENTER
> and PUSHA - exist where this can happen. Since failure of the function
> only signals to the caller that it ought to try an MMIO write instead,
> such failure also cannot be assumed to result in wholesale failure of
> emulation of the current insn. Instead we have to maintain internal
> state such that another invocation of the function with the same
> emulation context remains possible. To achieve that we need to reset MFN
> slots after putting page references on the error path.
> 
> Note that all of this affects debugging code only, in causing an
> assertion to trigger (higher up in the function). There's otherwise no
> misbehavior - such a "leftover" slot would simply be overwritten by new
> contents in a release build.
> 
> Also extend the related unmap() assertion, to further check for MFN 0.
> 
> Fixes: 8cbd4fb0b7ea ("x86/hvm: implement hvmemul_write() using real mappings")
> Reported-by: Manuel Andreas <manuel.andreas@tum.de>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Just noticed that I forgot to Cc Paul.

Jan

> ---
> While I could probably be convinced to omit the #ifndef, I'm really
> considering extending the one in hvmemul_unmap_linear_addr(), to
> eliminate the zapping from release builds: Leaving MFN 0 in place is not
> much better than leaving a (presently) guest-owned one there. And we
> can't really put/leave INVALID_MFN there, as that would conflict with
> other debug checking.
> 
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -696,7 +696,12 @@ static void *hvmemul_map_linear_addr(
>   out:
>      /* Drop all held references. */
>      while ( mfn-- > hvmemul_ctxt->mfn )
> +    {
>          put_page(mfn_to_page(*mfn));
> +#ifndef NDEBUG /* Clean slot for a subsequent map()'s error checking. */
> +        *mfn = _mfn(0);
> +#endif
> +    }
>  
>      return err;
>  }
> @@ -718,7 +723,7 @@ static void hvmemul_unmap_linear_addr(
>  
>      for ( i = 0; i < nr_frames; i++ )
>      {
> -        ASSERT(mfn_valid(*mfn));
> +        ASSERT(mfn_x(*mfn) && mfn_valid(*mfn));
>          paging_mark_dirty(currd, *mfn);
>          put_page(mfn_to_page(*mfn));
>
Paul Durrant Feb. 8, 2024, 4:11 p.m. UTC | #2
On 08/02/2024 15:59, Jan Beulich wrote:
> On 06.02.2024 13:06, Jan Beulich wrote:
>> While in the vast majority of cases failure of the function will not
>> be followed by re-invocation with the same emulation context, a few
>> very specific insns - involving multiple independent writes, e.g. ENTER
>> and PUSHA - exist where this can happen. Since failure of the function
>> only signals to the caller that it ought to try an MMIO write instead,
>> such failure also cannot be assumed to result in wholesale failure of
>> emulation of the current insn. Instead we have to maintain internal
>> state such that another invocation of the function with the same
>> emulation context remains possible. To achieve that we need to reset MFN
>> slots after putting page references on the error path.
>>
>> Note that all of this affects debugging code only, in causing an
>> assertion to trigger (higher up in the function). There's otherwise no
>> misbehavior - such a "leftover" slot would simply be overwritten by new
>> contents in a release build.
>>
>> Also extend the related unmap() assertion, to further check for MFN 0.
>>
>> Fixes: 8cbd4fb0b7ea ("x86/hvm: implement hvmemul_write() using real mappings")
>> Reported-by: Manuel Andreas <manuel.andreas@tum.de>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Just noticed that I forgot to Cc Paul.
> 
> Jan
> 
>> ---
>> While I could probably be convinced to omit the #ifndef, I'm really
>> considering extending the one in hvmemul_unmap_linear_addr(), to
>> eliminate the zapping from release builds: Leaving MFN 0 in place is not
>> much better than leaving a (presently) guest-owned one there. And we
>> can't really put/leave INVALID_MFN there, as that would conflict with
>> other debug checking.

Would it be worth defining a sentinel value for this purpose rather than 
hardcoding _mfn(0)? (_mfn(0) seems like a reasonable sentinel... it's 
just a question of having a #define for it).
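
Purely as illustration of the suggestion - the name and its placement
are invented here, and no such define exists in the tree - such a
sentinel might look like:

/* Hypothetical sketch only. */
#define MFN_EMUL_SLOT_CLEAN _mfn(0)  /* marks an unused hvmemul_ctxt->mfn[] slot */

        *mfn = MFN_EMUL_SLOT_CLEAN;  /* in map()'s error path */

        ASSERT(!mfn_eq(*mfn, MFN_EMUL_SLOT_CLEAN) && mfn_valid(*mfn));  /* in unmap() */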

Either way...

Acked-by: Paul Durrant <paul@xen.org>

>>
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -696,7 +696,12 @@ static void *hvmemul_map_linear_addr(
>>    out:
>>       /* Drop all held references. */
>>       while ( mfn-- > hvmemul_ctxt->mfn )
>> +    {
>>           put_page(mfn_to_page(*mfn));
>> +#ifndef NDEBUG /* Clean slot for a subsequent map()'s error checking. */
>> +        *mfn = _mfn(0);
>> +#endif
>> +    }
>>   
>>       return err;
>>   }
>> @@ -718,7 +723,7 @@ static void hvmemul_unmap_linear_addr(
>>   
>>       for ( i = 0; i < nr_frames; i++ )
>>       {
>> -        ASSERT(mfn_valid(*mfn));
>> +        ASSERT(mfn_x(*mfn) && mfn_valid(*mfn));
>>           paging_mark_dirty(currd, *mfn);
>>           put_page(mfn_to_page(*mfn));
>>   
>
Jan Beulich Feb. 8, 2024, 4:17 p.m. UTC | #3
On 08.02.2024 17:11, Paul Durrant wrote:
> On 08/02/2024 15:59, Jan Beulich wrote:
>> On 06.02.2024 13:06, Jan Beulich wrote:
>>> While in the vast majority of cases failure of the function will not
>>> be followed by re-invocation with the same emulation context, a few
>>> very specific insns - involving multiple independent writes, e.g. ENTER
>>> and PUSHA - exist where this can happen. Since failure of the function
>>> only signals to the caller that it ought to try an MMIO write instead,
>>> such failure also cannot be assumed to result in wholesale failure of
>>> emulation of the current insn. Instead we have to maintain internal
>>> state such that another invocation of the function with the same
>>> emulation context remains possible. To achieve that we need to reset MFN
>>> slots after putting page references on the error path.
>>>
>>> Note that all of this affects debugging code only, in causing an
>>> assertion to trigger (higher up in the function). There's otherwise no
>>> misbehavior - such a "leftover" slot would simply be overwritten by new
>>> contents in a release build.
>>>
>>> Also extend the related unmap() assertion, to further check for MFN 0.
>>>
>>> Fixes: 8cbd4fb0b7ea ("x86/hvm: implement hvmemul_write() using real mappings")
>>> Reported-by: Manuel Andreas <manuel.andreas@tum.de>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> Just noticed that I forgot to Cc Paul.
>>
>> Jan
>>
>>> ---
>>> While I could probably be convinced to omit the #ifndef, I'm really
>>> considering extending the one in hvmemul_unmap_linear_addr(), to
>>> eliminate the zapping from release builds: Leaving MFN 0 in place is not
>>> much better than leaving a (presently) guest-owned one there. And we
>>> can't really put/leave INVALID_MFN there, as that would conflict with
>>> other debug checking.
> 
> Would it be worth defining a sentinel value for this purpose rather than 
> hardcoding _mfn(0)? (_mfn(0) seems like a reasonable sentinel... it's 
> just a question of having a #define for it).

Perhaps, but that's for a separate patch then.

> Either way...
> 
> Acked-by: Paul Durrant <paul@xen.org>

Thanks.

Jan

Patch

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -696,7 +696,12 @@ static void *hvmemul_map_linear_addr(
  out:
     /* Drop all held references. */
     while ( mfn-- > hvmemul_ctxt->mfn )
+    {
         put_page(mfn_to_page(*mfn));
+#ifndef NDEBUG /* Clean slot for a subsequent map()'s error checking. */
+        *mfn = _mfn(0);
+#endif
+    }
 
     return err;
 }
@@ -718,7 +723,7 @@ static void hvmemul_unmap_linear_addr(
 
     for ( i = 0; i < nr_frames; i++ )
     {
-        ASSERT(mfn_valid(*mfn));
+        ASSERT(mfn_x(*mfn) && mfn_valid(*mfn));
         paging_mark_dirty(currd, *mfn);
         put_page(mfn_to_page(*mfn));
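
For context, the caller-side contract the commit message relies on - map
failure meaning "retry as MMIO" rather than outright failure of the insn -
looks roughly like the following, abridged and paraphrased rather than
quoted verbatim from the write path in xen/arch/x86/hvm/emulate.c:

    mapping = hvmemul_map_linear_addr(addr, bytes, pfec, hvmemul_ctxt);
    if ( IS_ERR(mapping) )
        return ~PTR_ERR(mapping);          /* hard failure of the emulation */
    if ( !mapping )                        /* not plain RAM: fall back to MMIO */
        return hvmemul_linear_mmio_write(addr, bytes, p_data, pfec,
                                         hvmemul_ctxt, 0);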