Message ID | 4E2EA5AA.2010708@cn.fujitsu.com (mailing list archive) |
---|---|
State | New, archived |
On 07/26/2011 02:31 PM, Xiao Guangrong wrote:
> Sometimes, we only modify the last one byte of a pte to update status bit,
> for example, clear_bit is used to clear r/w bit in linux kernel and 'andb'
> instruction is used in this function, in this case, kvm_mmu_pte_write will
> treat it as misaligned access, and the shadow page table is zapped
>
> @@ -3597,6 +3597,14 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,
>
>  	offset = offset_in_page(gpa);
>  	pte_size = sp->role.cr4_pae ? 8 : 4;
> +
> +	/*
> +	 * Sometimes, the OS only writes the last one bytes to update status
> +	 * bits, for example, in linux, andb instruction is used in clear_bit().
> +	 */
> +	if (sp->role.level == 1 && !(offset & (pte_size - 1)) && bytes == 1)
> +		return false;
> +

Could be true for level > 1, no?
On 07/27/2011 05:15 PM, Avi Kivity wrote:
> On 07/26/2011 02:31 PM, Xiao Guangrong wrote:
>> Sometimes, we only modify the last one byte of a pte to update status bit,
>> for example, clear_bit is used to clear r/w bit in linux kernel and 'andb'
>> instruction is used in this function, in this case, kvm_mmu_pte_write will
>> treat it as misaligned access, and the shadow page table is zapped
>>
>> @@ -3597,6 +3597,14 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,
>>
>>  	offset = offset_in_page(gpa);
>>  	pte_size = sp->role.cr4_pae ? 8 : 4;
>> +
>> +	/*
>> +	 * Sometimes, the OS only writes the last one bytes to update status
>> +	 * bits, for example, in linux, andb instruction is used in clear_bit().
>> +	 */
>> +	if (sp->role.level == 1 && !(offset & (pte_size - 1)) && bytes == 1)
>> +		return false;
>> +
>
> Could be true for level > 1, no?

My original thinking was that a one-byte instruction is usually only used to update a last-level pte, but we had better remove this restriction. I will fix it in the next version, thanks!
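For readers following the thread, here is a minimal sketch of what the check could look like once the level restriction is dropped, as agreed above. This is an assumption about the follow-up version, not code taken from this posting.

```c
	/*
	 * Sketch only (assumed follow-up, not the posted patch): drop the
	 * sp->role.level == 1 test so that any pte-aligned one-byte write
	 * is treated as a status-bit update rather than a misaligned access.
	 */
	if (!(offset & (pte_size - 1)) && bytes == 1)
		return false;
```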
```diff
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2328ee6..bb55b15 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3597,6 +3597,14 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,
 
 	offset = offset_in_page(gpa);
 	pte_size = sp->role.cr4_pae ? 8 : 4;
+
+	/*
+	 * Sometimes, the OS only writes the last one bytes to update status
+	 * bits, for example, in linux, andb instruction is used in clear_bit().
+	 */
+	if (sp->role.level == 1 && !(offset & (pte_size - 1)) && bytes == 1)
+		return false;
+
 	misaligned = (offset ^ (offset + bytes - 1)) & ~(pte_size - 1);
 	misaligned |= bytes < 4;
```
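To make the hunk's effect concrete, the following is a small user-space sketch of the alignment arithmetic only. The helper `write_is_misaligned` and the sample offsets are hypothetical; this is not the real detect_write_misaligned() path in KVM.

```c
#include <stdbool.h>
#include <stdio.h>

/*
 * Models only the arithmetic from the hunk above (hypothetical helper,
 * not KVM code): a write is "misaligned" if it straddles a pte boundary
 * or is narrower than 4 bytes, unless byte_update_ok allows a pte-aligned
 * one-byte status-bit update like the andb generated by clear_bit().
 */
static bool write_is_misaligned(unsigned offset, unsigned bytes,
				unsigned pte_size, bool byte_update_ok)
{
	unsigned misaligned;

	if (byte_update_ok && !(offset & (pte_size - 1)) && bytes == 1)
		return false;		/* the new early return */

	misaligned = (offset ^ (offset + bytes - 1)) & ~(pte_size - 1);
	misaligned |= bytes < 4;
	return misaligned;
}

int main(void)
{
	/* A one-byte write at the start of an 8-byte pte (e.g. clear_bit). */
	printf("old rule: %d\n", write_is_misaligned(0x18, 1, 8, false)); /* 1: sp is zapped */
	printf("new rule: %d\n", write_is_misaligned(0x18, 1, 8, true));  /* 0: left alone   */

	/* A full 8-byte pte update is aligned under both rules. */
	printf("full pte: %d\n", write_is_misaligned(0x18, 8, 8, true));  /* 0 */
	return 0;
}
```

The sample run shows the problem the patch addresses: without the early return, the `bytes < 4` term alone marks every one-byte status-bit write as misaligned, so the shadow page gets zapped.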
Sometimes only a single byte of a pte is modified to update a status bit;
for example, clear_bit() is used to clear the r/w bit in the Linux kernel,
and the 'andb' instruction is used in that function. In this case,
kvm_mmu_pte_write() treats the write as a misaligned access and the shadow
page table is zapped.

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
 arch/x86/kvm/mmu.c |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)
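The write pattern described above can be illustrated with a small user-space sketch. The helper name and the sample pte value are hypothetical; the point is that clearing one status bit with a byte-wide instruction (such as the andb that a constant-bit clear_bit() can emit on x86) reaches the host as a 1-byte write at a pte-aligned guest physical address.

```c
#include <stdint.h>
#include <stdio.h>

#define PTE_RW (1ULL << 1)	/* r/w bit of an x86-64 pte */

/*
 * Hypothetical stand-in for the guest's write-protect path: clear the
 * r/w bit by touching only the byte that contains it, the way a
 * byte-wide "andb" would. From the host's point of view this is a
 * 1-byte write whose gpa is aligned to the pte size.
 */
static void clear_rw_bytewise(uint64_t *pte)
{
	uint8_t *low_byte = (uint8_t *)pte;	/* little-endian: bit 1 lives here */

	*low_byte &= (uint8_t)~PTE_RW;
}

int main(void)
{
	uint64_t pte = 0x80000000deadb067ULL;	/* made-up pte: present + r/w + ... */

	clear_rw_bytewise(&pte);
	printf("pte after byte-wide clear: %#llx\n", (unsigned long long)pte);
	/* Prints ...b065: only the r/w bit changed, via a single-byte store. */
	return 0;
}
```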