[07/20] KVM/MIPS32: Dynamic binary translation of select privileged instructions.

Message ID 3E678B37-B4C1-409F-A1CB-A7CC83B2D874@kymasys.com (mailing list archive)
State New, archived

Commit Message

Sanjay Lal Oct. 31, 2012, 3:19 p.m. UTC
Currently, the following instructions are translated:
- CACHE (indexed)
- CACHE (va based): translated to a synci; overkill for D-cache operations, but still much faster than taking a trap.
- mfc0/mtc0: the virtual COP0 registers for the guest are implemented as a 2-D array
  indexed by [COP#][SEL], and this array is mapped into the guest kernel address space @ VA 0x0.
  mfc0/mtc0 operations are transformed into loads/stores (a stand-alone sketch of this rewrite follows below).
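
For illustration, here is a minimal stand-alone sketch of the mtc0 rewrite (not part of the patch): it re-encodes an mtc0 as the equivalent sw against the commpage, using simplified stand-ins for the kernel's struct mips_coproc and struct kvm_mips_commpage, and a purely illustrative example encoding.

/*
 * Minimal stand-alone sketch of the mtc0 -> sw rewrite (not part of the
 * patch).  The struct layouts are simplified stand-ins for the kernel's
 * struct mips_coproc / struct kvm_mips_commpage.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define SW_TEMPLATE  0xac000000u   /* sw rt, imm16($zero): opcode 0x2b, base = $0 */

struct mips_coproc       { uint32_t reg[32][8]; };     /* [COP0 reg][sel]       */
struct kvm_mips_commpage { struct mips_coproc cop0; }; /* mapped @ guest VA 0x0 */

/* Re-encode "mtc0 rt, rd, sel" as "sw rt, offset($zero)" into the commpage. */
static uint32_t translate_mtc0(uint32_t inst)
{
	uint32_t rt  = (inst >> 16) & 0x1f;  /* GPR holding the value to store */
	uint32_t rd  = (inst >> 11) & 0x1f;  /* COP0 register number           */
	uint32_t sel = inst & 0x7;           /* COP0 register select           */

	uint32_t sw_inst = SW_TEMPLATE;
	sw_inst |= rt << 16;                                 /* rt field: source GPR */
	sw_inst |= offsetof(struct kvm_mips_commpage, cop0)  /* commpage base        */
	         + (rd * 8 + sel) * sizeof(uint32_t);        /* cop0.reg[rd][sel]    */
	return sw_inst;
}

int main(void)
{
	/* mtc0 $4, $12, 0 (write CP0_STATUS) encodes as 0x40846000 */
	printf("replacement: 0x%08x\n", translate_mtc0(0x40846000));
	return 0;
}

The 16-bit immediate is sufficient because the commpage sits at guest VA 0x0 and the whole virtual COP0 array is only 32 * 8 * 4 bytes, so the rewritten load/store can use $zero as its base register. mfc0 is handled the same way with the lw opcode (LW_TEMPLATE in the patch below).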

Signed-off-by: Sanjay Lal <sanjayl@kymasys.com>
---
 arch/mips/kvm/kvm_mips_comm.h     |  24 +++++++
 arch/mips/kvm/kvm_mips_commpage.c |  38 ++++++++++
 arch/mips/kvm/kvm_mips_dyntrans.c | 142 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 204 insertions(+)
 create mode 100644 arch/mips/kvm/kvm_mips_comm.h
 create mode 100644 arch/mips/kvm/kvm_mips_commpage.c
 create mode 100644 arch/mips/kvm/kvm_mips_dyntrans.c

Comments

Avi Kivity Nov. 1, 2012, 3:24 p.m. UTC | #1
On 10/31/2012 05:19 PM, Sanjay Lal wrote:
> Currently, the following instructions are translated:
> - CACHE (indexed)
> - CACHE (va based): translated to a synci, overkill on D-CACHE operations, but still much faster than a trap.
> - mfc0/mtc0: the virtual COP0 registers for the guest are implemented as 2-D array
>   [COP#][SEL] and this is mapped into the guest kernel address space @ VA 0x0.
>   mfc0/mtc0 operations are transformed to load/stores.
> 

Seems to be more of binary patching, yes?  Binary translation usually
involves hiding the translated code so the guest is not able to detect
that it is patched.
Varun Sethi Nov. 2, 2012, 5:58 a.m. UTC | #2
> -----Original Message-----
> From: kvm-owner@vger.kernel.org [mailto:kvm-owner@vger.kernel.org] On
> Behalf Of Avi Kivity
> Sent: Thursday, November 01, 2012 8:54 PM
> To: Sanjay Lal
> Cc: kvm@vger.kernel.org; linux-mips@linux-mips.org
> Subject: Re: [PATCH 07/20] KVM/MIPS32: Dynamic binary translation of
> select privileged instructions.
> 
> On 10/31/2012 05:19 PM, Sanjay Lal wrote:
> > Currently, the following instructions are translated:
> > - CACHE (indexed)
> > - CACHE (va based): translated to a synci, overkill on D-CACHE
> > operations, but still much faster than a trap.
> > - mfc0/mtc0: the virtual COP0 registers for the guest are implemented
> > as 2-D array [COP#][SEL] and this is mapped into the guest kernel
> > address space @ VA 0x0.
> >   mfc0/mtc0 operations are transformed to load/stores.
> >
> 
> Seems to be more of binary patching, yes?  Binary translation usually
> involves hiding the translated code so the guest is not able to detect
> that it is patched.
> 
Typically, a dynamic binary translation solution also needs a mechanism to track guest accesses to the modified pages. I don't think that support is present as part of this patch set. Do you plan to implement it?

-Varun

Sanjay Lal Nov. 2, 2012, 5 p.m. UTC | #4
On Nov 1, 2012, at 11:24 AM, Avi Kivity wrote:

> On 10/31/2012 05:19 PM, Sanjay Lal wrote:
>> Currently, the following instructions are translated:
>> - CACHE (indexed)
>> - CACHE (va based): translated to a synci, overkill on D-CACHE operations, but still much faster than a trap.
>> - mfc0/mtc0: the virtual COP0 registers for the guest are implemented as 2-D array
>>  [COP#][SEL] and this is mapped into the guest kernel address space @ VA 0x0.
>>  mfc0/mtc0 operations are transformed to load/stores.
>> 
> 
> Seems to be more of binary patching, yes?  Binary translation usually
> involves hiding the translated code so the guest is not able to detect
> that it is patched.

Now that you mention it, I think "binary patching" is the more accurate term. If a "self-aware" guest ever compared its code, it would notice that it had been changed.

Regards
Sanjay


Patch

diff --git a/arch/mips/kvm/kvm_mips_comm.h b/arch/mips/kvm/kvm_mips_comm.h
new file mode 100644
index 0000000..02073db
--- /dev/null
+++ b/arch/mips/kvm/kvm_mips_comm.h
@@ -0,0 +1,24 @@ 
+/*
+* This file is subject to the terms and conditions of the GNU General Public
+* License.  See the file "COPYING" in the main directory of this archive
+* for more details.
+*
+* KVM/MIPS commpage: mapped into guest kernel @ VA: 0x0 to support dynamic translation
+*
+* Copyright (C) 2012  MIPS Technologies, Inc.  All rights reserved.
+* Authors: Sanjay Lal <sanjayl@kymasys.com>
+*/
+
+#ifndef __KVM_MIPS_COMMPAGE_H__
+#define __KVM_MIPS_COMMPAGE_H__
+
+struct kvm_mips_commpage {
+    struct mips_coproc cop0;    /* COP0 state is mapped into Guest kernel via commpage */
+};
+
+#define KVM_MIPS_COMM_EIDI_OFFSET       0x0
+
+extern void kvm_mips_commpage_init (struct kvm_vcpu *vcpu);
+
+#endif /* __KVM_MIPS_COMMPAGE_H__ */
+
diff --git a/arch/mips/kvm/kvm_mips_commpage.c b/arch/mips/kvm/kvm_mips_commpage.c
new file mode 100644
index 0000000..5a4b21f
--- /dev/null
+++ b/arch/mips/kvm/kvm_mips_commpage.c
@@ -0,0 +1,38 @@ 
+/*
+* This file is subject to the terms and conditions of the GNU General Public
+* License.  See the file "COPYING" in the main directory of this archive
+* for more details.
+*
+* commpage, currently used for Virtual COP0 registers. Mapped into the guest kernel
+* aspace @ 0x0.
+*
+* Copyright (C) 2012  MIPS Technologies, Inc.  All rights reserved.
+* Authors: Sanjay Lal <sanjayl@kymasys.com>
+*/
+
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/fs.h>
+#include <linux/bootmem.h>
+#include <asm/page.h>
+#include <asm/cacheflush.h>
+#include <asm/mmu_context.h>
+
+#include <linux/kvm_host.h>
+
+#include "kvm_mips_comm.h"
+
+void
+kvm_mips_commpage_init (struct kvm_vcpu *vcpu)
+{
+    struct kvm_mips_commpage *page = vcpu->arch.kseg0_commpage;
+    memset (page, 0, sizeof(struct kvm_mips_commpage));
+
+    /* Specific init values for fields */
+    vcpu->arch.cop0 = &page->cop0;
+    memset(vcpu->arch.cop0, 0, sizeof(struct mips_coproc));
+
+    return;
+}
diff --git a/arch/mips/kvm/kvm_mips_dyntrans.c b/arch/mips/kvm/kvm_mips_dyntrans.c
new file mode 100644
index 0000000..2cbbdde
--- /dev/null
+++ b/arch/mips/kvm/kvm_mips_dyntrans.c
@@ -0,0 +1,142 @@ 
+/*
+* This file is subject to the terms and conditions of the GNU General Public
+* License.  See the file "COPYING" in the main directory of this archive
+* for more details.
+*
+* KVM/MIPS: Dynamic translation for privileged instructions, reduces traps.
+*
+* Copyright (C) 2012  MIPS Technologies, Inc.  All rights reserved.
+* Authors: Sanjay Lal <sanjayl@kymasys.com>
+*/
+
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/kvm_host.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/fs.h>
+#include <linux/bootmem.h>
+
+#include "kvm_mips_comm.h"
+
+#define SYNCI_TEMPLATE  0x041f0000
+#define SYNCI_BASE(x)   (((x) >> 21) & 0x1f)
+#define SYNCI_OFFSET(x) ((x) & 0xffff)
+
+#define LW_TEMPLATE     0x8c000000
+#define CLEAR_TEMPLATE  0x00000020
+#define SW_TEMPLATE     0xac000000
+
+int
+kvm_mips_trans_cache_index (uint32_t inst, uint32_t __user *opc, struct kvm_vcpu *vcpu)
+{
+    int result = 0;
+    ulong kseg0_opc;
+    uint32_t synci_inst = 0x0;
+
+    /* Replace the indexed CACHE instruction with a NOP */
+    kseg0_opc = CKSEG0ADDR(kvm_mips_translate_guest_kseg0_to_hpa (vcpu, (ulong) opc));
+    memcpy((void *) kseg0_opc, (void *)&synci_inst, sizeof(uint32_t));
+    mips32_SyncICache(kseg0_opc, 32);
+
+    return (result);
+}
+
+/*
+ *  Address based CACHE instructions are transformed into synci(s). A little heavy
+ * for just D-cache invalidates, but avoids an expensive trap
+ */
+int
+kvm_mips_trans_cache_va (uint32_t inst, uint32_t __user *opc, struct kvm_vcpu *vcpu)
+{
+    int result = 0;
+    ulong kseg0_opc;
+    uint32_t synci_inst = SYNCI_TEMPLATE, base, offset;
+
+    base = (inst >> 21) & 0x1f;
+    offset = inst & 0xffff;
+    synci_inst |= (base << 21);
+    synci_inst |= offset;
+
+    kseg0_opc = CKSEG0ADDR(kvm_mips_translate_guest_kseg0_to_hpa (vcpu, (ulong) opc));
+    memcpy((void *) kseg0_opc, (void *)&synci_inst, sizeof(uint32_t));
+    mips32_SyncICache(kseg0_opc, 32);
+
+    return (result);
+}
+
+
+int
+kvm_mips_trans_mfc0 (uint32_t inst, uint32_t __user *opc, struct kvm_vcpu *vcpu)
+{
+    int32_t rt, rd, sel;
+    uint32_t mfc0_inst;
+    ulong kseg0_opc, flags;
+    
+
+    rt = (inst >> 16) & 0x1f;
+    rd = (inst >> 11) & 0x1f;
+    sel = inst & 0x7;
+
+    if ((rd == MIPS_CP0_ERRCTL) && (sel == 0)) {
+        mfc0_inst = CLEAR_TEMPLATE;
+        mfc0_inst |= ((rt & 0x1f) << 16);
+    }
+    else {
+        mfc0_inst = LW_TEMPLATE;
+        mfc0_inst |= ((rt & 0x1f) << 16);
+        mfc0_inst |= offsetof(struct mips_coproc, reg[rd][sel]) + offsetof(struct kvm_mips_commpage, cop0);
+    }
+
+    if (KVM_GUEST_KSEGX(opc) == KVM_GUEST_KSEG0) {
+        kseg0_opc = CKSEG0ADDR(kvm_mips_translate_guest_kseg0_to_hpa (vcpu, (ulong) opc));
+        memcpy((void *) kseg0_opc, (void *)&mfc0_inst, sizeof(uint32_t));
+        mips32_SyncICache(kseg0_opc, 32);
+    }
+    else if (KVM_GUEST_KSEGX((ulong)opc) == KVM_GUEST_KSEG23) {
+        ENTER_CRITICAL(flags);
+        memcpy((void *) opc, (void *)&mfc0_inst, sizeof(uint32_t));
+        mips32_SyncICache((ulong)opc, 32);
+        EXIT_CRITICAL(flags);
+    }
+    else {
+        kvm_err("%s: Invalid address: %p\n", __func__, opc);
+        return -EFAULT; 
+    }
+
+    return 0;
+}
+
+int
+kvm_mips_trans_mtc0 (uint32_t inst, uint32_t __user *opc, struct kvm_vcpu *vcpu)
+{
+    int32_t rt, rd, sel;
+    uint32_t mtc0_inst = SW_TEMPLATE;
+    ulong kseg0_opc, flags;
+
+    rt = (inst >> 16) & 0x1f;
+    rd = (inst >> 11) & 0x1f;
+    sel = inst & 0x7;
+
+    mtc0_inst |= ((rt & 0x1f) << 16);
+    mtc0_inst |= offsetof(struct mips_coproc, reg[rd][sel]) + offsetof(struct kvm_mips_commpage, cop0);
+
+    if (KVM_GUEST_KSEGX(opc) == KVM_GUEST_KSEG0) {
+        kseg0_opc = CKSEG0ADDR(kvm_mips_translate_guest_kseg0_to_hpa (vcpu, (ulong) opc));
+        memcpy((void *) kseg0_opc, (void *)&mtc0_inst, sizeof(uint32_t));
+        mips32_SyncICache(kseg0_opc, 32);
+    }
+    else if (KVM_GUEST_KSEGX((ulong)opc) == KVM_GUEST_KSEG23) {
+        ENTER_CRITICAL(flags);
+        memcpy((void *) opc, (void *)&mtc0_inst, sizeof(uint32_t));
+        mips32_SyncICache((ulong)opc, 32);
+        EXIT_CRITICAL(flags);
+    }
+    else {
+        kvm_err("%s: Invalid address: %p\n", __func__, opc);
+        return -EFAULT; 
+    }
+
+    return 0;
+}
+
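
For completeness, a similar stand-alone sketch of the CACHE (va) -> SYNCI re-encoding performed by kvm_mips_trans_cache_va() above (not part of the patch; the example CACHE encoding below is only illustrative):

/*
 * Re-encode a virtually addressed CACHE instruction as SYNCI, keeping the
 * base register and 16-bit offset fields, which sit in the same bit
 * positions in both encodings.
 */
#include <stdint.h>
#include <stdio.h>

#define SYNCI_TEMPLATE 0x041f0000u  /* synci offset(base): REGIMM, rt = 0x1f */

static uint32_t cache_va_to_synci(uint32_t cache_inst)
{
	uint32_t base   = (cache_inst >> 21) & 0x1f;  /* base GPR      */
	uint32_t offset = cache_inst & 0xffff;        /* 16-bit offset */

	return SYNCI_TEMPLATE | (base << 21) | offset;
}

int main(void)
{
	/* cache 0x10, 0($a0) -- Hit Invalidate I: opcode 0x2f, base = $4, op = 0x10 */
	uint32_t cache_inst = (0x2fu << 26) | (4u << 21) | (0x10u << 16);
	printf("synci replacement: 0x%08x\n", cache_va_to_synci(cache_inst));
	return 0;
}

As the commit message notes, SYNCI does more work than a pure D-cache operation needs (it also invalidates the corresponding I-cache line), but it keeps the guest out of the hypervisor trap path entirely.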