Message ID: 20210430103305.28849-1-chenyi.qiang@intel.com (mailing list archive)
State: New, archived
Series: [v3] i386: Add ratelimit for bus locks acquired in guest
Patchew URL: https://patchew.org/QEMU/20210430103305.28849-1-chenyi.qiang@intel.com/

Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Message-id: 20210430103305.28849-1-chenyi.qiang@intel.com
Subject: [PATCH v3] i386: Add ratelimit for bus locks acquired in guest

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 - [tag update]  patchew/20210415163304.4120052-1-philmd@redhat.com -> patchew/20210415163304.4120052-1-philmd@redhat.com
 * [new tag]     patchew/20210430103305.28849-1-chenyi.qiang@intel.com -> patchew/20210430103305.28849-1-chenyi.qiang@intel.com
Switched to a new branch 'test'
c1a1812 i386: Add ratelimit for bus locks acquired in guest

=== OUTPUT BEGIN ===
ERROR: line over 90 characters
#162: FILE: target/i386/kvm/kvm.c:2290:
+            ratelimit_set_speed(&x86ms->bus_lock_ratelimit_ctrl, x86ms->bus_lock_ratelimit,

WARNING: line over 80 characters
#180: FILE: target/i386/kvm/kvm.c:4255:
+    uint64_t delay_ns = ratelimit_calculate_delay(&x86ms->bus_lock_ratelimit_ctrl, 1);

total: 1 errors, 1 warnings, 139 lines checked

Commit c1a18122a1be (i386: Add ratelimit for bus locks acquired in guest)
has style problems, please review. If any of these errors are false
positives report them to the maintainer, see CHECKPATCH in MAINTAINERS.
=== OUTPUT END ===

Test command exited with code: 1

The full log is available at
http://patchew.org/logs/20210430103305.28849-1-chenyi.qiang@intel.com/testing.checkpatch/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com
+Cc qemu-devel@nongnu.org

On 5/14/2021 9:07 AM, Chenyi Qiang wrote:
> Hi Paolo, Eduardo
>
> Any comments on this version?
>
> On 4/30/2021 6:33 PM, Chenyi Qiang wrote:
>> A bus lock is acquired through either split locked access to writeback
>> (WB) memory or any locked access to non-WB memory. It is typically >1000
>> cycles slower than an atomic operation within a cache and can also
>> disrupt performance on other cores.
>>
>> Virtual machines can exploit bus locks to degrade the performance of
>> the system. To address this kind of performance DoS attack coming from
>> the VMs, bus lock VM exit is introduced in KVM and it can report the
>> bus locks detected in the guest. If enabled in KVM, it would exit to
>> userspace to let the user enforce throttling policies once bus locks
>> are acquired in the VMs.
>>
>> The availability of bus lock VM exit can be detected through
>> KVM_CAP_X86_BUS_LOCK_EXIT. The returned bitmap contains the potential
>> policies supported by KVM. The field KVM_BUS_LOCK_DETECTION_EXIT in the
>> bitmap is the only supported strategy at present. It indicates that KVM
>> will exit to userspace to handle the bus locks.
>>
>> This patch adds a ratelimit on the bus locks acquired in the guest as a
>> mitigation policy.
>>
>> Introduce a new field "bus_lock_ratelimit" to record the limited speed
>> of bus locks in the target VM. The user can specify it through the
>> "bus-lock-ratelimit" machine property. In the current implementation,
>> the default value of the speed is 0 per second, which means no
>> restrictions on the bus locks.
>>
>> As for the ratelimit on detected bus locks, simply set the ratelimit
>> interval to 1s and restrict the quota of bus lock occurrence to the
>> value of "bus_lock_ratelimit". A potential alternative is to introduce
>> the time slice as a property, which could help the user achieve more
>> precise control.
>>
>> The details of bus lock VM exit can be found in the spec:
>> https://software.intel.com/content/www/us/en/develop/download/intel-architecture-instruction-set-extensions-programming-reference.html
>>
>> Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
>>
>> ---
>> Changes from v2:
>> - do some rename work (bus-lock-ratelimit and BUS_LOCK_TIME_SLICE).
>>   (Eduardo)
>> - change to register a class property at x86_machine_class_init()
>>   and write the getter/setter for the bus_lock_ratelimit property.
>>   (Eduardo)
>> - add the lock to access the RateLimit instance to avoid vcpu thread
>>   race conditions. (Eduardo)
>> - v2: https://lore.kernel.org/qemu-devel/20210420093736.17613-1-chenyi.qiang@intel.com/
>>
>> Changes from RFC v1:
>> - Remove the rip info output, as the rip can't reflect the bus lock
>>   position correctly. (Xiaoyao)
>> - RFC v1: https://lore.kernel.org/qemu-devel/20210317084709.15605-1-chenyi.qiang@intel.com/
>>
>> ---
>>  hw/i386/x86.c         | 24 ++++++++++++++++++++++
>>  include/hw/i386/x86.h |  9 +++++++++
>>  target/i386/kvm/kvm.c | 46 +++++++++++++++++++++++++++++++++++++++++++
>>  3 files changed, 79 insertions(+)
>>
>> diff --git a/hw/i386/x86.c b/hw/i386/x86.c
>> index ed796fe6ba..d30cf27e29 100644
>> --- a/hw/i386/x86.c
>> +++ b/hw/i386/x86.c
>> @@ -1246,6 +1246,23 @@ static void x86_machine_set_oem_table_id(Object *obj, const char *value,
>>      strncpy(x86ms->oem_table_id, value, 8);
>>  }
>> +static void x86_machine_get_bus_lock_ratelimit(Object *obj, Visitor *v,
>> +                                const char *name, void *opaque, Error **errp)
>> +{
>> +    X86MachineState *x86ms = X86_MACHINE(obj);
>> +    uint64_t bus_lock_ratelimit = x86ms->bus_lock_ratelimit;
>> +
>> +    visit_type_uint64(v, name, &bus_lock_ratelimit, errp);
>> +}
>> +
>> +static void x86_machine_set_bus_lock_ratelimit(Object *obj, Visitor *v,
>> +                                const char *name, void *opaque, Error **errp)
>> +{
>> +    X86MachineState *x86ms = X86_MACHINE(obj);
>> +
>> +    visit_type_uint64(v, name, &x86ms->bus_lock_ratelimit, errp);
>> +}
>> +
>>  static void x86_machine_initfn(Object *obj)
>>  {
>>      X86MachineState *x86ms = X86_MACHINE(obj);
>> @@ -1256,6 +1273,7 @@ static void x86_machine_initfn(Object *obj)
>>      x86ms->pci_irq_mask = ACPI_BUILD_PCI_IRQS;
>>      x86ms->oem_id = g_strndup(ACPI_BUILD_APPNAME6, 6);
>>      x86ms->oem_table_id = g_strndup(ACPI_BUILD_APPNAME8, 8);
>> +    x86ms->bus_lock_ratelimit = 0;
>>  }
>>  static void x86_machine_class_init(ObjectClass *oc, void *data)
>> @@ -1299,6 +1317,12 @@ static void x86_machine_class_init(ObjectClass *oc, void *data)
>>                                            "Override the default value of field OEM Table ID "
>>                                            "in ACPI table header."
>>                                            "The string may be up to 8 bytes in size");
>> +
>> +    object_class_property_add(oc, X86_MACHINE_BUS_LOCK_RATELIMIT, "uint64_t",
>> +                              x86_machine_get_bus_lock_ratelimit,
>> +                              x86_machine_set_bus_lock_ratelimit, NULL, NULL);
>> +    object_class_property_set_description(oc, X86_MACHINE_BUS_LOCK_RATELIMIT,
>> +            "Set the ratelimit for the bus locks acquired in VMs");
>>  }
>>  static const TypeInfo x86_machine_info = {
>> diff --git a/include/hw/i386/x86.h b/include/hw/i386/x86.h
>> index c09b648dff..49b130a649 100644
>> --- a/include/hw/i386/x86.h
>> +++ b/include/hw/i386/x86.h
>> @@ -74,12 +74,21 @@ struct X86MachineState {
>>       * will be translated to MSI messages in the address space.
>>       */
>>      AddressSpace *ioapic_as;
>> +
>> +    /*
>> +     * Ratelimit enforced on detected bus locks in guest.
>> +     * The default value of the bus_lock_ratelimit is 0 per second,
>> +     * which means no limitation on the guest's bus locks.
>> +     */
>> +    uint64_t bus_lock_ratelimit;
>> +    RateLimit bus_lock_ratelimit_ctrl;
>>  };
>>  #define X86_MACHINE_SMM              "smm"
>>  #define X86_MACHINE_ACPI             "acpi"
>>  #define X86_MACHINE_OEM_ID           "x-oem-id"
>>  #define X86_MACHINE_OEM_TABLE_ID     "x-oem-table-id"
>> +#define X86_MACHINE_BUS_LOCK_RATELIMIT  "bus-lock-ratelimit"
>>  #define TYPE_X86_MACHINE   MACHINE_TYPE_NAME("x86")
>>  OBJECT_DECLARE_TYPE(X86MachineState, X86MachineClass, X86_MACHINE)
>> diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
>> index 7fe9f52710..19b6c4a7e8 100644
>> --- a/target/i386/kvm/kvm.c
>> +++ b/target/i386/kvm/kvm.c
>> @@ -130,6 +130,9 @@ static bool has_msr_mcg_ext_ctl;
>>  static struct kvm_cpuid2 *cpuid_cache;
>>  static struct kvm_msr_list *kvm_feature_msrs;
>> +#define BUS_LOCK_SLICE_TIME 1000000000ULL /* ns */
>> +static QemuMutex bus_lock_ratelimit_lock;
>> +
>>  int kvm_has_pit_state2(void)
>>  {
>>      return has_pit_state2;
>> @@ -2267,6 +2270,28 @@ int kvm_arch_init(MachineState *ms, KVMState *s)
>>          }
>>      }
>> +    if (object_dynamic_cast(OBJECT(ms), TYPE_X86_MACHINE)) {
>> +        X86MachineState *x86ms = X86_MACHINE(ms);
>> +
>> +        if (x86ms->bus_lock_ratelimit > 0) {
>> +            ret = kvm_check_extension(s, KVM_CAP_X86_BUS_LOCK_EXIT);
>> +            if (!(ret & KVM_BUS_LOCK_DETECTION_EXIT)) {
>> +                error_report("kvm: bus lock detection unsupported");
>> +                return -ENOTSUP;
>> +            }
>> +            ret = kvm_vm_enable_cap(s, KVM_CAP_X86_BUS_LOCK_EXIT, 0,
>> +                                    KVM_BUS_LOCK_DETECTION_EXIT);
>> +            if (ret < 0) {
>> +                error_report("kvm: Failed to enable bus lock detection cap: %s",
>> +                             strerror(-ret));
>> +                return ret;
>> +            }
>> +            qemu_mutex_init(&bus_lock_ratelimit_lock);
>> +            ratelimit_set_speed(&x86ms->bus_lock_ratelimit_ctrl, x86ms->bus_lock_ratelimit,
>> +                                BUS_LOCK_SLICE_TIME);
>> +        }
>> +    }
>> +
>>      return 0;
>>  }
>> @@ -4221,6 +4246,20 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
>>      }
>>  }
>> +static void kvm_rate_limit_on_bus_lock(void)
>> +{
>> +    MachineState *ms = MACHINE(qdev_get_machine());
>> +    X86MachineState *x86ms = X86_MACHINE(ms);
>> +
>> +    qemu_mutex_lock(&bus_lock_ratelimit_lock);
>> +    uint64_t delay_ns = ratelimit_calculate_delay(&x86ms->bus_lock_ratelimit_ctrl, 1);
>> +    qemu_mutex_unlock(&bus_lock_ratelimit_lock);
>> +
>> +    if (delay_ns) {
>> +        g_usleep(delay_ns / SCALE_US);
>> +    }
>> +}
>> +
>>  MemTxAttrs kvm_arch_post_run(CPUState *cpu, struct kvm_run *run)
>>  {
>>      X86CPU *x86_cpu = X86_CPU(cpu);
>> @@ -4236,6 +4275,9 @@ MemTxAttrs kvm_arch_post_run(CPUState *cpu, struct kvm_run *run)
>>      } else {
>>          env->eflags &= ~IF_MASK;
>>      }
>> +    if (run->flags & KVM_RUN_X86_BUS_LOCK) {
>> +        kvm_rate_limit_on_bus_lock();
>> +    }
>>      /* We need to protect the apic state against concurrent accesses from
>>       * different threads in case the userspace irqchip is used. */
>> @@ -4594,6 +4636,10 @@ int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
>>          ioapic_eoi_broadcast(run->eoi.vector);
>>          ret = 0;
>>          break;
>> +    case KVM_EXIT_X86_BUS_LOCK:
>> +        /* already handled in kvm_arch_post_run */
>> +        ret = 0;
>> +        break;
>>      default:
>>          fprintf(stderr, "KVM: unknown exit reason %d\n", run->exit_reason);
>>          ret = -1;
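The throttling path in the patch above boils down to QEMU's RateLimit helper: the slice is fixed at one second (BUS_LOCK_SLICE_TIME), the per-slice quota is the bus-lock-ratelimit value, and each bus-lock exit charges one unit and then sleeps for whatever delay the limiter returns. As a rough, self-contained model of that accounting (a hypothetical reimplementation for illustration only — QEMU's real version lives in include/qemu/ratelimit.h, reads QEMU_CLOCK_REALTIME itself, and differs in detail):

```c
#include <stdint.h>

/* Hypothetical standalone model of the RateLimit accounting, NOT QEMU's
 * actual code. Time is passed in explicitly ('now', in ns) instead of
 * reading a clock, so the math is easy to exercise in isolation. */
typedef struct {
    int64_t  slice_end_time;  /* ns; end of the current accounting slice */
    uint64_t slice_ns;        /* slice length, e.g. 1 s for bus locks */
    uint64_t slice_quota;     /* units allowed per slice (bus locks/s) */
    uint64_t dispatched;      /* units charged in the current slice */
} RateLimitModel;

static void model_set_speed(RateLimitModel *l, uint64_t speed, uint64_t slice_ns)
{
    l->slice_ns = slice_ns;
    l->slice_quota = speed;
}

/* Charge n units; return how long the caller should sleep, in ns. */
static int64_t model_calculate_delay(RateLimitModel *l, uint64_t n, int64_t now)
{
    if (now >= l->slice_end_time) {
        /* The previous slice has elapsed: open a fresh one. */
        l->slice_end_time = now + l->slice_ns;
        l->dispatched = 0;
    }
    l->dispatched += n;
    if (l->slice_quota == 0 || l->dispatched <= l->slice_quota) {
        return 0;                        /* under quota (or unlimited) */
    }
    return l->slice_end_time - now;      /* over quota: wait out the slice */
}
```

With speed = 2 and a 1 s slice, the first two bus locks in a slice proceed immediately and the third is delayed until the slice ends, capping the steady-state rate at roughly two per second.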
+Stefan

I have a question about ratelimit_set_speed() below:

On Fri, Apr 30, 2021 at 06:33:05PM +0800, Chenyi Qiang wrote:
> A bus lock is acquired through either split locked access to writeback
> (WB) memory or any locked access to non-WB memory. It is typically >1000
> cycles slower than an atomic operation within a cache and can also
> disrupts performance on other cores.
[...]
> +static void kvm_rate_limit_on_bus_lock(void)
> +{
> +    MachineState *ms = MACHINE(qdev_get_machine());

qdev_get_machine() seems thread safe except for the first call,
but it's not documented as such.

Until it is documented as thread safe (which could take a while,
considering that there are ongoing attempts to clean it up), I
would avoid calling without the BQL, just in case.

> +    X86MachineState *x86ms = X86_MACHINE(ms);
> +
> +    qemu_mutex_lock(&bus_lock_ratelimit_lock);
> +    uint64_t delay_ns = ratelimit_calculate_delay(&x86ms->bus_lock_ratelimit_ctrl, 1);
> +    qemu_mutex_unlock(&bus_lock_ratelimit_lock);

Stefan, ratelimit_calculate_delay() is supposed to be thread
safe, correct?

In that case, bus_lock_ratelimit_lock would be completely unnecessary.

I normally prefer to avoid static variables, but in this case a

    static RateLimit bus_lock_ratelimit_ctrl;

variable could be the simplest solution here.

[...]
On Mon, May 17, 2021 at 03:46:29PM -0400, Eduardo Habkost wrote:
> > +    X86MachineState *x86ms = X86_MACHINE(ms);
> > +
> > +    qemu_mutex_lock(&bus_lock_ratelimit_lock);
> > +    uint64_t delay_ns = ratelimit_calculate_delay(&x86ms->bus_lock_ratelimit_ctrl, 1);
> > +    qemu_mutex_unlock(&bus_lock_ratelimit_lock);
>
> Stefan, ratelimit_calculate_delay() is supposed to be thread
> safe, correct?

Yes, it was a recent change by Paolo Bonzini:

commit 4951967d84a0acbf47895add9158e2d4c6056ea0
Author: Paolo Bonzini <pbonzini@redhat.com>
Date:   Tue Apr 13 10:20:32 2021 +0200

    ratelimit: protect with a mutex

> In that case, bus_lock_ratelimit_lock would be completely unnecessary.

I agree.
On 5/18/2021 3:46 AM, Eduardo Habkost wrote:
> +Stefan
>
> I have a question about ratelimit_set_speed() below:
>
[...]
>> +static void kvm_rate_limit_on_bus_lock(void)
>> +{
>> +    MachineState *ms = MACHINE(qdev_get_machine());
>
> qdev_get_machine() seems thread safe except for the first call,
> but it's not documented as such.
>
> Until it is documented as thread safe (which could take a while,
> considering that there are ongoing attempts to clean it up), I
> would avoid calling without the BQL, just in case.

OK, I would add the BQL here.

>> +    X86MachineState *x86ms = X86_MACHINE(ms);
>> +
>> +    qemu_mutex_lock(&bus_lock_ratelimit_lock);
>> +    uint64_t delay_ns = ratelimit_calculate_delay(&x86ms->bus_lock_ratelimit_ctrl, 1);
>> +    qemu_mutex_unlock(&bus_lock_ratelimit_lock);
>
> Stefan, ratelimit_calculate_delay() is supposed to be thread
> safe, correct?
>
> In that case, bus_lock_ratelimit_lock would be completely unnecessary.

Will remove it.

> I normally prefer to avoid static variables, but in this case a
>
>     static RateLimit bus_lock_ratelimit_ctrl;
>
> variable could be the simplest solution here.

Yes, static variable is simpler. Will change it if acceptable.

[...]
On Wed, May 19, 2021 at 06:00:18PM +0800, Chenyi Qiang wrote: > > > On 5/18/2021 3:46 AM, Eduardo Habkost wrote: > > +Stefan > > > > I have a question about ratelimit_set_speed() below: > > > > On Fri, Apr 30, 2021 at 06:33:05PM +0800, Chenyi Qiang wrote: > > > A bus lock is acquired through either split locked access to writeback > > > (WB) memory or any locked access to non-WB memory. It is typically >1000 > > > cycles slower than an atomic operation within a cache and can also > > > disrupts performance on other cores. > > > > > > Virtual Machines can exploit bus locks to degrade the performance of > > > system. To address this kind of performance DOS attack coming from the > > > VMs, bus lock VM exit is introduced in KVM and it can report the bus > > > locks detected in guest. If enabled in KVM, it would exit to the > > > userspace to let the user enforce throttling policies once bus locks > > > acquired in VMs. > > > > > > The availability of bus lock VM exit can be detected through the > > > KVM_CAP_X86_BUS_LOCK_EXIT. The returned bitmap contains the potential > > > policies supported by KVM. The field KVM_BUS_LOCK_DETECTION_EXIT in > > > bitmap is the only supported strategy at present. It indicates that KVM > > > will exit to userspace to handle the bus locks. > > > > > > This patch adds a ratelimit on the bus locks acquired in guest as a > > > mitigation policy. > > > > > > Introduce a new field "bus_lock_ratelimit" to record the limited speed > > > of bus locks in the target VM. The user can specify it through the > > > "bus-lock-ratelimit" as a machine property. In current implementation, > > > the default value of the speed is 0 per second, which means no > > > restrictions on the bus locks > > > > > > As for ratelimit on detected bus locks, simply set the ratelimit > > > interval to 1s and restrict the quota of bus lock occurence to the value > > > of "bus_lock_ratelimit". 
A potential alternative is to introduce the > > > time slice as a property which can help the user achieve more precise > > > control. > > > > > > The detail of Bus lock VM exit can be found in spec: > > > https://software.intel.com/content/www/us/en/develop/download/intel-architecture-instruction-set-extensions-programming-reference.html > > > > > > Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com> > > > > > > --- > > > Changes from v2: > > > - do some rename work (bus-lock-ratelimit and BUS_LOCK_TIME_SLICE). > > > (Eduardo) > > > - change to register a class property at the x86_machine_class_init() > > > and write the gettter/setter for the bus_lock_ratelimit property. > > > (Eduardo) > > > - add the lock to access the Ratelimit instance to avoid vcpu thread > > > race condition. (Eduardo) > > > - v2: https://lore.kernel.org/qemu-devel/20210420093736.17613-1-chenyi.qiang@intel.com/ > > > > > > Changes from RFC v1: > > > - Remove the rip info output, as the rip can't reflect the bus lock > > > position correctly. (Xiaoyao) > > > - RFC v1: https://lore.kernel.org/qemu-devel/20210317084709.15605-1-chenyi.qiang@intel.com/ > > [...] > > > diff --git a/include/hw/i386/x86.h b/include/hw/i386/x86.h > > > index c09b648dff..49b130a649 100644 > > > --- a/include/hw/i386/x86.h > > > +++ b/include/hw/i386/x86.h > > > @@ -74,12 +74,21 @@ struct X86MachineState { > > > * will be translated to MSI messages in the address space. > > > */ > > > AddressSpace *ioapic_as; > > > + > > > + /* > > > + * Ratelimit enforced on detected bus locks in guest. > > > + * The default value of the bus_lock_ratelimit is 0 per second, > > > + * which means no limitation on the guest's bus locks. 
> > > +     */
> > > +    uint64_t bus_lock_ratelimit;
> > > +    RateLimit bus_lock_ratelimit_ctrl;
> > >  };
> > >
> > >  #define X86_MACHINE_SMM "smm"
> > >  #define X86_MACHINE_ACPI "acpi"
> > >  #define X86_MACHINE_OEM_ID "x-oem-id"
> > >  #define X86_MACHINE_OEM_TABLE_ID "x-oem-table-id"
> > > +#define X86_MACHINE_BUS_LOCK_RATELIMIT "bus-lock-ratelimit"
> > >
> > >  #define TYPE_X86_MACHINE MACHINE_TYPE_NAME("x86")
> > >  OBJECT_DECLARE_TYPE(X86MachineState, X86MachineClass, X86_MACHINE)
> > >
> > > diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
> > > index 7fe9f52710..19b6c4a7e8 100644
> > > --- a/target/i386/kvm/kvm.c
> > > +++ b/target/i386/kvm/kvm.c
> > > @@ -130,6 +130,9 @@ static bool has_msr_mcg_ext_ctl;
> > >  static struct kvm_cpuid2 *cpuid_cache;
> > >  static struct kvm_msr_list *kvm_feature_msrs;
> > >
> > > +#define BUS_LOCK_SLICE_TIME 1000000000ULL /* ns */
> > > +static QemuMutex bus_lock_ratelimit_lock;
> > > +
> > >  int kvm_has_pit_state2(void)
> > >  {
> > >      return has_pit_state2;
> > > @@ -2267,6 +2270,28 @@ int kvm_arch_init(MachineState *ms, KVMState *s)
> > >          }
> > >      }
> > >
> > > +    if (object_dynamic_cast(OBJECT(ms), TYPE_X86_MACHINE)) {
> > > +        X86MachineState *x86ms = X86_MACHINE(ms);
> > > +
> > > +        if (x86ms->bus_lock_ratelimit > 0) {
> > > +            ret = kvm_check_extension(s, KVM_CAP_X86_BUS_LOCK_EXIT);
> > > +            if (!(ret & KVM_BUS_LOCK_DETECTION_EXIT)) {
> > > +                error_report("kvm: bus lock detection unsupported");
> > > +                return -ENOTSUP;
> > > +            }
> > > +            ret = kvm_vm_enable_cap(s, KVM_CAP_X86_BUS_LOCK_EXIT, 0,
> > > +                                    KVM_BUS_LOCK_DETECTION_EXIT);
> > > +            if (ret < 0) {
> > > +                error_report("kvm: Failed to enable bus lock detection cap: %s",
> > > +                             strerror(-ret));
> > > +                return ret;
> > > +            }
> > > +            qemu_mutex_init(&bus_lock_ratelimit_lock);
> > > +            ratelimit_set_speed(&x86ms->bus_lock_ratelimit_ctrl, x86ms->bus_lock_ratelimit,
> > > +                                BUS_LOCK_SLICE_TIME);
> > > +        }
> > > +    }
> > > +
> > >      return 0;
> > >  }
> > >
> > > @@ -4221,6 +4246,20 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
> > >      }
> > >  }
> > >
> > > +static void kvm_rate_limit_on_bus_lock(void)
> > > +{
> > > +    MachineState *ms = MACHINE(qdev_get_machine());
> >
> > qdev_get_machine() seems thread safe except for the first call,
> > but it's not documented as such.
> >
> > Until it is documented as thread safe (which could take a while,
> > considering that there are ongoing attempts to clean it up), I
> > would avoid calling without the BQL, just in case.
>
> OK, I would add the BQL here.

My suggestion was to avoid calling qdev_get_machine() at all, so you
don't need the BQL.  The static variable mentioned below would solve
this.

> > > +    X86MachineState *x86ms = X86_MACHINE(ms);
> > > +
> > > +    qemu_mutex_lock(&bus_lock_ratelimit_lock);
> > > +    uint64_t delay_ns = ratelimit_calculate_delay(&x86ms->bus_lock_ratelimit_ctrl, 1);
> > > +    qemu_mutex_unlock(&bus_lock_ratelimit_lock);
> >
> > Stefan, ratelimit_calculate_delay() is supposed to be thread
> > safe, correct?
> >
> > In that case, bus_lock_ratelimit_lock would be completely unnecessary.
>
> Will remove it.

> > I normally prefer to avoid static variables, but in this case a
> >
> >     static RateLimit bus_lock_ratelimit_ctrl;
> >
> > variable could be the simplest solution here.
>
> Yes, static variable is simpler. will change it if acceptable. Thanks.

The static variable sounds good enough to me, despite not being the
ideal solution.  Maybe other people have suggestions.

Note: don't forget to call ratelimit_init(), which is a new requirement
of the ratelimit API added by commit 4951967d84a0 ("ratelimit: protect
with a mutex").

> > > +
> > > +    if (delay_ns) {
> > > +        g_usleep(delay_ns / SCALE_US);
> > > +    }
> > > +}
> > > +
> > >  MemTxAttrs kvm_arch_post_run(CPUState *cpu, struct kvm_run *run)
> > >  {
> > >      X86CPU *x86_cpu = X86_CPU(cpu);
> > > @@ -4236,6 +4275,9 @@ MemTxAttrs kvm_arch_post_run(CPUState *cpu, struct kvm_run *run)
> > >      } else {
> > >          env->eflags &= ~IF_MASK;
> > >      }
> > > +    if (run->flags & KVM_RUN_X86_BUS_LOCK) {
> > > +        kvm_rate_limit_on_bus_lock();
> > > +    }
> > >
> > >      /* We need to protect the apic state against concurrent accesses from
> > >       * different threads in case the userspace irqchip is used. */
> > > @@ -4594,6 +4636,10 @@ int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
> > >          ioapic_eoi_broadcast(run->eoi.vector);
> > >          ret = 0;
> > >          break;
> > > +    case KVM_EXIT_X86_BUS_LOCK:
> > > +        /* already handled in kvm_arch_post_run */
> > > +        ret = 0;
> > > +        break;
> > >      default:
> > >          fprintf(stderr, "KVM: unknown exit reason %d\n", run->exit_reason);
> > >          ret = -1;
> > > --
> > > 2.17.1
> > >
> > >
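Taken together, the reviewers' suggestions amount to roughly the following rework. This is a sketch only, against QEMU-internal APIs (RateLimit, ratelimit_init(), ratelimit_calculate_delay(), g_usleep()), not a standalone program, and the final version of the patch may differ:

```c
/*
 * Sketch of the suggested rework: a file-scope RateLimit removes both
 * the qdev_get_machine() call (no BQL concern) and the extra mutex,
 * since the ratelimit API is internally thread safe as of commit
 * 4951967d84a0.
 */
static RateLimit bus_lock_ratelimit_ctrl;

static void kvm_rate_limit_on_bus_lock(void)
{
    uint64_t delay_ns = ratelimit_calculate_delay(&bus_lock_ratelimit_ctrl, 1);

    if (delay_ns) {
        g_usleep(delay_ns / SCALE_US);
    }
}

/* In kvm_arch_init(), after enabling KVM_CAP_X86_BUS_LOCK_EXIT: */
    ratelimit_init(&bus_lock_ratelimit_ctrl);
    ratelimit_set_speed(&bus_lock_ratelimit_ctrl, x86ms->bus_lock_ratelimit,
                        BUS_LOCK_SLICE_TIME);
```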
diff --git a/hw/i386/x86.c b/hw/i386/x86.c
index ed796fe6ba..d30cf27e29 100644
--- a/hw/i386/x86.c
+++ b/hw/i386/x86.c
@@ -1246,6 +1246,23 @@ static void x86_machine_set_oem_table_id(Object *obj, const char *value,
     strncpy(x86ms->oem_table_id, value, 8);
 }
 
+static void x86_machine_get_bus_lock_ratelimit(Object *obj, Visitor *v,
+                                const char *name, void *opaque, Error **errp)
+{
+    X86MachineState *x86ms = X86_MACHINE(obj);
+    uint64_t bus_lock_ratelimit = x86ms->bus_lock_ratelimit;
+
+    visit_type_uint64(v, name, &bus_lock_ratelimit, errp);
+}
+
+static void x86_machine_set_bus_lock_ratelimit(Object *obj, Visitor *v,
+                                const char *name, void *opaque, Error **errp)
+{
+    X86MachineState *x86ms = X86_MACHINE(obj);
+
+    visit_type_uint64(v, name, &x86ms->bus_lock_ratelimit, errp);
+}
+
 static void x86_machine_initfn(Object *obj)
 {
     X86MachineState *x86ms = X86_MACHINE(obj);
@@ -1256,6 +1273,7 @@ static void x86_machine_initfn(Object *obj)
     x86ms->pci_irq_mask = ACPI_BUILD_PCI_IRQS;
     x86ms->oem_id = g_strndup(ACPI_BUILD_APPNAME6, 6);
     x86ms->oem_table_id = g_strndup(ACPI_BUILD_APPNAME8, 8);
+    x86ms->bus_lock_ratelimit = 0;
 }
 
 static void x86_machine_class_init(ObjectClass *oc, void *data)
@@ -1299,6 +1317,12 @@ static void x86_machine_class_init(ObjectClass *oc, void *data)
                                           "Override the default value of field OEM Table ID "
                                           "in ACPI table header."
                                           "The string may be up to 8 bytes in size");
+
+    object_class_property_add(oc, X86_MACHINE_BUS_LOCK_RATELIMIT, "uint64_t",
+                              x86_machine_get_bus_lock_ratelimit,
+                              x86_machine_set_bus_lock_ratelimit, NULL, NULL);
+    object_class_property_set_description(oc, X86_MACHINE_BUS_LOCK_RATELIMIT,
+            "Set the ratelimit for the bus locks acquired in VMs");
 }
 
 static const TypeInfo x86_machine_info = {
diff --git a/include/hw/i386/x86.h b/include/hw/i386/x86.h
index c09b648dff..49b130a649 100644
--- a/include/hw/i386/x86.h
+++ b/include/hw/i386/x86.h
@@ -74,12 +74,21 @@ struct X86MachineState {
      * will be translated to MSI messages in the address space.
      */
     AddressSpace *ioapic_as;
+
+    /*
+     * Ratelimit enforced on detected bus locks in guest.
+     * The default value of the bus_lock_ratelimit is 0 per second,
+     * which means no limitation on the guest's bus locks.
+     */
+    uint64_t bus_lock_ratelimit;
+    RateLimit bus_lock_ratelimit_ctrl;
 };
 
 #define X86_MACHINE_SMM "smm"
 #define X86_MACHINE_ACPI "acpi"
 #define X86_MACHINE_OEM_ID "x-oem-id"
 #define X86_MACHINE_OEM_TABLE_ID "x-oem-table-id"
+#define X86_MACHINE_BUS_LOCK_RATELIMIT "bus-lock-ratelimit"
 
 #define TYPE_X86_MACHINE MACHINE_TYPE_NAME("x86")
 OBJECT_DECLARE_TYPE(X86MachineState, X86MachineClass, X86_MACHINE)
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 7fe9f52710..19b6c4a7e8 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -130,6 +130,9 @@ static bool has_msr_mcg_ext_ctl;
 static struct kvm_cpuid2 *cpuid_cache;
 static struct kvm_msr_list *kvm_feature_msrs;
 
+#define BUS_LOCK_SLICE_TIME 1000000000ULL /* ns */
+static QemuMutex bus_lock_ratelimit_lock;
+
 int kvm_has_pit_state2(void)
 {
     return has_pit_state2;
@@ -2267,6 +2270,28 @@ int kvm_arch_init(MachineState *ms, KVMState *s)
         }
     }
 
+    if (object_dynamic_cast(OBJECT(ms), TYPE_X86_MACHINE)) {
+        X86MachineState *x86ms = X86_MACHINE(ms);
+
+        if (x86ms->bus_lock_ratelimit > 0) {
+            ret = kvm_check_extension(s, KVM_CAP_X86_BUS_LOCK_EXIT);
+            if (!(ret & KVM_BUS_LOCK_DETECTION_EXIT)) {
+                error_report("kvm: bus lock detection unsupported");
+                return -ENOTSUP;
+            }
+            ret = kvm_vm_enable_cap(s, KVM_CAP_X86_BUS_LOCK_EXIT, 0,
+                                    KVM_BUS_LOCK_DETECTION_EXIT);
+            if (ret < 0) {
+                error_report("kvm: Failed to enable bus lock detection cap: %s",
+                             strerror(-ret));
+                return ret;
+            }
+            qemu_mutex_init(&bus_lock_ratelimit_lock);
+            ratelimit_set_speed(&x86ms->bus_lock_ratelimit_ctrl, x86ms->bus_lock_ratelimit,
+                                BUS_LOCK_SLICE_TIME);
+        }
+    }
+
     return 0;
 }
 
@@ -4221,6 +4246,20 @@ void kvm_arch_pre_run(CPUState *cpu, struct kvm_run *run)
     }
 }
 
+static void kvm_rate_limit_on_bus_lock(void)
+{
+    MachineState *ms = MACHINE(qdev_get_machine());
+    X86MachineState *x86ms = X86_MACHINE(ms);
+
+    qemu_mutex_lock(&bus_lock_ratelimit_lock);
+    uint64_t delay_ns = ratelimit_calculate_delay(&x86ms->bus_lock_ratelimit_ctrl, 1);
+    qemu_mutex_unlock(&bus_lock_ratelimit_lock);
+
+    if (delay_ns) {
+        g_usleep(delay_ns / SCALE_US);
+    }
+}
+
 MemTxAttrs kvm_arch_post_run(CPUState *cpu, struct kvm_run *run)
 {
     X86CPU *x86_cpu = X86_CPU(cpu);
@@ -4236,6 +4275,9 @@ MemTxAttrs kvm_arch_post_run(CPUState *cpu, struct kvm_run *run)
     } else {
         env->eflags &= ~IF_MASK;
     }
+    if (run->flags & KVM_RUN_X86_BUS_LOCK) {
+        kvm_rate_limit_on_bus_lock();
+    }
 
     /* We need to protect the apic state against concurrent accesses from
      * different threads in case the userspace irqchip is used. */
@@ -4594,6 +4636,10 @@ int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
         ioapic_eoi_broadcast(run->eoi.vector);
         ret = 0;
         break;
+    case KVM_EXIT_X86_BUS_LOCK:
+        /* already handled in kvm_arch_post_run */
+        ret = 0;
+        break;
     default:
         fprintf(stderr, "KVM: unknown exit reason %d\n", run->exit_reason);
         ret = -1;
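With the "bus-lock-ratelimit" property registered above, the limit could be set at launch time roughly like this (an illustrative invocation; the machine type and the value 1000 are examples, not taken from the patch):

```
# Allow at most 1000 guest bus locks per 1s slice; beyond that, the
# offending vCPU is delayed until the current slice ends.
qemu-system-x86_64 -accel kvm -machine q35,bus-lock-ratelimit=1000
```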
A bus lock is acquired through either a split locked access to writeback
(WB) memory or any locked access to non-WB memory. It is typically >1000
cycles slower than an atomic operation within a cache and can also
disrupt performance on other cores. Virtual machines can exploit bus
locks to degrade the performance of the system. To address this kind of
performance DoS attack coming from the VMs, bus lock VM exit is
introduced in KVM and it can report the bus locks detected in the guest.
If enabled in KVM, it would exit to userspace to let the user enforce
throttling policies once bus locks are acquired in VMs.

The availability of bus lock VM exit can be detected through
KVM_CAP_X86_BUS_LOCK_EXIT. The returned bitmap contains the potential
policies supported by KVM. The field KVM_BUS_LOCK_DETECTION_EXIT in the
bitmap is the only supported strategy at present. It indicates that KVM
will exit to userspace to handle the bus locks.

This patch adds a ratelimit on the bus locks acquired in the guest as a
mitigation policy.

Introduce a new field "bus_lock_ratelimit" to record the limited speed
of bus locks in the target VM. The user can specify it through the
"bus-lock-ratelimit" machine property. In the current implementation,
the default value of the speed is 0 per second, which means no
restrictions on the bus locks.

As for the ratelimit on detected bus locks, simply set the ratelimit
interval to 1s and restrict the quota of bus lock occurrences to the
value of "bus_lock_ratelimit". A potential alternative is to introduce
the time slice as a property, which could help the user achieve more
precise control.

The details of bus lock VM exit can be found in the spec:
https://software.intel.com/content/www/us/en/develop/download/intel-architecture-instruction-set-extensions-programming-reference.html

Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
---
Changes from v2:
- do some rename work (bus-lock-ratelimit and BUS_LOCK_TIME_SLICE).
  (Eduardo)
- change to register a class property at x86_machine_class_init() and
  write the getter/setter for the bus_lock_ratelimit property. (Eduardo)
- add the lock to access the RateLimit instance to avoid a vcpu thread
  race condition. (Eduardo)
- v2: https://lore.kernel.org/qemu-devel/20210420093736.17613-1-chenyi.qiang@intel.com/

Changes from RFC v1:
- Remove the rip info output, as the rip can't reflect the bus lock
  position correctly. (Xiaoyao)
- RFC v1: https://lore.kernel.org/qemu-devel/20210317084709.15605-1-chenyi.qiang@intel.com/
---
 hw/i386/x86.c         | 24 ++++++++++++++++++++++
 include/hw/i386/x86.h |  9 +++++++++
 target/i386/kvm/kvm.c | 46 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 79 insertions(+)