Message ID | 4A3B99DD.50306@gmail.com (mailing list archive)
---|---
State | New, archived
On Fri, Jun 19, 2009 at 03:59:57PM +0200, Jes Sorensen wrote:
> Hi,
>
> This one introduces a -maxcpus setting, allowing the user to specify
> the maximum number of vCPUs the system can have, as discussed with Gleb
> earlier in the week.

What is the maximum value for the 'maxcpus' setting for KVM?

libvirt currently does

    fd = open("/dev/kvm")
    r = ioctl(fd, KVM_CHECK_EXTENSION, KVM_CAP_NR_VCPUS);

to figure out what the maximum allowed vCPUs will be for KVM, and
currently it is returning 16 IIRC.

> @@ -5666,6 +5667,13 @@ int main(int argc, char **argv, char **e
>              exit(1);
>          }
>          break;
> +    case QEMU_OPTION_maxcpus:
> +        max_cpus = atoi(optarg);
> +        if ((max_cpus < 1) || (max_cpus > machine->max_cpus)) {
> +            fprintf(stderr, "Invalid number of CPUs\n");
> +            exit(1);
> +        }
> +        break;
>      case QEMU_OPTION_vnc:
>          display_type = DT_VNC;
>          vnc_display = optarg;

This implies the limit (for the x86 pc machine at least) is now 255. Is
that the correct interpretation on my part?

Regards,
Daniel
On 06/19/2009 04:15 PM, Daniel P. Berrange wrote:
> On Fri, Jun 19, 2009 at 03:59:57PM +0200, Jes Sorensen wrote:
>> Hi,
>>
>> This one introduces a -maxcpus setting, allowing the user to specify
>> the maximum number of vCPUs the system can have, as discussed with Gleb
>> earlier in the week.
>
> What is the maximum value for the 'maxcpus' setting for KVM?

Right now it is still 16, however I plan to change this.

> libvirt currently does
>
>     fd = open("/dev/kvm")
>     r = ioctl(fd, KVM_CHECK_EXTENSION, KVM_CAP_NR_VCPUS);
>
> to figure out what the maximum allowed vCPUs will be for KVM,
> and currently it is returning 16 IIRC.

Interesting, this will need to be addressed as well. I have plans to
introduce a mechanism telling the kernel where the limit will be, in
order to allow it to allocate data structures in a reasonable manner.

>> @@ -5666,6 +5667,13 @@ int main(int argc, char **argv, char **e
>>              exit(1);
>>          }
>>          break;
>> +    case QEMU_OPTION_maxcpus:
>> +        max_cpus = atoi(optarg);
>> +        if ((max_cpus < 1) || (max_cpus > machine->max_cpus)) {
>> +            fprintf(stderr, "Invalid number of CPUs\n");
>> +            exit(1);
>> +        }
>
> This implies the limit (for the x86 pc machine at least) is now 255. Is
> that the correct interpretation on my part?

Actually the 255 limit is tied to ACPI. Once we support ACPI 3.0 and
x2apic, it will get much worse. Be afraid, be very afraid :-) To be
honest, it is going to be a while before we get to that, but I hope to
get to it eventually.

I strongly recommend you try not to impose static limits within libvirt
for the number of vCPUs.

I guess it will become a tricky issue who is telling whom what the limit
is. Ideally I would like to see the kernel limit becoming unlimited and
the restrictions being set by userland.

Cheers,
Jes
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On (Fri) Jun 19 2009 [15:59:57], Jes Sorensen wrote:
> Hi,
>
> This one introduces a -maxcpus setting, allowing the user to specify
> the maximum number of vCPUs the system can have, as discussed with Gleb
> earlier in the week.

ACK, but please fix this:

    +DEF("maxcpus", HAS_ARG, QEMU_OPTION_maxcpus,
    +    "-maxcpus n      set maximumthe number of possibly CPUs to 'n'\n")
    +STEXI

		Amit
On 06/19/2009 05:23 PM, Jes Sorensen wrote:
>
>> libvirt currently does
>>
>>     fd = open("/dev/kvm")
>>     r = ioctl(fd, KVM_CHECK_EXTENSION, KVM_CAP_NR_VCPUS);
>>
>> to figure out what the maximum allowed vCPUs will be for KVM,
>> and currently it is returning 16 IIRC.
>
> Interesting, this will need to be addressed as well. I have plans to
> introduce a mechanism telling the kernel where the limit will be, in
> order to allow it to allocate data structures in a reasonable manner.

I prefer to have the kernel adjust dynamically. The vcpu array wants 8
bytes per vcpu, so for 256 vcpus we're still at a very reasonable 2K.
For (say) 4K vcpus, we'll need to reallocate the array dynamically, for
which rcu will be perfect or better.

>> This implies the limit (for x86 pc machine at least) is now 255. Is that
>> the correct interpretation on my part?
>
> Actually the 255 limit is tied to ACPI. Once we support ACPI 3.0 and
> x2apic, it will get much worse. Be afraid, be very afraid :-) To be
> honest, that is going to be a while before we get to that, but I hope
> to get to it eventually.
>
> I strongly recommend you try not to impose static limits within libvirt
> for the number of vCPUs.
>
> I guess it will become a tricky issue who is telling who what the limit
> is. Ideally I would like to see the kernel limit becoming unlimited and
> the restrictions being set by userland.

The kernel needs some kind of limit to avoid resource exhaustion. It
could be set quite high. Beyond that there should be additional setup.
Note that for 4K vcpus you'll need to increase the maximum number of
files opened by a process, the thread limits, etc.
On 06/19/2009 04:59 PM, Jes Sorensen wrote:
> Hi,
>
> This one introduces a -maxcpus setting, allowing the user to specify
> the maximum number of vCPUs the system can have, as discussed with Gleb
> earlier in the week.

Patch looks good. There is nothing kvm-specific about it, so please
send it to qemu-devel@. Also, please split the qemu part from the bios
part.
Introduce -maxcpus flag to QEMU and pass the value to the BIOS through
FW_CFG. A follow-on patch will use it to determine the size of the MADT.

Signed-off-by: Jes Sorensen <jes@sgi.com>
---
 hw/fw_cfg.c          |    1 +
 hw/fw_cfg.h          |    1 +
 kvm/bios/rombios32.c |   16 ++++++++++++++++
 qemu-options.hx      |    9 +++++++++
 sysemu.h             |    1 +
 vl.c                 |    8 ++++++++
 6 files changed, 36 insertions(+)

Index: qemu-kvm/hw/fw_cfg.c
===================================================================
--- qemu-kvm.orig/hw/fw_cfg.c
+++ qemu-kvm/hw/fw_cfg.c
@@ -279,6 +279,7 @@ void *fw_cfg_init(uint32_t ctl_port, uin
     fw_cfg_add_bytes(s, FW_CFG_UUID, qemu_uuid, 16);
     fw_cfg_add_i16(s, FW_CFG_NOGRAPHIC, (uint16_t)(display_type == DT_NOGRAPHIC));
     fw_cfg_add_i16(s, FW_CFG_NB_CPUS, (uint16_t)smp_cpus);
+    fw_cfg_add_i16(s, FW_CFG_MAX_CPUS, (uint16_t)max_cpus);

     register_savevm("fw_cfg", -1, 1, fw_cfg_save, fw_cfg_load, s);
     qemu_register_reset(fw_cfg_reset, 0, s);
Index: qemu-kvm/hw/fw_cfg.h
===================================================================
--- qemu-kvm.orig/hw/fw_cfg.h
+++ qemu-kvm/hw/fw_cfg.h
@@ -15,6 +15,7 @@
 #define FW_CFG_INITRD_SIZE      0x0b
 #define FW_CFG_BOOT_DEVICE      0x0c
 #define FW_CFG_NUMA             0x0d
+#define FW_CFG_MAX_CPUS         0x0e
 #define FW_CFG_MAX_ENTRY        0x10
 #define FW_CFG_WRITE_CHANNEL    0x4000
Index: qemu-kvm/kvm/bios/rombios32.c
===================================================================
--- qemu-kvm.orig/kvm/bios/rombios32.c
+++ qemu-kvm/kvm/bios/rombios32.c
@@ -441,6 +441,7 @@ void delay_ms(int n)
 }

 uint16_t smp_cpus;
+uint16_t max_cpus = MAX_CPUS;
 uint32_t cpuid_signature;
 uint32_t cpuid_features;
 uint32_t cpuid_ext_features;
@@ -484,6 +485,7 @@ void wrmsr_smp(uint32_t index, uint64_t
 #define QEMU_CFG_ID       0x01
 #define QEMU_CFG_UUID     0x02
 #define QEMU_CFG_NUMA     0x0D
+#define QEMU_CFG_MAX_CPUS 0x0E
 #define QEMU_CFG_ARCH_LOCAL     0x8000
 #define QEMU_CFG_ACPI_TABLES (QEMU_CFG_ARCH_LOCAL + 0)
 #define QEMU_CFG_SMBIOS_ENTRIES (QEMU_CFG_ARCH_LOCAL + 1)
@@ -546,6 +548,19 @@ static uint16_t smbios_entries(void)
     return cnt;
 }

+static uint16_t get_max_cpus(void)
+{
+    uint16_t cnt;
+
+    qemu_cfg_select(QEMU_CFG_MAX_CPUS);
+    qemu_cfg_read((uint8_t*)&cnt, sizeof(cnt));
+
+    if (!cnt)
+        cnt = MAX_CPUS;
+
+    return cnt;
+}
+
 uint64_t qemu_cfg_get64 (void)
 {
     uint64_t ret;
@@ -1655,6 +1670,7 @@ void acpi_bios_init(void)
     addr += sizeof(SSDTCode);

 #ifdef BX_QEMU
+    max_cpus = get_max_cpus();
     qemu_cfg_select(QEMU_CFG_NUMA);
     nb_numa_nodes = qemu_cfg_get64();
 #else
Index: qemu-kvm/qemu-options.hx
===================================================================
--- qemu-kvm.orig/qemu-options.hx
+++ qemu-kvm/qemu-options.hx
@@ -47,6 +47,15 @@ CPUs are supported. On Sparc32 target, L
 to 4.
 ETEXI

+DEF("maxcpus", HAS_ARG, QEMU_OPTION_maxcpus,
+    "-maxcpus n      set maximumthe number of possibly CPUs to 'n'\n")
+STEXI
+@item -maxcpus @var{n}
+Set the maximum number of possible CPUs to @var(n). @var(n) has to be
+bigger or equal to the value of -smp. If @var(n) is equal to -smp,
+there will be no space for hotplug cpus to be added later.
+ETEXI
+
 DEF("numa", HAS_ARG, QEMU_OPTION_numa,
     "-numa node[,mem=size][,cpus=cpu[-cpu]][,nodeid=node]\n")
 STEXI
Index: qemu-kvm/sysemu.h
===================================================================
--- qemu-kvm.orig/sysemu.h
+++ qemu-kvm/sysemu.h
@@ -119,6 +119,7 @@ extern int alt_grab;
 extern int usb_enabled;
 extern int no_virtio_balloon;
 extern int smp_cpus;
+extern int max_cpus;
 extern int cursor_hide;
 extern int graphic_rotate;
 extern int no_quit;
Index: qemu-kvm/vl.c
===================================================================
--- qemu-kvm.orig/vl.c
+++ qemu-kvm/vl.c
@@ -246,6 +246,7 @@ int singlestep = 0;
 const char *assigned_devices[MAX_DEV_ASSIGN_CMDLINE];
 int assigned_devices_index;
 int smp_cpus = 1;
+int max_cpus = 16;
 const char *vnc_display;
 int acpi_enabled = 1;
 int no_hpet = 0;
@@ -5666,6 +5667,13 @@ int main(int argc, char **argv, char **e
                 exit(1);
             }
             break;
+        case QEMU_OPTION_maxcpus:
+            max_cpus = atoi(optarg);
+            if ((max_cpus < 1) || (max_cpus > machine->max_cpus)) {
+                fprintf(stderr, "Invalid number of CPUs\n");
+                exit(1);
+            }
+            break;
         case QEMU_OPTION_vnc:
             display_type = DT_VNC;
             vnc_display = optarg;