Message ID | 4A57118F.3030907@redhat.com (mailing list archive) |
---|---|
State | New, archived |
On Fri, 2009-07-10 at 12:01 +0200, Lukáš Doktor wrote:
> After discussion I split the patches.

Hi Lukáš, sorry for the delay answering your patch. It looks good to me in general; I have some remarks to make:

1) When posting patches to the autotest KVM tests, please cross-post to the autotest mailing list (autotest@test.kernel.org) and the KVM list.

2) About scripts to prepare the environment to perform tests: we have had some discussion about including shell scripts in autotest. Bottom line, autotest has a policy of not including non-Python code when possible [1]. So, would you mind re-creating your hugepage setup code in Python and re-sending it?

Thanks for your contribution, looking forward to getting it integrated into our tests.

[1] Unless it is not practical for testing purposes; writing tests in C is just fine, for example.

> This patch adds the kvm_hugepage variant. It prepares the host system and
> starts the VM with the -mem-path option. It does not clean up after itself,
> because it is impossible to unmount and free hugepages before all guests
> are destroyed.
>
> I need to ask you what to do about the change of the qemu parameter. Newer
> versions use -mempath instead of -mem-path. This is impossible to fix using
> the current config file. I can see 2 solutions:
> 1) direct change in kvm_vm.py (parse the output and try the other parameter)
> 2) detect qemu capabilities outside and create an additional layer (better
>    for future occurrences)
>
> On 9.7.2009 11:24, Lukáš Doktor wrote:
> > This patch adds the kvm_hugepage variant. It prepares the host system and
> > starts the VM with the -mem-path option. It does not clean up after itself,
> > because it is impossible to unmount and free hugepages before all guests
> > are destroyed.
> >
> > There is also an added autotest.libhugetlbfs test.
> >
> > I need to ask you what to do about the change of the qemu parameter. Newer
> > versions use -mempath instead of -mem-path. This is impossible to fix using
> > the current config file. I can see 2 solutions:
> > 1) direct change in kvm_vm.py (parse the output and try the other parameter)
> > 2) detect qemu capabilities outside and create an additional layer (better
> >    for future occurrences)
> >
> > Tested-by: ldoktor@redhat.com on RHEL5.4 with kvm-83-72.el5
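For reference, here is a minimal Python sketch of option 2 from the quoted mail: probing the qemu binary's `-help` output to decide between `-mem-path` and `-mempath`. The helper name, the default binary name and the use of the `commands` module are illustrative assumptions, not part of the submitted patch.

```python
# Illustrative sketch only: ask qemu which hugepage memory flag it knows.
# The function name and probing approach are assumptions, not autotest code.
import commands


def get_hugepage_flag(qemu_binary="qemu"):
    # qemu prints its option list on -help; the exit status is ignored,
    # since some builds return non-zero for -help.
    status, output = commands.getstatusoutput("%s -help" % qemu_binary)
    if "-mem-path" in output:
        return "-mem-path"
    if "-mempath" in output:
        return "-mempath"
    return None
```

The returned flag could then be used when building the extra_params string, so the config file never has to hard-code either spelling.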
```diff
diff -Narup orig/client/tests/kvm/kvm_tests.cfg.sample new/client/tests/kvm/kvm_tests.cfg.sample
--- orig/client/tests/kvm/kvm_tests.cfg.sample	2009-07-08 13:18:07.000000000 +0200
+++ new/client/tests/kvm/kvm_tests.cfg.sample	2009-07-09 10:15:58.000000000 +0200
@@ -546,6 +549,12 @@ variants:
         only default
         image_format = raw
 
+variants:
+    - @kvm_smallpages:
+    - kvm_hugepages:
+        pre_command = "/bin/bash scripts/hugepage.sh /mnt/hugepage"
+        extra_params += " -mem-path /mnt/hugepage"
+
 
 variants:
     - @basic:
@@ -559,6 +568,7 @@ variants:
         only Fedora.8.32
         only install setup boot shutdown
         only rtl8139
+        only kvm_smallpages
     - @sample1:
         only qcow2
         only ide
diff -Narup orig/client/tests/kvm/kvm_vm.py new/client/tests/kvm/kvm_vm.py
--- orig/client/tests/kvm/kvm_vm.py	2009-07-08 13:18:07.000000000 +0200
+++ new/client/tests/kvm/kvm_vm.py	2009-07-09 10:05:19.000000000 +0200
@@ -400,6 +400,13 @@ class VM:
                 self.destroy()
                 return False
 
+        if output:
+            logging.debug("qemu produced some output:\n%s", output)
+            if "alloc_mem_area" in output:
+                logging.error("Could not allocate hugepage memory"
+                              " -- qemu command:\n%s", qemu_command)
+                return False
+
         logging.debug("VM appears to be alive with PID %d", self.pid)
 
         return True
diff -Narup orig/client/tests/kvm/scripts/hugepage.sh new/client/tests/kvm/scripts/hugepage.sh
--- orig/client/tests/kvm/scripts/hugepage.sh	1970-01-01 01:00:00.000000000 +0100
+++ new/client/tests/kvm/scripts/hugepage.sh	2009-07-09 09:47:14.000000000 +0200
@@ -0,0 +1,34 @@
+#!/bin/bash
+# Alocates enough hugepages and mount hugetlbfs to $1.
+if [ $# -ne 1 ]; then
+    echo "USAGE: $0 mem_path"
+    exit 1
+fi
+
+Hugepagesize=$(grep Hugepagesize /proc/meminfo | cut -d':' -f 2 | \
+               xargs | cut -d' ' -f1)
+VMS=$(expr $(echo $KVM_TEST_vms | grep -c ' ') + 1)
+if [ "$KVM_TEST_max_vms" ] && [ "$VMS" -lt "$KVM_TEST_max_vms" ]; then
+    VMS="$KVM_TEST_max_vms"
+fi
+VMSM=$(expr $(expr $VMS \* $KVM_TEST_mem) + $(expr $VMS \* 64 ))
+TARGET=$(expr $VMSM \* 1024 \/ $Hugepagesize)
+
+NR=$(cat /proc/sys/vm/nr_hugepages)
+while [ "$NR" -ne "$TARGET" ]; do
+    NR_="$NR";echo $TARGET > /proc/sys/vm/nr_hugepages
+    sleep 5s
+    NR=$(cat /proc/sys/vm/nr_hugepages)
+    if [ "$NR" -eq "$NR_" ] ; then
+        echo "Can not alocate $TARGET of hugepages"
+        exit 2
+    fi
+done
+
+if [ ! "$(mount | grep /mnt/hugepage |grep hugetlbfs)" ]; then
+    mkdir -p $1
+    mount -t hugetlbfs none $1 || \
+        (echo "Can not mount hugetlbfs filesystem to $1"; exit 3)
+else
+    echo "hugetlbfs filesystem already mounted"
+fi
```
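Regarding the request to redo the setup code in Python, below is a rough sketch of what a Python port of scripts/hugepage.sh might look like. It assumes the same KVM_TEST_vms, KVM_TEST_max_vms and KVM_TEST_mem environment variables are exported to pre_command scripts; the function name is hypothetical and the shell script's retry/sleep loop is simplified to a single allocation attempt.

```python
# Rough, illustrative Python port of scripts/hugepage.sh; names and error
# handling are assumptions, and the retry loop of the shell script is omitted.
import os


def setup_hugepages(mem_path):
    # Hugepage size in kB, from a line like "Hugepagesize:    2048 kB"
    hugepage_size = 0
    for line in open("/proc/meminfo"):
        if line.startswith("Hugepagesize"):
            hugepage_size = int(line.split()[1])

    # Number of VMs: the larger of the configured vms list and max_vms
    vms = len(os.environ.get("KVM_TEST_vms", "").split())
    max_vms = int(os.environ.get("KVM_TEST_max_vms", "0"))
    vms = max(vms, max_vms, 1)
    mem_per_vm = int(os.environ["KVM_TEST_mem"])

    # Same heuristic as the shell script: guest RAM plus 64 MB of
    # overhead per VM, converted from MB to a number of hugepages
    target = (vms * mem_per_vm + vms * 64) * 1024 // hugepage_size

    open("/proc/sys/vm/nr_hugepages", "w").write(str(target))
    allocated = int(open("/proc/sys/vm/nr_hugepages").read())
    if allocated < target:
        raise Exception("Could only allocate %d of %d hugepages" %
                        (allocated, target))

    # Mount hugetlbfs on mem_path unless it is already mounted there
    if " %s hugetlbfs" % mem_path not in open("/proc/mounts").read():
        if not os.path.isdir(mem_path):
            os.makedirs(mem_path)
        if os.system("mount -t hugetlbfs none %s" % mem_path) != 0:
            raise Exception("Cannot mount hugetlbfs on %s" % mem_path)
```

Called from a small pre_command wrapper, e.g. setup_hugepages("/mnt/hugepage"), this would keep the test tree Python-only as requested in the review.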