Message ID | 20170105015203.GA11785@char.us.oracle.com (mailing list archive)
---|---
State | New, archived
On 05/01/2017 01:52, Konrad Rzeszutek Wilk wrote:
> Hey,
>
> I was trying to boot up on a 30 CPU machine (15 cores, SMT).
>
> It works just fine with credit1 (see further down the log),
> but if I try credit2 it ends up hanging during bootup.
>
> I am going to naively assume it is due to how the vCPUs are
> exposed (where they match the physical CPUs under credit1),
> but under credit2 they are different.
>
> The dom0_max_vcpus option does not seem to have any effect. When I
> remove it, things are still problematic.
>
> Help!?

This matches the symptoms seen by XenServer when trying to stress 32-vcpu
guests under Credit2. Malcolm did find (based on interpreted iperf
throughput graphs) that Credit2 did seem to preferentially schedule the
lower-numbered vcpus, rather than scheduling them evenly. The iperf graphs
showed this as a diminishing curve of vcpu id vs. total traffic passed
(which was a straight line under credit1), but Linux declaring a soft
lockup is also very plausible if there is asynchronous scheduling.

~Andrew
On Wed, Jan 04, 2017 at 08:52:03PM -0500, Konrad Rzeszutek Wilk wrote:
> Hey,
>
> I was trying to boot up on a 30 CPU machine (15 cores, SMT).
>
> It works just fine with credit1 (see further down the log),
> but if I try credit2 it ends up hanging during bootup.
>
> I am going to naively assume it is due to how the vCPUs are
> exposed (where they match the physical CPUs under credit1),
> but under credit2 they are different.
>
> The dom0_max_vcpus option does not seem to have any effect. When I
> remove it, things are still problematic.
>
> Help!?

It seems that now that I have taken dom0_max_vcpus out of the picture I
can reproduce this with the credit1 scheduler too. So it looks like a
Linux issue.

Boris, any ideas? This is 4.9.
On 01/04/2017 09:10 PM, Konrad Rzeszutek Wilk wrote:
> On Wed, Jan 04, 2017 at 08:52:03PM -0500, Konrad Rzeszutek Wilk wrote:
>> Hey,
>>
>> I was trying to boot up on a 30 CPU machine (15 cores, SMT).
>>
>> It works just fine with credit1 (see further down the log),
>> but if I try credit2 it ends up hanging during bootup.
>>
>> I am going to naively assume it is due to how the vCPUs are
>> exposed (where they match the physical CPUs under credit1),
>> but under credit2 they are different.
>>
>> The dom0_max_vcpus option does not seem to have any effect. When I
>> remove it, things are still problematic.
>>
>> Help!?
>
> It seems that now that I have taken dom0_max_vcpus out of the picture I
> can reproduce this with the credit1 scheduler too. So it looks like a
> Linux issue.
>
> Boris, any ideas? This is 4.9.
>

I think 4.9 is broken. There were changes in topology initialization that
broke Xen in early 4.9 RCs. tglx posted a patch that resolved this issue
(he was actually addressing something else, and fixing the dom0 crash was
a side effect). I thought it would make it into 4.9, but apparently it
didn't: I just tried 4.9 with the default Xen scheduler and it crashed.
Not with the error that you are seeing, though:

...
[   41.327438] cpu 31 spinlock event irq 327
[   41.386455] x86: Booted up 1 node, 32 CPUs
[   41.400223] BUG: arch topology borken
[   41.412415]      the SMT domain not a subset of the MC domain
...
[   42.375375] BUG: arch topology borken
[   42.387665]      the SMT domain not a subset of the MC domain
[   42.412511] divide error: 0000 [#1] SMP
[   42.424831] Modules linked in:
[   42.435129] CPU: 1 PID: 2 Comm: kthreadd Not tainted 4.9.0 #66
[   42.454579] Hardware name: Intel Corporation S2600CP/S2600CP, BIOS SE5C600.86B.99.99.x032.072520111118 07/25/2011
[   42.488610] task: ffff8808143a0dc0 task.stack: ffffc90004240000
[   42.508346] RIP: e030:[<ffffffff810cfe4b>] [<ffffffff810cfe4b>] select_task_rq_fair+0x2fb/0x730
...

The latest 4.10-rc2 boots with both credit1 and credit2. So you can
either try that or apply 9d85eb9119f4eeeb48e87adfcd71f752655700e9, which
I think is the missing patch.

-boris
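Before rebuilding, it may be worth checking whether a given tree already contains the suspected fix. The sketch below demonstrates the check against a throwaway repository so the commands are runnable as-is; in a real kernel tree you would set `fix` to the commit hash Boris mentions above.

```shell
# Sketch: test whether a commit is already an ancestor of HEAD before
# deciding to cherry-pick it. The throwaway repo is purely illustrative.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=t@example.com -c user.name=t \
    commit -q --allow-empty -m 'stand-in for the topology fix'
fix=$(git -C "$repo" rev-parse HEAD)

# In a kernel tree this would be:
#   fix=9d85eb9119f4eeeb48e87adfcd71f752655700e9
if git -C "$repo" merge-base --is-ancestor "$fix" HEAD; then
    echo "fix present"
else
    echo "fix missing; try: git cherry-pick $fix"
fi
```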
On Thu, 2017-01-05 at 02:05 +0000, Andrew Cooper wrote:
> On 05/01/2017 01:52, Konrad Rzeszutek Wilk wrote:
> > It works just fine with credit1 (see further down the log),
> > but if I try credit2 it ends up hanging during bootup.
> >
> > I am going to naively assume it is due to how the vCPUs are
> > exposed (where they match the physical CPUs under credit1),
> > but under credit2 they are different.
> >
> > The dom0_max_vcpus option does not seem to have any effect. When I
> > remove it, things are still problematic.
> >
> > Help!?
>
> This matches the symptoms seen by XenServer when trying to stress
> 32-vcpu guests under Credit2. Malcolm did find (based on interpreted
> iperf throughput graphs) that Credit2 did seem to preferentially
> schedule the lower-numbered vcpus, rather than scheduling them evenly.
>
To be fair (and just for the record, since the cause seems actually to be
something else), this was with an old version of Credit2 (at least two
Xen releases ago, IIRC, certainly not 4.8) that was known to be buggy.

We have other tests and benchmarks, done on equally big machines, which
prove the scheduler is fully functional.

Regards,
Dario
On 05/01/2017 08:39, Dario Faggioli wrote:
> On Thu, 2017-01-05 at 02:05 +0000, Andrew Cooper wrote:
>> This matches the symptoms seen by XenServer when trying to stress
>> 32-vcpu guests under Credit2. Malcolm did find (based on interpreted
>> iperf throughput graphs) that Credit2 did seem to preferentially
>> schedule the lower-numbered vcpus, rather than scheduling them evenly.
>>
> To be fair (and just for the record, since the cause seems actually to
> be something else), this was with an old version of Credit2 (at least
> two Xen releases ago, IIRC, certainly not 4.8) that was known to be
> buggy.
>
> We have other tests and benchmarks, done on equally big machines, which
> prove the scheduler is fully functional.
>

Yes, I have done extensive stress tests on XenServer (IIRC Xen 4.7) with
Credit2 on guests with 32 or more VCPUs and didn't see any hangups.
Malcolm did find hangup and crash issues with 32-VCPU guests, but that
issue was highlighted and fixed. So during my testing I specifically
focused on this scenario and found no problem.

Anshul
> Latest 4.10-rc2 boots both credit1 and credit2. So you can either try
> that or apply 9d85eb9119f4eeeb48e87adfcd71f752655700e9, which I think
> is the missing patch.

And this patch has just been queued for 4.9-stable.

-boris
On Wed, 2017-01-04 at 22:13 -0500, Boris Ostrovsky wrote:
> On 01/04/2017 09:10 PM, Konrad Rzeszutek Wilk wrote:
> > It seems that now that I have taken dom0_max_vcpus out of the picture
> > I can reproduce this with the credit1 scheduler too. So it looks like
> > a Linux issue.
> >
> > Boris, any ideas? This is 4.9.
> >
> I think 4.9 is broken. There were changes in topology initialization
> that broke Xen in early 4.9 RCs.
>
Maybe I am misremembering/saying stupid things, but I recall that at some
point we were testing some of the recent and in-development Linux
branches in OSSTest.

I don't think we do that any longer, and that may be part of the reason
why we missed this one?

Ian, Wei, thoughts?

Regards,
Dario
On 01/12/2017 07:50 AM, Dario Faggioli wrote:
> On Wed, 2017-01-04 at 22:13 -0500, Boris Ostrovsky wrote:
>> I think 4.9 is broken. There were changes in topology initialization
>> that broke Xen in early 4.9 RCs.
>>
> Maybe I am misremembering/saying stupid things, but I recall that at
> some point we were testing some of the recent and in-development Linux
> branches in OSSTest.
>
> I don't think we do that any longer, and that may be part of the reason
> why we missed this one?

I believe you needed to be on a multi-socket system to catch this bug.
That's why, for example, my tests missed it --- the boxes that I use are
all single-node.

-boris

> Ian, Wei, thoughts?
>
> Regards,
> Dario
On Thu, 2017-01-12 at 11:22 -0500, Boris Ostrovsky wrote:
> On 01/12/2017 07:50 AM, Dario Faggioli wrote:
> > I don't think we do that any longer, and that may be part of the
> > reason why we missed this one?
>
> I believe you needed to be on a multi-socket system to catch this bug.
> That's why, for example, my tests missed it --- the boxes that I use
> are all single-node.
>
Yeah, I do test on NUMA, but I mostly do Xen development, so I test the
latest Xen but (most of the time) with whatever distro kernel is easiest
to use (although usually fairly recent ones, like 4.8).

Anyway, we should have some multi-socket boxes on OSSTest, AFAICR.

Dario
Dario Faggioli writes ("Re: [Xen-devel] Xen 4.8 + Linux 4.9 + Credit2 = can't bootup"):
> Anyway, we should have some multi-socket boxes on OSSTest, AFAICR.
I think we do, but I haven't got a systematic way of answering that
question other than by manually eyeballing the spec sheets.
If there were something easy to look for in the dmesg output (say), I
could probably grep historical logs.
Ian.
On 01/12/2017 01:27 PM, Ian Jackson wrote:
> Dario Faggioli writes ("Re: [Xen-devel] Xen 4.8 + Linux 4.9 + Credit2 = can't bootup"):
>> Anyway, we should have some multi-socket boxes on OSSTest, AFAICR.
>
> I think we do, but I haven't got a systematic way of answering that
> question other than by manually eyeballing the spec sheets.
>
> If there were something easy to look for in the dmesg output (say), I
> could probably grep historical logs.

[root@ovs104 ~]# xl dmesg | grep Scrubbing
(XEN) Scrubbing Free RAM on 2 nodes using 16 CPUs
[root@ovs104 ~]#

or

[root@ovs104 ~]# xl info | grep nr_nodes
nr_nodes              : 2
[root@ovs104 ~]#

may be useful.

BTW, when I said that the problem this thread was started with required a
multi-socket system, I should have also said that dom0 needs to span
nodes (or so I think).

-boris
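Ian's "something easy to grep" could, as a rough sketch, be mechanized along these lines. This is illustrative only (not an existing osstest helper); the function name and file paths are made up, and the sample file stands in for a captured `xl info` log like the one Boris shows above.

```shell
# Hypothetical helper: decide from a saved "xl info" capture whether a
# host is multi-socket, by reading the nr_nodes line.
is_multi_node() {
    # $1: file containing "xl info" output
    nodes=$(awk '/^nr_nodes/ {print $NF}' "$1")
    [ "${nodes:-1}" -gt 1 ]
}

# Example against an inline sample matching Boris' output:
printf 'nr_nodes              : 2\n' > /tmp/xlinfo.sample
if is_multi_node /tmp/xlinfo.sample; then
    echo "multi-node host"
else
    echo "single-node host"
fi
```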
On Thu, 2017-01-12 at 11:22 -0500, Boris Ostrovsky wrote:
> On 01/12/2017 07:50 AM, Dario Faggioli wrote:
> > I don't think we do that any longer, and that may be part of the
> > reason why we missed this one?
>
> I believe you needed to be on a multi-socket system to catch this bug.
> That's why, for example, my tests missed it --- the boxes that I use
> are all single-node.
>
Which will happen in OSSTest in most cases, as we don't usually use
dom0_max_vcpus, AFAICR.

But I think the point here is really what Ian was asking. I.e., leaving
aside the specific characteristics of this very issue, do you (and
Juergen and Konrad) think it would be useful to have OSSTest smoke test
an upstream-ish kernel again?

I think it is you, Xen-Linux people, who may find it helpful, as it may
save you some local testing, etc. But this is only true if you think it
could fit into your workflow to check its output and deal with it, which
is something only you can tell. :-)

If the answer is no, then never mind, and sorry for the noise. :-D

Regards,
Dario
On 01/13/2017 03:31 AM, Dario Faggioli wrote:
> But I think the point here is really what Ian was asking. I.e., leaving
> aside the specific characteristics of this very issue, do you (and
> Juergen and Konrad) think it would be useful to have OSSTest smoke test
> an upstream-ish kernel again?
>
> I think it is you, Xen-Linux people, who may find it helpful, as it may
> save you some local testing, etc. But this is only true if you think it
> could fit into your workflow to check its output and deal with it,
> which is something only you can tell. :-)
>
> If the answer is no, then never mind, and sorry for the noise. :-D

I can give it a try, although I have practically no experience with
OSSTest. Is there a way to subscribe to notifications for those tests?

-boris
Boris Ostrovsky writes ("Re: [Xen-devel] Xen 4.8 + Linux 4.9 + Credit2 = can't bootup"):
> I can give it a try, although I have practically no experience with
> OSSTest. Is there a way to subscribe to notifications for those tests?

osstest's reports are posted to xen-devel. To give you an example of what
they look like, I have pasted below the top of the last test report of
linux-linus (including the interesting mail headers).

If you find the mail filtering of xen-devel awkward, I can arrange to
send these to a specific list, or something.

As I say, the Linux kernel branches that we are discussing are currently
disabled, so the mail below is from last April and all of the logs it
refers to will have expired.

If someone is volunteering to look at the output, I can re-enable them.
Previously we were testing:

 linux-linus
 git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git

 linux-next
 git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git

 linux-mingo-tip-master
 git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git

(In each case that list contains the osstest `branch' name, which occurs
in the Subject line etc., and the URL. In each of the above cases we test
`master'.)

Ian.
Message-ID: <osstest-92668-mainreport@xen.org>
X-Osstest-Failures:
 linux-linus:build-i386-rumpuserxen:xen-build:fail:regression
 linux-linus:test-amd64-amd64-xl:guest-localmigrate:fail:regression
 linux-linus:build-amd64-rumpuserxen:xen-build:fail:regression
 linux-linus:test-amd64-amd64-xl-credit2:guest-localmigrate:fail:regression
 linux-linus:test-amd64-amd64-xl-xsm:guest-localmigrate:fail:regression
 linux-linus:test-amd64-amd64-xl-multivcpu:guest-localmigrate:fail:regression
 linux-linus:test-amd64-i386-xl-xsm:guest-localmigrate:fail:regression
 linux-linus:test-amd64-i386-xl:guest-localmigrate:fail:regression
 linux-linus:test-amd64-amd64-libvirt-xsm:guest-stop:fail:regression
 linux-linus:test-amd64-amd64-pair:guest-migrate/dst_host/src_host:fail:regression
 linux-linus:test-amd64-i386-pair:guest-migrate/dst_host/src_host:fail:regression
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-localmigrate/x10:fail:regression
 linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:guest-localmigrate/x10:fail:regression
 linux-linus:build-armhf-pvops:kernel-build:fail:regression
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 linux-linus:test-amd64-amd64-libvirt-pair:guest-migrate/src_host/dst_host:fail:allowable
 linux-linus:test-amd64-i386-libvirt-pair:guest-migrate/dst_host/src_host:fail:allowable
 linux-linus:test-amd64-i386-libvirt-xsm:guest-saverestore.2:fail:allowable
 linux-linus:test-amd64-amd64-libvirt:guest-saverestore.2:fail:allowable
 linux-linus:test-amd64-i386-libvirt:guest-saverestore.2:fail:allowable
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:allowable
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:allowable
 linux-linus:test-amd64-amd64-rumpuserxen-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-rumpuserxen-i386:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-libvirt-qcow2:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-pvh-intel:guest-saverestore:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-pvh-amd:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:nonblocking
X-Osstest-Versions-This: linux=02da2d72174c61988eb4456b53f405e3ebdebce4
X-Osstest-Versions-That: linux=45820c294fe1b1a9df495d57f40585ef2d069a39
Content-Type: text/plain
From: osstest service owner <osstest-admin@xenproject.org>
To: <xen-devel@lists.xensource.com>, <osstest-admin@xenproject.org>
Subject: [linux-linus test] 92668: regressions - FAIL
Date: Mon, 25 Apr 2016 18:39:44 +0000

flight 92668 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/92668/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-rumpuserxen        6 xen-build                fail REGR. vs. 59254
 test-amd64-amd64-xl          15 guest-localmigrate       fail REGR. vs. 59254
 build-amd64-rumpuserxen       6 xen-build                fail REGR. vs. 59254
 test-amd64-amd64-xl-credit2  15 guest-localmigrate       fail REGR. vs. 59254
 test-amd64-amd64-xl-xsm     15 guest-localmigrate        fail REGR. vs. 59254
 test-amd64-amd64-xl-multivcpu 15 guest-localmigrate      fail REGR. vs. 59254
 test-amd64-i386-xl-xsm       15 guest-localmigrate       fail REGR. vs. 59254
 test-amd64-i386-xl           15 guest-localmigrate       fail REGR. vs. 59254
 test-amd64-amd64-libvirt-xsm 16 guest-stop               fail REGR. vs. 59254
 test-amd64-amd64-pair        22 guest-migrate/dst_host/src_host fail REGR. vs. 59254
 test-amd64-i386-pair         22 guest-migrate/dst_host/src_host fail REGR. vs. 59254
 test-amd64-amd64-xl-qemut-win7-amd64 15 guest-localmigrate/x10 fail REGR. vs. 59254
 test-amd64-amd64-xl-qemut-debianhvm-amd64 15 guest-localmigrate/x10 fail REGR. vs. 59254
 build-armhf-pvops             5 kernel-build             fail REGR. vs. 59254
....
On 01/13/2017 11:27 AM, Ian Jackson wrote:
>
> osstest's reports are posted to xen-devel. To give you an example of
> what they look like, I have pasted below the top of the last test
> report of linux-linus (including the interesting mail headers).
>
> If you find the mail filtering of xen-devel awkward, I can arrange to
> send these to a specific list, or something.

Hopefully I should be able to filter on X-Osstest-Failures including
"linux-linus:".

>
> As I say, the Linux kernel branches that we are discussing are
> currently disabled, so the mail below is from last April and all of
> the logs it refers to will have expired.
>
> If someone is volunteering to look at the output, I can re-enable
> them. Previously we were testing:
>
>  linux-linus
>  git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
>
>  linux-next
>  git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
>
>  linux-mingo-tip-master
>  git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git

I think linux-linus would be the most interesting to test. Can you enable
just that one? I'll see if I can parse the results.

Thanks
-boris
Boris Ostrovsky writes ("Re: [Xen-devel] Xen 4.8 + Linux 4.9 + Credit2 = can't bootup"):
> Hopefully I should be able to filter on X-Osstest-Failures including
> "linux-linus:".

Indeed.

> > linux-linus
> > git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
...
> I think linux-linus would be the most interesting to test. Can you
> enable just that one? I'll see if I can parse the results.

Sure.

Ian.
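For what it's worth, the X-Osstest-Failures entries in Ian's sample report follow a `branch:job:step:status:class` layout, so the parsing Boris mentions could be sketched roughly like this. This is an illustration only, not an osstest tool; the function name is made up, and the field layout is inferred from the sample above.

```shell
# Sketch: reduce X-Osstest-Failures entries (one per line on stdin) to
# the real regressions for a single branch, printing "job step".
# Assumed field layout: branch:job:step:status:class
filter_regressions() {
    awk -F: -v b="$1" '$1 == b && $5 == "regression" { print $2, $3 }'
}

# Example against two entries taken from the sample report:
printf '%s\n' \
  'linux-linus:build-armhf-pvops:kernel-build:fail:regression' \
  'linux-linus:test-amd64-amd64-libvirt:guest-saverestore.2:fail:allowable' \
  | filter_regressions linux-linus
# prints: build-armhf-pvops kernel-build
```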
On Fri, Jan 13, 2017 at 04:27:52PM +0000, Ian Jackson wrote:
> Boris Ostrovsky writes ("Re: [Xen-devel] Xen 4.8 + Linux 4.9 + Credit2 = can't bootup"):
> > I can give it a try, although I have practically no experience with
> > OSSTest. Is there a way to subscribe to notifications for those tests?
>
> osstest's reports are posted to xen-devel. [...]
>
> If someone is volunteering to look at the output, I can re-enable
> them. Previously we were testing:
>
>  linux-linus
>  git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git

Shouldn't this be:

git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

(instead of linux-2.6.git?)

Roger.
Roger Pau Monné writes ("Re: [Xen-devel] Xen 4.8 + Linux 4.9 + Credit2 = can't bootup"):
> Shouldn't this be:
> git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Err, yes, thanks.

Ian.
Boris Ostrovsky writes ("Re: [Xen-devel] Xen 4.8 + Linux 4.9 + Credit2 = can't bootup"):
> Hopefully I should be able to filter on X-Osstest-Failures including
> "linux-linus:".

The first report from the restarted tests is below:

osstest service owner writes ("[linux-linus test] 104237: regressions - FAIL"):
> flight 104237 linux-linus real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/104237/
>
> Regressions :-(
>
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-xl-qemut-win7-amd64  6 xen-boot     fail REGR. vs. 59254
>  test-amd64-i386-qemuu-rhel6hvm-amd    6 xen-boot     fail REGR. vs. 59254
>  test-amd64-i386-xl-qemuu-win7-amd64   6 xen-boot     fail REGR. vs. 59254
>  test-amd64-amd64-xl-qemuu-win7-amd64  6 xen-boot     fail REGR. vs. 59254
>  build-armhf-pvops                     5 kernel-build fail REGR. vs. 59254

So:
1. it doesn't boot on some (but not all) hosts
2. it doesn't build on armhf (32-bit ARM)

Ian.
diff --git a/tools/firmware/ovmf-makefile b/tools/firmware/ovmf-makefile
index 2838744..f58016f 100644
--- a/tools/firmware/ovmf-makefile
+++ b/tools/firmware/ovmf-makefile
@@ -4,7 +4,7 @@ include $(XEN_ROOT)/tools/Rules.mk
 ifeq ($(debug),y)
 TARGET=DEBUG
 else
-TARGET=RELEASE
+TARGET=DEBUG
 endif
 
 # OVMF build system has its own parallel building support.