Message ID: 1243251652-27617-2-git-send-email-ehrhardt@linux.vnet.ibm.com (mailing list archive)
State: New, archived
On Mon, May 25, 2009 at 01:40:49PM +0200, ehrhardt@linux.vnet.ibm.com wrote:
> From: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
>
> To ensure vcpu's come out of guest context in certain cases this patch adds a
> s390 specific way to kick them out of guest context. Currently it kicks them
> out to rerun the vcpu_run path in the s390 code, but the mechanism itself is
> expandable and with a new flag we could also add e.g. kicks to userspace etc.
>
> Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>

"For now I added the optimization to skip kicking vcpus out of guest
that had the request bit already set to the s390 specific loop (sent as
v2 in a few minutes).

We might one day consider standardizing some generic kickout levels e.g.
kick to "inner loop", "arch vcpu run", "generic vcpu run", "userspace",
... whatever levels fit *all* our use cases. And then let that kicks be
implemented in an kvm_arch_* backend as it might be very different how
they behave on different architectures."

That would be ideal, yes. Two things make_all_requests handles:

1) It disables preemption with get_cpu(), so it can reliably check for
cpu id. Somehow you don't need that for s390 when kicking multiple
vcpus?

2) It uses smp_call_function_many(wait=1), which guarantees that by the
time make_all_requests returns no vcpus will be using stale data (the
remote vcpus will have executed ack_flush).

If smp_call_function_many is hidden behind kvm_arch_kick_vcpus, can you
make use of make_all_requests for S390 (without the smp_call_function
performance impact you mentioned)?

For x86 we can further optimize make_all_requests by checking REQ_KICK,
and kvm_arch_kick_vcpus would be a good place for that.

And the kickout levels idea you mentioned can come later, as an
optimization?
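For readers following the thread: the generic mechanism Marcelo refers to looks roughly like the sketch below, a simplified paraphrase of make_all_cpus_request() from virt/kvm/kvm_main.c of that era rather than a verbatim copy; allocation-failure handling and some corner cases are elided.

/*
 * Simplified sketch of the generic helper Marcelo describes; the
 * real function is make_all_cpus_request() in virt/kvm/kvm_main.c,
 * which degrades more gracefully when the cpumask allocation fails.
 */
static void ack_flush(void *_completed)
{
        /* nothing to do; running this IPI is the acknowledgement */
}

static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
{
        int i, cpu, me;
        cpumask_var_t cpus;
        bool called = true;
        struct kvm_vcpu *vcpu;

        if (!zalloc_cpumask_var(&cpus, GFP_ATOMIC))
                return false;

        me = get_cpu();         /* disable preemption, pin our cpu id */
        for (i = 0; i < KVM_MAX_VCPUS; ++i) {
                vcpu = kvm->vcpus[i];
                if (!vcpu)
                        continue;
                if (test_and_set_bit(req, &vcpu->requests))
                        continue;       /* request was already pending */
                cpu = vcpu->cpu;
                if (cpu != -1 && cpu != me)
                        cpumask_set_cpu(cpu, cpus);
        }
        if (!cpumask_empty(cpus))
                /* wait=1: returns only after remote vcpus ran ack_flush */
                smp_call_function_many(cpus, ack_flush, NULL, 1);
        else
                called = false;
        put_cpu();
        free_cpumask_var(cpus);
        return called;
}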
Marcelo Tosatti wrote:
> On Mon, May 25, 2009 at 01:40:49PM +0200, ehrhardt@linux.vnet.ibm.com wrote:
>> From: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
>>
>> To ensure vcpu's come out of guest context in certain cases this patch adds a
>> s390 specific way to kick them out of guest context. Currently it kicks them
>> out to rerun the vcpu_run path in the s390 code, but the mechanism itself is
>> expandable and with a new flag we could also add e.g. kicks to userspace etc.
>>
>> Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
>
> "For now I added the optimization to skip kicking vcpus out of guest
> that had the request bit already set to the s390 specific loop (sent as
> v2 in a few minutes).
>
> We might one day consider standardizing some generic kickout levels e.g.
> kick to "inner loop", "arch vcpu run", "generic vcpu run", "userspace",
> ... whatever levels fit *all* our use cases. And then let that kicks be
> implemented in an kvm_arch_* backend as it might be very different how
> they behave on different architectures."
>
> That would be ideal, yes. Two things make_all_requests handles:
>
> 1) It disables preemption with get_cpu(), so it can reliably check for
> cpu id. Somehow you don't need that for s390 when kicking multiple
> vcpus?
>
I don't even need the cpuid as make_all_requests does; I just set a
special bit in the vcpu arch part and the vcpu will "come out to me
(host)".
Fortunately the kick is rare and fast, so I can set it unconditionally
(it is even ok to set it if the vcpu is not in guest state). That spares
us the vcpu lock or detailed checks, which would end up where we started
(no guarantee that vcpus come out of guest context while trying to
acquire all vcpu locks).

> 2) It uses smp_call_function_many(wait=1), which guarantees that by the
> time make_all_requests returns no vcpus will be using stale data (the
> remote vcpus will have executed ack_flush).
>
Yes, this is really a part my s390 implementation doesn't fulfill yet.
Currently, on return, vcpus might still use the old memslot information.
As mentioned before, letting all interrupts come "too far" out of the
hot loop would be a performance issue, therefore I think I will need
some request&confirm mechanism. I'm not sure yet, but maybe it could be
as easy as this pseudo code example:

# in make_all_requests
# remember we have slots_lock for write here, and the reentry that
# updates the vcpu specific data acquires slots_lock for read
loop vcpus
    set_bit in vcpu requests
    kick vcpu                  # arch function
endloop

loop vcpus
    wait until the request bit has disappeared
    # as the reentry path uses test_and_clear, it will disappear
endloop

That would be an implicit synchronization and should work; as I wrote
before, setting memslots while the guest is running is rare, if ever
existent, for s390. On x86 smp_call_many could then work without the
wait flag being set.

But I assume that this synchronization approach is slower, as it
serializes all vcpus on reentry (they wait for slots_lock to get
dropped). Therefore I wanted to ask how often setting memslots at
runtime will occur on x86, and would this approach be acceptable?

If it is too adventurous for now, I can implement it that way in the
s390 code and we split off the long term discussion (synchronization +
generic kickout levels + who knows what comes up).

> If smp_call_function_many is hidden behind kvm_arch_kick_vcpus, can you
> make use of make_all_requests for S390 (without the smp_call_function
> performance impact you mentioned)?
>
In combination with the request&confirm mechanism described above it
should work, if smp_call_function and all the cpuid gathering that
belongs to it is hidden behind kvm_arch_kick_vcpus.

> For x86 we can further optimize make_all_requests by checking REQ_KICK,
> and kvm_arch_kick_vcpus would be a good place for that.
>
> And the kickout levels idea you mentioned can come later, as an
> optimization?
>
Yes, I agree that splitting that out as a later optimization is a good
idea.
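A concrete C rendering of the pseudo code in the mail above may make the proposed request&confirm synchronization easier to follow. This is a hypothetical sketch: kvm_arch_kick_vcpu() is an assumed per-arch hook, not an existing function, and the wait loop relies on the vcpu reentry path consuming the bit with test_and_clear_bit() while holding slots_lock for read.

/*
 * Hypothetical sketch of the request-and-confirm idea above; the
 * caller holds slots_lock for write. The vcpu reentry path takes
 * slots_lock for read and consumes the bit with test_and_clear_bit(),
 * so the bit vanishing means the vcpu picked up the new memslot data.
 */
static void make_all_requests_and_wait(struct kvm *kvm, unsigned int req)
{
        int i;
        struct kvm_vcpu *vcpu;

        for (i = 0; i < KVM_MAX_VCPUS; ++i) {
                vcpu = kvm->vcpus[i];
                if (!vcpu)
                        continue;
                set_bit(req, &vcpu->requests);
                kvm_arch_kick_vcpu(vcpu);       /* arch-specific guest exit */
        }

        for (i = 0; i < KVM_MAX_VCPUS; ++i) {
                vcpu = kvm->vcpus[i];
                if (!vcpu)
                        continue;
                while (test_bit(req, &vcpu->requests))
                        cpu_relax();    /* spin until the reentry path clears it */
        }
}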
On Tue, May 26, 2009 at 10:02:59AM +0200, Christian Ehrhardt wrote:
> Marcelo Tosatti wrote:
>> [...]
>> That would be ideal, yes. Two things make_all_requests handles:
>>
>> 1) It disables preemption with get_cpu(), so it can reliably check for
>> cpu id. Somehow you don't need that for s390 when kicking multiple
>> vcpus?
>>
> I don't even need the cpuid as make_all_requests does; I just set a
> special bit in the vcpu arch part and the vcpu will "come out to me
> (host)".
> Fortunately the kick is rare and fast, so I can set it unconditionally
> (it is even ok to set it if the vcpu is not in guest state). That
> spares us the vcpu lock or detailed checks, which would end up where we
> started (no guarantee that vcpus come out of guest context while trying
> to acquire all vcpu locks).

Let me see if I get this right: you kick the vcpus out of guest mode by
using a special bit in the vcpu arch part. OK.

What I don't understand is this:
"would end up where we started (no guarantee that vcpus come out of
guest context while trying to acquire all vcpu locks)"

So you _need_ a mechanism to kick all vcpus out of guest mode?

>> 2) It uses smp_call_function_many(wait=1), which guarantees that by the
>> time make_all_requests returns no vcpus will be using stale data (the
>> remote vcpus will have executed ack_flush).
>>
> Yes, this is really a part my s390 implementation doesn't fulfill yet.
> Currently, on return, vcpus might still use the old memslot information.
> As mentioned before, letting all interrupts come "too far" out of the
> hot loop would be a performance issue, therefore I think I will need
> some request&confirm mechanism. I'm not sure yet, but maybe it could be
> as easy as this pseudo code example:
>
> # in make_all_requests
> # remember we have slots_lock for write here, and the reentry that
> # updates the vcpu specific data acquires slots_lock for read
> loop vcpus
>     set_bit in vcpu requests
>     kick vcpu                  # arch function
> endloop
>
> loop vcpus
>     wait until the request bit has disappeared
>     # as the reentry path uses test_and_clear, it will disappear
> endloop
>
> That would be an implicit synchronization and should work; as I wrote
> before, setting memslots while the guest is running is rare, if ever
> existent, for s390. On x86 smp_call_many could then work without the
> wait flag being set.

I see, yes.

> But I assume that this synchronization approach is slower, as it
> serializes all vcpus on reentry (they wait for slots_lock to get
> dropped). Therefore I wanted to ask how often setting memslots at
> runtime will occur on x86, and would this approach be acceptable?

For x86 we need slots_lock for two things:

1) to protect the memslot structures from changing (very rare), ie:
kvm_set_memory.

2) to protect updates to the dirty bitmap (operations on behalf of
guest) which take slots_lock for read versus updates to that dirty
bitmap (an ioctl that retrieves what pages have been dirtied in the
memslots, and clears the dirtiness info).

All you need for S390 is 1), AFAICS.

For 1), we can drop the slots_lock usage, but instead create an
explicit synchronization point, where all vcpus are forced to (say
kvm_vcpu_block) "paused" state. qemu-kvm has such a notion.

Same language?

> If it is too adventurous for now, I can implement it that way in the
> s390 code and we split off the long term discussion (synchronization +
> generic kickout levels + who knows what comes up).
>> If smp_call_function_many is hidden behind kvm_arch_kick_vcpus, can you
>> make use of make_all_requests for S390 (without the smp_call_function
>> performance impact you mentioned)?
>>
> In combination with the request&confirm mechanism described above it
> should work, if smp_call_function and all the cpuid gathering that
> belongs to it is hidden behind kvm_arch_kick_vcpus.
>> For x86 we can further optimize make_all_requests by checking REQ_KICK,
>> and kvm_arch_kick_vcpus would be a good place for that.
>>
>> And the kickout levels idea you mentioned can come later, as an
>> optimization?
> Yes, I agree that splitting that out as a later optimization is a good
> idea.
>
> --
> Grüsse / regards, Christian Ehrhardt
> IBM Linux Technology Center, Open Virtualization
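Schematically, the two slots_lock roles Marcelo lists map onto the usual rw-semaphore pattern, roughly as below. This is illustrative only, not lifted from the tree:

/* 1) Rare writer: a kvm_set_memory-style memslot update. */
static void memslot_update(struct kvm *kvm)
{
        down_write(&kvm->slots_lock);
        /* ... install the new memslot array ... */
        up_write(&kvm->slots_lock);
}

/* 2) Hot readers: vcpu work done on behalf of the guest, which
 * must not observe a half-updated memslot array. */
static void guest_side_access(struct kvm *kvm)
{
        down_read(&kvm->slots_lock);
        /* ... e.g. gfn_to_hva(), mark_page_dirty() ... */
        up_read(&kvm->slots_lock);
}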
Marcelo Tosatti wrote:
> On Tue, May 26, 2009 at 10:02:59AM +0200, Christian Ehrhardt wrote:
>> Marcelo Tosatti wrote:
>>> [...]
>>> 1) It disables preemption with get_cpu(), so it can reliably check
>>> for cpu id. Somehow you don't need that for s390 when kicking
>>> multiple vcpus?
>>>
>> I don't even need the cpuid as make_all_requests does; I just set a
>> special bit in the vcpu arch part and the vcpu will "come out to me
>> (host)".
>> Fortunately the kick is rare and fast, so I can set it unconditionally
>> (it is even ok to set it if the vcpu is not in guest state). That
>> spares us the vcpu lock or detailed checks, which would end up where
>> we started (no guarantee that vcpus come out of guest context while
>> trying to acquire all vcpu locks).
>
> Let me see if I get this right: you kick the vcpus out of guest mode by
> using a special bit in the vcpu arch part. OK.
>
> What I don't understand is this:
> "would end up where we started (no guarantee that vcpus come out of
> guest context while trying to acquire all vcpu locks)"
>
Initially the mechanism looped over vcpus and simply acquired the vcpu
lock, then updated the vcpu->arch info directly. Avi mentioned that we
have no guarantee if/when a vcpu will come out of guest context to free
a lock it currently holds, and suggested the mechanism x86 uses via
setting vcpu->request and kicking the vcpu. That's the reason behind
"end up where we (the discussion) started": if we needed the vcpu lock
again, we would be back at the beginning of the discussion.

> So you _need_ a mechanism to kick all vcpus out of guest mode?
>
I have a mechanism to kick a vcpu, and I use it. Due to the fact that
smp_call_* doesn't work as a kick for us, the kick is an arch specific
function. I hope that clarifies this part :-)

>>> 2) It uses smp_call_function_many(wait=1), which guarantees that by
>>> the time make_all_requests returns no vcpus will be using stale data
>>> (the remote vcpus will have executed ack_flush).
>>>
>> Yes, this is really a part my s390 implementation doesn't fulfill yet.
>> [...]
>> That would be an implicit synchronization and should work; as I wrote
>> before, setting memslots while the guest is running is rare, if ever
>> existent, for s390. On x86 smp_call_many could then work without the
>> wait flag being set.
>
> I see, yes.
>
>> But I assume that this synchronization approach is slower, as it
>> serializes all vcpus on reentry (they wait for slots_lock to get
>> dropped). Therefore I wanted to ask how often setting memslots at
>> runtime will occur on x86, and would this approach be acceptable?
>
> For x86 we need slots_lock for two things:
>
> 1) to protect the memslot structures from changing (very rare), ie:
> kvm_set_memory.
>
> 2) to protect updates to the dirty bitmap (operations on behalf of
> guest) which take slots_lock for read versus updates to that dirty
> bitmap (an ioctl that retrieves what pages have been dirtied in the
> memslots, and clears the dirtiness info).
>
> All you need for S390 is 1), AFAICS.
>
Correct.

> For 1), we can drop the slots_lock usage, but instead create an
> explicit synchronization point, where all vcpus are forced to (say
> kvm_vcpu_block) "paused" state. qemu-kvm has such a notion.
>
> Same language?
>
Yes, I think I got your point :-)
But I think by keeping slots_lock we already have our synchronization
point and don't need an explicit one that adds extra code and maybe
locks. As I mentioned above, it should already synchronize implicitly.

When I looked at it once more yesterday, I realized that kvm_set_memory
is not performance critical anyway (i.e. it does not have to be the
fastest ioctl on earth), so we could be one step smarter and, instead of
serializing all vcpus among each other, just let set_memory handle them
one by one.

In case I lost you again due to my obviously confusing mainframe
language this week, you might want to look at my next patch submission,
where I implement that in the s390 arch code as an example. I'll put you
on cc, and in that new code we might find an implicit language
synchronization for us :-)
[...]
Christian Ehrhardt wrote:
>> So you _need_ a mechanism to kick all vcpus out of guest mode?
>>
> I have a mechanism to kick a vcpu, and I use it. Due to the fact that
> smp_call_* doesn't work as a kick for us, the kick is an arch specific
> function. I hope that clarifies this part :-)
>
You could still use make_all_vcpus_request(), just change
smp_call_function_many() to your own kicker.
Avi Kivity wrote:
> Christian Ehrhardt wrote:
>>> So you _need_ a mechanism to kick all vcpus out of guest mode?
>>>
>> I have a mechanism to kick a vcpu, and I use it. Due to the fact that
>> smp_call_* doesn't work as a kick for us, the kick is an arch specific
>> function. I hope that clarifies this part :-)
>>
> You could still use make_all_vcpus_request(), just change
> smp_call_function_many() to your own kicker.
>
Yes, and I like this idea for further unification, but I don't want it
mixed too much into the patches in discussion at the moment. On one hand
I have some problems giving my arch specific kick a behaviour like
"return when the guest WAS kicked", and on the other hand I would e.g.
also need to streamline the check in make_all_vcpus_request for which
cpu is running, because vcpu->cpu stays -1 all the time on s390 (never
used). Therefore I would unify things step by step, and this way let
single tasks come off my task pile here :-)
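What Avi suggests could look something like the following sketch: keep the generic request loop and push only the kick behind an arch hook. kvm_arch_kick_vcpus() is a hypothetical name for the refactoring point under discussion; on x86 it would wrap the smp_call_function_many(wait=1) call, while an s390 variant could inject its reload bit and ignore vcpu->cpu (which stays -1 there) entirely.

/*
 * Hypothetical split of make_all_vcpus_request(): the request
 * bookkeeping stays generic, only the kick is per-arch.
 * kvm_arch_kick_vcpus() does not exist in the tree; it is the
 * refactoring point this mail proposes.
 */
static bool make_all_vcpus_request(struct kvm *kvm, unsigned int req)
{
        int i;
        bool pending = false;
        struct kvm_vcpu *vcpu;

        for (i = 0; i < KVM_MAX_VCPUS; ++i) {
                vcpu = kvm->vcpus[i];
                if (!vcpu)
                        continue;
                if (!test_and_set_bit(req, &vcpu->requests))
                        pending = true; /* at least one vcpu must act on it */
        }
        if (pending)
                /* x86: build a cpumask and smp_call_function_many(wait=1);
                 * s390: inject the reload bit per vcpu, without ever
                 * consulting vcpu->cpu. */
                kvm_arch_kick_vcpus(kvm);
        return pending;
}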
Index: kvm/arch/s390/kvm/intercept.c
===================================================================
--- kvm.orig/arch/s390/kvm/intercept.c
+++ kvm/arch/s390/kvm/intercept.c
@@ -128,7 +128,7 @@ static int handle_noop(struct kvm_vcpu *
 
 static int handle_stop(struct kvm_vcpu *vcpu)
 {
-        int rc;
+        int rc = 0;
 
         vcpu->stat.exit_stop_request++;
         atomic_clear_mask(CPUSTAT_RUNNING, &vcpu->arch.sie_block->cpuflags);
@@ -141,12 +141,18 @@ static int handle_stop(struct kvm_vcpu *
                 rc = -ENOTSUPP;
         }
 
+        if (vcpu->arch.local_int.action_bits & ACTION_RELOADVCPU_ON_STOP) {
+                vcpu->arch.local_int.action_bits &= ~ACTION_RELOADVCPU_ON_STOP;
+                rc = SIE_INTERCEPT_RERUNVCPU;
+                vcpu->run->exit_reason = KVM_EXIT_INTR;
+        }
+
         if (vcpu->arch.local_int.action_bits & ACTION_STOP_ON_STOP) {
                 vcpu->arch.local_int.action_bits &= ~ACTION_STOP_ON_STOP;
                 VCPU_EVENT(vcpu, 3, "%s", "cpu stopped");
                 rc = -ENOTSUPP;
-        } else
-                rc = 0;
+        }
+
         spin_unlock_bh(&vcpu->arch.local_int.lock);
         return rc;
 }
Index: kvm/arch/s390/kvm/kvm-s390.c
===================================================================
--- kvm.orig/arch/s390/kvm/kvm-s390.c
+++ kvm/arch/s390/kvm/kvm-s390.c
@@ -487,6 +487,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_v
 
         vcpu_load(vcpu);
 
+rerun_vcpu:
         /* verify, that memory has been registered */
         if (!vcpu->kvm->arch.guest_memsize) {
                 vcpu_put(vcpu);
@@ -506,6 +507,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_v
                 vcpu->arch.sie_block->gpsw.addr = kvm_run->s390_sieic.addr;
                 break;
         case KVM_EXIT_UNKNOWN:
+        case KVM_EXIT_INTR:
         case KVM_EXIT_S390_RESET:
                 break;
         default:
@@ -519,6 +521,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_v
                 rc = kvm_handle_sie_intercept(vcpu);
         } while (!signal_pending(current) && !rc);
 
+        if (rc == SIE_INTERCEPT_RERUNVCPU)
+                goto rerun_vcpu;
+
         if (signal_pending(current) && !rc)
                 rc = -EINTR;
 
Index: kvm/arch/s390/kvm/kvm-s390.h
===================================================================
--- kvm.orig/arch/s390/kvm/kvm-s390.h
+++ kvm/arch/s390/kvm/kvm-s390.h
@@ -20,6 +20,8 @@
 
 typedef int (*intercept_handler_t)(struct kvm_vcpu *vcpu);
 
+/* negativ values are error codes, positive values for internal conditions */
+#define SIE_INTERCEPT_RERUNVCPU (1<<0)
 int kvm_handle_sie_intercept(struct kvm_vcpu *vcpu);
 
 #define VM_EVENT(d_kvm, d_loglevel, d_string, d_args...)\
@@ -50,6 +52,7 @@ int kvm_s390_inject_vm(struct kvm *kvm,
 int kvm_s390_inject_vcpu(struct kvm_vcpu *vcpu,
                 struct kvm_s390_interrupt *s390int);
 int kvm_s390_inject_program_int(struct kvm_vcpu *vcpu, u16 code);
+int kvm_s390_inject_sigp_stop(struct kvm_vcpu *vcpu, int action);
 
 /* implemented in priv.c */
 int kvm_s390_handle_b2(struct kvm_vcpu *vcpu);
Index: kvm/arch/s390/include/asm/kvm_host.h
===================================================================
--- kvm.orig/arch/s390/include/asm/kvm_host.h
+++ kvm/arch/s390/include/asm/kvm_host.h
@@ -180,8 +180,9 @@ struct kvm_s390_interrupt_info {
 };
 
 /* for local_interrupt.action_flags */
-#define ACTION_STORE_ON_STOP 1
-#define ACTION_STOP_ON_STOP 2
+#define ACTION_STORE_ON_STOP (1<<0)
+#define ACTION_STOP_ON_STOP (1<<1)
+#define ACTION_RELOADVCPU_ON_STOP (1<<2)
 
 struct kvm_s390_local_interrupt {
         spinlock_t lock;
Index: kvm/arch/s390/kvm/sigp.c
===================================================================
--- kvm.orig/arch/s390/kvm/sigp.c
+++ kvm/arch/s390/kvm/sigp.c
@@ -1,7 +1,7 @@
 /*
  * sigp.c - handlinge interprocessor communication
  *
- * Copyright IBM Corp. 2008
+ * Copyright IBM Corp. 2008,2009
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License (version 2 only)
@@ -9,6 +9,7 @@
  *
  *    Author(s): Carsten Otte <cotte@de.ibm.com>
  *               Christian Borntraeger <borntraeger@de.ibm.com>
+ *               Christian Ehrhardt <ehrhardt@de.ibm.com>
  */
 
 #include <linux/kvm.h>
@@ -107,46 +108,57 @@ unlock:
         return rc;
 }
 
-static int __sigp_stop(struct kvm_vcpu *vcpu, u16 cpu_addr, int store)
+static int __inject_sigp_stop(struct kvm_s390_local_interrupt *li, int action)
 {
-        struct kvm_s390_float_interrupt *fi = &vcpu->kvm->arch.float_int;
-        struct kvm_s390_local_interrupt *li;
         struct kvm_s390_interrupt_info *inti;
-        int rc;
-
-        if (cpu_addr >= KVM_MAX_VCPUS)
-                return 3; /* not operational */
 
         inti = kzalloc(sizeof(*inti), GFP_KERNEL);
         if (!inti)
                 return -ENOMEM;
-
         inti->type = KVM_S390_SIGP_STOP;
 
-        spin_lock(&fi->lock);
-        li = fi->local_int[cpu_addr];
-        if (li == NULL) {
-                rc = 3; /* not operational */
-                kfree(inti);
-                goto unlock;
-        }
         spin_lock_bh(&li->lock);
         list_add_tail(&inti->list, &li->list);
         atomic_set(&li->active, 1);
         atomic_set_mask(CPUSTAT_STOP_INT, li->cpuflags);
-        if (store)
-                li->action_bits |= ACTION_STORE_ON_STOP;
-        li->action_bits |= ACTION_STOP_ON_STOP;
+        li->action_bits |= action;
         if (waitqueue_active(&li->wq))
                 wake_up_interruptible(&li->wq);
         spin_unlock_bh(&li->lock);
-        rc = 0; /* order accepted */
+
+        return 0; /* order accepted */
+}
+
+static int __sigp_stop(struct kvm_vcpu *vcpu, u16 cpu_addr, int action)
+{
+        struct kvm_s390_float_interrupt *fi = &vcpu->kvm->arch.float_int;
+        struct kvm_s390_local_interrupt *li;
+        int rc;
+
+        if (cpu_addr >= KVM_MAX_VCPUS)
+                return 3; /* not operational */
+
+        spin_lock(&fi->lock);
+        li = fi->local_int[cpu_addr];
+        if (li == NULL) {
+                rc = 3; /* not operational */
+                goto unlock;
+        }
+
+        rc = __inject_sigp_stop(li, action);
+
 unlock:
         spin_unlock(&fi->lock);
         VCPU_EVENT(vcpu, 4, "sent sigp stop to cpu %x", cpu_addr);
         return rc;
 }
 
+int kvm_s390_inject_sigp_stop(struct kvm_vcpu *vcpu, int action)
+{
+        struct kvm_s390_local_interrupt *li = &vcpu->arch.local_int;
+        return __inject_sigp_stop(li, action);
+}
+
 static int __sigp_set_arch(struct kvm_vcpu *vcpu, u32 parameter)
 {
         int rc;
@@ -261,11 +273,11 @@ int kvm_s390_handle_sigp(struct kvm_vcpu
                 break;
         case SIGP_STOP:
                 vcpu->stat.instruction_sigp_stop++;
-                rc = __sigp_stop(vcpu, cpu_addr, 0);
+                rc = __sigp_stop(vcpu, cpu_addr, ACTION_STOP_ON_STOP);
                 break;
         case SIGP_STOP_STORE_STATUS:
                 vcpu->stat.instruction_sigp_stop++;
-                rc = __sigp_stop(vcpu, cpu_addr, 1);
+                rc = __sigp_stop(vcpu, cpu_addr, ACTION_STORE_ON_STOP);
                 break;
         case SIGP_SET_ARCH:
                 vcpu->stat.instruction_sigp_arch++;
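To trace the control flow the patch introduces: a host-side caller queues a stop interrupt with the new action bit via kvm_s390_inject_sigp_stop(); handle_stop() consumes ACTION_RELOADVCPU_ON_STOP and returns SIE_INTERCEPT_RERUNVCPU, which makes kvm_arch_vcpu_ioctl_run() jump back to the rerun_vcpu label and revalidate state before reentering the guest. A minimal caller might look like this (the wrapper name is illustrative, not part of the patch):

/* Illustrative wrapper; only kvm_s390_inject_sigp_stop() and
 * ACTION_RELOADVCPU_ON_STOP come from the patch itself. */
static int kick_vcpu_for_reload(struct kvm_vcpu *vcpu)
{
        /* Queues a stop interrupt with the reload action; handle_stop()
         * turns this into SIE_INTERCEPT_RERUNVCPU, and
         * kvm_arch_vcpu_ioctl_run() loops back to rerun_vcpu:. */
        return kvm_s390_inject_sigp_stop(vcpu, ACTION_RELOADVCPU_ON_STOP);
}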