Message ID | 20240424033449.168398-3-xin.wang2@amd.com (mailing list archive)
---|---
State | Superseded
Series | Remaining patches for dynamic node programming using overlay dtbo
Hi Henry, On 24/04/2024 04:34, Henry Wang wrote: > From: Vikram Garhwal <fnu.vikram@xilinx.com> > > Enable interrupt assign/remove for running VMs in CONFIG_OVERLAY_DTB. > > Currently, irq_route and mapping is only allowed at the domain creation. Adding > exception for CONFIG_OVERLAY_DTB. AFAICT, this is mostly reverting b8577547236f ("xen/arm: Restrict when a physical IRQ can be routed/removed from/to a domain"). > > Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com> > Signed-off-by: Henry Wang <xin.wang2@amd.com> > --- > xen/arch/arm/gic.c | 4 ++++ > 1 file changed, 4 insertions(+) > > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c > index 44c40e86de..a775f886ed 100644 > --- a/xen/arch/arm/gic.c > +++ b/xen/arch/arm/gic.c > @@ -140,8 +140,10 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq, > * back to the physical IRQ. To prevent get unsync, restrict the > * routing to when the Domain is been created. > */ The above comment explains why the check was added. But the commit message doesn't explain why this can be disregarded for your use-case. Looking at the history, I don't think you can simply remove the checks. Regardless of that... > +#ifndef CONFIG_OVERLAY_DTB ... I am against such #ifdef. A distro may want to have OVERLAY_DTB enabled, yet the user will not use it. Instead, you want to remove the check once the code can properly handle routing an IRQ after the domain is created or ... > if ( d->creation_finished ) > return -EBUSY; > +#endif > > ret = vgic_connect_hw_irq(d, NULL, virq, desc, true); > if ( ret ) > @@ -171,8 +173,10 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq, > * Removing an interrupt while the domain is running may have > * undesirable effect on the vGIC emulation. > */ > +#ifndef CONFIG_OVERLAY_DTB > if ( !d->is_dying ) > return -EBUSY; > +#endif ... removed before the domain is destroyed. > > desc->handler->shutdown(desc); > Cheers,
Hi Julien, On 4/24/2024 8:58 PM, Julien Grall wrote: > Hi Henry, > > On 24/04/2024 04:34, Henry Wang wrote: >> From: Vikram Garhwal <fnu.vikram@xilinx.com> >> >> Enable interrupt assign/remove for running VMs in CONFIG_OVERLAY_DTB. >> >> Currently, irq_route and mapping is only allowed at the domain >> creation. Adding >> exception for CONFIG_OVERLAY_DTB. > > AFAICT, this is mostly reverting b8577547236f ("xen/arm: Restrict when > a physical IRQ can be routed/removed from/to a domain"). > >> >> Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com> >> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com> >> Signed-off-by: Henry Wang <xin.wang2@amd.com> >> --- >> xen/arch/arm/gic.c | 4 ++++ >> 1 file changed, 4 insertions(+) >> >> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c >> index 44c40e86de..a775f886ed 100644 >> --- a/xen/arch/arm/gic.c >> +++ b/xen/arch/arm/gic.c >> @@ -140,8 +140,10 @@ int gic_route_irq_to_guest(struct domain *d, >> unsigned int virq, >> * back to the physical IRQ. To prevent get unsync, restrict the >> * routing to when the Domain is been created. >> */ > > The above comment explains why the check was added. But the commit > message doesn't explain why this can be disregarded for your use-case. > > Looking at the history, I don't think you can simply remove the checks. > > Regardless that... > >> +#ifndef CONFIG_OVERLAY_DTB > > ... I am against such #ifdef. A distros may want to have OVERLAY_DTB > enabled, yet the user will not use it. > > Instead, you want to remove the check once the code can properly > handle routing an IRQ the domain is created or ... > >> if ( d->creation_finished ) >> return -EBUSY; >> +#endif >> ret = vgic_connect_hw_irq(d, NULL, virq, desc, true); >> if ( ret ) >> @@ -171,8 +173,10 @@ int gic_remove_irq_from_guest(struct domain *d, >> unsigned int virq, >> * Removing an interrupt while the domain is running may have >> * undesirable effect on the vGIC emulation. >> */ >> +#ifndef CONFIG_OVERLAY_DTB >> if ( !d->is_dying ) >> return -EBUSY; >> +#endif > > ... removed before they domain is destroyed. Thanks for your feeedback. After checking the b8577547236f commit message I think I now understand your point. Do you have any suggestion about how can I properly add the support to route/remove the IRQ to running domains? Thanks. Kind regards, Henry > >> desc->handler->shutdown(desc); > > Cheers, >
Hi, On 25/04/2024 08:06, Henry Wang wrote: > Hi Julien, > > On 4/24/2024 8:58 PM, Julien Grall wrote: >> Hi Henry, >> >> On 24/04/2024 04:34, Henry Wang wrote: >>> From: Vikram Garhwal <fnu.vikram@xilinx.com> >>> >>> Enable interrupt assign/remove for running VMs in CONFIG_OVERLAY_DTB. >>> >>> Currently, irq_route and mapping is only allowed at the domain >>> creation. Adding >>> exception for CONFIG_OVERLAY_DTB. >> >> AFAICT, this is mostly reverting b8577547236f ("xen/arm: Restrict when >> a physical IRQ can be routed/removed from/to a domain"). >> >>> >>> Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com> >>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com> >>> Signed-off-by: Henry Wang <xin.wang2@amd.com> >>> --- >>> xen/arch/arm/gic.c | 4 ++++ >>> 1 file changed, 4 insertions(+) >>> >>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c >>> index 44c40e86de..a775f886ed 100644 >>> --- a/xen/arch/arm/gic.c >>> +++ b/xen/arch/arm/gic.c >>> @@ -140,8 +140,10 @@ int gic_route_irq_to_guest(struct domain *d, >>> unsigned int virq, >>> * back to the physical IRQ. To prevent get unsync, restrict the >>> * routing to when the Domain is been created. >>> */ >> >> The above comment explains why the check was added. But the commit >> message doesn't explain why this can be disregarded for your use-case. >> >> Looking at the history, I don't think you can simply remove the checks. >> >> Regardless that... >> >>> +#ifndef CONFIG_OVERLAY_DTB >> >> ... I am against such #ifdef. A distros may want to have OVERLAY_DTB >> enabled, yet the user will not use it. >> >> Instead, you want to remove the check once the code can properly >> handle routing an IRQ the domain is created or ... >> >>> if ( d->creation_finished ) >>> return -EBUSY; >>> +#endif >>> ret = vgic_connect_hw_irq(d, NULL, virq, desc, true); >>> if ( ret ) >>> @@ -171,8 +173,10 @@ int gic_remove_irq_from_guest(struct domain *d, >>> unsigned int virq, >>> * Removing an interrupt while the domain is running may have >>> * undesirable effect on the vGIC emulation. >>> */ >>> +#ifndef CONFIG_OVERLAY_DTB >>> if ( !d->is_dying ) >>> return -EBUSY; >>> +#endif >> >> ... removed before they domain is destroyed. > > Thanks for your feeedback. After checking the b8577547236f commit > message I think I now understand your point. Do you have any suggestion > about how can I properly add the support to route/remove the IRQ to > running domains? Thanks. I haven't really look at that code in quite a while. I think we need to make sure that the virtual and physical IRQ state matches at the time we do the routing. I am undecided on whether we want to simply prevent the action to happen or try to reset the state. There is also the question of what to do if the guest is enabling the vIRQ before it is routed. Overall, someone needs to spend some time reading the code and then make a proposal (this could be just documentation if we believe it is safe to do). Both the current vGIC and the new one may need an update. Cheers,
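To make the "virtual and physical IRQ state must match at routing time" point concrete, here is a minimal sketch of a routing-time sanity check for the current (old) vGIC, built only from the helpers and status flags that appear later in this thread (irq_to_pending(), _IRQ_INPROGRESS/_IRQ_DISABLED, GIC_IRQ_GUEST_*). The function name and placement are illustrative, not an agreed design; whether a failed check should simply return -EBUSY or instead trigger a state reset is exactly the open question raised above.

```
/*
 * Illustrative sketch only: refuse to route a pIRQ to a running domain
 * unless both the physical and the virtual side are quiescent.
 * Helper and flag names are the ones quoted elsewhere in this thread.
 */
static int irq_state_sane_for_routing(struct domain *d, unsigned int virq,
                                      struct irq_desc *desc)
{
    struct pending_irq *p = irq_to_pending(d->vcpu[0], virq);

    /* Physical side: the IRQ must be disabled and not in progress. */
    if ( test_bit(_IRQ_INPROGRESS, &desc->status) ||
         !test_bit(_IRQ_DISABLED, &desc->status) )
        return -EBUSY;

    /* Virtual side: the vIRQ must not be enabled, live in an LR, or active. */
    if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) ||
         test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) ||
         test_bit(GIC_IRQ_GUEST_ACTIVE, &p->status) )
        return -EBUSY;

    return 0;
}
```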
Hi Julien, Sorry for the late reply, On 4/25/2024 10:28 PM, Julien Grall wrote: > Hi, > > On 25/04/2024 08:06, Henry Wang wrote: >> Hi Julien, >> >> On 4/24/2024 8:58 PM, Julien Grall wrote: >>> Hi Henry, >>> >>> On 24/04/2024 04:34, Henry Wang wrote: >>>> From: Vikram Garhwal <fnu.vikram@xilinx.com> >>>> >>>> Enable interrupt assign/remove for running VMs in CONFIG_OVERLAY_DTB. >>>> >>>> Currently, irq_route and mapping is only allowed at the domain >>>> creation. Adding >>>> exception for CONFIG_OVERLAY_DTB. >>> >>> AFAICT, this is mostly reverting b8577547236f ("xen/arm: Restrict >>> when a physical IRQ can be routed/removed from/to a domain"). >>> >>>> >>>> Signed-off-by: Vikram Garhwal <fnu.vikram@xilinx.com> >>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com> >>>> Signed-off-by: Henry Wang <xin.wang2@amd.com> >>>> --- >>>> xen/arch/arm/gic.c | 4 ++++ >>>> 1 file changed, 4 insertions(+) >>>> >>>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c >>>> index 44c40e86de..a775f886ed 100644 >>>> --- a/xen/arch/arm/gic.c >>>> +++ b/xen/arch/arm/gic.c >>>> @@ -140,8 +140,10 @@ int gic_route_irq_to_guest(struct domain *d, >>>> unsigned int virq, >>>> * back to the physical IRQ. To prevent get unsync, restrict the >>>> * routing to when the Domain is been created. >>>> */ >>> >>> The above comment explains why the check was added. But the commit >>> message doesn't explain why this can be disregarded for your use-case. >>> >>> Looking at the history, I don't think you can simply remove the checks. >>> >>> Regardless that... >>> >>>> +#ifndef CONFIG_OVERLAY_DTB >>> >>> ... I am against such #ifdef. A distros may want to have OVERLAY_DTB >>> enabled, yet the user will not use it. >>> >>> Instead, you want to remove the check once the code can properly >>> handle routing an IRQ the domain is created or ... >>> >>>> if ( d->creation_finished ) >>>> return -EBUSY; >>>> +#endif >>>> ret = vgic_connect_hw_irq(d, NULL, virq, desc, true); >>>> if ( ret ) >>>> @@ -171,8 +173,10 @@ int gic_remove_irq_from_guest(struct domain >>>> *d, unsigned int virq, >>>> * Removing an interrupt while the domain is running may have >>>> * undesirable effect on the vGIC emulation. >>>> */ >>>> +#ifndef CONFIG_OVERLAY_DTB >>>> if ( !d->is_dying ) >>>> return -EBUSY; >>>> +#endif >>> >>> ... removed before they domain is destroyed. >> >> Thanks for your feeedback. After checking the b8577547236f commit >> message I think I now understand your point. Do you have any >> suggestion about how can I properly add the support to route/remove >> the IRQ to running domains? Thanks. I spent some time going through the GIC/vGIC code and had some discussions with Stefano and Stewart during the last couple of days, let me see if I can describe the use case properly now to continue the discussion: We have some use cases that requires assigning devices to domains after domain boot time. For example, suppose there is an FPGA on the board which can simulate a device, and the bitstream for the FPGA is provided and programmed after domain boot. So we need a way to assign the device to the running domain. This series tries to implement this use case by using device tree overlay - users can firstly add the overlay to Xen dtb, assign the device in the overlay to a domain by the xl command, then apply the overlay to Linux. > I haven't really look at that code in quite a while. I think we need > to make sure that the virtual and physical IRQ state matches at the > time we do the routing. 
> > I am undecided on whether we want to simply prevent the action to > happen or try to reset the state. > > There is also the question of what to do if the guest is enabling the > vIRQ before it is routed. Sorry for bothering, would you mind elaborating a bit more about the two cases that you mentioned above? Commit b8577547236f ("xen/arm: Restrict when a physical IRQ can be routed/removed from/to a domain") only said there will be undesirable effects, so I am not sure if I understand the concerns raised above and the consequences of these two use cases. I am probably wrong, I think when we add the overlay, we are probably fine as the interrupt is not being used before. Also since we only load the device driver after the IRQ is routed to the guest, I am not sure the guest can enable the vIRQ before it is routed. Kind regards, Henry > Overall, someone needs to spend some time reading the code and then > make a proposal (this could be just documentation if we believe it is > safe to do). Both the current vGIC and the new one may need an update. > > Cheers, >
Hi Henry, On 30/04/2024 04:50, Henry Wang wrote: > On 4/25/2024 10:28 PM, Julien Grall wrote: >>> Thanks for your feeedback. After checking the b8577547236f commit >>> message I think I now understand your point. Do you have any >>> suggestion about how can I properly add the support to route/remove >>> the IRQ to running domains? Thanks. > > I spent some time going through the GIC/vGIC code and had some > discussions with Stefano and Stewart during the last couple of days, let > me see if I can describe the use case properly now to continue the > discussion: > > We have some use cases that requires assigning devices to domains after > domain boot time. For example, suppose there is an FPGA on the board > which can simulate a device, and the bitstream for the FPGA is provided > and programmed after domain boot. So we need a way to assign the device > to the running domain. This series tries to implement this use case by > using device tree overlay - users can firstly add the overlay to Xen > dtb, assign the device in the overlay to a domain by the xl command, > then apply the overlay to Linux. Thanks for the description! This helps to understand your goal :). > >> I haven't really look at that code in quite a while. I think we need >> to make sure that the virtual and physical IRQ state matches at the >> time we do the routing. >> >> I am undecided on whether we want to simply prevent the action to >> happen or try to reset the state. >> >> There is also the question of what to do if the guest is enabling the >> vIRQ before it is routed. > > Sorry for bothering, would you mind elaborating a bit more about the two > cases that you mentioned above? Commit b8577547236f ("xen/arm: Restrict > when a physical IRQ can be routed/removed from/to a domain") only said > there will be undesirable effects, so I am not sure if I understand the > concerns raised above and the consequences of these two use cases. I will try to explain them below after I answer the rest. > I am > probably wrong, I think when we add the overlay, we are probably fine as > the interrupt is not being used before. What if the DT overlay is unloaded and then reloaded? Wouldn't the same interrupt be re-used? As a more generic case, this could also be a new bitstream for the FPGA. But even if the interrupt is brand new every time for the DT overlay, you are effectively relaxing the check for every user (such as XEN_DOMCTL_bind_pt_irq). So the interrupt re-use case needs to be taken into account. > Also since we only load the > device driver after the IRQ is routed to the guest, This is what a well-behave guest will do. However, we need to think what will happen if a guest misbehaves. I am not concerned about a guest only impacting itself, I am more concerned about the case where the rest of the system is impacted. > I am not sure the > guest can enable the vIRQ before it is routed. Xen allows the guest to enable a vIRQ even if there is no pIRQ assigned. Thanksfully, it looks like the vgic_connect_hw_irq(), in both the current and new vGIC, will return an error if we are trying to route a pIRQ to an already enabled vIRQ. But we need to investigate all the possible scenarios to make sure that any inconsistencies between the physical state and virtual state (including the LRs) will not result to bigger problem. The one that comes to my mind is: The physical interrupt is de-assigned from the guest before it was EOIed. In this case, the interrupt will still be in the LR with the HW bit set. 
This would allow the guest to EOI the interrupt even if it is routed to someone else. It is unclear what would be the impact on the other guest. Cheers,
Hi Julien, On 5/1/2024 4:13 AM, Julien Grall wrote: > Hi Henry, > > On 30/04/2024 04:50, Henry Wang wrote: >> On 4/25/2024 10:28 PM, Julien Grall wrote: >>>> Thanks for your feeedback. After checking the b8577547236f commit >>>> message I think I now understand your point. Do you have any >>>> suggestion about how can I properly add the support to route/remove >>>> the IRQ to running domains? Thanks. >> >> I spent some time going through the GIC/vGIC code and had some >> discussions with Stefano and Stewart during the last couple of days, >> let me see if I can describe the use case properly now to continue >> the discussion: >> >> We have some use cases that requires assigning devices to domains >> after domain boot time. For example, suppose there is an FPGA on the >> board which can simulate a device, and the bitstream for the FPGA is >> provided and programmed after domain boot. So we need a way to assign >> the device to the running domain. This series tries to implement this >> use case by using device tree overlay - users can firstly add the >> overlay to Xen dtb, assign the device in the overlay to a domain by >> the xl command, then apply the overlay to Linux. > > Thanks for the description! This helps to understand your goal :). Thank you very much for spending your time on discussing this and provide these valuable comments! >> >>> I haven't really look at that code in quite a while. I think we need >>> to make sure that the virtual and physical IRQ state matches at the >>> time we do the routing. >>> >>> I am undecided on whether we want to simply prevent the action to >>> happen or try to reset the state. >>> >>> There is also the question of what to do if the guest is enabling >>> the vIRQ before it is routed. >> >> Sorry for bothering, would you mind elaborating a bit more about the >> two cases that you mentioned above? Commit b8577547236f ("xen/arm: >> Restrict when a physical IRQ can be routed/removed from/to a domain") >> only said there will be undesirable effects, so I am not sure if I >> understand the concerns raised above and the consequences of these >> two use cases. > > I will try to explain them below after I answer the rest. > >> I am probably wrong, I think when we add the overlay, we are probably >> fine as the interrupt is not being used before. > > What if the DT overlay is unloaded and then reloaded? Wouldn't the > same interrupt be re-used? As a more generic case, this could also be > a new bitstream for the FPGA. > > But even if the interrupt is brand new every time for the DT overlay, > you are effectively relaxing the check for every user (such as > XEN_DOMCTL_bind_pt_irq). So the interrupt re-use case needs to be > taken into account. I agree. I think IIUC, with your explanation here and below, could we simplify the problem to how to properly handle the removal of the IRQ from a running guest, if we always properly remove and clean up the information when remove the IRQ from the guest? In this way, the IRQ can always be viewed as a brand new one when we add it back. Then the only corner case that we need to take care of would be... >> Also since we only load the device driver after the IRQ is routed to >> the guest, > > This is what a well-behave guest will do. However, we need to think > what will happen if a guest misbehaves. I am not concerned about a > guest only impacting itself, I am more concerned about the case where > the rest of the system is impacted. > >> I am not sure the guest can enable the vIRQ before it is routed. 
> > Xen allows the guest to enable a vIRQ even if there is no pIRQ > assigned. Thanksfully, it looks like the vgic_connect_hw_irq(), in > both the current and new vGIC, will return an error if we are trying > to route a pIRQ to an already enabled vIRQ. > > But we need to investigate all the possible scenarios to make sure > that any inconsistencies between the physical state and virtual state > (including the LRs) will not result to bigger problem. > > The one that comes to my mind is: The physical interrupt is > de-assigned from the guest before it was EOIed. In this case, the > interrupt will still be in the LR with the HW bit set. This would > allow the guest to EOI the interrupt even if it is routed to someone > else. It is unclear what would be the impact on the other guest. ...same as this case, i.e. test_bit(_IRQ_INPROGRESS, &desc->status) || !test_bit(_IRQ_DISABLED, &desc->status)) when we try to remove the IRQ from a running domain. we have 3 possible states which can be read from LR for this case : active, pending, pending and active. - I don't think we can do anything about the active state, so we should return -EBUSY and reject the whole operation of removing the IRQ from running guest, and user can always retry this operation. - For the pending (and active) case, can we clear the LR and point the LR for the pending_irq to invalid? Kind regards, Henry > > Cheers, >
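For reference, the three cases listed above correspond to the 2-bit State field of a GICv3 list register (ICH_LR<n>_EL2[63:62] in the Arm GIC architecture specification). The constants below come from that spec; the helper and its return convention are purely illustrative — Xen reads LRs through gic_hw_ops and struct gic_lr rather than raw 64-bit values — and the "reject active" policy is only the proposal above, which the thread later refines.

```
#include <errno.h>
#include <stdint.h>

/* ICH_LR<n>_EL2.State encoding, GICv3 architecture spec (illustrative). */
#define ICH_LR_STATE_SHIFT    62
#define ICH_LR_STATE_MASK     0x3ULL
#define ICH_LR_STATE_INVALID  0x0ULL
#define ICH_LR_STATE_PENDING  0x1ULL
#define ICH_LR_STATE_ACTIVE   0x2ULL
#define ICH_LR_STATE_PEND_ACT 0x3ULL

/* Classify a raw LR value for the removal path sketched above. */
static inline int lr_removal_action(uint64_t lr)
{
    switch ( (lr >> ICH_LR_STATE_SHIFT) & ICH_LR_STATE_MASK )
    {
    case ICH_LR_STATE_INVALID:
        return 0;        /* nothing in flight, nothing to clean up */
    case ICH_LR_STATE_PENDING:
        return 1;        /* proposal above: clear the LR and the vGIC state */
    case ICH_LR_STATE_ACTIVE:
    case ICH_LR_STATE_PEND_ACT:
    default:
        return -EBUSY;   /* proposal above: reject and let the user retry */
    }
}
```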
Hi Henry, On 06/05/2024 09:32, Henry Wang wrote: > On 5/1/2024 4:13 AM, Julien Grall wrote: >> Hi Henry, >> >> On 30/04/2024 04:50, Henry Wang wrote: >>> On 4/25/2024 10:28 PM, Julien Grall wrote: >>>>> Thanks for your feeedback. After checking the b8577547236f commit >>>>> message I think I now understand your point. Do you have any >>>>> suggestion about how can I properly add the support to route/remove >>>>> the IRQ to running domains? Thanks. >>> >>> I spent some time going through the GIC/vGIC code and had some >>> discussions with Stefano and Stewart during the last couple of days, >>> let me see if I can describe the use case properly now to continue >>> the discussion: >>> >>> We have some use cases that requires assigning devices to domains >>> after domain boot time. For example, suppose there is an FPGA on the >>> board which can simulate a device, and the bitstream for the FPGA is >>> provided and programmed after domain boot. So we need a way to assign >>> the device to the running domain. This series tries to implement this >>> use case by using device tree overlay - users can firstly add the >>> overlay to Xen dtb, assign the device in the overlay to a domain by >>> the xl command, then apply the overlay to Linux. >> >> Thanks for the description! This helps to understand your goal :). > > Thank you very much for spending your time on discussing this and > provide these valuable comments! > >>> >>>> I haven't really look at that code in quite a while. I think we need >>>> to make sure that the virtual and physical IRQ state matches at the >>>> time we do the routing. >>>> >>>> I am undecided on whether we want to simply prevent the action to >>>> happen or try to reset the state. >>>> >>>> There is also the question of what to do if the guest is enabling >>>> the vIRQ before it is routed. >>> >>> Sorry for bothering, would you mind elaborating a bit more about the >>> two cases that you mentioned above? Commit b8577547236f ("xen/arm: >>> Restrict when a physical IRQ can be routed/removed from/to a domain") >>> only said there will be undesirable effects, so I am not sure if I >>> understand the concerns raised above and the consequences of these >>> two use cases. >> >> I will try to explain them below after I answer the rest. >> >>> I am probably wrong, I think when we add the overlay, we are probably >>> fine as the interrupt is not being used before. >> >> What if the DT overlay is unloaded and then reloaded? Wouldn't the >> same interrupt be re-used? As a more generic case, this could also be >> a new bitstream for the FPGA. >> >> But even if the interrupt is brand new every time for the DT overlay, >> you are effectively relaxing the check for every user (such as >> XEN_DOMCTL_bind_pt_irq). So the interrupt re-use case needs to be >> taken into account. > > I agree. I think IIUC, with your explanation here and below, could we > simplify the problem to how to properly handle the removal of the IRQ > from a running guest, if we always properly remove and clean up the > information when remove the IRQ from the guest? In this way, the IRQ can > always be viewed as a brand new one when we add it back. If we can make sure the virtual IRQ and physical IRQ is cleaned then yes. > Then the only > corner case that we need to take care of would be... Can you clarify whether you say the "only corner case" because you looked at the code? Or is it just because I mentioned only one? 
> >>> Also since we only load the device driver after the IRQ is routed to >>> the guest, >> >> This is what a well-behave guest will do. However, we need to think >> what will happen if a guest misbehaves. I am not concerned about a >> guest only impacting itself, I am more concerned about the case where >> the rest of the system is impacted. >> >>> I am not sure the guest can enable the vIRQ before it is routed. >> >> Xen allows the guest to enable a vIRQ even if there is no pIRQ >> assigned. Thanksfully, it looks like the vgic_connect_hw_irq(), in >> both the current and new vGIC, will return an error if we are trying >> to route a pIRQ to an already enabled vIRQ. >> >> But we need to investigate all the possible scenarios to make sure >> that any inconsistencies between the physical state and virtual state >> (including the LRs) will not result to bigger problem. >> >> The one that comes to my mind is: The physical interrupt is >> de-assigned from the guest before it was EOIed. In this case, the >> interrupt will still be in the LR with the HW bit set. This would >> allow the guest to EOI the interrupt even if it is routed to someone >> else. It is unclear what would be the impact on the other guest. > > ...same as this case, i.e. > test_bit(_IRQ_INPROGRESS, &desc->status) || !test_bit(_IRQ_DISABLED, > &desc->status)) when we try to remove the IRQ from a running domain. We already call ->shutdown() which will disable the IRQ. So don't we only need to take care of _IRQ_INPROGRESS? [...] > we have 3 possible states which can be read from LR for this case : > active, pending, pending and active. > - I don't think we can do anything about the active state, so we should > return -EBUSY and reject the whole operation of removing the IRQ from > running guest, and user can always retry this operation. This would mean a malicious/buggy guest would be able to prevent a device to be de-assigned. This is not a good idea in particular when the domain is dying. That said, I think you can handle this case. The LR has a bit to indicate whether the pIRQ needs to be EOIed. You can clear it and this would prevent the guest to touch the pIRQ. There might be other clean-up to do in the vGIC datastructure. Anyway, we don't have to handle removing an active IRQ when the domain is still running (although we do when the domain is destroying). But I think this would need to be solved before the feature is (security) supported. > - For the pending (and active) case, Shouldn't the pending and active case handled the same way as the active case? > can we clear the LR and point the > LR for the pending_irq to invalid? LRs can be cleared. You will need to find which vCPU was used for the injection and then pause it so the LR can be safely updated. There will also be some private state to clear. I don't know how easy it will be. However, we decided to not do anything for ICPENDR (which requires a similar behavior) as this was complex (?) to do with the existing vGIC. I vaguely remember we had some discussions on the ML. I didn't look for them though. Anyway, same as above, this could possibly handled later on. But this would probably need to be solved before the feature is (security supported). Cheers,
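A rough sketch of the "find the vCPU, pause it, then fix up its LR" pattern described above. vcpu_pause()/vcpu_unpause() are existing Xen primitives; the body is a list of the steps under discussion, not working vGIC code, and it assumes the injecting vCPU has already been identified (see the later discussion about recording it).

```
/*
 * Sketch only: serialise against the vCPU that currently owns the LR
 * before editing any virtual interrupt state for virq.
 */
static void clear_injected_virq(struct vcpu *v_inj, unsigned int virq)
{
    ASSERT(v_inj != current);   /* vcpu_pause() must not target the caller */

    vcpu_pause(v_inj);          /* vCPU is descheduled; its LR contents are
                                   saved in software and safe to edit */

    /*
     * 1. Locate the pending_irq/LR slot used for virq on v_inj.
     * 2. Clear the saved LR, or at least drop its HW bit so a late guest
     *    EOI cannot reach the physical distributor.
     * 3. Take the pending_irq off the inflight/lr_queue lists and reset
     *    its GIC_IRQ_GUEST_* status bits.
     * 4. Deactivate the physical IRQ on the guest's behalf if it was never
     *    EOIed (gic_hw_ops->deactivate_irq()).
     */

    vcpu_unpause(v_inj);
}
```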
Hi Julien, On 5/8/2024 5:54 AM, Julien Grall wrote: > Hi Henry, >>> What if the DT overlay is unloaded and then reloaded? Wouldn't the >>> same interrupt be re-used? As a more generic case, this could also >>> be a new bitstream for the FPGA. >>> >>> But even if the interrupt is brand new every time for the DT >>> overlay, you are effectively relaxing the check for every user (such >>> as XEN_DOMCTL_bind_pt_irq). So the interrupt re-use case needs to be >>> taken into account. >> >> I agree. I think IIUC, with your explanation here and below, could we >> simplify the problem to how to properly handle the removal of the IRQ >> from a running guest, if we always properly remove and clean up the >> information when remove the IRQ from the guest? In this way, the IRQ >> can always be viewed as a brand new one when we add it back. > > If we can make sure the virtual IRQ and physical IRQ is cleaned then yes. > >> Then the only corner case that we need to take care of would be... > > Can you clarify whether you say the "only corner case" because you > looked at the code? Or is it just because I mentioned only one? Well, I indeed checked the code and to my best knowledge the corner case that you pointed out would be the only one I can think of. >>> Xen allows the guest to enable a vIRQ even if there is no pIRQ >>> assigned. Thanksfully, it looks like the vgic_connect_hw_irq(), in >>> both the current and new vGIC, will return an error if we are trying >>> to route a pIRQ to an already enabled vIRQ. >>> >>> But we need to investigate all the possible scenarios to make sure >>> that any inconsistencies between the physical state and virtual >>> state (including the LRs) will not result to bigger problem. >>> >>> The one that comes to my mind is: The physical interrupt is >>> de-assigned from the guest before it was EOIed. In this case, the >>> interrupt will still be in the LR with the HW bit set. This would >>> allow the guest to EOI the interrupt even if it is routed to someone >>> else. It is unclear what would be the impact on the other guest. >> >> ...same as this case, i.e. >> test_bit(_IRQ_INPROGRESS, &desc->status) || !test_bit(_IRQ_DISABLED, >> &desc->status)) when we try to remove the IRQ from a running domain. > > We already call ->shutdown() which will disable the IRQ. So don't we > only need to take care of _IRQ_INPROGRESS? Yes you are correct. >> we have 3 possible states which can be read from LR for this case : >> active, pending, pending and active. >> - I don't think we can do anything about the active state, so we >> should return -EBUSY and reject the whole operation of removing the >> IRQ from running guest, and user can always retry this operation. > > This would mean a malicious/buggy guest would be able to prevent a > device to be de-assigned. This is not a good idea in particular when > the domain is dying. > > That said, I think you can handle this case. The LR has a bit to > indicate whether the pIRQ needs to be EOIed. You can clear it and this > would prevent the guest to touch the pIRQ. There might be other > clean-up to do in the vGIC datastructure. I probably misunderstood this sentence, do you mean the EOI bit in the pINTID field? I think this bit is only available when the HW bit of LR is 0, but in our case the HW is supposed to be 1 (as indicated as your previous comment). Would you mind clarifying a bit more? Thanks! > Anyway, we don't have to handle removing an active IRQ when the domain > is still running (although we do when the domain is destroying). 
But I > think this would need to be solved before the feature is (security) > supported. > >> - For the pending (and active) case, > > Shouldn't the pending and active case handled the same way as the > active case? Sorry, yes you are correct. Kind regards, Henry
Hi Henry, On 08/05/2024 08:49, Henry Wang wrote: > On 5/8/2024 5:54 AM, Julien Grall wrote: >> Hi Henry, >>>> What if the DT overlay is unloaded and then reloaded? Wouldn't the >>>> same interrupt be re-used? As a more generic case, this could also >>>> be a new bitstream for the FPGA. >>>> >>>> But even if the interrupt is brand new every time for the DT >>>> overlay, you are effectively relaxing the check for every user (such >>>> as XEN_DOMCTL_bind_pt_irq). So the interrupt re-use case needs to be >>>> taken into account. >>> >>> I agree. I think IIUC, with your explanation here and below, could we >>> simplify the problem to how to properly handle the removal of the IRQ >>> from a running guest, if we always properly remove and clean up the >>> information when remove the IRQ from the guest? In this way, the IRQ >>> can always be viewed as a brand new one when we add it back. >> >> If we can make sure the virtual IRQ and physical IRQ is cleaned then yes. >> >>> Then the only corner case that we need to take care of would be... >> >> Can you clarify whether you say the "only corner case" because you >> looked at the code? Or is it just because I mentioned only one? > > Well, I indeed checked the code and to my best knowledge the corner case > that you pointed out would be the only one I can think of. Ok :). I was just checking you had a look as well. [...] >>> we have 3 possible states which can be read from LR for this case : >>> active, pending, pending and active. >>> - I don't think we can do anything about the active state, so we >>> should return -EBUSY and reject the whole operation of removing the >>> IRQ from running guest, and user can always retry this operation. >> >> This would mean a malicious/buggy guest would be able to prevent a >> device to be de-assigned. This is not a good idea in particular when >> the domain is dying. >> >> That said, I think you can handle this case. The LR has a bit to >> indicate whether the pIRQ needs to be EOIed. You can clear it and this >> would prevent the guest to touch the pIRQ. There might be other >> clean-up to do in the vGIC datastructure. > > I probably misunderstood this sentence, do you mean the EOI bit in the > pINTID field? I think this bit is only available when the HW bit of LR > is 0, but in our case the HW is supposed to be 1 (as indicated as your > previous comment). Would you mind clarifying a bit more? Thanks! You are right, ICH_LR.HW will be 1 for physical IRQ routed to a guest. What I was trying to explain is this bit could be cleared (with ICH_LR.pINTD adjusted). Cheers,
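To make the suggestion concrete: with ICH_LR.HW set, the guest's deactivation (EOI) of the vINTID is forwarded to the physical distributor for the pINTID; clearing HW (and wiping pINTID) turns the entry into a purely virtual interrupt, so a late EOI can no longer touch a pIRQ that may meanwhile have been reassigned, and Xen deactivates the pIRQ itself at removal time. The bit positions below are from the GICv3 spec; the constants and helper are illustrative, not Xen's gic_hw_ops/struct gic_lr interface, and leaving the EOI bit at 0 (rather than requesting a maintenance interrupt) is one possible choice, not something settled in this thread.

```
#include <stdint.h>

/* ICH_LR<n>_EL2 fields, GICv3 architecture spec (illustrative constants). */
#define ICH_LR_HW           (1ULL << 61)        /* deactivation forwarded to pINTID */
#define ICH_LR_EOI          (1ULL << 41)        /* only meaningful when HW == 0 */
#define ICH_LR_PINTID_MASK  (0x1fffULL << 32)   /* pINTID, bits [44:32] */

/*
 * De-link a hardware-mapped LR from its physical interrupt: the entry stays
 * valid for the guest (State and vINTID untouched), but its EOI no longer
 * reaches the physical GIC. The caller must deactivate the pIRQ itself.
 */
static inline uint64_t ich_lr_delink_pirq(uint64_t lr)
{
    lr &= ~(ICH_LR_HW | ICH_LR_PINTID_MASK);   /* also clears the EOI bit */
    return lr;
}
```

In Xen itself this would presumably be applied through gic_hw_ops->write_lr() (or the saved LR copy of a paused vCPU) rather than on a raw register value.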
Hi Julien, On 5/9/2024 4:46 AM, Julien Grall wrote: > Hi Henry, > [...] > >>>> we have 3 possible states which can be read from LR for this case : >>>> active, pending, pending and active. >>>> - I don't think we can do anything about the active state, so we >>>> should return -EBUSY and reject the whole operation of removing the >>>> IRQ from running guest, and user can always retry this operation. >>> >>> This would mean a malicious/buggy guest would be able to prevent a >>> device to be de-assigned. This is not a good idea in particular when >>> the domain is dying. >>> >>> That said, I think you can handle this case. The LR has a bit to >>> indicate whether the pIRQ needs to be EOIed. You can clear it and >>> this would prevent the guest to touch the pIRQ. There might be other >>> clean-up to do in the vGIC datastructure. >> >> I probably misunderstood this sentence, do you mean the EOI bit in >> the pINTID field? I think this bit is only available when the HW bit >> of LR is 0, but in our case the HW is supposed to be 1 (as indicated >> as your previous comment). Would you mind clarifying a bit more? Thanks! > > You are right, ICH_LR.HW will be 1 for physical IRQ routed to a guest. > What I was trying to explain is this bit could be cleared (with > ICH_LR.pINTD adjusted). Thank you for all the discussions. Based on that, would below diff make sense to you? I did a test of the dynamic dtbo adding/removing with a ethernet device with this patch applied. Test steps are: (1) Use xl dt-overlay to add the ethernet device to Xen device tree and assign it to dom0. (2) Create a domU. (3) Use xl dt-overlay to de-assign the device from dom0 and assign it to domU. (4) Destroy the domU. The ethernet device is functional in the domain respectively when it is attached to a domain and I don't see errors when I destroy domU. But honestly I think the case we talked about is a quite unusual case so I am not sure if it was hit during my test. ``` diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c index a775f886ed..d3f9cd2299 100644 --- a/xen/arch/arm/gic.c +++ b/xen/arch/arm/gic.c @@ -135,16 +135,6 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq, ASSERT(virq < vgic_num_irqs(d)); ASSERT(!is_lpi(virq)); - /* - * When routing an IRQ to guest, the virtual state is not synced - * back to the physical IRQ. To prevent get unsync, restrict the - * routing to when the Domain is been created. - */ -#ifndef CONFIG_OVERLAY_DTB - if ( d->creation_finished ) - return -EBUSY; -#endif - ret = vgic_connect_hw_irq(d, NULL, virq, desc, true); if ( ret ) return ret; @@ -169,20 +159,40 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq, ASSERT(test_bit(_IRQ_GUEST, &desc->status)); ASSERT(!is_lpi(virq)); - /* - * Removing an interrupt while the domain is running may have - * undesirable effect on the vGIC emulation. 
- */ -#ifndef CONFIG_OVERLAY_DTB - if ( !d->is_dying ) - return -EBUSY; -#endif - desc->handler->shutdown(desc); /* EOI the IRQ if it has not been done by the guest */ if ( test_bit(_IRQ_INPROGRESS, &desc->status) ) + { + /* + * Handle the LR where the physical interrupt is de-assigned from the + * guest before it was EOIed + */ + struct vcpu *v_target = vgic_get_target_vcpu(d->vcpu[0], virq); + struct vgic_irq_rank *rank = vgic_rank_irq(v_target, virq); + struct pending_irq *p = irq_to_pending(v_target, virq); + unsigned long flags; + + spin_lock_irqsave(&v_target->arch.vgic.lock, flags); + /* LR allocated for the IRQ */ + if ( test_bit(GIC_IRQ_GUEST_ACTIVE, &p->status) && + test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) ) + { + gic_hw_ops->clear_lr(p->lr); + clear_bit(p->lr, &v_target->arch.lr_mask); + + clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status); + clear_bit(GIC_IRQ_GUEST_ACTIVE, &p->status); + p->lr = GIC_INVALID_LR; + } + spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags); + + vgic_lock_rank(v_target, rank, flags); + vgic_disable_irqs(v_target, (~rank->ienable) & rank->ienable, rank->index); + vgic_unlock_rank(v_target, rank, flags); + gic_hw_ops->deactivate_irq(desc); + } clear_bit(_IRQ_INPROGRESS, &desc->status); ret = vgic_connect_hw_irq(d, NULL, virq, desc, false); ``` Kind regards, Henry > > Cheers, >
Hi, On 09/05/2024 16:31, Henry Wang wrote: > On 5/9/2024 4:46 AM, Julien Grall wrote: >> Hi Henry, >> [...] >> >>>>> we have 3 possible states which can be read from LR for this case : >>>>> active, pending, pending and active. >>>>> - I don't think we can do anything about the active state, so we >>>>> should return -EBUSY and reject the whole operation of removing the >>>>> IRQ from running guest, and user can always retry this operation. >>>> >>>> This would mean a malicious/buggy guest would be able to prevent a >>>> device to be de-assigned. This is not a good idea in particular when >>>> the domain is dying. >>>> >>>> That said, I think you can handle this case. The LR has a bit to >>>> indicate whether the pIRQ needs to be EOIed. You can clear it and >>>> this would prevent the guest to touch the pIRQ. There might be other >>>> clean-up to do in the vGIC datastructure. >>> >>> I probably misunderstood this sentence, do you mean the EOI bit in >>> the pINTID field? I think this bit is only available when the HW bit >>> of LR is 0, but in our case the HW is supposed to be 1 (as indicated >>> as your previous comment). Would you mind clarifying a bit more? Thanks! >> >> You are right, ICH_LR.HW will be 1 for physical IRQ routed to a guest. >> What I was trying to explain is this bit could be cleared (with >> ICH_LR.pINTD adjusted). > > Thank you for all the discussions. Based on that, would below diff make > sense to you? I did a test of the dynamic dtbo adding/removing with a > ethernet device with this patch applied. Test steps are: > (1) Use xl dt-overlay to add the ethernet device to Xen device tree and > assign it to dom0. > (2) Create a domU. > (3) Use xl dt-overlay to de-assign the device from dom0 and assign it to > domU. > (4) Destroy the domU. > > The ethernet device is functional in the domain respectively when it is > attached to a domain and I don't see errors when I destroy domU. But > honestly I think the case we talked about is a quite unusual case so I > am not sure if it was hit during my test. Correct, they are not errors that will happen in normal operations. You will want to tweak Linux or use XTF to trigger them. We want to test the state active, pending and both together before the physical interrupt is routed and also removed. > > ``` > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c > index a775f886ed..d3f9cd2299 100644 > --- a/xen/arch/arm/gic.c > +++ b/xen/arch/arm/gic.c > @@ -135,16 +135,6 @@ int gic_route_irq_to_guest(struct domain *d, > unsigned int virq, > ASSERT(virq < vgic_num_irqs(d)); > ASSERT(!is_lpi(virq)); > > - /* > - * When routing an IRQ to guest, the virtual state is not synced > - * back to the physical IRQ. To prevent get unsync, restrict the > - * routing to when the Domain is been created. > - */ > -#ifndef CONFIG_OVERLAY_DTB > - if ( d->creation_finished ) > - return -EBUSY; > -#endif > - > ret = vgic_connect_hw_irq(d, NULL, virq, desc, true); This is checking if the interrupt is already enabled. Do we also need to check for active/pending? > if ( ret ) > return ret; > @@ -169,20 +159,40 @@ int gic_remove_irq_from_guest(struct domain *d, > unsigned int virq, > ASSERT(test_bit(_IRQ_GUEST, &desc->status)); > ASSERT(!is_lpi(virq)); > > - /* > - * Removing an interrupt while the domain is running may have > - * undesirable effect on the vGIC emulation. 
> - */ > -#ifndef CONFIG_OVERLAY_DTB > - if ( !d->is_dying ) > - return -EBUSY; > -#endif > - > desc->handler->shutdown(desc); > > /* EOI the IRQ if it has not been done by the guest */ > if ( test_bit(_IRQ_INPROGRESS, &desc->status) ) > + { I assume this is just a PoC state, but I just want to point out that this will not work with the new vGIC (some of the functions doesn't exist there). > + /* > + * Handle the LR where the physical interrupt is de-assigned > from the > + * guest before it was EOIed > + */ > + struct vcpu *v_target = vgic_get_target_vcpu(d->vcpu[0], virq); This will return a vCPU from the current affinity. This may not be where the interrupt was injected. From a brief look, I can't tell whether we have an easy way to know where the interrupt was injected (other than the pending_irq is in the list lr_queue/inflight) > + struct vgic_irq_rank *rank = vgic_rank_irq(v_target, virq); > + struct pending_irq *p = irq_to_pending(v_target, virq); > + unsigned long flags; > + > + spin_lock_irqsave(&v_target->arch.vgic.lock, flags); > + /* LR allocated for the IRQ */ > + if ( test_bit(GIC_IRQ_GUEST_ACTIVE, &p->status) && > + test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) ) > + { > + gic_hw_ops->clear_lr(p->lr); This works on the current pCPU LR. However, the vCPU may not run on this pCPU. > + clear_bit(p->lr, &v_target->arch.lr_mask); > + > + clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status); > + clear_bit(GIC_IRQ_GUEST_ACTIVE, &p->status); > + p->lr = GIC_INVALID_LR; You also need to remove 'p' from the various list (e.g. inflight/lr_queue). But as I wrote previously, I think it would be much easier if we simply clear the HW bit in the LR. So we don't have to try to mess up with the vGIC internal state which is quite complex. I think it could be done as part of vgic_connect_hw_irq(). But I haven't fully investigate it. > + } > + spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags); > + > + vgic_lock_rank(v_target, rank, flags); > + vgic_disable_irqs(v_target, (~rank->ienable) & rank->ienable, > rank->index); > + vgic_unlock_rank(v_target, rank, flags); Why do you need to call vgic_disable_irqs()? > + > gic_hw_ops->deactivate_irq(desc); > + } > clear_bit(_IRQ_INPROGRESS, &desc->status); > > ret = vgic_connect_hw_irq(d, NULL, virq, desc, false); > ``` > > Kind regards, > Henry > >> >> Cheers, >> > Cheers,
Hi Julien, On 5/10/2024 4:54 PM, Julien Grall wrote: > Hi, > > On 09/05/2024 16:31, Henry Wang wrote: >> On 5/9/2024 4:46 AM, Julien Grall wrote: >>> Hi Henry, >>> [...] >>> ``` >>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c >>> index a775f886ed..d3f9cd2299 100644 >>> --- a/xen/arch/arm/gic.c >>> +++ b/xen/arch/arm/gic.c >>> @@ -135,16 +135,6 @@ int gic_route_irq_to_guest(struct domain *d, >>> unsigned int virq, >>> ASSERT(virq < vgic_num_irqs(d)); >>> ASSERT(!is_lpi(virq)); >>> >>> - /* >>> - * When routing an IRQ to guest, the virtual state is not synced >>> - * back to the physical IRQ. To prevent get unsync, restrict the >>> - * routing to when the Domain is been created. >>> - */ >>> -#ifndef CONFIG_OVERLAY_DTB >>> - if ( d->creation_finished ) >>> - return -EBUSY; >>> -#endif >>> - >>> ret = vgic_connect_hw_irq(d, NULL, virq, desc, true); > > This is checking if the interrupt is already enabled. Do we also need > to check for active/pending? Thank you for raising this! I assume you meant this? @@ -444,7 +444,9 @@ int vgic_connect_hw_irq(struct domain *d, struct vcpu *v, unsigned int virq, { /* The VIRQ should not be already enabled by the guest */ if ( !p->desc && - !test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) ) + !test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) && + !test_bit(GIC_IRQ_GUEST_ACTIVE, &p->status) && + !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) ) p->desc = desc; else ret = -EBUSY; I think adding the check for active/pending check at the time of routing the IRQ makes sense, so I will add them (both for old and new vGIC implementation). >> if ( ret ) >> return ret; >> @@ -169,20 +159,40 @@ int gic_remove_irq_from_guest(struct domain *d, >> unsigned int virq, >> ASSERT(test_bit(_IRQ_GUEST, &desc->status)); >> ASSERT(!is_lpi(virq)); >> >> - /* >> - * Removing an interrupt while the domain is running may have >> - * undesirable effect on the vGIC emulation. >> - */ >> -#ifndef CONFIG_OVERLAY_DTB >> - if ( !d->is_dying ) >> - return -EBUSY; >> -#endif >> - >> desc->handler->shutdown(desc); >> >> /* EOI the IRQ if it has not been done by the guest */ >> if ( test_bit(_IRQ_INPROGRESS, &desc->status) ) >> + { > > I assume this is just a PoC state, but I just want to point out that > this will not work with the new vGIC (some of the functions doesn't > exist there). Thank you. Yes currently we can discuss for the old vGIC implementation. After we reach the final conclusion I will do the changes for both old and new vGIC. >> + /* >> + * Handle the LR where the physical interrupt is de-assigned >> from the >> + * guest before it was EOIed >> + */ >> + struct vcpu *v_target = vgic_get_target_vcpu(d->vcpu[0], virq); > > This will return a vCPU from the current affinity. This may not be > where the interrupt was injected. From a brief look, I can't tell > whether we have an easy way to know where the interrupt was injected > (other than the pending_irq is in the list lr_queue/inflight) I doubt if we need to handle more than this - I think if the pending_irq is not in the lr_queue/inflight list, it would not belong to the corner case we are talking about (?). >> + } >> + spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags); >> + >> + vgic_lock_rank(v_target, rank, flags); >> + vgic_disable_irqs(v_target, (~rank->ienable) & >> rank->ienable, rank->index); >> + vgic_unlock_rank(v_target, rank, flags); > > Why do you need to call vgic_disable_irqs()? I will drop this part. Kind regards, Henry
Hi Henry, On 11/05/2024 08:29, Henry Wang wrote: >>> + /* >>> + * Handle the LR where the physical interrupt is de-assigned >>> from the >>> + * guest before it was EOIed >>> + */ >>> + struct vcpu *v_target = vgic_get_target_vcpu(d->vcpu[0], virq); >> >> This will return a vCPU from the current affinity. This may not be >> where the interrupt was injected. From a brief look, I can't tell >> whether we have an easy way to know where the interrupt was injected >> (other than the pending_irq is in the list lr_queue/inflight) > > I doubt if we need to handle more than this - I think if the pending_irq > is not in the lr_queue/inflight list, it would not belong to the corner > case we are talking about (?). I didn't suggest we would need to handle the case where the pending_irq is not in any of the queues. I was pointing out that I think we don't directly store the vCPU ID where we injected the IRQ. Instead, the pending_irq is just in a list, so we will possibly need to store the vCPU ID for convenience. Cheers,
Hi Julien, On 5/11/2024 4:22 PM, Julien Grall wrote: > Hi Henry, > > On 11/05/2024 08:29, Henry Wang wrote: >>>> + /* >>>> + * Handle the LR where the physical interrupt is >>>> de-assigned from the >>>> + * guest before it was EOIed >>>> + */ >>>> + struct vcpu *v_target = vgic_get_target_vcpu(d->vcpu[0], >>>> virq); >>> >>> This will return a vCPU from the current affinity. This may not be >>> where the interrupt was injected. From a brief look, I can't tell >>> whether we have an easy way to know where the interrupt was injected >>> (other than the pending_irq is in the list lr_queue/inflight) >> >> I doubt if we need to handle more than this - I think if the >> pending_irq is not in the lr_queue/inflight list, it would not belong >> to the corner case we are talking about (?). > > I didn't suggest we would need to handle the case where the > pending_irq is not any of the queues. I was pointing out that I think > we don't directly store the vCPU ID where we injected the IRQ. > Instead, the pending_irq is just in list, so we will possibly need to > store the vCPU ID for convenience. Sorry for misunderstanding. Yeah you are definitely correct. Also thank you so much for the suggestion! Before seeing this suggestion, I was struggling in finding the correct vCPU by "for_each_vcpus" and comparison... but now I realized your suggestion is way more clever :) Kind regards, Henry
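A possible shape for "storing the vCPU ID for convenience", assuming it is acceptable to grow struct pending_irq by one byte. The field name and the two helpers are hypothetical; vgic_inject_irq() is the existing old-vGIC entry point where the recording would plausibly happen.

```
/* Hypothetical sketch, not existing Xen code. */
struct pending_irq {
    /* ... existing fields: status, desc, irq, lr, inflight, lr_queue ... */
    uint8_t inject_vcpu_id;       /* vCPU this vIRQ was last queued on */
};

/* Injection side (e.g. from vgic_inject_irq(), under the vgic lock): */
static void record_injection(struct pending_irq *p, const struct vcpu *v)
{
    p->inject_vcpu_id = v->vcpu_id;
}

/* Removal side: resolve the vCPU without guessing from the current affinity. */
static struct vcpu *injected_vcpu(struct domain *d, const struct pending_irq *p)
{
    return d->vcpu[p->inject_vcpu_id];
}
```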
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 44c40e86de..a775f886ed 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -140,8 +140,10 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
      * back to the physical IRQ. To prevent get unsync, restrict the
      * routing to when the Domain is been created.
      */
+#ifndef CONFIG_OVERLAY_DTB
     if ( d->creation_finished )
         return -EBUSY;
+#endif
 
     ret = vgic_connect_hw_irq(d, NULL, virq, desc, true);
     if ( ret )
@@ -171,8 +173,10 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
      * Removing an interrupt while the domain is running may have
      * undesirable effect on the vGIC emulation.
      */
+#ifndef CONFIG_OVERLAY_DTB
     if ( !d->is_dying )
         return -EBUSY;
+#endif
 
     desc->handler->shutdown(desc);