Message ID | 20220801211240.597859-1-quic_eberman@quicinc.com (mailing list archive)
---|---
Series | Drivers for gunyah hypervisor
On Mon, Aug 1, 2022 at 3:16 PM Elliot Berman <quic_eberman@quicinc.com> wrote:
>
> Gunyah is a Type-1 hypervisor independent of any
> high-level OS kernel, and runs in a higher CPU privilege level. It does
> not depend on any lower-privileged OS kernel/code for its core
> functionality. This increases its security and can support a much smaller
> trusted computing base than a Type-2 hypervisor.
>
> Gunyah is an open source hypervisor. The source repo is available at
> https://github.com/quic/gunyah-hypervisor.

Nowhere in this series do I see a change log, yet this is marked as v2.
How is anyone supposed to identify what is the difference between v1
and v2?
Hi Jeffrey,

On 8/1/2022 2:27 PM, Jeffrey Hugo wrote:
> On Mon, Aug 1, 2022 at 3:16 PM Elliot Berman <quic_eberman@quicinc.com> wrote:
>>
>> Gunyah is a Type-1 hypervisor independent of any
>> high-level OS kernel, and runs in a higher CPU privilege level. It does
>> not depend on any lower-privileged OS kernel/code for its core
>> functionality. This increases its security and can support a much smaller
>> trusted computing base than a Type-2 hypervisor.
>>
>> Gunyah is an open source hypervisor. The source repo is available at
>> https://github.com/quic/gunyah-hypervisor.
>
> Nowhere in this series do I see a change log, yet this is marked as
> v2. How is anyone supposed to identify what is the difference between
> v1 and v2?

I dropped the changelog when copying the cover letter:

Changes in v2:
 - DT bindings clean up
 - Switch hypercalls to follow SMCCC
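Regarding the SMCCC change: hypercalls now go through the SMCCC vendor
hypervisor service call range. Roughly like this (a sketch only -- the
function ID below is made up for illustration, and the status/payload
register convention is an assumption, not the documented Gunyah ABI):

#include <linux/arm-smccc.h>

/* Hypothetical example ID in the vendor-hypervisor service range */
#define GH_HYPERCALL_EXAMPLE					\
	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,		\
			   ARM_SMCCC_SMC_64,			\
			   ARM_SMCCC_OWNER_VENDOR_HYP, 0x0000)

static int gh_example_hypercall(u64 arg, u64 *resp)
{
	struct arm_smccc_res res;

	/* hvc into EL2; assume status comes back in x0, payload in x1 */
	arm_smccc_1_1_hvc(GH_HYPERCALL_EXAMPLE, arg, &res);
	if (res.a0)
		return -EINVAL;

	*resp = res.a1;
	return 0;
}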
On 02/08/2022 00:12, Elliot Berman wrote:
> Gunyah is a Type-1 hypervisor independent of any
> high-level OS kernel, and runs in a higher CPU privilege level. It does
> not depend on any lower-privileged OS kernel/code for its core
> functionality. This increases its security and can support a much smaller
> trusted computing base than a Type-2 hypervisor.
>
> Gunyah is an open source hypervisor. The source repo is available at
> https://github.com/quic/gunyah-hypervisor.
>
> The diagram below shows the architecture.
>
> ::
>
>          Primary VM           Secondary VMs

Is there any significant difference between Primary VM and other VMs?

>      +-----+ +-----+  | +-----+ +-----+ +-----+
>      |     | |     |  | |     | |     | |     |
> EL0  | APP | | APP |  | | APP | | APP | | APP |
>      |     | |     |  | |     | |     | |     |
>      +-----+ +-----+  | +-----+ +-----+ +-----+
>  ---------------------|-------------------------
>      +--------------+ | +----------------------+
>      |              | | |                      |
> EL1  | Linux Kernel | | |Linux kernel/Other OS |  ...
>      |              | | |                      |
>      +--------------+ | +----------------------+
>  --------hvc/smc------|------hvc/smc------------
>      +----------------------------------------+
>      |                                        |
> EL2  |            Gunyah Hypervisor           |
>      |                                        |
>      +----------------------------------------+
>
> Gunyah provides these following features.
>
> - Threads and Scheduling: The scheduler schedules virtual CPUs (VCPUs) on
>   physical CPUs and enables time-sharing of the CPUs.

Is the scheduling provided behind the back of the OS or does it require
cooperation?

> - Memory Management: Gunyah tracks memory ownership and use of all memory
>   under its control. Memory partitioning between VMs is a fundamental
>   security feature.
> - Interrupt Virtualization: All interrupts are handled in the hypervisor
>   and routed to the assigned VM.
> - Inter-VM Communication: There are several different mechanisms provided
>   for communicating between VMs.
> - Device Virtualization: Para-virtualization of devices is supported using
>   inter-VM communication. Low level system features and devices such as
>   interrupt controllers are supported with emulation where required.

After reviewing some of the patches from the series, I'd like to
understand what it provides (and what can be provided) to the VMs.
I'd like to understand it first, before going deep into the API issues.

1) The hypervisor provides message queues, doorbells and vCPUs.

Each of these resources has its own capability ID. Why is it called a
capability? Is it just a misnomer for the resource ID, or does it have
some other meaning behind it? If it is a capability, who is capable of
what?

At this moment you allocate two message queues with fixed IDs for
communication with the resource manager. Then you use these message
queues to organize a console and a pack of tty devices.

What other kinds of services does RM provide to the guest OS?
Do you expect any other drivers to be calling into the RM?

What is the usecase for the doorbells? Who provides doorbells?

You mentioned that the RM generates DT overlays. What kind of
information goes into the overlay?

My current impression of this series is that you have misused the
concept of devices. Rather than exporting MSGQs and BELLs as
gunyah_devices and then using them from other drivers, I'd suggest
turning them into resources provided by the gunyah driver core. I
mentioned using the mailbox API for this. Another subsystem that might
ring the bell for you is the remoteproc, especially the rproc_subdev.

I might be completely wrong about this, but if my in-mind picture of
Gunyah is correct, I'd have implemented the gunyah core subsystem as a
mailbox provider, RM as a separate platform driver consuming these
mailboxes and in turn being a remoteproc driver, and consoles as
remoteproc subdevices.

I can assume that at some point you would like to use Gunyah to boot
secondary VMs from the primary VM by calling into RM, etc. Most probably
at this moment a VM would be allocated other bells, message queues, etc.
If this assumption is correct, then the VM can become a separate device
(remoteproc?) in the Linux device tree.

I might be wrong in any of the assumptions above. Please feel free to
correct me. We can then think about a better API for your usecase.
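To illustrate the shape I have in mind for the RM side -- a very rough
sketch, where the gunyah mailbox provider, the "tx"/"rx" channel names
and the dispatch into subdevices are all assumptions on my part:

#include <linux/mailbox_client.h>
#include <linux/platform_device.h>

/* "RM as a platform driver consuming mailboxes" */
struct gh_rm {
	struct mbox_client cl;
	struct mbox_chan *tx, *rx;
};

static void gh_rm_rx_callback(struct mbox_client *cl, void *msg)
{
	/* decode the RM reply/notification, dispatch to subdevices */
}

static int gh_rm_probe(struct platform_device *pdev)
{
	struct gh_rm *rm = devm_kzalloc(&pdev->dev, sizeof(*rm), GFP_KERNEL);

	if (!rm)
		return -ENOMEM;

	rm->cl.dev = &pdev->dev;
	rm->cl.rx_callback = gh_rm_rx_callback;
	rm->cl.tx_block = true;

	/* the two fixed message queues, named in the RM node's mboxes */
	rm->tx = mbox_request_channel_byname(&rm->cl, "tx");
	if (IS_ERR(rm->tx))
		return PTR_ERR(rm->tx);
	rm->rx = mbox_request_channel_byname(&rm->cl, "rx");
	if (IS_ERR(rm->rx))
		return PTR_ERR(rm->rx);

	platform_set_drvdata(pdev, rm);
	return 0;
}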
On Mon, Aug 01, 2022 at 02:12:29PM -0700, Elliot Berman wrote:
> Gunyah is a Type-1 hypervisor independent of any
> high-level OS kernel, and runs in a higher CPU privilege level. It does
> not depend on any lower-privileged OS kernel/code for its core
> functionality. This increases its security and can support a much smaller
> trusted computing base than a Type-2 hypervisor.
>
> Gunyah is an open source hypervisor. The source repo is available at
> https://github.com/quic/gunyah-hypervisor.
>
> [...]

Hi,

I can't apply this series on top of mainline or linux-next. On what tree
(and what commit) is this series based? I'd like to do an htmldocs test.

Thanks.
On 8/4/2022 1:26 AM, Bagas Sanjaya wrote:
> On Mon, Aug 01, 2022 at 02:12:29PM -0700, Elliot Berman wrote:
>> Gunyah is a Type-1 hypervisor independent of any
>> high-level OS kernel, and runs in a higher CPU privilege level. It does
>> not depend on any lower-privileged OS kernel/code for its core
>> functionality. This increases its security and can support a much smaller
>> trusted computing base than a Type-2 hypervisor.
>>
>> Gunyah is an open source hypervisor. The source repo is available at
>> https://github.com/quic/gunyah-hypervisor.
>>
>> [...]
>
> Hi,
>
> I can't apply this series on top of mainline or linux-next. On what tree
> (and what commit) is this series based? I'd like to do an htmldocs test.

The series should apply cleanly on commit 4a57a8400075 ("vf/remap: return
the amount of bytes actually deduplicated") from Linus's tree.

> Thanks.
On Thu, Aug 04, 2022 at 02:48:58PM -0700, Elliot Berman wrote:
> >
> > Hi,
> >
> > I can't apply this series on top of mainline or linux-next. On what tree
> > (and what commit) is this series based? I'd like to do an htmldocs test.
> >
>
> The series should apply cleanly on commit 4a57a8400075 ("vf/remap: return
> the amount of bytes actually deduplicated") from Linus's tree.

Applied, thanks.

Next time, don't forget to specify --base when using git-format-patch.
On Fri, 05 Aug 2022 03:15:24 +0100, Bagas Sanjaya <bagasdotme@gmail.com> wrote:
>
> On Thu, Aug 04, 2022 at 02:48:58PM -0700, Elliot Berman wrote:
> > >
> > > Hi,
> > >
> > > I can't apply this series on top of mainline or linux-next. On what
> > > tree (and what commit) is this series based? I'd like to do an
> > > htmldocs test.
> > >
> >
> > The series should apply cleanly on commit 4a57a8400075 ("vf/remap:
> > return the amount of bytes actually deduplicated") from Linus's tree.
>
> Applied, thanks.
>
> Next time, don't forget to specify --base when using git-format-patch.

Or even better, use a tagged release as the base (an early -rc would do),
and not some random commit.

Thanks,

	M.
On 8/2/2022 2:24 AM, Dmitry Baryshkov wrote:
> On 02/08/2022 00:12, Elliot Berman wrote:
>> Gunyah is a Type-1 hypervisor independent of any
>> high-level OS kernel, and runs in a higher CPU privilege level. It does
>> not depend on any lower-privileged OS kernel/code for its core
>> functionality. This increases its security and can support a much smaller
>> trusted computing base than a Type-2 hypervisor.
>>
>> Gunyah is an open source hypervisor. The source repo is available at
>> https://github.com/quic/gunyah-hypervisor.
>>
>> The diagram below shows the architecture.
>>
>> ::
>>
>>          Primary VM           Secondary VMs
>
> Is there any significant difference between Primary VM and other VMs?

The primary VM is started by RM. Secondary VMs are not otherwise special,
except that they are (usually) launched by the primary VM.

>>      +-----+ +-----+  | +-----+ +-----+ +-----+
>>      |     | |     |  | |     | |     | |     |
>> EL0  | APP | | APP |  | | APP | | APP | | APP |
>>      |     | |     |  | |     | |     | |     |
>>      +-----+ +-----+  | +-----+ +-----+ +-----+
>>  ---------------------|-------------------------
>>      +--------------+ | +----------------------+
>>      |              | | |                      |
>> EL1  | Linux Kernel | | |Linux kernel/Other OS |  ...
>>      |              | | |                      |
>>      +--------------+ | +----------------------+
>>  --------hvc/smc------|------hvc/smc------------
>>      +----------------------------------------+
>>      |                                        |
>> EL2  |            Gunyah Hypervisor           |
>>      |                                        |
>>      +----------------------------------------+
>>
>> Gunyah provides these following features.
>>
>> - Threads and Scheduling: The scheduler schedules virtual CPUs (VCPUs) on
>>   physical CPUs and enables time-sharing of the CPUs.
>
> Is the scheduling provided behind the back of the OS or does it require
> cooperation?

Gunyah supports both of these scheduling models. For instance, scheduling
of the resource manager and the primary VM is done by Gunyah itself. A VM
that the primary VM launches could be scheduled by the primary VM itself
(by making a hypercall requesting a vCPU be switched in), or by Gunyah
itself. We've been calling the former "proxy scheduling" and this would
be the default behavior of VMs.

>> - Memory Management: Gunyah tracks memory ownership and use of all memory
>>   under its control. Memory partitioning between VMs is a fundamental
>>   security feature.
>> - Interrupt Virtualization: All interrupts are handled in the hypervisor
>>   and routed to the assigned VM.
>> - Inter-VM Communication: There are several different mechanisms provided
>>   for communicating between VMs.
>> - Device Virtualization: Para-virtualization of devices is supported
>>   using inter-VM communication. Low level system features and devices
>>   such as interrupt controllers are supported with emulation where
>>   required.
>
> After reviewing some of the patches from the series, I'd like to
> understand what it provides (and what can be provided) to the VMs.
>
> I'd like to understand it first, before going deep into the API issues.
>
> 1) The hypervisor provides message queues, doorbells and vCPUs.
>
> Each of these resources has its own capability ID. Why is it called a
> capability? Is it just a misnomer for the resource ID, or does it have
> some other meaning behind it? If it is a capability, who is capable of
> what?

We are following Gunyah's naming convention here. For each virtual
machine, Gunyah maintains a table of resources which can be accessed by
that VM. An entry in this table is called a "capability", and VMs can
only access resources via this capability table. Hence, they get called
"capability IDs" and not "resource IDs".
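Conceptually, something like this -- purely illustrative, these are not
the hypervisor's real structures:

#include <linux/types.h>

enum gh_resource_type { GH_RES_MSGQ, GH_RES_DBL, GH_RES_VCPU };

struct gh_capability {
	u64 cap_id;			/* the ID the VM passes in hypercalls */
	enum gh_resource_type type;
	u32 rights;			/* what this VM may do with the resource */
	void *resource;			/* the underlying hypervisor object */
};

/*
 * Each VM gets its own table; a cap_id is only meaningful within the
 * VM that owns the table.
 */
struct gh_vm_cap_table {
	struct gh_capability *caps;
	size_t num_caps;
};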
A VM can have multiple capability IDs mapping to the same resource. If
two VMs have access to the same resource, they may not be using the same
capability ID to access that resource, since the tables are independent
per VM.

> At this moment you allocate two message queues with fixed IDs for
> communication with the resource manager. Then you use these message
> queues to organize a console and a pack of tty devices.
>
> What other kinds of services does RM provide to the guest OS?
> Do you expect any other drivers to be calling into the RM?

I want to establish the framework to build a VM loader for Gunyah.
Internally, we are working with a prototype of a "generic VM loader"
which works with crosvm [1]. In this generic VM loader, memory sharing,
memory lending, cooperative scheduling, and raising virtual interrupts
are all supported. Emulating virtio devices in userspace is supported in
a way which feels very similar to KVM. Our internal VM loader uses an
IOCTL interface which is similar to KVM's.

> What is the usecase for the doorbells? Who provides doorbells?

The basic use case I'll start with is for userspace to create an IRQFD.
Userspace can use the IRQFD to raise a doorbell (interrupt) on the other
VM. (There is a rough sketch of this flow at the end of this mail.)

> You mentioned that the RM generates DT overlays. What kind of
> information goes into the overlay?

The info is described in
Documentation/devicetree/bindings/firmware/gunyah-hypervisor.yaml.

> My current impression of this series is that you have misused the
> concept of devices. Rather than exporting MSGQs and BELLs as
> gunyah_devices and then using them from other drivers, I'd suggest
> turning them into resources provided by the gunyah driver core. I
> mentioned using the mailbox API for this. Another subsystem that might
> ring the bell for you is the remoteproc, especially the rproc_subdev.

I had an offline discussion with Bjorn and he agreed with this approach.
He suggested avoiding the device bus model, and we will go with the
smaller approach in v3.

> I might be completely wrong about this, but if my in-mind picture of
> Gunyah is correct, I'd have implemented the gunyah core subsystem as a
> mailbox provider, RM as a separate platform driver consuming these
> mailboxes and in turn being a remoteproc driver, and consoles as
> remoteproc subdevices.

The mailbox framework can only fit with message queues and not doorbells
or vCPUs. The mailbox framework also relies on the mailbox being defined
in the devicetree. RM is an exceptional case in that it is described in
the devicetree. Message queues for other VMs would be dynamically
created at runtime as/when that VM is created. Thus, the client of the
message queue would need to "own" both the controller and client ends of
the mailbox.

RM is not loaded or managed by Linux, so I don't think the remoteproc
framework provides us any code re-use except for the subdevices code.
Remoteproc is a much larger framework than just the subdevices code, so
I don't think it fits well overall.

> I can assume that at some point you would like to use Gunyah to boot
> secondary VMs from the primary VM by calling into RM, etc. Most probably
> at this moment a VM would be allocated other bells, message queues, etc.
> If this assumption is correct, then the VM can become a separate device
> (remoteproc?) in the Linux device tree.
>
> I might be wrong in any of the assumptions above. Please feel free to
> correct me. We can then think about a better API for your usecase.
We don't want to limit VM configuration to the devicetree, as that limits
the number and kinds of VMs that can be launched to what is known at
build time. I'm not sure if you might have seen an early presentation of
Gunyah at Linaro? In the early days of Gunyah, we had static
configuration of VMs and many properties of the VMs were described in
the devicetree. We are moving away from static configuration of VMs as
much as possible.

[1]: https://chromium.googlesource.com/chromiumos/platform/crosvm
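The rough IRQFD sketch mentioned above -- GH_CREATE_IRQFD and struct
gh_irqfd_req are hypothetical placeholders to show the flow, not a
settled UAPI:

#include <stdint.h>
#include <linux/ioctl.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>

struct gh_irqfd_req {
	uint32_t fd;		/* eventfd to bind */
	uint32_t label;		/* which doorbell on the target VM */
};
#define GH_CREATE_IRQFD	_IOW('G', 0x10, struct gh_irqfd_req)

/* Returns an eventfd; each write to it rings the doorbell. */
int bind_irqfd(int vm_fd, uint32_t label)
{
	struct gh_irqfd_req req = { .label = label };
	int efd = eventfd(0, 0);

	if (efd < 0)
		return -1;

	req.fd = efd;
	if (ioctl(vm_fd, GH_CREATE_IRQFD, &req) < 0)
		return -1;

	return efd;
}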
[drive-by observation since one thing caught my interest...]

On 2022-08-09 00:38, Elliot Berman wrote:
>> I might be completely wrong about this, but if my in-mind picture of
>> Gunyah is correct, I'd have implemented the gunyah core subsystem as a
>> mailbox provider, RM as a separate platform driver consuming these
>> mailboxes and in turn being a remoteproc driver, and consoles as
>> remoteproc subdevices.
>
> The mailbox framework can only fit with message queues and not doorbells
> or vCPUs.

Is that so? There was a whole long drawn-out saga around the SCMI
protocol using the Arm MHU mailbox as a set of doorbells for
shared-memory payloads, but it did eventually get merged as the separate
arm_mhu_db.c driver, so unless we're talking about some completely
different notion of "doorbell"... :/

> The mailbox framework also relies on the mailbox being defined in the
> devicetree. RM is an exceptional case in that it is described in the
> devicetree. Message queues for other VMs would be dynamically created
> at runtime as/when that VM is created. Thus, the client of the message
> queue would need to "own" both the controller and client ends of the
> mailbox.

FWIW, if the mailbox API does fit conceptually then it looks like it
shouldn't be *too* hard to better abstract the DT details in the
framework itself and allow providers to offer additional means to
validate channel requests, which might be more productive than inventing
a whole new thing.

Thanks,
Robin.
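P.S. For the avoidance of doubt, a "doorbell" in the arm_mhu_db sense is
nothing more exotic than a channel the client sends an empty message on,
roughly like this (a sketch; assumes the client device has a suitable
mboxes property):

#include <linux/err.h>
#include <linux/mailbox_client.h>

static void ring_example(struct device *dev)
{
	struct mbox_client cl = {
		.dev = dev,
		.tx_block = false,	/* fire and forget */
		.knows_txdone = true,	/* nothing to wait for */
	};
	struct mbox_chan *chan = mbox_request_channel(&cl, 0);

	if (IS_ERR(chan))
		return;

	mbox_send_message(chan, NULL);	/* the "ring" carries no payload */
	mbox_client_txdone(chan, 0);	/* complete it immediately */
	mbox_free_channel(chan);
}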
On 8/9/2022 6:13 AM, Robin Murphy wrote:
> [drive-by observation since one thing caught my interest...]

Appreciate all the comments.

Jassi,

I understood you have talked with some of our folks (Trilok and Carl) a
few years ago about using the mailbox APIs. We were steered away from
using mailboxes then. Is that still the recommendation today?

> On 2022-08-09 00:38, Elliot Berman wrote:
>>> I might be completely wrong about this, but if my in-mind picture of
>>> Gunyah is correct, I'd have implemented the gunyah core subsystem as a
>>> mailbox provider, RM as a separate platform driver consuming these
>>> mailboxes and in turn being a remoteproc driver, and consoles as
>>> remoteproc subdevices.
>>
>> The mailbox framework can only fit with message queues and not
>> doorbells or vCPUs.
>
> Is that so? There was a whole long drawn-out saga around the SCMI
> protocol using the Arm MHU mailbox as a set of doorbells for
> shared-memory payloads, but it did eventually get merged as the
> separate arm_mhu_db.c driver, so unless we're talking about some
> completely different notion of "doorbell"... :/

Doorbells will be harder to fit into the mailbox API framework.

 - Simple doorbells don't have any TX done acknowledgement model at the
   doorbell layer (see bullet 1 from
   https://lore.kernel.org/all/68e241fd-16f0-96b4-eab8-369628292e03@quicinc.com/).
   Doorbell clients might have a doorbell acknowledgement flow, but the
   only client I have for doorbells doesn't. IRQFDs would send an empty
   message to the mailbox and immediately do a client-triggered TX_DONE.

 - Using mailboxes for the more advanced doorbell use-case forces the
   client to use doorbells a certain way, because each channel could be
   one bit in the bitmask, or the client could have complete control of
   the entire bitmask. I think implementing the mailbox API would force
   the otherwise-generic doorbell code to make that decision for
   clients.

Further, I wanted to highlight one other challenge with fitting Gunyah
message queues into the mailbox API:

 - Message queues track a flag which indicates whether there is space
   available in the queue. The flag is returned on msgq_send. When the
   message queue is full, an interrupt is raised when there is more
   space available. This could be used as a TX_DONE indicator, but the
   mailbox framework's API prevents us from doing mbox_chan_txdone
   inside the send_data channel op. I think this might be solvable by
   adding a new txdone mechanism.

>> The mailbox framework also relies on the mailbox being defined in the
>> devicetree. RM is an exceptional case in that it is described in the
>> devicetree. Message queues for other VMs would be dynamically created
>> at runtime as/when that VM is created. Thus, the client of the message
>> queue would need to "own" both the controller and client ends of the
>> mailbox.
>
> FWIW, if the mailbox API does fit conceptually then it looks like it
> shouldn't be *too* hard to better abstract the DT details in the
> framework itself and allow providers to offer additional means to
> validate channel requests, which might be more productive than
> inventing a whole new thing.

Some notes about fitting mailboxes into Gunyah IPC:

 - A single mailbox controller can't cover all the gunyah devices. The
   number of gunyah devices is not fixed and varies per VM launched, so
   the mailbox controller would need to be per-VM or per-device, where
   each channel represents a capability.

 - The other device types (like vCPU) don't fit into a message-based
   style of framework. I'd like to have a consistent way of binding a
   device's function with the device. If we use the mailbox API, some
   devices will use mailbox and others will use some other mechanism.
   I'd prefer to consistently use "some other mechanism" throughout.

 - TX and RX message queues are independent, and "combining" a TX and RX
   message queue happens at the client layer, by the client requesting
   access to two otherwise unassociated message queues. A mailbox
   channel would be associated with either a TX message queue capability
   or an RX message queue capability. This isn't a major hurdle per se,
   but it decreases how cleanly we can use the mailbox APIs, IMO.

 - A VM might only have a TX message queue and no RX message queue, or
   vice versa. We won't be able to require coupling a TX and an RX
   message queue for the mailbox.

 - TX done acknowledgement doesn't fit Gunyah IPC (see above) and a new
   TX_DONE mode would need to be implemented (rough sketch below).

 - We need to make it possible for a client to bind a mailbox channel
   without DT.

I'm getting a bit apprehensive about the tweaks needed to make the
mailbox framework usable for Gunyah. Will there be enough code re-use
and help with abstracting the direct-to-Gunyah APIs? IMO, there isn't,
but opinions are welcome :)

Thanks,
Elliot
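PS: for concreteness, the closest existing fit I can see for the "space
available" flag is TXDONE_BY_POLL. Something like this, where
gh_msgq_send() is a hypothetical wrapper around the Gunyah hypercall:

#include <linux/mailbox_controller.h>

struct gh_msgq_chan {
	u64 cap_id;		/* Gunyah capability for the TX queue */
	bool space_avail;	/* last "space available" indication */
};

static int gh_msgq_send_data(struct mbox_chan *chan, void *data)
{
	struct gh_msgq_chan *mc = chan->con_priv;
	bool ready;
	int ret;

	ret = gh_msgq_send(mc->cap_id, data, &ready);
	if (ret)
		return ret;

	/*
	 * We already know whether the queue can take another message,
	 * but we can't call mbox_chan_txdone() from send_data, so stash
	 * the flag and let the framework poll it via last_tx_done().
	 */
	mc->space_avail = ready;
	return 0;
}

static bool gh_msgq_last_tx_done(struct mbox_chan *chan)
{
	struct gh_msgq_chan *mc = chan->con_priv;

	return mc->space_avail;	/* also set from the not-full interrupt */
}

static const struct mbox_chan_ops gh_msgq_ops = {
	.send_data	= gh_msgq_send_data,
	.last_tx_done	= gh_msgq_last_tx_done,
};

/* the controller would set txdone_poll = true and a txpoll_period */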
On Tue, Aug 9, 2022 at 7:07 PM Elliot Berman <quic_eberman@quicinc.com> wrote:
>
> On 8/9/2022 6:13 AM, Robin Murphy wrote:
> > [drive-by observation since one thing caught my interest...]
>
> Appreciate all the comments.
>
> Jassi,
>
> I understood you have talked with some of our folks (Trilok and Carl) a
> few years ago about using the mailbox APIs. We were steered away from
> using mailboxes then. Is that still the recommendation today?
>
Neither I nor Google remember any such conversation. Doorbell had always
been supported by the api; it was the doorbell-mode of _mhu_ controller
that had some contention.

I haven't read the complete history of Gunyah yet, but from a quick look
it uses the hvc/smc instruction as the "physical link" between entities
(?). zynqmp-ipi-mailbox.c is one driver that uses smc in such a manner.
And I know there are some platforms that don't call hvc/smc under the
mailbox api, and I don't blame them.

Let me educate myself with the background and get back.... unless you
want to summarize a usecase that you doubt is supported.

Thanks.
On 8/9/2022 9:12 PM, Jassi Brar wrote:
> On Tue, Aug 9, 2022 at 7:07 PM Elliot Berman <quic_eberman@quicinc.com> wrote:
>
> I haven't read the complete history of Gunyah yet, but from a quick
> look it uses the hvc/smc instruction as the "physical link" between
> entities (?). zynqmp-ipi-mailbox.c is one driver that uses smc in
> such a manner. And I know there are some platforms that don't call
> hvc/smc under the mailbox api, and I don't blame them.
>
> Let me educate myself with the background and get back.... unless you
> want to summarize a usecase that you doubt is supported.

Hi Jassi,

Did you have a chance to evaluate? I have given a summary in this mail,
especially in the last paragraph:

https://lore.kernel.org/all/36303c20-5d30-2edd-0863-0cad804e3f8f@quicinc.com/

Thanks,
Elliot
On 09/08/2022 02:38, Elliot Berman wrote:
>
> On 8/2/2022 2:24 AM, Dmitry Baryshkov wrote:
>> I might be completely wrong about this, but if my in-mind picture of
>> Gunyah is correct, I'd have implemented the gunyah core subsystem as a
>> mailbox provider, RM as a separate platform driver consuming these
>> mailboxes and in turn being a remoteproc driver, and consoles as
>> remoteproc subdevices.
>
> The mailbox framework can only fit with message queues and not doorbells
> or vCPUs. The mailbox framework also relies on the mailbox being defined
> in the devicetree. RM is an exceptional case in that it is described in
> the devicetree. Message queues for other VMs would be dynamically
> created at runtime as/when that VM is created. Thus, the client of the
> message queue would need to "own" both the controller and client ends of
> the mailbox.

I'd still suggest using the mailbox API for the doorbells. You do not
have to implement the txdone, if I'm not mistaken.

> RM is not loaded or managed by Linux, so I don't think the remoteproc
> framework provides us any code re-use except for the subdevices code.
> Remoteproc is a much larger framework than just the subdevices code, so
> I don't think it fits well overall.
>
>> I can assume that at some point you would like to use Gunyah to boot
>> secondary VMs from the primary VM by calling into RM, etc. Most
>> probably at this moment a VM would be allocated other bells, message
>> queues, etc. If this assumption is correct, then the VM can become a
>> separate device (remoteproc?) in the Linux device tree.
>>
>> I might be wrong in any of the assumptions above. Please feel free to
>> correct me. We can then think about a better API for your usecase.
>
> We don't want to limit VM configuration to the devicetree, as that
> limits the number and kinds of VMs that can be launched to what is
> known at build time. I'm not sure if you might have seen an early
> presentation of Gunyah at Linaro? In the early days of Gunyah, we had
> static configuration of VMs and many properties of the VMs were
> described in the devicetree. We are moving away from static
> configuration of VMs as much as possible.

ack, this is correct.

> [1]: https://chromium.googlesource.com/chromiumos/platform/crosvm
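To illustrate: if the controller sets neither txdone_irq nor
txdone_poll, the mailbox core falls back to client-acked TX
(TXDONE_BY_ACK), so the provider needs no txdone handling at all. A
rough sketch, with gh_dbl_ring() standing in for the actual Gunyah
doorbell hypercall:

#include <linux/mailbox_controller.h>
#include <linux/platform_device.h>

static int gh_dbl_send_data(struct mbox_chan *chan, void *data)
{
	/* ring the doorbell capability stashed in con_priv */
	return gh_dbl_ring(chan->con_priv);
}

static const struct mbox_chan_ops gh_dbl_ops = {
	.send_data = gh_dbl_send_data,
};

static int gh_dbl_register(struct device *dev, struct mbox_chan *chans,
			   int num)
{
	struct mbox_controller *mbox;

	mbox = devm_kzalloc(dev, sizeof(*mbox), GFP_KERNEL);
	if (!mbox)
		return -ENOMEM;

	mbox->dev = dev;
	mbox->ops = &gh_dbl_ops;
	mbox->chans = chans;
	mbox->num_chans = num;
	/* no txdone_irq/txdone_poll: clients ack via mbox_client_txdone() */
	return devm_mbox_controller_register(dev, mbox);
}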