Message ID | 20201109113233.9012-1-dbrazdil@google.com
---|---
Series | Opt-in always-on nVHE hypervisor
On Mon, Nov 09, 2020 at 11:32:09AM +0000, David Brazdil wrote:
> As we progress towards being able to keep guest state private to the
> host running nVHE hypervisor, this series allows the hypervisor to
> install itself on newly booted CPUs before the host is allowed to run
> on them.

Why? I thought we were trying to kill nVHE off now that newer CPUs
provide the saner virtualization extensions?
On 2020-11-10 10:15, Christoph Hellwig wrote:
> On Mon, Nov 09, 2020 at 11:32:09AM +0000, David Brazdil wrote:
>> As we progress towards being able to keep guest state private to the
>> host running nVHE hypervisor, this series allows the hypervisor to
>> install itself on newly booted CPUs before the host is allowed to run
>> on them.
>
> Why? I thought we were trying to kill nVHE off now that newer CPUs
> provide the saner virtualization extensions?

We can't kill nVHE at all, because that is the only game in town.
You can't even buy a decent machine with VHE, no matter how much money
you put on the table.

nVHE is here for the foreseeable future, and we even use its
misfeatures to our advantage in order to offer confidential VMs.
See Will's presentation at KVM forum a couple of weeks ago for the
gory details.

Thanks,

        M.
Hi David,

On 2020-11-09 11:32, David Brazdil wrote:
> As we progress towards being able to keep guest state private to the
> host running nVHE hypervisor, this series allows the hypervisor to
> install itself on newly booted CPUs before the host is allowed to run
> on them.
>
> All functionality described below is opt-in, guarded by an early param
> 'kvm-arm.protected'. Future patches specific to the new "protected" mode
> should be hidden behind the same param.
>
> The hypervisor starts trapping host SMCs and intercepting the host's PSCI
> CPU_ON/OFF/SUSPEND calls. It replaces the host's entry point with its
> own, initializes the EL2 state of the new CPU and installs the nVHE hyp
> vector before ERETing to the host's entry point.
>
> The kernel checks new cores' features against the finalized system
> capabilities. To avoid the need to move this code/data to EL2, the
> implementation only allows booting cores that were online at the time of
> KVM initialization and therefore had already been checked.
>
> Other PSCI SMCs are forwarded to EL3, though only the known set of SMCs
> implemented in the kernel is allowed. Non-PSCI SMCs are also forwarded
> to EL3. Future changes will need to ensure the safety of all SMCs wrt.
> private guests.
>
> The host is still allowed to reset EL2 back to the stub vector, e.g. for
> hibernation or kexec, but will not disable nVHE when there are no VMs.
>
> Tested on Rock Pi 4b, based on 5.10-rc3.

I think I've gone through most of the patches. When you respin this
series, you may want to do so on top of my host EL2 entry rework [1],
which changes a few things you currently rely on. If anything in there
doesn't work for you, please let me know.

Thanks,

        M.

[1] https://lore.kernel.org/kvm/20201109175923.445945-1-maz@kernel.org/
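The CPU_ON interception described in the cover letter can be pictured roughly as in the sketch below. This is not code from the series, only a minimal illustration of the idea: the hypervisor records the entry point the host asked for, substitutes its own EL2 init entry point in the SMC it forwards to EL3, and only ERETs to the host's entry point after EL2 has been set up on the new CPU. All names here (psci_boot_args, hyp_cpu_entry, forward_smc_to_el3, mpidr_to_cpu) are hypothetical.

```c
/* Illustrative sketch only -- not the code from this series. */
#include <stdint.h>

#define NR_CPUS			256		/* placeholder */
#define PSCI_0_2_FN64_CPU_ON	0xc4000003UL

struct psci_boot_args {
	uint64_t pc;	/* host's original entry point */
	uint64_t r0;	/* host's original context argument */
};

static struct psci_boot_args boot_args[NR_CPUS];

/* Provided elsewhere in a real implementation (hypothetical names). */
extern uint64_t hyp_cpu_entry;			/* EL2 init entry point */
extern int mpidr_to_cpu(uint64_t mpidr);
extern uint64_t forward_smc_to_el3(uint64_t fn, uint64_t a0,
				   uint64_t a1, uint64_t a2);

static uint64_t handle_cpu_on(uint64_t target_mpidr, uint64_t host_pc,
			      uint64_t host_ctx)
{
	int cpu = mpidr_to_cpu(target_mpidr);

	/* Remember where the host wanted the new CPU to start. */
	boot_args[cpu].pc = host_pc;
	boot_args[cpu].r0 = host_ctx;

	/*
	 * Ask EL3 to boot the CPU into the hypervisor's own entry point.
	 * That code initializes EL2 state, installs the nVHE vectors and
	 * only then ERETs to boot_args[cpu].pc with boot_args[cpu].r0.
	 */
	return forward_smc_to_el3(PSCI_0_2_FN64_CPU_ON, target_mpidr,
				  hyp_cpu_entry, cpu);
}
```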
On Tue, Nov 10, 2020 at 1:19 PM Marc Zyngier <maz@kernel.org> wrote:
>> Why? I thought we were trying to kill nVHE off now that newer CPUs
>> provide the saner virtualization extensions?
>
> We can't kill nVHE at all, because that is the only game in town.
> You can't even buy a decent machine with VHE, no matter how much money
> you put on the table.

As I mentioned earlier, we built this type of nVHE hypervisor and the
proof of concept is here:
https://github.com/jkrh/kvms

See the README. It runs successfully on multiple pieces of arm64
hardware and provides a tiny QEMU-based development environment via the
makefiles for the QEMU 'max' CPU. The code is rough and the number of
man-hours put into it is not sky high, but it does run. I'll add an
updated kernel patch to the patches/ dir for one of the later kernels
hopefully next week; up to now we have only supported kernels between
4.9 and 5.6, as that is what our development hardware runs.

It requires a handful of hooks in the KVM code, but the actual KVM
calls are just rerouted back to the kernel symbols. This way the
hypervisor itself can be kept very tiny. The stage-2 page tables are
fully owned by the hyp, and the guests are unmapped from the host
memory when configured with the option (we call it host blinding).
Multiple VMs can be run without pinning them into memory. It also
provides a tiny out-of-tree driver prototype stub to protect critical
sections of kernel memory beyond the kernel's own reach. There are
still holes in the implementation, such as the virtio-mapback handling
via whitelisting and paging integrity checks, and many things are not
quite all the way there yet. One step at a time.

--
Janne
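The "host blinding" idea mentioned above can be sketched as follows, assuming a hypervisor that owns the stage-2 tables for both host and guests: a page donated to a protected guest is mapped into the guest's stage-2 and removed from the host's, so any later host access faults into the hypervisor. The names here (struct s2_mmu, s2_map, s2_unmap, host_mmu, donate_to_guest) are hypothetical and are not the kvms or kernel API.

```c
/* Rough sketch of host blinding, under the assumptions stated above. */
#include <stdint.h>
#include <stddef.h>

struct s2_mmu;					/* per-VM stage-2 context */

extern struct s2_mmu host_mmu;

extern int s2_map(struct s2_mmu *mmu, uint64_t ipa, uint64_t pa,
		  size_t size, unsigned int prot);
extern int s2_unmap(struct s2_mmu *mmu, uint64_t ipa, size_t size);

/* Donate a physical range from the host to a guest. */
static int donate_to_guest(struct s2_mmu *guest, uint64_t guest_ipa,
			   uint64_t pa, size_t size, unsigned int prot)
{
	int ret;

	/* Make the pages visible to the guest at the requested IPA. */
	ret = s2_map(guest, guest_ipa, pa, size, prot);
	if (ret)
		return ret;

	/*
	 * Blind the host: tear down the mapping of these pages in the
	 * host's stage-2 so the host can no longer read or write them.
	 */
	return s2_unmap(&host_mmu, pa, size);
}
```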