Message ID: 20240222021006.2279329-1-rick.p.edgecombe@intel.com
Series: Handle set_memory_XXcrypted() errors in hyperv
From: Rick Edgecombe <rick.p.edgecombe@intel.com> Sent: Wednesday, February 21, 2024 6:10 PM
>
> Shared (decrypted) pages should never return to the page allocator, or
> future usage of the pages may allow for the contents to be exposed to
> the host. They may also cause the guest to crash if the page is used in
> a way disallowed by HW (e.g. for executable code or as a page table).
>
> Normally set_memory() call failures are rare. But on TDX
> set_memory_XXcrypted() involves calls to the untrusted VMM, and an
> attacker could fail these calls such that:
>  1. set_memory_encrypted() returns an error and leaves the pages fully
>     shared.
>  2. set_memory_decrypted() returns an error, but the pages are actually
>     fully converted to shared.
>
> This means that patterns like the below can cause problems:
>  void *addr = alloc();
>  int fail = set_memory_decrypted(addr, 1);
>  if (fail)
>  	free_pages(addr, 0);
>
> And:
>  void *addr = alloc();
>  int fail = set_memory_decrypted(addr, 1);
>  if (fail) {
>  	set_memory_encrypted(addr, 1);
>  	free_pages(addr, 0);
>  }
>
> Unfortunately these patterns appear in the kernel. And what the
> set_memory() callers should do in this situation is not clear either.
> They shouldn't use the pages as shared because something clearly went
> wrong, but they also need to fully reset the pages to private to free
> them. But the kernel needs the VMM's help to do this, and the VMM is
> already being uncooperative around the needed operations. So this isn't
> guaranteed to succeed and the caller is kind of stuck with unusable
> pages.
>
> The only choice is to panic or leak the pages. The kernel tries not to
> panic if at all possible, so just leak the pages at the call sites.
> Separately there is a patch[0] to warn if the guest detects strange VMM
> behavior around this. It is stalled, so in the meantime I'm proceeding
> with fixing the callers to leak the pages. No additional warnings are
> added, because the plan is to warn in a single place in the x86
> set_memory() code.
>
> This series fixes the cases in the hyperv code.
>
> IMPORTANT NOTE:
> I don't have a setup to test TDX hyperv changes. These changes are
> compile tested only. Previously Michael Kelley suggested some folks at
> MS might be able to help with this.

Thanks for doing these changes. Overall they look pretty good, modulo
a few comments. The "decrypted" flag in the vmbus_gpadl structure is a
good way to keep track of the encryption status of the associated
memory.

The memory passed to the gpadl (Guest Physical Address Descriptor List)
functions may be allocated and freed directly by the driver, as in the
netvsc and UIO cases. You've handled that case. But memory may also be
allocated by vmbus_alloc_ring() and freed by vmbus_free_ring(). Your
patch set needs an additional change to check the "decrypted" flag in
vmbus_free_ring().

In reviewing the code, I also see some unrelated memory freeing issues
in error paths. They are outside the scope of your changes. I'll make
a note of these for future fixing.

For testing, I'll do two things:

1) Verify that the non-error paths still work correctly with the
changes. That should be relatively straightforward as the changes are
pretty much confined to the error paths.

2) Hack set_memory_encrypted() to always fail. I hope Linux still boots
in that case, but just leaks some memory. Then if I unbind a Hyper-V
synthetic device, that should exercise the path where
set_memory_encrypted() is called. Failures should be handled cleanly,
albeit while leaking the memory.
I should be able to test in a normal VM, a TDX VM, and an SEV-SNP VM.

I have a few more detailed comments in the individual patches of this
series.

Michael

> [0] https://lore.kernel.org/lkml/20240122184003.129104-1-rick.p.edgecombe@intel.com/
>
> Rick Edgecombe (4):
>   hv: Leak pages if set_memory_encrypted() fails
>   hv: Track decrypted status in vmbus_gpadl
>   hv_nstvsc: Don't free decrypted memory
>   uio_hv_generic: Don't free decrypted memory
>
>  drivers/hv/channel.c         | 11 ++++++++---
>  drivers/hv/connection.c      | 11 +++++++----
>  drivers/net/hyperv/netvsc.c  |  7 +++++--
>  drivers/uio/uio_hv_generic.c | 12 ++++++++----
>  include/linux/hyperv.h       |  1 +
>  5 files changed, 29 insertions(+), 13 deletions(-)
>
> --
> 2.34.1
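To make the vmbus_free_ring() suggestion concrete, a rough sketch of the
check follows. This is illustrative only, not the posted patch: it
assumes the "decrypted" flag that patch 2 of the series adds to struct
vmbus_gpadl, and that the ring buffer's gpadl handle is reachable as
channel->ringbuffer_gpadlhandle, matching the field in
include/linux/hyperv.h.

	/* Sketch of the suggested check in drivers/hv/channel.c */
	void vmbus_free_ring(struct vmbus_channel *channel)
	{
		hv_ringbuffer_cleanup(&channel->outbound);
		hv_ringbuffer_cleanup(&channel->inbound);

		/*
		 * If gpadl teardown failed to re-encrypt the ring buffer
		 * pages, they may still be shared with the host. Leak them
		 * rather than returning them to the page allocator.
		 */
		if (channel->ringbuffer_page &&
		    !channel->ringbuffer_gpadlhandle.decrypted) {
			__free_pages(channel->ringbuffer_page,
				     get_order(channel->ringbuffer_pagecount
					       << PAGE_SHIFT));
			channel->ringbuffer_page = NULL;
		}
	}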
From: Michael Kelley <mhklinux@outlook.com> Sent: Friday, March 1, 2024 11:00 AM
>
> > IMPORTANT NOTE:
> > I don't have a setup to test TDX hyperv changes. These changes are
> > compile tested only. Previously Michael Kelley suggested some folks
> > at MS might be able to help with this.
>
> Thanks for doing these changes. Overall they look pretty good, modulo
> a few comments. The "decrypted" flag in the vmbus_gpadl structure is a
> good way to keep track of the encryption status of the associated
> memory.
>
> The memory passed to the gpadl (Guest Physical Address Descriptor
> List) functions may be allocated and freed directly by the driver, as
> in the netvsc and UIO cases. You've handled that case. But memory may
> also be allocated by vmbus_alloc_ring() and freed by vmbus_free_ring().
> Your patch set needs an additional change to check the "decrypted"
> flag in vmbus_free_ring().
>
> In reviewing the code, I also see some unrelated memory freeing issues
> in error paths. They are outside the scope of your changes. I'll make
> a note of these for future fixing.
>
> For testing, I'll do two things:
>
> 1) Verify that the non-error paths still work correctly with the
> changes. That should be relatively straightforward as the changes are
> pretty much confined to the error paths.
>
> 2) Hack set_memory_encrypted() to always fail. I hope Linux still
> boots in that case, but just leaks some memory. Then if I unbind a
> Hyper-V synthetic device, that should exercise the path where
> set_memory_encrypted() is called. Failures should be handled cleanly,
> albeit while leaking the memory.
>
> I should be able to test in a normal VM, a TDX VM, and an SEV-SNP VM.
>

Rick --

Using your patches plus the changes in my comments, I've done most of
the testing described above. The normal paths work, and when I hack
set_memory_encrypted() to fail, the error paths correctly did not free
the memory. I checked both the ring buffer memory and the additional
vmalloc memory allocated by the netvsc driver and the uio driver. The
memory status can be checked after the fact via /proc/vmallocinfo and
/proc/buddyinfo since these are mostly large allocations. As expected,
the drivers output their own error messages after the failures to tear
down the GPADLs.

I did not test the vmbus_disconnect() path since that effectively kills
the VM.

I tested in a normal VM and in an SEV-SNP VM. I didn't specifically
test in a TDX VM, but given that Hyper-V CoCo guests run with a
paravisor, the guest sees the same thing either way.

Michael
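The fault injection Michael describes can be reproduced with a
throwaway hack along these lines. This is a sketch only, never to be
merged: it assumes the x86 implementation in
arch/x86/mm/pat/set_memory.c and simply makes every shared-to-private
conversion report failure.

	int set_memory_encrypted(unsigned long addr, int numpages)
	{
		/*
		 * Test hack: simulate an uncooperative VMM by failing every
		 * conversion back to private. Callers are expected to leak
		 * the pages instead of freeing them.
		 */
		return -EIO;
	}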
On Thu, 2024-03-07 at 17:11 +0000, Michael Kelley wrote:
> Using your patches plus the changes in my comments, I've done most of
> the testing described above. The normal paths work, and when I hack
> set_memory_encrypted() to fail, the error paths correctly did not free
> the memory. I checked both the ring buffer memory and the additional
> vmalloc memory allocated by the netvsc driver and the uio driver. The
> memory status can be checked after the fact via /proc/vmallocinfo and
> /proc/buddyinfo since these are mostly large allocations. As expected,
> the drivers output their own error messages after the failures to tear
> down the GPADLs.
>
> I did not test the vmbus_disconnect() path since that effectively
> kills the VM.
>
> I tested in a normal VM and in an SEV-SNP VM. I didn't specifically
> test in a TDX VM, but given that Hyper-V CoCo guests run with a
> paravisor, the guest sees the same thing either way.

Thanks Michael! How would you feel about reposting the patches with
your changes added? I think you have a very good handle on the part of
the problem I understand, and additionally much more familiarity with
these drivers.
From: Edgecombe, Rick P <rick.p.edgecombe@intel.com> Sent: Thursday, March 7, 2024 11:12 AM
>
> On Thu, 2024-03-07 at 17:11 +0000, Michael Kelley wrote:
> > Using your patches plus the changes in my comments, I've done most
> > of the testing described above. The normal paths work, and when I
> > hack set_memory_encrypted() to fail, the error paths correctly did
> > not free the memory. I checked both the ring buffer memory and the
> > additional vmalloc memory allocated by the netvsc driver and the uio
> > driver. The memory status can be checked after the fact via
> > /proc/vmallocinfo and /proc/buddyinfo since these are mostly large
> > allocations. As expected, the drivers output their own error
> > messages after the failures to tear down the GPADLs.
> >
> > I did not test the vmbus_disconnect() path since that effectively
> > kills the VM.
> >
> > I tested in a normal VM and in an SEV-SNP VM. I didn't specifically
> > test in a TDX VM, but given that Hyper-V CoCo guests run with a
> > paravisor, the guest sees the same thing either way.
>
> Thanks Michael! How would you feel about reposting the patches with
> your changes added? I think you have a very good handle on the part of
> the problem I understand, and additionally much more familiarity with
> these drivers.

Yes, I can submit a new version.

Michael