
[RFC,v2,03/21] HostMem: Add private property and associate it with RAM_KVM_GMEM

Message ID 20230914035117.3285885-4-xiaoyao.li@intel.com (mailing list archive)
State New, archived
Series QEMU gmem implementation

Commit Message

Xiaoyao Li Sept. 14, 2023, 3:50 a.m. UTC
From: Isaku Yamahata <isaku.yamahata@intel.com>

Add a new property "private" to memory backends. When it is set to true,
it indicates that the RAMBlock of the backend also requires KVM gmem.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 backends/hostmem-file.c  |  1 +
 backends/hostmem-memfd.c |  1 +
 backends/hostmem-ram.c   |  1 +
 backends/hostmem.c       | 18 ++++++++++++++++++
 include/sysemu/hostmem.h |  2 +-
 qapi/qom.json            |  4 ++++
 6 files changed, 26 insertions(+), 1 deletion(-)

Comments

Markus Armbruster Sept. 19, 2023, 9:46 a.m. UTC | #1
Xiaoyao Li <xiaoyao.li@intel.com> writes:

> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> Add a new property "private" to memory backends. When it's set to true,
> it indicates the RAMblock of the backend also requires kvm gmem.

Can you add a brief explanation why you need the property?

> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>

[...]

> diff --git a/qapi/qom.json b/qapi/qom.json
> index fa3e88c8e6ab..d28c5403bc0f 100644
> --- a/qapi/qom.json
> +++ b/qapi/qom.json
> @@ -605,6 +605,9 @@
>  # @reserve: if true, reserve swap space (or huge pages) if applicable
>  #     (default: true) (since 6.1)
>  #
> +# @private: if true, use KVM gmem private memory (default: false)
> +#     (since 8.2)
> +#
>  # @size: size of the memory region in bytes
>  #
>  # @x-use-canonical-path-for-ramblock-id: if true, the canonical path
> @@ -631,6 +634,7 @@
>              '*prealloc-context': 'str',
>              '*share': 'bool',
>              '*reserve': 'bool',
> +            '*private': 'bool',
>              'size': 'size',
>              '*x-use-canonical-path-for-ramblock-id': 'bool' } }
Xiaoyao Li Sept. 19, 2023, 11:24 p.m. UTC | #2
On 9/19/2023 5:46 PM, Markus Armbruster wrote:
> Xiaoyao Li <xiaoyao.li@intel.com> writes:
> 
>> From: Isaku Yamahata <isaku.yamahata@intel.com>
>>
>> Add a new property "private" to memory backends. When it's set to true,
>> it indicates the RAMblock of the backend also requires kvm gmem.
> 
> Can you add a brief explanation why you need the property?

It provides a mechanism for the user to specify whether the memory can 
serve as private memory, i.e., whether it needs to request KVM gmem.
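
For example, with this patch applied, such a backend could be requested 
on the command line with something like (illustrative only):

  qemu-system-x86_64 ... \
      -object memory-backend-memfd,id=ram0,size=4G,private=on \
      -machine memory-backend=ram0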

>> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
>> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> 
> [...]
> 
>> diff --git a/qapi/qom.json b/qapi/qom.json
>> index fa3e88c8e6ab..d28c5403bc0f 100644
>> --- a/qapi/qom.json
>> +++ b/qapi/qom.json
>> @@ -605,6 +605,9 @@
>>   # @reserve: if true, reserve swap space (or huge pages) if applicable
>>   #     (default: true) (since 6.1)
>>   #
>> +# @private: if true, use KVM gmem private memory (default: false)
>> +#     (since 8.2)
>> +#
>>   # @size: size of the memory region in bytes
>>   #
>>   # @x-use-canonical-path-for-ramblock-id: if true, the canonical path
>> @@ -631,6 +634,7 @@
>>               '*prealloc-context': 'str',
>>               '*share': 'bool',
>>               '*reserve': 'bool',
>> +            '*private': 'bool',
>>               'size': 'size',
>>               '*x-use-canonical-path-for-ramblock-id': 'bool' } }
>
Markus Armbruster Sept. 20, 2023, 7:30 a.m. UTC | #3
Xiaoyao Li <xiaoyao.li@intel.com> writes:

> On 9/19/2023 5:46 PM, Markus Armbruster wrote:
>> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>> 
>>> From: Isaku Yamahata <isaku.yamahata@intel.com>
>>>
>>> Add a new property "private" to memory backends. When it's set to true,
>>> it indicates the RAMblock of the backend also requires kvm gmem.
>> Can you add a brief explanation why you need the property?
>
> It provides a mechanism for user to specify whether the memory can serve as private memory (need request kvm gmem).

Yes, but why would a user want such memory?
Xiaoyao Li Sept. 20, 2023, 2:35 p.m. UTC | #4
On 9/20/2023 3:30 PM, Markus Armbruster wrote:
> Xiaoyao Li <xiaoyao.li@intel.com> writes:
> 
>> On 9/19/2023 5:46 PM, Markus Armbruster wrote:
>>> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>>>
>>>> From: Isaku Yamahata <isaku.yamahata@intel.com>
>>>>
>>>> Add a new property "private" to memory backends. When it's set to true,
>>>> it indicates the RAMblock of the backend also requires kvm gmem.
>>> Can you add a brief explanation why you need the property?
>>
>> It provides a mechanism for user to specify whether the memory can serve as private memory (need request kvm gmem).
> 
> Yes, but why would a user want such memory?
> 

Because KVM requires it for confidential guests, e.g., TDX guests. KVM 
requires the memory slot to have KVM_MEM_PRIVATE set and to have a 
valid gmem associated with it if the guest accesses it as private memory.
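
For reference, a rough sketch of what the KVM side expects. The uAPI 
names below (KVM_CREATE_GUEST_MEMFD, KVM_SET_USER_MEMORY_REGION2, the 
gmem_fd/gmem_offset fields) follow the in-flight gmem kernel series and 
are assumptions here; they may differ from what finally lands upstream, 
so this is not expected to build against released kernel headers:

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Sketch only: register a memslot that can also be accessed as private. */
  static int set_private_memslot(int vm_fd, uint32_t slot, uint64_t gpa,
                                 uint64_t size, void *host_va)
  {
      /* Allocate a gmem fd to back the private view of the region. */
      struct kvm_create_guest_memfd gmem = { .size = size, .flags = 0 };
      int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
      if (gmem_fd < 0) {
          return -1;
      }

      struct kvm_userspace_memory_region2 region = {
          .slot = slot,
          .flags = KVM_MEM_PRIVATE,       /* slot may hold private memory */
          .guest_phys_addr = gpa,
          .memory_size = size,
          .userspace_addr = (uint64_t)(uintptr_t)host_va, /* shared view */
          .gmem_fd = (uint32_t)gmem_fd,   /* backing for private accesses */
          .gmem_offset = 0,
      };
      return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);
  }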
David Hildenbrand Sept. 20, 2023, 2:37 p.m. UTC | #5
On 20.09.23 16:35, Xiaoyao Li wrote:
> On 9/20/2023 3:30 PM, Markus Armbruster wrote:
>> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>>
>>> On 9/19/2023 5:46 PM, Markus Armbruster wrote:
>>>> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>>>>
>>>>> From: Isaku Yamahata <isaku.yamahata@intel.com>
>>>>>
>>>>> Add a new property "private" to memory backends. When it's set to true,
>>>>> it indicates the RAMblock of the backend also requires kvm gmem.
>>>> Can you add a brief explanation why you need the property?
>>>
>>> It provides a mechanism for user to specify whether the memory can serve as private memory (need request kvm gmem).
>>
>> Yes, but why would a user want such memory?
>>
> 
> Because KVM demands it for confidential guest, e.g., TDX guest. KVM
> demands that the mem slot needs to have KVM_MEM_PRIVATE set and has
> valid gmem associated if the guest accesses it as private memory.

I think as long as there is no demand to have a TDX guest with this 
property set to "off", then just don't add it.

With a TDX VM, it can be implicitly active. If we ever have to disable 
it for selected memory backends, we can add the property and have 
something like on/off/auto. For now it would be "auto".
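
Should the property ever come back as on/off/auto, a minimal sketch 
reusing QEMU's existing OnOffAuto type could look roughly like this in 
backends/hostmem.c (the enum-typed field and the getter/setter bodies 
are assumptions, not code from this series):

  /* backend->private would become an OnOffAuto instead of a bool
   * (OnOffAuto and OnOffAuto_lookup come from qapi/qapi-types-common.h). */
  static int host_memory_backend_get_private(Object *o, Error **errp)
  {
      HostMemoryBackend *backend = MEMORY_BACKEND(o);

      return backend->private;
  }

  static void host_memory_backend_set_private(Object *o, int value,
                                              Error **errp)
  {
      HostMemoryBackend *backend = MEMORY_BACKEND(o);

      backend->private = value;
  }

  /* In host_memory_backend_class_init(): */
  object_class_property_add_enum(oc, "private", "OnOffAuto",
                                 &OnOffAuto_lookup,
                                 host_memory_backend_get_private,
                                 host_memory_backend_set_private);
  object_class_property_set_description(oc, "private",
      "Back the region with KVM gmem private memory (on/off/auto)");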
Markus Armbruster Sept. 20, 2023, 3:42 p.m. UTC | #6
David Hildenbrand <david@redhat.com> writes:

> On 20.09.23 16:35, Xiaoyao Li wrote:
>> On 9/20/2023 3:30 PM, Markus Armbruster wrote:
>>> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>>>
>>>> On 9/19/2023 5:46 PM, Markus Armbruster wrote:
>>>>> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>>>>>
>>>>>> From: Isaku Yamahata <isaku.yamahata@intel.com>
>>>>>>
>>>>>> Add a new property "private" to memory backends. When it's set to true,
>>>>>> it indicates the RAMblock of the backend also requires kvm gmem.
>>>>> Can you add a brief explanation why you need the property?
>>>>
>>>> It provides a mechanism for user to specify whether the memory can serve as private memory (need request kvm gmem).
>>>
>>> Yes, but why would a user want such memory?
>>>
>> Because KVM demands it for confidential guest, e.g., TDX guest. KVM
>> demands that the mem slot needs to have KVM_MEM_PRIVATE set and has
>> valid gmem associated if the guest accesses it as private memory.

Commit messages should explain why we want the patch.  Documenting "why"
is at least as important as "what".  If "what" is missing, I can read
the patch to find out.  If "why" is missing, I'm reduced to guesswork.

> I think as long as there is no demand to have a TDX guest with this property be set to "off", then just don't add it.
>
> With a TDX VM, it will can be implicitly active. If we ever have to disable it for selective memory backends, we can add the property and have something like on/off/auto. For now it would be "auto".

Makes sense to me.
Xiaoyao Li Sept. 21, 2023, 8:38 a.m. UTC | #7
On 9/20/2023 11:42 PM, Markus Armbruster wrote:
> David Hildenbrand <david@redhat.com> writes:
> 
>> On 20.09.23 16:35, Xiaoyao Li wrote:
>>> On 9/20/2023 3:30 PM, Markus Armbruster wrote:
>>>> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>>>>
>>>>> On 9/19/2023 5:46 PM, Markus Armbruster wrote:
>>>>>> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>>>>>>
>>>>>>> From: Isaku Yamahata <isaku.yamahata@intel.com>
>>>>>>>
>>>>>>> Add a new property "private" to memory backends. When it's set to true,
>>>>>>> it indicates the RAMblock of the backend also requires kvm gmem.
>>>>>> Can you add a brief explanation why you need the property?
>>>>>
>>>>> It provides a mechanism for user to specify whether the memory can serve as private memory (need request kvm gmem).
>>>>
>>>> Yes, but why would a user want such memory?
>>>>
>>> Because KVM demands it for confidential guest, e.g., TDX guest. KVM
>>> demands that the mem slot needs to have KVM_MEM_PRIVATE set and has
>>> valid gmem associated if the guest accesses it as private memory.
> 
> Commit messages should explain why we want the patch.  Documenting "why"
> is at least as important as "what".  If "what" is missing, I can read
> the patch to find out.  If "why" is missing, I'm reduced to guesswork.

I'll try my best to improve the commit message of this patch, and of 
all the other patches.

>> I think as long as there is no demand to have a TDX guest with this property be set to "off", then just don't add it.
>>
>> With a TDX VM, it will can be implicitly active. If we ever have to disable it for selective memory backends, we can add the property and have something like on/off/auto. For now it would be "auto".
> 
> Makes sense to me.

OK. I think I get the answer to open #1 in the cover letter.

If there is no other voice, I'll drop this patch and allocate gmem RAM 
when the vm_type is TDX.
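
For the record, that direction could look roughly like the following in 
the backend allocation paths; machine_requires_gmem() is a hypothetical 
placeholder here for whatever vm_type/TDX predicate ends up being used:

  /* Sketch only: derive the flag from the VM type instead of a
   * user-visible property; machine_requires_gmem() is a made-up
   * placeholder, not an existing QEMU helper. */
  ram_flags = backend->share ? RAM_SHARED : 0;
  ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
  ram_flags |= machine_requires_gmem() ? RAM_KVM_GMEM : 0; /* true for TDX */
  memory_region_init_ram_flags_nomigrate(&backend->mr, OBJECT(backend), name,
                                         backend->size, ram_flags, errp);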
David Hildenbrand Sept. 21, 2023, 8:45 a.m. UTC | #8
>>> I think as long as there is no demand to have a TDX guest with this property be set to "off", then just don't add it.
>>>
>>> With a TDX VM, it will can be implicitly active. If we ever have to disable it for selective memory backends, we can add the property and have something like on/off/auto. For now it would be "auto".
>>
>> Makes sense to me.
> 
> OK. I think I get the answer of open #1 in cover letter.
> 

Yes.

> If no other voice, I'll drop this patch and allocate gmem RAM when
> vm_type is TDX.

Good!

Patch

diff --git a/backends/hostmem-file.c b/backends/hostmem-file.c
index b4335a80e6da..861f76f2de8a 100644
--- a/backends/hostmem-file.c
+++ b/backends/hostmem-file.c
@@ -56,6 +56,7 @@  file_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
     name = host_memory_backend_get_name(backend);
     ram_flags = backend->share ? RAM_SHARED : 0;
     ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
+    ram_flags |= backend->private ? RAM_KVM_GMEM : 0;
     ram_flags |= fb->is_pmem ? RAM_PMEM : 0;
     ram_flags |= RAM_NAMED_FILE;
     memory_region_init_ram_from_file(&backend->mr, OBJECT(backend), name,
diff --git a/backends/hostmem-memfd.c b/backends/hostmem-memfd.c
index 3fc85c3db81b..f49990ce3bbd 100644
--- a/backends/hostmem-memfd.c
+++ b/backends/hostmem-memfd.c
@@ -55,6 +55,7 @@  memfd_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
     name = host_memory_backend_get_name(backend);
     ram_flags = backend->share ? RAM_SHARED : 0;
     ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
+    ram_flags |= backend->private ? RAM_KVM_GMEM : 0;
     memory_region_init_ram_from_fd(&backend->mr, OBJECT(backend), name,
                                    backend->size, ram_flags, fd, 0, errp);
     g_free(name);
diff --git a/backends/hostmem-ram.c b/backends/hostmem-ram.c
index b8e55cdbd0f8..d6c46250dcfd 100644
--- a/backends/hostmem-ram.c
+++ b/backends/hostmem-ram.c
@@ -30,6 +30,7 @@  ram_backend_memory_alloc(HostMemoryBackend *backend, Error **errp)
     name = host_memory_backend_get_name(backend);
     ram_flags = backend->share ? RAM_SHARED : 0;
     ram_flags |= backend->reserve ? 0 : RAM_NORESERVE;
+    ram_flags |= backend->private ? RAM_KVM_GMEM : 0;
     memory_region_init_ram_flags_nomigrate(&backend->mr, OBJECT(backend), name,
                                            backend->size, ram_flags, errp);
     g_free(name);
diff --git a/backends/hostmem.c b/backends/hostmem.c
index 747e7838c031..dbdbb0aafd45 100644
--- a/backends/hostmem.c
+++ b/backends/hostmem.c
@@ -461,6 +461,20 @@  static void host_memory_backend_set_reserve(Object *o, bool value, Error **errp)
     }
     backend->reserve = value;
 }
+
+static bool host_memory_backend_get_private(Object *o, Error **errp)
+{
+    HostMemoryBackend *backend = MEMORY_BACKEND(o);
+
+    return backend->private;
+}
+
+static void host_memory_backend_set_private(Object *o, bool value, Error **errp)
+{
+    HostMemoryBackend *backend = MEMORY_BACKEND(o);
+
+    backend->private = value;
+}
 #endif /* CONFIG_LINUX */
 
 static bool
@@ -541,6 +555,10 @@  host_memory_backend_class_init(ObjectClass *oc, void *data)
         host_memory_backend_get_reserve, host_memory_backend_set_reserve);
     object_class_property_set_description(oc, "reserve",
         "Reserve swap space (or huge pages) if applicable");
+    object_class_property_add_bool(oc, "private",
+        host_memory_backend_get_private, host_memory_backend_set_private);
+    object_class_property_set_description(oc, "private",
+        "Use KVM gmem private memory");
 #endif /* CONFIG_LINUX */
     /*
      * Do not delete/rename option. This option must be considered stable
diff --git a/include/sysemu/hostmem.h b/include/sysemu/hostmem.h
index 39326f1d4f9c..d88970395618 100644
--- a/include/sysemu/hostmem.h
+++ b/include/sysemu/hostmem.h
@@ -65,7 +65,7 @@  struct HostMemoryBackend {
     /* protected */
     uint64_t size;
     bool merge, dump, use_canonical_path;
-    bool prealloc, is_mapped, share, reserve;
+    bool prealloc, is_mapped, share, reserve, private;
     uint32_t prealloc_threads;
     ThreadContext *prealloc_context;
     DECLARE_BITMAP(host_nodes, MAX_NODES + 1);
diff --git a/qapi/qom.json b/qapi/qom.json
index fa3e88c8e6ab..d28c5403bc0f 100644
--- a/qapi/qom.json
+++ b/qapi/qom.json
@@ -605,6 +605,9 @@ 
 # @reserve: if true, reserve swap space (or huge pages) if applicable
 #     (default: true) (since 6.1)
 #
+# @private: if true, use KVM gmem private memory (default: false)
+#     (since 8.2)
+#
 # @size: size of the memory region in bytes
 #
 # @x-use-canonical-path-for-ramblock-id: if true, the canonical path
@@ -631,6 +634,7 @@ 
             '*prealloc-context': 'str',
             '*share': 'bool',
             '*reserve': 'bool',
+            '*private': 'bool',
             'size': 'size',
             '*x-use-canonical-path-for-ramblock-id': 'bool' } }
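
With the QAPI addition above, the new property would also be reachable 
via QMP object-add, e.g. (illustrative only):

  {"execute": "object-add",
   "arguments": {"qom-type": "memory-backend-memfd", "id": "ram0",
                 "size": 4294967296, "private": true}}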