From patchwork Wed Nov 15 07:14:19 2023
X-Patchwork-Submitter: Xiaoyao Li
X-Patchwork-Id: 13456252
From: Xiaoyao Li <xiaoyao.li@intel.com>
To: Paolo Bonzini, David Hildenbrand, Igor Mammedov, Michael S. Tsirkin,
    Marcel Apfelbaum, Richard Henderson, Peter Xu, Philippe Mathieu-Daudé,
    Cornelia Huck, Daniel P. Berrangé, Eric Blake, Markus Armbruster,
    Marcelo Tosatti
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, xiaoyao.li@intel.com,
    Michael Roth, Sean Christopherson, Claudio Fontana, Gerd Hoffmann,
    Isaku Yamahata, Chenyi Qiang
Subject: [PATCH v3 10/70] kvm: handle KVM_EXIT_MEMORY_FAULT
Date: Wed, 15 Nov 2023 02:14:19 -0500
Message-Id: <20231115071519.2864957-11-xiaoyao.li@intel.com>
In-Reply-To: <20231115071519.2864957-1-xiaoyao.li@intel.com>
References: <20231115071519.2864957-1-xiaoyao.li@intel.com>

From: Chao Peng

Currently, only KVM_MEMORY_EXIT_FLAG_PRIVATE in flags is valid when
KVM_EXIT_MEMORY_FAULT happens. It indicates that userspace needs to do the
memory conversion on the RAMBlock to turn the memory into the desired
attribute, i.e., private or shared.

Note that KVM_EXIT_MEMORY_FAULT only makes sense when the RAMBlock has a
guest_memfd memory backend.

Note that KVM_RUN returns with -EFAULT when it exits with
KVM_EXIT_MEMORY_FAULT, so special handling is added so that this exit is
not treated as a fatal error.

Signed-off-by: Chao Peng
Co-developed-by: Xiaoyao Li
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 accel/kvm/kvm-all.c | 76 +++++++++++++++++++++++++++++++++++++++------
 1 file changed, 66 insertions(+), 10 deletions(-)
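For reference, the kvm_run fields consumed by the new KVM_EXIT_MEMORY_FAULT
handler are expected to have roughly the following shape in the kernel's
<linux/kvm.h> (guest_memfd UAPI). This excerpt is for review convenience
only, not part of the patch; verify it against the headers QEMU is actually
built with:

        /* KVM_EXIT_MEMORY_FAULT */
        struct {
#define KVM_MEMORY_EXIT_FLAG_PRIVATE    (1ULL << 3)
            __u64 flags;
            __u64 gpa;
            __u64 size;
        } memory_fault;

Only the KVM_MEMORY_EXIT_FLAG_PRIVATE bit is defined so far, which is why
any other flag bit is rejected with error_report() in the hunk below.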
"private" : "shared"); + } + + memory_region_unref(section.mr); + return ret; +} + int kvm_cpu_exec(CPUState *cpu) { struct kvm_run *run = cpu->kvm_run; @@ -2969,18 +3013,20 @@ int kvm_cpu_exec(CPUState *cpu) ret = EXCP_INTERRUPT; break; } - fprintf(stderr, "error: kvm run failed %s\n", - strerror(-run_ret)); + if (!(run_ret == -EFAULT && run->exit_reason == KVM_EXIT_MEMORY_FAULT)) { + fprintf(stderr, "error: kvm run failed %s\n", + strerror(-run_ret)); #ifdef TARGET_PPC - if (run_ret == -EBUSY) { - fprintf(stderr, - "This is probably because your SMT is enabled.\n" - "VCPU can only run on primary threads with all " - "secondary threads offline.\n"); - } + if (run_ret == -EBUSY) { + fprintf(stderr, + "This is probably because your SMT is enabled.\n" + "VCPU can only run on primary threads with all " + "secondary threads offline.\n"); + } #endif - ret = -1; - break; + ret = -1; + break; + } } trace_kvm_run_exit(cpu->cpu_index, run->exit_reason); @@ -3067,6 +3113,16 @@ int kvm_cpu_exec(CPUState *cpu) break; } break; + case KVM_EXIT_MEMORY_FAULT: + if (run->memory_fault.flags & ~KVM_MEMORY_EXIT_FLAG_PRIVATE) { + error_report("KVM_EXIT_MEMORY_FAULT: Unknown flag 0x%" PRIx64, + (uint64_t)run->memory_fault.flags); + ret = -1; + break; + } + ret = kvm_convert_memory(run->memory_fault.gpa, run->memory_fault.size, + run->memory_fault.flags & KVM_MEMORY_EXIT_FLAG_PRIVATE); + break; default: DPRINTF("kvm_arch_handle_exit\n"); ret = kvm_arch_handle_exit(cpu, run);