From patchwork Mon Mar 15 18:26:40 2021
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12140401
Subject: [PATCH v2 1/4] KVM: x86/mmu: Fix RCU usage in handle_removed_tdp_mmu_page
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Peter Shier, Jim Mattson,
 Ben Gardon, kernel test robot
Date: Mon, 15 Mar 2021 11:26:40 -0700
Message-Id: <20210315182643.2437374-2-bgardon@google.com>
In-Reply-To: <20210315182643.2437374-1-bgardon@google.com>
References: <20210315182643.2437374-1-bgardon@google.com>
X-Mailing-List: kvm@vger.kernel.org

The pt passed into handle_removed_tdp_mmu_page does not need RCU
protection, as it is not at any risk of being freed by another thread at
that point. However, the implicit cast from tdp_ptep_t to u64 * dropped
the __rcu annotation without a proper rcu_dereference(). Fix this by
passing the pt as a tdp_ptep_t and then rcu_dereferencing it in the
function.

Suggested-by: Sean Christopherson
Reported-by: kernel test robot
Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/tdp_mmu.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index d78915019b08..db2936cca4bf 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -301,11 +301,16 @@ static void tdp_mmu_unlink_page(struct kvm *kvm, struct kvm_mmu_page *sp,
  *
  * Given a page table that has been removed from the TDP paging structure,
  * iterates through the page table to clear SPTEs and free child page tables.
+ *
+ * Note that pt is passed in as a tdp_ptep_t, but it does not need RCU
+ * protection. Since this thread removed it from the paging structure,
+ * this thread will be responsible for ensuring the page is freed. Hence the
+ * early rcu_dereferences in the function.
  */
-static void handle_removed_tdp_mmu_page(struct kvm *kvm, u64 *pt,
+static void handle_removed_tdp_mmu_page(struct kvm *kvm, tdp_ptep_t pt,
                                         bool shared)
 {
-        struct kvm_mmu_page *sp = sptep_to_sp(pt);
+        struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(pt));
         int level = sp->role.level;
         gfn_t base_gfn = sp->gfn;
         u64 old_child_spte;
@@ -318,7 +323,7 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, u64 *pt,
         tdp_mmu_unlink_page(kvm, sp, shared);
 
         for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
-                sptep = pt + i;
+                sptep = rcu_dereference(pt) + i;
                 gfn = base_gfn + (i * KVM_PAGES_PER_HPAGE(level - 1));
 
                 if (shared) {
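
For reference, the warning class being fixed throughout this series comes
from Sparse's address-space checking: a pointer declared __rcu may only be
dereferenced after rcu_dereference() (or a variant) strips the annotation,
and an implicit cast to a plain pointer silently discards it instead. A
minimal standalone sketch of that rule, using hypothetical demo_* names
rather than KVM's:

    #include <linux/rcupdate.h>
    #include <linux/types.h>

    /* Analogous to tdp_ptep_t, i.e. a u64 pointer annotated __rcu. */
    typedef u64 __rcu *demo_ptep_t;

    static u64 demo_read_entry(demo_ptep_t pt, int i)
    {
            u64 val;

            rcu_read_lock();
            /*
             * rcu_dereference() returns a plain u64 * that Sparse accepts;
             * dereferencing pt directly, or passing it to a function that
             * takes a u64 *, would warn about the dropped __rcu annotation.
             */
            val = READ_ONCE(rcu_dereference(pt)[i]);
            rcu_read_unlock();

            return val;
    }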
From patchwork Mon Mar 15 18:26:41 2021
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12140403
Subject: [PATCH v2 2/4] KVM: x86/mmu: Fix RCU usage when atomically zapping SPTEs
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Peter Shier, Jim Mattson,
 Ben Gardon, kernel test robot
Date: Mon, 15 Mar 2021 11:26:41 -0700
Message-Id: <20210315182643.2437374-3-bgardon@google.com>
In-Reply-To: <20210315182643.2437374-1-bgardon@google.com>
References: <20210315182643.2437374-1-bgardon@google.com>
X-Mailing-List: kvm@vger.kernel.org

Fix a missing rcu_dereference in tdp_mmu_zap_spte_atomic.

Reported-by: kernel test robot
Signed-off-by: Ben Gardon
Reviewed-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index db2936cca4bf..946da74e069c 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -543,7 +543,7 @@ static inline bool tdp_mmu_zap_spte_atomic(struct kvm *kvm,
          * here since the SPTE is going from non-present
          * to non-present.
          */
-        WRITE_ONCE(*iter->sptep, 0);
+        WRITE_ONCE(*rcu_dereference(iter->sptep), 0);
 
         return true;
 }
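
The detail worth noting here is that Sparse checks the address space of
the pointer being written through, not the value being stored, so even a
store of 0 to an SPTE that is already non-present needs the
rcu_dereference(). A tiny sketch with a hypothetical demo_ helper:

    #include <linux/rcupdate.h>
    #include <linux/types.h>

    typedef u64 __rcu *demo_ptep_t;

    static void demo_zap_entry(demo_ptep_t sptep)
    {
            /*
             * WRITE_ONCE(*sptep, 0) would trigger the same Sparse warning;
             * the __rcu annotation must be stripped before the pointer is
             * dereferenced, regardless of the value being written.
             */
            WRITE_ONCE(*rcu_dereference(sptep), 0);
    }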
From patchwork Mon Mar 15 18:26:42 2021
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12140405
Subject: [PATCH v2 3/4] KVM: x86/mmu: Factor out tdp_iter_return_to_root
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Peter Shier, Jim Mattson, Ben Gardon
Date: Mon, 15 Mar 2021 11:26:42 -0700
Message-Id: <20210315182643.2437374-4-bgardon@google.com>
In-Reply-To: <20210315182643.2437374-1-bgardon@google.com>
References: <20210315182643.2437374-1-bgardon@google.com>
X-Mailing-List: kvm@vger.kernel.org

In tdp_mmu_iter_cond_resched there is a call to tdp_iter_start which
causes the iterator to continue its walk over the paging structure from
the root. This is needed after a yield as the paging structure could have
been freed in the interim.

The tdp_iter_start call is not very clear and something of a hack. It
requires exposing tdp_iter fields not used elsewhere in tdp_mmu.c and the
effect is not obvious from the function name. Factor a more aptly named
function out of tdp_iter_start and call it from tdp_mmu_iter_cond_resched
and tdp_iter_start.

No functional change intended.

Signed-off-by: Ben Gardon
Reviewed-by: Sean Christopherson
---
 arch/x86/kvm/mmu/tdp_iter.c | 24 +++++++++++++++++-------
 arch/x86/kvm/mmu/tdp_iter.h |  1 +
 arch/x86/kvm/mmu/tdp_mmu.c  |  4 +---
 3 files changed, 19 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index e5f148106e20..f7f94ea65243 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -20,6 +20,21 @@ static gfn_t round_gfn_for_level(gfn_t gfn, int level)
         return gfn & -KVM_PAGES_PER_HPAGE(level);
 }
 
+/*
+ * Return the TDP iterator to the root PT and allow it to continue its
+ * traversal over the paging structure from there.
+ */
+void tdp_iter_restart(struct tdp_iter *iter)
+{
+        iter->yielded_gfn = iter->next_last_level_gfn;
+        iter->level = iter->root_level;
+
+        iter->gfn = round_gfn_for_level(iter->next_last_level_gfn, iter->level);
+        tdp_iter_refresh_sptep(iter);
+
+        iter->valid = true;
+}
+
 /*
  * Sets a TDP iterator to walk a pre-order traversal of the paging structure
  * rooted at root_pt, starting with the walk to translate next_last_level_gfn.
@@ -31,16 +46,11 @@ void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
         WARN_ON(root_level > PT64_ROOT_MAX_LEVEL);
 
         iter->next_last_level_gfn = next_last_level_gfn;
-        iter->yielded_gfn = iter->next_last_level_gfn;
         iter->root_level = root_level;
         iter->min_level = min_level;
-        iter->level = root_level;
-        iter->pt_path[iter->level - 1] = (tdp_ptep_t)root_pt;
+        iter->pt_path[iter->root_level - 1] = (tdp_ptep_t)root_pt;
 
-        iter->gfn = round_gfn_for_level(iter->next_last_level_gfn, iter->level);
-        tdp_iter_refresh_sptep(iter);
-
-        iter->valid = true;
+        tdp_iter_restart(iter);
 }
 
 /*
diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index 4cc177d75c4a..8eb424d17c91 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -63,5 +63,6 @@ void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
                     int min_level, gfn_t next_last_level_gfn);
 void tdp_iter_next(struct tdp_iter *iter);
 tdp_ptep_t tdp_iter_root_pt(struct tdp_iter *iter);
+void tdp_iter_restart(struct tdp_iter *iter);
 
 #endif /* __KVM_X86_MMU_TDP_ITER_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 946da74e069c..38b6b6936171 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -664,9 +664,7 @@ static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
 
                 WARN_ON(iter->gfn > iter->next_last_level_gfn);
 
-                tdp_iter_start(iter, iter->pt_path[iter->root_level - 1],
-                               iter->root_level, iter->min_level,
-                               iter->next_last_level_gfn);
+                tdp_iter_restart(iter);
 
                 return true;
         }
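
To make the intent of the new helper concrete, the caller after this
change has roughly the shape sketched below. The demo_ wrapper and its
locking details are illustrative, not the series' code; the point is that
once the MMU lock has been dropped and reacquired, any page table cached
in iter->pt_path may have been freed, so the only safe option is to
resume the walk from the root via tdp_iter_restart().

    #include <linux/kvm_host.h>
    #include <linux/sched.h>

    #include "tdp_iter.h"

    static bool demo_iter_cond_resched(struct kvm *kvm, struct tdp_iter *iter)
    {
            if (!need_resched() && !rwlock_needbreak(&kvm->mmu_lock))
                    return false;

            cond_resched_rwlock_write(&kvm->mmu_lock);

            /* Cached parent page tables may be stale; re-walk from the root. */
            tdp_iter_restart(iter);

            return true;
    }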
From patchwork Mon Mar 15 18:26:43 2021
X-Patchwork-Submitter: Ben Gardon
X-Patchwork-Id: 12140407
Subject: [PATCH v2 4/4] KVM: x86/mmu: Store the address space ID in the TDP iterator
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Peter Shier, Jim Mattson,
 Ben Gardon, kernel test robot
Date: Mon, 15 Mar 2021 11:26:43 -0700
Message-Id: <20210315182643.2437374-5-bgardon@google.com>
In-Reply-To: <20210315182643.2437374-1-bgardon@google.com>
References: <20210315182643.2437374-1-bgardon@google.com>
X-Mailing-List: kvm@vger.kernel.org

Store the address space ID in the TDP iterator so that it can be
retrieved without having to bounce through the root shadow page. This
streamlines the code and fixes a Sparse warning about not properly using
rcu_dereference() when grabbing the ID from the root on the fly.

Reported-by: kernel test robot
Signed-off-by: Sean Christopherson
Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu/mmu_internal.h |  5 +++++
 arch/x86/kvm/mmu/tdp_iter.c     |  6 +-----
 arch/x86/kvm/mmu/tdp_iter.h     |  3 ++-
 arch/x86/kvm/mmu/tdp_mmu.c      | 23 +++++------------------
 4 files changed, 13 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index ec4fc28b325a..1f6f98c76bdf 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -78,6 +78,11 @@ static inline struct kvm_mmu_page *sptep_to_sp(u64 *sptep)
         return to_shadow_page(__pa(sptep));
 }
 
+static inline int kvm_mmu_page_as_id(struct kvm_mmu_page *sp)
+{
+        return sp->role.smm ? 1 : 0;
+}
+
 static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu)
 {
         /*
diff --git a/arch/x86/kvm/mmu/tdp_iter.c b/arch/x86/kvm/mmu/tdp_iter.c
index f7f94ea65243..b3ed302c1a35 100644
--- a/arch/x86/kvm/mmu/tdp_iter.c
+++ b/arch/x86/kvm/mmu/tdp_iter.c
@@ -49,6 +49,7 @@ void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
         iter->root_level = root_level;
         iter->min_level = min_level;
         iter->pt_path[iter->root_level - 1] = (tdp_ptep_t)root_pt;
+        iter->as_id = kvm_mmu_page_as_id(sptep_to_sp(root_pt));
 
         tdp_iter_restart(iter);
 }
@@ -169,8 +170,3 @@ void tdp_iter_next(struct tdp_iter *iter)
         iter->valid = false;
 }
 
-tdp_ptep_t tdp_iter_root_pt(struct tdp_iter *iter)
-{
-        return iter->pt_path[iter->root_level - 1];
-}
-
diff --git a/arch/x86/kvm/mmu/tdp_iter.h b/arch/x86/kvm/mmu/tdp_iter.h
index 8eb424d17c91..b1748b988d3a 100644
--- a/arch/x86/kvm/mmu/tdp_iter.h
+++ b/arch/x86/kvm/mmu/tdp_iter.h
@@ -36,6 +36,8 @@ struct tdp_iter {
         int min_level;
         /* The iterator's current level within the paging structure */
         int level;
+        /* The address space ID, i.e. SMM vs. regular. */
+        int as_id;
         /* A snapshot of the value at sptep */
         u64 old_spte;
         /*
@@ -62,7 +64,6 @@ tdp_ptep_t spte_to_child_pt(u64 pte, int level);
 void tdp_iter_start(struct tdp_iter *iter, u64 *root_pt, int root_level,
                     int min_level, gfn_t next_last_level_gfn);
 void tdp_iter_next(struct tdp_iter *iter);
-tdp_ptep_t tdp_iter_root_pt(struct tdp_iter *iter);
 void tdp_iter_restart(struct tdp_iter *iter);
 
 #endif /* __KVM_X86_MMU_TDP_ITER_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 38b6b6936171..462b1f71c77f 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -203,11 +203,6 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
                                 u64 old_spte, u64 new_spte, int level,
                                 bool shared);
 
-static int kvm_mmu_page_as_id(struct kvm_mmu_page *sp)
-{
-        return sp->role.smm ? 1 : 0;
-}
-
 static void handle_changed_spte_acc_track(u64 old_spte, u64 new_spte, int level)
 {
         bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
@@ -497,10 +492,6 @@ static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm,
                                            struct tdp_iter *iter,
                                            u64 new_spte)
 {
-        u64 *root_pt = tdp_iter_root_pt(iter);
-        struct kvm_mmu_page *root = sptep_to_sp(root_pt);
-        int as_id = kvm_mmu_page_as_id(root);
-
         lockdep_assert_held_read(&kvm->mmu_lock);
 
         /*
@@ -514,8 +505,8 @@ static inline bool tdp_mmu_set_spte_atomic(struct kvm *kvm,
                       new_spte) != iter->old_spte)
                 return false;
 
-        handle_changed_spte(kvm, as_id, iter->gfn, iter->old_spte, new_spte,
-                            iter->level, true);
+        handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
+                            new_spte, iter->level, true);
 
         return true;
 }
@@ -569,10 +560,6 @@ static inline void __tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
                                       u64 new_spte, bool record_acc_track,
                                       bool record_dirty_log)
 {
-        tdp_ptep_t root_pt = tdp_iter_root_pt(iter);
-        struct kvm_mmu_page *root = sptep_to_sp(root_pt);
-        int as_id = kvm_mmu_page_as_id(root);
-
         lockdep_assert_held_write(&kvm->mmu_lock);
 
         /*
@@ -586,13 +573,13 @@ static inline void __tdp_mmu_set_spte(struct kvm *kvm, struct tdp_iter *iter,
 
         WRITE_ONCE(*rcu_dereference(iter->sptep), new_spte);
 
-        __handle_changed_spte(kvm, as_id, iter->gfn, iter->old_spte, new_spte,
-                              iter->level, false);
+        __handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
+                              new_spte, iter->level, false);
         if (record_acc_track)
                 handle_changed_spte_acc_track(iter->old_spte, new_spte,
                                               iter->level);
         if (record_dirty_log)
-                handle_changed_spte_dirty_log(kvm, as_id, iter->gfn,
+                handle_changed_spte_dirty_log(kvm, iter->as_id, iter->gfn,
                                               iter->old_spte, new_spte,
                                               iter->level);
 }
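
As a usage note, the as_id stored in the iterator is the same value used
to index KVM's per-address-space memslot arrays (SMM vs. non-SMM on x86),
so consumers of the iterator can resolve memslots without touching the
root shadow page at all. A sketch with a hypothetical demo_ helper, not
part of the series:

    #include <linux/kvm_host.h>

    #include "tdp_iter.h"

    /* Look up the memslot backing the gfn the iterator currently points at. */
    static struct kvm_memory_slot *demo_iter_memslot(struct kvm *kvm,
                                                     struct tdp_iter *iter)
    {
            struct kvm_memslots *slots = __kvm_memslots(kvm, iter->as_id);

            return __gfn_to_memslot(slots, iter->gfn);
    }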