From patchwork Wed Dec 15 16:12:18 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12696298
Date: Wed, 15 Dec 2021 16:12:18 +0000
Subject: [PATCH v4 01/14] KVM: arm64: Provide {get, put}_page() stubs for early hyp allocator
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
    Catalin Marinas, Will Deacon
Cc: qperret@google.com, qwandor@google.com,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, kernel-team@android.com
Message-Id: <20211215161232.1480836-2-qperret@google.com>
In-Reply-To: <20211215161232.1480836-1-qperret@google.com>

In nVHE protected mode, the EL2 code uses a temporary allocator during boot
while re-creating its stage-1 page-table. Unfortunately, the hyp_vmemmap is
not ready to use at this stage, so refcounting pages is not possible. That is
not currently a problem because hyp stage-1 mappings are never removed, which
implies refcounting of page-table pages is unnecessary.

In preparation for allowing hypervisor stage-1 mappings to be removed, provide
stub implementations for {get,put}_page() in the early allocator.

Acked-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/nvhe/early_alloc.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/early_alloc.c b/arch/arm64/kvm/hyp/nvhe/early_alloc.c
index 1306c430ab87..00de04153cc6 100644
--- a/arch/arm64/kvm/hyp/nvhe/early_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/early_alloc.c
@@ -43,6 +43,9 @@ void *hyp_early_alloc_page(void *arg)
 	return hyp_early_alloc_contig(1);
 }
 
+static void hyp_early_alloc_get_page(void *addr) { }
+static void hyp_early_alloc_put_page(void *addr) { }
+
 void hyp_early_alloc_init(void *virt, unsigned long size)
 {
 	base = cur = (unsigned long)virt;
@@ -51,4 +54,6 @@ void hyp_early_alloc_init(void *virt, unsigned long size)
 	hyp_early_alloc_mm_ops.zalloc_page = hyp_early_alloc_page;
 	hyp_early_alloc_mm_ops.phys_to_virt = hyp_phys_to_virt;
 	hyp_early_alloc_mm_ops.virt_to_phys = hyp_virt_to_phys;
+	hyp_early_alloc_mm_ops.get_page = hyp_early_alloc_get_page;
+	hyp_early_alloc_mm_ops.put_page = hyp_early_alloc_put_page;
 }
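
For illustration, a minimal standalone C sketch (not part of the patch) of why
the stubs are needed: the generic page-table walker invokes ->get_page() and
->put_page() unconditionally through whatever mm_ops it is given, so an
allocator that cannot refcount must still provide no-op callbacks rather than
leave the function pointers NULL. The struct and function names below are
hypothetical stand-ins for kvm_pgtable_mm_ops and its users.

/* Hypothetical sketch mirroring the shape of struct kvm_pgtable_mm_ops. */
struct pgtable_ops {
	void *(*zalloc_page)(void *arg);
	void (*get_page)(void *addr);
	void (*put_page)(void *addr);
};

/* No hyp_vmemmap yet, so there is nothing to refcount: do nothing. */
static void noop_get_page(void *addr) { }
static void noop_put_page(void *addr) { }

static void install_entry(struct pgtable_ops *ops, void *ptep)
{
	/* The walker calls back unconditionally, hence the stubs must exist. */
	ops->get_page(ptep);
}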
From patchwork Wed Dec 15 16:12:19 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12696299
Date: Wed, 15 Dec 2021 16:12:19 +0000
Subject: [PATCH v4 02/14] KVM: arm64: Refcount hyp stage-1 pgtable pages
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
    Catalin Marinas, Will Deacon
Cc: qperret@google.com, qwandor@google.com,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, kernel-team@android.com
Message-Id: <20211215161232.1480836-3-qperret@google.com>
In-Reply-To: <20211215161232.1480836-1-qperret@google.com>

To prepare the ground for allowing hyp stage-1 mappings to be removed at
run-time, update the KVM page-table code to maintain a correct refcount using
the ->{get,put}_page() function callbacks.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/pgtable.c | 39 ++++++++++++++++++------------------
 1 file changed, 19 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index f8ceebe4982e..e50e9158fc56 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -383,21 +383,6 @@ enum kvm_pgtable_prot kvm_pgtable_hyp_pte_prot(kvm_pte_t pte)
 	return prot;
 }
 
-static bool hyp_pte_needs_update(kvm_pte_t old, kvm_pte_t new)
-{
-	/*
-	 * Tolerate KVM recreating the exact same mapping, or changing software
-	 * bits if the existing mapping was valid.
-	 */
-	if (old == new)
-		return false;
-
-	if (!kvm_pte_valid(old))
-		return true;
-
-	return !WARN_ON((old ^ new) & ~KVM_PTE_LEAF_ATTR_HI_SW);
-}
-
 static bool hyp_map_walker_try_leaf(u64 addr, u64 end, u32 level,
 				    kvm_pte_t *ptep, struct hyp_map_data *data)
 {
@@ -407,11 +392,16 @@ static bool hyp_map_walker_try_leaf(u64 addr, u64 end, u32 level,
 	if (!kvm_block_mapping_supported(addr, end, phys, level))
 		return false;
 
+	data->phys += granule;
 	new = kvm_init_valid_leaf_pte(phys, data->attr, level);
-	if (hyp_pte_needs_update(old, new))
-		smp_store_release(ptep, new);
+	if (old == new)
+		return true;
+	if (!kvm_pte_valid(old))
+		data->mm_ops->get_page(ptep);
+	else if (WARN_ON((old ^ new) & ~KVM_PTE_LEAF_ATTR_HI_SW))
+		return false;
 
-	data->phys += granule;
+	smp_store_release(ptep, new);
 	return true;
 }
 
@@ -433,6 +423,7 @@ static int hyp_map_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 		return -ENOMEM;
 
 	kvm_set_table_pte(ptep, childp, mm_ops);
+	mm_ops->get_page(ptep);
 	return 0;
 }
 
@@ -482,8 +473,16 @@ static int hyp_free_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 			   enum kvm_pgtable_walk_flags flag, void * const arg)
 {
 	struct kvm_pgtable_mm_ops *mm_ops = arg;
+	kvm_pte_t pte = *ptep;
+
+	if (!kvm_pte_valid(pte))
+		return 0;
+
+	mm_ops->put_page(ptep);
+
+	if (kvm_pte_table(pte, level))
+		mm_ops->put_page(kvm_pte_follow(pte, mm_ops));
 
-	mm_ops->put_page((void *)kvm_pte_follow(*ptep, mm_ops));
 	return 0;
 }
 
@@ -491,7 +490,7 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt)
 {
 	struct kvm_pgtable_walker walker = {
 		.cb	= hyp_free_walker,
-		.flags	= KVM_PGTABLE_WALK_TABLE_POST,
+		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
 		.arg	= pgt->mm_ops,
 	};
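
As a rough illustration of the invariant these callbacks maintain (a
standalone sketch, not the kernel implementation: struct pt_page and the two
helpers below are hypothetical, with a plain integer standing in for the
hyp_vmemmap refcount), a page-table page takes one reference per valid entry
installed in it and loses one per valid entry cleared, so a page whose count
drops to zero is no longer needed.

/* Hypothetical sketch of the refcount bookkeeping. */
struct pt_page {
	int refcount;	/* stands in for the hyp_vmemmap page refcount */
};

static void on_pte_installed(struct pt_page *table)
{
	table->refcount++;	/* hyp_map_walker_try_leaf(): mm_ops->get_page(ptep) */
}

static void on_pte_cleared(struct pt_page *table)
{
	table->refcount--;	/* hyp_free_walker(): mm_ops->put_page(ptep) */
	if (table->refcount == 0) {
		/* No valid entries left: the page-table page can be freed. */
	}
}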
From patchwork Wed Dec 15 16:12:20 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12696300
Date: Wed, 15 Dec 2021 16:12:20 +0000
Subject: [PATCH v4 03/14] KVM: arm64: Fixup hyp stage-1 refcount
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
    Catalin Marinas, Will Deacon
Cc: qperret@google.com, qwandor@google.com,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, kernel-team@android.com
Message-Id: <20211215161232.1480836-4-qperret@google.com>
In-Reply-To: <20211215161232.1480836-1-qperret@google.com>

In nVHE-protected mode, the hyp stage-1 page-table refcount is broken due to
the lack of refcount support in the early allocator. Fix-up the refcount in
the finalize walker, once the 'hyp_vmemmap' is up and running.

Acked-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/nvhe/setup.c | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index e31149965204..ab44e004e6d3 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -166,6 +166,7 @@ static int finalize_host_mappings_walker(u64 addr, u64 end, u32 level,
 					 enum kvm_pgtable_walk_flags flag,
 					 void * const arg)
 {
+	struct kvm_pgtable_mm_ops *mm_ops = arg;
 	enum kvm_pgtable_prot prot;
 	enum pkvm_page_state state;
 	kvm_pte_t pte = *ptep;
@@ -174,6 +175,15 @@ static int finalize_host_mappings_walker(u64 addr, u64 end, u32 level,
 	if (!kvm_pte_valid(pte))
 		return 0;
 
+	/*
+	 * Fix-up the refcount for the page-table pages as the early allocator
+	 * was unable to access the hyp_vmemmap and so the buddy allocator has
+	 * initialised the refcount to '1'.
+	 */
+	mm_ops->get_page(ptep);
+	if (flag != KVM_PGTABLE_WALK_LEAF)
+		return 0;
+
 	if (level != (KVM_PGTABLE_MAX_LEVELS - 1))
 		return -EINVAL;
 
@@ -206,7 +216,8 @@ static int finalize_host_mappings(void)
 {
 	struct kvm_pgtable_walker walker = {
 		.cb	= finalize_host_mappings_walker,
-		.flags	= KVM_PGTABLE_WALK_LEAF,
+		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
+		.arg	= pkvm_pgtable.mm_ops,
 	};
 	int i, ret;
 
@@ -241,10 +252,6 @@ void __noreturn __pkvm_init_finalise(void)
 	if (ret)
 		goto out;
 
-	ret = finalize_host_mappings();
-	if (ret)
-		goto out;
-
 	pkvm_pgtable_mm_ops = (struct kvm_pgtable_mm_ops) {
 		.zalloc_page = hyp_zalloc_hyp_page,
 		.phys_to_virt = hyp_phys_to_virt,
@@ -254,6 +261,10 @@ void __noreturn __pkvm_init_finalise(void)
 	};
 	pkvm_pgtable.mm_ops = &pkvm_pgtable_mm_ops;
 
+	ret = finalize_host_mappings();
+	if (ret)
+		goto out;
+
 out:
 	/*
 	 * We tail-called to here from handle___pkvm_init() and will not return,
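
For illustration, the ordering this change relies on, as a hypothetical
standalone sketch (install_final_mm_ops() and fixup_hyp_pgtable_refcounts()
are made-up names standing for the two steps in __pkvm_init_finalise()): the
fix-up walk is moved after the final mm_ops are installed so that its
->get_page() calls land on the hyp_vmemmap-backed refcounts rather than on the
early-allocator stubs.

/* Hypothetical stand-ins for the calls reordered in __pkvm_init_finalise(). */
static void install_final_mm_ops(void)
{
	/* pkvm_pgtable.mm_ops = &pkvm_pgtable_mm_ops; */
}

static void fixup_hyp_pgtable_refcounts(void)
{
	/* finalize_host_mappings(); */
}

static void init_finalise_sketch(void)
{
	install_final_mm_ops();		/* must come first */
	fixup_hyp_pgtable_refcounts();	/* then walk and take references */
}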
From patchwork Wed Dec 15 16:12:21 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12696301
Date: Wed, 15 Dec 2021 16:12:21 +0000
Subject: [PATCH v4 04/14] KVM: arm64: Hook up ->page_count() for hypervisor stage-1 page-table
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
    Catalin Marinas, Will Deacon
Cc: qperret@google.com, qwandor@google.com,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, kernel-team@android.com
Message-Id: <20211215161232.1480836-5-qperret@google.com>
In-Reply-To: <20211215161232.1480836-1-qperret@google.com>

From: Will Deacon

kvm_pgtable_hyp_unmap() relies on the ->page_count() function callback being
provided by the memory-management operations for the page-table.

Wire up this callback for the hypervisor stage-1 page-table.

Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/nvhe/setup.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index ab44e004e6d3..27af337f9fea 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -258,6 +258,7 @@ void __noreturn __pkvm_init_finalise(void)
 		.virt_to_phys = hyp_virt_to_phys,
 		.get_page = hpool_get_page,
 		.put_page = hpool_put_page,
+		.page_count = hyp_page_count,
 	};
 	pkvm_pgtable.mm_ops = &pkvm_pgtable_mm_ops;
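
For illustration, a standalone sketch (hypothetical names, not the in-tree
code) of the kind of decision ->page_count() enables in the unmap path added
by the next patch: a table page whose count is exactly 1 is referenced only by
its parent entry and can therefore be unlinked and freed.

/* Hypothetical sketch: why the unmap path wants a page_count() callback. */
struct page_meta {
	int refcount;
};

static int page_count(struct page_meta *p)
{
	return p->refcount;
}

static int can_free_table_page(struct page_meta *table_page)
{
	/* Count of 1: only the parent PTE references this table page. */
	return page_count(table_page) == 1;
}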
From patchwork Wed Dec 15 16:12:22 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12696302
Date: Wed, 15 Dec 2021 16:12:22 +0000
Subject: [PATCH v4 05/14] KVM: arm64: Implement kvm_pgtable_hyp_unmap() at EL2
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
    Catalin Marinas, Will Deacon
Cc: qperret@google.com, qwandor@google.com,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, kernel-team@android.com
Message-Id: <20211215161232.1480836-6-qperret@google.com>
In-Reply-To: <20211215161232.1480836-1-qperret@google.com>

From: Will Deacon

Implement kvm_pgtable_hyp_unmap() which can be used to remove hypervisor
stage-1 mappings at EL2.

Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pgtable.h | 21 ++++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 63 ++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 027783829584..9d076f36401d 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -251,6 +251,27 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt);
 int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
 			enum kvm_pgtable_prot prot);
 
+/**
+ * kvm_pgtable_hyp_unmap() - Remove a mapping from a hypervisor stage-1 page-table.
+ * @pgt:	Page-table structure initialised by kvm_pgtable_hyp_init().
+ * @addr:	Virtual address from which to remove the mapping.
+ * @size:	Size of the mapping.
+ *
+ * The offset of @addr within a page is ignored, @size is rounded-up to
+ * the next page boundary and @phys is rounded-down to the previous page
+ * boundary.
+ *
+ * TLB invalidation is performed for each page-table entry cleared during the
+ * unmapping operation and the reference count for the page-table page
+ * containing the cleared entry is decremented, with unreferenced pages being
+ * freed. The unmapping operation will stop early if it encounters either an
+ * invalid page-table entry or a valid block mapping which maps beyond the range
+ * being unmapped.
+ *
+ * Return: Number of bytes unmapped, which may be 0.
+ */
+u64 kvm_pgtable_hyp_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size);
+
 /**
  * kvm_get_vtcr() - Helper to construct VTCR_EL2
  * @mmfr0:	Sanitized value of SYS_ID_AA64MMFR0_EL1 register.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index e50e9158fc56..adc73f8cd24f 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -451,6 +451,69 @@ int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
 	return ret;
 }
 
+struct hyp_unmap_data {
+	u64				unmapped;
+	struct kvm_pgtable_mm_ops	*mm_ops;
+};
+
+static int hyp_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
+			    enum kvm_pgtable_walk_flags flag, void * const arg)
+{
+	kvm_pte_t pte = *ptep, *childp = NULL;
+	u64 granule = kvm_granule_size(level);
+	struct hyp_unmap_data *data = arg;
+	struct kvm_pgtable_mm_ops *mm_ops = data->mm_ops;
+
+	if (!kvm_pte_valid(pte))
+		return -EINVAL;
+
+	if (kvm_pte_table(pte, level)) {
+		childp = kvm_pte_follow(pte, mm_ops);
+
+		if (mm_ops->page_count(childp) != 1)
+			return 0;
+
+		kvm_clear_pte(ptep);
+		dsb(ishst);
+		__tlbi_level(vae2is, __TLBI_VADDR(addr, 0), level);
+	} else {
+		if (end - addr < granule)
+			return -EINVAL;
+
+		kvm_clear_pte(ptep);
+		dsb(ishst);
+		__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), level);
+		data->unmapped += granule;
+	}
+
+	dsb(ish);
+	isb();
+	mm_ops->put_page(ptep);
+
+	if (childp)
+		mm_ops->put_page(childp);
+
+	return 0;
+}
+
+u64 kvm_pgtable_hyp_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	struct hyp_unmap_data unmap_data = {
+		.mm_ops	= pgt->mm_ops,
+	};
+	struct kvm_pgtable_walker walker = {
+		.cb	= hyp_unmap_walker,
+		.arg	= &unmap_data,
+		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
+	};
+
+	if (!pgt->mm_ops->page_count)
+		return 0;
+
+	kvm_pgtable_walk(pgt, addr, size, &walker);
+	return unmap_data.unmapped;
+}
+
 int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 va_bits,
 			 struct kvm_pgtable_mm_ops *mm_ops)
 {
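
For illustration, roughly how a caller might use the new API; the helper below
is hypothetical (not part of this series) and assumes the declarations from
kvm_pgtable.h above. Since kvm_pgtable_hyp_unmap() returns the number of bytes
actually unmapped, a caller can detect a partial unmap:

/* Hypothetical caller, for illustration only. */
static int hyp_unmap_page_range(struct kvm_pgtable *pgt, u64 addr, u64 size)
{
	u64 unmapped = kvm_pgtable_hyp_unmap(pgt, addr, size);

	/* Less than @size means an invalid PTE or oversized block was hit. */
	return (unmapped != size) ? -EINVAL : 0;
}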
From patchwork Wed Dec 15 16:12:23 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12696303
Date: Wed, 15 Dec 2021 16:12:23 +0000
Subject: [PATCH v4 06/14] KVM: arm64: Introduce kvm_share_hyp()
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
    Catalin Marinas, Will Deacon
Cc: qperret@google.com, qwandor@google.com,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, kernel-team@android.com
Message-Id: <20211215161232.1480836-7-qperret@google.com>
In-Reply-To: <20211215161232.1480836-1-qperret@google.com>

The create_hyp_mappings() function can currently be called at any point in
time. However, its behaviour in protected mode changes widely depending on
when it is being called. Prior to KVM init, it is used to create the temporary
page-table used to bring-up the hypervisor, and later on it is transparently
turned into a 'share' hypercall when the kernel has lost control over the
hypervisor stage-1. In order to prepare the ground for also unsharing pages
with the hypervisor during guest teardown, introduce a kvm_share_hyp()
function to make it clear in which places a share hypercall should be
expected, as we will soon need a matching unshare hypercall in all those
places.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_mmu.h |  1 +
 arch/arm64/kvm/arm.c             |  4 ++--
 arch/arm64/kvm/fpsimd.c          |  2 +-
 arch/arm64/kvm/mmu.c             | 27 +++++++++++++++++++++------
 arch/arm64/kvm/reset.c           |  2 +-
 5 files changed, 26 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 02d378887743..185d0f62b724 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -150,6 +150,7 @@ static __always_inline unsigned long __kern_hyp_va(unsigned long v)
 #include 
 #include 
 
+int kvm_share_hyp(void *from, void *to);
 int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
 int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
 			   void __iomem **kaddr,
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 9b745d2bc89a..c202abb448b1 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -146,7 +146,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	if (ret)
 		return ret;
 
-	ret = create_hyp_mappings(kvm, kvm + 1, PAGE_HYP);
+	ret = kvm_share_hyp(kvm, kvm + 1);
 	if (ret)
 		goto out_free_stage2_pgd;
 
@@ -342,7 +342,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	if (err)
 		return err;
 
-	return create_hyp_mappings(vcpu, vcpu + 1, PAGE_HYP);
+	return kvm_share_hyp(vcpu, vcpu + 1);
 }
 
 void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 5526d79c7b47..86899d3aa9a9 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -30,7 +30,7 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
 	struct user_fpsimd_state *fpsimd = &current->thread.uw.fpsimd_state;
 
 	/* Make sure the host task fpsimd state is visible to hyp: */
-	ret = create_hyp_mappings(fpsimd, fpsimd + 1, PAGE_HYP);
+	ret = kvm_share_hyp(fpsimd, fpsimd + 1);
 	if (!ret)
 		vcpu->arch.host_fpsimd_state = kern_hyp_va(fpsimd);
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index ea840fa223b5..872137fb5e0f 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -299,6 +299,25 @@ static int pkvm_share_hyp(phys_addr_t start, phys_addr_t end)
 	return 0;
 }
 
+int kvm_share_hyp(void *from, void *to)
+{
+	if (is_kernel_in_hyp_mode())
+		return 0;
+
+	/*
+	 * The share hcall maps things in the 'fixed-offset' region of the hyp
+	 * VA space, so we can only share physically contiguous data-structures
+	 * for now.
+	 */
+	if (is_vmalloc_or_module_addr(from) || is_vmalloc_or_module_addr(to))
+		return -EINVAL;
+
+	if (kvm_host_owns_hyp_mappings())
+		return create_hyp_mappings(from, to, PAGE_HYP);
+
+	return pkvm_share_hyp(__pa(from), __pa(to));
+}
+
 /**
  * create_hyp_mappings - duplicate a kernel virtual address range in Hyp mode
  * @from:	The virtual kernel start address of the range
@@ -319,12 +338,8 @@ int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot)
 	if (is_kernel_in_hyp_mode())
 		return 0;
 
-	if (!kvm_host_owns_hyp_mappings()) {
-		if (WARN_ON(prot != PAGE_HYP))
-			return -EPERM;
-		return pkvm_share_hyp(kvm_kaddr_to_phys(from),
-				      kvm_kaddr_to_phys(to));
-	}
+	if (!kvm_host_owns_hyp_mappings())
+		return -EPERM;
 
 	start = start & PAGE_MASK;
 	end = PAGE_ALIGN(end);
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index c7a0249df840..e3e2a79fbd75 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -113,7 +113,7 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu)
 	if (!buf)
 		return -ENOMEM;
 
-	ret = create_hyp_mappings(buf, buf + reg_sz, PAGE_HYP);
+	ret = kvm_share_hyp(buf, buf + reg_sz);
 	if (ret) {
 		kfree(buf);
 		return ret;
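
For illustration, the usage pattern kvm_share_hyp() is meant for, mirroring
the call sites converted above; the struct and helper below are hypothetical:

/* Hypothetical example of sharing a physically contiguous object with EL2. */
struct my_data {
	unsigned long val;
};

static int share_my_data(struct my_data *obj)
{
	/*
	 * The object must come from a physically contiguous allocation such
	 * as kmalloc(): vmalloc or module addresses are rejected with -EINVAL.
	 */
	return kvm_share_hyp(obj, obj + 1);
}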
From patchwork Wed Dec 15 16:12:24 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12696304
Date: Wed, 15 Dec 2021 16:12:24 +0000
Subject: [PATCH v4 07/14] KVM: arm64: pkvm: Refcount the pages shared with EL2
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
    Catalin Marinas, Will Deacon
Cc: qperret@google.com, qwandor@google.com,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, kernel-team@android.com
Message-Id: <20211215161232.1480836-8-qperret@google.com>
In-Reply-To: <20211215161232.1480836-1-qperret@google.com>

In order to simplify the page tracking infrastructure at EL2 in nVHE protected
mode, move the responsibility of refcounting pages that are shared multiple
times on the host. In order to do so, let's create a red-black tree tracking
all the PFNs that have been shared, along with a refcount.

Acked-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/mmu.c | 78 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 68 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 872137fb5e0f..f26d83e3aa00 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -284,23 +284,72 @@ static phys_addr_t kvm_kaddr_to_phys(void *kaddr)
 	}
 }
 
-static int pkvm_share_hyp(phys_addr_t start, phys_addr_t end)
+struct hyp_shared_pfn {
+	u64 pfn;
+	int count;
+	struct rb_node node;
+};
+
+static DEFINE_MUTEX(hyp_shared_pfns_lock);
+static struct rb_root hyp_shared_pfns = RB_ROOT;
+
+static struct hyp_shared_pfn *find_shared_pfn(u64 pfn, struct rb_node ***node,
+					      struct rb_node **parent)
 {
-	phys_addr_t addr;
-	int ret;
+	struct hyp_shared_pfn *this;
+
+	*node = &hyp_shared_pfns.rb_node;
+	*parent = NULL;
+	while (**node) {
+		this = container_of(**node, struct hyp_shared_pfn, node);
+		*parent = **node;
+		if (this->pfn < pfn)
+			*node = &((**node)->rb_left);
+		else if (this->pfn > pfn)
+			*node = &((**node)->rb_right);
+		else
+			return this;
+	}
 
-	for (addr = ALIGN_DOWN(start, PAGE_SIZE); addr < end; addr += PAGE_SIZE) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_share_hyp,
-					__phys_to_pfn(addr));
-		if (ret)
-			return ret;
+	return NULL;
+}
+
+static int share_pfn_hyp(u64 pfn)
+{
+	struct rb_node **node, *parent;
+	struct hyp_shared_pfn *this;
+	int ret = 0;
+
+	mutex_lock(&hyp_shared_pfns_lock);
+	this = find_shared_pfn(pfn, &node, &parent);
+	if (this) {
+		this->count++;
+		goto unlock;
 	}
 
-	return 0;
+	this = kzalloc(sizeof(*this), GFP_KERNEL);
+	if (!this) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+
+	this->pfn = pfn;
+	this->count = 1;
+	rb_link_node(&this->node, parent, node);
+	rb_insert_color(&this->node, &hyp_shared_pfns);
+	ret = kvm_call_hyp_nvhe(__pkvm_host_share_hyp, pfn, 1);
+unlock:
+	mutex_unlock(&hyp_shared_pfns_lock);
+
+	return ret;
 }
 
 int kvm_share_hyp(void *from, void *to)
 {
+	phys_addr_t start, end, cur;
+	u64 pfn;
+	int ret;
+
 	if (is_kernel_in_hyp_mode())
 		return 0;
 
@@ -315,7 +364,16 @@ int kvm_share_hyp(void *from, void *to)
 	if (kvm_host_owns_hyp_mappings())
 		return create_hyp_mappings(from, to, PAGE_HYP);
 
-	return pkvm_share_hyp(__pa(from), __pa(to));
+	start = ALIGN_DOWN(__pa(from), PAGE_SIZE);
+	end = PAGE_ALIGN(__pa(to));
+	for (cur = start; cur < end; cur += PAGE_SIZE) {
+		pfn = __phys_to_pfn(cur);
+		ret = share_pfn_hyp(pfn);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
 }
 
 /**
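
For illustration, the effect of the host-side refcounting on repeated shares;
the sequence below is hypothetical, not taken from the series, and assumes the
64-byte object does not cross a page boundary. Only the first share of a given
PFN reaches EL2, later shares just bump the count in the rb-tree:

/* Hypothetical sequence showing the host-side refcounting of a shared PFN. */
static void share_same_page_twice(char *obj)
{
	kvm_share_hyp(obj, obj + 64);	/* count 0 -> 1: __pkvm_host_share_hyp issued */
	kvm_share_hyp(obj, obj + 64);	/* count 1 -> 2: no hypercall, rb-tree only   */
}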
From patchwork Wed Dec 15 16:12:25 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12696305
Date: Wed, 15 Dec 2021 16:12:25 +0000
Subject: [PATCH v4 08/14] KVM: arm64: Extend pkvm_page_state enumeration to handle absent pages
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
    Catalin Marinas, Will Deacon
Cc: qperret@google.com, qwandor@google.com,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, kernel-team@android.com
Message-Id: <20211215161232.1480836-9-qperret@google.com>
In-Reply-To: <20211215161232.1480836-1-qperret@google.com>

From: Will Deacon

Explicitly name the combination of SW0 | SW1 as reserved in the pte and
introduce a new PKVM_NOPAGE meta-state which, although not directly stored in
the software bits of the pte, can be used to represent an entry for which
there is no underlying page. This is distinct from an invalid pte, as stage-2
identity mappings for the host are created lazily and so an invalid pte there
is the same as a valid mapping for the purposes of ownership information.

This state will be used for permission checking during page transitions in
later patches.

Reviewed-by: Andrew Walbran
Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index b58c910babaf..56445586c755 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -24,6 +24,11 @@ enum pkvm_page_state {
 	PKVM_PAGE_OWNED			= 0ULL,
 	PKVM_PAGE_SHARED_OWNED		= KVM_PGTABLE_PROT_SW0,
 	PKVM_PAGE_SHARED_BORROWED	= KVM_PGTABLE_PROT_SW1,
+	__PKVM_PAGE_RESERVED		= KVM_PGTABLE_PROT_SW0 |
+					  KVM_PGTABLE_PROT_SW1,
+
+	/* Meta-states which aren't encoded directly in the PTE's SW bits */
+	PKVM_NOPAGE,
 };
 
 #define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
From patchwork Wed Dec 15 16:12:26 2021
Date: Wed, 15 Dec 2021 16:12:26 +0000
In-Reply-To: <20211215161232.1480836-1-qperret@google.com>
Message-Id: <20211215161232.1480836-10-qperret@google.com>
Subject: [PATCH v4 09/14] KVM: arm64: Introduce wrappers for host and hyp spin lock accessors
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon
Cc: qperret@google.com, qwandor@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kernel-team@android.com

From: Will Deacon

In preparation for adding additional locked sections for manipulating
page-tables at EL2, introduce some simple wrappers around the host and
hypervisor locks so that it's a bit easier to read and a bit more
difficult to take the wrong lock (or even take them in the wrong
order).
Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 32 ++++++++++++++++++++++-----
 1 file changed, 26 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 92262e89672d..907d3cbf1809 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -28,6 +28,26 @@ static struct hyp_pool host_s2_pool;
 
 const u8 pkvm_hyp_id = 1;
 
+static void host_lock_component(void)
+{
+	hyp_spin_lock(&host_kvm.lock);
+}
+
+static void host_unlock_component(void)
+{
+	hyp_spin_unlock(&host_kvm.lock);
+}
+
+static void hyp_lock_component(void)
+{
+	hyp_spin_lock(&pkvm_pgd_lock);
+}
+
+static void hyp_unlock_component(void)
+{
+	hyp_spin_unlock(&pkvm_pgd_lock);
+}
+
 static void *host_s2_zalloc_pages_exact(size_t size)
 {
 	void *addr = hyp_alloc_pages(&host_s2_pool, get_order(size));
@@ -339,14 +359,14 @@ static int host_stage2_idmap(u64 addr)
 
 	prot = is_memory ? PKVM_HOST_MEM_PROT : PKVM_HOST_MMIO_PROT;
 
-	hyp_spin_lock(&host_kvm.lock);
+	host_lock_component();
 	ret = host_stage2_adjust_range(addr, &range);
 	if (ret)
 		goto unlock;
 
 	ret = host_stage2_idmap_locked(range.start, range.end - range.start, prot);
 unlock:
-	hyp_spin_unlock(&host_kvm.lock);
+	host_unlock_component();
 
 	return ret;
 }
@@ -370,8 +390,8 @@ int __pkvm_host_share_hyp(u64 pfn)
 	if (!addr_is_memory(addr))
 		return -EINVAL;
 
-	hyp_spin_lock(&host_kvm.lock);
-	hyp_spin_lock(&pkvm_pgd_lock);
+	host_lock_component();
+	hyp_lock_component();
 
 	ret = kvm_pgtable_get_leaf(&host_kvm.pgt, addr, &pte, NULL);
 	if (ret)
@@ -433,8 +453,8 @@ int __pkvm_host_share_hyp(u64 pfn)
 	BUG_ON(ret);
 
 unlock:
-	hyp_spin_unlock(&pkvm_pgd_lock);
-	hyp_spin_unlock(&host_kvm.lock);
+	hyp_unlock_component();
+	host_unlock_component();
 
 	return ret;
 }

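The point of the wrappers is easier to see in a reduced form. The
following user-space sketch (pthreads instead of hyp spinlocks; every
name here is invented for the example) shows the same idiom: each
component's lock sits behind a named helper, and every locked section
spells out the host-then-hyp order that the later patches rely on:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t host_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t hyp_lock  = PTHREAD_MUTEX_INITIALIZER;

/* Named accessors, mirroring host_lock_component() and friends. */
static void host_lock_component(void)   { pthread_mutex_lock(&host_lock); }
static void host_unlock_component(void) { pthread_mutex_unlock(&host_lock); }
static void hyp_lock_component(void)    { pthread_mutex_lock(&hyp_lock); }
static void hyp_unlock_component(void)  { pthread_mutex_unlock(&hyp_lock); }

static void update_both_page_tables(void)
{
	/* Lock order: host first, then hyp; release in reverse. */
	host_lock_component();
	hyp_lock_component();

	puts("both component locks held");

	hyp_unlock_component();
	host_unlock_component();
}

int main(void)
{
	update_both_page_tables();
	return 0;
}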
From patchwork Wed Dec 15 16:12:27 2021
Date: Wed, 15 Dec 2021 16:12:27 +0000
In-Reply-To: <20211215161232.1480836-1-qperret@google.com>
Message-Id: <20211215161232.1480836-11-qperret@google.com>
Subject: [PATCH v4 10/14] KVM: arm64: Implement do_share() helper for sharing memory
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon
Cc: qperret@google.com, qwandor@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kernel-team@android.com

From: Will Deacon

By default, protected KVM isolates memory pages so that they are
accessible only to their owner: be it the host kernel, the hypervisor
at EL2 or (in future) the guest. Establishing shared-memory regions
between these components therefore involves a transition for each page
so that the owner can share memory with a borrower under a certain set
of permissions.

Introduce a do_share() helper for safely sharing a memory region
between two components.
Currently, only host-to-hyp sharing is implemented, but the code is
easily extended to handle other combinations and the permission checks
for each component are reusable.

Reviewed-by: Andrew Walbran
Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 237 ++++++++++++++++++++++++++
 1 file changed, 237 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 907d3cbf1809..666278632fed 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -472,3 +472,240 @@ void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
 	ret = host_stage2_idmap(addr);
 	BUG_ON(ret && ret != -EAGAIN);
 }
+
+/* This corresponds to locking order */
+enum pkvm_component_id {
+	PKVM_ID_HOST,
+	PKVM_ID_HYP,
+};
+
+struct pkvm_mem_transition {
+	u64			nr_pages;
+
+	struct {
+		enum pkvm_component_id	id;
+		/* Address in the initiator's address space */
+		u64			addr;
+
+		union {
+			struct {
+				/* Address in the completer's address space */
+				u64	completer_addr;
+			} host;
+		};
+	} initiator;
+
+	struct {
+		enum pkvm_component_id	id;
+	} completer;
+};
+
+struct pkvm_mem_share {
+	const struct pkvm_mem_transition	tx;
+	const enum kvm_pgtable_prot		completer_prot;
+};
+
+struct check_walk_data {
+	enum pkvm_page_state	desired;
+	enum pkvm_page_state	(*get_page_state)(kvm_pte_t pte);
+};
+
+static int __check_page_state_visitor(u64 addr, u64 end, u32 level,
+				      kvm_pte_t *ptep,
+				      enum kvm_pgtable_walk_flags flag,
+				      void * const arg)
+{
+	struct check_walk_data *d = arg;
+	kvm_pte_t pte = *ptep;
+
+	if (kvm_pte_valid(pte) && !addr_is_memory(kvm_pte_to_phys(pte)))
+		return -EINVAL;
+
+	return d->get_page_state(pte) == d->desired ? 0 : -EPERM;
+}
+
+static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
+				  struct check_walk_data *data)
+{
+	struct kvm_pgtable_walker walker = {
+		.cb	= __check_page_state_visitor,
+		.arg	= data,
+		.flags	= KVM_PGTABLE_WALK_LEAF,
+	};
+
+	return kvm_pgtable_walk(pgt, addr, size, &walker);
+}
+
+static enum pkvm_page_state host_get_page_state(kvm_pte_t pte)
+{
+	if (!kvm_pte_valid(pte) && pte)
+		return PKVM_NOPAGE;
+
+	return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte));
+}
+
+static int __host_check_page_state_range(u64 addr, u64 size,
+					 enum pkvm_page_state state)
+{
+	struct check_walk_data d = {
+		.desired	= state,
+		.get_page_state	= host_get_page_state,
+	};
+
+	hyp_assert_lock_held(&host_kvm.lock);
+	return check_page_state_range(&host_kvm.pgt, addr, size, &d);
+}
+
+static int __host_set_page_state_range(u64 addr, u64 size,
+				       enum pkvm_page_state state)
+{
+	enum kvm_pgtable_prot prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, state);
+
+	return host_stage2_idmap_locked(addr, size, prot);
+}
+
+static int host_request_owned_transition(u64 *completer_addr,
+					 const struct pkvm_mem_transition *tx)
+{
+	u64 size = tx->nr_pages * PAGE_SIZE;
+	u64 addr = tx->initiator.addr;
+
+	*completer_addr = tx->initiator.host.completer_addr;
+	return __host_check_page_state_range(addr, size, PKVM_PAGE_OWNED);
+}
+
+static int host_initiate_share(u64 *completer_addr,
+			       const struct pkvm_mem_transition *tx)
+{
+	u64 size = tx->nr_pages * PAGE_SIZE;
+	u64 addr = tx->initiator.addr;
+
+	*completer_addr = tx->initiator.host.completer_addr;
+	return __host_set_page_state_range(addr, size, PKVM_PAGE_SHARED_OWNED);
+}
+
+static enum pkvm_page_state hyp_get_page_state(kvm_pte_t pte)
+{
+	if (!kvm_pte_valid(pte))
+		return PKVM_NOPAGE;
+
+	return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte));
+}
+
+static int __hyp_check_page_state_range(u64 addr, u64 size,
+					enum pkvm_page_state state)
+{
+	struct check_walk_data d = {
+		.desired	= state,
+		.get_page_state	= hyp_get_page_state,
+	};
+
+	hyp_assert_lock_held(&pkvm_pgd_lock);
+	return check_page_state_range(&pkvm_pgtable, addr, size, &d);
+}
+
+static bool __hyp_ack_skip_pgtable_check(const struct pkvm_mem_transition *tx)
+{
+	return !(IS_ENABLED(CONFIG_NVHE_EL2_DEBUG) ||
+		 tx->initiator.id != PKVM_ID_HOST);
+}
+
+static int hyp_ack_share(u64 addr, const struct pkvm_mem_transition *tx,
+			 enum kvm_pgtable_prot perms)
+{
+	u64 size = tx->nr_pages * PAGE_SIZE;
+
+	if (perms != PAGE_HYP)
+		return -EPERM;
+
+	if (__hyp_ack_skip_pgtable_check(tx))
+		return 0;
+
+	return __hyp_check_page_state_range(addr, size, PKVM_NOPAGE);
+}
+
+static int hyp_complete_share(u64 addr, const struct pkvm_mem_transition *tx,
+			      enum kvm_pgtable_prot perms)
+{
+	void *start = (void *)addr, *end = start + (tx->nr_pages * PAGE_SIZE);
+	enum kvm_pgtable_prot prot;
+
+	prot = pkvm_mkstate(perms, PKVM_PAGE_SHARED_BORROWED);
+	return pkvm_create_mappings_locked(start, end, prot);
+}
+
+static int check_share(struct pkvm_mem_share *share)
+{
+	const struct pkvm_mem_transition *tx = &share->tx;
+	u64 completer_addr;
+	int ret;
+
+	switch (tx->initiator.id) {
+	case PKVM_ID_HOST:
+		ret = host_request_owned_transition(&completer_addr, tx);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	if (ret)
+		return ret;
+
+	switch (tx->completer.id) {
+	case PKVM_ID_HYP:
+		ret = hyp_ack_share(completer_addr, tx, share->completer_prot);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static int __do_share(struct pkvm_mem_share *share)
+{
+	const struct pkvm_mem_transition *tx = &share->tx;
+	u64 completer_addr;
+	int ret;
+
+	switch (tx->initiator.id) {
+	case PKVM_ID_HOST:
+		ret = host_initiate_share(&completer_addr, tx);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	if (ret)
+		return ret;
+
+	switch (tx->completer.id) {
+	case PKVM_ID_HYP:
+		ret = hyp_complete_share(completer_addr, tx, share->completer_prot);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+/*
+ * do_share():
+ *
+ * The page owner grants access to another component with a given set
+ * of permissions.
+ *
+ * Initiator: OWNED	=> SHARED_OWNED
+ * Completer: NOPAGE	=> SHARED_BORROWED
+ */
+static int do_share(struct pkvm_mem_share *share)
+{
+	int ret;
+
+	ret = check_share(share);
+	if (ret)
+		return ret;
+
+	return WARN_ON(__do_share(share));
+}

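The share path is essentially a two-phase state machine: check_share()
validates that every page is in the expected state on both sides, and
only then does __do_share() commit the transition, which is why the
commit is wrapped in WARN_ON(), as it is not expected to fail after a
successful check. Below is a stand-alone toy model of that state
machine (user-space C, not the kernel code; the two-field struct stands
in for the host stage-2 and hyp stage-1 views of a single page, and the
function names only echo the kernel ones):

#include <assert.h>
#include <stdio.h>

enum state { NOPAGE, OWNED, SHARED_OWNED, SHARED_BORROWED };

struct page_view { enum state host, hyp; };

static int check_share(const struct page_view *p)
{
	if (p->host != OWNED)
		return -1;	/* host_request_owned_transition() would fail */
	if (p->hyp != NOPAGE)
		return -1;	/* hyp_ack_share() would fail */
	return 0;
}

static int do_share(struct page_view *p)
{
	if (check_share(p))
		return -1;
	p->host = SHARED_OWNED;		/* host_initiate_share() */
	p->hyp = SHARED_BORROWED;	/* hyp_complete_share() */
	return 0;
}

int main(void)
{
	struct page_view p = { .host = OWNED, .hyp = NOPAGE };

	assert(do_share(&p) == 0);
	assert(p.host == SHARED_OWNED && p.hyp == SHARED_BORROWED);
	assert(do_share(&p) != 0);	/* double-share is refused */
	puts("share transition ok");
	return 0;
}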
From patchwork Wed Dec 15 16:12:28 2021
Date: Wed, 15 Dec 2021 16:12:28 +0000
In-Reply-To: <20211215161232.1480836-1-qperret@google.com>
Message-Id: <20211215161232.1480836-12-qperret@google.com>
Subject: [PATCH v4 11/14] KVM: arm64: Implement __pkvm_host_share_hyp() using do_share()
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon
Cc: qperret@google.com, qwandor@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kernel-team@android.com

From: Will Deacon

__pkvm_host_share_hyp() shares memory between the host and the
hypervisor, so implement it as an invocation of the new do_share()
mechanism.

Note that double-sharing is no longer permitted (as this allows us to
reduce the number of page-table walks significantly), but is thankfully
no longer relied upon by the host.

Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 121 +++++++-------------------
 1 file changed, 33 insertions(+), 88 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 666278632fed..14823e318585 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -371,94 +371,6 @@ static int host_stage2_idmap(u64 addr)
 	return ret;
 }
 
-static inline bool check_prot(enum kvm_pgtable_prot prot,
-			      enum kvm_pgtable_prot required,
-			      enum kvm_pgtable_prot denied)
-{
-	return (prot & (required | denied)) == required;
-}
-
-int __pkvm_host_share_hyp(u64 pfn)
-{
-	phys_addr_t addr = hyp_pfn_to_phys(pfn);
-	enum kvm_pgtable_prot prot, cur;
-	void *virt = __hyp_va(addr);
-	enum pkvm_page_state state;
-	kvm_pte_t pte;
-	int ret;
-
-	if (!addr_is_memory(addr))
-		return -EINVAL;
-
-	host_lock_component();
-	hyp_lock_component();
-
-	ret = kvm_pgtable_get_leaf(&host_kvm.pgt, addr, &pte, NULL);
-	if (ret)
-		goto unlock;
-	if (!pte)
-		goto map_shared;
-
-	/*
-	 * Check attributes in the host stage-2 PTE. We need the page to be:
-	 *  - mapped RWX as we're sharing memory;
-	 *  - not borrowed, as that implies absence of ownership.
-	 * Otherwise, we can't let it got through
-	 */
-	cur = kvm_pgtable_stage2_pte_prot(pte);
-	prot = pkvm_mkstate(0, PKVM_PAGE_SHARED_BORROWED);
-	if (!check_prot(cur, PKVM_HOST_MEM_PROT, prot)) {
-		ret = -EPERM;
-		goto unlock;
-	}
-
-	state = pkvm_getstate(cur);
-	if (state == PKVM_PAGE_OWNED)
-		goto map_shared;
-
-	/*
-	 * Tolerate double-sharing the same page, but this requires
-	 * cross-checking the hypervisor stage-1.
-	 */
-	if (state != PKVM_PAGE_SHARED_OWNED) {
-		ret = -EPERM;
-		goto unlock;
-	}
-
-	ret = kvm_pgtable_get_leaf(&pkvm_pgtable, (u64)virt, &pte, NULL);
-	if (ret)
-		goto unlock;
-
-	/*
-	 * If the page has been shared with the hypervisor, it must be
-	 * already mapped as SHARED_BORROWED in its stage-1.
-	 */
-	cur = kvm_pgtable_hyp_pte_prot(pte);
-	prot = pkvm_mkstate(PAGE_HYP, PKVM_PAGE_SHARED_BORROWED);
-	if (!check_prot(cur, prot, ~prot))
-		ret = -EPERM;
-	goto unlock;
-
-map_shared:
-	/*
-	 * If the page is not yet shared, adjust mappings in both page-tables
-	 * while both locks are held.
-	 */
-	prot = pkvm_mkstate(PAGE_HYP, PKVM_PAGE_SHARED_BORROWED);
-	ret = pkvm_create_mappings_locked(virt, virt + PAGE_SIZE, prot);
-	BUG_ON(ret);
-
-	prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_OWNED);
-	ret = host_stage2_idmap_locked(addr, PAGE_SIZE, prot);
-	BUG_ON(ret);
-
-unlock:
-	hyp_unlock_component();
-	host_unlock_component();
-
-	return ret;
-}
-
 void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt)
 {
 	struct kvm_vcpu_fault_info fault;
@@ -709,3 +621,36 @@ static int do_share(struct pkvm_mem_share *share)
 
 	return WARN_ON(__do_share(share));
 }
+
+int __pkvm_host_share_hyp(u64 pfn)
+{
+	int ret;
+	u64 host_addr = hyp_pfn_to_phys(pfn);
+	u64 hyp_addr = (u64)__hyp_va(host_addr);
+	struct pkvm_mem_share share = {
+		.tx = {
+			.nr_pages = 1,
+			.initiator = {
+				.id = PKVM_ID_HOST,
+				.addr = host_addr,
+				.host = {
+					.completer_addr = hyp_addr,
+				},
+			},
+			.completer = {
+				.id = PKVM_ID_HYP,
+			},
+		},
+		.completer_prot = PAGE_HYP,
+	};
+
+	host_lock_component();
+	hyp_lock_component();
+
+	ret = do_share(&share);
+
+	hyp_unlock_component();
+	host_unlock_component();
+
+	return ret;
+}

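Double-sharing can be dropped at EL2 because the kernel side of this
series reference-counts shared pfns instead, so the hypercall is only
issued for the first share and the last unshare of a page (the teardown
patch later in this thread shows the unshare half of that bookkeeping
in unshare_pfn_hyp()). A stand-alone sketch of that idea, with a plain
array standing in for the rb-tree and the hypercalls stubbed out with
printf(); all names here are invented:

#include <assert.h>
#include <stdio.h>

#define MAX_PFNS 16

/* Toy refcount table; the kernel keys an rb-tree by pfn instead. */
static unsigned int share_count[MAX_PFNS];

/* Stand-ins for the share/unshare hypercalls to EL2. */
static void hyp_share(unsigned long pfn)   { printf("share hypercall, pfn %lu\n", pfn); }
static void hyp_unshare(unsigned long pfn) { printf("unshare hypercall, pfn %lu\n", pfn); }

static void share_pfn(unsigned long pfn)
{
	if (share_count[pfn]++ == 0)
		hyp_share(pfn);		/* only the first user talks to EL2 */
}

static void unshare_pfn(unsigned long pfn)
{
	assert(share_count[pfn] > 0);
	if (--share_count[pfn] == 0)
		hyp_unshare(pfn);	/* only the last user talks to EL2 */
}

int main(void)
{
	share_pfn(3);
	share_pfn(3);	/* second share: refcount only, no hypercall */
	unshare_pfn(3);
	unshare_pfn(3);	/* last unshare: hypercall returns the page */
	return 0;
}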
From patchwork Wed Dec 15 16:12:29 2021
Date: Wed, 15 Dec 2021 16:12:29 +0000
In-Reply-To: <20211215161232.1480836-1-qperret@google.com>
Message-Id: <20211215161232.1480836-13-qperret@google.com>
Subject: [PATCH v4 12/14] KVM: arm64: Implement do_unshare() helper for unsharing memory
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon
Cc: qperret@google.com, qwandor@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kernel-team@android.com

From: Will Deacon

Tearing down a previously shared memory region results in the borrower
losing access to the underlying pages and returning them to the "owned"
state in the owner.

Implement a do_unshare() helper, along the same lines as do_share(), to
provide this functionality for the host-to-hyp case.
Reviewed-by: Andrew Walbran
Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c | 115 ++++++++++++++++++++++++++
 1 file changed, 115 insertions(+)

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 14823e318585..2b23b2db8d4a 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -486,6 +486,16 @@ static int host_request_owned_transition(u64 *completer_addr,
 	return __host_check_page_state_range(addr, size, PKVM_PAGE_OWNED);
 }
 
+static int host_request_unshare(u64 *completer_addr,
+				const struct pkvm_mem_transition *tx)
+{
+	u64 size = tx->nr_pages * PAGE_SIZE;
+	u64 addr = tx->initiator.addr;
+
+	*completer_addr = tx->initiator.host.completer_addr;
+	return __host_check_page_state_range(addr, size, PKVM_PAGE_SHARED_OWNED);
+}
+
 static int host_initiate_share(u64 *completer_addr,
 			       const struct pkvm_mem_transition *tx)
 {
@@ -496,6 +506,16 @@ static int host_initiate_share(u64 *completer_addr,
 	return __host_set_page_state_range(addr, size, PKVM_PAGE_SHARED_OWNED);
 }
 
+static int host_initiate_unshare(u64 *completer_addr,
+				 const struct pkvm_mem_transition *tx)
+{
+	u64 size = tx->nr_pages * PAGE_SIZE;
+	u64 addr = tx->initiator.addr;
+
+	*completer_addr = tx->initiator.host.completer_addr;
+	return __host_set_page_state_range(addr, size, PKVM_PAGE_OWNED);
+}
+
 static enum pkvm_page_state hyp_get_page_state(kvm_pte_t pte)
 {
 	if (!kvm_pte_valid(pte))
@@ -536,6 +556,17 @@ static int hyp_ack_share(u64 addr, const struct pkvm_mem_transition *tx,
 	return __hyp_check_page_state_range(addr, size, PKVM_NOPAGE);
 }
 
+static int hyp_ack_unshare(u64 addr, const struct pkvm_mem_transition *tx)
+{
+	u64 size = tx->nr_pages * PAGE_SIZE;
+
+	if (__hyp_ack_skip_pgtable_check(tx))
+		return 0;
+
+	return __hyp_check_page_state_range(addr, size,
+					    PKVM_PAGE_SHARED_BORROWED);
+}
+
 static int hyp_complete_share(u64 addr, const struct pkvm_mem_transition *tx,
 			      enum kvm_pgtable_prot perms)
 {
@@ -546,6 +577,14 @@ static int hyp_complete_share(u64 addr, const struct pkvm_mem_transition *tx,
 	return pkvm_create_mappings_locked(start, end, prot);
 }
 
+static int hyp_complete_unshare(u64 addr, const struct pkvm_mem_transition *tx)
+{
+	u64 size = tx->nr_pages * PAGE_SIZE;
+	int ret = kvm_pgtable_hyp_unmap(&pkvm_pgtable, addr, size);
+
+	return (ret != size) ? -EFAULT : 0;
+}
+
 static int check_share(struct pkvm_mem_share *share)
 {
 	const struct pkvm_mem_transition *tx = &share->tx;
@@ -622,6 +661,82 @@ static int do_share(struct pkvm_mem_share *share)
 	return WARN_ON(__do_share(share));
 }
 
+static int check_unshare(struct pkvm_mem_share *share)
+{
+	const struct pkvm_mem_transition *tx = &share->tx;
+	u64 completer_addr;
+	int ret;
+
+	switch (tx->initiator.id) {
+	case PKVM_ID_HOST:
+		ret = host_request_unshare(&completer_addr, tx);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	if (ret)
+		return ret;
+
+	switch (tx->completer.id) {
+	case PKVM_ID_HYP:
+		ret = hyp_ack_unshare(completer_addr, tx);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+static int __do_unshare(struct pkvm_mem_share *share)
+{
+	const struct pkvm_mem_transition *tx = &share->tx;
+	u64 completer_addr;
+	int ret;
+
+	switch (tx->initiator.id) {
+	case PKVM_ID_HOST:
+		ret = host_initiate_unshare(&completer_addr, tx);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	if (ret)
+		return ret;
+
+	switch (tx->completer.id) {
+	case PKVM_ID_HYP:
+		ret = hyp_complete_unshare(completer_addr, tx);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+/*
+ * do_unshare():
+ *
+ * The page owner revokes access from another component for a range of
+ * pages which were previously shared using do_share().
+ *
+ * Initiator: SHARED_OWNED	=> OWNED
+ * Completer: SHARED_BORROWED	=> NOPAGE
+ */
+static int do_unshare(struct pkvm_mem_share *share)
+{
+	int ret;
+
+	ret = check_unshare(share);
+	if (ret)
+		return ret;
+
+	return WARN_ON(__do_unshare(share));
+}
+
 int __pkvm_host_share_hyp(u64 pfn)
 {
 	int ret;

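Extending the earlier toy model (again plain user-space C with invented
names, not the kernel code), the unshare path is the exact mirror
image: check that the page is currently SHARED_OWNED on the host side
and SHARED_BORROWED on the hyp side, then return it to OWNED and
NOPAGE:

#include <assert.h>

enum state { NOPAGE, OWNED, SHARED_OWNED, SHARED_BORROWED };

struct page_view { enum state host, hyp; };

static int do_unshare(struct page_view *p)
{
	/* check_unshare(): host_request_unshare() + hyp_ack_unshare() */
	if (p->host != SHARED_OWNED || p->hyp != SHARED_BORROWED)
		return -1;

	/* __do_unshare(): host_initiate_unshare() + hyp_complete_unshare() */
	p->host = OWNED;
	p->hyp = NOPAGE;
	return 0;
}

int main(void)
{
	struct page_view p = { .host = SHARED_OWNED, .hyp = SHARED_BORROWED };

	assert(do_unshare(&p) == 0);
	assert(p.host == OWNED && p.hyp == NOPAGE);
	assert(do_unshare(&p) != 0);	/* unsharing twice is refused */
	return 0;
}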
From patchwork Wed Dec 15 16:12:30 2021
Date: Wed, 15 Dec 2021 16:12:30 +0000
In-Reply-To: <20211215161232.1480836-1-qperret@google.com>
Message-Id: <20211215161232.1480836-14-qperret@google.com>
Subject: [PATCH v4 13/14] KVM: arm64: Expose unshare hypercall to the host
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon
Cc: qperret@google.com, qwandor@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kernel-team@android.com

From: Will Deacon

Introduce an unshare hypercall which can be used to unmap memory from
the hypervisor stage-1 in nVHE protected mode. This will be useful to
update the EL2 ownership state of pages during guest teardown, and
avoids keeping dangling mappings to unreferenced portions of memory.
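The new hypercall slots into the same table-driven dispatch as the
existing pKVM hypercalls: the host passes a function ID and its
arguments in registers, and EL2 indexes a handler table and writes the
return value back into a register. A stand-alone sketch of that
pattern follows (a plain function-pointer array and a fake register
file; none of these names are the kernel's):

#include <stdio.h>

typedef unsigned long ulong;

/* Fake "CPU context": reg[0] carries the hypercall ID, reg[1] carries
 * the argument on entry and the return value on exit, loosely like
 * DECLARE_REG()/cpu_reg() in hyp-main.c. */
struct ctxt { ulong reg[4]; };

enum func_id { FN_SHARE_HYP, FN_UNSHARE_HYP, NR_FUNCS };

static ulong do_share_hyp(ulong pfn)   { printf("share pfn %lu\n", pfn);   return 0; }
static ulong do_unshare_hyp(ulong pfn) { printf("unshare pfn %lu\n", pfn); return 0; }

static void handle_share(struct ctxt *c)   { c->reg[1] = do_share_hyp(c->reg[1]); }
static void handle_unshare(struct ctxt *c) { c->reg[1] = do_unshare_hyp(c->reg[1]); }

/* Handler table indexed by function ID, like host_hcall[]/HANDLE_FUNC(). */
static void (*const hcall[NR_FUNCS])(struct ctxt *) = {
	[FN_SHARE_HYP]   = handle_share,
	[FN_UNSHARE_HYP] = handle_unshare,
};

int main(void)
{
	struct ctxt c = { .reg = { FN_UNSHARE_HYP, 42 } };

	hcall[c.reg[0]](&c);
	printf("ret = %lu\n", c.reg[1]);
	return 0;
}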
Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            |  8 +++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 33 +++++++++++++++++++
 4 files changed, 43 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 50d5e4de244c..d5b0386ef765 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -63,6 +63,7 @@ enum __kvm_host_smccc_func {
 
 	/* Hypercalls available after pKVM finalisation */
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 56445586c755..80e99836eac7 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -55,6 +55,7 @@ extern const u8 pkvm_hyp_id;
 
 int __pkvm_prot_finalize(void);
 int __pkvm_host_share_hyp(u64 pfn);
+int __pkvm_host_unshare_hyp(u64 pfn);
 
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index b096bf009144..5e2197db0d32 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -147,6 +147,13 @@ static void handle___pkvm_host_share_hyp(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = __pkvm_host_share_hyp(pfn);
 }
 
+static void handle___pkvm_host_unshare_hyp(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, pfn, host_ctxt, 1);
+
+	cpu_reg(host_ctxt, 1) = __pkvm_host_unshare_hyp(pfn);
+}
+
 static void handle___pkvm_create_private_mapping(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(phys_addr_t, phys, host_ctxt, 1);
@@ -184,6 +191,7 @@ static const hcall_t host_hcall[] = {
 
 	HANDLE_FUNC(__pkvm_prot_finalize),
 	HANDLE_FUNC(__pkvm_host_share_hyp),
+	HANDLE_FUNC(__pkvm_host_unshare_hyp),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 2b23b2db8d4a..16776d1d6151 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -769,3 +769,36 @@ int __pkvm_host_share_hyp(u64 pfn)
 
 	return ret;
 }
+
+int __pkvm_host_unshare_hyp(u64 pfn)
+{
+	int ret;
+	u64 host_addr = hyp_pfn_to_phys(pfn);
+	u64 hyp_addr = (u64)__hyp_va(host_addr);
+	struct pkvm_mem_share share = {
+		.tx = {
+			.nr_pages = 1,
+			.initiator = {
+				.id = PKVM_ID_HOST,
+				.addr = host_addr,
+				.host = {
+					.completer_addr = hyp_addr,
+				},
+			},
+			.completer = {
+				.id = PKVM_ID_HYP,
+			},
+		},
+		.completer_prot = PAGE_HYP,
+	};
+
+	host_lock_component();
+	hyp_lock_component();
+
+	ret = do_unshare(&share);
+
+	hyp_unlock_component();
+	host_unlock_component();
+
+	return ret;
+}

From patchwork Wed Dec 15 16:12:31 2021
Date: Wed, 15 Dec 2021 16:12:31 +0000
In-Reply-To: <20211215161232.1480836-1-qperret@google.com>
Message-Id: <20211215161232.1480836-15-qperret@google.com>
Subject: [PATCH v4 14/14] KVM: arm64: pkvm: Unshare guest structs during teardown
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Will Deacon
Cc: qperret@google.com, qwandor@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org, kernel-team@android.com
Make use of the newly introduced unshare hypercall during guest
teardown to unmap guest-related data structures from the hyp stage-1.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_host.h |  2 ++
 arch/arm64/include/asm/kvm_mmu.h  |  1 +
 arch/arm64/kvm/arm.c              |  2 ++
 arch/arm64/kvm/fpsimd.c           | 34 ++++++++++++++++++++++---
 arch/arm64/kvm/mmu.c              | 42 +++++++++++++++++++++++++++++++
 arch/arm64/kvm/reset.c            |  8 +++++-
 6 files changed, 85 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index cf858a7e3533..9360a2804df1 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -321,6 +321,7 @@ struct kvm_vcpu_arch {
 	struct kvm_guest_debug_arch external_debug_state;
 
 	struct user_fpsimd_state *host_fpsimd_state;	/* hyp VA */
+	struct task_struct *parent_task;
 
 	struct {
 		/* {Break,watch}point registers */
@@ -737,6 +738,7 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu);
+void kvm_vcpu_unshare_task_fp(struct kvm_vcpu *vcpu);
 
 static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
 {
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 185d0f62b724..81839e9a8a24 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -151,6 +151,7 @@ static __always_inline unsigned long __kern_hyp_va(unsigned long v)
 #include
 
 int kvm_share_hyp(void *from, void *to);
+void kvm_unshare_hyp(void *from, void *to);
 int create_hyp_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
 int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
 			   void __iomem **kaddr,
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c202abb448b1..6057f3c5aafe 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -188,6 +188,8 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 		}
 	}
 	atomic_set(&kvm->online_vcpus, 0);
+
+	kvm_unshare_hyp(kvm, kvm + 1);
 }
 
 int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 86899d3aa9a9..2f48fd362a8c 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -14,6 +14,19 @@
 #include
 #include
 
+void kvm_vcpu_unshare_task_fp(struct kvm_vcpu *vcpu)
+{
+	struct task_struct *p = vcpu->arch.parent_task;
+	struct user_fpsimd_state *fpsimd;
+
+	if (!is_protected_kvm_enabled() || !p)
+		return;
+
+	fpsimd = &p->thread.uw.fpsimd_state;
+	kvm_unshare_hyp(fpsimd, fpsimd + 1);
+	put_task_struct(p);
+}
+
 /*
  * Called on entry to KVM_RUN unless this vcpu previously ran at least
  * once and the most recent prior KVM_RUN for this vcpu was called from
@@ -29,12 +42,27 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
 
 	struct user_fpsimd_state *fpsimd = &current->thread.uw.fpsimd_state;
 
+	kvm_vcpu_unshare_task_fp(vcpu);
+
 	/* Make sure the host task fpsimd state is visible to hyp: */
 	ret = kvm_share_hyp(fpsimd, fpsimd + 1);
-	if (!ret)
-		vcpu->arch.host_fpsimd_state = kern_hyp_va(fpsimd);
+	if (ret)
+		return ret;
+
+	vcpu->arch.host_fpsimd_state = kern_hyp_va(fpsimd);
+
+	/*
+	 * We need to keep current's task_struct pinned until its data has been
+	 * unshared with the hypervisor to make sure it is not re-used by the
+	 * kernel and donated to someone else while already shared -- see
+	 * kvm_vcpu_unshare_task_fp() for the matching put_task_struct().
+	 */
+	if (is_protected_kvm_enabled()) {
+		get_task_struct(current);
+		vcpu->arch.parent_task = current;
+	}
 
-	return ret;
+	return 0;
 }
 
 /*
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index f26d83e3aa00..b53e5bc3f4c3 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -344,6 +344,32 @@ static int share_pfn_hyp(u64 pfn)
 	return ret;
 }
 
+static int unshare_pfn_hyp(u64 pfn)
+{
+	struct rb_node **node, *parent;
+	struct hyp_shared_pfn *this;
+	int ret = 0;
+
+	mutex_lock(&hyp_shared_pfns_lock);
+	this = find_shared_pfn(pfn, &node, &parent);
+	if (WARN_ON(!this)) {
+		ret = -ENOENT;
+		goto unlock;
+	}
+
+	this->count--;
+	if (this->count)
+		goto unlock;
+
+	rb_erase(&this->node, &hyp_shared_pfns);
+	kfree(this);
+	ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, pfn, 1);
+unlock:
+	mutex_unlock(&hyp_shared_pfns_lock);
+
+	return ret;
+}
+
 int kvm_share_hyp(void *from, void *to)
 {
 	phys_addr_t start, end, cur;
@@ -376,6 +402,22 @@ int kvm_share_hyp(void *from, void *to)
 	return 0;
 }
 
+void kvm_unshare_hyp(void *from, void *to)
+{
+	phys_addr_t start, end, cur;
+	u64 pfn;
+
+	if (is_kernel_in_hyp_mode() || kvm_host_owns_hyp_mappings() || !from)
+		return;
+
+	start = ALIGN_DOWN(__pa(from), PAGE_SIZE);
+	end = PAGE_ALIGN(__pa(to));
+	for (cur = start; cur < end; cur += PAGE_SIZE) {
+		pfn = __phys_to_pfn(cur);
+		WARN_ON(unshare_pfn_hyp(pfn));
+	}
+}
+
 /**
  * create_hyp_mappings - duplicate a kernel virtual address range in Hyp mode
  * @from:	The virtual kernel start address of the range
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index e3e2a79fbd75..798a84eddbde 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -150,7 +150,13 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu)
 
 void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
-	kfree(vcpu->arch.sve_state);
+	void *sve_state = vcpu->arch.sve_state;
+
+	kvm_vcpu_unshare_task_fp(vcpu);
+	kvm_unshare_hyp(vcpu, vcpu + 1);
+	if (sve_state)
+		kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu));
+	kfree(sve_state);
 }
 
 static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu)
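The get_task_struct()/put_task_struct() pairing in the fpsimd changes
is the usual pin-while-shared pattern: the task's fpsimd state stays
mapped at EL2 until teardown, so the task itself must not be freed
before it has been unshared. A small stand-alone model of that rule
(plain reference counts instead of task_structs; every name here is
invented for illustration):

#include <assert.h>
#include <stdlib.h>

struct task {
	int refcount;
	int fpsimd_shared_with_hyp;
};

static struct task *task_get(struct task *t) { t->refcount++; return t; }

static void task_put(struct task *t)
{
	if (--t->refcount == 0) {
		/* Nobody may free a task whose data is still mapped at EL2. */
		assert(!t->fpsimd_shared_with_hyp);
		free(t);
	}
}

/* Mirrors kvm_arch_vcpu_run_map_fp(): share the state, pin the task. */
static struct task *vcpu_map_fp(struct task *current)
{
	current->fpsimd_shared_with_hyp = 1;
	return task_get(current);		/* vcpu->arch.parent_task */
}

/* Mirrors kvm_vcpu_unshare_task_fp(): unshare, then drop the pin. */
static void vcpu_unshare_task_fp(struct task *parent)
{
	parent->fpsimd_shared_with_hyp = 0;
	task_put(parent);
}

int main(void)
{
	struct task *current = calloc(1, sizeof(*current));
	struct task *parent;

	current->refcount = 1;			/* the task's own reference */
	parent = vcpu_map_fp(current);
	vcpu_unshare_task_fp(parent);		/* guest teardown */
	task_put(current);			/* the task exits last */
	return 0;
}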