From patchwork Tue Jun 8 11:45:12 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12306661
Date: Tue, 8 Jun 2021 11:45:12 +0000
Message-Id: <20210608114518.748712-2-qperret@google.com>
In-Reply-To: <20210608114518.748712-1-qperret@google.com>
References: <20210608114518.748712-1-qperret@google.com>
Subject: [PATCH v3 1/7] KVM: arm64: Move hyp_pool locking out of refcount helpers
From: Quentin Perret
To: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, catalin.marinas@arm.com, suzuki.poulose@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kernel-team@android.com, linux-kernel@vger.kernel.org

The hyp_page refcount helpers currently rely on the hyp_pool lock for
serialization. However, this means the refcounts can't be changed from
the buddy allocator core, which already holds the lock, so pages have
to go through odd transient states. For example, when a page is freed,
its refcount is set to 0, and the lock is transiently released before
the page can be attached to a free list in the buddy tree. This is
currently harmless as the allocator checks the list node of each page
to see whether it is available for allocation, but it means the page
refcount can't be trusted to represent the state of the page even when
the pool lock is held.

To fix this, remove the pool locking from the refcount helpers and
move all the logic to the buddy allocator. This will simplify the
removal of the list node from struct hyp_page in a later patch.
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h | 35 ----------------------
 arch/arm64/kvm/hyp/nvhe/page_alloc.c  | 43 ++++++++++++++++++++-------
 2 files changed, 32 insertions(+), 46 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index 18a4494337bd..f2c84e4fa40f 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -22,41 +22,6 @@ struct hyp_pool {
 	unsigned int max_order;
 };
 
-static inline void hyp_page_ref_inc(struct hyp_page *p)
-{
-	struct hyp_pool *pool = hyp_page_to_pool(p);
-
-	hyp_spin_lock(&pool->lock);
-	p->refcount++;
-	hyp_spin_unlock(&pool->lock);
-}
-
-static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
-{
-	struct hyp_pool *pool = hyp_page_to_pool(p);
-	int ret;
-
-	hyp_spin_lock(&pool->lock);
-	p->refcount--;
-	ret = (p->refcount == 0);
-	hyp_spin_unlock(&pool->lock);
-
-	return ret;
-}
-
-static inline void hyp_set_page_refcounted(struct hyp_page *p)
-{
-	struct hyp_pool *pool = hyp_page_to_pool(p);
-
-	hyp_spin_lock(&pool->lock);
-	if (p->refcount) {
-		hyp_spin_unlock(&pool->lock);
-		BUG();
-	}
-	p->refcount = 1;
-	hyp_spin_unlock(&pool->lock);
-}
-
 /* Allocation */
 void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order);
 void hyp_get_page(void *addr);
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 237e03bf0cb1..d666f4789e31 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -93,15 +93,6 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 	list_add_tail(&p->node, &pool->free_area[order]);
 }
 
-static void hyp_attach_page(struct hyp_page *p)
-{
-	struct hyp_pool *pool = hyp_page_to_pool(p);
-
-	hyp_spin_lock(&pool->lock);
-	__hyp_attach_page(pool, p);
-	hyp_spin_unlock(&pool->lock);
-}
-
 static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 					   struct hyp_page *p,
 					   unsigned int order)
@@ -125,19 +116,49 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 	return p;
 }
 
+static inline void hyp_page_ref_inc(struct hyp_page *p)
+{
+	p->refcount++;
+}
+
+static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
+{
+	p->refcount--;
+	return (p->refcount == 0);
+}
+
+static inline void hyp_set_page_refcounted(struct hyp_page *p)
+{
+	BUG_ON(p->refcount);
+	p->refcount = 1;
+}
+
+/*
+ * Changes to the buddy tree and page refcounts must be done with the hyp_pool
+ * lock held. If a refcount change requires an update to the buddy tree (e.g.
+ * hyp_put_page()), both operations must be done within the same critical
+ * section to guarantee transient states (e.g. a page with null refcount but
+ * not yet attached to a free list) can't be observed by well-behaved readers.
+ */
 void hyp_put_page(void *addr)
 {
 	struct hyp_page *p = hyp_virt_to_page(addr);
+	struct hyp_pool *pool = hyp_page_to_pool(p);
 
+	hyp_spin_lock(&pool->lock);
 	if (hyp_page_ref_dec_and_test(p))
-		hyp_attach_page(p);
+		__hyp_attach_page(pool, p);
+	hyp_spin_unlock(&pool->lock);
 }
 
 void hyp_get_page(void *addr)
 {
 	struct hyp_page *p = hyp_virt_to_page(addr);
+	struct hyp_pool *pool = hyp_page_to_pool(p);
 
+	hyp_spin_lock(&pool->lock);
 	hyp_page_ref_inc(p);
+	hyp_spin_unlock(&pool->lock);
 }
 
 void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order)
@@ -159,8 +180,8 @@ void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order)
 	p = list_first_entry(&pool->free_area[i], struct hyp_page, node);
 	p = __hyp_extract_page(pool, p, order);
 
-	hyp_spin_unlock(&pool->lock);
 	hyp_set_page_refcounted(p);
+	hyp_spin_unlock(&pool->lock);
 
 	return hyp_page_to_virt(p);
 }
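The invariant this patch establishes — refcount and free-list state only ever
change together, under the pool lock — can be illustrated with a small
userspace model (hypothetical simplified types, with a pthread mutex standing
in for hyp_spin_lock; a sketch, not the kernel code):

	#include <pthread.h>

	/* Simplified stand-in for struct hyp_page. */
	struct page {
		unsigned int refcount;
		int on_free_list;
	};

	static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

	static void put_page(struct page *p)
	{
		pthread_mutex_lock(&pool_lock);
		/*
		 * The refcount drop and the free-list attach happen in one
		 * critical section, so no reader holding the lock can ever
		 * see refcount == 0 on a page that is not yet on a free list.
		 */
		if (--p->refcount == 0)
			p->on_free_list = 1;
		pthread_mutex_unlock(&pool_lock);
	}

Compare with the pre-patch code above, where hyp_page_ref_dec_and_test()
dropped the lock before hyp_attach_page() re-took it.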
From patchwork Tue Jun 8 11:45:13 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12306655
Date: Tue, 8 Jun 2021 11:45:13 +0000
Message-Id: <20210608114518.748712-3-qperret@google.com>
In-Reply-To: <20210608114518.748712-1-qperret@google.com>
References: <20210608114518.748712-1-qperret@google.com>
Subject: [PATCH v3 2/7] KVM: arm64: Use refcount at hyp to check page availability
From: Quentin Perret
To: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, catalin.marinas@arm.com, suzuki.poulose@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kernel-team@android.com, linux-kernel@vger.kernel.org

The hyp buddy allocator currently checks the struct hyp_page list node
to see whether a page is available for allocation when trying to
coalesce memory. Now that decrementing the refcount and attaching to
the buddy tree are done in the same critical section, we can rely on
the refcount of the buddy page being in sync, which allows us to
replace the list-node check with a refcount check. This will ease
removing the list node from struct hyp_page later on.
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/nvhe/page_alloc.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index d666f4789e31..2602577daa00 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -55,7 +55,7 @@ static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
 {
 	struct hyp_page *buddy = __find_buddy_nocheck(pool, p, order);
 
-	if (!buddy || buddy->order != order || list_empty(&buddy->node))
+	if (!buddy || buddy->order != order || buddy->refcount)
 		return NULL;
 
 	return buddy;
@@ -133,6 +133,12 @@ static inline void hyp_set_page_refcounted(struct hyp_page *p)
 	p->refcount = 1;
 }
 
+static void __hyp_put_page(struct hyp_pool *pool, struct hyp_page *p)
+{
+	if (hyp_page_ref_dec_and_test(p))
+		__hyp_attach_page(pool, p);
+}
+
 /*
  * Changes to the buddy tree and page refcounts must be done with the hyp_pool
  * lock held. If a refcount change requires an update to the buddy tree (e.g.
@@ -146,8 +152,7 @@ void hyp_put_page(void *addr)
 	struct hyp_pool *pool = hyp_page_to_pool(p);
 
 	hyp_spin_lock(&pool->lock);
-	if (hyp_page_ref_dec_and_test(p))
-		__hyp_attach_page(pool, p);
+	__hyp_put_page(pool, p);
 	hyp_spin_unlock(&pool->lock);
 }
 
@@ -202,15 +207,16 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
 
 	/* Init the vmemmap portion */
 	p = hyp_phys_to_page(phys);
-	memset(p, 0, sizeof(*p) * nr_pages);
 	for (i = 0; i < nr_pages; i++) {
 		p[i].pool = pool;
+		p[i].order = 0;
 		INIT_LIST_HEAD(&p[i].node);
+		hyp_set_page_refcounted(&p[i]);
 	}
 
 	/* Attach the unused pages to the buddy tree */
 	for (i = reserved_pages; i < nr_pages; i++)
-		__hyp_attach_page(pool, &p[i]);
+		__hyp_put_page(pool, &p[i]);
 
 	return 0;
 }
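Note how the hyp_pool_init() hunk keeps the new invariant honest from the very
first page: every page starts with refcount = 1 ('in use'), and the unused
ones are then released through the regular put path, which drops the refcount
to 0 and attaches the page to the buddy tree in one step. In outline (names as
in the patch, surrounding code elided):

	for (i = 0; i < nr_pages; i++)
		hyp_set_page_refcounted(&p[i]);	/* refcount = 1 */

	for (i = reserved_pages; i < nr_pages; i++)
		__hyp_put_page(pool, &p[i]);	/* 1 -> 0, attach to the tree */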
From patchwork Tue Jun 8 11:45:14 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12306657
Date: Tue, 8 Jun 2021 11:45:14 +0000
Message-Id: <20210608114518.748712-4-qperret@google.com>
In-Reply-To: <20210608114518.748712-1-qperret@google.com>
References: <20210608114518.748712-1-qperret@google.com>
Subject: [PATCH v3 3/7] KVM: arm64: Remove list_head from hyp_page
From: Quentin Perret
To: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, catalin.marinas@arm.com, suzuki.poulose@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kernel-team@android.com, linux-kernel@vger.kernel.org
The list_head member of struct hyp_page is only needed when the page is
attached to a free list, which by definition implies the page is free.
As such, nothing prevents us from using the page itself to store the
list_head, hence reducing the size of the vmemmap.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  1 -
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 39 ++++++++++++++++++++----
 2 files changed, 33 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index fd78bde939ee..7691ab495eb4 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -12,7 +12,6 @@ struct hyp_page {
 	unsigned int refcount;
 	unsigned int order;
 	struct hyp_pool *pool;
-	struct list_head node;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 2602577daa00..34f0eb026dd2 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -62,6 +62,34 @@ static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
 }
 
+/*
+ * Pages that are available for allocation are tracked in free-lists, so we use
+ * the pages themselves to store the list nodes to avoid wasting space. As the
+ * allocator always returns zeroed pages (which are zeroed on the hyp_put_page()
+ * path to optimize allocation speed), we also need to clean-up the list node in
+ * each page when we take it out of the list.
+ */
+static inline void page_remove_from_list(struct hyp_page *p)
+{
+	struct list_head *node = hyp_page_to_virt(p);
+
+	__list_del_entry(node);
+	memset(node, 0, sizeof(*node));
+}
+
+static inline void page_add_to_list(struct hyp_page *p, struct list_head *head)
+{
+	struct list_head *node = hyp_page_to_virt(p);
+
+	INIT_LIST_HEAD(node);
+	list_add_tail(node, head);
+}
+
+static inline struct hyp_page *node_to_page(struct list_head *node)
+{
+	return hyp_virt_to_page(node);
+}
+
 static void __hyp_attach_page(struct hyp_pool *pool,
 			      struct hyp_page *p)
 {
@@ -83,14 +111,14 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 			break;
 
 		/* Take the buddy out of its list, and coallesce with @p */
-		list_del_init(&buddy->node);
+		page_remove_from_list(buddy);
 		buddy->order = HYP_NO_ORDER;
 		p = min(p, buddy);
 	}
 
 	/* Mark the new head, and insert it */
 	p->order = order;
-	list_add_tail(&p->node, &pool->free_area[order]);
+	page_add_to_list(p, &pool->free_area[order]);
 }
 
@@ -99,7 +127,7 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 {
 	struct hyp_page *buddy;
 
-	list_del_init(&p->node);
+	page_remove_from_list(p);
 	while (p->order > order) {
 		/*
 		 * The buddy of order n - 1 currently has HYP_NO_ORDER as it
@@ -110,7 +138,7 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 		p->order--;
 		buddy = __find_buddy_nocheck(pool, p, p->order);
 		buddy->order = p->order;
-		list_add_tail(&buddy->node, &pool->free_area[buddy->order]);
+		page_add_to_list(buddy, &pool->free_area[buddy->order]);
 	}
 
 	return p;
@@ -182,7 +210,7 @@ void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order)
 	}
 
 	/* Extract it from the tree at the right order */
-	p = list_first_entry(&pool->free_area[i], struct hyp_page, node);
+	p = node_to_page(pool->free_area[i].next);
 	p = __hyp_extract_page(pool, p, order);
 
 	hyp_set_page_refcounted(p);
@@ -210,7 +238,6 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
 	for (i = 0; i < nr_pages; i++) {
 		p[i].pool = pool;
 		p[i].order = 0;
-		INIT_LIST_HEAD(&p[i].node);
 		hyp_set_page_refcounted(&p[i]);
 	}
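The underlying trick is the classic intrusive free-list: a free page's
contents are dead, so its first bytes can hold the list node. A minimal
userspace model (hypothetical names, plain pointers instead of the kernel's
list helpers):

	#include <string.h>

	struct list_head {
		struct list_head *next, *prev;
	};

	/* The node lives in the free page itself: page address == node address. */
	static struct list_head *page_to_node(void *page)
	{
		return (struct list_head *)page;
	}

	static void page_add_to_list(void *page, struct list_head *head)
	{
		struct list_head *node = page_to_node(page);

		/* Equivalent of INIT_LIST_HEAD() + list_add_tail(). */
		node->prev = head->prev;
		node->next = head;
		head->prev->next = node;
		head->prev = node;
	}

	static void page_remove_from_list(void *page)
	{
		struct list_head *node = page_to_node(page);

		node->prev->next = node->next;
		node->next->prev = node->prev;
		/* The allocator hands out zeroed pages, so scrub the node. */
		memset(node, 0, sizeof(*node));
	}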
From patchwork Tue Jun 8 11:45:15 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12306681
Date: Tue, 8 Jun 2021 11:45:15 +0000
Message-Id: <20210608114518.748712-5-qperret@google.com>
In-Reply-To: <20210608114518.748712-1-qperret@google.com>
References: <20210608114518.748712-1-qperret@google.com>
Subject: [PATCH v3 4/7] KVM: arm64: Unify MMIO and mem host stage-2 pools
From: Quentin Perret
To: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, catalin.marinas@arm.com, suzuki.poulose@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kernel-team@android.com, linux-kernel@vger.kernel.org

We currently maintain two separate memory pools for the host stage-2:
one for pages used in the page-table when mapping memory regions, and
one for pages used to map MMIO regions. The former is large enough to
map all of memory with page granularity, and the latter can cover an
arbitrary portion of IPA space but allows pages to be 'recycled'.
However, this split makes accounting difficult to manage, as pages at
intermediate levels of the page-table may be used to map both memory
and MMIO regions.

Simplify the scheme by merging both pools into one. This means we can
now hit the -ENOMEM case in the memory abort path, but we're still
guaranteed forward-progress in the worst case by unmapping MMIO
regions. On the plus side, this also means we can usually map a lot
more MMIO space at once if memory ranges happen to be mapped with block
mappings.
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +-
 arch/arm64/kvm/hyp/include/nvhe/mm.h          | 13 +++---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 46 ++++++++-----------
 arch/arm64/kvm/hyp/nvhe/setup.c               | 16 ++-----
 arch/arm64/kvm/hyp/reserved_mem.c             |  3 +-
 5 files changed, 32 insertions(+), 48 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 42d81ec739fa..9c227d87c36d 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -23,7 +23,7 @@ extern struct host_kvm host_kvm;
 
 int __pkvm_prot_finalize(void);
 int __pkvm_mark_hyp(phys_addr_t start, phys_addr_t end);
-int kvm_host_prepare_stage2(void *mem_pgt_pool, void *dev_pgt_pool);
+int kvm_host_prepare_stage2(void *pgt_pool_base);
 void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
 
 static __always_inline void __load_host_stage2(void)
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 0095f6289742..8ec3a5a7744b 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -78,19 +78,20 @@ static inline unsigned long hyp_s1_pgtable_pages(void)
 	return res;
 }
 
-static inline unsigned long host_s2_mem_pgtable_pages(void)
+static inline unsigned long host_s2_pgtable_pages(void)
 {
+	unsigned long res;
+
 	/*
 	 * Include an extra 16 pages to safely upper-bound the worst case of
 	 * concatenated pgds.
 	 */
-	return __hyp_pgtable_total_pages() + 16;
-}
+	res = __hyp_pgtable_total_pages() + 16;
 
-static inline unsigned long host_s2_dev_pgtable_pages(void)
-{
 	/* Allow 1 GiB for MMIO mappings */
-	return __hyp_pgtable_max_pages(SZ_1G >> PAGE_SHIFT);
+	res += __hyp_pgtable_max_pages(SZ_1G >> PAGE_SHIFT);
+
+	return res;
 }
 
 #endif /* __KVM_HYP_MM_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 4b60c0056c04..c8ed7e86231b 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -23,8 +23,7 @@ extern unsigned long hyp_nr_cpus;
 
 struct host_kvm host_kvm;
 
-static struct hyp_pool host_s2_mem;
-static struct hyp_pool host_s2_dev;
+static struct hyp_pool host_s2_pool;
 
 /*
  * Copies of the host's CPU features registers holding sanitized values.
@@ -36,7 +35,7 @@ static const u8 pkvm_hyp_id = 1;
 
 static void *host_s2_zalloc_pages_exact(size_t size)
 {
-	return hyp_alloc_pages(&host_s2_mem, get_order(size));
+	return hyp_alloc_pages(&host_s2_pool, get_order(size));
 }
 
 static void *host_s2_zalloc_page(void *pool)
@@ -44,20 +43,14 @@ static void *host_s2_zalloc_page(void *pool)
 	return hyp_alloc_pages(pool, 0);
 }
 
-static int prepare_s2_pools(void *mem_pgt_pool, void *dev_pgt_pool)
+static int prepare_s2_pool(void *pgt_pool_base)
 {
 	unsigned long nr_pages, pfn;
 	int ret;
 
-	pfn = hyp_virt_to_pfn(mem_pgt_pool);
-	nr_pages = host_s2_mem_pgtable_pages();
-	ret = hyp_pool_init(&host_s2_mem, pfn, nr_pages, 0);
-	if (ret)
-		return ret;
-
-	pfn = hyp_virt_to_pfn(dev_pgt_pool);
-	nr_pages = host_s2_dev_pgtable_pages();
-	ret = hyp_pool_init(&host_s2_dev, pfn, nr_pages, 0);
+	pfn = hyp_virt_to_pfn(pgt_pool_base);
+	nr_pages = host_s2_pgtable_pages();
+	ret = hyp_pool_init(&host_s2_pool, pfn, nr_pages, 0);
 	if (ret)
 		return ret;
 
@@ -86,7 +79,7 @@ static void prepare_host_vtcr(void)
 					  id_aa64mmfr1_el1_sys_val, phys_shift);
 }
 
-int kvm_host_prepare_stage2(void *mem_pgt_pool, void *dev_pgt_pool)
+int kvm_host_prepare_stage2(void *pgt_pool_base)
 {
 	struct kvm_s2_mmu *mmu = &host_kvm.arch.mmu;
 	int ret;
@@ -94,7 +87,7 @@ int kvm_host_prepare_stage2(void *pgt_pool_base)
 	prepare_host_vtcr();
 	hyp_spin_lock_init(&host_kvm.lock);
 
-	ret = prepare_s2_pools(mem_pgt_pool, dev_pgt_pool);
+	ret = prepare_s2_pool(pgt_pool_base);
 	if (ret)
 		return ret;
 
@@ -199,11 +192,10 @@ static bool range_is_memory(u64 start, u64 end)
 }
 
 static inline int __host_stage2_idmap(u64 start, u64 end,
-				      enum kvm_pgtable_prot prot,
-				      struct hyp_pool *pool)
+				      enum kvm_pgtable_prot prot)
 {
 	return kvm_pgtable_stage2_map(&host_kvm.pgt, start, end - start, start,
-				      prot, pool);
+				      prot, &host_s2_pool);
 }
 
 static int host_stage2_idmap(u64 addr)
@@ -211,7 +203,6 @@ static int host_stage2_idmap(u64 addr)
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R | KVM_PGTABLE_PROT_W;
 	struct kvm_mem_range range;
 	bool is_memory = find_mem_range(addr, &range);
-	struct hyp_pool *pool = is_memory ? &host_s2_mem : &host_s2_dev;
 	int ret;
 
 	if (is_memory)
@@ -222,22 +213,21 @@ static int host_stage2_idmap(u64 addr)
 	if (ret)
 		goto unlock;
 
-	ret = __host_stage2_idmap(range.start, range.end, prot, pool);
-	if (is_memory || ret != -ENOMEM)
+	ret = __host_stage2_idmap(range.start, range.end, prot);
+	if (ret != -ENOMEM)
 		goto unlock;
 
 	/*
-	 * host_s2_mem has been provided with enough pages to cover all of
-	 * memory with page granularity, so we should never hit the ENOMEM case.
-	 * However, it is difficult to know how much of the MMIO range we will
-	 * need to cover upfront, so we may need to 'recycle' the pages if we
-	 * run out.
+	 * The pool has been provided with enough pages to cover all of memory
+	 * with page granularity, but it is difficult to know how much of the
+	 * MMIO range we will need to cover upfront, so we may need to 'recycle'
+	 * the pages if we run out.
 	 */
 	ret = host_stage2_unmap_dev_all();
 	if (ret)
 		goto unlock;
 
-	ret = __host_stage2_idmap(range.start, range.end, prot, pool);
+	ret = __host_stage2_idmap(range.start, range.end, prot);
 
 unlock:
 	hyp_spin_unlock(&host_kvm.lock);
@@ -258,7 +248,7 @@ int __pkvm_mark_hyp(phys_addr_t start, phys_addr_t end)
 
 	hyp_spin_lock(&host_kvm.lock);
 	ret = kvm_pgtable_stage2_set_owner(&host_kvm.pgt, start, end - start,
-					   &host_s2_mem, pkvm_hyp_id);
+					   &host_s2_pool, pkvm_hyp_id);
 	hyp_spin_unlock(&host_kvm.lock);
 
 	return ret != -EAGAIN ? ret : 0;
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index a3d3a275344e..1cff3259a493 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -24,8 +24,7 @@ unsigned long hyp_nr_cpus;
 
 static void *vmemmap_base;
 static void *hyp_pgt_base;
-static void *host_s2_mem_pgt_base;
-static void *host_s2_dev_pgt_base;
+static void *host_s2_pgt_base;
 static struct kvm_pgtable_mm_ops pkvm_pgtable_mm_ops;
 
 static int divide_memory_pool(void *virt, unsigned long size)
@@ -45,14 +44,9 @@ static int divide_memory_pool(void *virt, unsigned long size)
 	if (!hyp_pgt_base)
 		return -ENOMEM;
 
-	nr_pages = host_s2_mem_pgtable_pages();
-	host_s2_mem_pgt_base = hyp_early_alloc_contig(nr_pages);
-	if (!host_s2_mem_pgt_base)
-		return -ENOMEM;
-
-	nr_pages = host_s2_dev_pgtable_pages();
-	host_s2_dev_pgt_base = hyp_early_alloc_contig(nr_pages);
-	if (!host_s2_dev_pgt_base)
+	nr_pages = host_s2_pgtable_pages();
+	host_s2_pgt_base = hyp_early_alloc_contig(nr_pages);
+	if (!host_s2_pgt_base)
 		return -ENOMEM;
 
 	return 0;
@@ -158,7 +152,7 @@ void __noreturn __pkvm_init_finalise(void)
 	if (ret)
 		goto out;
 
-	ret = kvm_host_prepare_stage2(host_s2_mem_pgt_base, host_s2_dev_pgt_base);
+	ret = kvm_host_prepare_stage2(host_s2_pgt_base);
 	if (ret)
 		goto out;
 
diff --git a/arch/arm64/kvm/hyp/reserved_mem.c b/arch/arm64/kvm/hyp/reserved_mem.c
index 83ca23ac259b..d654921dd09b 100644
--- a/arch/arm64/kvm/hyp/reserved_mem.c
+++ b/arch/arm64/kvm/hyp/reserved_mem.c
@@ -71,8 +71,7 @@ void __init kvm_hyp_reserve(void)
 	}
 
 	hyp_mem_pages += hyp_s1_pgtable_pages();
-	hyp_mem_pages += host_s2_mem_pgtable_pages();
-	hyp_mem_pages += host_s2_dev_pgtable_pages();
+	hyp_mem_pages += host_s2_pgtable_pages();
 
 	/*
	 * The hyp_vmemmap needs to be backed by pages, but these pages
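The allocation policy in the abort path therefore becomes: try once, and on
-ENOMEM tear down all MMIO mappings (returning their page-table pages to the
pool) and retry exactly once. In outline (condensed from the
host_stage2_idmap() hunk above):

	ret = __host_stage2_idmap(range.start, range.end, prot);
	if (ret == -ENOMEM) {
		/* Recycle the page-table pages backing MMIO mappings. */
		ret = host_stage2_unmap_dev_all();
		if (!ret)
			ret = __host_stage2_idmap(range.start, range.end, prot);
	}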
From patchwork Tue Jun 8 11:45:16 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12306683
Date: Tue, 8 Jun 2021 11:45:16 +0000
Message-Id: <20210608114518.748712-6-qperret@google.com>
In-Reply-To: <20210608114518.748712-1-qperret@google.com>
References: <20210608114518.748712-1-qperret@google.com>
Subject: [PATCH v3 5/7] KVM: arm64: Remove hyp_pool pointer from struct hyp_page
From: Quentin Perret
To: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, catalin.marinas@arm.com, suzuki.poulose@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kernel-team@android.com, linux-kernel@vger.kernel.org
Each struct hyp_page currently contains a pointer to a hyp_pool struct
where the page should be freed if its refcount reaches 0. However, this
information can always be inferred from the context in the EL2 code, so
drop the pointer to save a few bytes in the vmemmap.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h    |  4 ++--
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  2 --
 arch/arm64/kvm/hyp/nvhe/mem_protect.c    | 14 ++++++++++++--
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     |  7 ++-----
 arch/arm64/kvm/hyp/nvhe/setup.c          | 14 ++++++++++++--
 5 files changed, 28 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index f2c84e4fa40f..3ea7bfb6c380 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -24,8 +24,8 @@ struct hyp_pool {
 
 /* Allocation */
 void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order);
-void hyp_get_page(void *addr);
-void hyp_put_page(void *addr);
+void hyp_get_page(struct hyp_pool *pool, void *addr);
+void hyp_put_page(struct hyp_pool *pool, void *addr);
 
 /* Used pages cannot be freed */
 int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 7691ab495eb4..991636be2f46 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -7,11 +7,9 @@
 
 #include
 
-struct hyp_pool;
 struct hyp_page {
 	unsigned int refcount;
 	unsigned int order;
-	struct hyp_pool *pool;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index c8ed7e86231b..d938ce95d3bd 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -43,6 +43,16 @@ static void *host_s2_zalloc_page(void *pool)
 	return hyp_alloc_pages(pool, 0);
 }
 
+static void host_s2_get_page(void *addr)
+{
+	hyp_get_page(&host_s2_pool, addr);
+}
+
+static void host_s2_put_page(void *addr)
+{
+	hyp_put_page(&host_s2_pool, addr);
+}
+
 static int prepare_s2_pool(void *pgt_pool_base)
 {
 	unsigned long nr_pages, pfn;
@@ -60,8 +70,8 @@ static int prepare_s2_pool(void *pgt_pool_base)
 		.phys_to_virt = hyp_phys_to_virt,
 		.virt_to_phys = hyp_virt_to_phys,
 		.page_count = hyp_page_count,
-		.get_page = hyp_get_page,
-		.put_page = hyp_put_page,
+		.get_page = host_s2_get_page,
+		.put_page = host_s2_put_page,
 	};
 
 	return 0;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 34f0eb026dd2..e3689def7033 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -174,20 +174,18 @@ static void __hyp_put_page(struct hyp_pool *pool, struct hyp_page *p)
  * section to guarantee transient states (e.g. a page with null refcount but
  * not yet attached to a free list) can't be observed by well-behaved readers.
  */
-void hyp_put_page(void *addr)
+void hyp_put_page(struct hyp_pool *pool, void *addr)
 {
 	struct hyp_page *p = hyp_virt_to_page(addr);
-	struct hyp_pool *pool = hyp_page_to_pool(p);
 
 	hyp_spin_lock(&pool->lock);
 	__hyp_put_page(pool, p);
 	hyp_spin_unlock(&pool->lock);
 }
 
-void hyp_get_page(void *addr)
+void hyp_get_page(struct hyp_pool *pool, void *addr)
 {
 	struct hyp_page *p = hyp_virt_to_page(addr);
-	struct hyp_pool *pool = hyp_page_to_pool(p);
 
 	hyp_spin_lock(&pool->lock);
 	hyp_page_ref_inc(p);
@@ -236,7 +234,6 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
 	/* Init the vmemmap portion */
 	p = hyp_phys_to_page(phys);
 	for (i = 0; i < nr_pages; i++) {
-		p[i].pool = pool;
 		p[i].order = 0;
 		hyp_set_page_refcounted(&p[i]);
 	}
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 1cff3259a493..f834833ac921 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -137,6 +137,16 @@ static void *hyp_zalloc_hyp_page(void *arg)
 	return hyp_alloc_pages(&hpool, 0);
 }
 
+static void hpool_get_page(void *addr)
+{
+	hyp_get_page(&hpool, addr);
+}
+
+static void hpool_put_page(void *addr)
+{
+	hyp_put_page(&hpool, addr);
+}
+
 void __noreturn __pkvm_init_finalise(void)
 {
 	struct kvm_host_data *host_data = this_cpu_ptr(&kvm_host_data);
@@ -160,8 +170,8 @@ void __noreturn __pkvm_init_finalise(void)
 		.zalloc_page = hyp_zalloc_hyp_page,
 		.phys_to_virt = hyp_phys_to_virt,
 		.virt_to_phys = hyp_virt_to_phys,
-		.get_page = hyp_get_page,
-		.put_page = hyp_put_page,
+		.get_page = hpool_get_page,
+		.put_page = hpool_put_page,
 	};
 
 	pkvm_pgtable.mm_ops = &pkvm_pgtable_mm_ops;
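For a rough sense of the saving, here is an illustrative back-of-the-envelope
calculation (assuming 4 KiB pages, an 8-byte pointer, and 4 GiB of memory to
cover — none of these figures are from the patch itself):

	pages  = 4 GiB / 4 KiB        = 1,048,576 struct hyp_page entries
	saving = 1,048,576 * 8 bytes  = 8 MiB less hyp vmemmap

The design trade-off is one thin wrapper pair per pool (host_s2_get_page(),
hpool_get_page(), ...) instead of a per-page pool pointer, which is cheap
because the pool is always known statically at each call site.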
From patchwork Tue Jun 8 11:45:17 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12306687
Date: Tue, 8 Jun 2021 11:45:17 +0000
Message-Id: <20210608114518.748712-7-qperret@google.com>
In-Reply-To: <20210608114518.748712-1-qperret@google.com>
References: <20210608114518.748712-1-qperret@google.com>
Subject: [PATCH v3 6/7] KVM: arm64: Use less bits for hyp_page order
From: Quentin Perret
To: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, catalin.marinas@arm.com, suzuki.poulose@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kernel-team@android.com, linux-kernel@vger.kernel.org
The hyp_page order is currently encoded using 4 bytes even though it is
guaranteed to be smaller than this. Make it 2 bytes to reduce the hyp
vmemmap overhead.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h    |  6 +++---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  2 +-
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 12 ++++++------
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index 3ea7bfb6c380..fb0f523d1492 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -7,7 +7,7 @@
 #include
 #include
 
-#define HYP_NO_ORDER	UINT_MAX
+#define HYP_NO_ORDER	USHRT_MAX
 
 struct hyp_pool {
 	/*
@@ -19,11 +19,11 @@ struct hyp_pool {
 	struct list_head free_area[MAX_ORDER];
 	phys_addr_t range_start;
 	phys_addr_t range_end;
-	unsigned int max_order;
+	unsigned short max_order;
 };
 
 /* Allocation */
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order);
+void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order);
 void hyp_get_page(struct hyp_pool *pool, void *addr);
 void hyp_put_page(struct hyp_pool *pool, void *addr);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 991636be2f46..3fe34fa30ea4 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -9,7 +9,7 @@
 
 struct hyp_page {
 	unsigned int refcount;
-	unsigned int order;
+	unsigned short order;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index e3689def7033..be07055bbc10 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -32,7 +32,7 @@ u64 __hyp_vmemmap;
  */
 static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 					     struct hyp_page *p,
-					     unsigned int order)
+					     unsigned short order)
 {
 	phys_addr_t addr = hyp_page_to_phys(p);
 
@@ -51,7 +51,7 @@ static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 /* Find a buddy page currently available for allocation */
 static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
 					   struct hyp_page *p,
-					   unsigned int order)
+					   unsigned short order)
 {
 	struct hyp_page *buddy = __find_buddy_nocheck(pool, p, order);
 
@@ -93,7 +93,7 @@ static inline struct hyp_page *node_to_page(struct list_head *node)
 static void __hyp_attach_page(struct hyp_pool *pool,
 			      struct hyp_page *p)
 {
-	unsigned int order = p->order;
+	unsigned short order = p->order;
 	struct hyp_page *buddy;
 
 	memset(hyp_page_to_virt(p), 0, PAGE_SIZE << p->order);
@@ -123,7 +123,7 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 					   struct hyp_page *p,
-					   unsigned int order)
+					   unsigned short order)
 {
 	struct hyp_page *buddy;
 
@@ -192,9 +192,9 @@ void hyp_get_page(struct hyp_pool *pool, void *addr)
 	hyp_spin_unlock(&pool->lock);
 }
 
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order)
+void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
 {
-	unsigned int i = order;
+	unsigned short i = order;
 	struct hyp_page *p;
 
 	hyp_spin_lock(&pool->lock);
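Note that on a typical LP64 ABI this change alone does not shrink struct
hyp_page: a 4-byte refcount next to a 2-byte order still pads the struct to
8 bytes. The size win only materialises together with the next patch, which
shrinks the refcount as well, as this standalone check illustrates
(hypothetical struct names; prints "8 8 4" on common ABIs):

	#include <stdio.h>

	struct page_v0 { unsigned int refcount;   unsigned int order;   };
	struct page_v1 { unsigned int refcount;   unsigned short order; };
	struct page_v2 { unsigned short refcount; unsigned short order; };

	int main(void)
	{
		/* Padding hides the first win; both fields must shrink. */
		printf("%zu %zu %zu\n", sizeof(struct page_v0),
		       sizeof(struct page_v1), sizeof(struct page_v2));
		return 0;
	}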
From patchwork Tue Jun 8 11:45:18 2021
Date: Tue, 8 Jun 2021 11:45:18 +0000
In-Reply-To: <20210608114518.748712-1-qperret@google.com>
Message-Id: <20210608114518.748712-8-qperret@google.com>
References: <20210608114518.748712-1-qperret@google.com>
Subject: [PATCH v3 7/7] KVM: arm64: Use less bits for hyp_page refcount
From: Quentin Perret <qperret@google.com>
To: maz@kernel.org, will@kernel.org, james.morse@arm.com,
    alexandru.elisei@arm.com, catalin.marinas@arm.com, suzuki.poulose@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    kernel-team@android.com, linux-kernel@vger.kernel.org

The hyp_page refcount is currently encoded in 4 bytes even though we never
need to count that many objects in a page. Make it 2 bytes to save some
space in the vmemmap. Since the narrower type makes overflows more likely,
make sure to catch them with a BUG in the increment function.

Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/kvm/hyp/include/nvhe/memory.h | 2 +-
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 3fe34fa30ea4..592b7edb3edb 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -8,7 +8,7 @@
 #include

 struct hyp_page {
-	unsigned int refcount;
+	unsigned short refcount;
 	unsigned short order;
 };

diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index be07055bbc10..41fc25bdfb34 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -146,6 +146,7 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 static inline void hyp_page_ref_inc(struct hyp_page *p)
 {
+	BUG_ON(p->refcount == USHRT_MAX);
 	p->refcount++;
 }
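Taken together with the previous patch, both struct hyp_page fields are now
unsigned short, so each vmemmap entry shrinks from 8 bytes to 4. The sketch
below mimics the patched increment path in userspace to show both effects;
page_ref_inc() is a hypothetical stand-in for hyp_page_ref_inc(), with
assert() playing the role of BUG_ON():

#include <assert.h>
#include <limits.h>
#include <stdio.h>

/* Field layout copied from the patched struct; not the kernel header itself. */
struct hyp_page {
	unsigned short refcount;
	unsigned short order;
};

/* Trap on the wrap instead of silently overflowing the narrower counter. */
static void page_ref_inc(struct hyp_page *p)
{
	assert(p->refcount != USHRT_MAX);
	p->refcount++;
}

int main(void)
{
	struct hyp_page p = { .refcount = 0, .order = 0 };

	page_ref_inc(&p);
	printf("refcount=%u, sizeof(struct hyp_page)=%zu\n",
	       (unsigned int)p.refcount, sizeof(p));
	return 0;
}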