From patchwork Thu May 27 12:51:28 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12284077
From: Quentin Perret
Date: Thu, 27 May 2021 12:51:28 +0000
Subject: [PATCH 1/7] KVM: arm64: Move hyp_pool locking out of refcount helpers
Message-Id: <20210527125134.2116404-2-qperret@google.com>
In-Reply-To: <20210527125134.2116404-1-qperret@google.com>
To: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, catalin.marinas@arm.com, suzuki.poulose@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kernel-team@android.com, linux-kernel@vger.kernel.org

The hyp_page refcount helpers currently rely on the hyp_pool lock for
serialization. However, this means the refcounts can't be changed from
the buddy allocator core as it already holds the lock, which means pages
have to go through odd transient states. For example, when a page is
freed, its refcount is set to 0, and the lock is transiently released
before the page can be attached to a free list in the buddy tree.
This is currently harmless as the allocator checks the list node of each
page to see if it is available for allocation or not, but it means the
page refcount can't be trusted to represent the state of the page even
if the pool lock is held.

In order to fix this, remove the pool locking from the refcount helpers,
and move all the logic to the buddy allocator. This will simplify the
removal of the list node from struct hyp_page in a later patch.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h | 21 ++-------------------
 arch/arm64/kvm/hyp/nvhe/page_alloc.c  | 19 ++++++++-----------
 2 files changed, 10 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index 18a4494337bd..aada4d97de49 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -24,37 +24,20 @@ struct hyp_pool {

 static inline void hyp_page_ref_inc(struct hyp_page *p)
 {
-	struct hyp_pool *pool = hyp_page_to_pool(p);
-
-	hyp_spin_lock(&pool->lock);
 	p->refcount++;
-	hyp_spin_unlock(&pool->lock);
 }

 static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
 {
-	struct hyp_pool *pool = hyp_page_to_pool(p);
-	int ret;
-
-	hyp_spin_lock(&pool->lock);
 	p->refcount--;
-	ret = (p->refcount == 0);
-	hyp_spin_unlock(&pool->lock);
-
-	return ret;
+	return (p->refcount == 0);
 }

 static inline void hyp_set_page_refcounted(struct hyp_page *p)
 {
-	struct hyp_pool *pool = hyp_page_to_pool(p);
-
-	hyp_spin_lock(&pool->lock);
-	if (p->refcount) {
-		hyp_spin_unlock(&pool->lock);
+	if (p->refcount)
 		BUG();
-	}
 	p->refcount = 1;
-	hyp_spin_unlock(&pool->lock);
 }

 /* Allocation */
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 237e03bf0cb1..04573bf35441 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -93,15 +93,6 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 	list_add_tail(&p->node, &pool->free_area[order]);
 }

-static void hyp_attach_page(struct hyp_page *p)
-{
-	struct hyp_pool *pool = hyp_page_to_pool(p);
-
-	hyp_spin_lock(&pool->lock);
-	__hyp_attach_page(pool, p);
-	hyp_spin_unlock(&pool->lock);
-}
-
 static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 					   struct hyp_page *p,
 					   unsigned int order)
@@ -128,16 +119,22 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 void hyp_put_page(void *addr)
 {
 	struct hyp_page *p = hyp_virt_to_page(addr);
+	struct hyp_pool *pool = hyp_page_to_pool(p);

+	hyp_spin_lock(&pool->lock);
 	if (hyp_page_ref_dec_and_test(p))
-		hyp_attach_page(p);
+		__hyp_attach_page(pool, p);
+	hyp_spin_unlock(&pool->lock);
 }

 void hyp_get_page(void *addr)
 {
 	struct hyp_page *p = hyp_virt_to_page(addr);
+	struct hyp_pool *pool = hyp_page_to_pool(p);

+	hyp_spin_lock(&pool->lock);
 	hyp_page_ref_inc(p);
+	hyp_spin_unlock(&pool->lock);
 }

 void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order)
@@ -159,8 +156,8 @@ void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order)
 	p = list_first_entry(&pool->free_area[i], struct hyp_page, node);
 	p = __hyp_extract_page(pool, p, order);

-	hyp_spin_unlock(&pool->lock);
 	hyp_set_page_refcounted(p);
+	hyp_spin_unlock(&pool->lock);

 	return hyp_page_to_virt(p);
 }

From patchwork Thu May 27 12:51:29 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12284079
From: Quentin Perret
Date: Thu, 27 May 2021 12:51:29 +0000
Subject: [PATCH 2/7] KVM: arm64: Use refcount at hyp to check page availability
Message-Id: <20210527125134.2116404-3-qperret@google.com>
In-Reply-To: <20210527125134.2116404-1-qperret@google.com>
To: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, catalin.marinas@arm.com, suzuki.poulose@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kernel-team@android.com, linux-kernel@vger.kernel.org

The hyp buddy allocator currently checks the struct hyp_page list node
to see whether a page is available for allocation when trying to
coalesce memory. Now that decrementing the refcount and attaching to
the buddy tree are done in the same critical section, we can rely on
the refcount of the buddy page to be in sync, which allows us to
replace the list-node check with a refcount check. This will ease
removing the list node from struct hyp_page later on.
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/nvhe/page_alloc.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 04573bf35441..7ee882f36767 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -55,7 +55,7 @@ static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
 {
 	struct hyp_page *buddy = __find_buddy_nocheck(pool, p, order);

-	if (!buddy || buddy->order != order || list_empty(&buddy->node))
+	if (!buddy || buddy->order != order || buddy->refcount)
 		return NULL;

 	return buddy;
@@ -116,14 +116,19 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 	return p;
 }

+static void __hyp_put_page(struct hyp_pool *pool, struct hyp_page *p)
+{
+	if (hyp_page_ref_dec_and_test(p))
+		__hyp_attach_page(pool, p);
+}
+
 void hyp_put_page(void *addr)
 {
 	struct hyp_page *p = hyp_virt_to_page(addr);
 	struct hyp_pool *pool = hyp_page_to_pool(p);

 	hyp_spin_lock(&pool->lock);
-	if (hyp_page_ref_dec_and_test(p))
-		__hyp_attach_page(pool, p);
+	__hyp_put_page(pool, p);
 	hyp_spin_unlock(&pool->lock);
 }

@@ -178,15 +183,16 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,

 	/* Init the vmemmap portion */
 	p = hyp_phys_to_page(phys);
-	memset(p, 0, sizeof(*p) * nr_pages);
 	for (i = 0; i < nr_pages; i++) {
 		p[i].pool = pool;
+		p[i].order = 0;
 		INIT_LIST_HEAD(&p[i].node);
+		hyp_set_page_refcounted(&p[i]);
 	}

 	/* Attach the unused pages to the buddy tree */
 	for (i = reserved_pages; i < nr_pages; i++)
-		__hyp_attach_page(pool, &p[i]);
+		__hyp_put_page(pool, &p[i]);

 	return 0;
 }

From patchwork Thu May 27 12:51:30 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12284081
From: Quentin Perret
Date: Thu, 27 May 2021 12:51:30 +0000
Subject: [PATCH 3/7] KVM: arm64: Remove list_head from hyp_page
Message-Id: <20210527125134.2116404-4-qperret@google.com>
In-Reply-To: <20210527125134.2116404-1-qperret@google.com>
To: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, catalin.marinas@arm.com, suzuki.poulose@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kernel-team@android.com, linux-kernel@vger.kernel.org

The list_head member of struct hyp_page is only needed when the page is
attached to a free-list, which by definition implies the page is free.
As such, nothing prevents us from using the page itself to store the
list_head, hence reducing the size of the vmemmap.
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  1 -
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 39 ++++++++++++++++++++----
 2 files changed, 33 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index fd78bde939ee..7691ab495eb4 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -12,7 +12,6 @@ struct hyp_page {
 	unsigned int refcount;
 	unsigned int order;
 	struct hyp_pool *pool;
-	struct list_head node;
 };

 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 7ee882f36767..ce7379f1480b 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -62,6 +62,34 @@ static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
 }

+/*
+ * Pages that are available for allocation are tracked in free-lists, so we use
+ * the pages themselves to store the list nodes to avoid wasting space. As the
+ * allocator always returns zeroed pages (which are zeroed on the hyp_put_page()
+ * path to optimize allocation speed), we also need to clean-up the list node in
+ * each page when we take it out of the list.
+ */
+static inline void page_remove_from_list(struct hyp_page *p)
+{
+	struct list_head *node = (struct list_head *)hyp_page_to_virt(p);
+
+	__list_del_entry(node);
+	memset(node, 0, sizeof(*node));
+}
+
+static inline void page_add_to_list(struct hyp_page *p, struct list_head *head)
+{
+	struct list_head *node = (struct list_head *)hyp_page_to_virt(p);
+
+	INIT_LIST_HEAD(node);
+	list_add_tail(node, head);
+}
+
+static inline struct hyp_page *node_to_page(struct list_head *node)
+{
+	return (struct hyp_page *)hyp_virt_to_page(node);
+}
+
 static void __hyp_attach_page(struct hyp_pool *pool,
 			      struct hyp_page *p)
 {
@@ -83,14 +111,14 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 			break;

 		/* Take the buddy out of its list, and coallesce with @p */
-		list_del_init(&buddy->node);
+		page_remove_from_list(buddy);
 		buddy->order = HYP_NO_ORDER;
 		p = min(p, buddy);
 	}

 	/* Mark the new head, and insert it */
 	p->order = order;
-	list_add_tail(&p->node, &pool->free_area[order]);
+	page_add_to_list(p, &pool->free_area[order]);
 }

 static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
@@ -99,7 +127,7 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 {
 	struct hyp_page *buddy;

-	list_del_init(&p->node);
+	page_remove_from_list(p);
 	while (p->order > order) {
 		/*
 		 * The buddy of order n - 1 currently has HYP_NO_ORDER as it
@@ -110,7 +138,7 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 		p->order--;
 		buddy = __find_buddy_nocheck(pool, p, p->order);
 		buddy->order = p->order;
-		list_add_tail(&buddy->node, &pool->free_area[buddy->order]);
+		page_add_to_list(buddy, &pool->free_area[buddy->order]);
 	}

 	return p;
@@ -158,7 +186,7 @@ void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order)
 	}

 	/* Extract it from the tree at the right order */
-	p = list_first_entry(&pool->free_area[i], struct hyp_page, node);
+	p = node_to_page(pool->free_area[i].next);
 	p = __hyp_extract_page(pool, p, order);

 	hyp_set_page_refcounted(p);
@@ -186,7 +214,6 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
 	for (i = 0; i < nr_pages; i++) {
 		p[i].pool = pool;
 		p[i].order = 0;
-		INIT_LIST_HEAD(&p[i].node);
 		hyp_set_page_refcounted(&p[i]);
 	}

From patchwork Thu May 27 12:51:31 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12284083
From: Quentin Perret
Date: Thu, 27 May 2021 12:51:31 +0000
Subject: [PATCH 4/7] KVM: arm64: Unify MMIO and mem host stage-2 pools
Message-Id: <20210527125134.2116404-5-qperret@google.com>
In-Reply-To: <20210527125134.2116404-1-qperret@google.com>
To: maz@kernel.org, will@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com, catalin.marinas@arm.com, suzuki.poulose@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kernel-team@android.com, linux-kernel@vger.kernel.org

We currently maintain two separate memory pools for the host stage-2,
one for pages used in the page-table when mapping memory regions, and
the other to map MMIO regions.
The former is large enough to map all of memory with page granularity, and
the latter can cover an arbitrary portion of IPA space but allows pages to
be 'recycled'. However, this split makes accounting difficult to manage,
as pages at intermediate levels of the page-table may be used to map both
memory and MMIO regions.

Simplify the scheme by merging both pools into one. This means we can now
hit the -ENOMEM case in the memory abort path, but we're still guaranteed
forward-progress in the worst case by unmapping MMIO regions. On the plus
side this also means we can usually map a lot more MMIO space at once if
memory ranges happen to be mapped with block mappings.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +-
 arch/arm64/kvm/hyp/include/nvhe/mm.h          | 13 +++---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 46 ++++++++-----------
 arch/arm64/kvm/hyp/nvhe/setup.c               | 16 ++-----
 arch/arm64/kvm/hyp/reserved_mem.c             |  3 +-
 5 files changed, 32 insertions(+), 48 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 42d81ec739fa..9c227d87c36d 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -23,7 +23,7 @@ extern struct host_kvm host_kvm;
 
 int __pkvm_prot_finalize(void);
 int __pkvm_mark_hyp(phys_addr_t start, phys_addr_t end);
-int kvm_host_prepare_stage2(void *mem_pgt_pool, void *dev_pgt_pool);
+int kvm_host_prepare_stage2(void *pgt_pool_base);
 void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
 
 static __always_inline void __load_host_stage2(void)
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 0095f6289742..8ec3a5a7744b 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -78,19 +78,20 @@ static inline unsigned long hyp_s1_pgtable_pages(void)
 	return res;
 }
 
-static inline unsigned long host_s2_mem_pgtable_pages(void)
+static inline unsigned long host_s2_pgtable_pages(void)
 {
+	unsigned long res;
+
 	/*
 	 * Include an extra 16 pages to safely upper-bound the worst case of
 	 * concatenated pgds.
 	 */
-	return __hyp_pgtable_total_pages() + 16;
-}
+	res = __hyp_pgtable_total_pages() + 16;
 
-static inline unsigned long host_s2_dev_pgtable_pages(void)
-{
 	/* Allow 1 GiB for MMIO mappings */
-	return __hyp_pgtable_max_pages(SZ_1G >> PAGE_SHIFT);
+	res += __hyp_pgtable_max_pages(SZ_1G >> PAGE_SHIFT);
+
+	return res;
 }
 
 #endif /* __KVM_HYP_MM_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index e342f7f4f4fb..fdd5b5702e8a 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -23,8 +23,7 @@ extern unsigned long hyp_nr_cpus;
 
 struct host_kvm host_kvm;
 
-struct hyp_pool host_s2_mem;
-struct hyp_pool host_s2_dev;
+struct hyp_pool host_s2_pool;
 
 /*
  * Copies of the host's CPU features registers holding sanitized values.
@@ -36,7 +35,7 @@ static const u8 pkvm_hyp_id = 1;
 
 static void *host_s2_zalloc_pages_exact(size_t size)
 {
-	return hyp_alloc_pages(&host_s2_mem, get_order(size));
+	return hyp_alloc_pages(&host_s2_pool, get_order(size));
 }
 
 static void *host_s2_zalloc_page(void *pool)
@@ -44,20 +43,14 @@ static void *host_s2_zalloc_page(void *pool)
 	return hyp_alloc_pages(pool, 0);
 }
 
-static int prepare_s2_pools(void *mem_pgt_pool, void *dev_pgt_pool)
+static int prepare_s2_pool(void *pgt_pool_base)
 {
 	unsigned long nr_pages, pfn;
 	int ret;
 
-	pfn = hyp_virt_to_pfn(mem_pgt_pool);
-	nr_pages = host_s2_mem_pgtable_pages();
-	ret = hyp_pool_init(&host_s2_mem, pfn, nr_pages, 0);
-	if (ret)
-		return ret;
-
-	pfn = hyp_virt_to_pfn(dev_pgt_pool);
-	nr_pages = host_s2_dev_pgtable_pages();
-	ret = hyp_pool_init(&host_s2_dev, pfn, nr_pages, 0);
+	pfn = hyp_virt_to_pfn(pgt_pool_base);
+	nr_pages = host_s2_pgtable_pages();
+	ret = hyp_pool_init(&host_s2_pool, pfn, nr_pages, 0);
 	if (ret)
 		return ret;
 
@@ -86,7 +79,7 @@ static void prepare_host_vtcr(void)
 					  id_aa64mmfr1_el1_sys_val, phys_shift);
 }
 
-int kvm_host_prepare_stage2(void *mem_pgt_pool, void *dev_pgt_pool)
+int kvm_host_prepare_stage2(void *pgt_pool_base)
 {
 	struct kvm_s2_mmu *mmu = &host_kvm.arch.mmu;
 	int ret;
@@ -94,7 +87,7 @@ int kvm_host_prepare_stage2(void *mem_pgt_pool, void *dev_pgt_pool)
 	prepare_host_vtcr();
 	hyp_spin_lock_init(&host_kvm.lock);
 
-	ret = prepare_s2_pools(mem_pgt_pool, dev_pgt_pool);
+	ret = prepare_s2_pool(pgt_pool_base);
 	if (ret)
 		return ret;
 
@@ -199,11 +192,10 @@ static bool range_is_memory(u64 start, u64 end)
 }
 
 static inline int __host_stage2_idmap(u64 start, u64 end,
-				      enum kvm_pgtable_prot prot,
-				      struct hyp_pool *pool)
+				      enum kvm_pgtable_prot prot)
 {
 	return kvm_pgtable_stage2_map(&host_kvm.pgt, start, end - start, start,
-				      prot, pool);
+				      prot, &host_s2_pool);
 }
 
 static int host_stage2_idmap(u64 addr)
@@ -211,7 +203,6 @@ static int host_stage2_idmap(u64 addr)
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R | KVM_PGTABLE_PROT_W;
 	struct kvm_mem_range range;
 	bool is_memory = find_mem_range(addr, &range);
-	struct hyp_pool *pool = is_memory ? &host_s2_mem : &host_s2_dev;
 	int ret;
 
 	if (is_memory)
@@ -222,22 +213,21 @@ static int host_stage2_idmap(u64 addr)
 	if (ret)
 		goto unlock;
 
-	ret = __host_stage2_idmap(range.start, range.end, prot, pool);
-	if (is_memory || ret != -ENOMEM)
+	ret = __host_stage2_idmap(range.start, range.end, prot);
+	if (ret != -ENOMEM)
 		goto unlock;
 
 	/*
-	 * host_s2_mem has been provided with enough pages to cover all of
-	 * memory with page granularity, so we should never hit the ENOMEM case.
-	 * However, it is difficult to know how much of the MMIO range we will
-	 * need to cover upfront, so we may need to 'recycle' the pages if we
-	 * run out.
+	 * The pool has been provided with enough pages to cover all of memory
+	 * with page granularity, but it is difficult to know how much of the
+	 * MMIO range we will need to cover upfront, so we may need to 'recycle'
+	 * the pages if we run out.
 	 */
 	ret = host_stage2_unmap_dev_all();
 	if (ret)
 		goto unlock;
 
-	ret = __host_stage2_idmap(range.start, range.end, prot, pool);
+	ret = __host_stage2_idmap(range.start, range.end, prot);
 
 unlock:
 	hyp_spin_unlock(&host_kvm.lock);
@@ -258,7 +248,7 @@ int __pkvm_mark_hyp(phys_addr_t start, phys_addr_t end)
 
 	hyp_spin_lock(&host_kvm.lock);
 	ret = kvm_pgtable_stage2_set_owner(&host_kvm.pgt, start, end - start,
-					   &host_s2_mem, pkvm_hyp_id);
+					   &host_s2_pool, pkvm_hyp_id);
 	hyp_spin_unlock(&host_kvm.lock);
 
 	return ret != -EAGAIN ? ret : 0;
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 7488f53b0aa2..709cb3d19eb7 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -25,8 +25,7 @@ unsigned long hyp_nr_cpus;
 
 static void *vmemmap_base;
 static void *hyp_pgt_base;
-static void *host_s2_mem_pgt_base;
-static void *host_s2_dev_pgt_base;
+static void *host_s2_pgt_base;
 
 static int divide_memory_pool(void *virt, unsigned long size)
 {
@@ -45,14 +44,9 @@ static int divide_memory_pool(void *virt, unsigned long size)
 	if (!hyp_pgt_base)
 		return -ENOMEM;
 
-	nr_pages = host_s2_mem_pgtable_pages();
-	host_s2_mem_pgt_base = hyp_early_alloc_contig(nr_pages);
-	if (!host_s2_mem_pgt_base)
-		return -ENOMEM;
-
-	nr_pages = host_s2_dev_pgtable_pages();
-	host_s2_dev_pgt_base = hyp_early_alloc_contig(nr_pages);
-	if (!host_s2_dev_pgt_base)
+	nr_pages = host_s2_pgtable_pages();
+	host_s2_pgt_base = hyp_early_alloc_contig(nr_pages);
+	if (!host_s2_pgt_base)
 		return -ENOMEM;
 
 	return 0;
@@ -158,7 +152,7 @@ void __noreturn __pkvm_init_finalise(void)
 	if (ret)
 		goto out;
 
-	ret = kvm_host_prepare_stage2(host_s2_mem_pgt_base, host_s2_dev_pgt_base);
+	ret = kvm_host_prepare_stage2(host_s2_pgt_base);
 	if (ret)
 		goto out;
 
diff --git a/arch/arm64/kvm/hyp/reserved_mem.c b/arch/arm64/kvm/hyp/reserved_mem.c
index 83ca23ac259b..d654921dd09b 100644
--- a/arch/arm64/kvm/hyp/reserved_mem.c
+++ b/arch/arm64/kvm/hyp/reserved_mem.c
@@ -71,8 +71,7 @@ void __init kvm_hyp_reserve(void)
 	}
 
 	hyp_mem_pages += hyp_s1_pgtable_pages();
-	hyp_mem_pages += host_s2_mem_pgtable_pages();
-	hyp_mem_pages += host_s2_dev_pgtable_pages();
+	hyp_mem_pages += host_s2_pgtable_pages();
 
 	/*
 	 * The hyp_vmemmap needs to be backed by pages, but these pages

From patchwork Thu May 27 12:51:32 2021
Date: Thu, 27 May 2021 12:51:32 +0000
In-Reply-To: <20210527125134.2116404-1-qperret@google.com>
Message-Id: <20210527125134.2116404-6-qperret@google.com>
References: <20210527125134.2116404-1-qperret@google.com>
Subject: [PATCH 5/7] KVM: arm64: Remove hyp_pool pointer from struct hyp_page
From: Quentin Perret
To: maz@kernel.org, will@kernel.org, james.morse@arm.com,
    alexandru.elisei@arm.com, catalin.marinas@arm.com, suzuki.poulose@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    kernel-team@android.com, linux-kernel@vger.kernel.org

Each struct hyp_page currently contains a pointer to a hyp_pool struct
where the page should be freed if its refcount reaches 0.
However, this information can always be inferred from the context in the
EL2 code, so drop the pointer to save a few bytes in the vmemmap.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h    |  4 ++--
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  2 --
 arch/arm64/kvm/hyp/nvhe/mem_protect.c    | 13 +++++++++++--
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     |  7 ++-----
 arch/arm64/kvm/hyp/nvhe/setup.c          | 14 ++++++++++++--
 5 files changed, 27 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index aada4d97de49..9ed374648364 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -42,8 +42,8 @@ static inline void hyp_set_page_refcounted(struct hyp_page *p)
 
 /* Allocation */
 void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order);
-void hyp_get_page(void *addr);
-void hyp_put_page(void *addr);
+void hyp_get_page(void *addr, struct hyp_pool *pool);
+void hyp_put_page(void *addr, struct hyp_pool *pool);
 
 /* Used pages cannot be freed */
 int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 7691ab495eb4..991636be2f46 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -7,11 +7,9 @@
 
 #include <linux/types.h>
 
-struct hyp_pool;
 struct hyp_page {
 	unsigned int refcount;
 	unsigned int order;
-	struct hyp_pool *pool;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index fdd5b5702e8a..3603311eb41c 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -42,6 +42,15 @@ static void *host_s2_zalloc_page(void *pool)
 	return hyp_alloc_pages(pool, 0);
 }
 
+static void host_s2_get_page(void *addr)
+{
+	hyp_get_page(addr, &host_s2_pool);
+}
+
+static void host_s2_put_page(void *addr)
+{
+	hyp_put_page(addr, &host_s2_pool);
+}
+
 static int prepare_s2_pool(void *pgt_pool_base)
 {
@@ -60,8 +69,8 @@ static int prepare_s2_pool(void *pgt_pool_base)
 		.phys_to_virt = hyp_phys_to_virt,
 		.virt_to_phys = hyp_virt_to_phys,
 		.page_count = hyp_page_count,
-		.get_page = hyp_get_page,
-		.put_page = hyp_put_page,
+		.get_page = host_s2_get_page,
+		.put_page = host_s2_put_page,
 	};
 
 	return 0;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index ce7379f1480b..e453108a2d95 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -150,20 +150,18 @@ static void __hyp_put_page(struct hyp_pool *pool, struct hyp_page *p)
 		__hyp_attach_page(pool, p);
 }
 
-void hyp_put_page(void *addr)
+void hyp_put_page(void *addr, struct hyp_pool *pool)
 {
 	struct hyp_page *p = hyp_virt_to_page(addr);
-	struct hyp_pool *pool = hyp_page_to_pool(p);
 
 	hyp_spin_lock(&pool->lock);
 	__hyp_put_page(pool, p);
 	hyp_spin_unlock(&pool->lock);
 }
 
-void hyp_get_page(void *addr)
+void hyp_get_page(void *addr, struct hyp_pool *pool)
 {
 	struct hyp_page *p = hyp_virt_to_page(addr);
-	struct hyp_pool *pool = hyp_page_to_pool(p);
 
 	hyp_spin_lock(&pool->lock);
 	hyp_page_ref_inc(p);
@@ -212,7 +210,6 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
 	/* Init the vmemmap portion */
 	p = hyp_phys_to_page(phys);
 	for (i = 0; i < nr_pages; i++) {
-		p[i].pool = pool;
 		p[i].order = 0;
 		hyp_set_page_refcounted(&p[i]);
 	}
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 709cb3d19eb7..bf61abd4a330 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -137,6 +137,16 @@ static void *hyp_zalloc_hyp_page(void *arg)
 	return hyp_alloc_pages(&hpool, 0);
 }
 
+static void hpool_get_page(void *addr)
+{
+	hyp_get_page(addr, &hpool);
+}
+
+static void hpool_put_page(void *addr)
+{
+	hyp_put_page(addr, &hpool);
+}
+
 void __noreturn __pkvm_init_finalise(void)
 {
 	struct kvm_host_data *host_data = this_cpu_ptr(&kvm_host_data);
@@ -160,8 +170,8 @@ void __noreturn __pkvm_init_finalise(void)
 		.zalloc_page = hyp_zalloc_hyp_page,
 		.phys_to_virt = hyp_phys_to_virt,
 		.virt_to_phys = hyp_virt_to_phys,
-		.get_page = hyp_get_page,
-		.put_page = hyp_put_page,
+		.get_page = hpool_get_page,
+		.put_page = hpool_put_page,
 	};
 
 	pkvm_pgtable.mm_ops = &pkvm_pgtable_mm_ops;

From patchwork Thu May 27 12:51:33 2021
Date: Thu, 27 May 2021 12:51:33 +0000
In-Reply-To: <20210527125134.2116404-1-qperret@google.com>
Message-Id: <20210527125134.2116404-7-qperret@google.com>
References: <20210527125134.2116404-1-qperret@google.com>
Subject: [PATCH 6/7] KVM: arm64: Use less bits for hyp_page order
From: Quentin Perret
To: maz@kernel.org, will@kernel.org, james.morse@arm.com,
    alexandru.elisei@arm.com, catalin.marinas@arm.com, suzuki.poulose@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    kernel-team@android.com, linux-kernel@vger.kernel.org

The hyp_page order is currently encoded on 4 bytes even though it is
guaranteed to be smaller than this.
Make it 2 bytes to reduce the hyp vmemmap overhead.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h    |  6 +++---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  2 +-
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 12 ++++++------
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index 9ed374648364..d420e5c0845f 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -7,7 +7,7 @@
 #include <nvhe/memory.h>
 #include <nvhe/spinlock.h>
 
-#define HYP_NO_ORDER	UINT_MAX
+#define HYP_NO_ORDER	USHRT_MAX
 
 struct hyp_pool {
 	/*
@@ -19,7 +19,7 @@ struct hyp_pool {
 	struct list_head free_area[MAX_ORDER];
 	phys_addr_t range_start;
 	phys_addr_t range_end;
-	unsigned int max_order;
+	unsigned short max_order;
 };
 
 static inline void hyp_page_ref_inc(struct hyp_page *p)
@@ -41,7 +41,7 @@ static inline void hyp_set_page_refcounted(struct hyp_page *p)
 }
 
 /* Allocation */
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order);
+void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order);
 void hyp_get_page(void *addr, struct hyp_pool *pool);
 void hyp_put_page(void *addr, struct hyp_pool *pool);
 
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 991636be2f46..3fe34fa30ea4 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -9,7 +9,7 @@
 
 struct hyp_page {
 	unsigned int refcount;
-	unsigned int order;
+	unsigned short order;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index e453108a2d95..2ba2c550ab03 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -32,7 +32,7 @@ u64 __hyp_vmemmap;
  */
 static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 					     struct hyp_page *p,
-					     unsigned int order)
+					     unsigned short order)
 {
 	phys_addr_t addr = hyp_page_to_phys(p);
 
@@ -51,7 +51,7 @@ static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 /* Find a buddy page currently available for allocation */
 static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
 					   struct hyp_page *p,
-					   unsigned int order)
+					   unsigned short order)
 {
 	struct hyp_page *buddy = __find_buddy_nocheck(pool, p, order);
 
@@ -93,7 +93,7 @@ static inline struct hyp_page *node_to_page(struct list_head *node)
 static void __hyp_attach_page(struct hyp_pool *pool,
 			      struct hyp_page *p)
 {
-	unsigned int order = p->order;
+	unsigned short order = p->order;
 	struct hyp_page *buddy;
 
 	memset(hyp_page_to_virt(p), 0, PAGE_SIZE << p->order);
@@ -123,7 +123,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 
 static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 					   struct hyp_page *p,
-					   unsigned int order)
+					   unsigned short order)
 {
 	struct hyp_page *buddy;
 
@@ -168,9 +168,9 @@ void hyp_get_page(void *addr, struct hyp_pool *pool)
 	hyp_spin_unlock(&pool->lock);
 }
 
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order)
+void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
 {
-	unsigned int i = order;
+	unsigned short i = order;
 	struct hyp_page *p;
 
 	hyp_spin_lock(&pool->lock);

From patchwork Thu May 27 12:51:34 2021
Date: Thu, 27 May 2021 12:51:34 +0000
In-Reply-To: <20210527125134.2116404-1-qperret@google.com>
Message-Id: <20210527125134.2116404-8-qperret@google.com>
References: <20210527125134.2116404-1-qperret@google.com>
Subject: [PATCH 7/7] KVM: arm64: Use less bits for hyp_page refcount
From: Quentin Perret
To: maz@kernel.org, will@kernel.org, james.morse@arm.com,
    alexandru.elisei@arm.com, catalin.marinas@arm.com, suzuki.poulose@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    kernel-team@android.com, linux-kernel@vger.kernel.org

The hyp_page refcount is currently encoded on 4 bytes even though we
never need to count that many objects in a page. Make it 2 bytes to save
some space in the vmemmap.

Since overflows are now more likely to happen, make sure to catch those
with a BUG in the increment function.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h    | 2 ++
 arch/arm64/kvm/hyp/include/nvhe/memory.h | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index d420e5c0845f..a82f73faf41e 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -24,6 +24,8 @@ struct hyp_pool {
 
 static inline void hyp_page_ref_inc(struct hyp_page *p)
 {
+	if (p->refcount == USHRT_MAX)
+		BUG();
 	p->refcount++;
 }
 
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 3fe34fa30ea4..592b7edb3edb 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -8,7 +8,7 @@
 #include <linux/types.h>
 
 struct hyp_page {
-	unsigned int refcount;
+	unsigned short refcount;
 	unsigned short order;
 };