From patchwork Wed Jun 2 09:43:41 2021
From: Quentin Perret <qperret@google.com>
Date: Wed, 2 Jun 2021 09:43:41 +0000
Subject: [PATCH v2 1/7] KVM: arm64: Move hyp_pool locking out of refcount helpers
Message-Id: <20210602094347.3730846-2-qperret@google.com>
In-Reply-To: <20210602094347.3730846-1-qperret@google.com>

The hyp_page refcount helpers currently rely on the hyp_pool lock for
serialization. However, this means the refcounts can't be changed from
the buddy allocator core, as it already holds the lock, so pages have
to go through odd transient states. For example, when a page is freed,
its refcount is set to 0 and the lock is transiently released before
the page can be attached to a free list in the buddy tree. This is
currently harmless, as the allocator checks the list node of each page
to see whether it is available for allocation, but it means the page
refcount can't be trusted to represent the state of the page even when
the pool lock is held.

To fix this, remove the pool locking from the refcount helpers and move
all the logic into the buddy allocator. This will simplify the removal
of the list node from struct hyp_page in a later patch.
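To make the new invariant concrete, here is a standalone userspace
sketch (editor's illustration using pthreads and made-up names, not the
EL2 code itself): because the refcount update and the free-list update
now happen in one critical section, a reader holding the pool lock can
trust a zero refcount to mean the page is on a free list.

/*
 * Userspace sketch of the invariant this patch establishes. All names
 * below are illustrative stand-ins for the hyp_pool equivalents.
 */
#include <assert.h>
#include <pthread.h>
#include <stdio.h>

struct page {
	unsigned int refcount;
	int on_free_list;	/* stands in for the buddy-tree attachment */
};

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

static void put_page(struct page *p)
{
	pthread_mutex_lock(&pool_lock);
	if (--p->refcount == 0)
		p->on_free_list = 1;	/* attach before dropping the lock */
	pthread_mutex_unlock(&pool_lock);
}

/* Any reader taking the lock can now trust the refcount. */
static int page_is_free(struct page *p)
{
	int free;

	pthread_mutex_lock(&pool_lock);
	free = (p->refcount == 0);
	assert(!free || p->on_free_list);	/* no transient state visible */
	pthread_mutex_unlock(&pool_lock);
	return free;
}

int main(void)
{
	struct page pg = { .refcount = 1, .on_free_list = 0 };

	put_page(&pg);
	printf("free: %d\n", page_is_free(&pg));
	return 0;
}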
Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h | 35 ----------------------
 arch/arm64/kvm/hyp/nvhe/page_alloc.c  | 43 +++++++++++++++++++-------
 2 files changed, 32 insertions(+), 46 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index 18a4494337bd..f2c84e4fa40f 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -22,41 +22,6 @@ struct hyp_pool {
 	unsigned int max_order;
 };
 
-static inline void hyp_page_ref_inc(struct hyp_page *p)
-{
-	struct hyp_pool *pool = hyp_page_to_pool(p);
-
-	hyp_spin_lock(&pool->lock);
-	p->refcount++;
-	hyp_spin_unlock(&pool->lock);
-}
-
-static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
-{
-	struct hyp_pool *pool = hyp_page_to_pool(p);
-	int ret;
-
-	hyp_spin_lock(&pool->lock);
-	p->refcount--;
-	ret = (p->refcount == 0);
-	hyp_spin_unlock(&pool->lock);
-
-	return ret;
-}
-
-static inline void hyp_set_page_refcounted(struct hyp_page *p)
-{
-	struct hyp_pool *pool = hyp_page_to_pool(p);
-
-	hyp_spin_lock(&pool->lock);
-	if (p->refcount) {
-		hyp_spin_unlock(&pool->lock);
-		BUG();
-	}
-	p->refcount = 1;
-	hyp_spin_unlock(&pool->lock);
-}
-
 /* Allocation */
 void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order);
 void hyp_get_page(void *addr);
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 237e03bf0cb1..d666f4789e31 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -93,15 +93,6 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 	list_add_tail(&p->node, &pool->free_area[order]);
 }
 
-static void hyp_attach_page(struct hyp_page *p)
-{
-	struct hyp_pool *pool = hyp_page_to_pool(p);
-
-	hyp_spin_lock(&pool->lock);
-	__hyp_attach_page(pool, p);
-	hyp_spin_unlock(&pool->lock);
-}
-
 static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 					   struct hyp_page *p,
 					   unsigned int order)
@@ -125,19 +116,49 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 	return p;
 }
 
+static inline void hyp_page_ref_inc(struct hyp_page *p)
+{
+	p->refcount++;
+}
+
+static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
+{
+	p->refcount--;
+	return (p->refcount == 0);
+}
+
+static inline void hyp_set_page_refcounted(struct hyp_page *p)
+{
+	BUG_ON(p->refcount);
+	p->refcount = 1;
+}
+
+/*
+ * Changes to the buddy tree and page refcounts must be done with the hyp_pool
+ * lock held. If a refcount change requires an update to the buddy tree (e.g.
+ * hyp_put_page()), both operations must be done within the same critical
+ * section to guarantee transient states (e.g. a page with null refcount but
+ * not yet attached to a free list) can't be observed by well-behaved readers.
+ */
 void hyp_put_page(void *addr)
 {
 	struct hyp_page *p = hyp_virt_to_page(addr);
+	struct hyp_pool *pool = hyp_page_to_pool(p);
 
+	hyp_spin_lock(&pool->lock);
 	if (hyp_page_ref_dec_and_test(p))
-		hyp_attach_page(p);
+		__hyp_attach_page(pool, p);
+	hyp_spin_unlock(&pool->lock);
 }
 
 void hyp_get_page(void *addr)
 {
 	struct hyp_page *p = hyp_virt_to_page(addr);
+	struct hyp_pool *pool = hyp_page_to_pool(p);
 
+	hyp_spin_lock(&pool->lock);
 	hyp_page_ref_inc(p);
+	hyp_spin_unlock(&pool->lock);
 }
 
 void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order)
@@ -159,8 +180,8 @@ void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order)
 	p = list_first_entry(&pool->free_area[i], struct hyp_page, node);
 	p = __hyp_extract_page(pool, p, order);
 
-	hyp_spin_unlock(&pool->lock);
 	hyp_set_page_refcounted(p);
+	hyp_spin_unlock(&pool->lock);
 
 	return hyp_page_to_virt(p);
 }

From patchwork Wed Jun 2 09:43:42 2021
From: Quentin Perret <qperret@google.com>
Date: Wed, 2 Jun 2021 09:43:42 +0000
Subject: [PATCH v2 2/7] KVM: arm64: Use refcount at hyp to check page availability
Message-Id: <20210602094347.3730846-3-qperret@google.com>
In-Reply-To: <20210602094347.3730846-1-qperret@google.com>

The hyp buddy allocator currently checks the struct hyp_page list node
to see whether a page is available for allocation when trying to
coalesce memory. Now that decrementing the refcount and attaching to
the buddy tree are done in the same critical section, we can rely on
the refcount of the buddy page being in sync, which allows us to
replace the list-node check with a refcount check. This will ease
removing the list node from struct hyp_page later on.
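For background, a block's buddy sits at a fixed, computable address, so
its metadata can simply be looked up and its refcount inspected. The
short demo below is illustrative only: it shows the standard buddy
arithmetic behind helpers like __find_buddy_nocheck(), not a copy of
the kernel implementation.

/* Standard buddy arithmetic: flip bit (PAGE_SHIFT + order) of the address. */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

static unsigned long buddy_of(unsigned long addr, unsigned int order)
{
	return addr ^ (PAGE_SIZE << order);
}

int main(void)
{
	/* The 16KiB (order-2) block at 0x8000 pairs with the one at 0xc000. */
	printf("buddy(0x8000, 2) = 0x%lx\n", buddy_of(0x8000, 2));
	/* Coalescing the pair yields the order-3 block at 0x8000. */
	return 0;
}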
Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/kvm/hyp/nvhe/page_alloc.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index d666f4789e31..2602577daa00 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -55,7 +55,7 @@ static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
 {
 	struct hyp_page *buddy = __find_buddy_nocheck(pool, p, order);
 
-	if (!buddy || buddy->order != order || list_empty(&buddy->node))
+	if (!buddy || buddy->order != order || buddy->refcount)
 		return NULL;
 
 	return buddy;
@@ -133,6 +133,12 @@ static inline void hyp_set_page_refcounted(struct hyp_page *p)
 	p->refcount = 1;
 }
 
+static void __hyp_put_page(struct hyp_pool *pool, struct hyp_page *p)
+{
+	if (hyp_page_ref_dec_and_test(p))
+		__hyp_attach_page(pool, p);
+}
+
 /*
  * Changes to the buddy tree and page refcounts must be done with the hyp_pool
  * lock held. If a refcount change requires an update to the buddy tree (e.g.
@@ -146,8 +152,7 @@ void hyp_put_page(void *addr)
 	struct hyp_pool *pool = hyp_page_to_pool(p);
 
 	hyp_spin_lock(&pool->lock);
-	if (hyp_page_ref_dec_and_test(p))
-		__hyp_attach_page(pool, p);
+	__hyp_put_page(pool, p);
 	hyp_spin_unlock(&pool->lock);
 }
 
@@ -202,15 +207,16 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
 
 	/* Init the vmemmap portion */
 	p = hyp_phys_to_page(phys);
-	memset(p, 0, sizeof(*p) * nr_pages);
 	for (i = 0; i < nr_pages; i++) {
 		p[i].pool = pool;
+		p[i].order = 0;
 		INIT_LIST_HEAD(&p[i].node);
+		hyp_set_page_refcounted(&p[i]);
 	}
 
 	/* Attach the unused pages to the buddy tree */
 	for (i = reserved_pages; i < nr_pages; i++)
-		__hyp_attach_page(pool, &p[i]);
+		__hyp_put_page(pool, &p[i]);
 
 	return 0;
 }

From patchwork Wed Jun 2 09:43:43 2021
From: Quentin Perret <qperret@google.com>
Date: Wed, 2 Jun 2021 09:43:43 +0000
Subject: [PATCH v2 3/7] KVM: arm64: Remove list_head from hyp_page
Message-Id: <20210602094347.3730846-4-qperret@google.com>
In-Reply-To: <20210602094347.3730846-1-qperret@google.com>
The list_head member of struct hyp_page is only needed when the page is
attached to a free-list, which by definition implies the page is free.
As such, nothing prevents us from using the page itself to store the
list_head, hence reducing the size of the vmemmap.
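The trick can be shown outside the kernel. The sketch below (editor's
illustration with hypothetical names and simplified types, not the
patch's code) stores the list node in the free page's own memory, so
the per-page metadata needs no embedded list_head.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct list_node {
	struct list_node *prev, *next;
};

/* Per-page metadata: no list node, so the vmemmap entry stays small. */
struct page_meta {
	unsigned int refcount;
	unsigned int order;
};

/* A free page's contents are unused, so the node can live inside it. */
static struct list_node *page_to_node(void *page_mem)
{
	return (struct list_node *)page_mem;
}

int main(void)
{
	void *page_mem = calloc(1, 4096);	/* stands in for a free 4KiB page */
	struct list_node *node = page_to_node(page_mem);

	node->prev = node->next = node;		/* INIT_LIST_HEAD equivalent */
	printf("meta is %zu bytes; the %zu-byte node lives in the page\n",
	       sizeof(struct page_meta), sizeof(*node));

	/* On removal, the node bytes are wiped so the page stays zeroed. */
	memset(node, 0, sizeof(*node));
	free(page_mem);
	return 0;
}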
Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  1 -
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 39 ++++++++++++++++++++----
 2 files changed, 33 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index fd78bde939ee..7691ab495eb4 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -12,7 +12,6 @@ struct hyp_page {
 	unsigned int refcount;
 	unsigned int order;
 	struct hyp_pool *pool;
-	struct list_head node;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 2602577daa00..34f0eb026dd2 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -62,6 +62,34 @@ static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
 }
 
+/*
+ * Pages that are available for allocation are tracked in free-lists, so we use
+ * the pages themselves to store the list nodes to avoid wasting space. As the
+ * allocator always returns zeroed pages (which are zeroed on the hyp_put_page()
+ * path to optimize allocation speed), we also need to clean-up the list node in
+ * each page when we take it out of the list.
+ */
+static inline void page_remove_from_list(struct hyp_page *p)
+{
+	struct list_head *node = hyp_page_to_virt(p);
+
+	__list_del_entry(node);
+	memset(node, 0, sizeof(*node));
+}
+
+static inline void page_add_to_list(struct hyp_page *p, struct list_head *head)
+{
+	struct list_head *node = hyp_page_to_virt(p);
+
+	INIT_LIST_HEAD(node);
+	list_add_tail(node, head);
+}
+
+static inline struct hyp_page *node_to_page(struct list_head *node)
+{
+	return hyp_virt_to_page(node);
+}
+
 static void __hyp_attach_page(struct hyp_pool *pool,
 			      struct hyp_page *p)
 {
@@ -83,14 +111,14 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 			break;
 
 		/* Take the buddy out of its list, and coallesce with @p */
-		list_del_init(&buddy->node);
+		page_remove_from_list(buddy);
 		buddy->order = HYP_NO_ORDER;
 		p = min(p, buddy);
 	}
 
 	/* Mark the new head, and insert it */
 	p->order = order;
-	list_add_tail(&p->node, &pool->free_area[order]);
+	page_add_to_list(p, &pool->free_area[order]);
 }
 
 static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
@@ -99,7 +127,7 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 {
 	struct hyp_page *buddy;
 
-	list_del_init(&p->node);
+	page_remove_from_list(p);
 	while (p->order > order) {
 		/*
 		 * The buddy of order n - 1 currently has HYP_NO_ORDER as it
@@ -110,7 +138,7 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 		p->order--;
 		buddy = __find_buddy_nocheck(pool, p, p->order);
 		buddy->order = p->order;
-		list_add_tail(&buddy->node, &pool->free_area[buddy->order]);
+		page_add_to_list(buddy, &pool->free_area[buddy->order]);
 	}
 
 	return p;
@@ -182,7 +210,7 @@ void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order)
 	}
 
 	/* Extract it from the tree at the right order */
-	p = list_first_entry(&pool->free_area[i], struct hyp_page, node);
+	p = node_to_page(pool->free_area[i].next);
 	p = __hyp_extract_page(pool, p, order);
 
 	hyp_set_page_refcounted(p);
@@ -210,7 +238,6 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
 	for (i = 0; i < nr_pages; i++) {
 		p[i].pool = pool;
 		p[i].order = 0;
-		INIT_LIST_HEAD(&p[i].node);
 		hyp_set_page_refcounted(&p[i]);
 	}

From patchwork Wed Jun 2 09:43:44 2021
From: Quentin Perret <qperret@google.com>
Date: Wed, 2 Jun 2021 09:43:44 +0000
Subject: [PATCH v2 4/7] KVM: arm64: Unify MMIO and mem host stage-2 pools
Message-Id: <20210602094347.3730846-5-qperret@google.com>
In-Reply-To: <20210602094347.3730846-1-qperret@google.com>

We currently maintain two separate memory pools for the host stage-2:
one for pages used in the page-table when mapping memory regions, and
the other to map MMIO regions. The former is large enough to map all of
memory with page granularity, and the latter can cover an arbitrary
portion of IPA space but allows pages to be 'recycled'. However, this
split makes accounting difficult to manage, as pages at intermediate
levels of the page-table may be used to map both memory and MMIO
regions.

Simplify the scheme by merging both pools into one. This means we can
now hit the -ENOMEM case in the memory abort path, but we're still
guaranteed forward progress in the worst case by unmapping MMIO
regions. On the plus side, this also means we can usually map a lot
more MMIO space at once if memory ranges happen to be mapped with
block mappings.
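As a rough sanity check on the sizing, the helper below (editor's
arithmetic, not kernel code, assuming 4KiB pages and 512 entries per
table level) estimates the worst-case number of page-table pages needed
to map the 1GiB MMIO allowance with page granularity: 512 level-3
tables plus one table at the level above, about 513 pages on top of the
budget that already covers all of memory. The kernel's own
__hyp_pgtable_max_pages() additionally accounts for alignment and pgd
concatenation.

#include <stdio.h>

#define ENTRIES_PER_TABLE	512UL

/* Worst-case table pages needed to map nr_pages with page granularity. */
static unsigned long max_table_pages(unsigned long nr_pages)
{
	unsigned long total = 0;

	while (nr_pages > 1) {
		nr_pages = (nr_pages + ENTRIES_PER_TABLE - 1) / ENTRIES_PER_TABLE;
		total += nr_pages;
	}
	return total;
}

int main(void)
{
	unsigned long gib_pages = (1UL << 30) / 4096;	/* 262144 */

	/* Prints 513: 512 level-3 tables + 1 level-2 table. */
	printf("1GiB of MMIO => up to %lu table pages\n",
	       max_table_pages(gib_pages));
	return 0;
}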
Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  2 +-
 arch/arm64/kvm/hyp/include/nvhe/mm.h          | 13 +++---
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 46 ++++++++-----------
 arch/arm64/kvm/hyp/nvhe/setup.c               | 16 ++-----
 arch/arm64/kvm/hyp/reserved_mem.c             |  3 +-
 5 files changed, 32 insertions(+), 48 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 42d81ec739fa..9c227d87c36d 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -23,7 +23,7 @@ extern struct host_kvm host_kvm;
 
 int __pkvm_prot_finalize(void);
 int __pkvm_mark_hyp(phys_addr_t start, phys_addr_t end);
-int kvm_host_prepare_stage2(void *mem_pgt_pool, void *dev_pgt_pool);
+int kvm_host_prepare_stage2(void *pgt_pool_base);
 void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
 
 static __always_inline void __load_host_stage2(void)
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 0095f6289742..8ec3a5a7744b 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -78,19 +78,20 @@ static inline unsigned long hyp_s1_pgtable_pages(void)
 	return res;
 }
 
-static inline unsigned long host_s2_mem_pgtable_pages(void)
+static inline unsigned long host_s2_pgtable_pages(void)
 {
+	unsigned long res;
+
 	/*
 	 * Include an extra 16 pages to safely upper-bound the worst case of
 	 * concatenated pgds.
 	 */
-	return __hyp_pgtable_total_pages() + 16;
-}
+	res = __hyp_pgtable_total_pages() + 16;
 
-static inline unsigned long host_s2_dev_pgtable_pages(void)
-{
 	/* Allow 1 GiB for MMIO mappings */
-	return __hyp_pgtable_max_pages(SZ_1G >> PAGE_SHIFT);
+	res += __hyp_pgtable_max_pages(SZ_1G >> PAGE_SHIFT);
+
+	return res;
 }
 
 #endif /* __KVM_HYP_MM_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index e342f7f4f4fb..fdd5b5702e8a 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -23,8 +23,7 @@
 extern unsigned long hyp_nr_cpus;
 struct host_kvm host_kvm;
 
-struct hyp_pool host_s2_mem;
-struct hyp_pool host_s2_dev;
+struct hyp_pool host_s2_pool;
 
 /*
  * Copies of the host's CPU features registers holding sanitized values.
@@ -36,7 +35,7 @@ static const u8 pkvm_hyp_id = 1;
 
 static void *host_s2_zalloc_pages_exact(size_t size)
 {
-	return hyp_alloc_pages(&host_s2_mem, get_order(size));
+	return hyp_alloc_pages(&host_s2_pool, get_order(size));
 }
 
 static void *host_s2_zalloc_page(void *pool)
@@ -44,20 +43,14 @@ static void *host_s2_zalloc_page(void *pool)
 	return hyp_alloc_pages(pool, 0);
 }
 
-static int prepare_s2_pools(void *mem_pgt_pool, void *dev_pgt_pool)
+static int prepare_s2_pool(void *pgt_pool_base)
 {
 	unsigned long nr_pages, pfn;
 	int ret;
 
-	pfn = hyp_virt_to_pfn(mem_pgt_pool);
-	nr_pages = host_s2_mem_pgtable_pages();
-	ret = hyp_pool_init(&host_s2_mem, pfn, nr_pages, 0);
-	if (ret)
-		return ret;
-
-	pfn = hyp_virt_to_pfn(dev_pgt_pool);
-	nr_pages = host_s2_dev_pgtable_pages();
-	ret = hyp_pool_init(&host_s2_dev, pfn, nr_pages, 0);
+	pfn = hyp_virt_to_pfn(pgt_pool_base);
+	nr_pages = host_s2_pgtable_pages();
+	ret = hyp_pool_init(&host_s2_pool, pfn, nr_pages, 0);
 	if (ret)
 		return ret;
 
@@ -86,7 +79,7 @@ static void prepare_host_vtcr(void)
 					  id_aa64mmfr1_el1_sys_val, phys_shift);
 }
 
-int kvm_host_prepare_stage2(void *mem_pgt_pool, void *dev_pgt_pool)
+int kvm_host_prepare_stage2(void *pgt_pool_base)
 {
 	struct kvm_s2_mmu *mmu = &host_kvm.arch.mmu;
 	int ret;
@@ -94,7 +87,7 @@ int kvm_host_prepare_stage2(void *mem_pgt_pool, void *dev_pgt_pool)
 	prepare_host_vtcr();
 	hyp_spin_lock_init(&host_kvm.lock);
 
-	ret = prepare_s2_pools(mem_pgt_pool, dev_pgt_pool);
+	ret = prepare_s2_pool(pgt_pool_base);
 	if (ret)
 		return ret;
 
@@ -199,11 +192,10 @@ static bool range_is_memory(u64 start, u64 end)
 }
 
 static inline int __host_stage2_idmap(u64 start, u64 end,
-				      enum kvm_pgtable_prot prot,
-				      struct hyp_pool *pool)
+				      enum kvm_pgtable_prot prot)
 {
 	return kvm_pgtable_stage2_map(&host_kvm.pgt, start, end - start, start,
-				      prot, pool);
+				      prot, &host_s2_pool);
 }
 
 static int host_stage2_idmap(u64 addr)
@@ -211,7 +203,6 @@ static int host_stage2_idmap(u64 addr)
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R | KVM_PGTABLE_PROT_W;
 	struct kvm_mem_range range;
 	bool is_memory = find_mem_range(addr, &range);
-	struct hyp_pool *pool = is_memory ? &host_s2_mem : &host_s2_dev;
 	int ret;
 
 	if (is_memory)
@@ -222,22 +213,21 @@ static int host_stage2_idmap(u64 addr)
 	if (ret)
 		goto unlock;
 
-	ret = __host_stage2_idmap(range.start, range.end, prot, pool);
-	if (is_memory || ret != -ENOMEM)
+	ret = __host_stage2_idmap(range.start, range.end, prot);
+	if (ret != -ENOMEM)
 		goto unlock;
 
 	/*
-	 * host_s2_mem has been provided with enough pages to cover all of
-	 * memory with page granularity, so we should never hit the ENOMEM case.
-	 * However, it is difficult to know how much of the MMIO range we will
-	 * need to cover upfront, so we may need to 'recycle' the pages if we
-	 * run out.
+	 * The pool has been provided with enough pages to cover all of memory
+	 * with page granularity, but it is difficult to know how much of the
+	 * MMIO range we will need to cover upfront, so we may need to 'recycle'
+	 * the pages if we run out.
 	 */
 	ret = host_stage2_unmap_dev_all();
 	if (ret)
 		goto unlock;
 
-	ret = __host_stage2_idmap(range.start, range.end, prot, pool);
+	ret = __host_stage2_idmap(range.start, range.end, prot);
 
 unlock:
 	hyp_spin_unlock(&host_kvm.lock);
@@ -258,7 +248,7 @@ int __pkvm_mark_hyp(phys_addr_t start, phys_addr_t end)
 
 	hyp_spin_lock(&host_kvm.lock);
 	ret = kvm_pgtable_stage2_set_owner(&host_kvm.pgt, start, end - start,
-					   &host_s2_mem, pkvm_hyp_id);
+					   &host_s2_pool, pkvm_hyp_id);
 	hyp_spin_unlock(&host_kvm.lock);
 
 	return ret != -EAGAIN ? ret : 0;
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 7488f53b0aa2..709cb3d19eb7 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -25,8 +25,7 @@ unsigned long hyp_nr_cpus;
 
 static void *vmemmap_base;
 static void *hyp_pgt_base;
-static void *host_s2_mem_pgt_base;
-static void *host_s2_dev_pgt_base;
+static void *host_s2_pgt_base;
 
 static int divide_memory_pool(void *virt, unsigned long size)
 {
@@ -45,14 +44,9 @@ static int divide_memory_pool(void *virt, unsigned long size)
 	if (!hyp_pgt_base)
 		return -ENOMEM;
 
-	nr_pages = host_s2_mem_pgtable_pages();
-	host_s2_mem_pgt_base = hyp_early_alloc_contig(nr_pages);
-	if (!host_s2_mem_pgt_base)
-		return -ENOMEM;
-
-	nr_pages = host_s2_dev_pgtable_pages();
-	host_s2_dev_pgt_base = hyp_early_alloc_contig(nr_pages);
-	if (!host_s2_dev_pgt_base)
+	nr_pages = host_s2_pgtable_pages();
+	host_s2_pgt_base = hyp_early_alloc_contig(nr_pages);
+	if (!host_s2_pgt_base)
 		return -ENOMEM;
 
 	return 0;
@@ -158,7 +152,7 @@ void __noreturn __pkvm_init_finalise(void)
 	if (ret)
 		goto out;
 
-	ret = kvm_host_prepare_stage2(host_s2_mem_pgt_base, host_s2_dev_pgt_base);
+	ret = kvm_host_prepare_stage2(host_s2_pgt_base);
 	if (ret)
 		goto out;
 
diff --git a/arch/arm64/kvm/hyp/reserved_mem.c b/arch/arm64/kvm/hyp/reserved_mem.c
index 83ca23ac259b..d654921dd09b 100644
--- a/arch/arm64/kvm/hyp/reserved_mem.c
+++ b/arch/arm64/kvm/hyp/reserved_mem.c
@@ -71,8 +71,7 @@ void __init kvm_hyp_reserve(void)
 	}
 
 	hyp_mem_pages += hyp_s1_pgtable_pages();
-	hyp_mem_pages += host_s2_mem_pgtable_pages();
-	hyp_mem_pages += host_s2_dev_pgtable_pages();
+	hyp_mem_pages += host_s2_pgtable_pages();
 
 	/*
 	 * The hyp_vmemmap needs to be backed by pages, but these pages

From patchwork Wed Jun 2 09:43:45 2021
From: Quentin Perret <qperret@google.com>
Date: Wed, 2 Jun 2021 09:43:45 +0000
Subject: [PATCH v2 5/7] KVM: arm64: Remove hyp_pool pointer from struct hyp_page
Message-Id: <20210602094347.3730846-6-qperret@google.com>
In-Reply-To: <20210602094347.3730846-1-qperret@google.com>
Each struct hyp_page currently contains a pointer to the hyp_pool
struct where the page should be freed if its refcount reaches 0.
However, this information can always be inferred from the context in
the EL2 code, so drop the pointer to save a few bytes in the vmemmap.
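The pattern used here can be shown in miniature: once the pool becomes
an explicit parameter, callbacks with a fixed void (*)(void *) shape,
such as the kvm_pgtable_mm_ops get/put hooks, are bound to a specific
pool via thin wrappers instead of a per-page pool pointer. A simplified
userspace sketch (illustrative types and names, not the kernel's):

#include <stdio.h>

struct pool { const char *name; };

static struct pool host_s2_pool = { "host_s2" };

static void get_page(struct pool *pool, void *addr)
{
	printf("get %p from pool %s\n", addr, pool->name);
}

/* Fixed-signature callback; the pool is baked into the wrapper. */
static void host_s2_get_page(void *addr)
{
	get_page(&host_s2_pool, addr);
}

struct mm_ops {
	void (*get_page)(void *addr);
};

int main(void)
{
	struct mm_ops ops = { .get_page = host_s2_get_page };
	int dummy;

	ops.get_page(&dummy);
	return 0;
}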
Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h    |  4 ++--
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  2 --
 arch/arm64/kvm/hyp/nvhe/mem_protect.c    | 14 ++++++++++++--
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     |  7 ++-----
 arch/arm64/kvm/hyp/nvhe/setup.c          | 14 ++++++++++++--
 5 files changed, 28 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index f2c84e4fa40f..3ea7bfb6c380 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -24,8 +24,8 @@ struct hyp_pool {
 
 /* Allocation */
 void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order);
-void hyp_get_page(void *addr);
-void hyp_put_page(void *addr);
+void hyp_get_page(struct hyp_pool *pool, void *addr);
+void hyp_put_page(struct hyp_pool *pool, void *addr);
 
 /* Used pages cannot be freed */
 int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 7691ab495eb4..991636be2f46 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -7,11 +7,9 @@
 
 #include
 
-struct hyp_pool;
 struct hyp_page {
 	unsigned int refcount;
 	unsigned int order;
-	struct hyp_pool *pool;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index fdd5b5702e8a..7727252890d8 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -43,6 +43,16 @@ static void *host_s2_zalloc_page(void *pool)
 	return hyp_alloc_pages(pool, 0);
 }
 
+static void host_s2_get_page(void *addr)
+{
+	hyp_get_page(&host_s2_pool, addr);
+}
+
+static void host_s2_put_page(void *addr)
+{
+	hyp_put_page(&host_s2_pool, addr);
+}
+
 static int prepare_s2_pool(void *pgt_pool_base)
 {
 	unsigned long nr_pages, pfn;
@@ -60,8 +70,8 @@ static int prepare_s2_pool(void *pgt_pool_base)
 		.phys_to_virt = hyp_phys_to_virt,
 		.virt_to_phys = hyp_virt_to_phys,
 		.page_count = hyp_page_count,
-		.get_page = hyp_get_page,
-		.put_page = hyp_put_page,
+		.get_page = host_s2_get_page,
+		.put_page = host_s2_put_page,
 	};
 
 	return 0;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 34f0eb026dd2..e3689def7033 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -174,20 +174,18 @@ static void __hyp_put_page(struct hyp_pool *pool, struct hyp_page *p)
  * section to guarantee transient states (e.g. a page with null refcount but
  * not yet attached to a free list) can't be observed by well-behaved readers.
  */
-void hyp_put_page(void *addr)
+void hyp_put_page(struct hyp_pool *pool, void *addr)
 {
 	struct hyp_page *p = hyp_virt_to_page(addr);
-	struct hyp_pool *pool = hyp_page_to_pool(p);
 
 	hyp_spin_lock(&pool->lock);
 	__hyp_put_page(pool, p);
 	hyp_spin_unlock(&pool->lock);
 }
 
-void hyp_get_page(void *addr)
+void hyp_get_page(struct hyp_pool *pool, void *addr)
 {
 	struct hyp_page *p = hyp_virt_to_page(addr);
-	struct hyp_pool *pool = hyp_page_to_pool(p);
 
 	hyp_spin_lock(&pool->lock);
 	hyp_page_ref_inc(p);
@@ -236,7 +234,6 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
 	/* Init the vmemmap portion */
 	p = hyp_phys_to_page(phys);
 	for (i = 0; i < nr_pages; i++) {
-		p[i].pool = pool;
 		p[i].order = 0;
 		hyp_set_page_refcounted(&p[i]);
 	}
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 709cb3d19eb7..dee099871865 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -137,6 +137,16 @@ static void *hyp_zalloc_hyp_page(void *arg)
 	return hyp_alloc_pages(&hpool, 0);
 }
 
+static void hpool_get_page(void *addr)
+{
+	hyp_get_page(&hpool, addr);
+}
+
+static void hpool_put_page(void *addr)
+{
+	hyp_put_page(&hpool, addr);
+}
+
 void __noreturn __pkvm_init_finalise(void)
 {
 	struct kvm_host_data *host_data = this_cpu_ptr(&kvm_host_data);
@@ -160,8 +170,8 @@ void __noreturn __pkvm_init_finalise(void)
 		.zalloc_page = hyp_zalloc_hyp_page,
 		.phys_to_virt = hyp_phys_to_virt,
 		.virt_to_phys = hyp_virt_to_phys,
-		.get_page = hyp_get_page,
-		.put_page = hyp_put_page,
+		.get_page = hpool_get_page,
+		.put_page = hpool_put_page,
 	};
 
 	pkvm_pgtable.mm_ops = &pkvm_pgtable_mm_ops;

From patchwork Wed Jun 2 09:43:46 2021
From: Quentin Perret <qperret@google.com>
Date: Wed, 2 Jun 2021 09:43:46 +0000
Subject: [PATCH v2 6/7] KVM: arm64: Use less bits for hyp_page order
Message-Id: <20210602094347.3730846-7-qperret@google.com>
In-Reply-To: <20210602094347.3730846-1-qperret@google.com>
The hyp_page order is currently encoded in 4 bytes even though it is
guaranteed to be smaller than this. Make it 2 bytes to reduce the hyp
vmemmap overhead.

Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h    |  6 +++---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  2 +-
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 12 ++++++------
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index 3ea7bfb6c380..fb0f523d1492 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -7,7 +7,7 @@
 #include
 #include
 
-#define HYP_NO_ORDER	UINT_MAX
+#define HYP_NO_ORDER	USHRT_MAX
 
 struct hyp_pool {
 	/*
@@ -19,11 +19,11 @@ struct hyp_pool {
 	struct list_head free_area[MAX_ORDER];
 	phys_addr_t range_start;
 	phys_addr_t range_end;
-	unsigned int max_order;
+	unsigned short max_order;
 };
 
 /* Allocation */
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order);
+void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order);
 void hyp_get_page(struct hyp_pool *pool, void *addr);
 void hyp_put_page(struct hyp_pool *pool, void *addr);
 
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 991636be2f46..3fe34fa30ea4 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -9,7 +9,7 @@
 
 struct hyp_page {
 	unsigned int refcount;
-	unsigned int order;
+	unsigned short order;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index e3689def7033..be07055bbc10 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -32,7 +32,7 @@ u64 __hyp_vmemmap;
  */
 static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 					     struct hyp_page *p,
-					     unsigned int order)
+					     unsigned short order)
 {
 	phys_addr_t addr = hyp_page_to_phys(p);
 
@@ -51,7 +51,7 @@ static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 /* Find a buddy page currently available for allocation */
 static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
 					   struct hyp_page *p,
-					   unsigned int order)
+					   unsigned short order)
 {
 	struct hyp_page *buddy = __find_buddy_nocheck(pool, p, order);
 
@@ -93,7 +93,7 @@ static inline struct hyp_page *node_to_page(struct list_head *node)
 static void __hyp_attach_page(struct hyp_pool *pool,
 			      struct hyp_page *p)
 {
-	unsigned int order = p->order;
+	unsigned short order = p->order;
 	struct hyp_page *buddy;
 
 	memset(hyp_page_to_virt(p), 0, PAGE_SIZE << p->order);
@@ -123,7 +123,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 
 static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 					   struct hyp_page *p,
-					   unsigned int order)
+					   unsigned short order)
 {
 	struct hyp_page *buddy;
 
@@ -192,9 +192,9 @@ void hyp_get_page(struct hyp_pool *pool, void *addr)
 	hyp_spin_unlock(&pool->lock);
 }
 
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order)
+void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
 {
-	unsigned int i = order;
+	unsigned short i = order;
 	struct hyp_page *p;
 
 	hyp_spin_lock(&pool->lock);

From patchwork Wed Jun 2 09:43:47 2021

From patchwork Wed Jun 2 09:43:47 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12293581
Date: Wed, 2 Jun 2021 09:43:47 +0000
In-Reply-To: <20210602094347.3730846-1-qperret@google.com>
Message-Id: <20210602094347.3730846-8-qperret@google.com>
References: <20210602094347.3730846-1-qperret@google.com>
Subject: [PATCH v2 7/7] KVM: arm64: Use less bits for hyp_page refcount
From: Quentin Perret
To: maz@kernel.org, will@kernel.org, james.morse@arm.com,
 alexandru.elisei@arm.com, catalin.marinas@arm.com, suzuki.poulose@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 kernel-team@android.com, linux-kernel@vger.kernel.org

The hyp_page refcount is currently encoded in 4 bytes even though we
never need to count that many objects in a page. Make it 2 bytes to
save some space in the vmemmap. Since overflows are now more likely to
happen, make sure to catch those with a BUG in the increment function.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/memory.h | 2 +-
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 3fe34fa30ea4..592b7edb3edb 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -8,7 +8,7 @@
 #include <linux/types.h>
 
 struct hyp_page {
-	unsigned int refcount;
+	unsigned short refcount;
 	unsigned short order;
 };
 
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index be07055bbc10..41fc25bdfb34 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -146,6 +146,7 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 
 static inline void hyp_page_ref_inc(struct hyp_page *p)
 {
+	BUG_ON(p->refcount == USHRT_MAX);
 	p->refcount++;
 }
 
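
A minimal standalone sketch (userspace C, not part of the patch) of the
overflow the new check catches: an unchecked 16-bit increment at
USHRT_MAX would wrap to 0 and make a busy page look free. assert()
stands in for the kernel's BUG_ON(), and the 4-byte size assumes a
common ABI with no padding between two shorts:

#include <assert.h>
#include <limits.h>
#include <stdio.h>

struct hyp_page {
	unsigned short refcount;
	unsigned short order;
};

/* The whole vmemmap entry now packs into 4 bytes on common ABIs. */
_Static_assert(sizeof(struct hyp_page) == 4, "expected 2 + 2 bytes");

static inline void hyp_page_ref_inc(struct hyp_page *p)
{
	assert(p->refcount != USHRT_MAX);	/* kernel: BUG_ON(...) */
	p->refcount++;
}

int main(void)
{
	struct hyp_page page = { .refcount = USHRT_MAX - 1, .order = 0 };

	hyp_page_ref_inc(&page);	/* OK: refcount reaches USHRT_MAX */
	printf("refcount: %hu\n", page.refcount);

	/*
	 * One more hyp_page_ref_inc(&page) would now trip the assertion
	 * instead of silently wrapping the refcount to 0.
	 */
	return 0;
}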