From patchwork Tue Jun 8 11:45:14 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12306657
Date: Tue, 8 Jun 2021 11:45:14 +0000
In-Reply-To: <20210608114518.748712-1-qperret@google.com>
Message-Id: <20210608114518.748712-4-qperret@google.com>
Subject: [PATCH v3 3/7] KVM: arm64: Remove list_head from hyp_page
From: Quentin Perret
To: maz@kernel.org, will@kernel.org, james.morse@arm.com,
 alexandru.elisei@arm.com, catalin.marinas@arm.com, suzuki.poulose@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 kernel-team@android.com, linux-kernel@vger.kernel.org

The list_head member of struct hyp_page is only needed when the page is
attached to a free-list, which by definition implies the page is free. As
such, nothing prevents us from using the page itself to store the
list_head, hence reducing the size of the vmemmap.
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  1 -
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 39 ++++++++++++++++++++----
 2 files changed, 33 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index fd78bde939ee..7691ab495eb4 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -12,7 +12,6 @@ struct hyp_page {
 	unsigned int refcount;
 	unsigned int order;
 	struct hyp_pool *pool;
-	struct list_head node;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 2602577daa00..34f0eb026dd2 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -62,6 +62,34 @@ static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
 }
 
+/*
+ * Pages that are available for allocation are tracked in free-lists, so we use
+ * the pages themselves to store the list nodes to avoid wasting space. As the
+ * allocator always returns zeroed pages (which are zeroed on the hyp_put_page()
+ * path to optimize allocation speed), we also need to clean-up the list node in
+ * each page when we take it out of the list.
+ */
+static inline void page_remove_from_list(struct hyp_page *p)
+{
+	struct list_head *node = hyp_page_to_virt(p);
+
+	__list_del_entry(node);
+	memset(node, 0, sizeof(*node));
+}
+
+static inline void page_add_to_list(struct hyp_page *p, struct list_head *head)
+{
+	struct list_head *node = hyp_page_to_virt(p);
+
+	INIT_LIST_HEAD(node);
+	list_add_tail(node, head);
+}
+
+static inline struct hyp_page *node_to_page(struct list_head *node)
+{
+	return hyp_virt_to_page(node);
+}
+
 static void __hyp_attach_page(struct hyp_pool *pool,
 			      struct hyp_page *p)
 {
@@ -83,14 +111,14 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 			break;
 
 		/* Take the buddy out of its list, and coallesce with @p */
-		list_del_init(&buddy->node);
+		page_remove_from_list(buddy);
 		buddy->order = HYP_NO_ORDER;
 		p = min(p, buddy);
 	}
 
 	/* Mark the new head, and insert it */
 	p->order = order;
-	list_add_tail(&p->node, &pool->free_area[order]);
+	page_add_to_list(p, &pool->free_area[order]);
 }
 
 static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
@@ -99,7 +127,7 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 {
 	struct hyp_page *buddy;
 
-	list_del_init(&p->node);
+	page_remove_from_list(p);
 	while (p->order > order) {
 		/*
 		 * The buddy of order n - 1 currently has HYP_NO_ORDER as it
@@ -110,7 +138,7 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 		p->order--;
 		buddy = __find_buddy_nocheck(pool, p, p->order);
 		buddy->order = p->order;
-		list_add_tail(&buddy->node, &pool->free_area[buddy->order]);
+		page_add_to_list(buddy, &pool->free_area[buddy->order]);
 	}
 
 	return p;
@@ -182,7 +210,7 @@ void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order)
 	}
 
 	/* Extract it from the tree at the right order */
-	p = list_first_entry(&pool->free_area[i], struct hyp_page, node);
+	p = node_to_page(pool->free_area[i].next);
 	p = __hyp_extract_page(pool, p, order);
 
 	hyp_set_page_refcounted(p);
@@ -210,7 +238,6 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
 	for (i = 0; i < nr_pages; i++) {
 		p[i].pool = pool;
 		p[i].order = 0;
-		INIT_LIST_HEAD(&p[i].node);
 		hyp_set_page_refcounted(&p[i]);
 	}