From patchwork Thu May 27 12:51:33 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12284087
Date: Thu, 27 May 2021 12:51:33 +0000
In-Reply-To: <20210527125134.2116404-1-qperret@google.com>
Message-Id: <20210527125134.2116404-7-qperret@google.com>
References: <20210527125134.2116404-1-qperret@google.com>
Subject: [PATCH 6/7] KVM: arm64: Use less bits for hyp_page order
From: Quentin Perret <qperret@google.com>
To: maz@kernel.org, will@kernel.org, james.morse@arm.com,
    alexandru.elisei@arm.com, catalin.marinas@arm.com, suzuki.poulose@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    kernel-team@android.com, linux-kernel@vger.kernel.org

The hyp_page order is currently encoded in 4 bytes even though it is
guaranteed to be smaller than this. Make it 2 bytes to reduce the hyp
vmemmap overhead.
Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h    |  6 +++---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  2 +-
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 12 ++++++------
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index 9ed374648364..d420e5c0845f 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -7,7 +7,7 @@
 #include <nvhe/memory.h>
 #include <nvhe/spinlock.h>
 
-#define HYP_NO_ORDER	UINT_MAX
+#define HYP_NO_ORDER	USHRT_MAX
 
 struct hyp_pool {
 	/*
@@ -19,7 +19,7 @@ struct hyp_pool {
 	struct list_head free_area[MAX_ORDER];
 	phys_addr_t range_start;
 	phys_addr_t range_end;
-	unsigned int max_order;
+	unsigned short max_order;
 };
 
 static inline void hyp_page_ref_inc(struct hyp_page *p)
@@ -41,7 +41,7 @@ static inline void hyp_set_page_refcounted(struct hyp_page *p)
 }
 
 /* Allocation */
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order);
+void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order);
 void hyp_get_page(void *addr, struct hyp_pool *pool);
 void hyp_put_page(void *addr, struct hyp_pool *pool);
 
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 991636be2f46..3fe34fa30ea4 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -9,7 +9,7 @@
 
 struct hyp_page {
 	unsigned int refcount;
-	unsigned int order;
+	unsigned short order;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index e453108a2d95..2ba2c550ab03 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -32,7 +32,7 @@ u64 __hyp_vmemmap;
  */
 static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 					     struct hyp_page *p,
-					     unsigned int order)
+					     unsigned short order)
 {
 	phys_addr_t addr = hyp_page_to_phys(p);
 
@@ -51,7 +51,7 @@ static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 /* Find a buddy page currently available for allocation */
 static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
 					   struct hyp_page *p,
-					   unsigned int order)
+					   unsigned short order)
 {
 	struct hyp_page *buddy = __find_buddy_nocheck(pool, p, order);
 
@@ -93,7 +93,7 @@ static inline struct hyp_page *node_to_page(struct list_head *node)
 static void __hyp_attach_page(struct hyp_pool *pool,
 			      struct hyp_page *p)
 {
-	unsigned int order = p->order;
+	unsigned short order = p->order;
 	struct hyp_page *buddy;
 
 	memset(hyp_page_to_virt(p), 0, PAGE_SIZE << p->order);
@@ -123,7 +123,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 
 static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 					   struct hyp_page *p,
-					   unsigned int order)
+					   unsigned short order)
 {
 	struct hyp_page *buddy;
 
@@ -168,9 +168,9 @@ void hyp_get_page(void *addr, struct hyp_pool *pool)
 	hyp_spin_unlock(&pool->lock);
 }
 
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned int order)
+void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
 {
-	unsigned int i = order;
+	unsigned short i = order;
 	struct hyp_page *p;
 
 	hyp_spin_lock(&pool->lock);
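
As an editor's aside, the sizing argument in the changelog can be checked with a
small, hypothetical standalone C sketch. This is not kernel code: MAX_ORDER is
hard-coded to a typical arm64 4K-page value, and the struct names
hyp_page_old/hyp_page_new are invented for the comparison. It shows that every
valid buddy order plus the HYP_NO_ORDER sentinel fits comfortably in an
unsigned short, and that the struct-size saving only materialises once the
refcount is narrowed as well (which a later patch in this series does).

/*
 * Hypothetical, standalone illustration -- not part of the patch.
 * Build with: cc -std=c11 -Wall order_sketch.c
 */
#include <assert.h>
#include <limits.h>
#include <stdio.h>

#define MAX_ORDER	11		/* assumed typical arm64 value for this sketch */
#define HYP_NO_ORDER	USHRT_MAX	/* sentinel, as redefined by the patch */

struct hyp_page_old {			/* layout before the patch */
	unsigned int refcount;
	unsigned int order;
};

struct hyp_page_new {			/* layout after the patch */
	unsigned int refcount;
	unsigned short order;
};

int main(void)
{
	/* Valid orders (0 .. MAX_ORDER - 1) and the sentinel all fit in 16 bits. */
	static_assert(MAX_ORDER - 1 < HYP_NO_ORDER && HYP_NO_ORDER <= USHRT_MAX,
		      "orders and the sentinel fit in an unsigned short");

	/*
	 * On a typical LP64 ABI both structs are still 8 bytes here, because the
	 * 4-byte refcount forces 4-byte alignment and padding; the vmemmap saving
	 * is realised once the refcount is narrowed too.
	 */
	printf("old: %zu bytes, new: %zu bytes\n",
	       sizeof(struct hyp_page_old), sizeof(struct hyp_page_new));
	return 0;
}

Keeping the sentinel representable in the narrower field is also why the patch
moves HYP_NO_ORDER from UINT_MAX to USHRT_MAX rather than leaving it unchanged.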