From patchwork Wed Feb 1 12:53:22 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124281
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com,
    oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com,
    dbrazdil@google.com, ryan.roberts@arm.com,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 38/45] iommu/arm-smmu-v3-kvm: Add per-cpu page queue
Date: Wed, 1 Feb 2023 12:53:22 +0000
Message-Id: <20230201125328.2186498-39-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>
MIME-Version: 1.0
Allocate page queues shared with the hypervisor for page donation and
reclaim. A local_lock ensures that only one thread fills the queue during
a hypercall.

Signed-off-by: Jean-Philippe Brucker
---
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c   | 93 ++++++++++++++++++-
 1 file changed, 92 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
index 8808890f4dc0..755c77bc0417 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-kvm.c
@@ -5,6 +5,7 @@
  * Copyright (C) 2022 Linaro Ltd.
  */
 #include <...>
+#include <linux/local_lock.h>
 #include <...>
 #include <...>
@@ -23,6 +24,81 @@ struct host_arm_smmu_device {
 static size_t			kvm_arm_smmu_cur;
 static size_t			kvm_arm_smmu_count;
 static struct hyp_arm_smmu_v3_device *kvm_arm_smmu_array;
+static struct kvm_hyp_iommu_memcache *kvm_arm_smmu_memcache;
+
+static DEFINE_PER_CPU(local_lock_t, memcache_lock) =
+				INIT_LOCAL_LOCK(memcache_lock);
+
+static void *kvm_arm_smmu_alloc_page(void *opaque)
+{
+	struct arm_smmu_device *smmu = opaque;
+	struct page *p;
+
+	p = alloc_pages_node(dev_to_node(smmu->dev), GFP_ATOMIC, 0);
+	if (!p)
+		return NULL;
+
+	return page_address(p);
+}
+
+static void kvm_arm_smmu_free_page(void *va, void *opaque)
+{
+	free_page((unsigned long)va);
+}
+
+static phys_addr_t kvm_arm_smmu_host_pa(void *va)
+{
+	return __pa(va);
+}
+
+static void *kvm_arm_smmu_host_va(phys_addr_t pa)
+{
+	return __va(pa);
+}
+
+__maybe_unused
+static int kvm_arm_smmu_topup_memcache(struct arm_smmu_device *smmu)
+{
+	struct kvm_hyp_memcache *mc;
+	int cpu = raw_smp_processor_id();
+
+	lockdep_assert_held(this_cpu_ptr(&memcache_lock));
+	mc = &kvm_arm_smmu_memcache[cpu].pages;
+
+	if (!kvm_arm_smmu_memcache[cpu].needs_page)
+		return -EBADE;
+
+	kvm_arm_smmu_memcache[cpu].needs_page = false;
+	return __topup_hyp_memcache(mc, 1, kvm_arm_smmu_alloc_page,
+				    kvm_arm_smmu_host_pa, smmu);
+}
+
+__maybe_unused
+static void kvm_arm_smmu_reclaim_memcache(void)
+{
+	struct kvm_hyp_memcache *mc;
+	int cpu = raw_smp_processor_id();
+
+	lockdep_assert_held(this_cpu_ptr(&memcache_lock));
+	mc = &kvm_arm_smmu_memcache[cpu].pages;
+
+	__free_hyp_memcache(mc, kvm_arm_smmu_free_page,
+			    kvm_arm_smmu_host_va, NULL);
+}
+
+/*
+ * Issue hypercall, and retry after filling the memcache if necessary.
+ * After the call, reclaim pages pushed in the memcache by the hypervisor.
+ */
+#define kvm_call_hyp_nvhe_mc(smmu, ...)				\
+({								\
+	int __ret;						\
+	do {							\
+		__ret = kvm_call_hyp_nvhe(__VA_ARGS__);		\
+	} while (__ret && !kvm_arm_smmu_topup_memcache(smmu));	\
+	kvm_arm_smmu_reclaim_memcache();			\
+	__ret;							\
+})
 
 static bool kvm_arm_smmu_validate_features(struct arm_smmu_device *smmu)
 {
@@ -211,7 +287,7 @@ static struct platform_driver kvm_arm_smmu_driver = {
 
 static int kvm_arm_smmu_array_alloc(void)
 {
-	int smmu_order;
+	int smmu_order, mc_order;
 	struct device_node *np;
 
 	kvm_arm_smmu_count = 0;
@@ -228,7 +304,17 @@ static int kvm_arm_smmu_array_alloc(void)
 	if (!kvm_arm_smmu_array)
 		return -ENOMEM;
 
+	mc_order = get_order(NR_CPUS * sizeof(*kvm_arm_smmu_memcache));
+	kvm_arm_smmu_memcache = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+							 mc_order);
+	if (!kvm_arm_smmu_memcache)
+		goto err_free_array;
+
 	return 0;
+
+err_free_array:
+	free_pages((unsigned long)kvm_arm_smmu_array, smmu_order);
+	return -ENOMEM;
 }
 
 static void kvm_arm_smmu_array_free(void)
@@ -237,6 +323,8 @@ static void kvm_arm_smmu_array_free(void)
 
 	order = get_order(kvm_arm_smmu_count * sizeof(*kvm_arm_smmu_array));
 	free_pages((unsigned long)kvm_arm_smmu_array, order);
+	order = get_order(NR_CPUS * sizeof(*kvm_arm_smmu_memcache));
+	free_pages((unsigned long)kvm_arm_smmu_memcache, order);
 }
 
 /**
@@ -272,9 +360,12 @@ int kvm_arm_smmu_v3_init(unsigned int *count)
 	 * These variables are stored in the nVHE image, and won't be accessible
 	 * after KVM initialization. Ownership of kvm_arm_smmu_array will be
 	 * transferred to the hypervisor as well.
+	 *
+	 * kvm_arm_smmu_memcache is shared between hypervisor and host.
 	 */
 	kvm_hyp_arm_smmu_v3_smmus = kern_hyp_va(kvm_arm_smmu_array);
 	kvm_hyp_arm_smmu_v3_count = kvm_arm_smmu_count;
+	kvm_hyp_iommu_memcaches = kern_hyp_va(kvm_arm_smmu_memcache);
 
 	return 0;
 
 err_free: