From patchwork Wed Mar 5 13:04:07 2025
X-Patchwork-Submitter: Jens Wiklander
X-Patchwork-Id: 14002611
From: Jens Wiklander
To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
 op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org
Cc: Olivier Masse, Thierry Reding, Yong Wu, Sumit Semwal, Benjamin Gaignard,
 Brian Starkey, John Stultz, T. J. Mercier, Christian König, Sumit Garg,
 Matthias Brugger, AngeloGioacchino Del Regno, azarrabi@qti.qualcomm.com,
 Simona Vetter, Daniel Stone, Jens Wiklander
Subject: [PATCH v6 01/10] tee: tee_device_alloc(): copy dma_mask from parent device
Date: Wed, 5 Mar 2025 14:04:07 +0100
Message-ID: <20250305130634.1850178-2-jens.wiklander@linaro.org>
In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org>
References: <20250305130634.1850178-1-jens.wiklander@linaro.org>

If a parent device is supplied to tee_device_alloc(), copy the dma_mask
field into the new device. This avoids future warnings when mapping a
DMA-buf for the device.

Signed-off-by: Jens Wiklander
---
 drivers/tee/tee_core.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c
index d113679b1e2d..685afcaa3ea1 100644
--- a/drivers/tee/tee_core.c
+++ b/drivers/tee/tee_core.c
@@ -922,6 +922,8 @@ struct tee_device *tee_device_alloc(const struct tee_desc *teedesc,
 	teedev->dev.class = &tee_class;
 	teedev->dev.release = tee_release_device;
 	teedev->dev.parent = dev;
+	if (dev)
+		teedev->dev.dma_mask = dev->dma_mask;
 	teedev->dev.devt = MKDEV(MAJOR(tee_devt), teedev->id);
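A note on the mechanics of the hunk above: dma_mask in struct device is a
pointer (u64 *), so the assignment makes the new tee device share its
parent's mask storage rather than taking a snapshot of the value. The
snippet below is an illustration of that effect only, not part of the
patch, and the helper name is made up.

#include <linux/device.h>

/*
 * Illustration: after this, the tee device and its parent resolve the
 * same DMA mask, so a later dma_set_mask() on the parent is seen by
 * both. Without any mask, the DMA mapping core warns when a DMA-buf is
 * mapped for the tee device, which is the warning the patch avoids.
 */
static void tee_share_parent_dma_mask(struct device *tee_dev,
				      struct device *parent)
{
	if (parent)
		tee_dev->dma_mask = parent->dma_mask;
}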
From patchwork Wed Mar 5 13:04:08 2025
X-Patchwork-Submitter: Jens Wiklander
X-Patchwork-Id: 14002613

From: Jens Wiklander
Subject: [PATCH v6 02/10] optee: pass parent device to tee_device_alloc()
Date: Wed, 5 Mar 2025 14:04:08 +0100
Message-ID: <20250305130634.1850178-3-jens.wiklander@linaro.org>
In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org>
References: <20250305130634.1850178-1-jens.wiklander@linaro.org>

During probing of the OP-TEE driver, pass the parent device to
tee_device_alloc() so the dma_mask of the new devices can be updated
accordingly.

Signed-off-by: Jens Wiklander
---
 drivers/tee/optee/ffa_abi.c | 8 ++++----
 drivers/tee/optee/smc_abi.c | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c
index f3af5666bb11..4ca1d5161b82 100644
--- a/drivers/tee/optee/ffa_abi.c
+++ b/drivers/tee/optee/ffa_abi.c
@@ -914,16 +914,16 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev)
 	    (sec_caps & OPTEE_FFA_SEC_CAP_RPMB_PROBE))
 		optee->in_kernel_rpmb_routing = true;
 
-	teedev = tee_device_alloc(&optee_ffa_clnt_desc, NULL, optee->pool,
-				  optee);
+	teedev = tee_device_alloc(&optee_ffa_clnt_desc, &ffa_dev->dev,
+				  optee->pool, optee);
 	if (IS_ERR(teedev)) {
 		rc = PTR_ERR(teedev);
 		goto err_free_pool;
 	}
 	optee->teedev = teedev;
 
-	teedev = tee_device_alloc(&optee_ffa_supp_desc, NULL, optee->pool,
-				  optee);
+	teedev = tee_device_alloc(&optee_ffa_supp_desc, &ffa_dev->dev,
+				  optee->pool, optee);
 	if (IS_ERR(teedev)) {
 		rc = PTR_ERR(teedev);
 		goto err_unreg_teedev;
diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c
index f0c3ac1103bb..165fadd9abc9 100644
--- a/drivers/tee/optee/smc_abi.c
+++ b/drivers/tee/optee/smc_abi.c
@@ -1691,14 +1691,14 @@ static int optee_probe(struct platform_device *pdev)
 	    (sec_caps & OPTEE_SMC_SEC_CAP_RPMB_PROBE))
 		optee->in_kernel_rpmb_routing = true;
 
-	teedev = tee_device_alloc(&optee_clnt_desc, NULL, pool, optee);
+	teedev = tee_device_alloc(&optee_clnt_desc, &pdev->dev, pool, optee);
 	if (IS_ERR(teedev)) {
 		rc = PTR_ERR(teedev);
 		goto err_free_optee;
 	}
 	optee->teedev = teedev;
 
-	teedev = tee_device_alloc(&optee_supp_desc, NULL, pool, optee);
+	teedev = tee_device_alloc(&optee_supp_desc, &pdev->dev, pool, optee);
 	if (IS_ERR(teedev)) {
 		rc = PTR_ERR(teedev);
 		goto err_unreg_teedev;
From patchwork Wed Mar 5 13:04:09 2025
X-Patchwork-Submitter: Jens Wiklander
X-Patchwork-Id: 14002614

From: Jens Wiklander
Subject: [PATCH v6 03/10] optee: account for direction while converting parameters
Date: Wed, 5 Mar 2025 14:04:09 +0100
Message-ID: <20250305130634.1850178-4-jens.wiklander@linaro.org>
In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org>
References: <20250305130634.1850178-1-jens.wiklander@linaro.org>

The OP-TEE backend driver has two internal function pointers to convert
between the subsystem type struct tee_param and the OP-TEE type struct
optee_msg_param. The conversion is done from one of the types to the
other, which is then involved in some operation and finally converted
back to the original type. When converting to prepare the parameters
for the operation, all fields must be taken into account, but when
converting back it is enough to update only out-values and out-sizes.
So an update_out parameter is added to the conversion functions to tell
whether all fields or only some must be copied.

This is needed by a later patch where converting back in the
from_msg_param() callback might get confusing: an allocated restricted
SHM can be using the sec_world_id of the restricted memory pool it was
allocated from, and that doesn't translate back well.

Signed-off-by: Jens Wiklander
---
 drivers/tee/optee/call.c          | 10 ++--
 drivers/tee/optee/ffa_abi.c       | 43 +++++++++++++----
 drivers/tee/optee/optee_private.h | 42 +++++++++++------
 drivers/tee/optee/rpc.c           | 31 +++++++++----
 drivers/tee/optee/smc_abi.c       | 76 +++++++++++++++++++++++--------
 5 files changed, 144 insertions(+), 58 deletions(-)

diff --git a/drivers/tee/optee/call.c b/drivers/tee/optee/call.c
index 16eb953e14bb..f1533b894726 100644
--- a/drivers/tee/optee/call.c
+++ b/drivers/tee/optee/call.c
@@ -400,7 +400,8 @@ int optee_open_session(struct tee_context *ctx,
 	export_uuid(msg_arg->params[1].u.octets, &client_uuid);
 
 	rc = optee->ops->to_msg_param(optee, msg_arg->params + 2,
-				      arg->num_params, param);
+				      arg->num_params, param,
+				      false /*!update_out*/);
 	if (rc)
 		goto out;
 
@@ -427,7 +428,8 @@ int optee_open_session(struct tee_context *ctx,
 	}
 
 	if (optee->ops->from_msg_param(optee, param, arg->num_params,
-				       msg_arg->params + 2)) {
+				       msg_arg->params + 2,
+				       true /*update_out*/)) {
 		arg->ret = TEEC_ERROR_COMMUNICATION;
 		arg->ret_origin = TEEC_ORIGIN_COMMS;
 		/* Close session again to avoid leakage */
@@ -541,7 +543,7 @@ int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg,
 	msg_arg->cancel_id = arg->cancel_id;
 
 	rc = optee->ops->to_msg_param(optee, msg_arg->params, arg->num_params,
-				      param);
+				      param, false /*!update_out*/);
 	if (rc)
 		goto out;
 
@@ -551,7 +553,7 @@ int optee_invoke_func(struct tee_context *ctx, struct tee_ioctl_invoke_arg *arg,
 	}
 
 	if (optee->ops->from_msg_param(optee, param, arg->num_params,
-				       msg_arg->params)) {
+				       msg_arg->params, true /*update_out*/)) {
 		msg_arg->ret = TEEC_ERROR_COMMUNICATION;
 		msg_arg->ret_origin = TEEC_ORIGIN_COMMS;
 	}
diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c
index 4ca1d5161b82..e4b08cd195f3 100644
--- a/drivers/tee/optee/ffa_abi.c
+++
b/drivers/tee/optee/ffa_abi.c @@ -122,15 +122,21 @@ static int optee_shm_rem_ffa_handle(struct optee *optee, u64 global_id) */ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p, - u32 attr, const struct optee_msg_param *mp) + u32 attr, const struct optee_msg_param *mp, + bool update_out) { struct tee_shm *shm = NULL; u64 offs_high = 0; u64 offs_low = 0; + if (update_out) { + if (attr == OPTEE_MSG_ATTR_TYPE_FMEM_INPUT) + return; + goto out; + } + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr - OPTEE_MSG_ATTR_TYPE_FMEM_INPUT; - p->u.memref.size = mp->u.fmem.size; if (mp->u.fmem.global_id != OPTEE_MSG_FMEM_INVALID_GLOBAL_ID) shm = optee_shm_from_ffa_handle(optee, mp->u.fmem.global_id); @@ -141,6 +147,8 @@ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p, offs_high = mp->u.fmem.offs_high; } p->u.memref.shm_offs = offs_low | offs_high << 32; +out: + p->u.memref.size = mp->u.fmem.size; } /** @@ -150,12 +158,14 @@ static void from_msg_param_ffa_mem(struct optee *optee, struct tee_param *p, * @params: subsystem internal parameter representation * @num_params: number of elements in the parameter arrays * @msg_params: OPTEE_MSG parameters + * @update_out: update parameter for output only * * Returns 0 on success or <0 on failure */ static int optee_ffa_from_msg_param(struct optee *optee, struct tee_param *params, size_t num_params, - const struct optee_msg_param *msg_params) + const struct optee_msg_param *msg_params, + bool update_out) { size_t n; @@ -166,18 +176,20 @@ static int optee_ffa_from_msg_param(struct optee *optee, switch (attr) { case OPTEE_MSG_ATTR_TYPE_NONE: + if (update_out) + break; p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&p->u, 0, sizeof(p->u)); break; case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT: - optee_from_msg_param_value(p, attr, mp); + optee_from_msg_param_value(p, attr, mp, update_out); break; case OPTEE_MSG_ATTR_TYPE_FMEM_INPUT: case OPTEE_MSG_ATTR_TYPE_FMEM_OUTPUT: case OPTEE_MSG_ATTR_TYPE_FMEM_INOUT: - from_msg_param_ffa_mem(optee, p, attr, mp); + from_msg_param_ffa_mem(optee, p, attr, mp, update_out); break; default: return -EINVAL; @@ -188,10 +200,16 @@ static int optee_ffa_from_msg_param(struct optee *optee, } static int to_msg_param_ffa_mem(struct optee_msg_param *mp, - const struct tee_param *p) + const struct tee_param *p, bool update_out) { struct tee_shm *shm = p->u.memref.shm; + if (update_out) { + if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT) + return 0; + goto out; + } + mp->attr = OPTEE_MSG_ATTR_TYPE_FMEM_INPUT + p->attr - TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT; @@ -211,6 +229,7 @@ static int to_msg_param_ffa_mem(struct optee_msg_param *mp, memset(&mp->u, 0, sizeof(mp->u)); mp->u.fmem.global_id = OPTEE_MSG_FMEM_INVALID_GLOBAL_ID; } +out: mp->u.fmem.size = p->u.memref.size; return 0; @@ -222,13 +241,15 @@ static int to_msg_param_ffa_mem(struct optee_msg_param *mp, * @optee: main service struct * @msg_params: OPTEE_MSG parameters * @num_params: number of elements in the parameter arrays - * @params: subsystem itnernal parameter representation + * @params: subsystem internal parameter representation + * @update_out: update parameter for output only * Returns 0 on success or <0 on failure */ static int optee_ffa_to_msg_param(struct optee *optee, struct optee_msg_param *msg_params, size_t num_params, - const struct tee_param *params) + const struct tee_param *params, + bool update_out) { size_t n; @@ -238,18 +259,20 @@ 
static int optee_ffa_to_msg_param(struct optee *optee, switch (p->attr) { case TEE_IOCTL_PARAM_ATTR_TYPE_NONE: + if (update_out) + break; mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&mp->u, 0, sizeof(mp->u)); break; case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT: - optee_to_msg_param_value(mp, p); + optee_to_msg_param_value(mp, p, update_out); break; case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT: - if (to_msg_param_ffa_mem(mp, p)) + if (to_msg_param_ffa_mem(mp, p, update_out)) return -EINVAL; break; default: diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h index dc0f355ef72a..20eda508dbac 100644 --- a/drivers/tee/optee/optee_private.h +++ b/drivers/tee/optee/optee_private.h @@ -185,10 +185,12 @@ struct optee_ops { bool system_thread); int (*to_msg_param)(struct optee *optee, struct optee_msg_param *msg_params, - size_t num_params, const struct tee_param *params); + size_t num_params, const struct tee_param *params, + bool update_out); int (*from_msg_param)(struct optee *optee, struct tee_param *params, size_t num_params, - const struct optee_msg_param *msg_params); + const struct optee_msg_param *msg_params, + bool update_out); }; /** @@ -316,23 +318,35 @@ void optee_release(struct tee_context *ctx); void optee_release_supp(struct tee_context *ctx); static inline void optee_from_msg_param_value(struct tee_param *p, u32 attr, - const struct optee_msg_param *mp) + const struct optee_msg_param *mp, + bool update_out) { - p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT + - attr - OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; - p->u.value.a = mp->u.value.a; - p->u.value.b = mp->u.value.b; - p->u.value.c = mp->u.value.c; + if (!update_out) + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT + + attr - OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; + + if (attr == OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT || + attr == OPTEE_MSG_ATTR_TYPE_VALUE_INOUT || !update_out) { + p->u.value.a = mp->u.value.a; + p->u.value.b = mp->u.value.b; + p->u.value.c = mp->u.value.c; + } } static inline void optee_to_msg_param_value(struct optee_msg_param *mp, - const struct tee_param *p) + const struct tee_param *p, + bool update_out) { - mp->attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + p->attr - - TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT; - mp->u.value.a = p->u.value.a; - mp->u.value.b = p->u.value.b; - mp->u.value.c = p->u.value.c; + if (!update_out) + mp->attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + p->attr - + TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT; + + if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT || + p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT || !update_out) { + mp->u.value.a = p->u.value.a; + mp->u.value.b = p->u.value.b; + mp->u.value.c = p->u.value.c; + } } void optee_cq_init(struct optee_call_queue *cq, int thread_count); diff --git a/drivers/tee/optee/rpc.c b/drivers/tee/optee/rpc.c index ebbbd42b0e3e..580e6b9b0606 100644 --- a/drivers/tee/optee/rpc.c +++ b/drivers/tee/optee/rpc.c @@ -63,7 +63,7 @@ static void handle_rpc_func_cmd_i2c_transfer(struct tee_context *ctx, } if (optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params)) + arg->params, false /*!update_out*/)) goto bad; for (i = 0; i < arg->num_params; i++) { @@ -107,7 +107,8 @@ static void handle_rpc_func_cmd_i2c_transfer(struct tee_context *ctx, } else { params[3].u.value.a = msg.len; if (optee->ops->to_msg_param(optee, arg->params, - arg->num_params, 
params)) + arg->num_params, params, + true /*update_out*/)) arg->ret = TEEC_ERROR_BAD_PARAMETERS; else arg->ret = TEEC_SUCCESS; @@ -188,6 +189,7 @@ static void handle_rpc_func_cmd_wait(struct optee_msg_arg *arg) static void handle_rpc_supp_cmd(struct tee_context *ctx, struct optee *optee, struct optee_msg_arg *arg) { + bool update_out = false; struct tee_param *params; arg->ret_origin = TEEC_ORIGIN_COMMS; @@ -200,15 +202,21 @@ static void handle_rpc_supp_cmd(struct tee_context *ctx, struct optee *optee, } if (optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params)) { + arg->params, update_out)) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; goto out; } arg->ret = optee_supp_thrd_req(ctx, arg->cmd, arg->num_params, params); + /* + * Special treatment for OPTEE_RPC_CMD_SHM_ALLOC since input is a + * value type, but the output is a memref type. + */ + if (arg->cmd != OPTEE_RPC_CMD_SHM_ALLOC) + update_out = true; if (optee->ops->to_msg_param(optee, arg->params, arg->num_params, - params)) + params, update_out)) arg->ret = TEEC_ERROR_BAD_PARAMETERS; out: kfree(params); @@ -270,7 +278,7 @@ static void handle_rpc_func_rpmb_probe_reset(struct tee_context *ctx, if (arg->num_params != ARRAY_SIZE(params) || optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params) || + arg->params, false /*!update_out*/) || params[0].attr != TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; return; @@ -280,7 +288,8 @@ static void handle_rpc_func_rpmb_probe_reset(struct tee_context *ctx, params[0].u.value.b = 0; params[0].u.value.c = 0; if (optee->ops->to_msg_param(optee, arg->params, - arg->num_params, params)) { + arg->num_params, params, + true /*update_out*/)) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; return; } @@ -324,7 +333,7 @@ static void handle_rpc_func_rpmb_probe_next(struct tee_context *ctx, if (arg->num_params != ARRAY_SIZE(params) || optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params) || + arg->params, false /*!update_out*/) || params[0].attr != TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT || params[1].attr != TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; @@ -358,7 +367,8 @@ static void handle_rpc_func_rpmb_probe_next(struct tee_context *ctx, params[0].u.value.b = rdev->descr.capacity; params[0].u.value.c = rdev->descr.reliable_wr_count; if (optee->ops->to_msg_param(optee, arg->params, - arg->num_params, params)) { + arg->num_params, params, + true /*update_out*/)) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; return; } @@ -384,7 +394,7 @@ static void handle_rpc_func_rpmb_frames(struct tee_context *ctx, if (arg->num_params != ARRAY_SIZE(params) || optee->ops->from_msg_param(optee, params, arg->num_params, - arg->params) || + arg->params, false /*!update_out*/) || params[0].attr != TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT || params[1].attr != TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; @@ -401,7 +411,8 @@ static void handle_rpc_func_rpmb_frames(struct tee_context *ctx, goto out; } if (optee->ops->to_msg_param(optee, arg->params, - arg->num_params, params)) { + arg->num_params, params, + true /*update_out*/)) { arg->ret = TEEC_ERROR_BAD_PARAMETERS; goto out; } diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c index 165fadd9abc9..cfdae266548b 100644 --- a/drivers/tee/optee/smc_abi.c +++ b/drivers/tee/optee/smc_abi.c @@ -81,20 +81,26 @@ static int optee_cpuhp_disable_pcpu_irq(unsigned int cpu) */ static int from_msg_param_tmp_mem(struct tee_param *p, u32 
attr, - const struct optee_msg_param *mp) + const struct optee_msg_param *mp, + bool update_out) { struct tee_shm *shm; phys_addr_t pa; int rc; + if (update_out) { + if (attr == OPTEE_MSG_ATTR_TYPE_TMEM_INPUT) + return 0; + goto out; + } + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr - OPTEE_MSG_ATTR_TYPE_TMEM_INPUT; - p->u.memref.size = mp->u.tmem.size; shm = (struct tee_shm *)(unsigned long)mp->u.tmem.shm_ref; if (!shm) { p->u.memref.shm_offs = 0; p->u.memref.shm = NULL; - return 0; + goto out; } rc = tee_shm_get_pa(shm, 0, &pa); @@ -103,18 +109,25 @@ static int from_msg_param_tmp_mem(struct tee_param *p, u32 attr, p->u.memref.shm_offs = mp->u.tmem.buf_ptr - pa; p->u.memref.shm = shm; - +out: + p->u.memref.size = mp->u.tmem.size; return 0; } static void from_msg_param_reg_mem(struct tee_param *p, u32 attr, - const struct optee_msg_param *mp) + const struct optee_msg_param *mp, + bool update_out) { struct tee_shm *shm; + if (update_out) { + if (attr == OPTEE_MSG_ATTR_TYPE_RMEM_INPUT) + return; + goto out; + } + p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT + attr - OPTEE_MSG_ATTR_TYPE_RMEM_INPUT; - p->u.memref.size = mp->u.rmem.size; shm = (struct tee_shm *)(unsigned long)mp->u.rmem.shm_ref; if (shm) { @@ -124,6 +137,8 @@ static void from_msg_param_reg_mem(struct tee_param *p, u32 attr, p->u.memref.shm_offs = 0; p->u.memref.shm = NULL; } +out: + p->u.memref.size = mp->u.rmem.size; } /** @@ -133,11 +148,13 @@ static void from_msg_param_reg_mem(struct tee_param *p, u32 attr, * @params: subsystem internal parameter representation * @num_params: number of elements in the parameter arrays * @msg_params: OPTEE_MSG parameters + * @update_out: update parameter for output only * Returns 0 on success or <0 on failure */ static int optee_from_msg_param(struct optee *optee, struct tee_param *params, size_t num_params, - const struct optee_msg_param *msg_params) + const struct optee_msg_param *msg_params, + bool update_out) { int rc; size_t n; @@ -149,25 +166,27 @@ static int optee_from_msg_param(struct optee *optee, struct tee_param *params, switch (attr) { case OPTEE_MSG_ATTR_TYPE_NONE: + if (update_out) + break; p->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&p->u, 0, sizeof(p->u)); break; case OPTEE_MSG_ATTR_TYPE_VALUE_INPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT: case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT: - optee_from_msg_param_value(p, attr, mp); + optee_from_msg_param_value(p, attr, mp, update_out); break; case OPTEE_MSG_ATTR_TYPE_TMEM_INPUT: case OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT: case OPTEE_MSG_ATTR_TYPE_TMEM_INOUT: - rc = from_msg_param_tmp_mem(p, attr, mp); + rc = from_msg_param_tmp_mem(p, attr, mp, update_out); if (rc) return rc; break; case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT: case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT: case OPTEE_MSG_ATTR_TYPE_RMEM_INOUT: - from_msg_param_reg_mem(p, attr, mp); + from_msg_param_reg_mem(p, attr, mp, update_out); break; default: @@ -178,20 +197,25 @@ static int optee_from_msg_param(struct optee *optee, struct tee_param *params, } static int to_msg_param_tmp_mem(struct optee_msg_param *mp, - const struct tee_param *p) + const struct tee_param *p, bool update_out) { int rc; phys_addr_t pa; + if (update_out) { + if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT) + return 0; + goto out; + } + mp->attr = OPTEE_MSG_ATTR_TYPE_TMEM_INPUT + p->attr - TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT; mp->u.tmem.shm_ref = (unsigned long)p->u.memref.shm; - mp->u.tmem.size = p->u.memref.size; if (!p->u.memref.shm) { mp->u.tmem.buf_ptr = 0; - return 0; + goto out; } rc = 
tee_shm_get_pa(p->u.memref.shm, p->u.memref.shm_offs, &pa); @@ -201,19 +225,27 @@ static int to_msg_param_tmp_mem(struct optee_msg_param *mp, mp->u.tmem.buf_ptr = pa; mp->attr |= OPTEE_MSG_ATTR_CACHE_PREDEFINED << OPTEE_MSG_ATTR_CACHE_SHIFT; - +out: + mp->u.tmem.size = p->u.memref.size; return 0; } static int to_msg_param_reg_mem(struct optee_msg_param *mp, - const struct tee_param *p) + const struct tee_param *p, bool update_out) { + if (update_out) { + if (p->attr == TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT) + return 0; + goto out; + } + mp->attr = OPTEE_MSG_ATTR_TYPE_RMEM_INPUT + p->attr - TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT; mp->u.rmem.shm_ref = (unsigned long)p->u.memref.shm; - mp->u.rmem.size = p->u.memref.size; mp->u.rmem.offs = p->u.memref.shm_offs; +out: + mp->u.rmem.size = p->u.memref.size; return 0; } @@ -223,11 +255,13 @@ static int to_msg_param_reg_mem(struct optee_msg_param *mp, * @msg_params: OPTEE_MSG parameters * @num_params: number of elements in the parameter arrays * @params: subsystem itnernal parameter representation + * @update_out: update parameter for output only * Returns 0 on success or <0 on failure */ static int optee_to_msg_param(struct optee *optee, struct optee_msg_param *msg_params, - size_t num_params, const struct tee_param *params) + size_t num_params, const struct tee_param *params, + bool update_out) { int rc; size_t n; @@ -238,21 +272,23 @@ static int optee_to_msg_param(struct optee *optee, switch (p->attr) { case TEE_IOCTL_PARAM_ATTR_TYPE_NONE: + if (update_out) + break; mp->attr = TEE_IOCTL_PARAM_ATTR_TYPE_NONE; memset(&mp->u, 0, sizeof(mp->u)); break; case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT: - optee_to_msg_param_value(mp, p); + optee_to_msg_param_value(mp, p, update_out); break; case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT: if (tee_shm_is_dynamic(p->u.memref.shm)) - rc = to_msg_param_reg_mem(mp, p); + rc = to_msg_param_reg_mem(mp, p, update_out); else - rc = to_msg_param_tmp_mem(mp, p); + rc = to_msg_param_tmp_mem(mp, p, update_out); if (rc) return rc; break; From patchwork Wed Mar 5 13:04:10 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 14002615 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C2042C19F32 for ; Wed, 5 Mar 2025 13:06:54 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 4CD0710E775; Wed, 5 Mar 2025 13:06:54 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=linaro.org header.i=@linaro.org header.b="gUmehfAu"; dkim-atps=neutral Received: from mail-ed1-f47.google.com (mail-ed1-f47.google.com [209.85.208.47]) by gabe.freedesktop.org (Postfix) with ESMTPS id 3212D10E75B for ; Wed, 5 Mar 2025 13:06:53 +0000 (UTC) Received: by mail-ed1-f47.google.com with SMTP id 4fb4d7f45d1cf-5e4f5cc3172so8501216a12.0 for ; Wed, 05 Mar 2025 05:06:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1741180012; 
From: Jens Wiklander
Subject: [PATCH v6 04/10] optee: sync secure world ABI headers
Date: Wed, 5 Mar 2025 14:04:10 +0100
Message-ID: <20250305130634.1850178-5-jens.wiklander@linaro.org>
In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org>
References: <20250305130634.1850178-1-jens.wiklander@linaro.org>

Update the header files describing the secure world ABI, both with and
without FF-A. The ABI is extended to deal with restricted memory but
remains, as usual, backward compatible.

Signed-off-by: Jens Wiklander
---
 drivers/tee/optee/optee_ffa.h | 27 ++++++++++---
 drivers/tee/optee/optee_msg.h | 65 ++++++++++++++++++++++++++++++--
 drivers/tee/optee/optee_smc.h | 71 ++++++++++++++++++++++++++++++++++-
 3 files changed, 154 insertions(+), 9 deletions(-)

diff --git a/drivers/tee/optee/optee_ffa.h b/drivers/tee/optee/optee_ffa.h
index 257735ae5b56..7bd037200343 100644
--- a/drivers/tee/optee/optee_ffa.h
+++ b/drivers/tee/optee/optee_ffa.h
@@ -81,7 +81,7 @@
  *	      as the second MSG arg struct for
  *	      OPTEE_FFA_YIELDING_CALL_WITH_ARG.
  *	      Bit[31:8]:	Reserved (MBZ)
- * w5:	Bitfield of secure world capabilities OPTEE_FFA_SEC_CAP_* below,
+ * w5:	Bitfield of OP-TEE capabilities OPTEE_FFA_SEC_CAP_*
  * w6:	The maximum secure world notification number
  * w7:	Not used (MBZ)
  */
@@ -94,6 +94,8 @@
 #define OPTEE_FFA_SEC_CAP_ASYNC_NOTIF		BIT(1)
 /* OP-TEE supports probing for RPMB device if needed */
 #define OPTEE_FFA_SEC_CAP_RPMB_PROBE		BIT(2)
+/* OP-TEE supports Restricted Memory for secure data path */
+#define OPTEE_FFA_SEC_CAP_RSTMEM		BIT(3)
 
 #define OPTEE_FFA_EXCHANGE_CAPABILITIES OPTEE_FFA_BLOCKING_CALL(2)
 
@@ -108,7 +110,7 @@
  *
  * Return register usage:
  * w3:	Error code, 0 on success
- * w4-w7:	Note used (MBZ)
+ * w4-w7:	Not used (MBZ)
  */
 #define OPTEE_FFA_UNREGISTER_SHM	OPTEE_FFA_BLOCKING_CALL(3)
 
@@ -119,16 +121,31 @@
  * Call register usage:
  * w3:	Service ID, OPTEE_FFA_ENABLE_ASYNC_NOTIF
  * w4:	Notification value to request bottom half processing, should be
- *	less than OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE.
+ * less than OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE * w5-w7: Not used (MBZ) * * Return register usage: * w3: Error code, 0 on success - * w4-w7: Note used (MBZ) + * w4-w7: Not used (MBZ) */ #define OPTEE_FFA_ENABLE_ASYNC_NOTIF OPTEE_FFA_BLOCKING_CALL(5) -#define OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE 64 +#define OPTEE_FFA_MAX_ASYNC_NOTIF_VALUE 64 + +/* + * Release Restricted memory + * + * Call register usage: + * w3: Service ID, OPTEE_FFA_RECLAIM_RSTMEM + * w4: Shared memory handle, lower bits + * w5: Shared memory handle, higher bits + * w6-w7: Not used (MBZ) + * + * Return register usage: + * w3: Error code, 0 on success + * w4-w7: Note used (MBZ) + */ +#define OPTEE_FFA_RELEASE_RSTMEM OPTEE_FFA_BLOCKING_CALL(8) /* * Call with struct optee_msg_arg as argument in the supplied shared memory diff --git a/drivers/tee/optee/optee_msg.h b/drivers/tee/optee/optee_msg.h index e8840a82b983..1b558526e7d9 100644 --- a/drivers/tee/optee/optee_msg.h +++ b/drivers/tee/optee/optee_msg.h @@ -133,13 +133,13 @@ struct optee_msg_param_rmem { }; /** - * struct optee_msg_param_fmem - ffa memory reference parameter + * struct optee_msg_param_fmem - FF-A memory reference parameter * @offs_lower: Lower bits of offset into shared memory reference * @offs_upper: Upper bits of offset into shared memory reference * @internal_offs: Internal offset into the first page of shared memory * reference * @size: Size of the buffer - * @global_id: Global identifier of Shared memory + * @global_id: Global identifier of the shared memory */ struct optee_msg_param_fmem { u32 offs_low; @@ -165,7 +165,7 @@ struct optee_msg_param_value { * @attr: attributes * @tmem: parameter by temporary memory reference * @rmem: parameter by registered memory reference - * @fmem: parameter by ffa registered memory reference + * @fmem: parameter by FF-A registered memory reference * @value: parameter by opaque value * @octets: parameter by octet string * @@ -296,6 +296,18 @@ struct optee_msg_arg { */ #define OPTEE_MSG_FUNCID_GET_OS_REVISION 0x0001 +/* + * Values used in OPTEE_MSG_CMD_LEND_RSTMEM below + * OPTEE_MSG_RSTMEM_RESERVED Reserved + * OPTEE_MSG_RSTMEM_SECURE_VIDEO_PLAY Secure Video Playback + * OPTEE_MSG_RSTMEM_TRUSTED_UI Trused UI + * OPTEE_MSG_RSTMEM_SECURE_VIDEO_RECORD Secure Video Recording + */ +#define OPTEE_MSG_RSTMEM_RESERVED 0 +#define OPTEE_MSG_RSTMEM_SECURE_VIDEO_PLAY 1 +#define OPTEE_MSG_RSTMEM_TRUSTED_UI 2 +#define OPTEE_MSG_RSTMEM_SECURE_VIDEO_RECORD 3 + /* * Do a secure call with struct optee_msg_arg as argument * The OPTEE_MSG_CMD_* below defines what goes in struct optee_msg_arg::cmd @@ -337,6 +349,49 @@ struct optee_msg_arg { * OPTEE_MSG_CMD_STOP_ASYNC_NOTIF informs secure world that from now is * normal world unable to process asynchronous notifications. Typically * used when the driver is shut down. + * + * OPTEE_MSG_CMD_LEND_RSTMEM lends restricted memory. The passed normal + * physical memory is restricted from normal world access. The memory + * should be unmapped prior to this call since it becomes inaccessible + * during the request. + * Parameters are passed as: + * [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + * [in] param[0].u.value.a OPTEE_MSG_RSTMEM_* defined above + * [in] param[1].attr OPTEE_MSG_ATTR_TYPE_TMEM_INPUT + * [in] param[1].u.tmem.buf_ptr physical address + * [in] param[1].u.tmem.size size + * [in] param[1].u.tmem.shm_ref holds restricted memory reference + * + * OPTEE_MSG_CMD_RECLAIM_RSTMEM reclaims a previously lent restricted + * memory reference. 
The physical memory is accessible by the normal world + * after this function has return and can be mapped again. The information + * is passed as: + * [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + * [in] param[0].u.value.a holds restricted memory cookie + * + * OPTEE_MSG_CMD_GET_RSTMEM_CONFIG get configuration for a specific + * restricted memory use case. Parameters are passed as: + * [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INOUT + * [in] param[0].value.a OPTEE_MSG_RSTMEM_* + * [in] param[1].attr OPTEE_MSG_ATTR_TYPE_{R,F}MEM_OUTPUT + * [in] param[1].u.{r,f}mem Buffer or NULL + * [in] param[1].u.{r,f}mem.size Provided size of buffer or 0 for query + * output for the restricted use case: + * [out] param[0].value.a Minimal size of SDP memory + * [out] param[0].value.b Required alignment of size and start of + * restricted memory + * [out] param[1].{r,f}mem.size Size of output data + * [out] param[1].{r,f}mem If non-NULL, contains an array of + * uint16_t holding endpoints that + * must be included when lending + * memory for this use case + * + * OPTEE_MSG_CMD_ASSIGN_RSTMEM assigns use-case to restricted memory + * previously lent using the FFA_LEND framework ABI. Parameters are passed + * as: + * [in] param[0].attr OPTEE_MSG_ATTR_TYPE_VALUE_INPUT + * [in] param[0].u.value.a holds restricted memory cookie + * [in] param[0].u.value.b OPTEE_MSG_RSTMEM_* defined above */ #define OPTEE_MSG_CMD_OPEN_SESSION 0 #define OPTEE_MSG_CMD_INVOKE_COMMAND 1 @@ -346,6 +401,10 @@ struct optee_msg_arg { #define OPTEE_MSG_CMD_UNREGISTER_SHM 5 #define OPTEE_MSG_CMD_DO_BOTTOM_HALF 6 #define OPTEE_MSG_CMD_STOP_ASYNC_NOTIF 7 +#define OPTEE_MSG_CMD_LEND_RSTMEM 8 +#define OPTEE_MSG_CMD_RECLAIM_RSTMEM 9 +#define OPTEE_MSG_CMD_GET_RSTMEM_CONFIG 10 +#define OPTEE_MSG_CMD_ASSIGN_RSTMEM 11 #define OPTEE_MSG_FUNCID_CALL_WITH_ARG 0x0004 #endif /* _OPTEE_MSG_H */ diff --git a/drivers/tee/optee/optee_smc.h b/drivers/tee/optee/optee_smc.h index 879426300821..abc379ce190c 100644 --- a/drivers/tee/optee/optee_smc.h +++ b/drivers/tee/optee/optee_smc.h @@ -264,7 +264,6 @@ struct optee_smc_get_shm_config_result { #define OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM BIT(0) /* Secure world can communicate via previously unregistered shared memory */ #define OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM BIT(1) - /* * Secure world supports commands "register/unregister shared memory", * secure world accepts command buffers located in any parts of non-secure RAM @@ -280,6 +279,10 @@ struct optee_smc_get_shm_config_result { #define OPTEE_SMC_SEC_CAP_RPC_ARG BIT(6) /* Secure world supports probing for RPMB device if needed */ #define OPTEE_SMC_SEC_CAP_RPMB_PROBE BIT(7) +/* Secure world supports Secure Data Path */ +#define OPTEE_SMC_SEC_CAP_SDP BIT(8) +/* Secure world supports dynamic restricted memory */ +#define OPTEE_SMC_SEC_CAP_DYNAMIC_RSTMEM BIT(9) #define OPTEE_SMC_FUNCID_EXCHANGE_CAPABILITIES 9 #define OPTEE_SMC_EXCHANGE_CAPABILITIES \ @@ -451,6 +454,72 @@ struct optee_smc_disable_shm_cache_result { /* See OPTEE_SMC_CALL_WITH_REGD_ARG above */ #define OPTEE_SMC_FUNCID_CALL_WITH_REGD_ARG 19 +/* + * Get Secure Data Path memory config + * + * Returns the Secure Data Path memory config. 
+ * + * Call register usage: + * a0 SMC Function ID, OPTEE_SMC_GET_SDP_CONFIG + * a2-6 Not used, must be zero + * a7 Hypervisor Client ID register + * + * Have config return register usage: + * a0 OPTEE_SMC_RETURN_OK + * a1 Physical address of start of SDP memory + * a2 Size of SDP memory + * a3 Not used + * a4-7 Preserved + * + * Not available register usage: + * a0 OPTEE_SMC_RETURN_ENOTAVAIL + * a1-3 Not used + * a4-7 Preserved + */ +#define OPTEE_SMC_FUNCID_GET_SDP_CONFIG 20 +#define OPTEE_SMC_GET_SDP_CONFIG \ + OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_SDP_CONFIG) + +struct optee_smc_get_sdp_config_result { + unsigned long status; + unsigned long start; + unsigned long size; + unsigned long flags; +}; + +/* + * Get Secure Data Path dynamic memory config + * + * Returns the Secure Data Path dynamic memory config. + * + * Call register usage: + * a0 SMC Function ID, OPTEE_SMC_GET_DYN_SHM_CONFIG + * a2-6 Not used, must be zero + * a7 Hypervisor Client ID register + * + * Have config return register usage: + * a0 OPTEE_SMC_RETURN_OK + * a1 Minamal size of SDP memory + * a2 Required alignment of size and start of registered SDP memory + * a3 Not used + * a4-7 Preserved + * + * Not available register usage: + * a0 OPTEE_SMC_RETURN_ENOTAVAIL + * a1-3 Not used + * a4-7 Preserved + */ + +#define OPTEE_SMC_FUNCID_GET_DYN_SDP_CONFIG 21 +#define OPTEE_SMC_GET_DYN_SDP_CONFIG \ + OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_GET_DYN_SDP_CONFIG) + +struct optee_smc_get_dyn_sdp_config_result { + unsigned long status; + unsigned long size; + unsigned long align; + unsigned long flags; +}; /* * Resume from RPC (for example after processing a foreign interrupt) From patchwork Wed Mar 5 13:04:11 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 14002616 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 534E0C19F32 for ; Wed, 5 Mar 2025 13:06:58 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id BE89610E77D; Wed, 5 Mar 2025 13:06:57 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=linaro.org header.i=@linaro.org header.b="Jw0ZA1+C"; dkim-atps=neutral Received: from mail-ed1-f53.google.com (mail-ed1-f53.google.com [209.85.208.53]) by gabe.freedesktop.org (Postfix) with ESMTPS id C1DA310E77D for ; Wed, 5 Mar 2025 13:06:55 +0000 (UTC) Received: by mail-ed1-f53.google.com with SMTP id 4fb4d7f45d1cf-5dccaaca646so1813227a12.0 for ; Wed, 05 Mar 2025 05:06:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1741180014; x=1741784814; darn=lists.freedesktop.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=4A2RMjc7+ET06/OLz7LUVQUp9/M2mVREMhZjdI+mgFg=; b=Jw0ZA1+CSu7mM3kw31vZRluQqnwLzsDxTTqbV90CvzvOc8lMCW34QgiMV39CoXjJ7w 0q9zpH9HfpETLUPiphw/sJYvP0xIeFrHXpoWrV3nyZxQuIBYUyQ8l07byMDwvWytYHm0 ZN8at3KT576nD/mPXMNmn8a3o75ANFgR/jR3JTblmApyqsW2DfVJ5JAOXAPjWFYtaQvP LfK/EelxXRc7Q0nuR6HqBAHMg/higZvI1D45yfbERi34Jd7CoBEMvkSutaS0VkWqadb3 
From: Jens Wiklander
Subject: [PATCH v6 05/10] tee: implement restricted DMA-heap
Date: Wed, 5 Mar 2025 14:04:11 +0100
Message-ID: <20250305130634.1850178-6-jens.wiklander@linaro.org>
In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org>
References: <20250305130634.1850178-1-jens.wiklander@linaro.org>

Implement a DMA heap for restricted DMA-buf allocation in the TEE
subsystem.

Restricted memory refers to memory buffers behind a hardware-enforced
firewall. It is not accessible to the kernel under normal circumstances;
it is only accessible to certain hardware IPs or to CPUs executing in a
higher or differently privileged mode than the kernel itself.
This interface allows to allocate and manage such restricted memory buffers via interaction with a TEE implementation. The restricted memory is allocated for a specific use-case, like Secure Video Playback, Trusted UI, or Secure Video Recording where certain hardware devices can access the memory. The DMA-heaps are enabled explicitly by the TEE backend driver. The TEE backend drivers needs to implement restricted memory pool to manage the restricted memory. Signed-off-by: Jens Wiklander --- drivers/tee/Makefile | 1 + drivers/tee/tee_heap.c | 470 ++++++++++++++++++++++++++++++++++++++ drivers/tee/tee_private.h | 6 + include/linux/tee_core.h | 62 +++++ 4 files changed, 539 insertions(+) create mode 100644 drivers/tee/tee_heap.c diff --git a/drivers/tee/Makefile b/drivers/tee/Makefile index 5488cba30bd2..949a6a79fb06 100644 --- a/drivers/tee/Makefile +++ b/drivers/tee/Makefile @@ -1,6 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_TEE) += tee.o tee-objs += tee_core.o +tee-objs += tee_heap.o tee-objs += tee_shm.o tee-objs += tee_shm_pool.o obj-$(CONFIG_OPTEE) += optee/ diff --git a/drivers/tee/tee_heap.c b/drivers/tee/tee_heap.c new file mode 100644 index 000000000000..476ab2e27260 --- /dev/null +++ b/drivers/tee/tee_heap.c @@ -0,0 +1,470 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2025, Linaro Limited + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "tee_private.h" + +struct tee_dma_heap { + struct dma_heap *heap; + enum tee_dma_heap_id id; + struct tee_rstmem_pool *pool; + struct tee_device *teedev; + /* Protects pool and teedev above */ + struct mutex mu; +}; + +struct tee_heap_buffer { + struct tee_rstmem_pool *pool; + struct tee_device *teedev; + size_t size; + size_t offs; + struct sg_table table; +}; + +struct tee_heap_attachment { + struct sg_table table; + struct device *dev; +}; + +struct tee_rstmem_static_pool { + struct tee_rstmem_pool pool; + struct gen_pool *gen_pool; + phys_addr_t pa_base; +}; + +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_DMABUF_HEAPS) +static DEFINE_XARRAY_ALLOC(tee_dma_heap); + +static int copy_sg_table(struct sg_table *dst, struct sg_table *src) +{ + struct scatterlist *dst_sg; + struct scatterlist *src_sg; + int ret; + int i; + + ret = sg_alloc_table(dst, src->orig_nents, GFP_KERNEL); + if (ret) + return ret; + + dst_sg = dst->sgl; + for_each_sgtable_sg(src, src_sg, i) { + sg_set_page(dst_sg, sg_page(src_sg), src_sg->length, + src_sg->offset); + dst_sg = sg_next(dst_sg); + } + + return 0; +} + +static int tee_heap_attach(struct dma_buf *dmabuf, + struct dma_buf_attachment *attachment) +{ + struct tee_heap_buffer *buf = dmabuf->priv; + struct tee_heap_attachment *a; + int ret; + + a = kzalloc(sizeof(*a), GFP_KERNEL); + if (!a) + return -ENOMEM; + + ret = copy_sg_table(&a->table, &buf->table); + if (ret) { + kfree(a); + return ret; + } + + a->dev = attachment->dev; + attachment->priv = a; + + return 0; +} + +static void tee_heap_detach(struct dma_buf *dmabuf, + struct dma_buf_attachment *attachment) +{ + struct tee_heap_attachment *a = attachment->priv; + + sg_free_table(&a->table); + kfree(a); +} + +static struct sg_table * +tee_heap_map_dma_buf(struct dma_buf_attachment *attachment, + enum dma_data_direction direction) +{ + struct tee_heap_attachment *a = attachment->priv; + int ret; + + ret = dma_map_sgtable(attachment->dev, &a->table, direction, + DMA_ATTR_SKIP_CPU_SYNC); + if (ret) + return ERR_PTR(ret); + + return &a->table; +} + +static void 
tee_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, + struct sg_table *table, + enum dma_data_direction direction) +{ + struct tee_heap_attachment *a = attachment->priv; + + WARN_ON(&a->table != table); + + dma_unmap_sgtable(attachment->dev, table, direction, + DMA_ATTR_SKIP_CPU_SYNC); +} + +static void tee_heap_buf_free(struct dma_buf *dmabuf) +{ + struct tee_heap_buffer *buf = dmabuf->priv; + struct tee_device *teedev = buf->teedev; + + buf->pool->ops->free(buf->pool, &buf->table); + tee_device_put(teedev); +} + +static const struct dma_buf_ops tee_heap_buf_ops = { + .attach = tee_heap_attach, + .detach = tee_heap_detach, + .map_dma_buf = tee_heap_map_dma_buf, + .unmap_dma_buf = tee_heap_unmap_dma_buf, + .release = tee_heap_buf_free, +}; + +static struct dma_buf *tee_dma_heap_alloc(struct dma_heap *heap, + unsigned long len, u32 fd_flags, + u64 heap_flags) +{ + struct tee_dma_heap *h = dma_heap_get_drvdata(heap); + DEFINE_DMA_BUF_EXPORT_INFO(exp_info); + struct tee_device *teedev = NULL; + struct tee_heap_buffer *buf; + struct tee_rstmem_pool *pool; + struct dma_buf *dmabuf; + int rc; + + mutex_lock(&h->mu); + if (tee_device_get(h->teedev)) { + teedev = h->teedev; + pool = h->pool; + } + mutex_unlock(&h->mu); + + if (!teedev) + return ERR_PTR(-EINVAL); + + buf = kzalloc(sizeof(*buf), GFP_KERNEL); + if (!buf) { + dmabuf = ERR_PTR(-ENOMEM); + goto err; + } + buf->size = len; + buf->pool = pool; + buf->teedev = teedev; + + rc = pool->ops->alloc(pool, &buf->table, len, &buf->offs); + if (rc) { + dmabuf = ERR_PTR(rc); + goto err_kfree; + } + + exp_info.ops = &tee_heap_buf_ops; + exp_info.size = len; + exp_info.priv = buf; + exp_info.flags = fd_flags; + dmabuf = dma_buf_export(&exp_info); + if (IS_ERR(dmabuf)) + goto err_rstmem_free; + + return dmabuf; + +err_rstmem_free: + pool->ops->free(pool, &buf->table); +err_kfree: + kfree(buf); +err: + tee_device_put(h->teedev); + return dmabuf; +} + +static const struct dma_heap_ops tee_dma_heap_ops = { + .allocate = tee_dma_heap_alloc, +}; + +static const char *heap_id_2_name(enum tee_dma_heap_id id) +{ + switch (id) { + case TEE_DMA_HEAP_SECURE_VIDEO_PLAY: + return "restricted,secure-video"; + case TEE_DMA_HEAP_TRUSTED_UI: + return "restricted,trusted-ui"; + case TEE_DMA_HEAP_SECURE_VIDEO_RECORD: + return "restricted,secure-video-record"; + default: + return NULL; + } +} + +static int alloc_dma_heap(struct tee_device *teedev, enum tee_dma_heap_id id, + struct tee_rstmem_pool *pool) +{ + struct dma_heap_export_info exp_info = { + .ops = &tee_dma_heap_ops, + .name = heap_id_2_name(id), + }; + struct tee_dma_heap *h; + int rc; + + if (!exp_info.name) + return -EINVAL; + + if (xa_reserve(&tee_dma_heap, id, GFP_KERNEL)) { + if (!xa_load(&tee_dma_heap, id)) + return -EEXIST; + return -ENOMEM; + } + + h = kzalloc(sizeof(*h), GFP_KERNEL); + if (!h) + return -ENOMEM; + h->id = id; + h->teedev = teedev; + h->pool = pool; + mutex_init(&h->mu); + + exp_info.priv = h; + h->heap = dma_heap_add(&exp_info); + if (IS_ERR(h->heap)) { + rc = PTR_ERR(h->heap); + kfree(h); + + return rc; + } + + /* "can't fail" due to the call to xa_reserve() above */ + return WARN(xa_store(&tee_dma_heap, id, h, GFP_KERNEL), + "xa_store() failed"); +} + +int tee_device_register_dma_heap(struct tee_device *teedev, + enum tee_dma_heap_id id, + struct tee_rstmem_pool *pool) +{ + struct tee_dma_heap *h; + int rc; + + h = xa_load(&tee_dma_heap, id); + if (h) { + mutex_lock(&h->mu); + if (h->teedev) { + rc = -EBUSY; + } else { + h->teedev = teedev; + h->pool = pool; + rc = 0; + } + 
mutex_unlock(&h->mu); + } else { + rc = alloc_dma_heap(teedev, id, pool); + } + + if (rc) + dev_err(&teedev->dev, "can't register DMA heap id %d (%s)\n", + id, heap_id_2_name(id)); + + return rc; +} + +void tee_device_unregister_all_dma_heaps(struct tee_device *teedev) +{ + struct tee_rstmem_pool *pool; + struct tee_dma_heap *h; + u_long i; + + xa_for_each(&tee_dma_heap, i, h) { + if (h) { + pool = NULL; + mutex_lock(&h->mu); + if (h->teedev == teedev) { + pool = h->pool; + h->teedev = NULL; + h->pool = NULL; + } + mutex_unlock(&h->mu); + if (pool) + pool->ops->destroy_pool(pool); + } + } +} +EXPORT_SYMBOL_GPL(tee_device_unregister_all_dma_heaps); + +int tee_heap_update_from_dma_buf(struct tee_device *teedev, + struct dma_buf *dmabuf, size_t *offset, + struct tee_shm *shm, + struct tee_shm **parent_shm) +{ + struct tee_heap_buffer *buf; + int rc; + + /* The DMA-buf must be from our heap */ + if (dmabuf->ops != &tee_heap_buf_ops) + return -EINVAL; + + buf = dmabuf->priv; + /* The buffer must be from the same teedev */ + if (buf->teedev != teedev) + return -EINVAL; + + shm->size = buf->size; + + rc = buf->pool->ops->update_shm(buf->pool, &buf->table, buf->offs, shm, + parent_shm); + if (!rc && *parent_shm) + *offset = buf->offs; + + return rc; +} +#else +int tee_device_register_dma_heap(struct tee_device *teedev __always_unused, + enum tee_dma_heap_id id __always_unused, + struct tee_rstmem_pool *pool __always_unused) +{ + return -EINVAL; +} +EXPORT_SYMBOL_GPL(tee_device_register_dma_heap); + +void +tee_device_unregister_all_dma_heaps(struct tee_device *teedev __always_unused) +{ +} +EXPORT_SYMBOL_GPL(tee_device_unregister_all_dma_heaps); + +int tee_heap_update_from_dma_buf(struct tee_device *teedev __always_unused, + struct dma_buf *dmabuf __always_unused, + size_t *offset __always_unused, + struct tee_shm *shm __always_unused, + struct tee_shm **parent_shm __always_unused) +{ + return -EINVAL; +} +#endif + +static struct tee_rstmem_static_pool * +to_rstmem_static_pool(struct tee_rstmem_pool *pool) +{ + return container_of(pool, struct tee_rstmem_static_pool, pool); +} + +static int rstmem_pool_op_static_alloc(struct tee_rstmem_pool *pool, + struct sg_table *sgt, size_t size, + size_t *offs) +{ + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); + phys_addr_t pa; + int ret; + + pa = gen_pool_alloc(stp->gen_pool, size); + if (!pa) + return -ENOMEM; + + ret = sg_alloc_table(sgt, 1, GFP_KERNEL); + if (ret) { + gen_pool_free(stp->gen_pool, pa, size); + return ret; + } + + sg_set_page(sgt->sgl, phys_to_page(pa), size, 0); + *offs = pa - stp->pa_base; + + return 0; +} + +static void rstmem_pool_op_static_free(struct tee_rstmem_pool *pool, + struct sg_table *sgt) +{ + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); + struct scatterlist *sg; + int i; + + for_each_sgtable_sg(sgt, sg, i) + gen_pool_free(stp->gen_pool, sg_phys(sg), sg->length); + sg_free_table(sgt); +} + +static int rstmem_pool_op_static_update_shm(struct tee_rstmem_pool *pool, + struct sg_table *sgt, size_t offs, + struct tee_shm *shm, + struct tee_shm **parent_shm) +{ + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); + + shm->paddr = stp->pa_base + offs; + *parent_shm = NULL; + + return 0; +} + +static void rstmem_pool_op_static_destroy_pool(struct tee_rstmem_pool *pool) +{ + struct tee_rstmem_static_pool *stp = to_rstmem_static_pool(pool); + + gen_pool_destroy(stp->gen_pool); + kfree(stp); +} + +static struct tee_rstmem_pool_ops rstmem_pool_ops_static = { + .alloc = 
rstmem_pool_op_static_alloc, + .free = rstmem_pool_op_static_free, + .update_shm = rstmem_pool_op_static_update_shm, + .destroy_pool = rstmem_pool_op_static_destroy_pool, +}; + +struct tee_rstmem_pool *tee_rstmem_static_pool_alloc(phys_addr_t paddr, + size_t size) +{ + const size_t page_mask = PAGE_SIZE - 1; + struct tee_rstmem_static_pool *stp; + int rc; + + /* Check it's page aligned */ + if ((paddr | size) & page_mask) + return ERR_PTR(-EINVAL); + + stp = kzalloc(sizeof(*stp), GFP_KERNEL); + if (!stp) + return ERR_PTR(-ENOMEM); + + stp->gen_pool = gen_pool_create(PAGE_SHIFT, -1); + if (!stp->gen_pool) { + rc = -ENOMEM; + goto err_free; + } + + rc = gen_pool_add(stp->gen_pool, paddr, size, -1); + if (rc) + goto err_free_pool; + + stp->pool.ops = &rstmem_pool_ops_static; + stp->pa_base = paddr; + return &stp->pool; + +err_free_pool: + gen_pool_destroy(stp->gen_pool); +err_free: + kfree(stp); + + return ERR_PTR(rc); +} +EXPORT_SYMBOL_GPL(tee_rstmem_static_pool_alloc); diff --git a/drivers/tee/tee_private.h b/drivers/tee/tee_private.h index 9bc50605227c..6c6ff5d5eed2 100644 --- a/drivers/tee/tee_private.h +++ b/drivers/tee/tee_private.h @@ -8,6 +8,7 @@ #include #include #include +#include #include #include #include @@ -24,4 +25,9 @@ struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size); struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx, unsigned long addr, size_t length); +int tee_heap_update_from_dma_buf(struct tee_device *teedev, + struct dma_buf *dmabuf, size_t *offset, + struct tee_shm *shm, + struct tee_shm **parent_shm); + #endif /*TEE_PRIVATE_H*/ diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h index a38494d6b5f4..16ef078247ae 100644 --- a/include/linux/tee_core.h +++ b/include/linux/tee_core.h @@ -8,9 +8,11 @@ #include #include +#include #include #include #include +#include #include #include #include @@ -30,6 +32,12 @@ #define TEE_DEVICE_FLAG_REGISTERED 0x1 #define TEE_MAX_DEV_NAME_LEN 32 +enum tee_dma_heap_id { + TEE_DMA_HEAP_SECURE_VIDEO_PLAY = 1, + TEE_DMA_HEAP_TRUSTED_UI, + TEE_DMA_HEAP_SECURE_VIDEO_RECORD, +}; + /** * struct tee_device - TEE Device representation * @name: name of device @@ -116,6 +124,33 @@ struct tee_desc { u32 flags; }; +/** + * struct tee_rstmem_pool - restricted memory pool + * @ops: operations + * + * This is an abstract interface where this struct is expected to be + * embedded in another struct specific to the implementation. 
+ */ +struct tee_rstmem_pool { + const struct tee_rstmem_pool_ops *ops; +}; + +/** + * struct tee_rstmem_pool_ops - restricted memory pool operations + * @alloc: called when allocating restricted memory + * @free: called when freeing restricted memory + * @destroy_pool: called when destroying the pool + */ +struct tee_rstmem_pool_ops { + int (*alloc)(struct tee_rstmem_pool *pool, struct sg_table *sgt, + size_t size, size_t *offs); + void (*free)(struct tee_rstmem_pool *pool, struct sg_table *sgt); + int (*update_shm)(struct tee_rstmem_pool *pool, struct sg_table *sgt, + size_t offs, struct tee_shm *shm, + struct tee_shm **parent_shm); + void (*destroy_pool)(struct tee_rstmem_pool *pool); +}; + /** * tee_device_alloc() - Allocate a new struct tee_device instance * @teedesc: Descriptor for this driver @@ -154,6 +189,11 @@ int tee_device_register(struct tee_device *teedev); */ void tee_device_unregister(struct tee_device *teedev); +int tee_device_register_dma_heap(struct tee_device *teedev, + enum tee_dma_heap_id id, + struct tee_rstmem_pool *pool); +void tee_device_unregister_all_dma_heaps(struct tee_device *teedev); + /** * tee_device_set_dev_groups() - Set device attribute groups * @teedev: Device to register @@ -229,6 +269,28 @@ static inline void tee_shm_pool_free(struct tee_shm_pool *pool) pool->ops->destroy_pool(pool); } +/** + * tee_rstmem_static_pool_alloc() - Create a restricted memory manager + * @paddr: Physical address of start of pool + * @size: Size in bytes of the pool + * + * @returns pointer to a 'struct tee_rstmem_pool' or an ERR_PTR on failure. + */ +struct tee_rstmem_pool *tee_rstmem_static_pool_alloc(phys_addr_t paddr, + size_t size); + +/** + * tee_rstmem_pool_free() - Free a restricted memory pool + * @pool: The restricted memory pool to free + * + * There must be no remaining restricted memory allocated from this pool + * when this function is called. + */ +static inline void tee_rstmem_pool_free(struct tee_rstmem_pool *pool) +{ + pool->ops->destroy_pool(pool); +} + /** * tee_get_drvdata() - Return driver_data pointer * @returns the driver_data pointer supplied to tee_register().
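
The heaps added above only appear once a TEE backend driver supplies a pool. As a minimal, hypothetical sketch of the intended use of this new API (the carveout base and size are invented; patch 08 later in the series does the same thing with the range reported by OP-TEE):

#include <linux/err.h>
#include <linux/sizes.h>
#include <linux/tee_core.h>

/* Hypothetical carveout; a real driver gets this from firmware or DT. */
#define EXAMPLE_SDP_BASE	0xbe000000UL
#define EXAMPLE_SDP_SIZE	SZ_16M

static int example_register_sdp_heap(struct tee_device *teedev)
{
	struct tee_rstmem_pool *pool;
	int rc;

	/* Wrap the firewalled carveout in a restricted memory pool */
	pool = tee_rstmem_static_pool_alloc(EXAMPLE_SDP_BASE, EXAMPLE_SDP_SIZE);
	if (IS_ERR(pool))
		return PTR_ERR(pool);

	/* Expose it to userspace as the "restricted,secure-video" DMA heap */
	rc = tee_device_register_dma_heap(teedev,
					  TEE_DMA_HEAP_SECURE_VIDEO_PLAY, pool);
	if (rc)
		tee_rstmem_pool_free(pool);

	return rc;
}

On driver removal the backend calls tee_device_unregister_all_dma_heaps(), which detaches the pool from the heap and destroys it.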
From patchwork Wed Mar 5 13:04:12 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 14002617 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B3ADEC28B22 for ; Wed, 5 Mar 2025 13:06:59 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 32ED410E778; Wed, 5 Mar 2025 13:06:59 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=linaro.org header.i=@linaro.org header.b="bNEqP9Vo"; dkim-atps=neutral Received: from mail-ed1-f44.google.com (mail-ed1-f44.google.com [209.85.208.44]) by gabe.freedesktop.org (Postfix) with ESMTPS id 26F7210E778 for ; Wed, 5 Mar 2025 13:06:58 +0000 (UTC) Received: by mail-ed1-f44.google.com with SMTP id 4fb4d7f45d1cf-5e4f5cc3172so8501377a12.0 for ; Wed, 05 Mar 2025 05:06:58 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1741180017; x=1741784817; darn=lists.freedesktop.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=5CcAD/p05b1IVvXLEIrs8hiPkLvSkxniC6dMT97HxMQ=; b=bNEqP9VoaRZQ1bSl2kZePTz7ETBAh0Xv49SKUg27IuaPSsTmjw0uuNbJYscUHlkMk3 KCy2gHvEwzJ4KdRAkJ7vOgScWIocy++ct2/F6RzYhYUqMjvAeChHbXCEvy1m7LG7v0+C A4NTTrA3r8YKqonXKhKwCO6JozXYTb4UPvRqSwzvu6ceGWKXbSkRW9UcZWYRW5NyxO6M 5s7Y0f8xOSNiO+wBHJf6tNIgIVS9cNao52GU/BYyZMqJ8Wc2IbY7osH4glNNW0qYDgVi kAb8+VGoo9+DDKtxl0ZVdSM1AqrZ8gcTAXo57GhulPgJy+m4ANFp3ICnxeEoirb5ZYkD yRWw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1741180017; x=1741784817; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=5CcAD/p05b1IVvXLEIrs8hiPkLvSkxniC6dMT97HxMQ=; b=lGIH8N3WM4AbCBEWi2+JHvomLtgfqI74IOlxw8fGvh2qlfnRnqdTEcjWc28SomUk+j Vp7aPErVw+oalwaYTmUOMW/o1+QpZbYGovuf+BcSkCNXWmsHoL6S2iZjHBNA84FKLOL0 FKLOyCoUS2Bj0tZMRBKKHN6BaapYfBHm2r2FyJlj4rmCiecTc5djwXUIXti/DZLblyPS ZBOYx7WqulzgBoWsPQ/UTDHBu/WturkL6pZBEWH8i0K8xbHelY64FcJLf1+v+w0xiNEd llEa5ftPNfAlpzx3DWoDwTqoWDGQPNwXIYtiCsaNgRf6hklbmP6X7dYEpzxIJp2PloQD DCpg== X-Forwarded-Encrypted: i=1; AJvYcCXLKS46OYI1PS3W9H9kp8Rz5HKDOjJ7UV5jrP2juSlmv7mr4FiUH6+ABhu2FBeOcwxM79HP9iM2yAc=@lists.freedesktop.org X-Gm-Message-State: AOJu0YwKLrBXMaO7riYYN2c8F8c/v3l1hp2jzrLb+mGsoiKQ7xMG/gTg kiDxMWJj7txLK5WAGdIsycvpwhmzL/b6kJZ5gbCkmZu4xV9A0eAaXQ/MfdjU2nY= X-Gm-Gg: ASbGncvJKVcv4DQc8ApFf9g4PO6ifjHbiUX0QLTC+Zy7LfccukllDVfV8YPjKr+SQ1V BlbhRLu3mxZnghOy+fuO9Sp5HKVCQVMsTYsEhmkorrFIO7ljn2bp1zxKd5yH6rUCnnRpRRKihN2 DA7EIK492YkjhnI8GjjfMhNft3mRgHTGfDxX7pF63lcIXLi42GqEd7RkDpXgfBrCqSHldSbOkTe 8Fsur1PmDUFEzsDZWWOAqgXgEOJHSBictl/hP4Z75/s+vSZZxtm2sksuRWVy6A+AKy4AfnGmosI 32VNY0am98iUe2+ChJEAmPCUnvelH7s7zEeHaOfEDNO0ARxWQcIzYBCduL/xn7bUZ3EpMclSg4T u4pJzlwfCDV8QdyAzCfh6Hg== X-Google-Smtp-Source: AGHT+IG3cgo4lt4+QBuBPJp4MiDy4EL2BqrtziPjGxQujndCvzWUBDgOAl5gJF76QqdWuudAiJRjUg== X-Received: by 2002:a05:6402:1e92:b0:5d9:82bc:ad06 with SMTP id 4fb4d7f45d1cf-5e59f386d7dmr2815278a12.3.1741180016471; Wed, 05 Mar 2025 05:06:56 -0800 (PST) Received: 
from rayden.urgonet (h-98-128-140-123.A175.priv.bahnhof.se. [98.128.140.123]) by smtp.gmail.com with ESMTPSA id 4fb4d7f45d1cf-5e5bcd1595bsm65714a12.42.2025.03.05.05.06.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 05 Mar 2025 05:06:55 -0800 (PST) From: Jens Wiklander To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org Cc: Olivier Masse , Thierry Reding , Yong Wu , Sumit Semwal , Benjamin Gaignard , Brian Starkey , John Stultz , "T . J . Mercier" , =?utf-8?q?Christian_K=C3=B6nig?= , Sumit Garg , Matthias Brugger , AngeloGioacchino Del Regno , azarrabi@qti.qualcomm.com, Simona Vetter , Daniel Stone , Etienne Carriere , Jens Wiklander Subject: [PATCH v6 06/10] tee: new ioctl to a register tee_shm from a dmabuf file descriptor Date: Wed, 5 Mar 2025 14:04:12 +0100 Message-ID: <20250305130634.1850178-7-jens.wiklander@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org> References: <20250305130634.1850178-1-jens.wiklander@linaro.org> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" From: Etienne Carriere Enable userspace to create a tee_shm object that refers to a dmabuf reference. Userspace registers the dmabuf file descriptor as in a tee_shm object. The registration is completed with a tee_shm file descriptor returned to userspace. Userspace is free to close the dmabuf file descriptor now since all the resources are now held via the tee_shm object. Closing the tee_shm file descriptor will release all resources used by the tee_shm object. This change only support dmabuf references that relates to physically contiguous memory buffers. New tee_shm flag to identify tee_shm objects built from a registered dmabuf, TEE_SHM_DMA_BUF. 
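
For illustration, the expected userspace flow looks roughly like the hedged sketch below: allocate a buffer from one of the restricted DMA heaps added earlier in this series, then register the dmabuf with the TEE. The heap path, TEE device node, and buffer size are assumptions; the heap name matches heap_id_2_name() for TEE_DMA_HEAP_SECURE_VIDEO_PLAY.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>
#include <linux/tee.h>

int main(void)
{
	struct dma_heap_allocation_data alloc = {
		.len = 8 * 1024 * 1024,		/* assumed buffer size */
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	struct tee_ioctl_shm_register_fd_data data = { };
	int heap_fd, tee_fd, shm_fd;

	/* Assumed paths: heap exported by the TEE backend, first TEE device */
	heap_fd = open("/dev/dma_heap/restricted,secure-video",
		       O_RDONLY | O_CLOEXEC);
	if (heap_fd < 0 || ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc) < 0) {
		perror("dma-heap allocation");
		return 1;
	}

	tee_fd = open("/dev/tee0", O_RDWR | O_CLOEXEC);
	if (tee_fd < 0)
		return 1;

	data.fd = alloc.fd;
	shm_fd = ioctl(tee_fd, TEE_IOC_SHM_REGISTER_FD, &data);
	if (shm_fd < 0) {
		perror("TEE_IOC_SHM_REGISTER_FD");
		return 1;
	}

	/*
	 * data.id can now be passed as a memref parameter in TEE_IOC_INVOKE.
	 * The dmabuf fd may be closed; the tee_shm object keeps its own
	 * reference until shm_fd is closed.
	 */
	close(alloc.fd);
	close(shm_fd);
	close(tee_fd);
	close(heap_fd);
	return 0;
}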
Signed-off-by: Etienne Carriere Signed-off-by: Olivier Masse Signed-off-by: Jens Wiklander --- drivers/tee/tee_core.c | 145 ++++++++++++++++++++++++++----------- drivers/tee/tee_private.h | 1 + drivers/tee/tee_shm.c | 146 ++++++++++++++++++++++++++++++++++++-- include/linux/tee_core.h | 1 + include/linux/tee_drv.h | 10 +++ include/uapi/linux/tee.h | 29 ++++++++ 6 files changed, 288 insertions(+), 44 deletions(-) diff --git a/drivers/tee/tee_core.c b/drivers/tee/tee_core.c index 685afcaa3ea1..3a71643766d5 100644 --- a/drivers/tee/tee_core.c +++ b/drivers/tee/tee_core.c @@ -353,6 +353,103 @@ tee_ioctl_shm_register(struct tee_context *ctx, return ret; } +static int +tee_ioctl_shm_register_fd(struct tee_context *ctx, + struct tee_ioctl_shm_register_fd_data __user *udata) +{ + struct tee_ioctl_shm_register_fd_data data; + struct tee_shm *shm; + long ret; + + if (copy_from_user(&data, udata, sizeof(data))) + return -EFAULT; + + /* Currently no input flags are supported */ + if (data.flags) + return -EINVAL; + + shm = tee_shm_register_fd(ctx, data.fd); + if (IS_ERR(shm)) + return -EINVAL; + + data.id = shm->id; + data.flags = shm->flags; + data.size = shm->size; + + if (copy_to_user(udata, &data, sizeof(data))) + ret = -EFAULT; + else + ret = tee_shm_get_fd(shm); + + /* + * When user space closes the file descriptor the shared memory + * should be freed or if tee_shm_get_fd() failed then it will + * be freed immediately. + */ + tee_shm_put(shm); + return ret; +} + +static int param_from_user_memref(struct tee_context *ctx, + struct tee_param_memref *memref, + struct tee_ioctl_param *ip) +{ + struct tee_shm *shm; + size_t offs = 0; + + /* + * If a NULL pointer is passed to a TA in the TEE, + * the ip.c IOCTL parameters is set to TEE_MEMREF_NULL + * indicating a NULL memory reference. + */ + if (ip->c != TEE_MEMREF_NULL) { + /* + * If we fail to get a pointer to a shared + * memory object (and increase the ref count) + * from an identifier we return an error. All + * pointers that has been added in params have + * an increased ref count. It's the callers + * responibility to do tee_shm_put() on all + * resolved pointers. + */ + shm = tee_shm_get_from_id(ctx, ip->c); + if (IS_ERR(shm)) + return PTR_ERR(shm); + + /* + * Ensure offset + size does not overflow + * offset and does not overflow the size of + * the referred shared memory object. 
+ */ + if ((ip->a + ip->b) < ip->a || + (ip->a + ip->b) > shm->size) { + tee_shm_put(shm); + return -EINVAL; + } + + if (shm->flags & TEE_SHM_DMA_BUF) { + struct tee_shm *parent_shm; + + parent_shm = tee_shm_get_parent_shm(shm, &offs); + if (parent_shm) { + tee_shm_put(shm); + shm = parent_shm; + } + } + } else if (ctx->cap_memref_null) { + /* Pass NULL pointer to OP-TEE */ + shm = NULL; + } else { + return -EINVAL; + } + + memref->shm_offs = ip->a + offs; + memref->size = ip->b; + memref->shm = shm; + + return 0; +} + static int params_from_user(struct tee_context *ctx, struct tee_param *params, size_t num_params, struct tee_ioctl_param __user *uparams) @@ -360,8 +457,8 @@ static int params_from_user(struct tee_context *ctx, struct tee_param *params, size_t n; for (n = 0; n < num_params; n++) { - struct tee_shm *shm; struct tee_ioctl_param ip; + int rc; if (copy_from_user(&ip, uparams + n, sizeof(ip))) return -EFAULT; @@ -384,45 +481,10 @@ static int params_from_user(struct tee_context *ctx, struct tee_param *params, case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT: case TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT: - /* - * If a NULL pointer is passed to a TA in the TEE, - * the ip.c IOCTL parameters is set to TEE_MEMREF_NULL - * indicating a NULL memory reference. - */ - if (ip.c != TEE_MEMREF_NULL) { - /* - * If we fail to get a pointer to a shared - * memory object (and increase the ref count) - * from an identifier we return an error. All - * pointers that has been added in params have - * an increased ref count. It's the callers - * responibility to do tee_shm_put() on all - * resolved pointers. - */ - shm = tee_shm_get_from_id(ctx, ip.c); - if (IS_ERR(shm)) - return PTR_ERR(shm); - - /* - * Ensure offset + size does not overflow - * offset and does not overflow the size of - * the referred shared memory object. 
- */ - if ((ip.a + ip.b) < ip.a || - (ip.a + ip.b) > shm->size) { - tee_shm_put(shm); - return -EINVAL; - } - } else if (ctx->cap_memref_null) { - /* Pass NULL pointer to OP-TEE */ - shm = NULL; - } else { - return -EINVAL; - } - - params[n].u.memref.shm_offs = ip.a; - params[n].u.memref.size = ip.b; - params[n].u.memref.shm = shm; + rc = param_from_user_memref(ctx, ¶ms[n].u.memref, + &ip); + if (rc) + return rc; break; default: /* Unknown attribute */ @@ -827,6 +889,8 @@ static long tee_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) return tee_ioctl_shm_alloc(ctx, uarg); case TEE_IOC_SHM_REGISTER: return tee_ioctl_shm_register(ctx, uarg); + case TEE_IOC_SHM_REGISTER_FD: + return tee_ioctl_shm_register_fd(ctx, uarg); case TEE_IOC_OPEN_SESSION: return tee_ioctl_open_session(ctx, uarg); case TEE_IOC_INVOKE: @@ -1288,3 +1352,4 @@ MODULE_AUTHOR("Linaro"); MODULE_DESCRIPTION("TEE Driver"); MODULE_VERSION("1.0"); MODULE_LICENSE("GPL v2"); +MODULE_IMPORT_NS("DMA_BUF"); diff --git a/drivers/tee/tee_private.h b/drivers/tee/tee_private.h index 6c6ff5d5eed2..aad7f6c7e0f0 100644 --- a/drivers/tee/tee_private.h +++ b/drivers/tee/tee_private.h @@ -24,6 +24,7 @@ void teedev_ctx_put(struct tee_context *ctx); struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size); struct tee_shm *tee_shm_register_user_buf(struct tee_context *ctx, unsigned long addr, size_t length); +struct tee_shm *tee_shm_get_parent_shm(struct tee_shm *shm, size_t *offs); int tee_heap_update_from_dma_buf(struct tee_device *teedev, struct dma_buf *dmabuf, size_t *offset, diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c index daf6e5cfd59a..8b79918468b5 100644 --- a/drivers/tee/tee_shm.c +++ b/drivers/tee/tee_shm.c @@ -4,6 +4,7 @@ */ #include #include +#include #include #include #include @@ -15,6 +16,16 @@ #include #include "tee_private.h" +/* extra references appended to shm object for registered shared memory */ +struct tee_shm_dmabuf_ref { + struct tee_shm shm; + size_t offset; + struct dma_buf *dmabuf; + struct dma_buf_attachment *attach; + struct sg_table *sgt; + struct tee_shm *parent_shm; +}; + static void shm_put_kernel_pages(struct page **pages, size_t page_count) { size_t n; @@ -45,7 +56,23 @@ static void release_registered_pages(struct tee_shm *shm) static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm) { - if (shm->flags & TEE_SHM_POOL) { + struct tee_shm *parent_shm = NULL; + void *p = shm; + + if (shm->flags & TEE_SHM_DMA_BUF) { + struct tee_shm_dmabuf_ref *ref; + + ref = container_of(shm, struct tee_shm_dmabuf_ref, shm); + parent_shm = ref->parent_shm; + p = ref; + if (ref->attach) { + dma_buf_unmap_attachment(ref->attach, ref->sgt, + DMA_BIDIRECTIONAL); + + dma_buf_detach(ref->dmabuf, ref->attach); + } + dma_buf_put(ref->dmabuf); + } else if (shm->flags & TEE_SHM_POOL) { teedev->pool->ops->free(teedev->pool, shm); } else if (shm->flags & TEE_SHM_DYNAMIC) { int rc = teedev->desc->ops->shm_unregister(shm->ctx, shm); @@ -57,9 +84,10 @@ static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm) release_registered_pages(shm); } - teedev_ctx_put(shm->ctx); + if (shm->ctx) + teedev_ctx_put(shm->ctx); - kfree(shm); + kfree(p); tee_device_put(teedev); } @@ -169,7 +197,7 @@ struct tee_shm *tee_shm_alloc_user_buf(struct tee_context *ctx, size_t size) * tee_client_invoke_func(). The memory allocated is later freed with a * call to tee_shm_free(). 
* - * @returns a pointer to 'struct tee_shm' + * @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure */ struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size) { @@ -179,6 +207,116 @@ struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size) } EXPORT_SYMBOL_GPL(tee_shm_alloc_kernel_buf); +struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd) +{ + struct tee_shm_dmabuf_ref *ref; + int rc; + + if (!tee_device_get(ctx->teedev)) + return ERR_PTR(-EINVAL); + + teedev_ctx_get(ctx); + + ref = kzalloc(sizeof(*ref), GFP_KERNEL); + if (!ref) { + rc = -ENOMEM; + goto err_put_tee; + } + + refcount_set(&ref->shm.refcount, 1); + ref->shm.ctx = ctx; + ref->shm.id = -1; + ref->shm.flags = TEE_SHM_DMA_BUF; + + ref->dmabuf = dma_buf_get(fd); + if (IS_ERR(ref->dmabuf)) { + rc = PTR_ERR(ref->dmabuf); + goto err_kfree_ref; + } + + rc = tee_heap_update_from_dma_buf(ctx->teedev, ref->dmabuf, + &ref->offset, &ref->shm, + &ref->parent_shm); + if (!rc) + goto out; + if (rc != -EINVAL) + goto err_put_dmabuf; + + ref->attach = dma_buf_attach(ref->dmabuf, &ctx->teedev->dev); + if (IS_ERR(ref->attach)) { + rc = PTR_ERR(ref->attach); + goto err_put_dmabuf; + } + + ref->sgt = dma_buf_map_attachment(ref->attach, DMA_BIDIRECTIONAL); + if (IS_ERR(ref->sgt)) { + rc = PTR_ERR(ref->sgt); + goto err_detach; + } + + if (sg_nents(ref->sgt->sgl) != 1) { + rc = PTR_ERR(ref->sgt->sgl); + goto err_unmap_attachement; + } + + ref->shm.paddr = page_to_phys(sg_page(ref->sgt->sgl)); + ref->shm.size = ref->sgt->sgl->length; + +out: + mutex_lock(&ref->shm.ctx->teedev->mutex); + ref->shm.id = idr_alloc(&ref->shm.ctx->teedev->idr, &ref->shm, + 1, 0, GFP_KERNEL); + mutex_unlock(&ref->shm.ctx->teedev->mutex); + if (ref->shm.id < 0) { + rc = ref->shm.id; + if (ref->attach) + goto err_unmap_attachement; + goto err_put_dmabuf; + } + + return &ref->shm; + +err_unmap_attachement: + dma_buf_unmap_attachment(ref->attach, ref->sgt, DMA_BIDIRECTIONAL); +err_detach: + dma_buf_detach(ref->dmabuf, ref->attach); +err_put_dmabuf: + dma_buf_put(ref->dmabuf); +err_kfree_ref: + kfree(ref); +err_put_tee: + teedev_ctx_put(ctx); + tee_device_put(ctx->teedev); + + return ERR_PTR(rc); +} +EXPORT_SYMBOL_GPL(tee_shm_register_fd); + +struct tee_shm *tee_shm_get_parent_shm(struct tee_shm *shm, size_t *offs) +{ + struct tee_shm *parent_shm = NULL; + + if (shm->flags & TEE_SHM_DMA_BUF) { + struct tee_shm_dmabuf_ref *ref; + + ref = container_of(shm, struct tee_shm_dmabuf_ref, shm); + if (ref->parent_shm) { + /* + * the shm already has one reference to + * ref->parent_shm so we should be clear of 0. + * We're getting another reference since the caller + * of this function expects to put the returned + * parent_shm when it's done with it. 
+ */ + parent_shm = ref->parent_shm; + refcount_inc(&parent_shm->refcount); + *offs = ref->offset; + } + } + + return parent_shm; +} + /** * tee_shm_alloc_priv_buf() - Allocate shared memory for a privately shared * kernel buffer diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h index 16ef078247ae..6bd833b6d0e1 100644 --- a/include/linux/tee_core.h +++ b/include/linux/tee_core.h @@ -28,6 +28,7 @@ #define TEE_SHM_USER_MAPPED BIT(1) /* Memory mapped in user space */ #define TEE_SHM_POOL BIT(2) /* Memory allocated from pool */ #define TEE_SHM_PRIV BIT(3) /* Memory private to TEE driver */ +#define TEE_SHM_DMA_BUF BIT(4) /* Memory with dma-buf handle */ #define TEE_DEVICE_FLAG_REGISTERED 0x1 #define TEE_MAX_DEV_NAME_LEN 32 diff --git a/include/linux/tee_drv.h b/include/linux/tee_drv.h index a54c203000ed..824f1251de60 100644 --- a/include/linux/tee_drv.h +++ b/include/linux/tee_drv.h @@ -116,6 +116,16 @@ struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size); struct tee_shm *tee_shm_register_kernel_buf(struct tee_context *ctx, void *addr, size_t length); +/** + * tee_shm_register_fd() - Register shared memory from file descriptor + * + * @ctx: Context that allocates the shared memory + * @fd: Shared memory file descriptor reference + * + * @returns a pointer to 'struct tee_shm' on success, and ERR_PTR on failure + */ +struct tee_shm *tee_shm_register_fd(struct tee_context *ctx, int fd); + /** * tee_shm_free() - Free shared memory * @shm: Handle to shared memory to free diff --git a/include/uapi/linux/tee.h b/include/uapi/linux/tee.h index d0430bee8292..1f9a4ac2b211 100644 --- a/include/uapi/linux/tee.h +++ b/include/uapi/linux/tee.h @@ -118,6 +118,35 @@ struct tee_ioctl_shm_alloc_data { #define TEE_IOC_SHM_ALLOC _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 1, \ struct tee_ioctl_shm_alloc_data) +/** + * struct tee_ioctl_shm_register_fd_data - Shared memory registering argument + * @fd: [in] File descriptor identifying the shared memory + * @size: [out] Size of shared memory to allocate + * @flags: [in] Flags to/from allocation. + * @id: [out] Identifier of the shared memory + * + * The flags field should currently be zero as input. Updated by the call + * with actual flags as defined by TEE_IOCTL_SHM_* above. + * This structure is used as argument for TEE_IOC_SHM_REGISTER_FD below. + */ +struct tee_ioctl_shm_register_fd_data { + __s64 fd; + __u64 size; + __u32 flags; + __s32 id; +}; + +/** + * TEE_IOC_SHM_REGISTER_FD - register a shared memory from a file descriptor + * + * Returns a file descriptor on success or < 0 on failure + * + * The returned file descriptor refers to the shared memory object in kernel + * land. The shared memory is freed when the descriptor is closed. 
+ */ +#define TEE_IOC_SHM_REGISTER_FD _IOWR(TEE_IOC_MAGIC, TEE_IOC_BASE + 8, \ + struct tee_ioctl_shm_register_fd_data) + /** * struct tee_ioctl_buf_data - Variable sized buffer * @buf_ptr: [in] A __user pointer to a buffer From patchwork Wed Mar 5 13:04:13 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 14002618 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B6C96C19F32 for ; Wed, 5 Mar 2025 13:07:02 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 372B510E779; Wed, 5 Mar 2025 13:07:02 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=linaro.org header.i=@linaro.org header.b="EypjspN7"; dkim-atps=neutral Received: from mail-ed1-f43.google.com (mail-ed1-f43.google.com [209.85.208.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id EC82D10E768 for ; Wed, 5 Mar 2025 13:07:00 +0000 (UTC) Received: by mail-ed1-f43.google.com with SMTP id 4fb4d7f45d1cf-5e4ce6e3b8cso1311683a12.1 for ; Wed, 05 Mar 2025 05:07:00 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1741180019; x=1741784819; darn=lists.freedesktop.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=+6L88VawgT5clB6S0hOB/yQKL6Af309OnWxwlD6cmBU=; b=EypjspN7ahomBnrQhR9/QYpDp4Bkve/P8de0w0XW2CeJPulXJweWSQkjh87+hR37jI WaDl7a6WvNmIJsJdAhyrtXuGXmoanKwUlnLV7rCi6Fgi7111My+2Hq4MFVm0XWOBOK8D a0h+EUxu8FfzGNMxLGfYcRK0nimQ7JbZrXEFpJpmP58OoAWPMTtUvutTq26SpafqSF0+ yhbYyDs6ef2cwNc4n58hqLb45HsEq+3Cl8/ROfVqtDthqHVcmtT5/Sm9D7nLF+/ISaFE IkWyfzwtm1fJz23f+RGwsX+yea57N3njnT75NPANU/zRvWWIlDt1TKrTohUnzKi1ot8y 4ZHA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1741180019; x=1741784819; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=+6L88VawgT5clB6S0hOB/yQKL6Af309OnWxwlD6cmBU=; b=XxsnnROr8n22V5LEOA9pNDLKFcouavqNf1MD7IhtT7ojLoHKU8cpH0PdrS5XHX0LLz rupb4zJJimeEmZ0A0v+iF/GxR2rRjcVL6Fp+xiU998V1OeoDTxMZ1v9xqI3ifXOYQmgI KyABcac/Sn2jum3Z/hj4aD2Qr/Yz7EZX5g6u89YRSQWWif0d4Ma4qsloSWu4Kfkbk1I4 iq25FvKIyjVHbwSj08A/JRn/+y3UBnPHeCyZbgYUOG3fAVTTdWte/H1xg4htmhlBs1An 1J3NFG0iYpFyOcMUP0MBkolfq2YW1znGo3Gd3ntRoQEm7HNSflt1DWQ2AzZJJMeJg76e TF9w== X-Forwarded-Encrypted: i=1; AJvYcCXCP9lg3yiibvp72Vx+yRgxexvTggUCfZqPgutnHj/UOU3w0UfyktKvqRfPQk5fHZ5V8oIyu46EN4Q=@lists.freedesktop.org X-Gm-Message-State: AOJu0YxeuaiYH/vWnLkb4v8NCLpMpse7Bc848K+t7f9CpHMOh0Zsv+6V pvMEQS+fVz4u27hVZrVEM406xdiSJ4JP86yK3FveXY33Ul0kft47FoKVAJwDu38= X-Gm-Gg: ASbGncs8ESMaUr9677ZraVluPN6diHoqamQL1ZEA4x8A2UovVic3d8oNgNwuH2kR+Gh rLsXNhlziRECnqK0vkGPp42ykTwAiNiqMu+r10Ey1dBnutrktHW+jGsva50fWKC12AA6vYPEZgQ 4uDV7lJ4Fvyz7VWvfJCVbsd00m2v8JOxNNqxvHH1TrXHp524LuC1dmzJpsh+7fao00+uq0InsXP HhpWUEutuBBxhORgOxMXbzfCz+V0ROKEu/wloPyLxxilXPF4lLVgPt4scAAWcNu7Fdm7cowOWYj +JfYJmsJuYynPsHAfm+dF1crrjDIhpDfmOz0uCrCN8WBxZ0iE083YRsRshZ2HlkSw3gQAsCDpu7 H3wK2l7RLSPtYqxHpKw3LsQ== X-Google-Smtp-Source: 
AGHT+IEAmgkcE8RyjO1pJJHrKqdAmUGw9vFkZJI7+m0/4SoKqcFz7yQm/3SS6AjKMMHXW07x446SxQ== X-Received: by 2002:a05:6402:4309:b0:5e4:d52b:78a2 with SMTP id 4fb4d7f45d1cf-5e584f51077mr7080244a12.15.1741180019316; Wed, 05 Mar 2025 05:06:59 -0800 (PST) Received: from rayden.urgonet (h-98-128-140-123.A175.priv.bahnhof.se. [98.128.140.123]) by smtp.gmail.com with ESMTPSA id 4fb4d7f45d1cf-5e5bcd1595bsm65714a12.42.2025.03.05.05.06.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 05 Mar 2025 05:06:57 -0800 (PST) From: Jens Wiklander To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org Cc: Olivier Masse , Thierry Reding , Yong Wu , Sumit Semwal , Benjamin Gaignard , Brian Starkey , John Stultz , "T . J . Mercier" , =?utf-8?q?Christian_K=C3=B6nig?= , Sumit Garg , Matthias Brugger , AngeloGioacchino Del Regno , azarrabi@qti.qualcomm.com, Simona Vetter , Daniel Stone , Jens Wiklander Subject: [PATCH v6 07/10] tee: add tee_shm_alloc_cma_phys_mem() Date: Wed, 5 Mar 2025 14:04:13 +0100 Message-ID: <20250305130634.1850178-8-jens.wiklander@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org> References: <20250305130634.1850178-1-jens.wiklander@linaro.org> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Add tee_shm_alloc_cma_phys_mem() to allocate a physical memory using from the default CMA pool. The memory is represented by a tee_shm object using the new flag TEE_SHM_CMA_BUF to identify it as physical memory from CMA. 
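
As a hedged usage sketch from a backend driver's point of view (the helper name and size are made up): the returned tee_shm describes the physically contiguous range through its paddr and size fields, and tee_shm_put() later releases the pages back to CMA through the new TEE_SHM_CMA_BUF path in tee_shm_release().

#include <linux/err.h>
#include <linux/sizes.h>
#include <linux/tee_core.h>

/* Hypothetical helper reserving 4 MiB of contiguous memory for lending */
static struct tee_shm *example_alloc_cma_range(struct tee_context *ctx)
{
	size_t page_count = SZ_4M / PAGE_SIZE;
	struct tee_shm *shm;

	/* align is the allocation order handed to cma_alloc(); 0 = page aligned */
	shm = tee_shm_alloc_cma_phys_mem(ctx, page_count, 0);
	if (IS_ERR(shm))
		return shm;

	/* shm->paddr and shm->size now describe the contiguous range */
	return shm;
}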
Signed-off-by: Jens Wiklander --- drivers/tee/tee_shm.c | 55 ++++++++++++++++++++++++++++++++++++++-- include/linux/tee_core.h | 4 +++ 2 files changed, 57 insertions(+), 2 deletions(-) diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c index 8b79918468b5..8d8341f8ebd7 100644 --- a/drivers/tee/tee_shm.c +++ b/drivers/tee/tee_shm.c @@ -3,8 +3,11 @@ * Copyright (c) 2015-2017, 2019-2021 Linaro Limited */ #include +#include #include #include +#include +#include #include #include #include @@ -13,7 +16,6 @@ #include #include #include -#include #include "tee_private.h" /* extra references appended to shm object for registered shared memory */ @@ -59,7 +61,14 @@ static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm) struct tee_shm *parent_shm = NULL; void *p = shm; - if (shm->flags & TEE_SHM_DMA_BUF) { + if (shm->flags & TEE_SHM_CMA_BUF) { +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_CMA) + struct page *page = phys_to_page(shm->paddr); + struct cma *cma = dev_get_cma_area(&shm->ctx->teedev->dev); + + cma_release(cma, page, shm->size / PAGE_SIZE); +#endif + } else if (shm->flags & TEE_SHM_DMA_BUF) { struct tee_shm_dmabuf_ref *ref; ref = container_of(shm, struct tee_shm_dmabuf_ref, shm); @@ -341,6 +350,48 @@ struct tee_shm *tee_shm_alloc_priv_buf(struct tee_context *ctx, size_t size) } EXPORT_SYMBOL_GPL(tee_shm_alloc_priv_buf); +struct tee_shm *tee_shm_alloc_cma_phys_mem(struct tee_context *ctx, + size_t page_count, size_t align) +{ +#if !IS_MODULE(CONFIG_TEE) && IS_ENABLED(CONFIG_CMA) + struct tee_device *teedev = ctx->teedev; + struct cma *cma = dev_get_cma_area(&teedev->dev); + struct tee_shm *shm; + struct page *page; + + if (!tee_device_get(teedev)) + return ERR_PTR(-EINVAL); + + page = cma_alloc(cma, page_count, align, true/*no_warn*/); + if (!page) + goto err_put_teedev; + + shm = kzalloc(sizeof(*shm), GFP_KERNEL); + if (!shm) + goto err_cma_crelease; + + refcount_set(&shm->refcount, 1); + shm->ctx = ctx; + shm->paddr = page_to_phys(page); + shm->size = page_count * PAGE_SIZE; + shm->flags = TEE_SHM_CMA_BUF; + + teedev_ctx_get(ctx); + + return shm; + +err_cma_crelease: + cma_release(cma, page, page_count); +err_put_teedev: + tee_device_put(teedev); + + return ERR_PTR(-ENOMEM); +#else + return ERR_PTR(-EINVAL); +#endif +} +EXPORT_SYMBOL_GPL(tee_shm_alloc_cma_phys_mem); + int tee_dyn_shm_alloc_helper(struct tee_shm *shm, size_t size, size_t align, int (*shm_register)(struct tee_context *ctx, struct tee_shm *shm, diff --git a/include/linux/tee_core.h b/include/linux/tee_core.h index 6bd833b6d0e1..b6727d9a3556 100644 --- a/include/linux/tee_core.h +++ b/include/linux/tee_core.h @@ -29,6 +29,7 @@ #define TEE_SHM_POOL BIT(2) /* Memory allocated from pool */ #define TEE_SHM_PRIV BIT(3) /* Memory private to TEE driver */ #define TEE_SHM_DMA_BUF BIT(4) /* Memory with dma-buf handle */ +#define TEE_SHM_CMA_BUF BIT(5) /* CMA allocated memory */ #define TEE_DEVICE_FLAG_REGISTERED 0x1 #define TEE_MAX_DEV_NAME_LEN 32 @@ -307,6 +308,9 @@ void *tee_get_drvdata(struct tee_device *teedev); */ struct tee_shm *tee_shm_alloc_priv_buf(struct tee_context *ctx, size_t size); +struct tee_shm *tee_shm_alloc_cma_phys_mem(struct tee_context *ctx, + size_t page_count, size_t align); + int tee_dyn_shm_alloc_helper(struct tee_shm *shm, size_t size, size_t align, int (*shm_register)(struct tee_context *ctx, struct tee_shm *shm, From patchwork Wed Mar 5 13:04:14 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens 
Wiklander X-Patchwork-Id: 14002619 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 80215C28B22 for ; Wed, 5 Mar 2025 13:07:04 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 043DF10E1F1; Wed, 5 Mar 2025 13:07:04 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=linaro.org header.i=@linaro.org header.b="KhdAmGR4"; dkim-atps=neutral Received: from mail-ed1-f43.google.com (mail-ed1-f43.google.com [209.85.208.43]) by gabe.freedesktop.org (Postfix) with ESMTPS id DC43110E77A for ; Wed, 5 Mar 2025 13:07:02 +0000 (UTC) Received: by mail-ed1-f43.google.com with SMTP id 4fb4d7f45d1cf-5e50de0b5easo5933609a12.3 for ; Wed, 05 Mar 2025 05:07:02 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1741180021; x=1741784821; darn=lists.freedesktop.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=GV+1CN9mTm6W0e8GwrPdwh4pOa9jooWXILMnvo8a19M=; b=KhdAmGR4dwiLgVxn+wHtUlmfRf5fLeep5kW6+dOnl7u4h2D0NgYn2cHvCGwsaHXopD nLRlMYq2jg+NdM/8HaCZj35xJfN+bwpiA2e6rsfBQWj10kpOOowVNo+/O66IloK+TPBZ qJKM4TsZ+tMIhGN1emGHsQtkbeA9OoyU7fDmAUlkTcSi+ccRh18lMkB2pSL9MJbNgJ38 L+Ivv3LL3YUxp2H0fSxHkxhbNxLqKxjjiNAsppGufmfYFEuZAs4HzgYpyytTeXSmET6V fJ2Wx+HEEtPnzChW4zcTOu+zw/QDBsYHW28edzuoUtLjZiHf748TrUl60U/PphNCuRZy KYYg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1741180021; x=1741784821; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=GV+1CN9mTm6W0e8GwrPdwh4pOa9jooWXILMnvo8a19M=; b=bM3tG9iLoIeqCsoFZhjfNk4qBea5cGiKFnNSE2OpzY7QyWlkpxvIdIsvZj2ULIQVih Quw2Igm8o93L0fUQaCwjE3p6d2/7CoDDMvNP0knIcD9pJWoAvVA0OhlwAeGWwXjdN7Lp vl8KpafwV6hNWM4wTW0Tu+aug/6Df2/NYia1lcpRpguh418WKXe4Vs+bDd6iM+xiD/1+ ZiUuNxp3g3tqVdMaz1whpLWPZjPBVv7Fr/tmyfyTocG7Fre0shn+/Xv7/Tre4Xg3KCFn cVaxMlj/QQk/dzLLp+pNCk7n8DgIl/SrkGPb1xWKKLoDEheDHrTfmnlzVKTFcWL332gn JrmQ== X-Forwarded-Encrypted: i=1; AJvYcCVPj3ylmQeDHZ6iPZYKgOgxHLylUU3CgDFinklbL2wUOIshftpQrs+nJVcDb3dwH1Vm5KP13tevKyM=@lists.freedesktop.org X-Gm-Message-State: AOJu0YxNOzFsLRyk+TC0RwPpqeUP6LO6KdC3gS+mZnU4mC+RIbs+h/el 1dAGUzZrJJMFzL+6G6nHL+2B9QHnQrDZ0J0ZWUJMjH18OBH1UldYOss4GGs8C18= X-Gm-Gg: ASbGnct6uVzHHMuLmawwWfU1R0IHEivZ9uUMJeus0Ls0eqboZnjUrFAcdRoMgUr805J oj8IX/8SdtsBSWixQEOcWjJPt7TXvzHz6vYmFDxwFQGaKLmZuINr0LI5NDYxBxrih4JARblxSsp lQISOOxotG0FCJTDWCpuOTsyeHlhl2X+bxJB44UwHYOAYPHck+pTBUZYiXHMDDg33Hoc8I4iHXk xTqJYLrGHtpjTXF6Sop/NviRuUWMqS9hhiRHJXYhjpfgyV2IcTQPThexT4G/XvZFrfWZstCoqb5 SJTGS7KT46+4JE3fzWApRYX1moEQEMUxxJLT1y5tvyi1bXOOvc4iWE438n5uqiMvpO8p2UjSfTm jc0OtF5kLhrXMCCNb30HQEA== X-Google-Smtp-Source: AGHT+IFZCxWXYzGPaJ+Fpyr5pD44xJg3mO/vEGV3TFHnOgghMwA++b6wko/XdNnqusONUByGqiK4lg== X-Received: by 2002:a05:6402:4302:b0:5de:dc08:9cc5 with SMTP id 4fb4d7f45d1cf-5e59f35eab4mr2661989a12.7.1741180021272; Wed, 05 Mar 2025 05:07:01 -0800 (PST) Received: from rayden.urgonet (h-98-128-140-123.A175.priv.bahnhof.se. 
[98.128.140.123]) by smtp.gmail.com with ESMTPSA id 4fb4d7f45d1cf-5e5bcd1595bsm65714a12.42.2025.03.05.05.06.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 05 Mar 2025 05:07:00 -0800 (PST) From: Jens Wiklander To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org Cc: Olivier Masse , Thierry Reding , Yong Wu , Sumit Semwal , Benjamin Gaignard , Brian Starkey , John Stultz , "T . J . Mercier" , =?utf-8?q?Christian_K=C3=B6nig?= , Sumit Garg , Matthias Brugger , AngeloGioacchino Del Regno , azarrabi@qti.qualcomm.com, Simona Vetter , Daniel Stone , Jens Wiklander Subject: [PATCH v6 08/10] optee: support restricted memory allocation Date: Wed, 5 Mar 2025 14:04:14 +0100 Message-ID: <20250305130634.1850178-9-jens.wiklander@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org> References: <20250305130634.1850178-1-jens.wiklander@linaro.org> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Add support in the OP-TEE backend driver for restricted memory allocation. The support is limited to only the SMC ABI and for secure video buffers. OP-TEE is probed for the range of restricted physical memory and a memory pool allocator is initialized if OP-TEE have support for such memory. Signed-off-by: Jens Wiklander --- drivers/tee/optee/core.c | 1 + drivers/tee/optee/smc_abi.c | 44 +++++++++++++++++++++++++++++++++++-- 2 files changed, 43 insertions(+), 2 deletions(-) diff --git a/drivers/tee/optee/core.c b/drivers/tee/optee/core.c index c75fddc83576..c7fd8040480e 100644 --- a/drivers/tee/optee/core.c +++ b/drivers/tee/optee/core.c @@ -181,6 +181,7 @@ void optee_remove_common(struct optee *optee) tee_device_unregister(optee->supp_teedev); tee_device_unregister(optee->teedev); + tee_device_unregister_all_dma_heaps(optee->teedev); tee_shm_pool_free(optee->pool); optee_supp_uninit(&optee->supp); mutex_destroy(&optee->call_queue.mutex); diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c index cfdae266548b..a14ff0b7d3b3 100644 --- a/drivers/tee/optee/smc_abi.c +++ b/drivers/tee/optee/smc_abi.c @@ -1620,6 +1620,41 @@ static inline int optee_load_fw(struct platform_device *pdev, } #endif +static int optee_sdp_pool_init(struct optee *optee) +{ + enum tee_dma_heap_id heap_id = TEE_DMA_HEAP_SECURE_VIDEO_PLAY; + struct tee_rstmem_pool *pool; + int rc; + + if (optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_SDP) { + union { + struct arm_smccc_res smccc; + struct optee_smc_get_sdp_config_result result; + } res; + + optee->smc.invoke_fn(OPTEE_SMC_GET_SDP_CONFIG, 0, 0, 0, 0, 0, 0, + 0, &res.smccc); + if (res.result.status != OPTEE_SMC_RETURN_OK) { + pr_err("Secure Data Path service not available\n"); + return 0; + } + + pool = tee_rstmem_static_pool_alloc(res.result.start, + res.result.size); + if (IS_ERR(pool)) + return PTR_ERR(pool); + + rc = tee_device_register_dma_heap(optee->teedev, heap_id, pool); + if (rc) + goto err; + } + + return 0; +err: + pool->ops->destroy_pool(pool); + return rc; +} + static int optee_probe(struct platform_device *pdev) { optee_invoke_fn *invoke_fn; @@ -1715,7 +1750,7 @@ static int optee_probe(struct 
platform_device *pdev) optee = kzalloc(sizeof(*optee), GFP_KERNEL); if (!optee) { rc = -ENOMEM; - goto err_free_pool; + goto err_free_shm_pool; } optee->ops = &optee_ops; @@ -1788,6 +1823,10 @@ static int optee_probe(struct platform_device *pdev) pr_info("Asynchronous notifications enabled\n"); } + rc = optee_sdp_pool_init(optee); + if (rc) + goto err_notif_uninit; + /* * Ensure that there are no pre-existing shm objects before enabling * the shm cache so that there's no chance of receiving an invalid @@ -1823,6 +1862,7 @@ static int optee_probe(struct platform_device *pdev) optee_disable_shm_cache(optee); optee_smc_notif_uninit_irq(optee); optee_unregister_devices(); + tee_device_unregister_all_dma_heaps(optee->teedev); err_notif_uninit: optee_notif_uninit(optee); err_close_ctx: @@ -1839,7 +1879,7 @@ static int optee_probe(struct platform_device *pdev) tee_device_unregister(optee->teedev); err_free_optee: kfree(optee); -err_free_pool: +err_free_shm_pool: tee_shm_pool_free(pool); if (memremaped_shm) memunmap(memremaped_shm); From patchwork Wed Mar 5 13:04:15 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 14002620 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 4C0B3C19F32 for ; Wed, 5 Mar 2025 13:07:07 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id C10D810E768; Wed, 5 Mar 2025 13:07:06 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=linaro.org header.i=@linaro.org header.b="bWoCuLTL"; dkim-atps=neutral Received: from mail-ed1-f42.google.com (mail-ed1-f42.google.com [209.85.208.42]) by gabe.freedesktop.org (Postfix) with ESMTPS id 0D2CE10E768 for ; Wed, 5 Mar 2025 13:07:06 +0000 (UTC) Received: by mail-ed1-f42.google.com with SMTP id 4fb4d7f45d1cf-5e4c0c12bccso11915675a12.1 for ; Wed, 05 Mar 2025 05:07:05 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1741180024; x=1741784824; darn=lists.freedesktop.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=OqKSZBP+OJWF5gJmf7xM2I9adV5PGtGazVdFpn+CQSw=; b=bWoCuLTL5kZTCC0C48QA+XYfW7n154BnWCr5RG88A0OCB471AqnbcEwKFdQeE57fW4 H+LtJ1XzajGq1V1sHIQ2Qd+8XOKr0eW7n++orGbZEro7gRS3VvD6VL+YQ5fviajWzqb1 gA2gmKmAKcFpCg269JYgaf4GahdN3/2G2YS9I93XJZniNInt91z9C5hjzHX6VpHMp6id S/yDKVqe+Ou1tGccyVSil6QD83dMB845TNLfjVAN1C6LiHILgUgBBrQM//11DAJOPNm0 sYj1h8vDtwqxLUDaHJQpZbMgoH4kfxd3BFRPSly5bFSHwY/DyPP7010Dtd6bx4Y2KY1w UHPg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1741180024; x=1741784824; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=OqKSZBP+OJWF5gJmf7xM2I9adV5PGtGazVdFpn+CQSw=; b=f0Ecb/PDji5+CAtFHce1aDIUXGBulLADRXB66vAS3Otk9sxEWmOl+fzXAsNdIrlG4S hKCdUfMyXSAmbDkn+pAnasEgXce+hXS7GTur+f8XZkJctbmaZuOhnNGdDGS/Lf4HfC// i/klRFn6bUpyLhw17aK7i1p2TP7xP3UqpZzepXGEPUo2kUsMuxXfhWh55JbM9CcEXcaS 
eTJoO7TkdQZ7Lm1Shr9Rr///iZUGdltNbPo26xVpBeg1LLjFV5TPpCPU5upFV2tZftW3 VJNXhghGVe+gqIYs+J/zQxK5gzhlJmfrtzXFcRBY27JC6ZByyfyAIzws27dSk+33aWIj wYfQ== X-Forwarded-Encrypted: i=1; AJvYcCVn7jVlIcBpjMrflkKLE0IqNmDArafPSqU7BGckmrpQdXPty5mvksJQ/flOsO17oiTo3NcWVSpH5Fg=@lists.freedesktop.org X-Gm-Message-State: AOJu0YzdtqRGeM98W7gRDFXHelAVwTF7ls7A/M/0B+zDV574WTAfji2P FKWsb3BN9TnElmI3ssmoD8WmGJtw+yAlUNOgosNFAwayRwKlO/UqXRcFzZXwrfA= X-Gm-Gg: ASbGncuZVftYE0/ZXU6VUC6yl7dOuPd4a13PEXvhzbVTKAEnNyyG7OFeZeJlGTpk9Lg hD2HmBiYxRSjkKU4brYVKOhrCfCHCivVNlh6N1aiTLMcXZtA3F5Hoi+DoJBt6ByP/643jLhuwKn UNfMMmrXebdCqWyCcAZpKk03gRQMh2Fr45N7B1Vul+hosC0CaVIzVRxeMe/dJ5FmiM2mLSY7NdH +lXupdgWWrY8f5QbfFw6PsNqUEv6o58ZlFBBHomu02HAF6VBhoG1KqPvc+Vr28lbTZ/GfHdBPP0 +UG2byIvIKSZBmbZBLuV8cNxLrR4LCo1VwUqUe4Ur3gEsjstLMhGi+8Lb/fUrACo10WAqqpLMkc rSova7yZOxSeBdp7QppWITg== X-Google-Smtp-Source: AGHT+IE15Fnv9zCGL44EN+x0x28iZKYTfWnu4C8Tigs1ik2W/BgqyNM9ZszaJHzYmQDtEqzGDoT2bw== X-Received: by 2002:a05:6402:2344:b0:5e5:827d:bb1c with SMTP id 4fb4d7f45d1cf-5e59f471818mr2458803a12.25.1741180024315; Wed, 05 Mar 2025 05:07:04 -0800 (PST) Received: from rayden.urgonet (h-98-128-140-123.A175.priv.bahnhof.se. [98.128.140.123]) by smtp.gmail.com with ESMTPSA id 4fb4d7f45d1cf-5e5bcd1595bsm65714a12.42.2025.03.05.05.07.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 05 Mar 2025 05:07:03 -0800 (PST) From: Jens Wiklander To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org Cc: Olivier Masse , Thierry Reding , Yong Wu , Sumit Semwal , Benjamin Gaignard , Brian Starkey , John Stultz , "T . J . Mercier" , =?utf-8?q?Christian_K=C3=B6nig?= , Sumit Garg , Matthias Brugger , AngeloGioacchino Del Regno , azarrabi@qti.qualcomm.com, Simona Vetter , Daniel Stone , Jens Wiklander Subject: [PATCH v6 09/10] optee: FF-A: dynamic restricted memory allocation Date: Wed, 5 Mar 2025 14:04:15 +0100 Message-ID: <20250305130634.1850178-10-jens.wiklander@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org> References: <20250305130634.1850178-1-jens.wiklander@linaro.org> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Add support in the OP-TEE backend driver dynamic restricted memory allocation with FF-A. The restricted memory pools for dynamically allocated restrict memory are instantiated when requested by user-space. This instantiation can fail if OP-TEE doesn't support the requested use-case of restricted memory. Restricted memory pools based on a static carveout or dynamic allocation can coexist for different use-cases. We use only dynamic allocation with FF-A. 
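
To make the contract of the new lend_rstmem()/reclaim_rstmem() ops easier to follow, here is a condensed, hypothetical sketch of the flow that rstmem.c below implements; the endpoint ID and the use-case value are placeholders, and the real pool additionally keeps a refcount and a gen_pool for sub-allocations.

#include <linux/err.h>
#include <linux/sizes.h>
#include <linux/tee_core.h>

#include "optee_private.h"

static int example_lend_and_reclaim(struct optee *optee, u32 use_case)
{
	u16 end_points[] = { 0x8001 };	/* placeholder FF-A endpoint ID */
	struct tee_shm *rstmem;
	int rc;

	/* Grab physically contiguous CMA memory to back the restricted pool */
	rstmem = tee_shm_alloc_cma_phys_mem(optee->ctx, SZ_4M / PAGE_SIZE, 0);
	if (IS_ERR(rstmem))
		return PTR_ERR(rstmem);

	/* Lend it over FF-A and tell OP-TEE which use-case it backs */
	rc = optee->ops->lend_rstmem(optee, rstmem, end_points,
				     ARRAY_SIZE(end_points), use_case);
	if (rc)
		goto out_put;

	/* ...the range now backs DMA heap allocations for this use-case... */

	rc = optee->ops->reclaim_rstmem(optee, rstmem);
out_put:
	tee_shm_put(rstmem);
	return rc;
}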
Signed-off-by: Jens Wiklander --- drivers/tee/optee/Makefile | 1 + drivers/tee/optee/ffa_abi.c | 143 ++++++++++++- drivers/tee/optee/optee_private.h | 13 +- drivers/tee/optee/rstmem.c | 329 ++++++++++++++++++++++++++++++ 4 files changed, 483 insertions(+), 3 deletions(-) create mode 100644 drivers/tee/optee/rstmem.c diff --git a/drivers/tee/optee/Makefile b/drivers/tee/optee/Makefile index a6eff388d300..498969fb8e40 100644 --- a/drivers/tee/optee/Makefile +++ b/drivers/tee/optee/Makefile @@ -4,6 +4,7 @@ optee-objs += core.o optee-objs += call.o optee-objs += notif.o optee-objs += rpc.o +optee-objs += rstmem.o optee-objs += supp.o optee-objs += device.o optee-objs += smc_abi.o diff --git a/drivers/tee/optee/ffa_abi.c b/drivers/tee/optee/ffa_abi.c index e4b08cd195f3..6a55114232ef 100644 --- a/drivers/tee/optee/ffa_abi.c +++ b/drivers/tee/optee/ffa_abi.c @@ -672,6 +672,123 @@ static int optee_ffa_do_call_with_arg(struct tee_context *ctx, return optee_ffa_yielding_call(ctx, &data, rpc_arg, system_thread); } +static int do_call_lend_rstmem(struct optee *optee, u64 cookie, u32 use_case) +{ + struct optee_shm_arg_entry *entry; + struct optee_msg_arg *msg_arg; + struct tee_shm *shm; + u_int offs; + int rc; + + msg_arg = optee_get_msg_arg(optee->ctx, 1, &entry, &shm, &offs); + if (IS_ERR(msg_arg)) + return PTR_ERR(msg_arg); + + msg_arg->cmd = OPTEE_MSG_CMD_ASSIGN_RSTMEM; + msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; + msg_arg->params[0].u.value.a = cookie; + msg_arg->params[0].u.value.b = use_case; + + rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false); + if (rc) + goto out; + if (msg_arg->ret != TEEC_SUCCESS) { + rc = -EINVAL; + goto out; + } + +out: + optee_free_msg_arg(optee->ctx, entry, offs); + return rc; +} + +static int optee_ffa_lend_rstmem(struct optee *optee, struct tee_shm *rstmem, + u16 *end_points, unsigned int ep_count, + u32 use_case) +{ + struct ffa_device *ffa_dev = optee->ffa.ffa_dev; + const struct ffa_mem_ops *mem_ops = ffa_dev->ops->mem_ops; + const struct ffa_msg_ops *msg_ops = ffa_dev->ops->msg_ops; + struct ffa_send_direct_data data; + struct ffa_mem_region_attributes *mem_attr; + struct ffa_mem_ops_args args = { + .use_txbuf = true, + .tag = use_case, + }; + struct page *page; + struct scatterlist sgl; + unsigned int n; + int rc; + + mem_attr = kcalloc(ep_count, sizeof(*mem_attr), GFP_KERNEL); + for (n = 0; n < ep_count; n++) { + mem_attr[n].receiver = end_points[n]; + mem_attr[n].attrs = FFA_MEM_RW; + } + args.attrs = mem_attr; + args.nattrs = ep_count; + + page = phys_to_page(rstmem->paddr); + sg_init_table(&sgl, 1); + sg_set_page(&sgl, page, rstmem->size, 0); + + args.sg = &sgl; + rc = mem_ops->memory_lend(&args); + kfree(mem_attr); + if (rc) + return rc; + + rc = do_call_lend_rstmem(optee, args.g_handle, use_case); + if (rc) + goto err_reclaim; + + rc = optee_shm_add_ffa_handle(optee, rstmem, args.g_handle); + if (rc) + goto err_unreg; + + rstmem->sec_world_id = args.g_handle; + + return 0; + +err_unreg: + data = (struct ffa_send_direct_data){ + .data0 = OPTEE_FFA_RELEASE_RSTMEM, + .data1 = (u32)args.g_handle, + .data2 = (u32)(args.g_handle >> 32), + }; + msg_ops->sync_send_receive(ffa_dev, &data); +err_reclaim: + mem_ops->memory_reclaim(args.g_handle, 0); + return rc; +} + +static int optee_ffa_reclaim_rstmem(struct optee *optee, struct tee_shm *rstmem) +{ + struct ffa_device *ffa_dev = optee->ffa.ffa_dev; + const struct ffa_msg_ops *msg_ops = ffa_dev->ops->msg_ops; + const struct ffa_mem_ops *mem_ops = ffa_dev->ops->mem_ops; + u64 
global_handle = rstmem->sec_world_id; + struct ffa_send_direct_data data = { + .data0 = OPTEE_FFA_RELEASE_RSTMEM, + .data1 = (u32)global_handle, + .data2 = (u32)(global_handle >> 32) + }; + int rc; + + optee_shm_rem_ffa_handle(optee, global_handle); + rstmem->sec_world_id = 0; + + rc = msg_ops->sync_send_receive(ffa_dev, &data); + if (rc) + pr_err("Release SHM id 0x%llx rc %d\n", global_handle, rc); + + rc = mem_ops->memory_reclaim(global_handle, 0); + if (rc) + pr_err("mem_reclaim: 0x%llx %d", global_handle, rc); + + return rc; +} + /* * 6. Driver initialization * @@ -833,6 +950,8 @@ static const struct optee_ops optee_ffa_ops = { .do_call_with_arg = optee_ffa_do_call_with_arg, .to_msg_param = optee_ffa_to_msg_param, .from_msg_param = optee_ffa_from_msg_param, + .lend_rstmem = optee_ffa_lend_rstmem, + .reclaim_rstmem = optee_ffa_reclaim_rstmem, }; static void optee_ffa_remove(struct ffa_device *ffa_dev) @@ -941,7 +1060,7 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) optee->pool, optee); if (IS_ERR(teedev)) { rc = PTR_ERR(teedev); - goto err_free_pool; + goto err_free_shm_pool; } optee->teedev = teedev; @@ -988,6 +1107,24 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) rc); } + if (IS_ENABLED(CONFIG_CMA) && !IS_MODULE(CONFIG_OPTEE) && + (sec_caps & OPTEE_FFA_SEC_CAP_RSTMEM)) { + enum tee_dma_heap_id id = TEE_DMA_HEAP_SECURE_VIDEO_PLAY; + struct tee_rstmem_pool *pool; + + pool = optee_rstmem_alloc_cma_pool(optee, id); + if (IS_ERR(pool)) { + rc = PTR_ERR(pool); + goto err_notif_uninit; + } + + rc = tee_device_register_dma_heap(optee->teedev, id, pool); + if (rc) { + pool->ops->destroy_pool(pool); + goto err_notif_uninit; + } + } + rc = optee_enumerate_devices(PTA_CMD_GET_DEVICES); if (rc) goto err_unregister_devices; @@ -1001,6 +1138,8 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) err_unregister_devices: optee_unregister_devices(); + tee_device_unregister_all_dma_heaps(optee->teedev); +err_notif_uninit: if (optee->ffa.bottom_half_value != U32_MAX) notif_ops->notify_relinquish(ffa_dev, optee->ffa.bottom_half_value); @@ -1018,7 +1157,7 @@ static int optee_ffa_probe(struct ffa_device *ffa_dev) tee_device_unregister(optee->supp_teedev); err_unreg_teedev: tee_device_unregister(optee->teedev); -err_free_pool: +err_free_shm_pool: tee_shm_pool_free(pool); err_free_optee: kfree(optee); diff --git a/drivers/tee/optee/optee_private.h b/drivers/tee/optee/optee_private.h index 20eda508dbac..faab31ad7c52 100644 --- a/drivers/tee/optee/optee_private.h +++ b/drivers/tee/optee/optee_private.h @@ -174,9 +174,14 @@ struct optee; * @do_call_with_arg: enters OP-TEE in secure world * @to_msg_param: converts from struct tee_param to OPTEE_MSG parameters * @from_msg_param: converts from OPTEE_MSG parameters to struct tee_param + * @lend_rstmem: lends physically contiguous memory as restricted + * memory, inaccessible by the kernel + * @reclaim_rstmem: reclaims restricted memory previously lent with + * @lend_rstmem() and makes it accessible by the + * kernel again * * These OPs are only supposed to be used internally in the OP-TEE driver - * as a way of abstracting the different methogs of entering OP-TEE in + * as a way of abstracting the different methods of entering OP-TEE in * secure world. 
*/ struct optee_ops { @@ -191,6 +196,10 @@ struct optee_ops { size_t num_params, const struct optee_msg_param *msg_params, bool update_out); + int (*lend_rstmem)(struct optee *optee, struct tee_shm *rstmem, + u16 *end_points, unsigned int ep_count, + u32 use_case); + int (*reclaim_rstmem)(struct optee *optee, struct tee_shm *rstmem); }; /** @@ -285,6 +294,8 @@ u32 optee_supp_thrd_req(struct tee_context *ctx, u32 func, size_t num_params, void optee_supp_init(struct optee_supp *supp); void optee_supp_uninit(struct optee_supp *supp); void optee_supp_release(struct optee_supp *supp); +struct tee_rstmem_pool *optee_rstmem_alloc_cma_pool(struct optee *optee, + enum tee_dma_heap_id id); int optee_supp_recv(struct tee_context *ctx, u32 *func, u32 *num_params, struct tee_param *param); diff --git a/drivers/tee/optee/rstmem.c b/drivers/tee/optee/rstmem.c new file mode 100644 index 000000000000..ea27769934d4 --- /dev/null +++ b/drivers/tee/optee/rstmem.c @@ -0,0 +1,329 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2025, Linaro Limited + */ +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include +#include +#include +#include +#include +#include +#include "optee_private.h" + +struct optee_rstmem_cma_pool { + struct tee_rstmem_pool pool; + struct gen_pool *gen_pool; + struct optee *optee; + size_t page_count; + u16 *end_points; + u_int end_point_count; + u_int align; + refcount_t refcount; + u32 use_case; + struct tee_shm *rstmem; + /* Protects when initializing and tearing down this struct */ + struct mutex mutex; +}; + +static struct optee_rstmem_cma_pool * +to_rstmem_cma_pool(struct tee_rstmem_pool *pool) +{ + return container_of(pool, struct optee_rstmem_cma_pool, pool); +} + +static int init_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{ + int rc; + + rp->rstmem = tee_shm_alloc_cma_phys_mem(rp->optee->ctx, rp->page_count, + rp->align); + if (IS_ERR(rp->rstmem)) { + rc = PTR_ERR(rp->rstmem); + goto err_null_rstmem; + } + + /* + * TODO unmap the memory range since the physical memory will + * become inaccesible after the lend_rstmem() call. + */ + rc = rp->optee->ops->lend_rstmem(rp->optee, rp->rstmem, rp->end_points, + rp->end_point_count, rp->use_case); + if (rc) + goto err_put_shm; + rp->rstmem->flags |= TEE_SHM_DYNAMIC; + + rp->gen_pool = gen_pool_create(PAGE_SHIFT, -1); + if (!rp->gen_pool) { + rc = -ENOMEM; + goto err_reclaim; + } + + rc = gen_pool_add(rp->gen_pool, rp->rstmem->paddr, + rp->rstmem->size, -1); + if (rc) + goto err_free_pool; + + refcount_set(&rp->refcount, 1); + return 0; + +err_free_pool: + gen_pool_destroy(rp->gen_pool); + rp->gen_pool = NULL; +err_reclaim: + rp->optee->ops->reclaim_rstmem(rp->optee, rp->rstmem); +err_put_shm: + tee_shm_put(rp->rstmem); +err_null_rstmem: + rp->rstmem = NULL; + return rc; +} + +static int get_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{ + int rc = 0; + + if (!refcount_inc_not_zero(&rp->refcount)) { + mutex_lock(&rp->mutex); + if (rp->gen_pool) { + /* + * Another thread has already initialized the pool + * before us, or the pool was just about to be torn + * down. Either way we only need to increase the + * refcount and we're done. 
+ */ + refcount_inc(&rp->refcount); + } else { + rc = init_cma_rstmem(rp); + } + mutex_unlock(&rp->mutex); + } + + return rc; +} + +static void release_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{ + gen_pool_destroy(rp->gen_pool); + rp->gen_pool = NULL; + + rp->optee->ops->reclaim_rstmem(rp->optee, rp->rstmem); + rp->rstmem->flags &= ~TEE_SHM_DYNAMIC; + + WARN(refcount_read(&rp->rstmem->refcount) != 1, "Unexpected refcount"); + tee_shm_put(rp->rstmem); + rp->rstmem = NULL; +} + +static void put_cma_rstmem(struct optee_rstmem_cma_pool *rp) +{ + if (refcount_dec_and_test(&rp->refcount)) { + mutex_lock(&rp->mutex); + if (rp->gen_pool) + release_cma_rstmem(rp); + mutex_unlock(&rp->mutex); + } +} + +static int rstmem_pool_op_cma_alloc(struct tee_rstmem_pool *pool, + struct sg_table *sgt, size_t size, + size_t *offs) +{ + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); + size_t sz = ALIGN(size, PAGE_SIZE); + phys_addr_t pa; + int rc; + + rc = get_cma_rstmem(rp); + if (rc) + return rc; + + pa = gen_pool_alloc(rp->gen_pool, sz); + if (!pa) { + rc = -ENOMEM; + goto err_put; + } + + rc = sg_alloc_table(sgt, 1, GFP_KERNEL); + if (rc) + goto err_free; + + sg_set_page(sgt->sgl, phys_to_page(pa), size, 0); + *offs = pa - rp->rstmem->paddr; + + return 0; +err_free: + gen_pool_free(rp->gen_pool, pa, size); +err_put: + put_cma_rstmem(rp); + + return rc; +} + +static void rstmem_pool_op_cma_free(struct tee_rstmem_pool *pool, + struct sg_table *sgt) +{ + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); + struct scatterlist *sg; + int i; + + for_each_sgtable_sg(sgt, sg, i) + gen_pool_free(rp->gen_pool, sg_phys(sg), sg->length); + sg_free_table(sgt); + put_cma_rstmem(rp); +} + +static int rstmem_pool_op_cma_update_shm(struct tee_rstmem_pool *pool, + struct sg_table *sgt, size_t offs, + struct tee_shm *shm, + struct tee_shm **parent_shm) +{ + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); + + *parent_shm = rp->rstmem; + + return 0; +} + +static void pool_op_cma_destroy_pool(struct tee_rstmem_pool *pool) +{ + struct optee_rstmem_cma_pool *rp = to_rstmem_cma_pool(pool); + + mutex_destroy(&rp->mutex); + kfree(rp); +} + +static struct tee_rstmem_pool_ops rstmem_pool_ops_cma = { + .alloc = rstmem_pool_op_cma_alloc, + .free = rstmem_pool_op_cma_free, + .update_shm = rstmem_pool_op_cma_update_shm, + .destroy_pool = pool_op_cma_destroy_pool, +}; + +static int get_rstmem_config(struct optee *optee, u32 use_case, + size_t *min_size, u_int *min_align, + u16 *end_points, u_int *ep_count) +{ + struct tee_param params[2] = { + [0] = { + .attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT, + .u.value.a = use_case, + }, + [1] = { + .attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT, + }, + }; + struct optee_shm_arg_entry *entry; + struct tee_shm *shm_param = NULL; + struct optee_msg_arg *msg_arg; + struct tee_shm *shm; + u_int offs; + int rc; + + if (end_points && *ep_count) { + params[1].u.memref.size = *ep_count * sizeof(*end_points); + shm_param = tee_shm_alloc_priv_buf(optee->ctx, + params[1].u.memref.size); + if (IS_ERR(shm_param)) + return PTR_ERR(shm_param); + params[1].u.memref.shm = shm_param; + } + + msg_arg = optee_get_msg_arg(optee->ctx, ARRAY_SIZE(params), &entry, + &shm, &offs); + if (IS_ERR(msg_arg)) { + rc = PTR_ERR(msg_arg); + goto out_free_shm; + } + msg_arg->cmd = OPTEE_MSG_CMD_GET_RSTMEM_CONFIG; + + rc = optee->ops->to_msg_param(optee, msg_arg->params, + ARRAY_SIZE(params), params, + false /*!update_out*/); + if (rc) + goto out_free_msg; + + rc = 
optee->ops->do_call_with_arg(optee->ctx, shm, offs, false); + if (rc) + goto out_free_msg; + if (msg_arg->ret && msg_arg->ret != TEEC_ERROR_SHORT_BUFFER) { + rc = -EINVAL; + goto out_free_msg; + } + + rc = optee->ops->from_msg_param(optee, params, ARRAY_SIZE(params), + msg_arg->params, true /*update_out*/); + if (rc) + goto out_free_msg; + + if (!msg_arg->ret && end_points && + *ep_count < params[1].u.memref.size / sizeof(u16)) { + rc = -EINVAL; + goto out_free_msg; + } + + *min_size = params[0].u.value.a; + *min_align = params[0].u.value.b; + *ep_count = params[1].u.memref.size / sizeof(u16); + + if (msg_arg->ret == TEEC_ERROR_SHORT_BUFFER) { + rc = -ENOSPC; + goto out_free_msg; + } + + if (end_points) + memcpy(end_points, tee_shm_get_va(shm_param, 0), + params[1].u.memref.size); + +out_free_msg: + optee_free_msg_arg(optee->ctx, entry, offs); +out_free_shm: + if (shm_param) + tee_shm_free(shm_param); + return rc; +} + +struct tee_rstmem_pool *optee_rstmem_alloc_cma_pool(struct optee *optee, + enum tee_dma_heap_id id) +{ + struct optee_rstmem_cma_pool *rp; + u32 use_case = id; + size_t min_size; + int rc; + + rp = kzalloc(sizeof(*rp), GFP_KERNEL); + if (!rp) + return ERR_PTR(-ENOMEM); + rp->use_case = use_case; + + rc = get_rstmem_config(optee, use_case, &min_size, &rp->align, NULL, + &rp->end_point_count); + if (rc) { + if (rc != -ENOSPC) + goto err; + rp->end_points = kcalloc(rp->end_point_count, + sizeof(*rp->end_points), GFP_KERNEL); + if (!rp->end_points) { + rc = -ENOMEM; + goto err; + } + rc = get_rstmem_config(optee, use_case, &min_size, &rp->align, + rp->end_points, &rp->end_point_count); + if (rc) + goto err_kfree_eps; + } + + rp->pool.ops = &rstmem_pool_ops_cma; + rp->optee = optee; + rp->page_count = min_size / PAGE_SIZE; + mutex_init(&rp->mutex); + + return &rp->pool; + +err_kfree_eps: + kfree(rp->end_points); +err: + kfree(rp); + return ERR_PTR(rc); +} From patchwork Wed Mar 5 13:04:16 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jens Wiklander X-Patchwork-Id: 14002621 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id ED4B9C19F32 for ; Wed, 5 Mar 2025 13:07:10 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 7092E10E771; Wed, 5 Mar 2025 13:07:10 +0000 (UTC) Authentication-Results: gabe.freedesktop.org; dkim=pass (2048-bit key; unprotected) header.d=linaro.org header.i=@linaro.org header.b="t06Zn+NE"; dkim-atps=neutral Received: from mail-ed1-f51.google.com (mail-ed1-f51.google.com [209.85.208.51]) by gabe.freedesktop.org (Postfix) with ESMTPS id 6B9CF10E771 for ; Wed, 5 Mar 2025 13:07:09 +0000 (UTC) Received: by mail-ed1-f51.google.com with SMTP id 4fb4d7f45d1cf-5e5a346aeffso1092240a12.1 for ; Wed, 05 Mar 2025 05:07:09 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1741180028; x=1741784828; darn=lists.freedesktop.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=SZWcwQgxJYl5xqresVhiMdi9+ZPx9Y4Kk8m/1lcEjeM=; b=t06Zn+NEf90I+uMTsAwKu6cS5JQ5DwnEw9JIXYwPjI6op25z4KDSeUyUIJuks+AuFn 
From: Jens Wiklander
To: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, op-tee@lists.trustedfirmware.org, linux-arm-kernel@lists.infradead.org
Cc: Olivier Masse, Thierry Reding, Yong Wu, Sumit Semwal, Benjamin Gaignard, Brian Starkey, John Stultz, "T . J . Mercier", Christian König, Sumit Garg, Matthias Brugger, AngeloGioacchino Del Regno, azarrabi@qti.qualcomm.com, Simona Vetter, Daniel Stone, Jens Wiklander
Subject: [PATCH v6 10/10] optee: smc abi: dynamic restricted memory allocation
Date: Wed, 5 Mar 2025 14:04:16 +0100
Message-ID: <20250305130634.1850178-11-jens.wiklander@linaro.org>
In-Reply-To: <20250305130634.1850178-1-jens.wiklander@linaro.org>
References: <20250305130634.1850178-1-jens.wiklander@linaro.org>

Add support in the OP-TEE backend driver for dynamic restricted memory allocation using the SMC ABI.
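With this last patch in place, the dynamically allocated restricted memory heap is available on both FF-A and SMC based systems, and user-space reaches it through the regular dma-heap character device. The sketch below shows that path; the heap node name is only an assumed placeholder, since the actual name registered for TEE_DMA_HEAP_SECURE_VIDEO_PLAY is defined elsewhere in the series.

/*
 * Minimal user-space sketch, not part of the patch: allocate one dma-buf
 * from a TEE-backed restricted DMA heap via the standard dma-heap UAPI.
 * The device node name below is an assumption for illustration only.
 */
#include <fcntl.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

static int alloc_restricted_dmabuf(size_t len)
{
	struct dma_heap_allocation_data alloc = {
		.len = len,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	int heap_fd;

	/* Placeholder node name; the real one comes from the TEE driver. */
	heap_fd = open("/dev/dma_heap/restricted-secure-video-play",
		       O_RDONLY | O_CLOEXEC);
	if (heap_fd < 0)
		return -1;

	/*
	 * The first allocation makes the backend lend memory to OP-TEE;
	 * it fails if the running OP-TEE doesn't support the use-case.
	 */
	if (ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc) < 0) {
		close(heap_fd);
		return -1;
	}
	close(heap_fd);

	return alloc.fd;	/* dma-buf backed by restricted memory */
}

The returned file descriptor can be attached and mapped by a consuming device driver like any other dma-buf, while CPU access to the underlying pages remains restricted by secure world; freeing the last such buffer lets the backend reclaim and release the lent memory.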
Signed-off-by: Jens Wiklander --- drivers/tee/optee/smc_abi.c | 96 +++++++++++++++++++++++++++++++------ 1 file changed, 81 insertions(+), 15 deletions(-) diff --git a/drivers/tee/optee/smc_abi.c b/drivers/tee/optee/smc_abi.c index a14ff0b7d3b3..aa574ee6e277 100644 --- a/drivers/tee/optee/smc_abi.c +++ b/drivers/tee/optee/smc_abi.c @@ -1001,6 +1001,69 @@ static int optee_smc_do_call_with_arg(struct tee_context *ctx, return rc; } +static int optee_smc_lend_rstmem(struct optee *optee, struct tee_shm *rstmem, + u16 *end_points, unsigned int ep_count, + u32 use_case) +{ + struct optee_shm_arg_entry *entry; + struct optee_msg_arg *msg_arg; + struct tee_shm *shm; + u_int offs; + int rc; + + msg_arg = optee_get_msg_arg(optee->ctx, 2, &entry, &shm, &offs); + if (IS_ERR(msg_arg)) + return PTR_ERR(msg_arg); + + msg_arg->cmd = OPTEE_MSG_CMD_LEND_RSTMEM; + msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT; + msg_arg->params[0].u.value.a = use_case; + msg_arg->params[1].attr = OPTEE_MSG_ATTR_TYPE_TMEM_INPUT; + msg_arg->params[1].u.tmem.buf_ptr = rstmem->paddr; + msg_arg->params[1].u.tmem.size = rstmem->size; + msg_arg->params[1].u.tmem.shm_ref = (u_long)rstmem; + + rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false); + if (rc) + goto out; + if (msg_arg->ret != TEEC_SUCCESS) { + rc = -EINVAL; + goto out; + } + rstmem->sec_world_id = (u_long)rstmem; + +out: + optee_free_msg_arg(optee->ctx, entry, offs); + return rc; +} + +static int optee_smc_reclaim_rstmem(struct optee *optee, struct tee_shm *rstmem) +{ + struct optee_shm_arg_entry *entry; + struct optee_msg_arg *msg_arg; + struct tee_shm *shm; + u_int offs; + int rc; + + msg_arg = optee_get_msg_arg(optee->ctx, 1, &entry, &shm, &offs); + if (IS_ERR(msg_arg)) + return PTR_ERR(msg_arg); + + msg_arg->cmd = OPTEE_MSG_CMD_RECLAIM_RSTMEM; + msg_arg->params[0].attr = OPTEE_MSG_ATTR_TYPE_RMEM_INPUT; + msg_arg->params[0].u.rmem.shm_ref = (u_long)rstmem; + + rc = optee->ops->do_call_with_arg(optee->ctx, shm, offs, false); + if (rc) + goto out; + if (msg_arg->ret != TEEC_SUCCESS) + rc = -EINVAL; + +out: + optee_free_msg_arg(optee->ctx, entry, offs); + return rc; +} + /* * 5. 
Asynchronous notification */ @@ -1252,6 +1315,8 @@ static const struct optee_ops optee_ops = { .do_call_with_arg = optee_smc_do_call_with_arg, .to_msg_param = optee_to_msg_param, .from_msg_param = optee_from_msg_param, + .lend_rstmem = optee_smc_lend_rstmem, + .reclaim_rstmem = optee_smc_reclaim_rstmem, }; static int enable_async_notif(optee_invoke_fn *invoke_fn) @@ -1622,11 +1687,13 @@ static inline int optee_load_fw(struct platform_device *pdev, static int optee_sdp_pool_init(struct optee *optee) { + bool sdp = optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_SDP; + bool dyn_sdp = optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_RSTMEM; enum tee_dma_heap_id heap_id = TEE_DMA_HEAP_SECURE_VIDEO_PLAY; - struct tee_rstmem_pool *pool; - int rc; + struct tee_rstmem_pool *pool = ERR_PTR(-EINVAL); + int rc = -EINVAL; - if (optee->smc.sec_caps & OPTEE_SMC_SEC_CAP_SDP) { + if (sdp) { union { struct arm_smccc_res smccc; struct optee_smc_get_sdp_config_result result; @@ -1634,25 +1701,24 @@ static int optee_sdp_pool_init(struct optee *optee) optee->smc.invoke_fn(OPTEE_SMC_GET_SDP_CONFIG, 0, 0, 0, 0, 0, 0, 0, &res.smccc); - if (res.result.status != OPTEE_SMC_RETURN_OK) { - pr_err("Secure Data Path service not available\n"); - return 0; - } + if (res.result.status == OPTEE_SMC_RETURN_OK) + pool = tee_rstmem_static_pool_alloc(res.result.start, + res.result.size); + } - pool = tee_rstmem_static_pool_alloc(res.result.start, - res.result.size); - if (IS_ERR(pool)) - return PTR_ERR(pool); + if (dyn_sdp && IS_ERR(pool)) + pool = optee_rstmem_alloc_cma_pool(optee, heap_id); + if (!IS_ERR(pool)) { rc = tee_device_register_dma_heap(optee->teedev, heap_id, pool); if (rc) - goto err; + pool->ops->destroy_pool(pool); } + if (rc && (sdp || dyn_sdp)) + pr_err("Secure Data Path service not available\n"); + return 0; -err: - pool->ops->destroy_pool(pool); - return rc; } static int optee_probe(struct platform_device *pdev)