From patchwork Mon Jun 10 06:53:37 2024
X-Patchwork-Submitter: Jens Wiklander
X-Patchwork-Id: 13691581
From: Jens Wiklander
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org, Jens Wiklander, Volodymyr Babchuk, Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel
Subject: [XEN PATCH v6 1/7] xen/arm: ffa: refactor ffa_handle_call()
Date: Mon, 10 Jun 2024 08:53:37 +0200
Message-Id: <20240610065343.2594943-2-jens.wiklander@linaro.org>
In-Reply-To: <20240610065343.2594943-1-jens.wiklander@linaro.org>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>

Refactors the large switch block in ffa_handle_call() to use common code
for the simple case where it's either an error code or success with no
further parameters.
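The epilogue pattern this commit message describes can be sketched in isolation. The types, fid values, and register setters below are simplified stand-ins, not the actual Xen interfaces: simple cases set an error code and fall through to one shared tail instead of duplicating the error/success handling per case.

```c
#include <stdint.h>

/* Stand-ins for the FF-A return-code convention used in the patch. */
#define FFA_RET_OK             0
#define FFA_RET_NOT_SUPPORTED -1

enum fid { FID_RXTX_UNMAP, FID_RX_RELEASE, FID_UNKNOWN };

static int32_t last_ret; /* models the result register written back */

static void set_regs_error(int32_t e) { last_ret = e; }
static void set_regs_success(void)    { last_ret = FFA_RET_OK; }

/* Before the refactor each case duplicated the error/success epilogue;
 * after it, simple cases just set `e` and break to one shared tail. */
static int handle_call(enum fid fid)
{
    int32_t e;

    switch ( fid )
    {
    case FID_RXTX_UNMAP:
        e = FFA_RET_OK;             /* pretend the unmap succeeded */
        break;
    case FID_RX_RELEASE:
        e = FFA_RET_NOT_SUPPORTED;  /* pretend this one failed */
        break;
    default:
        set_regs_error(FFA_RET_NOT_SUPPORTED);
        return 1;
    }

    /* Shared epilogue: one place turns `e` into register state. */
    if ( e )
        set_regs_error(e);
    else
        set_regs_success();
    return 1;
}
```

Cases that need extra result parameters (such as FFA_PARTITION_INFO_GET in the real code) keep returning early and are untouched by the shared tail.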
Signed-off-by: Jens Wiklander
Reviewed-by: Bertrand Marquis
---
 xen/arch/arm/tee/ffa.c | 30 ++++++++++--------------------
 1 file changed, 10 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 8665201e34a9..5209612963e1 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -273,18 +273,10 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
     case FFA_RXTX_MAP_64:
         e = ffa_handle_rxtx_map(fid, get_user_reg(regs, 1),
                                 get_user_reg(regs, 2),
                                 get_user_reg(regs, 3));
-        if ( e )
-            ffa_set_regs_error(regs, e);
-        else
-            ffa_set_regs_success(regs, 0, 0);
-        return true;
+        break;
     case FFA_RXTX_UNMAP:
         e = ffa_handle_rxtx_unmap();
-        if ( e )
-            ffa_set_regs_error(regs, e);
-        else
-            ffa_set_regs_success(regs, 0, 0);
-        return true;
+        break;
     case FFA_PARTITION_INFO_GET:
         e = ffa_handle_partition_info_get(get_user_reg(regs, 1),
                                           get_user_reg(regs, 2),
@@ -299,11 +291,7 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
         return true;
     case FFA_RX_RELEASE:
         e = ffa_handle_rx_release();
-        if ( e )
-            ffa_set_regs_error(regs, e);
-        else
-            ffa_set_regs_success(regs, 0, 0);
-        return true;
+        break;
     case FFA_MSG_SEND_DIRECT_REQ_32:
     case FFA_MSG_SEND_DIRECT_REQ_64:
         handle_msg_send_direct_req(regs, fid);
@@ -316,17 +304,19 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
         e = ffa_handle_mem_reclaim(regpair_to_uint64(get_user_reg(regs, 2),
                                                      get_user_reg(regs, 1)),
                                    get_user_reg(regs, 3));
-        if ( e )
-            ffa_set_regs_error(regs, e);
-        else
-            ffa_set_regs_success(regs, 0, 0);
-        return true;
+        break;
     default:
         gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid);
         ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
         return true;
     }
+
+    if ( e )
+        ffa_set_regs_error(regs, e);
+    else
+        ffa_set_regs_success(regs, 0, 0);
+    return true;
 }

 static int ffa_domain_init(struct domain *d)

From patchwork Mon Jun 10 06:53:38 2024
X-Patchwork-Submitter:
Jens Wiklander
X-Patchwork-Id: 13691585
From: Jens Wiklander
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org, Jens Wiklander, Volodymyr Babchuk, Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel
Subject: [XEN PATCH v6 2/7] xen/arm: ffa: use ACCESS_ONCE()
Date: Mon, 10 Jun 2024 08:53:38 +0200
Message-Id: <20240610065343.2594943-3-jens.wiklander@linaro.org>
In-Reply-To: <20240610065343.2594943-1-jens.wiklander@linaro.org>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>

Replace read_atomic() with ACCESS_ONCE() to match the intended use, that
is, to prevent the compiler from (via optimization) reading shared
memory more than once.
Signed-off-by: Jens Wiklander
Reviewed-by: Bertrand Marquis
---
 xen/arch/arm/tee/ffa_shm.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/tee/ffa_shm.c b/xen/arch/arm/tee/ffa_shm.c
index eed9ad2d2986..75a5b66aeb4c 100644
--- a/xen/arch/arm/tee/ffa_shm.c
+++ b/xen/arch/arm/tee/ffa_shm.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -171,8 +172,8 @@ static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
     for ( n = 0; n < range_count; n++ )
     {
-        page_count = read_atomic(&range[n].page_count);
-        addr = read_atomic(&range[n].address);
+        page_count = ACCESS_ONCE(range[n].page_count);
+        addr = ACCESS_ONCE(range[n].address);
         for ( m = 0; m < page_count; m++ )
         {
             if ( pg_idx >= shm->page_count )
@@ -527,13 +528,13 @@ void ffa_handle_mem_share(struct cpu_user_regs *regs)
         goto out_unlock;

     mem_access = ctx->tx + trans.mem_access_offs;
-    if ( read_atomic(&mem_access->access_perm.perm) != FFA_MEM_ACC_RW )
+    if ( ACCESS_ONCE(mem_access->access_perm.perm) != FFA_MEM_ACC_RW )
     {
         ret = FFA_RET_NOT_SUPPORTED;
         goto out_unlock;
     }

-    region_offs = read_atomic(&mem_access->region_offs);
+    region_offs = ACCESS_ONCE(mem_access->region_offs);
     if ( sizeof(*region_descr) + region_offs > frag_len )
     {
         ret = FFA_RET_NOT_SUPPORTED;
@@ -541,8 +542,8 @@ void ffa_handle_mem_share(struct cpu_user_regs *regs)
     }

     region_descr = ctx->tx + region_offs;
-    range_count = read_atomic(&region_descr->address_range_count);
-    page_count = read_atomic(&region_descr->total_page_count);
+    range_count = ACCESS_ONCE(region_descr->address_range_count);
+    page_count = ACCESS_ONCE(region_descr->total_page_count);

     if ( !page_count )
     {
@@ -557,7 +558,7 @@ void ffa_handle_mem_share(struct cpu_user_regs *regs)
         goto out_unlock;
     }
     shm->sender_id = trans.sender_id;
-    shm->ep_id = read_atomic(&mem_access->access_perm.endpoint_id);
+    shm->ep_id = ACCESS_ONCE(mem_access->access_perm.endpoint_id);

     /*
      * Check that the Composite memory region descriptor
      * fits.

From patchwork Mon Jun 10 06:53:39 2024
X-Patchwork-Submitter: Jens Wiklander
X-Patchwork-Id: 13691580
From: Jens Wiklander
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org, Jens Wiklander, Volodymyr Babchuk, Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel
Subject: [XEN PATCH v6 3/7] xen/arm: ffa: simplify ffa_handle_mem_share()
Date: Mon, 10 Jun 2024 08:53:39 +0200
Message-Id: <20240610065343.2594943-4-jens.wiklander@linaro.org>
In-Reply-To: <20240610065343.2594943-1-jens.wiklander@linaro.org>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>

Simplify ffa_handle_mem_share() by removing the start_page_idx and
last_page_idx parameters from get_shm_pages() and check that the number
of pages matches expectations at the end of get_shm_pages().
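The tightened contract can be sketched with stand-in types (not the real Xen helpers): the page walk itself now rejects both too many and too few pages, so the caller no longer needs a `last_page_idx` out-parameter and a separate comparison.

```c
#include <stdint.h>

#define FFA_RET_OK                  0
#define FFA_RET_INVALID_PARAMETERS -2

struct range { uint32_t page_count; };

/* After the patch the helper checks internally that the address ranges
 * add up to exactly the expected number of pages: overrun is caught
 * inside the loop, shortfall by the final comparison. */
static int get_pages(const struct range *ranges, uint32_t range_count,
                     uint32_t expected_pages)
{
    uint32_t pg_idx = 0;

    for ( uint32_t n = 0; n < range_count; n++ )
    {
        for ( uint32_t m = 0; m < ranges[n].page_count; m++ )
        {
            if ( pg_idx >= expected_pages )
                return FFA_RET_INVALID_PARAMETERS; /* too many pages */
            pg_idx++;
        }
    }

    /* The ranges must add up exactly. */
    if ( pg_idx < expected_pages )
        return FFA_RET_INVALID_PARAMETERS;

    return FFA_RET_OK;
}
```

Folding the check into the helper keeps the validation next to the counting, so a future caller cannot forget the final comparison.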
Signed-off-by: Jens Wiklander
Reviewed-by: Bertrand Marquis
---
 xen/arch/arm/tee/ffa_shm.c | 18 ++++++------------
 1 file changed, 6 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/tee/ffa_shm.c b/xen/arch/arm/tee/ffa_shm.c
index 75a5b66aeb4c..370d83ec5cf8 100644
--- a/xen/arch/arm/tee/ffa_shm.c
+++ b/xen/arch/arm/tee/ffa_shm.c
@@ -159,10 +159,9 @@ static int32_t ffa_mem_reclaim(uint32_t handle_lo, uint32_t handle_hi,
  */
 static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
                          const struct ffa_address_range *range,
-                         uint32_t range_count, unsigned int start_page_idx,
-                         unsigned int *last_page_idx)
+                         uint32_t range_count)
 {
-    unsigned int pg_idx = start_page_idx;
+    unsigned int pg_idx = 0;
     gfn_t gfn;
     unsigned int n;
     unsigned int m;
@@ -191,7 +190,9 @@ static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
         }
     }

-    *last_page_idx = pg_idx;
+    /* The ranges must add up */
+    if ( pg_idx < shm->page_count )
+        return FFA_RET_INVALID_PARAMETERS;

     return FFA_RET_OK;
 }
@@ -460,7 +461,6 @@ void ffa_handle_mem_share(struct cpu_user_regs *regs)
     struct domain *d = current->domain;
     struct ffa_ctx *ctx = d->arch.tee;
     struct ffa_shm_mem *shm = NULL;
-    unsigned int last_page_idx = 0;
     register_t handle_hi = 0;
     register_t handle_lo = 0;
     int ret = FFA_RET_DENIED;
@@ -570,15 +570,9 @@ void ffa_handle_mem_share(struct cpu_user_regs *regs)
         goto out;
     }

-    ret = get_shm_pages(d, shm, region_descr->address_range_array, range_count,
-                        0, &last_page_idx);
+    ret = get_shm_pages(d, shm, region_descr->address_range_array, range_count);
     if ( ret )
         goto out;
-    if ( last_page_idx != shm->page_count )
-    {
-        ret = FFA_RET_INVALID_PARAMETERS;
-        goto out;
-    }

     /* Note that share_shm() uses our tx buffer */
     spin_lock(&ffa_tx_buffer_lock);

From patchwork Mon Jun 10 06:53:40 2024
X-Patchwork-Submitter: Jens Wiklander
X-Patchwork-Id: 13691579
From: Jens Wiklander
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org, Jens Wiklander, Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk, Julien Grall
Subject: [XEN PATCH v6 4/7] xen/arm: allow dynamically assigned SGI handlers
Date: Mon, 10 Jun 2024 08:53:40 +0200
Message-Id: <20240610065343.2594943-5-jens.wiklander@linaro.org>
In-Reply-To: <20240610065343.2594943-1-jens.wiklander@linaro.org>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>

Update request_irq() so it can be used with a dynamically assigned SGI
irq as input. This prepares for a later patch where an FF-A schedule
receiver interrupt handler is installed for an SGI generated by the
secure world.

From the Arm Base System Architecture v1.0C [1]:
"The system shall implement at least eight Non-secure SGIs, assigned to
interrupt IDs 0-7."

gic_route_irq_to_xen() doesn't call gic_set_irq_type() for SGIs since
they are always edge-triggered.

gic_interrupt() is updated to route the dynamically assigned SGIs to
do_IRQ() instead of do_sgi(). The latter still handles the statically
assigned SGI handlers like, for instance, GIC_SGI_CALL_FUNCTION.
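The resulting dispatch in gic_interrupt() can be sketched as a pure classification function. The constants below are illustrative stand-ins (the real GIC_SGI_STATIC_MAX comes from the enum in gic.h, and 1020 is where the GIC's special interrupt IDs begin): dynamically assigned SGIs now fall into the same branch as PPIs and SPIs.

```c
enum { GIC_SGI_STATIC_MAX = 3, NR_GIC_SGI = 16, SPURIOUS_MIN = 1020 };

enum target { TGT_DO_IRQ, TGT_STATIC_SGI, TGT_SPECIAL };

/* Mirrors the updated gic_interrupt() routing: statically assigned
 * SGIs (below GIC_SGI_STATIC_MAX) keep their dedicated handler, while
 * dynamically assigned SGIs (GIC_SGI_STATIC_MAX..15) flow through
 * do_IRQ() like any other interrupt; IDs >= 1020 are special. */
static enum target classify(unsigned int irq)
{
    if ( irq >= GIC_SGI_STATIC_MAX && irq < SPURIOUS_MIN )
        return TGT_DO_IRQ;
    else if ( irq < NR_GIC_SGI )
        return TGT_STATIC_SGI;
    else
        return TGT_SPECIAL;
}
```

Before the patch the first branch tested `irq >= 16`, so every SGI went to do_sgi(); widening it to `irq >= GIC_SGI_STATIC_MAX` is the whole routing change.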
[1] https://developer.arm.com/documentation/den0094/

Signed-off-by: Jens Wiklander
Acked-by: Julien Grall
---
v3->v4
- Use IRQ_TYPE_EDGE_RISING instead of DT_IRQ_TYPE_EDGE_RISING
v2->v3
- Rename GIC_SGI_MAX to GIC_SGI_STATIC_MAX and rename do_sgi() to
  do_static_sgi()
- Update comment in setup_irq() to mention that SGI irq_desc is banked
- Add ASSERT() in do_IRQ() that the irq isn't an SGI before injecting
  calling vgic_inject_irq()
- Initialize local_irqs_type[] range for SGIs as IRQ_TYPE_EDGE_RISING
- Adding link to the Arm Base System Architecture v1.0C
v1->v2
- Update patch description as requested
---
 xen/arch/arm/gic.c             | 12 +++++++-----
 xen/arch/arm/include/asm/gic.h |  2 +-
 xen/arch/arm/irq.c             | 18 ++++++++++++++----
 3 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index b3467a76ae75..3eaf670fd731 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -38,7 +38,7 @@ const struct gic_hw_operations *gic_hw_ops;
 static void __init __maybe_unused build_assertions(void)
 {
     /* Check our enum gic_sgi only covers SGIs */
-    BUILD_BUG_ON(GIC_SGI_MAX > NR_GIC_SGI);
+    BUILD_BUG_ON(GIC_SGI_STATIC_MAX > NR_GIC_SGI);
 }

 void register_gic_ops(const struct gic_hw_operations *ops)
@@ -117,7 +117,9 @@ void gic_route_irq_to_xen(struct irq_desc *desc, unsigned int priority)

     desc->handler = gic_hw_ops->gic_host_irq_type;

-    gic_set_irq_type(desc, desc->arch.type);
+    /* SGIs are always edge-triggered, so there is need to set it */
+    if ( desc->irq >= NR_GIC_SGI)
+        gic_set_irq_type(desc, desc->arch.type);
     gic_set_irq_priority(desc, priority);
 }

@@ -322,7 +324,7 @@ void gic_disable_cpu(void)
     gic_hw_ops->disable_interface();
 }

-static void do_sgi(struct cpu_user_regs *regs, enum gic_sgi sgi)
+static void do_static_sgi(struct cpu_user_regs *regs, enum gic_sgi sgi)
 {
     struct irq_desc *desc = irq_to_desc(sgi);

@@ -367,7 +369,7 @@ void gic_interrupt(struct cpu_user_regs *regs, int is_fiq)
         /* Reading IRQ will ACK it */
         irq = gic_hw_ops->read_irq();

-        if ( likely(irq >= 16 && irq < 1020) )
+        if ( likely(irq >= GIC_SGI_STATIC_MAX && irq < 1020) )
         {
             isb();
             do_IRQ(regs, irq, is_fiq);
@@ -379,7 +381,7 @@ void gic_interrupt(struct cpu_user_regs *regs, int is_fiq)
         }
         else if ( unlikely(irq < 16) )
         {
-            do_sgi(regs, irq);
+            do_static_sgi(regs, irq);
         }
         else
         {
diff --git a/xen/arch/arm/include/asm/gic.h b/xen/arch/arm/include/asm/gic.h
index 03f209529b13..541f0eeb808a 100644
--- a/xen/arch/arm/include/asm/gic.h
+++ b/xen/arch/arm/include/asm/gic.h
@@ -285,7 +285,7 @@ enum gic_sgi {
     GIC_SGI_EVENT_CHECK,
     GIC_SGI_DUMP_STATE,
     GIC_SGI_CALL_FUNCTION,
-    GIC_SGI_MAX,
+    GIC_SGI_STATIC_MAX,
 };

 /* SGI irq mode types */
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index e5fb26a3de2d..c60502444ccf 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -142,7 +142,13 @@ void __init init_IRQ(void)

     spin_lock(&local_irqs_type_lock);
     for ( irq = 0; irq < NR_LOCAL_IRQS; irq++ )
-        local_irqs_type[irq] = IRQ_TYPE_INVALID;
+    {
+        /* SGIs are always edge-triggered */
+        if ( irq < NR_GIC_SGI )
+            local_irqs_type[irq] = IRQ_TYPE_EDGE_RISING;
+        else
+            local_irqs_type[irq] = IRQ_TYPE_INVALID;
+    }
     spin_unlock(&local_irqs_type_lock);

     BUG_ON(init_local_irq_data(smp_processor_id()) < 0);
@@ -214,9 +220,12 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)

     perfc_incr(irqs);

-    ASSERT(irq >= 16); /* SGIs do not come down this path */
+    /* Statically assigned SGIs do not come down this path */
+    ASSERT(irq >= GIC_SGI_STATIC_MAX);

-    if ( irq < 32 )
+    if ( irq < NR_GIC_SGI )
+        perfc_incr(ipis);
+    else if ( irq < NR_GIC_LOCAL_IRQS )
         perfc_incr(ppis);
     else
         perfc_incr(spis);
@@ -250,6 +259,7 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)
          * The irq cannot be a PPI, we only support delivery of SPIs to
          * guests.
          */
+        ASSERT(irq >= NR_GIC_SGI);
         vgic_inject_irq(info->d, NULL, info->virq, true);
         goto out_no_end;
     }
@@ -386,7 +396,7 @@ int setup_irq(unsigned int irq, unsigned int irqflags, struct irqaction *new)
     {
         gic_route_irq_to_xen(desc, GIC_PRI_IRQ);
         /* It's fine to use smp_processor_id() because:
-         * For PPI: irq_desc is banked
+         * For SGI and PPI: irq_desc is banked
          * For SPI: we don't care for now which CPU will receive the
          * interrupt
         * TODO: Handle case where SPI is setup on different CPU than

From patchwork Mon Jun 10 06:53:41 2024
X-Patchwork-Submitter: Jens Wiklander
X-Patchwork-Id: 13691583
(mail-ej1-x62c.google.com [2a00:1450:4864:20::62c]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id 34a79980-26f6-11ef-b4bb-af5377834399; Mon, 10 Jun 2024 08:53:56 +0200 (CEST) Received: by mail-ej1-x62c.google.com with SMTP id a640c23a62f3a-a62ef52e837so528756866b.3 for ; Sun, 09 Jun 2024 23:53:56 -0700 (PDT) Received: from rayden.urgonet (h-217-31-164-171.A175.priv.bahnhof.se. [217.31.164.171]) by smtp.gmail.com with ESMTPSA id a640c23a62f3a-a6f1e6795b9sm107981966b.174.2024.06.09.23.53.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 09 Jun 2024 23:53:54 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 34a79980-26f6-11ef-b4bb-af5377834399 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1718002435; x=1718607235; darn=lists.xenproject.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=v0XZWV38zo7fRMFW2k6millPwbwyTFNlIqQWXfkZcoE=; b=wm3NVcf0cwmsKRDHMpzk0pUjs8pRNsKotohsW5zAUtAxPqcjiweXaPjV15z2nk83Dn 1qe7DYXdaGjoyVZW+UzaKiufbfdrDI5LNPOHPsXhb1k8qDHBisLhPMHi0No75/eizajk Fq4Nz1G6LT+UXrGT1Jf5NOk6jtu0tun/5PObGi/fUJE8eGrOfEYocrSrj7dIUD4zvuGX RTlQW/pyQossoInGJS4gvs71Bhc4XmWN1kBru3mBbGTb5FYtqtsncnIDBgrIC3hwPWNe u7aADhnSGB6vWvFmaHFSZgy+tRPffv2PaLfk0DSp+Z+IqDo8lF3gN2Jma00mEBRmZZBr jnCA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1718002435; x=1718607235; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=v0XZWV38zo7fRMFW2k6millPwbwyTFNlIqQWXfkZcoE=; b=NCfN7gDrafE5mYqYlrA+BE7VSyzLsQyVBuWHz0ICbsDsPNBkOMi+Ah1PJA5skvfKVs 
k0uDNyFCX3FtMOln7FNy+rV6kJ4h1ZU4ZNrJcspLF2+kBofxVU4jemyHbVqWwC/rzwli fcT6gNw+DfZHxZ7D/Abln4/pZB1kO65b/griFFv35XyGGbMW/L9eGtPrs4BA/yocSvFv je2USAZHaHnuxlOu1O40sHZnsfpiFheK39/Mb6HOUA0MS8ZWIg5hiaDMK/7YecDEx6L2 IZvbKRNsh+SgNHKMf0OMaaSSE98+NgOcc0qir/5XcDYBAu2xo5PcXR7bcj+RiSqJBa4y D0Jw== X-Gm-Message-State: AOJu0YykQRm73cX6ALs4b44yrUbgH9utKduQ6bftTZGoTIRcbixWS6eX HIuUFqHU8PTPNM2wIodF/90+tCVNjQ98XFF0wBVr0SpYABCScGlqo7/4MEyJAdQ3KEM2HCYXuH3 I744= X-Google-Smtp-Source: AGHT+IFrqovL90KjjtIWH7ELqoOBQ3DlSigv3se5/sV3vqUZIu9PEL+pc5diOLHr+PJK6s+8Xzun2Q== X-Received: by 2002:a17:906:29d5:b0:a6f:1b40:82ab with SMTP id a640c23a62f3a-a6f1b408397mr163610266b.76.1718002434843; Sun, 09 Jun 2024 23:53:54 -0700 (PDT) From: Jens Wiklander To: xen-devel@lists.xenproject.org Cc: patches@linaro.org, Jens Wiklander , Volodymyr Babchuk , Stefano Stabellini , Julien Grall , Bertrand Marquis , Michal Orzel Subject: [XEN PATCH v6 5/7] xen/arm: add and call init_tee_secondary() Date: Mon, 10 Jun 2024 08:53:41 +0200 Message-Id: <20240610065343.2594943-6-jens.wiklander@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20240610065343.2594943-1-jens.wiklander@linaro.org> References: <20240610065343.2594943-1-jens.wiklander@linaro.org> MIME-Version: 1.0 Add init_tee_secondary() to the TEE mediator framework and call it from start_secondary() late enough that per-cpu interrupts can be configured on CPUs as they are initialized. This is needed in later patches. 
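The hook added here follows the mediator framework's guarded-callback pattern: every hook in the ops table is optional, so the dispatcher checks both the registered mediator and the individual function pointer before calling it. A standalone sketch of that pattern, outside Xen; the `dummy_ops` mediator and the call counter are hypothetical, for illustration only:

```c
#include <stddef.h>

/* Mirrors the shape of struct tee_mediator_ops: hooks are optional. */
struct mediator_ops {
    void (*init_secondary)(void);
};

/* No mediator is registered until probe succeeds. */
static const struct mediator_ops *cur_mediator = NULL;

/*
 * Guarded dispatch: a missing mediator or a missing hook is a no-op,
 * so platforms without a TEE pay nothing on secondary CPU bring-up.
 */
static void init_tee_secondary(void)
{
    if ( cur_mediator && cur_mediator->init_secondary )
        cur_mediator->init_secondary();
}

/* Hypothetical mediator used only to exercise the pattern. */
static int init_secondary_calls;

static void count_init_secondary(void)
{
    init_secondary_calls++;
}

static const struct mediator_ops dummy_ops = {
    .init_secondary = count_init_secondary,
};
```

The same shape lets later patches add more optional hooks (free_domain_ctx below) without touching mediators that don't implement them.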
Signed-off-by: Jens Wiklander
Reviewed-by: Bertrand Marquis
---
v5->v6:
- Rename init_tee_interrupt() to init_tee_secondary() as requested
---
 xen/arch/arm/include/asm/tee/tee.h | 8 ++++++++
 xen/arch/arm/smpboot.c             | 2 ++
 xen/arch/arm/tee/tee.c             | 6 ++++++
 3 files changed, 16 insertions(+)

diff --git a/xen/arch/arm/include/asm/tee/tee.h b/xen/arch/arm/include/asm/tee/tee.h
index da324467e130..6bc13da885b6 100644
--- a/xen/arch/arm/include/asm/tee/tee.h
+++ b/xen/arch/arm/include/asm/tee/tee.h
@@ -28,6 +28,9 @@ struct tee_mediator_ops {
      */
     bool (*probe)(void);

+    /* Initialize secondary CPUs */
+    void (*init_secondary)(void);
+
     /*
      * Called during domain construction if toolstack requests to enable
      * TEE support so mediator can inform TEE about new
@@ -66,6 +69,7 @@ int tee_domain_init(struct domain *d, uint16_t tee_type);
 int tee_domain_teardown(struct domain *d);
 int tee_relinquish_resources(struct domain *d);
 uint16_t tee_get_type(void);
+void init_tee_secondary(void);

 #define REGISTER_TEE_MEDIATOR(_name, _namestr, _type, _ops) \
     static const struct tee_mediator_desc __tee_desc_##_name __used \
@@ -105,6 +109,10 @@ static inline uint16_t tee_get_type(void)
     return XEN_DOMCTL_CONFIG_TEE_NONE;
 }

+static inline void init_tee_secondary(void)
+{
+}
+
 #endif /* CONFIG_TEE */

 #endif /* __ARCH_ARM_TEE_TEE_H__ */
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 93a10d7721b4..04e363088d60 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include

 /* Override macros from asm/page.h to make them work with mfn_t */
 #undef virt_to_mfn
@@ -401,6 +402,7 @@ void asmlinkage start_secondary(void)
      */
     init_maintenance_interrupt();
     init_timer_interrupt();
+    init_tee_secondary();

     local_abort_enable();
diff --git a/xen/arch/arm/tee/tee.c b/xen/arch/arm/tee/tee.c
index ddd17506a9ff..9fd1d7495b2e 100644
--- a/xen/arch/arm/tee/tee.c
+++ b/xen/arch/arm/tee/tee.c
@@ -96,6 +96,12 @@ static int __init tee_init(void)
 __initcall(tee_init);

+void __init init_tee_secondary(void)
+{
+    if ( cur_mediator && cur_mediator->ops->init_secondary )
+        cur_mediator->ops->init_secondary();
+}
+
 /*
  * Local variables:
  * mode: C

From patchwork Mon Jun 10 06:53:42 2024
X-Patchwork-Submitter: Jens Wiklander
X-Patchwork-Id: 13691582
From: Jens Wiklander
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org, Jens Wiklander, Stefano Stabellini, Julien Grall,
 Bertrand Marquis, Michal Orzel, Volodymyr Babchuk
Subject: [XEN PATCH v6 6/7] xen/arm: add and call tee_free_domain_ctx()
Date: Mon, 10 Jun 2024 08:53:42 +0200
Message-Id: <20240610065343.2594943-7-jens.wiklander@linaro.org>
In-Reply-To: <20240610065343.2594943-1-jens.wiklander@linaro.org>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>

Add tee_free_domain_ctx() to the TEE mediator framework.
tee_free_domain_ctx() is called from arch_domain_destroy() to allow late
freeing of the d->arch.tee context. This will simplify access to
d->arch.tee for domains retrieved with rcu_lock_domain_by_id().

Signed-off-by: Jens Wiklander
Reviewed-by: Bertrand Marquis
---
 xen/arch/arm/domain.c              | 1 +
 xen/arch/arm/include/asm/tee/tee.h | 6 ++++++
 xen/arch/arm/tee/tee.c             | 6 ++++++
 3 files changed, 13 insertions(+)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 8bde2f730dfb..7cfcefd27944 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -843,6 +843,7 @@ int arch_domain_teardown(struct domain *d)
 void arch_domain_destroy(struct domain *d)
 {
+    tee_free_domain_ctx(d);
     /* IOMMU page table is shared with P2M, always call
      * iommu_domain_destroy() before p2m_final_teardown().
      */
diff --git a/xen/arch/arm/include/asm/tee/tee.h b/xen/arch/arm/include/asm/tee/tee.h
index 6bc13da885b6..0169fd746bcd 100644
--- a/xen/arch/arm/include/asm/tee/tee.h
+++ b/xen/arch/arm/include/asm/tee/tee.h
@@ -38,6 +38,7 @@ struct tee_mediator_ops {
      */
     int (*domain_init)(struct domain *d);
     int (*domain_teardown)(struct domain *d);
+    void (*free_domain_ctx)(struct domain *d);

     /*
      * Called during domain destruction to relinquish resources used
@@ -70,6 +71,7 @@ int tee_domain_teardown(struct domain *d);
 int tee_relinquish_resources(struct domain *d);
 uint16_t tee_get_type(void);
 void init_tee_secondary(void);
+void tee_free_domain_ctx(struct domain *d);

 #define REGISTER_TEE_MEDIATOR(_name, _namestr, _type, _ops) \
     static const struct tee_mediator_desc __tee_desc_##_name __used \
@@ -113,6 +115,10 @@ static inline void init_tee_secondary(void)
 {
 }

+static inline void tee_free_domain_ctx(struct domain *d)
+{
+}
+
 #endif /* CONFIG_TEE */

 #endif /* __ARCH_ARM_TEE_TEE_H__ */
diff --git a/xen/arch/arm/tee/tee.c b/xen/arch/arm/tee/tee.c
index 9fd1d7495b2e..b1cae16c17a1 100644
--- a/xen/arch/arm/tee/tee.c
+++ b/xen/arch/arm/tee/tee.c
@@ -102,6 +102,12 @@ void __init init_tee_secondary(void)
         cur_mediator->ops->init_secondary();
 }

+void tee_free_domain_ctx(struct domain *d)
+{
+    if ( cur_mediator && cur_mediator->ops->free_domain_ctx )
+        cur_mediator->ops->free_domain_ctx(d);
+}
+
 /*
  * Local variables:
  * mode: C

From patchwork Mon Jun 10 06:53:43 2024
X-Patchwork-Submitter: Jens Wiklander
X-Patchwork-Id: 13691586
From: Jens Wiklander
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org, Jens Wiklander, Volodymyr Babchuk,
 Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel
Subject: [XEN PATCH v6 7/7] xen/arm: ffa: support notification
Date: Mon, 10 Jun 2024 08:53:43 +0200
Message-Id: <20240610065343.2594943-8-jens.wiklander@linaro.org>
In-Reply-To: <20240610065343.2594943-1-jens.wiklander@linaro.org>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>

Add support for FF-A notifications, currently limited to an SP (Secure
Partition) sending an asynchronous notification to a guest.

Guests and Xen itself are made aware of pending notifications with an
interrupt. The interrupt handler triggers a tasklet to retrieve the
notifications using the FF-A ABI and deliver them to their destinations.

Update ffa_partinfo_domain_init() to return an error code like
ffa_notif_domain_init().

Signed-off-by: Jens Wiklander
Reviewed-by: Bertrand Marquis
---
v5->v6:
- Add a local ffa_init_secondary() that calls ffa_notif_init_interrupt()
  as requested
- Add comments in notif_vm_pend_intr() to explain the cause and
  consequences of not finding the domain of a vm_id or if the found
  domain doesn't have a FF-A context.
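A detail worth calling out in the notification ABI this patch handles: the `FFA_NOTIFICATION_*` calls carry two 16-bit endpoint IDs packed into one 32-bit register, sender in the high half and receiver in the low half. The bind handler checks the low half against the caller's VM ID and the set handler checks the high half, matching the `src_dst & 0xFFFFU` and `src_dst >> 16` tests in the diff below. A small standalone sketch of that packing (helper names are mine, not from the patch):

```c
#include <stdint.h>

/*
 * FFA_NOTIFICATION_* calls pack two 16-bit endpoint IDs into w1:
 * sender in bits [31:16], receiver in bits [15:0].
 */
static inline uint32_t pack_src_dst(uint16_t sender, uint16_t receiver)
{
    return ((uint32_t)sender << 16) | receiver;
}

/* The check made by ffa_handle_notification_set(): caller is the sender. */
static inline uint16_t src_dst_sender(uint32_t src_dst)
{
    return src_dst >> 16;
}

/* The check made by ffa_handle_notification_bind(): caller is the receiver. */
static inline uint16_t src_dst_receiver(uint32_t src_dst)
{
    return src_dst & 0xFFFFU;
}
```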
v4->v5:
- Move the freeing of d->arch.tee to the new TEE mediator
  free_domain_ctx callback to make the context accessible during
  rcu_lock_domain_by_id() from a tasklet
- Initialize interrupt handlers for secondary CPUs from the new TEE
  mediator init_interrupt() callback
- Restore the ffa_probe() from v3, that is, remove the
  presmp_initcall(ffa_init) approach and use ffa_probe() as usual now
  that we have the init_interrupt callback.
- A tasklet is added to handle the Schedule Receiver interrupt. The
  tasklet finds each relevant domain with rcu_lock_domain_by_id() which
  now is enough to guarantee that the FF-A context can be accessed.
- The notification interrupt handler only schedules the notification
  tasklet mentioned above
v3->v4:
- Add another note on FF-A limitations
- Clear secure_pending in ffa_handle_notification_get() if both SP and
  SPM bitmaps are retrieved
- ASSERT that ffa_rcu_lock_domain_by_vm_id() isn't passed the vm_id
  FF-A uses for Xen itself
- Replace the get_domain_by_id() call done via ffa_get_domain_by_vm_id()
  in notif_irq_handler() with a call to
  rcu_lock_live_remote_domain_by_id() via ffa_rcu_lock_domain_by_vm_id()
- Remove spinlock in struct ffa_ctx_notif and use atomic functions as
  needed to access and update the secure_pending field
- In notif_irq_handler(), look for the first online CPU instead of
  assuming that the first CPU is online
- Initialize FF-A via presmp_initcall() before the other CPUs are
  online, use register_cpu_notifier() to install the interrupt handler
  notif_irq_handler()
- Update commit message to reflect recent updates
v2->v3:
- Add a GUEST_ prefix and move FFA_NOTIF_PEND_INTR_ID and
  FFA_SCHEDULE_RECV_INTR_ID to public/arch-arm.h
- Register the Xen SRI handler on each CPU using on_selected_cpus() and
  setup_irq()
- Check that the SGI ID retrieved with FFA_FEATURE_SCHEDULE_RECV_INTR
  doesn't conflict with static SGI handlers
v1->v2:
- Addressing review comments
- Change ffa_handle_notification_{bind,unbind,set}() to take struct
  cpu_user_regs *regs as argument.
- Update ffa_partinfo_domain_init() and ffa_notif_domain_init() to
  return an error code.
- Fixing a bug in handle_features() for FFA_FEATURE_SCHEDULE_RECV_INTR.
---
 xen/arch/arm/tee/Makefile       |   1 +
 xen/arch/arm/tee/ffa.c          |  77 +++++-
 xen/arch/arm/tee/ffa_notif.c    | 425 ++++++++++++++++++++++++++++++++
 xen/arch/arm/tee/ffa_partinfo.c |   9 +-
 xen/arch/arm/tee/ffa_private.h  |  56 ++++-
 xen/arch/arm/tee/tee.c          |   2 +-
 xen/include/public/arch-arm.h   |  14 ++
 7 files changed, 569 insertions(+), 15 deletions(-)
 create mode 100644 xen/arch/arm/tee/ffa_notif.c

diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile
index f0112a2f922d..7c0f46f7f446 100644
--- a/xen/arch/arm/tee/Makefile
+++ b/xen/arch/arm/tee/Makefile
@@ -2,5 +2,6 @@ obj-$(CONFIG_FFA) += ffa.o
 obj-$(CONFIG_FFA) += ffa_shm.o
 obj-$(CONFIG_FFA) += ffa_partinfo.o
 obj-$(CONFIG_FFA) += ffa_rxtx.o
+obj-$(CONFIG_FFA) += ffa_notif.o
 obj-y += tee.o
 obj-$(CONFIG_OPTEE) += optee.o
diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 5209612963e1..022089278e1c 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -39,6 +39,12 @@
  *   - at most 32 shared memory regions per guest
  * o FFA_MSG_SEND_DIRECT_REQ:
  *   - only supported from a VM to an SP
+ * o FFA_NOTIFICATION_*:
+ *   - only supports global notifications, that is, per vCPU notifications
+ *     are not supported
+ *   - doesn't support signalling the secondary scheduler of pending
+ *     notification for secure partitions
+ *   - doesn't support notifications for Xen itself
  *
  * There are some large locked sections with ffa_tx_buffer_lock and
 * ffa_rx_buffer_lock.
 * Especially the ffa_tx_buffer_lock spinlock used
@@ -194,6 +200,8 @@ out:
 static void handle_features(struct cpu_user_regs *regs)
 {
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.tee;
     uint32_t a1 = get_user_reg(regs, 1);
     unsigned int n;
@@ -240,6 +248,30 @@ static void handle_features(struct cpu_user_regs *regs)
         BUILD_BUG_ON(PAGE_SIZE != FFA_PAGE_SIZE);
         ffa_set_regs_success(regs, 0, 0);
         break;
+    case FFA_FEATURE_NOTIF_PEND_INTR:
+        if ( ctx->notif.enabled )
+            ffa_set_regs_success(regs, GUEST_FFA_NOTIF_PEND_INTR_ID, 0);
+        else
+            ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
+        break;
+    case FFA_FEATURE_SCHEDULE_RECV_INTR:
+        if ( ctx->notif.enabled )
+            ffa_set_regs_success(regs, GUEST_FFA_SCHEDULE_RECV_INTR_ID, 0);
+        else
+            ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
+        break;
+
+    case FFA_NOTIFICATION_BIND:
+    case FFA_NOTIFICATION_UNBIND:
+    case FFA_NOTIFICATION_GET:
+    case FFA_NOTIFICATION_SET:
+    case FFA_NOTIFICATION_INFO_GET_32:
+    case FFA_NOTIFICATION_INFO_GET_64:
+        if ( ctx->notif.enabled )
+            ffa_set_regs_success(regs, 0, 0);
+        else
+            ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
+        break;
     default:
         ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
         break;
@@ -305,6 +337,22 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
                                               get_user_reg(regs, 1)),
                               get_user_reg(regs, 3));
         break;
+    case FFA_NOTIFICATION_BIND:
+        e = ffa_handle_notification_bind(regs);
+        break;
+    case FFA_NOTIFICATION_UNBIND:
+        e = ffa_handle_notification_unbind(regs);
+        break;
+    case FFA_NOTIFICATION_INFO_GET_32:
+    case FFA_NOTIFICATION_INFO_GET_64:
+        ffa_handle_notification_info_get(regs);
+        return true;
+    case FFA_NOTIFICATION_GET:
+        ffa_handle_notification_get(regs);
+        return true;
+    case FFA_NOTIFICATION_SET:
+        e = ffa_handle_notification_set(regs);
+        break;
     default:
         gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid);
@@ -322,6 +370,7 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
 static int ffa_domain_init(struct domain *d)
 {
     struct ffa_ctx *ctx;
+    int ret;

     if ( !ffa_version )
         return -ENODEV;
@@ -345,10 +394,11 @@ static int ffa_domain_init(struct domain *d)
      * error, so no need for cleanup in this function.
      */
-    if ( !ffa_partinfo_domain_init(d) )
-        return -EIO;
+    ret = ffa_partinfo_domain_init(d);
+    if ( ret )
+        return ret;

-    return 0;
+    return ffa_notif_domain_init(d);
 }

 static void ffa_domain_teardown_continue(struct ffa_ctx *ctx, bool first_time)
@@ -376,13 +426,6 @@ static void ffa_domain_teardown_continue(struct ffa_ctx *ctx, bool first_time)
     }
     else
     {
-        /*
-         * domain_destroy() might have been called (via put_domain() in
-         * ffa_reclaim_shms()), so we can't touch the domain structure
-         * anymore.
-         */
-        xfree(ctx);
-
         /* Only check if there has been a change to the teardown queue */
         if ( !first_time )
         {
@@ -423,17 +466,28 @@ static int ffa_domain_teardown(struct domain *d)
         return 0;

     ffa_rxtx_domain_destroy(d);
+    ffa_notif_domain_destroy(d);

     ffa_domain_teardown_continue(ctx, true /* first_time */);

     return 0;
 }

+static void ffa_free_domain_ctx(struct domain *d)
+{
+    XFREE(d->arch.tee);
+}
+
 static int ffa_relinquish_resources(struct domain *d)
 {
     return 0;
 }

+static void ffa_init_secondary(void)
+{
+    ffa_notif_init_interrupt();
+}
+
 static bool ffa_probe(void)
 {
     uint32_t vers;
@@ -502,6 +556,7 @@ static bool ffa_probe(void)
     if ( !ffa_partinfo_init() )
         goto err_rxtx_destroy;

+    ffa_notif_init();
     INIT_LIST_HEAD(&ffa_teardown_head);
     init_timer(&ffa_teardown_timer, ffa_teardown_timer_callback, NULL, 0);
@@ -517,8 +572,10 @@ err_rxtx_destroy:

 static const struct tee_mediator_ops ffa_ops = {
     .probe = ffa_probe,
+    .init_secondary = ffa_init_secondary,
     .domain_init = ffa_domain_init,
     .domain_teardown = ffa_domain_teardown,
+    .free_domain_ctx = ffa_free_domain_ctx,
     .relinquish_resources = ffa_relinquish_resources,
     .handle_call = ffa_handle_call,
 };
diff --git a/xen/arch/arm/tee/ffa_notif.c b/xen/arch/arm/tee/ffa_notif.c
new file mode 100644
index 000000000000..541e61d2f606
--- /dev/null
+++ b/xen/arch/arm/tee/ffa_notif.c
@@ -0,0 +1,425 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2024 Linaro Limited
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+
+#include "ffa_private.h"
+
+static bool __ro_after_init notif_enabled;
+static unsigned int __ro_after_init notif_sri_irq;
+
+int ffa_handle_notification_bind(struct cpu_user_regs *regs)
+{
+    struct domain *d = current->domain;
+    uint32_t src_dst = get_user_reg(regs, 1);
+    uint32_t flags = get_user_reg(regs, 2);
+    uint32_t bitmap_lo = get_user_reg(regs, 3);
+    uint32_t bitmap_hi = get_user_reg(regs, 4);
+
+    if ( !notif_enabled )
+        return FFA_RET_NOT_SUPPORTED;
+
+    if ( (src_dst & 0xFFFFU) != ffa_get_vm_id(d) )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    if ( flags )    /* Only global notifications are supported */
+        return FFA_RET_DENIED;
+
+    /*
+     * We only support notifications from SP so no need to check the sender
+     * endpoint ID, the SPMC will take care of that for us.
+     */
+    return ffa_simple_call(FFA_NOTIFICATION_BIND, src_dst, flags, bitmap_hi,
+                           bitmap_lo);
+}
+
+int ffa_handle_notification_unbind(struct cpu_user_regs *regs)
+{
+    struct domain *d = current->domain;
+    uint32_t src_dst = get_user_reg(regs, 1);
+    uint32_t bitmap_lo = get_user_reg(regs, 3);
+    uint32_t bitmap_hi = get_user_reg(regs, 4);
+
+    if ( !notif_enabled )
+        return FFA_RET_NOT_SUPPORTED;
+
+    if ( (src_dst & 0xFFFFU) != ffa_get_vm_id(d) )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    /*
+     * We only support notifications from SP so no need to check the
+     * destination endpoint ID, the SPMC will take care of that for us.
+     */
+    return ffa_simple_call(FFA_NOTIFICATION_UNBIND, src_dst, 0, bitmap_hi,
+                           bitmap_lo);
+}
+
+void ffa_handle_notification_info_get(struct cpu_user_regs *regs)
+{
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.tee;
+
+    if ( !notif_enabled )
+    {
+        ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
+        return;
+    }
+
+    if ( test_and_clear_bool(ctx->notif.secure_pending) )
+    {
+        /* A pending global notification for the guest */
+        ffa_set_regs(regs, FFA_SUCCESS_64, 0,
+                     1U << FFA_NOTIF_INFO_GET_ID_COUNT_SHIFT, ffa_get_vm_id(d),
+                     0, 0, 0, 0);
+    }
+    else
+    {
+        /* Report an error if there were no pending global notifications */
+        ffa_set_regs_error(regs, FFA_RET_NO_DATA);
+    }
+}
+
+void ffa_handle_notification_get(struct cpu_user_regs *regs)
+{
+    struct domain *d = current->domain;
+    uint32_t recv = get_user_reg(regs, 1);
+    uint32_t flags = get_user_reg(regs, 2);
+    uint32_t w2 = 0;
+    uint32_t w3 = 0;
+    uint32_t w4 = 0;
+    uint32_t w5 = 0;
+    uint32_t w6 = 0;
+    uint32_t w7 = 0;
+
+    if ( !notif_enabled )
+    {
+        ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
+        return;
+    }
+
+    if ( (recv & 0xFFFFU) != ffa_get_vm_id(d) )
+    {
+        ffa_set_regs_error(regs, FFA_RET_INVALID_PARAMETERS);
+        return;
+    }
+
+    if ( flags & ( FFA_NOTIF_FLAG_BITMAP_SP | FFA_NOTIF_FLAG_BITMAP_SPM ) )
+    {
+        struct arm_smccc_1_2_regs arg = {
+            .a0 = FFA_NOTIFICATION_GET,
+            .a1 = recv,
+            .a2 = flags & ( FFA_NOTIF_FLAG_BITMAP_SP |
+                            FFA_NOTIF_FLAG_BITMAP_SPM ),
+        };
+        struct arm_smccc_1_2_regs resp;
+        int32_t e;
+
+        /*
+         * Clear secure pending if both FFA_NOTIF_FLAG_BITMAP_SP and
+         * FFA_NOTIF_FLAG_BITMAP_SPM are set since secure world can't have
+         * any more pending notifications.
+         */
+        if ( ( flags & FFA_NOTIF_FLAG_BITMAP_SP ) &&
+             ( flags & FFA_NOTIF_FLAG_BITMAP_SPM ) )
+        {
+            struct ffa_ctx *ctx = d->arch.tee;
+
+            ACCESS_ONCE(ctx->notif.secure_pending) = false;
+        }
+
+        arm_smccc_1_2_smc(&arg, &resp);
+        e = ffa_get_ret_code(&resp);
+        if ( e )
+        {
+            ffa_set_regs_error(regs, e);
+            return;
+        }
+
+        if ( flags & FFA_NOTIF_FLAG_BITMAP_SP )
+        {
+            w2 = resp.a2;
+            w3 = resp.a3;
+        }
+
+        if ( flags & FFA_NOTIF_FLAG_BITMAP_SPM )
+            w6 = resp.a6;
+    }
+
+    ffa_set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, w4, w5, w6, w7);
+}
+
+int ffa_handle_notification_set(struct cpu_user_regs *regs)
+{
+    struct domain *d = current->domain;
+    uint32_t src_dst = get_user_reg(regs, 1);
+    uint32_t flags = get_user_reg(regs, 2);
+    uint32_t bitmap_lo = get_user_reg(regs, 3);
+    uint32_t bitmap_hi = get_user_reg(regs, 4);
+
+    if ( !notif_enabled )
+        return FFA_RET_NOT_SUPPORTED;
+
+    if ( (src_dst >> 16) != ffa_get_vm_id(d) )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    /* Let the SPMC check the destination of the notification */
+    return ffa_simple_call(FFA_NOTIFICATION_SET, src_dst, flags, bitmap_lo,
+                           bitmap_hi);
+}
+
+/*
+ * Extract a 16-bit ID (index n) from the successful return value from
+ * FFA_NOTIFICATION_INFO_GET_64 or FFA_NOTIFICATION_INFO_GET_32. IDs are
+ * returned in registers 3 to 7 with four IDs per register for 64-bit
+ * calling convention and two IDs per register for 32-bit calling
+ * convention.
+ */
+static uint16_t get_id_from_resp(struct arm_smccc_1_2_regs *resp,
+                                 unsigned int n)
+{
+    unsigned int ids_per_reg;
+    unsigned int reg_idx;
+    unsigned int reg_shift;
+
+    if ( smccc_is_conv_64(resp->a0) )
+        ids_per_reg = 4;
+    else
+        ids_per_reg = 2;
+
+    reg_idx = n / ids_per_reg + 3;
+    reg_shift = ( n % ids_per_reg ) * 16;
+
+    switch ( reg_idx )
+    {
+    case 3:
+        return resp->a3 >> reg_shift;
+    case 4:
+        return resp->a4 >> reg_shift;
+    case 5:
+        return resp->a5 >> reg_shift;
+    case 6:
+        return resp->a6 >> reg_shift;
+    case 7:
+        return resp->a7 >> reg_shift;
+    default:
+        ASSERT(0); /* "Can't happen" */
+        return 0;
+    }
+}
+
+static void notif_vm_pend_intr(uint16_t vm_id)
+{
+    struct ffa_ctx *ctx;
+    struct domain *d;
+    struct vcpu *v;
+
+    /*
+     * vm_id == 0 means a notification pending for Xen itself, but
+     * we don't support that yet.
+     */
+    if ( !vm_id )
+        return;
+
+    /*
+     * This can fail if the domain has been destroyed after
+     * FFA_NOTIFICATION_INFO_GET_64. Ignoring this is harmless since the
+     * guest doesn't exist any more.
+     */
+    d = ffa_rcu_lock_domain_by_vm_id(vm_id);
+    if ( !d )
+        return;
+
+    /*
+     * Failing here is unlikely since the domain ID must have been reused
+     * for a new domain between the FFA_NOTIFICATION_INFO_GET_64 and
+     * ffa_rcu_lock_domain_by_vm_id() calls.
+     *
+     * Continuing on the scenario above if the domain has FF-A enabled. We
+     * can't tell here if the domain ID has been reused for a new domain so
+     * we inject an NPI. When the NPI handler in the domain calls
+     * FFA_NOTIFICATION_GET it will have accurate information, the worst
+     * case is a spurious NPI.
+     */
+    ctx = d->arch.tee;
+    if ( !ctx )
+        goto out_unlock;
+
+    /*
+     * arch.tee is freed from complete_domain_destroy() so the RCU lock
+     * guarantees that the data structure isn't freed while we're accessing
+     * it.
+     */
+    ACCESS_ONCE(ctx->notif.secure_pending) = true;
+
+    /*
+     * Since we're only delivering global notification, always
+     * deliver to the first online vCPU. It doesn't matter
+     * which we chose, as long as it's available.
+     */
+    for_each_vcpu(d, v)
+    {
+        if ( is_vcpu_online(v) )
+        {
+            vgic_inject_irq(d, v, GUEST_FFA_NOTIF_PEND_INTR_ID, true);
+            break;
+        }
+    }
+    if ( !v )
+        printk(XENLOG_ERR "ffa: can't inject NPI, all vCPUs offline\n");
+
+out_unlock:
+    rcu_unlock_domain(d);
+}
+
+static void notif_sri_action(void *unused)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_NOTIFICATION_INFO_GET_64,
+    };
+    struct arm_smccc_1_2_regs resp;
+    unsigned int id_pos;
+    unsigned int list_count;
+    uint64_t ids_count;
+    unsigned int n;
+    int32_t res;
+
+    do {
+        arm_smccc_1_2_smc(&arg, &resp);
+        res = ffa_get_ret_code(&resp);
+        if ( res )
+        {
+            if ( res != FFA_RET_NO_DATA )
+                printk(XENLOG_ERR
+                       "ffa: notification info get failed: error %d\n", res);
+            return;
+        }
+
+        ids_count = resp.a2 >> FFA_NOTIF_INFO_GET_ID_LIST_SHIFT;
+        list_count = ( resp.a2 >> FFA_NOTIF_INFO_GET_ID_COUNT_SHIFT ) &
+                     FFA_NOTIF_INFO_GET_ID_COUNT_MASK;
+
+        id_pos = 0;
+        for ( n = 0; n < list_count; n++ )
+        {
+            unsigned int count = ((ids_count >> 2 * n) & 0x3) + 1;
+            uint16_t vm_id = get_id_from_resp(&resp, id_pos);
+
+            notif_vm_pend_intr(vm_id);
+
+            id_pos += count;
+        }
+    } while (resp.a2 & FFA_NOTIF_INFO_GET_MORE_FLAG);
+}
+
+static DECLARE_TASKLET(notif_sri_tasklet, notif_sri_action, NULL);
+
+static void notif_irq_handler(int irq, void *data)
+{
+    tasklet_schedule(&notif_sri_tasklet);
+}
+
+static int32_t ffa_notification_bitmap_create(uint16_t vm_id,
+                                              uint32_t vcpu_count)
+{
+    return ffa_simple_call(FFA_NOTIFICATION_BITMAP_CREATE, vm_id, vcpu_count,
+                           0, 0);
+}
+
+static int32_t ffa_notification_bitmap_destroy(uint16_t vm_id)
+{
+    return ffa_simple_call(FFA_NOTIFICATION_BITMAP_DESTROY, vm_id, 0, 0, 0);
+}
+
+void ffa_notif_init_interrupt(void)
+{
+    int ret;
+
+    if ( notif_enabled && notif_sri_irq < NR_GIC_SGI )
+    {
+        /*
+         * An error here is unlikely since the primary CPU has already
+         * succeeded in installing the interrupt handler. If this fails it
+         * may lead to a problem with notifications.
+         *
+         * The CPUs without a notification handler installed will fail to
+         * trigger on the SGI indicating that there are notifications
+         * pending, while the SPMC in the secure world will not notice that
+         * the interrupt was lost.
+         */
+        ret = request_irq(notif_sri_irq, 0, notif_irq_handler, "FF-A notif",
+                          NULL);
+        if ( ret )
+            printk(XENLOG_ERR "ffa: request_irq irq %u failed: error %d\n",
+                   notif_sri_irq, ret);
+    }
+}
+
+void ffa_notif_init(void)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_FEATURES,
+        .a1 = FFA_FEATURE_SCHEDULE_RECV_INTR,
+    };
+    struct arm_smccc_1_2_regs resp;
+    unsigned int irq;
+    int ret;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+    if ( resp.a0 != FFA_SUCCESS_32 )
+        return;
+
+    irq = resp.a2;
+    notif_sri_irq = irq;
+    if ( irq >= NR_GIC_SGI )
+        irq_set_type(irq, IRQ_TYPE_EDGE_RISING);
+    ret = request_irq(irq, 0, notif_irq_handler, "FF-A notif", NULL);
+    if ( ret )
+    {
+        printk(XENLOG_ERR "ffa: request_irq irq %u failed: error %d\n",
+               irq, ret);
+        return;
+    }
+
+    notif_enabled = true;
+}
+
+int ffa_notif_domain_init(struct domain *d)
+{
+    struct ffa_ctx *ctx = d->arch.tee;
+    int32_t res;
+
+    if ( !notif_enabled )
+        return 0;
+
+    res = ffa_notification_bitmap_create(ffa_get_vm_id(d), d->max_vcpus);
+    if ( res )
+        return -ENOMEM;
+
+    ctx->notif.enabled = true;
+
+    return 0;
+}
+
+void ffa_notif_domain_destroy(struct domain *d)
+{
+    struct ffa_ctx *ctx = d->arch.tee;
+
+    if ( ctx->notif.enabled )
+    {
+        ffa_notification_bitmap_destroy(ffa_get_vm_id(d));
+        ctx->notif.enabled = false;
+    }
+}
diff --git a/xen/arch/arm/tee/ffa_partinfo.c b/xen/arch/arm/tee/ffa_partinfo.c
index dc1059584828..93a03c6bc672 100644
--- a/xen/arch/arm/tee/ffa_partinfo.c
+++ b/xen/arch/arm/tee/ffa_partinfo.c
@@ -306,7 +306,7 @@ static void vm_destroy_bitmap_init(struct ffa_ctx *ctx,
     }
 }

-bool ffa_partinfo_domain_init(struct domain *d)
+int ffa_partinfo_domain_init(struct domain *d)
 {
     unsigned int count = BITS_TO_LONGS(subscr_vm_destroyed_count);
     struct ffa_ctx *ctx = d->arch.tee;
@@ -315,7 +315,7 @@ bool ffa_partinfo_domain_init(struct domain *d)
     ctx->vm_destroy_bitmap = xzalloc_array(unsigned long, count);
     if ( !ctx->vm_destroy_bitmap )
-        return false;
+        return -ENOMEM;

     for ( n = 0; n < subscr_vm_created_count; n++ )
     {
@@ -330,7 +330,10 @@ bool ffa_partinfo_domain_init(struct domain *d)
     }
     vm_destroy_bitmap_init(ctx, n);

-    return n == subscr_vm_created_count;
+    if ( n != subscr_vm_created_count )
+        return -EIO;
+
+    return 0;
 }

 bool ffa_partinfo_domain_destroy(struct domain *d)
diff --git a/xen/arch/arm/tee/ffa_private.h b/xen/arch/arm/tee/ffa_private.h
index 98236cbf14a3..7c6b06f686fc 100644
--- a/xen/arch/arm/tee/ffa_private.h
+++ b/xen/arch/arm/tee/ffa_private.h
@@ -25,6 +25,7 @@
 #define FFA_RET_DENIED                  -6
 #define FFA_RET_RETRY                   -7
 #define FFA_RET_ABORTED                 -8
+#define FFA_RET_NO_DATA                 -9

 /* FFA_VERSION helpers */
 #define FFA_VERSION_MAJOR_SHIFT         16U
@@ -175,6 +176,21 @@
  */
 #define FFA_PARTITION_INFO_GET_COUNT_FLAG BIT(0, U)

+/* Flags used in calls to FFA_NOTIFICATION_GET interface */
+#define FFA_NOTIF_FLAG_BITMAP_SP        BIT(0, U)
+#define FFA_NOTIF_FLAG_BITMAP_VM        BIT(1, U)
+#define FFA_NOTIF_FLAG_BITMAP_SPM       BIT(2, U)
+#define FFA_NOTIF_FLAG_BITMAP_HYP       BIT(3, U)
+
+#define FFA_NOTIF_INFO_GET_MORE_FLAG        BIT(0, U)
+#define FFA_NOTIF_INFO_GET_ID_LIST_SHIFT    12
+#define FFA_NOTIF_INFO_GET_ID_COUNT_SHIFT   7
+#define FFA_NOTIF_INFO_GET_ID_COUNT_MASK    0x1F
+
+/* Feature IDs used with FFA_FEATURES */
+#define FFA_FEATURE_NOTIF_PEND_INTR     0x1U
+#define FFA_FEATURE_SCHEDULE_RECV_INTR  0x2U
+
 /* Function IDs */
 #define FFA_ERROR                       0x84000060U
 #define FFA_SUCCESS_32                  0x84000061U
@@ -213,6 +229,24 @@
 #define FFA_MEM_FRAG_TX                 0x8400007BU
 #define FFA_MSG_SEND                    0x8400006EU
 #define FFA_MSG_POLL                    0x8400006AU
+#define FFA_NOTIFICATION_BITMAP_CREATE  0x8400007DU
+#define FFA_NOTIFICATION_BITMAP_DESTROY 0x8400007EU
+#define FFA_NOTIFICATION_BIND           0x8400007FU
+#define FFA_NOTIFICATION_UNBIND         0x84000080U
+#define FFA_NOTIFICATION_SET 0x84000081U +#define FFA_NOTIFICATION_GET 0x84000082U +#define FFA_NOTIFICATION_INFO_GET_32 0x84000083U +#define FFA_NOTIFICATION_INFO_GET_64 0xC4000083U + +struct ffa_ctx_notif { + bool enabled; + + /* + * True if domain is reported by FFA_NOTIFICATION_INFO_GET to have + * pending global notifications. + */ + bool secure_pending; +}; struct ffa_ctx { void *rx; @@ -228,6 +262,7 @@ struct ffa_ctx { struct list_head shm_list; /* Number of allocated shared memory object */ unsigned int shm_count; + struct ffa_ctx_notif notif; /* * tx_lock is used to serialize access to tx * rx_lock is used to serialize access to rx @@ -257,7 +292,7 @@ void ffa_handle_mem_share(struct cpu_user_regs *regs); int ffa_handle_mem_reclaim(uint64_t handle, uint32_t flags); bool ffa_partinfo_init(void); -bool ffa_partinfo_domain_init(struct domain *d); +int ffa_partinfo_domain_init(struct domain *d); bool ffa_partinfo_domain_destroy(struct domain *d); int32_t ffa_handle_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3, uint32_t w4, uint32_t w5, uint32_t *count, @@ -271,12 +306,31 @@ uint32_t ffa_handle_rxtx_map(uint32_t fid, register_t tx_addr, uint32_t ffa_handle_rxtx_unmap(void); int32_t ffa_handle_rx_release(void); +void ffa_notif_init(void); +void ffa_notif_init_interrupt(void); +int ffa_notif_domain_init(struct domain *d); +void ffa_notif_domain_destroy(struct domain *d); + +int ffa_handle_notification_bind(struct cpu_user_regs *regs); +int ffa_handle_notification_unbind(struct cpu_user_regs *regs); +void ffa_handle_notification_info_get(struct cpu_user_regs *regs); +void ffa_handle_notification_get(struct cpu_user_regs *regs); +int ffa_handle_notification_set(struct cpu_user_regs *regs); + static inline uint16_t ffa_get_vm_id(const struct domain *d) { /* +1 since 0 is reserved for the hypervisor in FF-A */ return d->domain_id + 1; } +static inline struct domain *ffa_rcu_lock_domain_by_vm_id(uint16_t vm_id) +{ + ASSERT(vm_id); + + /* -1 to match 
ffa_get_vm_id() */ + return rcu_lock_domain_by_id(vm_id - 1); +} + static inline void ffa_set_regs(struct cpu_user_regs *regs, register_t v0, register_t v1, register_t v2, register_t v3, register_t v4, register_t v5, register_t v6, diff --git a/xen/arch/arm/tee/tee.c b/xen/arch/arm/tee/tee.c index b1cae16c17a1..3f65e45a7892 100644 --- a/xen/arch/arm/tee/tee.c +++ b/xen/arch/arm/tee/tee.c @@ -94,7 +94,7 @@ static int __init tee_init(void) return 0; } -__initcall(tee_init); +presmp_initcall(tee_init); void __init init_tee_secondary(void) { diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h index 289af81bd69d..e2412a17474e 100644 --- a/xen/include/public/arch-arm.h +++ b/xen/include/public/arch-arm.h @@ -505,6 +505,7 @@ typedef uint64_t xen_callback_t; #define GUEST_MAX_VCPUS 128 /* Interrupts */ + #define GUEST_TIMER_VIRT_PPI 27 #define GUEST_TIMER_PHYS_S_PPI 29 #define GUEST_TIMER_PHYS_NS_PPI 30 @@ -515,6 +516,19 @@ typedef uint64_t xen_callback_t; #define GUEST_VIRTIO_MMIO_SPI_FIRST 33 #define GUEST_VIRTIO_MMIO_SPI_LAST 43 +/* + * SGI is the preferred delivery mechanism of FF-A pending notifications or + * schedule recveive interrupt. SGIs 8-15 are normally not used by a guest + * as they in a non-virtualized system typically are assigned to the secure + * world. Here we're free to use SGI 8-15 since they are virtual and have + * nothing to do with the secure world. + * + * For partitioning of SGIs see also Arm Base System Architecture v1.0C, + * https://developer.arm.com/documentation/den0094/ + */ +#define GUEST_FFA_NOTIF_PEND_INTR_ID 8 +#define GUEST_FFA_SCHEDULE_RECV_INTR_ID 9 + /* PSCI functions */ #define PSCI_cpu_suspend 0 #define PSCI_cpu_off 1