[RFC,5/6] firmware: arm_scmi: Add raw transmission support

Message ID 20220816072450.3120959-6-cristian.marussi@arm.com (mailing list archive)
State New, archived
Series Introduce a unified API for SCMI Server testing

Commit Message

Cristian Marussi Aug. 16, 2022, 7:24 a.m. UTC
Add SCMI Raw mode support which exposes a userspace interface rooted under
/sys/kernel/debug/scmi_raw.

Raw mode can be enabled/disabled at runtime via ./scmi_raw/enable.
Once enabled, all regular SCMI driver activity is inhibited and a
userspace application can then inject and read back bare SCMI messages
by writing to and reading from the ./scmi_raw/message* entries.
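
As an illustration, a userspace test application could drive this
interface roughly as in the following minimal sketch (a sketch only:
it assumes Raw mode has already been enabled via ./scmi_raw/enable and
a little-endian host; the 32-bit header layout follows the SCMI
specification, and the Base protocol PROTOCOL_VERSION command is just
an arbitrary example):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Pack an SCMI header: msg_id[7:0] type[9:8] protocol_id[17:10] token[27:18] */
static uint32_t scmi_pack_hdr(uint32_t msg_id, uint32_t proto, uint32_t token)
{
	return (msg_id & 0xff) | ((proto & 0xff) << 10) | ((token & 0x3ff) << 18);
}

int main(void)
{
	uint32_t msg[64];
	ssize_t len;
	int fd;

	fd = open("/sys/kernel/debug/scmi_raw/message", O_RDWR);
	if (fd < 0)
		return 1;

	/* Inject a bare PROTOCOL_VERSION command for the Base protocol (0x10) */
	msg[0] = scmi_pack_hdr(0x0, 0x10, 0);
	if (write(fd, msg, sizeof(uint32_t)) < 0)
		return 1;

	/* Read back the bare reply: header, then status, then the payload */
	len = read(fd, msg, sizeof(msg));
	if (len >= 12)
		printf("status=%d version=0x%08x\n", (int32_t)msg[1], msg[2]);

	close(fd);
	return 0;
}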

Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
 drivers/firmware/arm_scmi/Kconfig    |   13 +
 drivers/firmware/arm_scmi/Makefile   |    1 +
 drivers/firmware/arm_scmi/raw_mode.c | 1046 ++++++++++++++++++++++++++
 drivers/firmware/arm_scmi/raw_mode.h |   28 +
 4 files changed, 1088 insertions(+)
 create mode 100644 drivers/firmware/arm_scmi/raw_mode.c
 create mode 100644 drivers/firmware/arm_scmi/raw_mode.h

Comments

Mark Brown Aug. 16, 2022, 6:03 p.m. UTC | #1
On Tue, Aug 16, 2022 at 08:24:49AM +0100, Cristian Marussi wrote:
> Add SCMI Raw mode support which exposes a userspace interface rooted under
> /sys/kernel/debug/scmi_raw.

> Raw mode can be enabled/disabled at runtime via ./scmi_raw/enable.
> Once enabled, all regular SCMI driver activity is inhibited and a
> userspace application can then inject and read back bare SCMI messages
> by writing to and reading from the ./scmi_raw/message* entries.

Is there a strong reason to have the runtime enable/disable?  Given that
this is going to be used in special kernel builds rather than something
people have as standard it feels like the transition to/from raw mode is
opening up a set of extra use cases that wouldn't normally come up for
the SCMI drivers (especially if the testing ends up leaving the firmware
in a weird state).
Cristian Marussi Aug. 17, 2022, 8:38 a.m. UTC | #2
On Tue, Aug 16, 2022 at 07:03:21PM +0100, Mark Brown wrote:
> On Tue, Aug 16, 2022 at 08:24:49AM +0100, Cristian Marussi wrote:
> > Add SCMI Raw mode support which exposes a userspace interface rooted under
> > /sys/kernel/debug/scmi_raw.

Hi Mark,

thanks for having a look.

> 
> > Raw mode can be enabled/disabled at runtime via ./scmi_raw/enable.
> > Once enabled, all regular SCMI driver activity is inhibited and a
> > userspace application can then inject and read back bare SCMI messages
> > by writing to and reading from the ./scmi_raw/message* entries.
> 
> Is there a strong reason to have the runtime enable/disable?  Given that
> this is going to be used in special kernel builds rather than something
> people have as standard it feels like the transition to/from raw mode is
> opening up a set of extra use cases that wouldn't normally come up for
> the SCMI drivers (especially if the testing ends up leaving the firmware
> in a weird state).

The rationale for this dynamic runtime switch was to have a system that
can be configured for SCMI FW testing but is not dedicated solely to such
SCMI tests; in other words, a system that can be used in CI to run a
number of other test suites (while in normal mode) and then switched to
Raw mode only when the SCMI compliance tests are to be run.

Indeed, the enable/disable support is the main critical point of this RFC,
since the disable-and-back-to-normal transition in particular proved to be
potentially problematic: the system generally works in my setup, but it
cannot be guaranteed to come back to a fully functional state, depending
on how complex the driver stack is (besides the potential issue of the FW
being left in a weird state, as you rightly pointed out).

Moreover, the whole disable-and-go-back-to-normal flow ultimately makes
little sense in a typical CI scenario, where the system under test is most
probably rebooted between runs of different test suites, so any weird
final state is unlikely to matter.

I nonetheless posted this RFC with this support, first of all to gather
some general feedback, but also because I'm still wondering whether it
would be worth keeping at least the capability to enable it at runtime
(dropping the disable-and-back-to-normal feature): this would make it
possible to build an image which includes this SCMI Raw support, disabled
by default, that can be enabled at runtime only on selected runs. This
same test image could then still be used in a number of different CI
test runs (keeping Raw mode disabled and silent) but also be used for a
specific SCMI testing run that would eventually enable it.
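
(For illustration only, a CI harness could flip the switch at runtime with
a trivial helper along these lines, using the debugfs path from this
series:)

#include <fcntl.h>
#include <unistd.h>

/* Sketch: enable SCMI Raw mode before running an SCMI compliance suite. */
static int scmi_raw_enable(void)
{
	int fd = open("/sys/kernel/debug/scmi_raw/enable", O_WRONLY);

	if (fd < 0)
		return -1;
	if (write(fd, "1", 1) != 1) {
		close(fd);
		return -1;
	}
	return close(fd);
}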

If this is really not worthwhile, I can just drop the whole runtime switch
and stick to the more conservative approach of having a dedicated image
for this kind of SCMI FW testing: a system that, once configured for this
at compile time, simply cannot use the regular SCMI stack at all (which
does not necessarily prevent the system from booting and being used for
other, non-SCMI testing; it would just probably be slower).

Any thoughts?

Thanks,
Cristian
Mark Brown Aug. 17, 2022, 1:42 p.m. UTC | #3
On Wed, Aug 17, 2022 at 09:38:57AM +0100, Cristian Marussi wrote:

> Moreover, the whole disable-and-go-back-to-normal flow ultimately makes
> little sense in a typical CI scenario, where the system under test is most
> probably rebooted between runs of different test suites, so any weird
> final state is unlikely to matter.

> I nonetheless posted this RFC with this support, first of all to gather
> some general feedback, but also because I'm still wondering whether it
> would be worth keeping at least the capability to enable it at runtime
> (dropping the disable-and-back-to-normal feature): this would make it
> possible to build an image which includes this SCMI Raw support, disabled
> by default, that can be enabled at runtime only on selected runs. This
> same test image could then still be used in a number of different CI
> test runs (keeping Raw mode disabled and silent) but also be used for a
> specific SCMI testing run that would eventually enable it.

The enable use case does indeed make more sense, though I'd still worry
about other code having problems with the SCMI support getting
hotplugged out from underneath it since that isn't a thing that happens
in practical systems.  For example the archrandom code is going to get
confused since it probes once to see if SMCCC TRNG support is available
and if it's present sets a flag which it assumes will be true for the
rest of system runtime.  I don't entirely know how the image build costs
play off here for the people who'd actually be running these tests, but
my instinct is that the extra kernel build isn't really much in the
grand scheme of things compared to shaking out the consequences of a
runtime switch and the costs of actually running the tests.
Cristian Marussi Aug. 17, 2022, 2:21 p.m. UTC | #4
On Wed, Aug 17, 2022 at 02:42:40PM +0100, Mark Brown wrote:
> On Wed, Aug 17, 2022 at 09:38:57AM +0100, Cristian Marussi wrote:
> 
> > Moreover, the whole disable-and-go-back-to-normal flow ultimately makes
> > little sense in a typical CI scenario, where the system under test is most
> > probably rebooted between runs of different test suites, so any weird
> > final state is unlikely to matter.
> 
> > I nonetheless posted this RFC with this support, first of all to gather
> > some general feedback, but also because I'm still wondering whether it
> > would be worth keeping at least the capability to enable it at runtime
> > (dropping the disable-and-back-to-normal feature): this would make it
> > possible to build an image which includes this SCMI Raw support, disabled
> > by default, that can be enabled at runtime only on selected runs. This
> > same test image could then still be used in a number of different CI
> > test runs (keeping Raw mode disabled and silent) but also be used for a
> > specific SCMI testing run that would eventually enable it.
> 
> The enable use case does indeed make more sense, though I'd still worry
> about other code having problems with the SCMI support getting
> hotplugged out from underneath it since that isn't a thing that happens
> in practical systems.  For example the archrandom code is going to get
> confused since it probes once to see if SMCCC TRNG support is available
> and if it's present sets a flag which it assumes will be true for the
> rest of system runtime.  I don't entirely know how the image build costs
> play off here for the people who'd actually be running these tests, but
> my instinct is that the extra kernel build isn't really much in the
> grand scheme of things compared to shaking out the consequences of a
> runtime switch and the costs of actually running the tests.

Indeed, Sudeep had the same concerns, and when I tested this with a dummy
driver using an SCMI clock I could see that the disappearing clock was
seemingly handled safely by the clock framework; but this is just one of
the frameworks possibly involved, and I cannot be so sure about all the
others, nor about any future framework that will have to interact with
this dynamic unplugging. So in the end it is probably not worth the
effort, as you are saying; I'll most probably drop this next.

Thanks a lot for your feedback.
Cristian

Patch

diff --git a/drivers/firmware/arm_scmi/Kconfig b/drivers/firmware/arm_scmi/Kconfig
index a14f65444b35..ab726a92ac2f 100644
--- a/drivers/firmware/arm_scmi/Kconfig
+++ b/drivers/firmware/arm_scmi/Kconfig
@@ -23,6 +23,19 @@  config ARM_SCMI_PROTOCOL
 
 if ARM_SCMI_PROTOCOL
 
+config ARM_SCMI_RAW_MODE_SUPPORT
+	bool "Enable support for SCMI Raw transmission mode"
+	help
+	  Enable support for SCMI Raw transmission mode.
+
+	  If enabled, it allows the direct injection and snooping of bare
+	  SCMI messages through a dedicated debugfs interface.
+	  It is meant to be used by SCMI compliance/testing suites.
+
+	  When enabled, regular SCMI driver interactions are inhibited in
+	  order to avoid unexpected interference with the SCMI Raw message
+	  flow. If unsure say N.
+
 config ARM_SCMI_HAVE_TRANSPORT
 	bool
 	help
diff --git a/drivers/firmware/arm_scmi/Makefile b/drivers/firmware/arm_scmi/Makefile
index 9ea86f8cc8f7..7c1aca60c8ce 100644
--- a/drivers/firmware/arm_scmi/Makefile
+++ b/drivers/firmware/arm_scmi/Makefile
@@ -1,6 +1,7 @@ 
 # SPDX-License-Identifier: GPL-2.0-only
 scmi-bus-y = bus.o
 scmi-driver-y = driver.o notify.o
+scmi-driver-$(CONFIG_ARM_SCMI_RAW_MODE_SUPPORT) += raw_mode.o
 scmi-transport-$(CONFIG_ARM_SCMI_HAVE_SHMEM) = shmem.o
 scmi-transport-$(CONFIG_ARM_SCMI_TRANSPORT_MAILBOX) += mailbox.o
 scmi-transport-$(CONFIG_ARM_SCMI_TRANSPORT_SMC) += smc.o
diff --git a/drivers/firmware/arm_scmi/raw_mode.c b/drivers/firmware/arm_scmi/raw_mode.c
new file mode 100644
index 000000000000..c14cdd12b140
--- /dev/null
+++ b/drivers/firmware/arm_scmi/raw_mode.c
@@ -0,0 +1,1046 @@ 
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * System Control and Management Interface (SCMI) Raw mode support
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ */
+
+#include <linux/bitmap.h>
+#include <linux/debugfs.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/export.h>
+#include <linux/idr.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/fs.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/poll.h>
+#include <linux/slab.h>
+
+#include "common.h"
+
+#include "raw_mode.h"
+
+#include <trace/events/scmi.h>
+
+#define SCMI_XFER_RAW_MAX_RETRIES	10
+
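+/*
+ * struct scmi_xfer_raw_work - Raw mode deferred worker context
+ *
+ * Holds the pools of free and active waiters used by the deferred worker
+ * which waits for replies to Raw transmissions, together with the dedicated
+ * workqueue the worker runs on and its shutdown flag.
+ */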
+struct scmi_xfer_raw_work {
+	unsigned long max_tmo;
+	struct scmi_raw_mode_info *raw;
+	struct list_head free_waiters;
+	/* Protect free_waiters list */
+	struct mutex free_mtx;
+	struct list_head active_waiters;
+	/* Protect active_waiters list */
+	struct mutex active_mtx;
+	wait_queue_head_t waiters_wq;
+	struct work_struct waiters_work;
+	bool wait_wq_shutdown;
+	struct workqueue_struct	*wait_wq;
+};
+
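+/*
+ * struct scmi_raw_mode_info - Raw mode global state
+ *
+ * Gathers the Raw enable state, the underlying transport channel and
+ * descriptor, the per-queue free-buffer and message lists (replies,
+ * notifications, errors) with their locks and waitqueues, the worker
+ * context and the debugfs root dentry.
+ */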
+struct scmi_raw_mode_info {
+	bool enabled;
+	struct scmi_chan_info *cinfo;
+	const struct scmi_desc *desc;
+	int tx_max_msg;
+	struct list_head free_raw_bufs[SCMI_RAW_MAX_QUEUE];
+	/* Protect free_raw_bufs[] lists */
+	spinlock_t free_bufs_lock[SCMI_RAW_MAX_QUEUE];
+	struct list_head msg_q[SCMI_RAW_MAX_QUEUE];
+	/* Protect msg_q[] lists */
+	spinlock_t msg_q_lock[SCMI_RAW_MAX_QUEUE];
+	wait_queue_head_t wqs[SCMI_RAW_MAX_QUEUE];
+	struct scmi_xfer_raw_work *wrk;
+	struct dentry *dentry;
+	void *gid;
+};
+
+struct scmi_xfer_raw_waiter {
+	unsigned long start_jiffies;
+	struct scmi_xfer *xfer;
+	struct completion async_response;
+	struct list_head node;
+};
+
+struct scmi_raw_buffer {
+	size_t max_len;
+	struct scmi_msg msg;
+	struct list_head node;
+};
+
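+/*
+ * struct scmi_raw_data - Per-file-descriptor private data
+ *
+ * Dedicated intermediate buffers used to gather a full message across
+ * multiple write() calls and to hold a dequeued message across multiple
+ * read() calls on the debugfs entries.
+ */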
+struct scmi_raw_data {
+	struct scmi_raw_mode_info *raw;
+	struct scmi_msg tx;
+	size_t tx_size;
+	size_t tx_req;
+	struct scmi_msg rx;
+	size_t rx_size;
+};
+
+static inline
+struct scmi_raw_buffer *scmi_raw_buffer_get(struct scmi_raw_mode_info *raw,
+					    unsigned int idx)
+{
+	unsigned long flags;
+	struct scmi_raw_buffer *rb = NULL;
+	struct list_head *head = &raw->free_raw_bufs[idx];
+
+	spin_lock_irqsave(&raw->free_bufs_lock[idx], flags);
+	if (!list_empty(head)) {
+		rb = list_first_entry(head, struct scmi_raw_buffer, node);
+		list_del_init(&rb->node);
+	}
+	spin_unlock_irqrestore(&raw->free_bufs_lock[idx], flags);
+
+	return rb;
+}
+
+static inline void scmi_raw_buffer_put(struct scmi_raw_mode_info *raw,
+				       struct scmi_raw_buffer *rb,
+				       unsigned int idx)
+{
+	unsigned long flags;
+
+	rb->msg.len = rb->max_len;
+
+	spin_lock_irqsave(&raw->free_bufs_lock[idx], flags);
+	list_add_tail(&rb->node, &raw->free_raw_bufs[idx]);
+	spin_unlock_irqrestore(&raw->free_bufs_lock[idx], flags);
+}
+
+static inline void scmi_raw_buffer_enqueue(struct scmi_raw_mode_info *raw,
+					   struct scmi_raw_buffer *rb,
+					   unsigned int idx)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&raw->msg_q_lock[idx], flags);
+	list_add_tail(&rb->node, &raw->msg_q[idx]);
+	spin_unlock_irqrestore(&raw->msg_q_lock[idx], flags);
+
+	wake_up_interruptible(&raw->wqs[idx]);
+}
+
+static inline struct scmi_raw_buffer*
+scmi_raw_buffer_dequeue(struct scmi_raw_mode_info *raw, unsigned int idx)
+{
+	unsigned long flags;
+	struct scmi_raw_buffer *rb = NULL;
+
+	spin_lock_irqsave(&raw->msg_q_lock[idx], flags);
+	if (!list_empty(&raw->msg_q[idx])) {
+		rb = list_first_entry(&raw->msg_q[idx],
+				      struct scmi_raw_buffer, node);
+		list_del_init(&rb->node);
+	}
+	spin_unlock_irqrestore(&raw->msg_q_lock[idx], flags);
+
+	return rb;
+}
+
+static void scmi_raw_buffer_queue_flush(struct scmi_raw_mode_info *raw,
+					unsigned int idx)
+{
+	struct scmi_raw_buffer *rb;
+
+	do {
+		rb = scmi_raw_buffer_dequeue(raw, idx);
+		if (rb)
+			scmi_raw_buffer_put(raw, rb, idx);
+	} while (rb);
+}
+
+static inline struct scmi_xfer_raw_waiter *
+scmi_xfer_raw_waiter_get(struct scmi_xfer_raw_work *wrk, struct scmi_xfer *xfer,
+			 bool async)
+{
+	struct scmi_xfer_raw_waiter *rw = NULL;
+
+	mutex_lock(&wrk->free_mtx);
+	if (!list_empty(&wrk->free_waiters)) {
+		rw = list_first_entry(&wrk->free_waiters,
+				      struct scmi_xfer_raw_waiter, node);
+		list_del_init(&rw->node);
+
+		if (async) {
+			reinit_completion(&rw->async_response);
+			xfer->async_done = &rw->async_response;
+		}
+
+		rw->xfer = xfer;
+	}
+	/* TODO: this critical section could be shortened */
+	mutex_unlock(&wrk->free_mtx);
+
+	return rw;
+}
+
+static inline
+void scmi_xfer_raw_waiter_put(struct scmi_xfer_raw_work *wrk,
+			      struct scmi_xfer_raw_waiter *rw)
+{
+	if (rw->xfer) {
+		rw->xfer->async_done = NULL;
+		rw->xfer = NULL;
+	}
+
+	mutex_lock(&wrk->free_mtx);
+	list_add_tail(&rw->node, &wrk->free_waiters);
+	mutex_unlock(&wrk->free_mtx);
+}
+
+static inline
+void scmi_xfer_raw_waiter_enqueue(struct scmi_xfer_raw_work *wrk,
+				  struct scmi_xfer_raw_waiter *rw)
+{
+	rw->start_jiffies = jiffies;
+
+	trace_scmi_xfer_response_wait(rw->xfer->transfer_id, rw->xfer->hdr.id,
+				      rw->xfer->hdr.protocol_id,
+				      rw->xfer->hdr.seq,
+				      wrk->raw->desc->max_rx_timeout_ms,
+				      rw->xfer->hdr.poll_completion);
+
+	mutex_lock(&wrk->active_mtx);
+	list_add_tail(&rw->node, &wrk->active_waiters);
+	mutex_unlock(&wrk->active_mtx);
+
+	wake_up_interruptible(&wrk->waiters_wq);
+}
+
+static struct scmi_xfer_raw_waiter*
+scmi_xfer_raw_waiter_dequeue(struct scmi_xfer_raw_work *wrk)
+{
+	struct scmi_xfer_raw_waiter *rw;
+
+	mutex_lock(&wrk->active_mtx);
+	while (list_empty(&wrk->active_waiters)) {
+		int ret;
+
+		mutex_unlock(&wrk->active_mtx);
+
+		ret = wait_event_interruptible(wrk->waiters_wq,
+					       !list_empty(&wrk->active_waiters) ||
+					       wrk->wait_wq_shutdown);
+		if (ret || wrk->wait_wq_shutdown)
+			return NULL;
+
+		mutex_lock(&wrk->active_mtx);
+	}
+
+	rw = list_first_entry(&wrk->active_waiters,
+			      struct scmi_xfer_raw_waiter, node);
+	list_del_init(&rw->node);
+	mutex_unlock(&wrk->active_mtx);
+
+	return rw;
+}
+
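+/*
+ * Deferred worker which dequeues waiters for previously sent Raw commands
+ * and waits, for the residual part of the transport timeout, for their
+ * completion (and for the optional delayed response), finally releasing
+ * the waiter and the underlying xfer.
+ */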
+static void scmi_xfer_raw_worker(struct work_struct *work)
+{
+	struct scmi_xfer_raw_work *wrk;
+	struct scmi_raw_mode_info *raw;
+	struct scmi_chan_info *cinfo;
+	struct device *dev;
+
+	wrk = container_of(work, struct scmi_xfer_raw_work, waiters_work);
+
+	raw = wrk->raw;
+	cinfo = raw->cinfo;
+	dev = raw->cinfo->handle->dev;
+
+	do {
+		int ret = 0;
+		unsigned long aging, tmo;
+		struct scmi_xfer *xfer;
+		struct scmi_xfer_raw_waiter *rw;
+
+		/*
+		 * Waiters are queued in wait-deadline order, so some of them
+		 * could have already expired by the time they are processed;
+		 * the completion status has to be checked anyway, just in case
+		 * a virtually expired (aged) transaction indeed completed
+		 * fine, in which case its asynchronous part (if any) still has
+		 * to be waited for.
+		 */
+		rw = scmi_xfer_raw_waiter_dequeue(wrk);
+		if (!rw)
+			return;
+
+		xfer = rw->xfer;
+
+		aging = jiffies - rw->start_jiffies;
+		tmo = wrk->max_tmo > aging ? wrk->max_tmo - aging : 0;
+
+		if ((tmo && !wait_for_completion_timeout(&xfer->done, tmo)) ||
+		    (!tmo && !try_wait_for_completion(&xfer->done))) {
+			dev_err(dev, "timed out in RAW response - HDR:%08X\n",
+				pack_scmi_header(&xfer->hdr));
+			ret = -ETIMEDOUT;
+		}
+
+		/* Avoid unneeded async waits */
+		if (!ret && xfer->hdr.status)
+			ret = scmi_to_linux_errno(xfer->hdr.status);
+
+		if (raw->desc->ops->mark_txdone)
+			raw->desc->ops->mark_txdone(cinfo, ret, xfer);
+
+		trace_scmi_xfer_end(xfer->transfer_id, xfer->hdr.id,
+				    xfer->hdr.protocol_id, xfer->hdr.seq, ret);
+
+		/* Wait also for an async delayed response if needed */
+		if (!ret && xfer->async_done) {
+			tmo = msecs_to_jiffies(SCMI_MAX_RESPONSE_TIMEOUT);
+			if (!wait_for_completion_timeout(xfer->async_done, tmo))
+				dev_err(dev,
+					"timed out in RAW delayed resp - HDR:%08X\n",
+					pack_scmi_header(&xfer->hdr));
+		}
+
+		/* Release waiter and xfer */
+		scmi_xfer_raw_put(raw->cinfo->handle, xfer);
+		scmi_xfer_raw_waiter_put(wrk, rw);
+	} while (1);
+}
+
+static int scmi_xfer_raw_enable(struct scmi_raw_mode_info *raw, bool enable)
+{
+	struct device *dev = raw->cinfo->handle->dev;
+
+	if (enable && !raw->enabled) {
+		dev_info(dev, "Enabling SCMI Raw access. Unbinding drivers.\n");
+		scmi_bus_raw_mode_enable();
+
+		/* Make sure all changes are visible before enabling Raw mode */
+		smp_store_mb(raw->enabled, true);
+	} else if (!enable && raw->enabled) {
+		/* TODO: check for xfers registered as in-flight but never sent */
+		int i;
+
+		/* Make sure all changes are visible before disabling Raw mode */
+		smp_store_mb(raw->enabled, false);
+
+		/*
+		 * Flush all message queues since nobody will read/dequeue these
+		 * anymore. Note that, instead, the active_waiters queue will
+		 * on its own as soon as replies are received or time out.
+		 */
+		for (i = 0; i < SCMI_RAW_MAX_QUEUE; i++)
+			scmi_raw_buffer_queue_flush(raw, i);
+
+		dev_info(dev, "Disabling SCMI Raw access. Binding drivers.\n");
+		scmi_bus_raw_mode_disable();
+	}
+
+	return 0;
+}
+
+static int scmi_xfer_raw_get_init(struct scmi_raw_mode_info *raw, void *buf,
+				  size_t len, struct scmi_xfer **p)
+{
+	u32 msg_hdr;
+	size_t tx_size;
+	struct scmi_xfer *xfer;
+	int ret, retry = SCMI_XFER_RAW_MAX_RETRIES;
+	struct device *dev = raw->cinfo->handle->dev;
+
+	if (!buf || len < sizeof(u32))
+		return -EINVAL;
+
+	tx_size = len - sizeof(u32);
+	/* Ensure we have sane transfer sizes */
+	if (tx_size > raw->desc->max_msg_size)
+		return -ERANGE;
+
+	xfer = scmi_xfer_raw_get(raw->cinfo->handle);
+	if (IS_ERR(xfer)) {
+		dev_warn(dev, "RAW - Cannot get a free RAW xfer !\n");
+		return PTR_ERR(xfer);
+	}
+
+	/* Build xfer from provided SCMI bare message */
+	msg_hdr = le32_to_cpu(*((__le32 *)buf));
+	unpack_scmi_header(msg_hdr, &xfer->hdr);
+	xfer->hdr.seq = (u16)MSG_XTRACT_TOKEN(msg_hdr);
+	xfer->hdr.poll_completion = false;
+	xfer->hdr.status = SCMI_SUCCESS;
+	xfer->tx.len = tx_size;
+	xfer->rx.len = raw->desc->max_msg_size;
+	memset(xfer->tx.buf, 0x00, raw->desc->max_msg_size);
+	if (tx_size)
+		memcpy(xfer->tx.buf, (u8 *)buf + sizeof(msg_hdr), tx_size);
+	*p = xfer;
+
+	/*
+	 * In-flight registration can temporarily fail in case of Raw messages
+	 * if the user injects messages without using monotonically increasing
+	 * sequence numbers, since the xfer (and the token) is finally released
+	 * by a deferred worker in Raw mode. Just retry.
+	 */
+	do {
+		ret = scmi_xfer_raw_inflight_register(raw->cinfo->handle, xfer);
+		if (ret) {
+			dev_warn(dev,
+				 "RAW - Cannot register xfer %d as in-flight - HDR:0x%08X\n",
+				 xfer->hdr.seq, msg_hdr);
+
+			if (retry) {
+				dev_info(dev,
+					 "...retrying[%d] inflight registration\n",
+					 retry);
+				msleep(raw->desc->max_rx_timeout_ms /
+						SCMI_XFER_RAW_MAX_RETRIES);
+			} else {
+				scmi_xfer_raw_put(raw->cinfo->handle, xfer);
+			}
+		}
+	} while (ret && retry--);
+
+	return ret;
+}
+
+static int scmi_do_xfer_raw_start(struct scmi_raw_mode_info *raw,
+				  struct scmi_xfer *xfer, bool async)
+{
+	int ret;
+	struct scmi_xfer_raw_waiter *rw;
+	struct device *dev = raw->cinfo->handle->dev;
+	struct scmi_chan_info *cinfo = raw->cinfo;
+
+	rw = scmi_xfer_raw_waiter_get(raw->wrk, xfer, async);
+	if (!rw) {
+		dev_warn(dev, "RAW - Cannot get a free waiter !\n");
+		return -ENOMEM;
+	}
+
+	trace_scmi_xfer_begin(xfer->transfer_id, xfer->hdr.id,
+			      xfer->hdr.protocol_id, xfer->hdr.seq,
+			      xfer->hdr.poll_completion);
+
+	reinit_completion(&xfer->done);
+	/* Make sure xfer state update is visible before sending */
+	smp_store_mb(xfer->state, SCMI_XFER_SENT_OK);
+
+	ret = raw->desc->ops->send_message(cinfo, xfer);
+	if (ret) {
+		dev_err(dev, "Failed to send RAW message %d\n", ret);
+		scmi_xfer_raw_waiter_put(raw->wrk, rw);
+		return ret;
+	}
+
+	trace_scmi_msg_dump(xfer->hdr.protocol_id, xfer->hdr.id, "CMND",
+			    xfer->hdr.seq, xfer->hdr.status,
+			    xfer->tx.buf, xfer->tx.len);
+
+	scmi_xfer_raw_waiter_enqueue(raw->wrk, rw);
+
+	return ret;
+}
+
+static int scmi_raw_message_send(struct scmi_raw_mode_info *raw,
+				 void *buf, size_t len, bool async)
+{
+	int ret;
+	struct scmi_xfer *xfer;
+
+	if (!raw->enabled)
+		return -ENODEV;
+
+	ret = scmi_xfer_raw_get_init(raw, buf, len, &xfer);
+	if (ret)
+		return ret;
+
+	ret = scmi_do_xfer_raw_start(raw, xfer, async);
+	if (ret)
+		scmi_xfer_raw_put(raw->cinfo->handle, xfer);
+
+	return ret;
+}
+
+static struct scmi_raw_buffer *
+scmi_raw_message_dequeue(struct scmi_raw_mode_info *raw, unsigned int idx)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&raw->msg_q_lock[idx], flags);
+	while (list_empty(&raw->msg_q[idx])) {
+		spin_unlock_irqrestore(&raw->msg_q_lock[idx], flags);
+
+		if (wait_event_interruptible(raw->wqs[idx],
+					     !list_empty(&raw->msg_q[idx])))
+			return NULL;
+
+		spin_lock_irqsave(&raw->msg_q_lock[idx], flags);
+	}
+	spin_unlock_irqrestore(&raw->msg_q_lock[idx], flags);
+
+	return scmi_raw_buffer_dequeue(raw, idx);
+}
+
+static int scmi_raw_message_receive(struct scmi_raw_mode_info *raw,
+				    void *buf, size_t len, size_t *size,
+				    unsigned int idx)
+{
+	int ret = 0;
+	struct scmi_raw_buffer *rb;
+
+	if (!raw->enabled)
+		return -ENODEV;
+
+	rb = scmi_raw_message_dequeue(raw, idx);
+	if (!rb) {
+		dev_warn(raw->cinfo->handle->dev,
+			 "RAW - No message available!\n");
+		return -ENODEV;
+	}
+
+	if (rb->msg.len <= len) {
+		memcpy(buf, rb->msg.buf, rb->msg.len);
+		*size = rb->msg.len;
+	} else {
+		ret = -ENOSPC;
+	}
+
+	scmi_raw_buffer_put(raw, rb, idx);
+
+	return ret;
+}
+
+static int scmi_xfer_raw_collect(struct scmi_xfer *xfer,
+				 void *msg, size_t *msg_len)
+{
+	__le32 *m;
+	size_t msg_size;
+
+	if (!xfer || !msg || !msg_len)
+		return -EINVAL;
+
+	/* Account for hdr ...*/
+	msg_size = xfer->rx.len + sizeof(u32);
+	/* ... and status if needed */
+	if (xfer->hdr.type != MSG_TYPE_NOTIFICATION)
+		msg_size += sizeof(u32);
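+
+	/*
+	 * Resulting layout of the collected message:
+	 *
+	 *   | HDR (4 bytes) | STATUS (4 bytes, absent for notifications) | PAYLOAD |
+	 */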
+
+	if (msg_size > *msg_len)
+		return -ENOSPC;
+
+	m = msg;
+	*m = cpu_to_le32(pack_scmi_header(&xfer->hdr));
+	if (xfer->hdr.type != MSG_TYPE_NOTIFICATION)
+		*++m = cpu_to_le32(xfer->hdr.status);
+
+	memcpy(++m, xfer->rx.buf, xfer->rx.len);
+
+	*msg_len = msg_size;
+
+	return 0;
+}
+
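+/*
+ * scmi_raw_message_report  - Report a bare message to userspace
+ *
+ * Called on reception of replies and notifications while in Raw mode:
+ * the bare message is copied into a free Raw buffer and queued on the
+ * proper queue for userspace to read.
+ */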
+void scmi_raw_message_report(void *r, struct scmi_xfer *xfer, unsigned int idx)
+{
+	int ret;
+	struct scmi_raw_buffer *rb;
+	struct device *dev;
+	struct scmi_raw_mode_info *raw = r;
+
+	if (!raw || !raw->enabled)
+		return;
+
+	dev = raw->cinfo->handle->dev;
+	rb = scmi_raw_buffer_get(raw, idx);
+	if (!rb) {
+		dev_warn(dev, "RAW[%d] - Cannot get a free RAW buffer\n", idx);
+		return;
+	}
+
+	ret = scmi_xfer_raw_collect(xfer, rb->msg.buf, &rb->msg.len);
+	if (ret) {
+		dev_warn(dev, "RAW - Cannot collect xfer into buffer !\n");
+		scmi_raw_buffer_put(raw, rb, idx);
+		return;
+	}
+
+	scmi_raw_buffer_enqueue(raw, rb, idx);
+}
+
+static void scmi_xfer_raw_fill(struct scmi_raw_mode_info *raw,
+			       struct scmi_xfer *xfer, u32 msg_hdr)
+{
+	/* Unpack received HDR as it is */
+	unpack_scmi_header(msg_hdr, &xfer->hdr);
+	xfer->hdr.seq = MSG_XTRACT_TOKEN(msg_hdr);
+
+	memset(xfer->rx.buf, 0x00, xfer->rx.len);
+
+	raw->desc->ops->fetch_response(raw->cinfo, xfer);
+}
+
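+/*
+ * scmi_raw_error_report  - Report a bad message to userspace
+ *
+ * Called on reception of unexpected or malformed replies: a temporary
+ * xfer, backed by a free Raw buffer, is used to fetch the message from
+ * the transport before queueing it on the dedicated errors queue.
+ */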
+void scmi_raw_error_report(void *r, u32 msg_hdr, void *priv)
+{
+	struct scmi_xfer xfer;
+	struct scmi_raw_buffer *rb;
+	struct scmi_raw_mode_info *raw = r;
+
+	if (!raw || !raw->enabled)
+		return;
+
+	rb = scmi_raw_buffer_get(raw, SCMI_RAW_ERRS_QUEUE);
+	if (!rb) {
+		dev_warn(raw->cinfo->handle->dev,
+			 "RAW[%d] - Cannot get a free RAW buffer\n",
+			 SCMI_RAW_ERRS_QUEUE);
+		return;
+	}
+
+	/* Use a raw buffer to provide rx space to the temp xfer */
+	xfer.rx.buf = rb->msg.buf;
+	/*
+	 * Allow max_msg_size...note that allocated rx.buf length is
+	 * max_msg_size + sizeof(u32).
+	 */
+	xfer.rx.len = raw->desc->max_msg_size;
+	if (priv)
+		/*
+		 * Any transport-provided priv must be passed back down
+		 * to transport
+		 */
+		smp_store_mb(xfer.priv, priv);
+
+	scmi_xfer_raw_fill(raw, &xfer, msg_hdr);
+	scmi_raw_message_report(raw, &xfer, SCMI_RAW_ERRS_QUEUE);
+
+	scmi_raw_buffer_put(raw, rb, SCMI_RAW_ERRS_QUEUE);
+}
+
+static inline ssize_t scmi_dbg_raw_mode_common_read(struct file *filp,
+						    char __user *buf,
+						    size_t count, loff_t *ppos,
+						    unsigned int idx)
+{
+	ssize_t cnt;
+	struct scmi_raw_data *rd = filp->private_data;
+
+	if (!rd->rx_size) {
+		int ret;
+
+		ret = scmi_raw_message_receive(rd->raw, rd->rx.buf, rd->rx.len,
+					       &rd->rx_size, idx);
+		if (ret) {
+			rd->rx_size = 0;
+			return ret;
+		}
+
+		/* Reset any previous filepos change, including writes */
+		*ppos = 0;
+	} else if (*ppos == rd->rx_size) {
+		/* Return EOF once the whole message has been read out */
+		rd->rx_size = 0;
+		return 0;
+	}
+
+	cnt = simple_read_from_buffer(buf, count, ppos,
+				      rd->rx.buf, rd->rx_size);
+
+	return cnt;
+}
+
+static ssize_t scmi_dbg_raw_mode_common_write(struct file *filp,
+					      const char __user *buf,
+					      size_t count, loff_t *ppos,
+					      bool async)
+{
+	int ret;
+	struct scmi_raw_data *rd = filp->private_data;
+
+	if (count > rd->tx.len - rd->tx_size)
+		return -ENOSPC;
+
+	/* On first write attempt @count carries the total full message size. */
+	if (!rd->tx_size)
+		rd->tx_req = count;
+
+	/* Gather a full message before sending it with a RAW xfer */
+	if (rd->tx_size < rd->tx_req) {
+		size_t cnt;
+
+		cnt = simple_write_to_buffer(rd->tx.buf, rd->tx.len, ppos,
+					     buf, count);
+		rd->tx_size += cnt;
+		if (cnt < count)
+			return cnt;
+	}
+
+	ret = scmi_raw_message_send(rd->raw, rd->tx.buf, rd->tx_size, async);
+
+	/* Reset the tx buffer state for the next message */
+	rd->tx_size = 0;
+
+	return ret ?: count;
+}
+
+static inline __poll_t
+scmi_dbg_raw_common_poll(struct file *filp, struct poll_table_struct *wait,
+			      unsigned int idx)
+{
+	unsigned long flags;
+	struct scmi_raw_data *rd = filp->private_data;
+
+	if (!rd->raw->enabled)
+		return 0;
+
+	poll_wait(filp, &rd->raw->wqs[idx], wait);
+
+	spin_lock_irqsave(&rd->raw->msg_q_lock[idx], flags);
+	if (!list_empty(&rd->raw->msg_q[idx])) {
+		spin_unlock_irqrestore(&rd->raw->msg_q_lock[idx], flags);
+		return EPOLLIN | EPOLLRDNORM;
+	}
+	spin_unlock_irqrestore(&rd->raw->msg_q_lock[idx], flags);
+
+	return 0;
+}
+
+static ssize_t scmi_dbg_raw_mode_message_read(struct file *filp,
+					      char __user *buf,
+					      size_t count, loff_t *ppos)
+{
+	return scmi_dbg_raw_mode_common_read(filp, buf, count, ppos,
+					     SCMI_RAW_REPLY_QUEUE);
+}
+
+static ssize_t scmi_dbg_raw_mode_message_write(struct file *filp,
+					       const char __user *buf,
+					       size_t count, loff_t *ppos)
+{
+	return scmi_dbg_raw_mode_common_write(filp, buf, count, ppos, false);
+}
+
+static __poll_t scmi_dbg_raw_mode_message_poll(struct file *filp,
+					       struct poll_table_struct *wait)
+{
+	return scmi_dbg_raw_common_poll(filp, wait, SCMI_RAW_REPLY_QUEUE);
+}
+
+static int scmi_dbg_raw_mode_open(struct inode *inode, struct file *filp)
+{
+	struct scmi_raw_mode_info *raw;
+	struct scmi_raw_data *rd;
+
+	if (!inode->i_private)
+		return -ENODEV;
+
+	raw = inode->i_private;
+	rd = kzalloc(sizeof(*rd), GFP_KERNEL);
+	if (!rd)
+		return -ENOMEM;
+
+	rd->rx.len = raw->desc->max_msg_size + sizeof(u32);
+	rd->rx.buf = kzalloc(rd->rx.len, GFP_KERNEL);
+	if (!rd->rx.buf) {
+		kfree(rd);
+		return -ENOMEM;
+	}
+
+	rd->tx.len = raw->desc->max_msg_size + sizeof(u32);
+	rd->tx.buf = kzalloc(rd->tx.len, GFP_KERNEL);
+	if (!rd->tx.buf) {
+		kfree(rd->rx.buf);
+		kfree(rd);
+		return -ENOMEM;
+	}
+
+	rd->raw = raw;
+	filp->private_data = rd;
+
+	return 0;
+}
+
+static int scmi_dbg_raw_mode_release(struct inode *inode, struct file *filp)
+{
+	struct scmi_raw_data *rd = filp->private_data;
+
+	kfree(rd->rx.buf);
+	kfree(rd->tx.buf);
+	kfree(rd);
+
+	return 0;
+}
+
+static ssize_t scmi_dbg_raw_mode_enable_write(struct file *filp,
+					      const char __user *buf,
+					      size_t count, loff_t *ppos)
+{
+	int ret;
+	bool enabled;
+	struct scmi_raw_data *rd = filp->private_data;
+
+	ret = kstrtobool_from_user(buf, count, &enabled);
+	if (ret)
+		return ret;
+
+	ret = scmi_xfer_raw_enable(rd->raw, enabled);
+	if (ret)
+		return ret;
+
+	return count;
+}
+
+static const struct file_operations scmi_dbg_raw_mode_enable_fops = {
+	.open = scmi_dbg_raw_mode_open,
+	.release = scmi_dbg_raw_mode_release,
+	.write = scmi_dbg_raw_mode_enable_write,
+	.owner = THIS_MODULE,
+};
+
+static const struct file_operations scmi_dbg_raw_mode_message_fops = {
+	.open = scmi_dbg_raw_mode_open,
+	.release = scmi_dbg_raw_mode_release,
+	.read = scmi_dbg_raw_mode_message_read,
+	.write = scmi_dbg_raw_mode_message_write,
+	.poll = scmi_dbg_raw_mode_message_poll,
+	.owner = THIS_MODULE,
+};
+
+static ssize_t scmi_dbg_raw_mode_message_async_write(struct file *filp,
+						     const char __user *buf,
+						     size_t count, loff_t *ppos)
+{
+	return scmi_dbg_raw_mode_common_write(filp, buf, count, ppos, true);
+}
+
+static const struct file_operations scmi_dbg_raw_mode_message_async_fops = {
+	.open = scmi_dbg_raw_mode_open,
+	.release = scmi_dbg_raw_mode_release,
+	.read = scmi_dbg_raw_mode_message_read,
+	.write = scmi_dbg_raw_mode_message_async_write,
+	.poll = scmi_dbg_raw_mode_message_poll,
+	.owner = THIS_MODULE,
+};
+
+static ssize_t scmi_dbg_raw_mode_notif_read(struct file *filp,
+					    char __user *buf,
+					    size_t count, loff_t *ppos)
+{
+	return scmi_dbg_raw_mode_common_read(filp, buf, count, ppos,
+					     SCMI_RAW_NOTIF_QUEUE);
+}
+
+static __poll_t scmi_dbg_raw_mode_notif_poll(struct file *filp,
+					     struct poll_table_struct *wait)
+{
+	return scmi_dbg_raw_common_poll(filp, wait, SCMI_RAW_NOTIF_QUEUE);
+}
+
+static const struct file_operations scmi_dbg_raw_mode_notification_fops = {
+	.open = scmi_dbg_raw_mode_open,
+	.release = scmi_dbg_raw_mode_release,
+	.read = scmi_dbg_raw_mode_notif_read,
+	.poll = scmi_dbg_raw_mode_notif_poll,
+	.owner = THIS_MODULE,
+};
+
+static ssize_t scmi_dbg_raw_mode_errors_read(struct file *filp,
+					     char __user *buf,
+					     size_t count, loff_t *ppos)
+{
+	return scmi_dbg_raw_mode_common_read(filp, buf, count, ppos,
+					     SCMI_RAW_ERRS_QUEUE);
+}
+
+static __poll_t scmi_dbg_raw_mode_errors_poll(struct file *filp,
+					      struct poll_table_struct *wait)
+{
+	return scmi_dbg_raw_common_poll(filp, wait, SCMI_RAW_ERRS_QUEUE);
+}
+
+static const struct file_operations scmi_dbg_raw_mode_errors_fops = {
+	.open = scmi_dbg_raw_mode_open,
+	.release = scmi_dbg_raw_mode_release,
+	.read = scmi_dbg_raw_mode_errors_read,
+	.poll = scmi_dbg_raw_mode_errors_poll,
+	.owner = THIS_MODULE,
+};
+
+static int scmi_xfer_raw_free_bufs_init(struct scmi_raw_mode_info *raw, int idx)
+{
+	int i;
+	struct scmi_raw_buffer *rb;
+	struct device *dev = raw->cinfo->handle->dev;
+
+	rb = devm_kcalloc(dev, raw->tx_max_msg, sizeof(*rb), GFP_KERNEL);
+	if (!rb)
+		return -ENOMEM;
+
+	spin_lock_init(&raw->free_bufs_lock[idx]);
+	INIT_LIST_HEAD(&raw->free_raw_bufs[idx]);
+	for (i = 0; i < raw->tx_max_msg; i++, rb++) {
+		rb->max_len = raw->desc->max_msg_size + sizeof(u32);
+		rb->msg.buf = devm_kzalloc(dev, rb->max_len, GFP_KERNEL);
+		if (!rb->msg.buf)
+			return -ENOMEM;
+		scmi_raw_buffer_put(raw, rb, idx);
+	}
+
+	spin_lock_init(&raw->msg_q_lock[idx]);
+	INIT_LIST_HEAD(&raw->msg_q[idx]);
+	init_waitqueue_head(&raw->wqs[idx]);
+
+	return 0;
+}
+
+static int scmi_xfer_raw_worker_init(struct scmi_raw_mode_info *raw)
+{
+	int i;
+	struct scmi_xfer_raw_waiter *rw;
+	struct device *dev = raw->cinfo->handle->dev;
+
+	rw = devm_kcalloc(dev, raw->tx_max_msg, sizeof(*rw), GFP_KERNEL);
+	if (!rw)
+		return -ENOMEM;
+
+	raw->wrk = devm_kzalloc(dev, sizeof(*raw->wrk), GFP_KERNEL);
+	if (!raw->wrk)
+		return -ENOMEM;
+
+	raw->wrk->wait_wq = alloc_workqueue("scmi-raw-wait-wq-%d",
+					    WQ_UNBOUND | WQ_FREEZABLE |
+					    WQ_HIGHPRI | WQ_SYSFS, 0, 0);
+	if (!raw->wrk->wait_wq)
+		return -ENOMEM;
+
+	raw->wrk->max_tmo = msecs_to_jiffies(raw->desc->max_rx_timeout_ms);
+
+	raw->wrk->raw = raw;
+	mutex_init(&raw->wrk->free_mtx);
+	INIT_LIST_HEAD(&raw->wrk->active_waiters);
+	mutex_init(&raw->wrk->active_mtx);
+	INIT_LIST_HEAD(&raw->wrk->free_waiters);
+	for (i = 0; i < raw->tx_max_msg; i++, rw++) {
+		init_completion(&rw->async_response);
+		scmi_xfer_raw_waiter_put(raw->wrk, rw);
+	}
+	INIT_WORK(&raw->wrk->waiters_work, scmi_xfer_raw_worker);
+	init_waitqueue_head(&raw->wrk->waiters_wq);
+
+	/* kick waiter work */
+	queue_work(raw->wrk->wait_wq, &raw->wrk->waiters_work);
+
+	return 0;
+}
+
+static int scmi_raw_mode_setup(struct scmi_raw_mode_info *raw)
+{
+	int ret;
+	void *gid;
+	struct device *dev = raw->cinfo->handle->dev;
+
+	gid = devres_open_group(dev, NULL, GFP_KERNEL);
+	if (!gid)
+		return -ENOMEM;
+
+	ret = scmi_xfer_raw_free_bufs_init(raw, SCMI_RAW_REPLY_QUEUE);
+	if (ret)
+		goto err;
+
+	ret = scmi_xfer_raw_free_bufs_init(raw, SCMI_RAW_NOTIF_QUEUE);
+	if (ret)
+		goto err;
+
+	ret = scmi_xfer_raw_free_bufs_init(raw, SCMI_RAW_ERRS_QUEUE);
+	if (ret)
+		goto err;
+
+	ret = scmi_xfer_raw_worker_init(raw);
+	if (ret)
+		goto err;
+
+	devres_close_group(dev, gid);
+	raw->gid = gid;
+
+	return 0;
+
+err:
+	devres_release_group(dev, gid);
+	return ret;
+}
+
+void *scmi_raw_mode_init(struct scmi_chan_info *cinfo,
+			 const struct scmi_desc *desc, int tx_max_msg)
+{
+	int ret;
+	struct scmi_raw_mode_info *raw;
+	struct device *dev;
+
+	if (!cinfo || !desc)
+		return ERR_PTR(-EINVAL);
+
+	dev = cinfo->handle->dev;
+	raw = devm_kzalloc(dev, sizeof(*raw), GFP_KERNEL);
+	if (!raw)
+		return ERR_PTR(-ENOMEM);
+
+	raw->cinfo = cinfo;
+	raw->desc = desc;
+	raw->tx_max_msg = tx_max_msg;
+
+	ret = scmi_raw_mode_setup(raw);
+	if (ret) {
+		devm_kfree(dev, raw);
+		return ERR_PTR(ret);
+	}
+
+	raw->dentry = debugfs_create_dir("scmi_raw", NULL);
+	if (IS_ERR(raw->dentry)) {
+		ret = PTR_ERR(raw->dentry);
+		devres_release_group(dev, raw->gid);
+		devm_kfree(dev, raw);
+		return ERR_PTR(ret);
+	}
+
+	debugfs_create_file("enable", 0200, raw->dentry, raw,
+			    &scmi_dbg_raw_mode_enable_fops);
+
+	debugfs_create_u32("transport_rx_timeout_ms", 0400, raw->dentry,
+			   (u32 *)&raw->desc->max_rx_timeout_ms);
+
+	debugfs_create_u32("transport_max_msg_size", 0400, raw->dentry,
+			   (u32 *)&raw->desc->max_msg_size);
+
+	debugfs_create_u32("transport_tx_max_msg", 0400, raw->dentry,
+			   (u32 *)&raw->tx_max_msg);
+
+	debugfs_create_file("message", 0600, raw->dentry, raw,
+			    &scmi_dbg_raw_mode_message_fops);
+
+	debugfs_create_file("message_async", 0600, raw->dentry, raw,
+			    &scmi_dbg_raw_mode_message_async_fops);
+
+	debugfs_create_file("notification", 0400, raw->dentry, raw,
+			    &scmi_dbg_raw_mode_notification_fops);
+
+	debugfs_create_file("errors", 0400, raw->dentry, raw,
+			    &scmi_dbg_raw_mode_errors_fops);
+
+	return raw;
+}
+
+void scmi_raw_mode_cleanup(void *r)
+{
+	struct scmi_raw_mode_info *raw = r;
+
+	if (!raw)
+		return;
+
+	debugfs_remove_recursive(raw->dentry);
+	scmi_xfer_raw_enable(raw, false);
+
+	raw->wrk->wait_wq_shutdown = true;
+	wake_up_interruptible(&raw->wrk->waiters_wq);
+	cancel_work_sync(&raw->wrk->waiters_work);
+	destroy_workqueue(raw->wrk->wait_wq);
+}
diff --git a/drivers/firmware/arm_scmi/raw_mode.h b/drivers/firmware/arm_scmi/raw_mode.h
new file mode 100644
index 000000000000..9988fd31ec91
--- /dev/null
+++ b/drivers/firmware/arm_scmi/raw_mode.h
@@ -0,0 +1,28 @@ 
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * System Control and Management Interface (SCMI) Message Protocol
+ * Raw mode support header.
+ *
+ * Copyright (C) 2022 ARM Ltd.
+ */
+#ifndef _SCMI_RAW_MODE_H
+#define _SCMI_RAW_MODE_H
+
+#include "common.h"
+
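+/* Raw mode delivery queues for replies, notifications and bad messages */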
+enum {
+	SCMI_RAW_REPLY_QUEUE,
+	SCMI_RAW_NOTIF_QUEUE,
+	SCMI_RAW_ERRS_QUEUE,
+	SCMI_RAW_MAX_QUEUE
+};
+
+void *scmi_raw_mode_init(struct scmi_chan_info *cinfo,
+			 const struct scmi_desc *desc, int tx_max_msg);
+void scmi_raw_mode_cleanup(void *raw);
+
+void scmi_raw_message_report(void *raw, struct scmi_xfer *xfer,
+			     unsigned int idx);
+void scmi_raw_error_report(void *raw, u32 msg_hdr, void *priv);
+
+#endif /* _SCMI_RAW_MODE_H */