From patchwork Sun Feb 16 17:21:09 2020
X-Patchwork-Submitter: Noralf Trønnes
X-Patchwork-Id: 11384493
From: Noralf Trønnes
To: broonie@kernel.org, balbi@kernel.org, lee.jones@linaro.org
Subject: [RFC 1/9] regmap: Add USB support
Date: Sun, 16 Feb 2020 18:21:09 +0100
Message-Id: <20200216172117.49832-2-noralf@tronnes.org>
X-Mailer: git-send-email 2.23.0
In-Reply-To:
<20200216172117.49832-1-noralf@tronnes.org>
References: <20200216172117.49832-1-noralf@tronnes.org>
Cc: linux-usb@vger.kernel.org, dri-devel@lists.freedesktop.org

Add support for regmap over USB for use with the Multifunction USB Device.
Two endpoints (IN/OUT) are used. Up to 255 regmaps are supported on one USB
interface. The register index width is always 32 bits, but the register
value can be 8, 16 or 32 bits wide. LZ4 compression is supported on bulk
transfers.

Signed-off-by: Noralf Trønnes
---
 drivers/base/regmap/Kconfig      |    8 +-
 drivers/base/regmap/Makefile     |    1 +
 drivers/base/regmap/regmap-usb.c | 1026 ++++++++++++++++++++++++++++++
 include/linux/regmap.h           |   23 +
 include/linux/regmap_usb.h       |   97 +++
 5 files changed, 1154 insertions(+), 1 deletion(-)
 create mode 100644 drivers/base/regmap/regmap-usb.c
 create mode 100644 include/linux/regmap_usb.h

diff --git a/drivers/base/regmap/Kconfig b/drivers/base/regmap/Kconfig
index 0fd6f97ee523..6c937c196825 100644
--- a/drivers/base/regmap/Kconfig
+++ b/drivers/base/regmap/Kconfig
@@ -4,7 +4,7 @@
 # subsystems should select the appropriate symbols.
config REGMAP - default y if (REGMAP_I2C || REGMAP_SPI || REGMAP_SPMI || REGMAP_W1 || REGMAP_AC97 || REGMAP_MMIO || REGMAP_IRQ || REGMAP_SCCB || REGMAP_I3C) + default y if (REGMAP_I2C || REGMAP_SPI || REGMAP_SPMI || REGMAP_W1 || REGMAP_AC97 || REGMAP_MMIO || REGMAP_IRQ || REGMAP_SCCB || REGMAP_I3C || REGMAP_USB) select IRQ_DOMAIN if REGMAP_IRQ bool @@ -53,3 +53,9 @@ config REGMAP_SCCB config REGMAP_I3C tristate depends on I3C + +config REGMAP_USB + tristate + depends on USB + select LZ4_COMPRESS + select LZ4_DECOMPRESS diff --git a/drivers/base/regmap/Makefile b/drivers/base/regmap/Makefile index ff6c7d8ec1cd..7e6932f100ea 100644 --- a/drivers/base/regmap/Makefile +++ b/drivers/base/regmap/Makefile @@ -17,3 +17,4 @@ obj-$(CONFIG_REGMAP_W1) += regmap-w1.o obj-$(CONFIG_REGMAP_SOUNDWIRE) += regmap-sdw.o obj-$(CONFIG_REGMAP_SCCB) += regmap-sccb.o obj-$(CONFIG_REGMAP_I3C) += regmap-i3c.o +obj-$(CONFIG_REGMAP_USB) += regmap-usb.o diff --git a/drivers/base/regmap/regmap-usb.c b/drivers/base/regmap/regmap-usb.c new file mode 100644 index 000000000000..bb4f0df44d1d --- /dev/null +++ b/drivers/base/regmap/regmap-usb.c @@ -0,0 +1,1026 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Register map access API - USB support + * + * Copyright 2020 Noralf Trønnes + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "internal.h" + +/** + * DOC: overview + * + * This regmap over USB supports multiple regmaps over a single USB interface. + * Two endpoints are needed and the first IN and OUT endpoints are used. + * A REGMAP_USB_DT_INTERFACE descriptor request is issued to get the number of + * regmaps supported on the interface. A REGMAP_USB_DT_MAP descriptor request is + * issued to get details about a specific regmap. This is done when + * devm_regmap_init_usb() is called to get access to a regmap. 
+ *
+ * A regmap transfer begins with the host sending OUT a &regmap_usb_header which
+ * contains info about the index of the regmap, the register address etc. Next
+ * it does an IN or OUT transfer of the register value(s) depending on whether
+ * it is a read or a write. This transfer can be compressed using lz4 if the
+ * device supports it. Finally a &regmap_usb_status IN request is issued to
+ * receive the status of the transfer.
+ *
+ * If a transfer fails with the error code -EPIPE, a reset control request
+ * (REGMAP_USB_REQ_PROTOCOL_RESET) is issued. The device should reset its state
+ * machine and return its previous error code if any. The device can halt its
+ * IN/OUT endpoints to force the host to perform a reset if it fails to
+ * understand a transfer.
+ */
+
+/* Provides exclusive interface access */
+struct regmap_usb_interface {
+	struct usb_interface *interface;
+	struct mutex lock; /* Ensures exclusive interface access */
+	unsigned int refcount;
+	struct list_head link;
+
+	u32 tag;
+};
+
+struct regmap_usb_context;
+
+struct regmap_usb_transfer {
+	struct regmap_usb_context *ctx;
+	struct usb_anchor anchor;
+	struct urb *header_urb;
+	struct urb *buf_out_urb;
+	struct urb *buf_in_urb;
+	void *buf;
+	size_t bufsize;
+	struct urb *status_urb;
+	spinlock_t lock; /* Protect dynamic values */
+	u32 tag;
+	int status;
+
+	u8 compression;
+	void *buf_in_dest;
+	unsigned int length;
+	unsigned int actual_length;
+
+	ktime_t start; /* FIXME: Temporary debug/perf aid */
+};
+
+struct regmap_usb_context {
+	struct usb_device *usb;
+	struct regmap_usb_interface *ruif;
+	u8 ifnum;
+	unsigned int in_pipe;
+	unsigned int out_pipe;
+	u16 index;
+	unsigned int val_bytes;
+	void *lz4_comp_mem;
+	u8 compression;
+	unsigned int max_transfer_size;
+	struct regmap_usb_transfer *transfers[2];
+#ifdef CONFIG_DEBUG_FS
+	u64 stats_length;
+	u64 stats_actual_length;
+	unsigned int num_resets;
+	unsigned int num_errors;
+#endif
+};
+
+/* FIXME: Temporary debugging aid */
+static
unsigned int debug = 8; + +#define udebug(level, fmt, ...) \ +do { \ + if ((level) <= debug) \ + pr_debug(fmt, ##__VA_ARGS__); \ +} while (0) + +static LIST_HEAD(regmap_usb_interfaces); +static DEFINE_MUTEX(regmap_usb_interfaces_lock); + +static struct regmap_usb_interface *regmap_usb_interface_get(struct usb_interface *interface) +{ + struct regmap_usb_interface *ruif, *entry; + + mutex_lock(®map_usb_interfaces_lock); + list_for_each_entry(entry, ®map_usb_interfaces, link) + if (entry->interface == interface) { + ruif = entry; + ruif->refcount++; + goto out_unlock; + } + + ruif = kzalloc(sizeof(*ruif), GFP_KERNEL); + if (!ruif) { + ruif = ERR_PTR(-ENOMEM); + goto out_unlock; + } + + mutex_init(&ruif->lock); + ruif->interface = interface; + ruif->refcount++; + list_add(&ruif->link, ®map_usb_interfaces); +out_unlock: + mutex_unlock(®map_usb_interfaces_lock); + + return ruif; +} + +static void regmap_usb_interface_put(struct regmap_usb_interface *ruif) +{ + mutex_lock(®map_usb_interfaces_lock); + if (--ruif->refcount) + goto out_unlock; + + list_del(&ruif->link); + mutex_destroy(&ruif->lock); + kfree(ruif); +out_unlock: + mutex_unlock(®map_usb_interfaces_lock); +} + +#ifdef CONFIG_DEBUG_FS +static void regmap_usb_stats_add_length(struct regmap_usb_context *ctx, unsigned int len) +{ + ctx->stats_length += len; + /* Did it wrap around? 
*/ + if (ctx->stats_length <= len && ctx->stats_actual_length) { + ctx->stats_length = len; + ctx->stats_actual_length = 0; + } +} + +#define regmap_usb_stats_add(c, v) \ + (c) += v +#else +static void regmap_usb_stats_add_length(struct regmap_usb_context *ctx, unsigned int len) +{ +} + +#define regmap_usb_stats_add(c, v) +#endif + +static int regmap_usb_protocol_reset(struct regmap_usb_context *ctx) +{ + u8 *prev_errno; + int ret; + + regmap_usb_stats_add(ctx->num_resets, 1); + + prev_errno = kmalloc(1, GFP_ATOMIC); + if (!prev_errno) + return -ENOMEM; + + usb_clear_halt(ctx->usb, ctx->out_pipe); + usb_clear_halt(ctx->usb, ctx->in_pipe); + + ret = usb_control_msg(ctx->usb, usb_rcvctrlpipe(ctx->usb, 0), + REGMAP_USB_REQ_PROTOCOL_RESET, + USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_INTERFACE, + 0, ctx->ifnum, prev_errno, 1, + USB_CTRL_SET_TIMEOUT); + udebug(0, "%s: ret=%d, prev_errno=%u\n", __func__, ret, *prev_errno); + if (ret < 0 || ret != 1) { + /* FIXME: Try a USB port reset? */ + ret = -EPIPE; + goto free; + } + + ret = *prev_errno; +free: + kfree(prev_errno); + + return ret ? 
-ret : -EPIPE; +} + +static void regmap_usb_header_urb_completion(struct urb *urb) +{ + struct regmap_usb_transfer *transfer = urb->context; + unsigned long flags; + + spin_lock_irqsave(&transfer->lock, flags); + if (urb->status) + transfer->status = urb->status; + else if (urb->actual_length != urb->transfer_buffer_length) + transfer->status = -EREMOTEIO; + transfer->start = ktime_get(); + spin_unlock_irqrestore(&transfer->lock, flags); + + udebug(4, "%s: transfer: status=%d (%d), tag=%u\n", + __func__, urb->status, transfer->status, transfer->tag); +} + +static void regmap_usb_status_urb_completion(struct urb *urb) +{ + struct regmap_usb_status *status = urb->transfer_buffer; + struct regmap_usb_transfer *transfer = urb->context; + unsigned long flags; + int stat; + + udebug(4, "%s: urb->status=%d, signature=0x%x, tag=%u (expected %u)\n", + __func__, urb->status, le32_to_cpu(status->signature), + le16_to_cpu(status->tag), transfer->tag); + + if (urb->status) + stat = urb->status; + else if (urb->actual_length != urb->transfer_buffer_length) + stat = -EREMOTEIO; + else if (le32_to_cpu(status->signature) != REGMAP_USB_STATUS_SIGNATURE || + le16_to_cpu(status->tag) != transfer->tag) + stat = -EBADMSG; + else + stat = -status->status; + + spin_lock_irqsave(&transfer->lock, flags); + if (!transfer->status) + transfer->status = stat; + spin_unlock_irqrestore(&transfer->lock, flags); +} + +static long mud_drm_throughput(ktime_t begin, ktime_t end, size_t len) +{ + long throughput; + + throughput = ktime_us_delta(end, begin); + throughput = throughput ? 
(len * 1000) / throughput : 0; + throughput = throughput * 1000 / 1024; + + return throughput; +} + +static void regmap_usb_buf_in_urb_completion(struct urb *urb) +{ + struct regmap_usb_transfer *transfer = urb->context; + unsigned long flags; + ktime_t start, end; + + spin_lock_irqsave(&transfer->lock, flags); + if (urb->status && !transfer->status) + transfer->status = urb->status; + transfer->actual_length = urb->actual_length; + start = transfer->start; + spin_unlock_irqrestore(&transfer->lock, flags); + + end = ktime_get(); + + udebug(4, "%s: IN: status=%d, tag=%u, %ld kB/s (%lld ms), len=%u\n", + __func__, urb->status, transfer->tag, + mud_drm_throughput(start, end, urb->actual_length), + ktime_ms_delta(end, start), urb->actual_length); +} + +static void regmap_usb_buf_out_urb_completion(struct urb *urb) +{ + struct regmap_usb_transfer *transfer = urb->context; + unsigned long flags; + ktime_t start, end; + + spin_lock_irqsave(&transfer->lock, flags); + if (!transfer->status) { + if (urb->status) + transfer->status = urb->status; + else if (urb->actual_length != urb->transfer_buffer_length) + transfer->status = -EREMOTEIO; + } + start = transfer->start; + spin_unlock_irqrestore(&transfer->lock, flags); + + end = ktime_get(); + + udebug(4, "%s: OUT: status=%d, tag=%u, %ld kB/s (%lld ms), len=%u\n", + __func__, transfer->status, transfer->tag, + mud_drm_throughput(start, end, urb->transfer_buffer_length), + ktime_ms_delta(end, start), urb->transfer_buffer_length); +} + +static struct urb *regmap_usb_alloc_urb(struct usb_device *usb, unsigned int pipe, + size_t size, usb_complete_t complete_fn, + struct regmap_usb_transfer *transfer) +{ + void *buf = NULL; + struct urb *urb; + + urb = usb_alloc_urb(0, GFP_KERNEL); + if (!urb) + return NULL; + + if (size) { + buf = usb_alloc_coherent(usb, size, GFP_KERNEL, &urb->transfer_dma); + if (!buf) { + usb_free_urb(urb); + return NULL; + } + } + + usb_fill_bulk_urb(urb, usb, pipe, buf, size, complete_fn, transfer); + if 
(size) + urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP; + + return urb; +} + +static void regmap_usb_free_transfer(struct regmap_usb_transfer *transfer) +{ + struct urb *urb; + + if (!transfer) + return; + + urb = transfer->header_urb; + if (urb) + usb_free_coherent(urb->dev, urb->transfer_buffer_length, + urb->transfer_buffer, urb->transfer_dma); + usb_free_urb(urb); + + urb = transfer->status_urb; + if (urb) + usb_free_coherent(urb->dev, urb->transfer_buffer_length, + urb->transfer_buffer, urb->transfer_dma); + usb_free_urb(urb); + + usb_free_urb(transfer->buf_in_urb); + usb_free_urb(transfer->buf_out_urb); + kfree(transfer->buf); + kfree(transfer); +} + +static struct regmap_usb_transfer *regmap_usb_alloc_transfer(struct regmap_usb_context *ctx) +{ + struct regmap_usb_transfer *transfer; + + transfer = kzalloc(sizeof(*transfer), GFP_KERNEL); + if (!transfer) + return NULL; + + init_usb_anchor(&transfer->anchor); + spin_lock_init(&transfer->lock); + transfer->ctx = ctx; + + transfer->header_urb = regmap_usb_alloc_urb(ctx->usb, ctx->out_pipe, + sizeof(struct regmap_usb_header), + regmap_usb_header_urb_completion, + transfer); + if (!transfer->header_urb) + goto error; + + transfer->status_urb = regmap_usb_alloc_urb(ctx->usb, ctx->in_pipe, + sizeof(struct regmap_usb_status), + regmap_usb_status_urb_completion, + transfer); + if (!transfer->status_urb) + goto error; + + transfer->buf_in_urb = regmap_usb_alloc_urb(ctx->usb, ctx->in_pipe, 0, + regmap_usb_buf_in_urb_completion, + transfer); + if (!transfer->buf_in_urb) + goto error; + + transfer->buf_out_urb = regmap_usb_alloc_urb(ctx->usb, ctx->out_pipe, 0, + regmap_usb_buf_out_urb_completion, + transfer); + if (!transfer->buf_out_urb) + goto error; + + transfer->bufsize = ctx->max_transfer_size; +retry: + transfer->buf = kmalloc(transfer->bufsize, GFP_KERNEL); + if (!transfer->buf) { + if (transfer->bufsize < 32) /* Give up */ + goto error; + + transfer->bufsize /= 2; + goto retry; + } + + return transfer; + +error: 
+ regmap_usb_free_transfer(transfer); + + return NULL; +} + +static void regmap_usb_free_transfers(struct regmap_usb_context *ctx) +{ + regmap_usb_free_transfer(ctx->transfers[0]); + regmap_usb_free_transfer(ctx->transfers[1]); +} + +static int regmap_usb_alloc_transfers(struct regmap_usb_context *ctx) +{ + ctx->transfers[0] = regmap_usb_alloc_transfer(ctx); + ctx->transfers[1] = regmap_usb_alloc_transfer(ctx); + if (!ctx->transfers[0] || !ctx->transfers[1]) { + regmap_usb_free_transfers(ctx); + return -ENOMEM; + } + + return 0; +} + +static int regmap_usb_submit_urb(struct urb *urb, struct regmap_usb_transfer *transfer) +{ + int ret; + + usb_anchor_urb(urb, &transfer->anchor); + ret = usb_submit_urb(urb, GFP_KERNEL); + if (ret) + usb_unanchor_urb(urb); + + return ret; +} + +static void regmap_usb_kill_transfers(struct regmap_usb_context *ctx) +{ + usb_kill_anchored_urbs(&ctx->transfers[0]->anchor); + usb_kill_anchored_urbs(&ctx->transfers[1]->anchor); +} + +static int regmap_usb_submit_transfer(struct regmap_usb_transfer *transfer, + unsigned int regnr, u32 flags, void *buf, size_t len) +{ + struct regmap_usb_context *ctx = transfer->ctx; + struct regmap_usb_header *header; + struct urb *urb; + int ret; + + spin_lock_irq(&transfer->lock); + transfer->actual_length = 0; + transfer->status = 0; + transfer->tag = ++ctx->ruif->tag; + spin_unlock_irq(&transfer->lock); + + udebug(3, "%s: regnr=0x%x, in=%u flags=0x%x, len=%zu, transfer->buf=%s, tag=%u\n", + __func__, regnr, !!(flags & REGMAP_USB_HEADER_FLAG_IN), flags, len, + buf == transfer->buf ? 
"yes" : "no", ctx->ruif->tag); + + header = transfer->header_urb->transfer_buffer; + header->signature = cpu_to_le32(REGMAP_USB_HEADER_SIGNATURE); + header->index = cpu_to_le16(ctx->index); + header->tag = cpu_to_le16(ctx->ruif->tag); + header->flags = cpu_to_le32(flags); + header->regnr = cpu_to_le32(regnr); + header->length = cpu_to_le32(len); + + ret = regmap_usb_submit_urb(transfer->header_urb, transfer); + if (ret) + goto error; + + if (flags & REGMAP_USB_HEADER_FLAG_IN) + urb = transfer->buf_in_urb; + else + urb = transfer->buf_out_urb; + + urb->transfer_buffer = buf; + urb->transfer_buffer_length = len; + + ret = regmap_usb_submit_urb(urb, transfer); + if (ret) + goto error; + + ret = regmap_usb_submit_urb(transfer->status_urb, transfer); + if (ret) + goto error; + + return 0; + +error: + regmap_usb_kill_transfers(ctx); + + return ret; +} + +static int regmap_usb_wait_anchor(struct regmap_usb_transfer *transfer) +{ + int remain; + + remain = usb_wait_anchor_empty_timeout(&transfer->anchor, 5000); + if (!remain) { + /* Kill pending first */ + if (transfer == transfer->ctx->transfers[0]) + usb_kill_anchored_urbs(&transfer->ctx->transfers[1]->anchor); + else + usb_kill_anchored_urbs(&transfer->ctx->transfers[0]->anchor); + usb_kill_anchored_urbs(&transfer->anchor); + + return -ETIMEDOUT; + } + + return 0; +} + +static int regmap_usb_transfer_decompress(struct regmap_usb_transfer *transfer) +{ + unsigned int length, actual_length; + u8 compression; + void *dest; + int ret; + + spin_lock_irq(&transfer->lock); + length = transfer->buf_in_urb->transfer_buffer_length; + actual_length = transfer->actual_length; + compression = transfer->compression; + transfer->compression = 0; + dest = transfer->buf_in_dest; + spin_unlock_irq(&transfer->lock); + + udebug(3, "%s: dest=%px length=%u actual_length=%u\n", + __func__, dest, length, actual_length); + + if (!actual_length) /* This transfer has not been used */ + return 0; + + if (!length) /* FIXME: necessary? 
*/ + return -EINVAL; + + regmap_usb_stats_add(transfer->ctx->stats_actual_length, actual_length); + + if (!compression) { + if (actual_length != length) + return -EREMOTEIO; + + return 0; + } + + if (actual_length == length) { /* Device did not compress */ + memcpy(dest, transfer->buf, length); + return 0; + } + + if (compression & REGMAP_USB_COMPRESSION_LZ4) { + ret = LZ4_decompress_safe(transfer->buf, dest, + actual_length, transfer->bufsize); + udebug(3, " decompress: ret=%d\n", ret); + } else { + return -EINVAL; + } + + if (ret < 0 || ret != length) + return -EREMOTEIO; + + return 0; +} + +static int regmap_usb_transfer(struct regmap_usb_context *ctx, bool in, + const void *reg, void *buf, size_t len) +{ + struct regmap_usb_transfer *transfer = NULL; + unsigned int i, regnr, actual_length; + size_t chunk, trlen, complen = 0; + size_t orglen = len; + ktime_t start, end; + void *trbuf; + u32 flags; + int ret; + + regnr = *(u32 *)reg; + + for (i = 0; i < 2; i++) { + struct regmap_usb_transfer *transfer = ctx->transfers[i]; + + spin_lock_irq(&transfer->lock); + transfer->actual_length = 0; + transfer->compression = 0; + transfer->status = 0; + spin_unlock_irq(&transfer->lock); + } + + /* FIXME: This did not work */ + /* Use 2 transfers to maximize compressed transfers */ + if (0 && ctx->compression && + ctx->transfers[0]->bufsize == ctx->transfers[1]->bufsize && + len > 128 && len <= ctx->transfers[0]->bufsize) + complen = len / 2; + + mutex_lock(&ctx->ruif->lock); + + udebug(2, "\n%s: regnr=0x%x, in=%u len=%zu, buf=%px, is_vmalloc=%u\n", + __func__, regnr, in, len, buf, is_vmalloc_addr(buf)); + + start = ktime_get(); + + i = 0; + while (len) { + transfer = ctx->transfers[i]; + i = !i; + + chunk = min(complen ? 
complen : transfer->bufsize, len); + trlen = chunk; + flags = 0; + + regmap_usb_stats_add_length(ctx, chunk); + + ret = regmap_usb_wait_anchor(transfer); + if (ret) { + udebug(0, "FAIL first wait %d\n", ret); + goto error; + } + + spin_lock_irq(&transfer->lock); + ret = transfer->status; + actual_length = transfer->actual_length; + spin_unlock_irq(&transfer->lock); + if (ret) { + udebug(0, "FAIL transfer %d\n", ret); + goto error; + } + + if (in && ctx->compression && actual_length) { + ret = regmap_usb_transfer_decompress(transfer); + if (ret) + goto error; + } + + trbuf = buf; + + if (!in) { + ret = 0; + /* LZ4_minLength = 13, use the next power of two value */ + if (ctx->compression & REGMAP_USB_COMPRESSION_LZ4 && chunk >= 16) { + ret = LZ4_compress_default(buf, transfer->buf, chunk, + chunk, ctx->lz4_comp_mem); + udebug(3, " compress[%u](chunk=%zu): ret=%d\n", !i, chunk, ret); + } + if (ret > 0) { + flags |= REGMAP_USB_COMPRESSION_LZ4; + trbuf = transfer->buf; + trlen = ret; + } else if (is_vmalloc_addr(buf)) { + memcpy(transfer->buf, buf, chunk); + trbuf = transfer->buf; + } + regmap_usb_stats_add(ctx->stats_actual_length, trlen); + } else { + flags |= REGMAP_USB_HEADER_FLAG_IN; + if (ctx->compression & REGMAP_USB_COMPRESSION_LZ4 && trlen >= 16) { + flags |= REGMAP_USB_COMPRESSION_LZ4; + trbuf = transfer->buf; + + spin_lock_irq(&transfer->lock); + transfer->compression = REGMAP_USB_COMPRESSION_LZ4; + transfer->buf_in_dest = buf; + spin_unlock_irq(&transfer->lock); + } + } + + ret = regmap_usb_submit_transfer(transfer, regnr, flags, trbuf, trlen); + if (ret) { + udebug(0, "FAIL submit %d\n", ret); + goto error; + } + + len -= chunk; + buf += chunk; + regnr += chunk / ctx->val_bytes; + } + + ret = regmap_usb_wait_anchor(transfer); + if (ret) { + udebug(0, "FAIL second wait%d\n", ret); + goto error; + } + + for (i = 0; i < 2; i++) { + struct regmap_usb_transfer *transfer = ctx->transfers[i]; + + spin_lock_irq(&transfer->lock); + ret = transfer->status; + 
spin_unlock_irq(&transfer->lock); + if (ret) { + udebug(0, "FAIL transfer2[%u] %d\n", i, ret); + goto error; + } + } + + if (in && ctx->compression) { + ret = regmap_usb_transfer_decompress(ctx->transfers[0]); + if (ret) + goto error; + ret = regmap_usb_transfer_decompress(ctx->transfers[1]); + } + +error: + /* + * FIXME: What errors should warrant a reset? + * Verify that the DOC section is correct. + */ + if (ret == -EPIPE || ret == -ETIMEDOUT) + ret = regmap_usb_protocol_reset(ctx); + + if (ret) + regmap_usb_stats_add(ctx->num_errors, 1); + + if (debug >= 2) { + end = ktime_get(); + pr_debug("%s: ret=%d %ld kB/s (%lld ms)\n", __func__, ret, + mud_drm_throughput(start, end, orglen), + ktime_ms_delta(end, start)); + } + + mutex_unlock(&ctx->ruif->lock); + + return ret; +} + +static int regmap_usb_gather_write(void *context, + const void *reg, size_t reg_len, + const void *val, size_t val_len) +{ + return regmap_usb_transfer(context, false, reg, (void *)val, val_len); +} + +static int regmap_usb_write(void *context, const void *data, size_t count) +{ + struct regmap_usb_context *ctx = context; + size_t val_len = count - sizeof(u32); + void *val; + int ret; + + /* buffer needs to be properly aligned for DMA use */ + val = kmemdup(data + sizeof(u32), val_len, GFP_KERNEL); + if (!val) + return -ENOMEM; + + ret = regmap_usb_gather_write(ctx, data, sizeof(u32), val, val_len); + kfree(val); + + return ret; +} + +static int regmap_usb_read(void *context, const void *reg_buf, size_t reg_size, + void *val_buf, size_t val_size) +{ + return regmap_usb_transfer(context, true, reg_buf, val_buf, val_size); +} + +static void regmap_usb_free_context(void *context) +{ + struct regmap_usb_context *ctx = context; + + udebug(1, "%s:\n", __func__); + + regmap_usb_interface_put(ctx->ruif); + regmap_usb_free_transfers(ctx); + kfree(ctx->lz4_comp_mem); + kfree(ctx); +} + +static const struct regmap_bus regmap_usb = { + .write = regmap_usb_write, + .gather_write = regmap_usb_gather_write, 
+ .read = regmap_usb_read, + .free_context = regmap_usb_free_context, + /* regmap_usb_transfer() handles reg_format: */ + .reg_format_endian_default = REGMAP_ENDIAN_NATIVE, + .val_format_endian_default = REGMAP_ENDIAN_LITTLE, +}; + +#ifdef CONFIG_DEBUG_FS +static int regmap_usb_debugfs_usbinfo_show(struct seq_file *s, void *ignored) +{ + struct regmap_usb_context *ctx = s->private; + + mutex_lock(&ctx->ruif->lock); + + seq_printf(s, "USB interface: %s\n", dev_name(&ctx->ruif->interface->dev)); + seq_printf(s, "Regmap index: %u\n", ctx->index); + seq_printf(s, "Max transfer size: %u\n", ctx->max_transfer_size); + seq_printf(s, "Tag: %u\n", ctx->ruif->tag); + seq_printf(s, "Number of errors: %u\n", ctx->num_errors); + seq_printf(s, "Number of resets: %u\n", ctx->num_resets); + + seq_puts(s, "Compression: "); + if (ctx->compression & REGMAP_USB_COMPRESSION_LZ4) + seq_puts(s, " lz4"); + seq_puts(s, "\n"); + + if (ctx->compression) { + u64 remainder; + u64 ratio = div64_u64_rem(ctx->stats_length, ctx->stats_actual_length, + &remainder); + u64 ratio_frac = div64_u64(remainder * 10, ctx->stats_actual_length); + + seq_printf(s, "Compression ratio: %llu.%llu\n", ratio, ratio_frac); + } + + mutex_unlock(&ctx->ruif->lock); + + return 0; +} + +DEFINE_SHOW_ATTRIBUTE(regmap_usb_debugfs_usbinfo); + +static void regmap_usb_debugfs_init(struct regmap *map) +{ + if (!map->debugfs) + return; + + debugfs_create_file("usbinfo", 0400, map->debugfs, map->bus_context, + ®map_usb_debugfs_usbinfo_fops); +} +#else +static void regmap_usb_debugfs_init(struct regmap *map) {} +#endif + +static struct regmap_usb_context * +regmap_usb_gen_context(struct usb_interface *interface, unsigned int index, + const struct regmap_config *config) +{ + struct usb_host_interface *alt = interface->cur_altsetting; + struct usb_device *usb = interface_to_usbdev(interface); + struct usb_endpoint_descriptor *ep_in, *ep_out; + unsigned int num_regmaps, max_transfer_size; + struct regmap_usb_map_descriptor map_desc; 
+ struct regmap_usb_context *ctx; + int ret; + + ret = regmap_usb_get_interface_descriptor(interface, &num_regmaps); + if (ret) + return ERR_PTR(ret); + + if (!num_regmaps) + return ERR_PTR(-EINVAL); + + if (index >= num_regmaps) + return ERR_PTR(-ENOENT); + + ret = regmap_usb_get_map_descriptor(interface, index, &map_desc); + if (ret) + return ERR_PTR(ret); + + if (config->reg_bits != 32 || + config->val_bits != map_desc.bRegisterValueBits) + return ERR_PTR(-EINVAL); + + max_transfer_size = 1 << map_desc.bMaxTransferSizeOrder; + if (max_transfer_size < (config->val_bits / 8)) + return ERR_PTR(-EINVAL); + + max_transfer_size = min_t(unsigned long, max_transfer_size, KMALLOC_MAX_SIZE); + + ret = usb_find_common_endpoints(alt, &ep_in, &ep_out, NULL, NULL); + if (ret) + return ERR_PTR(ret); + + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) + return ERR_PTR(-ENOMEM); + + ctx->usb = usb; + ctx->index = index; + ctx->ifnum = alt->desc.bInterfaceNumber; + ctx->val_bytes = config->val_bits / 8; + ctx->compression = map_desc.bCompression; + ctx->max_transfer_size = max_transfer_size; + ctx->in_pipe = usb_rcvbulkpipe(usb, usb_endpoint_num(ep_in)); + ctx->out_pipe = usb_sndbulkpipe(usb, usb_endpoint_num(ep_out)); + + if (ctx->compression & REGMAP_USB_COMPRESSION_LZ4) { + ctx->lz4_comp_mem = kmalloc(LZ4_MEM_COMPRESS, GFP_KERNEL); + if (!ctx->lz4_comp_mem) { + ret = -ENOMEM; + goto err_free; + } + } + + ret = regmap_usb_alloc_transfers(ctx); + if (ret) + goto err_free; + + ctx->ruif = regmap_usb_interface_get(interface); + if (IS_ERR(ctx->ruif)) { + ret = PTR_ERR(ctx->ruif); + goto err_free_transfers; + } + + return ctx; + +err_free_transfers: + regmap_usb_free_transfers(ctx); +err_free: + kfree(ctx->lz4_comp_mem); + kfree(ctx); + + return ERR_PTR(ret); +} + +struct regmap *__devm_regmap_init_usb(struct device *dev, + struct usb_interface *interface, + unsigned int index, + const struct regmap_config *config, + struct lock_class_key *lock_key, + const char *lock_name) +{ 
+ struct regmap_usb_context *ctx; + struct regmap *map; + + ctx = regmap_usb_gen_context(interface, index, config); + if (IS_ERR(ctx)) + return ERR_CAST(ctx); + + map = __devm_regmap_init(dev, ®map_usb, ctx, config, + lock_key, lock_name); + if (!IS_ERR(map)) + regmap_usb_debugfs_init(map); + + return map; +} +EXPORT_SYMBOL(__devm_regmap_init_usb); + +static int regmap_usb_get_descriptor(struct usb_interface *interface, u8 type, + u8 index, void *desc, size_t size) +{ + u8 ifnum = interface->cur_altsetting->desc.bInterfaceNumber; + struct usb_device *usb = interface_to_usbdev(interface); + u8 *buf; + int ret; + + buf = kmalloc(size, GFP_KERNEL); + if (!buf) + return -ENOMEM; + + ret = usb_control_msg(usb, usb_rcvctrlpipe(usb, 0), + USB_REQ_GET_DESCRIPTOR, + USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_INTERFACE, + (type << 8) + index, ifnum, buf, size, + USB_CTRL_GET_TIMEOUT); + if (ret < 0) + goto free; + + if (ret != size || buf[0] != size || buf[1] != type) { + ret = -ENODATA; + goto free; + } + + memcpy(desc, buf, size); +free: + kfree(buf); + + return ret; +} + +/** + * regmap_usb_get_interface_descriptor() - Get regmap interface descriptor + * @interface: USB interface + * @num_regmaps: Returns the number of regmaps supported on this interface + * + * Returns: + * Zero on success, negative error code on failure. + */ +int regmap_usb_get_interface_descriptor(struct usb_interface *interface, + unsigned int *num_regmaps) +{ + struct regmap_usb_interface_descriptor desc; + int ret; + + ret = regmap_usb_get_descriptor(interface, REGMAP_USB_DT_INTERFACE, 0, + &desc, sizeof(desc)); + if (ret < 0) + return ret; + + *num_regmaps = desc.bNumRegmaps; + + return 0; +} +EXPORT_SYMBOL(regmap_usb_get_interface_descriptor); + +/** + * regmap_usb_get_map_descriptor() - Get regmap descriptor + * @interface: USB interface + * @index: Index of register + * @desc: Returned descriptor (little endian representation) + * + * Returns: + * Zero on success, negative error code on failure. 
+ */ +int regmap_usb_get_map_descriptor(struct usb_interface *interface, + unsigned int index, + struct regmap_usb_map_descriptor *desc) +{ + int ret; + + ret = regmap_usb_get_descriptor(interface, REGMAP_USB_DT_MAP, index, + desc, sizeof(*desc)); + if (ret < 0) + return ret; + + if (desc->name[31] != '\0') + return -EINVAL; + + return 0; +} +EXPORT_SYMBOL(regmap_usb_get_map_descriptor); + +MODULE_LICENSE("GPL"); diff --git a/include/linux/regmap.h b/include/linux/regmap.h index dfe493ac692d..c25ae1a98538 100644 --- a/include/linux/regmap.h +++ b/include/linux/regmap.h @@ -32,6 +32,7 @@ struct regmap_range_cfg; struct regmap_field; struct snd_ac97; struct sdw_slave; +struct usb_interface; /* An enum of all the supported cache types */ enum regcache_type { @@ -618,6 +619,12 @@ struct regmap *__devm_regmap_init_sdw(struct sdw_slave *sdw, const struct regmap_config *config, struct lock_class_key *lock_key, const char *lock_name); +struct regmap *__devm_regmap_init_usb(struct device *dev, + struct usb_interface *interface, + unsigned int index, + const struct regmap_config *config, + struct lock_class_key *lock_key, + const char *lock_name); struct regmap *__devm_regmap_init_slimbus(struct slim_device *slimbus, const struct regmap_config *config, struct lock_class_key *lock_key, @@ -971,6 +978,22 @@ bool regmap_ac97_default_volatile(struct device *dev, unsigned int reg); __regmap_lockdep_wrapper(__devm_regmap_init_sdw, #config, \ sdw, config) +/** + * devm_regmap_init_usb() - Initialise managed register map + * + * @dev: Parent device + * @interface: USB interface + * @index: Index of register + * @config: Configuration for register map + * + * The return value will be an ERR_PTR() on error or a valid pointer + * to a struct regmap. The regmap will be automatically freed by the + * device management code. 
+ */
+#define devm_regmap_init_usb(dev, interface, index, config)             \
+        __regmap_lockdep_wrapper(__devm_regmap_init_usb, #config,       \
+                                 dev, interface, index, config)
+
 /**
  * devm_regmap_init_slimbus() - Initialise managed register map
  *
diff --git a/include/linux/regmap_usb.h b/include/linux/regmap_usb.h
new file mode 100644
index 000000000000..e28d5139a53c
--- /dev/null
+++ b/include/linux/regmap_usb.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright 2020 Noralf Trønnes
+ */
+
+#ifndef __LINUX_REGMAP_USB_H
+#define __LINUX_REGMAP_USB_H
+
+#include
+#include
+
+struct usb_interface;
+
+#define REGMAP_USB_MAX_MAPS             255
+
+#define REGMAP_USB_DT_INTERFACE         (USB_TYPE_VENDOR | 0x01)
+#define REGMAP_USB_DT_MAP               (USB_TYPE_VENDOR | 0x02)
+
+/**
+ * struct regmap_usb_interface_descriptor - Regmap interface descriptor
+ * @bLength: Size of descriptor in bytes
+ * @bDescriptorType: DescriptorType (REGMAP_USB_DT_INTERFACE)
+ * @bNumRegmaps: Number of regmaps on this interface
+ */
+struct regmap_usb_interface_descriptor {
+        __u8 bLength;
+        __u8 bDescriptorType;
+
+        __u8 bNumRegmaps;
+} __packed;
+
+/**
+ * struct regmap_usb_map_descriptor - Regmap descriptor
+ * @bLength: Size of descriptor in bytes
+ * @bDescriptorType: DescriptorType (REGMAP_USB_DT_MAP)
+ * @name: Regmap name (NUL terminated)
+ * @bRegisterValueBits: Number of bits in the register value
+ * @bCompression: Supported compression types
+ * @bMaxTransferSizeOrder: Maximum transfer size the device can handle as log2.
+ */
+struct regmap_usb_map_descriptor {
+        __u8 bLength;
+        __u8 bDescriptorType;
+
+        __u8 name[32];
+        __u8 bRegisterValueBits;
+        __u8 bCompression;
+#define REGMAP_USB_COMPRESSION_LZ4      BIT(0)
+        __u8 bMaxTransferSizeOrder;
+} __packed;
+
+#define REGMAP_USB_REQ_PROTOCOL_RESET   0xff /* Returns previous error code as u8 */
+
+/**
+ * struct regmap_usb_header - Transfer header
+ * @signature: Magic value (0x2389abc2)
+ * @index: Index of the addressed regmap
+ * @tag: Sequential transfer number
+ * @flags: Transfer flags
+ * @regnr: Register number
+ * @length: Transfer length
+ */
+struct regmap_usb_header {
+#define REGMAP_USB_HEADER_SIGNATURE     0x2389abc2
+        __le32 signature;
+        __le16 index;
+        __le16 tag;
+        __le32 flags;
+#define REGMAP_USB_HEADER_FLAG_IN       BIT(31)
+/* First 8 bits are the same as the descriptor compression bits */
+#define REGMAP_USB_HEADER_FLAG_COMPRESSION_MASK 0xff
+        __le32 regnr;
+        __le32 length;
+} __packed;
+
+/**
+ * struct regmap_usb_status - Transfer status
+ * @signature: Magic value (0x83e7b803)
+ * @index: Index of the addressed regmap
+ * @tag: Sequential transfer number (the same as the one received in the header)
+ * @status: Status value of the transfer (zero on success or a Linux errno on failure)
+ */
+struct regmap_usb_status {
+#define REGMAP_USB_STATUS_SIGNATURE     0x83e7b803
+        __le32 signature;
+        __le16 index;
+        __le16 tag;
+        __u8 status;
+} __packed;
+
+int regmap_usb_get_interface_descriptor(struct usb_interface *interface,
+                                        unsigned int *num_regmaps);
+int regmap_usb_get_map_descriptor(struct usb_interface *interface,
+                                  unsigned int index,
+                                  struct regmap_usb_map_descriptor *desc);
+
+#endif

From patchwork Sun Feb 16 17:21:10 2020
X-Patchwork-Submitter: Noralf Trønnes
X-Patchwork-Id: 11384501
From: Noralf Trønnes
To: broonie@kernel.org, balbi@kernel.org, lee.jones@linaro.org
Cc: linux-usb@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [RFC 2/9] mfd: Add driver for Multifunction USB Device
Date: Sun, 16 Feb 2020 18:21:10 +0100
Message-Id: <20200216172117.49832-3-noralf@tronnes.org>
In-Reply-To: <20200216172117.49832-1-noralf@tronnes.org>
References: <20200216172117.49832-1-noralf@tronnes.org>

A Multifunction USB Device is a device that supports functions like gpio
and display, or any other function that can be represented as a USB
regmap. Interrupts over USB are also supported if such an endpoint is
present.

Signed-off-by: Noralf Trønnes
---
 drivers/mfd/Kconfig     |   8 +
 drivers/mfd/Makefile    |   1 +
 drivers/mfd/mud.c       | 580 ++++++++++++++++++++++++++++++++++++++++
 include/linux/mfd/mud.h |  16 ++
 4 files changed, 605 insertions(+)
 create mode 100644 drivers/mfd/mud.c
 create mode 100644 include/linux/mfd/mud.h

diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
index 52818dbcfe1f..9950794d907e 100644
--- a/drivers/mfd/Kconfig
+++ b/drivers/mfd/Kconfig
@@ -1968,6 +1968,14 @@ config MFD_STMFX
           additional drivers must be enabled in order to use the functionality
           of the device.
 
+config MFD_MUD
+        tristate "Multifunction USB Device core driver"
+        depends on USB
+        select MFD_CORE
+        select REGMAP_USB
+        help
+          Select this to get support for the Multifunction USB Device.
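The host driver added by this patch detects lost interrupt packets by comparing 16-bit sequential tags with wraparound (see mud_irq_queue() later in the patch: a gap greater than one means packets were dropped). A standalone userspace sketch of that arithmetic, in Python purely for illustration (tag_gap is a made-up name, not from the patch):

```python
U16_MAX = 0xffff

def tag_gap(prev_tag: int, new_tag: int) -> int:
    """Distance from prev_tag to new_tag on a wrapping 16-bit counter.

    Mirrors the diff computation in mud_irq_queue(); the caller counts
    (gap - 1) lost interrupt packets when the gap exceeds one.
    """
    if new_tag > prev_tag:
        return new_tag - prev_tag
    # new_tag wrapped past U16_MAX
    return U16_MAX - prev_tag + new_tag + 1

# Consecutive tags: nothing lost
assert tag_gap(3, 4) == 1
# Tags 4 and 5 never arrived: gap of 3, two packets lost
assert tag_gap(3, 6) == 3
# Wraparound: 0xfffe -> 0xffff -> 0 -> 1 is three steps
assert tag_gap(0xfffe, 1) == 3
```

Equal tags never reach this computation; the driver treats a repeated tag as "already seen" and ignores the packet.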
+ menu "Multimedia Capabilities Port drivers" depends on ARCH_SA1100 diff --git a/drivers/mfd/Makefile b/drivers/mfd/Makefile index 29e6767dd60c..0adfab9afaed 100644 --- a/drivers/mfd/Makefile +++ b/drivers/mfd/Makefile @@ -255,4 +255,5 @@ obj-$(CONFIG_MFD_ROHM_BD70528) += rohm-bd70528.o obj-$(CONFIG_MFD_ROHM_BD718XX) += rohm-bd718x7.o obj-$(CONFIG_MFD_STMFX) += stmfx.o obj-$(CONFIG_MFD_RPISENSE_CORE) += rpisense-core.o +obj-$(CONFIG_MFD_MUD) += mud.o diff --git a/drivers/mfd/mud.c b/drivers/mfd/mud.c new file mode 100644 index 000000000000..f5f31478656d --- /dev/null +++ b/drivers/mfd/mud.c @@ -0,0 +1,580 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Copyright 2020 Noralf Trønnes + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* Temporary debugging aid */ +#undef dev_dbg +#define dev_dbg dev_info + +#define mdebug(fmt, ...) \ +do { \ + if (1) \ + pr_debug(fmt, ##__VA_ARGS__); \ +} while (0) + +struct mud_irq_event { + struct list_head node; + DECLARE_BITMAP(status, REGMAP_USB_MAX_MAPS); +}; + +struct mud_irq { + struct irq_domain *domain; + unsigned int num_irqs; + + struct workqueue_struct *workq; + struct work_struct work; + struct urb *urb; + + spinlock_t lock; /* Protect the values below */ + unsigned long *mask; + u16 tag; + struct list_head eventlist; + + unsigned int stats_illegal; + unsigned int stats_already_seen; + unsigned int stats_lost; +}; + +struct mud_device { + struct usb_device *usb; + struct mud_irq *mirq; + struct mfd_cell *cells; + unsigned int num_cells; +}; + +static void mud_irq_work(struct work_struct *work) +{ + struct mud_irq *mirq = container_of(work, struct mud_irq, work); + struct mud_irq_event *event; + unsigned long n, flags; + unsigned int irq; + + mdebug("%s: IN\n", __func__); + + while (true) { + spin_lock_irqsave(&mirq->lock, flags); + event = list_first_entry_or_null(&mirq->eventlist, struct mud_irq_event, 
node); + if (event) { + list_del(&event->node); + mdebug(" status: %*pb\n", mirq->num_irqs, event->status); + bitmap_and(event->status, event->status, mirq->mask, mirq->num_irqs); + } + spin_unlock_irqrestore(&mirq->lock, flags); + if (!event) + break; + + for_each_set_bit(n, event->status, mirq->num_irqs) { + irq = irq_find_mapping(mirq->domain, n); + mdebug(" n=%lu irq=%u\n", n, irq); + if (irq) + handle_nested_irq(irq); + } + + kfree(event); + } + + mdebug("%s: OUT\n", __func__); +} + +#define BYTES_PER_LONG (BITS_PER_LONG / BITS_PER_BYTE) + +static void mud_irq_queue(struct urb *urb) +{ + u8 *buf = urb->transfer_buffer + sizeof(u16); + struct mud_irq *mirq = urb->context; + struct device *dev = &urb->dev->dev; + struct mud_irq_event *event = NULL; + unsigned int i, tag, diff; + unsigned long flags; + + if (urb->actual_length != urb->transfer_buffer_length) { + dev_err_once(dev, "Interrupt packet wrong length: %u\n", + urb->actual_length); + mirq->stats_illegal++; + return; + } + + spin_lock_irqsave(&mirq->lock, flags); + + tag = le16_to_cpup(urb->transfer_buffer); + if (tag == mirq->tag) { + dev_dbg(dev, "Interrupt tag=%u already seen, ignoring\n", tag); + mirq->stats_already_seen++; + goto unlock; + } + + if (tag > mirq->tag) + diff = tag - mirq->tag; + else + diff = U16_MAX - mirq->tag + tag + 1; + + if (diff > 1) { + dev_err_once(dev, "Interrupts lost: %u\n", diff - 1); + mirq->stats_lost += diff - 1; + } + + event = kzalloc(sizeof(*event), GFP_ATOMIC); + if (!event) { + mirq->stats_lost += 1; + goto unlock; + } + + list_add_tail(&event->node, &mirq->eventlist); + + for (i = 0; i < (urb->transfer_buffer_length - sizeof(u16)); i++) { + unsigned long *val = &event->status[i / BYTES_PER_LONG]; + unsigned int mod = i % BYTES_PER_LONG; + + if (!mod) + *val = buf[i]; + else + *val |= ((unsigned long)buf[i]) << (mod * BITS_PER_BYTE); + } + + mdebug("%s: tag=%u\n", __func__, tag); + + mirq->tag = tag; +unlock: + spin_unlock_irqrestore(&mirq->lock, flags); + + if 
(event) + queue_work(mirq->workq, &mirq->work); +} + +static void mud_irq_urb_completion(struct urb *urb) +{ + struct device *dev = &urb->dev->dev; + int ret; + + mdebug("%s: actual_length=%u\n", __func__, urb->actual_length); + + switch (urb->status) { + case 0: + mud_irq_queue(urb); + break; + case -EPROTO: /* FIXME: verify: dwc2 reports this on disconnect */ + case -ECONNRESET: + case -ENOENT: + case -ESHUTDOWN: + dev_dbg(dev, "irq urb shutting down with status: %d\n", urb->status); + return; + default: + dev_dbg(dev, "irq urb failure with status: %d\n", urb->status); + break; + } + + ret = usb_submit_urb(urb, GFP_ATOMIC); + if (ret && ret != -ENODEV) + dev_err(dev, "irq usb_submit_urb failed with result %d\n", ret); +} + +static void mud_irq_mask(struct irq_data *data) +{ + struct mud_irq *mirq = irq_data_get_irq_chip_data(data); + unsigned long flags; + + mdebug("%s: hwirq=%lu\n", __func__, data->hwirq); + + spin_lock_irqsave(&mirq->lock, flags); + clear_bit(data->hwirq, mirq->mask); + spin_unlock_irqrestore(&mirq->lock, flags); +} + +static void mud_irq_unmask(struct irq_data *data) +{ + struct mud_irq *mirq = irq_data_get_irq_chip_data(data); + unsigned long flags; + + mdebug("%s: hwirq=%lu\n", __func__, data->hwirq); + + spin_lock_irqsave(&mirq->lock, flags); + set_bit(data->hwirq, mirq->mask); + spin_unlock_irqrestore(&mirq->lock, flags); +} + +static struct irq_chip mud_irq_chip = { + .name = "mud-irq", + .irq_mask = mud_irq_mask, + .irq_unmask = mud_irq_unmask, +}; + +static void __maybe_unused +mud_irq_domain_debug_show(struct seq_file *m, struct irq_domain *d, + struct irq_data *data, int ind) +{ + struct mud_irq *mirq = d ? 
d->host_data : irq_data_get_irq_chip_data(data); + unsigned long flags; + + spin_lock_irqsave(&mirq->lock, flags); + + seq_printf(m, "%*sTag: %u\n", ind, "", mirq->tag); + seq_printf(m, "%*sIllegal: %u\n", ind, "", mirq->stats_illegal); + seq_printf(m, "%*sAlready seen: %u\n", ind, "", mirq->stats_already_seen); + seq_printf(m, "%*sLost: %u\n", ind, "", mirq->stats_lost); + + spin_unlock_irqrestore(&mirq->lock, flags); +} + +static int mud_irq_domain_map(struct irq_domain *d, unsigned int virq, + irq_hw_number_t hwirq) +{ + irq_set_chip_data(virq, d->host_data); + irq_set_chip_and_handler(virq, &mud_irq_chip, handle_simple_irq); + irq_set_nested_thread(virq, true); + irq_set_noprobe(virq); + + return 0; +} + +static void mud_irq_domain_unmap(struct irq_domain *d, unsigned int virq) +{ + irq_set_chip_and_handler(virq, NULL, NULL); + irq_set_chip_data(virq, NULL); +} + +static const struct irq_domain_ops mud_irq_ops = { + .map = mud_irq_domain_map, + .unmap = mud_irq_domain_unmap, +#ifdef CONFIG_GENERIC_IRQ_DEBUGFS + .debug_show = mud_irq_domain_debug_show, +#endif +}; + +static int mud_irq_start(struct mud_irq *mirq) +{ + if (!mirq) + return 0; + + return usb_submit_urb(mirq->urb, GFP_KERNEL); +} + +static void mud_irq_stop(struct mud_irq *mirq) +{ + if (!mirq) + return; + + usb_kill_urb(mirq->urb); + flush_work(&mirq->work); +} + +static void mud_irq_release(struct mud_irq *mirq) +{ + if (!mirq) + return; + + if (mirq->workq) + destroy_workqueue(mirq->workq); + + if (mirq->domain) { + irq_hw_number_t hwirq; + + for (hwirq = 0; hwirq < mirq->num_irqs; hwirq++) + irq_dispose_mapping(irq_find_mapping(mirq->domain, hwirq)); + + irq_domain_remove(mirq->domain); + } + + usb_free_coherent(mirq->urb->dev, mirq->urb->transfer_buffer_length, + mirq->urb->transfer_buffer, mirq->urb->transfer_dma); + usb_free_urb(mirq->urb); + bitmap_free(mirq->mask); + kfree(mirq); +} + +static struct mud_irq *mud_irq_create(struct usb_interface *interface, + unsigned int num_irqs) +{ + 
struct usb_device *usb = interface_to_usbdev(interface); + struct device *dev = &interface->dev; + struct usb_endpoint_descriptor *ep; + struct fwnode_handle *fn; + struct urb *urb = NULL; + struct mud_irq *mirq; + void *buf = NULL; + size_t buf_size; + int ret; + + mdebug("%s: dev->id=%d\n", __func__, dev->id); + + ret = usb_find_int_in_endpoint(interface->cur_altsetting, &ep); + if (ret == -ENXIO) + return NULL; + if (ret) + return ERR_PTR(ret); + + mirq = kzalloc(sizeof(*mirq), GFP_KERNEL); + if (!mirq) + return ERR_PTR(-ENOMEM); + + mirq->mask = bitmap_zalloc(num_irqs, GFP_KERNEL); + if (!mirq->mask) { + ret = -ENOMEM; + goto release; + } + + spin_lock_init(&mirq->lock); + mirq->num_irqs = num_irqs; + + urb = usb_alloc_urb(0, GFP_KERNEL); + if (!urb) { + ret = -ENOMEM; + goto release; + } + + buf_size = usb_endpoint_maxp(ep); + if (buf_size != (sizeof(u16) + DIV_ROUND_UP(num_irqs, BITS_PER_BYTE))) { + dev_err(dev, "Interrupt endpoint wMaxPacketSize too small: %zu\n", buf_size); + ret = -EINVAL; + goto release; + } + + buf = usb_alloc_coherent(usb, buf_size, GFP_KERNEL, &urb->transfer_dma); + if (!buf) { + usb_free_urb(urb); + ret = -ENOMEM; + goto release; + } + + usb_fill_int_urb(urb, usb, + usb_rcvintpipe(usb, usb_endpoint_num(ep)), + buf, buf_size, mud_irq_urb_completion, + mirq, ep->bInterval); + urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP; + + mirq->urb = urb; + + if (dev->of_node) { + fn = of_node_to_fwnode(dev->of_node); + } else { + fn = irq_domain_alloc_named_fwnode("mud-irq"); + if (!fn) { + ret = -ENOMEM; + goto release; + } + } + + mirq->domain = irq_domain_create_linear(fn, num_irqs, &mud_irq_ops, mirq); + if (!dev->of_node) + irq_domain_free_fwnode(fn); + if (!mirq->domain) { + ret = -ENOMEM; + goto release; + } + + INIT_LIST_HEAD(&mirq->eventlist); + INIT_WORK(&mirq->work, mud_irq_work); + + mirq->workq = alloc_workqueue("mud-irq/%s", WQ_HIGHPRI, 0, dev_name(dev)); + if (!mirq->workq) { + ret = -ENOMEM; + goto release; + } + + return mirq; + 
+release: + mud_irq_release(mirq); + + return ERR_PTR(ret); +} + +static int mud_probe_regmap(struct usb_interface *interface, struct mfd_cell *cell, + unsigned int index, struct mud_irq *mirq) +{ + struct mud_cell_pdata *pdata; + struct resource *res = NULL; + int ret; + + pdata = kzalloc(sizeof(*pdata), GFP_KERNEL); + if (!pdata) + return -ENOMEM; + + ret = regmap_usb_get_map_descriptor(interface, index, &pdata->desc); + if (ret) + goto error; + + mdebug("%s: name='%s' index=%u\n", __func__, pdata->desc.name, index); + mdebug(" bRegisterValueBits=%u\n", pdata->desc.bRegisterValueBits); + mdebug(" bCompression=0x%02x\n", pdata->desc.bCompression); + mdebug(" bMaxTransferSizeOrder=%u (%ukB)\n", + pdata->desc.bMaxTransferSizeOrder, + (1 << pdata->desc.bMaxTransferSizeOrder) / 1024); + + if (mirq) { + res = kzalloc(sizeof(*res), GFP_KERNEL); + if (!res) { + ret = -ENOMEM; + goto error; + } + + res->flags = IORESOURCE_IRQ; + res->start = irq_create_mapping(mirq->domain, index); + mdebug(" res->start=%u\n", (unsigned int)res->start); + res->end = res->start; + + cell->resources = res; + cell->num_resources = 1; + } + + pdata->interface = interface; + pdata->index = index; + cell->name = pdata->desc.name; + cell->platform_data = pdata; + cell->pdata_size = sizeof(*pdata); + /* + * A Multifunction USB Device can have multiple functions of the same + * type. mfd_add_device() in its current form will only match on the + * first node in the Device Tree. 
+ */
+        cell->of_compatible = cell->name;
+
+        return 0;
+
+error:
+        kfree(res);
+        kfree(pdata);
+
+        return ret;
+}
+
+static void mud_free(struct mud_device *mud)
+{
+        unsigned int i;
+
+        mud_irq_release(mud->mirq);
+
+        for (i = 0; i < mud->num_cells; i++) {
+                kfree(mud->cells[i].platform_data);
+                kfree(mud->cells[i].resources);
+        }
+
+        kfree(mud->cells);
+        kfree(mud);
+}
+
+static int mud_probe(struct usb_interface *interface,
+                     const struct usb_device_id *id)
+{
+        struct device *dev = &interface->dev;
+        unsigned int i, num_regmaps;
+        struct mud_device *mud;
+        int ret;
+
+        mdebug("%s: interface->dev.of_node=%px usb->dev.of_node=%px",
+               __func__, interface->dev.of_node,
+               usb_get_dev(interface_to_usbdev(interface))->dev.of_node);
+
+        ret = regmap_usb_get_interface_descriptor(interface, &num_regmaps);
+        if (ret)
+                return ret;
+        if (!num_regmaps)
+                return -EINVAL;
+
+        mdebug("%s: num_regmaps=%u\n", __func__, num_regmaps);
+
+        mud = kzalloc(sizeof(*mud), GFP_KERNEL);
+        if (!mud)
+                return -ENOMEM;
+
+        mud->mirq = mud_irq_create(interface, num_regmaps);
+        if (IS_ERR(mud->mirq)) {
+                ret = PTR_ERR(mud->mirq);
+                goto err_free;
+        }
+
+        mud->num_cells = num_regmaps;
+        mud->cells = kcalloc(num_regmaps, sizeof(*mud->cells), GFP_KERNEL);
+        if (!mud->cells) {
+                ret = -ENOMEM;
+                goto err_free;
+        }
+
+        for (i = 0; i < num_regmaps; i++) {
+                ret = mud_probe_regmap(interface, &mud->cells[i], i, mud->mirq);
+                if (ret) {
+                        dev_err(dev, "Failed to probe regmap index %u (error %d)\n", i, ret);
+                        goto err_free;
+                }
+        }
+
+        ret = mud_irq_start(mud->mirq);
+        if (ret) {
+                dev_err(dev, "Failed to start irq (error %d)\n", ret);
+                goto err_free;
+        }
+
+        ret = mfd_add_hotplug_devices(dev, mud->cells, mud->num_cells);
+        if (ret) {
+                dev_err(dev, "Failed to add mfd devices to core\n");
+                goto err_stop;
+        }
+
+        mud->usb = usb_get_dev(interface_to_usbdev(interface));
+
+        usb_set_intfdata(interface, mud);
+
+        if (mud->usb->product)
+                dev_info(dev, "%s\n", mud->usb->product);
+
+        return 0;
+
+err_stop:
+
mud_irq_stop(mud->mirq);
+err_free:
+        mud_free(mud);
+
+        return ret;
+}
+
+static void mud_disconnect(struct usb_interface *interface)
+{
+        struct mud_device *mud = usb_get_intfdata(interface);
+
+        mfd_remove_devices(&interface->dev);
+        mud_irq_stop(mud->mirq);
+        usb_put_dev(mud->usb);
+        mud_free(mud);
+
+        dev_dbg(&interface->dev, "disconnected\n");
+}
+
+static const struct usb_device_id mud_table[] = {
+        /*
+         * FIXME:
+         * Apply for a proper pid: https://github.com/openmoko/openmoko-usb-oui
+         *
+         * Or maybe the Linux Foundation will provide one from their vendor id.
+         */
+        { USB_DEVICE_INTERFACE_CLASS(0x1d50, 0x6150, USB_CLASS_VENDOR_SPEC) },
+        { }
+};
+
+MODULE_DEVICE_TABLE(usb, mud_table);
+
+static struct usb_driver mud_driver = {
+        .name = "mud",
+        .probe = mud_probe,
+        .disconnect = mud_disconnect,
+        .id_table = mud_table,
+};
+
+module_usb_driver(mud_driver);
+
+MODULE_DESCRIPTION("Generic USB Device mfd core driver");
+MODULE_AUTHOR("Noralf Trønnes");
+MODULE_LICENSE("GPL");
diff --git a/include/linux/mfd/mud.h b/include/linux/mfd/mud.h
new file mode 100644
index 000000000000..b2059fa57429
--- /dev/null
+++ b/include/linux/mfd/mud.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#ifndef __LINUX_MUD_H
+#define __LINUX_MUD_H
+
+#include
+
+struct usb_interface;
+
+struct mud_cell_pdata {
+        struct usb_interface *interface;
+        unsigned int index;
+        struct regmap_usb_map_descriptor desc;
+};
+
+#endif

From patchwork Sun Feb 16 17:21:11 2020
X-Patchwork-Submitter: Noralf Trønnes
X-Patchwork-Id: 11384505
From: Noralf Trønnes
To: broonie@kernel.org, balbi@kernel.org, lee.jones@linaro.org
Cc: linux-usb@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [RFC 3/9] usb: gadget: function: Add Multifunction USB Device support
Date: Sun, 16 Feb 2020 18:21:11 +0100
Message-Id: <20200216172117.49832-4-noralf@tronnes.org>
In-Reply-To: <20200216172117.49832-1-noralf@tronnes.org>
References: <20200216172117.49832-1-noralf@tronnes.org>

This is the gadget side of the mfd host driver. It provides a USB
function that drivers can hook into, exposing functions like gpio and
display as regmaps to the host. These drivers are configured through
configfs.

Signed-off-by: Noralf Trønnes
---
 drivers/usb/gadget/Kconfig               |  10 +
 drivers/usb/gadget/function/Makefile     |   2 +
 drivers/usb/gadget/function/f_mud.c      | 913 ++++++++++++++++++++++
 drivers/usb/gadget/function/f_mud.h      | 210 +++++
 drivers/usb/gadget/function/mud_regmap.c | 936 +++++++++++++++++++++++
 5 files changed, 2071 insertions(+)
 create mode 100644 drivers/usb/gadget/function/f_mud.c
 create mode 100644 drivers/usb/gadget/function/f_mud.h
 create mode 100644 drivers/usb/gadget/function/mud_regmap.c

diff --git a/drivers/usb/gadget/Kconfig b/drivers/usb/gadget/Kconfig
index 02ff850278b1..9551876ffe08 100644
--- a/drivers/usb/gadget/Kconfig
+++ b/drivers/usb/gadget/Kconfig
@@ -216,6 +216,9 @@ config USB_F_PRINTER
 config USB_F_TCM
         tristate
 
+config USB_F_MUD
+        tristate
+
 # this first set of drivers all depend on bulk-capable hardware.
 
 config USB_CONFIGFS
@@ -483,6 +486,13 @@ config USB_CONFIGFS_F_TCM
           Both protocols can work on USB2.0 and USB3.0.
           UAS utilizes the USB 3.0 feature called streams support.
 
+menuconfig USB_CONFIGFS_F_MUD
+        bool "Multifunction USB Device"
+        depends on USB_CONFIGFS
+        select USB_F_MUD
+        help
+          Core support for the Multifunction USB Device.
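The interrupt packet exchanged by the two sides of this series is a little-endian u16 tag followed by a status bitmap with one bit per regmap/cell: the gadget builds it in fmud_irq_req_queue() below, and the host parses it in mud_irq_queue() in the mfd patch. A userspace Python sketch of that layout, with illustrative helper names that are not taken from the patches:

```python
import struct

def pack_irq_packet(tag: int, status_bits, num_cells: int) -> bytes:
    """Little-endian u16 tag followed by a status bitmap, one bit per cell."""
    nbytes = (num_cells + 7) // 8  # DIV_ROUND_UP(num_cells, BITS_PER_BYTE)
    buf = bytearray(nbytes)
    for n in status_bits:
        buf[n // 8] |= 1 << (n % 8)
    return struct.pack("<H", tag & 0xffff) + bytes(buf)

def unpack_irq_packet(packet: bytes):
    """Return (tag, set of asserted cell indexes)."""
    (tag,) = struct.unpack_from("<H", packet)
    bits = set()
    for i, byte in enumerate(packet[2:]):
        for b in range(8):
            if byte & (1 << b):
                bits.add(i * 8 + b)
    return tag, bits

# 16 cells -> 2 status bytes; cells 0 and 9 asserted
pkt = pack_irq_packet(0x0102, {0, 9}, 16)
assert pkt == b"\x02\x01\x01\x02"
assert unpack_irq_packet(pkt) == (0x0102, {0, 9})
```

The packet length must equal the interrupt endpoint's wMaxPacketSize; the host driver rejects packets of any other length.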
+ choice tristate "USB Gadget precomposed configurations" default USB_ETH diff --git a/drivers/usb/gadget/function/Makefile b/drivers/usb/gadget/function/Makefile index 5d3a6cf02218..b6e31b511521 100644 --- a/drivers/usb/gadget/function/Makefile +++ b/drivers/usb/gadget/function/Makefile @@ -50,3 +50,5 @@ usb_f_printer-y := f_printer.o obj-$(CONFIG_USB_F_PRINTER) += usb_f_printer.o usb_f_tcm-y := f_tcm.o obj-$(CONFIG_USB_F_TCM) += usb_f_tcm.o +usb_f_mud-y := f_mud.o mud_regmap.o +obj-$(CONFIG_USB_F_MUD) += usb_f_mud.o diff --git a/drivers/usb/gadget/function/f_mud.c b/drivers/usb/gadget/function/f_mud.c new file mode 100644 index 000000000000..b15a571d2e5d --- /dev/null +++ b/drivers/usb/gadget/function/f_mud.c @@ -0,0 +1,913 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Copyright (C) 2020 Noralf Trønnes + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "f_mud.h" + +/** + * DOC: overview + * + * f_mud is the device side counterpart to drivers/mfd/mud. + * It combines the regmap and mfd cell abstraction on the host side into one cell + * driver on the device side: @f_mud_cell_ops. The reason for not using the + * regmap library here is so drivers can do compression directly with their own + * buffers without going through a temporary buffer. + */ + +/* Temporary debugging aid */ +static unsigned int debug = 8; + +#define fmdebug(level, fmt, ...) 
\ +do { \ + if ((level) <= debug) \ + pr_debug(fmt, ##__VA_ARGS__); \ +} while (0) + +struct f_mud { + struct usb_function func; + u8 interface_id; + struct mud_regmap *mreg; + + struct f_mud_cell **cells; + unsigned int num_cells; + + int interrupt_interval_ms; + + spinlock_t irq_lock; + bool irq_enabled; + struct usb_ep *irq_ep; + struct usb_request *irq_req; + u16 int_tag; + unsigned long *irq_status; + bool irq_queued; +}; + +static inline struct f_mud *func_to_f_mud(struct usb_function *f) +{ + return container_of(f, struct f_mud, func); +} + +struct f_mud_opts { + struct usb_function_instance func_inst; + struct mutex lock; + int refcnt; + + int interrupt_interval_ms; + + struct list_head cells; +}; + +static inline struct f_mud_opts *ci_to_f_mud_opts(struct config_item *item) +{ + return container_of(to_config_group(item), struct f_mud_opts, + func_inst.group); +} + +static DEFINE_MUTEX(f_mud_cell_ops_list_mutex); +static LIST_HEAD(f_mud_cell_ops_list); + +struct f_mud_cell_ops_list_item { + struct list_head list; + const struct f_mud_cell_ops *ops; + unsigned int refcnt; +}; + +static struct f_mud_cell_ops_list_item *f_mud_cell_item_lookup(const char *name) +{ + struct f_mud_cell_ops_list_item *item; + + list_for_each_entry(item, &f_mud_cell_ops_list, list) { + if (!strcmp(name, item->ops->name)) + return item; + } + + return NULL; +} + +/** + * f_mud_cell_register() - Register a cell driver + * @ops: Cell operations structure + * + * This function registers a cell driver for use in a gadget. + * + * Returns: + * Zero on success, negative error code on failure. 
+ */
+int f_mud_cell_register(const struct f_mud_cell_ops *ops)
+{
+        struct f_mud_cell_ops_list_item *item;
+        int ret = 0;
+
+        fmdebug(1, "%s: name=%s\n", __func__, ops->name);
+
+        mutex_lock(&f_mud_cell_ops_list_mutex);
+
+        item = f_mud_cell_item_lookup(ops->name);
+        if (item) {
+                pr_err("%s: '%s' is already registered\n", __func__, ops->name);
+                ret = -EEXIST;
+                goto out;
+        }
+
+        item = kzalloc(sizeof(*item), GFP_KERNEL);
+        if (!item) {
+                ret = -ENOMEM;
+                goto out;
+        }
+
+        item->ops = ops;
+        INIT_LIST_HEAD(&item->list);
+        list_add(&item->list, &f_mud_cell_ops_list);
+out:
+        mutex_unlock(&f_mud_cell_ops_list_mutex);
+
+        fmdebug(1, "%s: ret=%d\n", __func__, ret);
+
+        return ret;
+}
+EXPORT_SYMBOL(f_mud_cell_register);
+
+/**
+ * f_mud_cell_unregister() - Unregister a cell driver
+ * @ops: Cell operations structure
+ *
+ * This function unregisters a cell driver.
+ */
+void f_mud_cell_unregister(const struct f_mud_cell_ops *ops)
+{
+        struct f_mud_cell_ops_list_item *item;
+
+        fmdebug(1, "%s: name=%s\n", __func__, ops->name);
+
+        mutex_lock(&f_mud_cell_ops_list_mutex);
+
+        item = f_mud_cell_item_lookup(ops->name);
+        if (item) {
+                if (!item->refcnt) {
+                        list_del(&item->list);
+                        kfree(item);
+                } else {
+                        pr_err("%s: Can't unregister '%s' (refcnt=%u)\n",
+                               __func__, ops->name, item->refcnt);
+                }
+        } else {
+                pr_err("%s: Didn't find '%s'\n", __func__, ops->name);
+        }
+
+        mutex_unlock(&f_mud_cell_ops_list_mutex);
+}
+EXPORT_SYMBOL(f_mud_cell_unregister);
+
+static const struct f_mud_cell_ops *f_mud_cell_get(const char *name)
+{
+        const struct f_mud_cell_ops *ops = NULL;
+        struct f_mud_cell_ops_list_item *item;
+        char module_name[MODULE_NAME_LEN];
+        bool retried = false;
+
+        fmdebug(1, "%s: name=%s\n", __func__, name);
+retry:
+        mutex_lock(&f_mud_cell_ops_list_mutex);
+        item = f_mud_cell_item_lookup(name);
+        fmdebug(1, "%s: item=%px\n", __func__, item);
+        if (!item) {
+                mutex_unlock(&f_mud_cell_ops_list_mutex);
+                if (retried)
+                        return NULL;
+
+                retried = true;
+                snprintf(module_name,
MODULE_NAME_LEN, "usb_f_%s", name); + strreplace(module_name, '-', '_'); + if (request_module(module_name)) + return NULL; + + goto retry; + } + + if (item && try_module_get(item->ops->owner)) { + ops = item->ops; + item->refcnt++; + } + + mutex_unlock(&f_mud_cell_ops_list_mutex); + + return ops; +} + +static void f_mud_cell_put(const struct f_mud_cell_ops *ops) +{ + struct f_mud_cell_ops_list_item *item; + + fmdebug(1, "%s: name=%s\n", __func__, ops->name); + + mutex_lock(&f_mud_cell_ops_list_mutex); + item = f_mud_cell_item_lookup(ops->name); + WARN_ON(!item || !item->refcnt); + if (item && item->refcnt) { + module_put(item->ops->owner); + item->refcnt--; + } + mutex_unlock(&f_mud_cell_ops_list_mutex); +} + +#define F_MUD_DEFINE_BULK_ENDPOINT_DESCRIPTOR(name, addr, size) \ + static struct usb_endpoint_descriptor name = { \ + .bLength = USB_DT_ENDPOINT_SIZE, \ + .bDescriptorType = USB_DT_ENDPOINT, \ + .bEndpointAddress = addr, \ + .bmAttributes = USB_ENDPOINT_XFER_BULK, \ + .wMaxPacketSize = cpu_to_le16(size), \ + } + +#define F_MUD_DEFINE_INT_ENDPOINT_DESCRIPTOR(name) \ + static struct usb_endpoint_descriptor name = { \ + .bLength = USB_DT_ENDPOINT_SIZE, \ + .bDescriptorType = USB_DT_ENDPOINT, \ + .bEndpointAddress = USB_DIR_IN, \ + .bmAttributes = USB_ENDPOINT_XFER_INT, \ + } + +static struct usb_interface_descriptor f_mud_intf = { + .bLength = USB_DT_INTERFACE_SIZE, + .bDescriptorType = USB_DT_INTERFACE, + /*.bNumEndpoints = 2 or 3, */ + .bInterfaceClass = USB_CLASS_VENDOR_SPEC, +}; + +F_MUD_DEFINE_BULK_ENDPOINT_DESCRIPTOR(f_mud_fs_in_desc, USB_DIR_IN, 0); +F_MUD_DEFINE_BULK_ENDPOINT_DESCRIPTOR(f_mud_fs_out_desc, USB_DIR_OUT, 0); +F_MUD_DEFINE_INT_ENDPOINT_DESCRIPTOR(f_mud_fs_int_desc); + +static struct usb_descriptor_header *f_mud_fs_function[] = { + (struct usb_descriptor_header *)&f_mud_intf, + (struct usb_descriptor_header *)&f_mud_fs_in_desc, + (struct usb_descriptor_header *)&f_mud_fs_out_desc, + NULL, /* Room for optional interrupt endpoint */ + NULL, 
+}; + +F_MUD_DEFINE_BULK_ENDPOINT_DESCRIPTOR(f_mud_hs_in_desc, USB_DIR_IN, 512); +F_MUD_DEFINE_BULK_ENDPOINT_DESCRIPTOR(f_mud_hs_out_desc, USB_DIR_OUT, 512); +F_MUD_DEFINE_INT_ENDPOINT_DESCRIPTOR(f_mud_hs_int_desc); + +static struct usb_descriptor_header *f_mud_hs_function[] = { + (struct usb_descriptor_header *)&f_mud_intf, + (struct usb_descriptor_header *)&f_mud_hs_in_desc, + (struct usb_descriptor_header *)&f_mud_hs_out_desc, + NULL, /* Room for optional interrupt endpoint */ + NULL, +}; + +F_MUD_DEFINE_BULK_ENDPOINT_DESCRIPTOR(f_mud_ss_in_desc, USB_DIR_IN, 1024); +F_MUD_DEFINE_BULK_ENDPOINT_DESCRIPTOR(f_mud_ss_out_desc, USB_DIR_OUT, 1024); +F_MUD_DEFINE_INT_ENDPOINT_DESCRIPTOR(f_mud_ss_int_desc); + +static struct usb_ss_ep_comp_descriptor f_mud_ss_bulk_comp_desc = { + .bLength = USB_DT_SS_EP_COMP_SIZE, + .bDescriptorType = USB_DT_SS_ENDPOINT_COMP, +}; + +static struct usb_ss_ep_comp_descriptor f_mud_ss_int_comp_desc = { + .bLength = USB_DT_SS_EP_COMP_SIZE, + .bDescriptorType = USB_DT_SS_ENDPOINT_COMP, +}; + +static struct usb_descriptor_header *f_mud_ss_function[] = { + (struct usb_descriptor_header *)&f_mud_intf, + (struct usb_descriptor_header *)&f_mud_ss_in_desc, + (struct usb_descriptor_header *)&f_mud_ss_bulk_comp_desc, + (struct usb_descriptor_header *)&f_mud_ss_out_desc, + (struct usb_descriptor_header *)&f_mud_ss_bulk_comp_desc, + NULL, /* Room for optional interrupt endpoint, otherwise terminator */ + (struct usb_descriptor_header *)&f_mud_ss_int_comp_desc, + NULL, +}; + +static struct usb_string f_mud_string_defs[] = { + [0].s = "Multifunction USB device", + { } /* end of list */ +}; + +static struct usb_gadget_strings f_mud_string_table = { + .language = 0x0409, /* en-us */ + .strings = f_mud_string_defs, +}; + +static struct usb_gadget_strings *f_mud_strings[] = { + &f_mud_string_table, + NULL, +}; + +static void fmud_irq_req_queue(struct f_mud *fmud) +{ + unsigned int nlongs = DIV_ROUND_UP(fmud->num_cells, BITS_PER_LONG); + unsigned int nbytes = 
DIV_ROUND_UP(fmud->num_cells, BITS_PER_BYTE); + unsigned int i, ilong, ibuf = 0; + int ret; + __le16 *tag = fmud->irq_req->buf; + u8 *buf = fmud->irq_req->buf + sizeof(u16); + + fmdebug(3, "%s: irq_status: %*pb\n", __func__, fmud->num_cells, fmud->irq_status); + + *tag = cpu_to_le16(++fmud->int_tag); + + for (ilong = 0; ilong < nlongs; ilong++) { + unsigned long val = fmud->irq_status[ilong]; + + fmud->irq_status[ilong] = 0; + + for (i = 0; i < (BITS_PER_LONG / BITS_PER_BYTE) && ibuf < nbytes; i++, ibuf++) { + buf[ibuf] = val & 0xff; + val >>= 8; + } + } + + fmdebug(3, "%s: req->buf: %*ph\n", __func__, fmud->irq_req->length, fmud->irq_req->buf); + + ret = usb_ep_queue(fmud->irq_ep, fmud->irq_req, GFP_ATOMIC); + if (!ret) + fmud->irq_queued = true; + else + pr_err("%s: Failed to queue irq req, error=%d\n", __func__, ret); +} + +static void fmud_irq_req_complete(struct usb_ep *ep, struct usb_request *req) +{ + struct f_mud *fmud = req->context; + unsigned long flags; + + switch (req->status) { + case 0: + break; + case -ECONNABORTED: /* hardware forced ep reset */ + case -ECONNRESET: /* request dequeued */ + case -ESHUTDOWN: /* disconnect from host */ + fmdebug(1, "%s: abort, status=%d\n", __func__, req->status); + return; + default: + pr_err("%s: irq request failed, error=%d\n", __func__, req->status); + break; + } + + spin_lock_irqsave(&fmud->irq_lock, flags); + + fmud->irq_queued = false; + + if (!bitmap_empty(fmud->irq_status, fmud->num_cells)) + fmud_irq_req_queue(fmud); + + spin_unlock_irqrestore(&fmud->irq_lock, flags); +} + +/** + * f_mud_irq() - Send an interrupt + * @cell: Cell + * + * This function queues an interrupt to be sent to the host. + * + * Returns: + * True if there's a pending interrupt that has not been sent yet, otherwise false. 
+ */ +bool f_mud_irq(struct f_mud_cell *cell) +{ + struct f_mud *fmud = cell->fmud; + unsigned long flags; + bool ret; + + if (WARN_ON_ONCE(!fmud || !fmud->irq_enabled)) + return false; + + spin_lock_irqsave(&fmud->irq_lock, flags); + + ret = test_and_set_bit(cell->index, fmud->irq_status); + + if (!fmud->irq_queued) + fmud_irq_req_queue(fmud); + + spin_unlock_irqrestore(&fmud->irq_lock, flags); + + fmdebug(1, "%s: cell->index=%u was_set=%u\n", __func__, cell->index, ret); + + return ret; +} +EXPORT_SYMBOL(f_mud_irq); + +static int f_mud_set_alt(struct usb_function *f, unsigned int intf, unsigned int alt) +{ + struct f_mud *fmud = func_to_f_mud(f); + + fmdebug(1, "%s: intf=%u, alt=%u\n", __func__, intf, alt); + + if (alt || intf != fmud->interface_id) + return -EINVAL; + + if (fmud->irq_ep) { + struct usb_composite_dev *cdev = f->config->cdev; + + if (!fmud->irq_ep->desc) { + if (config_ep_by_speed(cdev->gadget, f, fmud->irq_ep)) { + fmud->irq_ep->desc = NULL; + return -EINVAL; + } + } + + usb_ep_disable(fmud->irq_ep); + usb_ep_enable(fmud->irq_ep); + fmud->irq_enabled = true; + } + + return mud_regmap_set_alt(fmud->mreg, f); +} + +static int f_mud_setup(struct usb_function *f, const struct usb_ctrlrequest *ctrl) +{ + struct f_mud *fmud = func_to_f_mud(f); + + return mud_regmap_setup(fmud->mreg, f, ctrl); +} + +static void f_mud_disable(struct usb_function *f) +{ + struct f_mud *fmud = func_to_f_mud(f); + + fmdebug(1, "%s\n", __func__); + + if (fmud->irq_ep) { + fmud->int_tag = 0; + fmud->irq_enabled = false; + usb_ep_disable(fmud->irq_ep); + } + + mud_regmap_stop(fmud->mreg); +} + +static void f_mud_unbind(struct usb_configuration *c, struct usb_function *f) +{ + struct f_mud *fmud = func_to_f_mud(f); + struct f_mud_cell *cell; + unsigned int i; + + fmdebug(1, "%s\n", __func__); + + for (i = 0; i < fmud->num_cells; i++) { + cell = fmud->cells[i]; + cell->ops->unbind(cell); + } + mud_regmap_cleanup(fmud->mreg); + fmud->mreg = NULL; + usb_free_all_descriptors(f); + 
if (fmud->irq_req) { + kfree(fmud->irq_req->buf); + usb_ep_free_request(fmud->irq_ep, fmud->irq_req); + fmud->irq_req = NULL; + bitmap_free(fmud->irq_status); + fmud->irq_status = NULL; + } +} + +static int f_mud_bind(struct usb_configuration *c, struct usb_function *f) +{ + struct usb_composite_dev *cdev = c->cdev; + struct f_mud *fmud = func_to_f_mud(f); + struct usb_ep *in_ep, *out_ep; + unsigned int max_index = 0; + struct usb_string *us; + struct mud_regmap *mreg; + int i, ret; + + fmdebug(1, "%s\n", __func__); + + for (i = 0; i < fmud->num_cells; i++) + max_index = max(fmud->cells[i]->index, max_index); + + if (fmud->num_cells != max_index + 1) { + pr_err("Cell indices are not continuous\n"); + return -EINVAL; + } + + us = usb_gstrings_attach(cdev, f_mud_strings, + ARRAY_SIZE(f_mud_string_defs)); + if (IS_ERR(us)) + return PTR_ERR(us); + + f_mud_intf.iInterface = us[0].id; + + ret = usb_interface_id(c, f); + if (ret < 0) + return ret; + + fmud->interface_id = ret; + f_mud_intf.bInterfaceNumber = fmud->interface_id; + + in_ep = usb_ep_autoconfig(cdev->gadget, &f_mud_fs_in_desc); + out_ep = usb_ep_autoconfig(cdev->gadget, &f_mud_fs_out_desc); + if (!in_ep || !out_ep) + return -ENODEV; + + f_mud_hs_in_desc.bEndpointAddress = f_mud_fs_in_desc.bEndpointAddress; + f_mud_hs_out_desc.bEndpointAddress = f_mud_fs_out_desc.bEndpointAddress; + + f_mud_ss_in_desc.bEndpointAddress = f_mud_fs_in_desc.bEndpointAddress; + f_mud_ss_out_desc.bEndpointAddress = f_mud_fs_out_desc.bEndpointAddress; + + if (fmud->interrupt_interval_ms) { + unsigned int buflen = sizeof(u16) + DIV_ROUND_UP(fmud->num_cells, 8); + unsigned int interval_ms = fmud->interrupt_interval_ms; + unsigned int interval_hs_ss; + + interval_hs_ss = roundup_pow_of_two(interval_ms * 8); /* 125us frames */ + interval_hs_ss = ilog2(interval_hs_ss) + 1; /* 2^(bInterval-1) encoding */ + interval_hs_ss = min_t(unsigned int, interval_hs_ss, 16); /* max 4096ms */ + + f_mud_fs_int_desc.bInterval = min_t(unsigned int, 
interval_ms, 255); + f_mud_fs_int_desc.wMaxPacketSize = cpu_to_le16(buflen); + f_mud_hs_int_desc.bInterval = interval_hs_ss; + f_mud_hs_int_desc.wMaxPacketSize = cpu_to_le16(buflen); + f_mud_ss_int_desc.bInterval = interval_hs_ss; + f_mud_ss_int_desc.wMaxPacketSize = cpu_to_le16(buflen); + f_mud_ss_int_comp_desc.wBytesPerInterval = cpu_to_le16(buflen); + + fmud->irq_ep = usb_ep_autoconfig(cdev->gadget, &f_mud_fs_int_desc); + if (!fmud->irq_ep) + return -ENODEV; + + f_mud_hs_int_desc.bEndpointAddress = f_mud_fs_int_desc.bEndpointAddress; + f_mud_ss_int_desc.bEndpointAddress = f_mud_fs_int_desc.bEndpointAddress; + + fmud->irq_req = usb_ep_alloc_request(fmud->irq_ep, GFP_KERNEL); + if (!fmud->irq_req) { + ret = -ENOMEM; + goto fail_free_irq; + } + + fmud->irq_req->complete = fmud_irq_req_complete; + fmud->irq_req->length = buflen; + fmud->irq_req->context = fmud; + fmud->irq_req->buf = kmalloc(buflen, GFP_KERNEL); + if (!fmud->irq_req->buf) { + ret = -ENOMEM; + goto fail_free_irq; + } + + fmud->irq_status = bitmap_zalloc(fmud->num_cells, GFP_KERNEL); + if (!fmud->irq_status) { + ret = -ENOMEM; + goto fail_free_irq; + } + + f_mud_intf.bNumEndpoints = 3; + f_mud_fs_function[3] = (struct usb_descriptor_header *)&f_mud_fs_int_desc; + f_mud_hs_function[3] = (struct usb_descriptor_header *)&f_mud_hs_int_desc; + f_mud_ss_function[5] = (struct usb_descriptor_header *)&f_mud_ss_int_desc; + } else { + f_mud_intf.bNumEndpoints = 2; + f_mud_fs_function[3] = NULL; + f_mud_hs_function[3] = NULL; + f_mud_ss_function[5] = NULL; + } + + ret = usb_assign_descriptors(f, f_mud_fs_function, f_mud_hs_function, + f_mud_ss_function, NULL); + if (ret) + goto fail_free_irq; + + for (i = 0; i < fmud->num_cells; i++) { + struct f_mud_cell *cell = fmud->cells[i]; + + ret = cell->ops->bind(cell); + if (ret) { + pr_err("%s: Failed to bind cell '%s' ret=%d", + __func__, cell->ops->name, ret); + goto fail_unbind; + } + } + + mreg = mud_regmap_init(cdev, in_ep, out_ep, fmud->cells, fmud->num_cells); + 
if (IS_ERR(mreg)) { + ret = PTR_ERR(mreg); + goto fail_unbind; + } + + fmud->mreg = mreg; + + pr_info("%s: %s speed IN/%s OUT/%s\n", __func__, + gadget_is_superspeed(c->cdev->gadget) ? "super" : + gadget_is_dualspeed(c->cdev->gadget) ? "dual" : "full", + in_ep->name, out_ep->name); + + return 0; + +fail_unbind: + while (--i >= 0) { + struct f_mud_cell *cell = fmud->cells[i]; + + cell->ops->unbind(cell); + } + usb_free_all_descriptors(f); +fail_free_irq: + if (fmud->irq_req) { + kfree(fmud->irq_req->buf); + usb_ep_free_request(fmud->irq_ep, fmud->irq_req); + fmud->irq_req = NULL; + bitmap_free(fmud->irq_status); + fmud->irq_status = NULL; + } + + return ret; +} + +static void f_mud_free_func(struct usb_function *f) +{ + struct f_mud_opts *opts = container_of(f->fi, struct f_mud_opts, func_inst); + struct f_mud *fmud = func_to_f_mud(f); + unsigned int i; + + fmdebug(1, "%s\n", __func__); + + mutex_lock(&opts->lock); + opts->refcnt--; + for (i = 0; i < fmud->num_cells; i++) { + configfs_undepend_item(&fmud->cells[i]->group.cg_item); + fmud->cells[i]->fmud = NULL; + } + mutex_unlock(&opts->lock); + + kfree(fmud->cells); + kfree(fmud); +} + +static struct usb_function *f_mud_alloc_func(struct usb_function_instance *fi) +{ + struct f_mud_opts *opts = container_of(fi, struct f_mud_opts, func_inst); + unsigned int max_index = 0, interrupt_interval_ms = UINT_MAX; + struct usb_function *func; + struct f_mud_cell *cell; + struct f_mud *fmud; + int ret = 0; + + fmdebug(1, "%s\n", __func__); + + fmud = kzalloc(sizeof(*fmud), GFP_KERNEL); + if (!fmud) + return ERR_PTR(-ENOMEM); + + spin_lock_init(&fmud->irq_lock); + + mutex_lock(&opts->lock); + + list_for_each_entry(cell, &opts->cells, node) { + max_index = max(max_index, cell->index); + fmud->num_cells++; + } + + if (!fmud->num_cells) { + ret = -ENOENT; + goto unlock; + } + + if (fmud->num_cells != max_index + 1) { + pr_err("Cell indices are not continuous\n"); + ret = -EINVAL; + goto unlock; + } + + fmud->cells = 
kcalloc(fmud->num_cells, sizeof(*fmud->cells), GFP_KERNEL); + if (!fmud->cells) { + ret = -ENOMEM; + goto unlock; + } + + list_for_each_entry(cell, &opts->cells, node) { + /* Prevent the cell dir from being deleted */ + ret = configfs_depend_item(cell->group.cg_subsys, &cell->group.cg_item); + if (ret) + goto unlock; + + fmud->cells[cell->index] = cell; + cell->fmud = fmud; + if (cell->ops->interrupt_interval_ms) + interrupt_interval_ms = min(cell->ops->interrupt_interval_ms, + interrupt_interval_ms); + } + + if (interrupt_interval_ms != UINT_MAX) { + if (opts->interrupt_interval_ms == -1) + fmud->interrupt_interval_ms = interrupt_interval_ms; + else + fmud->interrupt_interval_ms = opts->interrupt_interval_ms; + } + + fmdebug(1, " interrupt_interval_ms=%d\n", fmud->interrupt_interval_ms); + + opts->refcnt++; +unlock: + mutex_unlock(&opts->lock); + + if (ret) + goto error; + + func = &fmud->func; + func->name = "f_mud"; + func->bind = f_mud_bind; + func->unbind = f_mud_unbind; + func->set_alt = f_mud_set_alt; + func->setup = f_mud_setup; + func->disable = f_mud_disable; + func->free_func = f_mud_free_func; + + return func; + +error: + if (fmud->cells) { + unsigned int i; + + for (i = 0; i < fmud->num_cells; i++) { + cell = fmud->cells[i]; + if (cell) + configfs_undepend_item(&cell->group.cg_item); + } + kfree(fmud->cells); + } + kfree(fmud); + + return ERR_PTR(ret); +} + +F_MUD_OPT_INT(f_mud_opts, interrupt_interval_ms, -1, INT_MAX); + +static struct configfs_attribute *f_mud_attrs[] = { + &f_mud_opts_attr_interrupt_interval_ms, + NULL, +}; + +static struct config_group *f_mud_cell_make_group(struct config_group *group, + const char *name) +{ + struct f_mud_opts *opts = ci_to_f_mud_opts(&group->cg_item); + char *cell_name, *cell_index_str, *buf = NULL; + const struct f_mud_cell_ops *ops; + struct f_mud_cell *cell; + int ret = 0; + u8 index; + + fmdebug(1, "%s: name=%s\n", __func__, name); + + mutex_lock(&opts->lock); + if (opts->refcnt) { + ret = -EBUSY; + goto 
out_unlock; + } + + buf = kstrdup(name, GFP_KERNEL); + if (!buf) { + ret = -ENOMEM; + goto out_unlock; + } + + cell_index_str = buf; + cell_name = strsep(&cell_index_str, "."); + if (!cell_index_str || !strlen(cell_index_str)) { + pr_err("Unable to parse CELL.INDEX for '%s'\n", name); + ret = -EINVAL; + goto out_unlock; + } + + ret = kstrtou8(cell_index_str, 10, &index); + if (ret) + goto out_unlock; + + if (index >= REGMAP_USB_MAX_MAPS) { + pr_err("Cell index out of range for '%s'\n", name); + ret = -EINVAL; + goto out_unlock; + } + + ops = f_mud_cell_get(cell_name); + if (!ops) { + ret = -ENOENT; + goto out_unlock; + } + + cell = ops->alloc(); + if (IS_ERR(cell)) { + f_mud_cell_put(ops); + ret = PTR_ERR(cell); + goto out_unlock; + } + + cell->ops = ops; + cell->index = index; + list_add(&cell->node, &opts->cells); +out_unlock: + mutex_unlock(&opts->lock); + + kfree(buf); + + return ret ? ERR_PTR(ret) : &cell->group; +} + +/** + * f_mud_cell_item_release() - Cell configfs item release + * @item: Configfs item + * + * Drivers should use this as their &configfs_item_operations.release callback + * in their &config_item_type on the cells &config_group. 
+ */ +void f_mud_cell_item_release(struct config_item *item) +{ + struct f_mud_cell *cell = ci_to_f_mud_cell(item); + const struct f_mud_cell_ops *ops = cell->ops; + + fmdebug(1, "%s: cell=%px\n", __func__, cell); + + ops->free(cell); + f_mud_cell_put(ops); +} +EXPORT_SYMBOL(f_mud_cell_item_release); + +static void f_mud_cell_drop_item(struct config_group *group, struct config_item *item) +{ + struct f_mud_opts *opts = ci_to_f_mud_opts(&group->cg_item); + struct f_mud_cell *cell = ci_to_f_mud_cell(item); + + fmdebug(1, "%s: cell=%px\n", __func__, cell); + + mutex_lock(&opts->lock); + list_del(&cell->node); + mutex_unlock(&opts->lock); + + config_item_put(item); +} + +static struct configfs_group_operations f_mud_cell_group_ops = { + .make_group = f_mud_cell_make_group, + .drop_item = f_mud_cell_drop_item, +}; + +static void f_mud_attr_release(struct config_item *item) +{ + struct f_mud_opts *opts = ci_to_f_mud_opts(item); + + fmdebug(1, "%s\n", __func__); + + usb_put_function_instance(&opts->func_inst); +} + +static struct configfs_item_operations f_mud_item_ops = { + .release = f_mud_attr_release, +}; + +static const struct config_item_type f_mud_func_type = { + .ct_item_ops = &f_mud_item_ops, + .ct_group_ops = &f_mud_cell_group_ops, + .ct_attrs = f_mud_attrs, + .ct_owner = THIS_MODULE, +}; + +static void f_mud_free_func_inst(struct usb_function_instance *f) +{ + struct f_mud_opts *opts = container_of(f, struct f_mud_opts, func_inst); + + fmdebug(1, "%s\n", __func__); + + mutex_destroy(&opts->lock); + kfree(opts); +} + +static struct usb_function_instance *f_mud_alloc_func_inst(void) +{ + struct f_mud_opts *opts; + + fmdebug(1, "%s\n", __func__); + + opts = kzalloc(sizeof(*opts), GFP_KERNEL); + if (!opts) + return ERR_PTR(-ENOMEM); + + mutex_init(&opts->lock); + INIT_LIST_HEAD(&opts->cells); + opts->func_inst.free_func_inst = f_mud_free_func_inst; + opts->interrupt_interval_ms = -1; + + config_group_init_type_name(&opts->func_inst.group, "", &f_mud_func_type); + + 
return &opts->func_inst; +} + +DECLARE_USB_FUNCTION_INIT(f_mud, f_mud_alloc_func_inst, f_mud_alloc_func); + +MODULE_DESCRIPTION("Multifunction USB Device"); +MODULE_AUTHOR("Noralf Trønnes"); +MODULE_LICENSE("GPL"); diff --git a/drivers/usb/gadget/function/f_mud.h b/drivers/usb/gadget/function/f_mud.h new file mode 100644 index 000000000000..ce3833530a1a --- /dev/null +++ b/drivers/usb/gadget/function/f_mud.h @@ -0,0 +1,210 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ + +#ifndef __LINUX_F_MUD_H +#define __LINUX_F_MUD_H + +#include +#include +#include + +struct module; + +struct f_mud; +struct f_mud_cell; + +/** + * struct f_mud_cell_ops - Cell driver operations + * @name: Name, passed on to the host as the regmap name + * @owner: Owner module + * @regval_bytes: Number of bytes in the register value (1, 2 or 4) + * @max_transfer_size: Maximum size of one transfer + * @compression: Supported compression bit mask + * @interrupt_interval_ms: Interrupt interval in milliseconds (optional) + * Setting this adds an interrupt endpoint to the interface. + * If more than one cell sets this then the smallest value is used. + * @alloc: Callback that allocates and returns an &f_mud_cell + * @free: Frees the allocated cell + * @bind: Called when the gadget is bound to the controller + * @unbind: Called when the gadget is unbound from the controller + * @enable: Called when the USB cable is plugged in (optional) + * @disable: Called when the USB cable is unplugged (optional) + * @readreg: Called when the host is writing to a register. If the host asks for + * compression, the cell is free to ignore it. If it does compress, + * then the len argument must be updated to reflect the actual buffer size. + * @writereg: Called when the host is reading from a register + * + * All callbacks run in process context. 
+ */ +struct f_mud_cell_ops { + const char *name; + struct module *owner; + + unsigned int regval_bytes; + unsigned int max_transfer_size; + u8 compression; + unsigned int interrupt_interval_ms; + + struct f_mud_cell *(*alloc)(void); + void (*free)(struct f_mud_cell *cell); + + int (*bind)(struct f_mud_cell *cell); + void (*unbind)(struct f_mud_cell *cell); + + void (*enable)(struct f_mud_cell *cell); + void (*disable)(struct f_mud_cell *cell); + + int (*readreg)(struct f_mud_cell *cell, unsigned int regnr, + void *buf, size_t *len, u8 compression); + int (*writereg)(struct f_mud_cell *cell, unsigned int regnr, + const void *buf, size_t len, u8 compression); +}; + +/** + * struct f_mud_cell - Cell + * @node: List node in the configfs entry list + * @index: Cell index from configfs entry + * @ops: Cell operations + * @fmud: Parent structure + * @group: Configfs group + */ +struct f_mud_cell { + struct list_head node; + unsigned int index; + const struct f_mud_cell_ops *ops; + struct f_mud *fmud; + struct config_group group; +}; + +static inline struct f_mud_cell *ci_to_f_mud_cell(struct config_item *item) +{ + return container_of(to_config_group(item), struct f_mud_cell, group); +} + +bool f_mud_irq(struct f_mud_cell *cell); +void f_mud_cell_item_release(struct config_item *item); + +int f_mud_cell_register(const struct f_mud_cell_ops *ops); +void f_mud_cell_unregister(const struct f_mud_cell_ops *ops); + +#define DECLARE_F_MUD_CELL_INIT(_ops) \ + static int __init _ops ## _mod_init(void) \ + { \ + return f_mud_cell_register(&_ops); \ + } \ + module_init(_ops ## _mod_init); \ + static void __exit _ops ## _mod_exit(void) \ + { \ + f_mud_cell_unregister(&_ops); \ + } \ + module_exit(_ops ## _mod_exit) + +#define F_MUD_OPT_INT(_typ, _name, _min, _max) \ +static ssize_t _typ ## _ ## _name ## _show(struct config_item *item, \ + char *page) \ +{ \ + struct _typ *opts = ci_to_ ## _typ(item); \ + ssize_t ret; \ + \ + mutex_lock(&opts->lock); \ + ret = sprintf(page, "%d\n", 
+		      opts->_name); \
+	mutex_unlock(&opts->lock); \
+ \
+	return ret; \
+} \
+ \
+static ssize_t _typ ## _ ## _name ## _store(struct config_item *item, \
+					    const char *page, size_t len) \
+{ \
+	struct _typ *opts = ci_to_ ## _typ(item); \
+	int ret, num; \
+ \
+	mutex_lock(&opts->lock); \
+	if (opts->refcnt) { \
+		ret = -EBUSY; \
+		goto out_unlock; \
+	} \
+ \
+	ret = kstrtoint(page, 0, &num); \
+	if (ret) \
+		goto out_unlock; \
+ \
+	if (num >= (_min) && num <= (_max)) \
+		opts->_name = num; \
+	else \
+		ret = -EINVAL; \
+out_unlock: \
+	mutex_unlock(&opts->lock); \
+ \
+	return ret ? ret : len; \
+} \
+ \
+CONFIGFS_ATTR(_typ ## _, _name)
+
+#define F_MUD_OPT_STR(_typ, _name) \
+static ssize_t _typ ## _ ## _name ## _show(struct config_item *item, \
+					   char *page) \
+{ \
+	struct _typ *opts = ci_to_ ## _typ(item); \
+	ssize_t ret; \
+ \
+	mutex_lock(&opts->lock); \
+ \
+	if (opts->_name) { \
+		ret = strscpy(page, opts->_name, PAGE_SIZE); \
+	} else { \
+		page[0] = '\0'; \
+		ret = 0; \
+	} \
+ \
+	mutex_unlock(&opts->lock); \
+ \
+	return ret; \
+} \
+ \
+static ssize_t _typ ## _ ## _name ## _store(struct config_item *item, \
+					    const char *page, size_t len) \
+{ \
+	struct _typ *opts = ci_to_ ## _typ(item); \
+	ssize_t ret = 0; \
+	char *buf; \
+ \
+	mutex_lock(&opts->lock); \
+	if (opts->refcnt) { \
+		ret = -EBUSY; \
+		goto out_unlock; \
+	} \
+ \
+	buf = kstrndup(page, len, GFP_KERNEL); \
+	if (!buf) { \
+		ret = -ENOMEM; \
+		goto out_unlock; \
+	} \
+ \
+	kfree(opts->_name); \
+	opts->_name = buf; \
+out_unlock: \
+	mutex_unlock(&opts->lock); \
+ \
+	return ret ?
ret : len; \ +} \ + \ +CONFIGFS_ATTR(_typ ## _, _name) + +/* mud_regmap.c */ +struct mud_regmap; +struct usb_composite_dev; +struct usb_ctrlrequest; +struct usb_ep; +struct usb_function; + +void mud_regmap_stop(struct mud_regmap *mreg); +int mud_regmap_setup(struct mud_regmap *mreg, struct usb_function *f, + const struct usb_ctrlrequest *ctrl); +int mud_regmap_set_alt(struct mud_regmap *mreg, struct usb_function *f); + +struct mud_regmap *mud_regmap_init(struct usb_composite_dev *cdev, + struct usb_ep *in_ep, struct usb_ep *out_ep, + struct f_mud_cell **cells, unsigned int num_cells); +void mud_regmap_cleanup(struct mud_regmap *mreg); + +#endif diff --git a/drivers/usb/gadget/function/mud_regmap.c b/drivers/usb/gadget/function/mud_regmap.c new file mode 100644 index 000000000000..d5fd5d2d96a7 --- /dev/null +++ b/drivers/usb/gadget/function/mud_regmap.c @@ -0,0 +1,936 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Copyright 2020 Noralf Trønnes + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "f_mud.h" + +struct mud_regmap_transfer { + struct mud_regmap *mreg; + + spinlock_t lock; + struct list_head node; + + struct usb_request *header_req; + struct usb_request *buf_out_req; + struct usb_request *buf_in_req; + struct usb_request *status_req; + + struct usb_request *current_req; + + u32 tag; + bool in; + unsigned int index; + unsigned int regnr; + u32 flags; +}; + +struct mud_regmap { + struct usb_ep *in_ep; + struct usb_ep *out_ep; + struct usb_composite_dev *cdev; + + struct f_mud_cell **cells; + unsigned int num_cells; + + unsigned int max_transfer_size; + struct mud_regmap_transfer *transfers[2]; + + spinlock_t lock; + + struct list_head free_transfers; + struct list_head pending_transfers; + + struct workqueue_struct *workq; + struct work_struct work; + wait_queue_head_t waitq; + + bool pending_protocol_reset; + bool pending_stall; + bool stalled; + bool run; + + 
bool header_queued; + u8 errno; +}; + +/* Temporary debugging aid */ +static unsigned int debug = 0; + +#define udebug(level, fmt, ...) \ +do { \ + if ((level) <= debug) \ + printk(KERN_DEBUG fmt, ##__VA_ARGS__); \ +} while (0) + +#undef DBG +#define DBG INFO + +static int mud_regmap_usb_ep_set_halt(struct usb_ep *ep) +{ + int retries = 10, ret; + + while (retries-- > 0) { + ret = usb_ep_set_halt(ep); + if (ret != -EAGAIN) + break; + msleep(100); + } + + return ret; +} + +static void mud_regmap_stall(struct mud_regmap *mreg) +{ + struct usb_request *current_req[2] = { NULL, NULL }; + struct mud_regmap_transfer *transfer; + int ret_in, ret_out, timeout; + unsigned int i; + + udebug(0, "%s:\n", __func__); + + for (i = 0; i < 2; i++) { + transfer = mreg->transfers[i]; + spin_lock_irq(&transfer->lock); + current_req[i] = transfer->current_req; + spin_unlock_irq(&transfer->lock); + if (current_req[i]) { + if (current_req[i] == transfer->header_req || + current_req[i] == transfer->buf_out_req) + usb_ep_dequeue(mreg->out_ep, current_req[i]); + else + usb_ep_dequeue(mreg->in_ep, current_req[i]); + } + } + + for (timeout = 20; timeout > 0; timeout--) { + for (i = 0; i < 2; i++) { + transfer = mreg->transfers[i]; + spin_lock_irq(&transfer->lock); + current_req[i] = transfer->current_req; + spin_unlock_irq(&transfer->lock); + } + if (!current_req[0] && !current_req[1]) + break; + msleep(100); + } + + if (!timeout) + pr_warn("%s: timeout waiting for transfers to complete: tr0=%u, tr1=%u\n", + __func__, !!current_req[0], !!current_req[1]); + + ret_in = mud_regmap_usb_ep_set_halt(mreg->in_ep); + ret_out = mud_regmap_usb_ep_set_halt(mreg->out_ep); + if (ret_in || ret_out) + pr_err("%s: Failed to halt endpoint(s) ret_in=%d, ret_out=%d\n", + __func__, ret_in, ret_out); + + spin_lock_irq(&mreg->lock); + mreg->pending_stall = false; + mreg->stalled = true; + spin_unlock_irq(&mreg->lock); +} + +static void mud_regmap_queue_stall(struct mud_regmap *mreg, int error) +{ + unsigned long 
flags; + + udebug(0, "%s: error=%d\n", __func__, error); + + if (error < -255 || error > 0) + error = -EREMOTEIO; + + spin_lock_irqsave(&mreg->lock, flags); + if (!mreg->pending_stall) { + mreg->errno = -error; + mreg->pending_stall = true; + wake_up(&mreg->waitq); + } + spin_unlock_irqrestore(&mreg->lock, flags); +} + +static int mud_regmap_queue_header(struct mud_regmap *mreg) +{ + struct mud_regmap_transfer *transfer = NULL; + bool header_queued, run; + unsigned long flags; + int ret; + + spin_lock_irqsave(&mreg->lock, flags); + header_queued = mreg->header_queued; + run = mreg->run; + if (!header_queued && run) { + transfer = list_first_entry_or_null(&mreg->free_transfers, struct mud_regmap_transfer, node); + if (transfer) { + mreg->header_queued = true; + list_del_init(&transfer->node); + } + } + spin_unlock_irqrestore(&mreg->lock, flags); + + udebug(4, "%s: header_queued=%u, transfer=%px\n", __func__, + header_queued, transfer); + + if (header_queued || !run) + return 0; + + if (!transfer) { + udebug(4, "Run out of transfers\n"); + return 0; + } + + spin_lock_irqsave(&transfer->lock, flags); + ret = usb_ep_queue(mreg->out_ep, transfer->header_req, GFP_ATOMIC); + if (!ret) + transfer->current_req = transfer->header_req; + spin_unlock_irqrestore(&transfer->lock, flags); + if (ret) { + pr_warn("Queueing header failed, ret=%d\n", ret); + mud_regmap_queue_stall(mreg, ret); + } + + return ret; +} + +static void mud_regmap_queue_status(struct mud_regmap_transfer *transfer, int error) +{ + struct regmap_usb_status *status = transfer->status_req->buf; + unsigned long flags; + int ret; + + udebug(4, "%s: tag=%u, error=%d\n", __func__, transfer->tag, error); + + if (error < -255 || error > 0) + error = -EREMOTEIO; + + spin_lock_irqsave(&transfer->lock, flags); + status->tag = cpu_to_le16(transfer->tag); + status->status = -error; + ret = usb_ep_queue(transfer->mreg->in_ep, transfer->status_req, GFP_ATOMIC); + if (!ret) + transfer->current_req = transfer->status_req; + 
spin_unlock_irqrestore(&transfer->lock, flags); + if (ret) { + pr_warn("Queueing status failed, ret=%d\n", ret); + mud_regmap_queue_stall(transfer->mreg, ret); + } +} + +static void mud_regmap_queue_for_processing(struct mud_regmap_transfer *transfer) +{ + struct mud_regmap *mreg = transfer->mreg; + unsigned long flags; + + spin_lock_irqsave(&mreg->lock, flags); + list_add_tail(&transfer->node, &mreg->pending_transfers); + spin_unlock_irqrestore(&mreg->lock, flags); + + wake_up(&mreg->waitq); +} + +static bool mud_regmap_check_req_status(struct usb_request *req) +{ + switch (req->status) { + case -ECONNABORTED: /* hardware forced ep reset */ + case -ECONNRESET: /* request dequeued */ + case -ESHUTDOWN: /* disconnect from host */ + case -EOVERFLOW: /* buffer overrun on read means that + * we didn't provide a big enough + * buffer. + */ + case -EREMOTEIO: /* short read */ + udebug(0, "%s: bail out, status=%d\n", __func__, req->status); + return false; + } + + return true; +} + +static void mud_regmap_header_req_complete(struct usb_ep *ep, struct usb_request *req) +{ + struct mud_regmap_transfer *transfer = req->context; + struct mud_regmap *mreg = transfer->mreg; + struct regmap_usb_header *header = req->buf; + unsigned int index = le16_to_cpu(header->index); + u32 hflags = le32_to_cpu(header->flags); + bool in = hflags & REGMAP_USB_HEADER_FLAG_IN; + u32 length = le32_to_cpu(header->length); + unsigned long flags; + bool run; + int ret; + + udebug(4, "%s: status=%d, actual=%u, length=%u\n", __func__, req->status, req->actual, req->length); + udebug(4, " signature=0x%x, index=%u, tag=%u, flags=0x%x, regnr=0x%x, length=%u, in=%u\n", + le32_to_cpu(header->signature), le16_to_cpu(header->index), le16_to_cpu(header->tag), + le32_to_cpu(header->flags), le32_to_cpu(header->regnr), le32_to_cpu(header->length), in); + + spin_lock_irqsave(&transfer->lock, flags); + transfer->current_req = NULL; + transfer->in = in; + transfer->index = index; + transfer->tag = 
le16_to_cpu(header->tag); + transfer->flags = hflags; + transfer->regnr = le32_to_cpu(header->regnr); + spin_unlock_irqrestore(&transfer->lock, flags); + + if (!mud_regmap_check_req_status(req)) + return; + + spin_lock_irqsave(&mreg->lock, flags); + mreg->header_queued = false; + run = mreg->run; + spin_unlock_irqrestore(&mreg->lock, flags); + + if (!run) + return; + + if (req->status) { + udebug(0, "%s: Failed, status=%d\n", __func__, req->status); + ret = req->status; + goto error; + } + + if (req->actual != req->length) { + udebug(0, "%s: Wrong length\n", __func__); + ret = -EREMOTEIO; + goto error; + } + + if (le32_to_cpu(header->signature) != REGMAP_USB_HEADER_SIGNATURE) { + udebug(0, "%s: Wrong signature\n", __func__); + ret = -EINVAL; + goto error; + } + + if (index >= mreg->num_cells) { + udebug(0, "%s: No such index %u\n", __func__, index); + ret = -ENOENT; + goto error; + } + + /* FIXME: Temporary test code */ + if (index == 2 && !strcmp(mreg->cells[index]->ops->name, "mud-test")) { + udebug(0, "%s: Test stall + reset\n", __func__); + ret = -ENOENT; + goto error; + } + + if (length > mreg->max_transfer_size) { + udebug(0, "%s: Length overflow %u\n", __func__, length); + ret = -EOVERFLOW; + goto error; + } + + if (in) { + transfer->buf_in_req->length = length; + mud_regmap_queue_for_processing(transfer); + mud_regmap_queue_header(mreg); + } else { + transfer->buf_out_req->length = length; + + spin_lock_irqsave(&transfer->lock, flags); + ret = usb_ep_queue(mreg->out_ep, transfer->buf_out_req, GFP_ATOMIC); + if (!ret) + transfer->current_req = transfer->buf_out_req; + spin_unlock_irqrestore(&transfer->lock, flags); + if (ret) { + pr_warn("Queueing buf out failed, ret=%d\n", ret); + goto error; + } + } + + return; + +error: + mud_regmap_queue_stall(mreg, ret); +} + +static void mud_regmap_buf_out_req_complete(struct usb_ep *ep, struct usb_request *req) +{ + struct mud_regmap_transfer *transfer = req->context; + struct mud_regmap *mreg = transfer->mreg; + 
unsigned long flags; + bool run; + int ret; + + udebug(4, "%s: status=%d, actual=%u, length=%u, tag=%u\n", __func__, + req->status, req->actual, req->length, transfer->tag); + + spin_lock_irqsave(&transfer->lock, flags); + transfer->current_req = NULL; + spin_unlock_irqrestore(&transfer->lock, flags); + + if (!mud_regmap_check_req_status(req)) + return; + + spin_lock_irqsave(&mreg->lock, flags); + run = mreg->run; + spin_unlock_irqrestore(&mreg->lock, flags); + + if (!run) + return; + + if (req->status) { + udebug(0, "%s: Failed, status=%d\n", __func__, req->status); + ret = req->status; + goto error; + } + + if (req->actual != req->length) { + udebug(0, "%s: Wrong length\n", __func__); + ret = -EREMOTEIO; + goto error; + } + + mud_regmap_queue_for_processing(transfer); + mud_regmap_queue_header(mreg); + + return; + +error: + mud_regmap_queue_stall(mreg, ret); +} + +static void mud_regmap_buf_in_req_complete(struct usb_ep *ep, struct usb_request *req) +{ + struct mud_regmap_transfer *transfer = req->context; + struct mud_regmap *mreg = transfer->mreg; + unsigned long flags; + bool run; + + udebug(4, "%s: status=%d, actual=%u, length=%u, tag=%u\n", __func__, + req->status, req->actual, req->length, transfer->tag); + + spin_lock_irqsave(&transfer->lock, flags); + transfer->current_req = NULL; + spin_unlock_irqrestore(&transfer->lock, flags); + + if (!mud_regmap_check_req_status(req)) + return; + + spin_lock_irqsave(&mreg->lock, flags); + run = mreg->run; + spin_unlock_irqrestore(&mreg->lock, flags); + + if (!run) + return; + + if (req->actual != req->length) { + udebug(0, "%s: Wrong length\n", __func__); + mud_regmap_queue_stall(mreg, -EREMOTEIO); + } +} + +static void mud_regmap_status_req_complete(struct usb_ep *ep, struct usb_request *req) +{ + struct mud_regmap_transfer *transfer = req->context; + struct mud_regmap *mreg = transfer->mreg; + unsigned long flags; + bool run; + + udebug(4, "%s: status=%d, actual=%u, length=%u, tag=%u\n", __func__, + req->status, 
req->actual, req->length, transfer->tag); + + spin_lock_irqsave(&transfer->lock, flags); + transfer->current_req = NULL; + spin_unlock_irqrestore(&transfer->lock, flags); + + if (!mud_regmap_check_req_status(req)) + return; + + spin_lock_irqsave(&mreg->lock, flags); + run = mreg->run; + list_add_tail(&transfer->node, &mreg->free_transfers); + spin_unlock_irqrestore(&mreg->lock, flags); + + if (!run) + return; + + if (req->actual != req->length) { + udebug(0, "%s: Wrong length\n", __func__); + mud_regmap_queue_stall(mreg, -EREMOTEIO); + return; + } + + /* Make sure it's queued */ + mud_regmap_queue_header(mreg); +} + +static void mud_regmap_free_transfer(struct mud_regmap_transfer *transfer) +{ + if (!transfer) + return; + + kfree(transfer->status_req->buf); + usb_ep_free_request(transfer->mreg->in_ep, transfer->status_req); + + usb_ep_free_request(transfer->mreg->in_ep, transfer->buf_in_req); + + kfree(transfer->buf_out_req->buf); + usb_ep_free_request(transfer->mreg->out_ep, transfer->buf_out_req); + + kfree(transfer->header_req->buf); + usb_ep_free_request(transfer->mreg->out_ep, transfer->header_req); + + kfree(transfer); +} + +static struct mud_regmap_transfer *mud_regmap_alloc_transfer(struct mud_regmap *mreg) +{ + struct mud_regmap_transfer *transfer; + struct regmap_usb_header *header; + struct regmap_usb_status *status; + void *buf; + + transfer = kzalloc(sizeof(*transfer), GFP_KERNEL); + header = kzalloc(sizeof(*header), GFP_KERNEL); + status = kzalloc(sizeof(*status), GFP_KERNEL); + buf = kmalloc(mreg->max_transfer_size, GFP_KERNEL); + if (!transfer || !header || !status || !buf) + goto free; + + spin_lock_init(&transfer->lock); + transfer->mreg = mreg; + + transfer->header_req = usb_ep_alloc_request(mreg->out_ep, GFP_KERNEL); + if (!transfer->header_req) + goto free; + + transfer->header_req->context = transfer; + transfer->header_req->complete = mud_regmap_header_req_complete; + transfer->header_req->buf = header; + transfer->header_req->length = 
sizeof(*header); + + transfer->buf_out_req = usb_ep_alloc_request(mreg->out_ep, GFP_KERNEL); + if (!transfer->buf_out_req) + goto free; + + transfer->buf_out_req->context = transfer; + transfer->buf_out_req->complete = mud_regmap_buf_out_req_complete; + transfer->buf_out_req->buf = buf; + + transfer->buf_in_req = usb_ep_alloc_request(mreg->in_ep, GFP_KERNEL); + if (!transfer->buf_in_req) + goto free; + + transfer->buf_in_req->context = transfer; + transfer->buf_in_req->complete = mud_regmap_buf_in_req_complete; + transfer->buf_in_req->buf = buf; + + transfer->status_req = usb_ep_alloc_request(mreg->in_ep, GFP_KERNEL); + if (!transfer->status_req) + goto free; + + transfer->status_req->context = transfer; + transfer->status_req->complete = mud_regmap_status_req_complete; + transfer->status_req->buf = status; + transfer->status_req->length = sizeof(*status); + status->signature = cpu_to_le32(REGMAP_USB_STATUS_SIGNATURE); + + return transfer; + +free: + if (transfer->status_req) + usb_ep_free_request(mreg->in_ep, transfer->status_req); + if (transfer->buf_in_req) + usb_ep_free_request(mreg->in_ep, transfer->buf_in_req); + if (transfer->buf_out_req) + usb_ep_free_request(mreg->out_ep, transfer->buf_out_req); + if (transfer->header_req) + usb_ep_free_request(mreg->out_ep, transfer->header_req); + kfree(buf); + kfree(status); + kfree(header); + kfree(transfer); + + return NULL; +} + +static void mud_regmap_free_transfers(struct mud_regmap *mreg) +{ + mud_regmap_free_transfer(mreg->transfers[0]); + mud_regmap_free_transfer(mreg->transfers[1]); +} + +static int mud_regmap_alloc_transfers(struct mud_regmap *mreg) +{ +retry: + udebug(1, "%s: max_transfer_size=%u\n", __func__, + mreg->max_transfer_size); + + mreg->transfers[0] = mud_regmap_alloc_transfer(mreg); + mreg->transfers[1] = mud_regmap_alloc_transfer(mreg); + if (!mreg->transfers[0] || !mreg->transfers[1]) { + mud_regmap_free_transfers(mreg); + if (mreg->max_transfer_size < 512) + return -ENOMEM; /* No point in 
retrying we'll fail later anyway */ + + mreg->max_transfer_size /= 2; + goto retry; + } + + list_add_tail(&mreg->transfers[0]->node, &mreg->free_transfers); + list_add_tail(&mreg->transfers[1]->node, &mreg->free_transfers); + + return 0; +} + +static void mud_regmap_reset_state(struct mud_regmap *mreg) +{ + unsigned long flags; + + udebug(5, "%s:\n", __func__); + + spin_lock_irqsave(&mreg->lock, flags); + + mreg->pending_protocol_reset = false; + mreg->pending_stall = false; + mreg->stalled = false; + mreg->header_queued = false; + mreg->errno = 0; + + INIT_LIST_HEAD(&mreg->free_transfers); + INIT_LIST_HEAD(&mreg->pending_transfers); + + INIT_LIST_HEAD(&mreg->transfers[0]->node); + list_add_tail(&mreg->transfers[0]->node, &mreg->free_transfers); + INIT_LIST_HEAD(&mreg->transfers[1]->node); + list_add_tail(&mreg->transfers[1]->node, &mreg->free_transfers); + + spin_unlock_irqrestore(&mreg->lock, flags); +} + +static void mud_regmap_protocol_reset(struct mud_regmap *mreg) +{ + struct usb_composite_dev *cdev = mreg->cdev; + int ret; + + udebug(0, "%s: IN\n", __func__); + + mud_regmap_reset_state(mreg); + + /* Complete the reset request and return the error */ + ret = usb_ep_queue(cdev->gadget->ep0, cdev->req, GFP_ATOMIC); + if (ret < 0) + /* FIXME: Should we stall (again) and let the host retry? 
*/ + ERROR(cdev, "usb_ep_queue error on ep0 %d\n", ret); + + mud_regmap_queue_header(mreg); + + udebug(0, "%s: OUT\n", __func__); +} + +static void mud_regmap_worker(struct work_struct *work) +{ + struct mud_regmap *mreg = container_of(work, struct mud_regmap, work); + struct mud_regmap_transfer *transfer; + unsigned int index, regnr; + struct f_mud_cell *cell; + bool in, stalled; + int ret, error; + size_t len; + u32 flags; + + for (index = 0; index < mreg->num_cells; index++) { + cell = mreg->cells[index]; + if (cell->ops->enable) + cell->ops->enable(cell); + } + + while (mreg->run) { + spin_lock_irq(&mreg->lock); + stalled = mreg->stalled; + transfer = list_first_entry_or_null(&mreg->pending_transfers, struct mud_regmap_transfer, node); + if (transfer) + list_del_init(&transfer->node); + spin_unlock_irq(&mreg->lock); + + if (mreg->pending_protocol_reset) { + mud_regmap_protocol_reset(mreg); + continue; + } + + if (mreg->pending_stall) { + mud_regmap_stall(mreg); + continue; + } + + if (stalled || !transfer) { + /* Use _interruptible to avoid triggering hung task warnings */ + wait_event_interruptible(mreg->waitq, !mreg->run || + mreg->pending_stall || + mreg->pending_protocol_reset || + !list_empty(&mreg->pending_transfers)); + continue; + } + + spin_lock_irq(&transfer->lock); + index = transfer->index; + in = transfer->in; + regnr = transfer->regnr; + flags = transfer->flags; + if (in) + len = transfer->buf_in_req->length; + else + len = transfer->buf_out_req->length; + spin_unlock_irq(&transfer->lock); + + // FIXME: check len? + + cell = mreg->cells[index]; + + if (in) { + udebug(2, "cell->ops->readreg(regnr=0x%02x, len=%zu)\n", regnr, len); + + error = cell->ops->readreg(cell, regnr, + transfer->buf_in_req->buf, &len, + flags & REGMAP_USB_HEADER_FLAG_COMPRESSION_MASK); + if (error) { + udebug(2, " error=%d\n", error); + // FIXME: Stall or run its course to status stage? Stalling takes time... 
+ } + + /* In case the buffer was compressed */ + transfer->buf_in_req->length = len; + + ret = usb_ep_queue(mreg->in_ep, transfer->buf_in_req, GFP_KERNEL); + if (ret) { + pr_warn("Failed to queue buf_in_req ret=%d\n", ret); + mud_regmap_queue_stall(transfer->mreg, ret); + continue; + } + + mud_regmap_queue_status(transfer, error); + } else { + udebug(2, "cell->ops->writereg(regnr=0x%02x, len=%zu)\n", regnr, len); + + error = cell->ops->writereg(cell, regnr, + transfer->buf_out_req->buf, len, + flags & REGMAP_USB_HEADER_FLAG_COMPRESSION_MASK); + if (error) + udebug(2, " error=%d\n", error); + + mud_regmap_queue_status(transfer, error); + } + } + + for (index = 0; index < mreg->num_cells; index++) { + cell = mreg->cells[index]; + if (cell->ops->disable) + cell->ops->disable(cell); + } +} + +static int mud_regmap_start(struct mud_regmap *mreg) +{ + unsigned long flags; + + udebug(1, "%s:\n", __func__); + + mud_regmap_reset_state(mreg); + + usb_ep_enable(mreg->in_ep); + usb_ep_enable(mreg->out_ep); + + spin_lock_irqsave(&mreg->lock, flags); + mreg->run = true; + spin_unlock_irqrestore(&mreg->lock, flags); + + queue_work(mreg->workq, &mreg->work); + + return mud_regmap_queue_header(mreg); +} + +void mud_regmap_stop(struct mud_regmap *mreg) +{ + unsigned long flags; + + udebug(1, "%s:\n", __func__); + + spin_lock_irqsave(&mreg->lock, flags); + mreg->run = false; + spin_unlock_irqrestore(&mreg->lock, flags); + + wake_up(&mreg->waitq); + + usb_ep_disable(mreg->in_ep); + usb_ep_disable(mreg->out_ep); +} + +int mud_regmap_setup(struct mud_regmap *mreg, struct usb_function *f, + const struct usb_ctrlrequest *ctrl) +{ + struct usb_composite_dev *cdev = f->config->cdev; + u16 length = le16_to_cpu(ctrl->wLength); + struct usb_request *req = cdev->req; + int ret; + + udebug(1, "%s: bRequest=0x%x, length=%u\n", __func__, ctrl->bRequest, length); + + if (ctrl->bRequestType != (USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_INTERFACE)) + return -EINVAL; + + if (ctrl->bRequest == 
USB_REQ_GET_DESCRIPTOR) { + u8 type = le16_to_cpu(ctrl->wValue) >> 8; + u8 index = le16_to_cpu(ctrl->wValue) & 0xff; + + udebug(1, " USB_REQ_GET_DESCRIPTOR: type=%u index=%u\n", type, index); + + if (type == REGMAP_USB_DT_INTERFACE && index == 0) { + struct regmap_usb_interface_descriptor *desc = req->buf; + + desc->bLength = sizeof(*desc); + desc->bDescriptorType = REGMAP_USB_DT_INTERFACE; + desc->bNumRegmaps = mreg->num_cells; + req->zero = 0; + req->length = min_t(unsigned int, length, sizeof(*desc)); + } else if (type == REGMAP_USB_DT_MAP) { + struct regmap_usb_map_descriptor *desc = req->buf; + unsigned int max_transfer_size; + struct f_mud_cell *cell; + + if (index >= mreg->num_cells) + return -ENOENT; + + cell = mreg->cells[index]; + + desc->bLength = sizeof(*desc); + desc->bDescriptorType = REGMAP_USB_DT_MAP; + if (strscpy_pad(desc->name, cell->ops->name, 32) < 0) + return -EINVAL; + desc->bRegisterValueBits = cell->ops->regval_bytes * 8; + desc->bCompression = cell->ops->compression; + max_transfer_size = min(mreg->max_transfer_size, + cell->ops->max_transfer_size); + desc->bMaxTransferSizeOrder = ilog2(max_transfer_size); + req->zero = 0; + req->length = min_t(unsigned int, length, sizeof(*desc)); + } else { + return -EINVAL; + } + } else if (ctrl->bRequest == REGMAP_USB_REQ_PROTOCOL_RESET && length == 1) { + unsigned long flags; + + DBG(cdev, "Protocol reset request: errno=%u\n", mreg->errno); + + spin_lock_irqsave(&mreg->lock, flags); + mreg->pending_protocol_reset = true; + *(u8 *)req->buf = mreg->errno; + mreg->errno = 0; + spin_unlock_irqrestore(&mreg->lock, flags); + + req->zero = 0; + req->length = length; + + wake_up(&mreg->waitq); + + return USB_GADGET_DELAYED_STATUS; + } else { + return -EINVAL; + } + + ret = usb_ep_queue(cdev->gadget->ep0, req, GFP_ATOMIC); + if (ret < 0) + ERROR(cdev, "usb_ep_queue error on ep0 %d\n", ret); + + return ret; +} + +int mud_regmap_set_alt(struct mud_regmap *mreg, struct usb_function *f) +{ + struct 
usb_composite_dev *cdev = f->config->cdev; + + DBG(cdev, "%s: reset\n", __func__); + + if (!mreg->in_ep->desc || !mreg->out_ep->desc) { + DBG(cdev, "%s: init\n", __func__); + if (config_ep_by_speed(cdev->gadget, f, mreg->in_ep) || + config_ep_by_speed(cdev->gadget, f, mreg->out_ep)) { + mreg->in_ep->desc = NULL; + mreg->out_ep->desc = NULL; + return -EINVAL; + } + } + + mud_regmap_stop(mreg); + + return mud_regmap_start(mreg); +} + +struct mud_regmap *mud_regmap_init(struct usb_composite_dev *cdev, + struct usb_ep *in_ep, struct usb_ep *out_ep, + struct f_mud_cell **cells, unsigned int num_cells) +{ + size_t max_transfer_size = 0; + struct mud_regmap *mreg; + unsigned int i; + int ret; + + for (i = 0; i < num_cells; i++) { + size_t cell_max = cells[i]->ops->max_transfer_size; + + if (!is_power_of_2(cell_max)) { + pr_err("%s: Max transfer size must be a power of two: %u\n", + __func__, cell_max); + return ERR_PTR(-EINVAL); + } + max_transfer_size = max(max_transfer_size, cell_max); + } + + mreg = kzalloc(sizeof(*mreg), GFP_KERNEL); + if (!mreg) + return ERR_PTR(-ENOMEM); + + mreg->cdev = cdev; + mreg->in_ep = in_ep; + mreg->out_ep = out_ep; + mreg->cells = cells; + mreg->num_cells = num_cells; + mreg->max_transfer_size = max_transfer_size; + + spin_lock_init(&mreg->lock); + + INIT_LIST_HEAD(&mreg->free_transfers); + INIT_LIST_HEAD(&mreg->pending_transfers); + + INIT_WORK(&mreg->work, mud_regmap_worker); + init_waitqueue_head(&mreg->waitq); + + mreg->workq = alloc_workqueue("mud_regmap", 0, 0); + if (!mreg->workq) { + ret = -ENOMEM; + goto fail; + } + + ret = mud_regmap_alloc_transfers(mreg); + if (ret) + goto fail; + + return mreg; +fail: + mud_regmap_cleanup(mreg); + + return ERR_PTR(ret); +} + +void mud_regmap_cleanup(struct mud_regmap *mreg) +{ + cancel_work_sync(&mreg->work); + if (mreg->workq) + destroy_workqueue(mreg->workq); + mud_regmap_free_transfers(mreg); + kfree(mreg); +} From patchwork Sun Feb 16 17:21:12 2020 Content-Type: text/plain; charset="utf-8" 
MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Noralf_Tr=C3=B8nnes?= X-Patchwork-Id: 11384503 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B42531871 for ; Sun, 16 Feb 2020 17:27:20 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 9CCCC2086A for ; Sun, 16 Feb 2020 17:27:20 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9CCCC2086A Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=tronnes.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 47CCB6E466; Sun, 16 Feb 2020 17:27:06 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from asav22.altibox.net (asav22.altibox.net [109.247.116.9]) by gabe.freedesktop.org (Postfix) with ESMTPS id BC3A76E44F for ; Sun, 16 Feb 2020 17:27:02 +0000 (UTC) Received: from localhost.localdomain (unknown [81.166.168.211]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-SHA256 (128/128 bits)) (No client certificate requested) (Authenticated sender: noralf.tronnes@ebnett.no) by asav22.altibox.net (Postfix) with ESMTPSA id 297EC200E8; Sun, 16 Feb 2020 18:21:40 +0100 (CET) From: =?utf-8?q?Noralf_Tr=C3=B8nnes?= To: broonie@kernel.org, balbi@kernel.org, lee.jones@linaro.org Subject: [RFC 4/9] pinctrl: Add Multifunction USB Device pinctrl driver Date: Sun, 16 Feb 2020 18:21:12 +0100 Message-Id: <20200216172117.49832-5-noralf@tronnes.org> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20200216172117.49832-1-noralf@tronnes.org> References: 
<20200216172117.49832-1-noralf@tronnes.org> MIME-Version: 1.0 X-CMAE-Score: 0 X-CMAE-Analysis: v=2.3 cv=ZvHD1ezG c=1 sm=1 tr=0 a=OYZzhG0JTxDrWp/F2OJbnw==:117 a=OYZzhG0JTxDrWp/F2OJbnw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=IkcTkHD0fZMA:10 a=M51BFTxLslgA:10 a=SJz97ENfAAAA:8 a=zA0JEd7Wm40yEy5wBp4A:9 a=mvnLadNLvkWAnLlj:21 a=mjYpQo1dwepDeJSL:21 a=QEXdDO2ut3YA:10 a=vFet0B0WnEQeilDPIY6i:22 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linux-usb@vger.kernel.org, dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" The Multifunction USB Device has optional support for gpio and pin configuration. Interrupts are supported if the device supports it. Signed-off-by: Noralf Trønnes --- drivers/pinctrl/Kconfig | 9 + drivers/pinctrl/Makefile | 1 + drivers/pinctrl/pinctrl-mud.c | 657 ++++++++++++++++++++++++++++++++++ drivers/pinctrl/pinctrl-mud.h | 89 +++++ 4 files changed, 756 insertions(+) create mode 100644 drivers/pinctrl/pinctrl-mud.c create mode 100644 drivers/pinctrl/pinctrl-mud.h diff --git a/drivers/pinctrl/Kconfig b/drivers/pinctrl/Kconfig index df0ef69dd474..ee3532c64411 100644 --- a/drivers/pinctrl/Kconfig +++ b/drivers/pinctrl/Kconfig @@ -384,6 +384,15 @@ config PINCTRL_OCELOT select OF_GPIO select REGMAP_MMIO +config PINCTRL_MUD + tristate "Multifunction USB Device pinctrl driver" + depends on MFD_MUD + select GENERIC_PINCONF + select GPIOLIB + select GPIOLIB_IRQCHIP + help + Support for GPIOs on Multifunction USB Devices. 
+ source "drivers/pinctrl/actions/Kconfig" source "drivers/pinctrl/aspeed/Kconfig" source "drivers/pinctrl/bcm/Kconfig" diff --git a/drivers/pinctrl/Makefile b/drivers/pinctrl/Makefile index 879f312bfb75..782cc7f286b7 100644 --- a/drivers/pinctrl/Makefile +++ b/drivers/pinctrl/Makefile @@ -47,6 +47,7 @@ obj-$(CONFIG_PINCTRL_INGENIC) += pinctrl-ingenic.o obj-$(CONFIG_PINCTRL_RK805) += pinctrl-rk805.o obj-$(CONFIG_PINCTRL_OCELOT) += pinctrl-ocelot.o obj-$(CONFIG_PINCTRL_EQUILIBRIUM) += pinctrl-equilibrium.o +obj-$(CONFIG_PINCTRL_MUD) += pinctrl-mud.o obj-y += actions/ obj-$(CONFIG_ARCH_ASPEED) += aspeed/ diff --git a/drivers/pinctrl/pinctrl-mud.c b/drivers/pinctrl/pinctrl-mud.c new file mode 100644 index 000000000000..f890c8e68755 --- /dev/null +++ b/drivers/pinctrl/pinctrl-mud.c @@ -0,0 +1,657 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Copyright 2020 Noralf Trønnes + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "core.h" +#include "pinconf.h" +#include "pinmux.h" +#include "pinctrl-utils.h" + +#include "pinctrl-mud.h" + +/* Temporary debugging aid */ +static unsigned int debug = 8; + +#define pdebug(level, fmt, ...) 
\ +do { \ + if ((level) <= debug) \ + printk(KERN_DEBUG fmt, ##__VA_ARGS__); \ +} while (0) + +struct mud_pinctrl_pin { + unsigned int irq_types; + unsigned int irq_type; + bool irq_enabled; +}; + +struct mud_pinctrl { + struct device *dev; + struct regmap *regmap; + struct mud_pinctrl_pin *pins; + struct pinctrl_dev *pctl_dev; + struct pinctrl_desc pctl_desc; + struct gpio_chip gpio_chip; + struct irq_chip irq_chip; + struct mutex irqlock; /* IRQ bus lock */ +}; + +static unsigned int mud_pinctrl_pin_reg(unsigned int pin, unsigned int offset) +{ + return MUD_PINCTRL_REG_PIN_BASE + (pin * MUD_PINCTRL_PIN_BLOCK_SIZE) + offset; +} + +static int mud_pinctrl_pin_read_reg(struct mud_pinctrl *pctl, unsigned int pin, + unsigned int offset, unsigned int *val) +{ + return regmap_read(pctl->regmap, mud_pinctrl_pin_reg(pin, offset), val); +} + +static int mud_pinctrl_pin_write_reg(struct mud_pinctrl *pctl, unsigned int pin, + unsigned int offset, unsigned int val) +{ + return regmap_write(pctl->regmap, mud_pinctrl_pin_reg(pin, offset), val); +} + +static int mud_pinctrl_read_bitmap(struct mud_pinctrl *pctl, unsigned int reg, + unsigned long *bitmap, unsigned int nbits) +{ + unsigned int nregs = DIV_ROUND_UP(nbits, 32); + u32 *vals; + int ret; + + vals = kmalloc_array(nregs, sizeof(*vals), GFP_KERNEL); + if (!vals) + return -ENOMEM; + + ret = regmap_bulk_read(pctl->regmap, reg, vals, nregs); + if (ret) + goto free; + + bitmap_from_arr32(bitmap, vals, nbits); +free: + kfree(vals); + + return ret; +} + +static int mud_pinctrl_gpio_request(struct gpio_chip *gc, unsigned int offset) +{ + struct mud_pinctrl *pctl = gpiochip_get_data(gc); + int ret; + + pdebug(1, "%s: offset=%u\n", __func__, offset); + + ret = mud_pinctrl_pin_write_reg(pctl, offset, MUD_PIN_GPIO_REQUEST, 1); + if (ret == -EBUSY) { + dev_err(pctl->dev, + "pin %u is claimed by another function on the USB device\n", + offset); + ret = -EINVAL; /* follow pinmux.c:pin_request() */ + } + + return ret; +} + +static void 
mud_pinctrl_gpio_free(struct gpio_chip *gc, unsigned int offset) +{ + struct mud_pinctrl *pctl = gpiochip_get_data(gc); + + pdebug(1, "%s: offset=%u\n", __func__, offset); + + mud_pinctrl_pin_write_reg(pctl, offset, MUD_PIN_GPIO_FREE, 1); +} + +static int mud_pinctrl_gpio_get(struct gpio_chip *gc, unsigned int offset) +{ + struct mud_pinctrl *pctl = gpiochip_get_data(gc); + unsigned int value; + int ret; + + ret = mud_pinctrl_pin_read_reg(pctl, offset, MUD_PIN_LEVEL, &value); + + return ret ? ret : value; +} + +static void mud_pinctrl_gpio_set(struct gpio_chip *gc, unsigned int offset, int value) +{ + struct mud_pinctrl *pctl = gpiochip_get_data(gc); + int ret; + + ret = mud_pinctrl_pin_write_reg(pctl, offset, MUD_PIN_LEVEL, value); + if (ret) + dev_err_once(pctl->dev, "Failed to set gpio output, error=%d\n", ret); +} + +/* FIXME: Remove this comment when settled: .get_direction returns 0=out, 1=in */ +static int mud_pinctrl_gpio_get_direction(struct gpio_chip *gc, unsigned int offset) +{ + struct mud_pinctrl *pctl = gpiochip_get_data(gc); + unsigned int val; + int ret; + + ret = mud_pinctrl_pin_read_reg(pctl, offset, MUD_PIN_DIRECTION, &val); + + return ret ? ret : val; +} + +static int mud_pinctrl_gpio_direction_input(struct gpio_chip *gc, unsigned int offset) +{ + struct mud_pinctrl *pctl = gpiochip_get_data(gc); + int ret; + + ret = mud_pinctrl_pin_write_reg(pctl, offset, MUD_PIN_DIRECTION, + MUD_PIN_DIRECTION_INPUT); + if (ret == -ENOTSUPP) { + dev_err(pctl->dev, "pin %u can't be used as an input\n", offset); + ret = -EIO; /* gpiolib uses this error code */ + } + + return ret; +} + +static int mud_pinctrl_gpio_direction_output(struct gpio_chip *gc, unsigned int offset, int value) +{ + struct mud_pinctrl *pctl = gpiochip_get_data(gc); + unsigned int regval; + int ret; + + regval = value ? 
MUD_PIN_DIRECTION_OUTPUT_HIGH : MUD_PIN_DIRECTION_OUTPUT_LOW; + + ret = mud_pinctrl_pin_write_reg(pctl, offset, MUD_PIN_DIRECTION, regval); + if (ret == -ENOTSUPP) { + dev_err(pctl->dev, "pin %u can't be used as an output\n", offset); + ret = -EIO; + } + + return ret; +} + +/* The pinctrl pin number space and the gpio hwpin (offset) numberspace are identical. */ +static int mud_pinctrl_gpio_add_pin_ranges(struct gpio_chip *gc) +{ + struct mud_pinctrl *pctl = gpiochip_get_data(gc); + + return gpiochip_add_pin_range(&pctl->gpio_chip, dev_name(pctl->dev), + pctl->pctl_desc.pins->number, + pctl->pctl_desc.pins->number, + pctl->pctl_desc.npins); +} + +static const struct gpio_chip mud_pinctrl_gpio_chip = { + .label = "mud-pins", + .request = mud_pinctrl_gpio_request, + .free = mud_pinctrl_gpio_free, + .get_direction = mud_pinctrl_gpio_get_direction, + .direction_input = mud_pinctrl_gpio_direction_input, + .direction_output = mud_pinctrl_gpio_direction_output, + .get = mud_pinctrl_gpio_get, + .set = mud_pinctrl_gpio_set, + .set_config = gpiochip_generic_config, + .add_pin_ranges = mud_pinctrl_gpio_add_pin_ranges, + .base = -1, + .can_sleep = true, +}; + +/* enum pin_config_param is not ABI so: */ +static const u8 mud_pin_config_param_table[] = { + [PIN_CONFIG_BIAS_BUS_HOLD] = MUD_PIN_CONFIG_BIAS_BUS_HOLD, + [PIN_CONFIG_BIAS_DISABLE] = MUD_PIN_CONFIG_BIAS_DISABLE, + [PIN_CONFIG_BIAS_HIGH_IMPEDANCE] = MUD_PIN_CONFIG_BIAS_HIGH_IMPEDANCE, + [PIN_CONFIG_BIAS_PULL_DOWN] = MUD_PIN_CONFIG_BIAS_PULL_DOWN, + [PIN_CONFIG_BIAS_PULL_PIN_DEFAULT] = MUD_PIN_CONFIG_BIAS_PULL_PIN_DEFAULT, + [PIN_CONFIG_BIAS_PULL_UP] = MUD_PIN_CONFIG_BIAS_PULL_UP, + [PIN_CONFIG_DRIVE_OPEN_DRAIN] = MUD_PIN_CONFIG_DRIVE_OPEN_DRAIN, + [PIN_CONFIG_DRIVE_OPEN_SOURCE] = MUD_PIN_CONFIG_DRIVE_OPEN_SOURCE, + [PIN_CONFIG_DRIVE_PUSH_PULL] = MUD_PIN_CONFIG_DRIVE_PUSH_PULL, + [PIN_CONFIG_DRIVE_STRENGTH] = MUD_PIN_CONFIG_DRIVE_STRENGTH, + [PIN_CONFIG_DRIVE_STRENGTH_UA] = MUD_PIN_CONFIG_DRIVE_STRENGTH_UA, + 
[PIN_CONFIG_INPUT_DEBOUNCE] = MUD_PIN_CONFIG_INPUT_DEBOUNCE, + [PIN_CONFIG_INPUT_ENABLE] = MUD_PIN_CONFIG_INPUT_ENABLE, + [PIN_CONFIG_INPUT_SCHMITT] = MUD_PIN_CONFIG_INPUT_SCHMITT, + [PIN_CONFIG_INPUT_SCHMITT_ENABLE] = MUD_PIN_CONFIG_INPUT_SCHMITT_ENABLE, + [PIN_CONFIG_LOW_POWER_MODE] = MUD_PIN_CONFIG_LOW_POWER_MODE, + [PIN_CONFIG_OUTPUT_ENABLE] = MUD_PIN_CONFIG_OUTPUT_ENABLE, + [PIN_CONFIG_OUTPUT] = MUD_PIN_CONFIG_OUTPUT, + [PIN_CONFIG_POWER_SOURCE] = MUD_PIN_CONFIG_POWER_SOURCE, + [PIN_CONFIG_SLEEP_HARDWARE_STATE] = MUD_PIN_CONFIG_SLEEP_HARDWARE_STATE, + [PIN_CONFIG_SLEW_RATE] = MUD_PIN_CONFIG_SLEW_RATE, + [PIN_CONFIG_SKEW_DELAY] = MUD_PIN_CONFIG_SKEW_DELAY, + [PIN_CONFIG_PERSIST_STATE] = MUD_PIN_CONFIG_PERSIST_STATE, +}; + +static int mud_pinctrl_pinconf_get(struct pinctrl_dev *pctldev, + unsigned int pin, unsigned long *config) +{ + enum pin_config_param param = pinconf_to_config_param(*config); + struct mud_pinctrl *pctl = pinctrl_dev_get_drvdata(pctldev); + unsigned int arg, offset = mud_pin_config_param_table[param]; + int ret; + + ret = mud_pinctrl_pin_read_reg(pctl, pin, offset, &arg); + if (ret) + return ret; + + *config = pinconf_to_config_packed(param, arg); + + return 0; +} + +static int mud_pinctrl_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin, + unsigned long *configs, unsigned int num_configs) +{ + struct mud_pinctrl *pctl = pinctrl_dev_get_drvdata(pctldev); + enum pin_config_param param; + unsigned int i, offset, arg; + int ret; + + for (i = 0; i < num_configs; i++) { + param = pinconf_to_config_param(configs[i]); + arg = pinconf_to_config_argument(configs[i]); + offset = mud_pin_config_param_table[param]; + + ret = mud_pinctrl_pin_write_reg(pctl, pin, offset, arg); + if (ret) + return ret; + } + + return 0; +} + +static const struct pinconf_ops mud_pinconf_ops = { + .is_generic = true, + .pin_config_get = mud_pinctrl_pinconf_get, + .pin_config_set = mud_pinctrl_pinconf_set, + .pin_config_config_dbg_show = pinconf_generic_dump_config, 
+}; + +static int mud_pinctrl_get_groups_count(struct pinctrl_dev *pctldev) +{ + return 0; +} + +static const char *mud_pinctrl_get_group_name(struct pinctrl_dev *pctldev, + unsigned int selector) +{ + return NULL; +} + +static int mud_pinctrl_get_group_pins(struct pinctrl_dev *pctldev, + unsigned int selector, + const unsigned int **pins, + unsigned int *num_pins) +{ + return -ENOTSUPP; +} + +static void mud_pinctrl_pin_dbg_show(struct pinctrl_dev *pctldev, + struct seq_file *s, unsigned int offset) +{ + struct mud_pinctrl *pctl = pinctrl_dev_get_drvdata(pctldev); + struct pinctrl_gpio_range *range; + int dir, val; + + range = pinctrl_find_gpio_range_from_pin_nolock(pctldev, offset); + if (!range) + return; + + dir = mud_pinctrl_gpio_get_direction(&pctl->gpio_chip, offset); + if (dir < 0) + return; + + val = mud_pinctrl_gpio_get(&pctl->gpio_chip, offset); + if (val < 0) + return; + + seq_printf(s, "function gpio_%s %s", dir ? "in" : "out", val ? "hi" : "lo"); +} + +static const struct pinctrl_ops mud_pinctrl_ops = { + .get_groups_count = mud_pinctrl_get_groups_count, + .get_group_name = mud_pinctrl_get_group_name, + .get_group_pins = mud_pinctrl_get_group_pins, + .pin_dbg_show = mud_pinctrl_pin_dbg_show, + .dt_node_to_map = pinconf_generic_dt_node_to_map_pin, + .dt_free_map = pinconf_generic_dt_free_map, +}; + +static const struct pinctrl_desc mud_pinctrl_pinctrl_desc = { + .owner = THIS_MODULE, + .name = "mud-pins", + .pctlops = &mud_pinctrl_ops, + .confops = &mud_pinconf_ops, +}; + +static void mud_pinctrl_irq_init_valid_mask(struct gpio_chip *gc, + unsigned long *valid_mask, + unsigned int ngpios) +{ + struct mud_pinctrl *pctl = gpiochip_get_data(gc); + unsigned int i; + + pdebug(1, "%s: valid_mask: %*pb\n", __func__, ngpios, valid_mask); + + for (i = 0; i < ngpios; i++) { + if (!pctl->pins[i].irq_types) + clear_bit(i, valid_mask); + } + + pdebug(1, "%s: valid_mask: %*pb\n", __func__, ngpios, valid_mask); +} + +static void mud_pinctrl_irq_enable(struct irq_data 
*data) +{ + struct gpio_chip *gpio_chip = irq_data_get_irq_chip_data(data); + struct mud_pinctrl *pctl = gpiochip_get_data(gpio_chip); + + pdebug(2, "%s: hwirq=%lu\n", __func__, data->hwirq); + + pctl->pins[data->hwirq].irq_enabled = true; +} + +static void mud_pinctrl_irq_disable(struct irq_data *data) +{ + struct gpio_chip *gpio_chip = irq_data_get_irq_chip_data(data); + struct mud_pinctrl *pctl = gpiochip_get_data(gpio_chip); + + pdebug(2, "%s: hwirq=%lu\n", __func__, data->hwirq); + + pctl->pins[data->hwirq].irq_enabled = false; +} + +static int mud_pinctrl_irq_set_type(struct irq_data *data, unsigned int type) +{ + struct gpio_chip *gpio_chip = irq_data_get_irq_chip_data(data); + struct mud_pinctrl *pctl = gpiochip_get_data(gpio_chip); + + pdebug(1, "%s: hwirq=%lu, type=%u\n", __func__, data->hwirq, type); + + if (type == IRQ_TYPE_NONE) + return -EINVAL; + + if ((pctl->pins[data->hwirq].irq_types & type) == type) + pctl->pins[data->hwirq].irq_type = type; + + return 0; +} + +static void mud_pinctrl_irq_bus_lock(struct irq_data *data) +{ + struct gpio_chip *gpio_chip = irq_data_get_irq_chip_data(data); + struct mud_pinctrl *pctl = gpiochip_get_data(gpio_chip); + + pdebug(3, "%s: hwirq=%lu\n", __func__, data->hwirq); + + mutex_lock(&pctl->irqlock); +} + +static void mud_pinctrl_irq_bus_sync_unlock(struct irq_data *data) +{ + struct gpio_chip *gpio_chip = irq_data_get_irq_chip_data(data); + struct mud_pinctrl *pctl = gpiochip_get_data(gpio_chip); + unsigned int reg; + u32 vals[2]; + int ret; + + pdebug(3, "%s: hwirq=%lu: irq_enabled=%u irq_type=%u\n", __func__, + data->hwirq, pctl->pins[data->hwirq].irq_enabled, + pctl->pins[data->hwirq].irq_type); + + switch (pctl->pins[data->hwirq].irq_type) { + case IRQ_TYPE_EDGE_RISING: + vals[0] = MUD_PIN_IRQ_TYPE_EDGE_RISING; + break; + case IRQ_TYPE_EDGE_FALLING: + vals[0] = MUD_PIN_IRQ_TYPE_EDGE_FALLING; + break; + case IRQ_TYPE_EDGE_BOTH: + vals[0] = MUD_PIN_IRQ_TYPE_EDGE_BOTH; + break; + default: + vals[0] = 
MUD_PIN_IRQ_TYPE_NONE; + break; + }; + + vals[1] = pctl->pins[data->hwirq].irq_enabled; + reg = mud_pinctrl_pin_reg(data->hwirq, MUD_PIN_IRQ_TYPE); + + /* It's safe to use a stack allocated array because bulk_write does kmemdup */ + ret = regmap_bulk_write(pctl->regmap, reg, vals, 2); + if (ret) + dev_err_once(pctl->dev, "Failed to sync irq data, error=%d\n", ret); + + mutex_unlock(&pctl->irqlock); +} + +static const struct irq_chip mud_pinctrl_irq_chip = { + .name = "mud-pins", + .irq_enable = mud_pinctrl_irq_enable, + .irq_disable = mud_pinctrl_irq_disable, + .irq_set_type = mud_pinctrl_irq_set_type, + .irq_bus_lock = mud_pinctrl_irq_bus_lock, + .irq_bus_sync_unlock = mud_pinctrl_irq_bus_sync_unlock, +}; + +static irqreturn_t mud_pinctrl_irq_thread_fn(int irq, void *dev_id) +{ + struct mud_pinctrl *pctl = (struct mud_pinctrl *)dev_id; + DECLARE_BITMAP(status, MUD_PINCTRL_MAX_NUM_PINS); + struct gpio_chip *gc = &pctl->gpio_chip; + unsigned long n; + int ret; + + ret = mud_pinctrl_read_bitmap(pctl, MUD_PINCTRL_REG_IRQ_STATUS, + status, gc->ngpio); + if (ret) + return IRQ_NONE; + + pdebug(3, "%s: STATUS: %*pb\n", __func__, gc->ngpio, status); + + for_each_set_bit(n, status, gc->ngpio) { + unsigned int irq = irq_find_mapping(gc->irq.domain, n); + + pdebug(2, "%s: IRQ on pin %lu irq=%u enabled=%u\n", __func__, + n, irq, pctl->pins[n].irq_enabled); + + if (irq && pctl->pins[n].irq_enabled) + handle_nested_irq(irq); + } + + return IRQ_HANDLED; +} + +static const struct regmap_config mud_pinctrl_regmap_config = { + .reg_bits = 32, + .val_bits = 32, + /* FIXME: Setup caching? 
*/ + .cache_type = REGCACHE_NONE, +}; + +static int mud_pinctrl_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct mud_cell_pdata *pdata = dev_get_platdata(dev); + struct pinctrl_pin_desc *pdesc; + struct mud_pinctrl *pctl; + struct regmap *regmap; + unsigned int i, reg, npins; + const char **names; + int ret, irq; + + pdebug(1, "%s: dev->of_node=%px\n", __func__, dev->of_node); + + pctl = devm_kzalloc(dev, sizeof(*pctl), GFP_KERNEL); + if (!pctl) + return -ENOMEM; + + pctl->dev = dev; + + mutex_init(&pctl->irqlock); + + regmap = devm_regmap_init_usb(dev, pdata->interface, pdata->index, + &mud_pinctrl_regmap_config); + if (IS_ERR(regmap)) + return PTR_ERR(regmap); + + pctl->regmap = regmap; + + ret = regmap_read(regmap, MUD_PINCTRL_REG_NUM_PINS, &npins); + if (ret) { + dev_err(pctl->dev, "Failed to read from device\n"); + return ret; + } + if (!npins || npins > MUD_PINCTRL_MAX_NUM_PINS) + return -EINVAL; + + pctl->pins = devm_kcalloc(dev, npins, sizeof(*pctl->pins), GFP_KERNEL); + if (!pctl->pins) + return -ENOMEM; + + pdesc = devm_kcalloc(dev, npins, sizeof(*pdesc), GFP_KERNEL); + if (!pdesc) + return -ENOMEM; + + pctl->pctl_desc = mud_pinctrl_pinctrl_desc; + pctl->pctl_desc.pins = pdesc; + pctl->pctl_desc.npins = npins; + + for (i = 0; i < npins; i++, pdesc++) { + char *name; + + name = devm_kmalloc(dev, MUD_PIN_NAME_LEN, GFP_KERNEL); + if (!name) + return -ENOMEM; + + pdesc->number = i; + pdesc->name = name; + reg = mud_pinctrl_pin_reg(i, MUD_PIN_NAME); + ret = regmap_raw_read(regmap, reg, name, MUD_PIN_NAME_LEN); + if (ret) { + dev_err(pctl->dev, "Failed to read name for pin %u\n", i); + return ret; + } + if (!name[0] || name[MUD_PIN_NAME_LEN - 1]) { + dev_err(pctl->dev, "Illegal name for pin %u\n", i); + return -EINVAL; + } + } + + ret = devm_pinctrl_register_and_init(dev, &pctl->pctl_desc, + pctl, &pctl->pctl_dev); + if (ret) { + dev_err(pctl->dev, "pinctrl registration failed\n"); + return ret; + } + + ret = 
pinctrl_enable(pctl->pctl_dev); + if (ret) { + dev_err(pctl->dev, "pinctrl enable failed\n"); + return ret; + } + + irq = platform_get_irq_optional(pdev, 0); + pdebug(1, "%s: irq=%d\n", __func__, irq); + if (irq > 0) { + bool use_irq = false; + + for (i = 0; i < npins; i++) { + reg = mud_pinctrl_pin_reg(i, MUD_PIN_IRQ_TYPES); + ret = regmap_read(regmap, reg, &pctl->pins[i].irq_types); + if (ret) { + dev_err(dev, "Failed to read irq type for pin %u\n", i); + return ret; + } + pdebug(1, "%s: pctl->pins[%u].irq_types=%u\n", __func__, + i, pctl->pins[i].irq_types); + if (pctl->pins[i].irq_types) + use_irq = true; + } + + if (!use_irq) + irq = 0; + } else { + irq = 0; + } + + pctl->gpio_chip = mud_pinctrl_gpio_chip; + pctl->gpio_chip.parent = dev; + pctl->gpio_chip.ngpio = npins; + if (irq) + pctl->gpio_chip.irq.init_valid_mask = mud_pinctrl_irq_init_valid_mask; + + names = devm_kcalloc(dev, npins, sizeof(*names), GFP_KERNEL); + if (!names) + return -ENOMEM; + + pctl->gpio_chip.names = names; + + for (i = 0; i < npins; i++) + names[i] = pctl->pctl_desc.pins[i].name; + + ret = devm_gpiochip_add_data(dev, &pctl->gpio_chip, pctl); + if (ret) { + dev_err(dev, "Could not register gpiochip, %d\n", ret); + return ret; + } + + if (irq) { + pctl->irq_chip = mud_pinctrl_irq_chip; + ret = gpiochip_irqchip_add_nested(&pctl->gpio_chip, &pctl->irq_chip, + 0, handle_bad_irq, IRQ_TYPE_NONE); + if (ret) { + dev_err(dev, "Cannot add irqchip to gpiochip\n"); + return ret; + } + + ret = devm_request_threaded_irq(dev, irq, NULL, + mud_pinctrl_irq_thread_fn, + IRQF_ONESHOT, "mud-pins", pctl); + if (ret) { + dev_err(dev, "Cannot request irq%d\n", irq); + return ret; + } + } + + return 0; +} + +static const struct of_device_id mud_pinctrl_of_match[] = { + { .compatible = "mud-pins", }, + {}, +}; +MODULE_DEVICE_TABLE(of, mud_pinctrl_of_match); + +static const struct platform_device_id mud_pinctrl_id_table[] = { + { "mud-pins", }, + { }, +}; +MODULE_DEVICE_TABLE(platform, mud_pinctrl_id_table); 
+ +static struct platform_driver mud_pinctrl_driver = { + .driver = { + .name = "mud-pins", + .of_match_table = mud_pinctrl_of_match, + }, + .probe = mud_pinctrl_probe, + .id_table = mud_pinctrl_id_table, +}; + +module_platform_driver(mud_pinctrl_driver); + +MODULE_AUTHOR("Noralf Trønnes"); +MODULE_DESCRIPTION("Pin control interface for Multifunction USB Device"); +MODULE_LICENSE("GPL"); diff --git a/drivers/pinctrl/pinctrl-mud.h b/drivers/pinctrl/pinctrl-mud.h new file mode 100644 index 000000000000..f4839da46bda --- /dev/null +++ b/drivers/pinctrl/pinctrl-mud.h @@ -0,0 +1,89 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright 2020 Noralf Trønnes + */ + +#ifndef __LINUX_PINCTRL_MUD_H +#define __LINUX_PINCTRL_MUD_H + +#define MUD_PINCTRL_MAX_NUM_PINS 512 + +#define MUD_PINCTRL_REG_NUM_PINS 0x0000 + +#define MUD_PINCTRL_REG_IRQ_STATUS 0x0020 +#define MUD_PINCTRL_REG_IRQ_STATUS_END 0x002f + +#define MUD_PINCTRL_REG_PIN_BASE 0x0100 +#define MUD_PINCTRL_PIN_BLOCK_SIZE 256 + + /* + * The following are offsets into the pin register block. + * + * The first part is identical to enum pin_config_param except for: + * - No room for custom configurations. + * + * See the enum declaration docs in include/linux/pinctrl/pinconf-generic.h + * for details about behaviour and arguments. + * + * Device should return -ENOTSUPP if the config is not supported. 
+ */
+ #define MUD_PIN_CONFIG_BIAS_BUS_HOLD		0x00
+ #define MUD_PIN_CONFIG_BIAS_DISABLE		0x01
+ #define MUD_PIN_CONFIG_BIAS_HIGH_IMPEDANCE	0x02
+ #define MUD_PIN_CONFIG_BIAS_PULL_DOWN		0x03
+ #define MUD_PIN_CONFIG_BIAS_PULL_PIN_DEFAULT	0x04
+ #define MUD_PIN_CONFIG_BIAS_PULL_UP		0x05
+ #define MUD_PIN_CONFIG_DRIVE_OPEN_DRAIN	0x06
+ #define MUD_PIN_CONFIG_DRIVE_OPEN_SOURCE	0x07
+ #define MUD_PIN_CONFIG_DRIVE_PUSH_PULL		0x08
+ #define MUD_PIN_CONFIG_DRIVE_STRENGTH		0x09
+ #define MUD_PIN_CONFIG_DRIVE_STRENGTH_UA	0x0a
+ #define MUD_PIN_CONFIG_INPUT_DEBOUNCE		0x0b
+ #define MUD_PIN_CONFIG_INPUT_ENABLE		0x0c
+ #define MUD_PIN_CONFIG_INPUT_SCHMITT		0x0d
+ #define MUD_PIN_CONFIG_INPUT_SCHMITT_ENABLE	0x0e
+ #define MUD_PIN_CONFIG_LOW_POWER_MODE		0x0f
+ #define MUD_PIN_CONFIG_OUTPUT_ENABLE		0x10
+ #define MUD_PIN_CONFIG_OUTPUT			0x11
+ #define MUD_PIN_CONFIG_POWER_SOURCE		0x12
+ #define MUD_PIN_CONFIG_SLEEP_HARDWARE_STATE	0x13
+ #define MUD_PIN_CONFIG_SLEW_RATE		0x14
+ #define MUD_PIN_CONFIG_SKEW_DELAY		0x15
+ #define MUD_PIN_CONFIG_PERSIST_STATE		0x16
+ #define MUD_PIN_CONFIG_END			0x7f
+
+ /* Must be NUL terminated */
+ #define MUD_PIN_NAME				0x80
+ #define MUD_PIN_NAME_LEN			16
+ #define MUD_PIN_NAME_END			0x83
+
+ /*
+ * Device should return:
+ * -EBUSY if pin is in use by another function (i2c, spi, ...)
+ * -ENOENT if there is no gpio function on the pin.
+ */
+ #define MUD_PIN_GPIO_REQUEST			0x84
+ #define MUD_PIN_GPIO_FREE			0x85
+
+ /* Device should return -ENOTSUPP if the direction is not supported */
+ #define MUD_PIN_DIRECTION			0x86
+ #define MUD_PIN_DIRECTION_OUTPUT		0x0
+ #define MUD_PIN_DIRECTION_OUTPUT_LOW		0x0
+ #define MUD_PIN_DIRECTION_OUTPUT_HIGH		0x2
+ #define MUD_PIN_DIRECTION_INPUT		0x1
+
+ #define MUD_PIN_LEVEL				0x87
+
+ /*
+ * Set _TYPES to _NONE if the pin doesn't have interrupt support.
+ * If all pins are _NONE, then interrupts are disabled in the host driver.
+ */
+ #define MUD_PIN_IRQ_TYPES			0x90
+ #define MUD_PIN_IRQ_TYPE_NONE			0x00
+ #define MUD_PIN_IRQ_TYPE_EDGE_RISING		0x01
+ #define MUD_PIN_IRQ_TYPE_EDGE_FALLING		0x02
+ #define MUD_PIN_IRQ_TYPE_EDGE_BOTH		0x03
+ #define MUD_PIN_IRQ_TYPE			0x91
+ #define MUD_PIN_IRQ_ENABLED			0x92
+
+#endif

From patchwork Sun Feb 16 17:21:13 2020
From: Noralf Trønnes
To: broonie@kernel.org, balbi@kernel.org, lee.jones@linaro.org
Cc: linux-usb@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [RFC 5/9] usb: gadget: function: mud: Add gpio support
Date: Sun, 16 Feb 2020 18:21:13 +0100
Message-Id: <20200216172117.49832-6-noralf@tronnes.org>
In-Reply-To: <20200216172117.49832-1-noralf@tronnes.org>
References: <20200216172117.49832-1-noralf@tronnes.org>

Add optional gpio functionality to the Multifunction USB Device.

Signed-off-by: Noralf Trønnes
---
 drivers/usb/gadget/Kconfig               |  14 +
 drivers/usb/gadget/function/Makefile     |   2 +
 drivers/usb/gadget/function/f_mud_pins.c | 962 +++++++++++++++++++++++
 3 files changed, 978 insertions(+)
 create mode 100644 drivers/usb/gadget/function/f_mud_pins.c

diff --git a/drivers/usb/gadget/Kconfig b/drivers/usb/gadget/Kconfig
index 9551876ffe08..d6285146ec76 100644
--- a/drivers/usb/gadget/Kconfig
+++ b/drivers/usb/gadget/Kconfig
@@ -219,6 +219,9 @@ config USB_F_TCM
 config USB_F_MUD
 	tristate
 
+config USB_F_MUD_PINS
+	tristate
+
 # this first set of drivers all depend on bulk-capable hardware.
 
 config USB_CONFIGFS
@@ -493,6 +496,17 @@ menuconfig USB_CONFIGFS_F_MUD
 	help
 	  Core support for the Multifunction USB Device.
+if USB_F_MUD + +config USB_CONFIGFS_F_MUD_PINS + bool "Multifunction USB Device GPIO" + depends on PINCTRL + select USB_F_MUD_PINS + help + GPIO support for the Multifunction USB Device. + +endif # USB_F_MUD + choice tristate "USB Gadget precomposed configurations" default USB_ETH diff --git a/drivers/usb/gadget/function/Makefile b/drivers/usb/gadget/function/Makefile index b6e31b511521..2e24227fcc12 100644 --- a/drivers/usb/gadget/function/Makefile +++ b/drivers/usb/gadget/function/Makefile @@ -52,3 +52,5 @@ usb_f_tcm-y := f_tcm.o obj-$(CONFIG_USB_F_TCM) += usb_f_tcm.o usb_f_mud-y := f_mud.o mud_regmap.o obj-$(CONFIG_USB_F_MUD) += usb_f_mud.o +usb_f_mud_pins-y := f_mud_pins.o +obj-$(CONFIG_USB_F_MUD_PINS) += usb_f_mud_pins.o diff --git a/drivers/usb/gadget/function/f_mud_pins.c b/drivers/usb/gadget/function/f_mud_pins.c new file mode 100644 index 000000000000..b3466804ad5e --- /dev/null +++ b/drivers/usb/gadget/function/f_mud_pins.c @@ -0,0 +1,962 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Copyright (C) 2020 Noralf Trønnes + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "f_mud.h" +#include "../../../pinctrl/pinctrl-mud.h" + +/* + * Even though the host side is a pinctrl driver, the device side is a gpio consumer. + * That's because not all boards have a pin controller. + */ + +/* Temporary debugging aid */ +#define fmdebug(fmt, ...) 
\ +do { \ + if (1) \ + printk(KERN_DEBUG fmt, ##__VA_ARGS__); \ +} while (0) + +static DEFINE_IDA(f_mud_pins_ida); + +struct f_mud_pins_cell_item { + struct list_head node; + unsigned int index; + struct config_group group; + + struct mutex lock; /* Protect the values below */ + int refcnt; + + const char *name; + const char *chip; + int offset; +}; + +static inline struct f_mud_pins_cell_item *ci_to_f_mud_pins_cell_item(struct config_item *item) +{ + return container_of(to_config_group(item), struct f_mud_pins_cell_item, group); +} + +struct f_mud_pins_lookup_device { + struct device dev; + int id; + struct f_mud_cell *cell; + struct gpiod_lookup_table *lookup; + const char **names; + unsigned int count; +}; + +struct f_mud_pins_pin { + struct f_mud_pins_cell *parent; + unsigned int index; + struct gpio_desc *gpio; + unsigned long dflags; + unsigned int debounce; +#define DEBOUNCE_NOT_SET UINT_MAX + bool config_requested; + int irq; + int irqflags; +}; + +struct f_mud_pins_cell { + struct f_mud_cell cell; + + struct mutex lock; /* Protect refcnt and items */ + int refcnt; + struct list_head items; + + struct f_mud_pins_lookup_device *ldev; + struct f_mud_pins_pin *pins; + unsigned int count; + spinlock_t irq_status_lock; + unsigned long *irq_status; +}; + +static inline struct f_mud_pins_cell *ci_to_f_mud_pins_cell(struct config_item *item) +{ + return container_of(to_config_group(item), struct f_mud_pins_cell, cell.group); +} + +static inline struct f_mud_pins_cell *cell_to_pcell(struct f_mud_cell *cell) +{ + return container_of(cell, struct f_mud_pins_cell, cell); +} + +static irqreturn_t f_mud_pins_gpio_irq_thread(int irq, void *p) +{ + struct f_mud_pins_pin *pin = p; + struct f_mud_pins_cell *pcell = pin->parent; + + spin_lock(&pcell->irq_status_lock); + set_bit(pin->index, pcell->irq_status); + spin_unlock(&pcell->irq_status_lock); + + fmdebug("%s(index=%u): irq_status=%*pb\n", __func__, pin->index, + pcell->count, pcell->irq_status); + + 
f_mud_irq(pcell->ldev->cell); + + return IRQ_HANDLED; +} + +static int f_mud_pins_gpio_irq_request(struct f_mud_pins_cell *pcell, unsigned int index) +{ + struct f_mud_pins_pin *pin = &pcell->pins[index]; + int ret, irq; + + fmdebug("%s(index=%u)\n", __func__, index); + + if (pin->irq) + return 0; + + if (!pin->gpio || !pin->irqflags) + return -EINVAL; + + ret = gpiod_get_direction(pin->gpio); + if (ret < 0) + return ret; + if (ret != 1) + return -EINVAL; + + irq = gpiod_to_irq(pin->gpio); + fmdebug(" irq=%d\n", irq); + if (irq <= 0) + return -ENODEV; + + ret = request_threaded_irq(irq, NULL, f_mud_pins_gpio_irq_thread, + pin->irqflags | IRQF_ONESHOT, + dev_name(&pcell->ldev->dev), pin); + if (ret) + return ret; + + pin->irq = irq; + + return 0; +} + +static void f_mud_pins_gpio_irq_free(struct f_mud_pins_cell *pcell, unsigned int index) +{ + struct f_mud_pins_pin *pin = &pcell->pins[index]; + + fmdebug("%s(index=%u): irq=%d\n", __func__, index, pin->irq); + + if (pin->irq) { + free_irq(pin->irq, pin); + pin->irq = 0; + } +} + +static void f_mud_pins_gpio_irq_free_all(struct f_mud_pins_cell *pcell) +{ + unsigned int i; + + for (i = 0; i < pcell->count; i++) + f_mud_pins_gpio_irq_free(pcell, i); +} + +static int f_mud_pins_gpio_irq_type(struct f_mud_pins_cell *pcell, unsigned int index, + unsigned int val) +{ + struct f_mud_pins_pin *pin = &pcell->pins[index]; + + fmdebug("%s(index=%u, val=%u)\n", __func__, index, val); + + switch (val) { + case MUD_PIN_IRQ_TYPE_EDGE_RISING: + pin->irqflags = IRQF_TRIGGER_RISING; + break; + case MUD_PIN_IRQ_TYPE_EDGE_FALLING: + pin->irqflags = IRQF_TRIGGER_FALLING; + break; + case MUD_PIN_IRQ_TYPE_EDGE_BOTH: + pin->irqflags = IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING; + break; + case MUD_PIN_IRQ_TYPE_NONE: + pin->irqflags = IRQF_TRIGGER_NONE; + break; + default: + return -EINVAL; + }; + + return 0; +} + +static void f_mud_pins_gpio_free(struct f_mud_pins_cell *pcell, unsigned int index) +{ + struct f_mud_pins_pin *pin = 
&pcell->pins[index]; + + fmdebug("%s(index=%u): gpio=%d, config_requested=%u\n", __func__, index, + pin->gpio ? desc_to_gpio(pin->gpio) : -1, pin->config_requested); + + if (pin->config_requested) + return; + + if (pin->gpio) + gpiod_put(pin->gpio); + pin->gpio = NULL; +} + +/* When flags change re-request the gpio to enable the settings */ +static int f_mud_pins_gpio_request_do(struct f_mud_pins_cell *pcell, unsigned int index, + bool config, unsigned int debounce) +{ + struct f_mud_pins_pin *pin = &pcell->pins[index]; + struct gpio_desc *gpio; + int ret; + + fmdebug("%s(index=%u): lflags=%lu dflags=%lu\n", __func__, index, + pcell->ldev->lookup->table[index].flags, pin->dflags); + + if (!pcell->pins[index].gpio && config) + pin->config_requested = true; + + if (pcell->pins[index].gpio) { + gpiod_put(pcell->pins[index].gpio); + pcell->pins[index].gpio = NULL; + } + + gpio = gpiod_get_index(&pcell->ldev->dev, NULL, index, pin->dflags); + if (IS_ERR(gpio)) { + ret = PTR_ERR(gpio); + fmdebug("failed to get gpio %u: ret=%d\n", index, ret); + return ret; + } + pin->gpio = gpio; + fmdebug(" gpios[%u]: gpionr %d\n", index, desc_to_gpio(gpio)); + + /* Debounce can be set through pinconf so it must be stored */ + if (debounce == DEBOUNCE_NOT_SET) + debounce = pin->debounce; + + if (debounce != DEBOUNCE_NOT_SET) { + ret = gpiod_set_debounce(pin->gpio, debounce); + if (ret) + return ret; + + pin->debounce = debounce ? 
debounce : DEBOUNCE_NOT_SET; + } + + return 0; +} + +static int f_mud_pins_gpio_request_debounce(struct f_mud_pins_cell *pcell, + unsigned int index, unsigned int debounce) +{ + struct f_mud_pins_pin *pin = &pcell->pins[index]; + + if (pin->debounce == debounce) + return 0; + + return f_mud_pins_gpio_request_do(pcell, index, true, debounce); +} + +static int f_mud_pins_gpio_request_dflag(struct f_mud_pins_cell *pcell, unsigned int index, + enum gpiod_flags dflag, bool dval, bool config) +{ + struct f_mud_pins_pin *pin = &pcell->pins[index]; + bool curr_dflag = pin->dflags & dflag; + bool changed = false; + + fmdebug("%s(index=%u): dflag=%u dval=%u\n", __func__, index, dflag, dval); + + if (curr_dflag != dval) { + if (dval) + pin->dflags |= dflag; + else + pin->dflags &= ~dflag; + changed = true; + } + + if (pin->gpio && !changed) + return 0; + + return f_mud_pins_gpio_request_do(pcell, index, config, DEBOUNCE_NOT_SET); +} + +static int f_mud_pins_gpio_request_lflag(struct f_mud_pins_cell *pcell, unsigned int index, + enum gpio_lookup_flags lflag, bool lval, + bool config) +{ + struct gpiod_lookup *lentry = &pcell->ldev->lookup->table[index]; + struct f_mud_pins_pin *pin = &pcell->pins[index]; + bool curr_lflag = lentry->flags & lflag; + bool changed = false; + + fmdebug("%s(index=%u): lflag=%u lval=%u\n", __func__, index, lflag, lval); + + if (curr_lflag != lval) { + if (lval) + lentry->flags |= lflag; + else + lentry->flags &= ~lflag; + changed = true; + } + + if (pin->gpio && !changed) + return 0; + + return f_mud_pins_gpio_request_do(pcell, index, config, DEBOUNCE_NOT_SET); +} + +static int f_mud_pins_write_gpio_config(struct f_mud_pins_cell *pcell, unsigned int index, + unsigned int offset, unsigned int val) +{ + struct f_mud_pins_pin *pin; + int ret = -ENOTSUPP; + + fmdebug("%s(index=%u, offset=0x%02x, val=%u)\n", __func__, index, offset, val); + + pin = &pcell->pins[index]; + + switch (offset) { + case MUD_PIN_CONFIG_BIAS_PULL_DOWN: + if (!val) + return 
-ENOTSUPP; + ret = f_mud_pins_gpio_request_lflag(pcell, index, GPIO_PULL_DOWN, val, true); + break; + case MUD_PIN_CONFIG_BIAS_PULL_UP: + if (!val) + return -ENOTSUPP; + ret = f_mud_pins_gpio_request_lflag(pcell, index, GPIO_PULL_UP, val, true); + break; + case MUD_PIN_CONFIG_DRIVE_OPEN_DRAIN: + ret = f_mud_pins_gpio_request_lflag(pcell, index, GPIO_OPEN_DRAIN, 1, true); + break; + case MUD_PIN_CONFIG_DRIVE_OPEN_SOURCE: + ret = f_mud_pins_gpio_request_lflag(pcell, index, GPIO_OPEN_SOURCE, 1, true); + break; + case MUD_PIN_CONFIG_DRIVE_PUSH_PULL: + ret = f_mud_pins_gpio_request_dflag(pcell, index, GPIOD_IN, 0, true); + break; + case MUD_PIN_CONFIG_INPUT_DEBOUNCE: + ret = f_mud_pins_gpio_request_debounce(pcell, index, val); + break; + case MUD_PIN_CONFIG_INPUT_ENABLE: + ret = f_mud_pins_gpio_request_dflag(pcell, index, GPIOD_IN, val, true); + break; + case MUD_PIN_CONFIG_OUTPUT: + val = val ? GPIOD_OUT_HIGH : GPIOD_OUT_LOW; + ret = f_mud_pins_gpio_request_dflag(pcell, index, val, 1, true); + break; + case MUD_PIN_CONFIG_PERSIST_STATE: + ret = f_mud_pins_gpio_request_lflag(pcell, index, GPIO_TRANSITORY, val, true); + break; + } + + return ret; +} + +static int f_mud_pins_write_gpio(struct f_mud_pins_cell *pcell, unsigned int regnr, + const void *buf, size_t len) +{ + unsigned int regbase = regnr - MUD_PINCTRL_REG_PIN_BASE; + unsigned int offset = regbase % MUD_PINCTRL_PIN_BLOCK_SIZE; + unsigned int index = regbase / MUD_PINCTRL_PIN_BLOCK_SIZE; + size_t count = len / sizeof(u32); + struct f_mud_pins_pin *pin; + const __le32 *buf32 = buf; + u32 val; + int ret; + + fmdebug("%s(len=%zu): offset=0x%02x index=%u\n", __func__, len, offset, index); + + if (index >= pcell->count) { + fmdebug("%s: gpio index out of bounds: %u\n", __func__, index); + return -EINVAL; + } + + while (count--) { + pin = &pcell->pins[index]; + val = le32_to_cpup(buf32++); + ret = 0; + + switch (offset++) { + case MUD_PIN_CONFIG_BIAS_BUS_HOLD ... 
MUD_PIN_CONFIG_END: + return f_mud_pins_write_gpio_config(pcell, index, offset - 1, val); + + case MUD_PIN_GPIO_REQUEST: + if (val != 1) + return -EINVAL; + /* Persistence is the default so set it from the start */ + ret = f_mud_pins_gpio_request_lflag(pcell, index, GPIO_TRANSITORY, + 0, false); + break; + case MUD_PIN_GPIO_FREE: + if (val != 1) + return -EINVAL; + f_mud_pins_gpio_free(pcell, index); + break; + + case MUD_PIN_DIRECTION: + if (!pin) + return -EINVAL; + + if (val == MUD_PIN_DIRECTION_INPUT) { + ret = gpiod_direction_input(pin->gpio); + } else { + int value = !!(val & MUD_PIN_DIRECTION_OUTPUT_HIGH); + + ret = gpiod_direction_output(pin->gpio, value); + } + break; + case MUD_PIN_LEVEL: + if (pin) + gpiod_set_value_cansleep(pin->gpio, val); + break; + case MUD_PIN_IRQ_TYPE: + ret = f_mud_pins_gpio_irq_type(pcell, index, val); + break; + case MUD_PIN_IRQ_ENABLED: + if (val) + ret = f_mud_pins_gpio_irq_request(pcell, index); + else + f_mud_pins_gpio_irq_free(pcell, index); + break; + default: + pr_err("%s: unknown register: 0x%x\n", __func__, regnr); + return -EINVAL; + } + + if (ret < 0) + return ret; + } + + return 0; +} + +static int f_mud_pins_writereg(struct f_mud_cell *cell, unsigned int regnr, + const void *buf, size_t len, u8 compression) +{ + struct f_mud_pins_cell *pcell = cell_to_pcell(cell); + + fmdebug("%s(regnr=0x%02x, len=%zu)\n", __func__, regnr, len); + + if (regnr >= MUD_PINCTRL_REG_PIN_BASE) + return f_mud_pins_write_gpio(pcell, regnr, buf, len); + + return -EINVAL; +} + +static int f_mud_pins_read_gpio(struct f_mud_pins_cell *pcell, unsigned int regnr, + void *buf, size_t len) +{ + unsigned int regbase = regnr - MUD_PINCTRL_REG_PIN_BASE; + unsigned int offset = regbase % MUD_PINCTRL_PIN_BLOCK_SIZE; + unsigned int index = regbase / MUD_PINCTRL_PIN_BLOCK_SIZE; + size_t count = len / sizeof(u32); + struct f_mud_pins_pin *pin; + struct gpio_desc *gpio; + __le32 *buf32 = buf; + u32 val; + int ret; + + fmdebug("%s(len=%zu): offset=0x%02x 
index=%u\n", __func__, len, offset, index); + + if (index >= pcell->count) { + fmdebug("%s: gpio index out of bounds: %u\n", __func__, index); + return -EINVAL; + } + + if (offset >= MUD_PIN_NAME && (offset + count - 1) <= MUD_PIN_NAME_END) { + size_t start = (offset - MUD_PIN_NAME) * sizeof(u32); + char name[MUD_PIN_NAME_LEN]; + + strscpy_pad(name, pcell->ldev->names[index], MUD_PIN_NAME_LEN); + fmdebug(" name=%*ph\n", MUD_PIN_NAME_LEN, name); + memcpy(buf, name + start, len); + + return 0; + } + + pin = &pcell->pins[index]; + + while (count--) { + switch (offset++) { + case MUD_PIN_DIRECTION: + /* + * Host side gpiochip_add_data_with_key() calls ->get_direction + * without requesting the gpio first. + */ + gpio = pin->gpio; + if (!gpio) { + /* non gpio pins will fail here, but that's fine */ + ret = f_mud_pins_gpio_request_lflag(pcell, index, + 0, 0, false); + if (ret) + return ret; + } + ret = gpiod_get_direction(pin->gpio); + if (!gpio) + f_mud_pins_gpio_free(pcell, index); + if (ret < 0) + return ret; + val = ret ? MUD_PIN_DIRECTION_INPUT : MUD_PIN_DIRECTION_OUTPUT; + break; + case MUD_PIN_LEVEL: + if (!pin->gpio) + return -EINVAL; + ret = gpiod_get_value_cansleep(pin->gpio); + if (ret < 0) + return ret; + val = ret; + break; + case MUD_PIN_IRQ_TYPES: + /* FIXME: Is it possible to get this info somewhere? 
*/ + val = MUD_PIN_IRQ_TYPE_EDGE_BOTH; + break; + default: + pr_err("%s: unknown register: 0x%x\n", __func__, regnr); + return -EINVAL; + } + + *(buf32++) = cpu_to_le32(val); + } + + return 0; +} + +static int f_mud_pins_readreg(struct f_mud_cell *cell, unsigned int regnr, + void *buf, size_t *len, u8 compression) +{ + struct f_mud_pins_cell *pcell = cell_to_pcell(cell); + size_t count = *len / sizeof(u32); + __le32 *buf32 = buf; + + fmdebug("%s(regnr=0x%02x, len=%zu)\n", __func__, regnr, *len); + + if (regnr >= MUD_PINCTRL_REG_PIN_BASE) + return f_mud_pins_read_gpio(pcell, regnr, buf, *len); + + if (regnr >= MUD_PINCTRL_REG_IRQ_STATUS && + regnr <= MUD_PINCTRL_REG_IRQ_STATUS_END) { + unsigned int nregs = DIV_ROUND_UP(pcell->count, sizeof(u32)); + unsigned int offset = regnr - MUD_PINCTRL_REG_IRQ_STATUS; + unsigned int i, end = min_t(unsigned int, count, nregs); + u32 irqvals[MUD_PINCTRL_MAX_NUM_PINS / sizeof(u32)]; + + if (count > (nregs - offset)) + return -EINVAL; + + bitmap_to_arr32(irqvals, pcell->irq_status, pcell->count); + + fmdebug(" irq_status=%*pb irqvals=%*ph\n", pcell->count, + pcell->irq_status, nregs, irqvals); + + for (i = offset; i < end; i++) + *buf32++ = cpu_to_le32(irqvals[i]); + + return 0; + } + + if (regnr == MUD_PINCTRL_REG_NUM_PINS && count == 1) { + *buf32 = cpu_to_le32(pcell->count); + return 0; + } + + fmdebug("%s: unknown register 0x%x\n", __func__, regnr); + + return -EINVAL; +} + +static void f_mud_pins_disable(struct f_mud_cell *cell) +{ + struct f_mud_pins_cell *pcell = cell_to_pcell(cell); + + f_mud_pins_gpio_irq_free_all(pcell); + /* + * FIXME: Free requested gpios as well? + * Or should they survive on unplug on a powered board? 
+ */ +} + +static void f_mud_pins_lookup_device_release(struct device *dev) +{ + struct f_mud_pins_lookup_device *ldev = + container_of(dev, struct f_mud_pins_lookup_device, dev); + + fmdebug("%s: ldev=%px\n", __func__, ldev); + + if (ldev->lookup) { + struct gpiod_lookup *p; + + if (ldev->lookup->dev_id) + gpiod_remove_lookup_table(ldev->lookup); + + for (p = &ldev->lookup->table[0]; p->chip_label; p++) + kfree(p->chip_label); + kfree(ldev->lookup); + } + + if (ldev->names) { + unsigned int i; + + for (i = 0; i < ldev->count; i++) + kfree(ldev->names[i]); + kfree(ldev->names); + } + + ida_free(&f_mud_pins_ida, ldev->id); + kfree(ldev); +} + +static int f_mud_pins_looup_device_create(struct f_mud_pins_cell *pcell) +{ + struct f_mud_pins_lookup_device *ldev; + struct f_mud_pins_cell_item *fitem; + unsigned int max_index = 0; + int ret; + + fmdebug("%s: %px\n", __func__, pcell); + + ldev = kzalloc(sizeof(*ldev), GFP_KERNEL); + if (!ldev) + return -ENOMEM; + + fmdebug("%s: ldev=%px\n", __func__, ldev); + + ldev->id = ida_alloc(&f_mud_pins_ida, GFP_KERNEL); + if (ldev->id < 0) { + kfree(ldev); + return ldev->id; + } + + ldev->cell = &pcell->cell; + ldev->dev.release = f_mud_pins_lookup_device_release; + dev_set_name(&ldev->dev, "f_mud_pins-%d", ldev->id); + + ret = device_register(&ldev->dev); + if (ret) { + put_device(&ldev->dev); + return ret; + } + + mutex_lock(&pcell->lock); + + list_for_each_entry(fitem, &pcell->items, node) { + max_index = max(max_index, fitem->index); + ldev->count++; + } + + if (!ldev->count) { + ret = -ENOENT; + goto out_unlock; + } + + if (ldev->count != max_index + 1) { + pr_err("Pin indices are not continuous\n"); + ret = -EINVAL; + goto out_unlock; + } + + ldev->names = kcalloc(ldev->count, sizeof(*ldev->names), GFP_KERNEL); + ldev->lookup = kzalloc(struct_size(ldev->lookup, table, ldev->count + 1), + GFP_KERNEL); + if (!ldev->lookup || !ldev->names) { + ret = -ENOMEM; + goto out_unlock; + } + + ret = 0; + list_for_each_entry(fitem, 
&pcell->items, node) { + struct gpiod_lookup *entry = &ldev->lookup->table[fitem->index]; + + mutex_lock(&fitem->lock); + + if (!fitem->name) { + pr_err("Missing name for pin %u\n", fitem->index); + ret = -EINVAL; + goto out_unlock_pin; + } + + ldev->names[fitem->index] = kstrdup(fitem->name, GFP_KERNEL); + if (!ldev->names[fitem->index]) { + ret = -ENOMEM; + goto out_unlock_pin; + } + + /* Skip adding to lookup if the pin has no gpio function */ + if (!fitem->chip) + goto out_unlock_pin; + + entry->idx = fitem->index; + entry->chip_hwnum = fitem->offset; + + entry->chip_label = kstrdup(fitem->chip, GFP_KERNEL); + if (!entry->chip_label) { + ret = -ENOMEM; + goto out_unlock_pin; + } + + fmdebug(" %u: chip=%s hwnum=%u name=%s\n", entry->idx, + entry->chip_label, entry->chip_hwnum, ldev->names[fitem->index]); +out_unlock_pin: + mutex_unlock(&fitem->lock); + if (ret) + goto out_unlock; + } + + ldev->lookup->dev_id = dev_name(&ldev->dev); + gpiod_add_lookup_table(ldev->lookup); + + pcell->ldev = ldev; + pcell->count = ldev->count; + +out_unlock: + mutex_unlock(&pcell->lock); + + if (ret) + device_unregister(&ldev->dev); + + return ret; +} + +static void f_mud_pins_lookup_device_destroy(struct f_mud_pins_cell *pcell) +{ + if (pcell->ldev) { + device_unregister(&pcell->ldev->dev); + pcell->ldev = NULL; + } +} + +static int f_mud_pins_bind(struct f_mud_cell *cell) +{ + struct f_mud_pins_cell *pcell = cell_to_pcell(cell); + unsigned int i; + int ret; + + fmdebug("%s: %px\n", __func__, pcell); + + ret = f_mud_pins_looup_device_create(pcell); + if (ret) + return ret; + + spin_lock_init(&pcell->irq_status_lock); + pcell->irq_status = bitmap_zalloc(pcell->count, GFP_KERNEL); + if (!pcell->irq_status) { + ret = -ENOMEM; + goto error_free; + } + + pcell->pins = kcalloc(pcell->count, sizeof(*pcell->pins), GFP_KERNEL); + if (!pcell->pins) { + ret = -ENOMEM; + goto error_free; + } + + for (i = 0; i < pcell->count; i++) { + struct f_mud_pins_pin *pin = &pcell->pins[i]; + + 
pin->parent = pcell; + pin->index = i; + pin->debounce = DEBOUNCE_NOT_SET; + } + + return 0; + +error_free: + kfree(pcell->pins); + f_mud_pins_lookup_device_destroy(pcell); + + return ret; +} + +static void f_mud_pins_unbind(struct f_mud_cell *cell) +{ + struct f_mud_pins_cell *pcell = cell_to_pcell(cell); + unsigned int i; + + fmdebug("%s:\n", __func__); + + if (pcell->pins) { + for (i = 0; i < pcell->count; i++) { + if (pcell->pins[i].gpio) { + fmdebug(" gpiod_put: %u\n", i); + gpiod_put(pcell->pins[i].gpio); + } + } + + kfree(pcell->pins); + } + + bitmap_free(pcell->irq_status); + + f_mud_pins_lookup_device_destroy(pcell); +} + +static void f_mud_pins_cell_item_item_release(struct config_item *item) +{ + struct f_mud_pins_cell_item *fitem = ci_to_f_mud_pins_cell_item(item); + + fmdebug("%s: fitem=%px\n", __func__, fitem); + + kfree(fitem->name); + kfree(fitem->chip); + kfree(fitem); +} + +F_MUD_OPT_STR(f_mud_pins_cell_item, name); +F_MUD_OPT_STR(f_mud_pins_cell_item, chip); +F_MUD_OPT_INT(f_mud_pins_cell_item, offset, 0, INT_MAX); + +static struct configfs_attribute *f_mud_pins_cell_item_attrs[] = { + &f_mud_pins_cell_item_attr_name, + &f_mud_pins_cell_item_attr_chip, + &f_mud_pins_cell_item_attr_offset, + NULL, +}; + +static struct configfs_item_operations f_mud_pins_cell_item_item_ops = { + .release = f_mud_pins_cell_item_item_release, +}; + +static const struct config_item_type f_mud_pins_cell_item_func_type = { + .ct_item_ops = &f_mud_pins_cell_item_item_ops, + .ct_attrs = f_mud_pins_cell_item_attrs, + .ct_owner = THIS_MODULE, +}; + +static struct config_group *f_mud_pins_make_group(struct config_group *group, + const char *name) +{ + struct f_mud_pins_cell *pcell = ci_to_f_mud_pins_cell(&group->cg_item); + struct f_mud_pins_cell_item *fitem; + const char *prefix = "pin."; + struct config_group *grp; + int ret, index; + + fmdebug("%s: name=%s\n", __func__, name); + + mutex_lock(&pcell->lock); + fmdebug("%s: pcell=%px pcell->refcnt=%d\n", __func__, pcell, 
pcell->refcnt); + if (pcell->refcnt) { + grp = ERR_PTR(-EBUSY); + goto out_unlock; + } + + if (strstr(name, prefix) != name) { + pr_err("Missing prefix '%s' in name: '%s'\n", prefix, name); + grp = ERR_PTR(-EINVAL); + goto out_unlock; + } + + ret = kstrtoint(name + strlen(prefix), 10, &index); + if (ret) { + pr_err("Failed to parse index in name: '%s'\n", name); + grp = ERR_PTR(ret); + goto out_unlock; + } + + fitem = kzalloc(sizeof(*fitem), GFP_KERNEL); + fmdebug(" pin=%px\n", fitem); + if (!fitem) { + grp = ERR_PTR(-ENOMEM); + goto out_unlock; + } + + fitem->index = index; + grp = &fitem->group; + + config_group_init_type_name(grp, "", &f_mud_pins_cell_item_func_type); + + list_add(&fitem->node, &pcell->items); +out_unlock: + mutex_unlock(&pcell->lock); + + return grp; +} + +static void f_mud_pins_drop_item(struct config_group *group, struct config_item *item) +{ + struct f_mud_pins_cell *pcell = ci_to_f_mud_pins_cell(&group->cg_item); + struct f_mud_pins_cell_item *fitem = ci_to_f_mud_pins_cell_item(item); + + fmdebug("%s: pcell=%px fitem=%px\n", __func__, pcell, fitem); + + mutex_lock(&pcell->lock); + list_del(&fitem->node); + mutex_unlock(&pcell->lock); + + config_item_put(item); +} + +static struct configfs_group_operations f_mud_pins_group_ops = { + .make_group = f_mud_pins_make_group, + .drop_item = f_mud_pins_drop_item, +}; + +static struct configfs_item_operations f_mud_pins_item_ops = { + .release = f_mud_cell_item_release, +}; + +static const struct config_item_type f_mud_pins_func_type = { + .ct_item_ops = &f_mud_pins_item_ops, + .ct_group_ops = &f_mud_pins_group_ops, + .ct_owner = THIS_MODULE, +}; + +static void f_mud_pins_free(struct f_mud_cell *cell) +{ + struct f_mud_pins_cell *pcell = container_of(cell, struct f_mud_pins_cell, cell); + + fmdebug("%s: pcell=%px\n", __func__, pcell); + + mutex_destroy(&pcell->lock); + kfree(pcell); +} + +static struct f_mud_cell *f_mud_pins_alloc(void) +{ + struct f_mud_pins_cell *pcell; + + pcell = 
kzalloc(sizeof(*pcell), GFP_KERNEL); + fmdebug("%s: pcell=%px\n", __func__, pcell); + if (!pcell) + return ERR_PTR(-ENOMEM); + + mutex_init(&pcell->lock); + INIT_LIST_HEAD(&pcell->items); + config_group_init_type_name(&pcell->cell.group, "", &f_mud_pins_func_type); + + fmdebug("%s: cell=%px\n", __func__, &pcell->cell); + + return &pcell->cell; +} + +static const struct f_mud_cell_ops f_mud_pins_ops = { + .name = "mud-pins", + .owner = THIS_MODULE, + + .interrupt_interval_ms = 100, + + .alloc = f_mud_pins_alloc, + .free = f_mud_pins_free, + .bind = f_mud_pins_bind, + .unbind = f_mud_pins_unbind, + + .regval_bytes = 4, + .max_transfer_size = 64, + + .disable = f_mud_pins_disable, + .readreg = f_mud_pins_readreg, + .writereg = f_mud_pins_writereg, +}; + +DECLARE_F_MUD_CELL_INIT(f_mud_pins_ops); + +MODULE_AUTHOR("Noralf Trønnes"); +MODULE_LICENSE("GPL"); From patchwork Sun Feb 16 17:21:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Noralf_Tr=C3=B8nnes?= X-Patchwork-Id: 11384487 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 011B918E8 for ; Sun, 16 Feb 2020 17:27:12 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id DDE9120857 for ; Sun, 16 Feb 2020 17:27:11 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org DDE9120857 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=tronnes.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id BA7EB6E44E; Sun, 16 Feb 2020 17:27:03 +0000 (UTC) 
X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from asav22.altibox.net (asav22.altibox.net [109.247.116.9]) by gabe.freedesktop.org (Postfix) with ESMTPS id 057C06E44E for ; Sun, 16 Feb 2020 17:27:03 +0000 (UTC) Received: from localhost.localdomain (unknown [81.166.168.211]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-SHA256 (128/128 bits)) (No client certificate requested) (Authenticated sender: noralf.tronnes@ebnett.no) by asav22.altibox.net (Postfix) with ESMTPSA id D80AD200EE; Sun, 16 Feb 2020 18:21:40 +0100 (CET) From: =?utf-8?q?Noralf_Tr=C3=B8nnes?= To: broonie@kernel.org, balbi@kernel.org, lee.jones@linaro.org Subject: [RFC 6/9] regmap: Speed up _regmap_raw_write_impl() for large buffers Date: Sun, 16 Feb 2020 18:21:14 +0100 Message-Id: <20200216172117.49832-7-noralf@tronnes.org> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20200216172117.49832-1-noralf@tronnes.org> References: <20200216172117.49832-1-noralf@tronnes.org> MIME-Version: 1.0 X-CMAE-Score: 0 X-CMAE-Analysis: v=2.3 cv=ZvHD1ezG c=1 sm=1 tr=0 a=OYZzhG0JTxDrWp/F2OJbnw==:117 a=OYZzhG0JTxDrWp/F2OJbnw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=IkcTkHD0fZMA:10 a=M51BFTxLslgA:10 a=SJz97ENfAAAA:8 a=b2nCpQa9nUX8Wd5tZLcA:9 a=QEXdDO2ut3YA:10 a=vFet0B0WnEQeilDPIY6i:22 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linux-usb@vger.kernel.org, dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" When writing a 3MB buffer the unwritable check in _regmap_raw_write_impl() adds a ~20ms overhead on a Raspberry Pi 4. Amend this by avoiding the check if it's not necessary. 
Signed-off-by: Noralf Trønnes --- drivers/base/regmap/regmap.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c index 19f57ccfbe1d..cd876309a74b 100644 --- a/drivers/base/regmap/regmap.c +++ b/drivers/base/regmap/regmap.c @@ -1489,10 +1489,12 @@ static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg, WARN_ON(!map->bus); /* Check for unwritable registers before we start */ - for (i = 0; i < val_len / map->format.val_bytes; i++) - if (!regmap_writeable(map, - reg + regmap_get_offset(map, i))) - return -EINVAL; + if (map->max_register || map->writeable_reg || map->wr_table) { + for (i = 0; i < val_len / map->format.val_bytes; i++) + if (!regmap_writeable(map, + reg + regmap_get_offset(map, i))) + return -EINVAL; + } if (!map->cache_bypass && map->format.parse_val) { unsigned int ival; From patchwork Sun Feb 16 17:21:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: =?utf-8?q?Noralf_Tr=C3=B8nnes?= X-Patchwork-Id: 11384509 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EB647159A for ; Sun, 16 Feb 2020 17:27:25 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id D3B2A20857 for ; Sun, 16 Feb 2020 17:27:25 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D3B2A20857 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=tronnes.org Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 67A0A6E46C; Sun, 16 
Feb 2020 17:27:06 +0000 (UTC) X-Original-To: dri-devel@lists.freedesktop.org Delivered-To: dri-devel@lists.freedesktop.org Received: from asav22.altibox.net (asav22.altibox.net [109.247.116.9]) by gabe.freedesktop.org (Postfix) with ESMTPS id 06AD56E456 for ; Sun, 16 Feb 2020 17:27:00 +0000 (UTC) Received: from localhost.localdomain (unknown [81.166.168.211]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-SHA256 (128/128 bits)) (No client certificate requested) (Authenticated sender: noralf.tronnes@ebnett.no) by asav22.altibox.net (Postfix) with ESMTPSA id 25A9D200F1; Sun, 16 Feb 2020 18:21:41 +0100 (CET) From: =?utf-8?q?Noralf_Tr=C3=B8nnes?= To: broonie@kernel.org, balbi@kernel.org, lee.jones@linaro.org Subject: [RFC 7/9] drm: Add Multifunction USB Device display driver Date: Sun, 16 Feb 2020 18:21:15 +0100 Message-Id: <20200216172117.49832-8-noralf@tronnes.org> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20200216172117.49832-1-noralf@tronnes.org> References: <20200216172117.49832-1-noralf@tronnes.org> MIME-Version: 1.0 X-CMAE-Score: 0 X-CMAE-Analysis: v=2.3 cv=ZvHD1ezG c=1 sm=1 tr=0 a=OYZzhG0JTxDrWp/F2OJbnw==:117 a=OYZzhG0JTxDrWp/F2OJbnw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=IkcTkHD0fZMA:10 a=M51BFTxLslgA:10 a=SJz97ENfAAAA:8 a=9BuUX6pRhsC96qv7cPsA:9 a=7yrdoAaASxDrdOGF:21 a=1pP-XCHQzciLwiWz:21 a=QEXdDO2ut3YA:10 a=vFet0B0WnEQeilDPIY6i:22 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: linux-usb@vger.kernel.org, dri-devel@lists.freedesktop.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" The Multifunction USB Device has optional support for displays. LZ4 compression is used if the device supports it. The driver is MIT licensed in the hope that parts of it can be used on the BSD's. 
Signed-off-by: Noralf Trønnes --- drivers/gpu/drm/Kconfig | 2 + drivers/gpu/drm/Makefile | 1 + drivers/gpu/drm/mud/Kconfig | 15 + drivers/gpu/drm/mud/Makefile | 3 + drivers/gpu/drm/mud/mud_drm.c | 1198 +++++++++++++++++++++++++++++++++ drivers/gpu/drm/mud/mud_drm.h | 137 ++++ 6 files changed, 1356 insertions(+) create mode 100644 drivers/gpu/drm/mud/Kconfig create mode 100644 drivers/gpu/drm/mud/Makefile create mode 100644 drivers/gpu/drm/mud/mud_drm.c create mode 100644 drivers/gpu/drm/mud/mud_drm.h diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig index bfdadc3667e0..8ddc0d0e82cc 100644 --- a/drivers/gpu/drm/Kconfig +++ b/drivers/gpu/drm/Kconfig @@ -387,6 +387,8 @@ source "drivers/gpu/drm/aspeed/Kconfig" source "drivers/gpu/drm/mcde/Kconfig" +source "drivers/gpu/drm/mud/Kconfig" + # Keep legacy drivers last menuconfig DRM_LEGACY diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile index 9f1c7c486f88..5a5eab598d39 100644 --- a/drivers/gpu/drm/Makefile +++ b/drivers/gpu/drm/Makefile @@ -122,3 +122,4 @@ obj-$(CONFIG_DRM_LIMA) += lima/ obj-$(CONFIG_DRM_PANFROST) += panfrost/ obj-$(CONFIG_DRM_ASPEED_GFX) += aspeed/ obj-$(CONFIG_DRM_MCDE) += mcde/ +obj-y += mud/ diff --git a/drivers/gpu/drm/mud/Kconfig b/drivers/gpu/drm/mud/Kconfig new file mode 100644 index 000000000000..440e994ca0a2 --- /dev/null +++ b/drivers/gpu/drm/mud/Kconfig @@ -0,0 +1,15 @@ +# SPDX-License-Identifier: GPL-2.0-or-later + +config DRM_MUD + tristate "Multifunction USB Device Display" + depends on DRM && USB && MFD_MUD + select LZ4_COMPRESS + select DRM_KMS_HELPER + + select DRM_GEM_CMA_HELPER + select DRM_GEM_SHMEM_HELPER + + select BACKLIGHT_CLASS_DEVICE + help + This is a KMS driver for Multifunction USB Device displays or display + adapters. 
diff --git a/drivers/gpu/drm/mud/Makefile b/drivers/gpu/drm/mud/Makefile new file mode 100644 index 000000000000..d5941d33bcd9 --- /dev/null +++ b/drivers/gpu/drm/mud/Makefile @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0-or-later + +obj-$(CONFIG_DRM_MUD) += mud_drm.o diff --git a/drivers/gpu/drm/mud/mud_drm.c b/drivers/gpu/drm/mud/mud_drm.c new file mode 100644 index 000000000000..51ba756940fd --- /dev/null +++ b/drivers/gpu/drm/mud/mud_drm.c @@ -0,0 +1,1198 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright 2020 Noralf Trønnes + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +//#include +#include +#include +#include +#include +#include + +#include "mud_drm.h" + +/* + * freerun: Userspace is told that the flush happended immediately, before the worker has begun. + * Useful if the rendering loop handles several displays one after the other. + * steady: Notify userspace at a fixed interval (FPS). + * inline: Do flushing before returning to userspace from the update function. + * + * Rationale: + * In a worst case scenario a full buffer update can take 1 second. + * + * I believe Wayland/Weston has one rendering loop per display/device and if so doesn't need any special treatment. + * For Xorg I got the impression that it runs everything in one loop and one slow device will slow everything down. + * For games on embedded I believe maybe a fixed rate is best. + * + * I'd appreciate any insight into how userspace operates in this regard. 
+ */ +static int pageflip; +module_param(pageflip, int, 0644); +MODULE_PARM_DESC(pageflip, "pageflip strategy: 0=freerun, 1=steady, 2=inline [default=0]"); + +#define MUD_DRM_PAGEFLIP_FREERUN 0 +#define MUD_DRM_PAGEFLIP_STEADY 1 +#define MUD_DRM_PAGEFLIP_INLINE 2 + +struct mud_drm_damage { + struct list_head list; + struct drm_rect rect; + ktime_t time; +}; + +struct mud_drm_device { + struct drm_device drm; + struct drm_simple_display_pipe pipe; + struct usb_device *usb; + struct regmap *regmap; + + void *buf; + size_t buf_len; + + struct mutex lock; + + bool run; + struct workqueue_struct *workq; + struct work_struct work; + wait_queue_head_t waitq; + struct drm_framebuffer *fb; + struct list_head damagelist; + unsigned int average_pageflip_ms; +}; + +static inline struct mud_drm_device *to_mud_drm_device(struct drm_device *drm) +{ + return container_of(drm, struct mud_drm_device, drm); +} + +/*----------------------------------------------------------------------------*/ +/* TODO: Move to drm_encoder.c */ + +static void drm_encoder_dummy_cleanup(struct drm_encoder *encoder) +{ + drm_encoder_cleanup(encoder); + kfree(encoder); +} + +static const struct drm_encoder_funcs drm_encoder_dummy_funcs = { + .destroy = drm_encoder_dummy_cleanup, +}; + +static struct drm_encoder *drm_encoder_create_dummy(struct drm_device *dev, int encoder_type) +{ + struct drm_encoder *encoder; + int ret; + + encoder = kzalloc(sizeof(*encoder), GFP_KERNEL); + if (!encoder) + return ERR_PTR(-ENOMEM); + + ret = drm_encoder_init(dev, encoder, &drm_encoder_dummy_funcs, + encoder_type, NULL); + if (ret) { + kfree(encoder); + return ERR_PTR(ret); + } + + return encoder; +} + +/*----------------------------------------------------------------------------*/ +/* TODO: Move to drm_simple_kms_helper.c */ + +static void drm_simple_connector_cleanup(struct drm_connector *connector) +{ + drm_connector_cleanup(connector); + kfree(connector); +} + +static const struct drm_connector_funcs 
drm_simple_connector_funcs = { + .fill_modes = drm_helper_probe_single_connector_modes, + .destroy = drm_simple_connector_cleanup, + .reset = drm_atomic_helper_connector_reset, + .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state, + .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, +}; + +static struct drm_connector * +drm_simple_connector_create(struct drm_device *dev, const struct drm_connector_helper_funcs *funcs, int connector_type) +{ + struct drm_connector *connector; + int ret; + + connector = kzalloc(sizeof(*connector), GFP_KERNEL); + if (!connector) + return ERR_PTR(-ENOMEM); + + drm_connector_helper_add(connector, funcs); + ret = drm_connector_init(dev, connector, &drm_simple_connector_funcs, connector_type); + if (ret) { + kfree(connector); + return ERR_PTR(ret); + } + + return connector; +} + +/*----------------------------------------------------------------------------*/ + +static int mud_drm_read(struct mud_drm_device *mdrm, unsigned int regnr, + u32 *vals, size_t val_count) +{ + size_t val_len = val_count * sizeof(u32); + u32 *val_buf; + int ret; + + val_buf = kmalloc(val_len, GFP_KERNEL); + if (!val_buf) + return -ENOMEM; + + ret = regmap_bulk_read(mdrm->regmap, regnr, val_buf, val_count); + if (!ret) + memcpy(vals, val_buf, val_len); + kfree(val_buf); + + return ret; +} + +static int mud_drm_write(struct mud_drm_device *mdrm, unsigned int regnr, + const u32 *vals, size_t val_count) +{ + const u32 *val_buf; + int ret; + + if (val_count == 1) + return regmap_write(mdrm->regmap, regnr, *vals); + + val_buf = kmemdup(vals, val_count * sizeof(*val_buf), GFP_KERNEL); + if (!val_buf) + return -ENOMEM; + + ret = regmap_bulk_write(mdrm->regmap, regnr, val_buf, val_count); + kfree(val_buf); + + return ret; +} + +static int mud_drm_connector_detect(struct drm_connector *connector, + struct drm_modeset_acquire_ctx *ctx, + bool force) +{ + struct mud_drm_device *mdrm = to_mud_drm_device(connector->dev); + unsigned int status, 
regnr; + int ret; + + regnr = MUD_DRM_REG_CONNECTOR_STATUS + connector->index; + ret = regmap_read(mdrm->regmap, regnr, &status); + printk("%s: index=%u, status=%u, ret=%d\n", __func__, connector->index, status, ret); + if (ret || status > connector_status_unknown) + status = connector_status_disconnected; + + return status; +} + +static void *mud_drm_connector_get_edid(struct drm_connector *connector) +{ + struct mud_drm_device *mdrm = to_mud_drm_device(connector->dev); + unsigned int edid_len, regnr; + void *edid = NULL; + int ret; + + regnr = MUD_DRM_REG_CONNECTOR_EDID_LEN + connector->index; + ret = regmap_read(mdrm->regmap, regnr, &edid_len); + printk("%s: index=%u, edid_len=%u, ret=%d\n", __func__, + connector->index, edid_len, ret); + if (ret || !edid_len) + return NULL; + + edid = kmalloc(edid_len, GFP_KERNEL); + if (!edid) + return NULL; + + regnr = MUD_DRM_REG_CONNECTOR_EDID + (connector->index * MUD_DRM_CONNECTOR_EDID_BLOCK_MAX); + ret = regmap_raw_read(mdrm->regmap, regnr, edid, edid_len); + if (ret) { + kfree(edid); + return NULL; + } + + return edid; +} + +static int mud_drm_connector_get_modes(struct drm_connector *connector) +{ + struct mud_drm_device *mdrm = to_mud_drm_device(connector->dev); + struct mud_drm_display_mode *modes = NULL; + struct regmap *regmap = mdrm->regmap; + unsigned int i, reg_num_modes, regnr; + int ret, num_modes = 0; + void *edid = NULL; + + printk("%s: index=%u\n", __func__, connector->index); + + // FIXME: gadget side has: mconn->capabilities |= MUD_DRM_CONNECTOR_CAP_EDID; + if (connector->connector_type != DRM_MODE_CONNECTOR_VIRTUAL) { + edid = mud_drm_connector_get_edid(connector); + drm_connector_update_edid_property(connector, edid); + } + + regnr = MUD_DRM_REG_CONNECTOR_MODES_COUNT + connector->index; + ret = regmap_read(regmap, regnr, ®_num_modes); + printk(" num_modes=%d, ret=%d\n", reg_num_modes, ret); + if (ret) { + if (edid) + num_modes = drm_add_edid_modes(connector, edid); + } else if (reg_num_modes) { + modes 
= kmalloc_array(reg_num_modes, sizeof(*modes), GFP_KERNEL); + if (!modes) + goto free; + + regnr = MUD_DRM_REG_CONNECTOR_MODES + connector->index; + ret = regmap_bulk_read(regmap, regnr, modes, + reg_num_modes * MUD_DRM_DISPLAY_MODE_FIELDS); + if (ret) + goto free; + + for (i = 0; i < reg_num_modes; i++) { + struct drm_display_mode *mode; + + mode = drm_mode_create(connector->dev); + if (!mode) + goto free; + + mud_drm_to_display_mode(mode, &modes[i]); + drm_mode_probed_add(connector, mode); + printk(" Modeline " DRM_MODE_FMT "\n", DRM_MODE_ARG(mode)); + num_modes++; + } + } +free: + kfree(edid); + kfree(modes); + + /* Prevent drm_helper_probe_single_connector_modes() from adding modes */ + if (!num_modes && connector->status == connector_status_connected) + connector->status = connector_status_unknown; + + return num_modes; +} + +static const struct drm_connector_helper_funcs mud_drm_connector_funcs = { + .detect_ctx = mud_drm_connector_detect, + .get_modes = mud_drm_connector_get_modes, +}; + +static int mud_drm_create_tv_properties(struct mud_drm_device *mdrm, struct drm_connector *connector) +{ + unsigned int i, num_modes, defval; + const char **modes; + size_t buf_len; + char *buf; + int ret; + + ret = regmap_read(mdrm->regmap, MUD_DRM_REG_TV_MODES_COUNT, &num_modes); + printk(" MUD_DRM_REG_TV_MODES_COUNT=%u, ret=%d\n", num_modes, ret); + if (ret) + return ret; + if (!num_modes) + return -EINVAL; + + buf_len = num_modes * DRM_PROP_NAME_LEN; + modes = kmalloc_array(num_modes, sizeof(*modes), GFP_KERNEL); + buf = kmalloc(buf_len, GFP_KERNEL); + if (!modes || !buf) { + ret = -ENOMEM; + goto free; + } + + ret = regmap_raw_read(mdrm->regmap, MUD_DRM_REG_TV_MODES, buf, buf_len); + if (ret) + goto free; + + for (i = 0; i < num_modes; i++) { + modes[i] = &buf[i * DRM_PROP_NAME_LEN]; + printk(" names[%u]=%s\n", i, modes[i]); + } + + ret = regmap_read(mdrm->regmap, MUD_DRM_REG_CONNECTOR_TV_MODE + connector->index, &defval); + printk(" MUD_DRM_REG_CONNECTOR_TV_MODE: 
%u\n", defval); + + ret = drm_mode_create_tv_properties(connector->dev, num_modes, modes); + if (ret) + goto free; + + drm_object_attach_property(&connector->base, + connector->dev->mode_config.tv_mode_property, + defval); +free: + kfree(modes); + kfree(buf); + + return ret; +} + +static int mud_drm_connector_create(struct mud_drm_device *mdrm, int connector_type) +{ + struct drm_device *drm = &mdrm->drm; + struct drm_connector *connector; + struct drm_encoder *encoder; + unsigned int val; + int ret; + + connector = drm_simple_connector_create(drm, &mud_drm_connector_funcs, connector_type); + if (IS_ERR(connector)) + return PTR_ERR(connector); + + connector->polled = (DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT); + + printk("%s: index=%u\n", __func__, connector->index); + + ret = regmap_read(mdrm->regmap, MUD_DRM_REG_CONNECTOR_CAPS + connector->index, &val); + printk(" MUD_DRM_REG_CONNECTOR_CAPS: %x\n", val); + if (ret) + return ret; + + if (!(val & (MUD_DRM_CONNECTOR_CAP_MODE | MUD_DRM_CONNECTOR_CAP_EDID))) { + printk("\n\nNEED MODE or EDID\n"); + } + + if (val & MUD_DRM_CONNECTOR_CAP_TV) { + ret = mud_drm_create_tv_properties(mdrm, connector); + if (ret) + printk("FAILED to create tv props, ret=%d\n", ret); + } + + /* The first connector is attached to the simple pipe encoder */ + if (!connector->index) { + encoder = &mdrm->pipe.encoder; + } else { + encoder = drm_encoder_create_dummy(drm, DRM_MODE_ENCODER_NONE); + if (IS_ERR(encoder)) + return PTR_ERR(encoder); + + encoder->possible_crtcs = 1; + } + + return drm_connector_attach_encoder(connector, encoder); +} + +static long mud_drm_throughput(ktime_t end, ktime_t begin, size_t len) +{ + long throughput; + + throughput = ktime_us_delta(end, begin); + throughput = throughput ? 
(len * 1000) / throughput : 0; + throughput = throughput * 1000 / 1024; + + return throughput; +} + +static int mud_drm_fb_copy(struct mud_drm_device *mdrm, struct drm_rect *rect, size_t *len) +{ + struct dma_buf_attachment *import_attach; + struct mud_drm_damage *damage = NULL; + struct drm_framebuffer *fb; + ktime_t start = 0; + int ret = 0; + void *vmap; + ktime_t t1, t2; + + mutex_lock(&mdrm->lock); + + if (pageflip == MUD_DRM_PAGEFLIP_STEADY) + start = ktime_get(); + + fb = mdrm->fb; + if (!fb) { + ret = -ENOENT; + goto unlock; + } + + damage = list_first_entry_or_null(&mdrm->damagelist, struct mud_drm_damage, list); + if (!damage) + goto put_fb; + + list_del(&damage->list); + + *rect = damage->rect; + + printk("Flushing [FB:%d] " DRM_RECT_FMT " dt=%lld ms\n", fb->base.id, + DRM_RECT_ARG(rect), ktime_ms_delta(ktime_get(), damage->time)); + + *len = drm_rect_width(rect) * drm_rect_height(rect) * fb->format->cpp[0]; + + /* Regmap restriction, the register value is 32-bit */ + if (*len % sizeof(u32)) { + int adj_x = drm_rect_width(rect) % sizeof(u32); + int adj_y = drm_rect_height(rect) % sizeof(u32); + struct drm_rect adj_rect = *rect; + + if (adj_x) { + adj_x = sizeof(u32) - adj_x; + if (adj_rect.x1 - adj_x >= 0) + adj_rect.x1 -= adj_x; + else if (adj_rect.x2 + adj_x <= fb->width) + adj_rect.x2 += adj_x; + } + + if (adj_y) { + adj_y = sizeof(u32) - adj_y; + if (adj_rect.y1 - adj_y >= 0) + adj_rect.y1 -= adj_y; + else if (adj_rect.y2 + adj_y <= fb->height) + adj_rect.y2 += adj_y; + } + + if (drm_rect_width(&adj_rect) % sizeof(u32) || drm_rect_height(&adj_rect) % sizeof(u32)) + drm_rect_init(rect, 0, 0, fb->width, fb->height); + else + *rect = adj_rect; + + *len = drm_rect_width(rect) * drm_rect_height(rect) * fb->format->cpp[0]; + printk(" ADJUSTED " DRM_RECT_FMT "\n", DRM_RECT_ARG(rect)); + } + + if (*len > mdrm->buf_len) { + ret = -E2BIG; + goto put_fb; + } + + t1 = ktime_get(); + // FIXME: Kconfig: select DRM_KMS_CMA_HELPER + vmap = 
to_drm_gem_cma_obj(fb->obj[0])->vaddr; +// vmap = drm_gem_shmem_vmap(fb->obj[0]); + t2 = ktime_get(); + if (IS_ERR(vmap)) { + ret = PTR_ERR(vmap); + goto put_fb; + } + + printk(" vmap: %lld ms\n", ktime_ms_delta(t2, t1)); + + import_attach = fb->obj[0]->import_attach; + + if (import_attach) { + ret = dma_buf_begin_cpu_access(import_attach->dmabuf, DMA_FROM_DEVICE); + if (ret) + goto unmap; + } + + /* + * Always copy since we can't pass the shmem buffer straight in since it + * results in sloow uncached reads in the compressor. AND there's src buf stride to consider... + * + * Speed up copy if update is full and stride matches. + */ + // FIXME: Big endian host + if (*len == fb->height * fb->pitches[0]) + memcpy(mdrm->buf, vmap, *len); + else + drm_fb_memcpy(mdrm->buf, vmap, fb, rect); + + if (import_attach) + ret = dma_buf_end_cpu_access(import_attach->dmabuf, DMA_FROM_DEVICE); +unmap: +// drm_gem_shmem_vunmap(fb->obj[0], vmap); + + if (pageflip == MUD_DRM_PAGEFLIP_STEADY && !ret && + drm_rect_width(rect) == fb->width && drm_rect_height(rect) == fb->height) { + unsigned int duration = ktime_ms_delta(ktime_get(), start); + unsigned int old = mdrm->average_pageflip_ms; + + if (!mdrm->average_pageflip_ms) + mdrm->average_pageflip_ms = duration; + else + mdrm->average_pageflip_ms = (duration * 3 / 10) + (mdrm->average_pageflip_ms * 7 / 10); + if (old != mdrm->average_pageflip_ms) + printk("average_pageflip_ms=%u (%u)\n", mdrm->average_pageflip_ms, old); + } + +put_fb: + if (list_empty(&mdrm->damagelist)) { + drm_framebuffer_put(fb); + mdrm->fb = NULL; + } +unlock: + mutex_unlock(&mdrm->lock); + + kfree(damage); + + return ret; +} + +static int mud_drm_write_rect(struct mud_drm_device *mdrm, struct drm_rect *rect) +{ + u32 drect[4]; + + drect[0] = rect->x1; + drect[1] = rect->y1; + drect[2] = drm_rect_width(rect); + drect[3] = drm_rect_height(rect); + + return mud_drm_write(mdrm, MUD_DRM_REG_RECT_X, drect, 4); +} + +static void mud_drm_fb_dirty(struct mud_drm_device 
*mdrm) +{ + struct drm_rect rect; + size_t len = 0; + int ret, idx; + bool debug = true; + ktime_t t1, t2, t3, t4; + + if (!drm_dev_enter(&mdrm->drm, &idx)) + return; + + if (debug) + t1 = ktime_get(); + + ret = mud_drm_fb_copy(mdrm, &rect, &len); + if (ret) + goto out_exit; + + if (debug) + t2 = ktime_get(); + + ret = mud_drm_write_rect(mdrm, &rect); + if (ret) + goto out_exit; + + if (debug) + t3 = ktime_get(); + + ret = regmap_raw_write(mdrm->regmap, MUD_DRM_REG_BUFFER, mdrm->buf, len); + + if (debug) { + t4 = ktime_get(); + + if (len >= SZ_64K) + printk(" SZ_64K: %lld ms len=%zu\n", ktime_ms_delta(t4, t3), len); + + printk(" out, ret=%d: total %ld kB/s (%lld ms), memcpy %ld kB/s (%lld ms), transfer %ld kB/s (%lld ms)\n", + ret, + mud_drm_throughput(t4, t1, len), + ktime_ms_delta(t4, t1), + mud_drm_throughput(t2, t1, len), + ktime_ms_delta(t2, t1), + mud_drm_throughput(t4, t3, len), + ktime_ms_delta(t4, t3) + ); + } +out_exit: + drm_dev_exit(idx); +} + +static void mud_drm_work(struct work_struct *work) +{ + struct mud_drm_device *mdrm = container_of(work, struct mud_drm_device, work); + + printk("%s: IN\n", __func__); + + while (mdrm->run) { + mud_drm_fb_dirty(mdrm); + + wait_event(mdrm->waitq, !mdrm->run || mdrm->fb); + + printk("%s: LOOP: mdrm->run=%u, mdrm->fb=%px\n", __func__, mdrm->run, mdrm->fb); + } + printk("%s: OUT\n", __func__); +} + +static bool drm_rect_include(struct drm_rect *r1, const struct drm_rect *r2) +{ + return r1->x1 >= r2->x1 && r1->x2 <= r2->x2 && r1->y1 >= r2->y1 && r1->y2 <= r2->y2; +} + +static void mud_drm_fb_mark_dirty(struct mud_drm_device *mdrm, + struct drm_plane_state *old_state, + struct drm_plane_state *state) +{ + struct drm_atomic_helper_damage_iter iter; + struct drm_framebuffer *old_fb = NULL; + struct mud_drm_damage *damage; + ktime_t time = ktime_get(); + struct drm_rect rect; + bool wakeup = false; + bool debug = true; + unsigned int num = 0; + + mutex_lock(&mdrm->lock); + + if (debug) { + if 
(!list_empty(&mdrm->damagelist)) { + struct list_head *pos; + + list_for_each(pos, &mdrm->damagelist) + num++; + } + } + + wakeup = !mdrm->fb; + + if (mdrm->fb != state->fb) { + old_fb = mdrm->fb; + drm_framebuffer_get(state->fb); + mdrm->fb = state->fb; + } + + drm_atomic_helper_damage_iter_init(&iter, old_state, state); + drm_atomic_for_each_plane_damage(&iter, &rect) { + if (!list_empty(&mdrm->damagelist)) { + struct mud_drm_damage *tmp; + bool found; + + /* Is the rectangle already covered? */ + found = false; + list_for_each_entry(damage, &mdrm->damagelist, list) { + if (drm_rect_include(&rect, &damage->rect)) { + found = true; + break; + } + } + if (found) + continue; + + /* Does the rectangle cover existing ones? */ + found = false; + list_for_each_entry_safe(damage, tmp, &mdrm->damagelist, list) { + if (drm_rect_include(&damage->rect, &rect)) { + if (!found) { + damage->rect = rect; + found = true; + } else { + list_del(&damage->list); + kfree(damage); + } + } + } + if (found) + continue; + + /* Is the rectangle adjacent to an existing one? 
*/ + found = false; + list_for_each_entry_safe(damage, tmp, &mdrm->damagelist, list) { + struct drm_rect box = { + .x1 = damage->rect.x1 - drm_rect_width(&rect), + .x2 = damage->rect.x2 + drm_rect_width(&rect), + .y1 = damage->rect.y1 - drm_rect_height(&rect), + .y2 = damage->rect.y2 + drm_rect_height(&rect), + }; + + if (rect.x1 >= box.x1 && rect.x2 <= box.x2 && + rect.y1 >= box.y1 && rect.y2 <= box.y2) { + damage->rect.x1 = min(damage->rect.x1, rect.x1); + damage->rect.x2 = max(damage->rect.x2, rect.x2); + damage->rect.y1 = min(damage->rect.y1, rect.y1); + damage->rect.y2 = max(damage->rect.y2, rect.y2); + found = true; + break; + } + } + if (found) + continue; + } + + damage = kzalloc(sizeof(*damage), GFP_KERNEL); + if (!damage) + break; + + damage->time = time; + damage->rect = rect; + list_add_tail(&damage->list, &mdrm->damagelist); + } + + printk("%s: wakeup=%u, fb=[FB:%d], old_fb=[FB:%d]\n", __func__, + wakeup, state->fb->base.id, old_fb ? old_fb->base.id : -1); + + if (debug) { + if (!list_empty(&mdrm->damagelist)) { + struct list_head *pos; + unsigned int num2 = 0; + + list_for_each(pos, &mdrm->damagelist) + num2++; + + if (num != num2) { + ktime_t t = ktime_get(); + + printk(" damagelist: num=%u num2=%u\n", num, num2); + list_for_each_entry(damage, &mdrm->damagelist, list) + printk(" " DRM_RECT_FMT " dt=%lld ms\n", + DRM_RECT_ARG(&damage->rect), + ktime_ms_delta(t, damage->time)); + } + } else { + printk(" damagelist is empty: num=%u\n", num); + } + } + + mutex_unlock(&mdrm->lock); + + if (pageflip != MUD_DRM_PAGEFLIP_INLINE && wakeup) + wake_up(&mdrm->waitq); + + if (old_fb) + drm_framebuffer_put(old_fb); +} + +static int mud_drm_pipe_check(struct drm_simple_display_pipe *pipe, + struct drm_plane_state *plane_state, + struct drm_crtc_state *crtc_state) +{ + struct mud_drm_device *mdrm = to_mud_drm_device(pipe->crtc.dev); + struct drm_framebuffer *old_fb = pipe->plane.state->fb; + struct drm_framebuffer *fb = plane_state->fb; + bool check = false; + int 
ret; + + printk("%s:\n", __func__); + + /* FIXME: Should we be flexible and let the device decide this? */ + /* Only one connector is supported */ + if (hweight32(crtc_state->connector_mask) > 1) + return -EINVAL; + + if (drm_atomic_crtc_needs_modeset(crtc_state)) { + const struct drm_display_mode *mode = &crtc_state->mode; + int connector_idx = ffs(crtc_state->connector_mask) - 1; + struct mud_drm_display_mode dmode; + + printk(" mode_changed=%u, active_changed=%u, connectors_changed=%u", + crtc_state->mode_changed, crtc_state->active_changed, + crtc_state->connectors_changed); + printk(" connector_idx=%d\n", connector_idx); + + if (connector_idx < 0) + return -EINVAL; + + /* Regmap restriction */ + if (fb && (fb->width * fb->height * fb->format->cpp[0]) % sizeof(u32)) { + DRM_DEV_DEBUG_KMS(fb->dev->dev, + "[FB:%d] Unsupported width(%u)/height(%u)/format(0x%x) combo\n", + fb->base.id, fb->width, fb->height, fb->format->format); + return -EINVAL; + } + + ret = regmap_write(mdrm->regmap, MUD_DRM_REG_SET_CONNECTOR, connector_idx); + if (ret) { + printk(" MUD_DRM_REG_SET_CONNECTOR: ret=%d\n", ret); + return ret; + } + + printk(" Modeline " DRM_MODE_FMT "\n", DRM_MODE_ARG(mode)); + + mud_drm_from_display_mode(&dmode, mode); + ret = mud_drm_write(mdrm, MUD_DRM_REG_SET_MODE, (u32 *)&dmode, + MUD_DRM_DISPLAY_MODE_FIELDS); + if (ret) { + printk(" MUD_DRM_REG_SET_MODE: ret=%d\n", ret); + return ret; + } + + check = true; + } + + printk(" old_fb=[FB:%d], fb=[FB:%d]\n", old_fb ? old_fb->base.id : -1, + fb ? 
fb->base.id : -1); + + if (fb) { + if (!old_fb || old_fb->width != fb->width || old_fb->height != fb->height || + old_fb->format->format != fb->format->format) { + printk(" CREATE FB\n"); + + ret = regmap_write(mdrm->regmap, MUD_DRM_REG_SET_FORMAT, fb->format->format); + if (ret) { + printk(" MUD_DRM_REG_SET_FORMAT: ret=%d\n", ret); + return ret; + } + + check = true; + } + } + + if (check) { + printk(" RUN CHECK\n"); + + ret = regmap_write(mdrm->regmap, MUD_DRM_REG_SET_COMMIT, MUD_DRM_COMMIT_CHECK); + if (ret) { + printk(" MUD_DRM_REG_SET_COMMIT CHECK: ret=%d\n", ret); + return ret; + } + } + + return 0; +} + +static void mud_drm_pipe_update(struct drm_simple_display_pipe *pipe, + struct drm_plane_state *old_state) +{ + struct mud_drm_device *mdrm = to_mud_drm_device(pipe->crtc.dev); + struct drm_plane_state *state = pipe->plane.state; + struct drm_framebuffer *fb = state->fb; + struct drm_crtc *crtc = &pipe->crtc; + ktime_t start = 0, end = 0; + int idx, ret = 0; + + if (fb) + printk("%s: [FB:%d] flip=%s event=%s\n", __func__, + fb->base.id, old_state->fb && fb != old_state->fb ? "yes" : "no", + crtc->state->event ? "yes" : "no"); + else + printk("%s: [NOFB] event=%s\n", __func__, crtc->state->event ? 
"yes" : "no"); + + if (!drm_dev_enter(crtc->dev, &idx)) + goto send_event; + + if (!old_state->fb) { + printk(" ENABLE CRTC\n"); + ret = regmap_write(mdrm->regmap, MUD_DRM_REG_PIPE_ENABLE, 1); + if (ret) + printk(" MUD_DRM_REG_PIPE_ENABLE=1: ret=%d\n", ret); + } + + if (pageflip == MUD_DRM_PAGEFLIP_STEADY) + start = ktime_get(); + + if (fb) + mud_drm_fb_mark_dirty(mdrm, old_state, state); + + if (pageflip == MUD_DRM_PAGEFLIP_STEADY) + end = ktime_get(); + + if (pageflip == MUD_DRM_PAGEFLIP_INLINE) + mud_drm_fb_dirty(mdrm); + + if (fb && drm_atomic_crtc_needs_modeset(crtc->state)) { + ret = regmap_write(mdrm->regmap, MUD_DRM_REG_SET_COMMIT, MUD_DRM_COMMIT_APPLY); + if (ret) + printk(" MUD_DRM_REG_SET_COMMIT: ret=%d\n", ret); + } + + if (!fb) { + printk(" DISABLE CRTC\n"); + ret = regmap_write(mdrm->regmap, MUD_DRM_REG_PIPE_ENABLE, 0); + if (ret) + printk(" MUD_DRM_REG_PIPE_ENABLE=0: ret=%d\n", ret); + } + + drm_dev_exit(idx); + +send_event: + if (crtc->state->event) { + if (pageflip == MUD_DRM_PAGEFLIP_STEADY) { + unsigned int duration = ktime_ms_delta(end, start); + + if (duration < mdrm->average_pageflip_ms) { + printk("pageflip wait %u ms\n", mdrm->average_pageflip_ms - duration); + msleep(mdrm->average_pageflip_ms - duration); + } else { + printk("pageflip nowait -%u ms\n", duration - mdrm->average_pageflip_ms); + } + } + + spin_lock_irq(&crtc->dev->event_lock); + drm_crtc_send_vblank_event(crtc, crtc->state->event); + crtc->state->event = NULL; + spin_unlock_irq(&crtc->dev->event_lock); + } + + if (ret) + printk("%s: Failed to UPDATE: %d\n", __func__, ret); +} + +static void mud_drm_pipe_enable(struct drm_simple_display_pipe *pipe, + struct drm_crtc_state *crtc_state, + struct drm_plane_state *plane_state) +{ + struct mud_drm_device *mdrm = to_mud_drm_device(pipe->crtc.dev); + + printk("%s:\n", __func__); + + mutex_lock(&mdrm->lock); + mdrm->run = true; + mutex_unlock(&mdrm->lock); + + queue_work(mdrm->workq, &mdrm->work); +} + +static void 
mud_drm_pipe_disable(struct drm_simple_display_pipe *pipe) +{ + struct mud_drm_device *mdrm = to_mud_drm_device(pipe->crtc.dev); + + printk("%s:\n", __func__); + + mutex_lock(&mdrm->lock); + mdrm->run = false; + mutex_unlock(&mdrm->lock); + + wake_up(&mdrm->waitq); + cancel_work_sync(&mdrm->work); + + mutex_lock(&mdrm->lock); + + if (mdrm->fb) { + drm_framebuffer_put(mdrm->fb); + mdrm->fb = NULL; + } + + if (!list_empty(&mdrm->damagelist)) { + struct mud_drm_damage *damage, *tmp; + + list_for_each_entry_safe(damage, tmp, &mdrm->damagelist, list) { + list_del(&damage->list); + kfree(damage); + } + } + + mutex_unlock(&mdrm->lock); +} + +static const struct drm_simple_display_pipe_funcs mud_drm_pipe_funcs = { + .check = mud_drm_pipe_check, + .update = mud_drm_pipe_update, + .enable = mud_drm_pipe_enable, + .disable = mud_drm_pipe_disable, + .prepare_fb = drm_gem_fb_simple_display_pipe_prepare_fb, +}; + +static const struct drm_mode_config_funcs mud_drm_mode_config_funcs = { + .fb_create = drm_gem_fb_create_with_dirty, + .atomic_check = drm_atomic_helper_check, + .atomic_commit = drm_atomic_helper_commit, +}; + +static const uint64_t mud_drm_pipe_modifiers[] = { + DRM_FORMAT_MOD_LINEAR, + DRM_FORMAT_MOD_INVALID +}; + +static void mud_drm_driver_release(struct drm_device *drm) +{ + struct mud_drm_device *mdrm = to_mud_drm_device(drm); + + printk("%s:\n", __func__); + + drm_mode_config_cleanup(drm); + drm_dev_fini(drm); + vfree(mdrm->buf); + + if (mdrm->workq) + destroy_workqueue(mdrm->workq); + + mutex_destroy(&mdrm->lock); + + kfree(mdrm); +} + +DEFINE_DRM_GEM_CMA_FOPS(mud_drm_fops); +//DEFINE_DRM_GEM_SHMEM_FOPS(mud_drm_fops); + +static struct drm_driver mud_drm_drm_driver = { + .driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_ATOMIC, + + .name = "mud_drm", + .desc = "Multifunction USB Device Display", + .date = "20190826", + .major = 1, + .minor = 0, + + .release = mud_drm_driver_release, + .fops = &mud_drm_fops, + DRM_GEM_CMA_VMAP_DRIVER_OPS, +// 
DRM_GEM_SHMEM_DRIVER_OPS, +}; + +static int mud_drm_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct mud_cell_pdata *pdata = dev_get_platdata(dev); + unsigned int num_connectors, num_formats; + struct mud_drm_device *mdrm; + struct drm_device *drm; + struct regmap *regmap; + u32 vals[4], *formats; + int ret, i; + struct regmap_config mud_drm_regmap_config = { + .reg_bits = 32, + .val_bits = 32, + .cache_type = REGCACHE_NONE, + }; + + /* FIXME: Remove if CMA is not used */ + if (!dev->coherent_dma_mask) { + ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(32)); + if (ret) { + dev_warn(dev, "Failed to set dma mask %d\n", ret); + return ret; + } + } + + mdrm = kzalloc(sizeof(*mdrm), GFP_KERNEL); + if (!mdrm) + return -ENOMEM; + + drm = &mdrm->drm; + ret = devm_drm_dev_init(dev, drm, &mud_drm_drm_driver); + if (ret) { + kfree(mdrm); + return ret; + } + + drm_mode_config_init(drm); + drm->mode_config.funcs = &mud_drm_mode_config_funcs; + + mdrm->usb = interface_to_usbdev(pdata->interface); + + mutex_init(&mdrm->lock); + + INIT_LIST_HEAD(&mdrm->damagelist); + + INIT_WORK(&mdrm->work, mud_drm_work); + init_waitqueue_head(&mdrm->waitq); + + mdrm->workq = alloc_workqueue("mud_drm", 0, 0); + if (!mdrm->workq) + return -ENOMEM; + + regmap = devm_regmap_init_usb(dev, pdata->interface, pdata->index, + &mud_drm_regmap_config); + if (IS_ERR(regmap)) + return PTR_ERR(regmap); + + mdrm->regmap = regmap; + + mdrm->buf_len = SZ_8M; + mdrm->buf = vmalloc(mdrm->buf_len); + if (!mdrm->buf) + return -ENOMEM; + + ret = mud_drm_read(mdrm, MUD_DRM_REG_MIN_WIDTH, vals, 4); + if (ret) + return ret; + + drm->mode_config.min_width = vals[0]; + drm->mode_config.max_width = vals[1]; + drm->mode_config.min_height = vals[2]; + drm->mode_config.max_height = vals[3]; + + printk("min_width=%u, max_width=%u, min_height=%u, max_height=%u\n", + drm->mode_config.min_width, drm->mode_config.max_width, + drm->mode_config.min_height, drm->mode_config.max_height); + + ret 
= regmap_read(regmap, MUD_DRM_REG_FORMATS_COUNT, &num_formats); + printk(" MUD_DRM_REG_FORMATS_COUNT=%u, ret=%d\n", num_formats, ret); + if (ret) + return ret; + + if (!num_formats || num_formats > (MUD_DRM_REG_FORMATS_MAX - MUD_DRM_REG_FORMATS)) + return -EINVAL; + + formats = devm_kmalloc_array(dev, num_formats, sizeof(u32), GFP_KERNEL); + if (!formats) + return -ENOMEM; + + ret = regmap_bulk_read(regmap, MUD_DRM_REG_FORMATS, formats, num_formats); + printk(" MUD_DRM_REG_FORMATS: ret=%d\n", ret); + if (ret) + return ret; + + for (i = 0; i < num_formats; i++) + printk(" 0x%08x\n", formats[i]); + + ret = drm_simple_display_pipe_init(drm, &mdrm->pipe, &mud_drm_pipe_funcs, + formats, num_formats, + mud_drm_pipe_modifiers, NULL); + if (ret) + return ret; + + drm_plane_enable_fb_damage_clips(&mdrm->pipe.plane); + + devm_kfree(dev, formats); + + ret = regmap_read(regmap, MUD_DRM_REG_NUM_CONNECTORS, &num_connectors); + if (ret) + return ret; + + printk("MUD_DRM_REG_NUM_CONNECTORS=%u\n", num_connectors); + + if (!num_connectors || num_connectors > 16) + return -EINVAL; + + for (i = 0; i < num_connectors; i++) { + unsigned int conn_type; + + ret = regmap_read(regmap, MUD_DRM_REG_CONNECTOR_TYPE + i, &conn_type); + if (ret) + return ret; + + /* REVISIT: This needs to be updated as new types are added */ + if (conn_type > DRM_MODE_CONNECTOR_SPI) + return -EINVAL; + + ret = mud_drm_connector_create(mdrm, conn_type); + if (ret) + return ret; + } + + drm_mode_config_reset(drm); + + drm_kms_helper_poll_init(drm); + + platform_set_drvdata(pdev, drm); + + ret = drm_dev_register(drm, 0); + if (ret) + return ret; + +// drm_fbdev_generic_setup(drm, 16); + + return 0; +} + +static int mud_drm_remove(struct platform_device *pdev) +{ + struct drm_device *drm = platform_get_drvdata(pdev); + + printk("%s:\n", __func__); + + drm_kms_helper_poll_fini(drm); + drm_dev_unplug(drm); + drm_atomic_helper_shutdown(drm); + + return 0; +} + +static struct platform_driver mud_drm_driver = { + 
.driver.name = "mud-drm", + .probe = mud_drm_probe, + .remove = mud_drm_remove, +}; + +module_platform_driver(mud_drm_driver); + +MODULE_ALIAS("platform:mud-drm"); +MODULE_AUTHOR("Noralf Trønnes"); +MODULE_LICENSE("Dual MIT/GPL"); diff --git a/drivers/gpu/drm/mud/mud_drm.h b/drivers/gpu/drm/mud/mud_drm.h new file mode 100644 index 000000000000..8ace63ed2d42 --- /dev/null +++ b/drivers/gpu/drm/mud/mud_drm.h @@ -0,0 +1,137 @@ +/* SPDX-License-Identifier: MIT */ +/* + * Copyright 2020 Noralf Trønnes + */ + +#ifndef __LINUX_MUD_DRM_H +#define __LINUX_MUD_DRM_H + +#include + +#define MUD_DRM_DISPLAY_MODE_FIELDS 15 + +struct mud_drm_display_mode { + u32 clock; + u32 hdisplay; + u32 hsync_start; + u32 hsync_end; + u32 htotal; + u32 hskew; + u32 vdisplay; + u32 vsync_start; + u32 vsync_end; + u32 vtotal; + u32 vscan; + + u32 vrefresh; + + u32 flags; + u32 type; + + u32 picture_aspect_ratio; +} __packed; + +static inline void mud_drm_from_display_mode(struct mud_drm_display_mode *dst, + const struct drm_display_mode *src) +{ + dst->clock = src->clock; + dst->hdisplay = src->hdisplay; + dst->hsync_start = src->hsync_start; + dst->hsync_end = src->hsync_end; + dst->htotal = src->htotal; + dst->hskew = src->hskew; + dst->vdisplay = src->vdisplay; + dst->vsync_start = src->vsync_start; + dst->vsync_end = src->vsync_end; + dst->vtotal = src->vtotal; + dst->vscan = src->vscan; + dst->vrefresh = src->vrefresh; + dst->flags = src->flags; + dst->type = src->type; + dst->picture_aspect_ratio = src->picture_aspect_ratio; +} + +static inline void mud_drm_to_display_mode(struct drm_display_mode *dst, + const struct mud_drm_display_mode *src) +{ + dst->clock = src->clock; + dst->hdisplay = src->hdisplay; + dst->hsync_start = src->hsync_start; + dst->hsync_end = src->hsync_end; + dst->htotal = src->htotal; + dst->hskew = src->hskew; + dst->vdisplay = src->vdisplay; + dst->vsync_start = src->vsync_start; + dst->vsync_end = src->vsync_end; + dst->vtotal = src->vtotal; + dst->vscan = 
src->vscan; + dst->vrefresh = src->vrefresh; + dst->flags = src->flags; + dst->type = src->type; + dst->picture_aspect_ratio = src->picture_aspect_ratio; + drm_mode_set_name(dst); +} + +#define MUD_DRM_REG_MIN_WIDTH 0x0001 +#define MUD_DRM_REG_MAX_WIDTH 0x0002 +#define MUD_DRM_REG_MIN_HEIGHT 0x0003 +#define MUD_DRM_REG_MAX_HEIGHT 0x0004 + +#define MUD_DRM_REG_NUM_CONNECTORS 0x0005 + +#define MUD_DRM_REG_TV_MODES_COUNT 0x0010 +#define MUD_DRM_REG_TV_MODES 0x0011 + +#define MUD_DRM_REG_PIPE_ENABLE 0x0070 + +#define MUD_DRM_REG_RECT_X 0x0071 +#define MUD_DRM_REG_RECT_Y 0x0072 +#define MUD_DRM_REG_RECT_WIDTH 0x0073 +#define MUD_DRM_REG_RECT_HEIGHT 0x0074 + +#define MUD_DRM_REG_SET_CONNECTOR 0x0050 +#define MUD_DRM_REG_SET_MODE 0x0051 +#define MUD_DRM_REG_SET_MODE_MAX 0x005f +#define MUD_DRM_REG_SET_FORMAT 0x0060 +#define MUD_DRM_REG_SET_COMMIT 0x0061 + #define MUD_DRM_COMMIT_CHECK 1 + #define MUD_DRM_COMMIT_APPLY 2 + +#define MUD_DRM_REG_FORMATS_COUNT 0x0100 +#define MUD_DRM_REG_FORMATS 0x0101 +#define MUD_DRM_REG_FORMATS_MAX 0x01ff + +#define MUD_DRM_MAX_CONNECTORS 16 + +#define MUD_DRM_REG_CONNECTOR_TYPE 0x1000 +#define MUD_DRM_REG_CONNECTOR_CAPS 0x1010 + #define MUD_DRM_CONNECTOR_CAP_MODE BIT(0) + #define MUD_DRM_CONNECTOR_CAP_EDID BIT(1) + #define MUD_DRM_CONNECTOR_CAP_TV BIT(2) +#define MUD_DRM_REG_CONNECTOR_STATUS 0x1020 +#define MUD_DRM_REG_CONNECTOR_EDID_LEN 0x1030 +#define MUD_DRM_REG_CONNECTOR_MODES_COUNT 0x1050 +#define MUD_DRM_REG_CONNECTOR_TV_MODE 0x1070 +#define MUD_DRM_REG_CONNECTOR_EDID 0x2000 + #define MUD_DRM_CONNECTOR_EDID_BLOCK_MAX 4096 +#define MUD_DRM_REG_CONNECTOR_EDID_MAX 0x5fff // 16 * 4096 / 4 = 16384 = 0x4000 +#define MUD_DRM_REG_CONNECTOR_MODES 0x6000 + #define MUD_DRM_CONNECTOR_MODES_COUNT_MAX 64 + #define MUD_DRM_CONNECTOR_MODES_BLOCK_MAX (MUD_DRM_CONNECTOR_MODES_COUNT_MAX * MUD_DRM_DISPLAY_MODE_FIELDS) +#define MUD_DRM_REG_CONNECTOR_MODES_MAX 0x14fff // 16 * 64 * (15 * 4) = 61440 = 0xf000 +#define MUD_DRM_REG_CONNECTOR_MAX 0x14fff + 
+#define MUD_DRM_REG_BUFFER			0x10000000
+
+struct mud_drm_gadget;
+
+int mud_drm_gadget_writereg(struct mud_drm_gadget *mdg, unsigned int regnr,
+			    const void *buf, size_t len, u8 compression);
+int mud_drm_gadget_readreg(struct mud_drm_gadget *mdg, unsigned int regnr,
+			   void *buf, size_t *len, u8 compression);
+void mud_drm_gadget_enable(struct mud_drm_gadget *mdg);
+void mud_drm_gadget_disable(struct mud_drm_gadget *mdg);
+struct mud_drm_gadget *mud_drm_gadget_init(unsigned int minor_id,
+					   size_t max_transfer_size);
+
+#endif

From patchwork Sun Feb 16 17:21:16 2020
From: Noralf Trønnes
To: broonie@kernel.org, balbi@kernel.org, lee.jones@linaro.org
Cc: linux-usb@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [RFC 8/9] drm/client: Add drm_client_init_from_id() and drm_client_modeset_set()
Date: Sun, 16 Feb 2020 18:21:16 +0100
Message-Id: <20200216172117.49832-9-noralf@tronnes.org>
In-Reply-To: <20200216172117.49832-1-noralf@tronnes.org>
References: <20200216172117.49832-1-noralf@tronnes.org>

drm_client_init_from_id() provides a way to initialise and register a client based on the DRM minor id. drm_client_modeset_set() provides a way to set the modeset for clients that handle connectors and display modes on their own.
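The two calls are intended to be used together. As a rough sketch only (the surrounding driver and names like my_gadget_bind are hypothetical, error handling is trimmed, and this is not part of the patch), a kernel client could consume the new API like this:

```c
/* Hypothetical sketch of a client using the new helpers; not from the patch. */
#include <drm/drm_client.h>

static struct drm_client_dev my_client;

static int my_gadget_bind(unsigned int minor_id, struct drm_connector *connector,
			  struct drm_display_mode *mode, struct drm_framebuffer *fb)
{
	int ret;

	/* Look up the drm_device by minor and init + register the client */
	ret = drm_client_init_from_id(minor_id, &my_client, "my-gadget", NULL);
	if (ret)
		return ret;

	/* The client picked connector/mode/fb itself; hand them over */
	ret = drm_client_modeset_set(&my_client, connector, mode, fb);
	if (ret)
		goto err_release;

	/* Program the display using the existing commit helper */
	ret = drm_client_modeset_commit(&my_client);
	if (ret)
		goto err_release;

	return 0;

err_release:
	drm_client_release(&my_client);
	return ret;
}
```

Passing NULL connector/mode/fb to drm_client_modeset_set() releases the current modeset, matching the early-return path in the implementation below.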
Signed-off-by: Noralf Trønnes
---
 drivers/gpu/drm/drm_client.c         | 37 ++++++++++++++++++++
 drivers/gpu/drm/drm_client_modeset.c | 52 ++++++++++++++++++++++++++++
 include/drm/drm_client.h             |  4 +++
 3 files changed, 93 insertions(+)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index d9a2e3695525..dbd73fe8d987 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -112,6 +112,43 @@ int drm_client_init(struct drm_device *dev, struct drm_client_dev *client,
 }
 EXPORT_SYMBOL(drm_client_init);
 
+/**
+ * drm_client_init_from_id - Initialise a DRM client
+ * @minor_id: DRM minor id
+ * @client: DRM client
+ * @name: Client name
+ * @funcs: DRM client functions (optional)
+ *
+ * This function looks up the drm_device using the minor id and initializes
+ * the client. It also registers the client to avoid a possible race with
+ * DRM device unregister.
+ *
+ * See drm_client_init() and drm_client_register().
+ *
+ * Returns:
+ * Zero on success or negative error code on failure.
+ */ +int drm_client_init_from_id(unsigned int minor_id, struct drm_client_dev *client, + const char *name, const struct drm_client_funcs *funcs) +{ + struct drm_minor *minor; + int ret; + + minor = drm_minor_acquire(minor_id); + if (IS_ERR(minor)) + return PTR_ERR(minor); + + mutex_lock(&minor->dev->clientlist_mutex); + ret = drm_client_init(minor->dev, client, name, funcs); + if (!ret) + list_add(&client->list, &minor->dev->clientlist); + mutex_unlock(&minor->dev->clientlist_mutex); + + drm_minor_release(minor); + + return ret; +} +EXPORT_SYMBOL(drm_client_init_from_id); + /** * drm_client_register - Register client * @client: DRM client diff --git a/drivers/gpu/drm/drm_client_modeset.c b/drivers/gpu/drm/drm_client_modeset.c index 895b73f23079..9396267e646c 100644 --- a/drivers/gpu/drm/drm_client_modeset.c +++ b/drivers/gpu/drm/drm_client_modeset.c @@ -807,6 +807,58 @@ int drm_client_modeset_probe(struct drm_client_dev *client, unsigned int width, } EXPORT_SYMBOL(drm_client_modeset_probe); +/** + * drm_client_modeset_set() - Set modeset + * @client: DRM client + * @connector: Connector + * @mode: Display mode + * @fb: Framebuffer + * + * This function releases any current modeset info and sets the new modeset in + * the client's modeset array. + * + * Returns: + * Zero on success or negative error code on failure. 
+ */ +int drm_client_modeset_set(struct drm_client_dev *client, struct drm_connector *connector, + struct drm_display_mode *mode, struct drm_framebuffer *fb) +{ + struct drm_mode_set *modeset; + int ret = -ENOENT; + + mutex_lock(&client->modeset_mutex); + + drm_client_modeset_release(client); + + if (!connector || !mode || !fb) { + ret = 0; + goto unlock; + } + + drm_client_for_each_modeset(modeset, client) { + if (!connector_has_possible_crtc(connector, modeset->crtc)) + continue; + + modeset->mode = drm_mode_duplicate(client->dev, mode); + if (!modeset->mode) { + ret = -ENOMEM; + break; + } + + drm_connector_get(connector); + modeset->connectors[modeset->num_connectors++] = connector; + + modeset->fb = fb; + ret = 0; + break; + } +unlock: + mutex_unlock(&client->modeset_mutex); + + return ret; +} +EXPORT_SYMBOL(drm_client_modeset_set); + /** * drm_client_rotation() - Check the initial rotation value * @modeset: DRM modeset diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h index 5cf2c5dd8b1e..97e4157d07c5 100644 --- a/include/drm/drm_client.h +++ b/include/drm/drm_client.h @@ -104,6 +104,8 @@ struct drm_client_dev { int drm_client_init(struct drm_device *dev, struct drm_client_dev *client, const char *name, const struct drm_client_funcs *funcs); +int drm_client_init_from_id(unsigned int minor_id, struct drm_client_dev *client, + const char *name, const struct drm_client_funcs *funcs); void drm_client_release(struct drm_client_dev *client); void drm_client_register(struct drm_client_dev *client); @@ -155,6 +157,8 @@ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer); int drm_client_modeset_create(struct drm_client_dev *client); void drm_client_modeset_free(struct drm_client_dev *client); int drm_client_modeset_probe(struct drm_client_dev *client, unsigned int width, unsigned int height); +int drm_client_modeset_set(struct drm_client_dev *client, struct drm_connector *connector, + struct drm_display_mode *mode, struct drm_framebuffer 
*fb);
 bool drm_client_rotation(struct drm_mode_set *modeset, unsigned int *rotation);
 int drm_client_modeset_commit_force(struct drm_client_dev *client);
 int drm_client_modeset_commit(struct drm_client_dev *client);

From patchwork Sun Feb 16 17:21:17 2020
From: Noralf Trønnes
To: broonie@kernel.org, balbi@kernel.org, lee.jones@linaro.org
Cc: linux-usb@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [RFC 9/9] usb: gadget: function: mud: Add display support
Date: Sun, 16 Feb 2020 18:21:17 +0100
Message-Id: <20200216172117.49832-10-noralf@tronnes.org>
In-Reply-To: <20200216172117.49832-1-noralf@tronnes.org>
References: <20200216172117.49832-1-noralf@tronnes.org>

Add optional display functionality to the Multifunction USB Device. The bulk of the code is placed in the drm subsystem since it's reaching into the drm internals.

Signed-off-by: Noralf Trønnes
---
 drivers/gpu/drm/mud/Kconfig             |   3 +
 drivers/gpu/drm/mud/Makefile            |   1 +
 drivers/gpu/drm/mud/mud_drm_gadget.c    | 889 ++++++++++++++++++++++++
 drivers/usb/gadget/Kconfig              |  12 +
 drivers/usb/gadget/function/Makefile    |   2 +
 drivers/usb/gadget/function/f_mud_drm.c | 181 +++++
 6 files changed, 1088 insertions(+)
 create mode 100644 drivers/gpu/drm/mud/mud_drm_gadget.c
 create mode 100644 drivers/usb/gadget/function/f_mud_drm.c

diff --git a/drivers/gpu/drm/mud/Kconfig b/drivers/gpu/drm/mud/Kconfig
index 440e994ca0a2..b3c6d073cc9c 100644
--- a/drivers/gpu/drm/mud/Kconfig
+++ b/drivers/gpu/drm/mud/Kconfig
@@ -13,3 +13,6 @@ config DRM_MUD
 	help
 	  This is a KMS driver for Multifunction USB Device displays or display adapters.
+ +config DRM_MUD_GADGET + tristate diff --git a/drivers/gpu/drm/mud/Makefile b/drivers/gpu/drm/mud/Makefile index d5941d33bcd9..56d2c39ac0eb 100644 --- a/drivers/gpu/drm/mud/Makefile +++ b/drivers/gpu/drm/mud/Makefile @@ -1,3 +1,4 @@ # SPDX-License-Identifier: GPL-2.0-or-later obj-$(CONFIG_DRM_MUD) += mud_drm.o +obj-$(CONFIG_DRM_MUD_GADGET) += mud_drm_gadget.o diff --git a/drivers/gpu/drm/mud/mud_drm_gadget.c b/drivers/gpu/drm/mud/mud_drm_gadget.c new file mode 100644 index 000000000000..9395d8b7cefe --- /dev/null +++ b/drivers/gpu/drm/mud/mud_drm_gadget.c @@ -0,0 +1,889 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Copyright 2020 Noralf Trønnes + */ + +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include + +#include "mud_drm.h" + +/* Temporary debugging aid */ +static unsigned int debug = 8; + +#define pdebug(level, fmt, ...) \ + if (level <= debug) \ + printk(KERN_DEBUG fmt, ##__VA_ARGS__) + +struct mud_drm_gadget_connector { + struct drm_connector *connector; + u32 capabilities; + enum drm_connector_status status; + unsigned int width_mm; + unsigned int height_mm; + void *edid; + size_t edid_len; + struct mud_drm_display_mode *modes; + unsigned int num_modes; +}; + +struct mud_drm_gadget { + struct drm_client_dev client; + + struct mud_drm_gadget_connector *connectors; + unsigned int connector_count; + + const u32 *formats; + unsigned int format_count; + + struct drm_connector *current_connector; + struct mud_drm_display_mode current_mode; + u32 current_format; + + unsigned int rect_x; + unsigned int rect_y; + unsigned int rect_width; + unsigned int rect_height; + + struct drm_client_buffer *buffer; + struct drm_client_buffer *buffer_check; + bool check_ok; + + size_t max_transfer_size; + void *work_buf; +}; + +static int mud_drm_gadget_probe_connector(struct mud_drm_gadget_connector *mconn) +{ + struct drm_connector *connector = mconn->connector; + struct drm_device *dev = 
connector->dev; + struct drm_display_mode *mode; + unsigned int i = 0; + int ret = 0; + void *edid; + + pdebug(2, "%s:\n", __func__); + + mutex_lock(&dev->mode_config.mutex); + + connector->funcs->fill_modes(connector, + dev->mode_config.max_width, + dev->mode_config.max_height); + + mconn->width_mm = connector->display_info.width_mm; + mconn->height_mm = connector->display_info.height_mm; + mconn->status = connector->status; + + mconn->num_modes = 0; + list_for_each_entry(mode, &connector->modes, head) + mconn->num_modes++; + + pdebug(2, " num_modes=%u\n", mconn->num_modes); + + if (!mconn->num_modes) + goto unlock; + + // FIXME: Checkpatch complains: Reusing the krealloc arg is almost always a bug + mconn->modes = krealloc(mconn->modes, mconn->num_modes * sizeof(*mconn->modes), GFP_KERNEL); + if (!mconn->modes) { + ret = -ENOMEM; + mconn->num_modes = 0; + goto unlock; + } + + list_for_each_entry(mode, &connector->modes, head) { + pdebug(2, " Modeline " DRM_MODE_FMT "\n", DRM_MODE_ARG(mode)); + mud_drm_from_display_mode(&mconn->modes[i++], mode); + } + + if (!connector->edid_blob_ptr) + goto unlock; + + edid = connector->edid_blob_ptr->data; + mconn->edid_len = connector->edid_blob_ptr->length; + pdebug(2, " edid_len=%zu\n", mconn->edid_len); + if (!mconn->edid_len || !edid) { + mconn->edid_len = 0; + goto unlock; + } + + kfree(mconn->edid); + mconn->edid = kmemdup(edid, mconn->edid_len, GFP_KERNEL); + if (!mconn->edid) { + ret = -ENOMEM; + mconn->edid_len = 0; + goto unlock; + } + +unlock: + mutex_unlock(&dev->mode_config.mutex); + + return ret; +} + +static void mud_drm_gadget_probe_connectors(struct mud_drm_gadget *mdg) +{ + unsigned int i; + + for (i = 0; i < mdg->connector_count; i++) + mud_drm_gadget_probe_connector(&mdg->connectors[i]); +} + +static int mud_drm_gadget_read_connector(struct mud_drm_gadget *mdg, unsigned int regnr, + void *buf, size_t len) +{ + struct mud_drm_gadget_connector *mconn; + size_t count = len / sizeof(u32); + struct drm_connector 
*connector; + unsigned int index, basereg, i; + __le32 *buf32 = buf; + size_t offset; + u32 val; + + if (regnr < MUD_DRM_REG_CONNECTOR_EDID) { + index = (regnr - MUD_DRM_REG_CONNECTOR_TYPE) % MUD_DRM_MAX_CONNECTORS; + basereg = regnr - index; + } else if (regnr <= MUD_DRM_REG_CONNECTOR_EDID_MAX) { + index = (regnr - MUD_DRM_REG_CONNECTOR_EDID) % MUD_DRM_CONNECTOR_EDID_BLOCK_MAX; + basereg = MUD_DRM_REG_CONNECTOR_EDID; + } else if (regnr <= MUD_DRM_REG_CONNECTOR_MODES_MAX) { + index = (regnr - MUD_DRM_REG_CONNECTOR_MODES) % MUD_DRM_CONNECTOR_MODES_BLOCK_MAX; + basereg = MUD_DRM_REG_CONNECTOR_MODES; + } else { + return -EINVAL; + } + + pdebug(3, "%s: connector index=%u\n", __func__, index); + + if (index >= mdg->connector_count) + return -EINVAL; + + mconn = &mdg->connectors[index]; + connector = mconn->connector; + + switch (basereg) { + case MUD_DRM_REG_CONNECTOR_TV_MODE: + // drm_atomic_connector_get_property() + drm_modeset_lock(&connector->dev->mode_config.connection_mutex, NULL); + val = connector->state ? 
connector->state->tv.mode : 0; + if (!connector->state) + pr_err("No Connector state!!!\n"); + pdebug(4, " MUD_DRM_REG_CONNECTOR_TV_MODE=%u\n", connector->state->tv.mode); + drm_modeset_unlock(&connector->dev->mode_config.connection_mutex); + break; + case MUD_DRM_REG_CONNECTOR_MODES_COUNT: + val = mconn->num_modes; + break; + case MUD_DRM_REG_CONNECTOR_EDID_LEN: + val = mconn->edid_len; + break; + case MUD_DRM_REG_CONNECTOR_STATUS: + val = mconn->status; + break; + case MUD_DRM_REG_CONNECTOR_CAPS: + val = mconn->capabilities; + break; + case MUD_DRM_REG_CONNECTOR_TYPE: + val = mconn->connector->connector_type; + break; + + case MUD_DRM_REG_CONNECTOR_EDID: + offset = regnr - MUD_DRM_REG_CONNECTOR_EDID - (index * MUD_DRM_CONNECTOR_EDID_BLOCK_MAX); + + pdebug(4, " MUD_DRM_REG_CONNECTOR_EDID: offset=%zu\n", offset); + + if ((offset + len) > mconn->edid_len) + return -EINVAL; + memcpy(buf, mconn->edid + offset, len); + return 0; + + case MUD_DRM_REG_CONNECTOR_MODES: + offset = regnr - MUD_DRM_REG_CONNECTOR_MODES - (index * MUD_DRM_CONNECTOR_MODES_BLOCK_MAX); + + pdebug(4, " MUD_DRM_REG_CONNECTOR_MODES: offset=%zu, count=%zu\n", offset, count); + + if (offset + count > mconn->num_modes * MUD_DRM_DISPLAY_MODE_FIELDS) + return -EINVAL; + + for (i = 0; i < count; i++) + buf32[i] = cpu_to_le32(((u32 *)mconn->modes)[i]); + return 0; + + default: + return -EINVAL; + } + + if (count != 1) + return -EINVAL; + + *buf32 = cpu_to_le32(val); + + return 0; +} + +static int drm_mud_gadget_tv_mode_property(struct drm_device *drm, u32 *num_modes, + const char **names) +{ + struct drm_property *property = drm->mode_config.tv_mode_property; + struct drm_property_enum *prop_enum; + char *buf; + + if (!property) + return -EINVAL; + + *num_modes = 0; + list_for_each_entry(prop_enum, &property->enum_list, head) + (*num_modes)++; + + pdebug(3, "%s: *num_modes=%u\n", __func__, *num_modes); + + if (!names) + return 0; + + buf = kcalloc(*num_modes, DRM_PROP_NAME_LEN, GFP_KERNEL); + if (!buf) + 
return -ENOMEM; + + *names = buf; + + list_for_each_entry(prop_enum, &property->enum_list, head) { + strncpy(buf, prop_enum->name, DRM_PROP_NAME_LEN); + buf += 32; + } + + return 0; +} + +static bool mud_drm_gadget_check_buffer(struct mud_drm_gadget *mdg, + struct drm_client_buffer *buffer, + struct drm_display_mode *mode) +{ + struct drm_framebuffer *fb; + + if (!buffer) + return false; + + fb = buffer->fb; + + return fb->format->format == mdg->current_format && + fb->width == mode->hdisplay && fb->height == mode->vdisplay; +} + +static int mud_drm_gadget_check(struct mud_drm_gadget *mdg) +{ + struct drm_format_name_buf format_name; + struct drm_client_buffer *buffer; + struct drm_display_mode mode; + void *vaddr; + int ret; + + memset(&mode, 0, sizeof(mode)); + mud_drm_to_display_mode(&mode, &mdg->current_mode); + + pdebug(3, "%s:\n", __func__); + pdebug(3, " Format %s\n", drm_get_format_name(mdg->current_format, &format_name)); + pdebug(3, " Modeline " DRM_MODE_FMT "\n", DRM_MODE_ARG(&mode)); + + mdg->check_ok = false; + + if (!mode.hdisplay || !mdg->current_format) + return -EINVAL; + + if (mdg->buffer_check) { + drm_client_framebuffer_delete(mdg->buffer_check); + mdg->buffer_check = NULL; + } + + if (!mud_drm_gadget_check_buffer(mdg, mdg->buffer, &mode)) { + buffer = drm_client_framebuffer_create(&mdg->client, mode.hdisplay, mode.vdisplay, + mdg->current_format); + if (IS_ERR(buffer)) + return PTR_ERR(buffer); + + vaddr = drm_client_buffer_vmap(buffer); + if (IS_ERR(vaddr)) { + drm_client_framebuffer_delete(buffer); + return PTR_ERR(vaddr); + } + + mdg->buffer_check = buffer; + } else { + buffer = mdg->buffer; + } + + ret = drm_client_modeset_set(&mdg->client, mdg->current_connector, &mode, buffer->fb); + if (ret) + return ret; + + //ret = drm_client_modeset_check(&mdg->client); + + mdg->check_ok = true; + + return 0; +} + +static int mud_drm_gadget_commit(struct mud_drm_gadget *mdg) +{ + int ret; + + pdebug(3, "%s: check_ok=%s\n", __func__, mdg->check_ok ? 
"true" : "false"); + + if (!mdg->check_ok) + return -EINVAL; + + ret = drm_client_modeset_commit(&mdg->client); + if (ret) + return ret; + + if (mdg->buffer_check) { + drm_client_framebuffer_delete(mdg->buffer); + mdg->buffer = mdg->buffer_check; + mdg->buffer_check = NULL; + } + + return 0; +} + +static size_t mud_drm_gadget_write_buffer_rgb565_to_xrgb8888(struct drm_client_buffer *buffer, + const u16 *src16, size_t len, + unsigned int x1, unsigned int x2, + unsigned int y1, unsigned int y2) +{ + unsigned int dst_width = buffer->fb->width; + unsigned int src_width = x2 - x1; + unsigned int x, y; + u32 *dst32; + u16 val16; + + pdebug(4, "%s: %ux%u+%u+%u\n", __func__, x2 - x1, y2 - y1, x1, y1); + + /* Get the address, it's already mapped */ + dst32 = drm_client_buffer_vmap(buffer); + + for (y = y1; y < y2; y++) { + for (x = x1; x < x2; x++) { + val16 = src16[x + (y * src_width)]; + dst32[x + (y * dst_width)] = ((val16 & 0xf800) << 8) | + ((val16 & 0x07e0) << 5) | + ((val16 & 0x001f) << 3); + len -= 2; + if (!len) + return 0; + } + } + + return len; +} + +static size_t mud_drm_gadget_write_buffer_memcpy(struct drm_client_buffer *buffer, + const void *src, size_t len, + unsigned int x1, unsigned int x2, + unsigned int y1, unsigned int y2) +{ + unsigned int cpp = buffer->fb->format->cpp[0]; + size_t dst_pitch = buffer->fb->pitches[0]; + size_t src_pitch = (x2 - x1) * cpp; + void *dst; + + pdebug(5, "%s: %ux%u+%u+%u\n", __func__, x2 - x1, y2 - y1, x1, y1); + + /* Get the address, it's already mapped */ + dst = drm_client_buffer_vmap(buffer); + dst += y1 * dst_pitch; + dst += x1 * cpp; + + for (; y1 < y2 && len; y1++) { + src_pitch = min(src_pitch, len); + memcpy(dst, src, src_pitch); + src += src_pitch; + dst += dst_pitch; + len -= src_pitch; + } + + return len; +} + +static int mud_drm_gadget_write_buffer(struct mud_drm_gadget *mdg, const void *buf, + size_t offset, size_t len, u8 compression) +{ + struct drm_client_buffer *buffer = mdg->buffer ? 
mdg->buffer : mdg->buffer_check; + unsigned int x1 = mdg->rect_x; + unsigned int x2 = x1 + mdg->rect_width; + unsigned int y1 = mdg->rect_y; + unsigned int y2 = y1 + mdg->rect_height; + struct drm_framebuffer *fb; + int ret; + + if (!buffer) + return -ENOMEM; + + fb = buffer->fb; + + pdebug(2, "%s: [FB:%d] %ux%u+%u+%u\n", __func__, fb->base.id, + mdg->rect_width, mdg->rect_height, mdg->rect_x, mdg->rect_y); + + if (!x2 || !y2 || x2 > fb->width || y2 > fb->height) + return -EINVAL; + + if (compression & REGMAP_USB_COMPRESSION_LZ4) { + bool direct = !x1 && !y1 && x2 == fb->width && y2 == fb->height && + (x2 * fb->format->cpp[0]) == fb->pitches[0]; + void *dest = mdg->work_buf; + + /* if src buffer fits dst buffer, decompress directly into dst */ + if (direct) + dest = drm_client_buffer_vmap(buffer); + + ret = LZ4_decompress_safe(buf, dest, len, mdg->max_transfer_size); + pdebug(4, " decompress len=%zu, ret=%d\n", len, ret); + if (ret < 0) + return -EIO; + + if (direct) + return 0; + + buf = mdg->work_buf; + len = ret; + } + + if (offset + len > (mdg->rect_width * mdg->rect_height * fb->format->cpp[0])) { + pr_err("%s: Buffer doesn't fit rectangle\n", __func__); + return -EINVAL; + } + + /* offset and len are 32-bit aligned */ + + if (offset) { + unsigned int buf_cpp = fb->format->cpp[0]; + unsigned int pix_offset = offset / buf_cpp; + unsigned int off_x1 = x1 + (pix_offset % buf_cpp); + size_t remain; + + y1 += pix_offset / (x2 - x1); + pdebug(6, " buf_cpp=%u, pix_offset=%u, off_x1=%u, x1=%u, y1=%u\n", + buf_cpp, pix_offset, off_x1, x1, y1); + if (y1 >= y2) + return -EINVAL; + + if (off_x1 != x1) { + remain = mud_drm_gadget_write_buffer_memcpy(buffer, buf, len, + off_x1, x2, y1, y1 + 1); + if (!remain) + return 0; + + buf += len - remain; + len = remain; + if (++y1 >= y2) + return -EINVAL; + } + } + + len = mud_drm_gadget_write_buffer_memcpy(buffer, buf, len, x1, x2, y1, y2); + if (len) + pr_err("%s: Failed to write it all: %zu\n", __func__, len); + + return len ? 
-EIO : 0; +} + +static int mud_drm_gadget_disable_pipe(struct mud_drm_gadget *mdg) +{ + int ret; + + pdebug(2, "%s: buffer=%px buffer_check=%px\n", + __func__, mdg->buffer, mdg->buffer_check); + + drm_client_modeset_set(&mdg->client, NULL, NULL, NULL); + ret = drm_client_modeset_commit(&mdg->client); + if (ret) + return ret; + + drm_client_framebuffer_delete(mdg->buffer_check); + drm_client_framebuffer_delete(mdg->buffer); + mdg->buffer_check = NULL; + mdg->buffer = NULL; + + return 0; +} + +int mud_drm_gadget_writereg(struct mud_drm_gadget *mdg, unsigned int regnr, + const void *buf, size_t len, u8 compression) +{ + const __le32 *buf32; + size_t count; + u32 val; + int ret; + + pdebug(2, "%s: regnr=0x%02x, len=%zu\n", __func__, regnr, len); + + if (regnr >= MUD_DRM_REG_BUFFER) { + size_t offset = (regnr - MUD_DRM_REG_BUFFER) * sizeof(u32); + + pdebug(3, " MUD_DRM_REG_BUFFER\n"); + return mud_drm_gadget_write_buffer(mdg, buf, offset, len, compression); + } + + if (compression & REGMAP_USB_COMPRESSION_LZ4) { + ret = LZ4_decompress_safe(buf, mdg->work_buf, len, mdg->max_transfer_size); + pdebug(4, " decompress len=%zu, ret=%d\n", len, ret); + if (ret < 0) + return -EIO; + + buf = mdg->work_buf; + len = ret; + } + + count = len / sizeof(u32); + buf32 = buf; + + while (count--) { + val = le32_to_cpup(buf32++); + + if (regnr >= MUD_DRM_REG_SET_MODE && regnr <= MUD_DRM_REG_SET_MODE_MAX) { + ((u32 *)&mdg->current_mode)[regnr - MUD_DRM_REG_SET_MODE] = val; + } else { + switch (regnr) { + case MUD_DRM_REG_PIPE_ENABLE: + pdebug(3, " MUD_DRM_REG_PIPE_ENABLE: %u\n", val); + if (val == 0) { + ret = mud_drm_gadget_disable_pipe(mdg); + if (ret) + return ret; + } + break; + + case MUD_DRM_REG_RECT_X: + mdg->rect_x = val; + break; + case MUD_DRM_REG_RECT_Y: + mdg->rect_y = val; + break; + case MUD_DRM_REG_RECT_WIDTH: + mdg->rect_width = val; + break; + case MUD_DRM_REG_RECT_HEIGHT: + mdg->rect_height = val; + break; + + case MUD_DRM_REG_SET_CONNECTOR: + if (val >= 
mdg->connector_count) + return -EINVAL; + + pdebug(3, " MUD_DRM_REG_SET_CONNECTOR: %u\n", val); + mdg->current_connector = mdg->connectors[val].connector; + break; + + case MUD_DRM_REG_SET_FORMAT: + pdebug(3, " MUD_DRM_REG_SET_FORMAT: %u\n", val); + mdg->current_format = val; + break; + + case MUD_DRM_REG_SET_COMMIT: + pdebug(3, " MUD_DRM_REG_SET_COMMIT: %u\n", val); + + if (val == MUD_DRM_COMMIT_APPLY) + ret = mud_drm_gadget_commit(mdg); + else + ret = mud_drm_gadget_check(mdg); + pdebug(3, " ret=%d\n", ret); + if (ret) + return ret; + break; + + default: + pr_err("%s: unknown register: 0x%x\n", __func__, regnr); + return -EINVAL; + } + } + + regnr++; + } + + return 0; +} +EXPORT_SYMBOL(mud_drm_gadget_writereg); + +int mud_drm_gadget_readreg(struct mud_drm_gadget *mdg, unsigned int regnr, + void *buf, size_t *len, u8 compression) +{ + struct drm_device *drm = mdg->client.dev; + size_t count = *len / sizeof(u32); + __le32 *buf32 = buf; + u32 val; + int ret; + + /* + * FIXME: + * Should we bother with compression? The host honours our choice. + * EDID and modes are the big ones, only on connector probing. 
+ */ + + pdebug(2, "%s: regnr=0x%02x, count=%zu\n", __func__, regnr, count); + + if (regnr >= MUD_DRM_REG_CONNECTOR_TYPE && regnr < MUD_DRM_REG_CONNECTOR_MAX) + return mud_drm_gadget_read_connector(mdg, regnr, buf, *len); + + if (regnr == MUD_DRM_REG_TV_MODES) { + const char *names; + + ret = drm_mud_gadget_tv_mode_property(drm, &val, &names); + if (ret) + return ret; + if (*len != val * DRM_PROP_NAME_LEN) + ret = -EINVAL; + else + memcpy(buf, names, *len); + kfree(names); + return ret; + } + + while (count--) { + if (regnr >= MUD_DRM_REG_FORMATS && regnr <= MUD_DRM_REG_FORMATS_MAX && + regnr < (MUD_DRM_REG_FORMATS + mdg->format_count)) { + val = mdg->formats[regnr - MUD_DRM_REG_FORMATS]; + } else { + switch (regnr) { + case MUD_DRM_REG_MIN_WIDTH: + val = drm->mode_config.min_width; + break; + case MUD_DRM_REG_MAX_WIDTH: + val = drm->mode_config.max_width; + break; + case MUD_DRM_REG_MIN_HEIGHT: + val = drm->mode_config.min_height; + break; + case MUD_DRM_REG_MAX_HEIGHT: + val = drm->mode_config.max_height; + break; + + case MUD_DRM_REG_NUM_CONNECTORS: + val = mdg->connector_count; + break; + + case MUD_DRM_REG_FORMATS_COUNT: + val = mdg->format_count; + break; + + case MUD_DRM_REG_TV_MODES_COUNT: + ret = drm_mud_gadget_tv_mode_property(drm, &val, NULL); + if (ret) + return ret; + break; + default: + pr_err("%s: unknown register: 0x%x\n", __func__, regnr); + return -EINVAL; + } + } + + *(buf32++) = cpu_to_le32(val); + regnr++; + } + + return 0; +} +EXPORT_SYMBOL(mud_drm_gadget_readreg); + +void mud_drm_gadget_enable(struct mud_drm_gadget *mdg) +{ + pdebug(1, "%s:\n", __func__); +} +EXPORT_SYMBOL(mud_drm_gadget_enable); + +void mud_drm_gadget_disable(struct mud_drm_gadget *mdg) +{ + pdebug(1, "%s:\n", __func__); + mud_drm_gadget_disable_pipe(mdg); +} +EXPORT_SYMBOL(mud_drm_gadget_disable); + +static int drm_mud_gadget_get_formats(struct mud_drm_gadget *mdg) +{ + struct drm_device *drm = mdg->client.dev; + struct drm_plane *plane; + //struct drm_crtc *crtc; + 
unsigned int i; + u32 *formats; + + pdebug(1, "%s:\n", __func__); + //crtc = drm_crtc_from_index(drm, 0); + //plane = crtc->primary; + + drm_for_each_plane(plane, drm) { + pdebug(1, " plane=%px\n", plane); + if (plane->type == DRM_PLANE_TYPE_PRIMARY) + break; + } + + formats = kcalloc(plane->format_count, sizeof(u32), GFP_KERNEL); + if (!formats) + return -ENOMEM; + + for (i = 0; i < plane->format_count; i++) { + struct drm_format_name_buf format_name; + const struct drm_format_info *fmt; + + fmt = drm_format_info(plane->format_types[i]); + if (fmt->num_planes != 1) + continue; + + formats[mdg->format_count++] = plane->format_types[i]; + pdebug(1, " %s\n", drm_get_format_name(plane->format_types[i], &format_name)); + } + + if (mdg->format_count > (MUD_DRM_REG_FORMATS_MAX - MUD_DRM_REG_FORMATS)) { + kfree(formats); + return -ENOSPC; + } + + mdg->formats = formats; + + return 0; +} + +static bool object_has_property(struct drm_mode_object *obj, struct drm_property *property) +{ + unsigned int i; + + for (i = 0; i < obj->properties->count; i++) + if (obj->properties->properties[i] == property) + return true; + + return false; +} + +static void mud_drm_gadget_get_connectors(struct mud_drm_gadget *mdg) +{ + struct mud_drm_gadget_connector *connectors = NULL; + struct drm_connector_list_iter conn_iter; + unsigned int connector_count = 0; + struct drm_connector *connector; + struct drm_device *drm = mdg->client.dev; + + pdebug(1, "%s:\n", __func__); + + drm_connector_list_iter_begin(drm, &conn_iter); + drm_client_for_each_connector_iter(connector, &conn_iter) { + struct mud_drm_gadget_connector *tmp, *mconn; + + tmp = krealloc(connectors, (connector_count + 1) * sizeof(*connectors), + GFP_KERNEL | __GFP_ZERO); + if (!tmp) + break; + + connectors = tmp; + drm_connector_get(connector); + mconn = &connectors[connector_count++]; + mconn->connector = connector; + + mconn->capabilities = MUD_DRM_CONNECTOR_CAP_MODE; + if (connector->connector_type != DRM_MODE_CONNECTOR_VIRTUAL) 
+ mconn->capabilities |= MUD_DRM_CONNECTOR_CAP_EDID; + if (object_has_property(&connector->base, drm->mode_config.tv_mode_property)) + mconn->capabilities |= MUD_DRM_CONNECTOR_CAP_TV; + } + drm_connector_list_iter_end(&conn_iter); + + mdg->connectors = connectors; + mdg->connector_count = connector_count; +} + +static void mud_drm_gadget_client_unregister(struct drm_client_dev *client) +{ + struct mud_drm_gadget *mdg = container_of(client, struct mud_drm_gadget, client); + unsigned int i; + + pdebug(1, "%s:\n", __func__); + + vfree(mdg->work_buf); + kfree(mdg->formats); + + for (i = 0; i < mdg->connector_count; i++) { + struct mud_drm_gadget_connector *mconn = &mdg->connectors[i]; + + drm_connector_put(mconn->connector); + kfree(mconn->modes); + kfree(mconn->edid); + } + kfree(mdg->connectors); + + drm_client_framebuffer_delete(mdg->buffer_check); + drm_client_framebuffer_delete(mdg->buffer); + drm_client_release(client); + kfree(mdg); +} + +static int mud_drm_gadget_client_hotplug(struct drm_client_dev *client) +{ + struct mud_drm_gadget *mdg = container_of(client, struct mud_drm_gadget, client); + + pdebug(1, "%s:\n", __func__); + + mud_drm_gadget_probe_connectors(mdg); + + return 0; +} + +static const struct drm_client_funcs mdg_client_funcs = { + .owner = THIS_MODULE, + .unregister = mud_drm_gadget_client_unregister, + .hotplug = mud_drm_gadget_client_hotplug, +}; + +struct mud_drm_gadget *mud_drm_gadget_init(unsigned int minor_id, + size_t max_transfer_size) +{ + struct mud_drm_gadget *mdg; + void *work_buf; + int ret; + + pdebug(1, "%s:\n", __func__); + + mdg = kzalloc(sizeof(*mdg), GFP_KERNEL); + work_buf = vmalloc(max_transfer_size); + if (!mdg || !work_buf) { + ret = -ENOMEM; + goto error_free; + } + + mdg->max_transfer_size = max_transfer_size; + mdg->work_buf = work_buf; + + ret = drm_client_init_from_id(minor_id, &mdg->client, "mud-drm-gadget", &mdg_client_funcs); + if (0 && ret) + goto error_free; + + /* From this point on we can't fail since only a 
drm_dev_unregister() can unload us */ + + if (!ret) { + ret = drm_mud_gadget_get_formats(mdg); + if (ret) + pr_err("ERROR: drm_mud_gadget_get_formats=%d\n", ret); + + mud_drm_gadget_get_connectors(mdg); + mud_drm_gadget_probe_connectors(mdg); + } + + pdebug(1, "%s: connector_count=%u\n", __func__, mdg->connector_count); + + return mdg; + +error_free: + vfree(work_buf); + kfree(mdg); + + return ERR_PTR(ret); +} +EXPORT_SYMBOL(mud_drm_gadget_init); + +MODULE_AUTHOR("Noralf Trønnes"); +MODULE_LICENSE("GPL"); diff --git a/drivers/usb/gadget/Kconfig b/drivers/usb/gadget/Kconfig index d6285146ec76..e30cb039f35d 100644 --- a/drivers/usb/gadget/Kconfig +++ b/drivers/usb/gadget/Kconfig @@ -219,6 +219,9 @@ config USB_F_TCM config USB_F_MUD tristate +config USB_F_MUD_DRM + tristate + config USB_F_MUD_PINS tristate @@ -498,6 +501,15 @@ menuconfig USB_CONFIGFS_F_MUD if USB_F_MUD +config USB_CONFIGFS_F_MUD_DRM + bool "Multifunction USB Device Display" + depends on DRM + select DRM_MUD_GADGET + select USB_F_MUD_DRM + select LZ4_DECOMPRESS + help + Display support for the Multifunction USB Device. 
+ config USB_CONFIGFS_F_MUD_PINS bool "Multifunction USB Device GPIO" depends on PINCTRL diff --git a/drivers/usb/gadget/function/Makefile b/drivers/usb/gadget/function/Makefile index 2e24227fcc12..d0f93ce6bbe9 100644 --- a/drivers/usb/gadget/function/Makefile +++ b/drivers/usb/gadget/function/Makefile @@ -52,5 +52,7 @@ usb_f_tcm-y := f_tcm.o obj-$(CONFIG_USB_F_TCM) += usb_f_tcm.o usb_f_mud-y := f_mud.o mud_regmap.o obj-$(CONFIG_USB_F_MUD) += usb_f_mud.o +usb_f_mud_drm-y := f_mud_drm.o +obj-$(CONFIG_USB_F_MUD_DRM) += usb_f_mud_drm.o usb_f_mud_pins-y := f_mud_pins.o obj-$(CONFIG_USB_F_MUD_PINS) += usb_f_mud_pins.o diff --git a/drivers/usb/gadget/function/f_mud_drm.c b/drivers/usb/gadget/function/f_mud_drm.c new file mode 100644 index 000000000000..5e7ba71f3389 --- /dev/null +++ b/drivers/usb/gadget/function/f_mud_drm.c @@ -0,0 +1,181 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Copyright 2020 Noralf Trønnes + */ + +#include +#include +#include +#include +#include +#include +#include + +#include "f_mud.h" +#include "../../../gpu/drm/mud/mud_drm.h" + +struct f_mud_drm_cell { + struct f_mud_cell cell; + + struct mutex lock; + int refcnt; + + int drm_dev; + const char *backlight_dev; + + struct mud_drm_gadget *mdg; +}; + +static inline struct f_mud_drm_cell *ci_to_f_mud_drm_cell(struct config_item *item) +{ + return container_of(to_config_group(item), struct f_mud_drm_cell, cell.group); +} + +static inline struct f_mud_drm_cell *cell_to_f_mud_drm_cell(struct f_mud_cell *cell) +{ + return container_of(cell, struct f_mud_drm_cell, cell); +} + +static void f_mud_drm_enable(struct f_mud_cell *cell) +{ + struct f_mud_drm_cell *pcell = cell_to_f_mud_drm_cell(cell); + + mud_drm_gadget_enable(pcell->mdg); +} + +static void f_mud_drm_disable(struct f_mud_cell *cell) +{ + struct f_mud_drm_cell *pcell = cell_to_f_mud_drm_cell(cell); + + mud_drm_gadget_disable(pcell->mdg); +} + +static int f_mud_drm_writereg(struct f_mud_cell *cell, unsigned int regnr, + const void 
*buf, size_t len, u8 compression) +{ + struct f_mud_drm_cell *pcell = cell_to_f_mud_drm_cell(cell); + + return mud_drm_gadget_writereg(pcell->mdg, regnr, buf, len, compression); +} + +static int f_mud_drm_readreg(struct f_mud_cell *cell, unsigned int regnr, + void *buf, size_t *len, u8 compression) +{ + struct f_mud_drm_cell *pcell = cell_to_f_mud_drm_cell(cell); + + return mud_drm_gadget_readreg(pcell->mdg, regnr, buf, len, compression); +} + +static int f_mud_drm_bind(struct f_mud_cell *cell) +{ + struct f_mud_drm_cell *pcell = cell_to_f_mud_drm_cell(cell); + struct mud_drm_gadget *mdg; + int drm_dev, ret = 0; + + printk("%s:\n", __func__); + + mutex_lock(&pcell->lock); + drm_dev = pcell->drm_dev; + printk(" drm_dev=%d\n", pcell->drm_dev); + printk(" backlight_dev='%s'\n", pcell->backlight_dev ? pcell->backlight_dev : ""); + + mdg = mud_drm_gadget_init(drm_dev, cell->ops->max_transfer_size); + if (IS_ERR(mdg)) { + ret = PTR_ERR(mdg); + goto out; + } + + pcell->mdg = mdg; + pcell->refcnt++; +out: + mutex_unlock(&pcell->lock); + + return ret; +} + +static void f_mud_drm_unbind(struct f_mud_cell *cell) +{ + struct f_mud_drm_cell *pcell = cell_to_f_mud_drm_cell(cell); + + printk("%s:\n", __func__); + + mutex_lock(&pcell->lock); + pcell->refcnt--; + mutex_unlock(&pcell->lock); +} + +F_MUD_OPT_INT(f_mud_drm_cell, drm_dev, 0, 63); +F_MUD_OPT_STR(f_mud_drm_cell, backlight_dev); + +static struct configfs_attribute *f_mud_drm_attrs[] = { + &f_mud_drm_cell_attr_drm_dev, + &f_mud_drm_cell_attr_backlight_dev, + NULL, +}; + +static struct configfs_item_operations f_mud_drm_item_ops = { + .release = f_mud_cell_item_release, +}; + +static const struct config_item_type f_mud_drm_func_type = { + .ct_item_ops = &f_mud_drm_item_ops, + .ct_attrs = f_mud_drm_attrs, + .ct_owner = THIS_MODULE, +}; + +static void f_mud_drm_free(struct f_mud_cell *cell) +{ + struct f_mud_drm_cell *pcell = container_of(cell, struct f_mud_drm_cell, cell); + + printk("%s:\n", __func__); + + 
mutex_destroy(&pcell->lock); + kfree(pcell->backlight_dev); + kfree(pcell); +} + +static struct f_mud_cell *f_mud_drm_alloc(void) +{ + struct f_mud_drm_cell *pcell; + + printk("%s:\n", __func__); + + pcell = kzalloc(sizeof(*pcell), GFP_KERNEL); + if (!pcell) + return ERR_PTR(-ENOMEM); + + mutex_init(&pcell->lock); + config_group_init_type_name(&pcell->cell.group, "", &f_mud_drm_func_type); + + return &pcell->cell; +} + +static const struct f_mud_cell_ops f_mud_drm_ops = { + .name = "mud-drm", + .owner = THIS_MODULE, + + /* + * FIXME: Support interrupt for connector hotplug event + * Polling should be a fallback. + .interrupt_interval_ms = 1000, + */ + + .alloc = f_mud_drm_alloc, + .free = f_mud_drm_free, + .bind = f_mud_drm_bind, + .unbind = f_mud_drm_unbind, + + .regval_bytes = 4, + .max_transfer_size = KMALLOC_MAX_SIZE, + .compression = REGMAP_USB_COMPRESSION_LZ4, + + .enable = f_mud_drm_enable, + .disable = f_mud_drm_disable, + .readreg = f_mud_drm_readreg, + .writereg = f_mud_drm_writereg, +}; + +DECLARE_F_MUD_CELL_INIT(f_mud_drm_ops); + +MODULE_AUTHOR("Noralf Trønnes"); +MODULE_LICENSE("GPL");