From patchwork Fri Feb 12 10:20:17 2021
X-Patchwork-Submitter: Alexandru Ardelean
X-Patchwork-Id: 12084983
From: Alexandru Ardelean
Subject: [RFC PATCH 1/5] iio: Add output buffer support
Date: Fri, 12 Feb 2021 12:20:17 +0200
Message-ID: <20210212102021.47276-2-alexandru.ardelean@analog.com>
In-Reply-To: <20210212102021.47276-1-alexandru.ardelean@analog.com>
References: <20210212102021.47276-1-alexandru.ardelean@analog.com>
X-Mailing-List: linux-iio@vger.kernel.org

From: Lars-Peter Clausen

Currently IIO only supports buffer mode for capture devices like ADCs. Add support for buffered mode for output devices like DACs. The output buffer implementation is analogous to the input buffer implementation.
Instead of using read() to get data from the buffer, write() is used to copy data into the buffer. poll() with POLLOUT will wake up when there is space available for at least the configured watermark of samples.

Drivers can remove data from a buffer using iio_buffer_remove_sample(); the function can e.g. be called from a trigger handler to write the data to hardware.

A buffer can only be either an output buffer or an input buffer, but not both. So, for a device that has both an ADC and a DAC path, this means two IIO buffers (one for each direction). The direction of a buffer is decided by the new direction field of the iio_buffer struct and should be set after allocating and before registering the buffer.

Signed-off-by: Lars-Peter Clausen
Signed-off-by: Alexandru Ardelean
---
 drivers/iio/industrialio-buffer.c | 110 ++++++++++++++++++++++++++++--
 include/linux/iio/buffer.h        |   7 ++
 include/linux/iio/buffer_impl.h   |  11 +++
 3 files changed, 124 insertions(+), 4 deletions(-)

diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c index a0d1ad86022f..6f4f5f5544f3 100644 --- a/drivers/iio/industrialio-buffer.c +++ b/drivers/iio/industrialio-buffer.c @@ -162,6 +162,69 @@ static ssize_t iio_buffer_read(struct file *filp, char __user *buf, return ret; } +static size_t iio_buffer_space_available(struct iio_buffer *buf) +{ + if (buf->access->space_available) + return buf->access->space_available(buf); + + return SIZE_MAX; +} + +static ssize_t iio_buffer_write(struct file *filp, const char __user *buf, + size_t n, loff_t *f_ps) +{ + struct iio_dev_buffer_pair *ib = filp->private_data; + struct iio_buffer *rb = ib->buffer; + struct iio_dev *indio_dev = ib->indio_dev; + DEFINE_WAIT_FUNC(wait, woken_wake_function); + size_t datum_size; + size_t to_wait; + int ret; + + if (!rb || !rb->access->write) + return -EINVAL; + + datum_size = rb->bytes_per_datum; + + /* + * If datum_size is 0 there will never be anything to write to the + * buffer, so signal end of file now.
+ */ + if (!datum_size) + return 0; + + if (filp->f_flags & O_NONBLOCK) + to_wait = 0; + else + to_wait = min_t(size_t, n / datum_size, rb->watermark); + + add_wait_queue(&rb->pollq, &wait); + do { + if (!indio_dev->info) { + ret = -ENODEV; + break; + } + + if (iio_buffer_space_available(rb) < to_wait) { + if (signal_pending(current)) { + ret = -ERESTARTSYS; + break; + } + + wait_woken(&wait, TASK_INTERRUPTIBLE, + MAX_SCHEDULE_TIMEOUT); + continue; + } + + ret = rb->access->write(rb, n, buf); + if (ret == 0 && (filp->f_flags & O_NONBLOCK)) + ret = -EAGAIN; + } while (ret == 0); + remove_wait_queue(&rb->pollq, &wait); + + return ret; +} + /** * iio_buffer_poll() - poll the buffer to find out if it has data * @filp: File structure pointer for device access @@ -182,8 +245,19 @@ static __poll_t iio_buffer_poll(struct file *filp, return 0; poll_wait(filp, &rb->pollq, wait); - if (iio_buffer_ready(indio_dev, rb, rb->watermark, 0)) - return EPOLLIN | EPOLLRDNORM; + + switch (rb->direction) { + case IIO_BUFFER_DIRECTION_IN: + if (iio_buffer_ready(indio_dev, rb, rb->watermark, 0)) + return EPOLLIN | EPOLLRDNORM; + break; + case IIO_BUFFER_DIRECTION_OUT: + if (iio_buffer_space_available(rb) >= rb->watermark) + return EPOLLOUT | EPOLLWRNORM; + break; + } + + /* need a way of knowing if there may be enough data... 
*/ return 0; } @@ -232,6 +306,16 @@ void iio_buffer_wakeup_poll(struct iio_dev *indio_dev) } } +int iio_buffer_remove_sample(struct iio_buffer *buffer, u8 *data) +{ + if (!buffer || !buffer->access) + return -EINVAL; + if (!buffer->access->remove_from) + return -ENOSYS; + return buffer->access->remove_from(buffer, data); +} +EXPORT_SYMBOL_GPL(iio_buffer_remove_sample); + void iio_buffer_init(struct iio_buffer *buffer) { INIT_LIST_HEAD(&buffer->demux_list); @@ -803,6 +887,8 @@ static int iio_verify_update(struct iio_dev *indio_dev, } if (insert_buffer) { + if (insert_buffer->direction == IIO_BUFFER_DIRECTION_OUT) + strict_scanmask = true; bitmap_or(compound_mask, compound_mask, insert_buffer->scan_mask, indio_dev->masklength); scan_timestamp |= insert_buffer->scan_timestamp; @@ -945,6 +1031,8 @@ static int iio_update_demux(struct iio_dev *indio_dev) int ret; list_for_each_entry(buffer, &iio_dev_opaque->buffer_list, buffer_list) { + if (buffer->direction == IIO_BUFFER_DIRECTION_OUT) + continue; ret = iio_buffer_update_demux(indio_dev, buffer); if (ret < 0) goto error_clear_mux_table; @@ -1155,6 +1243,11 @@ int iio_update_buffers(struct iio_dev *indio_dev, mutex_lock(&indio_dev->info_exist_lock); mutex_lock(&indio_dev->mlock); + if (insert_buffer && insert_buffer->direction == IIO_BUFFER_DIRECTION_OUT) { + ret = -EINVAL; + goto out_unlock; + } + if (insert_buffer && iio_buffer_is_active(insert_buffer)) insert_buffer = NULL; @@ -1400,6 +1493,7 @@ static const struct file_operations iio_buffer_chrdev_fileops = { .owner = THIS_MODULE, .llseek = noop_llseek, .read = iio_buffer_read, + .write = iio_buffer_write, .poll = iio_buffer_poll, .unlocked_ioctl = iio_buffer_ioctl, .compat_ioctl = compat_ptr_ioctl, @@ -1914,8 +2008,16 @@ static int iio_buffer_mmap(struct file *filep, struct vm_area_struct *vma) if (!(vma->vm_flags & VM_SHARED)) return -EINVAL; - if (!(vma->vm_flags & VM_READ)) - return -EINVAL; + switch (buffer->direction) { + case IIO_BUFFER_DIRECTION_IN: + if (!(vma->vm_flags &
VM_READ)) + return -EINVAL; + break; + case IIO_BUFFER_DIRECTION_OUT: + if (!(vma->vm_flags & VM_WRITE)) + return -EINVAL; + break; + } return buffer->access->mmap(buffer, vma); } diff --git a/include/linux/iio/buffer.h b/include/linux/iio/buffer.h index b6928ac5c63d..e87b8773253d 100644 --- a/include/linux/iio/buffer.h +++ b/include/linux/iio/buffer.h @@ -11,8 +11,15 @@ struct iio_buffer; +enum iio_buffer_direction { + IIO_BUFFER_DIRECTION_IN, + IIO_BUFFER_DIRECTION_OUT, +}; + int iio_push_to_buffers(struct iio_dev *indio_dev, const void *data); +int iio_buffer_remove_sample(struct iio_buffer *buffer, u8 *data); + /** * iio_push_to_buffers_with_timestamp() - push data and timestamp to buffers * @indio_dev: iio_dev structure for device. diff --git a/include/linux/iio/buffer_impl.h b/include/linux/iio/buffer_impl.h index 1d57dc7ccb4f..47bdbf4a4519 100644 --- a/include/linux/iio/buffer_impl.h +++ b/include/linux/iio/buffer_impl.h @@ -7,6 +7,7 @@ #ifdef CONFIG_IIO_BUFFER #include +#include struct iio_dev; struct iio_buffer; @@ -23,6 +24,10 @@ struct iio_buffer; * @read: try to get a specified number of bytes (must exist) * @data_available: indicates how much data is available for reading from * the buffer. + * @remove_from: remove sample from buffer. Drivers should call this to + * remove a sample from a buffer. + * @write: try to write a number of bytes + * @space_available: returns the number of bytes available in a buffer * @request_update: if a parameter change has been marked, update underlying * storage.
* @set_bytes_per_datum: set number of bytes per datum @@ -61,6 +66,9 @@ struct iio_buffer_access_funcs { int (*store_to)(struct iio_buffer *buffer, const void *data); int (*read)(struct iio_buffer *buffer, size_t n, char __user *buf); size_t (*data_available)(struct iio_buffer *buffer); + int (*remove_from)(struct iio_buffer *buffer, void *data); + int (*write)(struct iio_buffer *buffer, size_t n, const char __user *buf); + size_t (*space_available)(struct iio_buffer *buffer); int (*request_update)(struct iio_buffer *buffer); @@ -103,6 +111,9 @@ struct iio_buffer { /** @bytes_per_datum: Size of individual datum including timestamp. */ size_t bytes_per_datum; + /** @direction: Direction of the data stream (in/out). */ + enum iio_buffer_direction direction; + /** * @access: Buffer access functions associated with the * implementation.

From patchwork Fri Feb 12 10:20:18 2021
X-Patchwork-Submitter: Alexandru Ardelean
X-Patchwork-Id: 12084981
From: Alexandru Ardelean
Subject: [RFC PATCH 2/5] iio: kfifo-buffer: Add output buffer support
Date: Fri, 12 Feb 2021 12:20:18 +0200
Message-ID: <20210212102021.47276-3-alexandru.ardelean@analog.com>
In-Reply-To: <20210212102021.47276-1-alexandru.ardelean@analog.com>
References: <20210212102021.47276-1-alexandru.ardelean@analog.com>
X-Mailing-List: linux-iio@vger.kernel.org

From: Lars-Peter Clausen

Add output buffer support to the kfifo buffer implementation. The implementation is straightforward and mostly just wraps the kfifo API to provide the required operations.

Signed-off-by: Lars-Peter Clausen
Signed-off-by: Alexandru Ardelean
---
 drivers/iio/buffer/kfifo_buf.c | 50 ++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/drivers/iio/buffer/kfifo_buf.c b/drivers/iio/buffer/kfifo_buf.c index 1359abed3b31..6e055176f969 100644 --- a/drivers/iio/buffer/kfifo_buf.c +++ b/drivers/iio/buffer/kfifo_buf.c @@ -138,10 +138,60 @@ static void iio_kfifo_buffer_release(struct iio_buffer *buffer) kfree(kf); } +static size_t iio_kfifo_buf_space_available(struct iio_buffer *r) +{ + struct iio_kfifo *kf = iio_to_kfifo(r); + size_t avail; + + mutex_lock(&kf->user_lock); + avail = kfifo_avail(&kf->kf); + mutex_unlock(&kf->user_lock); + + return avail; +} + +static int iio_kfifo_remove_from(struct iio_buffer *r, void *data) +{ + int ret; + struct iio_kfifo *kf = iio_to_kfifo(r); + + if (kfifo_size(&kf->kf) < r->bytes_per_datum) + return -EBUSY; + + ret = kfifo_out(&kf->kf, data, r->bytes_per_datum); + if (ret != r->bytes_per_datum) + return -EBUSY; + + wake_up_interruptible_poll(&r->pollq, POLLOUT | POLLWRNORM); + + return 0; +} + +static int iio_kfifo_write(struct iio_buffer *r, size_t n, + const char __user *buf) +{ + struct iio_kfifo *kf =
iio_to_kfifo(r); + int ret, copied; + + mutex_lock(&kf->user_lock); + if (!kfifo_initialized(&kf->kf) || n < kfifo_esize(&kf->kf)) + ret = -EINVAL; + else + ret = kfifo_from_user(&kf->kf, buf, n, &copied); + mutex_unlock(&kf->user_lock); + if (ret) + return ret; + + return copied; +} + static const struct iio_buffer_access_funcs kfifo_access_funcs = { .store_to = &iio_store_to_kfifo, .read = &iio_read_kfifo, .data_available = iio_kfifo_buf_data_available, + .remove_from = &iio_kfifo_remove_from, + .write = &iio_kfifo_write, + .space_available = &iio_kfifo_buf_space_available, .request_update = &iio_request_update_kfifo, .set_bytes_per_datum = &iio_set_bytes_per_datum_kfifo, .set_length = &iio_set_length_kfifo,

From patchwork Fri Feb 12 10:20:19 2021
X-Patchwork-Submitter: Alexandru Ardelean
X-Patchwork-Id: 12084985
From: Alexandru Ardelean
Subject: [RFC PATCH 3/5] iio: buffer-dma: Allow to provide custom buffer ops
Date: Fri, 12 Feb 2021 12:20:19 +0200
Message-ID: <20210212102021.47276-4-alexandru.ardelean@analog.com>
In-Reply-To: <20210212102021.47276-1-alexandru.ardelean@analog.com>
References: <20210212102021.47276-1-alexandru.ardelean@analog.com>
X-Mailing-List: linux-iio@vger.kernel.org

From: Lars-Peter Clausen

Some devices that want to make use of the DMA buffer might need to do something special, such as writing a register when the buffer is enabled. Extend the API to allow those drivers to provide their own buffer ops.

Signed-off-by: Lars-Peter Clausen
Signed-off-by: Alexandru Ardelean
---
 drivers/iio/adc/adi-axi-adc.c                      |  2 +-
 drivers/iio/buffer/industrialio-buffer-dma.c       |  4 +++-
 drivers/iio/buffer/industrialio-buffer-dmaengine.c | 14 ++++++++++----
 include/linux/iio/buffer-dma.h                     |  5 ++++-
 include/linux/iio/buffer-dmaengine.h               |  4 +++-
 5 files changed, 21 insertions(+), 8 deletions(-)

diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c index 74a6da35fd69..45ce97d1f41e 100644 --- a/drivers/iio/adc/adi-axi-adc.c +++ b/drivers/iio/adc/adi-axi-adc.c @@ -116,7 +116,7 @@ static int adi_axi_adc_config_dma_buffer(struct device *dev, dma_name = "rx"; buffer = devm_iio_dmaengine_buffer_alloc(indio_dev->dev.parent, - dma_name); + dma_name, NULL, NULL); if (IS_ERR(buffer)) return PTR_ERR(buffer); diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c index befb0a3d2def..57f2284a292f 100644 --- a/drivers/iio/buffer/industrialio-buffer-dma.c +++ b/drivers/iio/buffer/industrialio-buffer-dma.c @@ -883,13 +883,15 @@ EXPORT_SYMBOL_GPL(iio_dma_buffer_set_length); * allocations are done from a memory region that can be accessed by the device.
*/ int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue, - struct device *dev, const struct iio_dma_buffer_ops *ops) + struct device *dev, const struct iio_dma_buffer_ops *ops, + void *driver_data) { iio_buffer_init(&queue->buffer); queue->buffer.length = PAGE_SIZE; queue->buffer.watermark = queue->buffer.length / 2; queue->dev = dev; queue->ops = ops; + queue->driver_data = driver_data; INIT_LIST_HEAD(&queue->incoming); INIT_LIST_HEAD(&queue->outgoing); diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c index bb022922ec23..0736526b36ec 100644 --- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c +++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c @@ -163,6 +163,8 @@ static const struct attribute *iio_dmaengine_buffer_attrs[] = { * iio_dmaengine_buffer_alloc() - Allocate new buffer which uses DMAengine * @dev: Parent device for the buffer * @channel: DMA channel name, typically "rx". + * @ops: Custom iio_dma_buffer_ops, if NULL default ops will be used + * @driver_data: Driver data to be passed to custom iio_dma_buffer_ops * * This allocates a new IIO buffer which internally uses the DMAengine framework * to perform its transfers. The parent device will be used to request the DMA @@ -172,7 +174,8 @@ static const struct attribute *iio_dmaengine_buffer_attrs[] = { * release it. */ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev, - const char *channel) + const char *channel, const struct iio_dma_buffer_ops *ops, + void *driver_data) { struct dmaengine_buffer *dmaengine_buffer; unsigned int width, src_width, dest_width; @@ -211,7 +214,7 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev, dmaengine_buffer->max_size = dma_get_max_seg_size(chan->device->dev); iio_dma_buffer_init(&dmaengine_buffer->queue, chan->device->dev, - &iio_dmaengine_default_ops); + ops ? 
ops : &iio_dmaengine_default_ops, driver_data); dmaengine_buffer->queue.buffer.attrs = iio_dmaengine_buffer_attrs; dmaengine_buffer->queue.buffer.access = &iio_dmaengine_buffer_ops; @@ -249,6 +252,8 @@ static void __devm_iio_dmaengine_buffer_free(struct device *dev, void *res) * devm_iio_dmaengine_buffer_alloc() - Resource-managed iio_dmaengine_buffer_alloc() * @dev: Parent device for the buffer * @channel: DMA channel name, typically "rx". + * @ops: Custom iio_dma_buffer_ops, if NULL default ops will be used + * @driver_data: Driver data to be passed to custom iio_dma_buffer_ops * * This allocates a new IIO buffer which internally uses the DMAengine framework * to perform its transfers. The parent device will be used to request the DMA @@ -257,7 +262,8 @@ static void __devm_iio_dmaengine_buffer_free(struct device *dev, void *res) * The buffer will be automatically de-allocated once the device gets destroyed. */ struct iio_buffer *devm_iio_dmaengine_buffer_alloc(struct device *dev, - const char *channel) + const char *channel, const struct iio_dma_buffer_ops *ops, + void *driver_data) { struct iio_buffer **bufferp, *buffer; @@ -266,7 +272,7 @@ struct iio_buffer *devm_iio_dmaengine_buffer_alloc(struct device *dev, if (!bufferp) return ERR_PTR(-ENOMEM); - buffer = iio_dmaengine_buffer_alloc(dev, channel); + buffer = iio_dmaengine_buffer_alloc(dev, channel, ops, driver_data); if (IS_ERR(buffer)) { devres_free(bufferp); return buffer; diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h index 315a8d750986..c23fad847f0d 100644 --- a/include/linux/iio/buffer-dma.h +++ b/include/linux/iio/buffer-dma.h @@ -110,6 +110,8 @@ struct iio_dma_buffer_queue { bool active; + void *driver_data; + unsigned int num_blocks; struct iio_dma_buffer_block **blocks; unsigned int max_offset; @@ -144,7 +146,8 @@ int iio_dma_buffer_set_length(struct iio_buffer *buffer, unsigned int length); int iio_dma_buffer_request_update(struct iio_buffer *buffer); int 
iio_dma_buffer_init(struct iio_dma_buffer_queue *queue, - struct device *dma_dev, const struct iio_dma_buffer_ops *ops); + struct device *dma_dev, const struct iio_dma_buffer_ops *ops, + void *driver_data); void iio_dma_buffer_exit(struct iio_dma_buffer_queue *queue); void iio_dma_buffer_release(struct iio_dma_buffer_queue *queue); diff --git a/include/linux/iio/buffer-dmaengine.h b/include/linux/iio/buffer-dmaengine.h index 5b502291d6a4..464adee95d4b 100644 --- a/include/linux/iio/buffer-dmaengine.h +++ b/include/linux/iio/buffer-dmaengine.h @@ -7,10 +7,12 @@ #ifndef __IIO_DMAENGINE_H__ #define __IIO_DMAENGINE_H__ +struct iio_dma_buffer_ops; struct iio_buffer; struct device; struct iio_buffer *devm_iio_dmaengine_buffer_alloc(struct device *dev, - const char *channel); + const char *channel, const struct iio_dma_buffer_ops *ops, + void *driver_data); #endif

From patchwork Fri Feb 12 10:20:20 2021
X-Patchwork-Submitter: Alexandru Ardelean
X-Patchwork-Id: 12084989
From: Alexandru Ardelean
Subject: [RFC PATCH 4/5] iio: buffer-dma: Add output buffer support
Date: Fri, 12 Feb 2021 12:20:20 +0200
Message-ID: <20210212102021.47276-5-alexandru.ardelean@analog.com>
In-Reply-To: <20210212102021.47276-1-alexandru.ardelean@analog.com>
References: <20210212102021.47276-1-alexandru.ardelean@analog.com>
X-Mailing-List: linux-iio@vger.kernel.org

From: Lars-Peter Clausen

Add support for output buffers to the DMA buffer implementation.

Signed-off-by: Lars-Peter Clausen
Signed-off-by: Alexandru Ardelean
---
 drivers/iio/adc/adi-axi-adc.c                      |   3 +-
 drivers/iio/buffer/industrialio-buffer-dma.c       | 116 ++++++++++++++++--
 .../buffer/industrialio-buffer-dmaengine.c         |  31 +++--
 include/linux/iio/buffer-dma.h                     |   6 +
 include/linux/iio/buffer-dmaengine.h               |   7 +-
 5 files changed, 144 insertions(+), 19 deletions(-)

diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c index 45ce97d1f41e..d088ab77ba5c 100644 --- a/drivers/iio/adc/adi-axi-adc.c +++ b/drivers/iio/adc/adi-axi-adc.c @@ -106,6 +106,7 @@ static unsigned int adi_axi_adc_read(struct adi_axi_adc_state *st, static int adi_axi_adc_config_dma_buffer(struct device *dev, struct iio_dev *indio_dev) { + enum iio_buffer_direction dir = IIO_BUFFER_DIRECTION_IN; struct iio_buffer *buffer; const char *dma_name; @@ -115,7 +116,7 @@ static int adi_axi_adc_config_dma_buffer(struct device *dev, if (device_property_read_string(dev, "dma-names", &dma_name)) dma_name = "rx"; - buffer = devm_iio_dmaengine_buffer_alloc(indio_dev->dev.parent, + buffer = devm_iio_dmaengine_buffer_alloc(indio_dev->dev.parent, dir, dma_name, NULL, NULL); if (IS_ERR(buffer)) return PTR_ERR(buffer); diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c
b/drivers/iio/buffer/industrialio-buffer-dma.c index 57f2284a292f..36e6e79d2e04 100644 --- a/drivers/iio/buffer/industrialio-buffer-dma.c +++ b/drivers/iio/buffer/industrialio-buffer-dma.c @@ -223,7 +223,8 @@ void iio_dma_buffer_block_done(struct iio_dma_buffer_block *block) spin_unlock_irqrestore(&queue->list_lock, flags); iio_buffer_block_put_atomic(block); - wake_up_interruptible_poll(&queue->buffer.pollq, EPOLLIN | EPOLLRDNORM); + wake_up_interruptible_poll(&queue->buffer.pollq, + (uintptr_t)queue->poll_wakup_flags); } EXPORT_SYMBOL_GPL(iio_dma_buffer_block_done); @@ -252,7 +253,8 @@ void iio_dma_buffer_block_list_abort(struct iio_dma_buffer_queue *queue, } spin_unlock_irqrestore(&queue->list_lock, flags); - wake_up_interruptible_poll(&queue->buffer.pollq, EPOLLIN | EPOLLRDNORM); + wake_up_interruptible_poll(&queue->buffer.pollq, + (uintptr_t)queue->poll_wakup_flags); } EXPORT_SYMBOL_GPL(iio_dma_buffer_block_list_abort); @@ -353,9 +355,6 @@ int iio_dma_buffer_request_update(struct iio_buffer *buffer) } block->block.id = i; - - block->state = IIO_BLOCK_STATE_QUEUED; - list_add_tail(&block->head, &queue->incoming); } out_unlock: @@ -437,7 +436,29 @@ int iio_dma_buffer_enable(struct iio_buffer *buffer, struct iio_dma_buffer_block *block, *_block; mutex_lock(&queue->lock); + + if (buffer->direction == IIO_BUFFER_DIRECTION_IN) + queue->poll_wakup_flags = POLLIN | POLLRDNORM; + else + queue->poll_wakup_flags = POLLOUT | POLLWRNORM; + queue->fileio.enabled = !queue->num_blocks; + if (queue->fileio.enabled) { + unsigned int i; + + for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) { + struct iio_dma_buffer_block *block = + queue->fileio.blocks[i]; + if (buffer->direction == IIO_BUFFER_DIRECTION_IN) { + block->state = IIO_BLOCK_STATE_QUEUED; + list_add_tail(&block->head, &queue->incoming); + } else { + block->state = IIO_BLOCK_STATE_DEQUEUED; + list_add_tail(&block->head, &queue->outgoing); + } + } + } + queue->active = true; list_for_each_entry_safe(block, _block, 
&queue->incoming, head) { list_del(&block->head); @@ -567,6 +588,61 @@ int iio_dma_buffer_read(struct iio_buffer *buffer, size_t n, } EXPORT_SYMBOL_GPL(iio_dma_buffer_read); +int iio_dma_buffer_write(struct iio_buffer *buf, size_t n, + const char __user *user_buffer) +{ + struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buf); + struct iio_dma_buffer_block *block; + int ret; + + if (n < buf->bytes_per_datum) + return -EINVAL; + + mutex_lock(&queue->lock); + + if (!queue->fileio.enabled) { + ret = -EBUSY; + goto out_unlock; + } + + if (!queue->fileio.active_block) { + block = iio_dma_buffer_dequeue(queue); + if (block == NULL) { + ret = 0; + goto out_unlock; + } + queue->fileio.pos = 0; + queue->fileio.active_block = block; + } else { + block = queue->fileio.active_block; + } + + n = rounddown(n, buf->bytes_per_datum); + if (n > block->block.size - queue->fileio.pos) + n = block->block.size - queue->fileio.pos; + + if (copy_from_user(block->vaddr + queue->fileio.pos, user_buffer, n)) { + ret = -EFAULT; + goto out_unlock; + } + + queue->fileio.pos += n; + + if (queue->fileio.pos == block->block.size) { + queue->fileio.active_block = NULL; + block->block.bytes_used = block->block.size; + iio_dma_buffer_enqueue(queue, block); + } + + ret = n; + +out_unlock: + mutex_unlock(&queue->lock); + + return ret; +} +EXPORT_SYMBOL_GPL(iio_dma_buffer_write); + /** * iio_dma_buffer_data_available() - DMA buffer data_available callback * @buf: Buffer to check for data availability @@ -588,12 +664,14 @@ size_t iio_dma_buffer_data_available(struct iio_buffer *buf) */ mutex_lock(&queue->lock); - if (queue->fileio.active_block) - data_available += queue->fileio.active_block->block.size; + if (queue->fileio.active_block) { + data_available += queue->fileio.active_block->block.bytes_used - + queue->fileio.pos; + } spin_lock_irq(&queue->list_lock); list_for_each_entry(block, &queue->outgoing, head) - data_available += block->block.size; + data_available += block->block.bytes_used; 
        spin_unlock_irq(&queue->list_lock);
        mutex_unlock(&queue->lock);
 
@@ -601,6 +679,28 @@ size_t iio_dma_buffer_data_available(struct iio_buffer *buf)
 }
 EXPORT_SYMBOL_GPL(iio_dma_buffer_data_available);
 
+size_t iio_dma_buffer_space_available(struct iio_buffer *buf)
+{
+       struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buf);
+       struct iio_dma_buffer_block *block;
+       size_t space_available = 0;
+
+       mutex_lock(&queue->lock);
+       if (queue->fileio.active_block) {
+               space_available += queue->fileio.active_block->block.size -
+                       queue->fileio.pos;
+       }
+
+       spin_lock_irq(&queue->list_lock);
+       list_for_each_entry(block, &queue->outgoing, head)
+               space_available += block->block.size;
+       spin_unlock_irq(&queue->list_lock);
+       mutex_unlock(&queue->lock);
+
+       return space_available;
+}
+EXPORT_SYMBOL_GPL(iio_dma_buffer_space_available);
+
 int iio_dma_buffer_alloc_blocks(struct iio_buffer *buffer,
        struct iio_buffer_block_alloc_req *req)
 {
diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index 0736526b36ec..013cc7c1ecf4 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -37,6 +37,8 @@ struct dmaengine_buffer {
 
        size_t align;
        size_t max_size;
+
+       bool is_tx;
 };
 
 static struct dmaengine_buffer *iio_buffer_to_dmaengine_buffer(
@@ -64,9 +66,12 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
        struct dmaengine_buffer *dmaengine_buffer =
                iio_buffer_to_dmaengine_buffer(&queue->buffer);
        struct dma_async_tx_descriptor *desc;
+       enum dma_transfer_direction direction;
        dma_cookie_t cookie;
 
-       block->block.bytes_used = min(block->block.size,
+       if (!dmaengine_buffer->is_tx)
+               block->block.bytes_used = block->block.size;
+       block->block.bytes_used = min(block->block.bytes_used,
                dmaengine_buffer->max_size);
        block->block.bytes_used = rounddown(block->block.bytes_used,
                dmaengine_buffer->align);
@@ -75,8 +80,10 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
                return 0;
        }
 
+       direction = dmaengine_buffer->is_tx ? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM;
+
        desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
-               block->phys_addr, block->block.bytes_used, DMA_DEV_TO_MEM,
+               block->phys_addr, block->block.bytes_used, direction,
                DMA_PREP_INTERRUPT);
        if (!desc)
                return -ENOMEM;
@@ -117,12 +124,14 @@ static void iio_dmaengine_buffer_release(struct iio_buffer *buf)
 
 static const struct iio_buffer_access_funcs iio_dmaengine_buffer_ops = {
        .read = iio_dma_buffer_read,
+       .write = iio_dma_buffer_write,
        .set_bytes_per_datum = iio_dma_buffer_set_bytes_per_datum,
        .set_length = iio_dma_buffer_set_length,
        .request_update = iio_dma_buffer_request_update,
        .enable = iio_dma_buffer_enable,
        .disable = iio_dma_buffer_disable,
        .data_available = iio_dma_buffer_data_available,
+       .space_available = iio_dma_buffer_space_available,
        .release = iio_dmaengine_buffer_release,
 
        .alloc_blocks = iio_dma_buffer_alloc_blocks,
@@ -162,6 +171,7 @@ static const struct attribute *iio_dmaengine_buffer_attrs[] = {
 /**
  * iio_dmaengine_buffer_alloc() - Allocate new buffer which uses DMAengine
  * @dev: Parent device for the buffer
+ * @direction: Set the direction of the data.
  * @channel: DMA channel name, typically "rx".
  * @ops: Custom iio_dma_buffer_ops, if NULL default ops will be used
  * @driver_data: Driver data to be passed to custom iio_dma_buffer_ops
@@ -174,11 +184,12 @@ static const struct attribute *iio_dmaengine_buffer_attrs[] = {
  * release it.
  */
 static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
-       const char *channel, const struct iio_dma_buffer_ops *ops,
-       void *driver_data)
+       enum iio_buffer_direction direction, const char *channel,
+       const struct iio_dma_buffer_ops *ops, void *driver_data)
 {
        struct dmaengine_buffer *dmaengine_buffer;
        unsigned int width, src_width, dest_width;
+       bool is_tx = (direction == IIO_BUFFER_DIRECTION_OUT);
        struct dma_slave_caps caps;
        struct dma_chan *chan;
        int ret;
 
@@ -187,6 +198,9 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
        if (!dmaengine_buffer)
                return ERR_PTR(-ENOMEM);
 
+       if (!channel)
+               channel = is_tx ? "tx" : "rx";
+
        chan = dma_request_chan(dev, channel);
        if (IS_ERR(chan)) {
                ret = PTR_ERR(chan);
@@ -212,6 +226,7 @@ static struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
        dmaengine_buffer->chan = chan;
        dmaengine_buffer->align = width;
        dmaengine_buffer->max_size = dma_get_max_seg_size(chan->device->dev);
+       dmaengine_buffer->is_tx = is_tx;
 
        iio_dma_buffer_init(&dmaengine_buffer->queue, chan->device->dev,
                ops ? ops : &iio_dmaengine_default_ops, driver_data);
@@ -251,6 +266,7 @@ static void __devm_iio_dmaengine_buffer_free(struct device *dev, void *res)
 /**
  * devm_iio_dmaengine_buffer_alloc() - Resource-managed iio_dmaengine_buffer_alloc()
  * @dev: Parent device for the buffer
+ * @direction: Set the direction of the data.
  * @channel: DMA channel name, typically "rx".
  * @ops: Custom iio_dma_buffer_ops, if NULL default ops will be used
  * @driver_data: Driver data to be passed to custom iio_dma_buffer_ops
@@ -262,8 +278,8 @@ static void __devm_iio_dmaengine_buffer_free(struct device *dev, void *res)
  * The buffer will be automatically de-allocated once the device gets destroyed.
  */
 struct iio_buffer *devm_iio_dmaengine_buffer_alloc(struct device *dev,
-       const char *channel, const struct iio_dma_buffer_ops *ops,
-       void *driver_data)
+       enum iio_buffer_direction direction, const char *channel,
+       const struct iio_dma_buffer_ops *ops, void *driver_data)
 {
        struct iio_buffer **bufferp, *buffer;
 
@@ -272,7 +288,8 @@ struct iio_buffer *devm_iio_dmaengine_buffer_alloc(struct device *dev,
        if (!bufferp)
                return ERR_PTR(-ENOMEM);
 
-       buffer = iio_dmaengine_buffer_alloc(dev, channel, ops, driver_data);
+       buffer = iio_dmaengine_buffer_alloc(dev, direction, channel, ops,
+               driver_data);
        if (IS_ERR(buffer)) {
                devres_free(bufferp);
                return buffer;
diff --git a/include/linux/iio/buffer-dma.h b/include/linux/iio/buffer-dma.h
index c23fad847f0d..0fd844c7f47a 100644
--- a/include/linux/iio/buffer-dma.h
+++ b/include/linux/iio/buffer-dma.h
@@ -112,6 +112,8 @@ struct iio_dma_buffer_queue {
 
        void *driver_data;
 
+       unsigned int poll_wakup_flags;
+
        unsigned int num_blocks;
        struct iio_dma_buffer_block **blocks;
        unsigned int max_offset;
@@ -145,6 +147,10 @@ int iio_dma_buffer_set_bytes_per_datum(struct iio_buffer *buffer, size_t bpd);
 int iio_dma_buffer_set_length(struct iio_buffer *buffer, unsigned int length);
 int iio_dma_buffer_request_update(struct iio_buffer *buffer);
 
+int iio_dma_buffer_write(struct iio_buffer *buf, size_t n,
+       const char __user *user_buffer);
+size_t iio_dma_buffer_space_available(struct iio_buffer *buf);
+
 int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
        struct device *dma_dev, const struct iio_dma_buffer_ops *ops,
        void *driver_data);
diff --git a/include/linux/iio/buffer-dmaengine.h b/include/linux/iio/buffer-dmaengine.h
index 464adee95d4b..009a601c406c 100644
--- a/include/linux/iio/buffer-dmaengine.h
+++ b/include/linux/iio/buffer-dmaengine.h
@@ -7,12 +7,13 @@
 #ifndef __IIO_DMAENGINE_H__
 #define __IIO_DMAENGINE_H__
 
+#include <linux/iio/buffer.h>
+
 struct iio_dma_buffer_ops;
-struct iio_buffer;
 struct device;
 
 struct iio_buffer *devm_iio_dmaengine_buffer_alloc(struct device *dev,
-       const char *channel, const struct iio_dma_buffer_ops *ops,
-       void *driver_data);
+       enum iio_buffer_direction direction, const char *channel,
+       const struct iio_dma_buffer_ops *ops, void *driver_data);
 
 #endif

From patchwork Fri Feb 12 10:20:21 2021
X-Patchwork-Submitter: Alexandru Ardelean
X-Patchwork-Id: 12084987
From: Alexandru Ardelean
Subject: [RFC PATCH 5/5] iio: buffer-dma: add support for cyclic DMA transfers
Date: Fri, 12 Feb 2021 12:20:21 +0200
Message-ID: <20210212102021.47276-6-alexandru.ardelean@analog.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210212102021.47276-1-alexandru.ardelean@analog.com>
References: <20210212102021.47276-1-alexandru.ardelean@analog.com>
MIME-Version: 1.0
Precedence: bulk
X-Mailing-List: linux-iio@vger.kernel.org

From: Lars-Peter Clausen

This change adds support for cyclic DMA transfers using the IIO buffer DMA infrastructure.
To do this, userspace must set the IIO_BUFFER_BLOCK_FLAG_CYCLIC flag on a block when enqueueing it via the ENQUEUE_BLOCK ioctl().

Signed-off-by: Lars-Peter Clausen
Signed-off-by: Alexandru Ardelean
---
 .../buffer/industrialio-buffer-dmaengine.c | 24 ++++++++++++-------
 include/uapi/linux/iio/buffer.h            |  1 +
 2 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
index 013cc7c1ecf4..94c93a636ad4 100644
--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
@@ -82,14 +82,22 @@ static int iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue *queue,
 
        direction = dmaengine_buffer->is_tx ? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM;
 
-       desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
-               block->phys_addr, block->block.bytes_used, direction,
-               DMA_PREP_INTERRUPT);
-       if (!desc)
-               return -ENOMEM;
-
-       desc->callback_result = iio_dmaengine_buffer_block_done;
-       desc->callback_param = block;
+       if (block->block.flags & IIO_BUFFER_BLOCK_FLAG_CYCLIC) {
+               desc = dmaengine_prep_dma_cyclic(dmaengine_buffer->chan,
+                       block->phys_addr, block->block.bytes_used,
+                       block->block.bytes_used, direction, 0);
+               if (!desc)
+                       return -ENOMEM;
+       } else {
+               desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
+                       block->phys_addr, block->block.bytes_used, direction,
+                       DMA_PREP_INTERRUPT);
+               if (!desc)
+                       return -ENOMEM;
+
+               desc->callback_result = iio_dmaengine_buffer_block_done;
+               desc->callback_param = block;
+       }
 
        cookie = dmaengine_submit(desc);
        if (dma_submit_error(cookie))
diff --git a/include/uapi/linux/iio/buffer.h b/include/uapi/linux/iio/buffer.h
index 70ad3aea01ea..0e0c95f1c38b 100644
--- a/include/uapi/linux/iio/buffer.h
+++ b/include/uapi/linux/iio/buffer.h
@@ -13,6 +13,7 @@ struct iio_buffer_block_alloc_req {
 };
 
 #define IIO_BUFFER_BLOCK_FLAG_TIMESTAMP_VALID  (1 << 0)
+#define IIO_BUFFER_BLOCK_FLAG_CYCLIC           (1 << 1)
 
 struct iio_buffer_block {
        __u32 id;