From patchwork Fri Feb 16 12:05:52 2018
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 10224571
From: Mark Brown
To: Charles Keepax
Cc: Mark Brown, broonie@kernel.org, jic23@kernel.org, knaack.h@gmx.de, lars@metafoo.de, pmeerw@pmeerw.net, linux-iio@vger.kernel.org, linux-kernel@vger.kernel.org, patches@opensource.cirrus.com
Subject: Applied "regmap: Move the handling for max_raw_read into regmap_raw_read" to the regmap tree
In-Reply-To: <20180215175220.2691-1-ckeepax@opensource.cirrus.com>
Date: Fri, 16 Feb 2018 12:05:52 +0000
X-Mailing-List: linux-iio@vger.kernel.org

The patch

   regmap: Move the handling for max_raw_read into regmap_raw_read

has been applied to the regmap tree at

   https://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap.git

All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix), however if
problems are discovered then the patch may be dropped or reverted.

You may get further e-mails resulting from automated or manual testing
and review of the tree, please engage with people reporting problems
and send followup patches addressing any issues that are reported
if needed.
If any updates are required or you are submitting further changes they
should be sent as incremental updates against current git; existing
patches will not be replaced.

Please add any relevant lists and maintainers to the CCs when replying
to this mail.

Thanks,
Mark

From 0645ba4331c2b02ba9907b1591ba722535890e9f Mon Sep 17 00:00:00 2001
From: Charles Keepax
Date: Thu, 15 Feb 2018 17:52:16 +0000
Subject: [PATCH] regmap: Move the handling for max_raw_read into
 regmap_raw_read

Currently regmap_bulk_read will split a read into chunks before
calling regmap_raw_read if max_raw_read is set. It is more logical for
this handling to be inside regmap_raw_read itself, as this removes the
need to keep re-implementing the chunking code, which would be the
same for all users of regmap_raw_read.

Signed-off-by: Charles Keepax
Signed-off-by: Mark Brown
---
 drivers/base/regmap/regmap.c | 90 +++++++++++++++++---------------------------
 1 file changed, 35 insertions(+), 55 deletions(-)

diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
index f075c05859b0..0cc7387008c9 100644
--- a/drivers/base/regmap/regmap.c
+++ b/drivers/base/regmap/regmap.c
@@ -2542,18 +2542,45 @@ int regmap_raw_read(struct regmap *map, unsigned int reg, void *val,
 	if (regmap_volatile_range(map, reg, val_count) || map->cache_bypass ||
 	    map->cache_type == REGCACHE_NONE) {
+		int chunk_stride = map->reg_stride;
+		size_t chunk_size = val_bytes;
+		size_t chunk_count = val_count;
+
 		if (!map->bus->read) {
 			ret = -ENOTSUPP;
 			goto out;
 		}
-		if (map->max_raw_read && map->max_raw_read < val_len) {
-			ret = -E2BIG;
-			goto out;
+
+		if (!map->use_single_read) {
+			if (map->max_raw_read)
+				chunk_size = map->max_raw_read;
+			else
+				chunk_size = val_len;
+			if (chunk_size % val_bytes)
+				chunk_size -= chunk_size % val_bytes;
+			chunk_count = val_len / chunk_size;
+			chunk_stride *= chunk_size / val_bytes;
 		}
 
-		/* Physical block read if there's no cache involved */
-		ret = _regmap_raw_read(map, reg, val, val_len);
+		/* Read bytes that fit into a multiple of chunk_size */
+		for (i = 0; i < chunk_count; i++) {
+			ret = _regmap_raw_read(map,
+					       reg + (i * chunk_stride),
+					       val + (i * chunk_size),
+					       chunk_size);
+			if (ret != 0)
+				return ret;
+		}
+
+		/* Read remaining bytes */
+		if (chunk_size * i < val_len) {
+			ret = _regmap_raw_read(map,
+					       reg + (i * chunk_stride),
+					       val + (i * chunk_size),
+					       val_len - i * chunk_size);
+			if (ret != 0)
+				return ret;
+		}
 	} else {
 		/* Otherwise go word by word for the cache; should be low
 		 * cost as we expect to hit the cache.
@@ -2655,56 +2682,9 @@ int regmap_bulk_read(struct regmap *map, unsigned int reg, void *val,
 		return -EINVAL;
 
 	if (map->bus && map->format.parse_inplace && (vol ||
 	    map->cache_type == REGCACHE_NONE)) {
-		/*
-		 * Some devices does not support bulk read, for
-		 * them we have a series of single read operations.
-		 */
-		size_t total_size = val_bytes * val_count;
-
-		if (!map->use_single_read &&
-		    (!map->max_raw_read || map->max_raw_read > total_size)) {
-			ret = regmap_raw_read(map, reg, val,
-					      val_bytes * val_count);
-			if (ret != 0)
-				return ret;
-		} else {
-			/*
-			 * Some devices do not support bulk read or do not
-			 * support large bulk reads, for them we have a series
-			 * of read operations.
-			 */
-			int chunk_stride = map->reg_stride;
-			size_t chunk_size = val_bytes;
-			size_t chunk_count = val_count;
-
-			if (!map->use_single_read) {
-				chunk_size = map->max_raw_read;
-				if (chunk_size % val_bytes)
-					chunk_size -= chunk_size % val_bytes;
-				chunk_count = total_size / chunk_size;
-				chunk_stride *= chunk_size / val_bytes;
-			}
-
-			/* Read bytes that fit into a multiple of chunk_size */
-			for (i = 0; i < chunk_count; i++) {
-				ret = regmap_raw_read(map,
-						      reg + (i * chunk_stride),
-						      val + (i * chunk_size),
-						      chunk_size);
-				if (ret != 0)
-					return ret;
-			}
-
-			/* Read remaining bytes */
-			if (chunk_size * i < total_size) {
-				ret = regmap_raw_read(map,
-						      reg + (i * chunk_stride),
-						      val + (i * chunk_size),
-						      total_size - i * chunk_size);
-				if (ret != 0)
-					return ret;
-			}
-		}
+		ret = regmap_raw_read(map, reg, val, val_bytes * val_count);
+		if (ret != 0)
+			return ret;
 
 		for (i = 0; i < val_count * val_bytes; i += val_bytes)
 			map->format.parse_inplace(val + i);