From patchwork Tue Jan 19 17:09:26 2021
X-Patchwork-Submitter: Douglas Gilbert
X-Patchwork-Id: 12030519
From: Douglas Gilbert
To: linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
    target-devel@vger.kernel.org, linux-rdma@vger.kernel.org,
    linux-kernel@vger.kernel.org
Cc: martin.petersen@oracle.com, jejb@linux.vnet.ibm.com,
    bostroesser@gmail.com, ddiss@suse.de, bvanassche@acm.org
Subject: [PATCH 1/3] scatterlist: add sgl_copy_sgl() function
Date: Tue, 19 Jan 2021 12:09:26 -0500
Message-Id: <20210119170928.79805-2-dgilbert@interlog.com>
In-Reply-To: <20210119170928.79805-1-dgilbert@interlog.com>
References: <20210119170928.79805-1-dgilbert@interlog.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Both the SCSI and NVMe subsystems receive user data from the block layer
in scatterlist_s (aka scatter gather lists (sgl), which are often arrays).
If drivers in those subsystems represent storage (e.g. a ramdisk) or cache
"hot" user data, then they may also choose to use scatterlist_s. Currently
there are no sgl to sgl operations in the kernel. Start with an sgl to sgl
copy. Copying stops when the first of these is exhausted: the requested
number of bytes, the source sgl, or the destination sgl. So the destination
sgl will _not_ grow.
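A minimal usage sketch (not part of this patch), assuming a hypothetical
ramdisk-style driver with its own backing-store sgl; the names below
(example_write_to_store, store_sgl, req_sgl, nr_bytes) are invented for
illustration only:

static size_t example_write_to_store(struct scatterlist *store_sgl,
				     unsigned int store_nents,
				     struct scatterlist *req_sgl,
				     unsigned int req_nents,
				     unsigned long long lba, size_t nr_bytes)
{
	off_t d_skip = lba * 512;	/* byte offset of the LBA in the store */

	/*
	 * Returns the number of bytes actually copied; the result may be
	 * short if either sgl is exhausted before nr_bytes are copied.
	 */
	return sgl_copy_sgl(store_sgl, store_nents, d_skip,
			    req_sgl, req_nents, 0, nr_bytes);
}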
Reviewed-by: Bodo Stroesser
Signed-off-by: Douglas Gilbert
---
 include/linux/scatterlist.h |  4 ++
 lib/scatterlist.c           | 74 +++++++++++++++++++++++++++++++++++++
 2 files changed, 78 insertions(+)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 6f70572b2938..22111ee21383 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -320,6 +320,10 @@ size_t sg_pcopy_to_buffer(struct scatterlist *sgl, unsigned int nents,
 size_t sg_zero_buffer(struct scatterlist *sgl, unsigned int nents,
 		      size_t buflen, off_t skip);
 
+size_t sgl_copy_sgl(struct scatterlist *d_sgl, unsigned int d_nents, off_t d_skip,
+		    struct scatterlist *s_sgl, unsigned int s_nents, off_t s_skip,
+		    size_t n_bytes);
+
 /*
  * Maximum number of entries that will be allocated in one piece, if
  * a list larger than this is required then chaining will be utilized.
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index a59778946404..782bcfe72c60 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -1058,3 +1058,77 @@ size_t sg_zero_buffer(struct scatterlist *sgl, unsigned int nents,
 	return offset;
 }
 EXPORT_SYMBOL(sg_zero_buffer);
+
+/**
+ * sgl_copy_sgl - Copy over a destination sgl from a source sgl
+ * @d_sgl: Destination sgl
+ * @d_nents: Number of SG entries in destination sgl
+ * @d_skip: Number of bytes to skip in destination before starting
+ * @s_sgl: Source sgl
+ * @s_nents: Number of SG entries in source sgl
+ * @s_skip: Number of bytes to skip in source before starting
+ * @n_bytes: The (maximum) number of bytes to copy
+ *
+ * Returns:
+ *   The number of copied bytes.
+ *
+ * Notes:
+ *   Destination arguments appear before the source arguments, as with memcpy().
+ *
+ *   Stops copying if either d_sgl, s_sgl or n_bytes is exhausted.
+ *
+ *   Since memcpy() is used, overlapping copies (where d_sgl and s_sgl belong
+ *   to the same sgl and the copy regions overlap) are not supported.
+ *
+ *   Large copies are broken into copy segments whose sizes may vary. Those
+ *   copy segment sizes are chosen by the min3() statement in the code below.
+ *   Since SG_MITER_ATOMIC is used for both sides, each copy segment is started
+ *   with kmap_atomic() [in sg_miter_next()] and completed with kunmap_atomic()
+ *   [in sg_miter_stop()]. This means pre-emption is inhibited for relatively
+ *   short periods even in very large copies.
+ *
+ *   If d_skip is large, potentially spanning multiple d_nents then some
+ *   integer arithmetic to adjust d_sgl may improve performance. For example
+ *   if d_sgl is built using sgl_alloc_order(chainable=false) then the sgl
+ *   will be an array with equally sized segments facilitating that
+ *   arithmetic. The suggestion applies to s_skip, s_sgl and s_nents as well.
+ *
+ **/
+size_t sgl_copy_sgl(struct scatterlist *d_sgl, unsigned int d_nents, off_t d_skip,
+		    struct scatterlist *s_sgl, unsigned int s_nents, off_t s_skip,
+		    size_t n_bytes)
+{
+	size_t len;
+	size_t offset = 0;
+	struct sg_mapping_iter d_iter, s_iter;
+
+	if (n_bytes == 0)
+		return 0;
+	sg_miter_start(&s_iter, s_sgl, s_nents, SG_MITER_ATOMIC | SG_MITER_FROM_SG);
+	sg_miter_start(&d_iter, d_sgl, d_nents, SG_MITER_ATOMIC | SG_MITER_TO_SG);
+	if (!sg_miter_skip(&s_iter, s_skip))
+		goto fini;
+	if (!sg_miter_skip(&d_iter, d_skip))
+		goto fini;
+
+	while (offset < n_bytes) {
+		if (!sg_miter_next(&s_iter))
+			break;
+		if (!sg_miter_next(&d_iter))
+			break;
+		len = min3(d_iter.length, s_iter.length, n_bytes - offset);
+
+		memcpy(d_iter.addr, s_iter.addr, len);
+		offset += len;
+		/* LIFO order (stop d_iter before s_iter) needed with SG_MITER_ATOMIC */
+		d_iter.consumed = len;
+		sg_miter_stop(&d_iter);
+		s_iter.consumed = len;
+		sg_miter_stop(&s_iter);
+	}
+fini:
+	sg_miter_stop(&d_iter);
+	sg_miter_stop(&s_iter);
+	return offset;
+}
+EXPORT_SYMBOL(sgl_copy_sgl);

From patchwork Tue Jan 19 17:09:27 2021
X-Patchwork-Submitter: Douglas Gilbert
X-Patchwork-Id: 12030523
From: Douglas Gilbert
To: linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
    target-devel@vger.kernel.org, linux-rdma@vger.kernel.org,
    linux-kernel@vger.kernel.org
Cc: martin.petersen@oracle.com, jejb@linux.vnet.ibm.com,
    bostroesser@gmail.com, ddiss@suse.de, bvanassche@acm.org
Subject: [PATCH 2/3] scatterlist: add sgl_equal_sgl() function
Date: Tue, 19 Jan 2021 12:09:27 -0500
Message-Id: <20210119170928.79805-3-dgilbert@interlog.com>
In-Reply-To: <20210119170928.79805-1-dgilbert@interlog.com>
References: <20210119170928.79805-1-dgilbert@interlog.com>
X-Mailing-List: linux-rdma@vger.kernel.org

After enabling copies between scatter gather lists (sgl_s), another
storage-related operation is to compare two sgl_s for equality. This new
function is designed to partially implement NVMe's Compare command and the
SCSI VERIFY(BYTCHK=1) command.

Like memcmp(), this function begins scanning at the start (of each sgl),
returns false on the first miscompare, and stops comparing. The
sgl_equal_sgl_idx() function additionally yields the index (i.e. byte
position) of the first miscompare. Its additional parameter,
miscompare_idx, is a pointer. If it is non-NULL and a miscompare is
detected (i.e. the function returns false), then the byte index of the
first miscompare is written to *miscompare_idx. Knowing the location of
the first miscompare is needed to properly implement the SCSI COMPARE AND
WRITE command.

Reviewed-by: Bodo Stroesser
Signed-off-by: Douglas Gilbert
---
 include/linux/scatterlist.h |   8 +++
 lib/scatterlist.c           | 110 ++++++++++++++++++++++++++++++++++++
 2 files changed, 118 insertions(+)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 22111ee21383..40449ce96a18 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -324,6 +324,14 @@ size_t sgl_copy_sgl(struct scatterlist *d_sgl, unsigned int d_nents, off_t d_ski
 		    struct scatterlist *s_sgl, unsigned int s_nents, off_t s_skip,
 		    size_t n_bytes);
 
+bool sgl_equal_sgl(struct scatterlist *x_sgl, unsigned int x_nents, off_t x_skip,
+		   struct scatterlist *y_sgl, unsigned int y_nents, off_t y_skip,
+		   size_t n_bytes);
+
+bool sgl_equal_sgl_idx(struct scatterlist *x_sgl, unsigned int x_nents, off_t x_skip,
+		       struct scatterlist *y_sgl, unsigned int y_nents, off_t y_skip,
+		       size_t n_bytes, size_t *miscompare_idx);
+
 /*
  * Maximum number of entries that will be allocated in one piece, if
  * a list larger than this is required then chaining will be utilized.
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 782bcfe72c60..a8672bc6d883 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -1132,3 +1132,113 @@ size_t sgl_copy_sgl(struct scatterlist *d_sgl, unsigned int d_nents, off_t d_ski
 	return offset;
 }
 EXPORT_SYMBOL(sgl_copy_sgl);
+
+/**
+ * sgl_equal_sgl_idx - check if x and y (both sgl_s) compare equal, report
+ *		       index for first unequal bytes
+ * @x_sgl: x (left) sgl
+ * @x_nents: Number of SG entries in x (left) sgl
+ * @x_skip: Number of bytes to skip in x (left) before starting
+ * @y_sgl: y (right) sgl
+ * @y_nents: Number of SG entries in y (right) sgl
+ * @y_skip: Number of bytes to skip in y (right) before starting
+ * @n_bytes: The (maximum) number of bytes to compare
+ * @miscompare_idx: if return is false, index of first miscompare written
+ *		    to this pointer (if non-NULL). Value will be < n_bytes
+ *
+ * Returns:
+ *   true if x and y compare equal before x, y or n_bytes is exhausted.
+ *   Otherwise on a miscompare, returns false (and stops comparing). If return
+ *   is false and miscompare_idx is non-NULL, then index of first miscompared
+ *   byte written to *miscompare_idx.
+ *
+ * Notes:
+ *   x and y are symmetrical: they can be swapped and the result is the same.
+ *
+ *   Implementation is based on memcmp(). x and y segments may overlap.
+ *
+ *   The notes in sgl_copy_sgl() about large sgl_s apply here as well.
+ *
+ **/
+bool sgl_equal_sgl_idx(struct scatterlist *x_sgl, unsigned int x_nents, off_t x_skip,
+		       struct scatterlist *y_sgl, unsigned int y_nents, off_t y_skip,
+		       size_t n_bytes, size_t *miscompare_idx)
+{
+	bool equ = true;
+	size_t len;
+	size_t offset = 0;
+	struct sg_mapping_iter x_iter, y_iter;
+
+	if (n_bytes == 0)
+		return true;
+	sg_miter_start(&x_iter, x_sgl, x_nents, SG_MITER_ATOMIC | SG_MITER_FROM_SG);
+	sg_miter_start(&y_iter, y_sgl, y_nents, SG_MITER_ATOMIC | SG_MITER_FROM_SG);
+	if (!sg_miter_skip(&x_iter, x_skip))
+		goto fini;
+	if (!sg_miter_skip(&y_iter, y_skip))
+		goto fini;
+
+	while (offset < n_bytes) {
+		if (!sg_miter_next(&x_iter))
+			break;
+		if (!sg_miter_next(&y_iter))
+			break;
+		len = min3(x_iter.length, y_iter.length, n_bytes - offset);
+
+		equ = !memcmp(x_iter.addr, y_iter.addr, len);
+		if (!equ)
+			goto fini;
+		offset += len;
+		/* LIFO order is important when SG_MITER_ATOMIC is used */
+		y_iter.consumed = len;
+		sg_miter_stop(&y_iter);
+		x_iter.consumed = len;
+		sg_miter_stop(&x_iter);
+	}
+fini:
+	if (miscompare_idx && !equ) {
+		u8 *xp = x_iter.addr;
+		u8 *yp = y_iter.addr;
+		u8 *x_endp;
+
+		for (x_endp = xp + len ; xp < x_endp; ++xp, ++yp) {
+			if (*xp != *yp)
+				break;
+		}
+		*miscompare_idx = offset + len - (x_endp - xp);
+	}
+	sg_miter_stop(&y_iter);
+	sg_miter_stop(&x_iter);
+	return equ;
+}
+EXPORT_SYMBOL(sgl_equal_sgl_idx);
+
+/**
+ * sgl_equal_sgl - check if x and y (both sgl_s) compare equal
+ * @x_sgl: x (left) sgl
+ * @x_nents: Number of SG entries in x (left) sgl
+ * @x_skip: Number of bytes to skip in x (left) before starting
+ * @y_sgl: y (right) sgl
+ * @y_nents: Number of SG entries in y (right) sgl
+ * @y_skip: Number of bytes to skip in y (right) before starting
+ * @n_bytes: The (maximum) number of bytes to compare
+ *
+ * Returns:
+ *   true if x and y compare equal before x, y or n_bytes is exhausted.
+ *   Otherwise on a miscompare, returns false (and stops comparing).
+ *
+ * Notes:
+ *   x and y are symmetrical: they can be swapped and the result is the same.
+ *
+ *   Implementation is based on memcmp(). x and y segments may overlap.
+ *
+ *   The notes in sgl_copy_sgl() about large sgl_s apply here as well.
+ *
+ **/
+bool sgl_equal_sgl(struct scatterlist *x_sgl, unsigned int x_nents, off_t x_skip,
+		   struct scatterlist *y_sgl, unsigned int y_nents, off_t y_skip,
+		   size_t n_bytes)
+{
+	return sgl_equal_sgl_idx(x_sgl, x_nents, x_skip, y_sgl, y_nents, y_skip, n_bytes, NULL);
+}
+EXPORT_SYMBOL(sgl_equal_sgl);

From patchwork Tue Jan 19 17:09:28 2021
X-Patchwork-Submitter: Douglas Gilbert
X-Patchwork-Id: 12030521
From: Douglas Gilbert
To: linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
    target-devel@vger.kernel.org, linux-rdma@vger.kernel.org,
    linux-kernel@vger.kernel.org
Cc: martin.petersen@oracle.com, jejb@linux.vnet.ibm.com,
    bostroesser@gmail.com, ddiss@suse.de, bvanassche@acm.org
Subject: [PATCH 3/3] scatterlist: add sgl_memset()
Date: Tue, 19 Jan 2021 12:09:28 -0500
Message-Id: <20210119170928.79805-4-dgilbert@interlog.com>
In-Reply-To: <20210119170928.79805-1-dgilbert@interlog.com>
References: <20210119170928.79805-1-dgilbert@interlog.com>
X-Mailing-List: linux-rdma@vger.kernel.org

The existing sg_zero_buffer() function is a bit restrictive. For example,
protection information (PI) blocks are usually initialized to 0xff bytes.

As its name suggests, sgl_memset() is modelled on memset(). One difference
is the type of the val argument, which is u8 rather than int. It also
returns the number of bytes (over)written.

Change the implementation of sg_zero_buffer() to call this new function.
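A hypothetical usage sketch (not part of this patch): filling a
protection-information sgl with 0xff bytes. The names pi_sgl, pi_nents and
the 8-bytes-of-PI-per-block layout are assumptions for illustration only:

static void example_init_pi(struct scatterlist *pi_sgl, unsigned int pi_nents,
			    unsigned int num_lbs)
{
	size_t pi_bytes = (size_t)num_lbs * 8;	/* 8 PI bytes per block (example) */
	size_t written;

	/*
	 * Returns the number of bytes actually set; may be short if the
	 * sgl holds fewer than pi_bytes bytes after the (zero) skip.
	 */
	written = sgl_memset(pi_sgl, pi_nents, 0, 0xff, pi_bytes);
	WARN_ON(written < pi_bytes);
}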
Reviewed-by: Bodo Stroesser
Signed-off-by: Douglas Gilbert
---
 include/linux/scatterlist.h | 20 +++++++++-
 lib/scatterlist.c           | 79 +++++++++++++++++++++----------------
 2 files changed, 62 insertions(+), 37 deletions(-)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 40449ce96a18..04be80d1a07c 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -317,8 +317,6 @@ size_t sg_pcopy_from_buffer(struct scatterlist *sgl, unsigned int nents,
 			    const void *buf, size_t buflen, off_t skip);
 size_t sg_pcopy_to_buffer(struct scatterlist *sgl, unsigned int nents,
 			  void *buf, size_t buflen, off_t skip);
-size_t sg_zero_buffer(struct scatterlist *sgl, unsigned int nents,
-		      size_t buflen, off_t skip);
 
 size_t sgl_copy_sgl(struct scatterlist *d_sgl, unsigned int d_nents, off_t d_skip,
 		    struct scatterlist *s_sgl, unsigned int s_nents, off_t s_skip,
@@ -332,6 +330,24 @@ bool sgl_equal_sgl_idx(struct scatterlist *x_sgl, unsigned int x_nents, off_t x_
 		       struct scatterlist *y_sgl, unsigned int y_nents, off_t y_skip,
 		       size_t n_bytes, size_t *miscompare_idx);
 
+size_t sgl_memset(struct scatterlist *sgl, unsigned int nents, off_t skip,
+		  u8 val, size_t n_bytes);
+
+/**
+ * sg_zero_buffer - Zero-out a part of a SG list
+ * @sgl: The SG list
+ * @nents: Number of SG entries
+ * @buflen: The number of bytes to zero out
+ * @skip: Number of bytes to skip before zeroing
+ *
+ * Returns the number of bytes zeroed.
+ **/
+static inline size_t sg_zero_buffer(struct scatterlist *sgl, unsigned int nents,
+				    size_t buflen, off_t skip)
+{
+	return sgl_memset(sgl, nents, skip, 0, buflen);
+}
+
 /*
  * Maximum number of entries that will be allocated in one piece, if
  * a list larger than this is required then chaining will be utilized.
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index a8672bc6d883..cb4d59111c78 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -1024,41 +1024,6 @@ size_t sg_pcopy_to_buffer(struct scatterlist *sgl, unsigned int nents,
 }
 EXPORT_SYMBOL(sg_pcopy_to_buffer);
 
-/**
- * sg_zero_buffer - Zero-out a part of a SG list
- * @sgl: The SG list
- * @nents: Number of SG entries
- * @buflen: The number of bytes to zero out
- * @skip: Number of bytes to skip before zeroing
- *
- * Returns the number of bytes zeroed.
- **/
-size_t sg_zero_buffer(struct scatterlist *sgl, unsigned int nents,
-		      size_t buflen, off_t skip)
-{
-	unsigned int offset = 0;
-	struct sg_mapping_iter miter;
-	unsigned int sg_flags = SG_MITER_ATOMIC | SG_MITER_TO_SG;
-
-	sg_miter_start(&miter, sgl, nents, sg_flags);
-
-	if (!sg_miter_skip(&miter, skip))
-		return false;
-
-	while (offset < buflen && sg_miter_next(&miter)) {
-		unsigned int len;
-
-		len = min(miter.length, buflen - offset);
-		memset(miter.addr, 0, len);
-
-		offset += len;
-	}
-
-	sg_miter_stop(&miter);
-	return offset;
-}
-EXPORT_SYMBOL(sg_zero_buffer);
-
 /**
  * sgl_copy_sgl - Copy over a destination sgl from a source sgl
  * @d_sgl: Destination sgl
@@ -1242,3 +1207,47 @@ bool sgl_equal_sgl(struct scatterlist *x_sgl, unsigned int x_nents, off_t x_skip
 	return sgl_equal_sgl_idx(x_sgl, x_nents, x_skip, y_sgl, y_nents, y_skip, n_bytes, NULL);
 }
 EXPORT_SYMBOL(sgl_equal_sgl);
+
+/**
+ * sgl_memset - set byte 'val' up to n_bytes times on SG list
+ * @sgl: The SG list
+ * @nents: Number of SG entries in sgl
+ * @skip: Number of bytes to skip before starting
+ * @val: byte value to write to sgl
+ * @n_bytes: The (maximum) number of bytes to modify
+ *
+ * Returns:
+ *   The number of bytes written.
+ *
+ * Notes:
+ *   Stops writing if either sgl or n_bytes is exhausted. If n_bytes is
+ *   set to SIZE_MAX then val will be written to each byte until the end
+ *   of sgl.
+ *
+ *   The notes in sgl_copy_sgl() about large sgl_s apply here as well.
+ *
+ **/
+size_t sgl_memset(struct scatterlist *sgl, unsigned int nents, off_t skip,
+		  u8 val, size_t n_bytes)
+{
+	size_t offset = 0;
+	size_t len;
+	struct sg_mapping_iter miter;
+
+	if (n_bytes == 0)
+		return 0;
+	sg_miter_start(&miter, sgl, nents, SG_MITER_ATOMIC | SG_MITER_TO_SG);
+	if (!sg_miter_skip(&miter, skip))
+		goto fini;
+
+	while ((offset < n_bytes) && sg_miter_next(&miter)) {
+		len = min(miter.length, n_bytes - offset);
+		memset(miter.addr, val, len);
+		offset += len;
+	}
+fini:
+	sg_miter_stop(&miter);
+	return offset;
+}
+EXPORT_SYMBOL(sgl_memset);
+
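As a closing illustration (not part of any patch in this series), the new
helpers compose naturally into a SCSI COMPARE AND WRITE style sequence.
Everything below except sgl_equal_sgl_idx() and sgl_copy_sgl() is a made-up
name; a minimal sketch under those assumptions:

static int example_compare_and_write(struct scatterlist *store_sgl,
				     unsigned int store_nents, off_t lba_off,
				     struct scatterlist *cmp_sgl, unsigned int cmp_nents,
				     struct scatterlist *wr_sgl, unsigned int wr_nents,
				     size_t n_bytes)
{
	size_t miscompare_idx;

	/* verify phase: compare the backing store against the compare half */
	if (!sgl_equal_sgl_idx(store_sgl, store_nents, lba_off,
			       cmp_sgl, cmp_nents, 0, n_bytes, &miscompare_idx)) {
		/*
		 * A real driver would report miscompare_idx back in the
		 * sense data INFORMATION field; here just fail the call.
		 */
		return -EILSEQ;
	}

	/* write phase: copy the write half over the backing store */
	if (sgl_copy_sgl(store_sgl, store_nents, lba_off,
			 wr_sgl, wr_nents, 0, n_bytes) != n_bytes)
		return -EIO;
	return 0;
}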