From patchwork Fri Jun 23 22:55:02 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13291469
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, Alexander Duyck, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Willem de Bruijn, David Ahern,
    Matthew Wilcox, Jens Axboe, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Santosh Shilimkar,
    linux-rdma@vger.kernel.org, rds-devel@oss.oracle.com
Subject: [PATCH net-next v5 05/16] rds: Use sendmsg(MSG_SPLICE_PAGES) rather than sendpage
Date: Fri, 23 Jun 2023 23:55:02 +0100
Message-ID: <20230623225513.2732256-6-dhowells@redhat.com>
In-Reply-To: <20230623225513.2732256-1-dhowells@redhat.com>
References: <20230623225513.2732256-1-dhowells@redhat.com>
X-Mailing-List: linux-rdma@vger.kernel.org

When transmitting data, call down into TCP using a single sendmsg with
MSG_SPLICE_PAGES to indicate that content should be spliced.

To make this work, the data is assembled in a bio_vec array and attached to
a BVEC-type iterator.

Signed-off-by: David Howells
cc: Santosh Shilimkar
Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: Jens Axboe cc: Matthew Wilcox cc: linux-rdma@vger.kernel.org cc: rds-devel@oss.oracle.com cc: netdev@vger.kernel.org --- Notes: ver #4) - Reduce change to only call sendmsg on a page at a time. net/rds/tcp_send.c | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/net/rds/tcp_send.c b/net/rds/tcp_send.c index 8c4d1d6e9249..7d284ac7e81a 100644 --- a/net/rds/tcp_send.c +++ b/net/rds/tcp_send.c @@ -72,9 +72,10 @@ int rds_tcp_xmit(struct rds_connection *conn, struct rds_message *rm, { struct rds_conn_path *cp = rm->m_inc.i_conn_path; struct rds_tcp_connection *tc = cp->cp_transport_data; + struct msghdr msg = {}; + struct bio_vec bvec; int done = 0; int ret = 0; - int more; if (hdr_off == 0) { /* @@ -111,15 +112,17 @@ int rds_tcp_xmit(struct rds_connection *conn, struct rds_message *rm, goto out; } - more = rm->data.op_nents > 1 ? (MSG_MORE | MSG_SENDPAGE_NOTLAST) : 0; while (sg < rm->data.op_nents) { - int flags = MSG_DONTWAIT | MSG_NOSIGNAL | more; - - ret = tc->t_sock->ops->sendpage(tc->t_sock, - sg_page(&rm->data.op_sg[sg]), - rm->data.op_sg[sg].offset + off, - rm->data.op_sg[sg].length - off, - flags); + msg.msg_flags = MSG_SPLICE_PAGES | MSG_DONTWAIT | MSG_NOSIGNAL; + if (sg + 1 < rm->data.op_nents) + msg.msg_flags |= MSG_MORE; + + bvec_set_page(&bvec, sg_page(&rm->data.op_sg[sg]), + rm->data.op_sg[sg].length - off, + rm->data.op_sg[sg].offset + off); + iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, + rm->data.op_sg[sg].length - off); + ret = sock_sendmsg(tc->t_sock, &msg); rdsdebug("tcp sendpage %p:%u:%u ret %d\n", (void *)sg_page(&rm->data.op_sg[sg]), rm->data.op_sg[sg].offset + off, rm->data.op_sg[sg].length - off, ret); @@ -132,8 +135,6 @@ int rds_tcp_xmit(struct rds_connection *conn, struct rds_message *rm, off = 0; sg++; } - if (sg == rm->data.op_nents - 1) - more = 0; } out: