From patchwork Sat Jun 17 12:11:40 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13283551
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, Alexander Duyck, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Willem de Bruijn, David Ahern,
    Matthew Wilcox, Jens Axboe, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Karsten Graul, Wenjia Zhang,
    Jan Karcher, "D. Wythe", Tony Lu, Wen Gu, linux-s390@vger.kernel.org
Subject: [PATCH net-next v2 11/17] smc: Drop smc_sendpage() in favour of smc_sendmsg() + MSG_SPLICE_PAGES
Date: Sat, 17 Jun 2023 13:11:40 +0100
Message-ID: <20230617121146.716077-12-dhowells@redhat.com>
In-Reply-To: <20230617121146.716077-1-dhowells@redhat.com>
References: <20230617121146.716077-1-dhowells@redhat.com>
Drop the smc_sendpage() code as smc_sendmsg() just passes the call down to
the underlying TCP socket and smc_tx_sendpage() is just a wrapper around
its sendmsg implementation.

Signed-off-by: David Howells
cc: Karsten Graul
cc: Wenjia Zhang
cc: Jan Karcher
cc: "D. Wythe"
cc: Tony Lu
cc: Wen Gu
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: linux-s390@vger.kernel.org
cc: netdev@vger.kernel.org
---
 net/smc/af_smc.c    | 29 -----------------------------
 net/smc/smc_stats.c |  2 +-
 net/smc/smc_stats.h |  1 -
 net/smc/smc_tx.c    | 22 +---------------------
 net/smc/smc_tx.h    |  2 --
 5 files changed, 2 insertions(+), 54 deletions(-)

diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
index 538e9c6ec8c9..a7f887d91d89 100644
--- a/net/smc/af_smc.c
+++ b/net/smc/af_smc.c
@@ -3133,34 +3133,6 @@ static int smc_ioctl(struct socket *sock, unsigned int cmd,
 	return put_user(answ, (int __user *)arg);
 }
 
-static ssize_t smc_sendpage(struct socket *sock, struct page *page,
-			    int offset, size_t size, int flags)
-{
-	struct sock *sk = sock->sk;
-	struct smc_sock *smc;
-	int rc = -EPIPE;
-
-	smc = smc_sk(sk);
-	lock_sock(sk);
-	if (sk->sk_state != SMC_ACTIVE) {
-		release_sock(sk);
-		goto out;
-	}
-	release_sock(sk);
-	if (smc->use_fallback) {
-		rc = kernel_sendpage(smc->clcsock, page, offset,
-				     size, flags);
-	} else {
-		lock_sock(sk);
-		rc = smc_tx_sendpage(smc, page, offset, size, flags);
-		release_sock(sk);
-		SMC_STAT_INC(smc, sendpage_cnt);
-	}
-
-out:
-	return rc;
-}
-
 /* Map the affected portions of the rmbe into an spd, note the number of bytes
  * to splice in conn->splice_pending, and press 'go'. Delays consumer cursor
  * updates till whenever a respective page has been fully processed.
@@ -3232,7 +3204,6 @@ static const struct proto_ops smc_sock_ops = {
 	.sendmsg	= smc_sendmsg,
 	.recvmsg	= smc_recvmsg,
 	.mmap		= sock_no_mmap,
-	.sendpage	= smc_sendpage,
 	.splice_read	= smc_splice_read,
 };
 
diff --git a/net/smc/smc_stats.c b/net/smc/smc_stats.c
index e80e34f7ac15..ca14c0f3a07d 100644
--- a/net/smc/smc_stats.c
+++ b/net/smc/smc_stats.c
@@ -227,7 +227,7 @@ static int smc_nl_fill_stats_tech_data(struct sk_buff *skb,
 			      SMC_NLA_STATS_PAD))
 		goto errattr;
 	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_T_SENDPAGE_CNT,
-			      smc_tech->sendpage_cnt,
+			      0,
 			      SMC_NLA_STATS_PAD))
 		goto errattr;
 	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_T_CORK_CNT,
diff --git a/net/smc/smc_stats.h b/net/smc/smc_stats.h
index 84b7ecd8c05c..b60fe1eb37ab 100644
--- a/net/smc/smc_stats.h
+++ b/net/smc/smc_stats.h
@@ -71,7 +71,6 @@ struct smc_stats_tech {
 	u64			clnt_v2_succ_cnt;
 	u64			srv_v1_succ_cnt;
 	u64			srv_v2_succ_cnt;
-	u64			sendpage_cnt;
 	u64			urg_data_cnt;
 	u64			splice_cnt;
 	u64			cork_cnt;
diff --git a/net/smc/smc_tx.c b/net/smc/smc_tx.c
index 9b9e0a190734..5147207808e5 100644
--- a/net/smc/smc_tx.c
+++ b/net/smc/smc_tx.c
@@ -167,8 +167,7 @@ static bool smc_tx_should_cork(struct smc_sock *smc, struct msghdr *msg)
 	 * sndbuf_space is still available. The applications
 	 * should known how/when to uncork it.
 	 */
-	if ((msg->msg_flags & MSG_MORE ||
-	     smc_tx_is_corked(smc)) &&
+	if ((msg->msg_flags & MSG_MORE || smc_tx_is_corked(smc)) &&
 	    atomic_read(&conn->sndbuf_space))
 		return true;
 
@@ -297,25 +296,6 @@ int smc_tx_sendmsg(struct smc_sock *smc, struct msghdr *msg, size_t len)
 	return rc;
 }
 
-int smc_tx_sendpage(struct smc_sock *smc, struct page *page, int offset,
-		    size_t size, int flags)
-{
-	struct msghdr msg = {.msg_flags = flags};
-	char *kaddr = kmap(page);
-	struct kvec iov;
-	int rc;
-
-	if (flags & MSG_SENDPAGE_NOTLAST)
-		msg.msg_flags |= MSG_MORE;
-
-	iov.iov_base = kaddr + offset;
-	iov.iov_len = size;
-	iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, &iov, 1, size);
-	rc = smc_tx_sendmsg(smc, &msg, size);
-	kunmap(page);
-	return rc;
-}
-
 /***************************** sndbuf consumer *******************************/
 
 /* sndbuf consumer: actual data transfer of one target chunk with ISM write */
diff --git a/net/smc/smc_tx.h b/net/smc/smc_tx.h
index 34b578498b1f..a59f370b8b43 100644
--- a/net/smc/smc_tx.h
+++ b/net/smc/smc_tx.h
@@ -31,8 +31,6 @@ void smc_tx_pending(struct smc_connection *conn);
 void smc_tx_work(struct work_struct *work);
 void smc_tx_init(struct smc_sock *smc);
 int smc_tx_sendmsg(struct smc_sock *smc, struct msghdr *msg, size_t len);
-int smc_tx_sendpage(struct smc_sock *smc, struct page *page, int offset,
-		    size_t size, int flags);
 int smc_tx_sndbuf_nonempty(struct smc_connection *conn);
 void smc_tx_sndbuf_nonfull(struct smc_sock *smc);
 void smc_tx_consumer_update(struct smc_connection *conn, bool force);