From patchwork Mon Mar 4 08:42:58 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13580180
X-Patchwork-Delegate: kuba@kernel.org
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells , Marc Dionne , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 01/21] rxrpc: Record the Tx serial in the rxrpc_txbuf and retransmit trace Date: Mon, 4 Mar 2024 08:42:58 +0000 Message-ID: <20240304084322.705539-2-dhowells@redhat.com> In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com> References: <20240304084322.705539-1-dhowells@redhat.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.2 X-Patchwork-Delegate: kuba@kernel.org Each Rx protocol packet contains a per-connection monotonically increasing serial number used to correlate outgoing messages with their replies - something that can be used for RTT calculation. Note this value in the rxrpc_txbuf struct in addition to the wire header and then log it in the rxrpc_retransmit trace for reference. Signed-off-by: David Howells cc: Marc Dionne cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- include/trace/events/rxrpc.h | 10 +++++++--- net/rxrpc/ar-internal.h | 1 + net/rxrpc/call_event.c | 6 +++--- net/rxrpc/output.c | 36 +++++++++++++++++------------------- net/rxrpc/txbuf.c | 1 + 5 files changed, 29 insertions(+), 25 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 87b8de9b6c1c..9add56980485 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -1506,25 +1506,29 @@ TRACE_EVENT(rxrpc_drop_ack, ); TRACE_EVENT(rxrpc_retransmit, - TP_PROTO(struct rxrpc_call *call, rxrpc_seq_t seq, s64 expiry), + TP_PROTO(struct rxrpc_call *call, rxrpc_seq_t seq, + rxrpc_serial_t serial, s64 expiry), - TP_ARGS(call, seq, expiry), + TP_ARGS(call, seq, serial, expiry), TP_STRUCT__entry( __field(unsigned int, call) __field(rxrpc_seq_t, seq) + __field(rxrpc_serial_t, serial) __field(s64, expiry) ), TP_fast_assign( __entry->call = call->debug_id; __entry->seq = seq; + __entry->serial = serial; __entry->expiry = expiry; ), - TP_printk("c=%08x q=%x xp=%lld", + TP_printk("c=%08x q=%x r=%x xp=%lld", __entry->call, __entry->seq, + __entry->serial, __entry->expiry) ); diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 7818aae1be8e..f76125755810 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -794,6 +794,7 @@ struct rxrpc_txbuf { ktime_t last_sent; /* Time at which last transmitted */ refcount_t ref; rxrpc_seq_t seq; /* Sequence number of this packet */ + rxrpc_serial_t serial; /* Last serial number transmitted with */ unsigned int call_debug_id; unsigned int debug_id; unsigned int len; /* Amount of data in buffer */ diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 0f78544d043b..a4c309976719 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -160,7 +160,7 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) goto no_further_resend; found_txb: - if (after(ntohl(txb->wire.serial), call->acks_highest_serial)) + if (after(txb->serial, call->acks_highest_serial)) continue; /* Ack point not yet reached */ rxrpc_see_txbuf(txb, rxrpc_txbuf_see_unacked); @@ -170,7 +170,7 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) set_bit(RXRPC_TXBUF_RESENT, &txb->flags); } - trace_rxrpc_retransmit(call, txb->seq, + trace_rxrpc_retransmit(call, txb->seq, txb->serial, ktime_to_ns(ktime_sub(txb->last_sent, max_age))); @@ -197,7 
+197,7 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) break; /* Not transmitted yet */ if (ack && ack->reason == RXRPC_ACK_PING_RESPONSE && - before(ntohl(txb->wire.serial), ntohl(ack->serial))) + before(txb->serial, ntohl(ack->serial))) goto do_resend; /* Wasn't accounted for by a more recent ping. */ if (ktime_after(txb->last_sent, max_age)) { diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 4a292f860ae3..bad96a983e12 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -189,7 +189,6 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) struct rxrpc_connection *conn; struct msghdr msg; struct kvec iov[1]; - rxrpc_serial_t serial; size_t len, n; int ret, rtt_slot = -1; u16 rwind; @@ -216,15 +215,15 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) iov[0].iov_len = sizeof(txb->wire) + sizeof(txb->ack) + n; len = iov[0].iov_len; - serial = rxrpc_get_next_serial(conn); - txb->wire.serial = htonl(serial); - trace_rxrpc_tx_ack(call->debug_id, serial, + txb->serial = rxrpc_get_next_serial(conn); + txb->wire.serial = htonl(txb->serial); + trace_rxrpc_tx_ack(call->debug_id, txb->serial, ntohl(txb->ack.firstPacket), ntohl(txb->ack.serial), txb->ack.reason, txb->ack.nAcks, rwind); if (txb->ack.reason == RXRPC_ACK_PING) - rtt_slot = rxrpc_begin_rtt_probe(call, serial, rxrpc_rtt_tx_ping); + rtt_slot = rxrpc_begin_rtt_probe(call, txb->serial, rxrpc_rtt_tx_ping); rxrpc_inc_stat(call->rxnet, stat_tx_ack_send); @@ -235,7 +234,7 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) ret = do_udp_sendmsg(conn->local->socket, &msg, len); call->peer->last_tx_at = ktime_get_seconds(); if (ret < 0) { - trace_rxrpc_tx_fail(call->debug_id, serial, ret, + trace_rxrpc_tx_fail(call->debug_id, txb->serial, ret, rxrpc_tx_point_call_ack); } else { trace_rxrpc_tx_packet(call->debug_id, &txb->wire, @@ -247,7 +246,7 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) if (!__rxrpc_call_is_complete(call)) { if (ret < 0) - rxrpc_cancel_rtt_probe(call, serial, rtt_slot); + rxrpc_cancel_rtt_probe(call, txb->serial, rtt_slot); rxrpc_set_keepalive(call); } @@ -327,15 +326,14 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) struct rxrpc_connection *conn = call->conn; struct msghdr msg; struct kvec iov[1]; - rxrpc_serial_t serial; size_t len; int ret, rtt_slot = -1; _enter("%x,{%d}", txb->seq, txb->len); - /* Each transmission of a Tx packet needs a new serial number */ - serial = rxrpc_get_next_serial(conn); - txb->wire.serial = htonl(serial); + /* Each transmission of a Tx packet+ needs a new serial number */ + txb->serial = rxrpc_get_next_serial(conn); + txb->wire.serial = htonl(txb->serial); if (test_bit(RXRPC_CONN_PROBING_FOR_UPGRADE, &conn->flags) && txb->seq == 1) @@ -388,7 +386,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) static int lose; if ((lose++ & 7) == 7) { ret = 0; - trace_rxrpc_tx_data(call, txb->seq, serial, + trace_rxrpc_tx_data(call, txb->seq, txb->serial, txb->wire.flags, test_bit(RXRPC_TXBUF_RESENT, &txb->flags), true); @@ -396,7 +394,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) } } - trace_rxrpc_tx_data(call, txb->seq, serial, txb->wire.flags, + trace_rxrpc_tx_data(call, txb->seq, txb->serial, txb->wire.flags, test_bit(RXRPC_TXBUF_RESENT, &txb->flags), false); /* Track what we've attempted to transmit at least once so that the @@ -415,7 +413,7 @@ int 
rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) txb->last_sent = ktime_get_real(); if (txb->wire.flags & RXRPC_REQUEST_ACK) - rtt_slot = rxrpc_begin_rtt_probe(call, serial, rxrpc_rtt_tx_data); + rtt_slot = rxrpc_begin_rtt_probe(call, txb->serial, rxrpc_rtt_tx_data); /* send the packet by UDP * - returns -EMSGSIZE if UDP would have to fragment the packet @@ -429,8 +427,8 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) if (ret < 0) { rxrpc_inc_stat(call->rxnet, stat_tx_data_send_fail); - rxrpc_cancel_rtt_probe(call, serial, rtt_slot); - trace_rxrpc_tx_fail(call->debug_id, serial, ret, + rxrpc_cancel_rtt_probe(call, txb->serial, rtt_slot); + trace_rxrpc_tx_fail(call->debug_id, txb->serial, ret, rxrpc_tx_point_call_data_nofrag); } else { trace_rxrpc_tx_packet(call->debug_id, &txb->wire, @@ -489,7 +487,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) txb->last_sent = ktime_get_real(); if (txb->wire.flags & RXRPC_REQUEST_ACK) - rtt_slot = rxrpc_begin_rtt_probe(call, serial, rxrpc_rtt_tx_data); + rtt_slot = rxrpc_begin_rtt_probe(call, txb->serial, rxrpc_rtt_tx_data); switch (conn->local->srx.transport.family) { case AF_INET6: @@ -508,8 +506,8 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) if (ret < 0) { rxrpc_inc_stat(call->rxnet, stat_tx_data_send_fail); - rxrpc_cancel_rtt_probe(call, serial, rtt_slot); - trace_rxrpc_tx_fail(call->debug_id, serial, ret, + rxrpc_cancel_rtt_probe(call, txb->serial, rtt_slot); + trace_rxrpc_tx_fail(call->debug_id, txb->serial, ret, rxrpc_tx_point_call_data_frag); } else { trace_rxrpc_tx_packet(call->debug_id, &txb->wire, diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c index d43be8512386..f2903c81cf5b 100644 --- a/net/rxrpc/txbuf.c +++ b/net/rxrpc/txbuf.c @@ -34,6 +34,7 @@ struct rxrpc_txbuf *rxrpc_alloc_txbuf(struct rxrpc_call *call, u8 packet_type, txb->flags = 0; txb->ack_why = 0; txb->seq = call->tx_prepared + 1; + txb->serial = 0; txb->wire.epoch = htonl(call->conn->proto.epoch); txb->wire.cid = htonl(call->cid); txb->wire.callNumber = htonl(call->call_id); From patchwork Mon Mar 4 08:42:59 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580178 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C6BCA1B26E for ; Mon, 4 Mar 2024 08:43:32 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541814; cv=none; b=O8g6m4xMH1usrCfk72CfAa7DExKEJn0QbbLVexELr9FFpfNrgOqFAUi1XnfTq9DuciXaZrQ4tBWNuo8D8teo5KthQhe9oCgd1H3I6OIJ9Ccgncj3Yrcj6OhhBzquS/ddFURDTLSBc3GMFEpCQLo1VprNLPUv/EF+MEFKQk/IdwQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541814; c=relaxed/simple; bh=JYhx7DIciqCb/UTDwB/y/lo8NilCfgSTvNbJk+2L6qw=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=RcWBK8SXlcIX593uw9451TaPKiDjg4CoRY1J93ixdcp1TSvzjs4H8gr1zZpQXeJ+vnNHiSn3aVwL9LlNYJl602JeNvczoN2hehHw0Ll7UzBp8FHZx4pVbUBgfSowBXMS370yk06Du2tkQZVRN2KjM9ho5B9sXdrs3LMCAhDFl44= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) 
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells , Marc Dionne , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 02/21] rxrpc: Convert rxrpc_txbuf::flags into a mask and don't use atomics
Date: Mon, 4 Mar 2024 08:42:59 +0000
Message-ID: <20240304084322.705539-3-dhowells@redhat.com>
In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com>
References: <20240304084322.705539-1-dhowells@redhat.com>

Convert the transmission buffer flags into a mask and use | and & rather than bitops functions (atomic ops are not required as only the I/O thread can manipulate them once submitted for transmission).

The bottom byte can then correspond directly to the Rx protocol header flags.

Signed-off-by: David Howells 
cc: Marc Dionne 
cc: "David S. 
Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- include/trace/events/rxrpc.h | 12 +++++------- net/rxrpc/ar-internal.h | 8 ++++---- net/rxrpc/call_event.c | 8 ++++---- net/rxrpc/input.c | 2 +- net/rxrpc/output.c | 27 +++++++++++++-------------- net/rxrpc/sendmsg.c | 10 ++++------ net/rxrpc/txbuf.c | 4 ++-- 7 files changed, 33 insertions(+), 38 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 9add56980485..33888f688325 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -1084,9 +1084,9 @@ TRACE_EVENT(rxrpc_tx_packet, TRACE_EVENT(rxrpc_tx_data, TP_PROTO(struct rxrpc_call *call, rxrpc_seq_t seq, - rxrpc_serial_t serial, u8 flags, bool retrans, bool lose), + rxrpc_serial_t serial, unsigned int flags, bool lose), - TP_ARGS(call, seq, serial, flags, retrans, lose), + TP_ARGS(call, seq, serial, flags, lose), TP_STRUCT__entry( __field(unsigned int, call) @@ -1094,8 +1094,7 @@ TRACE_EVENT(rxrpc_tx_data, __field(rxrpc_serial_t, serial) __field(u32, cid) __field(u32, call_id) - __field(u8, flags) - __field(bool, retrans) + __field(u16, flags) __field(bool, lose) ), @@ -1106,7 +1105,6 @@ TRACE_EVENT(rxrpc_tx_data, __entry->seq = seq; __entry->serial = serial; __entry->flags = flags; - __entry->retrans = retrans; __entry->lose = lose; ), @@ -1116,8 +1114,8 @@ TRACE_EVENT(rxrpc_tx_data, __entry->call_id, __entry->serial, __entry->seq, - __entry->flags, - __entry->retrans ? " *RETRANS*" : "", + __entry->flags & RXRPC_TXBUF_WIRE_FLAGS, + __entry->flags & RXRPC_TXBUF_RESENT ? " *RETRANS*" : "", __entry->lose ? " *LOSE*" : "") ); diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index f76125755810..54d1dc97cb0f 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -800,9 +800,9 @@ struct rxrpc_txbuf { unsigned int len; /* Amount of data in buffer */ unsigned int space; /* Remaining data space */ unsigned int offset; /* Offset of fill point */ - unsigned long flags; -#define RXRPC_TXBUF_LAST 0 /* Set if last packet in Tx phase */ -#define RXRPC_TXBUF_RESENT 1 /* Set if has been resent */ + unsigned int flags; +#define RXRPC_TXBUF_WIRE_FLAGS 0xff /* The wire protocol flags */ +#define RXRPC_TXBUF_RESENT 0x100 /* Set if has been resent */ u8 /*enum rxrpc_propose_ack_trace*/ ack_why; /* If ack, why */ struct { /* The packet for encrypting and DMA'ing. 
We align it such @@ -822,7 +822,7 @@ struct rxrpc_txbuf { static inline bool rxrpc_sending_to_server(const struct rxrpc_txbuf *txb) { - return txb->wire.flags & RXRPC_CLIENT_INITIATED; + return txb->flags & RXRPC_CLIENT_INITIATED; } static inline bool rxrpc_sending_to_client(const struct rxrpc_txbuf *txb) diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index a4c309976719..77eacbfc5d45 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -84,7 +84,7 @@ void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason, txb->ack_why = why; txb->wire.seq = 0; txb->wire.type = RXRPC_PACKET_TYPE_ACK; - txb->wire.flags |= RXRPC_SLOW_START_OK; + txb->flags |= RXRPC_SLOW_START_OK; txb->ack.bufferSpace = 0; txb->ack.maxSkew = 0; txb->ack.firstPacket = 0; @@ -167,7 +167,7 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) if (list_empty(&txb->tx_link)) { list_add_tail(&txb->tx_link, &retrans_queue); - set_bit(RXRPC_TXBUF_RESENT, &txb->flags); + txb->flags |= RXRPC_TXBUF_RESENT; } trace_rxrpc_retransmit(call, txb->seq, txb->serial, @@ -210,7 +210,7 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) unacked = true; if (list_empty(&txb->tx_link)) { list_add_tail(&txb->tx_link, &retrans_queue); - set_bit(RXRPC_TXBUF_RESENT, &txb->flags); + txb->flags |= RXRPC_TXBUF_RESENT; rxrpc_inc_stat(call->rxnet, stat_tx_data_retrans); } } @@ -320,7 +320,7 @@ static void rxrpc_decant_prepared_tx(struct rxrpc_call *call) call->tx_top = txb->seq; list_add_tail(&txb->call_link, &call->tx_buffer); - if (txb->wire.flags & RXRPC_LAST_PACKET) + if (txb->flags & RXRPC_LAST_PACKET) rxrpc_close_tx_phase(call); rxrpc_transmit_one(call, txb); diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index 9691de00ade7..c435b50c33f4 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -212,7 +212,7 @@ static bool rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to, list_for_each_entry_rcu(txb, &call->tx_buffer, call_link, false) { if (before_eq(txb->seq, call->acks_hard_ack)) continue; - if (test_bit(RXRPC_TXBUF_LAST, &txb->flags)) { + if (txb->flags & RXRPC_LAST_PACKET) { set_bit(RXRPC_CALL_TX_LAST, &call->flags); rot_last = true; } diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index bad96a983e12..8344ece5358a 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -205,7 +205,8 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) msg.msg_flags = 0; if (txb->ack.reason == RXRPC_ACK_PING) - txb->wire.flags |= RXRPC_REQUEST_ACK; + txb->flags |= RXRPC_REQUEST_ACK; + txb->wire.flags = txb->flags & RXRPC_TXBUF_WIRE_FLAGS; n = rxrpc_fill_out_ack(conn, call, txb, &rwind); if (n == 0) @@ -239,7 +240,7 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) } else { trace_rxrpc_tx_packet(call->debug_id, &txb->wire, rxrpc_tx_point_call_ack); - if (txb->wire.flags & RXRPC_REQUEST_ACK) + if (txb->flags & RXRPC_REQUEST_ACK) call->peer->rtt_last_req = ktime_get_real(); } rxrpc_tx_backoff(call, ret); @@ -357,13 +358,13 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) * service call, lest OpenAFS incorrectly send us an ACK with some * soft-ACKs in it and then never follow up with a proper hard ACK. 
*/ - if (txb->wire.flags & RXRPC_REQUEST_ACK) + if (txb->flags & RXRPC_REQUEST_ACK) why = rxrpc_reqack_already_on; - else if (test_bit(RXRPC_TXBUF_LAST, &txb->flags) && rxrpc_sending_to_client(txb)) + else if ((txb->flags & RXRPC_LAST_PACKET) && rxrpc_sending_to_client(txb)) why = rxrpc_reqack_no_srv_last; else if (test_and_clear_bit(RXRPC_CALL_EV_ACK_LOST, &call->events)) why = rxrpc_reqack_ack_lost; - else if (test_bit(RXRPC_TXBUF_RESENT, &txb->flags)) + else if (txb->flags & RXRPC_TXBUF_RESENT) why = rxrpc_reqack_retrans; else if (call->cong_mode == RXRPC_CALL_SLOW_START && call->cong_cwnd <= 2) why = rxrpc_reqack_slow_start; @@ -379,7 +380,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) rxrpc_inc_stat(call->rxnet, stat_why_req_ack[why]); trace_rxrpc_req_ack(call->debug_id, txb->seq, why); if (why != rxrpc_reqack_no_srv_last) - txb->wire.flags |= RXRPC_REQUEST_ACK; + txb->flags |= RXRPC_REQUEST_ACK; dont_set_request_ack: if (IS_ENABLED(CONFIG_AF_RXRPC_INJECT_LOSS)) { @@ -387,15 +388,12 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) if ((lose++ & 7) == 7) { ret = 0; trace_rxrpc_tx_data(call, txb->seq, txb->serial, - txb->wire.flags, - test_bit(RXRPC_TXBUF_RESENT, &txb->flags), - true); + txb->flags, true); goto done; } } - trace_rxrpc_tx_data(call, txb->seq, txb->serial, txb->wire.flags, - test_bit(RXRPC_TXBUF_RESENT, &txb->flags), false); + trace_rxrpc_tx_data(call, txb->seq, txb->serial, txb->flags, false); /* Track what we've attempted to transmit at least once so that the * retransmission algorithm doesn't try to resend what we haven't sent @@ -411,8 +409,9 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) if (txb->len >= call->peer->maxdata) goto send_fragmentable; + txb->wire.flags = txb->flags & RXRPC_TXBUF_WIRE_FLAGS; txb->last_sent = ktime_get_real(); - if (txb->wire.flags & RXRPC_REQUEST_ACK) + if (txb->flags & RXRPC_REQUEST_ACK) rtt_slot = rxrpc_begin_rtt_probe(call, txb->serial, rxrpc_rtt_tx_data); /* send the packet by UDP @@ -442,7 +441,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) done: if (ret >= 0) { call->tx_last_sent = txb->last_sent; - if (txb->wire.flags & RXRPC_REQUEST_ACK) { + if (txb->flags & RXRPC_REQUEST_ACK) { call->peer->rtt_last_req = txb->last_sent; if (call->peer->rtt_count > 1) { unsigned long nowj = jiffies, ack_lost_at; @@ -486,7 +485,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) _debug("send fragment"); txb->last_sent = ktime_get_real(); - if (txb->wire.flags & RXRPC_REQUEST_ACK) + if (txb->flags & RXRPC_REQUEST_ACK) rtt_slot = rxrpc_begin_rtt_probe(call, txb->serial, rxrpc_rtt_tx_data); switch (conn->local->srx.transport.family) { diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index 5677d5690a02..25c7c1d4f4c6 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -240,7 +240,7 @@ static void rxrpc_queue_packet(struct rxrpc_sock *rx, struct rxrpc_call *call, rxrpc_notify_end_tx_t notify_end_tx) { rxrpc_seq_t seq = txb->seq; - bool last = test_bit(RXRPC_TXBUF_LAST, &txb->flags), poke; + bool poke, last = txb->flags & RXRPC_LAST_PACKET; rxrpc_inc_stat(call->rxnet, stat_tx_data); @@ -394,13 +394,11 @@ static int rxrpc_send_data(struct rxrpc_sock *rx, /* add the packet to the send queue if it's now full */ if (!txb->space || (msg_data_left(msg) == 0 && !more)) { - if (msg_data_left(msg) == 0 && !more) { - txb->wire.flags |= RXRPC_LAST_PACKET; - __set_bit(RXRPC_TXBUF_LAST, 
&txb->flags); - } + if (msg_data_left(msg) == 0 && !more) + txb->flags |= RXRPC_LAST_PACKET; else if (call->tx_top - call->acks_hard_ack < call->tx_winsize) - txb->wire.flags |= RXRPC_MORE_PACKETS; + txb->flags |= RXRPC_MORE_PACKETS; ret = call->security->secure_packet(call, txb); if (ret < 0) diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c index f2903c81cf5b..48d5a8f644e5 100644 --- a/net/rxrpc/txbuf.c +++ b/net/rxrpc/txbuf.c @@ -31,7 +31,7 @@ struct rxrpc_txbuf *rxrpc_alloc_txbuf(struct rxrpc_call *call, u8 packet_type, txb->space = sizeof(txb->data); txb->len = 0; txb->offset = 0; - txb->flags = 0; + txb->flags = call->conn->out_clientflag; txb->ack_why = 0; txb->seq = call->tx_prepared + 1; txb->serial = 0; @@ -40,7 +40,7 @@ struct rxrpc_txbuf *rxrpc_alloc_txbuf(struct rxrpc_call *call, u8 packet_type, txb->wire.callNumber = htonl(call->call_id); txb->wire.seq = htonl(txb->seq); txb->wire.type = packet_type; - txb->wire.flags = call->conn->out_clientflag; + txb->wire.flags = 0; txb->wire.userStatus = 0; txb->wire.securityIndex = call->security_ix; txb->wire._rsvd = 0; From patchwork Mon Mar 4 08:43:00 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580181 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A104E1CF87 for ; Mon, 4 Mar 2024 08:43:37 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541819; cv=none; b=iKd+a9clYpSNk7OyIcNQPSH3iPhru6kdQkSyT/5SSDkDJJc43Bjxr59bm2EP99DL3QJfwoTTFWNJySiK6LS+7swNFiXvVPRDxc3+6my8acHBjMTc2aHgx+MjQkYYmH5p8Dx61hHL8qYNS3bZ4n2L93jfpAnVCopCihu2GrlDQxg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541819; c=relaxed/simple; bh=YWox0hlkKOXJP+Kviv9g4eZURamiy3zXGc3RHcp1fE4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=QICgCIgutGxJKcBo+Lfcg8TZRfvDhms6DUD8YYMjSoa1myZ0SN6HpVZ/N8AUWbJj7jfl2k8JmmUGqS7EOrJ3rhEsNW9qmhfVecctfPJDhib1IVslF6UJXyR7+PBKK/kR/IQKB7kz5InFPQN5MH/D8ZZ0tpv+aJWApptR9ymLMKs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=g+l2731V; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="g+l2731V" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1709541816; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=cMQOrfq9a6ADqnlPZTTmLOdel6QUSyVccUmRoxVrLQY=; b=g+l2731V1dDbXOXXtJgF1Mwt5WYD4TJxtkEuwmmK/0I0YNcZN+AEIIxDyTAmyqN2C0NPOZ neBT0LyIkb1r6Q6xC2OOGMDd5tDQncuUxnSK0eZTEeuEum0DO/BiSHnJwzaoAK8Vw5jsCu OR0ssihcjk5HA+KeK0basSvcBcnl+x8= Received: from 
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells , Marc Dionne , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 03/21] rxrpc: Note cksum in txbuf
Date: Mon, 4 Mar 2024 08:43:00 +0000
Message-ID: <20240304084322.705539-4-dhowells@redhat.com>
In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com>
References: <20240304084322.705539-1-dhowells@redhat.com>

Add a field to rxrpc_txbuf in which to store the checksum to go in the header as this may get overwritten in the wire header struct when transmitting as part of a jumbo packet.

Signed-off-by: David Howells 
cc: Marc Dionne 
cc: "David S. Miller" 
cc: Eric Dumazet 
cc: Jakub Kicinski 
cc: Paolo Abeni 
cc: linux-afs@lists.infradead.org
cc: netdev@vger.kernel.org
---
 net/rxrpc/ar-internal.h | 1 +
 net/rxrpc/output.c      | 1 +
 net/rxrpc/rxkad.c       | 2 +-
 net/rxrpc/txbuf.c       | 1 +
 4 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index 54d1dc97cb0f..c9a2882627aa 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -803,6 +803,7 @@ struct rxrpc_txbuf {
 	unsigned int flags;
 #define RXRPC_TXBUF_WIRE_FLAGS	0xff	/* The wire protocol flags */
 #define RXRPC_TXBUF_RESENT	0x100	/* Set if has been resent */
+	__be16 cksum;		/* Checksum to go in header */
 	u8 /*enum rxrpc_propose_ack_trace*/ ack_why;	/* If ack, why */
 	struct {
 		/* The packet for encrypting and DMA'ing. 
We align it such diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 8344ece5358a..828b145edc56 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -335,6 +335,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) /* Each transmission of a Tx packet+ needs a new serial number */ txb->serial = rxrpc_get_next_serial(conn); txb->wire.serial = htonl(txb->serial); + txb->wire.cksum = txb->cksum; if (test_bit(RXRPC_CONN_PROBING_FOR_UPGRADE, &conn->flags) && txb->seq == 1) diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c index 6b32d61d4cdc..28c9ce763be4 100644 --- a/net/rxrpc/rxkad.c +++ b/net/rxrpc/rxkad.c @@ -378,7 +378,7 @@ static int rxkad_secure_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) y = (y >> 16) & 0xffff; if (y == 0) y = 1; /* zero checksums are not permitted */ - txb->wire.cksum = htons(y); + txb->cksum = htons(y); switch (call->conn->security_level) { case RXRPC_SECURITY_PLAIN: diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c index 48d5a8f644e5..7273615afe94 100644 --- a/net/rxrpc/txbuf.c +++ b/net/rxrpc/txbuf.c @@ -35,6 +35,7 @@ struct rxrpc_txbuf *rxrpc_alloc_txbuf(struct rxrpc_call *call, u8 packet_type, txb->ack_why = 0; txb->seq = call->tx_prepared + 1; txb->serial = 0; + txb->cksum = 0; txb->wire.epoch = htonl(call->conn->proto.epoch); txb->wire.cid = htonl(call->cid); txb->wire.callNumber = htonl(call->call_id); From patchwork Mon Mar 4 08:43:01 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580182 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DBB3B20310 for ; Mon, 4 Mar 2024 08:43:37 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541819; cv=none; b=pXCowviwkW6oU6nBvqU/AWusDtTSFt7o13StIz1AwLIYbfsR/hJg7+ePNrqhluSyMZFL0C0FtxLf5cGeEnoUwz+3LjdVf4ZGCa3H5SlPFq0U5j8z9MRHcRI1ByYDUJI/PCedLqwZ3r4PDscfzJqTBR04eLYUNPjGIsXLpytgkE4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541819; c=relaxed/simple; bh=i3aV9zJdURdXtYV6Ok20ZUsYSMXKtbojlDZd884F27M=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=P4MmNDrw+3ikSMJU5Q9h+YWHv+RZsHJbNebQ9p/HKIAXFjjNMfYSra79e1CIFKqEVPqUIMK7vmnpu9SjI240nmlFYhaocsMsEOGRRiBg+JR2KaOF1tRfY/NbB/RS9e1Z3BtNey4YnynZmLAAgima1WfZUFUfUos59DvakSbP6CU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=RHDvZbEh; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="RHDvZbEh" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1709541816; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: 
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells , Marc Dionne , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 04/21] rxrpc: Fix the names of the fields in the ACK trailer struct
Date: Mon, 4 Mar 2024 08:43:01 +0000
Message-ID: <20240304084322.705539-5-dhowells@redhat.com>
In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com>
References: <20240304084322.705539-1-dhowells@redhat.com>

From AFS-3.3 a trailer containing extra info was added to the ACK packet format - but AF_RXRPC has the names of some of the fields mixed up compared to other AFS implementations.

Rename the struct and the fields to make them match.

Signed-off-by: David Howells 
cc: Marc Dionne 
cc: "David S. 
Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- include/trace/events/rxrpc.h | 2 +- net/rxrpc/conn_event.c | 16 ++++++++-------- net/rxrpc/input.c | 22 +++++++++++----------- net/rxrpc/output.c | 14 +++++++------- net/rxrpc/protocol.h | 6 +++--- 5 files changed, 30 insertions(+), 30 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 33888f688325..c730cd732348 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -83,7 +83,7 @@ EM(rxrpc_badmsg_bad_abort, "bad-abort") \ EM(rxrpc_badmsg_bad_jumbo, "bad-jumbo") \ EM(rxrpc_badmsg_short_ack, "short-ack") \ - EM(rxrpc_badmsg_short_ack_info, "short-ack-info") \ + EM(rxrpc_badmsg_short_ack_trailer, "short-ack-trailer") \ EM(rxrpc_badmsg_short_hdr, "short-hdr") \ EM(rxrpc_badmsg_unsupported_packet, "unsup-pkt") \ EM(rxrpc_badmsg_zero_call, "zero-call") \ diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c index 1f251d758cb9..598b4ee389fc 100644 --- a/net/rxrpc/conn_event.c +++ b/net/rxrpc/conn_event.c @@ -88,7 +88,7 @@ void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn, struct rxrpc_ackpacket ack; }; } __attribute__((packed)) pkt; - struct rxrpc_ackinfo ack_info; + struct rxrpc_acktrailer trailer; size_t len; int ret, ioc; u32 serial, mtu, call_id, padding; @@ -122,8 +122,8 @@ void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn, iov[0].iov_len = sizeof(pkt.whdr); iov[1].iov_base = &padding; iov[1].iov_len = 3; - iov[2].iov_base = &ack_info; - iov[2].iov_len = sizeof(ack_info); + iov[2].iov_base = &trailer; + iov[2].iov_len = sizeof(trailer); serial = rxrpc_get_next_serial(conn); @@ -158,14 +158,14 @@ void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn, pkt.ack.serial = htonl(skb ? sp->hdr.serial : 0); pkt.ack.reason = skb ? 
RXRPC_ACK_DUPLICATE : RXRPC_ACK_IDLE; pkt.ack.nAcks = 0; - ack_info.rxMTU = htonl(rxrpc_rx_mtu); - ack_info.maxMTU = htonl(mtu); - ack_info.rwind = htonl(rxrpc_rx_window_size); - ack_info.jumbo_max = htonl(rxrpc_rx_jumbo_max); + trailer.maxMTU = htonl(rxrpc_rx_mtu); + trailer.ifMTU = htonl(mtu); + trailer.rwind = htonl(rxrpc_rx_window_size); + trailer.jumbo_max = htonl(rxrpc_rx_jumbo_max); pkt.whdr.flags |= RXRPC_SLOW_START_OK; padding = 0; iov[0].iov_len += sizeof(pkt.ack); - len += sizeof(pkt.ack) + 3 + sizeof(ack_info); + len += sizeof(pkt.ack) + 3 + sizeof(trailer); ioc = 3; trace_rxrpc_tx_ack(chan->call_debug_id, serial, diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index c435b50c33f4..ea2df62e05e7 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -670,14 +670,14 @@ static void rxrpc_complete_rtt_probe(struct rxrpc_call *call, /* * Process the extra information that may be appended to an ACK packet */ -static void rxrpc_input_ackinfo(struct rxrpc_call *call, struct sk_buff *skb, - struct rxrpc_ackinfo *ackinfo) +static void rxrpc_input_ack_trailer(struct rxrpc_call *call, struct sk_buff *skb, + struct rxrpc_acktrailer *trailer) { struct rxrpc_skb_priv *sp = rxrpc_skb(skb); struct rxrpc_peer *peer; unsigned int mtu; bool wake = false; - u32 rwind = ntohl(ackinfo->rwind); + u32 rwind = ntohl(trailer->rwind); if (rwind > RXRPC_TX_MAX_WINDOW) rwind = RXRPC_TX_MAX_WINDOW; @@ -691,7 +691,7 @@ static void rxrpc_input_ackinfo(struct rxrpc_call *call, struct sk_buff *skb, if (call->cong_ssthresh > rwind) call->cong_ssthresh = rwind; - mtu = min(ntohl(ackinfo->rxMTU), ntohl(ackinfo->maxMTU)); + mtu = min(ntohl(trailer->maxMTU), ntohl(trailer->ifMTU)); peer = call->peer; if (mtu < peer->maxdata) { @@ -837,7 +837,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) struct rxrpc_ack_summary summary = { 0 }; struct rxrpc_ackpacket ack; struct rxrpc_skb_priv *sp = rxrpc_skb(skb); - struct rxrpc_ackinfo info; + struct rxrpc_acktrailer trailer; rxrpc_serial_t ack_serial, acked_serial; rxrpc_seq_t first_soft_ack, hard_ack, prev_pkt, since; int nr_acks, offset, ioffset; @@ -917,11 +917,11 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) goto send_response; } - info.rxMTU = 0; + trailer.maxMTU = 0; ioffset = offset + nr_acks + 3; - if (skb->len >= ioffset + sizeof(info) && - skb_copy_bits(skb, ioffset, &info, sizeof(info)) < 0) - return rxrpc_proto_abort(call, 0, rxrpc_badmsg_short_ack_info); + if (skb->len >= ioffset + sizeof(trailer) && + skb_copy_bits(skb, ioffset, &trailer, sizeof(trailer)) < 0) + return rxrpc_proto_abort(call, 0, rxrpc_badmsg_short_ack_trailer); if (nr_acks > 0) skb_condense(skb); @@ -950,8 +950,8 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) } /* Parse rwind and mtu sizes if provided. 
*/ - if (info.rxMTU) - rxrpc_input_ackinfo(call, skb, &info); + if (trailer.maxMTU) + rxrpc_input_ack_trailer(call, skb, &trailer); if (first_soft_ack == 0) return rxrpc_proto_abort(call, 0, rxrpc_eproto_ackr_zero); diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 828b145edc56..3803bf900a46 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -83,7 +83,7 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn, struct rxrpc_txbuf *txb, u16 *_rwind) { - struct rxrpc_ackinfo ackinfo; + struct rxrpc_acktrailer trailer; unsigned int qsize, sack, wrap, to; rxrpc_seq_t window, wtop; int rsize; @@ -126,16 +126,16 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn, qsize = (window - 1) - call->rx_consumed; rsize = max_t(int, call->rx_winsize - qsize, 0); *_rwind = rsize; - ackinfo.rxMTU = htonl(rxrpc_rx_mtu); - ackinfo.maxMTU = htonl(mtu); - ackinfo.rwind = htonl(rsize); - ackinfo.jumbo_max = htonl(jmax); + trailer.maxMTU = htonl(rxrpc_rx_mtu); + trailer.ifMTU = htonl(mtu); + trailer.rwind = htonl(rsize); + trailer.jumbo_max = htonl(jmax); *ackp++ = 0; *ackp++ = 0; *ackp++ = 0; - memcpy(ackp, &ackinfo, sizeof(ackinfo)); - return txb->ack.nAcks + 3 + sizeof(ackinfo); + memcpy(ackp, &trailer, sizeof(trailer)); + return txb->ack.nAcks + 3 + sizeof(trailer); } /* diff --git a/net/rxrpc/protocol.h b/net/rxrpc/protocol.h index e8ee4af43ca8..4fe6b4d20ada 100644 --- a/net/rxrpc/protocol.h +++ b/net/rxrpc/protocol.h @@ -135,9 +135,9 @@ struct rxrpc_ackpacket { /* * ACK packets can have a further piece of information tagged on the end */ -struct rxrpc_ackinfo { - __be32 rxMTU; /* maximum Rx MTU size (bytes) [AFS 3.3] */ - __be32 maxMTU; /* maximum interface MTU size (bytes) [AFS 3.3] */ +struct rxrpc_acktrailer { + __be32 maxMTU; /* maximum Rx MTU size (bytes) [AFS 3.3] */ + __be32 ifMTU; /* maximum interface MTU size (bytes) [AFS 3.3] */ __be32 rwind; /* Rx window size (packets) [AFS 3.4] */ __be32 jumbo_max; /* max packets to stick into a jumbo packet [AFS 3.5] */ }; From patchwork Mon Mar 4 08:43:02 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580184 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2053A22635 for ; Mon, 4 Mar 2024 08:43:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541824; cv=none; b=i8fBhF/104q//MCj06FVn0BtWv70Vp0aQEi1izhvJPu6lRDo3Tw1Fz4G1dmxxLHDEUS9cGoq6SlUUlyve+ZH5UQekuFABmfuO2Act1+5JCvkYVJUETjQRRltmC94QRKMwsDGL1Uza8F3TKPKNZFCvjVO8Qo+XS5XmqlYZ0qEC2s= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541824; c=relaxed/simple; bh=MQ2ZFjtue/mlbT3gPUeVjpfqMS/NxZ1JCX8pts1RXQw=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=HF+MTKKV+kdrjxu60LrmnseEipehBwfqBU6w17N8MK+5qoZpad5PdsrKlQ9thyqCxhhmgUK00vQxQaQH94TE+hzNMZfDRGhjShyipMUDc/TjTZWZNyWoQtO/roG17xAQykXCauUopX7wDtORmFEcD+Fm+HeskfnYiuhaahLsDlo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com 
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells , Marc Dionne , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 05/21] rxrpc: Strip barriers and atomics off of timer tracking
Date: Mon, 4 Mar 2024 08:43:02 +0000
Message-ID: <20240304084322.705539-6-dhowells@redhat.com>
In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com>
References: <20240304084322.705539-1-dhowells@redhat.com>

Strip the atomic ops and barriering off of the call timer tracking as this is handled solely within the I/O thread, except for expect_term_by which is set by sendmsg().

Signed-off-by: David Howells 
cc: Marc Dionne 
cc: "David S. 
Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- net/rxrpc/call_event.c | 32 ++++++++++++++++---------------- net/rxrpc/conn_client.c | 4 ++-- net/rxrpc/input.c | 15 ++++++--------- net/rxrpc/output.c | 18 ++++++++---------- 4 files changed, 32 insertions(+), 37 deletions(-) diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 77eacbfc5d45..84eedbb49fcb 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -27,7 +27,7 @@ void rxrpc_propose_ping(struct rxrpc_call *call, u32 serial, unsigned long ping_at = now + rxrpc_idle_ack_delay; if (time_before(ping_at, call->ping_at)) { - WRITE_ONCE(call->ping_at, ping_at); + call->ping_at = ping_at; rxrpc_reduce_call_timer(call, ping_at, now, rxrpc_timer_set_for_ping); trace_rxrpc_propose_ack(call, why, RXRPC_ACK_PING, serial); @@ -53,7 +53,7 @@ void rxrpc_propose_delay_ACK(struct rxrpc_call *call, rxrpc_serial_t serial, ack_at += READ_ONCE(call->tx_backoff); ack_at += now; if (time_before(ack_at, call->delay_ack_at)) { - WRITE_ONCE(call->delay_ack_at, ack_at); + call->delay_ack_at = ack_at; rxrpc_reduce_call_timer(call, ack_at, now, rxrpc_timer_set_for_ack); } @@ -220,7 +220,7 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) resend_at = nsecs_to_jiffies(ktime_to_ns(ktime_sub(now, oldest))); resend_at += jiffies + rxrpc_get_rto_backoff(call->peer, !list_empty(&retrans_queue)); - WRITE_ONCE(call->resend_at, resend_at); + call->resend_at = resend_at; if (unacked) rxrpc_congestion_timeout(call); @@ -260,7 +260,7 @@ static void rxrpc_begin_service_reply(struct rxrpc_call *call) unsigned long now = jiffies; rxrpc_set_call_state(call, RXRPC_CALL_SERVER_SEND_REPLY); - WRITE_ONCE(call->delay_ack_at, now + MAX_JIFFY_OFFSET); + call->delay_ack_at = now + MAX_JIFFY_OFFSET; if (call->ackr_reason == RXRPC_ACK_DELAY) call->ackr_reason = 0; trace_rxrpc_timer(call, rxrpc_timer_init_for_send_reply, now); @@ -399,13 +399,13 @@ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) /* If we see our async-event poke, check for timeout trippage. 
*/ now = jiffies; - t = READ_ONCE(call->expect_rx_by); + t = call->expect_rx_by; if (time_after_eq(now, t)) { trace_rxrpc_timer(call, rxrpc_timer_exp_normal, now); expired = true; } - t = READ_ONCE(call->expect_req_by); + t = call->expect_req_by; if (__rxrpc_call_state(call) == RXRPC_CALL_SERVER_RECV_REQUEST && time_after_eq(now, t)) { trace_rxrpc_timer(call, rxrpc_timer_exp_idle, now); @@ -418,41 +418,41 @@ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) expired = true; } - t = READ_ONCE(call->delay_ack_at); + t = call->delay_ack_at; if (time_after_eq(now, t)) { trace_rxrpc_timer(call, rxrpc_timer_exp_ack, now); - cmpxchg(&call->delay_ack_at, t, now + MAX_JIFFY_OFFSET); + call->delay_ack_at = now + MAX_JIFFY_OFFSET; rxrpc_send_ACK(call, RXRPC_ACK_DELAY, 0, rxrpc_propose_ack_ping_for_lost_ack); } - t = READ_ONCE(call->ack_lost_at); + t = call->ack_lost_at; if (time_after_eq(now, t)) { trace_rxrpc_timer(call, rxrpc_timer_exp_lost_ack, now); - cmpxchg(&call->ack_lost_at, t, now + MAX_JIFFY_OFFSET); + call->ack_lost_at = now + MAX_JIFFY_OFFSET; set_bit(RXRPC_CALL_EV_ACK_LOST, &call->events); } - t = READ_ONCE(call->keepalive_at); + t = call->keepalive_at; if (time_after_eq(now, t)) { trace_rxrpc_timer(call, rxrpc_timer_exp_keepalive, now); - cmpxchg(&call->keepalive_at, t, now + MAX_JIFFY_OFFSET); + call->keepalive_at = now + MAX_JIFFY_OFFSET; rxrpc_send_ACK(call, RXRPC_ACK_PING, 0, rxrpc_propose_ack_ping_for_keepalive); } - t = READ_ONCE(call->ping_at); + t = call->ping_at; if (time_after_eq(now, t)) { trace_rxrpc_timer(call, rxrpc_timer_exp_ping, now); - cmpxchg(&call->ping_at, t, now + MAX_JIFFY_OFFSET); + call->ping_at = now + MAX_JIFFY_OFFSET; rxrpc_send_ACK(call, RXRPC_ACK_PING, 0, rxrpc_propose_ack_ping_for_keepalive); } - t = READ_ONCE(call->resend_at); + t = call->resend_at; if (time_after_eq(now, t)) { trace_rxrpc_timer(call, rxrpc_timer_exp_resend, now); - cmpxchg(&call->resend_at, t, now + MAX_JIFFY_OFFSET); + call->resend_at = now + MAX_JIFFY_OFFSET; resend = true; } diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c index 3b9b267a4431..d25bf1cf3670 100644 --- a/net/rxrpc/conn_client.c +++ b/net/rxrpc/conn_client.c @@ -636,7 +636,7 @@ void rxrpc_disconnect_client_call(struct rxrpc_bundle *bundle, struct rxrpc_call test_bit(RXRPC_CALL_EXPOSED, &call->flags)) { unsigned long final_ack_at = jiffies + 2; - WRITE_ONCE(chan->final_ack_at, final_ack_at); + chan->final_ack_at = final_ack_at; smp_wmb(); /* vs rxrpc_process_delayed_final_acks() */ set_bit(RXRPC_CONN_FINAL_ACK_0 + channel, &conn->flags); rxrpc_reduce_conn_timer(conn, final_ack_at); @@ -770,7 +770,7 @@ void rxrpc_discard_expired_client_conns(struct rxrpc_local *local) conn_expires_at = conn->idle_timestamp + expiry; - now = READ_ONCE(jiffies); + now = jiffies; if (time_after(conn_expires_at, now)) goto not_yet_expired; } diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index ea2df62e05e7..e53a49accc16 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -288,14 +288,12 @@ static void rxrpc_end_tx_phase(struct rxrpc_call *call, bool reply_begun, static bool rxrpc_receiving_reply(struct rxrpc_call *call) { struct rxrpc_ack_summary summary = { 0 }; - unsigned long now, timo; + unsigned long now; rxrpc_seq_t top = READ_ONCE(call->tx_top); if (call->ackr_reason) { now = jiffies; - timo = now + MAX_JIFFY_OFFSET; - - WRITE_ONCE(call->delay_ack_at, timo); + call->delay_ack_at = now + MAX_JIFFY_OFFSET; trace_rxrpc_timer(call, rxrpc_timer_init_for_reply, now); } @@ -594,7 +592,7 @@ static 
void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) if (timo) { now = jiffies; expect_req_by = now + timo; - WRITE_ONCE(call->expect_req_by, expect_req_by); + call->expect_req_by = now + timo; rxrpc_reduce_call_timer(call, expect_req_by, now, rxrpc_timer_set_for_idle); } @@ -1048,11 +1046,10 @@ void rxrpc_input_call_packet(struct rxrpc_call *call, struct sk_buff *skb) timo = READ_ONCE(call->next_rx_timo); if (timo) { - unsigned long now = jiffies, expect_rx_by; + unsigned long now = jiffies; - expect_rx_by = now + timo; - WRITE_ONCE(call->expect_rx_by, expect_rx_by); - rxrpc_reduce_call_timer(call, expect_rx_by, now, + call->expect_rx_by = now + timo; + rxrpc_reduce_call_timer(call, call->expect_rx_by, now, rxrpc_timer_set_for_normal); } diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 3803bf900a46..2386b01b2231 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -67,11 +67,10 @@ static void rxrpc_tx_backoff(struct rxrpc_call *call, int ret) */ static void rxrpc_set_keepalive(struct rxrpc_call *call) { - unsigned long now = jiffies, keepalive_at = call->next_rx_timo / 6; + unsigned long now = jiffies; - keepalive_at += now; - WRITE_ONCE(call->keepalive_at, keepalive_at); - rxrpc_reduce_call_timer(call, keepalive_at, now, + call->keepalive_at = now + call->next_rx_timo / 6; + rxrpc_reduce_call_timer(call, call->keepalive_at, now, rxrpc_timer_set_for_keepalive); } @@ -449,7 +448,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) ack_lost_at = rxrpc_get_rto_backoff(call->peer, false); ack_lost_at += nowj; - WRITE_ONCE(call->ack_lost_at, ack_lost_at); + call->ack_lost_at = ack_lost_at; rxrpc_reduce_call_timer(call, ack_lost_at, nowj, rxrpc_timer_set_for_lost_ack); } @@ -458,11 +457,10 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) if (txb->seq == 1 && !test_and_set_bit(RXRPC_CALL_BEGAN_RX_TIMER, &call->flags)) { - unsigned long nowj = jiffies, expect_rx_by; + unsigned long nowj = jiffies; - expect_rx_by = nowj + call->next_rx_timo; - WRITE_ONCE(call->expect_rx_by, expect_rx_by); - rxrpc_reduce_call_timer(call, expect_rx_by, nowj, + call->expect_rx_by = nowj + call->next_rx_timo; + rxrpc_reduce_call_timer(call, call->expect_rx_by, nowj, rxrpc_timer_set_for_normal); } @@ -724,7 +722,7 @@ void rxrpc_transmit_one(struct rxrpc_call *call, struct rxrpc_txbuf *txb) unsigned long now = jiffies; unsigned long resend_at = now + call->peer->rto_j; - WRITE_ONCE(call->resend_at, resend_at); + call->resend_at = resend_at; rxrpc_reduce_call_timer(call, resend_at, now, rxrpc_timer_set_for_send); } From patchwork Mon Mar 4 08:43:03 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580183 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1DCA3224C7 for ; Mon, 4 Mar 2024 08:43:39 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541821; cv=none; b=iQl3OtsZbd++OY2ZGFRRG5hSHYFYvRlf3RTrM35E2+Uhct5GtUhcMYGeDGQUNHo+xJ/X/UTV3YfnZ5sdQE0fGbz9kMLc+ajN+RKbfgE90kRfb8P3vFGc/YHWCeRMXuygdJmT6bhzBR9py9sjNcQwvSCUzq5hyq+Zs2og7STDO20= ARC-Message-Signature: i=1; 
From: David Howells To: netdev@vger.kernel.org Cc: David Howells , Marc Dionne , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 06/21] rxrpc: Remove atomic handling on some fields only used in I/O thread Date: Mon, 4 Mar 2024 08:43:03 +0000 Message-ID: <20240304084322.705539-7-dhowells@redhat.com> In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com> References: <20240304084322.705539-1-dhowells@redhat.com> call->tx_transmitted and call->acks_prev_seq don't need to be managed with cmpxchg() and barriers as they're only used within the singular I/O thread. Signed-off-by: David Howells cc: Marc Dionne cc: "David S.
Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- net/rxrpc/call_event.c | 10 ++++------ net/rxrpc/output.c | 8 +++----- 2 files changed, 7 insertions(+), 11 deletions(-) diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 84eedbb49fcb..1184518dcdb8 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -115,7 +115,7 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) struct rxrpc_skb_priv *sp; struct rxrpc_txbuf *txb; unsigned long resend_at; - rxrpc_seq_t transmitted = READ_ONCE(call->tx_transmitted); + rxrpc_seq_t transmitted = call->tx_transmitted; ktime_t now, max_age, oldest, ack_ts; bool unacked = false; unsigned int i; @@ -184,16 +184,14 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) * seen. Anything between the soft-ACK table and that point will get * ACK'd or NACK'd in due course, so don't worry about it here; here we * need to consider retransmitting anything beyond that point. - * - * Note that ACK for a packet can beat the update of tx_transmitted. */ - if (after_eq(READ_ONCE(call->acks_prev_seq), READ_ONCE(call->tx_transmitted))) + if (after_eq(call->acks_prev_seq, call->tx_transmitted)) goto no_further_resend; list_for_each_entry_from(txb, &call->tx_buffer, call_link) { - if (before_eq(txb->seq, READ_ONCE(call->acks_prev_seq))) + if (before_eq(txb->seq, call->acks_prev_seq)) continue; - if (after(txb->seq, READ_ONCE(call->tx_transmitted))) + if (after(txb->seq, call->tx_transmitted)) break; /* Not transmitted yet */ if (ack && ack->reason == RXRPC_ACK_PING_RESPONSE && diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 2386b01b2231..1e039b6f4494 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -397,12 +397,10 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) /* Track what we've attempted to transmit at least once so that the * retransmission algorithm doesn't try to resend what we haven't sent - * yet. However, this can race as we can receive an ACK before we get - * to this point. But, OTOH, if we won't get an ACK mentioning this - * packet unless the far side received it (though it could have - * discarded it anyway and NAK'd it). + * yet. 
*/ - cmpxchg(&call->tx_transmitted, txb->seq - 1, txb->seq); + if (txb->seq == call->tx_transmitted + 1) + call->tx_transmitted = txb->seq; /* send the packet with the don't fragment bit set if we currently * think it's small enough */ From patchwork Mon Mar 4 08:43:04 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580185 X-Patchwork-Delegate: kuba@kernel.org From: David Howells To: netdev@vger.kernel.org Cc: David
Howells , Marc Dionne , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 07/21] rxrpc: Do lazy DF flag resetting Date: Mon, 4 Mar 2024 08:43:04 +0000 Message-ID: <20240304084322.705539-8-dhowells@redhat.com> In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com> References: <20240304084322.705539-1-dhowells@redhat.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.10 X-Patchwork-Delegate: kuba@kernel.org Don't reset the DF flag after transmission, but rather set it when needed since it should be a fast op now that we call IP directly. This includes turning it off for RESPONSE packets and, for the moment, ACK packets. In future, we will need to turn it on for ACK packets used to do path MTU discovery. Signed-off-by: David Howells cc: Marc Dionne cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- net/rxrpc/output.c | 4 ++-- net/rxrpc/rxkad.c | 1 - 2 files changed, 2 insertions(+), 3 deletions(-) diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 1e039b6f4494..8aa8ba32eacc 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -231,6 +231,7 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) txb->ack.previousPacket = htonl(call->rx_highest_seq); iov_iter_kvec(&msg.msg_iter, WRITE, iov, 1, len); + rxrpc_local_dont_fragment(conn->local, false); ret = do_udp_sendmsg(conn->local->socket, &msg, len); call->peer->last_tx_at = ktime_get_seconds(); if (ret < 0) { @@ -406,6 +407,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) * think it's small enough */ if (txb->len >= call->peer->maxdata) goto send_fragmentable; + rxrpc_local_dont_fragment(conn->local, true); txb->wire.flags = txb->flags & RXRPC_TXBUF_WIRE_FLAGS; txb->last_sent = ktime_get_real(); @@ -492,8 +494,6 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) rxrpc_inc_stat(call->rxnet, stat_tx_data_send_frag); ret = do_udp_sendmsg(conn->local->socket, &msg, len); conn->peer->last_tx_at = ktime_get_seconds(); - - rxrpc_local_dont_fragment(conn->local, true); break; default: diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c index 28c9ce763be4..e451ac90bfee 100644 --- a/net/rxrpc/rxkad.c +++ b/net/rxrpc/rxkad.c @@ -726,7 +726,6 @@ static int rxkad_send_response(struct rxrpc_connection *conn, rxrpc_local_dont_fragment(conn->local, false); ret = kernel_sendmsg(conn->local->socket, &msg, iov, 3, len); - rxrpc_local_dont_fragment(conn->local, true); if (ret < 0) { trace_rxrpc_tx_fail(conn->debug_id, serial, ret, rxrpc_tx_point_rxkad_response); From patchwork Mon Mar 4 08:43:05 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580186 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0A2D619473 for ; Mon, 4 Mar 2024 08:43:43 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; 
From: David Howells To: netdev@vger.kernel.org Cc: David Howells , Marc Dionne , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 08/21] rxrpc: Merge together DF/non-DF branches of data Tx function Date: Mon, 4 Mar 2024 08:43:05 +0000 Message-ID: <20240304084322.705539-9-dhowells@redhat.com> In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com> References: <20240304084322.705539-1-dhowells@redhat.com> Merge together the DF and non-DF branches of the transmission function and always set the flag to the right thing before transmitting. If we see -EMSGSIZE from udp_sendmsg(), turn off DF and retry. Signed-off-by: David Howells cc: Marc Dionne cc: "David S.
Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- net/rxrpc/output.c | 54 +++++++++++++--------------------------------- 1 file changed, 15 insertions(+), 39 deletions(-) diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 8aa8ba32eacc..e2c9e645fcfb 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -323,8 +323,9 @@ int rxrpc_send_abort_packet(struct rxrpc_call *call) */ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) { - enum rxrpc_req_ack_trace why; struct rxrpc_connection *conn = call->conn; + enum rxrpc_req_ack_trace why; + enum rxrpc_tx_point frag; struct msghdr msg; struct kvec iov[1]; size_t len; @@ -405,11 +406,16 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) /* send the packet with the don't fragment bit set if we currently * think it's small enough */ - if (txb->len >= call->peer->maxdata) - goto send_fragmentable; - rxrpc_local_dont_fragment(conn->local, true); + if (txb->len >= call->peer->maxdata) { + rxrpc_local_dont_fragment(conn->local, false); + frag = rxrpc_tx_point_call_data_frag; + } else { + rxrpc_local_dont_fragment(conn->local, true); + frag = rxrpc_tx_point_call_data_nofrag; + } txb->wire.flags = txb->flags & RXRPC_TXBUF_WIRE_FLAGS; +retry: txb->last_sent = ktime_get_real(); if (txb->flags & RXRPC_REQUEST_ACK) rtt_slot = rxrpc_begin_rtt_probe(call, txb->serial, rxrpc_rtt_tx_data); @@ -435,8 +441,11 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) } rxrpc_tx_backoff(call, ret); - if (ret == -EMSGSIZE) - goto send_fragmentable; + if (ret == -EMSGSIZE && frag == rxrpc_tx_point_call_data_frag) { + rxrpc_local_dont_fragment(conn->local, false); + frag = rxrpc_tx_point_call_data_frag; + goto retry; + } done: if (ret >= 0) { @@ -478,39 +487,6 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) _leave(" = %d [%u]", ret, call->peer->maxdata); return ret; - -send_fragmentable: - /* attempt to send this message with fragmentation enabled */ - _debug("send fragment"); - - txb->last_sent = ktime_get_real(); - if (txb->flags & RXRPC_REQUEST_ACK) - rtt_slot = rxrpc_begin_rtt_probe(call, txb->serial, rxrpc_rtt_tx_data); - - switch (conn->local->srx.transport.family) { - case AF_INET6: - case AF_INET: - rxrpc_local_dont_fragment(conn->local, false); - rxrpc_inc_stat(call->rxnet, stat_tx_data_send_frag); - ret = do_udp_sendmsg(conn->local->socket, &msg, len); - conn->peer->last_tx_at = ktime_get_seconds(); - break; - - default: - BUG(); - } - - if (ret < 0) { - rxrpc_inc_stat(call->rxnet, stat_tx_data_send_fail); - rxrpc_cancel_rtt_probe(call, txb->serial, rtt_slot); - trace_rxrpc_tx_fail(call->debug_id, txb->serial, ret, - rxrpc_tx_point_call_data_frag); - } else { - trace_rxrpc_tx_packet(call->debug_id, &txb->wire, - rxrpc_tx_point_call_data_frag); - } - rxrpc_tx_backoff(call, ret); - goto done; } /* From patchwork Mon Mar 4 08:43:06 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580187 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4976724B2B for ; Mon, 4 Mar 2024 08:43:47 +0000 (UTC) Authentication-Results: 
From: David Howells To: netdev@vger.kernel.org Cc: David Howells , Marc Dionne , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 09/21] rxrpc: Add a kvec[] to the rxrpc_txbuf struct Date: Mon, 4 Mar 2024 08:43:06 +0000 Message-ID: <20240304084322.705539-10-dhowells@redhat.com> In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com> References: <20240304084322.705539-1-dhowells@redhat.com> Add a kvec[] to the rxrpc_txbuf struct to point to the contributory buffers for a packet.
Start with just a single element for now, but this will be expanded later. Make the ACK sending function use it, which means that rxrpc_fill_out_ack() doesn't need to return the size of the sack table, padding and trailer. Make the data sending code use it, both in where sendmsg() packages code up into txbufs and where those txbufs are transmitted. Signed-off-by: David Howells cc: Marc Dionne cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- net/rxrpc/ar-internal.h | 2 ++ net/rxrpc/output.c | 33 ++++++++++++--------------------- net/rxrpc/sendmsg.c | 8 +++++--- net/rxrpc/txbuf.c | 3 +++ 4 files changed, 22 insertions(+), 24 deletions(-) diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index c9a2882627aa..c6731f43a2d5 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -805,6 +805,8 @@ struct rxrpc_txbuf { #define RXRPC_TXBUF_RESENT 0x100 /* Set if has been resent */ __be16 cksum; /* Checksum to go in header */ u8 /*enum rxrpc_propose_ack_trace*/ ack_why; /* If ack, why */ + u8 nr_kvec; + struct kvec kvec[1]; struct { /* The packet for encrypting and DMA'ing. We align it such * that data[] aligns correctly for any crypto blocksize. diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index e2c9e645fcfb..f2b10c3e4cc2 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -77,10 +77,10 @@ static void rxrpc_set_keepalive(struct rxrpc_call *call) /* * Fill out an ACK packet. */ -static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn, - struct rxrpc_call *call, - struct rxrpc_txbuf *txb, - u16 *_rwind) +static void rxrpc_fill_out_ack(struct rxrpc_connection *conn, + struct rxrpc_call *call, + struct rxrpc_txbuf *txb, + u16 *_rwind) { struct rxrpc_acktrailer trailer; unsigned int qsize, sack, wrap, to; @@ -134,7 +134,9 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn, *ackp++ = 0; *ackp++ = 0; memcpy(ackp, &trailer, sizeof(trailer)); - return txb->ack.nAcks + 3 + sizeof(trailer); + txb->kvec[0].iov_len = sizeof(txb->wire) + + sizeof(txb->ack) + txb->ack.nAcks + 3 + sizeof(trailer); + txb->len = txb->kvec[0].iov_len; } /* @@ -187,8 +189,6 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) { struct rxrpc_connection *conn; struct msghdr msg; - struct kvec iov[1]; - size_t len, n; int ret, rtt_slot = -1; u16 rwind; @@ -207,13 +207,7 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) txb->flags |= RXRPC_REQUEST_ACK; txb->wire.flags = txb->flags & RXRPC_TXBUF_WIRE_FLAGS; - n = rxrpc_fill_out_ack(conn, call, txb, &rwind); - if (n == 0) - return 0; - - iov[0].iov_base = &txb->wire; - iov[0].iov_len = sizeof(txb->wire) + sizeof(txb->ack) + n; - len = iov[0].iov_len; + rxrpc_fill_out_ack(conn, call, txb, &rwind); txb->serial = rxrpc_get_next_serial(conn); txb->wire.serial = htonl(txb->serial); @@ -230,9 +224,9 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) /* Grab the highest received seq as late as possible */ txb->ack.previousPacket = htonl(call->rx_highest_seq); - iov_iter_kvec(&msg.msg_iter, WRITE, iov, 1, len); + iov_iter_kvec(&msg.msg_iter, WRITE, txb->kvec, txb->nr_kvec, txb->len); rxrpc_local_dont_fragment(conn->local, false); - ret = do_udp_sendmsg(conn->local->socket, &msg, len); + ret = do_udp_sendmsg(conn->local->socket, &msg, txb->len); call->peer->last_tx_at = ktime_get_seconds(); if (ret < 0) { trace_rxrpc_tx_fail(call->debug_id, txb->serial, ret, @@ -327,7 
+321,6 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) enum rxrpc_req_ack_trace why; enum rxrpc_tx_point frag; struct msghdr msg; - struct kvec iov[1]; size_t len; int ret, rtt_slot = -1; @@ -342,10 +335,8 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) txb->seq == 1) txb->wire.userStatus = RXRPC_USERSTATUS_SERVICE_UPGRADE; - iov[0].iov_base = &txb->wire; - iov[0].iov_len = sizeof(txb->wire) + txb->len; - len = iov[0].iov_len; - iov_iter_kvec(&msg.msg_iter, WRITE, iov, 1, len); + len = txb->kvec[0].iov_len; + iov_iter_kvec(&msg.msg_iter, WRITE, txb->kvec, txb->nr_kvec, len); msg.msg_name = &call->peer->srx.transport; msg.msg_namelen = call->peer->srx.transport_len; diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index 25c7c1d4f4c6..1e81046ea8a6 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -362,7 +362,7 @@ static int rxrpc_send_data(struct rxrpc_sock *rx, if (!txb) goto maybe_error; - txb->offset = offset; + txb->offset = offset + sizeof(struct rxrpc_wire_header); txb->space -= offset; txb->space = min_t(size_t, chunk, txb->space); } @@ -374,8 +374,8 @@ static int rxrpc_send_data(struct rxrpc_sock *rx, size_t copy = min_t(size_t, txb->space, msg_data_left(msg)); _debug("add %zu", copy); - if (!copy_from_iter_full(txb->data + txb->offset, copy, - &msg->msg_iter)) + if (!copy_from_iter_full(txb->kvec[0].iov_base + txb->offset, + copy, &msg->msg_iter)) goto efault; _debug("added"); txb->space -= copy; @@ -404,6 +404,8 @@ static int rxrpc_send_data(struct rxrpc_sock *rx, if (ret < 0) goto out; + txb->kvec[0].iov_len += txb->len; + txb->len = txb->kvec[0].iov_len; rxrpc_queue_packet(rx, call, txb, notify_end_tx); txb = NULL; } diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c index 7273615afe94..91e96cda6dc7 100644 --- a/net/rxrpc/txbuf.c +++ b/net/rxrpc/txbuf.c @@ -36,6 +36,9 @@ struct rxrpc_txbuf *rxrpc_alloc_txbuf(struct rxrpc_call *call, u8 packet_type, txb->seq = call->tx_prepared + 1; txb->serial = 0; txb->cksum = 0; + txb->nr_kvec = 1; + txb->kvec[0].iov_base = &txb->wire; + txb->kvec[0].iov_len = sizeof(txb->wire); txb->wire.epoch = htonl(call->conn->proto.epoch); txb->wire.cid = htonl(call->cid); txb->wire.callNumber = htonl(call->call_id); From patchwork Mon Mar 4 08:43:07 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580188 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E1686199D9 for ; Mon, 4 Mar 2024 08:43:48 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541830; cv=none; b=T+TjyOCTHbrox9BPEbEKkuC4oN4ByhYjyIaXWgUyxm7aCBMMN4MU0lI3JohdEAIzjJUGuuA5PeBMO/a4eypOWigqC7r0PdEYbaD9pbwG1d/pG/aYX/cae+uIk/+/AIg4+tC8lOYEFEPCxxVuSFgQFtl1EHv5pdvXiq0kv5KGnrc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541830; c=relaxed/simple; bh=Fb3EEdeGG6hr40pcfHcbrnPc71Su9PqpTX7U2qzg2NY=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; 
From: David Howells To: netdev@vger.kernel.org Cc: David Howells , Marc Dionne , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 10/21] rxrpc: Split up the DATA packet transmission function Date: Mon, 4 Mar 2024 08:43:07 +0000 Message-ID: <20240304084322.705539-11-dhowells@redhat.com> In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com> References: <20240304084322.705539-1-dhowells@redhat.com> Split (sub)packet preparation and timestamping out of the DATA packet transmission function to make it easier to glue multiple txbufs together into a jumbo DATA packet. This will require preparation and timestamping of all the subpackets in a txbuf, and these functions provide convenient points to place the required iteration. Signed-off-by: David Howells cc: Marc Dionne cc: "David S.
Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- net/rxrpc/ar-internal.h | 1 - net/rxrpc/output.c | 98 +++++++++++++++++++++++++++++------------ 2 files changed, 69 insertions(+), 30 deletions(-) diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index c6731f43a2d5..54550ab62adc 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -1166,7 +1166,6 @@ static inline struct rxrpc_net *rxrpc_net(struct net *net) */ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb); int rxrpc_send_abort_packet(struct rxrpc_call *); -int rxrpc_send_data_packet(struct rxrpc_call *, struct rxrpc_txbuf *); void rxrpc_send_conn_abort(struct rxrpc_connection *conn); void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb); void rxrpc_send_keepalive(struct rxrpc_peer *); diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index f2b10c3e4cc2..25b8fc9aef97 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -313,37 +313,23 @@ int rxrpc_send_abort_packet(struct rxrpc_call *call) } /* - * send a packet through the transport endpoint + * Prepare a (sub)packet for transmission. */ -int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) +static void rxrpc_prepare_data_subpacket(struct rxrpc_call *call, struct rxrpc_txbuf *txb, + rxrpc_serial_t serial) { - struct rxrpc_connection *conn = call->conn; + struct rxrpc_wire_header *whdr = txb->kvec[0].iov_base; enum rxrpc_req_ack_trace why; - enum rxrpc_tx_point frag; - struct msghdr msg; - size_t len; - int ret, rtt_slot = -1; + struct rxrpc_connection *conn = call->conn; _enter("%x,{%d}", txb->seq, txb->len); - /* Each transmission of a Tx packet+ needs a new serial number */ - txb->serial = rxrpc_get_next_serial(conn); - txb->wire.serial = htonl(txb->serial); - txb->wire.cksum = txb->cksum; + txb->serial = serial; if (test_bit(RXRPC_CONN_PROBING_FOR_UPGRADE, &conn->flags) && txb->seq == 1) txb->wire.userStatus = RXRPC_USERSTATUS_SERVICE_UPGRADE; - len = txb->kvec[0].iov_len; - iov_iter_kvec(&msg.msg_iter, WRITE, txb->kvec, txb->nr_kvec, len); - - msg.msg_name = &call->peer->srx.transport; - msg.msg_namelen = call->peer->srx.transport_len; - msg.msg_control = NULL; - msg.msg_controllen = 0; - msg.msg_flags = 0; - /* If our RTT cache needs working on, request an ACK. Also request * ACKs if a DATA packet appears to have been lost. * @@ -376,6 +362,59 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) txb->flags |= RXRPC_REQUEST_ACK; dont_set_request_ack: + whdr->flags = txb->flags & RXRPC_TXBUF_WIRE_FLAGS; + whdr->serial = htonl(txb->serial); + whdr->cksum = txb->cksum; + + trace_rxrpc_tx_data(call, txb->seq, txb->serial, txb->flags, false); +} + +/* + * Prepare a packet for transmission. 
+ */ +static size_t rxrpc_prepare_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) +{ + rxrpc_serial_t serial; + + /* Each transmission of a Tx packet needs a new serial number */ + serial = rxrpc_get_next_serial(call->conn); + + rxrpc_prepare_data_subpacket(call, txb, serial); + + return txb->len; +} + +/* + * Set the times on a packet before transmission + */ +static int rxrpc_tstamp_data_packets(struct rxrpc_call *call, struct rxrpc_txbuf *txb) +{ + ktime_t tstamp = ktime_get_real(); + int rtt_slot = -1; + + txb->last_sent = tstamp; + if (txb->flags & RXRPC_REQUEST_ACK) + rtt_slot = rxrpc_begin_rtt_probe(call, txb->serial, rxrpc_rtt_tx_data); + + return rtt_slot; +} + +/* + * send a packet through the transport endpoint + */ +static int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) +{ + struct rxrpc_wire_header *whdr = txb->kvec[0].iov_base; + struct rxrpc_connection *conn = call->conn; + enum rxrpc_tx_point frag; + struct msghdr msg; + size_t len; + int ret, rtt_slot = -1; + + _enter("%x,{%d}", txb->seq, txb->len); + + len = rxrpc_prepare_data_packet(call, txb); + if (IS_ENABLED(CONFIG_AF_RXRPC_INJECT_LOSS)) { static int lose; if ((lose++ & 7) == 7) { @@ -386,7 +425,13 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) } } - trace_rxrpc_tx_data(call, txb->seq, txb->serial, txb->flags, false); + iov_iter_kvec(&msg.msg_iter, WRITE, txb->kvec, txb->nr_kvec, len); + + msg.msg_name = &call->peer->srx.transport; + msg.msg_namelen = call->peer->srx.transport_len; + msg.msg_control = NULL; + msg.msg_controllen = 0; + msg.msg_flags = 0; /* Track what we've attempted to transmit at least once so that the * retransmission algorithm doesn't try to resend what we haven't sent @@ -405,11 +450,8 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) frag = rxrpc_tx_point_call_data_nofrag; } - txb->wire.flags = txb->flags & RXRPC_TXBUF_WIRE_FLAGS; retry: - txb->last_sent = ktime_get_real(); - if (txb->flags & RXRPC_REQUEST_ACK) - rtt_slot = rxrpc_begin_rtt_probe(call, txb->serial, rxrpc_rtt_tx_data); + rtt_slot = rxrpc_tstamp_data_packets(call, txb); /* send the packet by UDP * - returns -EMSGSIZE if UDP would have to fragment the packet @@ -424,11 +466,9 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) if (ret < 0) { rxrpc_inc_stat(call->rxnet, stat_tx_data_send_fail); rxrpc_cancel_rtt_probe(call, txb->serial, rtt_slot); - trace_rxrpc_tx_fail(call->debug_id, txb->serial, ret, - rxrpc_tx_point_call_data_nofrag); + trace_rxrpc_tx_fail(call->debug_id, txb->serial, ret, frag); } else { - trace_rxrpc_tx_packet(call->debug_id, &txb->wire, - rxrpc_tx_point_call_data_nofrag); + trace_rxrpc_tx_packet(call->debug_id, whdr, frag); } rxrpc_tx_backoff(call, ret); From patchwork Mon Mar 4 08:43:08 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580189 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DE2AB374E6 for ; Mon, 4 Mar 2024 08:43:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541833; cv=none; 
From: David Howells To: netdev@vger.kernel.org Cc: David Howells , Marc Dionne , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 11/21] rxrpc: Don't pick values out of the wire header when setting up security Date: Mon, 4 Mar 2024 08:43:08 +0000 Message-ID: <20240304084322.705539-12-dhowells@redhat.com> In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com> References: <20240304084322.705539-1-dhowells@redhat.com> Don't pick values out of the wire header in rxkad when setting up DATA packet security, but rather use other sources. This makes it easier to get rid of txb->wire. Signed-off-by: David Howells cc: Marc Dionne cc: "David S.
Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- net/rxrpc/rxkad.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c index e451ac90bfee..ef0849c8329c 100644 --- a/net/rxrpc/rxkad.c +++ b/net/rxrpc/rxkad.c @@ -259,7 +259,7 @@ static int rxkad_secure_packet_auth(const struct rxrpc_call *call, _enter(""); - check = txb->seq ^ ntohl(txb->wire.callNumber); + check = txb->seq ^ call->call_id; hdr->data_size = htonl((u32)check << 16 | txb->len); txb->len += sizeof(struct rxkad_level1_hdr); @@ -302,7 +302,7 @@ static int rxkad_secure_packet_encrypt(const struct rxrpc_call *call, _enter(""); - check = txb->seq ^ ntohl(txb->wire.callNumber); + check = txb->seq ^ call->call_id; rxkhdr->data_size = htonl(txb->len | (u32)check << 16); rxkhdr->checksum = 0; @@ -362,9 +362,9 @@ static int rxkad_secure_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) memcpy(&iv, call->conn->rxkad.csum_iv.x, sizeof(iv)); /* calculate the security checksum */ - x = (ntohl(txb->wire.cid) & RXRPC_CHANNELMASK) << (32 - RXRPC_CIDSHIFT); + x = (call->cid & RXRPC_CHANNELMASK) << (32 - RXRPC_CIDSHIFT); x |= txb->seq & 0x3fffffff; - crypto.buf[0] = txb->wire.callNumber; + crypto.buf[0] = htonl(call->call_id); crypto.buf[1] = htonl(x); sg_init_one(&sg, crypto.buf, 8); From patchwork Mon Mar 4 08:43:09 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580190 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DE2E5374EC for ; Mon, 4 Mar 2024 08:43:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541833; cv=none; b=QuJA1hlfI1RpZ1b3RPVC9St8EMQdhm5P5IUJt7juAc5vIkb1vxcpn7Xi7UJpHR3Ak3VuW23qOibsFXTpuZC7e+wfAb5FI4Q/1grdn22LietfGkCrcqn/8/+zgR2Xf6QAuM01h1inVcVuGPAAs8poaK1rcpkQv8EvCCyYEcJZ3kk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541833; c=relaxed/simple; bh=HPT1/Otm7HC3lWXqMPZL6qX9ql3qTr8GnyMTVbuei7c=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=cJeC9YVSUXsQEToKoaG5w5uX84QYA52NfOHYN6HmFnuoV4taJGe7QRc+I4GF7xKsxxbtBbbNBAgTjnx+Sjud0V6OnmHMgWcNUIA9BWjZFSrrO4UpbeSV/0HpzByRniiV8HsyprPffqMtIbzfz3sXw4lml6pIBoaBh1xTXDfKZes= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=H7VpiWb3; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="H7VpiWb3" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1709541831; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: 
From: David Howells To: netdev@vger.kernel.org Cc: David Howells , Marc Dionne , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 12/21] rxrpc: Move rxrpc_send_ACK() to output.c with rxrpc_send_ack_packet() Date: Mon, 4 Mar 2024 08:43:09 +0000 Message-ID: <20240304084322.705539-13-dhowells@redhat.com> In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com> References: <20240304084322.705539-1-dhowells@redhat.com> Move rxrpc_send_ACK() to output.c so that it sits alongside rxrpc_send_ack_packet() prior to merging the two. Signed-off-by: David Howells cc: Marc Dionne cc: "David S.
Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- net/rxrpc/ar-internal.h | 4 ++-- net/rxrpc/call_event.c | 37 ------------------------------------- net/rxrpc/output.c | 39 ++++++++++++++++++++++++++++++++++++++- 3 files changed, 40 insertions(+), 40 deletions(-) diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 54550ab62adc..a8795ef0d669 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -873,7 +873,6 @@ int rxrpc_user_charge_accept(struct rxrpc_sock *, unsigned long); */ void rxrpc_propose_ping(struct rxrpc_call *call, u32 serial, enum rxrpc_propose_ack_trace why); -void rxrpc_send_ACK(struct rxrpc_call *, u8, rxrpc_serial_t, enum rxrpc_propose_ack_trace); void rxrpc_propose_delay_ACK(struct rxrpc_call *, rxrpc_serial_t, enum rxrpc_propose_ack_trace); void rxrpc_shrink_call_tx_buffer(struct rxrpc_call *); @@ -1164,7 +1163,8 @@ static inline struct rxrpc_net *rxrpc_net(struct net *net) /* * output.c */ -int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb); +void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason, + rxrpc_serial_t serial, enum rxrpc_propose_ack_trace why); int rxrpc_send_abort_packet(struct rxrpc_call *); void rxrpc_send_conn_abort(struct rxrpc_connection *conn); void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb); diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 1184518dcdb8..e19ea54dce54 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -61,43 +61,6 @@ void rxrpc_propose_delay_ACK(struct rxrpc_call *call, rxrpc_serial_t serial, trace_rxrpc_propose_ack(call, why, RXRPC_ACK_DELAY, serial); } -/* - * Queue an ACK for immediate transmission. - */ -void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason, - rxrpc_serial_t serial, enum rxrpc_propose_ack_trace why) -{ - struct rxrpc_txbuf *txb; - - if (test_bit(RXRPC_CALL_DISCONNECTED, &call->flags)) - return; - - rxrpc_inc_stat(call->rxnet, stat_tx_acks[ack_reason]); - - txb = rxrpc_alloc_txbuf(call, RXRPC_PACKET_TYPE_ACK, - rcu_read_lock_held() ? GFP_ATOMIC | __GFP_NOWARN : GFP_NOFS); - if (!txb) { - kleave(" = -ENOMEM"); - return; - } - - txb->ack_why = why; - txb->wire.seq = 0; - txb->wire.type = RXRPC_PACKET_TYPE_ACK; - txb->flags |= RXRPC_SLOW_START_OK; - txb->ack.bufferSpace = 0; - txb->ack.maxSkew = 0; - txb->ack.firstPacket = 0; - txb->ack.previousPacket = 0; - txb->ack.serial = htonl(serial); - txb->ack.reason = ack_reason; - txb->ack.nAcks = 0; - - trace_rxrpc_send_ack(call, why, ack_reason, serial); - rxrpc_send_ack_packet(call, txb); - rxrpc_put_txbuf(txb, rxrpc_txbuf_put_ack_tx); -} - /* * Handle congestion being detected by the retransmit timeout. */ diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 25b8fc9aef97..ec9ae9c6c492 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -185,7 +185,7 @@ static void rxrpc_cancel_rtt_probe(struct rxrpc_call *call, /* * Transmit an ACK packet. */ -int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) +static int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) { struct rxrpc_connection *conn; struct msghdr msg; @@ -248,6 +248,43 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) return ret; } +/* + * Queue an ACK for immediate transmission. 
+ */ +void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason, + rxrpc_serial_t serial, enum rxrpc_propose_ack_trace why) +{ + struct rxrpc_txbuf *txb; + + if (test_bit(RXRPC_CALL_DISCONNECTED, &call->flags)) + return; + + rxrpc_inc_stat(call->rxnet, stat_tx_acks[ack_reason]); + + txb = rxrpc_alloc_txbuf(call, RXRPC_PACKET_TYPE_ACK, + rcu_read_lock_held() ? GFP_ATOMIC | __GFP_NOWARN : GFP_NOFS); + if (!txb) { + kleave(" = -ENOMEM"); + return; + } + + txb->ack_why = why; + txb->wire.seq = 0; + txb->wire.type = RXRPC_PACKET_TYPE_ACK; + txb->flags |= RXRPC_SLOW_START_OK; + txb->ack.bufferSpace = 0; + txb->ack.maxSkew = 0; + txb->ack.firstPacket = 0; + txb->ack.previousPacket = 0; + txb->ack.serial = htonl(serial); + txb->ack.reason = ack_reason; + txb->ack.nAcks = 0; + + trace_rxrpc_send_ack(call, why, ack_reason, serial); + rxrpc_send_ack_packet(call, txb); + rxrpc_put_txbuf(txb, rxrpc_txbuf_put_ack_tx); +} + /* * Send an ABORT call packet. */ From patchwork Mon Mar 4 08:43:10 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580191 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E8C651B5B2 for ; Mon, 4 Mar 2024 08:43:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541838; cv=none; b=hHthGNPTIRcYWbLtjjmMfIUCqq0BhdAOHKzPlsEqnecXCIR/kPdKSsCtX8MOMOa22m2c4Kk72KcYk1paO8uhytolicj+Hkpo/kYYv9cxDmVzM0VMXilzkS2BmXkbvr64sReMNfvLPqglsSwwQKX9YTxhzQDt/w9SqDzlCsFuYZU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541838; c=relaxed/simple; bh=JSAb75K5u7gBHHU2jfoUTctUJDfxxq41wkWofdDPz0M=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Bqym45/3MKUN590mKLtFSuus8XmhNJvST1RgGqFauRWa+HPZrG8rLtMVRjgw46tigi0nfGoOqOVbe02IaHR60g18+2oluFaSWSI6DEEGl25QtZDbdpp6AXe7skIVK05xbxcMFHDyv//ocB4C//AyCzMvrfkqgrXePVsSJ5PMx2s= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=bJgHl5Fc; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="bJgHl5Fc" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1709541836; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=au5j5ZW3rj6O9c0BmuDpUr2RzwJwFV//kP+rZhiY3eA=; b=bJgHl5Fc8aL3jTdB9bUB5gP7oKJ6Mq2rhNXJvYwEa6I0CPE1N2/APiL7z/2vvngEEk9VWS q6cphsV3hM8eVMfGKPU620ykhiVzIv+C87GJBxv44k9H45raZywUdBftVO/7NjdZkrQKqG f9hRpPZg839nhbIKs2UM2VLZuRilXPM= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, 
cipher=TLS_AES_256_GCM_SHA384) id us-mta-558-5hm474SANgKBtLmFpJNzSg-1; Mon, 04 Mar 2024 03:43:52 -0500 X-MC-Unique: 5hm474SANgKBtLmFpJNzSg-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 34F96106D061; Mon, 4 Mar 2024 08:43:52 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.42.28.114]) by smtp.corp.redhat.com (Postfix) with ESMTP id 4C8602026D06; Mon, 4 Mar 2024 08:43:49 +0000 (UTC) From: David Howells To: netdev@vger.kernel.org Cc: David Howells , Marc Dionne , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 13/21] rxrpc: Use rxrpc_txbuf::kvec[0] instead of rxrpc_txbuf::wire Date: Mon, 4 Mar 2024 08:43:10 +0000 Message-ID: <20240304084322.705539-14-dhowells@redhat.com> In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com> References: <20240304084322.705539-1-dhowells@redhat.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.4 X-Patchwork-Delegate: kuba@kernel.org Use rxrpc_txbuf::kvec[0] instead of rxrpc_txbuf::wire to gain access to the Rx protocol header. In future, the wire header will be stored in a page frag, not in the rxrpc_txbuf struct making it possible to use MSG_SPLICE_PAGES when sending it. Similarly, access the ack header as being immediately after the wire header when filling out an ACK packet. Signed-off-by: David Howells cc: Marc Dionne cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- Notes: Changes ======= ver #2) - Remove unused variable. net/rxrpc/ar-internal.h | 5 +-- net/rxrpc/output.c | 79 ++++++++++++++++++++--------------------- net/rxrpc/txbuf.c | 27 +++++++------- 3 files changed, 56 insertions(+), 55 deletions(-) diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index a8795ef0d669..9ea4e7e9d9f7 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -804,6 +804,7 @@ struct rxrpc_txbuf { #define RXRPC_TXBUF_WIRE_FLAGS 0xff /* The wire protocol flags */ #define RXRPC_TXBUF_RESENT 0x100 /* Set if has been resent */ __be16 cksum; /* Checksum to go in header */ + unsigned short ack_rwind; /* ACK receive window */ u8 /*enum rxrpc_propose_ack_trace*/ ack_why; /* If ack, why */ u8 nr_kvec; struct kvec kvec[1]; @@ -812,11 +813,11 @@ struct rxrpc_txbuf { * that data[] aligns correctly for any crypto blocksize. */ u8 pad[64 - sizeof(struct rxrpc_wire_header)]; - struct rxrpc_wire_header wire; /* Network-ready header */ + struct rxrpc_wire_header _wire; /* Network-ready header */ union { u8 data[RXRPC_JUMBO_DATALEN]; /* Data packet */ struct { - struct rxrpc_ackpacket ack; + struct rxrpc_ackpacket _ack; DECLARE_FLEX_ARRAY(u8, acks); }; }; diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index ec9ae9c6c492..5398aa24bb8e 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -77,11 +77,13 @@ static void rxrpc_set_keepalive(struct rxrpc_call *call) /* * Fill out an ACK packet. 
*/ -static void rxrpc_fill_out_ack(struct rxrpc_connection *conn, - struct rxrpc_call *call, +static void rxrpc_fill_out_ack(struct rxrpc_call *call, struct rxrpc_txbuf *txb, - u16 *_rwind) + u8 ack_reason, + rxrpc_serial_t serial) { + struct rxrpc_wire_header *whdr = txb->kvec[0].iov_base; + struct rxrpc_ackpacket *ack = (struct rxrpc_ackpacket *)(whdr + 1); struct rxrpc_acktrailer trailer; unsigned int qsize, sack, wrap, to; rxrpc_seq_t window, wtop; @@ -97,15 +99,24 @@ static void rxrpc_fill_out_ack(struct rxrpc_connection *conn, window = call->ackr_window; wtop = call->ackr_wtop; sack = call->ackr_sack_base % RXRPC_SACK_SIZE; - txb->ack.firstPacket = htonl(window); - txb->ack.nAcks = wtop - window; + + whdr->seq = 0; + whdr->type = RXRPC_PACKET_TYPE_ACK; + txb->flags |= RXRPC_SLOW_START_OK; + ack->bufferSpace = 0; + ack->maxSkew = 0; + ack->firstPacket = htonl(window); + ack->previousPacket = htonl(call->rx_highest_seq); + ack->serial = htonl(serial); + ack->reason = ack_reason; + ack->nAcks = wtop - window; if (after(wtop, window)) { wrap = RXRPC_SACK_SIZE - sack; - to = min_t(unsigned int, txb->ack.nAcks, RXRPC_SACK_SIZE); + to = min_t(unsigned int, ack->nAcks, RXRPC_SACK_SIZE); - if (sack + txb->ack.nAcks <= RXRPC_SACK_SIZE) { - memcpy(txb->acks, call->ackr_sack_table + sack, txb->ack.nAcks); + if (sack + ack->nAcks <= RXRPC_SACK_SIZE) { + memcpy(txb->acks, call->ackr_sack_table + sack, ack->nAcks); } else { memcpy(txb->acks, call->ackr_sack_table + sack, wrap); memcpy(txb->acks + wrap, call->ackr_sack_table, @@ -115,16 +126,16 @@ static void rxrpc_fill_out_ack(struct rxrpc_connection *conn, ackp += to; } else if (before(wtop, window)) { pr_warn("ack window backward %x %x", window, wtop); - } else if (txb->ack.reason == RXRPC_ACK_DELAY) { - txb->ack.reason = RXRPC_ACK_IDLE; + } else if (ack->reason == RXRPC_ACK_DELAY) { + ack->reason = RXRPC_ACK_IDLE; } - mtu = conn->peer->if_mtu; - mtu -= conn->peer->hdrsize; + mtu = call->peer->if_mtu; + mtu -= call->peer->hdrsize; jmax = rxrpc_rx_jumbo_max; qsize = (window - 1) - call->rx_consumed; rsize = max_t(int, call->rx_winsize - qsize, 0); - *_rwind = rsize; + txb->ack_rwind = rsize; trailer.maxMTU = htonl(rxrpc_rx_mtu); trailer.ifMTU = htonl(mtu); trailer.rwind = htonl(rsize); @@ -134,8 +145,7 @@ static void rxrpc_fill_out_ack(struct rxrpc_connection *conn, *ackp++ = 0; *ackp++ = 0; memcpy(ackp, &trailer, sizeof(trailer)); - txb->kvec[0].iov_len = sizeof(txb->wire) + - sizeof(txb->ack) + txb->ack.nAcks + 3 + sizeof(trailer); + txb->kvec[0].iov_len += sizeof(*ack) + ack->nAcks + 3 + sizeof(trailer); txb->len = txb->kvec[0].iov_len; } @@ -187,10 +197,11 @@ static void rxrpc_cancel_rtt_probe(struct rxrpc_call *call, */ static int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) { + struct rxrpc_wire_header *whdr = txb->kvec[0].iov_base; struct rxrpc_connection *conn; + struct rxrpc_ackpacket *ack = (struct rxrpc_ackpacket *)(whdr + 1); struct msghdr msg; int ret, rtt_slot = -1; - u16 rwind; if (test_bit(RXRPC_CALL_DISCONNECTED, &call->flags)) return -ECONNRESET; @@ -203,27 +214,22 @@ static int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *tx msg.msg_controllen = 0; msg.msg_flags = 0; - if (txb->ack.reason == RXRPC_ACK_PING) + if (ack->reason == RXRPC_ACK_PING) txb->flags |= RXRPC_REQUEST_ACK; - txb->wire.flags = txb->flags & RXRPC_TXBUF_WIRE_FLAGS; - - rxrpc_fill_out_ack(conn, call, txb, &rwind); + whdr->flags = txb->flags & RXRPC_TXBUF_WIRE_FLAGS; txb->serial = rxrpc_get_next_serial(conn); - 
txb->wire.serial = htonl(txb->serial); + whdr->serial = htonl(txb->serial); trace_rxrpc_tx_ack(call->debug_id, txb->serial, - ntohl(txb->ack.firstPacket), - ntohl(txb->ack.serial), txb->ack.reason, txb->ack.nAcks, - rwind); + ntohl(ack->firstPacket), + ntohl(ack->serial), ack->reason, ack->nAcks, + txb->ack_rwind); - if (txb->ack.reason == RXRPC_ACK_PING) + if (ack->reason == RXRPC_ACK_PING) rtt_slot = rxrpc_begin_rtt_probe(call, txb->serial, rxrpc_rtt_tx_ping); rxrpc_inc_stat(call->rxnet, stat_tx_ack_send); - /* Grab the highest received seq as late as possible */ - txb->ack.previousPacket = htonl(call->rx_highest_seq); - iov_iter_kvec(&msg.msg_iter, WRITE, txb->kvec, txb->nr_kvec, txb->len); rxrpc_local_dont_fragment(conn->local, false); ret = do_udp_sendmsg(conn->local->socket, &msg, txb->len); @@ -232,7 +238,7 @@ static int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *tx trace_rxrpc_tx_fail(call->debug_id, txb->serial, ret, rxrpc_tx_point_call_ack); } else { - trace_rxrpc_tx_packet(call->debug_id, &txb->wire, + trace_rxrpc_tx_packet(call->debug_id, whdr, rxrpc_tx_point_call_ack); if (txb->flags & RXRPC_REQUEST_ACK) call->peer->rtt_last_req = ktime_get_real(); @@ -268,18 +274,9 @@ void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason, return; } - txb->ack_why = why; - txb->wire.seq = 0; - txb->wire.type = RXRPC_PACKET_TYPE_ACK; - txb->flags |= RXRPC_SLOW_START_OK; - txb->ack.bufferSpace = 0; - txb->ack.maxSkew = 0; - txb->ack.firstPacket = 0; - txb->ack.previousPacket = 0; - txb->ack.serial = htonl(serial); - txb->ack.reason = ack_reason; - txb->ack.nAcks = 0; + rxrpc_fill_out_ack(call, txb, ack_reason, serial); + txb->ack_why = why; trace_rxrpc_send_ack(call, why, ack_reason, serial); rxrpc_send_ack_packet(call, txb); rxrpc_put_txbuf(txb, rxrpc_txbuf_put_ack_tx); @@ -365,7 +362,7 @@ static void rxrpc_prepare_data_subpacket(struct rxrpc_call *call, struct rxrpc_t if (test_bit(RXRPC_CONN_PROBING_FOR_UPGRADE, &conn->flags) && txb->seq == 1) - txb->wire.userStatus = RXRPC_USERSTATUS_SERVICE_UPGRADE; + whdr->userStatus = RXRPC_USERSTATUS_SERVICE_UPGRADE; /* If our RTT cache needs working on, request an ACK. Also request * ACKs if a DATA packet appears to have been lost. 
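As a small illustration of the access pattern this patch switches to (a sketch only, not code from the patch): once kvec[0] carries the wire header, the ACK header is reached by stepping past that header rather than through the old txb->wire and txb->ack members. The helper name below is made up; the structure and field names follow the diff above and are assumed to come from rxrpc's internal headers.

#include "ar-internal.h"	/* kernel-internal rxrpc definitions (assumed) */

/* Sketch: locate the ACK header, which is laid out immediately after the
 * wire header in the buffer that kvec[0] points at.
 */
static struct rxrpc_ackpacket *sketch_txb_ack_hdr(struct rxrpc_txbuf *txb)
{
	struct rxrpc_wire_header *whdr = txb->kvec[0].iov_base;

	return (struct rxrpc_ackpacket *)(whdr + 1);
}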
diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c index 91e96cda6dc7..2e8c5b15a84f 100644 --- a/net/rxrpc/txbuf.c +++ b/net/rxrpc/txbuf.c @@ -19,10 +19,13 @@ atomic_t rxrpc_nr_txbuf; struct rxrpc_txbuf *rxrpc_alloc_txbuf(struct rxrpc_call *call, u8 packet_type, gfp_t gfp) { + struct rxrpc_wire_header *whdr; struct rxrpc_txbuf *txb; txb = kmalloc(sizeof(*txb), gfp); if (txb) { + whdr = &txb->_wire; + INIT_LIST_HEAD(&txb->call_link); INIT_LIST_HEAD(&txb->tx_link); refcount_set(&txb->ref, 1); @@ -37,18 +40,18 @@ struct rxrpc_txbuf *rxrpc_alloc_txbuf(struct rxrpc_call *call, u8 packet_type, txb->serial = 0; txb->cksum = 0; txb->nr_kvec = 1; - txb->kvec[0].iov_base = &txb->wire; - txb->kvec[0].iov_len = sizeof(txb->wire); - txb->wire.epoch = htonl(call->conn->proto.epoch); - txb->wire.cid = htonl(call->cid); - txb->wire.callNumber = htonl(call->call_id); - txb->wire.seq = htonl(txb->seq); - txb->wire.type = packet_type; - txb->wire.flags = 0; - txb->wire.userStatus = 0; - txb->wire.securityIndex = call->security_ix; - txb->wire._rsvd = 0; - txb->wire.serviceId = htons(call->dest_srx.srx_service); + txb->kvec[0].iov_base = whdr; + txb->kvec[0].iov_len = sizeof(*whdr); + whdr->epoch = htonl(call->conn->proto.epoch); + whdr->cid = htonl(call->cid); + whdr->callNumber = htonl(call->call_id); + whdr->seq = htonl(txb->seq); + whdr->type = packet_type; + whdr->flags = 0; + whdr->userStatus = 0; + whdr->securityIndex = call->security_ix; + whdr->_rsvd = 0; + whdr->serviceId = htons(call->dest_srx.srx_service); trace_rxrpc_txbuf(txb->debug_id, txb->call_debug_id, txb->seq, 1, From patchwork Mon Mar 4 08:43:11 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580192 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3E96B1B801 for ; Mon, 4 Mar 2024 08:44:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541843; cv=none; b=aDsla73ZowRcjwuSomALGWZY0ZcMVDjLqcL19ZCWi8pa/aQLK8zBznsUfsM6k00RoKnk0JImNr4iREFqjDMfpVyL2aqZv9pWlOOTkR3Vi0wNVRt2fJk3NDR8awIQjzXCAoZuYXI6jFZLHuEyd3aVmJygboCgEJwAUIJ/qnTtWhs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541843; c=relaxed/simple; bh=OJcJA2NR+2/fo1YKX1vNDZR/ZNy+Z7pPwFh8+cMi/q4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=YDJG9brsZFi11X4aWXNRfe5U6ErA0zszM/Yf5da0ScSrBkpmSgsTgbKL3ammrq4mY0IQ7Tnp5HLxwnfRHM7CVVjahyTWBjX4MUrkQJCyxz971Sc9GzMZVYTrY5PfJvluF1XzG6bp+fyUwxu/b2DYAg1GcSVkbxW9xaur47RzZj0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=gIgukT6o; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="gIgukT6o" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=redhat.com; s=mimecast20190719; t=1709541840; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=/uLl3dHGB02GCDwAqlUq57TZldij+SfvwDd2Vrtxbq8=; b=gIgukT6oi0iYUK2lupA68ba/s/fh07b62RPNKpo9f9125+gdmcOlyfuqKiWcbUpe+7wYOu E4smZwucUbOj22ONzA5w/I7kvpDbvJ9g07hSEGbtGKFa3FyrIDAXMm+q1V7FnKzJTbl5Ha LyOnW9AizxszCdywxujLIo/PjBfE7NY= Received: from mimecast-mx02.redhat.com (mx-ext.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-584-q7N4skMbPZqmrR2vaiAOBg-1; Mon, 04 Mar 2024 03:43:56 -0500 X-MC-Unique: q7N4skMbPZqmrR2vaiAOBg-1 Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com [10.11.54.9]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id CC6E91C03D88; Mon, 4 Mar 2024 08:43:55 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.42.28.114]) by smtp.corp.redhat.com (Postfix) with ESMTP id 60C49492BC8; Mon, 4 Mar 2024 08:43:53 +0000 (UTC) From: David Howells To: netdev@vger.kernel.org Cc: David Howells , Marc Dionne , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 14/21] rxrpc: Do zerocopy using MSG_SPLICE_PAGES and page frags Date: Mon, 4 Mar 2024 08:43:11 +0000 Message-ID: <20240304084322.705539-15-dhowells@redhat.com> In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com> References: <20240304084322.705539-1-dhowells@redhat.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.9 X-Patchwork-Delegate: kuba@kernel.org Switch from keeping the transmission buffers in the rxrpc_txbuf struct and allocated from the slab, to allocating them using page fragment allocators (which uses raw pages), thereby allowing them to be passed to MSG_SPLICE_PAGES and avoid copying into the UDP buffers. Signed-off-by: David Howells cc: Marc Dionne cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- net/rxrpc/ar-internal.h | 32 +++---- net/rxrpc/conn_object.c | 4 + net/rxrpc/insecure.c | 11 +-- net/rxrpc/local_object.c | 3 + net/rxrpc/output.c | 65 +++++++------- net/rxrpc/rxkad.c | 47 +++++----- net/rxrpc/sendmsg.c | 22 ++--- net/rxrpc/txbuf.c | 180 ++++++++++++++++++++++++++++++--------- 8 files changed, 219 insertions(+), 145 deletions(-) diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 9ea4e7e9d9f7..47f4689379ca 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -248,10 +248,9 @@ struct rxrpc_security { struct rxrpc_key_token *); /* Work out how much data we can store in a packet, given an estimate - * of the amount of data remaining. + * of the amount of data remaining and allocate a data buffer. 
*/ - int (*how_much_data)(struct rxrpc_call *, size_t, - size_t *, size_t *, size_t *); + struct rxrpc_txbuf *(*alloc_txbuf)(struct rxrpc_call *call, size_t remaining, gfp_t gfp); /* impose security on a packet */ int (*secure_packet)(struct rxrpc_call *, struct rxrpc_txbuf *); @@ -292,6 +291,7 @@ struct rxrpc_local { struct socket *socket; /* my UDP socket */ struct task_struct *io_thread; struct completion io_thread_ready; /* Indication that the I/O thread started */ + struct page_frag_cache tx_alloc; /* Tx control packet allocation (I/O thread only) */ struct rxrpc_sock *service; /* Service(s) listening on this endpoint */ #ifdef CONFIG_AF_RXRPC_INJECT_RX_DELAY struct sk_buff_head rx_delay_queue; /* Delay injection queue */ @@ -500,6 +500,8 @@ struct rxrpc_connection { struct list_head proc_link; /* link in procfs list */ struct list_head link; /* link in master connection list */ struct sk_buff_head rx_queue; /* received conn-level packets */ + struct page_frag_cache tx_data_alloc; /* Tx DATA packet allocation */ + struct mutex tx_data_alloc_lock; struct mutex security_lock; /* Lock for security management */ const struct rxrpc_security *security; /* applied security module */ @@ -788,7 +790,6 @@ struct rxrpc_send_params { * Buffer of data to be output as a packet. */ struct rxrpc_txbuf { - struct rcu_head rcu; struct list_head call_link; /* Link in call->tx_sendmsg/tx_buffer */ struct list_head tx_link; /* Link in live Enc queue or Tx queue */ ktime_t last_sent; /* Time at which last transmitted */ @@ -806,22 +807,8 @@ struct rxrpc_txbuf { __be16 cksum; /* Checksum to go in header */ unsigned short ack_rwind; /* ACK receive window */ u8 /*enum rxrpc_propose_ack_trace*/ ack_why; /* If ack, why */ - u8 nr_kvec; - struct kvec kvec[1]; - struct { - /* The packet for encrypting and DMA'ing. We align it such - * that data[] aligns correctly for any crypto blocksize. 
- */ - u8 pad[64 - sizeof(struct rxrpc_wire_header)]; - struct rxrpc_wire_header _wire; /* Network-ready header */ - union { - u8 data[RXRPC_JUMBO_DATALEN]; /* Data packet */ - struct { - struct rxrpc_ackpacket _ack; - DECLARE_FLEX_ARRAY(u8, acks); - }; - }; - } __aligned(64); + u8 nr_kvec; /* Amount of kvec[] used */ + struct kvec kvec[3]; }; static inline bool rxrpc_sending_to_server(const struct rxrpc_txbuf *txb) @@ -1299,8 +1286,9 @@ static inline void rxrpc_sysctl_exit(void) {} * txbuf.c */ extern atomic_t rxrpc_nr_txbuf; -struct rxrpc_txbuf *rxrpc_alloc_txbuf(struct rxrpc_call *call, u8 packet_type, - gfp_t gfp); +struct rxrpc_txbuf *rxrpc_alloc_data_txbuf(struct rxrpc_call *call, size_t data_size, + size_t data_align, gfp_t gfp); +struct rxrpc_txbuf *rxrpc_alloc_ack_txbuf(struct rxrpc_call *call, size_t sack_size); void rxrpc_get_txbuf(struct rxrpc_txbuf *txb, enum rxrpc_txbuf_trace what); void rxrpc_see_txbuf(struct rxrpc_txbuf *txb, enum rxrpc_txbuf_trace what); void rxrpc_put_txbuf(struct rxrpc_txbuf *txb, enum rxrpc_txbuf_trace what); diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index df8a271948a1..0af4642aeec4 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -68,6 +68,7 @@ struct rxrpc_connection *rxrpc_alloc_connection(struct rxrpc_net *rxnet, INIT_LIST_HEAD(&conn->proc_link); INIT_LIST_HEAD(&conn->link); mutex_init(&conn->security_lock); + mutex_init(&conn->tx_data_alloc_lock); skb_queue_head_init(&conn->rx_queue); conn->rxnet = rxnet; conn->security = &rxrpc_no_security; @@ -341,6 +342,9 @@ static void rxrpc_clean_up_connection(struct work_struct *work) */ rxrpc_purge_queue(&conn->rx_queue); + if (conn->tx_data_alloc.va) + __page_frag_cache_drain(virt_to_page(conn->tx_data_alloc.va), + conn->tx_data_alloc.pagecnt_bias); call_rcu(&conn->rcu, rxrpc_rcu_free_connection); } diff --git a/net/rxrpc/insecure.c b/net/rxrpc/insecure.c index 34353b6e584b..f2701068ed9e 100644 --- a/net/rxrpc/insecure.c +++ b/net/rxrpc/insecure.c @@ -15,14 +15,11 @@ static int none_init_connection_security(struct rxrpc_connection *conn, } /* - * Work out how much data we can put in an unsecured packet. + * Allocate an appropriately sized buffer for the amount of data remaining. 
*/ -static int none_how_much_data(struct rxrpc_call *call, size_t remain, - size_t *_buf_size, size_t *_data_size, size_t *_offset) +static struct rxrpc_txbuf *none_alloc_txbuf(struct rxrpc_call *call, size_t remain, gfp_t gfp) { - *_buf_size = *_data_size = min_t(size_t, remain, RXRPC_JUMBO_DATALEN); - *_offset = 0; - return 0; + return rxrpc_alloc_data_txbuf(call, min_t(size_t, remain, RXRPC_JUMBO_DATALEN), 0, gfp); } static int none_secure_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) @@ -79,7 +76,7 @@ const struct rxrpc_security rxrpc_no_security = { .exit = none_exit, .init_connection_security = none_init_connection_security, .free_call_crypto = none_free_call_crypto, - .how_much_data = none_how_much_data, + .alloc_txbuf = none_alloc_txbuf, .secure_packet = none_secure_packet, .verify_packet = none_verify_packet, .respond_to_challenge = none_respond_to_challenge, diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c index 34d307368135..504453c688d7 100644 --- a/net/rxrpc/local_object.c +++ b/net/rxrpc/local_object.c @@ -452,6 +452,9 @@ void rxrpc_destroy_local(struct rxrpc_local *local) #endif rxrpc_purge_queue(&local->rx_queue); rxrpc_purge_client_connections(local); + if (local->tx_alloc.va) + __page_frag_cache_drain(virt_to_page(local->tx_alloc.va), + local->tx_alloc.pagecnt_bias); } /* diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 5398aa24bb8e..0a317498b8e0 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -83,18 +83,16 @@ static void rxrpc_fill_out_ack(struct rxrpc_call *call, rxrpc_serial_t serial) { struct rxrpc_wire_header *whdr = txb->kvec[0].iov_base; + struct rxrpc_acktrailer *trailer = txb->kvec[2].iov_base + 3; struct rxrpc_ackpacket *ack = (struct rxrpc_ackpacket *)(whdr + 1); - struct rxrpc_acktrailer trailer; unsigned int qsize, sack, wrap, to; rxrpc_seq_t window, wtop; int rsize; u32 mtu, jmax; - u8 *ackp = txb->acks; + u8 *filler = txb->kvec[2].iov_base; + u8 *sackp = txb->kvec[1].iov_base; - call->ackr_nr_unacked = 0; - atomic_set(&call->ackr_nr_consumed, 0); rxrpc_inc_stat(call->rxnet, stat_tx_ack_fill); - clear_bit(RXRPC_CALL_RX_IS_IDLE, &call->flags); window = call->ackr_window; wtop = call->ackr_wtop; @@ -110,20 +108,27 @@ static void rxrpc_fill_out_ack(struct rxrpc_call *call, ack->serial = htonl(serial); ack->reason = ack_reason; ack->nAcks = wtop - window; + filler[0] = 0; + filler[1] = 0; + filler[2] = 0; + + if (ack_reason == RXRPC_ACK_PING) + txb->flags |= RXRPC_REQUEST_ACK; if (after(wtop, window)) { + txb->len += ack->nAcks; + txb->kvec[1].iov_base = sackp; + txb->kvec[1].iov_len = ack->nAcks; + wrap = RXRPC_SACK_SIZE - sack; to = min_t(unsigned int, ack->nAcks, RXRPC_SACK_SIZE); if (sack + ack->nAcks <= RXRPC_SACK_SIZE) { - memcpy(txb->acks, call->ackr_sack_table + sack, ack->nAcks); + memcpy(sackp, call->ackr_sack_table + sack, ack->nAcks); } else { - memcpy(txb->acks, call->ackr_sack_table + sack, wrap); - memcpy(txb->acks + wrap, call->ackr_sack_table, - to - wrap); + memcpy(sackp, call->ackr_sack_table + sack, wrap); + memcpy(sackp + wrap, call->ackr_sack_table, to - wrap); } - - ackp += to; } else if (before(wtop, window)) { pr_warn("ack window backward %x %x", window, wtop); } else if (ack->reason == RXRPC_ACK_DELAY) { @@ -135,18 +140,11 @@ static void rxrpc_fill_out_ack(struct rxrpc_call *call, jmax = rxrpc_rx_jumbo_max; qsize = (window - 1) - call->rx_consumed; rsize = max_t(int, call->rx_winsize - qsize, 0); - txb->ack_rwind = rsize; - trailer.maxMTU = htonl(rxrpc_rx_mtu); - trailer.ifMTU = 
htonl(mtu); - trailer.rwind = htonl(rsize); - trailer.jumbo_max = htonl(jmax); - - *ackp++ = 0; - *ackp++ = 0; - *ackp++ = 0; - memcpy(ackp, &trailer, sizeof(trailer)); - txb->kvec[0].iov_len += sizeof(*ack) + ack->nAcks + 3 + sizeof(trailer); - txb->len = txb->kvec[0].iov_len; + txb->ack_rwind = rsize; + trailer->maxMTU = htonl(rxrpc_rx_mtu); + trailer->ifMTU = htonl(mtu); + trailer->rwind = htonl(rsize); + trailer->jumbo_max = htonl(jmax); } /* @@ -195,7 +193,7 @@ static void rxrpc_cancel_rtt_probe(struct rxrpc_call *call, /* * Transmit an ACK packet. */ -static int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) +static void rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) { struct rxrpc_wire_header *whdr = txb->kvec[0].iov_base; struct rxrpc_connection *conn; @@ -204,7 +202,7 @@ static int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *tx int ret, rtt_slot = -1; if (test_bit(RXRPC_CALL_DISCONNECTED, &call->flags)) - return -ECONNRESET; + return; conn = call->conn; @@ -212,10 +210,8 @@ static int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *tx msg.msg_namelen = call->peer->srx.transport_len; msg.msg_control = NULL; msg.msg_controllen = 0; - msg.msg_flags = 0; + msg.msg_flags = MSG_SPLICE_PAGES; - if (ack->reason == RXRPC_ACK_PING) - txb->flags |= RXRPC_REQUEST_ACK; whdr->flags = txb->flags & RXRPC_TXBUF_WIRE_FLAGS; txb->serial = rxrpc_get_next_serial(conn); @@ -250,8 +246,6 @@ static int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *tx rxrpc_cancel_rtt_probe(call, txb->serial, rtt_slot); rxrpc_set_keepalive(call); } - - return ret; } /* @@ -267,16 +261,19 @@ void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason, rxrpc_inc_stat(call->rxnet, stat_tx_acks[ack_reason]); - txb = rxrpc_alloc_txbuf(call, RXRPC_PACKET_TYPE_ACK, - rcu_read_lock_held() ? GFP_ATOMIC | __GFP_NOWARN : GFP_NOFS); + txb = rxrpc_alloc_ack_txbuf(call, call->ackr_wtop - call->ackr_window); if (!txb) { kleave(" = -ENOMEM"); return; } + txb->ack_why = why; + rxrpc_fill_out_ack(call, txb, ack_reason, serial); + call->ackr_nr_unacked = 0; + atomic_set(&call->ackr_nr_consumed, 0); + clear_bit(RXRPC_CALL_RX_IS_IDLE, &call->flags); - txb->ack_why = why; trace_rxrpc_send_ack(call, why, ack_reason, serial); rxrpc_send_ack_packet(call, txb); rxrpc_put_txbuf(txb, rxrpc_txbuf_put_ack_tx); @@ -465,7 +462,7 @@ static int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *t msg.msg_namelen = call->peer->srx.transport_len; msg.msg_control = NULL; msg.msg_controllen = 0; - msg.msg_flags = 0; + msg.msg_flags = MSG_SPLICE_PAGES; /* Track what we've attempted to transmit at least once so that the * retransmission algorithm doesn't try to resend what we haven't sent diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c index ef0849c8329c..e540501a20ad 100644 --- a/net/rxrpc/rxkad.c +++ b/net/rxrpc/rxkad.c @@ -145,16 +145,17 @@ static int rxkad_init_connection_security(struct rxrpc_connection *conn, /* * Work out how much data we can put in a packet. 
*/ -static int rxkad_how_much_data(struct rxrpc_call *call, size_t remain, - size_t *_buf_size, size_t *_data_size, size_t *_offset) +static struct rxrpc_txbuf *rxkad_alloc_txbuf(struct rxrpc_call *call, size_t remain, gfp_t gfp) { - size_t shdr, buf_size, chunk; + struct rxrpc_txbuf *txb; + size_t shdr, space; + + remain = min(remain, 65535 - sizeof(struct rxrpc_wire_header)); switch (call->conn->security_level) { default: - buf_size = chunk = min_t(size_t, remain, RXRPC_JUMBO_DATALEN); - shdr = 0; - goto out; + space = min_t(size_t, remain, RXRPC_JUMBO_DATALEN); + return rxrpc_alloc_data_txbuf(call, space, 0, GFP_KERNEL); case RXRPC_SECURITY_AUTH: shdr = sizeof(struct rxkad_level1_hdr); break; @@ -163,17 +164,15 @@ static int rxkad_how_much_data(struct rxrpc_call *call, size_t remain, break; } - buf_size = round_down(RXRPC_JUMBO_DATALEN, RXKAD_ALIGN); - - chunk = buf_size - shdr; - if (remain < chunk) - buf_size = round_up(shdr + remain, RXKAD_ALIGN); + space = min_t(size_t, round_down(RXRPC_JUMBO_DATALEN, RXKAD_ALIGN), remain + shdr); + space = round_up(space, RXKAD_ALIGN); -out: - *_buf_size = buf_size; - *_data_size = chunk; - *_offset = shdr; - return 0; + txb = rxrpc_alloc_data_txbuf(call, space, RXKAD_ALIGN, GFP_KERNEL); + if (txb) { + txb->offset += shdr; + txb->space -= shdr; + } + return txb; } /* @@ -251,7 +250,8 @@ static int rxkad_secure_packet_auth(const struct rxrpc_call *call, struct rxrpc_txbuf *txb, struct skcipher_request *req) { - struct rxkad_level1_hdr *hdr = (void *)txb->data; + struct rxrpc_wire_header *whdr = txb->kvec[0].iov_base; + struct rxkad_level1_hdr *hdr = (void *)(whdr + 1); struct rxrpc_crypt iv; struct scatterlist sg; size_t pad; @@ -267,14 +267,14 @@ static int rxkad_secure_packet_auth(const struct rxrpc_call *call, pad = RXKAD_ALIGN - pad; pad &= RXKAD_ALIGN - 1; if (pad) { - memset(txb->data + txb->offset, 0, pad); + memset(txb->kvec[0].iov_base + txb->offset, 0, pad); txb->len += pad; } /* start the encryption afresh */ memset(&iv, 0, sizeof(iv)); - sg_init_one(&sg, txb->data, 8); + sg_init_one(&sg, hdr, 8); skcipher_request_set_sync_tfm(req, call->conn->rxkad.cipher); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg, &sg, 8, iv.x); @@ -293,7 +293,8 @@ static int rxkad_secure_packet_encrypt(const struct rxrpc_call *call, struct skcipher_request *req) { const struct rxrpc_key_token *token; - struct rxkad_level2_hdr *rxkhdr = (void *)txb->data; + struct rxrpc_wire_header *whdr = txb->kvec[0].iov_base; + struct rxkad_level2_hdr *rxkhdr = (void *)(whdr + 1); struct rxrpc_crypt iv; struct scatterlist sg; size_t pad; @@ -312,7 +313,7 @@ static int rxkad_secure_packet_encrypt(const struct rxrpc_call *call, pad = RXKAD_ALIGN - pad; pad &= RXKAD_ALIGN - 1; if (pad) { - memset(txb->data + txb->offset, 0, pad); + memset(txb->kvec[0].iov_base + txb->offset, 0, pad); txb->len += pad; } @@ -320,7 +321,7 @@ static int rxkad_secure_packet_encrypt(const struct rxrpc_call *call, token = call->conn->key->payload.data[0]; memcpy(&iv, token->kad->session_key, sizeof(iv)); - sg_init_one(&sg, txb->data, txb->len); + sg_init_one(&sg, rxkhdr, txb->len); skcipher_request_set_sync_tfm(req, call->conn->rxkad.cipher); skcipher_request_set_callback(req, 0, NULL, NULL); skcipher_request_set_crypt(req, &sg, &sg, txb->len, iv.x); @@ -1255,7 +1256,7 @@ const struct rxrpc_security rxkad = { .free_preparse_server_key = rxkad_free_preparse_server_key, .destroy_server_key = rxkad_destroy_server_key, .init_connection_security = 
rxkad_init_connection_security, - .how_much_data = rxkad_how_much_data, + .alloc_txbuf = rxkad_alloc_txbuf, .secure_packet = rxkad_secure_packet, .verify_packet = rxkad_verify_packet, .free_call_crypto = rxkad_free_call_crypto, diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index 1e81046ea8a6..4d152f06b039 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -336,7 +336,7 @@ static int rxrpc_send_data(struct rxrpc_sock *rx, do { if (!txb) { - size_t remain, bufsize, chunk, offset; + size_t remain; _debug("alloc"); @@ -348,23 +348,11 @@ static int rxrpc_send_data(struct rxrpc_sock *rx, * region (enc blocksize), but the trailer is not. */ remain = more ? INT_MAX : msg_data_left(msg); - ret = call->conn->security->how_much_data(call, remain, - &bufsize, &chunk, &offset); - if (ret < 0) - goto maybe_error; - - _debug("SIZE: %zu/%zu @%zu", chunk, bufsize, offset); - - /* create a buffer that we can retain until it's ACK'd */ - ret = -ENOMEM; - txb = rxrpc_alloc_txbuf(call, RXRPC_PACKET_TYPE_DATA, - GFP_KERNEL); - if (!txb) + txb = call->conn->security->alloc_txbuf(call, remain, sk->sk_allocation); + if (IS_ERR(txb)) { + ret = PTR_ERR(txb); goto maybe_error; - - txb->offset = offset + sizeof(struct rxrpc_wire_header); - txb->space -= offset; - txb->space = min_t(size_t, chunk, txb->space); + } } _debug("append"); diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c index 2e8c5b15a84f..b2a82ab756c2 100644 --- a/net/rxrpc/txbuf.c +++ b/net/rxrpc/txbuf.c @@ -14,53 +14,146 @@ static atomic_t rxrpc_txbuf_debug_ids; atomic_t rxrpc_nr_txbuf; /* - * Allocate and partially initialise an I/O request structure. + * Allocate and partially initialise a data transmission buffer. */ -struct rxrpc_txbuf *rxrpc_alloc_txbuf(struct rxrpc_call *call, u8 packet_type, - gfp_t gfp) +struct rxrpc_txbuf *rxrpc_alloc_data_txbuf(struct rxrpc_call *call, size_t data_size, + size_t data_align, gfp_t gfp) { struct rxrpc_wire_header *whdr; struct rxrpc_txbuf *txb; + size_t total, hoff = 0; + void *buf; txb = kmalloc(sizeof(*txb), gfp); - if (txb) { - whdr = &txb->_wire; - - INIT_LIST_HEAD(&txb->call_link); - INIT_LIST_HEAD(&txb->tx_link); - refcount_set(&txb->ref, 1); - txb->call_debug_id = call->debug_id; - txb->debug_id = atomic_inc_return(&rxrpc_txbuf_debug_ids); - txb->space = sizeof(txb->data); - txb->len = 0; - txb->offset = 0; - txb->flags = call->conn->out_clientflag; - txb->ack_why = 0; - txb->seq = call->tx_prepared + 1; - txb->serial = 0; - txb->cksum = 0; - txb->nr_kvec = 1; - txb->kvec[0].iov_base = whdr; - txb->kvec[0].iov_len = sizeof(*whdr); - whdr->epoch = htonl(call->conn->proto.epoch); - whdr->cid = htonl(call->cid); - whdr->callNumber = htonl(call->call_id); - whdr->seq = htonl(txb->seq); - whdr->type = packet_type; - whdr->flags = 0; - whdr->userStatus = 0; - whdr->securityIndex = call->security_ix; - whdr->_rsvd = 0; - whdr->serviceId = htons(call->dest_srx.srx_service); - - trace_rxrpc_txbuf(txb->debug_id, - txb->call_debug_id, txb->seq, 1, - packet_type == RXRPC_PACKET_TYPE_DATA ? 
- rxrpc_txbuf_alloc_data : - rxrpc_txbuf_alloc_ack); - atomic_inc(&rxrpc_nr_txbuf); + if (!txb) + return NULL; + + if (data_align) + hoff = round_up(sizeof(*whdr), data_align) - sizeof(*whdr); + total = hoff + sizeof(*whdr) + data_size; + + mutex_lock(&call->conn->tx_data_alloc_lock); + buf = page_frag_alloc_align(&call->conn->tx_data_alloc, total, gfp, + ~(data_align - 1) & ~(L1_CACHE_BYTES - 1)); + mutex_unlock(&call->conn->tx_data_alloc_lock); + if (!buf) { + kfree(txb); + return NULL; + } + + whdr = buf + hoff; + + INIT_LIST_HEAD(&txb->call_link); + INIT_LIST_HEAD(&txb->tx_link); + refcount_set(&txb->ref, 1); + txb->last_sent = KTIME_MIN; + txb->call_debug_id = call->debug_id; + txb->debug_id = atomic_inc_return(&rxrpc_txbuf_debug_ids); + txb->space = data_size; + txb->len = 0; + txb->offset = sizeof(*whdr); + txb->flags = call->conn->out_clientflag; + txb->ack_why = 0; + txb->seq = call->tx_prepared + 1; + txb->serial = 0; + txb->cksum = 0; + txb->nr_kvec = 1; + txb->kvec[0].iov_base = whdr; + txb->kvec[0].iov_len = sizeof(*whdr); + + whdr->epoch = htonl(call->conn->proto.epoch); + whdr->cid = htonl(call->cid); + whdr->callNumber = htonl(call->call_id); + whdr->seq = htonl(txb->seq); + whdr->type = RXRPC_PACKET_TYPE_DATA; + whdr->flags = 0; + whdr->userStatus = 0; + whdr->securityIndex = call->security_ix; + whdr->_rsvd = 0; + whdr->serviceId = htons(call->dest_srx.srx_service); + + trace_rxrpc_txbuf(txb->debug_id, txb->call_debug_id, txb->seq, 1, + rxrpc_txbuf_alloc_data); + + atomic_inc(&rxrpc_nr_txbuf); + return txb; +} + +/* + * Allocate and partially initialise an ACK packet. + */ +struct rxrpc_txbuf *rxrpc_alloc_ack_txbuf(struct rxrpc_call *call, size_t sack_size) +{ + struct rxrpc_wire_header *whdr; + struct rxrpc_acktrailer *trailer; + struct rxrpc_ackpacket *ack; + struct rxrpc_txbuf *txb; + gfp_t gfp = rcu_read_lock_held() ? 
GFP_ATOMIC | __GFP_NOWARN : GFP_NOFS; + void *buf, *buf2 = NULL; + u8 *filler; + + txb = kmalloc(sizeof(*txb), gfp); + if (!txb) + return NULL; + + buf = page_frag_alloc(&call->local->tx_alloc, + sizeof(*whdr) + sizeof(*ack) + 1 + 3 + sizeof(*trailer), gfp); + if (!buf) { + kfree(txb); + return NULL; + } + + if (sack_size) { + buf2 = page_frag_alloc(&call->local->tx_alloc, sack_size, gfp); + if (!buf2) { + page_frag_free(buf); + kfree(txb); + return NULL; + } } + whdr = buf; + ack = buf + sizeof(*whdr); + filler = buf + sizeof(*whdr) + sizeof(*ack) + 1; + trailer = buf + sizeof(*whdr) + sizeof(*ack) + 1 + 3; + + INIT_LIST_HEAD(&txb->call_link); + INIT_LIST_HEAD(&txb->tx_link); + refcount_set(&txb->ref, 1); + txb->call_debug_id = call->debug_id; + txb->debug_id = atomic_inc_return(&rxrpc_txbuf_debug_ids); + txb->space = 0; + txb->len = sizeof(*whdr) + sizeof(*ack) + 3 + sizeof(*trailer); + txb->offset = 0; + txb->flags = call->conn->out_clientflag; + txb->ack_rwind = 0; + txb->seq = 0; + txb->serial = 0; + txb->cksum = 0; + txb->nr_kvec = 3; + txb->kvec[0].iov_base = whdr; + txb->kvec[0].iov_len = sizeof(*whdr) + sizeof(*ack); + txb->kvec[1].iov_base = buf2; + txb->kvec[1].iov_len = sack_size; + txb->kvec[2].iov_base = filler; + txb->kvec[2].iov_len = 3 + sizeof(*trailer); + + whdr->epoch = htonl(call->conn->proto.epoch); + whdr->cid = htonl(call->cid); + whdr->callNumber = htonl(call->call_id); + whdr->seq = 0; + whdr->type = RXRPC_PACKET_TYPE_ACK; + whdr->flags = 0; + whdr->userStatus = 0; + whdr->securityIndex = call->security_ix; + whdr->_rsvd = 0; + whdr->serviceId = htons(call->dest_srx.srx_service); + + get_page(virt_to_head_page(trailer)); + + trace_rxrpc_txbuf(txb->debug_id, txb->call_debug_id, txb->seq, 1, + rxrpc_txbuf_alloc_ack); + atomic_inc(&rxrpc_nr_txbuf); return txb; } @@ -79,12 +172,15 @@ void rxrpc_see_txbuf(struct rxrpc_txbuf *txb, enum rxrpc_txbuf_trace what) trace_rxrpc_txbuf(txb->debug_id, txb->call_debug_id, txb->seq, r, what); } -static void rxrpc_free_txbuf(struct rcu_head *rcu) +static void rxrpc_free_txbuf(struct rxrpc_txbuf *txb) { - struct rxrpc_txbuf *txb = container_of(rcu, struct rxrpc_txbuf, rcu); + int i; trace_rxrpc_txbuf(txb->debug_id, txb->call_debug_id, txb->seq, 0, rxrpc_txbuf_free); + for (i = 0; i < txb->nr_kvec; i++) + if (txb->kvec[i].iov_base) + page_frag_free(txb->kvec[i].iov_base); kfree(txb); atomic_dec(&rxrpc_nr_txbuf); } @@ -103,7 +199,7 @@ void rxrpc_put_txbuf(struct rxrpc_txbuf *txb, enum rxrpc_txbuf_trace what) dead = __refcount_dec_and_test(&txb->ref, &r); trace_rxrpc_txbuf(debug_id, call_debug_id, seq, r - 1, what); if (dead) - call_rcu(&txb->rcu, rxrpc_free_txbuf); + rxrpc_free_txbuf(txb); } } From patchwork Mon Mar 4 08:43:12 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580193 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3CF1F3AC26 for ; Mon, 4 Mar 2024 08:44:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541847; cv=none; 
b=uFA0WvcfF76d563JmFPbje19hDumVcWyBwdG6VzWtukHamT89EvU7rh35gCOPUz4ER5mEZGTdVWC9tH4XM+RvTtHxV5AbeCljOMYbiBNkbSQRMZbUVpNKb168lcHZNBQyXHa5/hzHn3+uVYt1036wOM+QGUTDrAnb3Icb6hmrrE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541847; c=relaxed/simple; bh=ljMsL1C2LYCeeJeMBEdzm9LuanQgXWoC+/BDvOiBmDY=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=O7KwS/qlo1uLeSHzKKpX6/KEulkfqykfZfVJrNrMgzcma2AtwKOVfdig/zv2mH3Ti7qeyIQ34iLf1ekwmnQKNyC2/Lvl3J++e9O8s/d/JejqL86n3wG70ei5VnNtwsyxVfpOZk3sbaE9+KhQLxLCGgTPQ2tc/68pvihS8XoVT2U= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=RyycyZVf; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="RyycyZVf" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1709541845; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=E5AUx7yIzfKd4QjuMWluENnnnPZkBpDZmSme9BhaoSY=; b=RyycyZVf1cOMz2kwhpvyH/HF0CmrPMdO41ppFMUEM9DJMop8zPR7a6NfqzPsjRp1DQgxUp jjRvy5OuoX7ptPdJTo1acCITC1D9F833iSQpj8tOkn0iw+0Tfp4E/BRReY6oUd5hdXxLKx Z/6pnxiuEne/K0MgmZ6neYjig1aEBLo= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-487-1xZjqKa6M8iAdCauiNNf6w-1; Mon, 04 Mar 2024 03:43:59 -0500 X-MC-Unique: 1xZjqKa6M8iAdCauiNNf6w-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 0ED3C185A782; Mon, 4 Mar 2024 08:43:59 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.42.28.114]) by smtp.corp.redhat.com (Postfix) with ESMTP id C71B7422A9; Mon, 4 Mar 2024 08:43:56 +0000 (UTC) From: David Howells To: netdev@vger.kernel.org Cc: David Howells , Marc Dionne , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 15/21] rxrpc: Parse received packets before dealing with timeouts Date: Mon, 4 Mar 2024 08:43:12 +0000 Message-ID: <20240304084322.705539-16-dhowells@redhat.com> In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com> References: <20240304084322.705539-1-dhowells@redhat.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.5 X-Patchwork-Delegate: kuba@kernel.org Parse the received packets before going and processing timeouts as the timeouts may be reset by the reception of a packet. Signed-off-by: David Howells cc: Marc Dionne cc: "David S. 
Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- net/rxrpc/call_event.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index e19ea54dce54..58826710322d 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -358,6 +358,9 @@ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) if (skb && skb->mark == RXRPC_SKB_MARK_ERROR) goto out; + if (skb) + rxrpc_input_call_packet(call, skb); + /* If we see our async-event poke, check for timeout trippage. */ now = jiffies; t = call->expect_rx_by; @@ -417,9 +420,6 @@ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) resend = true; } - if (skb) - rxrpc_input_call_packet(call, skb); - rxrpc_transmit_some_data(call); if (skb) { From patchwork Mon Mar 4 08:43:13 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580194 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B251E3B780 for ; Mon, 4 Mar 2024 08:44:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.133.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541849; cv=none; b=VGuGZzY83bFY7pD/7Z1TyQEJTcpL97Rsu7wdjNXGym4F0K613YsQrFbcuiaWItXsjztOY1wyr8+YEtx7/jG8w5bZDtrScAXi1M8EMuWoGUud5amMktMp04GFhvdkZMXOLAH/jHvXKnBRAUaFCn51D0WvGklulWVxfQZHSpgTaXY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541849; c=relaxed/simple; bh=xqU7TdJ6uMoTB6NE2LCdz37Pqfl01I9/rlA/Td1pBIA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=H95zZfJoTi5eq7bQi5EWoVBC7UtyyVOXI5CCWlmjyvuhJ/WmxRax5Sw0FIWl0sWvYs7lZPnP8L/x9OIU2JVf9W8NHn0r0wY0IxjP8bYG9JXpwzWEfo0fsysdMHFEqB8v6PNNLS6OAPpx48nffDlQq/vk8fU/xzBmL55jMb2JBIU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=FmeCORb8; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="FmeCORb8" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1709541845; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=dFZQL0DrD/luJDZ0nEKBrlgAEZrrkxfF6eLg9CJqo1A=; b=FmeCORb8ldFQ/qMuC6t5ITgwIFoD0850UiXjJdXOoysDJaKalaxZDiPlpJsAtJJCz5ObRV jo7zSCIDrbS6xDWm61UIdCL+0NIL/1KAIdf4Mt9brYS6LCtzsK9vJ3Aoze+4QnFSb5MRwo VKsvP7G8OGi+A73iXx8Lc37e9LTWGbk= Received: from mimecast-mx02.redhat.com (mx-ext.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-160-KF9HvTUHOj2EgOdugEoTXg-1; Mon, 04 Mar 2024 
03:44:02 -0500 X-MC-Unique: KF9HvTUHOj2EgOdugEoTXg-1 Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com [10.11.54.3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id B3CB938212C3; Mon, 4 Mar 2024 08:44:01 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.42.28.114]) by smtp.corp.redhat.com (Postfix) with ESMTP id 0531E1121313; Mon, 4 Mar 2024 08:43:59 +0000 (UTC) From: David Howells To: netdev@vger.kernel.org Cc: David Howells , Marc Dionne , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 16/21] rxrpc: Don't permit resending after all Tx packets acked Date: Mon, 4 Mar 2024 08:43:13 +0000 Message-ID: <20240304084322.705539-17-dhowells@redhat.com> In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com> References: <20240304084322.705539-1-dhowells@redhat.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.3 X-Patchwork-Delegate: kuba@kernel.org Once all the packets transmitted as part of a call have been acked, don't permit any resending. Signed-off-by: David Howells cc: Marc Dionne cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- net/rxrpc/call_event.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 58826710322d..ef28ebf37c7d 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -450,7 +450,9 @@ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) rxrpc_send_ACK(call, RXRPC_ACK_PING, 0, rxrpc_propose_ack_ping_for_lost_ack); - if (resend && __rxrpc_call_state(call) != RXRPC_CALL_CLIENT_RECV_REPLY) + if (resend && + __rxrpc_call_state(call) != RXRPC_CALL_CLIENT_RECV_REPLY && + !test_bit(RXRPC_CALL_TX_ALL_ACKED, &call->flags)) rxrpc_resend(call, NULL); if (test_and_clear_bit(RXRPC_CALL_RX_IS_IDLE, &call->flags)) From patchwork Mon Mar 4 08:43:14 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580195 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B41DA3BB52 for ; Mon, 4 Mar 2024 08:44:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541851; cv=none; b=KWVtnl/gOKP32LwctuQuAgCyPJh/Il2jJm6+mM+0GRhlKihvZnGUGV2VuHQcw3jaQaeUCtbIaqMAIiykzmfEIcVenw/SewHtSdDJev6sfqVYEcoI6B9sswD+sRnNldAfthYwnzInVTdzTL9rAaeinqkJWzQ6hxeMrbTWk0T68Oo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541851; c=relaxed/simple; bh=LtDaQEcUgwlyLQoeDa5Vdh8VqlUPSCUSWH4hyTU1oBo=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; 
b=pKAHP9giqYJNfc/IB9t8QYzbbjVDa2/9GRAORu/JthPT4fbA9TxlPrI1iFmhcz6wwgBzC/tnACUkp0BwyGU8wCCpy8rMtYlWNynKnRDTVpHO9WouVnkImf4UXrhRG3FiIZ7PzGCoxCIfu1f8J6KtEcH2pn3t9rAo1KFL74jsRaM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=Xc2KYp/2; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="Xc2KYp/2" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1709541848; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=XkwsEs6LGiGvvVcBvR5BaMQvRZjXx8qXujYu/a9gwf0=; b=Xc2KYp/2hi09IELHf7D8yiazpAY7Tvq3leRa3YI4lCnbKubmS4iVc1p5GtBwC51fZ1/j63 1vMmF6nP8FoAbt5Ym7fENiuzZB7gSS8jm0D2M/F9SjVksrtTEYRvSu7aAVOEdQRtBBFeVz Uc6003vz1HmtI9ADtXAsNTRayyICRQs= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-519-VbX4U4iEMUiNOzUOzeQuIQ-1; Mon, 04 Mar 2024 03:44:05 -0500 X-MC-Unique: VbX4U4iEMUiNOzUOzeQuIQ-1 Received: from smtp.corp.redhat.com (int-mx09.intmail.prod.int.rdu2.redhat.com [10.11.54.9]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id B98E88489A7; Mon, 4 Mar 2024 08:44:04 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.42.28.114]) by smtp.corp.redhat.com (Postfix) with ESMTP id C2F24492BC7; Mon, 4 Mar 2024 08:44:02 +0000 (UTC) From: David Howells To: netdev@vger.kernel.org Cc: David Howells , Marc Dionne , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 17/21] rxrpc: Differentiate PING ACK transmission traces. Date: Mon, 4 Mar 2024 08:43:14 +0000 Message-ID: <20240304084322.705539-18-dhowells@redhat.com> In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com> References: <20240304084322.705539-1-dhowells@redhat.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.9 X-Patchwork-Delegate: kuba@kernel.org There are three points that transmit PING ACKs and all of them use the same trace string. Change two of them to use different strings. Signed-off-by: David Howells cc: Marc Dionne cc: "David S. 
Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- include/trace/events/rxrpc.h | 2 ++ net/rxrpc/call_event.c | 4 ++-- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index c730cd732348..3b726a6c8c42 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -364,11 +364,13 @@ #define rxrpc_propose_ack_traces \ EM(rxrpc_propose_ack_client_tx_end, "ClTxEnd") \ + EM(rxrpc_propose_ack_delayed_ack, "DlydAck") \ EM(rxrpc_propose_ack_input_data, "DataIn ") \ EM(rxrpc_propose_ack_input_data_hole, "DataInH") \ EM(rxrpc_propose_ack_ping_for_keepalive, "KeepAlv") \ EM(rxrpc_propose_ack_ping_for_lost_ack, "LostAck") \ EM(rxrpc_propose_ack_ping_for_lost_reply, "LostRpl") \ + EM(rxrpc_propose_ack_ping_for_0_retrans, "0-Retrn") \ EM(rxrpc_propose_ack_ping_for_old_rtt, "OldRtt ") \ EM(rxrpc_propose_ack_ping_for_params, "Params ") \ EM(rxrpc_propose_ack_ping_for_rtt, "Rtt ") \ diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index ef28ebf37c7d..bc1a5abb7d6f 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -197,7 +197,7 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) if (ktime_to_us(ack_ts) < (call->peer->srtt_us >> 3)) goto out; rxrpc_send_ACK(call, RXRPC_ACK_PING, 0, - rxrpc_propose_ack_ping_for_lost_ack); + rxrpc_propose_ack_ping_for_0_retrans); goto out; } @@ -387,7 +387,7 @@ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) trace_rxrpc_timer(call, rxrpc_timer_exp_ack, now); call->delay_ack_at = now + MAX_JIFFY_OFFSET; rxrpc_send_ACK(call, RXRPC_ACK_DELAY, 0, - rxrpc_propose_ack_ping_for_lost_ack); + rxrpc_propose_ack_delayed_ack); } t = call->ack_lost_at; From patchwork Mon Mar 4 08:43:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580197 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 814EA3C47C for ; Mon, 4 Mar 2024 08:44:11 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541854; cv=none; b=YCBT7z8w8co6Tn0FIq+bVRKeSh+krrs/AT7PuL1dZIIi7ABL5DuWPeukVhOGPuF5kszRAfZLChgt9G1O8XaHfz8eayABYJq0aJSswv1FkXkkPu3wZatJMuI0TqLXTPN8762Lr6M8q6u0bmxard8C9Ihlf8zVtnJ+u7e4Eng5ixk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541854; c=relaxed/simple; bh=oIqB3xc596pNpY+hyhYB0P322CF2/th4m8CJZWolOUc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Tg3VqDxaw7MYSlwkrbgUR50Az3c/JHVtzTJpoOsMkb3s8mYxFdnvXUWFW14Y1ldrcvAlBj63euNfafT8oFGTB6Et8Q7p9857iX/ylgEVmAxZgQFGsIdqaeY/fRr5HS52oZBlzGzv2qYy2GYLidh7WN7W79w9h/AqbRTTbZJ142w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=eijaGAyR; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; 
spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="eijaGAyR" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1709541850; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=O49RI6IXXsxc9exHEL8HK+AdkLbFGJgCrHpZAGl7/2I=; b=eijaGAyRCqsHB4yfZhQ3qZQ0xw9hQ0Sar0DnItp23AbEuh24wjlZJJBjqRVs19hH+GUg6j CREb9zfoImLFhCZnLJg45r6Ee4Etr9szv5xseZ4RE1+XFVjwL/t5WdN/1H87pq9T5v9l+9 h/HMX+VmJQWja4z0aHEruDEn+xYRkcM= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-581-Bpj6YXPiPHmzZHM5mG9Dcg-1; Mon, 04 Mar 2024 03:44:08 -0500 X-MC-Unique: Bpj6YXPiPHmzZHM5mG9Dcg-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 57679867949; Mon, 4 Mar 2024 08:44:08 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.42.28.114]) by smtp.corp.redhat.com (Postfix) with ESMTP id F262E2026D0A; Mon, 4 Mar 2024 08:44:05 +0000 (UTC) From: David Howells To: netdev@vger.kernel.org Cc: David Howells , Marc Dionne , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 18/21] rxrpc: Use ktimes for call timeout tracking and set the timer lazily Date: Mon, 4 Mar 2024 08:43:15 +0000 Message-ID: <20240304084322.705539-19-dhowells@redhat.com> In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com> References: <20240304084322.705539-1-dhowells@redhat.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.4 X-Patchwork-Delegate: kuba@kernel.org Track the call timeouts as ktimes rather than jiffies as the latter's granularity is too high and only set the timer at the end of the event handling function. Signed-off-by: David Howells cc: Marc Dionne cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- Notes: Changes ======= ver #2) - Use ktime_to_us() rather than dividing a ktime by 1000 in tracepoints. 
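As a rough sketch of the scheme outlined above (illustrative only, not code from the patch): each per-call deadline becomes a ktime_t, and the timer is armed once, at the end of event handling, for whichever deadline falls first. The deadline field names mirror the ones changed in this patch; the function itself and its exact arming expression are assumptions.

#include <linux/ktime.h>
#include <linux/timer.h>
#include "ar-internal.h"	/* kernel-internal rxrpc definitions (assumed) */

/* Sketch: pick the earliest ktime_t deadline on the call and arm the call
 * timer once for it, converting the remaining delay back to jiffies only at
 * this final step.
 */
static void sketch_set_call_timer(struct rxrpc_call *call)
{
	ktime_t now = ktime_get_real();
	ktime_t next = call->expect_term_by;

	if (ktime_before(call->delay_ack_at, next))
		next = call->delay_ack_at;
	if (ktime_before(call->resend_at, next))
		next = call->resend_at;
	if (ktime_before(call->expect_rx_by, next))
		next = call->expect_rx_by;

	if (ktime_before(next, now))
		next = now;	/* Already due: expire as soon as possible. */

	timer_reduce(&call->timer,
		     jiffies + nsecs_to_jiffies(ktime_to_ns(ktime_sub(next, now))));
}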
include/trace/events/rxrpc.h | 174 +++++++++++++++++--------------- net/rxrpc/af_rxrpc.c | 12 +-- net/rxrpc/ar-internal.h | 35 +++---- net/rxrpc/call_event.c | 189 +++++++++++++++++------------------ net/rxrpc/call_object.c | 56 +++++------ net/rxrpc/input.c | 28 +++--- net/rxrpc/misc.c | 8 +- net/rxrpc/output.c | 40 +++----- net/rxrpc/proc.c | 10 +- net/rxrpc/rtt.c | 36 +++---- net/rxrpc/sendmsg.c | 25 ++--- net/rxrpc/sysctl.c | 16 +-- 12 files changed, 307 insertions(+), 322 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 3b726a6c8c42..a1b126a6b0d7 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -119,6 +119,7 @@ EM(rxrpc_call_poke_complete, "Compl") \ EM(rxrpc_call_poke_error, "Error") \ EM(rxrpc_call_poke_idle, "Idle") \ + EM(rxrpc_call_poke_set_timeout, "Set-timo") \ EM(rxrpc_call_poke_start, "Start") \ EM(rxrpc_call_poke_timer, "Timer") \ E_(rxrpc_call_poke_timer_now, "Timer-now") @@ -340,27 +341,16 @@ E_(rxrpc_rtt_rx_requested_ack, "RACK") #define rxrpc_timer_traces \ - EM(rxrpc_timer_begin, "Begin ") \ - EM(rxrpc_timer_exp_ack, "ExpAck") \ - EM(rxrpc_timer_exp_hard, "ExpHrd") \ - EM(rxrpc_timer_exp_idle, "ExpIdl") \ - EM(rxrpc_timer_exp_keepalive, "ExpKA ") \ - EM(rxrpc_timer_exp_lost_ack, "ExpLoA") \ - EM(rxrpc_timer_exp_normal, "ExpNml") \ - EM(rxrpc_timer_exp_ping, "ExpPng") \ - EM(rxrpc_timer_exp_resend, "ExpRsn") \ - EM(rxrpc_timer_init_for_reply, "IniRpl") \ - EM(rxrpc_timer_init_for_send_reply, "SndRpl") \ - EM(rxrpc_timer_restart, "Restrt") \ - EM(rxrpc_timer_set_for_ack, "SetAck") \ - EM(rxrpc_timer_set_for_hard, "SetHrd") \ - EM(rxrpc_timer_set_for_idle, "SetIdl") \ - EM(rxrpc_timer_set_for_keepalive, "KeepAl") \ - EM(rxrpc_timer_set_for_lost_ack, "SetLoA") \ - EM(rxrpc_timer_set_for_normal, "SetNml") \ - EM(rxrpc_timer_set_for_ping, "SetPng") \ - EM(rxrpc_timer_set_for_resend, "SetRTx") \ - E_(rxrpc_timer_set_for_send, "SetSnd") + EM(rxrpc_timer_trace_delayed_ack, "DelayAck ") \ + EM(rxrpc_timer_trace_expect_rx, "ExpectRx ") \ + EM(rxrpc_timer_trace_hard, "HardLimit") \ + EM(rxrpc_timer_trace_idle, "IdleLimit") \ + EM(rxrpc_timer_trace_keepalive, "KeepAlive") \ + EM(rxrpc_timer_trace_lost_ack, "LostAck ") \ + EM(rxrpc_timer_trace_ping, "DelayPing") \ + EM(rxrpc_timer_trace_resend, "Resend ") \ + EM(rxrpc_timer_trace_resend_reset, "ResendRst") \ + E_(rxrpc_timer_trace_resend_tx, "ResendTx ") #define rxrpc_propose_ack_traces \ EM(rxrpc_propose_ack_client_tx_end, "ClTxEnd") \ @@ -1314,90 +1304,112 @@ TRACE_EVENT(rxrpc_rtt_rx, __entry->rto) ); -TRACE_EVENT(rxrpc_timer, - TP_PROTO(struct rxrpc_call *call, enum rxrpc_timer_trace why, - unsigned long now), +TRACE_EVENT(rxrpc_timer_set, + TP_PROTO(struct rxrpc_call *call, ktime_t delay, + enum rxrpc_timer_trace why), - TP_ARGS(call, why, now), + TP_ARGS(call, delay, why), TP_STRUCT__entry( __field(unsigned int, call) __field(enum rxrpc_timer_trace, why) - __field(long, now) - __field(long, ack_at) - __field(long, ack_lost_at) - __field(long, resend_at) - __field(long, ping_at) - __field(long, expect_rx_by) - __field(long, expect_req_by) - __field(long, expect_term_by) - __field(long, timer) + __field(ktime_t, delay) ), TP_fast_assign( __entry->call = call->debug_id; __entry->why = why; - __entry->now = now; - __entry->ack_at = call->delay_ack_at; - __entry->ack_lost_at = call->ack_lost_at; - __entry->resend_at = call->resend_at; - __entry->expect_rx_by = call->expect_rx_by; - __entry->expect_req_by = call->expect_req_by; - __entry->expect_term_by = 
call->expect_term_by; - __entry->timer = call->timer.expires; + __entry->delay = delay; ), - TP_printk("c=%08x %s a=%ld la=%ld r=%ld xr=%ld xq=%ld xt=%ld t=%ld", + TP_printk("c=%08x %s to=%lld", __entry->call, __print_symbolic(__entry->why, rxrpc_timer_traces), - __entry->ack_at - __entry->now, - __entry->ack_lost_at - __entry->now, - __entry->resend_at - __entry->now, - __entry->expect_rx_by - __entry->now, - __entry->expect_req_by - __entry->now, - __entry->expect_term_by - __entry->now, - __entry->timer - __entry->now) + ktime_to_us(__entry->delay)) + ); + +TRACE_EVENT(rxrpc_timer_exp, + TP_PROTO(struct rxrpc_call *call, ktime_t delay, + enum rxrpc_timer_trace why), + + TP_ARGS(call, delay, why), + + TP_STRUCT__entry( + __field(unsigned int, call) + __field(enum rxrpc_timer_trace, why) + __field(ktime_t, delay) + ), + + TP_fast_assign( + __entry->call = call->debug_id; + __entry->why = why; + __entry->delay = delay; + ), + + TP_printk("c=%08x %s to=%lld", + __entry->call, + __print_symbolic(__entry->why, rxrpc_timer_traces), + ktime_to_us(__entry->delay)) + ); + +TRACE_EVENT(rxrpc_timer_can, + TP_PROTO(struct rxrpc_call *call, enum rxrpc_timer_trace why), + + TP_ARGS(call, why), + + TP_STRUCT__entry( + __field(unsigned int, call) + __field(enum rxrpc_timer_trace, why) + ), + + TP_fast_assign( + __entry->call = call->debug_id; + __entry->why = why; + ), + + TP_printk("c=%08x %s", + __entry->call, + __print_symbolic(__entry->why, rxrpc_timer_traces)) + ); + +TRACE_EVENT(rxrpc_timer_restart, + TP_PROTO(struct rxrpc_call *call, ktime_t delay, unsigned long delayj), + + TP_ARGS(call, delay, delayj), + + TP_STRUCT__entry( + __field(unsigned int, call) + __field(unsigned long, delayj) + __field(ktime_t, delay) + ), + + TP_fast_assign( + __entry->call = call->debug_id; + __entry->delayj = delayj; + __entry->delay = delay; + ), + + TP_printk("c=%08x to=%lld j=%ld", + __entry->call, + ktime_to_us(__entry->delay), + __entry->delayj) ); TRACE_EVENT(rxrpc_timer_expired, - TP_PROTO(struct rxrpc_call *call, unsigned long now), + TP_PROTO(struct rxrpc_call *call), - TP_ARGS(call, now), + TP_ARGS(call), TP_STRUCT__entry( __field(unsigned int, call) - __field(long, now) - __field(long, ack_at) - __field(long, ack_lost_at) - __field(long, resend_at) - __field(long, ping_at) - __field(long, expect_rx_by) - __field(long, expect_req_by) - __field(long, expect_term_by) - __field(long, timer) ), TP_fast_assign( __entry->call = call->debug_id; - __entry->now = now; - __entry->ack_at = call->delay_ack_at; - __entry->ack_lost_at = call->ack_lost_at; - __entry->resend_at = call->resend_at; - __entry->expect_rx_by = call->expect_rx_by; - __entry->expect_req_by = call->expect_req_by; - __entry->expect_term_by = call->expect_term_by; - __entry->timer = call->timer.expires; ), - TP_printk("c=%08x EXPIRED a=%ld la=%ld r=%ld xr=%ld xq=%ld xt=%ld t=%ld", - __entry->call, - __entry->ack_at - __entry->now, - __entry->ack_lost_at - __entry->now, - __entry->resend_at - __entry->now, - __entry->expect_rx_by - __entry->now, - __entry->expect_req_by - __entry->now, - __entry->expect_term_by - __entry->now, - __entry->timer - __entry->now) + TP_printk("c=%08x EXPIRED", + __entry->call) ); TRACE_EVENT(rxrpc_rx_lose, @@ -1507,7 +1519,7 @@ TRACE_EVENT(rxrpc_drop_ack, TRACE_EVENT(rxrpc_retransmit, TP_PROTO(struct rxrpc_call *call, rxrpc_seq_t seq, - rxrpc_serial_t serial, s64 expiry), + rxrpc_serial_t serial, ktime_t expiry), TP_ARGS(call, seq, serial, expiry), @@ -1515,7 +1527,7 @@ TRACE_EVENT(rxrpc_retransmit, __field(unsigned 
int, call) __field(rxrpc_seq_t, seq) __field(rxrpc_serial_t, serial) - __field(s64, expiry) + __field(ktime_t, expiry) ), TP_fast_assign( @@ -1529,7 +1541,7 @@ TRACE_EVENT(rxrpc_retransmit, __entry->call, __entry->seq, __entry->serial, - __entry->expiry) + ktime_to_us(__entry->expiry)) ); TRACE_EVENT(rxrpc_congest, diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c index 465bfe5eb061..5222bc97d192 100644 --- a/net/rxrpc/af_rxrpc.c +++ b/net/rxrpc/af_rxrpc.c @@ -487,7 +487,7 @@ EXPORT_SYMBOL(rxrpc_kernel_new_call_notification); * rxrpc_kernel_set_max_life - Set maximum lifespan on a call * @sock: The socket the call is on * @call: The call to configure - * @hard_timeout: The maximum lifespan of the call in jiffies + * @hard_timeout: The maximum lifespan of the call in ms * * Set the maximum lifespan of a call. The call will end with ETIME or * ETIMEDOUT if it takes longer than this. @@ -495,14 +495,14 @@ EXPORT_SYMBOL(rxrpc_kernel_new_call_notification); void rxrpc_kernel_set_max_life(struct socket *sock, struct rxrpc_call *call, unsigned long hard_timeout) { - unsigned long now; + ktime_t delay = ms_to_ktime(hard_timeout), expect_term_by; mutex_lock(&call->user_mutex); - now = jiffies; - hard_timeout += now; - WRITE_ONCE(call->expect_term_by, hard_timeout); - rxrpc_reduce_call_timer(call, hard_timeout, now, rxrpc_timer_set_for_hard); + expect_term_by = ktime_add(ktime_get_real(), delay); + WRITE_ONCE(call->expect_term_by, expect_term_by); + trace_rxrpc_timer_set(call, delay, rxrpc_timer_trace_hard); + rxrpc_poke_call(call, rxrpc_call_poke_set_timeout); mutex_unlock(&call->user_mutex); } diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 47f4689379ca..21ecac22b51d 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -352,8 +352,8 @@ struct rxrpc_peer { u32 mdev_us; /* medium deviation */ u32 mdev_max_us; /* maximal mdev for the last rtt period */ u32 rttvar_us; /* smoothed mdev_max */ - u32 rto_j; /* Retransmission timeout in jiffies */ - u8 backoff; /* Backoff timeout */ + u32 rto_us; /* Retransmission timeout in usec */ + u8 backoff; /* Backoff timeout (as shift) */ u8 cong_ssthresh; /* Congestion slow-start threshold */ }; @@ -620,17 +620,17 @@ struct rxrpc_call { const struct rxrpc_security *security; /* applied security module */ struct mutex user_mutex; /* User access mutex */ struct sockaddr_rxrpc dest_srx; /* Destination address */ - unsigned long delay_ack_at; /* When DELAY ACK needs to happen */ - unsigned long ack_lost_at; /* When ACK is figured as lost */ - unsigned long resend_at; /* When next resend needs to happen */ - unsigned long ping_at; /* When next to send a ping */ - unsigned long keepalive_at; /* When next to send a keepalive ping */ - unsigned long expect_rx_by; /* When we expect to get a packet by */ - unsigned long expect_req_by; /* When we expect to get a request DATA packet by */ - unsigned long expect_term_by; /* When we expect call termination by */ - u32 next_rx_timo; /* Timeout for next Rx packet (jif) */ - u32 next_req_timo; /* Timeout for next Rx request packet (jif) */ - u32 hard_timo; /* Maximum lifetime or 0 (jif) */ + ktime_t delay_ack_at; /* When DELAY ACK needs to happen */ + ktime_t ack_lost_at; /* When ACK is figured as lost */ + ktime_t resend_at; /* When next resend needs to happen */ + ktime_t ping_at; /* When next to send a ping */ + ktime_t keepalive_at; /* When next to send a keepalive ping */ + ktime_t expect_rx_by; /* When we expect to get a packet by */ + ktime_t expect_req_by; /* When we expect to 
get a request DATA packet by */ + ktime_t expect_term_by; /* When we expect call termination by */ + u32 next_rx_timo; /* Timeout for next Rx packet (ms) */ + u32 next_req_timo; /* Timeout for next Rx request packet (ms) */ + u32 hard_timo; /* Maximum lifetime or 0 (s) */ struct timer_list timer; /* Combined event timer */ struct work_struct destroyer; /* In-process-context destroyer */ rxrpc_notify_rx_t notify_rx; /* kernel service Rx notification function */ @@ -675,7 +675,7 @@ struct rxrpc_call { rxrpc_seq_t tx_transmitted; /* Highest packet transmitted */ rxrpc_seq_t tx_prepared; /* Highest Tx slot prepared. */ rxrpc_seq_t tx_top; /* Highest Tx slot allocated. */ - u16 tx_backoff; /* Delay to insert due to Tx failure */ + u16 tx_backoff; /* Delay to insert due to Tx failure (ms) */ u8 tx_winsize; /* Maximum size of Tx window */ #define RXRPC_TX_MAX_WINDOW 128 ktime_t tx_last_sent; /* Last time a transmission occurred */ @@ -866,11 +866,6 @@ void rxrpc_propose_delay_ACK(struct rxrpc_call *, rxrpc_serial_t, void rxrpc_shrink_call_tx_buffer(struct rxrpc_call *); void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb); -void rxrpc_reduce_call_timer(struct rxrpc_call *call, - unsigned long expire_at, - unsigned long now, - enum rxrpc_timer_trace why); - bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb); /* @@ -1214,7 +1209,7 @@ static inline int rxrpc_abort_eproto(struct rxrpc_call *call, */ void rxrpc_peer_add_rtt(struct rxrpc_call *, enum rxrpc_rtt_rx_trace, int, rxrpc_serial_t, rxrpc_serial_t, ktime_t, ktime_t); -unsigned long rxrpc_get_rto_backoff(struct rxrpc_peer *, bool); +ktime_t rxrpc_get_rto_backoff(struct rxrpc_peer *peer, bool retrans); void rxrpc_peer_init_rtt(struct rxrpc_peer *); /* diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index bc1a5abb7d6f..2a9f74eb7c46 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -23,14 +23,14 @@ void rxrpc_propose_ping(struct rxrpc_call *call, u32 serial, enum rxrpc_propose_ack_trace why) { - unsigned long now = jiffies; - unsigned long ping_at = now + rxrpc_idle_ack_delay; + ktime_t delay = ms_to_ktime(READ_ONCE(rxrpc_idle_ack_delay)); + ktime_t now = ktime_get_real(); + ktime_t ping_at = ktime_add(now, delay); - if (time_before(ping_at, call->ping_at)) { + trace_rxrpc_propose_ack(call, why, RXRPC_ACK_PING, serial); + if (ktime_before(ping_at, call->ping_at)) { call->ping_at = ping_at; - rxrpc_reduce_call_timer(call, ping_at, now, - rxrpc_timer_set_for_ping); - trace_rxrpc_propose_ack(call, why, RXRPC_ACK_PING, serial); + trace_rxrpc_timer_set(call, delay, rxrpc_timer_trace_ping); } } @@ -40,25 +40,18 @@ void rxrpc_propose_ping(struct rxrpc_call *call, u32 serial, void rxrpc_propose_delay_ACK(struct rxrpc_call *call, rxrpc_serial_t serial, enum rxrpc_propose_ack_trace why) { - unsigned long expiry = rxrpc_soft_ack_delay; - unsigned long now = jiffies, ack_at; + ktime_t now = ktime_get_real(), delay; - if (rxrpc_soft_ack_delay < expiry) - expiry = rxrpc_soft_ack_delay; - if (call->peer->srtt_us != 0) - ack_at = usecs_to_jiffies(call->peer->srtt_us >> 3); + trace_rxrpc_propose_ack(call, why, RXRPC_ACK_DELAY, serial); + + if (call->peer->srtt_us) + delay = (call->peer->srtt_us >> 3) * NSEC_PER_USEC; else - ack_at = expiry; - - ack_at += READ_ONCE(call->tx_backoff); - ack_at += now; - if (time_before(ack_at, call->delay_ack_at)) { - call->delay_ack_at = ack_at; - rxrpc_reduce_call_timer(call, ack_at, now, - rxrpc_timer_set_for_ack); - } + delay = 
ms_to_ktime(READ_ONCE(rxrpc_soft_ack_delay)); + ktime_add_ms(delay, call->tx_backoff); - trace_rxrpc_propose_ack(call, why, RXRPC_ACK_DELAY, serial); + call->delay_ack_at = ktime_add(now, delay); + trace_rxrpc_timer_set(call, delay, rxrpc_timer_trace_delayed_ack); } /* @@ -77,9 +70,8 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) struct rxrpc_ackpacket *ack = NULL; struct rxrpc_skb_priv *sp; struct rxrpc_txbuf *txb; - unsigned long resend_at; rxrpc_seq_t transmitted = call->tx_transmitted; - ktime_t now, max_age, oldest, ack_ts; + ktime_t now, max_age, oldest, ack_ts, delay; bool unacked = false; unsigned int i; LIST_HEAD(retrans_queue); @@ -87,7 +79,7 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) _enter("{%d,%d}", call->acks_hard_ack, call->tx_top); now = ktime_get_real(); - max_age = ktime_sub_us(now, jiffies_to_usecs(call->peer->rto_j)); + max_age = ktime_sub_us(now, call->peer->rto_us); oldest = now; if (list_empty(&call->tx_buffer)) @@ -178,10 +170,8 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) no_further_resend: no_resend: - resend_at = nsecs_to_jiffies(ktime_to_ns(ktime_sub(now, oldest))); - resend_at += jiffies + rxrpc_get_rto_backoff(call->peer, - !list_empty(&retrans_queue)); - call->resend_at = resend_at; + delay = rxrpc_get_rto_backoff(call->peer, !list_empty(&retrans_queue)); + call->resend_at = ktime_add(oldest, delay); if (unacked) rxrpc_congestion_timeout(call); @@ -191,8 +181,7 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) * retransmitting data. */ if (list_empty(&retrans_queue)) { - rxrpc_reduce_call_timer(call, resend_at, jiffies, - rxrpc_timer_set_for_resend); + trace_rxrpc_timer_set(call, delay, rxrpc_timer_trace_resend_reset); ack_ts = ktime_sub(now, call->acks_latest_ts); if (ktime_to_us(ack_ts) < (call->peer->srtt_us >> 3)) goto out; @@ -218,13 +207,11 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) */ static void rxrpc_begin_service_reply(struct rxrpc_call *call) { - unsigned long now = jiffies; - rxrpc_set_call_state(call, RXRPC_CALL_SERVER_SEND_REPLY); - call->delay_ack_at = now + MAX_JIFFY_OFFSET; if (call->ackr_reason == RXRPC_ACK_DELAY) call->ackr_reason = 0; - trace_rxrpc_timer(call, rxrpc_timer_init_for_send_reply, now); + call->delay_ack_at = KTIME_MAX; + trace_rxrpc_timer_can(call, rxrpc_timer_trace_delayed_ack); } /* @@ -333,8 +320,8 @@ static void rxrpc_send_initial_ping(struct rxrpc_call *call) */ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) { - unsigned long now, next, t; - bool resend = false, expired = false; + ktime_t now, t; + bool resend = false; s32 abort_code; rxrpc_see_call(call, rxrpc_call_see_input); @@ -362,66 +349,69 @@ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) rxrpc_input_call_packet(call, skb); /* If we see our async-event poke, check for timeout trippage. 
*/ - now = jiffies; - t = call->expect_rx_by; - if (time_after_eq(now, t)) { - trace_rxrpc_timer(call, rxrpc_timer_exp_normal, now); - expired = true; + now = ktime_get_real(); + t = ktime_sub(call->expect_rx_by, now); + if (t <= 0) { + trace_rxrpc_timer_exp(call, t, rxrpc_timer_trace_expect_rx); + goto expired; } - t = call->expect_req_by; - if (__rxrpc_call_state(call) == RXRPC_CALL_SERVER_RECV_REQUEST && - time_after_eq(now, t)) { - trace_rxrpc_timer(call, rxrpc_timer_exp_idle, now); - expired = true; + t = ktime_sub(call->expect_req_by, now); + if (t <= 0) { + call->expect_req_by = KTIME_MAX; + if (__rxrpc_call_state(call) == RXRPC_CALL_SERVER_RECV_REQUEST) { + trace_rxrpc_timer_exp(call, t, rxrpc_timer_trace_idle); + goto expired; + } } - t = READ_ONCE(call->expect_term_by); - if (time_after_eq(now, t)) { - trace_rxrpc_timer(call, rxrpc_timer_exp_hard, now); - expired = true; + t = ktime_sub(READ_ONCE(call->expect_term_by), now); + if (t <= 0) { + trace_rxrpc_timer_exp(call, t, rxrpc_timer_trace_hard); + goto expired; } - t = call->delay_ack_at; - if (time_after_eq(now, t)) { - trace_rxrpc_timer(call, rxrpc_timer_exp_ack, now); - call->delay_ack_at = now + MAX_JIFFY_OFFSET; + t = ktime_sub(call->delay_ack_at, now); + if (t <= 0) { + trace_rxrpc_timer_exp(call, t, rxrpc_timer_trace_delayed_ack); + call->delay_ack_at = KTIME_MAX; rxrpc_send_ACK(call, RXRPC_ACK_DELAY, 0, rxrpc_propose_ack_delayed_ack); } - t = call->ack_lost_at; - if (time_after_eq(now, t)) { - trace_rxrpc_timer(call, rxrpc_timer_exp_lost_ack, now); - call->ack_lost_at = now + MAX_JIFFY_OFFSET; + t = ktime_sub(call->ack_lost_at, now); + if (t <= 0) { + trace_rxrpc_timer_exp(call, t, rxrpc_timer_trace_lost_ack); + call->ack_lost_at = KTIME_MAX; set_bit(RXRPC_CALL_EV_ACK_LOST, &call->events); } - t = call->keepalive_at; - if (time_after_eq(now, t)) { - trace_rxrpc_timer(call, rxrpc_timer_exp_keepalive, now); - call->keepalive_at = now + MAX_JIFFY_OFFSET; + t = ktime_sub(call->ping_at, now); + if (t <= 0) { + trace_rxrpc_timer_exp(call, t, rxrpc_timer_trace_ping); + call->ping_at = KTIME_MAX; rxrpc_send_ACK(call, RXRPC_ACK_PING, 0, rxrpc_propose_ack_ping_for_keepalive); } - t = call->ping_at; - if (time_after_eq(now, t)) { - trace_rxrpc_timer(call, rxrpc_timer_exp_ping, now); - call->ping_at = now + MAX_JIFFY_OFFSET; - rxrpc_send_ACK(call, RXRPC_ACK_PING, 0, - rxrpc_propose_ack_ping_for_keepalive); - } - - t = call->resend_at; - if (time_after_eq(now, t)) { - trace_rxrpc_timer(call, rxrpc_timer_exp_resend, now); - call->resend_at = now + MAX_JIFFY_OFFSET; + t = ktime_sub(call->resend_at, now); + if (t <= 0) { + trace_rxrpc_timer_exp(call, t, rxrpc_timer_trace_resend); + call->resend_at = KTIME_MAX; resend = true; } rxrpc_transmit_some_data(call); + now = ktime_get_real(); + t = ktime_sub(call->keepalive_at, now); + if (t <= 0) { + trace_rxrpc_timer_exp(call, t, rxrpc_timer_trace_keepalive); + call->keepalive_at = KTIME_MAX; + rxrpc_send_ACK(call, RXRPC_ACK_PING, 0, + rxrpc_propose_ack_ping_for_keepalive); + } + if (skb) { struct rxrpc_skb_priv *sp = rxrpc_skb(skb); @@ -433,19 +423,6 @@ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) rxrpc_send_initial_ping(call); /* Process events */ - if (expired) { - if (test_bit(RXRPC_CALL_RX_HEARD, &call->flags) && - (int)call->conn->hi_serial - (int)call->rx_serial > 0) { - trace_rxrpc_call_reset(call); - rxrpc_abort_call(call, 0, RX_CALL_DEAD, -ECONNRESET, - rxrpc_abort_call_reset); - } else { - rxrpc_abort_call(call, 0, RX_CALL_TIMEOUT, -ETIME, - 
rxrpc_abort_call_timeout); - } - goto out; - } - if (test_and_clear_bit(RXRPC_CALL_EV_ACK_LOST, &call->events)) rxrpc_send_ACK(call, RXRPC_ACK_PING, 0, rxrpc_propose_ack_ping_for_lost_ack); @@ -474,23 +451,33 @@ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) /* Make sure the timer is restarted */ if (!__rxrpc_call_is_complete(call)) { - next = call->expect_rx_by; + ktime_t next = READ_ONCE(call->expect_term_by), delay; -#define set(T) { t = READ_ONCE(T); if (time_before(t, next)) next = t; } +#define set(T) { ktime_t _t = (T); if (ktime_before(_t, next)) next = _t; } set(call->expect_req_by); - set(call->expect_term_by); + set(call->expect_rx_by); set(call->delay_ack_at); set(call->ack_lost_at); set(call->resend_at); set(call->keepalive_at); set(call->ping_at); - now = jiffies; - if (time_after_eq(now, next)) + now = ktime_get_real(); + delay = ktime_sub(next, now); + if (delay <= 0) { rxrpc_poke_call(call, rxrpc_call_poke_timer_now); - - rxrpc_reduce_call_timer(call, next, now, rxrpc_timer_restart); + } else { + unsigned long nowj = jiffies, delayj, nextj; + + delayj = max(nsecs_to_jiffies(delay), 1); + nextj = nowj + delayj; + if (time_before(nextj, call->timer.expires) || + !timer_pending(&call->timer)) { + trace_rxrpc_timer_restart(call, delay, delayj); + timer_reduce(&call->timer, nextj); + } + } } out: @@ -505,4 +492,16 @@ bool rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb) rxrpc_shrink_call_tx_buffer(call); _leave(""); return true; + +expired: + if (test_bit(RXRPC_CALL_RX_HEARD, &call->flags) && + (int)call->conn->hi_serial - (int)call->rx_serial > 0) { + trace_rxrpc_call_reset(call); + rxrpc_abort_call(call, 0, RX_CALL_DEAD, -ECONNRESET, + rxrpc_abort_call_reset); + } else { + rxrpc_abort_call(call, 0, RX_CALL_TIMEOUT, -ETIME, + rxrpc_abort_call_timeout); + } + goto out; } diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index 9fc9a6c3f685..01fa71e8b1f7 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -70,20 +70,11 @@ static void rxrpc_call_timer_expired(struct timer_list *t) _enter("%d", call->debug_id); if (!__rxrpc_call_is_complete(call)) { - trace_rxrpc_timer_expired(call, jiffies); + trace_rxrpc_timer_expired(call); rxrpc_poke_call(call, rxrpc_call_poke_timer); } } -void rxrpc_reduce_call_timer(struct rxrpc_call *call, - unsigned long expire_at, - unsigned long now, - enum rxrpc_timer_trace why) -{ - trace_rxrpc_timer(call, why, now); - timer_reduce(&call->timer, expire_at); -} - static struct lock_class_key rxrpc_call_user_mutex_lock_class_key; static void rxrpc_destroy_call(struct work_struct *); @@ -163,12 +154,20 @@ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *rx, gfp_t gfp, spin_lock_init(&call->notify_lock); spin_lock_init(&call->tx_lock); refcount_set(&call->ref, 1); - call->debug_id = debug_id; - call->tx_total_len = -1; - call->next_rx_timo = 20 * HZ; - call->next_req_timo = 1 * HZ; - call->ackr_window = 1; - call->ackr_wtop = 1; + call->debug_id = debug_id; + call->tx_total_len = -1; + call->next_rx_timo = 20 * HZ; + call->next_req_timo = 1 * HZ; + call->ackr_window = 1; + call->ackr_wtop = 1; + call->delay_ack_at = KTIME_MAX; + call->ack_lost_at = KTIME_MAX; + call->resend_at = KTIME_MAX; + call->ping_at = KTIME_MAX; + call->keepalive_at = KTIME_MAX; + call->expect_rx_by = KTIME_MAX; + call->expect_req_by = KTIME_MAX; + call->expect_term_by = KTIME_MAX; memset(&call->sock_node, 0xed, sizeof(call->sock_node)); @@ -226,11 +225,11 @@ static struct rxrpc_call 
*rxrpc_alloc_client_call(struct rxrpc_sock *rx, __set_bit(RXRPC_CALL_EXCLUSIVE, &call->flags); if (p->timeouts.normal) - call->next_rx_timo = min(msecs_to_jiffies(p->timeouts.normal), 1UL); + call->next_rx_timo = min(p->timeouts.normal, 1); if (p->timeouts.idle) - call->next_req_timo = min(msecs_to_jiffies(p->timeouts.idle), 1UL); + call->next_req_timo = min(p->timeouts.idle, 1); if (p->timeouts.hard) - call->hard_timo = p->timeouts.hard * HZ; + call->hard_timo = p->timeouts.hard; ret = rxrpc_init_client_call_security(call); if (ret < 0) { @@ -253,18 +252,13 @@ static struct rxrpc_call *rxrpc_alloc_client_call(struct rxrpc_sock *rx, */ void rxrpc_start_call_timer(struct rxrpc_call *call) { - unsigned long now = jiffies; - unsigned long j = now + MAX_JIFFY_OFFSET; - - call->delay_ack_at = j; - call->ack_lost_at = j; - call->resend_at = j; - call->ping_at = j; - call->keepalive_at = j; - call->expect_rx_by = j; - call->expect_req_by = j; - call->expect_term_by = j + call->hard_timo; - call->timer.expires = now; + if (call->hard_timo) { + ktime_t delay = ms_to_ktime(call->hard_timo * 1000); + + call->expect_term_by = ktime_add(ktime_get_real(), delay); + trace_rxrpc_timer_set(call, delay, rxrpc_timer_trace_hard); + } + call->timer.expires = jiffies; } /* diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index e53a49accc16..09cce1d5d605 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -252,6 +252,9 @@ static void rxrpc_end_tx_phase(struct rxrpc_call *call, bool reply_begun, { ASSERT(test_bit(RXRPC_CALL_TX_LAST, &call->flags)); + call->resend_at = KTIME_MAX; + trace_rxrpc_timer_can(call, rxrpc_timer_trace_resend); + if (unlikely(call->cong_last_nack)) { rxrpc_free_skb(call->cong_last_nack, rxrpc_skb_put_last_nack); call->cong_last_nack = NULL; @@ -288,13 +291,11 @@ static void rxrpc_end_tx_phase(struct rxrpc_call *call, bool reply_begun, static bool rxrpc_receiving_reply(struct rxrpc_call *call) { struct rxrpc_ack_summary summary = { 0 }; - unsigned long now; rxrpc_seq_t top = READ_ONCE(call->tx_top); if (call->ackr_reason) { - now = jiffies; - call->delay_ack_at = now + MAX_JIFFY_OFFSET; - trace_rxrpc_timer(call, rxrpc_timer_init_for_reply, now); + call->delay_ack_at = KTIME_MAX; + trace_rxrpc_timer_can(call, rxrpc_timer_trace_delayed_ack); } if (!test_bit(RXRPC_CALL_TX_LAST, &call->flags)) { @@ -327,7 +328,7 @@ static void rxrpc_end_rx_phase(struct rxrpc_call *call, rxrpc_serial_t serial) case RXRPC_CALL_SERVER_RECV_REQUEST: rxrpc_set_call_state(call, RXRPC_CALL_SERVER_ACK_REQUEST); - call->expect_req_by = jiffies + MAX_JIFFY_OFFSET; + call->expect_req_by = KTIME_MAX; rxrpc_propose_delay_ACK(call, serial, rxrpc_propose_ack_processing_op); break; @@ -587,14 +588,12 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) case RXRPC_CALL_SERVER_RECV_REQUEST: { unsigned long timo = READ_ONCE(call->next_req_timo); - unsigned long now, expect_req_by; if (timo) { - now = jiffies; - expect_req_by = now + timo; - call->expect_req_by = now + timo; - rxrpc_reduce_call_timer(call, expect_req_by, now, - rxrpc_timer_set_for_idle); + ktime_t delay = ms_to_ktime(timo); + + call->expect_req_by = ktime_add(ktime_get_real(), delay); + trace_rxrpc_timer_set(call, delay, rxrpc_timer_trace_idle); } break; } @@ -1046,11 +1045,10 @@ void rxrpc_input_call_packet(struct rxrpc_call *call, struct sk_buff *skb) timo = READ_ONCE(call->next_rx_timo); if (timo) { - unsigned long now = jiffies; + ktime_t delay = ms_to_ktime(timo); - call->expect_rx_by = now + timo; - 
rxrpc_reduce_call_timer(call, call->expect_rx_by, now, - rxrpc_timer_set_for_normal); + call->expect_rx_by = ktime_add(ktime_get_real(), delay); + trace_rxrpc_timer_set(call, delay, rxrpc_timer_trace_expect_rx); } switch (sp->hdr.type) { diff --git a/net/rxrpc/misc.c b/net/rxrpc/misc.c index 825b81183046..657cf35089a6 100644 --- a/net/rxrpc/misc.c +++ b/net/rxrpc/misc.c @@ -17,22 +17,22 @@ unsigned int rxrpc_max_backlog __read_mostly = 10; /* - * How long to wait before scheduling an ACK with subtype DELAY (in jiffies). + * How long to wait before scheduling an ACK with subtype DELAY (in ms). * * We use this when we've received new data packets. If those packets aren't * all consumed within this time we will send a DELAY ACK if an ACK was not * requested to let the sender know it doesn't need to resend. */ -unsigned long rxrpc_soft_ack_delay = HZ; +unsigned long rxrpc_soft_ack_delay = 1000; /* - * How long to wait before scheduling an ACK with subtype IDLE (in jiffies). + * How long to wait before scheduling an ACK with subtype IDLE (in ms). * * We use this when we've consumed some previously soft-ACK'd packets when * further packets aren't immediately received to decide when to send an IDLE * ACK let the other end know that it can free up its Tx buffer space. */ -unsigned long rxrpc_idle_ack_delay = HZ / 2; +unsigned long rxrpc_idle_ack_delay = 500; /* * Receive window size in packets. This indicates the maximum number of diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 0a317498b8e0..ec82193e5681 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -48,12 +48,10 @@ static const char rxrpc_keepalive_string[] = ""; static void rxrpc_tx_backoff(struct rxrpc_call *call, int ret) { if (ret < 0) { - u16 tx_backoff = READ_ONCE(call->tx_backoff); - - if (tx_backoff < HZ) - WRITE_ONCE(call->tx_backoff, tx_backoff + 1); + if (call->tx_backoff < 1000) + call->tx_backoff += 100; } else { - WRITE_ONCE(call->tx_backoff, 0); + call->tx_backoff = 0; } } @@ -67,11 +65,10 @@ static void rxrpc_tx_backoff(struct rxrpc_call *call, int ret) */ static void rxrpc_set_keepalive(struct rxrpc_call *call) { - unsigned long now = jiffies; + ktime_t delay = ms_to_ktime(READ_ONCE(call->next_rx_timo) / 6); - call->keepalive_at = now + call->next_rx_timo / 6; - rxrpc_reduce_call_timer(call, call->keepalive_at, now, - rxrpc_timer_set_for_keepalive); + call->keepalive_at = ktime_add(ktime_get_real(), delay); + trace_rxrpc_timer_set(call, delay, rxrpc_timer_trace_keepalive); } /* @@ -515,24 +512,21 @@ static int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *t if (txb->flags & RXRPC_REQUEST_ACK) { call->peer->rtt_last_req = txb->last_sent; if (call->peer->rtt_count > 1) { - unsigned long nowj = jiffies, ack_lost_at; + ktime_t delay = rxrpc_get_rto_backoff(call->peer, false); + ktime_t now = ktime_get_real(); - ack_lost_at = rxrpc_get_rto_backoff(call->peer, false); - ack_lost_at += nowj; - call->ack_lost_at = ack_lost_at; - rxrpc_reduce_call_timer(call, ack_lost_at, nowj, - rxrpc_timer_set_for_lost_ack); + call->ack_lost_at = ktime_add(now, delay); + trace_rxrpc_timer_set(call, delay, rxrpc_timer_trace_lost_ack); } } if (txb->seq == 1 && !test_and_set_bit(RXRPC_CALL_BEGAN_RX_TIMER, &call->flags)) { - unsigned long nowj = jiffies; + ktime_t delay = ms_to_ktime(READ_ONCE(call->next_rx_timo)); - call->expect_rx_by = nowj + call->next_rx_timo; - rxrpc_reduce_call_timer(call, call->expect_rx_by, nowj, - rxrpc_timer_set_for_normal); + call->expect_rx_by = ktime_add(ktime_get_real(), delay); + 
trace_rxrpc_timer_set(call, delay, rxrpc_timer_trace_expect_rx); } rxrpc_set_keepalive(call); @@ -755,11 +749,9 @@ void rxrpc_transmit_one(struct rxrpc_call *call, struct rxrpc_txbuf *txb) rxrpc_instant_resend(call, txb); } } else { - unsigned long now = jiffies; - unsigned long resend_at = now + call->peer->rto_j; + ktime_t delay = ns_to_ktime(call->peer->rto_us * NSEC_PER_USEC); - call->resend_at = resend_at; - rxrpc_reduce_call_timer(call, resend_at, now, - rxrpc_timer_set_for_send); + call->resend_at = ktime_add(ktime_get_real(), delay); + trace_rxrpc_timer_set(call, delay, rxrpc_timer_trace_resend_tx); } } diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c index 26dc2f26d92d..263a2251e3d2 100644 --- a/net/rxrpc/proc.c +++ b/net/rxrpc/proc.c @@ -52,9 +52,9 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v) struct rxrpc_call *call; struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq)); enum rxrpc_call_state state; - unsigned long timeout = 0; rxrpc_seq_t acks_hard_ack; char lbuff[50], rbuff[50]; + long timeout = 0; if (v == &rxnet->calls) { seq_puts(seq, @@ -76,10 +76,8 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v) sprintf(rbuff, "%pISpc", &call->dest_srx.transport); state = rxrpc_call_state(call); - if (state != RXRPC_CALL_SERVER_PREALLOC) { - timeout = READ_ONCE(call->expect_rx_by); - timeout -= jiffies; - } + if (state != RXRPC_CALL_SERVER_PREALLOC) + timeout = ktime_ms_delta(READ_ONCE(call->expect_rx_by), ktime_get_real()); acks_hard_ack = READ_ONCE(call->acks_hard_ack); seq_printf(seq, @@ -309,7 +307,7 @@ static int rxrpc_peer_seq_show(struct seq_file *seq, void *v) peer->mtu, now - peer->last_tx_at, peer->srtt_us >> 3, - jiffies_to_usecs(peer->rto_j)); + peer->rto_us); return 0; } diff --git a/net/rxrpc/rtt.c b/net/rxrpc/rtt.c index be61d6f5be8d..cdab7b7d08a0 100644 --- a/net/rxrpc/rtt.c +++ b/net/rxrpc/rtt.c @@ -11,8 +11,8 @@ #include #include "ar-internal.h" -#define RXRPC_RTO_MAX ((unsigned)(120 * HZ)) -#define RXRPC_TIMEOUT_INIT ((unsigned)(1*HZ)) /* RFC6298 2.1 initial RTO value */ +#define RXRPC_RTO_MAX (120 * USEC_PER_SEC) +#define RXRPC_TIMEOUT_INIT ((unsigned int)(1 * MSEC_PER_SEC)) /* RFC6298 2.1 initial RTO value */ #define rxrpc_jiffies32 ((u32)jiffies) /* As rxrpc_jiffies32 */ static u32 rxrpc_rto_min_us(struct rxrpc_peer *peer) @@ -22,7 +22,7 @@ static u32 rxrpc_rto_min_us(struct rxrpc_peer *peer) static u32 __rxrpc_set_rto(const struct rxrpc_peer *peer) { - return usecs_to_jiffies((peer->srtt_us >> 3) + peer->rttvar_us); + return (peer->srtt_us >> 3) + peer->rttvar_us; } static u32 rxrpc_bound_rto(u32 rto) @@ -124,7 +124,7 @@ static void rxrpc_set_rto(struct rxrpc_peer *peer) /* NOTE: clamping at RXRPC_RTO_MIN is not required, current algo * guarantees that rto is higher. */ - peer->rto_j = rxrpc_bound_rto(rto); + peer->rto_us = rxrpc_bound_rto(rto); } static void rxrpc_ack_update_rtt(struct rxrpc_peer *peer, long rtt_us) @@ -163,33 +163,33 @@ void rxrpc_peer_add_rtt(struct rxrpc_call *call, enum rxrpc_rtt_rx_trace why, spin_unlock(&peer->rtt_input_lock); trace_rxrpc_rtt_rx(call, why, rtt_slot, send_serial, resp_serial, - peer->srtt_us >> 3, peer->rto_j); + peer->srtt_us >> 3, peer->rto_us); } /* - * Get the retransmission timeout to set in jiffies, backing it off each time - * we retransmit. + * Get the retransmission timeout to set in nanoseconds, backing it off each + * time we retransmit. 
*/ -unsigned long rxrpc_get_rto_backoff(struct rxrpc_peer *peer, bool retrans) +ktime_t rxrpc_get_rto_backoff(struct rxrpc_peer *peer, bool retrans) { - u64 timo_j; - u8 backoff = READ_ONCE(peer->backoff); + u64 timo_us; + u32 backoff = READ_ONCE(peer->backoff); - timo_j = peer->rto_j; - timo_j <<= backoff; - if (retrans && timo_j * 2 <= RXRPC_RTO_MAX) + timo_us = peer->rto_us; + timo_us <<= backoff; + if (retrans && timo_us * 2 <= RXRPC_RTO_MAX) WRITE_ONCE(peer->backoff, backoff + 1); - if (timo_j < 1) - timo_j = 1; + if (timo_us < 1) + timo_us = 1; - return timo_j; + return ns_to_ktime(timo_us * NSEC_PER_USEC); } void rxrpc_peer_init_rtt(struct rxrpc_peer *peer) { - peer->rto_j = RXRPC_TIMEOUT_INIT; - peer->mdev_us = jiffies_to_usecs(RXRPC_TIMEOUT_INIT); + peer->rto_us = RXRPC_TIMEOUT_INIT; + peer->mdev_us = RXRPC_TIMEOUT_INIT; peer->backoff = 0; //minmax_reset(&peer->rtt_min, rxrpc_jiffies32, ~0U); } diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index 4d152f06b039..6f765768c49c 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -609,7 +609,6 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len) __releases(&rx->sk.sk_lock.slock) { struct rxrpc_call *call; - unsigned long now, j; bool dropped_lock = false; int ret; @@ -687,25 +686,21 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len) switch (p.call.nr_timeouts) { case 3: - j = msecs_to_jiffies(p.call.timeouts.normal); - if (p.call.timeouts.normal > 0 && j == 0) - j = 1; - WRITE_ONCE(call->next_rx_timo, j); + WRITE_ONCE(call->next_rx_timo, p.call.timeouts.normal); fallthrough; case 2: - j = msecs_to_jiffies(p.call.timeouts.idle); - if (p.call.timeouts.idle > 0 && j == 0) - j = 1; - WRITE_ONCE(call->next_req_timo, j); + WRITE_ONCE(call->next_req_timo, p.call.timeouts.idle); fallthrough; case 1: if (p.call.timeouts.hard > 0) { - j = p.call.timeouts.hard * HZ; - now = jiffies; - j += now; - WRITE_ONCE(call->expect_term_by, j); - rxrpc_reduce_call_timer(call, j, now, - rxrpc_timer_set_for_hard); + ktime_t delay = ms_to_ktime(p.call.timeouts.hard * MSEC_PER_SEC); + + WRITE_ONCE(call->expect_term_by, + ktime_add(p.call.timeouts.hard, + ktime_get_real())); + trace_rxrpc_timer_set(call, delay, rxrpc_timer_trace_hard); + rxrpc_poke_call(call, rxrpc_call_poke_set_timeout); + } break; } diff --git a/net/rxrpc/sysctl.c b/net/rxrpc/sysctl.c index ecaeb4ecfb58..c9bedd0e2d86 100644 --- a/net/rxrpc/sysctl.c +++ b/net/rxrpc/sysctl.c @@ -15,6 +15,8 @@ static const unsigned int four = 4; static const unsigned int max_backlog = RXRPC_BACKLOG_MAX - 1; static const unsigned int n_65535 = 65535; static const unsigned int n_max_acks = 255; +static const unsigned long one_ms = 1; +static const unsigned long max_ms = 1000; static const unsigned long one_jiffy = 1; static const unsigned long max_jiffies = MAX_JIFFY_OFFSET; #ifdef CONFIG_AF_RXRPC_INJECT_RX_DELAY @@ -28,24 +30,24 @@ static const unsigned long max_500 = 500; * information on the individual parameters. 
*/ static struct ctl_table rxrpc_sysctl_table[] = { - /* Values measured in milliseconds but used in jiffies */ + /* Values measured in milliseconds */ { .procname = "soft_ack_delay", .data = &rxrpc_soft_ack_delay, .maxlen = sizeof(unsigned long), .mode = 0644, - .proc_handler = proc_doulongvec_ms_jiffies_minmax, - .extra1 = (void *)&one_jiffy, - .extra2 = (void *)&max_jiffies, + .proc_handler = proc_doulongvec_minmax, + .extra1 = (void *)&one_ms, + .extra2 = (void *)&max_ms, }, { .procname = "idle_ack_delay", .data = &rxrpc_idle_ack_delay, .maxlen = sizeof(unsigned long), .mode = 0644, - .proc_handler = proc_doulongvec_ms_jiffies_minmax, - .extra1 = (void *)&one_jiffy, - .extra2 = (void *)&max_jiffies, + .proc_handler = proc_doulongvec_minmax, + .extra1 = (void *)&one_ms, + .extra2 = (void *)&max_ms, }, { .procname = "idle_conn_expiry", From patchwork Mon Mar 4 08:43:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580198 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9DBC01DA5E for ; Mon, 4 Mar 2024 08:44:18 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541860; cv=none; b=S598NoNK03dANB0kMMvXn/N9kJTIn7/ZFOYIbi3rPtFHpPJdky3iGLhhGiLnyhgsRhvA29HDjysMG67T5gjaeR4EZnOJuBG/It2maHj6ZfDG8ntiimpO/ygxfHJ77dlOC2J6A/oaWLDVfhAD+wi9uDanPdU/oo7YLfQ4Ivw4ChI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541860; c=relaxed/simple; bh=3W1tGWYV3aFDXvrf37HRbAw1JRDsqeWSwvyhDoPXG2c=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=lQK0R9z/DpqyNUgQAXQhl+/5GMg7hnQdM6Y9pwzXLJwUIZg8L631n+JazIH+VhCUWv68OlLg8oAcKEvqfBCxQS8YoD1U3dGDu7T2lgsvO277FaLiciAN+FB7CriynySCf5Uii6gzNlXTy3STjZpnWgswTb1l14EPYw2JZX3w7dE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=bCdzA07c; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="bCdzA07c" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1709541857; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Sw1mJ1KMDj2Xrl/dCIVrkP2L8ZbK2rHywBCTVXbbrs8=; b=bCdzA07cEP202OZ6bEas2NbuC/FzOJ9+qmIFeMEzRIbn2CFYNVgoOd3WfnoykK+XJiiu90 xhusB1Jt00NKTg7pYMIvZ6qeXUp18ocTdkzzz+giHfGKuExxUihsMsbq85UJmeTCU8C9FU yeq1mNedIkntyLtU7ElSiDs4BUjYlv4= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-441-pU0f7eogM_qXfOc54oR9_A-1; Mon, 04 Mar 2024 03:44:11 -0500 
X-MC-Unique: pU0f7eogM_qXfOc54oR9_A-1 Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com [10.11.54.10]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 21881185A782; Mon, 4 Mar 2024 08:44:11 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.42.28.114]) by smtp.corp.redhat.com (Postfix) with ESMTP id 5632D492BD7; Mon, 4 Mar 2024 08:44:09 +0000 (UTC) From: David Howells To: netdev@vger.kernel.org Cc: David Howells , Marc Dionne , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 19/21] rxrpc: Record probes after transmission and reduce number of time-gets Date: Mon, 4 Mar 2024 08:43:16 +0000 Message-ID: <20240304084322.705539-20-dhowells@redhat.com> In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com> References: <20240304084322.705539-1-dhowells@redhat.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.10 X-Patchwork-Delegate: kuba@kernel.org Move the recording of a successfully transmitted DATA or ACK packet that will provide RTT probing to after the transmission. With the I/O thread model, this can be done because parsing of the responding ACK can no longer race with the post-transmission code. Move the various timeout-settings done after successfully transmitting a DATA packet into rxrpc_tstamp_data_packets() and eliminate a number of calls to get the current time. As a consequence we no longer need to cancel a proposed RTT probe on transmission failure. Signed-off-by: David Howells cc: Marc Dionne cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- net/rxrpc/output.c | 105 +++++++++++++++++---------------------------- 1 file changed, 40 insertions(+), 65 deletions(-) diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index ec82193e5681..5ea9601efd05 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -63,7 +63,7 @@ static void rxrpc_tx_backoff(struct rxrpc_call *call, int ret) * Receiving a response to the ping will prevent the ->expect_rx_by timer from * expiring. */ -static void rxrpc_set_keepalive(struct rxrpc_call *call) +static void rxrpc_set_keepalive(struct rxrpc_call *call, ktime_t now) { ktime_t delay = ms_to_ktime(READ_ONCE(call->next_rx_timo) / 6); @@ -147,8 +147,8 @@ static void rxrpc_fill_out_ack(struct rxrpc_call *call, /* * Record the beginning of an RTT probe. 
*/ -static int rxrpc_begin_rtt_probe(struct rxrpc_call *call, rxrpc_serial_t serial, - enum rxrpc_rtt_tx_trace why) +static void rxrpc_begin_rtt_probe(struct rxrpc_call *call, rxrpc_serial_t serial, + ktime_t now, enum rxrpc_rtt_tx_trace why) { unsigned long avail = call->rtt_avail; int rtt_slot = 9; @@ -161,30 +161,15 @@ static int rxrpc_begin_rtt_probe(struct rxrpc_call *call, rxrpc_serial_t serial, goto no_slot; call->rtt_serial[rtt_slot] = serial; - call->rtt_sent_at[rtt_slot] = ktime_get_real(); + call->rtt_sent_at[rtt_slot] = now; smp_wmb(); /* Write data before avail bit */ set_bit(rtt_slot + RXRPC_CALL_RTT_PEND_SHIFT, &call->rtt_avail); trace_rxrpc_rtt_tx(call, why, rtt_slot, serial); - return rtt_slot; + return; no_slot: trace_rxrpc_rtt_tx(call, rxrpc_rtt_tx_no_slot, rtt_slot, serial); - return -1; -} - -/* - * Cancel an RTT probe. - */ -static void rxrpc_cancel_rtt_probe(struct rxrpc_call *call, - rxrpc_serial_t serial, int rtt_slot) -{ - if (rtt_slot != -1) { - clear_bit(rtt_slot + RXRPC_CALL_RTT_PEND_SHIFT, &call->rtt_avail); - smp_wmb(); /* Clear pending bit before setting slot */ - set_bit(rtt_slot, &call->rtt_avail); - trace_rxrpc_rtt_tx(call, rxrpc_rtt_tx_cancel, rtt_slot, serial); - } } /* @@ -196,7 +181,8 @@ static void rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *t struct rxrpc_connection *conn; struct rxrpc_ackpacket *ack = (struct rxrpc_ackpacket *)(whdr + 1); struct msghdr msg; - int ret, rtt_slot = -1; + ktime_t now; + int ret; if (test_bit(RXRPC_CALL_DISCONNECTED, &call->flags)) return; @@ -218,9 +204,6 @@ static void rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *t ntohl(ack->serial), ack->reason, ack->nAcks, txb->ack_rwind); - if (ack->reason == RXRPC_ACK_PING) - rtt_slot = rxrpc_begin_rtt_probe(call, txb->serial, rxrpc_rtt_tx_ping); - rxrpc_inc_stat(call->rxnet, stat_tx_ack_send); iov_iter_kvec(&msg.msg_iter, WRITE, txb->kvec, txb->nr_kvec, txb->len); @@ -233,16 +216,14 @@ static void rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *t } else { trace_rxrpc_tx_packet(call->debug_id, whdr, rxrpc_tx_point_call_ack); + now = ktime_get_real(); + if (ack->reason == RXRPC_ACK_PING) + rxrpc_begin_rtt_probe(call, txb->serial, now, rxrpc_rtt_tx_ping); if (txb->flags & RXRPC_REQUEST_ACK) - call->peer->rtt_last_req = ktime_get_real(); + call->peer->rtt_last_req = now; + rxrpc_set_keepalive(call, now); } rxrpc_tx_backoff(call, ret); - - if (!__rxrpc_call_is_complete(call)) { - if (ret < 0) - rxrpc_cancel_rtt_probe(call, txb->serial, rtt_slot); - rxrpc_set_keepalive(call); - } } /* @@ -413,18 +394,36 @@ static size_t rxrpc_prepare_data_packet(struct rxrpc_call *call, struct rxrpc_tx } /* - * Set the times on a packet before transmission + * Set timeouts after transmitting a packet. 
*/ -static int rxrpc_tstamp_data_packets(struct rxrpc_call *call, struct rxrpc_txbuf *txb) +static void rxrpc_tstamp_data_packets(struct rxrpc_call *call, struct rxrpc_txbuf *txb) { - ktime_t tstamp = ktime_get_real(); - int rtt_slot = -1; + ktime_t now = ktime_get_real(); + bool ack_requested = txb->flags & RXRPC_REQUEST_ACK; - txb->last_sent = tstamp; - if (txb->flags & RXRPC_REQUEST_ACK) - rtt_slot = rxrpc_begin_rtt_probe(call, txb->serial, rxrpc_rtt_tx_data); + call->tx_last_sent = now; + txb->last_sent = now; + + if (ack_requested) { + rxrpc_begin_rtt_probe(call, txb->serial, now, rxrpc_rtt_tx_data); + + call->peer->rtt_last_req = now; + if (call->peer->rtt_count > 1) { + ktime_t delay = rxrpc_get_rto_backoff(call->peer, false); - return rtt_slot; + call->ack_lost_at = ktime_add(now, delay); + trace_rxrpc_timer_set(call, delay, rxrpc_timer_trace_lost_ack); + } + } + + if (!test_and_set_bit(RXRPC_CALL_BEGAN_RX_TIMER, &call->flags)) { + ktime_t delay = ms_to_ktime(READ_ONCE(call->next_rx_timo)); + + call->expect_rx_by = ktime_add(now, delay); + trace_rxrpc_timer_set(call, delay, rxrpc_timer_trace_expect_rx); + } + + rxrpc_set_keepalive(call, now); } /* @@ -437,7 +436,7 @@ static int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *t enum rxrpc_tx_point frag; struct msghdr msg; size_t len; - int ret, rtt_slot = -1; + int ret; _enter("%x,{%d}", txb->seq, txb->len); @@ -479,8 +478,6 @@ static int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *t } retry: - rtt_slot = rxrpc_tstamp_data_packets(call, txb); - /* send the packet by UDP * - returns -EMSGSIZE if UDP would have to fragment the packet * to go out of the interface @@ -493,7 +490,6 @@ static int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *t if (ret < 0) { rxrpc_inc_stat(call->rxnet, stat_tx_data_send_fail); - rxrpc_cancel_rtt_probe(call, txb->serial, rtt_slot); trace_rxrpc_tx_fail(call->debug_id, txb->serial, ret, frag); } else { trace_rxrpc_tx_packet(call->debug_id, whdr, frag); @@ -508,28 +504,7 @@ static int rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_txbuf *t done: if (ret >= 0) { - call->tx_last_sent = txb->last_sent; - if (txb->flags & RXRPC_REQUEST_ACK) { - call->peer->rtt_last_req = txb->last_sent; - if (call->peer->rtt_count > 1) { - ktime_t delay = rxrpc_get_rto_backoff(call->peer, false); - ktime_t now = ktime_get_real(); - - call->ack_lost_at = ktime_add(now, delay); - trace_rxrpc_timer_set(call, delay, rxrpc_timer_trace_lost_ack); - } - } - - if (txb->seq == 1 && - !test_and_set_bit(RXRPC_CALL_BEGAN_RX_TIMER, - &call->flags)) { - ktime_t delay = ms_to_ktime(READ_ONCE(call->next_rx_timo)); - - call->expect_rx_by = ktime_add(ktime_get_real(), delay); - trace_rxrpc_timer_set(call, delay, rxrpc_timer_trace_expect_rx); - } - - rxrpc_set_keepalive(call); + rxrpc_tstamp_data_packets(call, txb); } else { /* Cancel the call if the initial transmission fails, * particularly if that's due to network routing issues that From patchwork Mon Mar 4 08:43:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580199 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 401133D38F for ; Mon, 4 Mar 2024 08:44:18 +0000 
(UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541861; cv=none; b=LLl1Iy+dmDnNGAaIuFJxODkw9Dc0Fmm0TxTn4MK521SerjwrEw1gvDZaLD+GY5kUld+ZDd0SQDhffXq0Ez10D1Gh3Sy/TwancrJAd/8sg4HyWA7XKgf6Rc5iEzI5nfSU38Q+WYGWvoqfSrkhcNRB2W5i3LzI2z5BCW6RH7W7GCA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541861; c=relaxed/simple; bh=Hm+gxW/kHadxtc48yXaukCqMew6ym1016BnoNle9pNk=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=YdJ43k6jES9oe4y3xc4HLLLQbj+zbmAoUN1+H7VkP/naQDCicnwpNVrIBe+r0Hg7e1OU5Jf1TvaYPNRZRnew1ph8qwwFmk458qXOQ4tWofxER2N1WvUdXKz+zKKVJFT6+OyoDHm90Q0lPaembxMDE4uw3Qt2YaSsj1f5FlPrnzA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=NZQ9Azr0; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="NZQ9Azr0" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1709541858; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=UXvJfV7xY8fIppX1WMGYdHo1z+U7+tMRhsEec3It9fE=; b=NZQ9Azr0pI+U8tf1htHvUUif7s9AaGcAZz6cDcd6iwvBBRbMHpOpaSFoAI2Ir9skWx17pe fGJoI38ZuntYooUP+nuEphuoUw8nLh49+uGkp9QNZWKKuQSJKtuqjIJLgyppKybC3SAe9K DSDxQm4KUuMfwtlBavDv8CgAaX1D6Fk= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-651-Oe7E2GaqM4eeeZsLRGrRHg-1; Mon, 04 Mar 2024 03:44:13 -0500 X-MC-Unique: Oe7E2GaqM4eeeZsLRGrRHg-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com [10.11.54.1]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 6BE968489A0; Mon, 4 Mar 2024 08:44:13 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.42.28.114]) by smtp.corp.redhat.com (Postfix) with ESMTP id E0A7F24D; Mon, 4 Mar 2024 08:44:11 +0000 (UTC) From: David Howells To: netdev@vger.kernel.org Cc: David Howells , Marc Dionne , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next v2 20/21] rxrpc: Clean up the resend algorithm Date: Mon, 4 Mar 2024 08:43:17 +0000 Message-ID: <20240304084322.705539-21-dhowells@redhat.com> In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com> References: <20240304084322.705539-1-dhowells@redhat.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.1 X-Patchwork-Delegate: kuba@kernel.org Clean up the DATA packet resending algorithm to retransmit packets as we come across them whilst walking the transmission buffer rather than queuing them for retransmission at the end. This can be done as ACK parsing - and thus the discarding of successful packets - is now done in the same thread rather than separately in softirq context and a locked section is no longer required. Signed-off-by: David Howells cc: Marc Dionne cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-afs@lists.infradead.org cc: netdev@vger.kernel.org --- net/rxrpc/call_event.c | 79 ++++++++++++++++++++---------------------- 1 file changed, 38 insertions(+), 41 deletions(-) diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 2a9f74eb7c46..6c5e3054209b 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -71,23 +71,18 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) struct rxrpc_skb_priv *sp; struct rxrpc_txbuf *txb; rxrpc_seq_t transmitted = call->tx_transmitted; - ktime_t now, max_age, oldest, ack_ts, delay; - bool unacked = false; + ktime_t next_resend = KTIME_MAX, rto = ns_to_ktime(call->peer->rto_us * NSEC_PER_USEC); + ktime_t resend_at = KTIME_MAX, now, delay; + bool unacked = false, did_send = false; unsigned int i; - LIST_HEAD(retrans_queue); _enter("{%d,%d}", call->acks_hard_ack, call->tx_top); now = ktime_get_real(); - max_age = ktime_sub_us(now, call->peer->rto_us); - oldest = now; if (list_empty(&call->tx_buffer)) goto no_resend; - if (list_empty(&call->tx_buffer)) - goto no_further_resend; - trace_rxrpc_resend(call, ack_skb); txb = list_first_entry(&call->tx_buffer, struct rxrpc_txbuf, call_link); @@ -115,19 +110,23 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) goto no_further_resend; found_txb: - if (after(txb->serial, call->acks_highest_serial)) + resend_at = ktime_add(txb->last_sent, rto); + if (after(txb->serial, call->acks_highest_serial)) { + if (ktime_after(resend_at, now) && + ktime_before(resend_at, next_resend)) + next_resend = resend_at; continue; /* Ack point not yet reached */ + } rxrpc_see_txbuf(txb, rxrpc_txbuf_see_unacked); - if (list_empty(&txb->tx_link)) { - list_add_tail(&txb->tx_link, &retrans_queue); - txb->flags |= RXRPC_TXBUF_RESENT; - } - trace_rxrpc_retransmit(call, txb->seq, txb->serial, - ktime_to_ns(ktime_sub(txb->last_sent, - max_age))); + ktime_sub(resend_at, now)); + + txb->flags |= RXRPC_TXBUF_RESENT; + rxrpc_transmit_one(call, txb); + did_send = true; + now = ktime_get_real(); if (list_is_last(&txb->call_link, &call->tx_buffer)) goto no_further_resend; @@ -144,6 +143,8 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) goto no_further_resend; list_for_each_entry_from(txb, &call->tx_buffer, call_link) { + resend_at = ktime_add(txb->last_sent, rto); + if (before_eq(txb->seq, call->acks_prev_seq)) continue; if (after(txb->seq, call->tx_transmitted)) @@ 
-153,25 +154,30 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) before(txb->serial, ntohl(ack->serial))) goto do_resend; /* Wasn't accounted for by a more recent ping. */ - if (ktime_after(txb->last_sent, max_age)) { - if (ktime_before(txb->last_sent, oldest)) - oldest = txb->last_sent; + if (ktime_after(resend_at, now)) { + if (ktime_before(resend_at, next_resend)) + next_resend = resend_at; continue; } do_resend: unacked = true; - if (list_empty(&txb->tx_link)) { - list_add_tail(&txb->tx_link, &retrans_queue); - txb->flags |= RXRPC_TXBUF_RESENT; - rxrpc_inc_stat(call->rxnet, stat_tx_data_retrans); - } + + txb->flags |= RXRPC_TXBUF_RESENT; + rxrpc_transmit_one(call, txb); + did_send = true; + rxrpc_inc_stat(call->rxnet, stat_tx_data_retrans); + now = ktime_get_real(); } no_further_resend: no_resend: - delay = rxrpc_get_rto_backoff(call->peer, !list_empty(&retrans_queue)); - call->resend_at = ktime_add(oldest, delay); + if (resend_at < KTIME_MAX) { + delay = rxrpc_get_rto_backoff(call->peer, did_send); + resend_at = ktime_add(resend_at, delay); + trace_rxrpc_timer_set(call, resend_at - now, rxrpc_timer_trace_resend_reset); + } + call->resend_at = resend_at; if (unacked) rxrpc_congestion_timeout(call); @@ -180,24 +186,15 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb) * that an ACK got lost somewhere. Send a ping to find out instead of * retransmitting data. */ - if (list_empty(&retrans_queue)) { - trace_rxrpc_timer_set(call, delay, rxrpc_timer_trace_resend_reset); - ack_ts = ktime_sub(now, call->acks_latest_ts); - if (ktime_to_us(ack_ts) < (call->peer->srtt_us >> 3)) - goto out; - rxrpc_send_ACK(call, RXRPC_ACK_PING, 0, - rxrpc_propose_ack_ping_for_0_retrans); - goto out; - } + if (!did_send) { + ktime_t next_ping = ktime_add_us(call->acks_latest_ts, + call->peer->srtt_us >> 3); - /* Retransmit the queue */ - while ((txb = list_first_entry_or_null(&retrans_queue, - struct rxrpc_txbuf, tx_link))) { - list_del_init(&txb->tx_link); - rxrpc_transmit_one(call, txb); + if (ktime_sub(next_ping, now) <= 0) + rxrpc_send_ACK(call, RXRPC_ACK_PING, 0, + rxrpc_propose_ack_ping_for_0_retrans); } -out: _leave(""); } From patchwork Mon Mar 4 08:43:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13580200 X-Patchwork-Delegate: kuba@kernel.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8E5203D963 for ; Mon, 4 Mar 2024 08:44:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541863; cv=none; b=qZ/woGyMWhDOJmJ+OhQTcOj0lz5/JMR3pgqtPS9OfrAdhq4edyV+NH8x+TDTGKaRTxG8JagxoEFhLyvJVU0omOpkN2qICc3A/nCmoB6qRR9mhj31bkg7b5muy/cDYhXbNCc5lilSiAk96D4jCf9TApODm4jwAfLzBKzz2xslCc0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709541863; c=relaxed/simple; bh=+YU+DFYa36t0XHlc/+xgZbyWimKd8RtBolX8Ff61bVs=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Kk63ieVvud/uFnRczg8MCn+4MKMVHK2oeSkfeAmbaVx0QpKMLrUXol9sardalE4+cfc9RpxxuYEH0iHIpVokBJmE+/x3A7nYIbSaWU5bnxfr2246ehJvAj0Pz3gbghhAPl5zfMnvauoCviexcxALlHrbNmRTtpYbJo3q979cHns= ARC-Authentication-Results: i=1; 
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, Marc Dionne, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, linux-afs@lists.infradead.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 21/21] rxrpc: Extract useful fields from a received ACK to skb priv data
Date: Mon, 4 Mar 2024 08:43:18 +0000
Message-ID: <20240304084322.705539-22-dhowells@redhat.com>
In-Reply-To: <20240304084322.705539-1-dhowells@redhat.com>
References: <20240304084322.705539-1-dhowells@redhat.com>

Extract useful fields from a received ACK packet into the skb private data
early on in the process of parsing incoming packets.  This makes the ACK
fields available even before we've matched the ACK up to a call and will
allow us to deal with path MTU discovery probe responses even after the
relevant call has been completed.

Signed-off-by: David Howells
cc: Marc Dionne
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: linux-afs@lists.infradead.org
cc: netdev@vger.kernel.org
---
 net/rxrpc/ar-internal.h |  7 +++--
 net/rxrpc/call_event.c  |  4 +--
 net/rxrpc/input.c       | 61 ++++++++++++++++++-----------------------
 net/rxrpc/io_thread.c   | 11 ++++++++
 4 files changed, 45 insertions(+), 38 deletions(-)

diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
index 21ecac22b51d..08c0a32db8c7 100644
--- a/net/rxrpc/ar-internal.h
+++ b/net/rxrpc/ar-internal.h
@@ -198,8 +198,8 @@ struct rxrpc_host_header {
  * - max 48 bytes (struct sk_buff::cb)
  */
 struct rxrpc_skb_priv {
-       struct rxrpc_connection *conn;  /* Connection referred to (poke packet) */
        union {
+               struct rxrpc_connection *conn;  /* Connection referred to (poke packet) */
                struct {
                        u16             offset;         /* Offset of data */
                        u16             len;            /* Length of data */
@@ -208,9 +208,12 @@ struct rxrpc_skb_priv {
                };
                struct {
                        rxrpc_seq_t     first_ack;      /* First packet in acks table */
+                       rxrpc_seq_t     prev_ack;       /* Highest seq seen */
+                       rxrpc_serial_t  acked_serial;   /* Packet in response to (or 0) */
+                       u8              reason;         /* Reason for ack */
                        u8              nr_acks;        /* Number of acks+nacks */
                        u8              nr_nacks;       /* Number of nacks */
-               };
+               } ack;
        };
        struct rxrpc_host_header hdr;   /* RxRPC packet header from this packet */
 };
diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
index 6c5e3054209b..7bbb68504766 100644
--- a/net/rxrpc/call_event.c
+++ b/net/rxrpc/call_event.c
@@ -93,12 +93,12 @@ void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb)
                sp = rxrpc_skb(ack_skb);
                ack = (void *)ack_skb->data + sizeof(struct rxrpc_wire_header);

-               for (i = 0; i < sp->nr_acks; i++) {
+               for (i = 0; i < sp->ack.nr_acks; i++) {
                        rxrpc_seq_t seq;

                        if (ack->acks[i] & 1)
                                continue;
-                       seq = sp->first_ack + i;
+                       seq = sp->ack.first_ack + i;
                        if (after(txb->seq, transmitted))
                                break;
                        if (after(txb->seq, seq))
diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
index 09cce1d5d605..3dedb8c0618c 100644
--- a/net/rxrpc/input.c
+++ b/net/rxrpc/input.c
@@ -710,20 +710,19 @@ static rxrpc_seq_t rxrpc_input_check_prev_ack(struct rxrpc_call *call,
                                              rxrpc_seq_t seq)
 {
        struct sk_buff *skb = call->cong_last_nack;
-       struct rxrpc_ackpacket ack;
        struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
        unsigned int i, new_acks = 0, retained_nacks = 0;
-       rxrpc_seq_t old_seq = sp->first_ack;
-       u8 *acks = skb->data + sizeof(struct rxrpc_wire_header) + sizeof(ack);
+       rxrpc_seq_t old_seq = sp->ack.first_ack;
+       u8 *acks = skb->data + sizeof(struct rxrpc_wire_header) + sizeof(struct rxrpc_ackpacket);

-       if (after_eq(seq, old_seq + sp->nr_acks)) {
-               summary->nr_new_acks += sp->nr_nacks;
-               summary->nr_new_acks += seq - (old_seq + sp->nr_acks);
+       if (after_eq(seq, old_seq + sp->ack.nr_acks)) {
+               summary->nr_new_acks += sp->ack.nr_nacks;
+               summary->nr_new_acks += seq - (old_seq + sp->ack.nr_acks);
                summary->nr_retained_nacks = 0;
        } else if (seq == old_seq) {
-               summary->nr_retained_nacks = sp->nr_nacks;
+               summary->nr_retained_nacks = sp->ack.nr_nacks;
        } else {
-               for (i = 0; i < sp->nr_acks; i++) {
+               for (i = 0; i < sp->ack.nr_acks; i++) {
                        if (acks[i] == RXRPC_ACK_TYPE_NACK) {
                                if (before(old_seq + i, seq))
                                        new_acks++;
@@ -736,7 +735,7 @@ static rxrpc_seq_t rxrpc_input_check_prev_ack(struct rxrpc_call *call,
                summary->nr_retained_nacks = retained_nacks;
        }

-       return old_seq + sp->nr_acks;
+       return old_seq + sp->ack.nr_acks;
 }

 /*
@@ -756,10 +755,10 @@ static void rxrpc_input_soft_acks(struct rxrpc_call *call,
 {
        struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
        unsigned int i, old_nacks = 0;
-       rxrpc_seq_t lowest_nak = seq + sp->nr_acks;
+       rxrpc_seq_t lowest_nak = seq + sp->ack.nr_acks;
        u8 *acks = skb->data + sizeof(struct rxrpc_wire_header) + sizeof(struct rxrpc_ackpacket);

-       for (i = 0; i < sp->nr_acks; i++) {
+       for (i = 0; i < sp->ack.nr_acks; i++) {
                if (acks[i] == RXRPC_ACK_TYPE_ACK) {
                        summary->nr_acks++;
                        if (after_eq(seq, since))
@@ -771,7 +770,7 @@ static void rxrpc_input_soft_acks(struct rxrpc_call *call,
                                old_nacks++;
                        } else {
                                summary->nr_new_nacks++;
-                               sp->nr_nacks++;
+                               sp->ack.nr_nacks++;
                        }

                        if (before(seq, lowest_nak))
@@ -832,7 +831,6 @@ static bool rxrpc_is_ack_valid(struct rxrpc_call *call,
 static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
 {
        struct rxrpc_ack_summary summary = { 0 };
-       struct rxrpc_ackpacket ack;
        struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
        struct rxrpc_acktrailer trailer;
        rxrpc_serial_t ack_serial, acked_serial;
@@ -841,29 +839,24 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)

        _enter("");

-       offset = sizeof(struct rxrpc_wire_header);
-       if (skb_copy_bits(skb, offset, &ack, sizeof(ack)) < 0)
-               return rxrpc_proto_abort(call, 0, rxrpc_badmsg_short_ack);
-       offset += sizeof(ack);
-
-       ack_serial = sp->hdr.serial;
-       acked_serial = ntohl(ack.serial);
-       first_soft_ack = ntohl(ack.firstPacket);
-       prev_pkt = ntohl(ack.previousPacket);
-       hard_ack = first_soft_ack - 1;
-       nr_acks = ack.nAcks;
-       sp->first_ack = first_soft_ack;
-       sp->nr_acks = nr_acks;
-       summary.ack_reason = (ack.reason < RXRPC_ACK__INVALID ?
-                             ack.reason : RXRPC_ACK__INVALID);
+       offset = sizeof(struct rxrpc_wire_header) + sizeof(struct rxrpc_ackpacket);
+
+       ack_serial = sp->hdr.serial;
+       acked_serial = sp->ack.acked_serial;
+       first_soft_ack = sp->ack.first_ack;
+       prev_pkt = sp->ack.prev_ack;
+       nr_acks = sp->ack.nr_acks;
+       hard_ack = first_soft_ack - 1;
+       summary.ack_reason = (sp->ack.reason < RXRPC_ACK__INVALID ?
+                             sp->ack.reason : RXRPC_ACK__INVALID);

        trace_rxrpc_rx_ack(call, ack_serial, acked_serial,
                           first_soft_ack, prev_pkt,
                           summary.ack_reason, nr_acks);
-       rxrpc_inc_stat(call->rxnet, stat_rx_acks[ack.reason]);
+       rxrpc_inc_stat(call->rxnet, stat_rx_acks[summary.ack_reason]);

        if (acked_serial != 0) {
-               switch (ack.reason) {
+               switch (summary.ack_reason) {
                case RXRPC_ACK_PING_RESPONSE:
                        rxrpc_complete_rtt_probe(call, skb->tstamp, acked_serial,
                                                 ack_serial, rxrpc_rtt_rx_ping_response);
@@ -883,7 +876,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
         * indicates that the client address changed due to NAT.  The server
         * lost the call because it switched to a different peer.
         */
-       if (unlikely(ack.reason == RXRPC_ACK_EXCEEDS_WINDOW) &&
+       if (unlikely(summary.ack_reason == RXRPC_ACK_EXCEEDS_WINDOW) &&
            first_soft_ack == 1 &&
            prev_pkt == 0 &&
            rxrpc_is_client_call(call)) {
@@ -896,7 +889,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
         * indicate a change of address.  However, we can retransmit the call
         * if we still have it buffered to the beginning.
         */
-       if (unlikely(ack.reason == RXRPC_ACK_OUT_OF_SEQUENCE) &&
+       if (unlikely(summary.ack_reason == RXRPC_ACK_OUT_OF_SEQUENCE) &&
            first_soft_ack == 1 &&
            prev_pkt == 0 &&
            call->acks_hard_ack == 0 &&
@@ -937,7 +930,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
        call->acks_first_seq = first_soft_ack;
        call->acks_prev_seq = prev_pkt;

-       switch (ack.reason) {
+       switch (summary.ack_reason) {
        case RXRPC_ACK_PING:
                break;
        default:
@@ -994,7 +987,7 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
                rxrpc_congestion_management(call, skb, &summary, acked_serial);

 send_response:
-       if (ack.reason == RXRPC_ACK_PING)
+       if (summary.ack_reason == RXRPC_ACK_PING)
                rxrpc_send_ACK(call, RXRPC_ACK_PING_RESPONSE, ack_serial,
                               rxrpc_propose_ack_respond_to_ping);
        else if (sp->hdr.flags & RXRPC_REQUEST_ACK)
diff --git a/net/rxrpc/io_thread.c b/net/rxrpc/io_thread.c
index 4a3a08a0e2cd..0300baa9afcd 100644
--- a/net/rxrpc/io_thread.c
+++ b/net/rxrpc/io_thread.c
@@ -124,6 +124,7 @@ static bool rxrpc_extract_header(struct rxrpc_skb_priv *sp,
                                 struct sk_buff *skb)
 {
        struct rxrpc_wire_header whdr;
+       struct rxrpc_ackpacket ack;

        /* dig out the RxRPC connection details */
        if (skb_copy_bits(skb, 0, &whdr, sizeof(whdr)) < 0)
@@ -141,6 +142,16 @@ static bool rxrpc_extract_header(struct rxrpc_skb_priv *sp,
        sp->hdr.securityIndex = whdr.securityIndex;
        sp->hdr._rsvd = ntohs(whdr._rsvd);
        sp->hdr.serviceId = ntohs(whdr.serviceId);
+
+       if (sp->hdr.type == RXRPC_PACKET_TYPE_ACK) {
+               if (skb_copy_bits(skb, sizeof(whdr), &ack, sizeof(ack)) < 0)
+                       return rxrpc_bad_message(skb, rxrpc_badmsg_short_ack);
+               sp->ack.first_ack = ntohl(ack.firstPacket);
+               sp->ack.prev_ack = ntohl(ack.previousPacket);
+               sp->ack.acked_serial = ntohl(ack.serial);
+               sp->ack.reason = ack.reason;
+               sp->ack.nr_acks = ack.nAcks;
+       }
        return true;
 }
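
The pattern applied by the ar-internal.h and io_thread.c hunks above is "decode the
wire ACK header once, as early as possible, into host-order fields cached alongside
the packet", so later stages read the cached copy instead of re-parsing the skb.
The following stand-alone C sketch shows the same idea outside the kernel; it is
not the rxrpc code, and the struct layouts, field names and values are simplified
stand-ins for the real wire format.

/* Stand-alone illustration of caching decoded ACK header fields alongside a
 * received packet so that later stages never re-parse the wire bytes.
 * Field names mirror the patch; the layout and values are illustrative only.
 */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct wire_ack {                  /* big-endian on-the-wire form (simplified) */
        uint32_t first_packet;
        uint32_t previous_packet;
        uint32_t serial;
        uint8_t  reason;
        uint8_t  n_acks;
} __attribute__((packed));

struct pkt_priv {                  /* host-order copy, like rxrpc_skb_priv.ack */
        uint32_t first_ack;
        uint32_t prev_ack;
        uint32_t acked_serial;
        uint8_t  reason;
        uint8_t  nr_acks;
};

/* Decode once, early - analogous to the new block in rxrpc_extract_header(). */
static int extract_ack(const uint8_t *buf, size_t len, struct pkt_priv *sp)
{
        struct wire_ack ack;

        if (len < sizeof(ack))
                return -1;         /* short packet: reject before any further work */
        memcpy(&ack, buf, sizeof(ack));
        sp->first_ack    = ntohl(ack.first_packet);
        sp->prev_ack     = ntohl(ack.previous_packet);
        sp->acked_serial = ntohl(ack.serial);
        sp->reason       = ack.reason;
        sp->nr_acks      = ack.n_acks;
        return 0;
}

int main(void)
{
        /* Fake wire bytes: firstPacket=5, previousPacket=4, serial=99. */
        struct wire_ack wire = {
                .first_packet    = htonl(5),
                .previous_packet = htonl(4),
                .serial          = htonl(99),
                .reason          = 6,
                .n_acks          = 3,
        };
        struct pkt_priv sp;

        if (extract_ack((const uint8_t *)&wire, sizeof(wire), &sp) == 0)
                printf("first=%u prev=%u acked_serial=%u reason=%u nr_acks=%u\n",
                       (unsigned int)sp.first_ack, (unsigned int)sp.prev_ack,
                       (unsigned int)sp.acked_serial, (unsigned int)sp.reason,
                       (unsigned int)sp.nr_acks);
        return 0;
}

In the patch itself the equivalent decode happens in rxrpc_extract_header() in the
I/O thread, so by the time rxrpc_input_ack() runs it only consults sp->ack.* for
these header fields rather than copying the rxrpc_ackpacket out of the skb again.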