From patchwork Sat Feb 13 00:09:02 2021 X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12086355 Message-Id: <2d6858b1625aa3c96688c6c6a9157c2d2b16f43e.1613174954.git.gitgitgadget@gmail.com> Date: Sat, 13 Feb 2021 00:09:02 +0000 Subject: [PATCH v3 01/12]
pkt-line: eliminate the need for static buffer in packet_write_gently() Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Teach `packet_write_gently()` to write the pkt-line header and the actual buffer in 2 separate calls to `write_in_full()` and avoid the need for a static buffer, thread-safe scratch space, or an excessively large stack buffer. Change the API of `write_packetized_from_fd()` to accept a scratch space argument from its caller to avoid similar issues here. These changes are intended to make it easier to use pkt-line routines in a multi-threaded context with multiple concurrent writers writing to different streams. Signed-off-by: Jeff Hostetler --- convert.c | 7 ++++--- pkt-line.c | 28 +++++++++++++++++++--------- pkt-line.h | 12 +++++++++--- 3 files changed, 32 insertions(+), 15 deletions(-) diff --git a/convert.c b/convert.c index ee360c2f07ce..41012c2d301c 100644 --- a/convert.c +++ b/convert.c @@ -883,9 +883,10 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len if (err) goto done; - if (fd >= 0) - err = write_packetized_from_fd(fd, process->in); - else + if (fd >= 0) { + struct packet_scratch_space scratch; + err = write_packetized_from_fd(fd, process->in, &scratch); + } else err = write_packetized_from_buf(src, len, process->in); if (err) goto done; diff --git a/pkt-line.c b/pkt-line.c index d633005ef746..4cff2f7a68a5 100644 --- a/pkt-line.c +++ b/pkt-line.c @@ -196,17 +196,25 @@ int packet_write_fmt_gently(int fd, const char *fmt, ...) static int packet_write_gently(const int fd_out, const char *buf, size_t size) { - static char packet_write_buffer[LARGE_PACKET_MAX]; + char header[4]; size_t packet_size; - if (size > sizeof(packet_write_buffer) - 4) + if (size > LARGE_PACKET_DATA_MAX) return error(_("packet write failed - data exceeds max packet size")); packet_trace(buf, size, 1); packet_size = size + 4; - set_packet_header(packet_write_buffer, packet_size); - memcpy(packet_write_buffer + 4, buf, size); - if (write_in_full(fd_out, packet_write_buffer, packet_size) < 0) + + set_packet_header(header, packet_size); + + /* + * Write the header and the buffer in 2 parts so that we do not need + * to allocate a buffer or rely on a static buffer. This avoids perf + * and multi-threading issues. 
+ */ + + if (write_in_full(fd_out, header, 4) < 0 || + write_in_full(fd_out, buf, size) < 0) return error(_("packet write failed")); return 0; } @@ -242,19 +250,21 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len) packet_trace(data, len, 1); } -int write_packetized_from_fd(int fd_in, int fd_out) +int write_packetized_from_fd(int fd_in, int fd_out, + struct packet_scratch_space *scratch) { - static char buf[LARGE_PACKET_DATA_MAX]; int err = 0; ssize_t bytes_to_write; while (!err) { - bytes_to_write = xread(fd_in, buf, sizeof(buf)); + bytes_to_write = xread(fd_in, scratch->buffer, + sizeof(scratch->buffer)); if (bytes_to_write < 0) return COPY_READ_ERROR; if (bytes_to_write == 0) break; - err = packet_write_gently(fd_out, buf, bytes_to_write); + err = packet_write_gently(fd_out, scratch->buffer, + bytes_to_write); } if (!err) err = packet_flush_gently(fd_out); diff --git a/pkt-line.h b/pkt-line.h index 8c90daa59ef0..c0722aefe638 100644 --- a/pkt-line.h +++ b/pkt-line.h @@ -5,6 +5,13 @@ #include "strbuf.h" #include "sideband.h" +#define LARGE_PACKET_MAX 65520 +#define LARGE_PACKET_DATA_MAX (LARGE_PACKET_MAX - 4) + +struct packet_scratch_space { + char buffer[LARGE_PACKET_DATA_MAX]; /* does not include header bytes */ +}; + /* * Write a packetized stream, where each line is preceded by * its length (including the header) as a 4-byte hex number. @@ -32,7 +39,7 @@ void packet_buf_write(struct strbuf *buf, const char *fmt, ...) __attribute__((f void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len); int packet_flush_gently(int fd); int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3))); -int write_packetized_from_fd(int fd_in, int fd_out); +int write_packetized_from_fd(int fd_in, int fd_out, struct packet_scratch_space *scratch); int write_packetized_from_buf(const char *src_in, size_t len, int fd_out); /* @@ -213,8 +220,7 @@ enum packet_read_status packet_reader_read(struct packet_reader *reader); enum packet_read_status packet_reader_peek(struct packet_reader *reader); #define DEFAULT_PACKET_MAX 1000 -#define LARGE_PACKET_MAX 65520 -#define LARGE_PACKET_DATA_MAX (LARGE_PACKET_MAX - 4) + extern char packet_buffer[LARGE_PACKET_MAX]; struct packet_writer { From patchwork Sat Feb 13 00:09:03 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johannes Schindelin X-Patchwork-Id: 12086357 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B3423C433E0 for ; Sat, 13 Feb 2021 00:10:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7FD2664D5D for ; Sat, 13 Feb 2021 00:10:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230053AbhBMAKC (ORCPT ); Fri, 12 Feb 2021 19:10:02 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49156 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229796AbhBMAKA (ORCPT ); Fri, 
12 Feb 2021 19:10:00 -0500 Received: from mail-wm1-x335.google.com (mail-wm1-x335.google.com [IPv6:2a00:1450:4864:20::335]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ADD1EC061756 for ; Fri, 12 Feb 2021 16:09:19 -0800 (PST) Received: by mail-wm1-x335.google.com with SMTP id x4so1370690wmi.3 for ; Fri, 12 Feb 2021 16:09:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=1q5FaxYK9Kc6ZuQx9j/xJ03xM7/gGqa15J7NjF8Uad8=; b=hTaR02SlWXIp7XBxJVIJL2ag4ds4nUaX7fJb5R4RHxg6acsVs7SH+R9zfZX7tDRbDX Pmt0X3cy1E/Z71m1asxUd8fjBHIewaLschTyS7M1mghKewn8isdmvnQOKR4BQ9Sk/sdY uwoRZ4CIU2dkx5bw/osURtaF5R24hfHjiI08bmwvvh6XN+eSdsl0UQkdzU+I5i5YL4Xt Q5SXqP0tU3yFZKjpV7C0euVouNMBAtiTkCD40M6vRsSgHb9MfeBlqprGZdtsRxk43b1f 1U/e5E+LAr4tnqNKDPMPOFwYf+NSxX9SX1BJYVUYwWCEWsSyvnctfv9d/4sSGWRgJYY6 SsXw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=1q5FaxYK9Kc6ZuQx9j/xJ03xM7/gGqa15J7NjF8Uad8=; b=kYUrksBx9ohFPlAFxP77EyS/U7bs9q4lwZSkC522+gUW+IPoAXcYks7/+nOn2Yy10o z19WPBro3opNuz3p50CBtN2FvlMd8blBkP5PJmRmYliP4Q3a74ze8YC90T3kJ0KnUiXJ kDAmHOjqdUXGisq27GteE20cfhtfEBsoX3KaI7sfGQFKKy4I5vZdVNwFMUPkp3qLLVTr uzadFAn5eXDDbPguAnZO/8jsHNLqHB3noj4R25taCXTLgs7Qe13GxRLCRl8HTd+WNxY6 vv57sPNgJvlL2F2AZqqHBQlzld3/HHSZXhenLquKQCqR3vUaa+c9TRndY/CotNNZkK5b ds+Q== X-Gm-Message-State: AOAM532PkW0I+NRXr2AAD5sxnz9f4BW2VZLfVd+7jrVkvxSqtIBBw6cG T5jzSWb0j9qVg3zxkX36TOergLJxhxU= X-Google-Smtp-Source: ABdhPJwnzufvMHQP5KbQ7A+FpqsMZjcvIzv1sKTaD7fVrD+k+W8hWqHgyrjsZ/o7QoLu0XYYqRCtAw== X-Received: by 2002:a1c:545d:: with SMTP id p29mr4616437wmi.77.1613174957394; Fri, 12 Feb 2021 16:09:17 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id i1sm1478425wmq.12.2021.02.12.16.09.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Feb 2021 16:09:17 -0800 (PST) Message-Id: <91a9f63d66924d14a22feedf7b1d88fe298b90bc.1613174954.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Sat, 13 Feb 2021 00:09:03 +0000 Subject: [PATCH v3 02/12] pkt-line: do not issue flush packets in write_packetized_*() Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Johannes Schindelin Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Johannes Schindelin From: Johannes Schindelin Remove the `packet_flush_gently()` call in `write_packetized_from_buf() and `write_packetized_from_fd()` and require the caller to call it if desired. Rename both functions to `write_packetized_from_*_no_flush()` to prevent later merge accidents. `write_packetized_from_buf()` currently only has one caller: `apply_multi_file_filter()` in `convert.c`. It always wants a flush packet to be written after writing the payload. However, we are about to introduce a caller that wants to write many packets before a final flush packet, so let's make the caller responsible for emitting the flush packet. 
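To illustrate the new calling convention (a sketch only; `send_two_payloads()` is a hypothetical helper, not part of this series), a caller that streams several payloads now emits the packets itself and terminates the stream with a single flush:

static int send_two_payloads(int fd, const char *a, size_t a_len,
			     const char *b, size_t b_len)
{
	int err = write_packetized_from_buf_no_flush(a, a_len, fd);

	if (!err)
		err = write_packetized_from_buf_no_flush(b, b_len, fd);
	if (!err)
		/* the caller, not the helper, owns the flush packet now */
		err = packet_flush_gently(fd);
	return err;
}

This is the same pattern the updated `apply_multi_file_filter()` below follows for the single-payload case.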
Signed-off-by: Jeff Hostetler Signed-off-by: Johannes Schindelin --- convert.c | 8 ++++++-- pkt-line.c | 10 +++------- pkt-line.h | 4 ++-- 3 files changed, 11 insertions(+), 11 deletions(-) diff --git a/convert.c b/convert.c index 41012c2d301c..bccf7afa8797 100644 --- a/convert.c +++ b/convert.c @@ -885,9 +885,13 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len if (fd >= 0) { struct packet_scratch_space scratch; - err = write_packetized_from_fd(fd, process->in, &scratch); + err = write_packetized_from_fd_no_flush(fd, process->in, &scratch); } else - err = write_packetized_from_buf(src, len, process->in); + err = write_packetized_from_buf_no_flush(src, len, process->in); + if (err) + goto done; + + err = packet_flush_gently(process->in); if (err) goto done; diff --git a/pkt-line.c b/pkt-line.c index 4cff2f7a68a5..3602b0d37092 100644 --- a/pkt-line.c +++ b/pkt-line.c @@ -250,8 +250,8 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len) packet_trace(data, len, 1); } -int write_packetized_from_fd(int fd_in, int fd_out, - struct packet_scratch_space *scratch) +int write_packetized_from_fd_no_flush(int fd_in, int fd_out, + struct packet_scratch_space *scratch) { int err = 0; ssize_t bytes_to_write; @@ -266,12 +266,10 @@ int write_packetized_from_fd(int fd_in, int fd_out, err = packet_write_gently(fd_out, scratch->buffer, bytes_to_write); } - if (!err) - err = packet_flush_gently(fd_out); return err; } -int write_packetized_from_buf(const char *src_in, size_t len, int fd_out) +int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out) { int err = 0; size_t bytes_written = 0; @@ -287,8 +285,6 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out) err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write); bytes_written += bytes_to_write; } - if (!err) - err = packet_flush_gently(fd_out); return err; } diff --git a/pkt-line.h b/pkt-line.h index c0722aefe638..a7149429ac35 100644 --- a/pkt-line.h +++ b/pkt-line.h @@ -39,8 +39,8 @@ void packet_buf_write(struct strbuf *buf, const char *fmt, ...) __attribute__((f void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len); int packet_flush_gently(int fd); int packet_write_fmt_gently(int fd, const char *fmt, ...) 
__attribute__((format (printf, 2, 3))); -int write_packetized_from_fd(int fd_in, int fd_out, struct packet_scratch_space *scratch); -int write_packetized_from_buf(const char *src_in, size_t len, int fd_out); +int write_packetized_from_fd_no_flush(int fd_in, int fd_out, struct packet_scratch_space *scratch); +int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out); /* * Read a packetized line into the buffer, which must be at least size bytes From patchwork Sat Feb 13 00:09:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johannes Schindelin X-Patchwork-Id: 12086359 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6AB50C433E0 for ; Sat, 13 Feb 2021 00:10:15 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3A7F5601FB for ; Sat, 13 Feb 2021 00:10:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231200AbhBMAKO (ORCPT ); Fri, 12 Feb 2021 19:10:14 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49158 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229798AbhBMAKA (ORCPT ); Fri, 12 Feb 2021 19:10:00 -0500 Received: from mail-wr1-x432.google.com (mail-wr1-x432.google.com [IPv6:2a00:1450:4864:20::432]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D4C73C0613D6 for ; Fri, 12 Feb 2021 16:09:19 -0800 (PST) Received: by mail-wr1-x432.google.com with SMTP id 7so1410646wrz.0 for ; Fri, 12 Feb 2021 16:09:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=ZrqHdrLvGrETy2dSaKQym4tWVcFDSXunor28gito1nY=; b=pb/6g7RTW4Y9rRzwWU6bOmB5mawRuVHqm8JdbOvnHLAcc5c+GlHZoUwdgbNs0wfsg3 mLRCGCRtqziO0X6CtH5Khc8+IWBbBeQ6liRuoYERLxnR6YLRV6tMS8he47ORY6MdzxG7 aFwsjl1jqKqH+UsJqiX5RZ+cSYh9s6rQ2KO5xwnjElA4KPCvWYjZtSJejK+mJSHhUPyu 22IKeFhrjx3aCpoz0so6CGNh7iOQGxKAJzZsAH3gna5xmyWJ+8LYnnBvNGnfGIeCdodi NPuQL+zXOTXJWpU4mWjXuBwz++Yb8OS39mOXcMjF8OyMS8lQn2htIrOLQos7SlAMmjc+ 5ZGQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=ZrqHdrLvGrETy2dSaKQym4tWVcFDSXunor28gito1nY=; b=Yi49VnWsM9TmxtC5qx+LAPE9dwdefaZz3gd7h2Kz6ysTb1Yf/Ozcc+FI5OSY5AMFq6 6x2DU4m6O/t0gut9V0pKdSO2aVW8ELV7bnse0wEXB5waXmVkEatEpu98WGMrqbZqWjA9 D2LBdn2mf1EZJM4+RT+II8ulDL5ZFLH9Qr19g7DUgKYqQn2QpgHCYBUzVx9xGpYvNiYx mzCxD16X29DKTs3sWyp0chMTM24WvXF6dXAK0WDJXogokC5WRM61+PfO2coksHz3w1e0 vxvbjqa5ThxW/LXW+6FRECD5GbUCnPpHotlmnNdH7rVDUDS6kHkU4Y/THLIXgroLMEHw QZ6w== X-Gm-Message-State: AOAM5301dB0YCnfwLXL3ljx9KtO5uwq2/SRGNkSRzb8HyCd3peQiFkwO MoE4pwttNbA/TxKhTe11mZEeM27xD+Y= X-Google-Smtp-Source: ABdhPJzShGU/hwfgPQXmCBs13aPhy1s5yU2euJqXMBkgrWuMYJ4YefleZEK8jP0gKHrduxLEyIl8Tw== X-Received: by 2002:a5d:618e:: with SMTP id 
j14mr5937061wru.377.1613174957963; Fri, 12 Feb 2021 16:09:17 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id u14sm13279326wro.10.2021.02.12.16.09.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Feb 2021 16:09:17 -0800 (PST) Message-Id: In-Reply-To: References: Date: Sat, 13 Feb 2021 00:09:04 +0000 Subject: [PATCH v3 03/12] pkt-line: (optionally) libify the packet readers Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Johannes Schindelin Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Johannes Schindelin From: Johannes Schindelin So far, the (possibly indirect) callers of `get_packet_data()` can ask that function to return an error instead of `die()`ing upon end-of-file. However, random read errors will still cause the process to die. So let's introduce an explicit option to tell the packet reader machinery to please be nice and only return an error. This change prepares pkt-line for use by long-running daemon processes. Such processes should be able to serve multiple concurrent clients and survive random IO errors. If there is an error on one connection, a daemon should be able to drop that connection and continue serving existing and future connections. This ability will be used by a Git-aware "Internal FSMonitor" feature in a later patch series. Signed-off-by: Johannes Schindelin --- pkt-line.c | 19 +++++++++++++++++-- pkt-line.h | 4 ++++ 2 files changed, 21 insertions(+), 2 deletions(-) diff --git a/pkt-line.c b/pkt-line.c index 3602b0d37092..83c46e6b46ee 100644 --- a/pkt-line.c +++ b/pkt-line.c @@ -304,8 +304,11 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size, *src_size -= ret; } else { ret = read_in_full(fd, dst, size); - if (ret < 0) + if (ret < 0) { + if (options & PACKET_READ_NEVER_DIE) + return error_errno(_("read error")); die_errno(_("read error")); + } } /* And complain if we didn't get enough bytes to satisfy the read.
*/ @@ -313,6 +316,8 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size, if (options & PACKET_READ_GENTLE_ON_EOF) return -1; + if (options & PACKET_READ_NEVER_DIE) + return error(_("the remote end hung up unexpectedly")); die(_("the remote end hung up unexpectedly")); } @@ -341,6 +346,9 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer, len = packet_length(linelen); if (len < 0) { + if (options & PACKET_READ_NEVER_DIE) + return error(_("protocol error: bad line length " + "character: %.4s"), linelen); die(_("protocol error: bad line length character: %.4s"), linelen); } else if (!len) { packet_trace("0000", 4, 0); @@ -355,12 +363,19 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer, *pktlen = 0; return PACKET_READ_RESPONSE_END; } else if (len < 4) { + if (options & PACKET_READ_NEVER_DIE) + return error(_("protocol error: bad line length %d"), + len); die(_("protocol error: bad line length %d"), len); } len -= 4; - if ((unsigned)len >= size) + if ((unsigned)len >= size) { + if (options & PACKET_READ_NEVER_DIE) + return error(_("protocol error: bad line length %d"), + len); die(_("protocol error: bad line length %d"), len); + } if (get_packet_data(fd, src_buffer, src_len, buffer, len, options) < 0) { *pktlen = -1; diff --git a/pkt-line.h b/pkt-line.h index a7149429ac35..2e472efaf2c5 100644 --- a/pkt-line.h +++ b/pkt-line.h @@ -75,10 +75,14 @@ int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_ou * * If options contains PACKET_READ_DIE_ON_ERR_PACKET, it dies when it sees an * ERR packet. + * + * With `PACKET_READ_NEVER_DIE`, no errors are allowed to trigger die() (except + * an ERR packet, when `PACKET_READ_DIE_ON_ERR_PACKET` is in effect). */ #define PACKET_READ_GENTLE_ON_EOF (1u<<0) #define PACKET_READ_CHOMP_NEWLINE (1u<<1) #define PACKET_READ_DIE_ON_ERR_PACKET (1u<<2) +#define PACKET_READ_NEVER_DIE (1u<<3) int packet_read(int fd, char **src_buffer, size_t *src_len, char *buffer, unsigned size, int options); From patchwork Sat Feb 13 00:09:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johannes Schindelin X-Patchwork-Id: 12086361 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AEAB4C433E6 for ; Sat, 13 Feb 2021 00:10:16 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6CED664D5D for ; Sat, 13 Feb 2021 00:10:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231256AbhBMAKP (ORCPT ); Fri, 12 Feb 2021 19:10:15 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49162 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229650AbhBMAKA (ORCPT ); Fri, 12 Feb 2021 19:10:00 -0500 Received: from mail-wr1-x433.google.com (mail-wr1-x433.google.com [IPv6:2a00:1450:4864:20::433]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 08772C061786 for ; Fri, 12 Feb 2021 16:09:19 
-0800 (PST) Received: by mail-wr1-x433.google.com with SMTP id n8so1316706wrm.10 for ; Fri, 12 Feb 2021 16:09:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=eKIITmcddPWeE1VBlgWVjIuBS5dmDXJSUzTRMH0h5jc=; b=lnJVM0a5BdC+Qi1SMa83Rdckn7NWG/hPBm6/vsLcDpm6Pnj6pN9c+1jiZYx7mH+9mv wIIMsoCw35YmhuiRD8a0Kf49m76xg70LqQ8lEkEX2/3zudBOo3gMzV/GshPh3ubkaH2I 6q2cXR1r6Vn2M/rt2KhDEh5ETuqDDj/sM/rfkVcZPAf2wnzx6ziISqj/wWTm1bkBDx8u A3M+y5wgAE61rtdmRtwjQghCiIcGVCAvxfOgU38T2LXTfCXrnTZAKqgFTjK7EASA3145 fHtnuBJBsx7+nfkm15Cps/8fjbncz4HJdqEVSvQjAqoaNGCXY9Bb4dVjVGz5hSyesHXT jaNw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=eKIITmcddPWeE1VBlgWVjIuBS5dmDXJSUzTRMH0h5jc=; b=N7uhmB3kh+K7wcV2198FwdLoEqkfC2an8pMLqSD0V2yc7npSin9RX6TyG4MyVu7+5p Nxl+ujg4b2ahFBv4bH9IzAdrx5m+eaHbGuaGOcz1sksFQl5Fzf6dWn15PfV6lUsdY4et y5bvL1ApPGZMspFwK6X3j4/0sEpGu0TaYSCBh4CPEF9c7koAp6ita4Skh0aBSjVp9pKB 5cEFMvHs781qElW3hKSABsRFBuFrRQ86gYwJRdQWYiY2MVGfu9eWDpZn2BSRmbnWhycb 4u968t0TfQAtW5UgPcEJJ1cRG3JUIdglj3SJ+q6m/QKu1pvSubkj38SguebVTGtBmVcD xcHw== X-Gm-Message-State: AOAM5323vynsNCqmceix5ZUtiTGengvnWuG9lda/lPE/WMALjNIBc9oH ZmgUFQZM/GSXb2d4H7hAbf7ogwlWG7s= X-Google-Smtp-Source: ABdhPJyWUc7fV/JimDM7ex0pEZc4in16d5vOGXBa7UwnIzn9mwMNlIHhSrHvHVeMc+pExIqm0SL/jw== X-Received: by 2002:a5d:61d0:: with SMTP id q16mr6161981wrv.387.1613174958578; Fri, 12 Feb 2021 16:09:18 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id l7sm12302185wrn.11.2021.02.12.16.09.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Feb 2021 16:09:18 -0800 (PST) Message-Id: <81e14bed955c6b50e155f6f73cb642d6c9f2fd73.1613174954.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Sat, 13 Feb 2021 00:09:05 +0000 Subject: [PATCH v3 04/12] pkt-line: add options argument to read_packetized_to_strbuf() Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Johannes Schindelin Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Johannes Schindelin From: Johannes Schindelin Update the calling sequence of `read_packetized_to_strbuf()` to take an options argument and not assume a fixed set of options. Update the only existing caller accordingly to explicitly pass the formerly-assumed flags. The `read_packetized_to_strbuf()` function calls `packet_read()` with a fixed set of assumed options (`PACKET_READ_GENTLE_ON_EOF`). This assumption has been fine for the single existing caller `apply_multi_file_filter()` in `convert.c`. In a later commit we would like to add other callers to `read_packetized_to_strbuf()` that need a different set of options. 
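As an illustration of where this is headed (a sketch only; `read_request()` is a hypothetical caller, not part of this series), a long-running service could then pass its own combination of flags, for example `PACKET_READ_NEVER_DIE` from the previous patch together with `PACKET_READ_GENTLE_ON_EOF`:

static int read_request(int fd, struct strbuf *payload)
{
	/* sketch: be gentle on EOF and never die on a broken connection */
	ssize_t nr = read_packetized_to_strbuf(fd, payload,
					       PACKET_READ_GENTLE_ON_EOF |
					       PACKET_READ_NEVER_DIE);

	return nr < 0 ? -1 : 0;
}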
Signed-off-by: Johannes Schindelin Signed-off-by: Jeff Hostetler --- convert.c | 3 ++- pkt-line.c | 4 ++-- pkt-line.h | 2 +- 3 files changed, 5 insertions(+), 4 deletions(-) diff --git a/convert.c b/convert.c index bccf7afa8797..9f44f00d841f 100644 --- a/convert.c +++ b/convert.c @@ -908,7 +908,8 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len if (err) goto done; - err = read_packetized_to_strbuf(process->out, &nbuf) < 0; + err = read_packetized_to_strbuf(process->out, &nbuf, + PACKET_READ_GENTLE_ON_EOF) < 0; if (err) goto done; diff --git a/pkt-line.c b/pkt-line.c index 83c46e6b46ee..18ecad65e08c 100644 --- a/pkt-line.c +++ b/pkt-line.c @@ -442,7 +442,7 @@ char *packet_read_line_buf(char **src, size_t *src_len, int *dst_len) return packet_read_line_generic(-1, src, src_len, dst_len); } -ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out) +ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options) { int packet_len; @@ -458,7 +458,7 @@ ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out) * that there is already room for the extra byte. */ sb_out->buf + sb_out->len, LARGE_PACKET_DATA_MAX+1, - PACKET_READ_GENTLE_ON_EOF); + options); if (packet_len <= 0) break; sb_out->len += packet_len; diff --git a/pkt-line.h b/pkt-line.h index 2e472efaf2c5..e347fe46832a 100644 --- a/pkt-line.h +++ b/pkt-line.h @@ -142,7 +142,7 @@ char *packet_read_line_buf(char **src_buf, size_t *src_len, int *size); /* * Reads a stream of variable sized packets until a flush packet is detected. */ -ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out); +ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options); /* * Receive multiplexed output stream over git native protocol. 
From patchwork Sat Feb 13 00:09:06 2021 X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12086363 Message-Id: <22eec60761a88107b2e337ce13eed1020352aa73.1613174954.git.gitgitgadget@gmail.com> Date: Sat, 13 Feb 2021 00:09:06 +0000 Subject: [PATCH v3
05/12] simple-ipc: design documentation for new IPC mechanism Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Brief design documentation for new IPC mechanism allowing a foreground Git client to talk with an existing daemon process at a known location using a named pipe or unix domain socket. Signed-off-by: Johannes Schindelin Signed-off-by: Jeff Hostetler --- Documentation/technical/api-simple-ipc.txt | 34 ++++++++++++++++++++++ 1 file changed, 34 insertions(+) create mode 100644 Documentation/technical/api-simple-ipc.txt diff --git a/Documentation/technical/api-simple-ipc.txt b/Documentation/technical/api-simple-ipc.txt new file mode 100644 index 000000000000..670a5c163e39 --- /dev/null +++ b/Documentation/technical/api-simple-ipc.txt @@ -0,0 +1,34 @@ +simple-ipc API +============== + +The simple-ipc API is used to send an IPC message and response between +a (presumably) foreground Git client process and a background server or +daemon process. The server process must already be running. Multiple +client processes can simultaneously communicate with the server +process. + +Communication occurs over a named pipe on Windows and a Unix domain +socket on other platforms. Clients and the server rendezvous at a +previously agreed-to application-specific pathname (which is outside +the scope of this design). + +This IPC mechanism differs from the existing `sub-process.c` model +(Documentation/technical/long-running-process-protocol.txt) used +by applications like Git-LFS. In the simple-ipc model the server is +assumed to be a very long-running system service. In contrast, in the +LFS-style sub-process model the helper is started with the foreground +process and exits when the foreground process terminates. + +How the simple-ipc server is started is also outside the scope of the +IPC mechanism. For example, the server might be started during +maintenance operations. + +The IPC protocol consists of a single request message from the client and +an optional response message from the server. For simplicity, pkt-line +routines are used to hide chunking and buffering concerns. Each side +terminates its message with a flush packet. +(Documentation/technical/protocol-common.txt) + +The actual format of the client and server messages is application +specific. The IPC layer transmits and receives an opaque buffer without +any concern for the content within.
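As a concrete sketch of the framing described above (illustrative only; it assumes an already-connected descriptor `fd` and uses the pkt-line helpers from the earlier patches), one client round trip looks roughly like this:

static int ipc_round_trip(int fd, const char *request, struct strbuf *reply)
{
	/* send the opaque request, then terminate it with a flush packet */
	if (write_packetized_from_buf_no_flush(request, strlen(request), fd) < 0 ||
	    packet_flush_gently(fd) < 0)
		return -1;

	/* read the (optional) response up to the server's flush packet */
	if (read_packetized_to_strbuf(fd, reply,
				      PACKET_READ_GENTLE_ON_EOF |
				      PACKET_READ_NEVER_DIE) < 0)
		return -1;

	return 0;
}

The Windows implementation in the next patch follows this same shape in `ipc_client_send_command_to_connection()`.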
From patchwork Sat Feb 13 00:09:07 2021 X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12086365 Message-Id: <171ec43ecfa45054afc378aca00c13282e438c47.1613174954.git.gitgitgadget@gmail.com> Date: Sat, 13 Feb 2021 00:09:07 +0000 Subject: [PATCH v3 06/12] simple-ipc:
add win32 implementation Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Create Windows implementation of "simple-ipc" using named pipes. Signed-off-by: Jeff Hostetler --- Makefile | 5 + compat/simple-ipc/ipc-shared.c | 28 ++ compat/simple-ipc/ipc-win32.c | 749 ++++++++++++++++++++++++++++ config.mak.uname | 2 + contrib/buildsystems/CMakeLists.txt | 4 + simple-ipc.h | 224 +++++++++ 6 files changed, 1012 insertions(+) create mode 100644 compat/simple-ipc/ipc-shared.c create mode 100644 compat/simple-ipc/ipc-win32.c create mode 100644 simple-ipc.h diff --git a/Makefile b/Makefile index 4128b457e14b..40d5cab78d3f 100644 --- a/Makefile +++ b/Makefile @@ -1679,6 +1679,11 @@ else LIB_OBJS += unix-socket.o endif +ifdef USE_WIN32_IPC + LIB_OBJS += compat/simple-ipc/ipc-shared.o + LIB_OBJS += compat/simple-ipc/ipc-win32.o +endif + ifdef NO_ICONV BASIC_CFLAGS += -DNO_ICONV endif diff --git a/compat/simple-ipc/ipc-shared.c b/compat/simple-ipc/ipc-shared.c new file mode 100644 index 000000000000..1edec8159532 --- /dev/null +++ b/compat/simple-ipc/ipc-shared.c @@ -0,0 +1,28 @@ +#include "cache.h" +#include "simple-ipc.h" +#include "strbuf.h" +#include "pkt-line.h" +#include "thread-utils.h" + +#ifdef SUPPORTS_SIMPLE_IPC + +int ipc_server_run(const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data) +{ + struct ipc_server_data *server_data = NULL; + int ret; + + ret = ipc_server_run_async(&server_data, path, opts, + application_cb, application_data); + if (ret) + return ret; + + ret = ipc_server_await(server_data); + + ipc_server_free(server_data); + + return ret; +} + +#endif /* SUPPORTS_SIMPLE_IPC */ diff --git a/compat/simple-ipc/ipc-win32.c b/compat/simple-ipc/ipc-win32.c new file mode 100644 index 000000000000..f0cfbf9d15c3 --- /dev/null +++ b/compat/simple-ipc/ipc-win32.c @@ -0,0 +1,749 @@ +#include "cache.h" +#include "simple-ipc.h" +#include "strbuf.h" +#include "pkt-line.h" +#include "thread-utils.h" + +#ifndef GIT_WINDOWS_NATIVE +#error This file can only be compiled on Windows +#endif + +static int initialize_pipe_name(const char *path, wchar_t *wpath, size_t alloc) +{ + int off = 0; + struct strbuf realpath = STRBUF_INIT; + + if (!strbuf_realpath(&realpath, path, 0)) + return -1; + + off = swprintf(wpath, alloc, L"\\\\.\\pipe\\"); + if (xutftowcs(wpath + off, realpath.buf, alloc - off) < 0) + return -1; + + /* Handle drive prefix */ + if (wpath[off] && wpath[off + 1] == L':') { + wpath[off + 1] = L'_'; + off += 2; + } + + for (; wpath[off]; off++) + if (wpath[off] == L'/') + wpath[off] = L'\\'; + + strbuf_release(&realpath); + return 0; +} + +static enum ipc_active_state get_active_state(wchar_t *pipe_path) +{ + if (WaitNamedPipeW(pipe_path, NMPWAIT_USE_DEFAULT_WAIT)) + return IPC_STATE__LISTENING; + + if (GetLastError() == ERROR_SEM_TIMEOUT) + return IPC_STATE__NOT_LISTENING; + + if (GetLastError() == ERROR_FILE_NOT_FOUND) + return IPC_STATE__PATH_NOT_FOUND; + + return IPC_STATE__OTHER_ERROR; +} + +enum ipc_active_state ipc_get_active_state(const char *path) +{ + wchar_t pipe_path[MAX_PATH]; + + if (initialize_pipe_name(path, pipe_path, ARRAY_SIZE(pipe_path)) < 0) + return IPC_STATE__INVALID_PATH; + + return get_active_state(pipe_path); +} + +#define WAIT_STEP_MS (50) + +static enum ipc_active_state 
connect_to_server( + const wchar_t *wpath, + DWORD timeout_ms, + const struct ipc_client_connect_options *options, + int *pfd) +{ + DWORD t_start_ms, t_waited_ms; + DWORD step_ms; + HANDLE hPipe = INVALID_HANDLE_VALUE; + DWORD mode = PIPE_READMODE_BYTE; + DWORD gle; + + *pfd = -1; + + for (;;) { + hPipe = CreateFileW(wpath, GENERIC_READ | GENERIC_WRITE, + 0, NULL, OPEN_EXISTING, 0, NULL); + if (hPipe != INVALID_HANDLE_VALUE) + break; + + gle = GetLastError(); + + switch (gle) { + case ERROR_FILE_NOT_FOUND: + if (!options->wait_if_not_found) + return IPC_STATE__PATH_NOT_FOUND; + if (!timeout_ms) + return IPC_STATE__PATH_NOT_FOUND; + + step_ms = (timeout_ms < WAIT_STEP_MS) ? + timeout_ms : WAIT_STEP_MS; + sleep_millisec(step_ms); + + timeout_ms -= step_ms; + break; /* try again */ + + case ERROR_PIPE_BUSY: + if (!options->wait_if_busy) + return IPC_STATE__NOT_LISTENING; + if (!timeout_ms) + return IPC_STATE__NOT_LISTENING; + + t_start_ms = (DWORD)(getnanotime() / 1000000); + + if (!WaitNamedPipeW(wpath, timeout_ms)) { + if (GetLastError() == ERROR_SEM_TIMEOUT) + return IPC_STATE__NOT_LISTENING; + + return IPC_STATE__OTHER_ERROR; + } + + /* + * A pipe server instance became available. + * Race other client processes to connect to + * it. + * + * But first decrement our overall timeout so + * that we don't starve if we keep losing the + * race. But also guard against special + * NPMWAIT_ values (0 and -1). + */ + t_waited_ms = (DWORD)(getnanotime() / 1000000) - t_start_ms; + if (t_waited_ms < timeout_ms) + timeout_ms -= t_waited_ms; + else + timeout_ms = 1; + break; /* try again */ + + default: + return IPC_STATE__OTHER_ERROR; + } + } + + if (!SetNamedPipeHandleState(hPipe, &mode, NULL, NULL)) { + CloseHandle(hPipe); + return IPC_STATE__OTHER_ERROR; + } + + *pfd = _open_osfhandle((intptr_t)hPipe, O_RDWR|O_BINARY); + if (*pfd < 0) { + CloseHandle(hPipe); + return IPC_STATE__OTHER_ERROR; + } + + /* fd now owns hPipe */ + + return IPC_STATE__LISTENING; +} + +/* + * The default connection timeout for Windows clients. + * + * This is not currently part of the ipc_ API (nor the config settings) + * because of differences between Windows and other platforms. + * + * This value was chosen at random. 
+ */ +#define WINDOWS_CONNECTION_TIMEOUT_MS (30000) + +enum ipc_active_state ipc_client_try_connect( + const char *path, + const struct ipc_client_connect_options *options, + struct ipc_client_connection **p_connection) +{ + wchar_t wpath[MAX_PATH]; + enum ipc_active_state state = IPC_STATE__OTHER_ERROR; + int fd = -1; + + *p_connection = NULL; + + trace2_region_enter("ipc-client", "try-connect", NULL); + trace2_data_string("ipc-client", NULL, "try-connect/path", path); + + if (initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath)) < 0) + state = IPC_STATE__INVALID_PATH; + else + state = connect_to_server(wpath, WINDOWS_CONNECTION_TIMEOUT_MS, + options, &fd); + + trace2_data_intmax("ipc-client", NULL, "try-connect/state", + (intmax_t)state); + trace2_region_leave("ipc-client", "try-connect", NULL); + + if (state == IPC_STATE__LISTENING) { + (*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection)); + (*p_connection)->fd = fd; + } + + return state; +} + +void ipc_client_close_connection(struct ipc_client_connection *connection) +{ + if (!connection) + return; + + if (connection->fd != -1) + close(connection->fd); + + free(connection); +} + +int ipc_client_send_command_to_connection( + struct ipc_client_connection *connection, + const char *message, struct strbuf *answer) +{ + int ret = 0; + + strbuf_setlen(answer, 0); + + trace2_region_enter("ipc-client", "send-command", NULL); + + if (write_packetized_from_buf_no_flush(message, strlen(message), + connection->fd) < 0 || + packet_flush_gently(connection->fd) < 0) { + ret = error(_("could not send IPC command")); + goto done; + } + + FlushFileBuffers((HANDLE)_get_osfhandle(connection->fd)); + + if (read_packetized_to_strbuf( + connection->fd, answer, + PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE) < 0) { + ret = error(_("could not read IPC response")); + goto done; + } + +done: + trace2_region_leave("ipc-client", "send-command", NULL); + return ret; +} + +int ipc_client_send_command(const char *path, + const struct ipc_client_connect_options *options, + const char *message, struct strbuf *response) +{ + int ret = -1; + enum ipc_active_state state; + struct ipc_client_connection *connection = NULL; + + state = ipc_client_try_connect(path, options, &connection); + + if (state != IPC_STATE__LISTENING) + return ret; + + ret = ipc_client_send_command_to_connection(connection, message, response); + + ipc_client_close_connection(connection); + + return ret; +} + +/* + * Duplicate the given pipe handle and wrap it in a file descriptor so + * that we can use pkt-line on it. + */ +static int dup_fd_from_pipe(const HANDLE pipe) +{ + HANDLE process = GetCurrentProcess(); + HANDLE handle; + int fd; + + if (!DuplicateHandle(process, pipe, process, &handle, 0, FALSE, + DUPLICATE_SAME_ACCESS)) { + errno = err_win_to_posix(GetLastError()); + return -1; + } + + fd = _open_osfhandle((intptr_t)handle, O_RDWR|O_BINARY); + if (fd < 0) { + errno = err_win_to_posix(GetLastError()); + CloseHandle(handle); + return -1; + } + + /* + * `handle` is now owned by `fd` and will be automatically closed + * when the descriptor is closed. + */ + + return fd; +} + +/* + * Magic numbers used to annotate callback instance data. + * These are used to help guard against accidentally passing the + * wrong instance data across multiple levels of callbacks (which + * is easy to do if there are `void*` arguments). 
+ */ +enum magic { + MAGIC_SERVER_REPLY_DATA, + MAGIC_SERVER_THREAD_DATA, + MAGIC_SERVER_DATA, +}; + +struct ipc_server_reply_data { + enum magic magic; + int fd; + struct ipc_server_thread_data *server_thread_data; +}; + +struct ipc_server_thread_data { + enum magic magic; + struct ipc_server_thread_data *next_thread; + struct ipc_server_data *server_data; + pthread_t pthread_id; + HANDLE hPipe; +}; + +/* + * On Windows, the conceptual "ipc-server" is implemented as a pool of + * n idential/peer "server-thread" threads. That is, there is no + * hierarchy of threads; and therefore no controller thread managing + * the pool. Each thread has an independent handle to the named pipe, + * receives incoming connections, processes the client, and re-uses + * the pipe for the next client connection. + * + * Therefore, the "ipc-server" only needs to maintain a list of the + * spawned threads for eventual "join" purposes. + * + * A single "stop-event" is visible to all of the server threads to + * tell them to shutdown (when idle). + */ +struct ipc_server_data { + enum magic magic; + ipc_server_application_cb *application_cb; + void *application_data; + struct strbuf buf_path; + wchar_t wpath[MAX_PATH]; + + HANDLE hEventStopRequested; + struct ipc_server_thread_data *thread_list; + int is_stopped; +}; + +enum connect_result { + CR_CONNECTED = 0, + CR_CONNECT_PENDING, + CR_CONNECT_ERROR, + CR_WAIT_ERROR, + CR_SHUTDOWN, +}; + +static enum connect_result queue_overlapped_connect( + struct ipc_server_thread_data *server_thread_data, + OVERLAPPED *lpo) +{ + if (ConnectNamedPipe(server_thread_data->hPipe, lpo)) + goto failed; + + switch (GetLastError()) { + case ERROR_IO_PENDING: + return CR_CONNECT_PENDING; + + case ERROR_PIPE_CONNECTED: + SetEvent(lpo->hEvent); + return CR_CONNECTED; + + default: + break; + } + +failed: + error(_("ConnectNamedPipe failed for '%s' (%lu)"), + server_thread_data->server_data->buf_path.buf, + GetLastError()); + return CR_CONNECT_ERROR; +} + +/* + * Use Windows Overlapped IO to wait for a connection or for our event + * to be signalled. + */ +static enum connect_result wait_for_connection( + struct ipc_server_thread_data *server_thread_data, + OVERLAPPED *lpo) +{ + enum connect_result r; + HANDLE waitHandles[2]; + DWORD dwWaitResult; + + r = queue_overlapped_connect(server_thread_data, lpo); + if (r != CR_CONNECT_PENDING) + return r; + + waitHandles[0] = server_thread_data->server_data->hEventStopRequested; + waitHandles[1] = lpo->hEvent; + + dwWaitResult = WaitForMultipleObjects(2, waitHandles, FALSE, INFINITE); + switch (dwWaitResult) { + case WAIT_OBJECT_0 + 0: + return CR_SHUTDOWN; + + case WAIT_OBJECT_0 + 1: + ResetEvent(lpo->hEvent); + return CR_CONNECTED; + + default: + return CR_WAIT_ERROR; + } +} + +/* + * Forward declare our reply callback function so that any compiler + * errors are reported when we actually define the function (in addition + * to any errors reported when we try to pass this callback function as + * a parameter in a function call). The former are easier to understand. + */ +static ipc_server_reply_cb do_io_reply_callback; + +/* + * Relay application's response message to the client process. + * (We do not flush at this point because we allow the caller + * to chunk data to the client thru us.) 
+ */ +static int do_io_reply_callback(struct ipc_server_reply_data *reply_data, + const char *response, size_t response_len) +{ + if (reply_data->magic != MAGIC_SERVER_REPLY_DATA) + BUG("reply_cb called with wrong instance data"); + + return write_packetized_from_buf_no_flush(response, response_len, + reply_data->fd); +} + +/* + * Receive the request/command from the client and pass it to the + * registered request-callback. The request-callback will compose + * a response and call our reply-callback to send it to the client. + * + * Simple-IPC only contains one round trip, so we flush and close + * here after the response. + */ +static int do_io(struct ipc_server_thread_data *server_thread_data) +{ + struct strbuf buf = STRBUF_INIT; + struct ipc_server_reply_data reply_data; + int ret = 0; + + reply_data.magic = MAGIC_SERVER_REPLY_DATA; + reply_data.server_thread_data = server_thread_data; + + reply_data.fd = dup_fd_from_pipe(server_thread_data->hPipe); + if (reply_data.fd < 0) + return error(_("could not create fd from pipe for '%s'"), + server_thread_data->server_data->buf_path.buf); + + ret = read_packetized_to_strbuf( + reply_data.fd, &buf, + PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE); + if (ret >= 0) { + ret = server_thread_data->server_data->application_cb( + server_thread_data->server_data->application_data, + buf.buf, do_io_reply_callback, &reply_data); + + packet_flush_gently(reply_data.fd); + + FlushFileBuffers((HANDLE)_get_osfhandle((reply_data.fd))); + } + else { + /* + * The client probably disconnected/shutdown before it + * could send a well-formed message. Ignore it. + */ + } + + strbuf_release(&buf); + close(reply_data.fd); + + return ret; +} + +/* + * Handle IPC request and response with this connected client. And reset + * the pipe to prepare for the next client. + */ +static int use_connection(struct ipc_server_thread_data *server_thread_data) +{ + int ret; + + ret = do_io(server_thread_data); + + FlushFileBuffers(server_thread_data->hPipe); + DisconnectNamedPipe(server_thread_data->hPipe); + + return ret; +} + +/* + * Thread proc for an IPC server worker thread. It handles a series of + * connections from clients. It cleans and reuses the hPipe between each + * client. + */ +static void *server_thread_proc(void *_server_thread_data) +{ + struct ipc_server_thread_data *server_thread_data = _server_thread_data; + HANDLE hEventConnected = INVALID_HANDLE_VALUE; + OVERLAPPED oConnect; + enum connect_result cr; + int ret; + + assert(server_thread_data->hPipe != INVALID_HANDLE_VALUE); + + trace2_thread_start("ipc-server"); + trace2_data_string("ipc-server", NULL, "pipe", + server_thread_data->server_data->buf_path.buf); + + hEventConnected = CreateEventW(NULL, TRUE, FALSE, NULL); + + memset(&oConnect, 0, sizeof(oConnect)); + oConnect.hEvent = hEventConnected; + + for (;;) { + cr = wait_for_connection(server_thread_data, &oConnect); + + switch (cr) { + case CR_SHUTDOWN: + goto finished; + + case CR_CONNECTED: + ret = use_connection(server_thread_data); + if (ret == SIMPLE_IPC_QUIT) { + ipc_server_stop_async( + server_thread_data->server_data); + goto finished; + } + if (ret > 0) { + /* + * Ignore (transient) IO errors with this + * client and reset for the next client. + */ + } + break; + + case CR_CONNECT_PENDING: + /* By construction, this should not happen. */ + BUG("ipc-server[%s]: unexpeced CR_CONNECT_PENDING", + server_thread_data->server_data->buf_path.buf); + + case CR_CONNECT_ERROR: + case CR_WAIT_ERROR: + /* + * Ignore these theoretical errors. 
+ */ + DisconnectNamedPipe(server_thread_data->hPipe); + break; + + default: + BUG("unandled case after wait_for_connection"); + } + } + +finished: + CloseHandle(server_thread_data->hPipe); + CloseHandle(hEventConnected); + + trace2_thread_exit(); + return NULL; +} + +static HANDLE create_new_pipe(wchar_t *wpath, int is_first) +{ + HANDLE hPipe; + DWORD dwOpenMode, dwPipeMode; + LPSECURITY_ATTRIBUTES lpsa = NULL; + + dwOpenMode = PIPE_ACCESS_INBOUND | PIPE_ACCESS_OUTBOUND | + FILE_FLAG_OVERLAPPED; + + dwPipeMode = PIPE_TYPE_MESSAGE | PIPE_READMODE_BYTE | PIPE_WAIT | + PIPE_REJECT_REMOTE_CLIENTS; + + if (is_first) { + dwOpenMode |= FILE_FLAG_FIRST_PIPE_INSTANCE; + + /* + * On Windows, the first server pipe instance gets to + * set the ACL / Security Attributes on the named + * pipe; subsequent instances inherit and cannot + * change them. + * + * TODO Should we allow the application layer to + * specify security attributes, such as `LocalService` + * or `LocalSystem`, when we create the named pipe? + * This question is probably not important when the + * daemon is started by a foreground user process and + * only needs to talk to the current user, but may be + * if the daemon is run via the Control Panel as a + * System Service. + */ + } + + hPipe = CreateNamedPipeW(wpath, dwOpenMode, dwPipeMode, + PIPE_UNLIMITED_INSTANCES, 1024, 1024, 0, lpsa); + + return hPipe; +} + +int ipc_server_run_async(struct ipc_server_data **returned_server_data, + const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data) +{ + struct ipc_server_data *server_data; + wchar_t wpath[MAX_PATH]; + HANDLE hPipeFirst = INVALID_HANDLE_VALUE; + int k; + int ret = 0; + int nr_threads = opts->nr_threads; + + *returned_server_data = NULL; + + ret = initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath)); + if (ret < 0) + return error( + _("could not create normalized wchar_t path for '%s'"), + path); + + hPipeFirst = create_new_pipe(wpath, 1); + if (hPipeFirst == INVALID_HANDLE_VALUE) + return error(_("IPC server already running on '%s'"), path); + + server_data = xcalloc(1, sizeof(*server_data)); + server_data->magic = MAGIC_SERVER_DATA; + server_data->application_cb = application_cb; + server_data->application_data = application_data; + server_data->hEventStopRequested = CreateEvent(NULL, TRUE, FALSE, NULL); + strbuf_init(&server_data->buf_path, 0); + strbuf_addstr(&server_data->buf_path, path); + wcscpy(server_data->wpath, wpath); + + if (nr_threads < 1) + nr_threads = 1; + + for (k = 0; k < nr_threads; k++) { + struct ipc_server_thread_data *std; + + std = xcalloc(1, sizeof(*std)); + std->magic = MAGIC_SERVER_THREAD_DATA; + std->server_data = server_data; + std->hPipe = INVALID_HANDLE_VALUE; + + std->hPipe = (k == 0) + ? hPipeFirst + : create_new_pipe(server_data->wpath, 0); + + if (std->hPipe == INVALID_HANDLE_VALUE) { + /* + * If we've reached a pipe instance limit for + * this path, just use fewer threads. + */ + free(std); + break; + } + + if (pthread_create(&std->pthread_id, NULL, + server_thread_proc, std)) { + /* + * Likewise, if we're out of threads, just use + * fewer threads than requested. + * + * However, we just give up if we can't even get + * one thread. This should not happen. 
+ */ + if (k == 0) + die(_("could not start thread[0] for '%s'"), + path); + + CloseHandle(std->hPipe); + free(std); + break; + } + + std->next_thread = server_data->thread_list; + server_data->thread_list = std; + } + + *returned_server_data = server_data; + return 0; +} + +int ipc_server_stop_async(struct ipc_server_data *server_data) +{ + if (!server_data) + return 0; + + /* + * Gently tell all of the ipc_server threads to shutdown. + * This will be seen the next time they are idle (and waiting + * for a connection). + * + * We DO NOT attempt to force them to drop an active connection. + */ + SetEvent(server_data->hEventStopRequested); + return 0; +} + +int ipc_server_await(struct ipc_server_data *server_data) +{ + DWORD dwWaitResult; + + if (!server_data) + return 0; + + dwWaitResult = WaitForSingleObject(server_data->hEventStopRequested, INFINITE); + if (dwWaitResult != WAIT_OBJECT_0) + return error(_("wait for hEvent failed for '%s'"), + server_data->buf_path.buf); + + while (server_data->thread_list) { + struct ipc_server_thread_data *std = server_data->thread_list; + + pthread_join(std->pthread_id, NULL); + + server_data->thread_list = std->next_thread; + free(std); + } + + server_data->is_stopped = 1; + + return 0; +} + +void ipc_server_free(struct ipc_server_data *server_data) +{ + if (!server_data) + return; + + if (!server_data->is_stopped) + BUG("cannot free ipc-server while running for '%s'", + server_data->buf_path.buf); + + strbuf_release(&server_data->buf_path); + + if (server_data->hEventStopRequested != INVALID_HANDLE_VALUE) + CloseHandle(server_data->hEventStopRequested); + + while (server_data->thread_list) { + struct ipc_server_thread_data *std = server_data->thread_list; + + server_data->thread_list = std->next_thread; + free(std); + } + + free(server_data); +} diff --git a/config.mak.uname b/config.mak.uname index 198ab1e58f83..76087cff6789 100644 --- a/config.mak.uname +++ b/config.mak.uname @@ -421,6 +421,7 @@ ifeq ($(uname_S),Windows) RUNTIME_PREFIX = YesPlease HAVE_WPGMPTR = YesWeDo NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease + USE_WIN32_IPC = YesPlease USE_WIN32_MMAP = YesPlease MMAP_PREVENTS_DELETE = UnfortunatelyYes # USE_NED_ALLOCATOR = YesPlease @@ -597,6 +598,7 @@ ifneq (,$(findstring MINGW,$(uname_S))) RUNTIME_PREFIX = YesPlease HAVE_WPGMPTR = YesWeDo NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease + USE_WIN32_IPC = YesPlease USE_WIN32_MMAP = YesPlease MMAP_PREVENTS_DELETE = UnfortunatelyYes USE_NED_ALLOCATOR = YesPlease diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt index c151dd7257f3..4bd41054ee70 100644 --- a/contrib/buildsystems/CMakeLists.txt +++ b/contrib/buildsystems/CMakeLists.txt @@ -246,6 +246,10 @@ elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux") list(APPEND compat_SOURCES unix-socket.c) endif() +if(CMAKE_SYSTEM_NAME STREQUAL "Windows") + list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c) +endif() + set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX}) #header checks diff --git a/simple-ipc.h b/simple-ipc.h new file mode 100644 index 000000000000..a3f96b42cca2 --- /dev/null +++ b/simple-ipc.h @@ -0,0 +1,224 @@ +#ifndef GIT_SIMPLE_IPC_H +#define GIT_SIMPLE_IPC_H + +/* + * See Documentation/technical/api-simple-ipc.txt + */ + +#if defined(GIT_WINDOWS_NATIVE) +#define SUPPORTS_SIMPLE_IPC +#endif + +#ifdef SUPPORTS_SIMPLE_IPC +#include "pkt-line.h" + +/* + * Simple IPC Client Side API. + */ + +enum ipc_active_state { + /* + * The pipe/socket exists and the daemon is waiting for connections. 
+ */ + IPC_STATE__LISTENING = 0, + + /* + * The pipe/socket exists, but the daemon is not listening. + * Perhaps it is very busy. + * Perhaps the daemon died without deleting the path. + * Perhaps it is shutting down and draining existing clients. + * Perhaps it is dead, but other clients are lingering and + * still holding a reference to the pathname. + */ + IPC_STATE__NOT_LISTENING, + + /* + * The requested pathname is bogus and no amount of retries + * will fix that. + */ + IPC_STATE__INVALID_PATH, + + /* + * The requested pathname is not found. This usually means + * that there is no daemon present. + */ + IPC_STATE__PATH_NOT_FOUND, + + IPC_STATE__OTHER_ERROR, +}; + +struct ipc_client_connect_options { + /* + * Spin under timeout if the server is running but can't + * accept our connection yet. This should always be set + * unless you just want to poke the server and see if it + * is alive. + */ + unsigned int wait_if_busy:1; + + /* + * Spin under timeout if the pipe/socket is not yet present + * on the file system. This is useful if we just started + * the service and need to wait for it to become ready. + */ + unsigned int wait_if_not_found:1; +}; + +#define IPC_CLIENT_CONNECT_OPTIONS_INIT { \ + .wait_if_busy = 0, \ + .wait_if_not_found = 0, \ +} + +/* + * Determine if a server is listening on this named pipe or socket using + * platform-specific logic. This might just probe the filesystem or it + * might make a trivial connection to the server using this pathname. + */ +enum ipc_active_state ipc_get_active_state(const char *path); + +struct ipc_client_connection { + int fd; +}; + +/* + * Try to connect to the daemon on the named pipe or socket. + * + * Returns IPC_STATE__LISTENING and a connection handle. + * + * Otherwise, returns info to help decide whether to retry or to + * spawn/respawn the server. + */ +enum ipc_active_state ipc_client_try_connect( + const char *path, + const struct ipc_client_connect_options *options, + struct ipc_client_connection **p_connection); + +void ipc_client_close_connection(struct ipc_client_connection *connection); + +/* + * Used by the client to synchronously send and receive a message with + * the server on the provided client connection. + * + * Returns 0 when successful. + * + * Calls error() and returns non-zero otherwise. + */ +int ipc_client_send_command_to_connection( + struct ipc_client_connection *connection, + const char *message, struct strbuf *answer); + +/* + * Used by the client to synchronously connect and send and receive a + * message to the server listening at the given path. + * + * Returns 0 when successful. + * + * Calls error() and returns non-zero otherwise. + */ +int ipc_client_send_command(const char *path, + const struct ipc_client_connect_options *options, + const char *message, struct strbuf *answer); + +/* + * Simple IPC Server Side API. + */ + +struct ipc_server_reply_data; + +typedef int (ipc_server_reply_cb)(struct ipc_server_reply_data *, + const char *response, + size_t response_len); + +/* + * Prototype for an application-supplied callback to process incoming + * client IPC messages and compose a reply. The `application_cb` should + * use the provided `reply_cb` and `reply_data` to send an IPC response + * back to the client. The `reply_cb` callback can be called multiple + * times for chunking purposes. A reply message is optional and may be + * omitted if not necessary for the application. + * + * The return value from the application callback is ignored. 
+ * The value `SIMPLE_IPC_QUIT` can be used to shutdown the server. + */ +typedef int (ipc_server_application_cb)(void *application_data, + const char *request, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data); + +#define SIMPLE_IPC_QUIT -2 + +/* + * Opaque instance data to represent an IPC server instance. + */ +struct ipc_server_data; + +/* + * Control parameters for the IPC server instance. + * Use this to hide platform-specific settings. + */ +struct ipc_server_opts +{ + int nr_threads; +}; + +/* + * Start an IPC server instance in one or more background threads + * and return a handle to the pool. + * + * Returns 0 if the asynchronous server pool was started successfully. + * Returns -1 if not. + * + * When a client IPC message is received, the `application_cb` will be + * called (possibly on a random thread) to handle the message and + * optionally compose a reply message. + */ +int ipc_server_run_async(struct ipc_server_data **returned_server_data, + const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data); + +/* + * Gently signal the IPC server pool to shutdown. No new client + * connections will be accepted, but existing connections will be + * allowed to complete. + */ +int ipc_server_stop_async(struct ipc_server_data *server_data); + +/* + * Block the calling thread until all threads in the IPC server pool + * have completed and been joined. + */ +int ipc_server_await(struct ipc_server_data *server_data); + +/* + * Close and free all resource handles associated with the IPC server + * pool. + */ +void ipc_server_free(struct ipc_server_data *server_data); + +/* + * Run an IPC server instance and block the calling thread of the + * current process. It does not return until the IPC server has + * either shutdown or had an unrecoverable error. + * + * The IPC server handles incoming IPC messages from client processes + * and may use one or more background threads as necessary. + * + * Returns 0 after the server has completed successfully. + * Returns -1 if the server cannot be started. + * + * When a client IPC message is received, the `application_cb` will be + * called (possibly on a random thread) to handle the message and + * optionally compose a reply message. + * + * Note that `ipc_server_run()` is a synchronous wrapper around the + * above asynchronous routines. It effectively hides all of the + * server state and thread details from the caller and presents a + * simple synchronous interface. 
+ */ +int ipc_server_run(const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data); + +#endif /* SUPPORTS_SIMPLE_IPC */ +#endif /* GIT_SIMPLE_IPC_H */ From patchwork Sat Feb 13 00:09:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12086369 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E62C2C433DB for ; Sat, 13 Feb 2021 00:10:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C1F2964E9A for ; Sat, 13 Feb 2021 00:10:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231532AbhBMAKs (ORCPT ); Fri, 12 Feb 2021 19:10:48 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49310 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231317AbhBMAKl (ORCPT ); Fri, 12 Feb 2021 19:10:41 -0500 Received: from mail-wm1-x331.google.com (mail-wm1-x331.google.com [IPv6:2a00:1450:4864:20::331]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D8F39C06178C for ; Fri, 12 Feb 2021 16:09:21 -0800 (PST) Received: by mail-wm1-x331.google.com with SMTP id n10so1392661wmq.0 for ; Fri, 12 Feb 2021 16:09:21 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=+bNP+0W/NmpTzg2fcka3X+P2W1SFHzm3DqEPSuLjvIc=; b=pLWcy94w9J4o818ymQpDkutL/w0jO3O/XtnO/MXGwZpZVWxegM/PLvOuJoujQ+nABJ +7ae8OlWcnpmlja3v36+cjMkIOYOuWYfmo0DNTZsXDkHQ7Vep7UZydHCOOxMIzyjA1At ovat8SIyLdGerKYEmQzoX53ANeVGeXkSBvoqIr4ParG43emiflCXvtLY2R6QGQP8rbO6 4zGMmivs8CVHo0aU581tiM1PsS5iCR4DvkvBxPtMGKpG4o/ssnNFfwcAVk7O9rXOUXG6 lzcZ3PBdHG6H3UJZlU8Lg8of6kqQGk16P7dNulLbvcbEKCHhXd+8jGU9d7rNPsrIzeIs i6QQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=+bNP+0W/NmpTzg2fcka3X+P2W1SFHzm3DqEPSuLjvIc=; b=XzGHv+WBD0UQYuZG8dVni/v9B8+Ue0OTi708m27XTSSAkEtDIloLvVrMX6r0WnPuqU yAUH/EaHtU6wLkQrcr1ijREaFmH5gXR8QMs7A+bAlqsiq2Fki1uoVg9Jzku+sMlkBKhO qN9zA+ouARI1z82/EPopQu997MaOYoG9Hg9M2OaarSkjfwnLC0F3fUVqC0VjWEKTECOU P5b6qf0sm+Hu6kj2aWKuIdgtbsllXrgT1hCMJEbZwMTz3t4VRt+PLd74i6rd/x3wwXPS 8PHy3RjJ+mE4y/2FcxG+AJ3pr15l3YQRNV4r1b6/Ysh2wYdwPfFyrpuKBA1hnAnp9t+U D5Cg== X-Gm-Message-State: AOAM533kIKT6T9l+PCup2srG2Xi9+YMENdck5Ln4R/kHadHQD7RkwjGm nkyBuhtB8Q01/NSeA/E7Dba3CvdK+X4= X-Google-Smtp-Source: ABdhPJyVgfqj2+gvAb7d/aLZGnMO42uL9ZB/reaVEeK8/TUE/5klbk6WW/DG4rJcPbD90BWyDyEyhg== X-Received: by 2002:a1c:dcd7:: with SMTP id t206mr4508275wmg.108.1613174960708; Fri, 12 Feb 2021 16:09:20 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id b2sm12842322wrv.73.2021.02.12.16.09.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Feb 
2021 16:09:20 -0800 (PST) Message-Id: In-Reply-To: References: Date: Sat, 13 Feb 2021 00:09:08 +0000 Subject: [PATCH v3 07/12] unix-socket: elimiate static unix_stream_socket() helper function Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler The static helper function `unix_stream_socket()` calls `die()`. This is not appropriate for all callers. Eliminate the wrapper function and make the callers propagate the error. Signed-off-by: Jeff Hostetler --- unix-socket.c | 27 +++++++++++++-------------- 1 file changed, 13 insertions(+), 14 deletions(-) diff --git a/unix-socket.c b/unix-socket.c index 19ed48be9902..69f81d64e9d5 100644 --- a/unix-socket.c +++ b/unix-socket.c @@ -1,14 +1,6 @@ #include "cache.h" #include "unix-socket.h" -static int unix_stream_socket(void) -{ - int fd = socket(AF_UNIX, SOCK_STREAM, 0); - if (fd < 0) - die_errno("unable to create socket"); - return fd; -} - static int chdir_len(const char *orig, int len) { char *path = xmemdupz(orig, len); @@ -73,13 +65,16 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path, int unix_stream_connect(const char *path) { - int fd, saved_errno; + int fd = -1, saved_errno; struct sockaddr_un sa; struct unix_sockaddr_context ctx; if (unix_sockaddr_init(&sa, path, &ctx) < 0) return -1; - fd = unix_stream_socket(); + fd = socket(AF_UNIX, SOCK_STREAM, 0); + if (fd < 0) + goto fail; + if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) goto fail; unix_sockaddr_cleanup(&ctx); @@ -87,15 +82,16 @@ int unix_stream_connect(const char *path) fail: saved_errno = errno; + if (fd != -1) + close(fd); unix_sockaddr_cleanup(&ctx); - close(fd); errno = saved_errno; return -1; } int unix_stream_listen(const char *path) { - int fd, saved_errno; + int fd = -1, saved_errno; struct sockaddr_un sa; struct unix_sockaddr_context ctx; @@ -103,7 +99,9 @@ int unix_stream_listen(const char *path) if (unix_sockaddr_init(&sa, path, &ctx) < 0) return -1; - fd = unix_stream_socket(); + fd = socket(AF_UNIX, SOCK_STREAM, 0); + if (fd < 0) + goto fail; if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) goto fail; @@ -116,8 +114,9 @@ int unix_stream_listen(const char *path) fail: saved_errno = errno; + if (fd != -1) + close(fd); unix_sockaddr_cleanup(&ctx); - close(fd); errno = saved_errno; return -1; } From patchwork Sat Feb 13 00:09:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12086371 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 44E5DC433E0 for ; Sat, 13 Feb 2021 00:11:06 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 198A164D5D for ; Sat, 13 Feb 2021 00:11:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231759AbhBMAK4 
(ORCPT ); Fri, 12 Feb 2021 19:10:56 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49312 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231359AbhBMAKl (ORCPT ); Fri, 12 Feb 2021 19:10:41 -0500 Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com [IPv6:2a00:1450:4864:20::32f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 695A2C061793 for ; Fri, 12 Feb 2021 16:09:22 -0800 (PST) Received: by mail-wm1-x32f.google.com with SMTP id i9so1380933wmq.1 for ; Fri, 12 Feb 2021 16:09:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=3DE2LGbxZPS99naynGsZAiR5LTnfiDo7m2CNe53QjEk=; b=gl6TLB07y0RBDaClruZhfjRI2g7dfJ9K06K1bKq1bsqLvL6hr5bHh5RgLiq1rDDswr 9u7hivpHZaqkqFYOMHK5cPZ03Pn5v0/4/NS0XvMC+K74OiOiM1Hilo+NRMy0vyBNsvO0 BTJiolpnAO6wW0M5+rTm5tst/Q/bMGYqFrhhD1x2OI3XfdOlGCBDzaheWhgQFdmNlxy9 tQ/omIYdDi+aCczzUEvFR65qeN/UmalEnuLnarEDFiK0Uvh1F1f8sS+Srnw1Mz6+cVBj 6iywoYRHtjOmZwNNb2a0dS9RARzudJ3nuL/RiZWlYwiDceeKTZ9SFfjSkOaEMHJzQELI IVTw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=3DE2LGbxZPS99naynGsZAiR5LTnfiDo7m2CNe53QjEk=; b=aAgjfxdL8Pe4YGWgQYQWxg5ZwWX5JepzXX8Kzw6+/xDYdxzOKyozQB28fw90L3cimA S2VwBi56xfAHib62hug2kb4dr3JhsXozLWMH0Ci5KYPSlBy3I+dYKcJMHUty9wHwYtD9 CTWNhkGzNECTGHNcd2EuQXotDse0cxwcEQzYGY4o2Pyk84/7EeSwUHtuc35fX9t9TsrO Ir3pSK0XmX1/qpPxHuED8UvluDUDW0ZmaBVSkWyProu66j3g+iZ4YMr5vzuRbSQHUCsW DBNJLwUw2vZdqGnKyz4iP3AbDluqV4I/m0B51inMcWzc/Oas/GrPLaDNPOEAbX5SAPSV uLuw== X-Gm-Message-State: AOAM530S4hirDj1B9JajszICVLL56xXNKY352ZGjfI53+aGjnCFxsUlb 5fvGkqtB4/iNE7QfXcqgBzsV1enQQK4= X-Google-Smtp-Source: ABdhPJyeGnPLRNzeYLyvajabTfk3jKd29c5v8MY9Cw9uNRvK/FuZ0+3cRkSlCqT/wve20GfngM1hEQ== X-Received: by 2002:a1c:e255:: with SMTP id z82mr4582705wmg.93.1613174961247; Fri, 12 Feb 2021 16:09:21 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id j16sm8638879wra.17.2021.02.12.16.09.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Feb 2021 16:09:20 -0800 (PST) Message-Id: <985b2e02b2df7725d70f1365f7cd2e525c9f3ade.1613174954.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Sat, 13 Feb 2021 00:09:09 +0000 Subject: [PATCH v3 08/12] unix-socket: add backlog size option to unix_stream_listen() Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Update `unix_stream_listen()` to take an options structure to override default behaviors. This commit includes the size of the `listen()` backlog. 
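For illustration only (an editorial sketch, not part of the patch): with this change a caller that needs a deeper backlog can override the previously hard-coded listen(fd, 5). The socket path below is hypothetical; the options initializer and field name are the ones introduced by this patch.

    #include "cache.h"
    #include "unix-socket.h"

    /*
     * Sketch: open a listener with a larger-than-default backlog.
     * A non-positive listen_backlog_size falls back to
     * DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5).
     */
    static int open_deep_backlog_listener(void)
    {
            struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;

            opts.listen_backlog_size = 128;

            return unix_stream_listen("/tmp/example.sock", &opts);
    }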
Signed-off-by: Jeff Hostetler --- builtin/credential-cache--daemon.c | 3 ++- unix-socket.c | 9 +++++++-- unix-socket.h | 14 +++++++++++++- 3 files changed, 22 insertions(+), 4 deletions(-) diff --git a/builtin/credential-cache--daemon.c b/builtin/credential-cache--daemon.c index c61f123a3b81..4c6c89ab0de2 100644 --- a/builtin/credential-cache--daemon.c +++ b/builtin/credential-cache--daemon.c @@ -203,9 +203,10 @@ static int serve_cache_loop(int fd) static void serve_cache(const char *socket_path, int debug) { + struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT; int fd; - fd = unix_stream_listen(socket_path); + fd = unix_stream_listen(socket_path, &opts); if (fd < 0) die_errno("unable to bind to '%s'", socket_path); diff --git a/unix-socket.c b/unix-socket.c index 69f81d64e9d5..5ac7dafe9828 100644 --- a/unix-socket.c +++ b/unix-socket.c @@ -89,9 +89,11 @@ int unix_stream_connect(const char *path) return -1; } -int unix_stream_listen(const char *path) +int unix_stream_listen(const char *path, + const struct unix_stream_listen_opts *opts) { int fd = -1, saved_errno; + int backlog; struct sockaddr_un sa; struct unix_sockaddr_context ctx; @@ -106,7 +108,10 @@ int unix_stream_listen(const char *path) if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) goto fail; - if (listen(fd, 5) < 0) + backlog = opts->listen_backlog_size; + if (backlog <= 0) + backlog = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG; + if (listen(fd, backlog) < 0) goto fail; unix_sockaddr_cleanup(&ctx); diff --git a/unix-socket.h b/unix-socket.h index e271aeec5a07..06a5a05b03fe 100644 --- a/unix-socket.h +++ b/unix-socket.h @@ -1,7 +1,19 @@ #ifndef UNIX_SOCKET_H #define UNIX_SOCKET_H +struct unix_stream_listen_opts { + int listen_backlog_size; +}; + +#define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5) + +#define UNIX_STREAM_LISTEN_OPTS_INIT \ +{ \ + .listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \ +} + int unix_stream_connect(const char *path); -int unix_stream_listen(const char *path); +int unix_stream_listen(const char *path, + const struct unix_stream_listen_opts *opts); #endif /* UNIX_SOCKET_H */ From patchwork Sat Feb 13 00:09:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12086375 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DB0BDC433E0 for ; Sat, 13 Feb 2021 00:11:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A426864DC3 for ; Sat, 13 Feb 2021 00:11:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232025AbhBMAL2 (ORCPT ); Fri, 12 Feb 2021 19:11:28 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49362 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231663AbhBMAKx (ORCPT ); Fri, 12 Feb 2021 19:10:53 -0500 Received: from mail-wr1-x433.google.com (mail-wr1-x433.google.com [IPv6:2a00:1450:4864:20::433]) by lindbergh.monkeyblade.net (Postfix) with 
ESMTPS id 0979BC061794 for ; Fri, 12 Feb 2021 16:09:23 -0800 (PST) Received: by mail-wr1-x433.google.com with SMTP id v15so1365730wrx.4 for ; Fri, 12 Feb 2021 16:09:22 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=ASF8+rFWzj6aUvWIsRo10Spu216IfvU7//f0qbC+iF8=; b=sYxiAu+qt7noTY43B9g6XjklBE/saIPy0Oi4DxAIwadvuWyJuYBYNynB+IXrCEYhRc EYN5qTWiBZTeJseZ2Jvtv9XkIggrC2GAWxTKXTDrHymdskXXwz/a5EFillVBhAGHc7/O tziEUfQ6dph5iHnJOBxlIg5tR0BpSD+aOweg4KqvaY8q2riy7ETDgeySVV2FKoBOZPK8 1hOPBajCoaHf9+69glHBvtVhpUxG4caPBCSsC9z2LvrLuffwN1KVyzVPfjoMXU1VM2nB cSuK+hPa4zyaZ9YddaZQYU1eiVqjcuPuY0kWLXn8K5DR7awHgwdU1vWifuW3jg50/B9I nvoA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=ASF8+rFWzj6aUvWIsRo10Spu216IfvU7//f0qbC+iF8=; b=eoTjS0BVvXrJtCSi/x8qOueZDkWul56QlJZKAFmD91zdglJIKOqiw7KStXxcoOaDL1 FOPWYguuG/86RfJsnihoOil4U85NZmMylr0FTvHUE+IxrkwicfSuNrzee5xwABDhVHtX ql4cf72xv3/iO9YrWe9S7PtUhUzLoHJJeU6m4P+CDDP1MaxqFN2uHoWlcIRNUT/MIDRw Mqc0INvSm76RAhLSqPItfvR795f+KYXrnQa2SQklrxrBxSnsc0fZMP/syJ8DzNRJ7v70 gAnVIxA7V8M9L0q/tHB/7mj0/hXRuXEJAF2JZy01TdILd+yzUKu/exVB9pNk7PMyYyNS +I0A== X-Gm-Message-State: AOAM530eV37APuTA/VNR5M80SkXKOtR6Ugo5ImlPMXk9HRrWXy+fSUjT vKwzo/DzGT+pzZC91a1b5y2pIlwBlU4= X-Google-Smtp-Source: ABdhPJz+er/7xPAvbC3mEOdtBboVCkNwysgEf1Ex3tEVjwAa3i41C43KaKZyUT3IMUQqCCiFw2avLA== X-Received: by 2002:adf:c40a:: with SMTP id v10mr6164876wrf.10.1613174961872; Fri, 12 Feb 2021 16:09:21 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id b19sm15130940wmj.22.2021.02.12.16.09.21 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Feb 2021 16:09:21 -0800 (PST) Message-Id: <1bfa36409d0706d5e22703f80bf95dfa1a313a83.1613174954.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Sat, 13 Feb 2021 00:09:10 +0000 Subject: [PATCH v3 09/12] unix-socket: disallow chdir() when creating unix domain sockets Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Calls to `chdir()` are dangerous in a multi-threaded context. If `unix_stream_listen()` or `unix_stream_connect()` is given a socket pathname that is too long to fit in a `sockaddr_un` structure, it will `chdir()` to the parent directory of the requested socket pathname, create the socket using a relative pathname, and then `chdir()` back. This is not thread-safe. Teach `unix_sockaddr_init()` to not allow calls to `chdir()` when this flag is set. 
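As an editorial illustration (not part of the patch): a multi-threaded caller can now refuse the chdir() fallback entirely and simply receive an error for an over-long pathname. The pathname argument below is hypothetical; the two-argument signature is the one introduced here.

    #include "cache.h"
    #include "unix-socket.h"

    /*
     * Sketch: connect from a threaded caller without permitting the
     * chdir() workaround. With the flag set, a socket pathname that
     * does not fit in sockaddr_un.sun_path fails with ENAMETOOLONG
     * instead of temporarily changing the process-wide working
     * directory.
     */
    static int connect_without_chdir(const char *path)
    {
            return unix_stream_connect(path, 1 /* disallow_chdir */);
    }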
Signed-off-by: Jeff Hostetler --- builtin/credential-cache.c | 2 +- unix-socket.c | 17 ++++++++++++----- unix-socket.h | 4 +++- 3 files changed, 16 insertions(+), 7 deletions(-) diff --git a/builtin/credential-cache.c b/builtin/credential-cache.c index 9b3f70990597..76a6ba37223f 100644 --- a/builtin/credential-cache.c +++ b/builtin/credential-cache.c @@ -14,7 +14,7 @@ static int send_request(const char *socket, const struct strbuf *out) { int got_data = 0; - int fd = unix_stream_connect(socket); + int fd = unix_stream_connect(socket, 0); if (fd < 0) return -1; diff --git a/unix-socket.c b/unix-socket.c index 5ac7dafe9828..1eaa8cf759c0 100644 --- a/unix-socket.c +++ b/unix-socket.c @@ -28,16 +28,23 @@ static void unix_sockaddr_cleanup(struct unix_sockaddr_context *ctx) } static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path, - struct unix_sockaddr_context *ctx) + struct unix_sockaddr_context *ctx, + int disallow_chdir) { int size = strlen(path) + 1; ctx->orig_dir = NULL; if (size > sizeof(sa->sun_path)) { - const char *slash = find_last_dir_sep(path); + const char *slash; const char *dir; struct strbuf cwd = STRBUF_INIT; + if (disallow_chdir) { + errno = ENAMETOOLONG; + return -1; + } + + slash = find_last_dir_sep(path); if (!slash) { errno = ENAMETOOLONG; return -1; @@ -63,13 +70,13 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path, return 0; } -int unix_stream_connect(const char *path) +int unix_stream_connect(const char *path, int disallow_chdir) { int fd = -1, saved_errno; struct sockaddr_un sa; struct unix_sockaddr_context ctx; - if (unix_sockaddr_init(&sa, path, &ctx) < 0) + if (unix_sockaddr_init(&sa, path, &ctx, disallow_chdir) < 0) return -1; fd = socket(AF_UNIX, SOCK_STREAM, 0); if (fd < 0) @@ -99,7 +106,7 @@ int unix_stream_listen(const char *path, unlink(path); - if (unix_sockaddr_init(&sa, path, &ctx) < 0) + if (unix_sockaddr_init(&sa, path, &ctx, opts->disallow_chdir) < 0) return -1; fd = socket(AF_UNIX, SOCK_STREAM, 0); if (fd < 0) diff --git a/unix-socket.h b/unix-socket.h index 06a5a05b03fe..2c0b2e79d7b3 100644 --- a/unix-socket.h +++ b/unix-socket.h @@ -3,6 +3,7 @@ struct unix_stream_listen_opts { int listen_backlog_size; + unsigned int disallow_chdir:1; }; #define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5) @@ -10,9 +11,10 @@ struct unix_stream_listen_opts { #define UNIX_STREAM_LISTEN_OPTS_INIT \ { \ .listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \ + .disallow_chdir = 0, \ } -int unix_stream_connect(const char *path); +int unix_stream_connect(const char *path, int disallow_chdir); int unix_stream_listen(const char *path, const struct unix_stream_listen_opts *opts); From patchwork Sat Feb 13 00:09:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12086373 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E894AC433DB for ; Sat, 13 Feb 2021 00:11:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org 
(Postfix) with ESMTP id AE48A64DC3 for ; Sat, 13 Feb 2021 00:11:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232010AbhBMAL0 (ORCPT ); Fri, 12 Feb 2021 19:11:26 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49364 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231699AbhBMAKx (ORCPT ); Fri, 12 Feb 2021 19:10:53 -0500 Received: from mail-wr1-x42a.google.com (mail-wr1-x42a.google.com [IPv6:2a00:1450:4864:20::42a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ABDEAC061797 for ; Fri, 12 Feb 2021 16:09:23 -0800 (PST) Received: by mail-wr1-x42a.google.com with SMTP id g10so1388650wrx.1 for ; Fri, 12 Feb 2021 16:09:23 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=BxZO76/AviWURZU430dd0Mk+fI3Wg10NNXpI5VTJui4=; b=OOmZq4+g6x2ZrKtuHwzGu7uYTFhCMhRDUS6OyobpnkUF+iAK5XSNiS4Tpr5bsawOwX XXjeL7J3UCGl+pLEnpn2HIDKajCVqPZZ3fF4pVG5Ux9stFiU7jQQ5aVd+2yPEE+/+uRF jllLqJPO7qL9gMq1sZzdSAhgflvUMermhdTZfsWz3sqgRS8F1H8FvL/z7zUXSN1hV8au 18dVamSXVIt4ScyrNeDa5bjHQAq+BHIe/vg5fN3UcEQsrDz6ACLyme355aZl/ctDzVTR v9IRDoalBsWw1WwRkbjL4vUCzcqRpf7hVwSrIIURp8n4UK5Y1SNFsQBrlWPC5GZy09rM BTmA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=BxZO76/AviWURZU430dd0Mk+fI3Wg10NNXpI5VTJui4=; b=bk/YDTq9TjVjw9jYsHYr3cqaPxTk6HEePPYlrrOlgiU2ms0YWZbgcXnJSwrOTwLltB Z2mvs9dImDocR8c472ZyNc9sgqEoIyEHeLafoxxi7sYOUvGrYNxW0lPRQZwMY/VoX9I0 MmEDJZbcJahGqtIZ4TUAjyeAc97lrjYyr1VfBB1fXGKFQhn4MopRfx9rY6SO7nqA5GBf 4djwCT2hbUOuB5zkdexC2AMC6PwXgPzTET1tRqufjYv/IDXecq/kXjUwZku97/SE5D+u HmcYtSI0uuPYsjKdKvEg5MeWHm7v2XT46AhQBEbYjqvloTPsXV1OyHz5Tq/1iG1l8vRb EOEA== X-Gm-Message-State: AOAM533DOcITl8K3F8oqQo0RrSLuDk8tXwuxLh/mlVKedvslau6hSY1c A8iHOn7XovUFtTZOS8VGnpTUvE60UmU= X-Google-Smtp-Source: ABdhPJwogP9orp6WLole0Kp5bl/xX3rnqQ70YaC7JC5PYtH4ojZR2NXuQluwuVbQRbalPDKtX24evQ== X-Received: by 2002:adf:ee84:: with SMTP id b4mr5979795wro.339.1613174962422; Fri, 12 Feb 2021 16:09:22 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id a17sm14832850wrx.63.2021.02.12.16.09.22 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Feb 2021 16:09:22 -0800 (PST) Message-Id: In-Reply-To: References: Date: Sat, 13 Feb 2021 00:09:11 +0000 Subject: [PATCH v3 10/12] unix-socket: create `unix_stream_server__listen_with_lock()` Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Create a version of `unix_stream_listen()` that uses a ".lock" lockfile to create the unix domain socket in a race-free manner. Unix domain sockets have a fundamental problem on Unix systems because they persist in the filesystem until they are deleted. This is independent of whether a server is actually listening for connections. Well-behaved servers are expected to delete the socket when they shutdown. A new server cannot easily tell if a found socket is attached to an active server or is leftover cruft from a dead server. 
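The only reliable way to tell the two cases apart is to probe the pathname. An editorial sketch of such a liveness check follows; it mirrors the is_another_server_alive() helper added in the diff below (the probing steps come from this patch, while the wrapper function itself is illustrative).

    #include "cache.h"
    #include "unix-socket.h"

    /*
     * Sketch: a socket inode on disk proves nothing by itself, so poke
     * it with a trivial connection. Returns 1 only if a server at
     * `path` actually accepted the connection.
     */
    static int probe_for_live_server(const char *path)
    {
            struct stat st;
            int fd;

            if (lstat(path, &st) || !S_ISSOCK(st.st_mode))
                    return 0;       /* nothing (or a non-socket) there */

            fd = unix_stream_connect(path, 0);
            if (fd < 0)
                    return 0;       /* stale socket; nobody listening */

            close(fd);
            return 1;
    }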
The traditional solution used by `unix_stream_listen()` is to force delete the socket pathname and then create a new socket. This solves the latter (cruft) problem, but in the case of the former, it orphans the existing server (by stealing the pathname associated with the socket it is listening on). We cannot directly use a .lock lockfile to create the socket because the socket is created by `bind(2)` rather than the `open(2)` mechanism used by `tempfile.c`. As an alternative, we hold a plain lockfile (".lock") as a mutual exclusion device. Under the lock, we test if an existing socket ("") is has an active server. If not, create a new socket and begin listening. Then we rollback the lockfile in all cases. Signed-off-by: Jeff Hostetler --- unix-socket.c | 115 ++++++++++++++++++++++++++++++++++++++++++++++++++ unix-socket.h | 29 +++++++++++++ 2 files changed, 144 insertions(+) diff --git a/unix-socket.c b/unix-socket.c index 1eaa8cf759c0..647bbde37f97 100644 --- a/unix-socket.c +++ b/unix-socket.c @@ -1,4 +1,5 @@ #include "cache.h" +#include "lockfile.h" #include "unix-socket.h" static int chdir_len(const char *orig, int len) @@ -132,3 +133,117 @@ int unix_stream_listen(const char *path, errno = saved_errno; return -1; } + +static int is_another_server_alive(const char *path, + const struct unix_stream_listen_opts *opts) +{ + struct stat st; + int fd; + + if (!lstat(path, &st) && S_ISSOCK(st.st_mode)) { + /* + * A socket-inode exists on disk at `path`, but we + * don't know whether it belongs to an active server + * or whether the last server died without cleaning + * up. + * + * Poke it with a trivial connection to try to find + * out. + */ + fd = unix_stream_connect(path, opts->disallow_chdir); + if (fd >= 0) { + close(fd); + return 1; + } + } + + return 0; +} + +struct unix_stream_server_socket *unix_stream_server__listen_with_lock( + const char *path, + const struct unix_stream_listen_opts *opts) +{ + struct lock_file lock = LOCK_INIT; + int fd_socket; + struct unix_stream_server_socket *server_socket; + + /* + * Create a lock at ".lock" if we can. + */ + if (hold_lock_file_for_update_timeout(&lock, path, 0, + opts->timeout_ms) < 0) { + error_errno(_("could not lock listener socket '%s'"), path); + return NULL; + } + + /* + * If another server is listening on "" give up. We do not + * want to create a socket and steal future connections from them. + */ + if (is_another_server_alive(path, opts)) { + errno = EADDRINUSE; + error_errno(_("listener socket already in use '%s'"), path); + rollback_lock_file(&lock); + return NULL; + } + + /* + * Create and bind to a Unix domain socket at "". + */ + fd_socket = unix_stream_listen(path, opts); + if (fd_socket < 0) { + error_errno(_("could not create listener socket '%s'"), path); + rollback_lock_file(&lock); + return NULL; + } + + server_socket = xcalloc(1, sizeof(*server_socket)); + server_socket->path_socket = strdup(path); + server_socket->fd_socket = fd_socket; + lstat(path, &server_socket->st_socket); + + /* + * Always rollback (just delete) ".lock" because we already created + * "" as a socket and do not want to commit_lock to do the atomic + * rename trick. 
+ */ + rollback_lock_file(&lock); + + return server_socket; +} + +void unix_stream_server__free( + struct unix_stream_server_socket *server_socket) +{ + if (!server_socket) + return; + + if (server_socket->fd_socket >= 0) { + if (!unix_stream_server__was_stolen(server_socket)) + unlink(server_socket->path_socket); + close(server_socket->fd_socket); + } + + free(server_socket->path_socket); + free(server_socket); +} + +int unix_stream_server__was_stolen( + struct unix_stream_server_socket *server_socket) +{ + struct stat st_now; + + if (!server_socket) + return 0; + + if (lstat(server_socket->path_socket, &st_now) == -1) + return 1; + + if (st_now.st_ino != server_socket->st_socket.st_ino) + return 1; + + /* We might also consider the ctime on some platforms. */ + + return 0; +} diff --git a/unix-socket.h b/unix-socket.h index 2c0b2e79d7b3..8faf5b692f90 100644 --- a/unix-socket.h +++ b/unix-socket.h @@ -2,14 +2,17 @@ #define UNIX_SOCKET_H struct unix_stream_listen_opts { + long timeout_ms; int listen_backlog_size; unsigned int disallow_chdir:1; }; +#define DEFAULT_UNIX_STREAM_LISTEN_TIMEOUT (100) #define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5) #define UNIX_STREAM_LISTEN_OPTS_INIT \ { \ + .timeout_ms = DEFAULT_UNIX_STREAM_LISTEN_TIMEOUT, \ .listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \ .disallow_chdir = 0, \ } @@ -18,4 +21,30 @@ int unix_stream_connect(const char *path, int disallow_chdir); int unix_stream_listen(const char *path, const struct unix_stream_listen_opts *opts); +struct unix_stream_server_socket { + char *path_socket; + struct stat st_socket; + int fd_socket; +}; + +/* + * Create a Unix Domain Socket at the given path under the protection + * of a '.lock' lockfile. + */ +struct unix_stream_server_socket *unix_stream_server__listen_with_lock( + const char *path, + const struct unix_stream_listen_opts *opts); + +/* + * Close and delete the socket. + */ +void unix_stream_server__free( + struct unix_stream_server_socket *server_socket); + +/* + * Return 1 if the inode of the pathname to our socket changes. 
+ */ +int unix_stream_server__was_stolen( + struct unix_stream_server_socket *server_socket); + #endif /* UNIX_SOCKET_H */ From patchwork Sat Feb 13 00:09:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12086379 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0A13CC433E0 for ; Sat, 13 Feb 2021 00:11:46 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B92AB64E95 for ; Sat, 13 Feb 2021 00:11:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231753AbhBMALo (ORCPT ); Fri, 12 Feb 2021 19:11:44 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49366 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231743AbhBMAKy (ORCPT ); Fri, 12 Feb 2021 19:10:54 -0500 Received: from mail-wr1-x436.google.com (mail-wr1-x436.google.com [IPv6:2a00:1450:4864:20::436]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 73806C0617A7 for ; Fri, 12 Feb 2021 16:09:24 -0800 (PST) Received: by mail-wr1-x436.google.com with SMTP id b3so1362925wrj.5 for ; Fri, 12 Feb 2021 16:09:24 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=eZXjHOsZEerzYCjEv6rp9xYsvp5DJvu1L8nA4ANLFUM=; b=EiexPGzt08tKlrWafwkI+9BNWNlt/9jWxyX6AFL2ON2Ux75GBL9kYEH9475vgLI4G3 b9tu0JA4n/wUxmssJQf7BXxqbIg/SaYcarfOUoFAd8DpvIYj78at7nARtWez4QS9kzlz o3TvK/H0vtKqsCiDIC8bnMp2ogI+1+4+PD/Rn9H6oQEYxc8ix0xhF9M6RiCoGL0Mkgqw dbFiKsKjJtW1SMDCihOtzoH9Kkoe+VEWjCpOynFjbIMOD6Y7lQzZ15Vm7O6SjaVTF6RB kYNfV7EpYUqtz6p66sOLW5INo2qqbylqCfzpimUHTImXUQgADean4S5maOrefw99BAkT SJkA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=eZXjHOsZEerzYCjEv6rp9xYsvp5DJvu1L8nA4ANLFUM=; b=p7fTCIhCkG2GpoClvG6wW38efrqCaGA71AGaII0ClXYN18itTAP6XxzjZOWKBq2idV tQft1ISCAXW1NipJLVNuVUL7xNWTZ731y48OlPtY1f+JvS8hVJMna6OfRtXzDpr89UTr h1W+sr0XNF2ip64UrrUrXwTwThuhgh/ZKlweKcv1hJybDWsK1oNJX1XMHCZX3ws94DKd 9My+3cq+mW1rI19Cme3YuDAI9uudQ05sFTOjmvjVPo2KWL0yXnnNcOJi8gxD83kG7p6D tYL2ZeyONEjVvovx2C5BRODWjx+7UFuKMk8rpDBerUbUEZ3TnTtzSnivcm1OjOt6zCfG +HGQ== X-Gm-Message-State: AOAM532M9asav4R5ytpRC4pEovAGiPzOAiblk5bSVD9CAIGTdBoB0+Ts tJWLPOJEZmu4WCwVwEKKvMhrBLZj7ss= X-Google-Smtp-Source: ABdhPJzn+YGHma+rbQEKUKd2kxX3WHW+w1fdkPgkPITYLuqt6cuS9xcG+utWyHjrCDfE6sMuq4Rp5w== X-Received: by 2002:adf:b350:: with SMTP id k16mr6094331wrd.190.1613174963086; Fri, 12 Feb 2021 16:09:23 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id i8sm14653840wry.90.2021.02.12.16.09.22 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Feb 2021 16:09:22 -0800 (PST) Message-Id: 
<43c8db9a4468c0ca50e8f4efa55ab01a77cafcf6.1613174954.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Sat, 13 Feb 2021 00:09:12 +0000 Subject: [PATCH v3 11/12] simple-ipc: add Unix domain socket implementation Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Create Unix domain socket based implementation of "simple-ipc". A set of `ipc_client` routines implement a client library to connect to an `ipc_server` over a Unix domain socket, send a simple request, and receive a single response. Clients use blocking IO on the socket. A set of `ipc_server` routines implement a thread pool to listen for and concurrently service client connections. The server creates a new Unix domain socket at a known location. If a socket already exists with that name, the server tries to determine if another server is already listening on the socket or if the socket is dead. If socket is busy, the server exits with an error rather than stealing the socket. If the socket is dead, the server creates a new one and starts up. If while running, the server detects that its socket has been stolen by another server, it automatically exits. Signed-off-by: Jeff Hostetler --- Makefile | 2 + compat/simple-ipc/ipc-unix-socket.c | 979 ++++++++++++++++++++++++++++ contrib/buildsystems/CMakeLists.txt | 2 + simple-ipc.h | 13 +- 4 files changed, 995 insertions(+), 1 deletion(-) create mode 100644 compat/simple-ipc/ipc-unix-socket.c diff --git a/Makefile b/Makefile index 40d5cab78d3f..08a4c88b92f5 100644 --- a/Makefile +++ b/Makefile @@ -1677,6 +1677,8 @@ ifdef NO_UNIX_SOCKETS BASIC_CFLAGS += -DNO_UNIX_SOCKETS else LIB_OBJS += unix-socket.o + LIB_OBJS += compat/simple-ipc/ipc-shared.o + LIB_OBJS += compat/simple-ipc/ipc-unix-socket.o endif ifdef USE_WIN32_IPC diff --git a/compat/simple-ipc/ipc-unix-socket.c b/compat/simple-ipc/ipc-unix-socket.c new file mode 100644 index 000000000000..b7fd0b34329e --- /dev/null +++ b/compat/simple-ipc/ipc-unix-socket.c @@ -0,0 +1,979 @@ +#include "cache.h" +#include "simple-ipc.h" +#include "strbuf.h" +#include "pkt-line.h" +#include "thread-utils.h" +#include "unix-socket.h" + +#ifdef NO_UNIX_SOCKETS +#error compat/simple-ipc/ipc-unix-socket.c requires Unix sockets +#endif + +enum ipc_active_state ipc_get_active_state(const char *path) +{ + enum ipc_active_state state = IPC_STATE__OTHER_ERROR; + struct ipc_client_connect_options options + = IPC_CLIENT_CONNECT_OPTIONS_INIT; + struct stat st; + struct ipc_client_connection *connection_test = NULL; + + options.wait_if_busy = 0; + options.wait_if_not_found = 0; + + if (lstat(path, &st) == -1) { + switch (errno) { + case ENOENT: + case ENOTDIR: + return IPC_STATE__NOT_LISTENING; + default: + return IPC_STATE__INVALID_PATH; + } + } + + /* also complain if a plain file is in the way */ + if ((st.st_mode & S_IFMT) != S_IFSOCK) + return IPC_STATE__INVALID_PATH; + + /* + * Just because the filesystem has a S_IFSOCK type inode + * at `path`, doesn't mean it that there is a server listening. + * Ping it to be sure. + */ + state = ipc_client_try_connect(path, &options, &connection_test); + ipc_client_close_connection(connection_test); + + return state; +} + +/* + * This value was chosen at random. + */ +#define WAIT_STEP_MS (50) + +/* + * Try to connect to the server. 
If the server is just starting up or + * is very busy, we may not get a connection the first time. + */ +static enum ipc_active_state connect_to_server( + const char *path, + int timeout_ms, + const struct ipc_client_connect_options *options, + int *pfd) +{ + int wait_ms = 50; + int k; + + *pfd = -1; + + for (k = 0; k < timeout_ms; k += wait_ms) { + int fd = unix_stream_connect(path, options->uds_disallow_chdir); + + if (fd != -1) { + *pfd = fd; + return IPC_STATE__LISTENING; + } + + if (errno == ENOENT) { + if (!options->wait_if_not_found) + return IPC_STATE__PATH_NOT_FOUND; + + goto sleep_and_try_again; + } + + if (errno == ETIMEDOUT) { + if (!options->wait_if_busy) + return IPC_STATE__NOT_LISTENING; + + goto sleep_and_try_again; + } + + if (errno == ECONNREFUSED) { + if (!options->wait_if_busy) + return IPC_STATE__NOT_LISTENING; + + goto sleep_and_try_again; + } + + return IPC_STATE__OTHER_ERROR; + + sleep_and_try_again: + sleep_millisec(wait_ms); + } + + return IPC_STATE__NOT_LISTENING; +} + +/* + * A randomly chosen timeout value. + */ +#define MY_CONNECTION_TIMEOUT_MS (1000) + +enum ipc_active_state ipc_client_try_connect( + const char *path, + const struct ipc_client_connect_options *options, + struct ipc_client_connection **p_connection) +{ + enum ipc_active_state state = IPC_STATE__OTHER_ERROR; + int fd = -1; + + *p_connection = NULL; + + trace2_region_enter("ipc-client", "try-connect", NULL); + trace2_data_string("ipc-client", NULL, "try-connect/path", path); + + state = connect_to_server(path, MY_CONNECTION_TIMEOUT_MS, + options, &fd); + + trace2_data_intmax("ipc-client", NULL, "try-connect/state", + (intmax_t)state); + trace2_region_leave("ipc-client", "try-connect", NULL); + + if (state == IPC_STATE__LISTENING) { + (*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection)); + (*p_connection)->fd = fd; + } + + return state; +} + +void ipc_client_close_connection(struct ipc_client_connection *connection) +{ + if (!connection) + return; + + if (connection->fd != -1) + close(connection->fd); + + free(connection); +} + +int ipc_client_send_command_to_connection( + struct ipc_client_connection *connection, + const char *message, struct strbuf *answer) +{ + int ret = 0; + + strbuf_setlen(answer, 0); + + trace2_region_enter("ipc-client", "send-command", NULL); + + if (write_packetized_from_buf_no_flush(message, strlen(message), + connection->fd) < 0 || + packet_flush_gently(connection->fd) < 0) { + ret = error(_("could not send IPC command")); + goto done; + } + + if (read_packetized_to_strbuf( + connection->fd, answer, + PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE) < 0) { + ret = error(_("could not read IPC response")); + goto done; + } + +done: + trace2_region_leave("ipc-client", "send-command", NULL); + return ret; +} + +int ipc_client_send_command(const char *path, + const struct ipc_client_connect_options *options, + const char *message, struct strbuf *answer) +{ + int ret = -1; + enum ipc_active_state state; + struct ipc_client_connection *connection = NULL; + + state = ipc_client_try_connect(path, options, &connection); + + if (state != IPC_STATE__LISTENING) + return ret; + + ret = ipc_client_send_command_to_connection(connection, message, answer); + + ipc_client_close_connection(connection); + + return ret; +} + +static int set_socket_blocking_flag(int fd, int make_nonblocking) +{ + int flags; + + flags = fcntl(fd, F_GETFL, NULL); + + if (flags < 0) + return -1; + + if (make_nonblocking) + flags |= O_NONBLOCK; + else + flags &= ~O_NONBLOCK; + + return fcntl(fd, 
F_SETFL, flags); +} + +/* + * Magic numbers used to annotate callback instance data. + * These are used to help guard against accidentally passing the + * wrong instance data across multiple levels of callbacks (which + * is easy to do if there are `void*` arguments). + */ +enum magic { + MAGIC_SERVER_REPLY_DATA, + MAGIC_WORKER_THREAD_DATA, + MAGIC_ACCEPT_THREAD_DATA, + MAGIC_SERVER_DATA, +}; + +struct ipc_server_reply_data { + enum magic magic; + int fd; + struct ipc_worker_thread_data *worker_thread_data; +}; + +struct ipc_worker_thread_data { + enum magic magic; + struct ipc_worker_thread_data *next_thread; + struct ipc_server_data *server_data; + pthread_t pthread_id; +}; + +struct ipc_accept_thread_data { + enum magic magic; + struct ipc_server_data *server_data; + + struct unix_stream_server_socket *server_socket; + + int fd_send_shutdown; + int fd_wait_shutdown; + pthread_t pthread_id; +}; + +/* + * With unix-sockets, the conceptual "ipc-server" is implemented as a single + * controller "accept-thread" thread and a pool of "worker-thread" threads. + * The former does the usual `accept()` loop and dispatches connections + * to an idle worker thread. The worker threads wait in an idle loop for + * a new connection, communicate with the client and relay data to/from + * the `application_cb` and then wait for another connection from the + * server thread. This avoids the overhead of constantly creating and + * destroying threads. + */ +struct ipc_server_data { + enum magic magic; + ipc_server_application_cb *application_cb; + void *application_data; + struct strbuf buf_path; + + struct ipc_accept_thread_data *accept_thread; + struct ipc_worker_thread_data *worker_thread_list; + + pthread_mutex_t work_available_mutex; + pthread_cond_t work_available_cond; + + /* + * Accepted but not yet processed client connections are kept + * in a circular buffer FIFO. The queue is empty when the + * positions are equal. + */ + int *fifo_fds; + int queue_size; + int back_pos; + int front_pos; + + int shutdown_requested; + int is_stopped; +}; + +/* + * Remove and return the oldest queued connection. + * + * Returns -1 if empty. + */ +static int fifo_dequeue(struct ipc_server_data *server_data) +{ + /* ASSERT holding mutex */ + + int fd; + + if (server_data->back_pos == server_data->front_pos) + return -1; + + fd = server_data->fifo_fds[server_data->front_pos]; + server_data->fifo_fds[server_data->front_pos] = -1; + + server_data->front_pos++; + if (server_data->front_pos == server_data->queue_size) + server_data->front_pos = 0; + + return fd; +} + +/* + * Push a new fd onto the back of the queue. + * + * Drop it and return -1 if queue is already full. + */ +static int fifo_enqueue(struct ipc_server_data *server_data, int fd) +{ + /* ASSERT holding mutex */ + + int next_back_pos; + + next_back_pos = server_data->back_pos + 1; + if (next_back_pos == server_data->queue_size) + next_back_pos = 0; + + if (next_back_pos == server_data->front_pos) { + /* Queue is full. Just drop it. */ + close(fd); + return -1; + } + + server_data->fifo_fds[server_data->back_pos] = fd; + server_data->back_pos = next_back_pos; + + return fd; +} + +/* + * Wait for a connection to be queued to the FIFO and return it. + * + * Returns -1 if someone has already requested a shutdown. 
+ */ +static int worker_thread__wait_for_connection( + struct ipc_worker_thread_data *worker_thread_data) +{ + /* ASSERT NOT holding mutex */ + + struct ipc_server_data *server_data = worker_thread_data->server_data; + int fd = -1; + + pthread_mutex_lock(&server_data->work_available_mutex); + for (;;) { + if (server_data->shutdown_requested) + break; + + fd = fifo_dequeue(server_data); + if (fd >= 0) + break; + + pthread_cond_wait(&server_data->work_available_cond, + &server_data->work_available_mutex); + } + pthread_mutex_unlock(&server_data->work_available_mutex); + + return fd; +} + +/* + * Forward declare our reply callback function so that any compiler + * errors are reported when we actually define the function (in addition + * to any errors reported when we try to pass this callback function as + * a parameter in a function call). The former are easier to understand. + */ +static ipc_server_reply_cb do_io_reply_callback; + +/* + * Relay application's response message to the client process. + * (We do not flush at this point because we allow the caller + * to chunk data to the client thru us.) + */ +static int do_io_reply_callback(struct ipc_server_reply_data *reply_data, + const char *response, size_t response_len) +{ + if (reply_data->magic != MAGIC_SERVER_REPLY_DATA) + BUG("reply_cb called with wrong instance data"); + + return write_packetized_from_buf_no_flush(response, response_len, + reply_data->fd); +} + +/* A randomly chosen value. */ +#define MY_WAIT_POLL_TIMEOUT_MS (10) + +/* + * If the client hangs up without sending any data on the wire, just + * quietly close the socket and ignore this client. + * + * This worker thread is committed to reading the IPC request data + * from the client at the other end of this fd. Wait here for the + * client to actually put something on the wire -- because if the + * client just does a ping (connect and hangup without sending any + * data), our use of the pkt-line read routines will spew an error + * message. + * + * Return -1 if the client hung up. + * Return 0 if data (possibly incomplete) is ready. + */ +static int worker_thread__wait_for_io_start( + struct ipc_worker_thread_data *worker_thread_data, + int fd) +{ + struct ipc_server_data *server_data = worker_thread_data->server_data; + struct pollfd pollfd[1]; + int result; + + for (;;) { + pollfd[0].fd = fd; + pollfd[0].events = POLLIN; + + result = poll(pollfd, 1, MY_WAIT_POLL_TIMEOUT_MS); + if (result < 0) { + if (errno == EINTR) + continue; + goto cleanup; + } + + if (result == 0) { + /* a timeout */ + + int in_shutdown; + + pthread_mutex_lock(&server_data->work_available_mutex); + in_shutdown = server_data->shutdown_requested; + pthread_mutex_unlock(&server_data->work_available_mutex); + + /* + * If a shutdown is already in progress and this + * client has not started talking yet, just drop it. + */ + if (in_shutdown) + goto cleanup; + continue; + } + + if (pollfd[0].revents & POLLHUP) + goto cleanup; + + if (pollfd[0].revents & POLLIN) + return 0; + + goto cleanup; + } + +cleanup: + close(fd); + return -1; +} + +/* + * Receive the request/command from the client and pass it to the + * registered request-callback. The request-callback will compose + * a response and call our reply-callback to send it to the client. 
+ */ +static int worker_thread__do_io( + struct ipc_worker_thread_data *worker_thread_data, + int fd) +{ + /* ASSERT NOT holding lock */ + + struct strbuf buf = STRBUF_INIT; + struct ipc_server_reply_data reply_data; + int ret = 0; + + reply_data.magic = MAGIC_SERVER_REPLY_DATA; + reply_data.worker_thread_data = worker_thread_data; + + reply_data.fd = fd; + + ret = read_packetized_to_strbuf( + reply_data.fd, &buf, + PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE); + if (ret >= 0) { + ret = worker_thread_data->server_data->application_cb( + worker_thread_data->server_data->application_data, + buf.buf, do_io_reply_callback, &reply_data); + + packet_flush_gently(reply_data.fd); + } + else { + /* + * The client probably disconnected/shutdown before it + * could send a well-formed message. Ignore it. + */ + } + + strbuf_release(&buf); + close(reply_data.fd); + + return ret; +} + +/* + * Block SIGPIPE on the current thread (so that we get EPIPE from + * write() rather than an actual signal). + * + * Note that using sigchain_push() and _pop() to control SIGPIPE + * around our IO calls is not thread safe: + * [] It uses a global stack of handler frames. + * [] It uses ALLOC_GROW() to resize it. + * [] Finally, according to the `signal(2)` man-page: + * "The effects of `signal()` in a multithreaded process are unspecified." + */ +static void thread_block_sigpipe(sigset_t *old_set) +{ + sigset_t new_set; + + sigemptyset(&new_set); + sigaddset(&new_set, SIGPIPE); + + sigemptyset(old_set); + pthread_sigmask(SIG_BLOCK, &new_set, old_set); +} + +/* + * Thread proc for an IPC worker thread. It handles a series of + * connections from clients. It pulls the next fd from the queue + * processes it, and then waits for the next client. + * + * Block SIGPIPE in this worker thread for the life of the thread. + * This avoids stray (and sometimes delayed) SIGPIPE signals caused + * by client errors and/or when we are under extremely heavy IO load. + * + * This means that the application callback will have SIGPIPE blocked. + * The callback should not change it. + */ +static void *worker_thread_proc(void *_worker_thread_data) +{ + struct ipc_worker_thread_data *worker_thread_data = _worker_thread_data; + struct ipc_server_data *server_data = worker_thread_data->server_data; + sigset_t old_set; + int fd, io; + int ret; + + trace2_thread_start("ipc-worker"); + + thread_block_sigpipe(&old_set); + + for (;;) { + fd = worker_thread__wait_for_connection(worker_thread_data); + if (fd == -1) + break; /* in shutdown */ + + io = worker_thread__wait_for_io_start(worker_thread_data, fd); + if (io == -1) + continue; /* client hung up without sending anything */ + + ret = worker_thread__do_io(worker_thread_data, fd); + + if (ret == SIMPLE_IPC_QUIT) { + trace2_data_string("ipc-worker", NULL, "queue_stop_async", + "application_quit"); + /* + * The application layer is telling the ipc-server + * layer to shutdown. + * + * We DO NOT have a response to send to the client. + * + * Queue an async stop (to stop the other threads) and + * allow this worker thread to exit now (no sense waiting + * for the thread-pool shutdown signal). + * + * Other non-idle worker threads are allowed to finish + * responding to their current clients. + */ + ipc_server_stop_async(server_data); + break; + } + } + + trace2_thread_exit(); + return NULL; +} + +/* A randomly chosen value. */ +#define MY_ACCEPT_POLL_TIMEOUT_MS (60 * 1000) + +/* + * Accept a new client connection on our socket. 
This uses non-blocking
+ * IO so that we can also wait for shutdown requests on our socket-pair
+ * without actually spinning on a fast timeout.
+ */
+static int accept_thread__wait_for_connection(
+	struct ipc_accept_thread_data *accept_thread_data)
+{
+	struct pollfd pollfd[2];
+	int result;
+
+	for (;;) {
+		pollfd[0].fd = accept_thread_data->fd_wait_shutdown;
+		pollfd[0].events = POLLIN;
+
+		pollfd[1].fd = accept_thread_data->server_socket->fd_socket;
+		pollfd[1].events = POLLIN;
+
+		result = poll(pollfd, 2, MY_ACCEPT_POLL_TIMEOUT_MS);
+		if (result < 0) {
+			if (errno == EINTR)
+				continue;
+			return result;
+		}
+
+		if (result == 0) {
+			/* a timeout */
+
+			/*
+			 * If someone deletes or force-creates a new unix
+			 * domain socket at our path, all future clients
+			 * will be routed elsewhere and we silently starve.
+			 * If that happens, just queue a shutdown.
+			 */
+			if (unix_stream_server__was_stolen(
+				    accept_thread_data->server_socket)) {
+				trace2_data_string("ipc-accept", NULL,
+						   "queue_stop_async",
+						   "socket_stolen");
+				ipc_server_stop_async(
+					accept_thread_data->server_data);
+			}
+			continue;
+		}
+
+		if (pollfd[0].revents & POLLIN) {
+			/* shutdown message queued to socketpair */
+			return -1;
+		}
+
+		if (pollfd[1].revents & POLLIN) {
+			/* a connection is available on server_socket */
+
+			int client_fd =
+				accept(accept_thread_data->server_socket->fd_socket,
+				       NULL, NULL);
+			if (client_fd >= 0)
+				return client_fd;
+
+			/*
+			 * An error here is unlikely -- it probably
+			 * indicates that the connecting process has
+			 * already dropped the connection.
+			 */
+			continue;
+		}
+
+		BUG("unhandled poll result errno=%d r[0]=%d r[1]=%d",
+		    errno, pollfd[0].revents, pollfd[1].revents);
+	}
+}
+
+/*
+ * Thread proc for the IPC server "accept thread". This waits for
+ * an incoming socket connection, appends it to the queue of available
+ * connections, and notifies a worker thread to process it.
+ *
+ * Block SIGPIPE in this thread for the life of the thread. This
+ * avoids any stray SIGPIPE signals when closing pipe fds under
+ * extremely heavy loads (such as when the fifo queue is full and we
+ * drop incoming connections).
+ */
+static void *accept_thread_proc(void *_accept_thread_data)
+{
+	struct ipc_accept_thread_data *accept_thread_data = _accept_thread_data;
+	struct ipc_server_data *server_data = accept_thread_data->server_data;
+	sigset_t old_set;
+
+	trace2_thread_start("ipc-accept");
+
+	thread_block_sigpipe(&old_set);
+
+	for (;;) {
+		int client_fd = accept_thread__wait_for_connection(
+			accept_thread_data);
+
+		pthread_mutex_lock(&server_data->work_available_mutex);
+		if (server_data->shutdown_requested) {
+			pthread_mutex_unlock(&server_data->work_available_mutex);
+			if (client_fd >= 0)
+				close(client_fd);
+			break;
+		}
+
+		if (client_fd < 0) {
+			/* ignore transient accept() errors */
+		}
+		else {
+			fifo_enqueue(server_data, client_fd);
+			pthread_cond_broadcast(&server_data->work_available_cond);
+		}
+		pthread_mutex_unlock(&server_data->work_available_mutex);
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * We can't predict the connection arrival rate relative to the worker
+ * processing rate, therefore we allow the "accept-thread" to queue up
+ * a generous number of connections, since we'd rather have the client
+ * not time out unnecessarily if we can avoid it. (The assumption is
+ * that this will be used for FSMonitor and a few-second wait on a
+ * connection is better than having the client time out and do the full
+ * computation itself.)
+ *
+ * The FIFO queue size is set to a multiple of the worker pool size.
+ * This value was chosen at random.
+ */
+#define FIFO_SCALE (100)
+
+/*
+ * The backlog value for `listen(2)`. This doesn't need to be huge,
+ * rather just large enough for our "accept-thread" to wake up and
+ * queue incoming connections onto the FIFO without the kernel
+ * dropping any.
+ *
+ * This value was chosen at random.
+ */
+#define LISTEN_BACKLOG (50)
+
+static struct unix_stream_server_socket *create_listener_socket(
+	const char *path,
+	const struct ipc_server_opts *ipc_opts)
+{
+	struct unix_stream_server_socket *server_socket = NULL;
+	struct unix_stream_listen_opts uslg_opts = UNIX_STREAM_LISTEN_OPTS_INIT;
+
+	uslg_opts.listen_backlog_size = LISTEN_BACKLOG;
+	uslg_opts.disallow_chdir = ipc_opts->uds_disallow_chdir;
+
+	server_socket = unix_stream_server__listen_with_lock(path, &uslg_opts);
+	if (!server_socket)
+		return NULL;
+
+	if (set_socket_blocking_flag(server_socket->fd_socket, 1)) {
+		int saved_errno = errno;
+		error_errno(_("could not set listener socket nonblocking '%s'"),
+			    path);
+		unix_stream_server__free(server_socket);
+		errno = saved_errno;
+		return NULL;
+	}
+
+	trace2_data_string("ipc-server", NULL, "listen-with-lock", path);
+	return server_socket;
+}
+
+static struct unix_stream_server_socket *setup_listener_socket(
+	const char *path,
+	const struct ipc_server_opts *ipc_opts)
+{
+	struct unix_stream_server_socket *server_socket;
+
+	trace2_region_enter("ipc-server", "create-listener_socket", NULL);
+	server_socket = create_listener_socket(path, ipc_opts);
+	trace2_region_leave("ipc-server", "create-listener_socket", NULL);
+
+	return server_socket;
+}
+
+/*
+ * Start the IPC server in a pool of background threads.
+ */
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data)
+{
+	struct unix_stream_server_socket *server_socket = NULL;
+	struct ipc_server_data *server_data;
+	int sv[2];
+	int k;
+	int nr_threads = opts->nr_threads;
+
+	*returned_server_data = NULL;
+
+	/*
+	 * Create a socketpair and set sv[1] to non-blocking. This
+	 * will be used to send a shutdown message to the accept-thread
+	 * and allows the accept-thread to wait on EITHER a client
+	 * connection or a shutdown request without spinning.
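+	 *
+	 * sv[0] (fd_send_shutdown) is held so that ipc_server_stop_async()
+	 * can write a wake-up byte; sv[1] (fd_wait_shutdown) is the end
+	 * that the accept-thread polls alongside the listener socket.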
+	 */
+	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
+		return error_errno(_("could not create socketpair for '%s'"),
+				   path);
+
+	if (set_socket_blocking_flag(sv[1], 1)) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return error_errno(_("making socketpair nonblocking '%s'"),
+				   path);
+	}
+
+	server_socket = setup_listener_socket(path, opts);
+	if (!server_socket) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return -1;
+	}
+
+	server_data = xcalloc(1, sizeof(*server_data));
+	server_data->magic = MAGIC_SERVER_DATA;
+	server_data->application_cb = application_cb;
+	server_data->application_data = application_data;
+	strbuf_init(&server_data->buf_path, 0);
+	strbuf_addstr(&server_data->buf_path, path);
+
+	if (nr_threads < 1)
+		nr_threads = 1;
+
+	pthread_mutex_init(&server_data->work_available_mutex, NULL);
+	pthread_cond_init(&server_data->work_available_cond, NULL);
+
+	server_data->queue_size = nr_threads * FIFO_SCALE;
+	server_data->fifo_fds = xcalloc(server_data->queue_size,
+					sizeof(*server_data->fifo_fds));
+
+	server_data->accept_thread =
+		xcalloc(1, sizeof(*server_data->accept_thread));
+	server_data->accept_thread->magic = MAGIC_ACCEPT_THREAD_DATA;
+	server_data->accept_thread->server_data = server_data;
+	server_data->accept_thread->server_socket = server_socket;
+	server_data->accept_thread->fd_send_shutdown = sv[0];
+	server_data->accept_thread->fd_wait_shutdown = sv[1];
+
+	if (pthread_create(&server_data->accept_thread->pthread_id, NULL,
+			   accept_thread_proc, server_data->accept_thread))
+		die_errno(_("could not start accept_thread '%s'"), path);
+
+	for (k = 0; k < nr_threads; k++) {
+		struct ipc_worker_thread_data *wtd;
+
+		wtd = xcalloc(1, sizeof(*wtd));
+		wtd->magic = MAGIC_WORKER_THREAD_DATA;
+		wtd->server_data = server_data;
+
+		if (pthread_create(&wtd->pthread_id, NULL, worker_thread_proc,
+				   wtd)) {
+			if (k == 0)
+				die(_("could not start worker[0] for '%s'"),
+				    path);
+			/*
+			 * Limp along with the thread pool that we have.
+			 */
+			break;
+		}
+
+		wtd->next_thread = server_data->worker_thread_list;
+		server_data->worker_thread_list = wtd;
+	}
+
+	*returned_server_data = server_data;
+	return 0;
+}
+
+/*
+ * Gently tell the IPC server threads to shut down.
+ * Can be run on any thread.
+ */
+int ipc_server_stop_async(struct ipc_server_data *server_data)
+{
+	/* ASSERT NOT holding mutex */
+
+	int fd;
+
+	if (!server_data)
+		return 0;
+
+	trace2_region_enter("ipc-server", "server-stop-async", NULL);
+
+	pthread_mutex_lock(&server_data->work_available_mutex);
+
+	server_data->shutdown_requested = 1;
+
+	/*
+	 * Write a byte to the shutdown socket pair to wake up the
+	 * accept-thread.
+	 */
+	if (write(server_data->accept_thread->fd_send_shutdown, "Q", 1) < 0)
+		error_errno("could not write to fd_send_shutdown");
+
+	/*
+	 * Drain the queue of existing connections.
+	 */
+	while ((fd = fifo_dequeue(server_data)) != -1)
+		close(fd);
+
+	/*
+	 * Gently tell worker threads to stop processing new connections
+	 * and exit. (This does not abort in-progress conversations.)
+	 */
+	pthread_cond_broadcast(&server_data->work_available_cond);
+
+	pthread_mutex_unlock(&server_data->work_available_mutex);
+
+	trace2_region_leave("ipc-server", "server-stop-async", NULL);
+
+	return 0;
+}
+
+/*
+ * Wait for all IPC server threads to stop.
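+ *
+ * This is a blocking join: a shutdown must already have been requested
+ * (for example via ipc_server_stop_async() or an application callback
+ * returning SIMPLE_IPC_QUIT), and we wait for the accept-thread and
+ * every worker thread to exit before returning.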
+ */ +int ipc_server_await(struct ipc_server_data *server_data) +{ + pthread_join(server_data->accept_thread->pthread_id, NULL); + + if (!server_data->shutdown_requested) + BUG("ipc-server: accept-thread stopped for '%s'", + server_data->buf_path.buf); + + while (server_data->worker_thread_list) { + struct ipc_worker_thread_data *wtd = + server_data->worker_thread_list; + + pthread_join(wtd->pthread_id, NULL); + + server_data->worker_thread_list = wtd->next_thread; + free(wtd); + } + + server_data->is_stopped = 1; + + return 0; +} + +void ipc_server_free(struct ipc_server_data *server_data) +{ + struct ipc_accept_thread_data * accept_thread_data; + + if (!server_data) + return; + + if (!server_data->is_stopped) + BUG("cannot free ipc-server while running for '%s'", + server_data->buf_path.buf); + + accept_thread_data = server_data->accept_thread; + if (accept_thread_data) { + unix_stream_server__free(accept_thread_data->server_socket); + + if (accept_thread_data->fd_send_shutdown != -1) + close(accept_thread_data->fd_send_shutdown); + if (accept_thread_data->fd_wait_shutdown != -1) + close(accept_thread_data->fd_wait_shutdown); + + free(server_data->accept_thread); + } + + while (server_data->worker_thread_list) { + struct ipc_worker_thread_data *wtd = + server_data->worker_thread_list; + + server_data->worker_thread_list = wtd->next_thread; + free(wtd); + } + + pthread_cond_destroy(&server_data->work_available_cond); + pthread_mutex_destroy(&server_data->work_available_mutex); + + strbuf_release(&server_data->buf_path); + + free(server_data->fifo_fds); + free(server_data); +} diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt index 4bd41054ee70..4c27a373414a 100644 --- a/contrib/buildsystems/CMakeLists.txt +++ b/contrib/buildsystems/CMakeLists.txt @@ -248,6 +248,8 @@ endif() if(CMAKE_SYSTEM_NAME STREQUAL "Windows") list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c) +else() + list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-unix-socket.c) endif() set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX}) diff --git a/simple-ipc.h b/simple-ipc.h index a3f96b42cca2..f7e72e966f9a 100644 --- a/simple-ipc.h +++ b/simple-ipc.h @@ -5,7 +5,7 @@ * See Documentation/technical/api-simple-ipc.txt */ -#if defined(GIT_WINDOWS_NATIVE) +#if defined(GIT_WINDOWS_NATIVE) || !defined(NO_UNIX_SOCKETS) #define SUPPORTS_SIMPLE_IPC #endif @@ -62,11 +62,17 @@ struct ipc_client_connect_options { * the service and need to wait for it to become ready. */ unsigned int wait_if_not_found:1; + + /* + * Disallow chdir() when creating a Unix domain socket. + */ + unsigned int uds_disallow_chdir:1; }; #define IPC_CLIENT_CONNECT_OPTIONS_INIT { \ .wait_if_busy = 0, \ .wait_if_not_found = 0, \ + .uds_disallow_chdir = 0, \ } /* @@ -159,6 +165,11 @@ struct ipc_server_data; struct ipc_server_opts { int nr_threads; + + /* + * Disallow chdir() when creating a Unix domain socket. 
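+	 * (chdir() changes process-wide state, so a server that runs its
+	 * listener and workers on multiple threads may want to set this.)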
+ */ + unsigned int uds_disallow_chdir:1; }; /* From patchwork Sat Feb 13 00:09:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12086377 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 134EFC433DB for ; Sat, 13 Feb 2021 00:11:44 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BDE9C64DC3 for ; Sat, 13 Feb 2021 00:11:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231835AbhBMALf (ORCPT ); Fri, 12 Feb 2021 19:11:35 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49368 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231753AbhBMAKy (ORCPT ); Fri, 12 Feb 2021 19:10:54 -0500 Received: from mail-wm1-x334.google.com (mail-wm1-x334.google.com [IPv6:2a00:1450:4864:20::334]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A3605C0617A9 for ; Fri, 12 Feb 2021 16:09:25 -0800 (PST) Received: by mail-wm1-x334.google.com with SMTP id a16so1372171wmm.0 for ; Fri, 12 Feb 2021 16:09:25 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=BotalnfufejzC+XjwCf1vvrEZjvDIasfuvdQCQUcLtw=; b=JI6+HoGvv9M37INw6ICow2SKcERxyTxtZoUz/hN/AT3ZnOEX4SI+A4nHRHBUn1g/X6 TH2i2nXnyKhykAku+a9Py9BXAaEEA1imsvZzLYH/I1ONvxbbcKwknZ28Ol/pPgqtOTAD DaHvp1FevsDy6kqUn6/jIU0bmFdZksWA3yB34n5RaRQtyumaUjML9bn2e534D+nuGnu+ m+PIkT47P+V/r9h+9hBN7Q26TZDB42kHB5YRu8EO0+5NSkWiVc5gNhi4usIKK50bErlE Z0IhXIwW5pKRsN8WR/GoZitKaT8oYAkQ285UbAizyVhp6fw3U1ERv+yDwkzItXMZvfyD XLHw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=BotalnfufejzC+XjwCf1vvrEZjvDIasfuvdQCQUcLtw=; b=QTgIARspMexe1RHo1FUU9kNoct1j4K9NHasO7K0mYsH/CqGNoPDCON77pf++vJWnT0 SttDw6U5J2M7Mnl3S5TMO4Rhei6s1GgmabRSJj0Jz178L9yM8V79d51DwEdP1HqtU3Ds 84r5E8KEUvsuJXG2oqxaMlbzMQRc1fcmYtUDDSYmH6tZM/O/9Ww8thDdvYkFAT0/+jcL elLe4xhCDx45qP57dBJfUM2imflHeQo2FsB029ckms0slPz6X7ArFreQAKYGbwEK0QVK S2EsHYvOqKa3RnJaby6nwGVp25yNWnDdoqH6TudQQHD2hMwuwqOwA9Oa5hqgR1ds2x0A bEKQ== X-Gm-Message-State: AOAM531OBgB/ghN8Rue6VWD4y0bzqUTPRk6hDeJr2amiMQZUIqG5RWrI g7jWQgbJYiTH7AXuZelS1EeKNVka17M= X-Google-Smtp-Source: ABdhPJwSCGjIY2bV/HMLX6BOoSqjcvxIpX9fECmxiIin1tF6M5qSa5Lk3fOoCFIgYFo/g3QyaUS8aQ== X-Received: by 2002:a1c:b6c5:: with SMTP id g188mr4698175wmf.27.1613174963672; Fri, 12 Feb 2021 16:09:23 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id 36sm14354213wrj.97.2021.02.12.16.09.23 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 12 Feb 2021 16:09:23 -0800 (PST) Message-Id: <1e5c856ade8557d7514d9bee1c58bf978aba062c.1613174954.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Sat, 13 Feb 2021 
00:09:13 +0000 Subject: [PATCH v3 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Create t0052-simple-ipc.sh with unit tests for the "simple-ipc" mechanism. Create t/helper/test-simple-ipc test tool to exercise the "simple-ipc" functions. When the tool is invoked with "run-daemon", it runs a server to listen for "simple-ipc" connections on a test socket or named pipe and responds to a set of commands to exercise/stress the communication setup. When the tool is invoked with "start-daemon", it spawns a "run-daemon" command in the background and waits for the server to become ready before exiting. (This helps make unit tests in t0052 more predictable and avoids the need for arbitrary sleeps in the test script.) The tool also has a series of client "send" commands to send commands and data to a server instance. Signed-off-by: Jeff Hostetler --- Makefile | 1 + t/helper/test-simple-ipc.c | 713 +++++++++++++++++++++++++++++++++++++ t/helper/test-tool.c | 1 + t/helper/test-tool.h | 1 + t/t0052-simple-ipc.sh | 134 +++++++ 5 files changed, 850 insertions(+) create mode 100644 t/helper/test-simple-ipc.c create mode 100755 t/t0052-simple-ipc.sh diff --git a/Makefile b/Makefile index 08a4c88b92f5..93f2e7ca9e1f 100644 --- a/Makefile +++ b/Makefile @@ -740,6 +740,7 @@ TEST_BUILTINS_OBJS += test-serve-v2.o TEST_BUILTINS_OBJS += test-sha1.o TEST_BUILTINS_OBJS += test-sha256.o TEST_BUILTINS_OBJS += test-sigchain.o +TEST_BUILTINS_OBJS += test-simple-ipc.o TEST_BUILTINS_OBJS += test-strcmp-offset.o TEST_BUILTINS_OBJS += test-string-list.o TEST_BUILTINS_OBJS += test-submodule-config.o diff --git a/t/helper/test-simple-ipc.c b/t/helper/test-simple-ipc.c new file mode 100644 index 000000000000..92aa7f843cfa --- /dev/null +++ b/t/helper/test-simple-ipc.c @@ -0,0 +1,713 @@ +/* + * test-simple-ipc.c: verify that the Inter-Process Communication works. + */ + +#include "test-tool.h" +#include "cache.h" +#include "strbuf.h" +#include "simple-ipc.h" +#include "parse-options.h" +#include "thread-utils.h" +#include "strvec.h" + +#ifndef SUPPORTS_SIMPLE_IPC +int cmd__simple_ipc(int argc, const char **argv) +{ + die("simple IPC not available on this platform"); +} +#else + +/* + * The test daemon defines an "application callback" that supports a + * series of commands (see `test_app_cb()`). + * + * Unknown commands are caught here and we send an error message back + * to the client process. + */ +static int app__unhandled_command(const char *command, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int ret; + + strbuf_addf(&buf, "unhandled command: %s", command); + ret = reply_cb(reply_data, buf.buf, buf.len); + strbuf_release(&buf); + + return ret; +} + +/* + * Reply with a single very large buffer. This is to ensure that + * long response are properly handled -- whether the chunking occurs + * in the kernel or in the (probably pkt-line) layer. 
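+ *
+ * (With BIG_ROWS at 10000 and 81 bytes per row, the reply is roughly
+ * 800KB, far more than a single pkt-line can carry, so it must be
+ * split across many packets on its way back to the client.)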
+ */ +#define BIG_ROWS (10000) +static int app__big_command(ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int row; + int ret; + + for (row = 0; row < BIG_ROWS; row++) + strbuf_addf(&buf, "big: %.75d\n", row); + + ret = reply_cb(reply_data, buf.buf, buf.len); + strbuf_release(&buf); + + return ret; +} + +/* + * Reply with a series of lines. This is to ensure that we can incrementally + * compute the response and chunk it to the client. + */ +#define CHUNK_ROWS (10000) +static int app__chunk_command(ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int row; + int ret; + + for (row = 0; row < CHUNK_ROWS; row++) { + strbuf_setlen(&buf, 0); + strbuf_addf(&buf, "big: %.75d\n", row); + ret = reply_cb(reply_data, buf.buf, buf.len); + } + + strbuf_release(&buf); + + return ret; +} + +/* + * Slowly reply with a series of lines. This is to model an expensive to + * compute chunked response (which might happen if this callback is running + * in a thread and is fighting for a lock with other threads). + */ +#define SLOW_ROWS (1000) +#define SLOW_DELAY_MS (10) +static int app__slow_command(ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int row; + int ret; + + for (row = 0; row < SLOW_ROWS; row++) { + strbuf_setlen(&buf, 0); + strbuf_addf(&buf, "big: %.75d\n", row); + ret = reply_cb(reply_data, buf.buf, buf.len); + sleep_millisec(SLOW_DELAY_MS); + } + + strbuf_release(&buf); + + return ret; +} + +/* + * The client sent a command followed by a (possibly very) large buffer. + */ +static int app__sendbytes_command(const char *received, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf_resp = STRBUF_INIT; + const char *p = "?"; + int len_ballast = 0; + int k; + int errs = 0; + int ret; + + if (skip_prefix(received, "sendbytes ", &p)) + len_ballast = strlen(p); + + /* + * Verify that the ballast is n copies of a single letter. + * And that the multi-threaded IO layer didn't cross the streams. + */ + for (k = 1; k < len_ballast; k++) + if (p[k] != p[0]) + errs++; + + if (errs) + strbuf_addf(&buf_resp, "errs:%d\n", errs); + else + strbuf_addf(&buf_resp, "rcvd:%c%08d\n", p[0], len_ballast); + + ret = reply_cb(reply_data, buf_resp.buf, buf_resp.len); + + strbuf_release(&buf_resp); + + return ret; +} + +/* + * An arbitrary fixed address to verify that the application instance + * data is handled properly. + */ +static int my_app_data = 42; + +static ipc_server_application_cb test_app_cb; + +/* + * This is "application callback" that sits on top of the "ipc-server". + * It completely defines the set of command verbs supported by this + * application. + */ +static int test_app_cb(void *application_data, + const char *command, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + /* + * Verify that we received the application-data that we passed + * when we started the ipc-server. (We have several layers of + * callbacks calling callbacks and it's easy to get things mixed + * up (especially when some are "void*").) + */ + if (application_data != (void*)&my_app_data) + BUG("application_cb: application_data pointer wrong"); + + if (!strcmp(command, "quit")) { + /* + * The client sent a "quit" command. This is an async + * request for the server to shutdown. 
+ * + * We DO NOT send the client a response message + * (because we have nothing to say and the other + * server threads have not yet stopped). + * + * Tell the ipc-server layer to start shutting down. + * This includes: stop listening for new connections + * on the socket/pipe and telling all worker threads + * to finish/drain their outgoing responses to other + * clients. + * + * This DOES NOT force an immediate sync shutdown. + */ + return SIMPLE_IPC_QUIT; + } + + if (!strcmp(command, "ping")) { + const char *answer = "pong"; + return reply_cb(reply_data, answer, strlen(answer)); + } + + if (!strcmp(command, "big")) + return app__big_command(reply_cb, reply_data); + + if (!strcmp(command, "chunk")) + return app__chunk_command(reply_cb, reply_data); + + if (!strcmp(command, "slow")) + return app__slow_command(reply_cb, reply_data); + + if (starts_with(command, "sendbytes ")) + return app__sendbytes_command(command, reply_cb, reply_data); + + return app__unhandled_command(command, reply_cb, reply_data); +} + +/* + * This process will run as a simple-ipc server and listen for IPC commands + * from client processes. + */ +static int daemon__run_server(const char *path, int argc, const char **argv) +{ + struct ipc_server_opts opts = { + .nr_threads = 5 + }; + + const char * const daemon_usage[] = { + N_("test-helper simple-ipc run-daemon ["), + NULL + }; + struct option daemon_options[] = { + OPT_INTEGER(0, "threads", &opts.nr_threads, + N_("number of threads in server thread pool")), + OPT_END() + }; + + argc = parse_options(argc, argv, NULL, daemon_options, daemon_usage, 0); + + if (opts.nr_threads < 1) + opts.nr_threads = 1; + + /* + * Synchronously run the ipc-server. We don't need any application + * instance data, so pass an arbitrary pointer (that we'll later + * verify made the round trip). + */ + return ipc_server_run(path, &opts, test_app_cb, (void*)&my_app_data); +} + +#ifndef GIT_WINDOWS_NATIVE +/* + * This is adapted from `daemonize()`. Use `fork()` to directly create and + * run the daemon in a child process. + */ +static int spawn_server(const char *path, + const struct ipc_server_opts *opts, + pid_t *pid) +{ + *pid = fork(); + + switch (*pid) { + case 0: + if (setsid() == -1) + error_errno(_("setsid failed")); + close(0); + close(1); + close(2); + sanitize_stdfds(); + + return ipc_server_run(path, opts, test_app_cb, (void*)&my_app_data); + + case -1: + return error_errno(_("could not spawn daemon in the background")); + + default: + return 0; + } +} +#else +/* + * Conceptually like `daemonize()` but different because Windows does not + * have `fork(2)`. Spawn a normal Windows child process but without the + * limitations of `start_command()` and `finish_command()`. + */ +static int spawn_server(const char *path, + const struct ipc_server_opts *opts, + pid_t *pid) +{ + char test_tool_exe[MAX_PATH]; + struct strvec args = STRVEC_INIT; + int in, out; + + GetModuleFileNameA(NULL, test_tool_exe, MAX_PATH); + + in = open("/dev/null", O_RDONLY); + out = open("/dev/null", O_WRONLY); + + strvec_push(&args, test_tool_exe); + strvec_push(&args, "simple-ipc"); + strvec_push(&args, "run-daemon"); + strvec_pushf(&args, "--threads=%d", opts->nr_threads); + + *pid = mingw_spawnvpe(args.v[0], args.v, NULL, NULL, in, out, out); + close(in); + close(out); + + strvec_clear(&args); + + if (*pid < 0) + return error(_("could not spawn daemon in the background")); + + return 0; +} +#endif + +/* + * This is adapted from `wait_or_whine()`. 
Watch the child process and + * let it get started and begin listening for requests on the socket + * before reporting our success. + */ +static int wait_for_server_startup(const char * path, pid_t pid_child, + int max_wait_sec) +{ + int status; + pid_t pid_seen; + enum ipc_active_state s; + time_t time_limit, now; + + time(&time_limit); + time_limit += max_wait_sec; + + for (;;) { + pid_seen = waitpid(pid_child, &status, WNOHANG); + + if (pid_seen == -1) + return error_errno(_("waitpid failed")); + + else if (pid_seen == 0) { + /* + * The child is still running (this should be + * the normal case). Try to connect to it on + * the socket and see if it is ready for + * business. + * + * If there is another daemon already running, + * our child will fail to start (possibly + * after a timeout on the lock), but we don't + * care (who responds) if the socket is live. + */ + s = ipc_get_active_state(path); + if (s == IPC_STATE__LISTENING) + return 0; + + time(&now); + if (now > time_limit) + return error(_("daemon not online yet")); + + continue; + } + + else if (pid_seen == pid_child) { + /* + * The new child daemon process shutdown while + * it was starting up, so it is not listening + * on the socket. + * + * Try to ping the socket in the odd chance + * that another daemon started (or was already + * running) while our child was starting. + * + * Again, we don't care who services the socket. + */ + s = ipc_get_active_state(path); + if (s == IPC_STATE__LISTENING) + return 0; + + /* + * We don't care about the WEXITSTATUS() nor + * any of the WIF*(status) values because + * `cmd__simple_ipc()` does the `!!result` + * trick on all function return values. + * + * So it is sufficient to just report the + * early shutdown as an error. + */ + return error(_("daemon failed to start")); + } + + else + return error(_("waitpid is confused")); + } +} + +/* + * This process will start a simple-ipc server in a background process and + * wait for it to become ready. This is like `daemonize()` but gives us + * more control and better error reporting (and makes it easier to write + * unit tests). + */ +static int daemon__start_server(const char *path, int argc, const char **argv) +{ + pid_t pid_child; + int ret; + int max_wait_sec = 60; + struct ipc_server_opts opts = { + .nr_threads = 5 + }; + + const char * const daemon_usage[] = { + N_("test-helper simple-ipc start-daemon ["), + NULL + }; + + struct option daemon_options[] = { + OPT_INTEGER(0, "max-wait", &max_wait_sec, + N_("seconds to wait for daemon to startup")), + OPT_INTEGER(0, "threads", &opts.nr_threads, + N_("number of threads in server thread pool")), + OPT_END() + }; + + argc = parse_options(argc, argv, NULL, daemon_options, daemon_usage, 0); + + if (max_wait_sec < 0) + max_wait_sec = 0; + if (opts.nr_threads < 1) + opts.nr_threads = 1; + + /* + * Run the actual daemon in a background process. + */ + ret = spawn_server(path, &opts, &pid_child); + if (pid_child <= 0) + return ret; + + /* + * Let the parent wait for the child process to get started + * and begin listening for requests on the socket. + */ + ret = wait_for_server_startup(path, pid_child, max_wait_sec); + + return ret; +} + +/* + * This process will run a quick probe to see if a simple-ipc server + * is active on this path. + * + * Returns 0 if the server is alive. 
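+ * Returns nonzero (after printing an error) for any other
+ * ipc_get_active_state() result.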
+ */ +static int client__probe_server(const char *path) +{ + enum ipc_active_state s; + + s = ipc_get_active_state(path); + switch (s) { + case IPC_STATE__LISTENING: + return 0; + + case IPC_STATE__NOT_LISTENING: + return error("no server listening at '%s'", path); + + case IPC_STATE__PATH_NOT_FOUND: + return error("path not found '%s'", path); + + case IPC_STATE__INVALID_PATH: + return error("invalid pipe/socket name '%s'", path); + + case IPC_STATE__OTHER_ERROR: + default: + return error("other error for '%s'", path); + } +} + +/* + * Send an IPC command to an already-running server daemon and print the + * response. + * + * argv[2] contains a simple (1 word) command verb that `test_app_cb()` + * (in the daemon process) will understand. + */ +static int client__send_ipc(int argc, const char **argv, const char *path) +{ + const char *command = argc > 2 ? argv[2] : "(no command)"; + struct strbuf buf = STRBUF_INIT; + struct ipc_client_connect_options options + = IPC_CLIENT_CONNECT_OPTIONS_INIT; + + options.wait_if_busy = 1; + options.wait_if_not_found = 0; + + if (!ipc_client_send_command(path, &options, command, &buf)) { + if (buf.len) { + printf("%s\n", buf.buf); + fflush(stdout); + } + strbuf_release(&buf); + + return 0; + } + + return error("failed to send '%s' to '%s'", command, path); +} + +/* + * Send an IPC command followed by ballast to confirm that a large + * message can be sent and that the kernel or pkt-line layers will + * properly chunk it and that the daemon receives the entire message. + */ +static int do_sendbytes(int bytecount, char byte, const char *path, + const struct ipc_client_connect_options *options) +{ + struct strbuf buf_send = STRBUF_INIT; + struct strbuf buf_resp = STRBUF_INIT; + + strbuf_addstr(&buf_send, "sendbytes "); + strbuf_addchars(&buf_send, byte, bytecount); + + if (!ipc_client_send_command(path, options, buf_send.buf, &buf_resp)) { + strbuf_rtrim(&buf_resp); + printf("sent:%c%08d %s\n", byte, bytecount, buf_resp.buf); + fflush(stdout); + strbuf_release(&buf_send); + strbuf_release(&buf_resp); + + return 0; + } + + return error("client failed to sendbytes(%d, '%c') to '%s'", + bytecount, byte, path); +} + +/* + * Send an IPC command with ballast to an already-running server daemon. 
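+ *
+ * The ballast is `--bytecount` copies of the single `--byte` character;
+ * the daemon answers with a "rcvd:<byte><count>" summary that we print
+ * alongside what we sent.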
+ */ +static int client__sendbytes(int argc, const char **argv, const char *path) +{ + int bytecount = 1024; + char *string = "x"; + const char * const sendbytes_usage[] = { + N_("test-helper simple-ipc sendbytes []"), + NULL + }; + struct option sendbytes_options[] = { + OPT_INTEGER(0, "bytecount", &bytecount, N_("number of bytes")), + OPT_STRING(0, "byte", &string, N_("byte"), N_("ballast")), + OPT_END() + }; + struct ipc_client_connect_options options + = IPC_CLIENT_CONNECT_OPTIONS_INIT; + + options.wait_if_busy = 1; + options.wait_if_not_found = 0; + options.uds_disallow_chdir = 0; + + argc = parse_options(argc, argv, NULL, sendbytes_options, sendbytes_usage, 0); + + return do_sendbytes(bytecount, string[0], path, &options); +} + +struct multiple_thread_data { + pthread_t pthread_id; + struct multiple_thread_data *next; + const char *path; + int bytecount; + int batchsize; + int sum_errors; + int sum_good; + char letter; +}; + +static void *multiple_thread_proc(void *_multiple_thread_data) +{ + struct multiple_thread_data *d = _multiple_thread_data; + int k; + struct ipc_client_connect_options options + = IPC_CLIENT_CONNECT_OPTIONS_INIT; + + options.wait_if_busy = 1; + options.wait_if_not_found = 0; + /* + * A multi-threaded client should not be randomly calling chdir(). + * The test will pass without this restriction because the test is + * not otherwise accessing the filesystem, but it makes us honest. + */ + options.uds_disallow_chdir = 1; + + trace2_thread_start("multiple"); + + for (k = 0; k < d->batchsize; k++) { + if (do_sendbytes(d->bytecount + k, d->letter, d->path, &options)) + d->sum_errors++; + else + d->sum_good++; + } + + trace2_thread_exit(); + return NULL; +} + +/* + * Start a client-side thread pool. Each thread sends a series of + * IPC requests. Each request is on a new connection to the server. + */ +static int client__multiple(int argc, const char **argv, const char *path) +{ + struct multiple_thread_data *list = NULL; + int k; + int nr_threads = 5; + int bytecount = 1; + int batchsize = 10; + int sum_join_errors = 0; + int sum_thread_errors = 0; + int sum_good = 0; + + const char * const multiple_usage[] = { + N_("test-helper simple-ipc multiple []"), + NULL + }; + struct option multiple_options[] = { + OPT_INTEGER(0, "bytecount", &bytecount, N_("number of bytes")), + OPT_INTEGER(0, "threads", &nr_threads, N_("number of threads")), + OPT_INTEGER(0, "batchsize", &batchsize, N_("number of requests per thread")), + OPT_END() + }; + + argc = parse_options(argc, argv, NULL, multiple_options, multiple_usage, 0); + + if (bytecount < 1) + bytecount = 1; + if (nr_threads < 1) + nr_threads = 1; + if (batchsize < 1) + batchsize = 1; + + for (k = 0; k < nr_threads; k++) { + struct multiple_thread_data *d = xcalloc(1, sizeof(*d)); + d->next = list; + d->path = path; + d->bytecount = bytecount + batchsize*(k/26); + d->batchsize = batchsize; + d->sum_errors = 0; + d->sum_good = 0; + d->letter = 'A' + (k % 26); + + if (pthread_create(&d->pthread_id, NULL, multiple_thread_proc, d)) { + warning("failed to create thread[%d] skipping remainder", k); + free(d); + break; + } + + list = d; + } + + while (list) { + struct multiple_thread_data *d = list; + + if (pthread_join(d->pthread_id, NULL)) + sum_join_errors++; + + sum_thread_errors += d->sum_errors; + sum_good += d->sum_good; + + list = d->next; + free(d); + } + + printf("client (good %d) (join %d), (errors %d)\n", + sum_good, sum_join_errors, sum_thread_errors); + + return (sum_join_errors + sum_thread_errors) ? 
1 : 0; +} + +int cmd__simple_ipc(int argc, const char **argv) +{ + const char *path = "ipc-test"; + + if (argc == 2 && !strcmp(argv[1], "SUPPORTS_SIMPLE_IPC")) + return 0; + + /* + * Use '!!' on all dispatch functions to map from `error()` style + * (returns -1) style to `test_must_fail` style (expects 1). This + * makes shell error messages less confusing. + */ + + if (argc == 2 && !strcmp(argv[1], "is-active")) + return !!client__probe_server(path); + + if (argc >= 2 && !strcmp(argv[1], "run-daemon")) + return !!daemon__run_server(path, argc, argv); + + if (argc >= 2 && !strcmp(argv[1], "start-daemon")) + return !!daemon__start_server(path, argc, argv); + + /* + * Client commands follow. Ensure a server is running before + * going any further. + */ + if (client__probe_server(path)) + return 1; + + if ((argc == 2 || argc == 3) && !strcmp(argv[1], "send")) + return !!client__send_ipc(argc, argv, path); + + if (argc >= 2 && !strcmp(argv[1], "sendbytes")) + return !!client__sendbytes(argc, argv, path); + + if (argc >= 2 && !strcmp(argv[1], "multiple")) + return !!client__multiple(argc, argv, path); + + die("Unhandled argv[1]: '%s'", argv[1]); +} +#endif diff --git a/t/helper/test-tool.c b/t/helper/test-tool.c index 9d6d14d92937..a409655f03b5 100644 --- a/t/helper/test-tool.c +++ b/t/helper/test-tool.c @@ -64,6 +64,7 @@ static struct test_cmd cmds[] = { { "sha1", cmd__sha1 }, { "sha256", cmd__sha256 }, { "sigchain", cmd__sigchain }, + { "simple-ipc", cmd__simple_ipc }, { "strcmp-offset", cmd__strcmp_offset }, { "string-list", cmd__string_list }, { "submodule-config", cmd__submodule_config }, diff --git a/t/helper/test-tool.h b/t/helper/test-tool.h index a6470ff62c42..564eb3c8e911 100644 --- a/t/helper/test-tool.h +++ b/t/helper/test-tool.h @@ -54,6 +54,7 @@ int cmd__sha1(int argc, const char **argv); int cmd__oid_array(int argc, const char **argv); int cmd__sha256(int argc, const char **argv); int cmd__sigchain(int argc, const char **argv); +int cmd__simple_ipc(int argc, const char **argv); int cmd__strcmp_offset(int argc, const char **argv); int cmd__string_list(int argc, const char **argv); int cmd__submodule_config(int argc, const char **argv); diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh new file mode 100755 index 000000000000..e36b786709ec --- /dev/null +++ b/t/t0052-simple-ipc.sh @@ -0,0 +1,134 @@ +#!/bin/sh + +test_description='simple command server' + +. 
./test-lib.sh
+
+test-tool simple-ipc SUPPORTS_SIMPLE_IPC || {
+	skip_all='simple IPC not supported on this platform'
+	test_done
+}
+
+stop_simple_IPC_server () {
+	test-tool simple-ipc send quit
+}
+
+test_expect_success 'start simple command server' '
+	test_atexit stop_simple_IPC_server &&
+	test-tool simple-ipc start-daemon --threads=8 &&
+	test-tool simple-ipc is-active
+'
+
+test_expect_success 'simple command server' '
+	test-tool simple-ipc send ping >actual &&
+	echo pong >expect &&
+	test_cmp expect actual
+'
+
+test_expect_success 'servers cannot share the same path' '
+	test_must_fail test-tool simple-ipc run-daemon &&
+	test-tool simple-ipc is-active
+'
+
+test_expect_success 'big response' '
+	test-tool simple-ipc send big >actual &&
+	test_line_count -ge 10000 actual &&
+	grep -q "big: [0]*9999\$" actual
+'
+
+test_expect_success 'chunk response' '
+	test-tool simple-ipc send chunk >actual &&
+	test_line_count -ge 10000 actual &&
+	grep -q "big: [0]*9999\$" actual
+'
+
+test_expect_success 'slow response' '
+	test-tool simple-ipc send slow >actual &&
+	test_line_count -ge 100 actual &&
+	grep -q "big: [0]*99\$" actual
+'
+
+# Send an IPC with n=100,000 bytes of ballast. This should be large enough
+# to force both the kernel and the pkt-line layer to chunk the message to the
+# daemon and for the daemon to receive it in chunks.
+#
+test_expect_success 'sendbytes' '
+	test-tool simple-ipc sendbytes --bytecount=100000 --byte=A >actual &&
+	grep "sent:A00100000 rcvd:A00100000" actual
+'
+
+# Start a series of <threads> client threads that each make <batchsize>
+# IPC requests to the server. Each of the (<threads> * <batchsize>)
+# requests will open a new connection to the server and randomly bind
+# to a server thread. Each client thread exits after completing its
+# batch, so the total number of live client threads will be smaller
+# than the total number of requests. Each request will send a message
+# containing at least <bytecount> bytes of ballast. (Responses are small.)
+#
+# The purpose here is to test threading in the server and responding to
+# many concurrent client requests (regardless of whether they come from
+# 1 client process or many). And to test that the server side of the
+# named pipe/socket is stable. (On Windows this means that the server
+# pipe is properly recycled.)
+#
+# On Windows it also lets us adjust the connection timeout in the
+# `ipc_client_send_command()`.
+#
+# Note it is easy to drive the system into failure by requesting an
+# insane number of threads on client or server and/or increasing the
+# per-thread batchsize or the per-request bytecount (ballast).
+# On Windows these failures look like "pipe is busy" errors.
+# So I've chosen fairly conservative values for now.
+#
+# We expect output of the form "sent:<letter><length> ..."
+# With terms (7, 19, 13) we expect:
+#    <letter> in [A-G]
+#    <length> in [19+0 .. 19+(13-1)]
+#    and (7 * 13) successful responses.
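+# (7 threads x 13 requests = 91 "sent:" lines, plus the client process
+# prints a one-line summary, which is why the test below expects exactly
+# 92 lines and greps for "good 91".)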
+#
+test_expect_success 'stress test threads' '
+	test-tool simple-ipc multiple \
+		--threads=7 \
+		--bytecount=19 \
+		--batchsize=13 \
+		>actual &&
+	test_line_count = 92 actual &&
+	grep "good 91" actual &&
+	grep "sent:A" <actual >actual_a &&
+	cat >expect_a <<-EOF &&
+	sent:A00000019 rcvd:A00000019
+	sent:A00000020 rcvd:A00000020
+	sent:A00000021 rcvd:A00000021
+	sent:A00000022 rcvd:A00000022
+	sent:A00000023 rcvd:A00000023
+	sent:A00000024 rcvd:A00000024
+	sent:A00000025 rcvd:A00000025
+	sent:A00000026 rcvd:A00000026
+	sent:A00000027 rcvd:A00000027
+	sent:A00000028 rcvd:A00000028
+	sent:A00000029 rcvd:A00000029
+	sent:A00000030 rcvd:A00000030
+	sent:A00000031 rcvd:A00000031
+	EOF
+	test_cmp expect_a actual_a
+'
+
+# Sending a "quit" message to the server causes it to start an "async
+# shutdown" -- queuing shutdown events to all socket/pipe thread-pool
+# threads. Each thread will process that event after finishing
+# (draining) any in-progress IO with other clients. So when the "send
+# quit" client command exits, the ipc-server may still be running (but
+# it should be cleaning up).
+#
+# So, insert a generous sleep here to give the server time to shut down.
+#
+test_expect_success '`quit` works' '
+	test-tool simple-ipc send quit &&
+
+	sleep 5 &&
+
+	test_must_fail test-tool simple-ipc is-active &&
+	test_must_fail test-tool simple-ipc send ping
+'
+
+test_done