From patchwork Mon Mar 22 10:29:37 2021
From: Jeff Hostetler
Date: Mon, 22 Mar 2021 10:29:37 +0000
Subject: [PATCH v7 01/12] pkt-line: eliminate the need for static buffer in
 packet_write_gently()
Teach `packet_write_gently()` to write the pkt-line header and the
actual buffer in 2 separate calls to `write_in_full()` and avoid the
need for a static buffer, thread-safe scratch space, or an excessively
large stack buffer.

Change `write_packetized_from_fd()` to allocate a temporary buffer
rather than using a static buffer to avoid similar issues here.

These changes are intended to make it easier to use pkt-line routines
in a multi-threaded context with multiple concurrent writers writing
to different streams.

Signed-off-by: Jeff Hostetler
---
 pkt-line.c | 28 ++++++++++++++++++++--------
 1 file changed, 20 insertions(+), 8 deletions(-)

diff --git a/pkt-line.c b/pkt-line.c
index d633005ef746..66bd0ddfd1d0 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -196,17 +196,26 @@ int packet_write_fmt_gently(int fd, const char *fmt, ...)
 
 static int packet_write_gently(const int fd_out, const char *buf, size_t size)
 {
-	static char packet_write_buffer[LARGE_PACKET_MAX];
+	char header[4];
 	size_t packet_size;
 
-	if (size > sizeof(packet_write_buffer) - 4)
+	if (size > LARGE_PACKET_DATA_MAX)
 		return error(_("packet write failed - data exceeds max packet size"));
 
 	packet_trace(buf, size, 1);
+
 	packet_size = size + 4;
-	set_packet_header(packet_write_buffer, packet_size);
-	memcpy(packet_write_buffer + 4, buf, size);
-	if (write_in_full(fd_out, packet_write_buffer, packet_size) < 0)
+
+	set_packet_header(header, packet_size);
+
+	/*
+	 * Write the header and the buffer in 2 parts so that we do
+	 * not need to allocate a buffer or rely on a static buffer.
+	 * This also avoids putting a large buffer on the stack which
+	 * might have multi-threading issues.
+	 */
+
+	if (write_in_full(fd_out, header, 4) < 0 ||
+	    write_in_full(fd_out, buf, size) < 0)
 		return error(_("packet write failed"));
 	return 0;
 }
@@ -244,20 +253,23 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len)
 
 int write_packetized_from_fd(int fd_in, int fd_out)
 {
-	static char buf[LARGE_PACKET_DATA_MAX];
+	char *buf = xmalloc(LARGE_PACKET_DATA_MAX);
 	int err = 0;
 	ssize_t bytes_to_write;
 
 	while (!err) {
-		bytes_to_write = xread(fd_in, buf, sizeof(buf));
-		if (bytes_to_write < 0)
+		bytes_to_write = xread(fd_in, buf, LARGE_PACKET_DATA_MAX);
+		if (bytes_to_write < 0) {
+			free(buf);
 			return COPY_READ_ERROR;
+		}
 		if (bytes_to_write == 0)
 			break;
 		err = packet_write_gently(fd_out, buf, bytes_to_write);
 	}
 	if (!err)
 		err = packet_flush_gently(fd_out);
	free(buf);
 	return err;
 }
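The framing this patch relies on is small enough to show standalone.
Here is a minimal sketch of the same two-write pattern outside of Git
(the `pkt_write` helper name is ours, and unlike `write_in_full()` it
does not retry short writes):

```c
#include <stdio.h>
#include <unistd.h>

/* pkt-line framing: 4 hex digits encoding (payload length + 4). */
static int pkt_write(int fd, const char *buf, size_t len)
{
	char header[5];

	/* 65516 == LARGE_PACKET_DATA_MAX (65520 - 4) in Git's pkt-line.c */
	if (len > 65516)
		return -1;

	snprintf(header, sizeof(header), "%04x", (unsigned int)(len + 4));

	/* Two writes: no static buffer, no large stack buffer. */
	if (write(fd, header, 4) != 4)
		return -1;
	if (len && write(fd, buf, len) != (ssize_t)len)
		return -1;
	return 0;
}

int main(void)
{
	return pkt_write(STDOUT_FILENO, "hello\n", 6); /* emits "000ahello\n" */
}
```

Because the header and payload are never combined in one buffer, two
threads writing to *different* descriptors need no shared scratch space.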
From patchwork Mon Mar 22 10:29:38 2021
From: Johannes Schindelin
Date: Mon, 22 Mar 2021 10:29:38 +0000
Subject: [PATCH v7 02/12] pkt-line: do not issue flush packets in
 write_packetized_*()
Remove the `packet_flush_gently()` call in `write_packetized_from_buf()`
and `write_packetized_from_fd()` and require the caller to call it if
desired. Rename both functions to `write_packetized_from_*_no_flush()`
to prevent later merge accidents.

`write_packetized_from_buf()` currently only has one caller:
`apply_multi_file_filter()` in `convert.c`. It always wants a flush
packet to be written after writing the payload. However, we are about
to introduce a caller that wants to write many packets before a final
flush packet, so let's make the caller responsible for emitting the
flush packet.
Signed-off-by: Jeff Hostetler
Signed-off-by: Johannes Schindelin
---
 convert.c  | 8 ++++++--
 pkt-line.c | 8 ++------
 pkt-line.h | 4 ++--
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/convert.c b/convert.c
index ee360c2f07ce..976d4905cb3a 100644
--- a/convert.c
+++ b/convert.c
@@ -884,9 +884,13 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 		goto done;
 
 	if (fd >= 0)
-		err = write_packetized_from_fd(fd, process->in);
+		err = write_packetized_from_fd_no_flush(fd, process->in);
 	else
-		err = write_packetized_from_buf(src, len, process->in);
+		err = write_packetized_from_buf_no_flush(src, len, process->in);
+	if (err)
+		goto done;
+
+	err = packet_flush_gently(process->in);
 	if (err)
 		goto done;
 
diff --git a/pkt-line.c b/pkt-line.c
index 66bd0ddfd1d0..bb0fb0c3802c 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -251,7 +251,7 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len)
 	packet_trace(data, len, 1);
 }
 
-int write_packetized_from_fd(int fd_in, int fd_out)
+int write_packetized_from_fd_no_flush(int fd_in, int fd_out)
 {
 	char *buf = xmalloc(LARGE_PACKET_DATA_MAX);
 	int err = 0;
@@ -267,13 +267,11 @@ int write_packetized_from_fd(int fd_in, int fd_out)
 			break;
 		err = packet_write_gently(fd_out, buf, bytes_to_write);
 	}
-	if (!err)
-		err = packet_flush_gently(fd_out);
 	free(buf);
 	return err;
 }
 
-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
+int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out)
 {
 	int err = 0;
 	size_t bytes_written = 0;
@@ -289,8 +287,6 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
 		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write);
 		bytes_written += bytes_to_write;
 	}
-	if (!err)
-		err = packet_flush_gently(fd_out);
 	return err;
 }
 
diff --git a/pkt-line.h b/pkt-line.h
index 8c90daa59ef0..31012b9943bf 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -32,8 +32,8 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len);
 int packet_flush_gently(int fd);
 int packet_write_fmt_gently(int fd, const char *fmt, ...)
	__attribute__((format (printf, 2, 3)));
-int write_packetized_from_fd(int fd_in, int fd_out);
-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
+int write_packetized_from_fd_no_flush(int fd_in, int fd_out);
+int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out);
 
 /*
  * Read a packetized line into the buffer, which must be at least size bytes
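The calling convention this enables, many packets followed by one
explicit flush, looks roughly like the following sketch (`send_chunks`
and its arguments are illustrative, not part of the patch):

```c
#include "cache.h"
#include "pkt-line.h"

/* Sketch: stream several payloads as one pkt-line message. */
static int send_chunks(int fd, const char **chunks, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (write_packetized_from_buf_no_flush(chunks[i],
						       strlen(chunks[i]), fd) < 0)
			return -1;

	/* The caller, not the writer, decides when the message ends. */
	return packet_flush_gently(fd);
}
```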
From patchwork Mon Mar 22 10:29:39 2021
From: Johannes Schindelin
Date: Mon, 22 Mar 2021 10:29:39 +0000
Message-Id: <3718da39da30ffc283e74eb94c942d0110eb9676.1616408989.git.gitgitgadget@gmail.com>
Subject: [PATCH v7 03/12] pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option
Introduce the PACKET_READ_GENTLE_ON_READ_ERROR option to help libify
the packet readers.

So far, the (possibly indirect) callers of `get_packet_data()` can ask
that function to return an error instead of `die()`ing upon
end-of-file. However, random read errors will still cause the process
to die. So let's introduce an explicit option to tell the packet
reader machinery to please be nice and only return an error on read
errors.

This change prepares pkt-line for use by long-running daemon
processes. Such processes should be able to serve multiple concurrent
clients and survive random IO errors. If there is an error on one
connection, a daemon should be able to drop that connection and
continue serving existing and future connections.

This ability will be used by a Git-aware "Builtin FSMonitor" feature
in a later patch series.

Signed-off-by: Johannes Schindelin
Signed-off-by: Jeff Hostetler
---
 pkt-line.c | 19 +++++++++++++++++--
 pkt-line.h | 11 ++++++++---
 2 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/pkt-line.c b/pkt-line.c
index bb0fb0c3802c..457ac4e151bb 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -306,8 +306,11 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size,
 		*src_size -= ret;
 	} else {
 		ret = read_in_full(fd, dst, size);
-		if (ret < 0)
+		if (ret < 0) {
+			if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+				return error_errno(_("read error"));
 			die_errno(_("read error"));
+		}
 	}
 
 	/* And complain if we didn't get enough bytes to satisfy the read. */
@@ -315,6 +318,8 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size,
 		if (options & PACKET_READ_GENTLE_ON_EOF)
 			return -1;
 
+		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+			return error(_("the remote end hung up unexpectedly"));
 		die(_("the remote end hung up unexpectedly"));
 	}
 
@@ -343,6 +348,9 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer,
 
 	len = packet_length(linelen);
 	if (len < 0) {
+		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+			return error(_("protocol error: bad line length "
+				       "character: %.4s"), linelen);
 		die(_("protocol error: bad line length character: %.4s"),
 		    linelen);
 	} else if (!len) {
 		packet_trace("0000", 4, 0);
@@ -357,12 +365,19 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer,
 		*pktlen = 0;
 		return PACKET_READ_RESPONSE_END;
 	} else if (len < 4) {
+		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+			return error(_("protocol error: bad line length %d"),
+				     len);
 		die(_("protocol error: bad line length %d"), len);
 	}
 
 	len -= 4;
-	if ((unsigned)len >= size)
+	if ((unsigned)len >= size) {
+		if (options & PACKET_READ_GENTLE_ON_READ_ERROR)
+			return error(_("protocol error: bad line length %d"),
+				     len);
 		die(_("protocol error: bad line length %d"), len);
+	}
 
 	if (get_packet_data(fd, src_buffer, src_len, buffer, len, options) < 0) {
 		*pktlen = -1;
diff --git a/pkt-line.h b/pkt-line.h
index 31012b9943bf..80ce0187e2ea 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -68,10 +68,15 @@ int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_ou
  *
  * If options contains PACKET_READ_DIE_ON_ERR_PACKET, it dies when it sees an
  * ERR packet.
+ *
+ * If options contains PACKET_READ_GENTLE_ON_READ_ERROR, we will not die
+ * on read errors, but instead return -1. However, we may still die on an
+ * ERR packet (if requested).
  */
-#define PACKET_READ_GENTLE_ON_EOF (1u<<0)
-#define PACKET_READ_CHOMP_NEWLINE (1u<<1)
-#define PACKET_READ_DIE_ON_ERR_PACKET (1u<<2)
+#define PACKET_READ_GENTLE_ON_EOF        (1u<<0)
+#define PACKET_READ_CHOMP_NEWLINE        (1u<<1)
+#define PACKET_READ_DIE_ON_ERR_PACKET    (1u<<2)
+#define PACKET_READ_GENTLE_ON_READ_ERROR (1u<<3)
 
 int packet_read(int fd, char **src_buffer, size_t *src_len, char *buffer,
 		unsigned size, int options);
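For a daemon, the intended effect is that a bad connection costs one
error return instead of the whole process. A sketch of a
per-connection read loop under these options (the connection handling
around it is hypothetical; `packet_read()` is the function as declared
in pkt-line.h):

```c
#include "cache.h"
#include "pkt-line.h"

static void serve_one_connection(int fd)
{
	char buf[LARGE_PACKET_MAX];
	int len;

	for (;;) {
		len = packet_read(fd, NULL, NULL, buf, sizeof(buf),
				  PACKET_READ_GENTLE_ON_EOF |
				  PACKET_READ_GENTLE_ON_READ_ERROR);
		if (len < 0)
			break;	/* drop this client; keep serving others */
		if (!len)
			break;	/* flush packet: request is complete */
		/* ... dispatch buf[0..len) to the application ... */
	}
}
```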
From patchwork Mon Mar 22 10:29:40 2021
From: Johannes Schindelin
Date: Mon, 22 Mar 2021 10:29:40 +0000
Subject: [PATCH v7 04/12] pkt-line: add options argument to
 read_packetized_to_strbuf()
Update the calling sequence of `read_packetized_to_strbuf()` to take
an options argument and not assume a fixed set of options. Update the
only existing caller accordingly to explicitly pass the
formerly-assumed flags.

The `read_packetized_to_strbuf()` function calls `packet_read()` with
a fixed set of assumed options (`PACKET_READ_GENTLE_ON_EOF`). This
assumption has been fine for the single existing caller
`apply_multi_file_filter()` in `convert.c`. In a later commit we would
like to add other callers to `read_packetized_to_strbuf()` that need a
different set of options.

Signed-off-by: Johannes Schindelin
Signed-off-by: Jeff Hostetler
---
 convert.c  | 3 ++-
 pkt-line.c | 4 ++--
 pkt-line.h | 2 +-
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/convert.c b/convert.c
index 976d4905cb3a..516f1095b06e 100644
--- a/convert.c
+++ b/convert.c
@@ -907,7 +907,8 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 	if (err)
 		goto done;
 
-	err = read_packetized_to_strbuf(process->out, &nbuf) < 0;
+	err = read_packetized_to_strbuf(process->out, &nbuf,
+					PACKET_READ_GENTLE_ON_EOF) < 0;
 	if (err)
 		goto done;
 
diff --git a/pkt-line.c b/pkt-line.c
index 457ac4e151bb..0194137528c3 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -444,7 +444,7 @@ char *packet_read_line_buf(char **src, size_t *src_len, int *dst_len)
 	return packet_read_line_generic(-1, src, src_len, dst_len);
 }
 
-ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options)
 {
 	int packet_len;
 
@@ -460,7 +460,7 @@ ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
 			 * that there is already room for the extra byte.
 			 */
 			sb_out->buf + sb_out->len, LARGE_PACKET_DATA_MAX+1,
-			PACKET_READ_GENTLE_ON_EOF);
+			options);
 		if (packet_len <= 0)
 			break;
 		sb_out->len += packet_len;
diff --git a/pkt-line.h b/pkt-line.h
index 80ce0187e2ea..5af5f4568768 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -136,7 +136,7 @@ char *packet_read_line_buf(char **src_buf, size_t *src_len, int *size);
 /*
  * Reads a stream of variable sized packets until a flush packet is detected.
  */
-ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out);
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options);
 
 /*
  * Receive multiplexed output stream over git native protocol.
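The new parameter is what lets a later caller combine flags; the IPC
code added later in this series reads a response exactly this way:

```c
struct strbuf answer = STRBUF_INIT;

if (read_packetized_to_strbuf(fd, &answer,
			      PACKET_READ_GENTLE_ON_EOF |
			      PACKET_READ_GENTLE_ON_READ_ERROR) < 0)
	return error(_("could not read response"));
```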
From patchwork Mon Mar 22 10:29:41 2021
From: Jeff Hostetler
Date: Mon, 22 Mar 2021 10:29:41 +0000
Subject: [PATCH v7 05/12] simple-ipc: design documentation for new IPC
 mechanism
Add brief design documentation for the new IPC mechanism that allows a
foreground Git client to talk with an existing daemon process at a
known location using a named pipe or unix domain socket.

Signed-off-by: Johannes Schindelin
Signed-off-by: Jeff Hostetler
---
 Documentation/technical/api-simple-ipc.txt | 105 +++++++++++++++++++++
 1 file changed, 105 insertions(+)
 create mode 100644 Documentation/technical/api-simple-ipc.txt

diff --git a/Documentation/technical/api-simple-ipc.txt b/Documentation/technical/api-simple-ipc.txt
new file mode 100644
index 000000000000..d79ad323e675
--- /dev/null
+++ b/Documentation/technical/api-simple-ipc.txt
@@ -0,0 +1,105 @@
+Simple-IPC API
+==============
+
+The Simple-IPC API is a collection of `ipc_` prefixed library routines
+and a basic communication protocol that allow an IPC-client process to
+send an application-specific IPC-request message to an IPC-server
+process and receive an application-specific IPC-response message.
+
+Communication occurs over a named pipe on Windows and a Unix domain
+socket on other platforms.  IPC-clients and IPC-servers rendezvous at
+a previously agreed-to application-specific pathname (which is outside
+the scope of this design) that is local to the computer system.
+
+The IPC-server routines within the server application process create a
+thread pool to listen for connections and receive request messages
+from multiple concurrent IPC-clients.  When received, these messages
+are dispatched up to the server application callbacks for handling.
+IPC-server routines then incrementally relay responses back to the
+IPC-client.
+
+The IPC-client routines within a client application process connect
+to the IPC-server and send a request message and wait for a response.
+When received, the response is returned back to the caller.
+
+For example, the `fsmonitor--daemon` feature will be built as a server
+application on top of the IPC-server library routines.  It will have
+threads watching for file system events and a thread pool waiting for
+client connections.  Clients, such as `git status`, will request a list
+of file system events since a point in time and the server will
+respond with a list of changed files and directories.  The formats of
+the request and response are application-specific; the IPC-client and
+IPC-server routines treat them as opaque byte streams.
+
+
+Comparison with sub-process model
+---------------------------------
+
+The Simple-IPC mechanism differs from the existing `sub-process.c`
+model (Documentation/technical/long-running-process-protocol.txt)
+used by applications like Git-LFS.  In the LFS-style sub-process model
+the helper is started by the foreground process, communication happens
+via a pair of file descriptors bound to the stdin/stdout of the
+sub-process, the sub-process only serves the current foreground
+process, and the sub-process exits when the foreground process
+terminates.
+
+In the Simple-IPC model the server is a very long-running service.  It
+can service many clients at the same time and has a private socket or
+named pipe connection to each active client.  It might be started
+(on-demand) by the current client process or it might have been
+started by a previous client or by the OS at boot time.  The server
+process is not associated with a terminal and it persists after
+clients terminate.  Clients do not have access to the stdin/stdout of
+the server process and therefore must communicate over sockets or
+named pipes.
+
+
+Server startup and shutdown
+---------------------------
+
+How an application server based upon IPC-server is started is also
+outside the scope of the Simple-IPC design and is a property of the
+application using it.  For example, the server might be started or
+restarted during routine maintenance operations, or it might be
+started as a system service during the system boot-up sequence, or it
+might be started on-demand by a foreground Git command when needed.
+
+Similarly, server shutdown is a property of the application using
+the simple-ipc routines.  For example, the server might decide to
+shutdown when idle or only upon explicit request.
+
+
+Simple-IPC protocol
+-------------------
+
+The Simple-IPC protocol consists of a single request message from the
+client and an optional response message from the server.  Both the
+client and server messages are unlimited in length and are terminated
+with a flush packet.
+
+The pkt-line routines (Documentation/technical/protocol-common.txt)
+are used to simplify buffer management during message generation,
+transmission, and reception.  A flush packet is used to mark the end
+of the message.  This allows the sender to incrementally generate and
+transmit the message.  It allows the receiver to incrementally receive
+the message in chunks and to know when they have received the entire
+message.
+
+The actual byte format of the client request and server response
+messages are application specific.  The IPC layer transmits and
+receives them as opaque byte buffers without any concern for the
+content within.  It is the job of the calling application layer to
+understand the contents of the request and response messages.
+
+
+Summary
+-------
+
+Conceptually, the Simple-IPC protocol is similar to an HTTP REST
+request.  Clients connect, make an application-specific and
+stateless request, receive an application-specific
+response, and disconnect.  It is a one-round-trip facility for
+querying the server.  The Simple-IPC routines hide the socket,
+named pipe, and thread pool details and allow the application
+layer to focus on the application at hand.
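To make the one-round-trip model concrete, a client exchange using the
API introduced later in this series would look roughly like the sketch
below (the rendezvous path and request string are application-specific
placeholders):

```c
#include "cache.h"
#include "simple-ipc.h"
#include "strbuf.h"

int query_daemon(void)
{
	struct ipc_client_connect_options options =
		IPC_CLIENT_CONNECT_OPTIONS_INIT;
	struct strbuf answer = STRBUF_INIT;
	int ret;

	options.wait_if_busy = 1;
	options.wait_if_not_found = 0;

	/* One connect, one request, one response, one disconnect. */
	ret = ipc_client_send_command("/path/to/rendezvous", &options,
				      "my-request", &answer);
	if (!ret)
		printf("daemon said: %s\n", answer.buf);

	strbuf_release(&answer);
	return ret;
}
```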
From patchwork Mon Mar 22 10:29:42 2021
From: Jeff Hostetler
Date: Mon, 22 Mar 2021 10:29:42 +0000
Message-Id: <58c3fb7cd776600c2e859a7db23dcfe00b52c6f0.1616408989.git.gitgitgadget@gmail.com>
Subject: [PATCH v7 06/12] simple-ipc: add win32 implementation
Create Windows implementation of "simple-ipc" using named pipes.

Signed-off-by: Jeff Hostetler
---
 Makefile                            |   5 +
 compat/simple-ipc/ipc-shared.c      |  28 ++
 compat/simple-ipc/ipc-win32.c       | 751 ++++++++++++++++++++++++++++
 config.mak.uname                    |   2 +
 contrib/buildsystems/CMakeLists.txt |   4 +
 simple-ipc.h                        | 228 +++++++++
 6 files changed, 1018 insertions(+)
 create mode 100644 compat/simple-ipc/ipc-shared.c
 create mode 100644 compat/simple-ipc/ipc-win32.c
 create mode 100644 simple-ipc.h

diff --git a/Makefile b/Makefile
index dd08b4ced01c..d3c42d3f4f9f 100644
--- a/Makefile
+++ b/Makefile
@@ -1667,6 +1667,11 @@ else
 	LIB_OBJS += unix-socket.o
 endif
 
+ifdef USE_WIN32_IPC
+	LIB_OBJS += compat/simple-ipc/ipc-shared.o
+	LIB_OBJS += compat/simple-ipc/ipc-win32.o
+endif
+
 ifdef NO_ICONV
 	BASIC_CFLAGS += -DNO_ICONV
 endif
diff --git a/compat/simple-ipc/ipc-shared.c b/compat/simple-ipc/ipc-shared.c
new file mode 100644
index 000000000000..1edec8159532
--- /dev/null
+++ b/compat/simple-ipc/ipc-shared.c
@@ -0,0 +1,28 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+
+#ifdef SUPPORTS_SIMPLE_IPC
+
+int ipc_server_run(const char *path, const struct ipc_server_opts *opts,
+		   ipc_server_application_cb *application_cb,
+		   void *application_data)
+{
+	struct ipc_server_data *server_data = NULL;
+	int ret;
+
+	ret = ipc_server_run_async(&server_data, path, opts,
+				   application_cb, application_data);
+	if (ret)
+		return ret;
+
+	ret = ipc_server_await(server_data);
+
+	ipc_server_free(server_data);
+
+	return ret;
+}
+
+#endif /* SUPPORTS_SIMPLE_IPC */
diff --git a/compat/simple-ipc/ipc-win32.c b/compat/simple-ipc/ipc-win32.c
new file mode 100644
index 000000000000..8f89c02037e3
--- /dev/null
+++ b/compat/simple-ipc/ipc-win32.c
@@ -0,0 +1,751 @@
+#include "cache.h"
+#include "simple-ipc.h"
+#include "strbuf.h"
+#include "pkt-line.h"
+#include "thread-utils.h"
+
+#ifndef GIT_WINDOWS_NATIVE
+#error This file can only be compiled on Windows
+#endif
+
+static int initialize_pipe_name(const char *path, wchar_t *wpath, size_t alloc)
+{
+	int off = 0;
+	struct strbuf realpath = STRBUF_INIT;
+
+	if (!strbuf_realpath(&realpath, path, 0))
+		return -1;
+
+	off = swprintf(wpath, alloc, L"\\\\.\\pipe\\");
+	if (xutftowcs(wpath + off, realpath.buf, alloc - off) < 0)
+		return -1;
+
+	/* Handle drive prefix */
+	if (wpath[off] && wpath[off + 1] == L':') {
+		wpath[off + 1] = L'_';
+		off += 2;
+	}
+
+	for (; wpath[off]; off++)
+		if (wpath[off] == L'/')
+			wpath[off] = L'\\';
+
+	strbuf_release(&realpath);
+	return 0;
+}
+
+static enum ipc_active_state get_active_state(wchar_t *pipe_path)
+{
+	if (WaitNamedPipeW(pipe_path, NMPWAIT_USE_DEFAULT_WAIT))
+		return IPC_STATE__LISTENING;
+
+	if (GetLastError() == ERROR_SEM_TIMEOUT)
+		return IPC_STATE__NOT_LISTENING;
+
+	if (GetLastError() == ERROR_FILE_NOT_FOUND)
+		return IPC_STATE__PATH_NOT_FOUND;
+
+	return IPC_STATE__OTHER_ERROR;
+}
+
+enum ipc_active_state ipc_get_active_state(const char *path)
+{
+	wchar_t pipe_path[MAX_PATH];
+
+	if (initialize_pipe_name(path, pipe_path, ARRAY_SIZE(pipe_path)) < 0)
+		return IPC_STATE__INVALID_PATH;
+
+	return get_active_state(pipe_path);
+}
+
+#define WAIT_STEP_MS (50)
+
+static enum ipc_active_state connect_to_server(
+	const wchar_t *wpath,
+	DWORD timeout_ms,
+	const struct ipc_client_connect_options *options,
+	int *pfd)
+{
+	DWORD t_start_ms, t_waited_ms;
+	DWORD step_ms;
+	HANDLE hPipe = INVALID_HANDLE_VALUE;
+	DWORD mode = PIPE_READMODE_BYTE;
+	DWORD gle;
+
+	*pfd = -1;
+
+	for (;;) {
+		hPipe = CreateFileW(wpath, GENERIC_READ | GENERIC_WRITE,
+				    0, NULL, OPEN_EXISTING, 0, NULL);
+		if (hPipe != INVALID_HANDLE_VALUE)
+			break;
+
+		gle = GetLastError();
+
+		switch (gle) {
+		case ERROR_FILE_NOT_FOUND:
+			if (!options->wait_if_not_found)
+				return IPC_STATE__PATH_NOT_FOUND;
+			if (!timeout_ms)
+				return IPC_STATE__PATH_NOT_FOUND;
+
+			step_ms = (timeout_ms < WAIT_STEP_MS) ?
+				timeout_ms : WAIT_STEP_MS;
+			sleep_millisec(step_ms);
+
+			timeout_ms -= step_ms;
+			break; /* try again */
+
+		case ERROR_PIPE_BUSY:
+			if (!options->wait_if_busy)
+				return IPC_STATE__NOT_LISTENING;
+			if (!timeout_ms)
+				return IPC_STATE__NOT_LISTENING;
+
+			t_start_ms = (DWORD)(getnanotime() / 1000000);
+
+			if (!WaitNamedPipeW(wpath, timeout_ms)) {
+				if (GetLastError() == ERROR_SEM_TIMEOUT)
+					return IPC_STATE__NOT_LISTENING;
+
+				return IPC_STATE__OTHER_ERROR;
+			}
+
+			/*
+			 * A pipe server instance became available.
+			 * Race other client processes to connect to
+			 * it.
+			 *
+			 * But first decrement our overall timeout so
+			 * that we don't starve if we keep losing the
+			 * race.  But also guard against special
+			 * NMPWAIT_ values (0 and -1).
+			 */
+			t_waited_ms = (DWORD)(getnanotime() / 1000000) - t_start_ms;
+			if (t_waited_ms < timeout_ms)
+				timeout_ms -= t_waited_ms;
+			else
+				timeout_ms = 1;
+			break; /* try again */
+
+		default:
+			return IPC_STATE__OTHER_ERROR;
+		}
+	}
+
+	if (!SetNamedPipeHandleState(hPipe, &mode, NULL, NULL)) {
+		CloseHandle(hPipe);
+		return IPC_STATE__OTHER_ERROR;
+	}
+
+	*pfd = _open_osfhandle((intptr_t)hPipe, O_RDWR|O_BINARY);
+	if (*pfd < 0) {
+		CloseHandle(hPipe);
+		return IPC_STATE__OTHER_ERROR;
+	}
+
+	/* fd now owns hPipe */
+
+	return IPC_STATE__LISTENING;
+}
+
+/*
+ * The default connection timeout for Windows clients.
+ *
+ * This is not currently part of the ipc_ API (nor the config settings)
+ * because of differences between Windows and other platforms.
+ *
+ * This value was chosen at random.
+ */
+#define WINDOWS_CONNECTION_TIMEOUT_MS (30000)
+
+enum ipc_active_state ipc_client_try_connect(
+	const char *path,
+	const struct ipc_client_connect_options *options,
+	struct ipc_client_connection **p_connection)
+{
+	wchar_t wpath[MAX_PATH];
+	enum ipc_active_state state = IPC_STATE__OTHER_ERROR;
+	int fd = -1;
+
+	*p_connection = NULL;
+
+	trace2_region_enter("ipc-client", "try-connect", NULL);
+	trace2_data_string("ipc-client", NULL, "try-connect/path", path);
+
+	if (initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath)) < 0)
+		state = IPC_STATE__INVALID_PATH;
+	else
+		state = connect_to_server(wpath, WINDOWS_CONNECTION_TIMEOUT_MS,
+					  options, &fd);
+
+	trace2_data_intmax("ipc-client", NULL, "try-connect/state",
+			   (intmax_t)state);
+	trace2_region_leave("ipc-client", "try-connect", NULL);
+
+	if (state == IPC_STATE__LISTENING) {
+		(*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection));
+		(*p_connection)->fd = fd;
+	}
+
+	return state;
+}
+
+void ipc_client_close_connection(struct ipc_client_connection *connection)
+{
+	if (!connection)
+		return;
+
+	if (connection->fd != -1)
+		close(connection->fd);
+
+	free(connection);
+}
+
+int ipc_client_send_command_to_connection(
+	struct ipc_client_connection *connection,
+	const char *message, struct strbuf *answer)
+{
+	int ret = 0;
+
+	strbuf_setlen(answer, 0);
+
+	trace2_region_enter("ipc-client", "send-command", NULL);
+
+	if (write_packetized_from_buf_no_flush(message, strlen(message),
+					       connection->fd) < 0 ||
+	    packet_flush_gently(connection->fd) < 0) {
+		ret = error(_("could not send IPC command"));
+		goto done;
+	}
+
+	FlushFileBuffers((HANDLE)_get_osfhandle(connection->fd));
+
+	if (read_packetized_to_strbuf(
+		    connection->fd, answer,
+		    PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR) < 0) {
+		ret = error(_("could not read IPC response"));
+		goto done;
+	}
+
+done:
+	trace2_region_leave("ipc-client", "send-command", NULL);
+	return ret;
+}
+
+int ipc_client_send_command(const char *path,
+			    const struct ipc_client_connect_options *options,
+			    const char *message, struct strbuf *response)
+{
+	int ret = -1;
+	enum ipc_active_state state;
+	struct ipc_client_connection *connection = NULL;
+
+	state = ipc_client_try_connect(path, options, &connection);
+
+	if (state != IPC_STATE__LISTENING)
+		return ret;
+
+	ret = ipc_client_send_command_to_connection(connection, message, response);
+
+	ipc_client_close_connection(connection);
+
+	return ret;
+}
+
+/*
+ * Duplicate the given pipe handle and wrap it in a file descriptor so
+ * that we can use pkt-line on it.
+ */
+static int dup_fd_from_pipe(const HANDLE pipe)
+{
+	HANDLE process = GetCurrentProcess();
+	HANDLE handle;
+	int fd;
+
+	if (!DuplicateHandle(process, pipe, process, &handle, 0, FALSE,
+			     DUPLICATE_SAME_ACCESS)) {
+		errno = err_win_to_posix(GetLastError());
+		return -1;
+	}
+
+	fd = _open_osfhandle((intptr_t)handle, O_RDWR|O_BINARY);
+	if (fd < 0) {
+		errno = err_win_to_posix(GetLastError());
+		CloseHandle(handle);
+		return -1;
+	}
+
+	/*
+	 * `handle` is now owned by `fd` and will be automatically closed
+	 * when the descriptor is closed.
+	 */
+
+	return fd;
+}
+
+/*
+ * Magic numbers used to annotate callback instance data.
+ * These are used to help guard against accidentally passing the
+ * wrong instance data across multiple levels of callbacks (which
+ * is easy to do if there are `void*` arguments).
+ */
+enum magic {
+	MAGIC_SERVER_REPLY_DATA,
+	MAGIC_SERVER_THREAD_DATA,
+	MAGIC_SERVER_DATA,
+};
+
+struct ipc_server_reply_data {
+	enum magic magic;
+	int fd;
+	struct ipc_server_thread_data *server_thread_data;
+};
+
+struct ipc_server_thread_data {
+	enum magic magic;
+	struct ipc_server_thread_data *next_thread;
+	struct ipc_server_data *server_data;
+	pthread_t pthread_id;
+	HANDLE hPipe;
+};
+
+/*
+ * On Windows, the conceptual "ipc-server" is implemented as a pool of
+ * n identical/peer "server-thread" threads.  That is, there is no
+ * hierarchy of threads; and therefore no controller thread managing
+ * the pool.  Each thread has an independent handle to the named pipe,
+ * receives incoming connections, processes the client, and re-uses
+ * the pipe for the next client connection.
+ *
+ * Therefore, the "ipc-server" only needs to maintain a list of the
+ * spawned threads for eventual "join" purposes.
+ *
+ * A single "stop-event" is visible to all of the server threads to
+ * tell them to shutdown (when idle).
+ */
+struct ipc_server_data {
+	enum magic magic;
+	ipc_server_application_cb *application_cb;
+	void *application_data;
+	struct strbuf buf_path;
+	wchar_t wpath[MAX_PATH];
+
+	HANDLE hEventStopRequested;
+	struct ipc_server_thread_data *thread_list;
+	int is_stopped;
+};
+
+enum connect_result {
+	CR_CONNECTED = 0,
+	CR_CONNECT_PENDING,
+	CR_CONNECT_ERROR,
+	CR_WAIT_ERROR,
+	CR_SHUTDOWN,
+};
+
+static enum connect_result queue_overlapped_connect(
+	struct ipc_server_thread_data *server_thread_data,
+	OVERLAPPED *lpo)
+{
+	if (ConnectNamedPipe(server_thread_data->hPipe, lpo))
+		goto failed;
+
+	switch (GetLastError()) {
+	case ERROR_IO_PENDING:
+		return CR_CONNECT_PENDING;
+
+	case ERROR_PIPE_CONNECTED:
+		SetEvent(lpo->hEvent);
+		return CR_CONNECTED;
+
+	default:
+		break;
+	}
+
+failed:
+	error(_("ConnectNamedPipe failed for '%s' (%lu)"),
+	      server_thread_data->server_data->buf_path.buf,
+	      GetLastError());
+	return CR_CONNECT_ERROR;
+}
+
+/*
+ * Use Windows Overlapped IO to wait for a connection or for our event
+ * to be signalled.
+ */
+static enum connect_result wait_for_connection(
+	struct ipc_server_thread_data *server_thread_data,
+	OVERLAPPED *lpo)
+{
+	enum connect_result r;
+	HANDLE waitHandles[2];
+	DWORD dwWaitResult;
+
+	r = queue_overlapped_connect(server_thread_data, lpo);
+	if (r != CR_CONNECT_PENDING)
+		return r;
+
+	waitHandles[0] = server_thread_data->server_data->hEventStopRequested;
+	waitHandles[1] = lpo->hEvent;
+
+	dwWaitResult = WaitForMultipleObjects(2, waitHandles, FALSE, INFINITE);
+	switch (dwWaitResult) {
+	case WAIT_OBJECT_0 + 0:
+		return CR_SHUTDOWN;
+
+	case WAIT_OBJECT_0 + 1:
+		ResetEvent(lpo->hEvent);
+		return CR_CONNECTED;
+
+	default:
+		return CR_WAIT_ERROR;
+	}
+}
+
+/*
+ * Forward declare our reply callback function so that any compiler
+ * errors are reported when we actually define the function (in addition
+ * to any errors reported when we try to pass this callback function as
+ * a parameter in a function call).  The former are easier to understand.
+ */
+static ipc_server_reply_cb do_io_reply_callback;
+
+/*
+ * Relay application's response message to the client process.
+ * (We do not flush at this point because we allow the caller
+ * to chunk data to the client thru us.)
+ */
+static int do_io_reply_callback(struct ipc_server_reply_data *reply_data,
+				const char *response, size_t response_len)
+{
+	if (reply_data->magic != MAGIC_SERVER_REPLY_DATA)
+		BUG("reply_cb called with wrong instance data");
+
+	return write_packetized_from_buf_no_flush(response, response_len,
+						  reply_data->fd);
+}
+
+/*
+ * Receive the request/command from the client and pass it to the
+ * registered request-callback.  The request-callback will compose
+ * a response and call our reply-callback to send it to the client.
+ *
+ * Simple-IPC only contains one round trip, so we flush and close
+ * here after the response.
+ */
+static int do_io(struct ipc_server_thread_data *server_thread_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_server_reply_data reply_data;
+	int ret = 0;
+
+	reply_data.magic = MAGIC_SERVER_REPLY_DATA;
+	reply_data.server_thread_data = server_thread_data;
+
+	reply_data.fd = dup_fd_from_pipe(server_thread_data->hPipe);
+	if (reply_data.fd < 0)
+		return error(_("could not create fd from pipe for '%s'"),
+			     server_thread_data->server_data->buf_path.buf);
+
+	ret = read_packetized_to_strbuf(
+		reply_data.fd, &buf,
+		PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR);
+	if (ret >= 0) {
+		ret = server_thread_data->server_data->application_cb(
+			server_thread_data->server_data->application_data,
+			buf.buf, do_io_reply_callback, &reply_data);
+
+		packet_flush_gently(reply_data.fd);
+
+		FlushFileBuffers((HANDLE)_get_osfhandle((reply_data.fd)));
+	}
+	else {
+		/*
+		 * The client probably disconnected/shutdown before it
+		 * could send a well-formed message.  Ignore it.
+		 */
+	}
+
+	strbuf_release(&buf);
+	close(reply_data.fd);
+
+	return ret;
+}
+
+/*
+ * Handle IPC request and response with this connected client.  And reset
+ * the pipe to prepare for the next client.
+ */
+static int use_connection(struct ipc_server_thread_data *server_thread_data)
+{
+	int ret;
+
+	ret = do_io(server_thread_data);
+
+	FlushFileBuffers(server_thread_data->hPipe);
+	DisconnectNamedPipe(server_thread_data->hPipe);
+
+	return ret;
+}
+
+/*
+ * Thread proc for an IPC server worker thread.  It handles a series of
+ * connections from clients.  It cleans and reuses the hPipe between each
+ * client.
+ */
+static void *server_thread_proc(void *_server_thread_data)
+{
+	struct ipc_server_thread_data *server_thread_data = _server_thread_data;
+	HANDLE hEventConnected = INVALID_HANDLE_VALUE;
+	OVERLAPPED oConnect;
+	enum connect_result cr;
+	int ret;
+
+	assert(server_thread_data->hPipe != INVALID_HANDLE_VALUE);
+
+	trace2_thread_start("ipc-server");
+	trace2_data_string("ipc-server", NULL, "pipe",
+			   server_thread_data->server_data->buf_path.buf);
+
+	hEventConnected = CreateEventW(NULL, TRUE, FALSE, NULL);
+
+	memset(&oConnect, 0, sizeof(oConnect));
+	oConnect.hEvent = hEventConnected;
+
+	for (;;) {
+		cr = wait_for_connection(server_thread_data, &oConnect);
+
+		switch (cr) {
+		case CR_SHUTDOWN:
+			goto finished;
+
+		case CR_CONNECTED:
+			ret = use_connection(server_thread_data);
+			if (ret == SIMPLE_IPC_QUIT) {
+				ipc_server_stop_async(
+					server_thread_data->server_data);
+				goto finished;
+			}
+			if (ret > 0) {
+				/*
+				 * Ignore (transient) IO errors with this
+				 * client and reset for the next client.
+				 */
+			}
+			break;
+
+		case CR_CONNECT_PENDING:
+			/* By construction, this should not happen. */
+			BUG("ipc-server[%s]: unexpected CR_CONNECT_PENDING",
+			    server_thread_data->server_data->buf_path.buf);
+
+		case CR_CONNECT_ERROR:
+		case CR_WAIT_ERROR:
+			/*
+			 * Ignore these theoretical errors.
+			 */
+			DisconnectNamedPipe(server_thread_data->hPipe);
+			break;
+
+		default:
+			BUG("unhandled case after wait_for_connection");
+		}
+	}
+
+finished:
+	CloseHandle(server_thread_data->hPipe);
+	CloseHandle(hEventConnected);
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+static HANDLE create_new_pipe(wchar_t *wpath, int is_first)
+{
+	HANDLE hPipe;
+	DWORD dwOpenMode, dwPipeMode;
+	LPSECURITY_ATTRIBUTES lpsa = NULL;
+
+	dwOpenMode = PIPE_ACCESS_INBOUND | PIPE_ACCESS_OUTBOUND |
+		FILE_FLAG_OVERLAPPED;
+
+	dwPipeMode = PIPE_TYPE_MESSAGE | PIPE_READMODE_BYTE | PIPE_WAIT |
+		PIPE_REJECT_REMOTE_CLIENTS;
+
+	if (is_first) {
+		dwOpenMode |= FILE_FLAG_FIRST_PIPE_INSTANCE;
+
+		/*
+		 * On Windows, the first server pipe instance gets to
+		 * set the ACL / Security Attributes on the named
+		 * pipe; subsequent instances inherit and cannot
+		 * change them.
+		 *
+		 * TODO Should we allow the application layer to
+		 * specify security attributes, such as `LocalService`
+		 * or `LocalSystem`, when we create the named pipe?
+		 * This question is probably not important when the
+		 * daemon is started by a foreground user process and
+		 * only needs to talk to the current user, but may be
+		 * if the daemon is run via the Control Panel as a
+		 * System Service.
+		 */
+	}
+
+	hPipe = CreateNamedPipeW(wpath, dwOpenMode, dwPipeMode,
+				 PIPE_UNLIMITED_INSTANCES, 1024, 1024, 0, lpsa);
+
+	return hPipe;
+}
+
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data)
+{
+	struct ipc_server_data *server_data;
+	wchar_t wpath[MAX_PATH];
+	HANDLE hPipeFirst = INVALID_HANDLE_VALUE;
+	int k;
+	int ret = 0;
+	int nr_threads = opts->nr_threads;
+
+	*returned_server_data = NULL;
+
+	ret = initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath));
+	if (ret < 0) {
+		errno = EINVAL;
+		return -1;
+	}
+
+	hPipeFirst = create_new_pipe(wpath, 1);
+	if (hPipeFirst == INVALID_HANDLE_VALUE) {
+		errno = EADDRINUSE;
+		return -2;
+	}
+
+	server_data = xcalloc(1, sizeof(*server_data));
+	server_data->magic = MAGIC_SERVER_DATA;
+	server_data->application_cb = application_cb;
+	server_data->application_data = application_data;
+	server_data->hEventStopRequested = CreateEvent(NULL, TRUE, FALSE, NULL);
+	strbuf_init(&server_data->buf_path, 0);
+	strbuf_addstr(&server_data->buf_path, path);
+	wcscpy(server_data->wpath, wpath);
+
+	if (nr_threads < 1)
+		nr_threads = 1;
+
+	for (k = 0; k < nr_threads; k++) {
+		struct ipc_server_thread_data *std;
+
+		std = xcalloc(1, sizeof(*std));
+		std->magic = MAGIC_SERVER_THREAD_DATA;
+		std->server_data = server_data;
+		std->hPipe = INVALID_HANDLE_VALUE;
+
+		std->hPipe = (k == 0)
+			? hPipeFirst
+			: create_new_pipe(server_data->wpath, 0);
+
+		if (std->hPipe == INVALID_HANDLE_VALUE) {
+			/*
+			 * If we've reached a pipe instance limit for
+			 * this path, just use fewer threads.
+			 */
+			free(std);
+			break;
+		}
+
+		if (pthread_create(&std->pthread_id, NULL,
+				   server_thread_proc, std)) {
+			/*
+			 * Likewise, if we're out of threads, just use
+			 * fewer threads than requested.
+			 *
+			 * However, we just give up if we can't even get
+			 * one thread.  This should not happen.
+ */ + if (k == 0) + die(_("could not start thread[0] for '%s'"), + path); + + CloseHandle(std->hPipe); + free(std); + break; + } + + std->next_thread = server_data->thread_list; + server_data->thread_list = std; + } + + *returned_server_data = server_data; + return 0; +} + +int ipc_server_stop_async(struct ipc_server_data *server_data) +{ + if (!server_data) + return 0; + + /* + * Gently tell all of the ipc_server threads to shutdown. + * This will be seen the next time they are idle (and waiting + * for a connection). + * + * We DO NOT attempt to force them to drop an active connection. + */ + SetEvent(server_data->hEventStopRequested); + return 0; +} + +int ipc_server_await(struct ipc_server_data *server_data) +{ + DWORD dwWaitResult; + + if (!server_data) + return 0; + + dwWaitResult = WaitForSingleObject(server_data->hEventStopRequested, INFINITE); + if (dwWaitResult != WAIT_OBJECT_0) + return error(_("wait for hEvent failed for '%s'"), + server_data->buf_path.buf); + + while (server_data->thread_list) { + struct ipc_server_thread_data *std = server_data->thread_list; + + pthread_join(std->pthread_id, NULL); + + server_data->thread_list = std->next_thread; + free(std); + } + + server_data->is_stopped = 1; + + return 0; +} + +void ipc_server_free(struct ipc_server_data *server_data) +{ + if (!server_data) + return; + + if (!server_data->is_stopped) + BUG("cannot free ipc-server while running for '%s'", + server_data->buf_path.buf); + + strbuf_release(&server_data->buf_path); + + if (server_data->hEventStopRequested != INVALID_HANDLE_VALUE) + CloseHandle(server_data->hEventStopRequested); + + while (server_data->thread_list) { + struct ipc_server_thread_data *std = server_data->thread_list; + + server_data->thread_list = std->next_thread; + free(std); + } + + free(server_data); +} diff --git a/config.mak.uname b/config.mak.uname index e22d4b6d67a3..2b3303f34be8 100644 --- a/config.mak.uname +++ b/config.mak.uname @@ -421,6 +421,7 @@ ifeq ($(uname_S),Windows) RUNTIME_PREFIX = YesPlease HAVE_WPGMPTR = YesWeDo NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease + USE_WIN32_IPC = YesPlease USE_WIN32_MMAP = YesPlease MMAP_PREVENTS_DELETE = UnfortunatelyYes # USE_NED_ALLOCATOR = YesPlease @@ -597,6 +598,7 @@ ifneq (,$(findstring MINGW,$(uname_S))) RUNTIME_PREFIX = YesPlease HAVE_WPGMPTR = YesWeDo NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease + USE_WIN32_IPC = YesPlease USE_WIN32_MMAP = YesPlease MMAP_PREVENTS_DELETE = UnfortunatelyYes USE_NED_ALLOCATOR = YesPlease diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt index ac3dbc079af8..40c9e8e3bd9d 100644 --- a/contrib/buildsystems/CMakeLists.txt +++ b/contrib/buildsystems/CMakeLists.txt @@ -246,6 +246,10 @@ elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux") list(APPEND compat_SOURCES unix-socket.c) endif() +if(CMAKE_SYSTEM_NAME STREQUAL "Windows") + list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c) +endif() + set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX}) #header checks diff --git a/simple-ipc.h b/simple-ipc.h new file mode 100644 index 000000000000..ab5619e3d76f --- /dev/null +++ b/simple-ipc.h @@ -0,0 +1,228 @@ +#ifndef GIT_SIMPLE_IPC_H +#define GIT_SIMPLE_IPC_H + +/* + * See Documentation/technical/api-simple-ipc.txt + */ + +#if defined(GIT_WINDOWS_NATIVE) +#define SUPPORTS_SIMPLE_IPC +#endif + +#ifdef SUPPORTS_SIMPLE_IPC +#include "pkt-line.h" + +/* + * Simple IPC Client Side API. + */ + +enum ipc_active_state { + /* + * The pipe/socket exists and the daemon is waiting for connections. 
+ */ + IPC_STATE__LISTENING = 0, + + /* + * The pipe/socket exists, but the daemon is not listening. + * Perhaps it is very busy. + * Perhaps the daemon died without deleting the path. + * Perhaps it is shutting down and draining existing clients. + * Perhaps it is dead, but other clients are lingering and + * still holding a reference to the pathname. + */ + IPC_STATE__NOT_LISTENING, + + /* + * The requested pathname is bogus and no amount of retries + * will fix that. + */ + IPC_STATE__INVALID_PATH, + + /* + * The requested pathname is not found. This usually means + * that there is no daemon present. + */ + IPC_STATE__PATH_NOT_FOUND, + + IPC_STATE__OTHER_ERROR, +}; + +struct ipc_client_connect_options { + /* + * Spin under timeout if the server is running but can't + * accept our connection yet. This should always be set + * unless you just want to poke the server and see if it + * is alive. + */ + unsigned int wait_if_busy:1; + + /* + * Spin under timeout if the pipe/socket is not yet present + * on the file system. This is useful if we just started + * the service and need to wait for it to become ready. + */ + unsigned int wait_if_not_found:1; +}; + +#define IPC_CLIENT_CONNECT_OPTIONS_INIT { \ + .wait_if_busy = 0, \ + .wait_if_not_found = 0, \ +} + +/* + * Determine if a server is listening on this named pipe or socket using + * platform-specific logic. This might just probe the filesystem or it + * might make a trivial connection to the server using this pathname. + */ +enum ipc_active_state ipc_get_active_state(const char *path); + +struct ipc_client_connection { + int fd; +}; + +/* + * Try to connect to the daemon on the named pipe or socket. + * + * Returns IPC_STATE__LISTENING and a connection handle. + * + * Otherwise, returns info to help decide whether to retry or to + * spawn/respawn the server. + */ +enum ipc_active_state ipc_client_try_connect( + const char *path, + const struct ipc_client_connect_options *options, + struct ipc_client_connection **p_connection); + +void ipc_client_close_connection(struct ipc_client_connection *connection); + +/* + * Used by the client to synchronously send and receive a message with + * the server on the provided client connection. + * + * Returns 0 when successful. + * + * Calls error() and returns non-zero otherwise. + */ +int ipc_client_send_command_to_connection( + struct ipc_client_connection *connection, + const char *message, struct strbuf *answer); + +/* + * Used by the client to synchronously connect and send and receive a + * message to the server listening at the given path. + * + * Returns 0 when successful. + * + * Calls error() and returns non-zero otherwise. + */ +int ipc_client_send_command(const char *path, + const struct ipc_client_connect_options *options, + const char *message, struct strbuf *answer); + +/* + * Simple IPC Server Side API. + */ + +struct ipc_server_reply_data; + +typedef int (ipc_server_reply_cb)(struct ipc_server_reply_data *, + const char *response, + size_t response_len); + +/* + * Prototype for an application-supplied callback to process incoming + * client IPC messages and compose a reply. The `application_cb` should + * use the provided `reply_cb` and `reply_data` to send an IPC response + * back to the client. The `reply_cb` callback can be called multiple + * times for chunking purposes. A reply message is optional and may be + * omitted if not necessary for the application. + * + * The return value from the application callback is ignored. 
+ * The value `SIMPLE_IPC_QUIT` can be used to shutdown the server. + */ +typedef int (ipc_server_application_cb)(void *application_data, + const char *request, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data); + +#define SIMPLE_IPC_QUIT -2 + +/* + * Opaque instance data to represent an IPC server instance. + */ +struct ipc_server_data; + +/* + * Control parameters for the IPC server instance. + * Use this to hide platform-specific settings. + */ +struct ipc_server_opts +{ + int nr_threads; +}; + +/* + * Start an IPC server instance in one or more background threads + * and return a handle to the pool. + * + * Returns 0 if the asynchronous server pool was started successfully. + * Returns -1 if not. + * Returns -2 if we could not startup because another server is using + * the socket or named pipe. + * + * When a client IPC message is received, the `application_cb` will be + * called (possibly on a random thread) to handle the message and + * optionally compose a reply message. + */ +int ipc_server_run_async(struct ipc_server_data **returned_server_data, + const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data); + +/* + * Gently signal the IPC server pool to shutdown. No new client + * connections will be accepted, but existing connections will be + * allowed to complete. + */ +int ipc_server_stop_async(struct ipc_server_data *server_data); + +/* + * Block the calling thread until all threads in the IPC server pool + * have completed and been joined. + */ +int ipc_server_await(struct ipc_server_data *server_data); + +/* + * Close and free all resource handles associated with the IPC server + * pool. + */ +void ipc_server_free(struct ipc_server_data *server_data); + +/* + * Run an IPC server instance and block the calling thread of the + * current process. It does not return until the IPC server has + * either shutdown or had an unrecoverable error. + * + * The IPC server handles incoming IPC messages from client processes + * and may use one or more background threads as necessary. + * + * Returns 0 after the server has completed successfully. + * Returns -1 if the server cannot be started. + * Returns -2 if we could not startup because another server is using + * the socket or named pipe. + * + * When a client IPC message is received, the `application_cb` will be + * called (possibly on a random thread) to handle the message and + * optionally compose a reply message. + * + * Note that `ipc_server_run()` is a synchronous wrapper around the + * above asynchronous routines. It effectively hides all of the + * server state and thread details from the caller and presents a + * simple synchronous interface. 
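+ * + * A minimal sketch of a caller (the callback name, message strings, and + * pipe path below are illustrative only, not part of this API): + * + * static int my_app_cb(void *data, const char *request, + * ipc_server_reply_cb *reply_cb, + * struct ipc_server_reply_data *reply_data) + * { + * if (!strcmp(request, "quit")) + * return SIMPLE_IPC_QUIT; + * return reply_cb(reply_data, "pong", 4); + * } + * + * struct ipc_server_opts opts = { .nr_threads = 4 }; + * ipc_server_run("my-pipe", &opts, my_app_cb, NULL);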
+ */ +int ipc_server_run(const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data); + +#endif /* SUPPORTS_SIMPLE_IPC */ +#endif /* GIT_SIMPLE_IPC_H */ From patchwork Mon Mar 22 10:29:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12154349 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4AC4BC433F1 for ; Mon, 22 Mar 2021 10:31:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 28B42619B6 for ; Mon, 22 Mar 2021 10:31:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230284AbhCVKal (ORCPT ); Mon, 22 Mar 2021 06:30:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53814 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230185AbhCVK36 (ORCPT ); Mon, 22 Mar 2021 06:29:58 -0400 Received: from mail-wr1-x432.google.com (mail-wr1-x432.google.com [IPv6:2a00:1450:4864:20::432]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A5BC9C061574 for ; Mon, 22 Mar 2021 03:29:57 -0700 (PDT) Received: by mail-wr1-x432.google.com with SMTP id o16so16096507wrn.0 for ; Mon, 22 Mar 2021 03:29:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=+bNP+0W/NmpTzg2fcka3X+P2W1SFHzm3DqEPSuLjvIc=; b=CI1iHHQR+WicMFATyce2+jkUujtSX0x2SUYMLuYsZmWbEdVLKIDybkyJKmqqLlqqm3 zqJfG4o9/g2VHyikx5y0ScW6itM+Uq4KwzXYzIVELM36x2GbIxJXzASLQwnbggNllAMN Z8ERc4j7lCq7n/Gde2SNXJatU7j69Vr0VzDxmbNHSgG0dmoZv+iZoj4jJNm5CclmFVj+ VSd81eGSaovg8GIG1QXO+wHeYDrxLR6sK8qPMeBkyMHVa0Xv2rOY3IfUWVHbwFCh/IHQ +/UMwAiu/PllHnpgCj0Nj9Olca/pvupy4wflo7J5v916F/m40drBrxIwYdd+gpQwoFAJ Uvvw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=+bNP+0W/NmpTzg2fcka3X+P2W1SFHzm3DqEPSuLjvIc=; b=RquQ00jJeEKuKOuLuF2y7GE51GI0EuM/D57kdZrpEamxMJKvyzeHpbOikGRzn7vvLD qNj8pODYR3y3BiCirKERr0f8huJxnbadPJboyGoi4mZnV3Lewl9uyyJdgTfYPQRu1MuV 9lOYc0VJUiX4urKgCiSNFy+AjaHJbdWQdL+DvZfAQJr60K1E4ME1iDGFfm++ZlmIfwVM Wj0Pt+I/NCKpXftN9miqAI+JgPaUzBjRayo+d0qNvOc8jVJEmLgKAWGjJp6Wfa+r5gpn MtBGMO5H3IR6ieUHxfQNI2WlrhdMeLmdSr1dDPMs7TEHuj54og+1/joNCBYtKYrRieur lJcQ== X-Gm-Message-State: AOAM530Eb4+WVZ7yPWl1pWIlIN8NC9KElVRH76Y5O8MrotFJ4mrp4/jx v9DNm/X1eYK+2qzEaXSejnhgnY6hkOo= X-Google-Smtp-Source: ABdhPJxVV4T9C09wqyIdXgFToLx1vDvH9EOTwMrEoQxzSfC22EPkLnc3ugdN+n0B/1TwWMfqB3q/Pg== X-Received: by 2002:adf:e411:: with SMTP id g17mr17501684wrm.225.1616408996399; Mon, 22 Mar 2021 03:29:56 -0700 (PDT) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id x25sm18700090wmj.14.2021.03.22.03.29.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 22 
Mar 2021 03:29:56 -0700 (PDT) Message-Id: <4e8c352fb366471c02d1cdf605d37e017eb3f507.1616408989.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Mon, 22 Mar 2021 10:29:43 +0000 Subject: [PATCH v7 07/12] unix-socket: eliminate static unix_stream_socket() helper function Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: =?utf-8?b?w4Z2YXIgQXJuZmrDtnLDsA==?= Bjarmason , Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Chris Torek , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler The static helper function `unix_stream_socket()` calls `die()`. This is not appropriate for all callers. Eliminate the wrapper function and make the callers propagate the error. Signed-off-by: Jeff Hostetler --- unix-socket.c | 27 +++++++++++++-------------- 1 file changed, 13 insertions(+), 14 deletions(-) diff --git a/unix-socket.c b/unix-socket.c index 19ed48be9902..69f81d64e9d5 100644 --- a/unix-socket.c +++ b/unix-socket.c @@ -1,14 +1,6 @@ #include "cache.h" #include "unix-socket.h" -static int unix_stream_socket(void) -{ - int fd = socket(AF_UNIX, SOCK_STREAM, 0); - if (fd < 0) - die_errno("unable to create socket"); - return fd; -} - static int chdir_len(const char *orig, int len) { char *path = xmemdupz(orig, len); @@ -73,13 +65,16 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path, int unix_stream_connect(const char *path) { - int fd, saved_errno; + int fd = -1, saved_errno; struct sockaddr_un sa; struct unix_sockaddr_context ctx; if (unix_sockaddr_init(&sa, path, &ctx) < 0) return -1; - fd = unix_stream_socket(); + fd = socket(AF_UNIX, SOCK_STREAM, 0); + if (fd < 0) + goto fail; + if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) goto fail; unix_sockaddr_cleanup(&ctx); @@ -87,15 +82,16 @@ int unix_stream_connect(const char *path) fail: saved_errno = errno; + if (fd != -1) + close(fd); unix_sockaddr_cleanup(&ctx); - close(fd); errno = saved_errno; return -1; } int unix_stream_listen(const char *path) { - int fd, saved_errno; + int fd = -1, saved_errno; struct sockaddr_un sa; struct unix_sockaddr_context ctx; @@ -103,7 +99,9 @@ int unix_stream_listen(const char *path) if (unix_sockaddr_init(&sa, path, &ctx) < 0) return -1; - fd = unix_stream_socket(); + fd = socket(AF_UNIX, SOCK_STREAM, 0); + if (fd < 0) + goto fail; if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) goto fail; @@ -116,8 +114,9 @@ int unix_stream_listen(const char *path) fail: saved_errno = errno; + if (fd != -1) + close(fd); unix_sockaddr_cleanup(&ctx); - close(fd); errno = saved_errno; return -1; } From patchwork Mon Mar 22 10:29:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12154351 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F35EFC433EA for ; Mon, 22 Mar 2021 10:31:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 
D4662619A3 for ; Mon, 22 Mar 2021 10:31:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230273AbhCVKaj (ORCPT ); Mon, 22 Mar 2021 06:30:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53814 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229962AbhCVK37 (ORCPT ); Mon, 22 Mar 2021 06:29:59 -0400 Received: from mail-wr1-x42a.google.com (mail-wr1-x42a.google.com [IPv6:2a00:1450:4864:20::42a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 48496C061756 for ; Mon, 22 Mar 2021 03:29:58 -0700 (PDT) Received: by mail-wr1-x42a.google.com with SMTP id v4so16065822wrp.13 for ; Mon, 22 Mar 2021 03:29:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=hZi6C6sQmeIRBlncuQOMliFiSKu+0LMYAJ0CX1WIRg0=; b=HIwRE5akY/zr21XjS6Iskx7BfvRKgBveRccWYlBI/v1+S4CXRzUgpqXDqrAS1qw7ok GobqTWflrrNArHJC4hbtiFhinjBRvEonmPRbqV6QQpelQxW9djC1znDb7qGffPfLCmwd 8EqkC5fBglFa9j2wzZmPAFrL0FyHRrkc+68I5fM7Thskr3VqRr19oJRQXMKh2tuscp2T Frfcuve21vIFUzzTgmyl8GFh2LziHQBOAWL6X9nleMlrnp7p/5wRKxRwGjrC62k51VVZ SyHUFJUxv00lycikdqgkOYd85ogga8Z3Osurmq9idcZiBKsdqYvTQQqqOtUcEgS6nAh3 E/xw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=hZi6C6sQmeIRBlncuQOMliFiSKu+0LMYAJ0CX1WIRg0=; b=sOGtbM97/66XhoqE46rco3Y/920Mwjj99PWRE5j8YDzJUkgzJIy0qN1QlfuwubBetb J8dIO8RuTFVD00ybSyOo68GUngdgsg/qBtTfBArBjrg52ZSNa0X5dYGDiWZAPPYLnyIK zMrwu2C9M8+YcHuwjnTIAk+KLB0DOcivO789EP7lBiF+qlALWycl7LS0LylzqVCKLYgA nXueVbF7WVVGPK2lbHsdhw9Hxhtcey7hjeiPHpSEPn6Jz9oHX/ozVYxKpB0TCvVDyBnI R2EfpJGuBR+aAUpsRef41964F/m0lkUD4DuIIkmGpzfkBHVl3MlSBGAMglTWE2kSOXu5 eXdg== X-Gm-Message-State: AOAM530PEAgPKmokGo2OGbT1yXMZkq6x6rB6Oe3GT04Vz+9ttvtmp05W +FZeLgWccxx0ATJk8vXNbYdbmjuBhhs= X-Google-Smtp-Source: ABdhPJxrLytsmx6eX1/qY8jJc/u1p5sQdy43KnWanjUUEBhA2N/OcCddywTifyc4iTAh2bzjvbyOmw== X-Received: by 2002:a5d:4341:: with SMTP id u1mr17520325wrr.88.1616408997117; Mon, 22 Mar 2021 03:29:57 -0700 (PDT) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id n6sm22232080wrw.63.2021.03.22.03.29.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 22 Mar 2021 03:29:56 -0700 (PDT) Message-Id: <3b71f52d862832f3eb65d4041baac4c3a9f3e153.1616408989.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Mon, 22 Mar 2021 10:29:44 +0000 Subject: [PATCH v7 08/12] unix-socket: add backlog size option to unix_stream_listen() Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: =?utf-8?b?w4Z2YXIgQXJuZmrDtnLDsA==?= Bjarmason , Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Chris Torek , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Update `unix_stream_listen()` to take an options structure to override default behaviors. This commit includes the size of the `listen()` backlog. 
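As a sketch of the new calling convention (the `socket_path` variable and the backlog value of 128 here are illustrative): struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT; opts.listen_backlog_size = 128; fd = unix_stream_listen(socket_path, &opts); Leaving `listen_backlog_size` at zero keeps the previous behavior, since the implementation falls back to a built-in default backlog of 5.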
Signed-off-by: Jeff Hostetler --- builtin/credential-cache--daemon.c | 3 ++- unix-socket.c | 11 +++++++++-- unix-socket.h | 9 ++++++++- 3 files changed, 19 insertions(+), 4 deletions(-) diff --git a/builtin/credential-cache--daemon.c b/builtin/credential-cache--daemon.c index c61f123a3b81..4c6c89ab0de2 100644 --- a/builtin/credential-cache--daemon.c +++ b/builtin/credential-cache--daemon.c @@ -203,9 +203,10 @@ static int serve_cache_loop(int fd) static void serve_cache(const char *socket_path, int debug) { + struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT; int fd; - fd = unix_stream_listen(socket_path); + fd = unix_stream_listen(socket_path, &opts); if (fd < 0) die_errno("unable to bind to '%s'", socket_path); diff --git a/unix-socket.c b/unix-socket.c index 69f81d64e9d5..012becd93d57 100644 --- a/unix-socket.c +++ b/unix-socket.c @@ -1,6 +1,8 @@ #include "cache.h" #include "unix-socket.h" +#define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5) + static int chdir_len(const char *orig, int len) { char *path = xmemdupz(orig, len); @@ -89,9 +91,11 @@ int unix_stream_connect(const char *path) return -1; } -int unix_stream_listen(const char *path) +int unix_stream_listen(const char *path, + const struct unix_stream_listen_opts *opts) { int fd = -1, saved_errno; + int backlog; struct sockaddr_un sa; struct unix_sockaddr_context ctx; @@ -106,7 +110,10 @@ int unix_stream_listen(const char *path) if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) goto fail; - if (listen(fd, 5) < 0) + backlog = opts->listen_backlog_size; + if (backlog <= 0) + backlog = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG; + if (listen(fd, backlog) < 0) goto fail; unix_sockaddr_cleanup(&ctx); diff --git a/unix-socket.h b/unix-socket.h index e271aeec5a07..ec2fb3ea7267 100644 --- a/unix-socket.h +++ b/unix-socket.h @@ -1,7 +1,14 @@ #ifndef UNIX_SOCKET_H #define UNIX_SOCKET_H +struct unix_stream_listen_opts { + int listen_backlog_size; +}; + +#define UNIX_STREAM_LISTEN_OPTS_INIT { 0 } + int unix_stream_connect(const char *path); -int unix_stream_listen(const char *path); +int unix_stream_listen(const char *path, + const struct unix_stream_listen_opts *opts); #endif /* UNIX_SOCKET_H */ From patchwork Mon Mar 22 10:29:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12154353 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E2C34C433EB for ; Mon, 22 Mar 2021 10:31:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C037B619A1 for ; Mon, 22 Mar 2021 10:31:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230264AbhCVKai (ORCPT ); Mon, 22 Mar 2021 06:30:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53812 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230186AbhCVK37 (ORCPT ); Mon, 22 Mar 2021 06:29:59 -0400 Received: from mail-wr1-x433.google.com (mail-wr1-x433.google.com 
[IPv6:2a00:1450:4864:20::433]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E430FC061574 for ; Mon, 22 Mar 2021 03:29:58 -0700 (PDT) Received: by mail-wr1-x433.google.com with SMTP id b9so16075111wrt.8 for ; Mon, 22 Mar 2021 03:29:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=lHzIhsUhBoGLcHN3G24uN3UPY7gPzQXhJA11HE1H76c=; b=ib+v/nqji2Mjpzs2h1I1/ewAV+wmHYp8j/qhC6/uobtUG4+RvL5e20AW5IKWymdNdF QolZkdOexC/iet223hvzhAk0M5xZMRDj2yI+J1CXA5+xV5CS3HvAqJJiOxUpziAU8dxY bHJ/VhtUpKGuoBxGazbAkOHNf14saiyVxY4llKvitUey4RUCi6BCgHErTO2HXweesSs4 v4iUWpj6KsGmjqCLPaK9RxG+0Bw9PsF0vEcujQKvcE2qI+nyvNfyJwj50GMtZAhRalzf pP8hOSpDZGTpqhSeZJOLX2R9F/l9B7KdRqwsqZQ7LHy8Xhx/N3YGDRcBI7lJooMtyLqa wvEA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=lHzIhsUhBoGLcHN3G24uN3UPY7gPzQXhJA11HE1H76c=; b=GMM9EfISPqEFjLQqbQbDLILRwic1iNuySkX2tPn7koY3sgdD67ORrN/+N5YjgGax0r klPQpp6UCiezd9k6FnedhddlHM18Jk9Wb6WihcDpHCsJ9ihgQ13LpioLAhTaxM6d+jsw 0ZfPJ/QHliI0D6wRykhnXwdGEfeBi03+HIHliVbAJk2TFA/mOuv+MX9aGgZYQ6+Hh37G fxuY0N54E5oN1BgyYVot28SODSvcraEIy3d9TNNTy3nnOpI4W2Y7zcAEPHW1kjXVKVsT jqYzNJ4H2qsJgMCrDfVN8Eb7vM3gIQ6Lur7T3ua/Y0TSzxXVDl7+FUlpsYf0jgBYII5m 11QA== X-Gm-Message-State: AOAM531huc9kWfcR9THoS28S7llWwOpu3DvD1QBjZUWBfHI+RGeUFpSv plASwfPuVrhdPDtuED7U9XMipeJ3Vzo= X-Google-Smtp-Source: ABdhPJxVTVxBUg+mOBTevYhSJ2H4P7E0hYkXGdsozQtJFu2segbaIvKn5bFJcKmIPAQ514BiDMbMqA== X-Received: by 2002:adf:b1de:: with SMTP id r30mr17856584wra.125.1616408997724; Mon, 22 Mar 2021 03:29:57 -0700 (PDT) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id n1sm23043062wro.36.2021.03.22.03.29.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 22 Mar 2021 03:29:57 -0700 (PDT) Message-Id: <5972a198361c153e000a9d11c05bec480cb9f4ce.1616408989.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Mon, 22 Mar 2021 10:29:45 +0000 Subject: [PATCH v7 09/12] unix-socket: disallow chdir() when creating unix domain sockets Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: =?utf-8?b?w4Z2YXIgQXJuZmrDtnLDsA==?= Bjarmason , Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Chris Torek , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Calls to `chdir()` are dangerous in a multi-threaded context. If `unix_stream_listen()` or `unix_stream_connect()` is given a socket pathname that is too long to fit in a `sockaddr_un` structure, it will `chdir()` to the parent directory of the requested socket pathname, create the socket using a relative pathname, and then `chdir()` back. This is not thread-safe. Teach `unix_sockaddr_init()` to not allow calls to `chdir()` when this flag is set. 
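As a sketch (the `path` variable is illustrative), a thread-safe caller can now write: fd = unix_stream_connect(path, 1); /* disallow chdir() */ and, on the listen side, set `opts.disallow_chdir = 1` in `struct unix_stream_listen_opts`. With the flag set, an over-long socket pathname fails immediately with ENAMETOOLONG rather than triggering the chdir() dance.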
Signed-off-by: Jeff Hostetler --- builtin/credential-cache.c | 2 +- unix-socket.c | 17 ++++++++++++----- unix-socket.h | 3 ++- 3 files changed, 15 insertions(+), 7 deletions(-) diff --git a/builtin/credential-cache.c b/builtin/credential-cache.c index 9b3f70990597..76a6ba37223f 100644 --- a/builtin/credential-cache.c +++ b/builtin/credential-cache.c @@ -14,7 +14,7 @@ static int send_request(const char *socket, const struct strbuf *out) { int got_data = 0; - int fd = unix_stream_connect(socket); + int fd = unix_stream_connect(socket, 0); if (fd < 0) return -1; diff --git a/unix-socket.c b/unix-socket.c index 012becd93d57..e0be1badb58d 100644 --- a/unix-socket.c +++ b/unix-socket.c @@ -30,16 +30,23 @@ static void unix_sockaddr_cleanup(struct unix_sockaddr_context *ctx) } static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path, - struct unix_sockaddr_context *ctx) + struct unix_sockaddr_context *ctx, + int disallow_chdir) { int size = strlen(path) + 1; ctx->orig_dir = NULL; if (size > sizeof(sa->sun_path)) { - const char *slash = find_last_dir_sep(path); + const char *slash; const char *dir; struct strbuf cwd = STRBUF_INIT; + if (disallow_chdir) { + errno = ENAMETOOLONG; + return -1; + } + + slash = find_last_dir_sep(path); if (!slash) { errno = ENAMETOOLONG; return -1; @@ -65,13 +72,13 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path, return 0; } -int unix_stream_connect(const char *path) +int unix_stream_connect(const char *path, int disallow_chdir) { int fd = -1, saved_errno; struct sockaddr_un sa; struct unix_sockaddr_context ctx; - if (unix_sockaddr_init(&sa, path, &ctx) < 0) + if (unix_sockaddr_init(&sa, path, &ctx, disallow_chdir) < 0) return -1; fd = socket(AF_UNIX, SOCK_STREAM, 0); if (fd < 0) @@ -101,7 +108,7 @@ int unix_stream_listen(const char *path, unlink(path); - if (unix_sockaddr_init(&sa, path, &ctx) < 0) + if (unix_sockaddr_init(&sa, path, &ctx, opts->disallow_chdir) < 0) return -1; fd = socket(AF_UNIX, SOCK_STREAM, 0); if (fd < 0) diff --git a/unix-socket.h b/unix-socket.h index ec2fb3ea7267..8542cdd7995d 100644 --- a/unix-socket.h +++ b/unix-socket.h @@ -3,11 +3,12 @@ struct unix_stream_listen_opts { int listen_backlog_size; + unsigned int disallow_chdir:1; }; #define UNIX_STREAM_LISTEN_OPTS_INIT { 0 } -int unix_stream_connect(const char *path); +int unix_stream_connect(const char *path, int disallow_chdir); int unix_stream_listen(const char *path, const struct unix_stream_listen_opts *opts); From patchwork Mon Mar 22 10:29:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12154355 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 40320C433EC for ; Mon, 22 Mar 2021 10:31:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 10132619C6 for ; Mon, 22 Mar 2021 10:31:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230285AbhCVKam (ORCPT ); Mon, 22 
Mar 2021 06:30:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53814 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230195AbhCVK37 (ORCPT ); Mon, 22 Mar 2021 06:29:59 -0400 Received: from mail-wr1-x434.google.com (mail-wr1-x434.google.com [IPv6:2a00:1450:4864:20::434]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 86DAAC061574 for ; Mon, 22 Mar 2021 03:29:59 -0700 (PDT) Received: by mail-wr1-x434.google.com with SMTP id v11so16058575wro.7 for ; Mon, 22 Mar 2021 03:29:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=5Pdau/r0fcMGLYHt+z4JaTsxB8Kwqlni+GJpFcY8nBk=; b=lcZQqRrvPj4rmifLdp8zi1SefEJNkM3Gn3DM+GcHFMJVVfWBm/mTiu2WynGE1qeWXG y0/C5IizAkPgKgCe7HJNJOrjoLb0xB0EtaIA0MnkW2bHMsyFPlNPHZ0I/OrGOfc9mj+G wbiSpZMJGWA8Mbrc5YoFypMAPjZI7AKbovOMHgr4o8rKNdZRk/0i6gPsjDFNGOIg4mvU aTPPq7FGo/hLxBT2wQKqJsrbdeZOqwEpeNAgkuLwf2DowOgE0jnP6YwkYTcBQvUDSGeo 2OX1WhF1MEAkUOgQTzfDJPgzZCTAbWejqLMBu/daCm6PhZlP7s9jJOT9OOc/nBJkepDx Qn+w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=5Pdau/r0fcMGLYHt+z4JaTsxB8Kwqlni+GJpFcY8nBk=; b=okNWa5mTBJLzDjf86VFOXIteMTgYk4CPI/os5dzBcGsL4oRxujgWHyxjkKfkt2T8Bc svxaK81FNinGv8u6dVn1SLsyXNB5FfKFgOdvgbM8Y2EhAiqKqq/oFcGkZf7WzQv1IBAD x2AHWjexy2pxjpFbRL3XaIU5MqfWXk8kpQZ21ZKrUP0C6uml/c00uGN3OQknbPadPxyg G0RDjoocB7TfBWKhRAMh6pCtcgWVgp4Rj7OkpHJ7i9Ly+VvmDYjtvN4U5RsgzR2mRzYq Y6eqBYL9B0nRQ2i99bxx2pbPEKzg6bVqFto5kQyt0cCeUhFb8qR2S2U7hwaTY1jB8g/g gVYg== X-Gm-Message-State: AOAM531J4Tlzh3ecwSttWD40WddjCrrBH68bUPvrjhIeMnvSbRjKrLF9 ZVOs+D8y9L3yb65IVjAckW19Opt/I0E= X-Google-Smtp-Source: ABdhPJykcxoP+I4NibwCa0MOuljkSLDSkWZjwszT4eHkF66VcuRMDCi5D/GjI5Oj6QaUg/k3paXBZA== X-Received: by 2002:a5d:46c4:: with SMTP id g4mr17328318wrs.241.1616408998339; Mon, 22 Mar 2021 03:29:58 -0700 (PDT) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id v18sm20259560wrf.41.2021.03.22.03.29.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 22 Mar 2021 03:29:58 -0700 (PDT) Message-Id: <02c885fd623df3551c46aa270c23f87e7ef79af2.1616408989.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Mon, 22 Mar 2021 10:29:46 +0000 Subject: [PATCH v7 10/12] unix-stream-server: create unix domain socket under lock Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: =?utf-8?b?w4Z2YXIgQXJuZmrDtnLDsA==?= Bjarmason , Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Chris Torek , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Create a wrapper class for `unix_stream_listen()` that uses a ".lock" lockfile to create the unix domain socket in a race-free manner. Unix domain sockets have a fundamental problem on Unix systems because they persist in the filesystem until they are deleted. This is independent of whether a server is actually listening for connections. Well-behaved servers are expected to delete the socket when they shutdown. A new server cannot easily tell if a found socket is attached to an active server or is leftover cruft from a dead server. The traditional solution used by `unix_stream_listen()` is to force delete the socket pathname and then create a new socket. 
This solves the latter (cruft) problem, but in the case of the former, it orphans the existing server (by stealing the pathname associated with the socket it is listening on). We cannot directly use a .lock lockfile to create the socket because the socket is created by `bind(2)` rather than the `open(2)` mechanism used by `tempfile.c`. As an alternative, we hold a plain lockfile ("<path>.lock") as a mutual exclusion device. Under the lock, we test if an existing socket ("<path>") has an active server. If not, we create a new socket and begin listening. Then we use "rollback" to delete the lockfile in all cases. This wrapper code conceptually exists at a higher level than the core unix_stream_connect() and unix_stream_listen() routines that it consumes. It is isolated in a wrapper class for clarity. Signed-off-by: Jeff Hostetler --- Makefile | 1 + contrib/buildsystems/CMakeLists.txt | 2 +- unix-stream-server.c | 125 ++++++++++++++++++++++++++++ unix-stream-server.h | 33 ++++++++ 4 files changed, 160 insertions(+), 1 deletion(-) create mode 100644 unix-stream-server.c create mode 100644 unix-stream-server.h diff --git a/Makefile b/Makefile index d3c42d3f4f9f..012694276f6d 100644 --- a/Makefile +++ b/Makefile @@ -1665,6 +1665,7 @@ ifdef NO_UNIX_SOCKETS BASIC_CFLAGS += -DNO_UNIX_SOCKETS else LIB_OBJS += unix-socket.o + LIB_OBJS += unix-stream-server.o endif ifdef USE_WIN32_IPC diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt index 40c9e8e3bd9d..c94011269ebb 100644 --- a/contrib/buildsystems/CMakeLists.txt +++ b/contrib/buildsystems/CMakeLists.txt @@ -243,7 +243,7 @@ if(CMAKE_SYSTEM_NAME STREQUAL "Windows") elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux") add_compile_definitions(PROCFS_EXECUTABLE_PATH="/proc/self/exe" HAVE_DEV_TTY ) - list(APPEND compat_SOURCES unix-socket.c) + list(APPEND compat_SOURCES unix-socket.c unix-stream-server.c) endif() if(CMAKE_SYSTEM_NAME STREQUAL "Windows") diff --git a/unix-stream-server.c b/unix-stream-server.c new file mode 100644 index 000000000000..efa2a207abcd --- /dev/null +++ b/unix-stream-server.c @@ -0,0 +1,125 @@ +#include "cache.h" +#include "lockfile.h" +#include "unix-socket.h" +#include "unix-stream-server.h" + +#define DEFAULT_LOCK_TIMEOUT (100) + +/* + * Try to connect to a unix domain socket at `path` (if it exists) and + * see if there is a server listening. + * + * We don't know if the socket exists, whether a server died and + * failed to cleanup, or whether we have a live server listening, so + * we "poke" it. + * + * We immediately hangup without sending/receiving any data because we + * don't know anything about the protocol spoken and don't want to + * block while writing/reading data. It is sufficient to just know + * that someone is listening. + */ +static int is_another_server_alive(const char *path, + const struct unix_stream_listen_opts *opts) +{ + int fd = unix_stream_connect(path, opts->disallow_chdir); + if (fd >= 0) { + close(fd); + return 1; + } + + return 0; +} + +int unix_ss_create(const char *path, + const struct unix_stream_listen_opts *opts, + long timeout_ms, + struct unix_ss_socket **new_server_socket) +{ + struct lock_file lock = LOCK_INIT; + int fd_socket; + struct unix_ss_socket *server_socket; + + *new_server_socket = NULL; + + if (timeout_ms < 0) + timeout_ms = DEFAULT_LOCK_TIMEOUT; + + /* + * Create a lock at "<path>.lock" if we can. + */ + if (hold_lock_file_for_update_timeout(&lock, path, 0, timeout_ms) < 0) + return -1; + + /* + * If another server is listening on "<path>", give up.
We do not + * want to create a socket and steal future connections from them. + */ + if (is_another_server_alive(path, opts)) { + rollback_lock_file(&lock); + errno = EADDRINUSE; + return -2; + } + + /* + * Create and bind to a Unix domain socket at "<path>". + */ + fd_socket = unix_stream_listen(path, opts); + if (fd_socket < 0) { + int saved_errno = errno; + rollback_lock_file(&lock); + errno = saved_errno; + return -1; + } + + server_socket = xcalloc(1, sizeof(*server_socket)); + server_socket->path_socket = strdup(path); + server_socket->fd_socket = fd_socket; + lstat(path, &server_socket->st_socket); + + *new_server_socket = server_socket; + + /* + * Always rollback (just delete) "<path>.lock" because we already created + * "<path>" as a socket and do not want commit_lock to do the atomic + * rename trick. + */ + rollback_lock_file(&lock); + + return 0; +} + +void unix_ss_free(struct unix_ss_socket *server_socket) +{ + if (!server_socket) + return; + + if (server_socket->fd_socket >= 0) { + if (!unix_ss_was_stolen(server_socket)) + unlink(server_socket->path_socket); + close(server_socket->fd_socket); + } + + free(server_socket->path_socket); + free(server_socket); +} + +int unix_ss_was_stolen(struct unix_ss_socket *server_socket) +{ + struct stat st_now; + + if (!server_socket) + return 0; + + if (lstat(server_socket->path_socket, &st_now) == -1) + return 1; + + if (st_now.st_ino != server_socket->st_socket.st_ino) + return 1; + if (st_now.st_dev != server_socket->st_socket.st_dev) + return 1; + + if (!S_ISSOCK(st_now.st_mode)) + return 1; + + return 0; +} diff --git a/unix-stream-server.h b/unix-stream-server.h new file mode 100644 index 000000000000..ae2712ba39b1 --- /dev/null +++ b/unix-stream-server.h @@ -0,0 +1,33 @@ +#ifndef UNIX_STREAM_SERVER_H +#define UNIX_STREAM_SERVER_H + +#include "unix-socket.h" + +struct unix_ss_socket { + char *path_socket; + struct stat st_socket; + int fd_socket; +}; + +/* + * Create a Unix Domain Socket at the given path under the protection + * of a '.lock' lockfile. + * + * Returns 0 on success, -1 on error, -2 if socket is in use. + */ +int unix_ss_create(const char *path, + const struct unix_stream_listen_opts *opts, + long timeout_ms, + struct unix_ss_socket **server_socket); + +/* + * Close and delete the socket. + */ +void unix_ss_free(struct unix_ss_socket *server_socket); + +/* + * Return 1 if the inode of the pathname to our socket changes.
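+ * + * (It also returns 1 if the device changes, if the pathname can no longer be lstat'd, or if the inode is no longer a socket -- any of these means another process has deleted or taken over our pathname.)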
+ */ +int unix_ss_was_stolen(struct unix_ss_socket *server_socket); + +#endif /* UNIX_STREAM_SERVER_H */ From patchwork Mon Mar 22 10:29:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12154359 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 66DCEC433F2 for ; Mon, 22 Mar 2021 10:31:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4B6DA619AF for ; Mon, 22 Mar 2021 10:31:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230288AbhCVKam (ORCPT ); Mon, 22 Mar 2021 06:30:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53828 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230194AbhCVKaB (ORCPT ); Mon, 22 Mar 2021 06:30:01 -0400 Received: from mail-wr1-x429.google.com (mail-wr1-x429.google.com [IPv6:2a00:1450:4864:20::429]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 72473C061574 for ; Mon, 22 Mar 2021 03:30:00 -0700 (PDT) Received: by mail-wr1-x429.google.com with SMTP id o16so16096666wrn.0 for ; Mon, 22 Mar 2021 03:30:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=h36MvPrkxYbI87PiJht3Uy6WAT/7y+IB8YI1O+bBO1k=; b=LuhFGC3/C2f7Fjn/URNrUBqLjr/Bf0+a6phzt8wJSL4weC0wTUbZEw0CMgLbt7md3Z eCEr2dd5IrEMyU+1Z13X6ptw+6wp+K95029CvF7OykUdBU+/El/n6Vxv9g1JNuuh4ums ehBA5EgR4j2IVwom6aWRH1uNrS+0yEY34sK8THAfuiC6ES8M677l0hI9tb7C7KvosXLY lXUITaV75hYW4X9DWOPrZToR5gL3wVOy0+QTXuG2EpOCLyFXQ5EFLJrRQRu6wOxBhd07 WO3H4XKPmP4diMf0NsxUxFEPm7v/Gj2kIHmALApdNGaEdt7Q0s+2k8n+apDxVwnKPDKO Ej9A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=h36MvPrkxYbI87PiJht3Uy6WAT/7y+IB8YI1O+bBO1k=; b=mfe/TxNtecjwygY4YnvWFF0tTbu+wezDAn9EAzi7vXZD5e3nbA3dawV89JZ89xJSBj uQHzyVrkYfpa4toeSgC25R9Xj7sc3cy/39Jfp+3B75VLLlqaXXZLjClGPNFth7jlZMQw cWXxnh7CiDU1QFQOPd69MbRyGyAObcpiSw+qB9re7fYcwZ2FkjSGJGUCLwcQm1xmtk5k h57eCOCzJ5aWaPYV4JLlZEcinNlkVDDn7rf2jmCSbfQBukcsCY7ydVLsrZyGAXdshkAF uhDzr+lyQtvhEl7+g9kqoH6b3O9eG+Havr2kZGBrrsd2WPHisKa8S0gABJ4lEl8PX+eu K1xw== X-Gm-Message-State: AOAM531UCUdpV4TVExMGjJa0asz0uVVFZ4yBlzdttgXGHOJ1RkiznFIq 7xKgAeoOJSr1QSz/B3Oj62DKKhVYxR4= X-Google-Smtp-Source: ABdhPJx1k/dNdB+UEfwENdvQCNdTfjD4nzmaSm1ArnOOG7P/t/V/Z6oMrLTg47HY1PdRHmsD3VE5QA== X-Received: by 2002:a05:6000:1210:: with SMTP id e16mr17610395wrx.19.1616408999019; Mon, 22 Mar 2021 03:29:59 -0700 (PDT) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id a14sm18894742wrn.5.2021.03.22.03.29.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 22 Mar 2021 03:29:58 -0700 (PDT) Message-Id: In-Reply-To: References: Date: Mon, 22 Mar 2021 10:29:47 +0000 
Subject: [PATCH v7 11/12] simple-ipc: add Unix domain socket implementation Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: =?utf-8?b?w4Z2YXIgQXJuZmrDtnLDsA==?= Bjarmason , Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Chris Torek , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Create a Unix domain socket based implementation of "simple-ipc". A set of `ipc_client` routines implement a client library to connect to an `ipc_server` over a Unix domain socket, send a simple request, and receive a single response. Clients use blocking IO on the socket. A set of `ipc_server` routines implement a thread pool to listen for and concurrently service client connections. The server creates a new Unix domain socket at a known location. If a socket already exists with that name, the server tries to determine if another server is already listening on the socket or if the socket is dead. If the socket is busy, the server exits with an error rather than stealing the socket. If the socket is dead, the server creates a new one and starts up. If, while running, the server detects that its socket has been stolen by another server, it automatically exits. Signed-off-by: Jeff Hostetler --- Makefile | 2 + compat/simple-ipc/ipc-unix-socket.c | 999 ++++++++++++++++++++++++++++ contrib/buildsystems/CMakeLists.txt | 2 + simple-ipc.h | 13 +- 4 files changed, 1015 insertions(+), 1 deletion(-) create mode 100644 compat/simple-ipc/ipc-unix-socket.c diff --git a/Makefile b/Makefile index 012694276f6d..20dd65d19658 100644 --- a/Makefile +++ b/Makefile @@ -1666,6 +1666,8 @@ ifdef NO_UNIX_SOCKETS else LIB_OBJS += unix-socket.o LIB_OBJS += unix-stream-server.o + LIB_OBJS += compat/simple-ipc/ipc-shared.o + LIB_OBJS += compat/simple-ipc/ipc-unix-socket.o endif ifdef USE_WIN32_IPC diff --git a/compat/simple-ipc/ipc-unix-socket.c b/compat/simple-ipc/ipc-unix-socket.c new file mode 100644 index 000000000000..38689b278df3 --- /dev/null +++ b/compat/simple-ipc/ipc-unix-socket.c @@ -0,0 +1,999 @@ +#include "cache.h" +#include "simple-ipc.h" +#include "strbuf.h" +#include "pkt-line.h" +#include "thread-utils.h" +#include "unix-socket.h" +#include "unix-stream-server.h" + +#ifdef NO_UNIX_SOCKETS +#error compat/simple-ipc/ipc-unix-socket.c requires Unix sockets +#endif + +enum ipc_active_state ipc_get_active_state(const char *path) +{ + enum ipc_active_state state = IPC_STATE__OTHER_ERROR; + struct ipc_client_connect_options options + = IPC_CLIENT_CONNECT_OPTIONS_INIT; + struct stat st; + struct ipc_client_connection *connection_test = NULL; + + options.wait_if_busy = 0; + options.wait_if_not_found = 0; + + if (lstat(path, &st) == -1) { + switch (errno) { + case ENOENT: + case ENOTDIR: + return IPC_STATE__NOT_LISTENING; + default: + return IPC_STATE__INVALID_PATH; + } + } + + /* also complain if a plain file is in the way */ + if ((st.st_mode & S_IFMT) != S_IFSOCK) + return IPC_STATE__INVALID_PATH; + + /* + * Just because the filesystem has a S_IFSOCK type inode + * at `path`, doesn't mean that there is a server listening. + * Ping it to be sure. + */ + state = ipc_client_try_connect(path, &options, &connection_test); + ipc_client_close_connection(connection_test); + + return state; +} + +/* + * Retry frequency when trying to connect to a server. + * + * This value should be short enough that we don't seriously delay our + * caller, but not so fast that our spinning puts pressure on the + * system.
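+ * + * (With WAIT_STEP_MS at 50 ms and the 1000 ms connection timeout defined below, a client makes at most 20 connection attempts before giving up.)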
+ */ +#define WAIT_STEP_MS (50) + +/* + * Try to connect to the server. If the server is just starting up or + * is very busy, we may not get a connection the first time. + */ +static enum ipc_active_state connect_to_server( + const char *path, + int timeout_ms, + const struct ipc_client_connect_options *options, + int *pfd) +{ + int k; + + *pfd = -1; + + for (k = 0; k < timeout_ms; k += WAIT_STEP_MS) { + int fd = unix_stream_connect(path, options->uds_disallow_chdir); + + if (fd != -1) { + *pfd = fd; + return IPC_STATE__LISTENING; + } + + if (errno == ENOENT) { + if (!options->wait_if_not_found) + return IPC_STATE__PATH_NOT_FOUND; + + goto sleep_and_try_again; + } + + if (errno == ETIMEDOUT) { + if (!options->wait_if_busy) + return IPC_STATE__NOT_LISTENING; + + goto sleep_and_try_again; + } + + if (errno == ECONNREFUSED) { + if (!options->wait_if_busy) + return IPC_STATE__NOT_LISTENING; + + goto sleep_and_try_again; + } + + return IPC_STATE__OTHER_ERROR; + + sleep_and_try_again: + sleep_millisec(WAIT_STEP_MS); + } + + return IPC_STATE__NOT_LISTENING; +} + +/* + * The total amount of time that we are willing to wait when trying to + * connect to a server. + * + * When the server is first started, it might take a little while for + * it to become ready to service requests. Likewise, the server may + * be very (temporarily) busy and not respond to our connections. + * + * We should gracefully and silently handle those conditions and try + * again for a reasonable time period. + * + * The value chosen here should be long enough for the server + * to reliably heal from the above conditions. + */ +#define MY_CONNECTION_TIMEOUT_MS (1000) + +enum ipc_active_state ipc_client_try_connect( + const char *path, + const struct ipc_client_connect_options *options, + struct ipc_client_connection **p_connection) +{ + enum ipc_active_state state = IPC_STATE__OTHER_ERROR; + int fd = -1; + + *p_connection = NULL; + + trace2_region_enter("ipc-client", "try-connect", NULL); + trace2_data_string("ipc-client", NULL, "try-connect/path", path); + + state = connect_to_server(path, MY_CONNECTION_TIMEOUT_MS, + options, &fd); + + trace2_data_intmax("ipc-client", NULL, "try-connect/state", + (intmax_t)state); + trace2_region_leave("ipc-client", "try-connect", NULL); + + if (state == IPC_STATE__LISTENING) { + (*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection)); + (*p_connection)->fd = fd; + } + + return state; +} + +void ipc_client_close_connection(struct ipc_client_connection *connection) +{ + if (!connection) + return; + + if (connection->fd != -1) + close(connection->fd); + + free(connection); +} + +int ipc_client_send_command_to_connection( + struct ipc_client_connection *connection, + const char *message, struct strbuf *answer) +{ + int ret = 0; + + strbuf_setlen(answer, 0); + + trace2_region_enter("ipc-client", "send-command", NULL); + + if (write_packetized_from_buf_no_flush(message, strlen(message), + connection->fd) < 0 || + packet_flush_gently(connection->fd) < 0) { + ret = error(_("could not send IPC command")); + goto done; + } + + if (read_packetized_to_strbuf( + connection->fd, answer, + PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR) < 0) { + ret = error(_("could not read IPC response")); + goto done; + } + +done: + trace2_region_leave("ipc-client", "send-command", NULL); + return ret; +} + +int ipc_client_send_command(const char *path, + const struct ipc_client_connect_options *options, + const char *message, struct strbuf *answer) +{ + int ret = -1; + enum ipc_active_state 
state; + struct ipc_client_connection *connection = NULL; + + state = ipc_client_try_connect(path, options, &connection); + + if (state != IPC_STATE__LISTENING) + return ret; + + ret = ipc_client_send_command_to_connection(connection, message, answer); + + ipc_client_close_connection(connection); + + return ret; +} + +static int set_socket_blocking_flag(int fd, int make_nonblocking) +{ + int flags; + + flags = fcntl(fd, F_GETFL, NULL); + + if (flags < 0) + return -1; + + if (make_nonblocking) + flags |= O_NONBLOCK; + else + flags &= ~O_NONBLOCK; + + return fcntl(fd, F_SETFL, flags); +} + +/* + * Magic numbers used to annotate callback instance data. + * These are used to help guard against accidentally passing the + * wrong instance data across multiple levels of callbacks (which + * is easy to do if there are `void*` arguments). + */ +enum magic { + MAGIC_SERVER_REPLY_DATA, + MAGIC_WORKER_THREAD_DATA, + MAGIC_ACCEPT_THREAD_DATA, + MAGIC_SERVER_DATA, +}; + +struct ipc_server_reply_data { + enum magic magic; + int fd; + struct ipc_worker_thread_data *worker_thread_data; +}; + +struct ipc_worker_thread_data { + enum magic magic; + struct ipc_worker_thread_data *next_thread; + struct ipc_server_data *server_data; + pthread_t pthread_id; +}; + +struct ipc_accept_thread_data { + enum magic magic; + struct ipc_server_data *server_data; + + struct unix_ss_socket *server_socket; + + int fd_send_shutdown; + int fd_wait_shutdown; + pthread_t pthread_id; +}; + +/* + * With unix-sockets, the conceptual "ipc-server" is implemented as a single + * controller "accept-thread" thread and a pool of "worker-thread" threads. + * The former does the usual `accept()` loop and dispatches connections + * to an idle worker thread. The worker threads wait in an idle loop for + * a new connection, communicate with the client and relay data to/from + * the `application_cb` and then wait for another connection from the + * server thread. This avoids the overhead of constantly creating and + * destroying threads. + */ +struct ipc_server_data { + enum magic magic; + ipc_server_application_cb *application_cb; + void *application_data; + struct strbuf buf_path; + + struct ipc_accept_thread_data *accept_thread; + struct ipc_worker_thread_data *worker_thread_list; + + pthread_mutex_t work_available_mutex; + pthread_cond_t work_available_cond; + + /* + * Accepted but not yet processed client connections are kept + * in a circular buffer FIFO. The queue is empty when the + * positions are equal. + */ + int *fifo_fds; + int queue_size; + int back_pos; + int front_pos; + + int shutdown_requested; + int is_stopped; +}; + +/* + * Remove and return the oldest queued connection. + * + * Returns -1 if empty. + */ +static int fifo_dequeue(struct ipc_server_data *server_data) +{ + /* ASSERT holding mutex */ + + int fd; + + if (server_data->back_pos == server_data->front_pos) + return -1; + + fd = server_data->fifo_fds[server_data->front_pos]; + server_data->fifo_fds[server_data->front_pos] = -1; + + server_data->front_pos++; + if (server_data->front_pos == server_data->queue_size) + server_data->front_pos = 0; + + return fd; +} + +/* + * Push a new fd onto the back of the queue. + * + * Drop it and return -1 if queue is already full. 
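+ * + * (One slot is intentionally left unused: the queue reports full when advancing back_pos would make it equal to front_pos, so a full queue can be distinguished from an empty one.)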
+ */ +static int fifo_enqueue(struct ipc_server_data *server_data, int fd) +{ + /* ASSERT holding mutex */ + + int next_back_pos; + + next_back_pos = server_data->back_pos + 1; + if (next_back_pos == server_data->queue_size) + next_back_pos = 0; + + if (next_back_pos == server_data->front_pos) { + /* Queue is full. Just drop it. */ + close(fd); + return -1; + } + + server_data->fifo_fds[server_data->back_pos] = fd; + server_data->back_pos = next_back_pos; + + return fd; +} + +/* + * Wait for a connection to be queued to the FIFO and return it. + * + * Returns -1 if someone has already requested a shutdown. + */ +static int worker_thread__wait_for_connection( + struct ipc_worker_thread_data *worker_thread_data) +{ + /* ASSERT NOT holding mutex */ + + struct ipc_server_data *server_data = worker_thread_data->server_data; + int fd = -1; + + pthread_mutex_lock(&server_data->work_available_mutex); + for (;;) { + if (server_data->shutdown_requested) + break; + + fd = fifo_dequeue(server_data); + if (fd >= 0) + break; + + pthread_cond_wait(&server_data->work_available_cond, + &server_data->work_available_mutex); + } + pthread_mutex_unlock(&server_data->work_available_mutex); + + return fd; +} + +/* + * Forward declare our reply callback function so that any compiler + * errors are reported when we actually define the function (in addition + * to any errors reported when we try to pass this callback function as + * a parameter in a function call). The former are easier to understand. + */ +static ipc_server_reply_cb do_io_reply_callback; + +/* + * Relay application's response message to the client process. + * (We do not flush at this point because we allow the caller + * to chunk data to the client thru us.) + */ +static int do_io_reply_callback(struct ipc_server_reply_data *reply_data, + const char *response, size_t response_len) +{ + if (reply_data->magic != MAGIC_SERVER_REPLY_DATA) + BUG("reply_cb called with wrong instance data"); + + return write_packetized_from_buf_no_flush(response, response_len, + reply_data->fd); +} + +/* A randomly chosen value. */ +#define MY_WAIT_POLL_TIMEOUT_MS (10) + +/* + * If the client hangs up without sending any data on the wire, just + * quietly close the socket and ignore this client. + * + * This worker thread is committed to reading the IPC request data + * from the client at the other end of this fd. Wait here for the + * client to actually put something on the wire -- because if the + * client just does a ping (connect and hangup without sending any + * data), our use of the pkt-line read routines will spew an error + * message. + * + * Return -1 if the client hung up. + * Return 0 if data (possibly incomplete) is ready. + */ +static int worker_thread__wait_for_io_start( + struct ipc_worker_thread_data *worker_thread_data, + int fd) +{ + struct ipc_server_data *server_data = worker_thread_data->server_data; + struct pollfd pollfd[1]; + int result; + + for (;;) { + pollfd[0].fd = fd; + pollfd[0].events = POLLIN; + + result = poll(pollfd, 1, MY_WAIT_POLL_TIMEOUT_MS); + if (result < 0) { + if (errno == EINTR) + continue; + goto cleanup; + } + + if (result == 0) { + /* a timeout */ + + int in_shutdown; + + pthread_mutex_lock(&server_data->work_available_mutex); + in_shutdown = server_data->shutdown_requested; + pthread_mutex_unlock(&server_data->work_available_mutex); + + /* + * If a shutdown is already in progress and this + * client has not started talking yet, just drop it. 
+			 */
+			if (in_shutdown)
+				goto cleanup;
+			continue;
+		}
+
+		if (pollfd[0].revents & POLLHUP)
+			goto cleanup;
+
+		if (pollfd[0].revents & POLLIN)
+			return 0;
+
+		goto cleanup;
+	}
+
+cleanup:
+	close(fd);
+	return -1;
+}
+
+/*
+ * Receive the request/command from the client and pass it to the
+ * registered request-callback. The request-callback will compose
+ * a response and call our reply-callback to send it to the client.
+ */
+static int worker_thread__do_io(
+	struct ipc_worker_thread_data *worker_thread_data,
+	int fd)
+{
+	/* ASSERT NOT holding lock */
+
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_server_reply_data reply_data;
+	int ret = 0;
+
+	reply_data.magic = MAGIC_SERVER_REPLY_DATA;
+	reply_data.worker_thread_data = worker_thread_data;
+
+	reply_data.fd = fd;
+
+	ret = read_packetized_to_strbuf(
+		reply_data.fd, &buf,
+		PACKET_READ_GENTLE_ON_EOF | PACKET_READ_GENTLE_ON_READ_ERROR);
+	if (ret >= 0) {
+		ret = worker_thread_data->server_data->application_cb(
+			worker_thread_data->server_data->application_data,
+			buf.buf, do_io_reply_callback, &reply_data);
+
+		packet_flush_gently(reply_data.fd);
+	}
+	else {
+		/*
+		 * The client probably disconnected/shut down before it
+		 * could send a well-formed message. Ignore it.
+		 */
+	}
+
+	strbuf_release(&buf);
+	close(reply_data.fd);
+
+	return ret;
+}
+
+/*
+ * Block SIGPIPE on the current thread (so that we get EPIPE from
+ * write() rather than an actual signal).
+ *
+ * Note that using sigchain_push() and _pop() to control SIGPIPE
+ * around our IO calls is not thread safe:
+ * [] It uses a global stack of handler frames.
+ * [] It uses ALLOC_GROW() to resize it.
+ * [] Finally, according to the `signal(2)` man-page:
+ *    "The effects of `signal()` in a multithreaded process are unspecified."
+ */
+static void thread_block_sigpipe(sigset_t *old_set)
+{
+	sigset_t new_set;
+
+	sigemptyset(&new_set);
+	sigaddset(&new_set, SIGPIPE);
+
+	sigemptyset(old_set);
+	pthread_sigmask(SIG_BLOCK, &new_set, old_set);
+}
+
+/*
+ * Thread proc for an IPC worker thread. It handles a series of
+ * connections from clients. It pulls the next fd from the queue,
+ * processes it, and then waits for the next client.
+ *
+ * Block SIGPIPE in this worker thread for the life of the thread.
+ * This avoids stray (and sometimes delayed) SIGPIPE signals caused
+ * by client errors and/or when we are under extremely heavy IO load.
+ *
+ * This means that the application callback will have SIGPIPE blocked.
+ * The callback should not change it.
+ */
+static void *worker_thread_proc(void *_worker_thread_data)
+{
+	struct ipc_worker_thread_data *worker_thread_data = _worker_thread_data;
+	struct ipc_server_data *server_data = worker_thread_data->server_data;
+	sigset_t old_set;
+	int fd, io;
+	int ret;
+
+	trace2_thread_start("ipc-worker");
+
+	thread_block_sigpipe(&old_set);
+
+	for (;;) {
+		fd = worker_thread__wait_for_connection(worker_thread_data);
+		if (fd == -1)
+			break; /* in shutdown */
+
+		io = worker_thread__wait_for_io_start(worker_thread_data, fd);
+		if (io == -1)
+			continue; /* client hung up without sending anything */
+
+		ret = worker_thread__do_io(worker_thread_data, fd);
+
+		if (ret == SIMPLE_IPC_QUIT) {
+			trace2_data_string("ipc-worker", NULL, "queue_stop_async",
+					   "application_quit");
+			/*
+			 * The application layer is telling the ipc-server
+			 * layer to shut down.
+			 *
+			 * We DO NOT have a response to send to the client.
+			 *
+			 * Queue an async stop (to stop the other threads) and
+			 * allow this worker thread to exit now (no sense waiting
+			 * for the thread-pool shutdown signal).
+			 *
+			 * Other non-idle worker threads are allowed to finish
+			 * responding to their current clients.
+			 */
+			ipc_server_stop_async(server_data);
+			break;
+		}
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/* A randomly chosen value. */
+#define MY_ACCEPT_POLL_TIMEOUT_MS (60 * 1000)
+
+/*
+ * Accept a new client connection on our socket. This uses non-blocking
+ * IO so that we can also wait for shutdown requests on our socket-pair
+ * without actually spinning on a fast timeout.
+ */
+static int accept_thread__wait_for_connection(
+	struct ipc_accept_thread_data *accept_thread_data)
+{
+	struct pollfd pollfd[2];
+	int result;
+
+	for (;;) {
+		pollfd[0].fd = accept_thread_data->fd_wait_shutdown;
+		pollfd[0].events = POLLIN;
+
+		pollfd[1].fd = accept_thread_data->server_socket->fd_socket;
+		pollfd[1].events = POLLIN;
+
+		result = poll(pollfd, 2, MY_ACCEPT_POLL_TIMEOUT_MS);
+		if (result < 0) {
+			if (errno == EINTR)
+				continue;
+			return result;
+		}
+
+		if (result == 0) {
+			/* a timeout */
+
+			/*
+			 * If someone deletes or force-creates a new unix
+			 * domain socket at our path, all future clients
+			 * will be routed elsewhere and we silently starve.
+			 * If that happens, just queue a shutdown.
+			 */
+			if (unix_ss_was_stolen(
+				    accept_thread_data->server_socket)) {
+				trace2_data_string("ipc-accept", NULL,
+						   "queue_stop_async",
+						   "socket_stolen");
+				ipc_server_stop_async(
+					accept_thread_data->server_data);
+			}
+			continue;
+		}
+
+		if (pollfd[0].revents & POLLIN) {
+			/* shutdown message queued to socketpair */
+			return -1;
+		}
+
+		if (pollfd[1].revents & POLLIN) {
+			/* a connection is available on server_socket */
+
+			int client_fd =
+				accept(accept_thread_data->server_socket->fd_socket,
+				       NULL, NULL);
+			if (client_fd >= 0)
+				return client_fd;
+
+			/*
+			 * An error here is unlikely -- it probably
+			 * indicates that the connecting process has
+			 * already dropped the connection.
+			 */
+			continue;
+		}
+
+		BUG("unhandled poll result errno=%d r[0]=%d r[1]=%d",
+		    errno, pollfd[0].revents, pollfd[1].revents);
+	}
+}
+
+/*
+ * Thread proc for the IPC server "accept thread". This waits for
+ * an incoming socket connection, appends it to the queue of available
+ * connections, and notifies a worker thread to process it.
+ *
+ * Block SIGPIPE in this thread for the life of the thread. This
+ * avoids any stray SIGPIPE signals when closing pipe fds under
+ * extremely heavy loads (such as when the fifo queue is full and we
+ * drop incoming connections).
+ */
+static void *accept_thread_proc(void *_accept_thread_data)
+{
+	struct ipc_accept_thread_data *accept_thread_data = _accept_thread_data;
+	struct ipc_server_data *server_data = accept_thread_data->server_data;
+	sigset_t old_set;
+
+	trace2_thread_start("ipc-accept");
+
+	thread_block_sigpipe(&old_set);
+
+	for (;;) {
+		int client_fd = accept_thread__wait_for_connection(
+			accept_thread_data);
+
+		pthread_mutex_lock(&server_data->work_available_mutex);
+		if (server_data->shutdown_requested) {
+			pthread_mutex_unlock(&server_data->work_available_mutex);
+			if (client_fd >= 0)
+				close(client_fd);
+			break;
+		}
+
+		if (client_fd < 0) {
+			/* ignore transient accept() errors */
+		}
+		else {
+			fifo_enqueue(server_data, client_fd);
+			pthread_cond_broadcast(&server_data->work_available_cond);
+		}
+		pthread_mutex_unlock(&server_data->work_available_mutex);
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * We can't predict the connection arrival rate relative to the worker
+ * processing rate, therefore we allow the "accept-thread" to queue up
+ * a generous number of connections, since we'd rather have the client
+ * not time out unnecessarily if we can avoid it. (The assumption is
+ * that this will be used for FSMonitor and a few-second wait on a
+ * connection is better than having the client time out and do the full
+ * computation itself.)
+ *
+ * The FIFO queue size is set to a multiple of the worker pool size.
+ * This value was chosen at random.
+ */
+#define FIFO_SCALE (100)
+
+/*
+ * The backlog value for `listen(2)`. This doesn't need to be huge,
+ * rather just large enough for our "accept-thread" to wake up and
+ * queue incoming connections onto the FIFO without the kernel
+ * dropping any.
+ *
+ * This value was chosen at random.
+ */
+#define LISTEN_BACKLOG (50)
+
+static int create_listener_socket(
+	const char *path,
+	const struct ipc_server_opts *ipc_opts,
+	struct unix_ss_socket **new_server_socket)
+{
+	struct unix_ss_socket *server_socket = NULL;
+	struct unix_stream_listen_opts uslg_opts = UNIX_STREAM_LISTEN_OPTS_INIT;
+	int ret;
+
+	uslg_opts.listen_backlog_size = LISTEN_BACKLOG;
+	uslg_opts.disallow_chdir = ipc_opts->uds_disallow_chdir;
+
+	ret = unix_ss_create(path, &uslg_opts, -1, &server_socket);
+	if (ret)
+		return ret;
+
+	if (set_socket_blocking_flag(server_socket->fd_socket, 1)) {
+		int saved_errno = errno;
+		unix_ss_free(server_socket);
+		errno = saved_errno;
+		return -1;
+	}
+
+	*new_server_socket = server_socket;
+
+	trace2_data_string("ipc-server", NULL, "listen-with-lock", path);
+	return 0;
+}
+
+static int setup_listener_socket(
+	const char *path,
+	const struct ipc_server_opts *ipc_opts,
+	struct unix_ss_socket **new_server_socket)
+{
+	int ret, saved_errno;
+
+	trace2_region_enter("ipc-server", "create-listener_socket", NULL);
+
+	ret = create_listener_socket(path, ipc_opts, new_server_socket);
+
+	saved_errno = errno;
+	trace2_region_leave("ipc-server", "create-listener_socket", NULL);
+	errno = saved_errno;
+
+	return ret;
+}
+
+/*
+ * Start IPC server in a pool of background threads.
+ */
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data)
+{
+	struct unix_ss_socket *server_socket = NULL;
+	struct ipc_server_data *server_data;
+	int sv[2];
+	int k;
+	int ret;
+	int nr_threads = opts->nr_threads;
+
+	*returned_server_data = NULL;
+
+	/*
+	 * Create a socketpair and set sv[1] to non-blocking. This
+	 * will be used to send a shutdown message to the accept-thread
+	 * and allows the accept-thread to wait on EITHER a client
+	 * connection or a shutdown request without spinning.
+	 */
+	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
+		return -1;
+
+	if (set_socket_blocking_flag(sv[1], 1)) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return -1;
+	}
+
+	ret = setup_listener_socket(path, opts, &server_socket);
+	if (ret) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return ret;
+	}
+
+	server_data = xcalloc(1, sizeof(*server_data));
+	server_data->magic = MAGIC_SERVER_DATA;
+	server_data->application_cb = application_cb;
+	server_data->application_data = application_data;
+	strbuf_init(&server_data->buf_path, 0);
+	strbuf_addstr(&server_data->buf_path, path);
+
+	if (nr_threads < 1)
+		nr_threads = 1;
+
+	pthread_mutex_init(&server_data->work_available_mutex, NULL);
+	pthread_cond_init(&server_data->work_available_cond, NULL);
+
+	server_data->queue_size = nr_threads * FIFO_SCALE;
+	CALLOC_ARRAY(server_data->fifo_fds, server_data->queue_size);
+
+	server_data->accept_thread =
+		xcalloc(1, sizeof(*server_data->accept_thread));
+	server_data->accept_thread->magic = MAGIC_ACCEPT_THREAD_DATA;
+	server_data->accept_thread->server_data = server_data;
+	server_data->accept_thread->server_socket = server_socket;
+	server_data->accept_thread->fd_send_shutdown = sv[0];
+	server_data->accept_thread->fd_wait_shutdown = sv[1];
+
+	if (pthread_create(&server_data->accept_thread->pthread_id, NULL,
+			   accept_thread_proc, server_data->accept_thread))
+		die_errno(_("could not start accept_thread '%s'"), path);
+
+	for (k = 0; k < nr_threads; k++) {
+		struct ipc_worker_thread_data *wtd;
+
+		wtd = xcalloc(1, sizeof(*wtd));
+		wtd->magic = MAGIC_WORKER_THREAD_DATA;
+		wtd->server_data = server_data;
+
+		if (pthread_create(&wtd->pthread_id, NULL, worker_thread_proc,
+				   wtd)) {
+			if (k == 0)
+				die(_("could not start worker[0] for '%s'"),
+				    path);
+			/*
+			 * Limp along with the thread pool that we have.
+			 */
+			break;
+		}
+
+		wtd->next_thread = server_data->worker_thread_list;
+		server_data->worker_thread_list = wtd;
+	}
+
+	*returned_server_data = server_data;
+	return 0;
+}
+
+/*
+ * Gently tell the IPC server threads to shut down.
+ * Can be run on any thread.
+ */
+int ipc_server_stop_async(struct ipc_server_data *server_data)
+{
+	/* ASSERT NOT holding mutex */
+
+	int fd;
+
+	if (!server_data)
+		return 0;
+
+	trace2_region_enter("ipc-server", "server-stop-async", NULL);
+
+	pthread_mutex_lock(&server_data->work_available_mutex);
+
+	server_data->shutdown_requested = 1;
+
+	/*
+	 * Write a byte to the shutdown socket pair to wake up the
+	 * accept-thread.
+	 */
+	if (write(server_data->accept_thread->fd_send_shutdown, "Q", 1) < 0)
+		error_errno("could not write to fd_send_shutdown");
+
+	/*
+	 * Drain the queue of existing connections.
+	 */
+	while ((fd = fifo_dequeue(server_data)) != -1)
+		close(fd);
+
+	/*
+	 * Gently tell worker threads to stop processing new connections
+	 * and exit. (This does not abort in-process conversations.)
+	 */
+	pthread_cond_broadcast(&server_data->work_available_cond);
+
+	pthread_mutex_unlock(&server_data->work_available_mutex);
+
+	trace2_region_leave("ipc-server", "server-stop-async", NULL);
+
+	return 0;
+}
+
+/*
+ * Wait for all IPC server threads to stop.
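+ *
+ * A typical lifecycle, as a sketch (error handling omitted; my_app_cb
+ * and my_data stand in for whatever the caller registers):
+ *
+ *	struct ipc_server_data *server = NULL;
+ *
+ *	ipc_server_run_async(&server, path, &opts, my_app_cb, my_data);
+ *	...
+ *	ipc_server_stop_async(server);	(wakes the accept-thread)
+ *	ipc_server_await(server);	(joins the accept and worker threads)
+ *	ipc_server_free(server);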
+ */ +int ipc_server_await(struct ipc_server_data *server_data) +{ + pthread_join(server_data->accept_thread->pthread_id, NULL); + + if (!server_data->shutdown_requested) + BUG("ipc-server: accept-thread stopped for '%s'", + server_data->buf_path.buf); + + while (server_data->worker_thread_list) { + struct ipc_worker_thread_data *wtd = + server_data->worker_thread_list; + + pthread_join(wtd->pthread_id, NULL); + + server_data->worker_thread_list = wtd->next_thread; + free(wtd); + } + + server_data->is_stopped = 1; + + return 0; +} + +void ipc_server_free(struct ipc_server_data *server_data) +{ + struct ipc_accept_thread_data * accept_thread_data; + + if (!server_data) + return; + + if (!server_data->is_stopped) + BUG("cannot free ipc-server while running for '%s'", + server_data->buf_path.buf); + + accept_thread_data = server_data->accept_thread; + if (accept_thread_data) { + unix_ss_free(accept_thread_data->server_socket); + + if (accept_thread_data->fd_send_shutdown != -1) + close(accept_thread_data->fd_send_shutdown); + if (accept_thread_data->fd_wait_shutdown != -1) + close(accept_thread_data->fd_wait_shutdown); + + free(server_data->accept_thread); + } + + while (server_data->worker_thread_list) { + struct ipc_worker_thread_data *wtd = + server_data->worker_thread_list; + + server_data->worker_thread_list = wtd->next_thread; + free(wtd); + } + + pthread_cond_destroy(&server_data->work_available_cond); + pthread_mutex_destroy(&server_data->work_available_mutex); + + strbuf_release(&server_data->buf_path); + + free(server_data->fifo_fds); + free(server_data); +} diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt index c94011269ebb..9897fcc8ea2a 100644 --- a/contrib/buildsystems/CMakeLists.txt +++ b/contrib/buildsystems/CMakeLists.txt @@ -248,6 +248,8 @@ endif() if(CMAKE_SYSTEM_NAME STREQUAL "Windows") list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c) +else() + list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-unix-socket.c) endif() set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX}) diff --git a/simple-ipc.h b/simple-ipc.h index ab5619e3d76f..dc3606e30bd6 100644 --- a/simple-ipc.h +++ b/simple-ipc.h @@ -5,7 +5,7 @@ * See Documentation/technical/api-simple-ipc.txt */ -#if defined(GIT_WINDOWS_NATIVE) +#if defined(GIT_WINDOWS_NATIVE) || !defined(NO_UNIX_SOCKETS) #define SUPPORTS_SIMPLE_IPC #endif @@ -62,11 +62,17 @@ struct ipc_client_connect_options { * the service and need to wait for it to become ready. */ unsigned int wait_if_not_found:1; + + /* + * Disallow chdir() when creating a Unix domain socket. + */ + unsigned int uds_disallow_chdir:1; }; #define IPC_CLIENT_CONNECT_OPTIONS_INIT { \ .wait_if_busy = 0, \ .wait_if_not_found = 0, \ + .uds_disallow_chdir = 0, \ } /* @@ -159,6 +165,11 @@ struct ipc_server_data; struct ipc_server_opts { int nr_threads; + + /* + * Disallow chdir() when creating a Unix domain socket. 
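+	 *
+	 * (Background: when a socket pathname is too long to fit in
+	 * `sun_path`, the unix-socket code may temporarily chdir() into
+	 * the parent directory and address the socket by a relative
+	 * path. That is not safe in a multi-threaded process, so
+	 * threaded callers are expected to set this flag.)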
+	 */
+	unsigned int uds_disallow_chdir:1;
 };
 
 /*

From patchwork Mon Mar 22 10:29:48 2021
X-Patchwork-Submitter: Jeff Hostetler
X-Patchwork-Id: 12154357
Message-Id: <8b5dcca68440773e81aa0ac7b52c1dd9e6c132ad.1616408989.git.gitgitgadget@gmail.com>
In-Reply-To: 
References: 
Date: Mon, 22 Mar 2021 10:29:48 +0000
Subject: [PATCH v7 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool
Fcc: Sent
MIME-Version: 1.0
To: git@vger.kernel.org
Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler, Jeff King, SZEDER Gábor,
    Johannes Schindelin, Chris Torek, Jeff Hostetler, Jeff Hostetler
Precedence: bulk
List-ID: 
X-Mailing-List: git@vger.kernel.org
From: Jeff Hostetler
From: Jeff Hostetler

Create t0052-simple-ipc.sh with unit tests for the "simple-ipc"
mechanism.

Create t/helper/test-simple-ipc test tool to exercise the "simple-ipc"
functions.

When the tool is invoked with "run-daemon", it runs a server to listen
for "simple-ipc" connections on a test socket or named pipe and
responds to a set of commands to exercise/stress the communication
setup.

When the tool is invoked with "start-daemon", it spawns a "run-daemon"
command in the background and waits for the server to become ready
before exiting. (This helps make unit tests in t0052 more predictable
and avoids the need for arbitrary sleeps in the test script.)

The tool also has a series of client "send" commands to send commands
and data to a server instance.

Signed-off-by: Jeff Hostetler
---
 Makefile                   |   1 +
 t/helper/test-simple-ipc.c | 787 +++++++++++++++++++++++++++++++++++++
 t/helper/test-tool.c       |   1 +
 t/helper/test-tool.h       |   1 +
 t/t0052-simple-ipc.sh      | 122 ++++++
 5 files changed, 912 insertions(+)
 create mode 100644 t/helper/test-simple-ipc.c
 create mode 100755 t/t0052-simple-ipc.sh

diff --git a/Makefile b/Makefile
index 20dd65d19658..e556388d28d0 100644
--- a/Makefile
+++ b/Makefile
@@ -734,6 +734,7 @@ TEST_BUILTINS_OBJS += test-serve-v2.o
 TEST_BUILTINS_OBJS += test-sha1.o
 TEST_BUILTINS_OBJS += test-sha256.o
 TEST_BUILTINS_OBJS += test-sigchain.o
+TEST_BUILTINS_OBJS += test-simple-ipc.o
 TEST_BUILTINS_OBJS += test-strcmp-offset.o
 TEST_BUILTINS_OBJS += test-string-list.o
 TEST_BUILTINS_OBJS += test-submodule-config.o
diff --git a/t/helper/test-simple-ipc.c b/t/helper/test-simple-ipc.c
new file mode 100644
index 000000000000..42040ef81b1e
--- /dev/null
+++ b/t/helper/test-simple-ipc.c
@@ -0,0 +1,787 @@
+/*
+ * test-simple-ipc.c: verify that the Inter-Process Communication works.
+ */
+
+#include "test-tool.h"
+#include "cache.h"
+#include "strbuf.h"
+#include "simple-ipc.h"
+#include "parse-options.h"
+#include "thread-utils.h"
+#include "strvec.h"
+
+#ifndef SUPPORTS_SIMPLE_IPC
+int cmd__simple_ipc(int argc, const char **argv)
+{
+	die("simple IPC not available on this platform");
+}
+#else
+
+/*
+ * The test daemon defines an "application callback" that supports a
+ * series of commands (see `test_app_cb()`).
+ *
+ * Unknown commands are caught here and we send an error message back
+ * to the client process.
+ */
+static int app__unhandled_command(const char *command,
+				  ipc_server_reply_cb *reply_cb,
+				  struct ipc_server_reply_data *reply_data)
+{
+	struct strbuf buf = STRBUF_INIT;
+	int ret;
+
+	strbuf_addf(&buf, "unhandled command: %s", command);
+	ret = reply_cb(reply_data, buf.buf, buf.len);
+	strbuf_release(&buf);
+
+	return ret;
+}
+
+/*
+ * Reply with a single very large buffer. This is to ensure that
+ * long responses are properly handled -- whether the chunking occurs
+ * in the kernel or in the (probably pkt-line) layer.
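+ *
+ * (Each row is 81 bytes -- "big: ", a zero-padded 75-digit number, and
+ * a newline -- so 10000 rows is roughly 810KB, far larger than
+ * LARGE_PACKET_MAX, which forces the reply to be split across many
+ * pkt-lines.)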
+ */ +#define BIG_ROWS (10000) +static int app__big_command(ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int row; + int ret; + + for (row = 0; row < BIG_ROWS; row++) + strbuf_addf(&buf, "big: %.75d\n", row); + + ret = reply_cb(reply_data, buf.buf, buf.len); + strbuf_release(&buf); + + return ret; +} + +/* + * Reply with a series of lines. This is to ensure that we can incrementally + * compute the response and chunk it to the client. + */ +#define CHUNK_ROWS (10000) +static int app__chunk_command(ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int row; + int ret; + + for (row = 0; row < CHUNK_ROWS; row++) { + strbuf_setlen(&buf, 0); + strbuf_addf(&buf, "big: %.75d\n", row); + ret = reply_cb(reply_data, buf.buf, buf.len); + } + + strbuf_release(&buf); + + return ret; +} + +/* + * Slowly reply with a series of lines. This is to model an expensive to + * compute chunked response (which might happen if this callback is running + * in a thread and is fighting for a lock with other threads). + */ +#define SLOW_ROWS (1000) +#define SLOW_DELAY_MS (10) +static int app__slow_command(ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int row; + int ret; + + for (row = 0; row < SLOW_ROWS; row++) { + strbuf_setlen(&buf, 0); + strbuf_addf(&buf, "big: %.75d\n", row); + ret = reply_cb(reply_data, buf.buf, buf.len); + sleep_millisec(SLOW_DELAY_MS); + } + + strbuf_release(&buf); + + return ret; +} + +/* + * The client sent a command followed by a (possibly very) large buffer. + */ +static int app__sendbytes_command(const char *received, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf_resp = STRBUF_INIT; + const char *p = "?"; + int len_ballast = 0; + int k; + int errs = 0; + int ret; + + if (skip_prefix(received, "sendbytes ", &p)) + len_ballast = strlen(p); + + /* + * Verify that the ballast is n copies of a single letter. + * And that the multi-threaded IO layer didn't cross the streams. + */ + for (k = 1; k < len_ballast; k++) + if (p[k] != p[0]) + errs++; + + if (errs) + strbuf_addf(&buf_resp, "errs:%d\n", errs); + else + strbuf_addf(&buf_resp, "rcvd:%c%08d\n", p[0], len_ballast); + + ret = reply_cb(reply_data, buf_resp.buf, buf_resp.len); + + strbuf_release(&buf_resp); + + return ret; +} + +/* + * An arbitrary fixed address to verify that the application instance + * data is handled properly. + */ +static int my_app_data = 42; + +static ipc_server_application_cb test_app_cb; + +/* + * This is the "application callback" that sits on top of the + * "ipc-server". It completely defines the set of commands supported + * by this application. + */ +static int test_app_cb(void *application_data, + const char *command, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + /* + * Verify that we received the application-data that we passed + * when we started the ipc-server. (We have several layers of + * callbacks calling callbacks and it's easy to get things mixed + * up (especially when some are "void*").) + */ + if (application_data != (void*)&my_app_data) + BUG("application_cb: application_data pointer wrong"); + + if (!strcmp(command, "quit")) { + /* + * The client sent a "quit" command. This is an async + * request for the server to shutdown. 
+ * + * We DO NOT send the client a response message + * (because we have nothing to say and the other + * server threads have not yet stopped). + * + * Tell the ipc-server layer to start shutting down. + * This includes: stop listening for new connections + * on the socket/pipe and telling all worker threads + * to finish/drain their outgoing responses to other + * clients. + * + * This DOES NOT force an immediate sync shutdown. + */ + return SIMPLE_IPC_QUIT; + } + + if (!strcmp(command, "ping")) { + const char *answer = "pong"; + return reply_cb(reply_data, answer, strlen(answer)); + } + + if (!strcmp(command, "big")) + return app__big_command(reply_cb, reply_data); + + if (!strcmp(command, "chunk")) + return app__chunk_command(reply_cb, reply_data); + + if (!strcmp(command, "slow")) + return app__slow_command(reply_cb, reply_data); + + if (starts_with(command, "sendbytes ")) + return app__sendbytes_command(command, reply_cb, reply_data); + + return app__unhandled_command(command, reply_cb, reply_data); +} + +struct cl_args +{ + const char *subcommand; + const char *path; + const char *token; + + int nr_threads; + int max_wait_sec; + int bytecount; + int batchsize; + + char bytevalue; +}; + +static struct cl_args cl_args = { + .subcommand = NULL, + .path = "ipc-test", + .token = NULL, + + .nr_threads = 5, + .max_wait_sec = 60, + .bytecount = 1024, + .batchsize = 10, + + .bytevalue = 'x', +}; + +/* + * This process will run as a simple-ipc server and listen for IPC commands + * from client processes. + */ +static int daemon__run_server(void) +{ + int ret; + + struct ipc_server_opts opts = { + .nr_threads = cl_args.nr_threads, + }; + + /* + * Synchronously run the ipc-server. We don't need any application + * instance data, so pass an arbitrary pointer (that we'll later + * verify made the round trip). + */ + ret = ipc_server_run(cl_args.path, &opts, test_app_cb, (void*)&my_app_data); + if (ret == -2) + error(_("socket/pipe already in use: '%s'"), cl_args.path); + else if (ret == -1) + error_errno(_("could not start server on: '%s'"), cl_args.path); + + return ret; +} + +#ifndef GIT_WINDOWS_NATIVE +/* + * This is adapted from `daemonize()`. Use `fork()` to directly create and + * run the daemon in a child process. + */ +static int spawn_server(pid_t *pid) +{ + struct ipc_server_opts opts = { + .nr_threads = cl_args.nr_threads, + }; + + *pid = fork(); + + switch (*pid) { + case 0: + if (setsid() == -1) + error_errno(_("setsid failed")); + close(0); + close(1); + close(2); + sanitize_stdfds(); + + return ipc_server_run(cl_args.path, &opts, test_app_cb, + (void*)&my_app_data); + + case -1: + return error_errno(_("could not spawn daemon in the background")); + + default: + return 0; + } +} +#else +/* + * Conceptually like `daemonize()` but different because Windows does not + * have `fork(2)`. Spawn a normal Windows child process but without the + * limitations of `start_command()` and `finish_command()`. 
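+ *
+ * The child ends up re-invoked roughly as (values illustrative):
+ *
+ *	test-tool.exe simple-ipc run-daemon --name=<path> --threads=<n>
+ *
+ * with stdin/stdout/stderr redirected to /dev/null.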
+ */
+static int spawn_server(pid_t *pid)
+{
+	char test_tool_exe[MAX_PATH];
+	struct strvec args = STRVEC_INIT;
+	int in, out;
+
+	GetModuleFileNameA(NULL, test_tool_exe, MAX_PATH);
+
+	in = open("/dev/null", O_RDONLY);
+	out = open("/dev/null", O_WRONLY);
+
+	strvec_push(&args, test_tool_exe);
+	strvec_push(&args, "simple-ipc");
+	strvec_push(&args, "run-daemon");
+	strvec_pushf(&args, "--name=%s", cl_args.path);
+	strvec_pushf(&args, "--threads=%d", cl_args.nr_threads);
+
+	*pid = mingw_spawnvpe(args.v[0], args.v, NULL, NULL, in, out, out);
+	close(in);
+	close(out);
+
+	strvec_clear(&args);
+
+	if (*pid < 0)
+		return error(_("could not spawn daemon in the background"));
+
+	return 0;
+}
+#endif
+
+/*
+ * This is adapted from `wait_or_whine()`. Watch the child process and
+ * let it get started and begin listening for requests on the socket
+ * before reporting our success.
+ */
+static int wait_for_server_startup(pid_t pid_child)
+{
+	int status;
+	pid_t pid_seen;
+	enum ipc_active_state s;
+	time_t time_limit, now;
+
+	time(&time_limit);
+	time_limit += cl_args.max_wait_sec;
+
+	for (;;) {
+		pid_seen = waitpid(pid_child, &status, WNOHANG);
+
+		if (pid_seen == -1)
+			return error_errno(_("waitpid failed"));
+
+		else if (pid_seen == 0) {
+			/*
+			 * The child is still running (this should be
+			 * the normal case). Try to connect to it on
+			 * the socket and see if it is ready for
+			 * business.
+			 *
+			 * If there is another daemon already running,
+			 * our child will fail to start (possibly
+			 * after a timeout on the lock), but we don't
+			 * care (who responds) if the socket is live.
+			 */
+			s = ipc_get_active_state(cl_args.path);
+			if (s == IPC_STATE__LISTENING)
+				return 0;
+
+			time(&now);
+			if (now > time_limit)
+				return error(_("daemon not online yet"));
+
+			continue;
+		}
+
+		else if (pid_seen == pid_child) {
+			/*
+			 * The new child daemon process shut down while
+			 * it was starting up, so it is not listening
+			 * on the socket.
+			 *
+			 * Try to ping the socket on the odd chance
+			 * that another daemon started (or was already
+			 * running) while our child was starting.
+			 *
+			 * Again, we don't care who services the socket.
+			 */
+			s = ipc_get_active_state(cl_args.path);
+			if (s == IPC_STATE__LISTENING)
+				return 0;
+
+			/*
+			 * We don't care about the WEXITSTATUS() nor
+			 * any of the WIF*(status) values because
+			 * `cmd__simple_ipc()` does the `!!result`
+			 * trick on all function return values.
+			 *
+			 * So it is sufficient to just report the
+			 * early shutdown as an error.
+			 */
+			return error(_("daemon failed to start"));
+		}
+
+		else
+			return error(_("waitpid is confused"));
+	}
+}
+
+/*
+ * This process will start a simple-ipc server in a background process and
+ * wait for it to become ready. This is like `daemonize()` but gives us
+ * more control and better error reporting (and makes it easier to write
+ * unit tests).
+ */
+static int daemon__start_server(void)
+{
+	pid_t pid_child;
+	int ret;
+
+	/*
+	 * Run the actual daemon in a background process.
+	 */
+	ret = spawn_server(&pid_child);
+	if (pid_child <= 0)
+		return ret;
+
+	/*
+	 * Let the parent wait for the child process to get started
+	 * and begin listening for requests on the socket.
+	 */
+	ret = wait_for_server_startup(pid_child);
+
+	return ret;
+}
+
+/*
+ * This process will run a quick probe to see if a simple-ipc server
+ * is active on this path.
+ *
+ * Returns 0 if the server is alive.
+ */
+static int client__probe_server(void)
+{
+	enum ipc_active_state s;
+
+	s = ipc_get_active_state(cl_args.path);
+	switch (s) {
+	case IPC_STATE__LISTENING:
+		return 0;
+
+	case IPC_STATE__NOT_LISTENING:
+		return error("no server listening at '%s'", cl_args.path);
+
+	case IPC_STATE__PATH_NOT_FOUND:
+		return error("path not found '%s'", cl_args.path);
+
+	case IPC_STATE__INVALID_PATH:
+		return error("invalid pipe/socket name '%s'", cl_args.path);
+
+	case IPC_STATE__OTHER_ERROR:
+	default:
+		return error("other error for '%s'", cl_args.path);
+	}
+}
+
+/*
+ * Send an IPC command token to an already-running server daemon and
+ * print the response.
+ *
+ * This is a simple 1 word command/token that `test_app_cb()` (in the
+ * daemon process) will understand.
+ */
+static int client__send_ipc(void)
+{
+	const char *command = "(no-command)";
+	struct strbuf buf = STRBUF_INIT;
+	struct ipc_client_connect_options options
+		= IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+	if (cl_args.token && *cl_args.token)
+		command = cl_args.token;
+
+	options.wait_if_busy = 1;
+	options.wait_if_not_found = 0;
+
+	if (!ipc_client_send_command(cl_args.path, &options, command, &buf)) {
+		if (buf.len) {
+			printf("%s\n", buf.buf);
+			fflush(stdout);
+		}
+		strbuf_release(&buf);
+
+		return 0;
+	}
+
+	return error("failed to send '%s' to '%s'", command, cl_args.path);
+}
+
+/*
+ * Send an IPC command to an already-running server and ask it to
+ * shut down. "send quit" is an async request and queues a shutdown
+ * event in the server, so we spin and wait here for it to actually
+ * shut down to make the unit tests a little easier to write.
+ */
+static int client__stop_server(void)
+{
+	int ret;
+	time_t time_limit, now;
+	enum ipc_active_state s;
+
+	time(&time_limit);
+	time_limit += cl_args.max_wait_sec;
+
+	cl_args.token = "quit";
+
+	ret = client__send_ipc();
+	if (ret)
+		return ret;
+
+	for (;;) {
+		sleep_millisec(100);
+
+		s = ipc_get_active_state(cl_args.path);
+
+		if (s != IPC_STATE__LISTENING) {
+			/*
+			 * The socket/pipe is gone and/or has stopped
+			 * responding. Let's assume that the daemon
+			 * process has exited too.
+			 */
+			return 0;
+		}
+
+		time(&now);
+		if (now > time_limit)
+			return error(_("daemon has not shut down yet"));
+	}
+}
+
+/*
+ * Send an IPC command followed by ballast to confirm that a large
+ * message can be sent and that the kernel or pkt-line layers will
+ * properly chunk it and that the daemon receives the entire message.
+ */
+static int do_sendbytes(int bytecount, char byte, const char *path,
+			const struct ipc_client_connect_options *options)
+{
+	struct strbuf buf_send = STRBUF_INIT;
+	struct strbuf buf_resp = STRBUF_INIT;
+
+	strbuf_addstr(&buf_send, "sendbytes ");
+	strbuf_addchars(&buf_send, byte, bytecount);
+
+	if (!ipc_client_send_command(path, options, buf_send.buf, &buf_resp)) {
+		strbuf_rtrim(&buf_resp);
+		printf("sent:%c%08d %s\n", byte, bytecount, buf_resp.buf);
+		fflush(stdout);
+		strbuf_release(&buf_send);
+		strbuf_release(&buf_resp);
+
+		return 0;
+	}
+
+	return error("client failed to sendbytes(%d, '%c') to '%s'",
+		     bytecount, byte, path);
+}
+
+/*
+ * Send an IPC command with ballast to an already-running server daemon.
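+ *
+ * For example, the invocation
+ *
+ *	test-tool simple-ipc sendbytes --bytecount=100000 --byte=A
+ *
+ * sends "sendbytes " followed by 100,000 copies of 'A' and expects
+ * the daemon to reply with "rcvd:A00100000".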
+ */ +static int client__sendbytes(void) +{ + struct ipc_client_connect_options options + = IPC_CLIENT_CONNECT_OPTIONS_INIT; + + options.wait_if_busy = 1; + options.wait_if_not_found = 0; + options.uds_disallow_chdir = 0; + + return do_sendbytes(cl_args.bytecount, cl_args.bytevalue, cl_args.path, + &options); +} + +struct multiple_thread_data { + pthread_t pthread_id; + struct multiple_thread_data *next; + const char *path; + int bytecount; + int batchsize; + int sum_errors; + int sum_good; + char letter; +}; + +static void *multiple_thread_proc(void *_multiple_thread_data) +{ + struct multiple_thread_data *d = _multiple_thread_data; + int k; + struct ipc_client_connect_options options + = IPC_CLIENT_CONNECT_OPTIONS_INIT; + + options.wait_if_busy = 1; + options.wait_if_not_found = 0; + /* + * A multi-threaded client should not be randomly calling chdir(). + * The test will pass without this restriction because the test is + * not otherwise accessing the filesystem, but it makes us honest. + */ + options.uds_disallow_chdir = 1; + + trace2_thread_start("multiple"); + + for (k = 0; k < d->batchsize; k++) { + if (do_sendbytes(d->bytecount + k, d->letter, d->path, &options)) + d->sum_errors++; + else + d->sum_good++; + } + + trace2_thread_exit(); + return NULL; +} + +/* + * Start a client-side thread pool. Each thread sends a series of + * IPC requests. Each request is on a new connection to the server. + */ +static int client__multiple(void) +{ + struct multiple_thread_data *list = NULL; + int k; + int sum_join_errors = 0; + int sum_thread_errors = 0; + int sum_good = 0; + + for (k = 0; k < cl_args.nr_threads; k++) { + struct multiple_thread_data *d = xcalloc(1, sizeof(*d)); + d->next = list; + d->path = cl_args.path; + d->bytecount = cl_args.bytecount + cl_args.batchsize*(k/26); + d->batchsize = cl_args.batchsize; + d->sum_errors = 0; + d->sum_good = 0; + d->letter = 'A' + (k % 26); + + if (pthread_create(&d->pthread_id, NULL, multiple_thread_proc, d)) { + warning("failed to create thread[%d] skipping remainder", k); + free(d); + break; + } + + list = d; + } + + while (list) { + struct multiple_thread_data *d = list; + + if (pthread_join(d->pthread_id, NULL)) + sum_join_errors++; + + sum_thread_errors += d->sum_errors; + sum_good += d->sum_good; + + list = d->next; + free(d); + } + + printf("client (good %d) (join %d), (errors %d)\n", + sum_good, sum_join_errors, sum_thread_errors); + + return (sum_join_errors + sum_thread_errors) ? 
1 : 0;
+}
+
+int cmd__simple_ipc(int argc, const char **argv)
+{
+	const char * const simple_ipc_usage[] = {
+		N_("test-helper simple-ipc is-active    [<name>] [<max-wait>]"),
+		N_("test-helper simple-ipc run-daemon   [<name>] [<threads>]"),
+		N_("test-helper simple-ipc start-daemon [<name>] [<threads>] [<max-wait>]"),
+		N_("test-helper simple-ipc stop-daemon  [<name>] [<max-wait>]"),
+		N_("test-helper simple-ipc send         [<name>] [<token>]"),
+		N_("test-helper simple-ipc sendbytes    [<name>] [<bytecount>] [<byte>]"),
+		N_("test-helper simple-ipc multiple     [<name>] [<threads>] [<bytecount>] [<batchsize>]"),
+		NULL
+	};
+
+	const char *bytevalue = NULL;
+
+	struct option options[] = {
+#ifndef GIT_WINDOWS_NATIVE
+		OPT_STRING(0, "name", &cl_args.path, N_("name"), N_("name or pathname of unix domain socket")),
+#else
+		OPT_STRING(0, "name", &cl_args.path, N_("name"), N_("named-pipe name")),
+#endif
+		OPT_INTEGER(0, "threads", &cl_args.nr_threads, N_("number of threads in server thread pool")),
+		OPT_INTEGER(0, "max-wait", &cl_args.max_wait_sec, N_("seconds to wait for daemon to start or stop")),
+
+		OPT_INTEGER(0, "bytecount", &cl_args.bytecount, N_("number of bytes")),
+		OPT_INTEGER(0, "batchsize", &cl_args.batchsize, N_("number of requests per thread")),
+
+		OPT_STRING(0, "byte", &bytevalue, N_("byte"), N_("ballast character")),
+		OPT_STRING(0, "token", &cl_args.token, N_("token"), N_("command token to send to the server")),
+
+		OPT_END()
+	};
+
+	if (argc < 2)
+		usage_with_options(simple_ipc_usage, options);
+
+	if (argc == 2 && !strcmp(argv[1], "-h"))
+		usage_with_options(simple_ipc_usage, options);
+
+	if (argc == 2 && !strcmp(argv[1], "SUPPORTS_SIMPLE_IPC"))
+		return 0;
+
+	cl_args.subcommand = argv[1];
+
+	argc--;
+	argv++;
+
+	argc = parse_options(argc, argv, NULL, options, simple_ipc_usage, 0);
+
+	if (cl_args.nr_threads < 1)
+		cl_args.nr_threads = 1;
+	if (cl_args.max_wait_sec < 0)
+		cl_args.max_wait_sec = 0;
+	if (cl_args.bytecount < 1)
+		cl_args.bytecount = 1;
+	if (cl_args.batchsize < 1)
+		cl_args.batchsize = 1;
+
+	if (bytevalue && *bytevalue)
+		cl_args.bytevalue = bytevalue[0];
+
+	/*
+	 * Use '!!' on all dispatch functions to map from `error()` style
+	 * (returns -1) to `test_must_fail` style (expects 1). This
+	 * makes shell error messages less confusing.
+	 */
+
+	if (!strcmp(cl_args.subcommand, "is-active"))
+		return !!client__probe_server();
+
+	if (!strcmp(cl_args.subcommand, "run-daemon"))
+		return !!daemon__run_server();
+
+	if (!strcmp(cl_args.subcommand, "start-daemon"))
+		return !!daemon__start_server();
+
+	/*
+	 * Client commands follow. Ensure a server is running before
+	 * sending any data. This might be overkill, but then again
+	 * this is a test harness.
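+	 *
+	 * A typical session (as exercised by t0052) looks like:
+	 *
+	 *	test-tool simple-ipc start-daemon --threads=8
+	 *	test-tool simple-ipc send --token=ping      (prints "pong")
+	 *	test-tool simple-ipc stop-daemon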
+	 */
+
+	if (!strcmp(cl_args.subcommand, "stop-daemon")) {
+		if (client__probe_server())
+			return 1;
+		return !!client__stop_server();
+	}
+
+	if (!strcmp(cl_args.subcommand, "send")) {
+		if (client__probe_server())
+			return 1;
+		return !!client__send_ipc();
+	}
+
+	if (!strcmp(cl_args.subcommand, "sendbytes")) {
+		if (client__probe_server())
+			return 1;
+		return !!client__sendbytes();
+	}
+
+	if (!strcmp(cl_args.subcommand, "multiple")) {
+		if (client__probe_server())
+			return 1;
+		return !!client__multiple();
+	}
+
+	die("Unhandled subcommand: '%s'", cl_args.subcommand);
+}
+#endif
diff --git a/t/helper/test-tool.c b/t/helper/test-tool.c
index f97cd9f48a69..287aa6002307 100644
--- a/t/helper/test-tool.c
+++ b/t/helper/test-tool.c
@@ -65,6 +65,7 @@ static struct test_cmd cmds[] = {
 	{ "sha1", cmd__sha1 },
 	{ "sha256", cmd__sha256 },
 	{ "sigchain", cmd__sigchain },
+	{ "simple-ipc", cmd__simple_ipc },
 	{ "strcmp-offset", cmd__strcmp_offset },
 	{ "string-list", cmd__string_list },
 	{ "submodule-config", cmd__submodule_config },
diff --git a/t/helper/test-tool.h b/t/helper/test-tool.h
index 28072c0ad5ab..9ea4b31011dd 100644
--- a/t/helper/test-tool.h
+++ b/t/helper/test-tool.h
@@ -55,6 +55,7 @@ int cmd__sha1(int argc, const char **argv);
 int cmd__oid_array(int argc, const char **argv);
 int cmd__sha256(int argc, const char **argv);
 int cmd__sigchain(int argc, const char **argv);
+int cmd__simple_ipc(int argc, const char **argv);
 int cmd__strcmp_offset(int argc, const char **argv);
 int cmd__string_list(int argc, const char **argv);
 int cmd__submodule_config(int argc, const char **argv);
diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh
new file mode 100755
index 000000000000..ff98be31a51b
--- /dev/null
+++ b/t/t0052-simple-ipc.sh
@@ -0,0 +1,122 @@
+#!/bin/sh
+
+test_description='simple command server'
+
+. ./test-lib.sh
+
+test-tool simple-ipc SUPPORTS_SIMPLE_IPC || {
+	skip_all='simple IPC not supported on this platform'
+	test_done
+}
+
+stop_simple_IPC_server () {
+	test-tool simple-ipc stop-daemon
+}
+
+test_expect_success 'start simple command server' '
+	test_atexit stop_simple_IPC_server &&
+	test-tool simple-ipc start-daemon --threads=8 &&
+	test-tool simple-ipc is-active
+'
+
+test_expect_success 'simple command server' '
+	test-tool simple-ipc send --token=ping >actual &&
+	echo pong >expect &&
+	test_cmp expect actual
+'
+
+test_expect_success 'servers cannot share the same path' '
+	test_must_fail test-tool simple-ipc run-daemon &&
+	test-tool simple-ipc is-active
+'
+
+test_expect_success 'big response' '
+	test-tool simple-ipc send --token=big >actual &&
+	test_line_count -ge 10000 actual &&
+	grep -q "big: [0]*9999\$" actual
+'
+
+test_expect_success 'chunk response' '
+	test-tool simple-ipc send --token=chunk >actual &&
+	test_line_count -ge 10000 actual &&
+	grep -q "big: [0]*9999\$" actual
+'
+
+test_expect_success 'slow response' '
+	test-tool simple-ipc send --token=slow >actual &&
+	test_line_count -ge 100 actual &&
+	grep -q "big: [0]*99\$" actual
+'
+
+# Send an IPC with n=100,000 bytes of ballast. This should be large enough
+# to force both the kernel and the pkt-line layer to chunk the message to the
+# daemon and for the daemon to receive it in chunks.
+#
+test_expect_success 'sendbytes' '
+	test-tool simple-ipc sendbytes --bytecount=100000 --byte=A >actual &&
+	grep "sent:A00100000 rcvd:A00100000" actual
+'
+
+# Start a series of <threads> client threads that each make <batchsize>
+# IPC requests to the server. Each (<threads> * <batchsize>) request
+# will open a new connection to the server and randomly bind to a server
+# thread. Each client thread exits after completing its batch. So the
+# total number of live client threads will be smaller than the total.
+# Each request will send a message containing at least <bytecount> bytes
+# of ballast. (Responses are small.)
+#
+# The purpose here is to test threading in the server and responding to
+# many concurrent client requests (regardless of whether they come from
+# 1 client process or many). And to test that the server side of the
+# named pipe/socket is stable. (On Windows this means that the server
+# pipe is properly recycled.)
+#
+# On Windows it also lets us adjust the connection timeout in the
+# `ipc_client_send_command()`.
+#
+# Note it is easy to drive the system into failure by requesting an
+# insane number of threads on client or server and/or increasing the
+# per-thread batchsize or the per-request bytecount (ballast).
+# On Windows these failures look like "pipe is busy" errors.
+# So I've chosen fairly conservative values for now.
+#
+# We expect output of the form "sent:<letter><length> ..."
+# With terms (7, 19, 13) we expect:
+#    <letter> in [A-G]
+#    <length> in [19+0 .. 19+(13-1)]
+#    and (7 * 13) successful responses.
+#
+test_expect_success 'stress test threads' '
+	test-tool simple-ipc multiple \
+		--threads=7 \
+		--bytecount=19 \
+		--batchsize=13 \
+		>actual &&
+	test_line_count = 92 actual &&
+	grep "good 91" actual &&
+	grep "sent:A" <actual >actual_a &&
+	cat >expect_a <<-EOF &&
+	sent:A00000019 rcvd:A00000019
+	sent:A00000020 rcvd:A00000020
+	sent:A00000021 rcvd:A00000021
+	sent:A00000022 rcvd:A00000022
+	sent:A00000023 rcvd:A00000023
+	sent:A00000024 rcvd:A00000024
+	sent:A00000025 rcvd:A00000025
+	sent:A00000026 rcvd:A00000026
+	sent:A00000027 rcvd:A00000027
+	sent:A00000028 rcvd:A00000028
+	sent:A00000029 rcvd:A00000029
+	sent:A00000030 rcvd:A00000030
+	sent:A00000031 rcvd:A00000031
+	EOF
+	test_cmp expect_a actual_a
+'
+
+test_expect_success 'stop-daemon works' '
+	test-tool simple-ipc stop-daemon &&
+	test_must_fail test-tool simple-ipc is-active &&
+	test_must_fail test-tool simple-ipc send --token=ping
+'
+
+test_done