@@ -1215,8 +1215,6 @@ static int fsmonitor_run_daemon_1(struct fsmonitor_daemon_state *state)
 			    _("could not start IPC thread pool on '%s'"),
 			    state->path_ipc.buf);
 
-	ipc_server_start_async(state->ipc_server_data);
-
 	/*
 	 * Start the fsmonitor listener thread to collect filesystem
 	 * events.
@@ -516,6 +516,12 @@ void fsm_listen__loop(struct fsmonitor_daemon_state *state)
 	}
 	data->stream_started = 1;
 
+	/*
+	 * Our fs event listener is now running, so it's safe to start
+	 * serving client requests.
+	 */
+	ipc_server_start_async(state->ipc_server_data);
+
 	pthread_mutex_lock(&data->dq_lock);
 	pthread_cond_wait(&data->dq_finished, &data->dq_lock);
 	pthread_mutex_unlock(&data->dq_lock);
@@ -741,6 +741,12 @@ void fsm_listen__loop(struct fsmonitor_daemon_state *state)
 	    start_rdcw_watch(data->watch_gitdir) == -1)
 		goto force_error_stop;
 
+	/*
+	 * Now that we've established the rdcw watches, we can start
+	 * serving clients.
+	 */
+	ipc_server_start_async(state->ipc_server_data);
+
 	for (;;) {
 		dwWait = WaitForMultipleObjects(data->nr_listener_handles,
 						data->hListener,
There's a racy hang in fsmonitor on macOS that we sometimes see in CI.
When we serve a client, what's supposed to happen is:

  1. The client thread calls with_lock__wait_for_cookie(), in which we
     create a cookie file and then wait for a pthread_cond event.

  2. The filesystem event listener sees the cookie file creation, does
     some internal book-keeping, and then triggers the pthread_cond.

But there's a problem: we start the IPC listener that accepts client
requests before we start the fs event thread. So it's possible for us
to accept a client which creates the cookie file and starts waiting
before the fs event thread is initialized, and we miss those filesystem
events entirely. That leaves the client thread hanging forever.

In CI, the symptom is that t9210 (which tests scalar, and scalar always
enables fsmonitor under the hood) may hang forever in "scalar clone".
It is waiting on "git fetch", which is in turn waiting on the fsmonitor
daemon.

The race happens more frequently under load, but you can trigger it
predictably with a sleep like this, which delays the start of the fs
event thread:

--- a/compat/fsmonitor/fsm-listen-darwin.c
+++ b/compat/fsmonitor/fsm-listen-darwin.c
@@ -510,6 +510,7 @@ void fsm_listen__loop(struct fsmonitor_daemon_state *state)
 	FSEventStreamSetDispatchQueue(data->stream, data->dq);
 	data->stream_scheduled = 1;
 
+	sleep(1);
 	if (!FSEventStreamStart(data->stream)) {
 		error(_("Failed to start the FSEventStream"));
 		goto force_error_stop_without_loop;

One solution might be to reverse the order of initialization: start the
fs event thread before the thread that listens for clients. But the
fsmonitor code explicitly does it in the opposite direction: the fs
event thread wants to refer to the ipc_server_data struct, so we need
that to be initialized first.

A further complication is that we need a signal from the fs event
thread that it is actually ready and listening. And those details
happen within backend-specific fsmonitor code, whereas the
initialization is in the shared code.

So instead, let's use the ipc_server init/start split added in the
previous commit. The generic fsmonitor code will init the ipc_server
but _not_ start it, leaving that to the backend-specific code, which
now needs to call ipc_server_start_async() at the right time. For
macOS, that is right after we start the FSEventStream, as you can see
in the diff above.

It's not clear to me if Windows suffers from the same problem (and we
simply don't trigger it in CI), or if it is immune. Regardless, the
obvious place to start accepting clients there is right after we've
established the ReadDirectoryChanges watch.

This makes the hangs go away in our macOS CI environment, even when
compiled with the sleep() above.

Helped-by: Koji Nakamaru <koji.nakamaru@gree.net>
Signed-off-by: Jeff King <peff@peff.net>
---
 builtin/fsmonitor--daemon.c          | 2 --
 compat/fsmonitor/fsm-listen-darwin.c | 6 ++++++
 compat/fsmonitor/fsm-listen-win32.c  | 6 ++++++
 3 files changed, 12 insertions(+), 2 deletions(-)
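
For readers who want the shape of the fix without digging through the
fsmonitor internals, here is a minimal stand-alone sketch of the
init/start ordering described above. It is not Git code; the names
(server_init(), server_start(), listener_thread()) are invented for
illustration. The shared setup code only initializes the server state,
so the listener thread can safely refer to it, and the listener starts
accepting work only once its own watch is live:

/*
 * Stand-alone sketch of the init/start split (hypothetical names, not
 * Git code): initialization happens early so other threads can refer
 * to the server state, but the server only starts accepting work once
 * the event listener is actually watching. Build with: cc -pthread.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct server {
	pthread_mutex_t lock;
	int accepting;			/* are we serving clients yet? */
};

static void server_init(struct server *s)
{
	pthread_mutex_init(&s->lock, NULL);
	s->accepting = 0;		/* initialized, but not started */
}

static void server_start(struct server *s)
{
	pthread_mutex_lock(&s->lock);
	s->accepting = 1;		/* now safe to serve clients */
	pthread_mutex_unlock(&s->lock);
}

static void *listener_thread(void *arg)
{
	struct server *s = arg;

	/* Stand-in for establishing the filesystem watch. */
	sleep(1);

	/*
	 * Only after the watch is live do we start the server; clients
	 * accepted from here on cannot miss the listener's events.
	 */
	server_start(s);
	printf("listener ready, server started\n");
	return NULL;
}

int main(void)
{
	struct server s;
	pthread_t tid;

	server_init(&s);	/* listener may refer to 's' safely */

	if (pthread_create(&tid, NULL, listener_thread, &s))
		return 1;

	/* The real daemon has its own wait loop here instead. */
	pthread_join(&tid, NULL);
	return 0;
}

The patch enforces the same ordering for fsmonitor itself:
ipc_server_init_async() stays in the shared daemon code, while
ipc_server_start_async() is called from the backend listener once the
FSEventStream (macOS) or the rdcw watches (Windows) are running.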