From patchwork Sat Apr 14 16:36:23 2018
X-Patchwork-Submitter: Linus Torvalds
X-Patchwork-Id: 10341371
From: Linus Torvalds
Date: Sat, 14 Apr 2018
 09:36:23 -0700
Subject: Re: [PATCH] fs/dcache.c: re-add cond_resched() in shrink_dcache_parent()
To: Al Viro
Cc: Nikolay Borisov, Andrew Morton, Khazhismel Kumykov, linux-fsdevel, Linux Kernel Mailing List, David Rientjes, Goldwyn Rodrigues, Jeff Mahoney, Davidlohr Bueso
X-Mailing-List: linux-fsdevel@vger.kernel.org

On Sat, Apr 14, 2018 at 1:02 AM, Al Viro wrote:
>
> "Bail out" is definitely a bad idea, "sleep"... what on? Especially
> since there might be several evictions we are overlapping with...

Well, one thing that should be looked at is the return condition from
select_collect() that shrink_dcache_parent() uses. Because I think that
return condition is somewhat insane.

The logic there seems to be:

 - if we have found something, stop walking. Either NOW (if somebody is
   waiting) or after you've hit a rename (if nobody is)

Now, this actually makes perfect sense for the whole rename situation:
if there's nobody waiting for us, but we hit a rename, we probably
should stop anyway just to let whoever is doing that rename continue,
and we might as well try to get rid of the dentries we have found so far.

But it does *not* make sense for the case where we've hit a dentry that
is already on the shrink list. Sure, we'll continue to gather all the
other dentries, but if there is concurrent shrinking, shouldn't we give
up the CPU more eagerly - *particularly* if somebody else is waiting
(it might be the other process that actually gets rid of the shrinking
dentries!)?

So my gut feel is that we should at least try doing something like this
in select_collect():

-	if (!list_empty(&data->dispose))
+	if (data->found)
		ret = need_resched() ?
D_WALK_QUIT : D_WALK_NORETRY;

because even if we haven't actually been able to shrink something, if
we hit an already shrinking entry we should probably at least not do
the "retry for rename". And if we actually are going to reschedule, we
might as well start from the beginning.

I realize that *this* thread might not be making any actual progress
(because it didn't find any dentries to shrink), but since it did find
_a_ dentry that is being shrunk, we know the operation itself - on a
bigger scale - is making progress.

Hmm?

Now, this is independent of the fact that we probably do need a
cond_resched() in shrink_dcache_parent(), to actually do the reschedule
if we're not preemptible. The "need_resched()" in select_collect() is
obviously done while holding the dentry locks, so it cannot actually
reschedule there.

HOWEVER. Even in that case, I don't think shrink_dcache_parent() is the
right point. I'd rather just do it differently in shrink_dentry_list():
do it even for the empty list case by just doing it at the top of the
loop:

 static void shrink_dentry_list(struct list_head *list)
 {
-	while (!list_empty(list)) {
+	while (cond_resched(), !list_empty(list)) {
		struct dentry *dentry, *parent;

-		cond_resched();

so my full patch that I would suggest might be TheRightThing(tm) is
attached (but it should be committed as two patches, since the two
issues are independent - I'm just attaching it as one for testing in
case somebody wants to run some nasty workloads on it).

Comments?

Side note: I think we might want to make that

	while (cond_resched(), <condition>) {
		....
	}

thing a pattern for doing cond_resched() in loops, instead of having
the cond_resched() inside the loop itself. It not only handles the
"zero iterations" case, it also ends up being neutral location-wise
wrt 'continue' statements, and potentially generates *better* code.
For example, in this case, doing the cond_resched() at the very top of
the loop means that the loop itself then does that

	dentry = list_entry(list->prev, struct dentry, d_lru);

right after the "list_empty()" test - which means that register
allocation etc might be easier, because it doesn't have a function call
(with associated register clobbers) in between the two accesses to
"list".

And I think that might be a fairly common pattern - the loop
conditional uses the same values as the loop itself then uses.

I don't know. Maybe I'm just making excuses for the somewhat unusual
syntax.

Anybody want to test this out?

              Linus

 fs/dcache.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/fs/dcache.c b/fs/dcache.c
index 86d2de63461e..76507109cbcd 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -1049,11 +1049,9 @@ static bool shrink_lock_dentry(struct dentry *dentry)
 
 static void shrink_dentry_list(struct list_head *list)
 {
-	while (!list_empty(list)) {
+	while (cond_resched(), !list_empty(list)) {
 		struct dentry *dentry, *parent;
 
-		cond_resched();
-
 		dentry = list_entry(list->prev, struct dentry, d_lru);
 		spin_lock(&dentry->d_lock);
 		rcu_read_lock();
@@ -1462,7 +1460,7 @@ static enum d_walk_ret select_collect(void *_data, struct dentry *dentry)
 	 * ensures forward progress). We'll be coming back to find
 	 * the rest.
 	 */
-	if (!list_empty(&data->dispose))
+	if (data->found)
 		ret = need_resched() ? D_WALK_QUIT : D_WALK_NORETRY;
 out:
 	return ret;