From patchwork Thu Mar 11 18:00:57 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12132399
Reply-To: Sean Christopherson
Date: Thu, 11 Mar 2021 10:00:57 -0800
Message-Id: <20210311180057.1582638-1-seanjc@google.com>
Mime-Version: 1.0
X-Mailer: git-send-email 2.31.0.rc2.261.g7f71774620-goog
Subject: [PATCH v2] mm/mmu_notifiers: Ensure range_end() is paired with
 range_start()
From: Sean Christopherson
To: Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Jason Gunthorpe,
 David Rientjes, Ben Gardon, Michal Hocko, "Jérôme Glisse",
 Andrea Arcangeli, Johannes Weiner, Dimitri Sivanich,
 Sean Christopherson

If one or more notifiers fails .invalidate_range_start(), invoke
.invalidate_range_end() for "all" notifiers.  If there are multiple
notifiers, those that did not fail are expecting _start() and _end() to
be paired, e.g. KVM's mmu_notifier_count would become imbalanced.
Disallow notifiers that can fail _start() from implementing _end() so
that it's unnecessary to either track which notifiers rejected _start(),
or which had already succeeded prior to a failed _start().

Note, the existing behavior of calling _start() on all notifiers even
after a previous notifier failed _start() was an unintended "feature".
Make it canon now that the behavior is depended on for correctness.

As of today, the bug is likely benign:

  1. The only caller of the non-blocking notifier is OOM kill.
  2. The only notifiers that can fail _start() are the i915 and Nouveau
     drivers.
  3. The only notifiers that utilize _end() are the SGI UV GRU driver
     and KVM.
  4. The GRU driver will never coincide with the i915/Nouveau drivers.
  5. An imbalanced kvm->mmu_notifier_count only causes soft lockup in
     the _guest_, and the guest is already doomed due to being an OOM
     victim.

Fix the bug now to play nice with future usage, e.g. KVM has a
potential use case for blocking memslot updates in KVM while an
invalidation is in-progress, and failure to unblock would result in
said updates being blocked indefinitely and hanging.

Found by inspection.  Verified by adding a second notifier in KVM that
periodically returns -EAGAIN on non-blockable ranges, triggering OOM,
and observing that KVM exits with an elevated notifier count.

Fixes: 93065ac753e4 ("mm, oom: distinguish blockable mode for mmu notifiers")
Suggested-by: Jason Gunthorpe
Cc: stable@vger.kernel.org
Cc: David Rientjes
Cc: Ben Gardon
Cc: Michal Hocko
Cc: "Jérôme Glisse"
Cc: Andrea Arcangeli
Cc: Johannes Weiner
Cc: Dimitri Sivanich
Signed-off-by: Sean Christopherson
Reviewed-by: Jason Gunthorpe
---

v2: Reimplemented as suggested by Jason.  Only functional change
    relative to Jason's suggestion is to check invalidate_range_end
    before calling to avoid a NULL pointer dereference.  I also added
    more comments, hopefully they're helpful...

v1: https://lkml.kernel.org/r/20210310213117.1444147-1-seanjc@google.com

 include/linux/mmu_notifier.h | 10 +++++-----
 mm/mmu_notifier.c            | 23 +++++++++++++++++++++++
 2 files changed, 28 insertions(+), 5 deletions(-)

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index b8200782dede..1a6a9eb6d3fa 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -169,11 +169,11 @@ struct mmu_notifier_ops {
 	 * the last refcount is dropped.
 	 *
 	 * If blockable argument is set to false then the callback cannot
-	 * sleep and has to return with -EAGAIN. 0 should be returned
-	 * otherwise. Please note that if invalidate_range_start approves
-	 * a non-blocking behavior then the same applies to
-	 * invalidate_range_end.
-	 *
+	 * sleep and has to return with -EAGAIN if sleeping would be required.
+	 * 0 should be returned otherwise. Please note that notifiers that can
+	 * fail invalidate_range_start are not allowed to implement
+	 * invalidate_range_end, as there is no mechanism for informing the
+	 * notifier that its start failed.
 	 */
 	int (*invalidate_range_start)(struct mmu_notifier *subscription,
 				      const struct mmu_notifier_range *range);
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 61ee40ed804e..459d195d2ff6 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -501,10 +501,33 @@ static int mn_hlist_invalidate_range_start(
 						"");
 				WARN_ON(mmu_notifier_range_blockable(range) ||
 					_ret != -EAGAIN);
+				/*
+				 * We call all the notifiers on any EAGAIN,
+				 * there is no way for a notifier to know if
+				 * its start method failed, thus a start that
+				 * does EAGAIN can't also do end.
+				 */
+				WARN_ON(ops->invalidate_range_end);
 				ret = _ret;
 			}
 		}
 	}
+
+	if (ret) {
+		/*
+		 * Must be non-blocking to get here.  If there are multiple
+		 * notifiers and one or more failed start, any that succeeded
+		 * start are expecting their end to be called.  Do so now.
+		 */
+		hlist_for_each_entry_rcu(subscription, &subscriptions->list,
+					 hlist, srcu_read_lock_held(&srcu)) {
+			if (!subscription->ops->invalidate_range_end)
+				continue;
+
+			subscription->ops->invalidate_range_end(subscription,
+								range);
+		}
+	}
 	srcu_read_unlock(&srcu, id);

 	return ret;