From patchwork Mon Oct 16 16:31:22 2023
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 13423691
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Borislav Petkov, Andy Lutomirski, Dave Hansen, Sean Christopherson,
	Andrew Morton, Joerg Roedel, Ard Biesheuvel
Cc: Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes,
	Vlastimil Babka, Tom Lendacky, Thomas Gleixner, Peter Zijlstra,
	Paolo Bonzini, Ingo Molnar, Dario Faggioli, Mike Rapoport,
	David Hildenbrand, Mel Gorman, marcelo.cerri@canonical.com,
	tim.gardner@canonical.com, philip.cox@canonical.com,
	aarcange@redhat.com, peterx@redhat.com, x86@kernel.org,
	linux-mm@kvack.org, linux-coco@lists.linux.dev,
	linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org,
	"Kirill A. Shutemov", stable@kernel.org, Nikolay Borisov
Subject: [PATCHv2] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance
Date: Mon, 16 Oct 2023 19:31:22 +0300
Message-ID: <20231016163122.12855-1-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.41.0
MIME-Version: 1.0
Michael reported soft lockups on a system that has unaccepted memory.
This occurs when a user attempts to allocate and accept memory on
multiple CPUs simultaneously.

The root cause of the issue is that memory acceptance is serialized with
a spinlock, allowing only one CPU to accept memory at a time. The other
CPUs spin and wait for their turn, leading to starvation and soft lockup
reports.

To address this, the code has been modified to release the spinlock
while accepting memory. This allows for parallel memory acceptance on
multiple CPUs.
A newly introduced "accepting_list" keeps track of which memory is
currently being accepted. This is necessary to prevent parallel
acceptance of the same memory block. If a collision occurs, the lock is
released and the process is retried. Such collisions should rarely
occur.

The main path for memory acceptance is the page allocator, which
accepts memory in MAX_ORDER chunks. As long as MAX_ORDER is equal to or
larger than the unit_size, collisions will never occur because the
caller fully owns the memory block being accepted.

Aside from the page allocator, only memblock and deferred_free_range()
accept memory, but this only happens during boot.

The code has been tested with unit_size == 128MiB to trigger collisions
and validate the retry codepath.

Signed-off-by: Kirill A. Shutemov
Reported-by: Michael Roth
Reviewed-by: Nikolay Borisov
Reviewed-by: Vlastimil Babka
Tested-by: Michael Roth
---
 v2:
  - Fix deadlock (Vlastimil);
  - Fix comments (Vlastimil);
  - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
    from atomic context;
---
 drivers/firmware/efi/unaccepted_memory.c | 71 ++++++++++++++++++++++--
 1 file changed, 67 insertions(+), 4 deletions(-)

diff --git a/drivers/firmware/efi/unaccepted_memory.c b/drivers/firmware/efi/unaccepted_memory.c
index 853f7dc3c21d..fa3363889224 100644
--- a/drivers/firmware/efi/unaccepted_memory.c
+++ b/drivers/firmware/efi/unaccepted_memory.c
@@ -5,9 +5,17 @@
 #include <linux/spinlock.h>
 #include <asm/unaccepted_memory.h>
 
-/* Protects unaccepted memory bitmap */
+/* Protects unaccepted memory bitmap and accepting_list */
 static DEFINE_SPINLOCK(unaccepted_memory_lock);
 
+struct accept_range {
+	struct list_head list;
+	unsigned long start;
+	unsigned long end;
+};
+
+static LIST_HEAD(accepting_list);
+
 /*
  * accept_memory() -- Consult bitmap and accept the memory if needed.
  *
@@ -24,6 +32,7 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
 {
 	struct efi_unaccepted_memory *unaccepted;
 	unsigned long range_start, range_end;
+	struct accept_range range, *entry;
 	unsigned long flags;
 	u64 unit_size;
 
@@ -78,20 +87,74 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
 	if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
 		end = unaccepted->size * unit_size * BITS_PER_BYTE;
 
-	range_start = start / unit_size;
-
+	range.start = start / unit_size;
+	range.end = DIV_ROUND_UP(end, unit_size);
+retry:
 	spin_lock_irqsave(&unaccepted_memory_lock, flags);
+
+	/*
+	 * Check if anybody works on accepting the same range of the memory.
+	 *
+	 * The check is done with unit_size granularity. It is crucial to catch
+	 * all accept requests to the same unit_size block, even if they don't
+	 * overlap on physical address level.
+	 */
+	list_for_each_entry(entry, &accepting_list, list) {
+		if (entry->end < range.start)
+			continue;
+		if (entry->start >= range.end)
+			continue;
+
+		/*
+		 * Somebody else accepting the range. Or at least part of it.
+		 *
+		 * Drop the lock and retry until it is complete.
+		 */
+		spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
+
+		/*
+		 * The code is reachable from atomic context.
+		 * cond_resched() cannot be used.
+		 */
+		cpu_relax();
+
+		goto retry;
+	}
+
+	/*
+	 * Register that the range is about to be accepted.
+	 * Make sure nobody else will accept it.
+	 */
+	list_add(&range.list, &accepting_list);
+
+	range_start = range.start;
 	for_each_set_bitrange_from(range_start, range_end, unaccepted->bitmap,
-				   DIV_ROUND_UP(end, unit_size)) {
+				   range.end) {
 		unsigned long phys_start, phys_end;
 		unsigned long len = range_end - range_start;
 
 		phys_start = range_start * unit_size + unaccepted->phys_base;
 		phys_end = range_end * unit_size + unaccepted->phys_base;
 
+		/*
+		 * Keep interrupts disabled until the accept operation is
+		 * complete in order to prevent deadlocks.
+		 *
+		 * Enabling interrupts before calling arch_accept_memory()
+		 * creates an opportunity for an interrupt handler to request
+		 * acceptance for the same memory. The handler will continuously
+		 * spin with interrupts disabled, preventing other task from
+		 * making progress with the acceptance process.
+		 */
+		spin_unlock(&unaccepted_memory_lock);
+		arch_accept_memory(phys_start, phys_end);
+
+		spin_lock(&unaccepted_memory_lock);
		bitmap_clear(unaccepted->bitmap, range_start, len);
 	}
+
+	list_del(&range.list);
 	spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
 }