From patchwork Fri Aug 18 20:58:09 2023
X-Patchwork-Submitter: Helge Deller
X-Patchwork-Id: 13358320
Date: Fri, 18 Aug 2023 22:58:09 +0200
From: Helge Deller
To: linux-arm-kernel@lists.infradead.org, "Russell King (Oracle)",
 Arnd Bergmann, Andrew Morton, linux-kernel@vger.kernel.org
Subject: [PATCH][RESEND] arm: Fix flush_dcache_page() for usage from irq context

Since at least kernel 6.1, flush_dcache_page() is called with IRQs
disabled, e.g. from aio_complete().

But the current implementation for flush_dcache_page() on ARM (32-bit)
unintentionally re-enables IRQs, which may lead to deadlocks.

Fix it by using xa_lock_irqsave() and xa_unlock_irqrestore() for the
flush_dcache_mmap_*lock() macros instead.
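
For context, a minimal illustrative sketch (not part of the patch below;
"xa" stands for any struct xarray, e.g. &mapping->i_pages):
xa_unlock_irq() unconditionally re-enables local interrupts when dropping
the lock, while the irqsave/irqrestore pair puts the interrupt flag back
into whatever state the caller left it in:

#include <linux/xarray.h>

/* Old pattern: unconditionally toggles the local IRQ flag. */
static void walk_with_unconditional_irq(struct xarray *xa)
{
	xa_lock_irq(xa);		/* disable IRQs, take the lock */
	/* ... flush work ... */
	xa_unlock_irq(xa);		/* drop the lock, enable IRQs  */
	/* IRQs are now on, even if the caller had disabled them. */
}

/* New pattern: preserves the caller's IRQ state. */
static void walk_with_saved_irq_state(struct xarray *xa)
{
	unsigned long flags;

	xa_lock_irqsave(xa, flags);	/* save IRQ state, disable, lock */
	/* ... flush work ... */
	xa_unlock_irqrestore(xa, flags); /* unlock, restore saved state */
	/* The caller's IRQ state is left untouched. */
}

The patch below switches __flush_dcache_aliases() to the second pattern.
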
Cc: "Russell King (Oracle)"
Cc: Arnd Bergmann
Cc: linux-arm-kernel@lists.infradead.org
Reviewed-by: Arnd Bergmann
Signed-off-by: Helge Deller

diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index a094f964c869..5b8a1ef0dc50 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -315,6 +315,10 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 #define flush_dcache_mmap_lock(mapping)		xa_lock_irq(&mapping->i_pages)
 #define flush_dcache_mmap_unlock(mapping)	xa_unlock_irq(&mapping->i_pages)
+#define flush_dcache_mmap_lock_irqsave(mapping, flags) \
+		xa_lock_irqsave(&mapping->i_pages, flags)
+#define flush_dcache_mmap_unlock_irqrestore(mapping, flags) \
+		xa_unlock_irqrestore(&mapping->i_pages, flags)
 
 /*
  * We don't appear to need to do anything here. In fact, if we did, we'd
diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 2508be91b7a0..e70dd5e354ae 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -238,6 +238,7 @@ static void __flush_dcache_aliases(struct address_space *mapping, struct page *p
 {
 	struct mm_struct *mm = current->active_mm;
 	struct vm_area_struct *mpnt;
+	unsigned long flags;
 	pgoff_t pgoff;
 
 	/*
@@ -248,7 +249,7 @@ static void __flush_dcache_aliases(struct address_space *mapping, struct page *p
 	 */
 	pgoff = page->index;
 
-	flush_dcache_mmap_lock(mapping);
+	flush_dcache_mmap_lock_irqsave(mapping, flags);
 	vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) {
 		unsigned long offset;
 
@@ -262,7 +263,7 @@ static void __flush_dcache_aliases(struct address_space *mapping, struct page *p
 		offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT;
 		flush_cache_page(mpnt, mpnt->vm_start + offset, page_to_pfn(page));
 	}
-	flush_dcache_mmap_unlock(mapping);
+	flush_dcache_mmap_unlock_irqrestore(mapping, flags);
 }
 
 #if __LINUX_ARM_ARCH__ >= 6
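
For reference, a hypothetical caller pattern (loosely modelled on the aio
completion path; struct request_ctx and complete_request() are made-up
names for illustration only) showing why re-enabling IRQs inside
flush_dcache_page() can deadlock:

#include <linux/highmem.h>
#include <linux/spinlock.h>

/* Hypothetical per-request context, illustration only. */
struct request_ctx {
	spinlock_t lock;
};

static void complete_request(struct request_ctx *ctx, struct page *page)
{
	unsigned long flags;

	spin_lock_irqsave(&ctx->lock, flags);	/* IRQs off on this CPU */
	flush_dcache_page(page);	/* old ARM code re-enabled IRQs in here */
	/*
	 * If an interrupt fires at this point and its handler tries to
	 * take ctx->lock, the CPU spins on a lock it already holds.
	 */
	spin_unlock_irqrestore(&ctx->lock, flags);
}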