From patchwork Wed May 24 13:13:05 2023
X-Patchwork-Submitter: Jisheng Zhang
X-Patchwork-Id: 13253991
From: Jisheng Zhang <jszhang@kernel.org>
To: Catalin Marinas, Will Deacon
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Suren Baghdasaryan
Subject: [PATCH] arm64: mm: pass original fault address to handle_mm_fault()
    in PER_VMA_LOCK block
Date: Wed, 24 May 2023 21:13:05 +0800
Message-Id: <20230524131305.2808-1-jszhang@kernel.org>
When reading arm64's PER_VMA_LOCK support code, I found a difference
between arm64 and other architectures when calling handle_mm_fault()
during VMA lock-based page fault handling: the fault address is masked
before being passed to handle_mm_fault(). This also differs from the
mmap_lock-based handling. We need to pass the original fault address to
handle_mm_fault(), as was done in commit 84c5e23edecd ("arm64: mm: Pass
original fault address to handle_mm_fault()").

Following the code path further, the "masked" fault address causes a
mismatch between the address recorded by the perf sw major/minor page
fault events and the one recorded by the perf page fault event:

do_page_fault
  perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, ..., addr) // orig addr
  handle_mm_fault
    mm_account_fault
      perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, ...) // masked addr

Fixes: cd7f176aea5f ("arm64/mm: try VMA lock-based page fault handling first")
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Acked-by: Catalin Marinas
---
 arch/arm64/mm/fault.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index cb21ccd7940d..6045a5117ac1 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -600,8 +600,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		vma_end_read(vma);
 		goto lock_mmap;
 	}
-	fault = handle_mm_fault(vma, addr & PAGE_MASK,
-				mm_flags | FAULT_FLAG_VMA_LOCK, regs);
+	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
 	vma_end_read(vma);
 
 	if (!(fault & VM_FAULT_RETRY)) {
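
A minimal user-space sketch (illustration only, not kernel code; the 4K
PAGE_SIZE and the sample address below are assumptions) of how masking
with PAGE_MASK drops the sub-page offset, which is exactly why the
PERF_COUNT_SW_PAGE_FAULTS and PERF_COUNT_SW_PAGE_FAULTS_MAJ events above
could record different addresses for the same fault:

#include <stdio.h>

#define PAGE_SIZE 4096UL		/* assumes 4K pages; arm64 also supports 16K/64K */
#define PAGE_MASK (~(PAGE_SIZE - 1))

int main(void)
{
	unsigned long addr = 0xffff80001234abcdUL;	/* hypothetical fault address */
	unsigned long masked = addr & PAGE_MASK;	/* what the VMA-lock path passed */

	/* The low 12 bits (offset within the page) are lost after masking. */
	printf("original: %#lx\n", addr);	/* 0xffff80001234abcd */
	printf("masked:   %#lx\n", masked);	/* 0xffff80001234a000 */
	return 0;
}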