From patchwork Thu Aug 18 07:37:43 2022
X-Patchwork-Submitter: Baolin Wang
X-Patchwork-Id: 12946892
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: sj@kernel.org, akpm@linux-foundation.org
Cc: baolin.wang@linux.alibaba.com, muchun.song@linux.dev,
    mike.kravetz@oracle.com, damon@lists.linux.dev, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/2] mm/damon: validate if the pmd entry is present before accessing
Date: Thu, 18 Aug 2022 15:37:43 +0800
Message-Id: <58b1d1f5fbda7db49ca886d9ef6783e3dcbbbc98.1660805030.git.baolin.wang@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
pmd_huge() is used to check whether a pmd entry maps a huge page, but on
arm64 and x86 it also returns true for a non-present (migration or
hwpoisoned) pmd entry. For such a non-present entry, pmd_pfn() cannot
return a valid pfn, so damon_get_page() may end up with a wrong page
struct (or NULL from pfn_to_online_page()), which makes the access
statistics incorrect.

Moreover, it makes no sense to spend time looking up the page of a
non-present entry; just treat it as not accessed and skip it, which is
consistent with how non-present pte-level entries are handled.

Thus add a pmd entry present check to fix the above issues.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: SeongJae Park <sj@kernel.org>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
---
Changes from v1:
- Update the commit message to make it more clear.
- Add reviewed tag from SeongJae.
---
 mm/damon/vaddr.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 3c7b9d6..1d16c6c 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -304,6 +304,11 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
 
 	if (pmd_huge(*pmd)) {
 		ptl = pmd_lock(walk->mm, pmd);
+		if (!pmd_present(*pmd)) {
+			spin_unlock(ptl);
+			return 0;
+		}
+
 		if (pmd_huge(*pmd)) {
 			damon_pmdp_mkold(pmd, walk->mm, addr);
 			spin_unlock(ptl);
@@ -431,6 +436,11 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	if (pmd_huge(*pmd)) {
 		ptl = pmd_lock(walk->mm, pmd);
+		if (!pmd_present(*pmd)) {
+			spin_unlock(ptl);
+			return 0;
+		}
+
 		if (!pmd_huge(*pmd)) {
 			spin_unlock(ptl);
 			goto regular_page;
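
[Editor's illustration, not part of the patch: the sketch below shows the
general pattern the fix relies on -- re-check pmd_present() under the pmd
lock before pmd_pfn() is used to look up a page. The helper name
damon_inspect_huge_pmd() and its signature are hypothetical; pmd_lock(),
pmd_present(), pmd_pfn(), damon_get_page() and put_page() are the existing
kernel helpers the real code uses, and the snippet assumes the usual
mm/damon build context rather than being a standalone program.]

/* Illustrative sketch only: safely turn a huge pmd into a page. */
static int damon_inspect_huge_pmd(pmd_t *pmd, struct mm_struct *mm,
				  unsigned long addr)
{
	spinlock_t *ptl;
	struct page *page;

	/* Serialize against concurrent changes to the pmd entry. */
	ptl = pmd_lock(mm, pmd);
	if (!pmd_present(*pmd)) {
		/* Migration or hwpoison entry: pmd_pfn() would be bogus. */
		spin_unlock(ptl);
		return 0;	/* treat as not accessed, like the pte path */
	}

	/* The pfn is only meaningful for a present entry. */
	page = damon_get_page(pmd_pfn(*pmd));
	if (page)
		put_page(page);

	spin_unlock(ptl);
	return 0;
}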