From patchwork Sat Oct 26 05:43:07 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13852072
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: David Hildenbrand, Matthew Wilcox, Muchun Song, "Huang, Ying",
 Kefeng Wang
Subject: [PATCH v2 2/2] mm: use aligned address in copy_user_gigantic_page()
Date: Sat, 26 Oct 2024 13:43:07 +0800
Message-ID: <20241026054307.3896926-2-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20241026054307.3896926-1-wangkefeng.wang@huawei.com>
References: <20241026054307.3896926-1-wangkefeng.wang@huawei.com>

When copying a gigantic page, copy_user_gigantic_page() copies the
subpages from the first page to the last page of the folio. If the
addr_hint passed in is not the address of the first page of the folio,
some architectures could flush the wrong cache lines when they use
addr_hint as a hint.

For a non-gigantic page, the base address is calculated internally, so
even a wrong addr_hint has only a performance impact (process_huge_page()
wants to process the target page last to keep its cache lines hot);
there is no functional impact.

Let's pass the real accessed address to copy_user_large_folio() and use
the aligned address in copy_user_gigantic_page() to fix it.
Fixes: 530dd9926dc1 ("mm: memory: improve copy_user_large_folio()")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
v2:
- update changelog to clarify the impact, per Andrew

 mm/hugetlb.c | 5 ++---
 mm/memory.c  | 1 +
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2c8c5da0f5d3..15b5d46d49d2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5338,7 +5338,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 				break;
 			}
 			ret = copy_user_large_folio(new_folio, pte_folio,
-						    ALIGN_DOWN(addr, sz), dst_vma);
+						    addr, dst_vma);
 			folio_put(pte_folio);
 			if (ret) {
 				folio_put(new_folio);
@@ -6641,8 +6641,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 			*foliop = NULL;
 			goto out;
 		}
-		ret = copy_user_large_folio(folio, *foliop,
-					    ALIGN_DOWN(dst_addr, size), dst_vma);
+		ret = copy_user_large_folio(folio, *foliop, dst_addr, dst_vma);
 		folio_put(*foliop);
 		*foliop = NULL;
 		if (ret) {
diff --git a/mm/memory.c b/mm/memory.c
index ef47b7ea5ddd..e5284bab659d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6860,6 +6860,7 @@ static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
 	struct page *dst_page;
 	struct page *src_page;
 
+	addr = ALIGN_DOWN(addr, folio_size(dst));
 	for (i = 0; i < nr_pages; i++) {
 		dst_page = folio_page(dst, i);
 		src_page = folio_page(src, i);
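
For illustration only (not part of the patch): a minimal userspace-style
sketch of the alignment the fix relies on. ALIGN_DOWN here is modelled on
the kernel macro for power-of-two sizes, and the 1 GiB folio size is an
assumption standing in for folio_size() of a gigantic folio.

#include <stdio.h>

/* Modelled on the kernel's ALIGN_DOWN(); assumes size is a power of two. */
#define ALIGN_DOWN(addr, size)	((addr) & ~((size) - 1))

int main(void)
{
	unsigned long folio_size = 1UL << 30;	/* assumed 1 GiB gigantic folio */
	/* an accessed address somewhere inside the folio, as addr_hint would be */
	unsigned long addr_hint = (1UL << 30) + 0x12345000UL;
	unsigned long base = ALIGN_DOWN(addr_hint, folio_size);

	/*
	 * base is the address of the folio's first page, which the copy loop
	 * (and any cache maintenance keyed off it) must start from.
	 */
	printf("hint=%#lx base=%#lx\n", addr_hint, base);
	return 0;
}

With the mm/memory.c hunk above, copy_user_gigantic_page() performs this
alignment itself, so callers can pass the real accessed address.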