From patchwork Wed Jun 12 12:28:22 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13694891
From: Kefeng Wang
To: Andrew Morton
CC: Baolin Wang, David Hildenbrand, John Hubbard, Mel Gorman, Ryan Roberts,
 Kefeng Wang
Subject: [PATCH v2] mm: fix possible OOB in numa_rebuild_large_mapping()
Date: Wed, 12 Jun 2024 20:28:22 +0800
Message-ID: <20240612122822.4033433-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0

The large folio is mapped with a folio-size (not greater than PMD_SIZE)
aligned virtual address during the page fault, i.e.
'addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE)' in
do_anonymous_page(), but after mremap() the virtual address only requires
PAGE_SIZE alignment. The ptes are also moved to the new location in
move_page_tables(), so traversing the new ptes in
numa_rebuild_large_mapping() can hit the following issue:

Unable to handle kernel paging request at virtual address 00000a80c021a788
Mem abort info:
  ESR = 0x0000000096000004
  EC = 0x25: DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
  FSC = 0x04: level 0 translation fault
Data abort info:
  ISV = 0, ISS = 0x00000004, ISS2 = 0x00000000
  CM = 0, WnR = 0, TnD = 0, TagAccess = 0
  GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
user pgtable: 4k pages, 48-bit VAs, pgdp=00002040341a6000
[00000a80c021a788] pgd=0000000000000000, p4d=0000000000000000
Internal error: Oops: 0000000096000004 [#1] SMP
...
CPU: 76 PID: 15187 Comm: git Kdump: loaded Tainted: G        W          6.10.0-rc2+ #209
Hardware name: Huawei TaiShan 2280 V2/BC82AMDD, BIOS 1.79 08/21/2021
pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : numa_rebuild_large_mapping+0x338/0x638
lr : numa_rebuild_large_mapping+0x320/0x638
sp : ffff8000b41c3b00
x29: ffff8000b41c3b30 x28: ffff8000812a0000 x27: 00000000000a8000
x26: 00000000000000a8 x25: 0010000000000001 x24: ffff20401c7170f0
x23: 0000ffff33a1e000 x22: 0000ffff33a76000 x21: ffff20400869eca0
x20: 0000ffff33976000 x19: 00000000000000a8 x18: ffffffffffffffff
x17: 0000000000000000 x16: 0000000000000020 x15: ffff8000b41c36a8
x14: 0000000000000000 x13: 205d373831353154 x12: 5b5d333331363732
x11: 000000000011ff78 x10: 000000000011ff10 x9 : ffff800080273f30
x8 : 000000320400869e x7 : c0000000ffffd87f x6 : 00000000001e6ba8
x5 : ffff206f3fb5af88 x4 : 0000000000000000 x3 : 0000000000000000
x2 : 0000000000000000 x1 : fffffdffc0000000 x0 : 00000a80c021a780
Call trace:
 numa_rebuild_large_mapping+0x338/0x638
 do_numa_page+0x3e4/0x4e0
 handle_pte_fault+0x1bc/0x238
 __handle_mm_fault+0x20c/0x400
 handle_mm_fault+0xa8/0x288
 do_page_fault+0x124/0x498
 do_translation_fault+0x54/0x80
 do_mem_abort+0x4c/0xa8
 el0_da+0x40/0x110
 el0t_64_sync_handler+0xe4/0x158
 el0t_64_sync+0x188/0x190

Fix it by making sure the start and end are not only within the vma range,
but also within the page table range.

Fixes: d2136d749d76 ("mm: support multi-size THP numa balancing")
Signed-off-by: Kefeng Wang
Acked-by: David Hildenbrand
Reviewed-by: Baolin Wang
---
v2:
- don't pass nr_pages into numa_rebuild_large_mapping()
- address comment and suggestion from David

 mm/memory.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 0d309cfb703c..60f7a05ad0cd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5228,10 +5228,16 @@ static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_stru
 		bool ignore_writable, bool pte_write_upgrade)
 {
 	int nr = pte_pfn(fault_pte) - folio_pfn(folio);
-	unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
-	unsigned long end = min(vmf->address + (folio_nr_pages(folio) - nr) * PAGE_SIZE, vma->vm_end);
-	pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
-	unsigned long addr;
+	unsigned long start, end, addr = vmf->address;
+	unsigned long addr_start = addr - (nr << PAGE_SHIFT);
+	unsigned long pt_start = ALIGN_DOWN(addr, PMD_SIZE);
+	pte_t *start_ptep;
+
+	/* Stay within the VMA and within the page table. */
+	start = max3(addr_start, pt_start, vma->vm_start);
+	end = min3(addr_start + folio_size(folio), pt_start + PMD_SIZE,
+		   vma->vm_end);
+	start_ptep = vmf->pte - ((addr - start) >> PAGE_SHIFT);
 
 	/* Restore all PTEs' mapping of the large folio */
 	for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
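
To make the bounds arithmetic above easier to follow, here is a minimal
stand-alone user-space sketch (not kernel code) of the same computation.
The folio size, VMA limits, post-mremap folio address and faulting offset
are made-up example values chosen so that the folio straddles a PMD
boundary; it prints how the old start, clamped only against the VMA, falls
below the page table covering the faulting address, while the max3()/min3()
clamping keeps both bounds inside it.

/* bounds_sketch.c - illustrative only; example values are hypothetical */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PMD_SIZE	(1UL << 21)		/* 2 MiB with 4K pages */

#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))
#define MAX(a, b)		((a) > (b) ? (a) : (b))
#define MIN(a, b)		((a) < (b) ? (a) : (b))
#define MAX3(a, b, c)		MAX(MAX(a, b), c)
#define MIN3(a, b, c)		MIN(MIN(a, b), c)

int main(void)
{
	/* Example: a 128 KiB (32-page) folio inside a 4 MiB anon VMA. */
	unsigned long folio_size = 32 * PAGE_SIZE;
	unsigned long vm_start = 0x7f0000000000UL;
	unsigned long vm_end   = 0x7f0000400000UL;

	/*
	 * After an mremap()-style move the folio is only PAGE_SIZE aligned:
	 * its first page sits just below a PMD boundary, the rest above it.
	 */
	unsigned long folio_start = 0x7f00001ff000UL;

	/* NUMA hinting fault on the second page of the folio (nr = 1). */
	unsigned long addr = folio_start + PAGE_SIZE;	/* exactly on the boundary */
	int nr = 1;

	/* Old computation: clamped only against the VMA. */
	unsigned long old_start = MAX(addr - nr * PAGE_SIZE, vm_start);
	unsigned long old_end   = MIN(addr + (folio_size / PAGE_SIZE - nr) * PAGE_SIZE, vm_end);

	/* New computation: also clamped against the page table (PMD) range. */
	unsigned long addr_start = addr - ((unsigned long)nr << PAGE_SHIFT);
	unsigned long pt_start   = ALIGN_DOWN(addr, PMD_SIZE);
	unsigned long new_start  = MAX3(addr_start, pt_start, vm_start);
	unsigned long new_end    = MIN3(addr_start + folio_size, pt_start + PMD_SIZE, vm_end);

	printf("page table covers [%#lx, %#lx)\n", pt_start, pt_start + PMD_SIZE);
	printf("old start/end     [%#lx, %#lx)  <- start is below pt_start\n", old_start, old_end);
	printf("new start/end     [%#lx, %#lx)\n", new_start, new_end);

	/*
	 * With the old bounds, start_ptep = vmf->pte - (addr - start)/PAGE_SIZE
	 * steps one entry before the PTE page that vmf->pte belongs to, which
	 * is the out-of-bounds access the patch fixes.
	 */
	return 0;
}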