From patchwork Mon Apr 14 13:45:25 2025
X-Patchwork-Submitter: now4yreal
X-Patchwork-Id: 14050452
From: now4yreal
To: Brauner, Matthew Wilcox (Oracle), Kara, Viro, Bacik, Stone, Sandeen,
 Johnson, linux-fsdevel, linux-kernel
Subject: [Bug Report] OOB-read BUG in HFS+ filesystem
X-Mailing-List: linux-fsdevel@vger.kernel.org
Date: Mon, 14 Apr 2025 21:45:25 +0800

Dear Linux Security Maintainers,

I would like to report an OOB-read vulnerability in the HFS+ file system, which I discovered using our in-house kernel fuzzer, Symsyz.

1. Vulnerability Detail and Root Cause:

The vulnerability occurs in the function `hfsplus_bnode_dump` at LOC1 (please see the code below), where `hfs_bnode_read_u16` is called to read `key_off` from the file system at offset `off`. `key_off` is taken directly from the on-disk image, so its value is user-controlled (in the PoC we set it to 29234). At LOC2, `key_off` is used as an offset to read file system content, triggering the call chain `hfs_bnode_read_u16 -> hfs_bnode_read`. The problem lies in `hfs_bnode_read` at LOC3: the offset derived from `key_off` is never validated, so if `off >> PAGE_SHIFT` exceeds the range of `node->page`, the access at LOC4 reads out of bounds, triggering the vulnerability.
```c
// fs/hfsplus/bnode.c +291
void hfs_bnode_dump(struct hfs_bnode *node)
{
	struct hfs_bnode_desc desc;
	__be32 cnid;
	int i, off, key_off;

	hfs_dbg(BNODE_MOD, "bnode: %d\n", node->this);
	hfs_bnode_read(node, &desc, 0, sizeof(desc));
	hfs_dbg(BNODE_MOD, "%d, %d, %d, %d, %d\n",
		be32_to_cpu(desc.next), be32_to_cpu(desc.prev),
		desc.type, desc.height, be16_to_cpu(desc.num_recs));

	off = node->tree->node_size - 2;
	for (i = be16_to_cpu(desc.num_recs); i >= 0; off -= 2, i--) {
		key_off = hfs_bnode_read_u16(node, off); // <-- LOC1: read offset from the filesystem
		hfs_dbg(BNODE_MOD, " %d", key_off);
		if (i && node->type == HFS_NODE_INDEX) {
			int tmp;

			if (node->tree->attributes & HFS_TREE_VARIDXKEYS ||
			    node->tree->cnid == HFSPLUS_ATTR_CNID)
				tmp = hfs_bnode_read_u16(node, key_off) + 2;
			else
				tmp = node->tree->max_key_len + 2;
			hfs_dbg_cont(BNODE_MOD, " (%d", tmp);
			hfs_bnode_read(node, &cnid, key_off + tmp, 4);
			hfs_dbg_cont(BNODE_MOD, ",%d)", be32_to_cpu(cnid));
		} else if (i && node->type == HFS_NODE_LEAF) {
			int tmp;

			tmp = hfs_bnode_read_u16(node, key_off); // <-- LOC2: read content at key_off
			hfs_dbg_cont(BNODE_MOD, " (%d)", tmp);
		}
	}
	hfs_dbg_cont(BNODE_MOD, "\n");
}

// fs/hfsplus/bnode.c +22
void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
{
	struct page **pagep;
	int l;

	off += node->page_offset;                 // <-- LOC3: missing check
	pagep = node->page + (off >> PAGE_SHIFT); // <-- LOC4: trigger the bug
	off &= ~PAGE_MASK;

	l = min_t(int, len, PAGE_SIZE - off);
	memcpy_from_page(buf, *pagep, off, l);
	while ((len -= l) != 0) {
		buf += l;
		l = min_t(int, len, PAGE_SIZE);
		memcpy_from_page(buf, *++pagep, 0, l);
	}
}
```

2. Impact Analysis

Through this vulnerability it is possible to construct arbitrary kernel memory reads, which can be used to leak the kernel base address. Combined with a kernel arbitrary-write vulnerability, this can lead to kernel control-flow hijacking and other severe security issues.
3. Suggested Fix

Add validation for `off` in the function `hfs_bnode_read` (fs/hfsplus/bnode.c +22); a possible patch is appended at the end of this mail.

4. Crash Log Overview:

```
BUG: KASAN: slab-out-of-bounds in hfsplus_bnode_read+0x228/0x240 fs/hfsplus/bnode.c:32
Read of size 8 at addr ffff88802315cfc0 by task syz.0.7/9865

CPU: 0 UID: 0 PID: 9865 Comm: syz.0.7 Not tainted 6.15.0-rc1-00308-gecd5d67ad602 #3 PREEMPT(full)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x10e/0x1f0 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:408 [inline]
 print_report+0xc6/0x680 mm/kasan/report.c:521
 kasan_report+0xe4/0x120 mm/kasan/report.c:634
 hfsplus_bnode_read+0x228/0x240 fs/hfsplus/bnode.c:32
 hfsplus_bnode_read_u16 fs/hfsplus/bnode.c:45 [inline]
 hfsplus_bnode_dump+0x31f/0x3c0 fs/hfsplus/bnode.c:321
 hfsplus_brec_remove+0x3d2/0x4e0 fs/hfsplus/brec.c:229
 __hfsplus_delete_attr+0x2a0/0x3b0 fs/hfsplus/attributes.c:299
 hfsplus_delete_all_attrs+0x26f/0x330 fs/hfsplus/attributes.c:378
 hfsplus_delete_cat+0x851/0xde0 fs/hfsplus/catalog.c:425
 hfsplus_unlink+0x20f/0x7f0 fs/hfsplus/dir.c:385
 hfsplus_rename+0xbc/0x200 fs/hfsplus/dir.c:547
 vfs_rename+0xf47/0x2120 fs/namei.c:5086
 do_renameat2+0x82c/0xc90 fs/namei.c:5235
 __do_sys_renameat2 fs/namei.c:5269 [inline]
 __se_sys_renameat2 fs/namei.c:5266 [inline]
 __x64_sys_renameat2+0xe7/0x130 fs/namei.c:5266
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xc7/0x250 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f2a209b2d5d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f2a2181cba8 EFLAGS: 00000246 ORIG_RAX: 000000000000013c
RAX: ffffffffffffffda RBX: 00007f2a20bd5fa0 RCX: 00007f2a209b2d5d
RDX: 0000000000000004 RSI: 00004000000000c0 RDI: 0000000000000005
RBP: 00007f2a20a36327 R08: 0000000000000000 R09: 0000000000000000
R10: 0000400000000180 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f2a20bd5fac R14: 00007f2a20bd6038 R15: 00007f2a2181cd40

Allocated by task 9865:
 kasan_save_stack+0x33/0x60 mm/kasan/common.c:47
 kasan_save_track+0x14/0x30 mm/kasan/common.c:68
 poison_kmalloc_redzone mm/kasan/common.c:377 [inline]
 __kasan_kmalloc+0xaa/0xb0 mm/kasan/common.c:394
 kasan_kmalloc include/linux/kasan.h:260 [inline]
 __do_kmalloc_node mm/slub.c:4331 [inline]
 __kmalloc_noprof+0x20e/0x560 mm/slub.c:4343
 kmalloc_noprof include/linux/slab.h:909 [inline]
 kzalloc_noprof include/linux/slab.h:1039 [inline]
 __hfs_bnode_create+0x107/0x8b0 fs/hfsplus/bnode.c:409
 hfsplus_bnode_find+0x2db/0xd20 fs/hfsplus/bnode.c:486
 hfsplus_brec_find+0x2b8/0x520 fs/hfsplus/bfind.c:172
 hfsplus_find_attr fs/hfsplus/attributes.c:160 [inline]
 hfsplus_delete_all_attrs+0x248/0x330 fs/hfsplus/attributes.c:371
 hfsplus_delete_cat+0x851/0xde0 fs/hfsplus/catalog.c:425
 hfsplus_unlink+0x20f/0x7f0 fs/hfsplus/dir.c:385
 hfsplus_rename+0xbc/0x200 fs/hfsplus/dir.c:547
 vfs_rename+0xf47/0x2120 fs/namei.c:5086
 do_renameat2+0x82c/0xc90 fs/namei.c:5235
 __do_sys_renameat2 fs/namei.c:5269 [inline]
 __se_sys_renameat2 fs/namei.c:5266 [inline]
 __x64_sys_renameat2+0xe7/0x130 fs/namei.c:5266
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xc7/0x250 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

The buggy address belongs to the object at ffff88802315cf00
 which belongs to the cache kmalloc-192 of size 192
The buggy address is located 40 bytes to the right of
 allocated 152-byte region [ffff88802315cf00, ffff88802315cf98)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0xffff88802315ce00 pfn:0x2315c
flags: 0xfff00000000200(workingset|node=0|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 00fff00000000200 ffff88801b4413c0 ffffea0000005310 ffff88801b440288
raw: ffff88802315ce00 0000000000100002 00000000f5000000 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x252800(GFP_NOWAIT|__GFP_NORETRY|__GFP_COMP|__GFP_THISNODE), pid 9, tgid 9 (kworker/0:0), ts 36080866601, free_ts 24921668201
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x181/0x1b0 mm/page_alloc.c:1717
 prep_new_page mm/page_alloc.c:1725 [inline]
 get_page_from_freelist+0x1074/0x3780 mm/page_alloc.c:3652
 __alloc_pages_slowpath mm/page_alloc.c:4473 [inline]
 __alloc_frozen_pages_noprof+0x5a5/0x2420 mm/page_alloc.c:4947
 alloc_slab_page mm/slub.c:2461 [inline]
 allocate_slab mm/slub.c:2623 [inline]
 new_slab+0x94/0x340 mm/slub.c:2676
 ___slab_alloc+0xd97/0x1970 mm/slub.c:3862
 __slab_alloc.isra.0+0x56/0xb0 mm/slub.c:3952
 __slab_alloc_node mm/slub.c:4027 [inline]
 slab_alloc_node mm/slub.c:4188 [inline]
 __kmalloc_cache_node_noprof+0x276/0x420 mm/slub.c:4370
 kmalloc_node_noprof include/linux/slab.h:928 [inline]
 alloc_worker kernel/workqueue.c:2647 [inline]
 create_worker+0x10f/0x7e0 kernel/workqueue.c:2790
 maybe_create_worker kernel/workqueue.c:3063 [inline]
 manage_workers kernel/workqueue.c:3115 [inline]
 worker_thread+0x926/0xe60 kernel/workqueue.c:3375
 kthread+0x3a5/0x770 kernel/kthread.c:464
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
page last free pid 5273 tgid 5273 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1262 [inline]
 __free_frozen_pages+0x709/0x1030 mm/page_alloc.c:2680
 rcu_do_batch kernel/rcu/tree.c:2568 [inline]
 rcu_core+0x7ad/0x14a0 kernel/rcu/tree.c:2824
 handle_softirqs+0x1e7/0x8a0 kernel/softirq.c:579
 __do_softirq kernel/softirq.c:613 [inline]
 invoke_softirq kernel/softirq.c:453 [inline]
 __irq_exit_rcu+0xfe/0x160 kernel/softirq.c:680
 irq_exit_rcu+0x9/0x30 kernel/softirq.c:696
 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1049 [inline]
 sysvec_apic_timer_interrupt+0xa3/0xc0 arch/x86/kernel/apic/apic.c:1049
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702

Memory state around the buggy address:
 ffff88802315ce80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
 ffff88802315cf00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff88802315cf80: 00 00 00 fc fc fc fc fc fc fc fc fc fc fc fc fc
                                           ^
 ffff88802315d000: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
 ffff88802315d080: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
```

Since I am not a core HFS developer and only have a general understanding of the file system's internal logic, there might be inaccuracies in this analysis. I would appreciate it if you could forward this report to the appropriate maintainers for confirmation and further investigation. Please feel free to reach out if you need any clarification or additional information. I've attached the PoC (written in C) for your convenience; it can be compiled directly with `gcc`.

Thanks for your attention to this matter.

Best regards,
luka

diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c
index 87974d5e6791..5bd31ebe648b 100644
--- a/fs/hfsplus/bnode.c
+++ b/fs/hfsplus/bnode.c
@@ -22,10 +22,14 @@ void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
 {
 	struct page **pagep;
-	int l;
+	int l, pagenum;
 
 	off += node->page_offset;
-	pagep = node->page + (off >> PAGE_SHIFT);
+	pagenum = off >> PAGE_SHIFT;
+	if (pagenum >= node->tree->pages_per_bnode)
+		return;
+
+	pagep = node->page + pagenum;
 	off &= ~PAGE_MASK;
 	l = min_t(int, len, PAGE_SIZE - off);