From patchwork Mon Apr 15 16:35:20 2024
From: jeffxu@chromium.org
To: akpm@linux-foundation.org, keescook@chromium.org, jannh@google.com,
 sroettger@google.com, willy@infradead.org, gregkh@linuxfoundation.org,
 torvalds@linux-foundation.org, usama.anjum@collabora.com, corbet@lwn.net,
 Liam.Howlett@oracle.com, surenb@google.com, merimus@google.com,
 rdunlap@infradead.org
Cc: jeffxu@google.com, jorgelo@chromium.org, groeck@chromium.org,
 linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
 linux-mm@kvack.org, pedro.falcato@gmail.com, dave.hansen@intel.com,
 linux-hardening@vger.kernel.org, deraadt@openbsd.org, Jeff Xu
Subject: [PATCH v10 1/5] mseal: Wire up mseal syscall
Date: Mon, 15 Apr 2024 16:35:20 +0000
Message-ID: <20240415163527.626541-2-jeffxu@chromium.org>
In-Reply-To: <20240415163527.626541-1-jeffxu@chromium.org>
References: <20240415163527.626541-1-jeffxu@chromium.org>
From: Jeff Xu

Wire up mseal syscall for all architectures.

Signed-off-by: Jeff Xu
---
 arch/alpha/kernel/syscalls/syscall.tbl      | 1 +
 arch/arm/tools/syscall.tbl                  | 1 +
 arch/arm64/include/asm/unistd.h             | 2 +-
 arch/arm64/include/asm/unistd32.h           | 2 ++
 arch/m68k/kernel/syscalls/syscall.tbl       | 1 +
 arch/microblaze/kernel/syscalls/syscall.tbl | 1 +
 arch/mips/kernel/syscalls/syscall_n32.tbl   | 1 +
 arch/mips/kernel/syscalls/syscall_n64.tbl   | 1 +
 arch/mips/kernel/syscalls/syscall_o32.tbl   | 1 +
 arch/parisc/kernel/syscalls/syscall.tbl     | 1 +
 arch/powerpc/kernel/syscalls/syscall.tbl    | 1 +
 arch/s390/kernel/syscalls/syscall.tbl       | 1 +
 arch/sh/kernel/syscalls/syscall.tbl         | 1 +
 arch/sparc/kernel/syscalls/syscall.tbl      | 1 +
 arch/x86/entry/syscalls/syscall_32.tbl      | 1 +
 arch/x86/entry/syscalls/syscall_64.tbl      | 1 +
 arch/xtensa/kernel/syscalls/syscall.tbl     | 1 +
 include/uapi/asm-generic/unistd.h           | 5 ++++-
 kernel/sys_ni.c                             | 1 +
 19 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/arch/alpha/kernel/syscalls/syscall.tbl b/arch/alpha/kernel/syscalls/syscall.tbl
index 8ff110826ce2..d8f96362e9f8 100644
--- a/arch/alpha/kernel/syscalls/syscall.tbl
+++ b/arch/alpha/kernel/syscalls/syscall.tbl
@@ -501,3 +501,4 @@
 569    common  lsm_get_self_attr       sys_lsm_get_self_attr
 570    common  lsm_set_self_attr       sys_lsm_set_self_attr
 571    common  lsm_list_modules        sys_lsm_list_modules
+572    common  mseal                   sys_mseal
diff --git a/arch/arm/tools/syscall.tbl b/arch/arm/tools/syscall.tbl
index b6c9e01e14f5..2ed7d229c8f9 100644
--- a/arch/arm/tools/syscall.tbl
+++ b/arch/arm/tools/syscall.tbl
@@ -475,3 +475,4 @@
 459    common  lsm_get_self_attr       sys_lsm_get_self_attr
 460    common  lsm_set_self_attr       sys_lsm_set_self_attr
 461    common  lsm_list_modules        sys_lsm_list_modules
+462    common  mseal                   sys_mseal
diff --git a/arch/arm64/include/asm/unistd.h b/arch/arm64/include/asm/unistd.h
index 491b2b9bd553..1346579f802f 100644
--- a/arch/arm64/include/asm/unistd.h
+++ b/arch/arm64/include/asm/unistd.h
@@ -39,7 +39,7 @@
 #define __ARM_NR_compat_set_tls        (__ARM_NR_COMPAT_BASE + 5)
 #define __ARM_NR_COMPAT_END            (__ARM_NR_COMPAT_BASE + 0x800)
 
-#define __NR_compat_syscalls           462
+#define __NR_compat_syscalls           463
 #endif
 
 #define __ARCH_WANT_SYS_CLONE
diff --git a/arch/arm64/include/asm/unistd32.h b/arch/arm64/include/asm/unistd32.h
index 7118282d1c79..266b96acc014 100644
--- a/arch/arm64/include/asm/unistd32.h
+++ b/arch/arm64/include/asm/unistd32.h
@@ -929,6 +929,8 @@ __SYSCALL(__NR_lsm_get_self_attr, sys_lsm_get_self_attr)
 __SYSCALL(__NR_lsm_set_self_attr, sys_lsm_set_self_attr)
 #define __NR_lsm_list_modules 461
 __SYSCALL(__NR_lsm_list_modules, sys_lsm_list_modules)
+#define __NR_mseal 462
+__SYSCALL(__NR_mseal, sys_mseal)
 
 /*
  * Please add new compat syscalls above this comment and update
diff --git a/arch/m68k/kernel/syscalls/syscall.tbl b/arch/m68k/kernel/syscalls/syscall.tbl
index 7fd43fd4c9f2..22a3cbd4c602 100644
--- a/arch/m68k/kernel/syscalls/syscall.tbl
+++ b/arch/m68k/kernel/syscalls/syscall.tbl
@@ -461,3 +461,4 @@
 459    common  lsm_get_self_attr       sys_lsm_get_self_attr
 460    common  lsm_set_self_attr       sys_lsm_set_self_attr
 461    common  lsm_list_modules        sys_lsm_list_modules
+462    common  mseal                   sys_mseal
diff --git a/arch/microblaze/kernel/syscalls/syscall.tbl b/arch/microblaze/kernel/syscalls/syscall.tbl
index b00ab2cabab9..2b81a6bd78b2 100644
--- a/arch/microblaze/kernel/syscalls/syscall.tbl
+++ b/arch/microblaze/kernel/syscalls/syscall.tbl
@@ -467,3 +467,4 @@
 459    common  lsm_get_self_attr       sys_lsm_get_self_attr
 460    common  lsm_set_self_attr       sys_lsm_set_self_attr
 461    common  lsm_list_modules        sys_lsm_list_modules
+462    common  mseal                   sys_mseal
diff --git a/arch/mips/kernel/syscalls/syscall_n32.tbl b/arch/mips/kernel/syscalls/syscall_n32.tbl
index 83cfc9eb6b88..cc869f5d5693 100644
--- a/arch/mips/kernel/syscalls/syscall_n32.tbl
+++ b/arch/mips/kernel/syscalls/syscall_n32.tbl
@@ -400,3 +400,4 @@
 459    n32     lsm_get_self_attr       sys_lsm_get_self_attr
 460    n32     lsm_set_self_attr       sys_lsm_set_self_attr
 461    n32     lsm_list_modules        sys_lsm_list_modules
+462    n32     mseal                   sys_mseal
diff --git a/arch/mips/kernel/syscalls/syscall_n64.tbl b/arch/mips/kernel/syscalls/syscall_n64.tbl
index 532b855df589..1464c6be6eb3 100644
--- a/arch/mips/kernel/syscalls/syscall_n64.tbl
+++ b/arch/mips/kernel/syscalls/syscall_n64.tbl
@@ -376,3 +376,4 @@
 459    n64     lsm_get_self_attr       sys_lsm_get_self_attr
 460    n64     lsm_set_self_attr       sys_lsm_set_self_attr
 461    n64     lsm_list_modules        sys_lsm_list_modules
+462    n64     mseal                   sys_mseal
diff --git a/arch/mips/kernel/syscalls/syscall_o32.tbl b/arch/mips/kernel/syscalls/syscall_o32.tbl
index f45c9530ea93..008ebe60263e 100644
--- a/arch/mips/kernel/syscalls/syscall_o32.tbl
+++ b/arch/mips/kernel/syscalls/syscall_o32.tbl
@@ -449,3 +449,4 @@
 459    o32     lsm_get_self_attr       sys_lsm_get_self_attr
 460    o32     lsm_set_self_attr       sys_lsm_set_self_attr
 461    o32     lsm_list_modules        sys_lsm_list_modules
+462    o32     mseal                   sys_mseal
diff --git a/arch/parisc/kernel/syscalls/syscall.tbl b/arch/parisc/kernel/syscalls/syscall.tbl
index b236a84c4e12..b13c21373974 100644
--- a/arch/parisc/kernel/syscalls/syscall.tbl
+++ b/arch/parisc/kernel/syscalls/syscall.tbl
@@ -460,3 +460,4 @@
 459    common  lsm_get_self_attr       sys_lsm_get_self_attr
 460    common  lsm_set_self_attr       sys_lsm_set_self_attr
 461    common  lsm_list_modules        sys_lsm_list_modules
+462    common  mseal                   sys_mseal
diff --git a/arch/powerpc/kernel/syscalls/syscall.tbl b/arch/powerpc/kernel/syscalls/syscall.tbl
index 17173b82ca21..3656f1ca7a21 100644
--- a/arch/powerpc/kernel/syscalls/syscall.tbl
+++ b/arch/powerpc/kernel/syscalls/syscall.tbl
@@ -548,3 +548,4 @@
 459    common  lsm_get_self_attr       sys_lsm_get_self_attr
 460    common  lsm_set_self_attr       sys_lsm_set_self_attr
 461    common  lsm_list_modules        sys_lsm_list_modules
+462    common  mseal                   sys_mseal
diff --git a/arch/s390/kernel/syscalls/syscall.tbl b/arch/s390/kernel/syscalls/syscall.tbl
index 095bb86339a7..bd0fee24ad10 100644
--- a/arch/s390/kernel/syscalls/syscall.tbl
+++ b/arch/s390/kernel/syscalls/syscall.tbl
@@ -464,3 +464,4 @@
 459  common  lsm_get_self_attr  sys_lsm_get_self_attr   sys_lsm_get_self_attr
 460  common  lsm_set_self_attr  sys_lsm_set_self_attr   sys_lsm_set_self_attr
 461  common  lsm_list_modules   sys_lsm_list_modules    sys_lsm_list_modules
+462  common  mseal              sys_mseal               sys_mseal
diff --git a/arch/sh/kernel/syscalls/syscall.tbl b/arch/sh/kernel/syscalls/syscall.tbl
index 86fe269f0220..bbf83a2db986 100644
--- a/arch/sh/kernel/syscalls/syscall.tbl
+++ b/arch/sh/kernel/syscalls/syscall.tbl
@@ -464,3 +464,4 @@
 459    common  lsm_get_self_attr       sys_lsm_get_self_attr
 460    common  lsm_set_self_attr       sys_lsm_set_self_attr
 461    common  lsm_list_modules        sys_lsm_list_modules
+462    common  mseal                   sys_mseal
diff --git a/arch/sparc/kernel/syscalls/syscall.tbl b/arch/sparc/kernel/syscalls/syscall.tbl
index b23d59313589..ac6c281ccfe0 100644
--- a/arch/sparc/kernel/syscalls/syscall.tbl
+++ b/arch/sparc/kernel/syscalls/syscall.tbl
@@ -507,3 +507,4 @@
 459    common  lsm_get_self_attr       sys_lsm_get_self_attr
 460    common  lsm_set_self_attr       sys_lsm_set_self_attr
 461    common  lsm_list_modules        sys_lsm_list_modules
+462    common  mseal                   sys_mseal
diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index 5f8591ce7f25..7fd1f57ad3d3 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -466,3 +466,4 @@
 459    i386    lsm_get_self_attr       sys_lsm_get_self_attr
 460    i386    lsm_set_self_attr       sys_lsm_set_self_attr
 461    i386    lsm_list_modules        sys_lsm_list_modules
+462    i386    mseal                   sys_mseal
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index 7e8d46f4147f..52df0dec70da 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -383,6 +383,7 @@
 459    common  lsm_get_self_attr       sys_lsm_get_self_attr
 460    common  lsm_set_self_attr       sys_lsm_set_self_attr
 461    common  lsm_list_modules        sys_lsm_list_modules
+462    common  mseal                   sys_mseal
 
 #
 # Due to a historical design error, certain syscalls are numbered differently
diff --git a/arch/xtensa/kernel/syscalls/syscall.tbl b/arch/xtensa/kernel/syscalls/syscall.tbl
index dd116598fb25..67083fc1b2f5 100644
--- a/arch/xtensa/kernel/syscalls/syscall.tbl
+++ b/arch/xtensa/kernel/syscalls/syscall.tbl
@@ -432,3 +432,4 @@
 459    common  lsm_get_self_attr       sys_lsm_get_self_attr
 460    common  lsm_set_self_attr       sys_lsm_set_self_attr
 461    common  lsm_list_modules        sys_lsm_list_modules
+462    common  mseal                   sys_mseal
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index 75f00965ab15..d983c48a3b6a 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -842,8 +842,11 @@ __SYSCALL(__NR_lsm_set_self_attr, sys_lsm_set_self_attr)
 #define __NR_lsm_list_modules 461
 __SYSCALL(__NR_lsm_list_modules, sys_lsm_list_modules)
 
+#define __NR_mseal 462
+__SYSCALL(__NR_mseal, sys_mseal)
+
 #undef __NR_syscalls
-#define __NR_syscalls 462
+#define __NR_syscalls 463
 
 /*
  * 32 bit systems traditionally used different
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index faad00cce269..d7eee421d4bc 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -196,6 +196,7 @@ COND_SYSCALL(migrate_pages);
 COND_SYSCALL(move_pages);
 COND_SYSCALL(set_mempolicy_home_node);
 COND_SYSCALL(cachestat);
+COND_SYSCALL(mseal);
 
 COND_SYSCALL(perf_event_open);
 COND_SYSCALL(accept4);
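For reference, a minimal userspace sketch (not part of the patch series) of reaching the
newly wired syscall before libc grows a wrapper. The fallback number below assumes the
asm-generic value added by this patch (462); architecture tables differ (alpha uses 572),
and on an older kernel the call simply returns ENOSYS.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_mseal
#define __NR_mseal 462          /* asm-generic number wired up above */
#endif

int main(void)
{
        long page = sysconf(_SC_PAGESIZE);
        void *p = mmap(NULL, page, PROT_READ,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
                return 1;

        /* ENOSYS here just means the running kernel predates mseal(). */
        if (syscall(__NR_mseal, p, page, 0))
                printf("mseal failed: %s\n", strerror(errno));
        else
                printf("mapping sealed\n");
        return 0;
}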
From patchwork Mon Apr 15 16:35:21 2024
From: jeffxu@chromium.org
Subject: [PATCH v10 2/5] mseal: add mseal syscall
Date: Mon, 15 Apr 2024 16:35:21 +0000
Message-ID: <20240415163527.626541-3-jeffxu@chromium.org>
In-Reply-To: <20240415163527.626541-1-jeffxu@chromium.org>
References: <20240415163527.626541-1-jeffxu@chromium.org>
From: Jeff Xu

The new mseal() is a syscall available on 64-bit CPUs, with the
following signature:

int mseal(void *addr, size_t len, unsigned long flags)

addr/len: memory range.
flags: reserved.

mseal() blocks the following operations for the given memory range.

1> Unmapping, moving to another location, and shrinking the size,
   via munmap() and mremap(): these can leave an empty space, which
   could then be replaced with a VMA carrying a new set of attributes.
2> Moving or expanding a different VMA into the current location,
   via mremap().
3> Modifying a VMA via mmap(MAP_FIXED).
4> Size expansion via mremap(). This does not appear to pose any
   specific risk to sealed VMAs, but it is included anyway because
   the use case is unclear. In any case, users can rely on merging
   to expand a sealed VMA.
5> mprotect() and pkey_mprotect().
6> Some destructive madvise() behaviors (e.g. MADV_DONTNEED) for
   anonymous memory, when the user does not have write permission
   to the memory. Those behaviors can alter region contents by
   discarding pages, effectively a memset(0) for anonymous memory.

The following input during the RFC was incorporated into this patch:

Jann Horn: raising awareness and providing valuable insights on the
destructive madvise operations.
Linus Torvalds: assisting in defining the system call signature and scope.
Liam R. Howlett: perf optimization.
Theo de Raadt: sharing the experience and insight gained from
implementing mimmutable() in OpenBSD.

Finally, the idea that inspired this patch comes from Stephen Röttger's
work in Chrome V8 CFI.
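To make the semantics above concrete, a hedged userspace sketch (not part of the patch;
it assumes the __NR_mseal wiring from patch 1 and a 64-bit kernel with this series
applied): once a range is sealed, later attempts to change its protection or unmap it
are expected to fail with EPERM, while the mapping itself stays usable.

#include <errno.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_mseal
#define __NR_mseal 462
#endif

int main(void)
{
        long page = sysconf(_SC_PAGESIZE);
        void *p = mmap(NULL, 4 * page, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED || syscall(__NR_mseal, p, 4 * page, 0))
                return 1;

        /* Blocked operation 5: mprotect() on a sealed VMA. */
        if (mprotect(p, 4 * page, PROT_READ) < 0 && errno == EPERM)
                printf("mprotect rejected on sealed range\n");

        /* Blocked operation 1: munmap() on a sealed VMA. */
        if (munmap(p, 4 * page) < 0 && errno == EPERM)
                printf("munmap rejected on sealed range\n");

        return 0;
}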
Signed-off-by: Jeff Xu --- include/linux/syscalls.h | 1 + mm/Makefile | 4 + mm/internal.h | 37 +++++ mm/madvise.c | 12 ++ mm/mmap.c | 31 +++- mm/mprotect.c | 10 ++ mm/mremap.c | 31 ++++ mm/mseal.c | 307 +++++++++++++++++++++++++++++++++++++++ 8 files changed, 432 insertions(+), 1 deletion(-) create mode 100644 mm/mseal.c diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h index e619ac10cd23..9104952d323d 100644 --- a/include/linux/syscalls.h +++ b/include/linux/syscalls.h @@ -821,6 +821,7 @@ asmlinkage long sys_process_mrelease(int pidfd, unsigned int flags); asmlinkage long sys_remap_file_pages(unsigned long start, unsigned long size, unsigned long prot, unsigned long pgoff, unsigned long flags); +asmlinkage long sys_mseal(unsigned long start, size_t len, unsigned long flags); asmlinkage long sys_mbind(unsigned long start, unsigned long len, unsigned long mode, const unsigned long __user *nmask, diff --git a/mm/Makefile b/mm/Makefile index 4abb40b911ec..739811890e36 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -42,6 +42,10 @@ ifdef CONFIG_CROSS_MEMORY_ATTACH mmu-$(CONFIG_MMU) += process_vm_access.o endif +ifdef CONFIG_64BIT +mmu-$(CONFIG_MMU) += mseal.o +endif + obj-y := filemap.o mempool.o oom_kill.o fadvise.o \ maccess.o page-writeback.o folio-compat.o \ readahead.o swap.o truncate.o vmscan.o shrinker.o \ diff --git a/mm/internal.h b/mm/internal.h index 7e486f2c502c..a858161489b3 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -1326,6 +1326,43 @@ void __meminit __init_single_page(struct page *page, unsigned long pfn, unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg, int priority); +#ifdef CONFIG_64BIT +/* VM is sealed, in vm_flags */ +#define VM_SEALED _BITUL(63) +#endif + +#ifdef CONFIG_64BIT +static inline int can_do_mseal(unsigned long flags) +{ + if (flags) + return -EINVAL; + + return 0; +} + +bool can_modify_mm(struct mm_struct *mm, unsigned long start, + unsigned long end); +bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start, + unsigned long end, int behavior); +#else +static inline int can_do_mseal(unsigned long flags) +{ + return -EPERM; +} + +static inline bool can_modify_mm(struct mm_struct *mm, unsigned long start, + unsigned long end) +{ + return true; +} + +static inline bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start, + unsigned long end, int behavior) +{ + return true; +} +#endif + #ifdef CONFIG_SHRINKER_DEBUG static inline __printf(2, 0) int shrinker_debugfs_name_alloc( struct shrinker *shrinker, const char *fmt, va_list ap) diff --git a/mm/madvise.c b/mm/madvise.c index 44a498c94158..f7d589534e82 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -1394,6 +1394,7 @@ int madvise_set_anon_name(struct mm_struct *mm, unsigned long start, * -EIO - an I/O error occurred while paging in data. * -EBADF - map exists, but area maps something that isn't a file. * -EAGAIN - a kernel resource was temporarily unavailable. + * -EPERM - memory is sealed. */ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int behavior) { @@ -1437,10 +1438,21 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int beh start = untagged_addr_remote(mm, start); end = start + len; + /* + * Check if the address range is sealed for do_madvise(). + * can_modify_mm_madv assumes we have acquired the lock on MM. 
+ */ + if (!can_modify_mm_madv(mm, start, end, behavior)) { + error = -EPERM; + goto out; + } + blk_start_plug(&plug); error = madvise_walk_vmas(mm, start, end, behavior, madvise_vma_behavior); blk_finish_plug(&plug); + +out: if (write) mmap_write_unlock(mm); else diff --git a/mm/mmap.c b/mm/mmap.c index 6dbda99a47da..4b80076c319e 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -1267,6 +1267,16 @@ unsigned long do_mmap(struct file *file, unsigned long addr, return -EEXIST; } + /* + * addr is returned from get_unmapped_area, + * There are two cases: + * 1> MAP_FIXED == false + * unallocated memory, no need to check sealing. + * 1> MAP_FIXED == true + * sealing is checked inside mmap_region when + * do_vmi_munmap is called. + */ + if (prot == PROT_EXEC) { pkey = execute_only_pkey(mm); if (pkey < 0) @@ -2682,6 +2692,14 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm, if (end == start) return -EINVAL; + /* + * Check if memory is sealed before arch_unmap. + * Prevent unmapping a sealed VMA. + * can_modify_mm assumes we have acquired the lock on MM. + */ + if (!can_modify_mm(mm, start, end)) + return -EPERM; + /* arch_unmap() might do unmaps itself. */ arch_unmap(mm, start, end); @@ -2744,7 +2762,10 @@ unsigned long mmap_region(struct file *file, unsigned long addr, } /* Unmap any existing mapping in the area */ - if (do_vmi_munmap(&vmi, mm, addr, len, uf, false)) + error = do_vmi_munmap(&vmi, mm, addr, len, uf, false); + if (error == -EPERM) + return error; + else if (error) return -ENOMEM; /* @@ -3094,6 +3115,14 @@ int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma, { struct mm_struct *mm = vma->vm_mm; + /* + * Check if memory is sealed before arch_unmap. + * Prevent unmapping a sealed VMA. + * can_modify_mm assumes we have acquired the lock on MM. + */ + if (!can_modify_mm(mm, start, end)) + return -EPERM; + arch_unmap(mm, start, end); return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock); } diff --git a/mm/mprotect.c b/mm/mprotect.c index f8a4544b4601..b30b2494bfcd 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -32,6 +32,7 @@ #include #include #include +#include #include #include #include @@ -743,6 +744,15 @@ static int do_mprotect_pkey(unsigned long start, size_t len, } } + /* + * checking if memory is sealed. + * can_modify_mm assumes we have acquired the lock on MM. + */ + if (!can_modify_mm(current->mm, start, end)) { + error = -EPERM; + goto out; + } + prev = vma_prev(&vmi); if (start > vma->vm_start) prev = vma; diff --git a/mm/mremap.c b/mm/mremap.c index 38d98465f3d8..d69b438dcf83 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -902,7 +902,25 @@ static unsigned long mremap_to(unsigned long addr, unsigned long old_len, if ((mm->map_count + 2) >= sysctl_max_map_count - 3) return -ENOMEM; + /* + * In mremap_to(). + * Move a VMA to another location, check if src addr is sealed. + * + * Place can_modify_mm here because mremap_to() + * does its own checking for address range, and we only + * check the sealing after passing those checks. + * + * can_modify_mm assumes we have acquired the lock on MM. + */ + if (!can_modify_mm(mm, addr, addr + old_len)) + return -EPERM; + if (flags & MREMAP_FIXED) { + /* + * In mremap_to(). + * VMA is moved to dst address, and munmap dst first. + * do_munmap will check if dst is sealed. 
+ */ ret = do_munmap(mm, new_addr, new_len, uf_unmap_early); if (ret) goto out; @@ -1061,6 +1079,19 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len, goto out; } + /* + * Below is shrink/expand case (not mremap_to()) + * Check if src address is sealed, if so, reject. + * In other words, prevent shrinking or expanding a sealed VMA. + * + * Place can_modify_mm here so we can keep the logic related to + * shrink/expand together. + */ + if (!can_modify_mm(mm, addr, addr + old_len)) { + ret = -EPERM; + goto out; + } + /* * Always allow a shrinking remap: that just unmaps * the unnecessary pages.. diff --git a/mm/mseal.c b/mm/mseal.c new file mode 100644 index 000000000000..daadac4b8125 --- /dev/null +++ b/mm/mseal.c @@ -0,0 +1,307 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Implement mseal() syscall. + * + * Copyright (c) 2023,2024 Google, Inc. + * + * Author: Jeff Xu + */ + +#include +#include +#include +#include +#include +#include +#include +#include "internal.h" + +static inline bool vma_is_sealed(struct vm_area_struct *vma) +{ + return (vma->vm_flags & VM_SEALED); +} + +static inline void set_vma_sealed(struct vm_area_struct *vma) +{ + vm_flags_set(vma, VM_SEALED); +} + +/* + * check if a vma is sealed for modification. + * return true, if modification is allowed. + */ +static bool can_modify_vma(struct vm_area_struct *vma) +{ + if (vma_is_sealed(vma)) + return false; + + return true; +} + +static bool is_madv_discard(int behavior) +{ + return behavior & + (MADV_FREE | MADV_DONTNEED | MADV_DONTNEED_LOCKED | + MADV_REMOVE | MADV_DONTFORK | MADV_WIPEONFORK); +} + +static bool is_ro_anon(struct vm_area_struct *vma) +{ + /* check anonymous mapping. */ + if (vma->vm_file || vma->vm_flags & VM_SHARED) + return false; + + /* + * check for non-writable: + * PROT=RO or PKRU is not writeable. + */ + if (!(vma->vm_flags & VM_WRITE) || + !arch_vma_access_permitted(vma, true, false, false)) + return true; + + return false; +} + +/* + * Check if the vmas of a memory range are allowed to be modified. + * the memory ranger can have a gap (unallocated memory). + * return true, if it is allowed. + */ +bool can_modify_mm(struct mm_struct *mm, unsigned long start, unsigned long end) +{ + struct vm_area_struct *vma; + + VMA_ITERATOR(vmi, mm, start); + + /* going through each vma to check. */ + for_each_vma_range(vmi, vma, end) { + if (!can_modify_vma(vma)) + return false; + } + + /* Allow by default. */ + return true; +} + +/* + * Check if the vmas of a memory range are allowed to be modified by madvise. + * the memory ranger can have a gap (unallocated memory). + * return true, if it is allowed. + */ +bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start, unsigned long end, + int behavior) +{ + struct vm_area_struct *vma; + + VMA_ITERATOR(vmi, mm, start); + + if (!is_madv_discard(behavior)) + return true; + + /* going through each vma to check. */ + for_each_vma_range(vmi, vma, end) + if (is_ro_anon(vma) && !can_modify_vma(vma)) + return false; + + /* Allow by default. 
*/ + return true; +} + +static int mseal_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma, + struct vm_area_struct **prev, unsigned long start, + unsigned long end, vm_flags_t newflags) +{ + int ret = 0; + vm_flags_t oldflags = vma->vm_flags; + + if (newflags == oldflags) + goto out; + + vma = vma_modify_flags(vmi, *prev, vma, start, end, newflags); + if (IS_ERR(vma)) { + ret = PTR_ERR(vma); + goto out; + } + + set_vma_sealed(vma); +out: + *prev = vma; + return ret; +} + +/* + * Check for do_mseal: + * 1> start is part of a valid vma. + * 2> end is part of a valid vma. + * 3> No gap (unallocated address) between start and end. + * 4> map is sealable. + */ +static int check_mm_seal(unsigned long start, unsigned long end) +{ + struct vm_area_struct *vma; + unsigned long nstart = start; + + VMA_ITERATOR(vmi, current->mm, start); + + /* going through each vma to check. */ + for_each_vma_range(vmi, vma, end) { + if (vma->vm_start > nstart) + /* unallocated memory found. */ + return -ENOMEM; + + if (vma->vm_end >= end) + return 0; + + nstart = vma->vm_end; + } + + return -ENOMEM; +} + +/* + * Apply sealing. + */ +static int apply_mm_seal(unsigned long start, unsigned long end) +{ + unsigned long nstart; + struct vm_area_struct *vma, *prev; + + VMA_ITERATOR(vmi, current->mm, start); + + vma = vma_iter_load(&vmi); + /* + * Note: check_mm_seal should already checked ENOMEM case. + * so vma should not be null, same for the other ENOMEM cases. + */ + prev = vma_prev(&vmi); + if (start > vma->vm_start) + prev = vma; + + nstart = start; + for_each_vma_range(vmi, vma, end) { + int error; + unsigned long tmp; + vm_flags_t newflags; + + newflags = vma->vm_flags | VM_SEALED; + tmp = vma->vm_end; + if (tmp > end) + tmp = end; + error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags); + if (error) + return error; + nstart = vma_iter_end(&vmi); + } + + return 0; +} + +/* + * mseal(2) seals the VM's meta data from + * selected syscalls. + * + * addr/len: VM address range. + * + * The address range by addr/len must meet: + * start (addr) must be in a valid VMA. + * end (addr + len) must be in a valid VMA. + * no gap (unallocated memory) between start and end. + * start (addr) must be page aligned. + * + * len: len will be page aligned implicitly. + * + * Below VMA operations are blocked after sealing. + * 1> Unmapping, moving to another location, and shrinking + * the size, via munmap() and mremap(), can leave an empty + * space, therefore can be replaced with a VMA with a new + * set of attributes. + * 2> Moving or expanding a different vma into the current location, + * via mremap(). + * 3> Modifying a VMA via mmap(MAP_FIXED). + * 4> Size expansion, via mremap(), does not appear to pose any + * specific risks to sealed VMAs. It is included anyway because + * the use case is unclear. In any case, users can rely on + * merging to expand a sealed VMA. + * 5> mprotect and pkey_mprotect. + * 6> Some destructive madvice() behavior (e.g. MADV_DONTNEED) + * for anonymous memory, when users don't have write permission to the + * memory. Those behaviors can alter region contents by discarding pages, + * effectively a memset(0) for anonymous memory. + * + * flags: reserved. + * + * return values: + * zero: success. + * -EINVAL: + * invalid input flags. + * start address is not page aligned. + * Address arange (start + len) overflow. + * -ENOMEM: + * addr is not a valid address (not allocated). + * end (start + len) is not a valid address. + * a gap (unallocated memory) between start and end. 
+ *  -EPERM:
+ *  - On 32-bit architectures, sealing is not supported.
+ * Note:
+ *  A user can call mseal(2) multiple times; sealing memory that is
+ *  already sealed is a no-op (no error is returned).
+ *
+ *  unseal() is not supported.
+ */
+static int do_mseal(unsigned long start, size_t len_in, unsigned long flags)
+{
+	size_t len;
+	int ret = 0;
+	unsigned long end;
+	struct mm_struct *mm = current->mm;
+
+	ret = can_do_mseal(flags);
+	if (ret)
+		return ret;
+
+	start = untagged_addr(start);
+	if (!PAGE_ALIGNED(start))
+		return -EINVAL;
+
+	len = PAGE_ALIGN(len_in);
+	/* Check to see whether len was rounded up from small -ve to zero. */
+	if (len_in && !len)
+		return -EINVAL;
+
+	end = start + len;
+	if (end < start)
+		return -EINVAL;
+
+	if (end == start)
+		return 0;
+
+	if (mmap_write_lock_killable(mm))
+		return -EINTR;
+
+	/*
+	 * First pass: check the whole address range up front, so an error
+	 * in the input range (e.g. ENOMEM for a gap) cannot leave the
+	 * range partially sealed.
+	 */
+	ret = check_mm_seal(start, end);
+	if (ret)
+		goto out;
+
+	/*
+	 * Second pass: this should succeed unless vma_modify_flags()
+	 * fails, e.g. a merge/split error or the process reaching the
+	 * maximum number of supported VMAs; such cases should be rare.
+	 */
+	ret = apply_mm_seal(start, end);
+
+out:
+	mmap_write_unlock(current->mm);
+	return ret;
+}
+
+SYSCALL_DEFINE3(mseal, unsigned long, start, size_t, len, unsigned long,
+		flags)
+{
+	return do_mseal(start, len, flags);
+}
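A short sketch of the user-visible effect of the two-pass design above (check_mm_seal,
then apply_mm_seal), under the same assumptions as the earlier examples and not part of
the series: a range containing an unmapped gap is rejected with ENOMEM by the first
pass, so a failing mseal() call never leaves the range partially sealed.

#include <errno.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_mseal
#define __NR_mseal 462
#endif

int main(void)
{
        long page = sysconf(_SC_PAGESIZE);
        char *p = mmap(NULL, 4 * page, PROT_READ,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
                return 1;

        /* Punch a hole so [p, p + 4 * page) has an unallocated gap. */
        munmap(p + page, page);

        /* The gap is caught in the first pass: ENOMEM, nothing is sealed. */
        if (syscall(__NR_mseal, p, 4 * page, 0) < 0 && errno == ENOMEM)
                printf("gap rejected before any VMA was sealed\n");

        /* The first page was left untouched and can still be modified. */
        if (mprotect(p, page, PROT_READ | PROT_WRITE) == 0)
                printf("first page remains modifiable\n");

        return 0;
}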
From patchwork Mon Apr 15 16:35:22 2024
From: jeffxu@chromium.org
Subject: [PATCH v10 3/5] selftest mm/mseal memory sealing
Date: Mon, 15 Apr 2024 16:35:22 +0000
Message-ID: <20240415163527.626541-4-jeffxu@chromium.org>
In-Reply-To: <20240415163527.626541-1-jeffxu@chromium.org>
References: <20240415163527.626541-1-jeffxu@chromium.org>

From: Jeff Xu

Add a selftest for the memory sealing changes in mmap() and mseal().
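One more behavior a sealing selftest needs to exercise, sketched standalone under the
same assumptions as the earlier examples (again an illustration, not part of the patch):
for sealed read-only anonymous memory, a destructive madvise() such as MADV_DONTNEED is
refused with EPERM, since discarding the pages would effectively zero them.

#include <errno.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_mseal
#define __NR_mseal 462
#endif

int main(void)
{
        long page = sysconf(_SC_PAGESIZE);
        void *p = mmap(NULL, page, PROT_READ,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED || syscall(__NR_mseal, p, page, 0))
                return 1;

        /* Discarding sealed, non-writable anonymous pages is blocked. */
        if (madvise(p, page, MADV_DONTNEED) < 0 && errno == EPERM)
                printf("MADV_DONTNEED rejected on sealed RO anon memory\n");

        return 0;
}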
Signed-off-by: Jeff Xu --- tools/testing/selftests/mm/.gitignore | 1 + tools/testing/selftests/mm/Makefile | 1 + tools/testing/selftests/mm/mseal_test.c | 1836 +++++++++++++++++++++++ 3 files changed, 1838 insertions(+) create mode 100644 tools/testing/selftests/mm/mseal_test.c diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore index d26e962f2ac4..98eaa4590f11 100644 --- a/tools/testing/selftests/mm/.gitignore +++ b/tools/testing/selftests/mm/.gitignore @@ -47,3 +47,4 @@ mkdirty va_high_addr_switch hugetlb_fault_after_madv hugetlb_madv_vs_map +mseal_test diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile index eb5f39a2668b..95d10fe1b3c1 100644 --- a/tools/testing/selftests/mm/Makefile +++ b/tools/testing/selftests/mm/Makefile @@ -59,6 +59,7 @@ TEST_GEN_FILES += mlock2-tests TEST_GEN_FILES += mrelease_test TEST_GEN_FILES += mremap_dontunmap TEST_GEN_FILES += mremap_test +TEST_GEN_FILES += mseal_test TEST_GEN_FILES += on-fault-limit TEST_GEN_FILES += pagemap_ioctl TEST_GEN_FILES += thuge-gen diff --git a/tools/testing/selftests/mm/mseal_test.c b/tools/testing/selftests/mm/mseal_test.c new file mode 100644 index 000000000000..06c780d1d8e5 --- /dev/null +++ b/tools/testing/selftests/mm/mseal_test.c @@ -0,0 +1,1836 @@ +// SPDX-License-Identifier: GPL-2.0 +#define _GNU_SOURCE +#include +#include +#include +#include +#include +#include +#include +#include "../kselftest.h" +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* + * need those definition for manually build using gcc. + * gcc -I ../../../../usr/include -DDEBUG -O3 -DDEBUG -O3 mseal_test.c -o mseal_test + */ +#ifndef PKEY_DISABLE_ACCESS +# define PKEY_DISABLE_ACCESS 0x1 +#endif + +#ifndef PKEY_DISABLE_WRITE +# define PKEY_DISABLE_WRITE 0x2 +#endif + +#ifndef PKEY_BITS_PER_KEY +#define PKEY_BITS_PER_PKEY 2 +#endif + +#ifndef PKEY_MASK +#define PKEY_MASK (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE) +#endif + +#define FAIL_TEST_IF_FALSE(c) do {\ + if (!(c)) {\ + ksft_test_result_fail("%s, line:%d\n", __func__, __LINE__);\ + goto test_end;\ + } \ + } \ + while (0) + +#define SKIP_TEST_IF_FALSE(c) do {\ + if (!(c)) {\ + ksft_test_result_skip("%s, line:%d\n", __func__, __LINE__);\ + goto test_end;\ + } \ + } \ + while (0) + + +#define TEST_END_CHECK() {\ + ksft_test_result_pass("%s\n", __func__);\ + return;\ +test_end:\ + return;\ +} + +#ifndef u64 +#define u64 unsigned long long +#endif + +static unsigned long get_vma_size(void *addr, int *prot) +{ + FILE *maps; + char line[256]; + int size = 0; + uintptr_t addr_start, addr_end; + char protstr[5]; + *prot = 0; + + maps = fopen("/proc/self/maps", "r"); + if (!maps) + return 0; + + while (fgets(line, sizeof(line), maps)) { + if (sscanf(line, "%lx-%lx %4s", &addr_start, &addr_end, &protstr) == 3) { + if (addr_start == (uintptr_t) addr) { + size = addr_end - addr_start; + if (protstr[0] == 'r') + *prot |= 0x4; + if (protstr[1] == 'w') + *prot |= 0x2; + if (protstr[2] == 'x') + *prot |= 0x1; + break; + } + } + } + fclose(maps); + return size; +} + +/* + * define sys_xyx to call syscall directly. 
+ */ +static int sys_mseal(void *start, size_t len) +{ + int sret; + + errno = 0; + sret = syscall(__NR_mseal, start, len, 0); + return sret; +} + +static int sys_mprotect(void *ptr, size_t size, unsigned long prot) +{ + int sret; + + errno = 0; + sret = syscall(__NR_mprotect, ptr, size, prot); + return sret; +} + +static int sys_mprotect_pkey(void *ptr, size_t size, unsigned long orig_prot, + unsigned long pkey) +{ + int sret; + + errno = 0; + sret = syscall(__NR_pkey_mprotect, ptr, size, orig_prot, pkey); + return sret; +} + +static void *sys_mmap(void *addr, unsigned long len, unsigned long prot, + unsigned long flags, unsigned long fd, unsigned long offset) +{ + void *sret; + + errno = 0; + sret = (void *) syscall(__NR_mmap, addr, len, prot, + flags, fd, offset); + return sret; +} + +static int sys_munmap(void *ptr, size_t size) +{ + int sret; + + errno = 0; + sret = syscall(__NR_munmap, ptr, size); + return sret; +} + +static int sys_madvise(void *start, size_t len, int types) +{ + int sret; + + errno = 0; + sret = syscall(__NR_madvise, start, len, types); + return sret; +} + +static int sys_pkey_alloc(unsigned long flags, unsigned long init_val) +{ + int ret = syscall(__NR_pkey_alloc, flags, init_val); + + return ret; +} + +static unsigned int __read_pkey_reg(void) +{ + unsigned int pkey_reg = 0; +#if defined(__i386__) || defined(__x86_64__) /* arch */ + unsigned int eax, edx; + unsigned int ecx = 0; + + asm volatile(".byte 0x0f,0x01,0xee\n\t" + : "=a" (eax), "=d" (edx) + : "c" (ecx)); + pkey_reg = eax; +#endif + return pkey_reg; +} + +static void __write_pkey_reg(u64 pkey_reg) +{ +#if defined(__i386__) || defined(__x86_64__) /* arch */ + unsigned int eax = pkey_reg; + unsigned int ecx = 0; + unsigned int edx = 0; + + asm volatile(".byte 0x0f,0x01,0xef\n\t" + : : "a" (eax), "c" (ecx), "d" (edx)); + assert(pkey_reg == __read_pkey_reg()); +#endif +} + +static unsigned long pkey_bit_position(int pkey) +{ + return pkey * PKEY_BITS_PER_PKEY; +} + +static u64 set_pkey_bits(u64 reg, int pkey, u64 flags) +{ + unsigned long shift = pkey_bit_position(pkey); + + /* mask out bits from pkey in old value */ + reg &= ~((u64)PKEY_MASK << shift); + /* OR in new bits for pkey */ + reg |= (flags & PKEY_MASK) << shift; + return reg; +} + +static void set_pkey(int pkey, unsigned long pkey_value) +{ + unsigned long mask = (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE); + u64 new_pkey_reg; + + assert(!(pkey_value & ~mask)); + new_pkey_reg = set_pkey_bits(__read_pkey_reg(), pkey, pkey_value); + __write_pkey_reg(new_pkey_reg); +} + +static void setup_single_address(int size, void **ptrOut) +{ + void *ptr; + + ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); + assert(ptr != (void *)-1); + *ptrOut = ptr; +} + +static void setup_single_address_rw(int size, void **ptrOut) +{ + void *ptr; + unsigned long mapflags = MAP_ANONYMOUS | MAP_PRIVATE; + + ptr = sys_mmap(NULL, size, PROT_READ | PROT_WRITE, mapflags, -1, 0); + assert(ptr != (void *)-1); + *ptrOut = ptr; +} + +static void clean_single_address(void *ptr, int size) +{ + int ret; + + ret = munmap(ptr, size); + assert(!ret); +} + +static void seal_single_address(void *ptr, int size) +{ + int ret; + + ret = sys_mseal(ptr, size); + assert(!ret); +} + +bool seal_support(void) +{ + int ret; + void *ptr; + unsigned long page_size = getpagesize(); + + ptr = sys_mmap(NULL, page_size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); + if (ptr == (void *) -1) + return false; + + ret = sys_mseal(ptr, page_size); + if (ret < 0) + return false; + + 
return true; +} + +bool pkey_supported(void) +{ +#if defined(__i386__) || defined(__x86_64__) /* arch */ + int pkey = sys_pkey_alloc(0, 0); + + if (pkey > 0) + return true; +#endif + return false; +} + +static void test_seal_addseal(void) +{ + int ret; + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + + setup_single_address(size, &ptr); + + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_unmapped_start(void) +{ + int ret; + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + + setup_single_address(size, &ptr); + + /* munmap 2 pages from ptr. */ + ret = sys_munmap(ptr, 2 * page_size); + FAIL_TEST_IF_FALSE(!ret); + + /* mprotect will fail because 2 pages from ptr are unmapped. */ + ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(ret < 0); + + /* mseal will fail because 2 pages from ptr are unmapped. */ + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(ret < 0); + + ret = sys_mseal(ptr + 2 * page_size, 2 * page_size); + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_unmapped_middle(void) +{ + int ret; + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + + setup_single_address(size, &ptr); + + /* munmap 2 pages from ptr + page. */ + ret = sys_munmap(ptr + page_size, 2 * page_size); + FAIL_TEST_IF_FALSE(!ret); + + /* mprotect will fail, since middle 2 pages are unmapped. */ + ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(ret < 0); + + /* mseal will fail as well. */ + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(ret < 0); + + /* we still can add seal to the first page and last page*/ + ret = sys_mseal(ptr, page_size); + FAIL_TEST_IF_FALSE(!ret); + + ret = sys_mseal(ptr + 3 * page_size, page_size); + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_unmapped_end(void) +{ + int ret; + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + + setup_single_address(size, &ptr); + + /* unmap last 2 pages. */ + ret = sys_munmap(ptr + 2 * page_size, 2 * page_size); + FAIL_TEST_IF_FALSE(!ret); + + /* mprotect will fail since last 2 pages are unmapped. */ + ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(ret < 0); + + /* mseal will fail as well. */ + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(ret < 0); + + /* The first 2 pages is not sealed, and can add seals */ + ret = sys_mseal(ptr, 2 * page_size); + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_multiple_vmas(void) +{ + int ret; + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + + setup_single_address(size, &ptr); + + /* use mprotect to split the vma into 3. */ + ret = sys_mprotect(ptr + page_size, 2 * page_size, + PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + /* mprotect will get applied to all 4 pages - 3 VMAs. */ + ret = sys_mprotect(ptr, size, PROT_READ); + FAIL_TEST_IF_FALSE(!ret); + + /* use mprotect to split the vma into 3. */ + ret = sys_mprotect(ptr + page_size, 2 * page_size, + PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + /* mseal get applied to all 4 pages - 3 VMAs. 
*/ + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_split_start(void) +{ + int ret; + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + + setup_single_address(size, &ptr); + + /* use mprotect to split at middle */ + ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + /* seal the first page, this will split the VMA */ + ret = sys_mseal(ptr, page_size); + FAIL_TEST_IF_FALSE(!ret); + + /* add seal to the remain 3 pages */ + ret = sys_mseal(ptr + page_size, 3 * page_size); + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_split_end(void) +{ + int ret; + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + + setup_single_address(size, &ptr); + + /* use mprotect to split at middle */ + ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + /* seal the last page */ + ret = sys_mseal(ptr + 3 * page_size, page_size); + FAIL_TEST_IF_FALSE(!ret); + + /* Adding seals to the first 3 pages */ + ret = sys_mseal(ptr, 3 * page_size); + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_invalid_input(void) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address(8 * page_size, &ptr); + clean_single_address(ptr + 4 * page_size, 4 * page_size); + + /* invalid flag */ + ret = syscall(__NR_mseal, ptr, size, 0x20); + FAIL_TEST_IF_FALSE(ret < 0); + + /* unaligned address */ + ret = sys_mseal(ptr + 1, 2 * page_size); + FAIL_TEST_IF_FALSE(ret < 0); + + /* length too big */ + ret = sys_mseal(ptr, 5 * page_size); + FAIL_TEST_IF_FALSE(ret < 0); + + /* length overflow */ + ret = sys_mseal(ptr, UINT64_MAX/page_size); + FAIL_TEST_IF_FALSE(ret < 0); + + /* start is not in a valid VMA */ + ret = sys_mseal(ptr - page_size, 5 * page_size); + FAIL_TEST_IF_FALSE(ret < 0); + + TEST_END_CHECK(); +} + +static void test_seal_zero_length(void) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address(size, &ptr); + + ret = sys_mprotect(ptr, 0, PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + /* seal 0 length will be OK, same as mprotect */ + ret = sys_mseal(ptr, 0); + FAIL_TEST_IF_FALSE(!ret); + + /* verify the 4 pages are not sealed by previous call. */ + ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_zero_address(void) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + int prot; + + /* use mmap to change protection. */ + ptr = sys_mmap(0, size, PROT_NONE, + MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0); + FAIL_TEST_IF_FALSE(ptr == 0); + + size = get_vma_size(ptr, &prot); + FAIL_TEST_IF_FALSE(size == 4 * page_size); + + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + + /* verify the 4 pages are sealed by previous call. */ + ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(ret); + + TEST_END_CHECK(); +} + +static void test_seal_twice(void) +{ + int ret; + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + + setup_single_address(size, &ptr); + + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + + /* apply the same seal will be OK. idempotent. 
*/ + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_mprotect(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address(size, &ptr); + + if (seal) + seal_single_address(ptr, size); + + ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_start_mprotect(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address(size, &ptr); + + if (seal) + seal_single_address(ptr, page_size); + + /* the first page is sealed. */ + ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + /* pages after the first page is not sealed. */ + ret = sys_mprotect(ptr + page_size, page_size * 3, + PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_end_mprotect(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address(size, &ptr); + + if (seal) + seal_single_address(ptr + page_size, 3 * page_size); + + /* first page is not sealed */ + ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + /* last 3 page are sealed */ + ret = sys_mprotect(ptr + page_size, page_size * 3, + PROT_READ | PROT_WRITE); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_mprotect_unalign_len(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address(size, &ptr); + + if (seal) + seal_single_address(ptr, page_size * 2 - 1); + + /* 2 pages are sealed. */ + ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + ret = sys_mprotect(ptr + page_size * 2, page_size, + PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_mprotect_unalign_len_variant_2(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address(size, &ptr); + if (seal) + seal_single_address(ptr, page_size * 2 + 1); + + /* 3 pages are sealed. 
*/ + ret = sys_mprotect(ptr, page_size * 3, PROT_READ | PROT_WRITE); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + ret = sys_mprotect(ptr + page_size * 3, page_size, + PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_mprotect_two_vma(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address(size, &ptr); + + /* use mprotect to split */ + ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + if (seal) + seal_single_address(ptr, page_size * 4); + + ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + ret = sys_mprotect(ptr + page_size * 2, page_size * 2, + PROT_READ | PROT_WRITE); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_mprotect_two_vma_with_split(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address(size, &ptr); + + /* use mprotect to split as two vma. */ + ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + /* mseal can apply across 2 vma, also split them. */ + if (seal) + seal_single_address(ptr + page_size, page_size * 2); + + /* the first page is not sealed. */ + ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + /* the second page is sealed. */ + ret = sys_mprotect(ptr + page_size, page_size, PROT_READ | PROT_WRITE); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + /* the third page is sealed. */ + ret = sys_mprotect(ptr + 2 * page_size, page_size, + PROT_READ | PROT_WRITE); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + /* the fouth page is not sealed. */ + ret = sys_mprotect(ptr + 3 * page_size, page_size, + PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_mprotect_partial_mprotect(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address(size, &ptr); + + /* seal one page. */ + if (seal) + seal_single_address(ptr, page_size); + + /* mprotect first 2 page will fail, since the first page are sealed. */ + ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_mprotect_two_vma_with_gap(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address(size, &ptr); + + /* use mprotect to split. */ + ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + /* use mprotect to split. */ + ret = sys_mprotect(ptr + 3 * page_size, page_size, + PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + /* use munmap to free two pages in the middle */ + ret = sys_munmap(ptr + page_size, 2 * page_size); + FAIL_TEST_IF_FALSE(!ret); + + /* mprotect will fail, because there is a gap in the address. */ + /* notes, internally mprotect still updated the first page. */ + ret = sys_mprotect(ptr, 4 * page_size, PROT_READ); + FAIL_TEST_IF_FALSE(ret < 0); + + /* mseal will fail as well. 
*/ + ret = sys_mseal(ptr, 4 * page_size); + FAIL_TEST_IF_FALSE(ret < 0); + + /* the first page is not sealed. */ + ret = sys_mprotect(ptr, page_size, PROT_READ); + FAIL_TEST_IF_FALSE(ret == 0); + + /* the last page is not sealed. */ + ret = sys_mprotect(ptr + 3 * page_size, page_size, PROT_READ); + FAIL_TEST_IF_FALSE(ret == 0); + + TEST_END_CHECK(); +} + +static void test_seal_mprotect_split(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address(size, &ptr); + + /* use mprotect to split. */ + ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + /* seal all 4 pages. */ + if (seal) { + ret = sys_mseal(ptr, 4 * page_size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* mprotect is sealed. */ + ret = sys_mprotect(ptr, 2 * page_size, PROT_READ); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + + ret = sys_mprotect(ptr + 2 * page_size, 2 * page_size, PROT_READ); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_mprotect_merge(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address(size, &ptr); + + /* use mprotect to split one page. */ + ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + /* seal first two pages. */ + if (seal) { + ret = sys_mseal(ptr, 2 * page_size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* 2 pages are sealed. */ + ret = sys_mprotect(ptr, 2 * page_size, PROT_READ); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + /* last 2 pages are not sealed. */ + ret = sys_mprotect(ptr + 2 * page_size, 2 * page_size, PROT_READ); + FAIL_TEST_IF_FALSE(ret == 0); + + TEST_END_CHECK(); +} + +static void test_seal_munmap(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address(size, &ptr); + + if (seal) { + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* 4 pages are sealed. */ + ret = sys_munmap(ptr, size); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +/* + * allocate 4 pages, + * use mprotect to split it as two VMAs + * seal the whole range + * munmap will fail on both + */ +static void test_seal_munmap_two_vma(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address(size, &ptr); + + /* use mprotect to split */ + ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + + if (seal) { + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + } + + ret = sys_munmap(ptr, page_size * 2); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + ret = sys_munmap(ptr + page_size, page_size * 2); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +/* + * allocate a VMA with 4 pages. + * munmap the middle 2 pages. + * seal the whole 4 pages, will fail. + * munmap the first page will be OK. + * munmap the last page will be OK. 
+ */ +static void test_seal_munmap_vma_with_gap(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address(size, &ptr); + + ret = sys_munmap(ptr + page_size, page_size * 2); + FAIL_TEST_IF_FALSE(!ret); + + if (seal) { + /* can't have gap in the middle. */ + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(ret < 0); + } + + ret = sys_munmap(ptr, page_size); + FAIL_TEST_IF_FALSE(!ret); + + ret = sys_munmap(ptr + page_size * 2, page_size); + FAIL_TEST_IF_FALSE(!ret); + + ret = sys_munmap(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_munmap_start_freed(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + int prot; + + setup_single_address(size, &ptr); + + /* unmap the first page. */ + ret = sys_munmap(ptr, page_size); + FAIL_TEST_IF_FALSE(!ret); + + /* seal the last 3 pages. */ + if (seal) { + ret = sys_mseal(ptr + page_size, 3 * page_size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* unmap from the first page. */ + ret = sys_munmap(ptr, size); + if (seal) { + FAIL_TEST_IF_FALSE(ret < 0); + + size = get_vma_size(ptr + page_size, &prot); + FAIL_TEST_IF_FALSE(size == page_size * 3); + } else { + /* note: this will be OK, even the first page is */ + /* already unmapped. */ + FAIL_TEST_IF_FALSE(!ret); + + size = get_vma_size(ptr + page_size, &prot); + FAIL_TEST_IF_FALSE(size == 0); + } + + TEST_END_CHECK(); +} + +static void test_munmap_end_freed(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address(size, &ptr); + /* unmap last page. */ + ret = sys_munmap(ptr + page_size * 3, page_size); + FAIL_TEST_IF_FALSE(!ret); + + /* seal the first 3 pages. */ + if (seal) { + ret = sys_mseal(ptr, 3 * page_size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* unmap all pages. */ + ret = sys_munmap(ptr, size); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_munmap_middle_freed(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + int prot; + + setup_single_address(size, &ptr); + /* unmap 2 pages in the middle. */ + ret = sys_munmap(ptr + page_size, page_size * 2); + FAIL_TEST_IF_FALSE(!ret); + + /* seal the first page. */ + if (seal) { + ret = sys_mseal(ptr, page_size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* munmap all 4 pages. */ + ret = sys_munmap(ptr, size); + if (seal) { + FAIL_TEST_IF_FALSE(ret < 0); + + size = get_vma_size(ptr, &prot); + FAIL_TEST_IF_FALSE(size == page_size); + + size = get_vma_size(ptr + page_size * 3, &prot); + FAIL_TEST_IF_FALSE(size == page_size); + } else { + FAIL_TEST_IF_FALSE(!ret); + + size = get_vma_size(ptr, &prot); + FAIL_TEST_IF_FALSE(size == 0); + + size = get_vma_size(ptr + page_size * 3, &prot); + FAIL_TEST_IF_FALSE(size == 0); + } + + TEST_END_CHECK(); +} + +static void test_seal_mremap_shrink(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + void *ret2; + + setup_single_address(size, &ptr); + + if (seal) { + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* shrink from 4 pages to 2 pages. 
*/ + ret2 = mremap(ptr, size, 2 * page_size, 0, 0); + if (seal) { + FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED); + FAIL_TEST_IF_FALSE(errno == EPERM); + } else { + FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED); + + } + + TEST_END_CHECK(); +} + +static void test_seal_mremap_expand(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + void *ret2; + + setup_single_address(size, &ptr); + /* ummap last 2 pages. */ + ret = sys_munmap(ptr + 2 * page_size, 2 * page_size); + FAIL_TEST_IF_FALSE(!ret); + + if (seal) { + ret = sys_mseal(ptr, 2 * page_size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* expand from 2 page to 4 pages. */ + ret2 = mremap(ptr, 2 * page_size, 4 * page_size, 0, 0); + if (seal) { + FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED); + FAIL_TEST_IF_FALSE(errno == EPERM); + } else { + FAIL_TEST_IF_FALSE(ret2 == ptr); + + } + + TEST_END_CHECK(); +} + +static void test_seal_mremap_move(bool seal) +{ + void *ptr, *newPtr; + unsigned long page_size = getpagesize(); + unsigned long size = page_size; + int ret; + void *ret2; + + setup_single_address(size, &ptr); + setup_single_address(size, &newPtr); + clean_single_address(newPtr, size); + + if (seal) { + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* move from ptr to fixed address. */ + ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newPtr); + if (seal) { + FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED); + FAIL_TEST_IF_FALSE(errno == EPERM); + } else { + FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED); + + } + + TEST_END_CHECK(); +} + +static void test_seal_mmap_overwrite_prot(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = page_size; + int ret; + void *ret2; + + setup_single_address(size, &ptr); + + if (seal) { + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* use mmap to change protection. */ + ret2 = sys_mmap(ptr, size, PROT_NONE, + MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0); + if (seal) { + FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED); + FAIL_TEST_IF_FALSE(errno == EPERM); + } else + FAIL_TEST_IF_FALSE(ret2 == ptr); + + TEST_END_CHECK(); +} + +static void test_seal_mmap_expand(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 12 * page_size; + int ret; + void *ret2; + + setup_single_address(size, &ptr); + /* ummap last 4 pages. */ + ret = sys_munmap(ptr + 8 * page_size, 4 * page_size); + FAIL_TEST_IF_FALSE(!ret); + + if (seal) { + ret = sys_mseal(ptr, 8 * page_size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* use mmap to expand. */ + ret2 = sys_mmap(ptr, size, PROT_READ, + MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0); + if (seal) { + FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED); + FAIL_TEST_IF_FALSE(errno == EPERM); + } else + FAIL_TEST_IF_FALSE(ret2 == ptr); + + TEST_END_CHECK(); +} + +static void test_seal_mmap_shrink(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 12 * page_size; + int ret; + void *ret2; + + setup_single_address(size, &ptr); + + if (seal) { + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* use mmap to shrink. 
*/ + ret2 = sys_mmap(ptr, 8 * page_size, PROT_READ, + MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0); + if (seal) { + FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED); + FAIL_TEST_IF_FALSE(errno == EPERM); + } else + FAIL_TEST_IF_FALSE(ret2 == ptr); + + TEST_END_CHECK(); +} + +static void test_seal_mremap_shrink_fixed(bool seal) +{ + void *ptr; + void *newAddr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + void *ret2; + + setup_single_address(size, &ptr); + setup_single_address(size, &newAddr); + + if (seal) { + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* mremap to move and shrink to fixed address */ + ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED, + newAddr); + if (seal) { + FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED); + FAIL_TEST_IF_FALSE(errno == EPERM); + } else + FAIL_TEST_IF_FALSE(ret2 == newAddr); + + TEST_END_CHECK(); +} + +static void test_seal_mremap_expand_fixed(bool seal) +{ + void *ptr; + void *newAddr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + void *ret2; + + setup_single_address(page_size, &ptr); + setup_single_address(size, &newAddr); + + if (seal) { + ret = sys_mseal(newAddr, size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* mremap to move and expand to fixed address */ + ret2 = mremap(ptr, page_size, size, MREMAP_MAYMOVE | MREMAP_FIXED, + newAddr); + if (seal) { + FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED); + FAIL_TEST_IF_FALSE(errno == EPERM); + } else + FAIL_TEST_IF_FALSE(ret2 == newAddr); + + TEST_END_CHECK(); +} + +static void test_seal_mremap_move_fixed(bool seal) +{ + void *ptr; + void *newAddr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + void *ret2; + + setup_single_address(size, &ptr); + setup_single_address(size, &newAddr); + + if (seal) { + ret = sys_mseal(newAddr, size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* mremap to move to fixed address */ + ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newAddr); + if (seal) { + FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED); + FAIL_TEST_IF_FALSE(errno == EPERM); + } else + FAIL_TEST_IF_FALSE(ret2 == newAddr); + + TEST_END_CHECK(); +} + +static void test_seal_mremap_move_fixed_zero(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + void *ret2; + + setup_single_address(size, &ptr); + + if (seal) { + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* + * MREMAP_FIXED can move the mapping to zero address + */ + ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED, + 0); + if (seal) { + FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED); + FAIL_TEST_IF_FALSE(errno == EPERM); + } else { + FAIL_TEST_IF_FALSE(ret2 == 0); + + } + + TEST_END_CHECK(); +} + +static void test_seal_mremap_move_dontunmap(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + void *ret2; + + setup_single_address(size, &ptr); + + if (seal) { + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* mremap to move, and don't unmap src addr. 
*/ + ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 0); + if (seal) { + FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED); + FAIL_TEST_IF_FALSE(errno == EPERM); + } else { + FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED); + + } + + TEST_END_CHECK(); +} + +static void test_seal_mremap_move_dontunmap_anyaddr(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + void *ret2; + + setup_single_address(size, &ptr); + + if (seal) { + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* + * The 0xdeaddead should not have effect on dest addr + * when MREMAP_DONTUNMAP is set. + */ + ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP, + 0xdeaddead); + if (seal) { + FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED); + FAIL_TEST_IF_FALSE(errno == EPERM); + } else { + FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED); + FAIL_TEST_IF_FALSE((long)ret2 != 0xdeaddead); + + } + + TEST_END_CHECK(); +} + + +static void test_seal_merge_and_split(void) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size; + int ret; + int prot; + + /* (24 RO) */ + setup_single_address(24 * page_size, &ptr); + + /* use mprotect(NONE) to set out boundary */ + /* (1 NONE) (22 RO) (1 NONE) */ + ret = sys_mprotect(ptr, page_size, PROT_NONE); + FAIL_TEST_IF_FALSE(!ret); + ret = sys_mprotect(ptr + 23 * page_size, page_size, PROT_NONE); + FAIL_TEST_IF_FALSE(!ret); + size = get_vma_size(ptr + page_size, &prot); + FAIL_TEST_IF_FALSE(size == 22 * page_size); + FAIL_TEST_IF_FALSE(prot == 4); + + /* use mseal to split from beginning */ + /* (1 NONE) (1 RO_SEAL) (21 RO) (1 NONE) */ + ret = sys_mseal(ptr + page_size, page_size); + FAIL_TEST_IF_FALSE(!ret); + size = get_vma_size(ptr + page_size, &prot); + FAIL_TEST_IF_FALSE(size == page_size); + FAIL_TEST_IF_FALSE(prot == 0x4); + size = get_vma_size(ptr + 2 * page_size, &prot); + FAIL_TEST_IF_FALSE(size == 21 * page_size); + FAIL_TEST_IF_FALSE(prot == 0x4); + + /* use mseal to split from the end. */ + /* (1 NONE) (1 RO_SEAL) (20 RO) (1 RO_SEAL) (1 NONE) */ + ret = sys_mseal(ptr + 22 * page_size, page_size); + FAIL_TEST_IF_FALSE(!ret); + size = get_vma_size(ptr + 22 * page_size, &prot); + FAIL_TEST_IF_FALSE(size == page_size); + FAIL_TEST_IF_FALSE(prot == 0x4); + size = get_vma_size(ptr + 2 * page_size, &prot); + FAIL_TEST_IF_FALSE(size == 20 * page_size); + FAIL_TEST_IF_FALSE(prot == 0x4); + + /* merge with prev. */ + /* (1 NONE) (2 RO_SEAL) (19 RO) (1 RO_SEAL) (1 NONE) */ + ret = sys_mseal(ptr + 2 * page_size, page_size); + FAIL_TEST_IF_FALSE(!ret); + size = get_vma_size(ptr + page_size, &prot); + FAIL_TEST_IF_FALSE(size == 2 * page_size); + FAIL_TEST_IF_FALSE(prot == 0x4); + + /* merge with after. 
*/ + /* (1 NONE) (2 RO_SEAL) (18 RO) (2 RO_SEALS) (1 NONE) */ + ret = sys_mseal(ptr + 21 * page_size, page_size); + FAIL_TEST_IF_FALSE(!ret); + size = get_vma_size(ptr + 21 * page_size, &prot); + FAIL_TEST_IF_FALSE(size == 2 * page_size); + FAIL_TEST_IF_FALSE(prot == 0x4); + + /* split and merge from prev */ + /* (1 NONE) (3 RO_SEAL) (17 RO) (2 RO_SEALS) (1 NONE) */ + ret = sys_mseal(ptr + 2 * page_size, 2 * page_size); + FAIL_TEST_IF_FALSE(!ret); + size = get_vma_size(ptr + 1 * page_size, &prot); + FAIL_TEST_IF_FALSE(size == 3 * page_size); + FAIL_TEST_IF_FALSE(prot == 0x4); + ret = sys_munmap(ptr + page_size, page_size); + FAIL_TEST_IF_FALSE(ret < 0); + ret = sys_mprotect(ptr + 2 * page_size, page_size, PROT_NONE); + FAIL_TEST_IF_FALSE(ret < 0); + + /* split and merge from next */ + /* (1 NONE) (3 RO_SEAL) (16 RO) (3 RO_SEALS) (1 NONE) */ + ret = sys_mseal(ptr + 20 * page_size, 2 * page_size); + FAIL_TEST_IF_FALSE(!ret); + FAIL_TEST_IF_FALSE(prot == 0x4); + size = get_vma_size(ptr + 20 * page_size, &prot); + FAIL_TEST_IF_FALSE(size == 3 * page_size); + FAIL_TEST_IF_FALSE(prot == 0x4); + + /* merge from middle of prev and middle of next. */ + /* (1 NONE) (22 RO_SEAL) (1 NONE) */ + ret = sys_mseal(ptr + 2 * page_size, 20 * page_size); + FAIL_TEST_IF_FALSE(!ret); + size = get_vma_size(ptr + page_size, &prot); + FAIL_TEST_IF_FALSE(size == 22 * page_size); + FAIL_TEST_IF_FALSE(prot == 0x4); + + TEST_END_CHECK(); +} + +static void test_seal_discard_ro_anon_on_rw(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address_rw(size, &ptr); + FAIL_TEST_IF_FALSE(ptr != (void *)-1); + + if (seal) { + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* sealing doesn't take effect on RW memory. */ + ret = sys_madvise(ptr, size, MADV_DONTNEED); + FAIL_TEST_IF_FALSE(!ret); + + /* base seal still apply. */ + ret = sys_munmap(ptr, size); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_discard_ro_anon_on_pkey(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + int pkey; + + SKIP_TEST_IF_FALSE(pkey_supported()); + + setup_single_address_rw(size, &ptr); + FAIL_TEST_IF_FALSE(ptr != (void *)-1); + + pkey = sys_pkey_alloc(0, 0); + FAIL_TEST_IF_FALSE(pkey > 0); + + ret = sys_mprotect_pkey((void *)ptr, size, PROT_READ | PROT_WRITE, pkey); + FAIL_TEST_IF_FALSE(!ret); + + if (seal) { + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* sealing doesn't take effect if PKRU allow write. */ + set_pkey(pkey, 0); + ret = sys_madvise(ptr, size, MADV_DONTNEED); + FAIL_TEST_IF_FALSE(!ret); + + /* sealing will take effect if PKRU deny write. */ + set_pkey(pkey, PKEY_DISABLE_WRITE); + ret = sys_madvise(ptr, size, MADV_DONTNEED); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + /* base seal still apply. 
*/ + ret = sys_munmap(ptr, size); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_discard_ro_anon_on_filebacked(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + int fd; + unsigned long mapflags = MAP_PRIVATE; + + fd = memfd_create("test", 0); + FAIL_TEST_IF_FALSE(fd > 0); + + ret = fallocate(fd, 0, 0, size); + FAIL_TEST_IF_FALSE(!ret); + + ptr = sys_mmap(NULL, size, PROT_READ, mapflags, fd, 0); + FAIL_TEST_IF_FALSE(ptr != MAP_FAILED); + + if (seal) { + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* sealing doesn't apply for file backed mapping. */ + ret = sys_madvise(ptr, size, MADV_DONTNEED); + FAIL_TEST_IF_FALSE(!ret); + + ret = sys_munmap(ptr, size); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + close(fd); + + TEST_END_CHECK(); +} + +static void test_seal_discard_ro_anon_on_shared(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + unsigned long mapflags = MAP_ANONYMOUS | MAP_SHARED; + + ptr = sys_mmap(NULL, size, PROT_READ, mapflags, -1, 0); + FAIL_TEST_IF_FALSE(ptr != (void *)-1); + + if (seal) { + ret = sys_mseal(ptr, size); + FAIL_TEST_IF_FALSE(!ret); + } + + /* sealing doesn't apply for shared mapping. */ + ret = sys_madvise(ptr, size, MADV_DONTNEED); + FAIL_TEST_IF_FALSE(!ret); + + ret = sys_munmap(ptr, size); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +static void test_seal_discard_ro_anon(bool seal) +{ + void *ptr; + unsigned long page_size = getpagesize(); + unsigned long size = 4 * page_size; + int ret; + + setup_single_address(size, &ptr); + + if (seal) + seal_single_address(ptr, size); + + ret = sys_madvise(ptr, size, MADV_DONTNEED); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + ret = sys_munmap(ptr, size); + if (seal) + FAIL_TEST_IF_FALSE(ret < 0); + else + FAIL_TEST_IF_FALSE(!ret); + + TEST_END_CHECK(); +} + +int main(int argc, char **argv) +{ + bool test_seal = seal_support(); + + ksft_print_header(); + + if (!test_seal) + ksft_exit_skip("sealing not supported, check CONFIG_64BIT\n"); + + if (!pkey_supported()) + ksft_print_msg("PKEY not supported\n"); + + ksft_set_plan(80); + + test_seal_addseal(); + test_seal_unmapped_start(); + test_seal_unmapped_middle(); + test_seal_unmapped_end(); + test_seal_multiple_vmas(); + test_seal_split_start(); + test_seal_split_end(); + test_seal_invalid_input(); + test_seal_zero_length(); + test_seal_twice(); + + test_seal_mprotect(false); + test_seal_mprotect(true); + + test_seal_start_mprotect(false); + test_seal_start_mprotect(true); + + test_seal_end_mprotect(false); + test_seal_end_mprotect(true); + + test_seal_mprotect_unalign_len(false); + test_seal_mprotect_unalign_len(true); + + test_seal_mprotect_unalign_len_variant_2(false); + test_seal_mprotect_unalign_len_variant_2(true); + + test_seal_mprotect_two_vma(false); + test_seal_mprotect_two_vma(true); + + test_seal_mprotect_two_vma_with_split(false); + test_seal_mprotect_two_vma_with_split(true); + + test_seal_mprotect_partial_mprotect(false); + test_seal_mprotect_partial_mprotect(true); + + test_seal_mprotect_two_vma_with_gap(false); + test_seal_mprotect_two_vma_with_gap(true); + + test_seal_mprotect_merge(false); + test_seal_mprotect_merge(true); + + test_seal_mprotect_split(false); + 
test_seal_mprotect_split(true); + + test_seal_munmap(false); + test_seal_munmap(true); + test_seal_munmap_two_vma(false); + test_seal_munmap_two_vma(true); + test_seal_munmap_vma_with_gap(false); + test_seal_munmap_vma_with_gap(true); + + test_munmap_start_freed(false); + test_munmap_start_freed(true); + test_munmap_middle_freed(false); + test_munmap_middle_freed(true); + test_munmap_end_freed(false); + test_munmap_end_freed(true); + + test_seal_mremap_shrink(false); + test_seal_mremap_shrink(true); + test_seal_mremap_expand(false); + test_seal_mremap_expand(true); + test_seal_mremap_move(false); + test_seal_mremap_move(true); + + test_seal_mremap_shrink_fixed(false); + test_seal_mremap_shrink_fixed(true); + test_seal_mremap_expand_fixed(false); + test_seal_mremap_expand_fixed(true); + test_seal_mremap_move_fixed(false); + test_seal_mremap_move_fixed(true); + test_seal_mremap_move_dontunmap(false); + test_seal_mremap_move_dontunmap(true); + test_seal_mremap_move_fixed_zero(false); + test_seal_mremap_move_fixed_zero(true); + test_seal_mremap_move_dontunmap_anyaddr(false); + test_seal_mremap_move_dontunmap_anyaddr(true); + test_seal_discard_ro_anon(false); + test_seal_discard_ro_anon(true); + test_seal_discard_ro_anon_on_rw(false); + test_seal_discard_ro_anon_on_rw(true); + test_seal_discard_ro_anon_on_shared(false); + test_seal_discard_ro_anon_on_shared(true); + test_seal_discard_ro_anon_on_filebacked(false); + test_seal_discard_ro_anon_on_filebacked(true); + test_seal_mmap_overwrite_prot(false); + test_seal_mmap_overwrite_prot(true); + test_seal_mmap_expand(false); + test_seal_mmap_expand(true); + test_seal_mmap_shrink(false); + test_seal_mmap_shrink(true); + + test_seal_merge_and_split(); + test_seal_zero_address(); + + test_seal_discard_ro_anon_on_pkey(false); + test_seal_discard_ro_anon_on_pkey(true); + + ksft_finished(); + return 0; +} From patchwork Mon Apr 15 16:35:23 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jeff Xu X-Patchwork-Id: 13630310 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id CDEDAC04FF9 for ; Mon, 15 Apr 2024 16:35:51 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 813C06B0095; Mon, 15 Apr 2024 12:35:44 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 74BCF6B0099; Mon, 15 Apr 2024 12:35:44 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 37AE16B0095; Mon, 15 Apr 2024 12:35:44 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 079C16B0095 for ; Mon, 15 Apr 2024 12:35:44 -0400 (EDT) Received: from smtpin16.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id BD59CC05EB for ; Mon, 15 Apr 2024 16:35:43 +0000 (UTC) X-FDA: 82012317366.16.1CFC607 Received: from mail-pf1-f171.google.com (mail-pf1-f171.google.com [209.85.210.171]) by imf20.hostedemail.com (Postfix) with ESMTP id C3A6C1C0027 for ; Mon, 15 Apr 2024 16:35:41 +0000 (UTC) Authentication-Results: imf20.hostedemail.com; dkim=pass header.d=chromium.org header.s=google header.b=jxA1E7Ki; dmarc=pass (policy=none) header.from=chromium.org; spf=pass (imf20.hostedemail.com: domain of jeffxu@chromium.org 
From: jeffxu@chromium.org
To: akpm@linux-foundation.org, keescook@chromium.org, jannh@google.com, sroettger@google.com, willy@infradead.org, gregkh@linuxfoundation.org, torvalds@linux-foundation.org, usama.anjum@collabora.com, corbet@lwn.net, Liam.Howlett@oracle.com, surenb@google.com, merimus@google.com, rdunlap@infradead.org
Cc: jeffxu@google.com, jorgelo@chromium.org, groeck@chromium.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-mm@kvack.org, pedro.falcato@gmail.com, dave.hansen@intel.com, linux-hardening@vger.kernel.org, deraadt@openbsd.org, Jeff Xu
Subject: [PATCH v10 4/5] mseal:add documentation
Date: Mon, 15 Apr 2024 16:35:23 +0000
Message-ID: <20240415163527.626541-5-jeffxu@chromium.org>
In-Reply-To: <20240415163527.626541-1-jeffxu@chromium.org>
References: <20240415163527.626541-1-jeffxu@chromium.org>
MIME-Version: 1.0

From: Jeff Xu

Add documentation for mseal().
Signed-off-by: Jeff Xu
---
 Documentation/userspace-api/index.rst |   1 +
 Documentation/userspace-api/mseal.rst | 199 ++++++++++++++++++++++++++
 2 files changed, 200 insertions(+)
 create mode 100644 Documentation/userspace-api/mseal.rst

diff --git a/Documentation/userspace-api/index.rst b/Documentation/userspace-api/index.rst
index afecfe3cc4a8..5926115ec0ed 100644
--- a/Documentation/userspace-api/index.rst
+++ b/Documentation/userspace-api/index.rst
@@ -20,6 +20,7 @@ System calls
    futex2
    ebpf/index
    ioctl/index
+   mseal

 Security-related interfaces
 ===========================
diff --git a/Documentation/userspace-api/mseal.rst b/Documentation/userspace-api/mseal.rst
new file mode 100644
index 000000000000..4132eec995a3
--- /dev/null
+++ b/Documentation/userspace-api/mseal.rst
@@ -0,0 +1,199 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=====================
+Introduction of mseal
+=====================
+
+:Author: Jeff Xu
+
+Modern CPUs support memory permissions such as RW and NX bits. These
+permissions improve the security posture against memory corruption bugs:
+an attacker cannot simply write code to arbitrary memory and point
+execution at it, because the memory has to be marked with the X bit or an
+exception is raised.
+
+Memory sealing additionally protects the mapping itself against
+modifications. This is useful to mitigate memory corruption issues where a
+corrupted pointer is passed to a memory management system. For example,
+such an attacker primitive can break control-flow integrity guarantees,
+since read-only memory that is supposed to be trusted can become writable,
+or .text pages can get remapped. Memory sealing can automatically be
+applied by the runtime loader to seal .text and .rodata pages, and
+applications can additionally seal security-critical data at runtime.
+
+A similar feature already exists in the XNU kernel with the
+VM_FLAGS_PERMANENT flag [1] and on OpenBSD with the mimmutable syscall [2].
+
+User API
+========
+mseal()
+-----------
+The mseal() syscall has the following signature:
+
+``int mseal(void *addr, size_t len, unsigned long flags)``
+
+**addr/len**: virtual memory address range.
+
+The address range set by ``addr``/``len`` must meet the following:
+   - The start address must be in an allocated VMA.
+   - The start address must be page aligned.
+   - The end address (``addr`` + ``len``) must be in an allocated VMA.
+   - There must be no gap (unallocated memory) between the start and end
+     addresses.
+
+The ``len`` will be page aligned implicitly by the kernel.
+
+**flags**: reserved for future use.
+
+**return values**:
+
+- ``0``: Success.
+
+- ``-EINVAL``:
+   - Invalid input ``flags``.
+   - The start address (``addr``) is not page aligned.
+   - The address range (``addr`` + ``len``) overflows.
+
+- ``-ENOMEM``:
+   - The start address (``addr``) is not allocated.
+   - The end address (``addr`` + ``len``) is not allocated.
+   - There is a gap (unallocated memory) between the start and end
+     addresses.
+
+- ``-EPERM``:
+   - Sealing is supported only on 64-bit CPUs; 32-bit is not supported.
+
+- For the above error cases, users can expect the given memory range to be
+  unmodified, i.e. no partial update.
+
+- There might be other internal errors/cases not listed here, e.g. an error
+  during merging/splitting VMAs, or the process reaching the maximum number
+  of supported VMAs. In those cases, partial updates to the given memory
+  range could happen. However, those cases should be rare.
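A minimal usage sketch tying the above together (this is an illustration, not part of the patch; it assumes ``__NR_mseal`` is exposed by the installed kernel headers)::

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>

    int main(void)
    {
            size_t len = getpagesize();

            /* map one writable page and fill it before locking it down */
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
            if (p == MAP_FAILED)
                    return 1;
            strcpy(p, "read-only after this point");

            /* drop the write permission, then seal the mapping */
            if (mprotect(p, len, PROT_READ))
                    return 1;
            if (syscall(__NR_mseal, p, len, 0)) {
                    perror("mseal");
                    return 1;
            }

            /* the mapping can no longer be re-protected, moved or unmapped */
            if (mprotect(p, len, PROT_READ | PROT_WRITE))
                    perror("mprotect after mseal");  /* expected: EPERM */

            return 0;
    }

Once mseal() succeeds, later mprotect(), mremap(), munmap() or mmap(MAP_FIXED) calls on that range are expected to fail with EPERM, matching the error semantics described in the section that follows.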
+
+**Blocked operations after sealing**:
+   Unmapping, moving to another location, and shrinking the size,
+   via munmap() and mremap(): these can leave an empty space that can
+   then be replaced with a VMA with a new set of attributes.
+
+   Moving or expanding a different VMA into the current location,
+   via mremap().
+
+   Modifying a VMA via mmap(MAP_FIXED).
+
+   Size expansion, via mremap(), does not appear to pose any
+   specific risks to sealed VMAs. It is included anyway because
+   the use case is unclear. In any case, users can rely on
+   merging to expand a sealed VMA.
+
+   mprotect() and pkey_mprotect().
+
+   Some destructive madvise() behaviors (e.g. MADV_DONTNEED) for
+   anonymous memory, when the user does not have write permission to the
+   memory. Those behaviors can alter the region contents by discarding
+   pages, effectively a memset(0) for anonymous memory.
+
+   The kernel will return -EPERM for blocked operations.
+
+   For blocked operations, one can expect the given address to be
+   unmodified, i.e. no partial update. Note that this is different from
+   existing mm system call behavior, where partial updates are made until
+   an error is found and returned to userspace. To give an example:
+
+   Assume the following code sequence:
+
+   - ptr = mmap(NULL, 8192, PROT_NONE);
+   - munmap(ptr + 4096, 4096);
+   - ret1 = mprotect(ptr, 8192, PROT_READ);
+   - mseal(ptr, 4096);
+   - ret2 = mprotect(ptr, 8192, PROT_NONE);
+
+   ret1 will be -ENOMEM; the page at ptr is updated to PROT_READ.
+
+   ret2 will be -EPERM; the page remains PROT_READ.
+
+**Note**:
+
+- mseal() only works on 64-bit CPUs, not on 32-bit CPUs.
+
+- Users can call mseal() multiple times; calling mseal() on already sealed
+  memory is a no-op (not an error).
+
+- munseal() is not supported.
+
+Use cases:
+==========
+- glibc:
+  The dynamic linker, while loading ELF executables, can apply sealing to
+  non-writable memory segments.
+
+- Chrome browser: protect some security-sensitive data structures.
+
+Notes on which memory to seal:
+==============================
+
+It is important to note that sealing changes the lifetime of a mapping,
+i.e. the sealed mapping won't be unmapped until the process terminates or
+the exec system call is invoked. Applications can apply sealing to any
+virtual memory region from userspace, but it is crucial to thoroughly
+analyze the mapping's lifetime prior to applying the sealing.
+
+For example:
+
+- aio/shm
+
+  aio/shm can call mmap()/munmap() on behalf of userspace, e.g. ksys_shmdt()
+  in shm.c. The lifetime of those mappings is not tied to the lifetime of the
+  process. If such memory is sealed from userspace, then munmap() will fail,
+  causing leaks in VMA address space during the lifetime of the process.
+
+- Brk (heap)
+
+  Currently, userspace applications can seal parts of the heap by calling
+  malloc() and mseal(). Let's assume the following calls from user space:
+
+  - ptr = malloc(size);
+  - mprotect(ptr, size, RO);
+  - mseal(ptr, size);
+  - free(ptr);
+
+  Technically, even before mseal() is added, the user can change the
+  protection of the heap by calling mprotect(RO). As long as the user changes
+  the protection back to RW before free(), the memory range can be reused.
+
+  Adding mseal() into the picture, however, the heap is then partially
+  sealed; the user can still free it, but the memory remains RO. If the
+  address is re-used by the heap manager for another malloc, the process
+  might crash soon after. Therefore, it is important not to apply sealing to
+  any memory that might get recycled.
+ + Furthermore, even if the application never calls the free() for the ptr, + the heap manager may invoke the brk system call to shrink the size of the + heap. In the kernel, the brk-shrink will call munmap(). Consequently, + depending on the location of the ptr, the outcome of brk-shrink is + nondeterministic. + + +Additional notes: +================= +As Jann Horn pointed out in [3], there are still a few ways to write +to RO memory, which is, in a way, by design. Those cases are not covered +by mseal(). If applications want to block such cases, sandbox tools (such as +seccomp, LSM, etc) might be considered. + +Those cases are: + +- Write to read-only memory through /proc/self/mem interface. +- Write to read-only memory through ptrace (such as PTRACE_POKETEXT). +- userfaultfd. + +The idea that inspired this patch comes from Stephen Röttger’s work in V8 +CFI [4]. Chrome browser in ChromeOS will be the first user of this API. + +Reference: +========== +[1] https://github.com/apple-oss-distributions/xnu/blob/1031c584a5e37aff177559b9f69dbd3c8c3fd30a/osfmk/mach/vm_statistics.h#L274 + +[2] https://man.openbsd.org/mimmutable.2 + +[3] https://lore.kernel.org/lkml/CAG48ez3ShUYey+ZAFsU2i1RpQn0a5eOs2hzQ426FkcgnfUGLvA@mail.gmail.com + +[4] https://docs.google.com/document/d/1O2jwK4dxI3nRcOJuPYkonhTkNQfbmwdvxQMyXgeaRHo/edit#heading=h.bvaojj9fu6hc From patchwork Mon Apr 15 16:35:24 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Xu X-Patchwork-Id: 13630311 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id ACC70C4345F for ; Mon, 15 Apr 2024 16:35:55 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id DB8FE6B0098; Mon, 15 Apr 2024 12:35:44 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D176B6B0099; Mon, 15 Apr 2024 12:35:44 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B6B2C6B009B; Mon, 15 Apr 2024 12:35:44 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id 8BF686B0098 for ; Mon, 15 Apr 2024 12:35:44 -0400 (EDT) Received: from smtpin23.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id 5F5FFA0675 for ; Mon, 15 Apr 2024 16:35:44 +0000 (UTC) X-FDA: 82012317408.23.71FEBEB Received: from mail-pf1-f181.google.com (mail-pf1-f181.google.com [209.85.210.181]) by imf18.hostedemail.com (Postfix) with ESMTP id 9752D1C000D for ; Mon, 15 Apr 2024 16:35:42 +0000 (UTC) Authentication-Results: imf18.hostedemail.com; dkim=pass header.d=chromium.org header.s=google header.b=ltysLs5h; dmarc=pass (policy=none) header.from=chromium.org; spf=pass (imf18.hostedemail.com: domain of jeffxu@chromium.org designates 209.85.210.181 as permitted sender) smtp.mailfrom=jeffxu@chromium.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1713198942; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=AJHOXE7W4iXMxi8stv8aQLFD0obJISw2P5aysbaDa0s=; 
From: jeffxu@chromium.org
To: akpm@linux-foundation.org, keescook@chromium.org, jannh@google.com, sroettger@google.com, willy@infradead.org, gregkh@linuxfoundation.org, torvalds@linux-foundation.org, usama.anjum@collabora.com, corbet@lwn.net, Liam.Howlett@oracle.com, surenb@google.com, merimus@google.com, rdunlap@infradead.org
Cc: jeffxu@google.com, jorgelo@chromium.org, groeck@chromium.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-mm@kvack.org, pedro.falcato@gmail.com, dave.hansen@intel.com, linux-hardening@vger.kernel.org, deraadt@openbsd.org, Jeff Xu
Subject: [PATCH v10 5/5] selftest mm/mseal read-only elf memory segment
Date: Mon, 15 Apr 2024 16:35:24 +0000
Message-ID: <20240415163527.626541-6-jeffxu@chromium.org>
In-Reply-To: <20240415163527.626541-1-jeffxu@chromium.org>
References: <20240415163527.626541-1-jeffxu@chromium.org>
MIME-Version: 1.0

From: Jeff Xu

Sealing read-only of elf mapping so it can't be changed by mprotect.
Signed-off-by: Jeff Xu <jeffxu@chromium.org>
---
 tools/testing/selftests/mm/.gitignore |   1 +
 tools/testing/selftests/mm/Makefile   |   1 +
 tools/testing/selftests/mm/seal_elf.c | 183 ++++++++++++++++++++++++++
 3 files changed, 185 insertions(+)
 create mode 100644 tools/testing/selftests/mm/seal_elf.c

diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore
index 98eaa4590f11..0b9ab987601c 100644
--- a/tools/testing/selftests/mm/.gitignore
+++ b/tools/testing/selftests/mm/.gitignore
@@ -48,3 +48,4 @@ va_high_addr_switch
 hugetlb_fault_after_madv
 hugetlb_madv_vs_map
 mseal_test
+seal_elf
diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index 95d10fe1b3c1..02392c426759 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -60,6 +60,7 @@ TEST_GEN_FILES += mrelease_test
 TEST_GEN_FILES += mremap_dontunmap
 TEST_GEN_FILES += mremap_test
 TEST_GEN_FILES += mseal_test
+TEST_GEN_FILES += seal_elf
 TEST_GEN_FILES += on-fault-limit
 TEST_GEN_FILES += pagemap_ioctl
 TEST_GEN_FILES += thuge-gen
diff --git a/tools/testing/selftests/mm/seal_elf.c b/tools/testing/selftests/mm/seal_elf.c
new file mode 100644
index 000000000000..61a2f1c94e02
--- /dev/null
+++ b/tools/testing/selftests/mm/seal_elf.c
@@ -0,0 +1,183 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+#include <sys/mman.h>
+#include <stdint.h>
+#include <unistd.h>
+#include <string.h>
+#include <sys/time.h>
+#include <sys/resource.h>
+#include <stdbool.h>
+#include "../kselftest.h"
+#include <syscall.h>
+#include <errno.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <assert.h>
+#include <fcntl.h>
+#include <assert.h>
+#include <sys/ioctl.h>
+#include <sys/vfs.h>
+#include <sys/stat.h>
+
+/*
+ * These definitions are needed to build the test manually with gcc:
+ * gcc -I ../../../../usr/include -DDEBUG -O3 seal_elf.c -o seal_elf
+ */
+#define FAIL_TEST_IF_FALSE(c) do {\
+		if (!(c)) {\
+			ksft_test_result_fail("%s, line:%d\n", __func__, __LINE__);\
+			goto test_end;\
+		} \
+	} \
+	while (0)
+
+#define SKIP_TEST_IF_FALSE(c) do {\
+		if (!(c)) {\
+			ksft_test_result_skip("%s, line:%d\n", __func__, __LINE__);\
+			goto test_end;\
+		} \
+	} \
+	while (0)
+
+
+#define TEST_END_CHECK() {\
+		ksft_test_result_pass("%s\n", __func__);\
+		return;\
+test_end:\
+		return;\
+}
+
+#ifndef u64
+#define u64 unsigned long long
+#endif
+
+/*
+ * Define sys_xyz wrappers that invoke the syscalls directly.
+ */
+static int sys_mseal(void *start, size_t len)
+{
+	int sret;
+
+	errno = 0;
+	sret = syscall(__NR_mseal, start, len, 0);
+	return sret;
+}
+
+static void *sys_mmap(void *addr, unsigned long len, unsigned long prot,
+	unsigned long flags, unsigned long fd, unsigned long offset)
+{
+	void *sret;
+
+	errno = 0;
+	sret = (void *) syscall(__NR_mmap, addr, len, prot,
+		flags, fd, offset);
+	return sret;
+}
+
+static int sys_mprotect(void *ptr, size_t size, unsigned long prot)
+{
+	int sret;
+
+	errno = 0;
+	sret = syscall(__NR_mprotect, ptr, size, prot);
+	return sret;
+}
+
+static bool seal_support(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+
+	ptr = sys_mmap(NULL, page_size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+	if (ptr == (void *) -1)
+		return false;
+
+	ret = sys_mseal(ptr, page_size);
+	if (ret < 0)
+		return false;
+
+	return true;
+}
+
+const char somestr[4096] = {"READONLY"};
+
+static void test_seal_elf(void)
+{
+	int ret;
+	FILE *maps;
+	char line[512];
+	int size = 0;
+	uintptr_t addr_start, addr_end;
+	char prot[5];
+	char filename[256];
+	unsigned long page_size = getpagesize();
+	unsigned long long ptr = (unsigned long long) somestr;
+	char *somestr2 = (char *)somestr;
+
+	/*
+	 * Modify the protection of the read-only somestr.
+	 */
+	if (((unsigned long long)ptr % page_size) != 0)
+		ptr = (unsigned long long)ptr & ~(page_size - 1);
+
+	ksft_print_msg("somestr = %s\n", somestr);
+	ksft_print_msg("change protection to rw\n");
+	ret = sys_mprotect((void *)ptr, page_size, PROT_READ|PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+	*somestr2 = 'A';
+	ksft_print_msg("somestr is modified to: %s\n", somestr);
+	ret = sys_mprotect((void *)ptr, page_size, PROT_READ);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	maps = fopen("/proc/self/maps", "r");
+	FAIL_TEST_IF_FALSE(maps);
+
+	/*
+	 * Apply sealing to the elf binary's mappings.
+	 */
+	while (fgets(line, sizeof(line), maps)) {
+		if (sscanf(line, "%lx-%lx %4s %*x %*x:%*x %*u %255[^\n]",
+			   &addr_start, &addr_end, prot, filename) == 4) {
+			if (strlen(filename)) {
+				/*
+				 * Seal the mapping if it is read-only.
+				 */
+				if (strstr(prot, "r-")) {
+					ret = sys_mseal((void *)addr_start, addr_end - addr_start);
+					FAIL_TEST_IF_FALSE(!ret);
+					ksft_print_msg("sealed: %lx-%lx %s %s\n",
+						addr_start, addr_end, prot, filename);
+					if ((uintptr_t) somestr >= addr_start &&
+					    (uintptr_t) somestr <= addr_end)
+						ksft_print_msg("mapping for somestr found\n");
+				}
+			}
+		}
+	}
+	fclose(maps);
+
+	ret = sys_mprotect((void *)ptr, page_size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(ret < 0);
+	ksft_print_msg("somestr is sealed, mprotect is rejected\n");
+
+	TEST_END_CHECK();
+}
+
+int main(int argc, char **argv)
+{
+	bool test_seal = seal_support();
+
+	ksft_print_header();
+	ksft_print_msg("pid=%d\n", getpid());
+
+	if (!test_seal)
+		ksft_exit_skip("sealing not supported, check CONFIG_64BIT\n");
+
+	ksft_set_plan(1);
+
+	test_seal_elf();
+
+	ksft_finished();
+	return 0;
+}