From patchwork Tue Apr 11 14:15:22 2023
X-Patchwork-Submitter: David Hildenbrand <david@redhat.com>
X-Patchwork-Id: 13207636
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
    sparclinux@vger.kernel.org, David Hildenbrand <david@redhat.com>
Subject: [PATCH] mm/huge_memory: conditionally call maybe_mkwrite() and drop
 pte_wrprotect() in __split_huge_pmd_locked()
Date: Tue, 11 Apr 2023 16:15:22 +0200
Message-Id: <20230411141529.428991-1-david@redhat.com>
X-Mailing-List: linux-kselftest@vger.kernel.org

No need to call maybe_mkwrite() to then wrprotect if the source PMD was
not writable.

It's worth noting that this now allows for PTEs to be writable even if
the source PMD was not writable: if vma->vm_page_prot includes write
permissions.

As documented in commit 931298e103c2 ("mm/userfaultfd: rely on
vma->vm_page_prot in uffd_wp_range()"), any mechanism that intends to
have pages wrprotected (COW, writenotify, mprotect, uffd-wp, softdirty,
...) has to properly adjust vma->vm_page_prot upfront, to not include
write permissions. If vma->vm_page_prot includes write permissions, the
PTE/PMD can be writable by default.
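For reference, maybe_mkwrite() only ever sets the write bit when the VMA
permits writes; a simplified sketch of that helper (modeled on the
definition in include/linux/mm.h, shown here purely for illustration):

	static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
	{
		/* Only mark the PTE writable if the VMA allows write access. */
		if (likely(vma->vm_flags & VM_WRITE))
			pte = pte_mkwrite(pte);
		return pte;
	}

So after this change, maybe_mkwrite() is only consulted when the source
PMD was writable; any remaining write permission comes directly from
vma->vm_page_prot via mk_pte(), as described above.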
This now mimics the handling in mm/migrate.c:remove_migration_pte() and
in mm/huge_memory.c:remove_migration_pmd(), which has been in place for
a long time (except that 96a9c287e25d ("mm/migrate: fix wrongly apply
write bit after mkdirty on sparc64") temporarily changed it).

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/huge_memory.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6f3af65435c8..8332e16ac97b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2235,11 +2235,10 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 			entry = pte_swp_mkuffd_wp(entry);
 		} else {
 			entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
-			entry = maybe_mkwrite(entry, vma);
+			if (write)
+				entry = maybe_mkwrite(entry, vma);
 			if (anon_exclusive)
 				SetPageAnonExclusive(page + i);
-			if (!write)
-				entry = pte_wrprotect(entry);
 			if (!young)
 				entry = pte_mkold(entry);
 			/* NOTE: this may set soft-dirty too on some archs */

From patchwork Tue Apr 11 14:15:25 2023
X-Patchwork-Submitter: David Hildenbrand <david@redhat.com>
X-Patchwork-Id: 13207640
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
    sparclinux@vger.kernel.org, David Hildenbrand <david@redhat.com>
Subject: [PATCH v1 2/6] selftests/mm: mkdirty: test behavior of
 (pte|pmd)_mkdirty on VMAs without write permissions
Date: Tue, 11 Apr 2023 16:15:25 +0200
Message-Id: <20230411141529.428991-4-david@redhat.com>
In-Reply-To: <20230411141529.428991-1-david@redhat.com>
References: <20230411141529.428991-1-david@redhat.com>
X-Mailing-List: linux-kselftest@vger.kernel.org

Let's add some tests that trigger (pte|pmd)_mkdirty on VMAs without
write permissions. If an architecture implementation is wrong, we might
accidentally set the PTE/PMD writable and allow for write access in a
VMA without write permissions.

The tests include reproducers for the two issues recently discovered
and worked-around in core-MM for now:

(1) commit 624a2c94f5b7 ("Partly revert "mm/thp: carry over dirty
    bit when thp splits on pmd"")
(2) commit 96a9c287e25d ("mm/migrate: fix wrongly apply write bit
    after mkdirty on sparc64")

In addition, there are some other tests that reveal further issues.

All tests pass on x86_64:

	./mkdirty
	# [INFO] detected THP size: 2048 KiB
	TAP version 13
	1..6
	# [INFO] PTRACE write access
	ok 1 SIGSEGV generated, page not modified
	# [INFO] PTRACE write access to THP
	ok 2 SIGSEGV generated, page not modified
	# [INFO] Page migration
	ok 3 SIGSEGV generated, page not modified
	# [INFO] Page migration of THP
	ok 4 SIGSEGV generated, page not modified
	# [INFO] PTE-mapping a THP
	ok 5 SIGSEGV generated, page not modified
	# [INFO] UFFDIO_COPY
	ok 6 SIGSEGV generated, page not modified
	# Totals: pass:6 fail:0 xfail:0 xpass:0 skip:0 error:0

But some fail on sparc64:

	./mkdirty
	# [INFO] detected THP size: 8192 KiB
	TAP version 13
	1..6
	# [INFO] PTRACE write access
	not ok 1 SIGSEGV generated, page not modified
	# [INFO] PTRACE write access to THP
	not ok 2 SIGSEGV generated, page not modified
	# [INFO] Page migration
	ok 3 SIGSEGV generated, page not modified
	# [INFO] Page migration of THP
	ok 4 SIGSEGV generated, page not modified
	# [INFO] PTE-mapping a THP
	ok 5 SIGSEGV generated, page not modified
	# [INFO] UFFDIO_COPY
	not ok 6 SIGSEGV generated, page not modified
	Bail out! 3 out of 6 tests failed
	# Totals: pass:3 fail:3 xfail:0 xpass:0 skip:0 error:0

Reverting both above commits makes all tests fail on sparc64:

	./mkdirty
	# [INFO] detected THP size: 8192 KiB
	TAP version 13
	1..6
	# [INFO] PTRACE write access
	not ok 1 SIGSEGV generated, page not modified
	# [INFO] PTRACE write access to THP
	not ok 2 SIGSEGV generated, page not modified
	# [INFO] Page migration
	not ok 3 SIGSEGV generated, page not modified
	# [INFO] Page migration of THP
	not ok 4 SIGSEGV generated, page not modified
	# [INFO] PTE-mapping a THP
	not ok 5 SIGSEGV generated, page not modified
	# [INFO] UFFDIO_COPY
	not ok 6 SIGSEGV generated, page not modified
	Bail out! 6 out of 6 tests failed
	# Totals: pass:0 fail:6 xfail:0 xpass:0 skip:0 error:0

The tests are useful to detect other problematic archs, to verify new
arch fixes, and to stop such issues from reappearing in the future.

For now, we don't add any hugetlb tests.
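Every test below boils down to the same write probe: arm a SIGSEGV
handler, attempt a single write to the read-only mapping, and check that
the fault was raised and the memory is unchanged. A minimal standalone
sketch of that pattern (simplified from do_test_write_sigsegv() in the
diff below; write_faults() and handler() are hypothetical names, error
handling omitted):

	#include <setjmp.h>
	#include <signal.h>

	static sigjmp_buf env;

	static void handler(int sig)
	{
		siglongjmp(env, 1);
	}

	/* Returns 1 if the write faulted and the value is unchanged. */
	static int write_faults(volatile char *mem)
	{
		char orig = *mem;
		int faulted = 0;

		signal(SIGSEGV, handler);
		if (sigsetjmp(env, 1) == 0)
			*mem = orig + 1;	/* must SIGSEGV on a !VM_WRITE VMA */
		else
			faulted = 1;
		signal(SIGSEGV, SIG_DFL);
		return faulted && *mem == orig;
	}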
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 tools/testing/selftests/mm/Makefile  |   2 +
 tools/testing/selftests/mm/mkdirty.c | 379 +++++++++++++++++++++++++++
 2 files changed, 381 insertions(+)
 create mode 100644 tools/testing/selftests/mm/mkdirty.c

diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index c31d952cff68..8235dddbbbc6 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -47,6 +47,7 @@ TEST_GEN_FILES += map_hugetlb
 TEST_GEN_FILES += map_populate
 TEST_GEN_FILES += memfd_secret
 TEST_GEN_FILES += migration
+TEST_GEN_PROGS += mkdirty
 TEST_GEN_FILES += mlock-random-test
 TEST_GEN_FILES += mlock2-tests
 TEST_GEN_FILES += mrelease_test
@@ -108,6 +109,7 @@ $(OUTPUT)/cow: vm_util.c
 $(OUTPUT)/khugepaged: vm_util.c
 $(OUTPUT)/ksm_functional_tests: vm_util.c
 $(OUTPUT)/madv_populate: vm_util.c
+$(OUTPUT)/mkdirty: vm_util.c
 $(OUTPUT)/soft-dirty: vm_util.c
 $(OUTPUT)/split_huge_page_test: vm_util.c
 $(OUTPUT)/userfaultfd: vm_util.c
diff --git a/tools/testing/selftests/mm/mkdirty.c b/tools/testing/selftests/mm/mkdirty.c
new file mode 100644
index 000000000000..6d71d972997b
--- /dev/null
+++ b/tools/testing/selftests/mm/mkdirty.c
@@ -0,0 +1,379 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Test handling of code that might set PTE/PMD dirty in read-only VMAs.
+ * Setting a PTE/PMD dirty must not accidentally set the PTE/PMD writable.
+ *
+ * Copyright 2023, Red Hat, Inc.
+ *
+ * Author(s): David Hildenbrand <david@redhat.com>
+ */
+#include <fcntl.h>
+#include <signal.h>
+#include <unistd.h>
+#include <string.h>
+#include <errno.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <setjmp.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+#include <sys/syscall.h>
+#include <linux/userfaultfd.h>
+#include <linux/mempolicy.h>
+
+#include "../kselftest.h"
+#include "vm_util.h"
+
+static size_t pagesize;
+static size_t thpsize;
+static int mem_fd;
+static int pagemap_fd;
+static sigjmp_buf env;
+
+static void signal_handler(int sig)
+{
+	if (sig == SIGSEGV)
+		siglongjmp(env, 1);
+	siglongjmp(env, 2);
+}
+
+static void do_test_write_sigsegv(char *mem)
+{
+	char orig = *mem;
+	int ret;
+
+	if (signal(SIGSEGV, signal_handler) == SIG_ERR) {
+		ksft_test_result_fail("signal() failed\n");
+		return;
+	}
+
+	ret = sigsetjmp(env, 1);
+	if (!ret)
+		*mem = orig + 1;
+
+	if (signal(SIGSEGV, SIG_DFL) == SIG_ERR)
+		ksft_test_result_fail("signal() failed\n");
+
+	ksft_test_result(ret == 1 && *mem == orig,
+			 "SIGSEGV generated, page not modified\n");
+}
+
+static char *mmap_thp_range(int prot, char **_mmap_mem, size_t *_mmap_size)
+{
+	const size_t mmap_size = 2 * thpsize;
+	char *mem, *mmap_mem;
+
+	mmap_mem = mmap(NULL, mmap_size, prot, MAP_PRIVATE|MAP_ANON,
+			-1, 0);
+	if (mmap_mem == MAP_FAILED) {
+		ksft_test_result_fail("mmap() failed\n");
+		return MAP_FAILED;
+	}
+	mem = (char *)(((uintptr_t)mmap_mem + thpsize) & ~(thpsize - 1));
+
+	if (madvise(mem, thpsize, MADV_HUGEPAGE)) {
+		ksft_test_result_skip("MADV_HUGEPAGE failed\n");
+		munmap(mmap_mem, mmap_size);
+		return MAP_FAILED;
+	}
+
+	*_mmap_mem = mmap_mem;
+	*_mmap_size = mmap_size;
+	return mem;
+}
+
+static void test_ptrace_write(void)
+{
+	char data = 1;
+	char *mem;
+	int ret;
+
+	ksft_print_msg("[INFO] PTRACE write access\n");
+
+	mem = mmap(NULL, pagesize, PROT_READ, MAP_PRIVATE|MAP_ANON, -1, 0);
+	if (mem == MAP_FAILED) {
+		ksft_test_result_fail("mmap() failed\n");
+		return;
+	}
+
+	/* Fault in the shared zeropage. */
+	if (*mem != 0) {
+		ksft_test_result_fail("Memory not zero\n");
+		goto munmap;
+	}
+
+	/*
+	 * Unshare the page (populating a fresh anon page that might be set
+	 * dirty in the PTE) in the read-only VMA using ptrace (FOLL_FORCE).
+	 */
+	lseek(mem_fd, (uintptr_t) mem, SEEK_SET);
+	ret = write(mem_fd, &data, 1);
+	if (ret != 1 || *mem != data) {
+		ksft_test_result_fail("write() failed\n");
+		goto munmap;
+	}
+
+	do_test_write_sigsegv(mem);
+munmap:
+	munmap(mem, pagesize);
+}
+
+static void test_ptrace_write_thp(void)
+{
+	char *mem, *mmap_mem;
+	size_t mmap_size;
+	char data = 1;
+	int ret;
+
+	ksft_print_msg("[INFO] PTRACE write access to THP\n");
+
+	mem = mmap_thp_range(PROT_READ, &mmap_mem, &mmap_size);
+	if (mem == MAP_FAILED)
+		return;
+
+	/*
+	 * Write to the first subpage in the read-only VMA using
+	 * ptrace(FOLL_FORCE), eventually placing a fresh THP that is marked
+	 * dirty in the PMD.
+	 */
+	lseek(mem_fd, (uintptr_t) mem, SEEK_SET);
+	ret = write(mem_fd, &data, 1);
+	if (ret != 1 || *mem != data) {
+		ksft_test_result_fail("write() failed\n");
+		goto munmap;
+	}
+
+	/* MM populated a THP if we got the last subpage populated as well. */
+	if (!pagemap_is_populated(pagemap_fd, mem + thpsize - pagesize)) {
+		ksft_test_result_skip("Did not get a THP populated\n");
+		goto munmap;
+	}
+
+	do_test_write_sigsegv(mem);
+munmap:
+	munmap(mmap_mem, mmap_size);
+}
+
+static void test_page_migration(void)
+{
+	char *mem;
+
+	ksft_print_msg("[INFO] Page migration\n");
+
+	mem = mmap(NULL, pagesize, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANON,
+		   -1, 0);
+	if (mem == MAP_FAILED) {
+		ksft_test_result_fail("mmap() failed\n");
+		return;
+	}
+
+	/* Populate a fresh page and dirty it. */
+	memset(mem, 1, pagesize);
+	if (mprotect(mem, pagesize, PROT_READ)) {
+		ksft_test_result_fail("mprotect() failed\n");
+		goto munmap;
+	}
+
+	/* Trigger page migration. Might not be available or fail. */
+	if (syscall(__NR_mbind, mem, pagesize, MPOL_LOCAL, NULL, 0x7fful,
+		    MPOL_MF_MOVE)) {
+		ksft_test_result_skip("mbind() failed\n");
+		goto munmap;
+	}
+
+	do_test_write_sigsegv(mem);
+munmap:
+	munmap(mem, pagesize);
+}
+
+static void test_page_migration_thp(void)
+{
+	char *mem, *mmap_mem;
+	size_t mmap_size;
+
+	ksft_print_msg("[INFO] Page migration of THP\n");
+
+	mem = mmap_thp_range(PROT_READ|PROT_WRITE, &mmap_mem, &mmap_size);
+	if (mem == MAP_FAILED)
+		return;
+
+	/*
+	 * Write to the first page, which might populate a fresh anon THP
+	 * and dirty it.
+	 */
+	memset(mem, 1, pagesize);
+	if (mprotect(mem, thpsize, PROT_READ)) {
+		ksft_test_result_fail("mprotect() failed\n");
+		goto munmap;
+	}
+
+	/* MM populated a THP if we got the last subpage populated as well. */
+	if (!pagemap_is_populated(pagemap_fd, mem + thpsize - pagesize)) {
+		ksft_test_result_skip("Did not get a THP populated\n");
+		goto munmap;
+	}
+
+	/* Trigger page migration. Might not be available or fail. */
+	if (syscall(__NR_mbind, mem, thpsize, MPOL_LOCAL, NULL, 0x7fful,
+		    MPOL_MF_MOVE)) {
+		ksft_test_result_skip("mbind() failed\n");
+		goto munmap;
+	}
+
+	do_test_write_sigsegv(mem);
+munmap:
+	munmap(mmap_mem, mmap_size);
+}
+
+static void test_pte_mapped_thp(void)
+{
+	char *mem, *mmap_mem;
+	size_t mmap_size;
+
+	ksft_print_msg("[INFO] PTE-mapping a THP\n");
+
+	mem = mmap_thp_range(PROT_READ|PROT_WRITE, &mmap_mem, &mmap_size);
+	if (mem == MAP_FAILED)
+		return;
+
+	/*
+	 * Write to the first page, which might populate a fresh anon THP
+	 * and dirty it.
+	 */
+	memset(mem, 1, pagesize);
+	if (mprotect(mem, thpsize, PROT_READ)) {
+		ksft_test_result_fail("mprotect() failed\n");
+		goto munmap;
+	}
+
+	/* MM populated a THP if we got the last subpage populated as well. */
+	if (!pagemap_is_populated(pagemap_fd, mem + thpsize - pagesize)) {
+		ksft_test_result_skip("Did not get a THP populated\n");
+		goto munmap;
+	}
+
+	/* Trigger PTE-mapping the THP by mprotect'ing the last subpage. */
+	if (mprotect(mem + thpsize - pagesize, pagesize,
+		     PROT_READ|PROT_WRITE)) {
+		ksft_test_result_fail("mprotect() failed\n");
+		goto munmap;
+	}
+
+	do_test_write_sigsegv(mem);
+munmap:
+	munmap(mmap_mem, mmap_size);
+}
+
+#ifdef __NR_userfaultfd
+static void test_uffdio_copy(void)
+{
+	struct uffdio_register uffdio_register;
+	struct uffdio_copy uffdio_copy;
+	struct uffdio_api uffdio_api;
+	char *dst, *src;
+	int uffd;
+
+	ksft_print_msg("[INFO] UFFDIO_COPY\n");
+
+	src = malloc(pagesize);
+	memset(src, 1, pagesize);
+	dst = mmap(NULL, pagesize, PROT_READ, MAP_PRIVATE|MAP_ANON, -1, 0);
+	if (dst == MAP_FAILED) {
+		ksft_test_result_fail("mmap() failed\n");
+		return;
+	}
+
+	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
+	if (uffd < 0) {
+		ksft_test_result_skip("__NR_userfaultfd failed\n");
+		goto munmap;
+	}
+
+	uffdio_api.api = UFFD_API;
+	uffdio_api.features = 0;
+	if (ioctl(uffd, UFFDIO_API, &uffdio_api) < 0) {
+		ksft_test_result_fail("UFFDIO_API failed\n");
+		goto close_uffd;
+	}
+
+	uffdio_register.range.start = (unsigned long) dst;
+	uffdio_register.range.len = pagesize;
+	uffdio_register.mode = UFFDIO_REGISTER_MODE_MISSING;
+	if (ioctl(uffd, UFFDIO_REGISTER, &uffdio_register)) {
+		ksft_test_result_fail("UFFDIO_REGISTER failed\n");
+		goto close_uffd;
+	}
+
+	/* Place a page in a read-only VMA, which might set the PTE dirty. */
+	uffdio_copy.dst = (unsigned long) dst;
+	uffdio_copy.src = (unsigned long) src;
+	uffdio_copy.len = pagesize;
+	uffdio_copy.mode = 0;
+	if (ioctl(uffd, UFFDIO_COPY, &uffdio_copy)) {
+		ksft_test_result_fail("UFFDIO_COPY failed\n");
+		goto close_uffd;
+	}
+
+	do_test_write_sigsegv(dst);
+close_uffd:
+	close(uffd);
+munmap:
+	munmap(dst, pagesize);
+	free(src);
+}
+#endif /* __NR_userfaultfd */
+
+int main(void)
+{
+	int err, tests = 2;
+
+	pagesize = getpagesize();
+	thpsize = read_pmd_pagesize();
+	if (thpsize) {
+		ksft_print_msg("[INFO] detected THP size: %zu KiB\n",
+			       thpsize / 1024);
+		tests += 3;
+	}
+#ifdef __NR_userfaultfd
+	tests += 1;
+#endif /* __NR_userfaultfd */
+
+	ksft_print_header();
+	ksft_set_plan(tests);
+
+	mem_fd = open("/proc/self/mem", O_RDWR);
+	if (mem_fd < 0)
+		ksft_exit_fail_msg("opening /proc/self/mem failed\n");
+	pagemap_fd = open("/proc/self/pagemap", O_RDONLY);
+	if (pagemap_fd < 0)
+		ksft_exit_fail_msg("opening /proc/self/pagemap failed\n");
+
+	/*
+	 * On some ptrace(FOLL_FORCE) write access via /proc/self/mem in
+	 * read-only VMAs, the kernel may set the PTE/PMD dirty.
+	 */
+	test_ptrace_write();
+	if (thpsize)
+		test_ptrace_write_thp();
+	/*
+	 * On page migration, the kernel may set the PTE/PMD dirty when
+	 * remapping the page.
+	 */
+	test_page_migration();
+	if (thpsize)
+		test_page_migration_thp();
+	/* PTE-mapping a THP might propagate the dirty PMD bit to the PTEs. */
+	if (thpsize)
+		test_pte_mapped_thp();
+	/* Placing a fresh page via userfaultfd may set the PTE dirty. */
+#ifdef __NR_userfaultfd
+	test_uffdio_copy();
+#endif /* __NR_userfaultfd */
+
+	err = ksft_get_fail_cnt();
+	if (err)
+		ksft_exit_fail_msg("%d out of %d tests failed\n",
+				   err, ksft_test_num());
+	return ksft_exit_pass();
+}

From patchwork Tue Apr 11 14:15:26 2023
X-Patchwork-Submitter: David Hildenbrand <david@redhat.com>
X-Patchwork-Id: 13207638
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
    sparclinux@vger.kernel.org, David Hildenbrand <david@redhat.com>
Subject: [PATCH v1 3/6] sparc/mm: don't unconditionally set HW writable bit
 when setting PTE dirty on 64bit
Date: Tue, 11 Apr 2023 16:15:26 +0200
Message-Id: <20230411141529.428991-5-david@redhat.com>
In-Reply-To: <20230411141529.428991-1-david@redhat.com>
References: <20230411141529.428991-1-david@redhat.com>
X-Mailing-List: linux-kselftest@vger.kernel.org

On sparc64, there is no HW modified bit; instead, SW tracks whether a
PTE is dirty via a SW bit that pte_mkdirty() sets. However,
pte_mkdirty() currently also unconditionally sets the HW writable bit,
which is wrong: pte_mkdirty() is not supposed to make a PTE actually
writable unless the SW writable bit -- pte_write() -- indicates that the
PTE is not write-protected.
Fortunately, sparc64 also defines a SW writable bit.

For example, this already turned into a problem in the context of THP
splitting, as documented in commit 624a2c94f5b7 ("Partly revert "mm/thp:
carry over dirty bit when thp splits on pmd""), and for page migration,
as documented in commit 96a9c287e25d ("mm/migrate: fix wrongly apply
write bit after mkdirty on sparc64").

Also, we might want to use the dirty PTE bit in the context of KSM with
shared zeropage [1], whereby setting the page writable would be
problematic.

But more generally, any code that might end up setting a PTE/PMD dirty
inside a VMA without write permissions is possibly broken.

Before this commit (sun4u in QEMU):

	root@debian:~/linux/tools/testing/selftests/mm# ./mkdirty
	# [INFO] detected THP size: 8192 KiB
	TAP version 13
	1..6
	# [INFO] PTRACE write access
	not ok 1 SIGSEGV generated, page not modified
	# [INFO] PTRACE write access to THP
	not ok 2 SIGSEGV generated, page not modified
	# [INFO] Page migration
	ok 3 SIGSEGV generated, page not modified
	# [INFO] Page migration of THP
	ok 4 SIGSEGV generated, page not modified
	# [INFO] PTE-mapping a THP
	ok 5 SIGSEGV generated, page not modified
	# [INFO] UFFDIO_COPY
	not ok 6 SIGSEGV generated, page not modified
	Bail out! 3 out of 6 tests failed
	# Totals: pass:3 fail:3 xfail:0 xpass:0 skip:0 error:0

Tests #3, #4 and #5 only pass because of the MM workarounds we added;
the underlying issue remains.

Let's fix the remaining issues and prepare for reverting the
workarounds by setting the HW writable bit only if both the SW dirty
bit and the SW writable bit are set.

We have to move pte_dirty() and pte_write() up. The code patching
mechanism and handling constants > 22bit is a bit special on sparc64.

The ASM logic in pte_mkdirty() and pte_mkwrite() matches the logic in
pte_mkold() to create the mask depending on the machine type. The ASM
logic in __pte_mkhwwrite() matches the logic in pte_present(), just
using an "or" instead of an "and" instruction.

With this commit (sun4u in QEMU):

	root@debian:~/linux/tools/testing/selftests/mm# ./mkdirty
	# [INFO] detected THP size: 8192 KiB
	TAP version 13
	1..6
	# [INFO] PTRACE write access
	ok 1 SIGSEGV generated, page not modified
	# [INFO] PTRACE write access to THP
	ok 2 SIGSEGV generated, page not modified
	# [INFO] Page migration
	ok 3 SIGSEGV generated, page not modified
	# [INFO] Page migration of THP
	ok 4 SIGSEGV generated, page not modified
	# [INFO] PTE-mapping a THP
	ok 5 SIGSEGV generated, page not modified
	# [INFO] UFFDIO_COPY
	ok 6 SIGSEGV generated, page not modified
	# Totals: pass:6 fail:0 xfail:0 xpass:0 skip:0 error:0

This handling seems to have been in place forever.
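Conceptually, the new rule implemented below is simple; the following is
only an illustrative sketch (SW_DIRTY/SW_WRITE are stand-ins for the
_PAGE_MODIFIED_4U/4V and _PAGE_WRITE_4U/4V masks, ignoring the
sun4u/sun4v code patching):

	static pte_t pte_mkdirty(pte_t pte)
	{
		pte = __pte(pte_val(pte) | SW_DIRTY);
		/* Set the HW writable bit only if SW dirty + SW writable. */
		return pte_write(pte) ? __pte_mkhwwrite(pte) : pte;
	}

	static pte_t pte_mkwrite(pte_t pte)
	{
		pte = __pte(pte_val(pte) | SW_WRITE);
		return pte_dirty(pte) ? __pte_mkhwwrite(pte) : pte;
	}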
[1] https://lkml.kernel.org/r/533a7c3d-3a48-b16b-b421-6e8386e0b142@redhat.com

Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/sparc/include/asm/pgtable_64.h | 116 ++++++++++++++++------------
 1 file changed, 66 insertions(+), 50 deletions(-)

diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 2dc8d4641734..5563efa1a19f 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -357,6 +357,42 @@ static inline pgprot_t pgprot_noncached(pgprot_t prot)
  */
 #define pgprot_noncached pgprot_noncached
 
+static inline unsigned long pte_dirty(pte_t pte)
+{
+	unsigned long mask;
+
+	__asm__ __volatile__(
+	"\n661:	mov		%1, %0\n"
+	"	nop\n"
+	"	.section	.sun4v_2insn_patch, \"ax\"\n"
+	"	.word		661b\n"
+	"	sethi		%%uhi(%2), %0\n"
+	"	sllx		%0, 32, %0\n"
+	"	.previous\n"
+	: "=r" (mask)
+	: "i" (_PAGE_MODIFIED_4U), "i" (_PAGE_MODIFIED_4V));
+
+	return (pte_val(pte) & mask);
+}
+
+static inline unsigned long pte_write(pte_t pte)
+{
+	unsigned long mask;
+
+	__asm__ __volatile__(
+	"\n661:	mov		%1, %0\n"
+	"	nop\n"
+	"	.section	.sun4v_2insn_patch, \"ax\"\n"
+	"	.word		661b\n"
+	"	sethi		%%uhi(%2), %0\n"
+	"	sllx		%0, 32, %0\n"
+	"	.previous\n"
+	: "=r" (mask)
+	: "i" (_PAGE_WRITE_4U), "i" (_PAGE_WRITE_4V));
+
+	return (pte_val(pte) & mask);
+}
+
 #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
 pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags);
 #define arch_make_huge_pte arch_make_huge_pte
@@ -418,28 +454,43 @@ static inline bool is_hugetlb_pte(pte_t pte)
 }
 #endif
 
+static inline pte_t __pte_mkhwwrite(pte_t pte)
+{
+	unsigned long val = pte_val(pte);
+
+	/*
+	 * Note: we only want to set the HW writable bit if the SW writable bit
+	 * and the SW dirty bit are set.
+	 */
+	__asm__ __volatile__(
+	"\n661:	or		%0, %2, %0\n"
+	"	.section	.sun4v_1insn_patch, \"ax\"\n"
+	"	.word		661b\n"
+	"	or		%0, %3, %0\n"
+	"	.previous\n"
+	: "=r" (val)
+	: "0" (val), "i" (_PAGE_W_4U), "i" (_PAGE_W_4V));
+
+	return __pte(val);
+}
+
 static inline pte_t pte_mkdirty(pte_t pte)
 {
-	unsigned long val = pte_val(pte), tmp;
+	unsigned long val = pte_val(pte), mask;
 
 	__asm__ __volatile__(
-	"\n661:	or		%0, %3, %0\n"
-	"	nop\n"
-	"\n662:	nop\n"
+	"\n661:	mov		%1, %0\n"
 	"	nop\n"
 	"	.section	.sun4v_2insn_patch, \"ax\"\n"
 	"	.word		661b\n"
-	"	sethi		%%uhi(%4), %1\n"
-	"	sllx		%1, 32, %1\n"
-	"	.word		662b\n"
-	"	or		%1, %%lo(%4), %1\n"
-	"	or		%0, %1, %0\n"
+	"	sethi		%%uhi(%2), %0\n"
+	"	sllx		%0, 32, %0\n"
 	"	.previous\n"
-	: "=r" (val), "=r" (tmp)
-	: "0" (val), "i" (_PAGE_MODIFIED_4U | _PAGE_W_4U),
-	  "i" (_PAGE_MODIFIED_4V | _PAGE_W_4V));
+	: "=r" (mask)
+	: "i" (_PAGE_MODIFIED_4U), "i" (_PAGE_MODIFIED_4V));
 
-	return __pte(val);
+	pte = __pte(val | mask);
+	return pte_write(pte) ? __pte_mkhwwrite(pte) : pte;
 }
 
 static inline pte_t pte_mkclean(pte_t pte)
@@ -481,7 +532,8 @@ static inline pte_t pte_mkwrite(pte_t pte)
 	: "=r" (mask)
 	: "i" (_PAGE_WRITE_4U), "i" (_PAGE_WRITE_4V));
 
-	return __pte(val | mask);
+	pte = __pte(val | mask);
+	return pte_dirty(pte) ? __pte_mkhwwrite(pte) : pte;
 }
 
 static inline pte_t pte_wrprotect(pte_t pte)
@@ -584,42 +636,6 @@ static inline unsigned long pte_young(pte_t pte)
 	return (pte_val(pte) & mask);
 }
 
-static inline unsigned long pte_dirty(pte_t pte)
-{
-	unsigned long mask;
-
-	__asm__ __volatile__(
-	"\n661:	mov		%1, %0\n"
-	"	nop\n"
-	"	.section	.sun4v_2insn_patch, \"ax\"\n"
-	"	.word		661b\n"
-	"	sethi		%%uhi(%2), %0\n"
-	"	sllx		%0, 32, %0\n"
-	"	.previous\n"
-	: "=r" (mask)
-	: "i" (_PAGE_MODIFIED_4U), "i" (_PAGE_MODIFIED_4V));
-
-	return (pte_val(pte) & mask);
-}
-
-static inline unsigned long pte_write(pte_t pte)
-{
-	unsigned long mask;
-
-	__asm__ __volatile__(
-	"\n661:	mov		%1, %0\n"
-	"	nop\n"
-	"	.section	.sun4v_2insn_patch, \"ax\"\n"
-	"	.word		661b\n"
-	"	sethi		%%uhi(%2), %0\n"
-	"	sllx		%0, 32, %0\n"
-	"	.previous\n"
-	: "=r" (mask)
-	: "i" (_PAGE_WRITE_4U), "i" (_PAGE_WRITE_4V));
-
-	return (pte_val(pte) & mask);
-}
-
 static inline unsigned long pte_exec(pte_t pte)
 {
 	unsigned long mask;

From patchwork Tue Apr 11 14:15:29 2023
X-Patchwork-Submitter: David Hildenbrand <david@redhat.com>
X-Patchwork-Id: 13207637
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
    sparclinux@vger.kernel.org, David Hildenbrand <david@redhat.com>
Subject: [PATCH v1 6/6] mm/huge_memory: conditionally call maybe_mkwrite() and
 drop pte_wrprotect() in __split_huge_pmd_locked()
Date: Tue, 11 Apr 2023 16:15:29 +0200
Message-Id: <20230411141529.428991-8-david@redhat.com>
In-Reply-To: <20230411141529.428991-1-david@redhat.com>
References: <20230411141529.428991-1-david@redhat.com>
X-Mailing-List: linux-kselftest@vger.kernel.org

No need to call maybe_mkwrite() to then wrprotect if the source PMD was
not writable.

It's worth noting that this now allows for PTEs to be writable even if
the source PMD was not writable: if vma->vm_page_prot includes write
permissions.

As documented in commit 931298e103c2 ("mm/userfaultfd: rely on
vma->vm_page_prot in uffd_wp_range()"), any mechanism that intends to
have pages wrprotected (COW, writenotify, mprotect, uffd-wp, softdirty,
...) has to properly adjust vma->vm_page_prot upfront, to not include
write permissions. If vma->vm_page_prot includes write permissions, the
PTE/PMD can be writable by default.

This now mimics the handling in mm/migrate.c:remove_migration_pte() and
in mm/huge_memory.c:remove_migration_pmd(), which has been in place for
a long time (except that 96a9c287e25d ("mm/migrate: fix wrongly apply
write bit after mkdirty on sparc64") temporarily changed it).

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/huge_memory.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6f3af65435c8..8332e16ac97b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2235,11 +2235,10 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 			entry = pte_swp_mkuffd_wp(entry);
 		} else {
 			entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
-			entry = maybe_mkwrite(entry, vma);
+			if (write)
+				entry = maybe_mkwrite(entry, vma);
 			if (anon_exclusive)
 				SetPageAnonExclusive(page + i);
-			if (!write)
-				entry = pte_wrprotect(entry);
 			if (!young)
 				entry = pte_mkold(entry);
 			/* NOTE: this may set soft-dirty too on some archs */