From patchwork Wed Nov 16 10:26:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044694 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C0B14C4332F for ; Wed, 16 Nov 2022 10:29:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=cMkhihx9u4wyyAWokYH+oDvS4KdOQQo6h2Sfu4pdzHU=; b=4rNlPu+TNvNzKR vQk/rD8oJXrfmhT32gm/u1cE6AGAnUAK1HocuIByIeU1n4fKI55xW0zaWwX0oTQquZVoDspIXggNV 0TgyKYd1ABMhOyL+L5187jf+PfcLGoDI/pPKKQZZufaHs/59rsVH1KbPCX33LJR8sfLBgokrBiCHd 2b4CHoLFKz3/4yqBFTvyqCl8htyucRs1Zpb7GUE09E9FTLIBSju4ywXYNmnzJjXiJeTICKbyzO/dv WNYaeHYgGDmbY77KiTCU6qCfUJ/GVA2hnATaKf532dZR7Tx1qILl/r3n6pEFTBXzHSk+ICyDGcIOl Ppd2w75nIAQb2wPNoWgQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFdx-002CPB-4W; Wed, 16 Nov 2022 10:27:45 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFdq-002CKE-N7 for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:27:43 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1668594457; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=CRZgYtQk0WJDMWcPE6Ik5l6ROIQ+8KCPRqImipF7TGM=; b=XDDw8tuVZP5VHP/8+1u9UeyByexcgsIbbnCTYbtBpjnb23WXLTxKWgW+DRSChRqZ/BRB21 gJFnV14tXin8TppyOID3XVfmw52el3/hUFuZNSAvZWO43BRIMT+0V60OEW8cjPqM0Xq/Vb 63hn0FUjIztvMEDJofdM3d8TXir/z9k= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-625-ccCkLAZZNOS2K5RcgvoY3g-1; Wed, 16 Nov 2022 05:27:33 -0500 X-MC-Unique: ccCkLAZZNOS2K5RcgvoY3g-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 33AAD3C0F66D; Wed, 16 Nov 2022 10:27:31 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id C9DEA2028CE4; Wed, 16 Nov 2022 10:27:23 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, 
linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand Subject: [PATCH mm-unstable v1 01/20] selftests/vm: anon_cow: prepare for non-anonymous COW tests Date: Wed, 16 Nov 2022 11:26:40 +0100 Message-Id: <20221116102659.70287-2-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_022738_882679_9809DC77 X-CRM114-Status: GOOD ( 24.03 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Originally, the plan was to have a separate tests for testing COW of non-anonymous (e.g., shared zeropage) pages. Turns out, that we'd need a lot of similar functionality and that there isn't a really good reason to separate it. So let's prepare for non-anon tests by renaming to "cow". Signed-off-by: David Hildenbrand Acked-by: Vlastimil Babka --- tools/testing/selftests/vm/.gitignore | 2 +- tools/testing/selftests/vm/Makefile | 10 ++++---- tools/testing/selftests/vm/check_config.sh | 4 +-- .../selftests/vm/{anon_cow.c => cow.c} | 25 +++++++++++-------- tools/testing/selftests/vm/run_vmtests.sh | 8 +++--- 5 files changed, 27 insertions(+), 22 deletions(-) rename tools/testing/selftests/vm/{anon_cow.c => cow.c} (97%) diff --git a/tools/testing/selftests/vm/.gitignore b/tools/testing/selftests/vm/.gitignore index 8a536c731e3c..ee8c41c998e6 100644 --- a/tools/testing/selftests/vm/.gitignore +++ b/tools/testing/selftests/vm/.gitignore @@ -1,5 +1,5 @@ # SPDX-License-Identifier: GPL-2.0-only -anon_cow +cow hugepage-mmap hugepage-mremap hugepage-shm diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile index 0986bd60c19f..89c14e41bd43 100644 --- a/tools/testing/selftests/vm/Makefile +++ b/tools/testing/selftests/vm/Makefile @@ -27,7 +27,7 @@ MAKEFLAGS += --no-builtin-rules CFLAGS = -Wall -I $(top_srcdir) -I $(top_srcdir)/usr/include $(EXTRA_CFLAGS) $(KHDR_INCLUDES) LDLIBS = -lrt -lpthread -TEST_GEN_FILES = anon_cow +TEST_GEN_FILES = cow TEST_GEN_FILES += compaction_test TEST_GEN_FILES += gup_test TEST_GEN_FILES += hmm-tests @@ -99,7 +99,7 @@ TEST_FILES += va_128TBswitch.sh include ../lib.mk -$(OUTPUT)/anon_cow: vm_util.c +$(OUTPUT)/cow: vm_util.c $(OUTPUT)/khugepaged: vm_util.c $(OUTPUT)/ksm_functional_tests: vm_util.c $(OUTPUT)/madv_populate: vm_util.c @@ -156,8 +156,8 @@ warn_32bit_failure: endif endif -# ANON_COW_EXTRA_LIBS may get set in local_config.mk, or it may be left empty. -$(OUTPUT)/anon_cow: LDLIBS += $(ANON_COW_EXTRA_LIBS) +# cow_EXTRA_LIBS may get set in local_config.mk, or it may be left empty. 
+$(OUTPUT)/cow: LDLIBS += $(COW_EXTRA_LIBS) $(OUTPUT)/mlock-random-test $(OUTPUT)/memfd_secret: LDLIBS += -lcap @@ -170,7 +170,7 @@ local_config.mk local_config.h: check_config.sh EXTRA_CLEAN += local_config.mk local_config.h -ifeq ($(ANON_COW_EXTRA_LIBS),) +ifeq ($(COW_EXTRA_LIBS),) all: warn_missing_liburing warn_missing_liburing: diff --git a/tools/testing/selftests/vm/check_config.sh b/tools/testing/selftests/vm/check_config.sh index 9a44c6520925..bcba3af0acea 100644 --- a/tools/testing/selftests/vm/check_config.sh +++ b/tools/testing/selftests/vm/check_config.sh @@ -21,11 +21,11 @@ $CC -c $tmpfile_c -o $tmpfile_o >/dev/null 2>&1 if [ -f $tmpfile_o ]; then echo "#define LOCAL_CONFIG_HAVE_LIBURING 1" > $OUTPUT_H_FILE - echo "ANON_COW_EXTRA_LIBS = -luring" > $OUTPUT_MKFILE + echo "COW_EXTRA_LIBS = -luring" > $OUTPUT_MKFILE else echo "// No liburing support found" > $OUTPUT_H_FILE echo "# No liburing support found, so:" > $OUTPUT_MKFILE - echo "ANON_COW_EXTRA_LIBS = " >> $OUTPUT_MKFILE + echo "COW_EXTRA_LIBS = " >> $OUTPUT_MKFILE fi rm ${tmpname}.* diff --git a/tools/testing/selftests/vm/anon_cow.c b/tools/testing/selftests/vm/cow.c similarity index 97% rename from tools/testing/selftests/vm/anon_cow.c rename to tools/testing/selftests/vm/cow.c index bbb251eb5025..d202bfd63585 100644 --- a/tools/testing/selftests/vm/anon_cow.c +++ b/tools/testing/selftests/vm/cow.c @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-only /* - * COW (Copy On Write) tests for anonymous memory. + * COW (Copy On Write) tests. * * Copyright 2022, Red Hat, Inc. * @@ -986,7 +986,11 @@ struct test_case { test_fn fn; }; -static const struct test_case test_cases[] = { +/* + * Test cases that are specific to anonymous pages: pages in private mappings + * that may get shared via COW during fork(). + */ +static const struct test_case anon_test_cases[] = { /* * Basic COW tests for fork() without any GUP. 
If we miss to break COW, * either the child can observe modifications by the parent or the @@ -1104,7 +1108,7 @@ static const struct test_case test_cases[] = { }, }; -static void run_test_case(struct test_case const *test_case) +static void run_anon_test_case(struct test_case const *test_case) { int i; @@ -1125,15 +1129,17 @@ static void run_test_case(struct test_case const *test_case) hugetlbsizes[i]); } -static void run_test_cases(void) +static void run_anon_test_cases(void) { int i; - for (i = 0; i < ARRAY_SIZE(test_cases); i++) - run_test_case(&test_cases[i]); + ksft_print_msg("[INFO] Anonymous memory tests in private mappings\n"); + + for (i = 0; i < ARRAY_SIZE(anon_test_cases); i++) + run_anon_test_case(&anon_test_cases[i]); } -static int tests_per_test_case(void) +static int tests_per_anon_test_case(void) { int tests = 2 + nr_hugetlbsizes; @@ -1144,7 +1150,6 @@ static int tests_per_test_case(void) int main(int argc, char **argv) { - int nr_test_cases = ARRAY_SIZE(test_cases); int err; pagesize = getpagesize(); @@ -1152,14 +1157,14 @@ int main(int argc, char **argv) detect_hugetlbsizes(); ksft_print_header(); - ksft_set_plan(nr_test_cases * tests_per_test_case()); + ksft_set_plan(ARRAY_SIZE(anon_test_cases) * tests_per_anon_test_case()); gup_fd = open("/sys/kernel/debug/gup_test", O_RDWR); pagemap_fd = open("/proc/self/pagemap", O_RDONLY); if (pagemap_fd < 0) ksft_exit_fail_msg("opening pagemap failed\n"); - run_test_cases(); + run_anon_test_cases(); err = ksft_get_fail_cnt(); if (err) diff --git a/tools/testing/selftests/vm/run_vmtests.sh b/tools/testing/selftests/vm/run_vmtests.sh index ce52e4f5ff21..71744b9002d0 100755 --- a/tools/testing/selftests/vm/run_vmtests.sh +++ b/tools/testing/selftests/vm/run_vmtests.sh @@ -50,8 +50,8 @@ separated by spaces: memory protection key tests - soft_dirty test soft dirty page bit semantics -- anon_cow - test anonymous copy-on-write semantics +- cow + test copy-on-write semantics example: ./run_vmtests.sh -t "hmm mmap ksm" EOF exit 0 @@ -267,7 +267,7 @@ fi CATEGORY="soft_dirty" run_test ./soft-dirty -# COW tests for anonymous memory -CATEGORY="anon_cow" run_test ./anon_cow +# COW tests +CATEGORY="cow" run_test ./cow exit $exitcode From patchwork Wed Nov 16 10:26:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044696 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id DB851C4332F for ; Wed, 16 Nov 2022 10:30:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=A7KqO9Mh7L0wLQoRU6iQRlfLP8lHLtIclZS4bvEH4VE=; b=sdlYfmO++/eOvX Wdivi6bR+BgM4jQ/zB1vYHffRIIH7s2CsllOEWVyK6sdXoGDUCt2b8ltu5R2ZIHu4wWuU2Ts6kF7p oUVg0JUXxu7Cwaa7wpcWjMGL0sxfkNbHJvbj23esdxAp4MhtPc9f5xtxS01iJgoDkGtKYDjNsJ9My ZtMyV8rjPDbXIqu0yf/1C9AarcEpamF1qWowvIrKDKvTC1yE+iwAD/65X7J4UaVizixE5Hs7jkVln 
v360AZ+QZJv4xXkkuwyfn48mW391iuNxEUZAlms3vFsCQj933+rKmwHqd6Gu9OcjBZBak0vqmMZzr RbnH958tCNMepUKS+Cxw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFem-002Cxi-Hu; Wed, 16 Nov 2022 10:28:37 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFe2-002CRo-27 for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:27:54 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1668594469; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=GX1A4FEx/LE5NKT2fU0VhMZiNOmIzFjwXIrq7S69i44=; b=VLPgCw8IW0tQmZvbpZjwgTYSFgZ5hn7LExNbWqufbGcwxrRvwHf+ebFJiitF+MoVxhj8OR ECHViGAp3U9zbBBeiHsRr1pHvpFgCQWJaY8QnqR7IE1NHxk/A6fvKH/jLLgYf0eFyb6Gpl JkyoWG2A4ZKf63f098Jenbg1Zi961T0= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-553-e8D79onzO9-Dj-jIrITFeA-1; Wed, 16 Nov 2022 05:27:45 -0500 X-MC-Unique: e8D79onzO9-Dj-jIrITFeA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 76CF7185A792; Wed, 16 Nov 2022 10:27:38 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 761C02024CC8; Wed, 16 Nov 2022 10:27:31 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand Subject: [PATCH mm-unstable v1 02/20] selftests/vm: cow: basic COW tests for non-anonymous pages Date: Wed, 16 Nov 2022 11:26:41 +0100 Message-Id: <20221116102659.70287-3-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_022750_235106_017FCFED X-CRM114-Status: GOOD ( 25.10 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: 
linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Let's add basic tests for COW with non-anonymous pages in private mappings: write access should properly trigger COW and result in the private changes not being visible through other page mappings. Especially, add tests for: * Zeropage * Huge zeropage * Ordinary pagecache pages via memfd and tmpfile() * Hugetlb pages via memfd Fortunately, all tests pass. Signed-off-by: David Hildenbrand --- tools/testing/selftests/vm/cow.c | 338 ++++++++++++++++++++++++++++++- 1 file changed, 337 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/vm/cow.c b/tools/testing/selftests/vm/cow.c index d202bfd63585..fb07bd44529c 100644 --- a/tools/testing/selftests/vm/cow.c +++ b/tools/testing/selftests/vm/cow.c @@ -19,6 +19,7 @@ #include #include #include +#include #include "local_config.h" #ifdef LOCAL_CONFIG_HAVE_LIBURING @@ -35,6 +36,7 @@ static size_t thpsize; static int nr_hugetlbsizes; static size_t hugetlbsizes[10]; static int gup_fd; +static bool has_huge_zeropage; static void detect_thpsize(void) { @@ -64,6 +66,31 @@ static void detect_thpsize(void) close(fd); } +static void detect_huge_zeropage(void) +{ + int fd = open("/sys/kernel/mm/transparent_hugepage/use_zero_page", + O_RDONLY); + size_t enabled = 0; + char buf[15]; + int ret; + + if (fd < 0) + return; + + ret = pread(fd, buf, sizeof(buf), 0); + if (ret > 0 && ret < sizeof(buf)) { + buf[ret] = 0; + + enabled = strtoul(buf, NULL, 10); + if (enabled == 1) { + has_huge_zeropage = true; + ksft_print_msg("[INFO] huge zeropage is enabled\n"); + } + } + + close(fd); +} + static void detect_hugetlbsizes(void) { DIR *dir = opendir("/sys/kernel/mm/hugepages/"); @@ -1148,6 +1175,312 @@ static int tests_per_anon_test_case(void) return tests; } +typedef void (*non_anon_test_fn)(char *mem, const char *smem, size_t size); + +static void test_cow(char *mem, const char *smem, size_t size) +{ + char *old = malloc(size); + + /* Backup the original content. */ + memcpy(old, smem, size); + + /* Modify the page. */ + memset(mem, 0xff, size); + + /* See if we still read the old values via the other mapping. */ + ksft_test_result(!memcmp(smem, old, size), + "Other mapping not modified\n"); + free(old); +} + +static void run_with_zeropage(non_anon_test_fn fn, const char *desc) +{ + char *mem, *smem, tmp; + + ksft_print_msg("[RUN] %s ... with shared zeropage\n", desc); + + mem = mmap(NULL, pagesize, PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANON, -1, 0); + if (mem == MAP_FAILED) { + ksft_test_result_fail("mmap() failed\n"); + return; + } + + smem = mmap(NULL, pagesize, PROT_READ, MAP_PRIVATE | MAP_ANON, -1, 0); + if (mem == MAP_FAILED) { + ksft_test_result_fail("mmap() failed\n"); + goto munmap; + } + + /* Read from the page to populate the shared zeropage. */ + tmp = *mem + *smem; + asm volatile("" : "+r" (tmp)); + + fn(mem, smem, pagesize); +munmap: + munmap(mem, pagesize); + if (smem != MAP_FAILED) + munmap(smem, pagesize); +} + +static void run_with_huge_zeropage(non_anon_test_fn fn, const char *desc) +{ + char *mem, *smem, *mmap_mem, *mmap_smem, tmp; + size_t mmap_size; + int ret; + + ksft_print_msg("[RUN] %s ... with huge zeropage\n", desc); + + if (!has_huge_zeropage) { + ksft_test_result_skip("Huge zeropage not enabled\n"); + return; + } + + /* For alignment purposes, we need twice the thp size. 
*/ + mmap_size = 2 * thpsize; + mmap_mem = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); + if (mmap_mem == MAP_FAILED) { + ksft_test_result_fail("mmap() failed\n"); + return; + } + mmap_smem = mmap(NULL, mmap_size, PROT_READ, + MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); + if (mmap_smem == MAP_FAILED) { + ksft_test_result_fail("mmap() failed\n"); + goto munmap; + } + + /* We need a THP-aligned memory area. */ + mem = (char *)(((uintptr_t)mmap_mem + thpsize) & ~(thpsize - 1)); + smem = (char *)(((uintptr_t)mmap_smem + thpsize) & ~(thpsize - 1)); + + ret = madvise(mem, thpsize, MADV_HUGEPAGE); + ret |= madvise(smem, thpsize, MADV_HUGEPAGE); + if (ret) { + ksft_test_result_fail("MADV_HUGEPAGE failed\n"); + goto munmap; + } + + /* + * Read from the memory to populate the huge shared zeropage. Read from + * the first sub-page and test if we get another sub-page populated + * automatically. + */ + tmp = *mem + *smem; + asm volatile("" : "+r" (tmp)); + if (!pagemap_is_populated(pagemap_fd, mem + pagesize) || + !pagemap_is_populated(pagemap_fd, smem + pagesize)) { + ksft_test_result_skip("Did not get THPs populated\n"); + goto munmap; + } + + fn(mem, smem, thpsize); +munmap: + munmap(mmap_mem, mmap_size); + if (mmap_smem != MAP_FAILED) + munmap(mmap_smem, mmap_size); +} + +static void run_with_memfd(non_anon_test_fn fn, const char *desc) +{ + char *mem, *smem, tmp; + int fd; + + ksft_print_msg("[RUN] %s ... with memfd\n", desc); + + fd = memfd_create("test", 0); + if (fd < 0) { + ksft_test_result_fail("memfd_create() failed\n"); + return; + } + + /* File consists of a single page filled with zeroes. */ + if (fallocate(fd, 0, 0, pagesize)) { + ksft_test_result_fail("fallocate() failed\n"); + goto close; + } + + /* Create a private mapping of the memfd. */ + mem = mmap(NULL, pagesize, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0); + if (mem == MAP_FAILED) { + ksft_test_result_fail("mmap() failed\n"); + goto close; + } + smem = mmap(NULL, pagesize, PROT_READ, MAP_SHARED, fd, 0); + if (mem == MAP_FAILED) { + ksft_test_result_fail("mmap() failed\n"); + goto munmap; + } + + /* Fault the page in. */ + tmp = *mem + *smem; + asm volatile("" : "+r" (tmp)); + + fn(mem, smem, pagesize); +munmap: + munmap(mem, pagesize); + if (smem != MAP_FAILED) + munmap(smem, pagesize); +close: + close(fd); +} + +static void run_with_tmpfile(non_anon_test_fn fn, const char *desc) +{ + char *mem, *smem, tmp; + FILE *file; + int fd; + + ksft_print_msg("[RUN] %s ... with tmpfile\n", desc); + + file = tmpfile(); + if (!file) { + ksft_test_result_fail("tmpfile() failed\n"); + return; + } + + fd = fileno(file); + if (fd < 0) { + ksft_test_result_skip("fileno() failed\n"); + return; + } + + /* File consists of a single page filled with zeroes. */ + if (fallocate(fd, 0, 0, pagesize)) { + ksft_test_result_fail("fallocate() failed\n"); + goto close; + } + + /* Create a private mapping of the memfd. */ + mem = mmap(NULL, pagesize, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0); + if (mem == MAP_FAILED) { + ksft_test_result_fail("mmap() failed\n"); + goto close; + } + smem = mmap(NULL, pagesize, PROT_READ, MAP_SHARED, fd, 0); + if (mem == MAP_FAILED) { + ksft_test_result_fail("mmap() failed\n"); + goto munmap; + } + + /* Fault the page in. 
*/ + tmp = *mem + *smem; + asm volatile("" : "+r" (tmp)); + + fn(mem, smem, pagesize); +munmap: + munmap(mem, pagesize); + if (smem != MAP_FAILED) + munmap(smem, pagesize); +close: + fclose(file); +} + +static void run_with_memfd_hugetlb(non_anon_test_fn fn, const char *desc, + size_t hugetlbsize) +{ + int flags = MFD_HUGETLB; + char *mem, *smem, tmp; + int fd; + + ksft_print_msg("[RUN] %s ... with memfd hugetlb (%zu kB)\n", desc, + hugetlbsize / 1024); + + flags |= __builtin_ctzll(hugetlbsize) << MFD_HUGE_SHIFT; + + fd = memfd_create("test", flags); + if (fd < 0) { + ksft_test_result_skip("memfd_create() failed\n"); + return; + } + + /* File consists of a single page filled with zeroes. */ + if (fallocate(fd, 0, 0, hugetlbsize)) { + ksft_test_result_skip("need more free huge pages\n"); + goto close; + } + + /* Create a private mapping of the memfd. */ + mem = mmap(NULL, hugetlbsize, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, + 0); + if (mem == MAP_FAILED) { + ksft_test_result_skip("need more free huge pages\n"); + goto close; + } + smem = mmap(NULL, hugetlbsize, PROT_READ, MAP_SHARED, fd, 0); + if (mem == MAP_FAILED) { + ksft_test_result_fail("mmap() failed\n"); + goto munmap; + } + + /* Fault the page in. */ + tmp = *mem + *smem; + asm volatile("" : "+r" (tmp)); + + fn(mem, smem, hugetlbsize); +munmap: + munmap(mem, hugetlbsize); + if (mem != MAP_FAILED) + munmap(smem, hugetlbsize); +close: + close(fd); +} + +struct non_anon_test_case { + const char *desc; + non_anon_test_fn fn; +}; + +/* + * Test cases that target any pages in private mappings that are non anonymous: + * pages that may get shared via COW ndependent of fork(). This includes + * the shared zeropage(s), pagecache pages, ... + */ +static const struct non_anon_test_case non_anon_test_cases[] = { + /* + * Basic COW test without any GUP. If we miss to break COW, changes are + * visible via other private/shared mappings. 
+ */ + { + "Basic COW", + test_cow, + }, +}; + +static void run_non_anon_test_case(struct non_anon_test_case const *test_case) +{ + int i; + + run_with_zeropage(test_case->fn, test_case->desc); + run_with_memfd(test_case->fn, test_case->desc); + run_with_tmpfile(test_case->fn, test_case->desc); + if (thpsize) + run_with_huge_zeropage(test_case->fn, test_case->desc); + for (i = 0; i < nr_hugetlbsizes; i++) + run_with_memfd_hugetlb(test_case->fn, test_case->desc, + hugetlbsizes[i]); +} + +static void run_non_anon_test_cases(void) +{ + int i; + + ksft_print_msg("[RUN] Non-anonymous memory tests in private mappings\n"); + + for (i = 0; i < ARRAY_SIZE(non_anon_test_cases); i++) + run_non_anon_test_case(&non_anon_test_cases[i]); +} + +static int tests_per_non_anon_test_case(void) +{ + int tests = 3 + nr_hugetlbsizes; + + if (thpsize) + tests += 1; + return tests; +} + int main(int argc, char **argv) { int err; @@ -1155,9 +1488,11 @@ int main(int argc, char **argv) pagesize = getpagesize(); detect_thpsize(); detect_hugetlbsizes(); + detect_huge_zeropage(); ksft_print_header(); - ksft_set_plan(ARRAY_SIZE(anon_test_cases) * tests_per_anon_test_case()); + ksft_set_plan(ARRAY_SIZE(anon_test_cases) * tests_per_anon_test_case() + + ARRAY_SIZE(non_anon_test_cases) * tests_per_non_anon_test_case()); gup_fd = open("/sys/kernel/debug/gup_test", O_RDWR); pagemap_fd = open("/proc/self/pagemap", O_RDONLY); @@ -1165,6 +1500,7 @@ int main(int argc, char **argv) ksft_exit_fail_msg("opening pagemap failed\n"); run_anon_test_cases(); + run_non_anon_test_cases(); err = ksft_get_fail_cnt(); if (err) From patchwork Wed Nov 16 10:26:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044695 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 246EAC4332F for ; Wed, 16 Nov 2022 10:30:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=SsHFuJQFdu7MN9EtMSgmx9Oxs62m37hgRo0coNyN8lI=; b=Pt6sB4aCjqmMUy yVGs0v7Lu7kgnicnT2fuCa7Opjj3WvwWhcjj9JnAaAcYqQAurAWbmXzygGil2F3QdhJaURwaC3oYt GuUkg4IhIYQpRUWlmTr0ZlhYTnLTh//M//E/7oPznOvukJAfsAeDfcUb6+TxpnK2t+BliH1gtNStk +N+Gs/BhjApTOno3q7RtUfSD4p6+cvHjutpG+rRaIEiE4qTRMYnLXQvoA9E2Zh78Vyyx3FXwqNQ/2 df4v/3sHdVdlztq/8Ci1vzuPsSba4oneMh62DI8lI4+dz9+GOLISvrprcKbu3Caw9Go97rZtPYeAr PPa2VzTMDCBHMMOm654A==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFeL-002Cfy-Hl; Wed, 16 Nov 2022 10:28:09 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFe1-002CRb-H5 for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:27:52 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; 
t=1668594468; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=1Pfa0qX/ZhOI50QqJ9JdOGuy2kwPxeAxaBCfxYoSW9g=; b=CdH04xSb92RbKIj8gShf4bn2j6ug09/h0btgl92byETeR7FRQ1mZZUlMDj6yJEKj2HODly FaeFo/Ag4q2yn67WyU0PbAtzz8iStsBL/9rsKBDf9fTr5tEJLcsqbyoXinJHVqYrfncft7 paUqHkXY4zxXGJ2dBCZB79rwEhnEcYg= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-387-mUeJQgfvOUWRJd96rWHDcg-1; Wed, 16 Nov 2022 05:27:47 -0500 X-MC-Unique: mUeJQgfvOUWRJd96rWHDcg-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 547AE86F13B; Wed, 16 Nov 2022 10:27:45 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id C5D9B20290A6; Wed, 16 Nov 2022 10:27:38 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand Subject: [PATCH mm-unstable v1 03/20] selftests/vm: cow: R/O long-term pinning reliability tests for non-anon pages Date: Wed, 16 Nov 2022 11:26:42 +0100 Message-Id: <20221116102659.70287-4-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_022749_687263_1C861192 X-CRM114-Status: GOOD ( 17.17 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Let's test whether R/O long-term pinning is reliable for non-anonymous memory: when R/O long-term pinning a page, the expectation is that we break COW early before pinning, such that actual write access via the page tables won't break COW later and end up replacing the R/O-pinned page in the page table. Consequently, R/O long-term pinning in private mappings would only target exclusive anonymous pages. For now, all tests fail: # [RUN] R/O longterm GUP pin ... with shared zeropage not ok 151 Longterm R/O pin is reliable # [RUN] R/O longterm GUP pin ... 
with memfd not ok 152 Longterm R/O pin is reliable # [RUN] R/O longterm GUP pin ... with tmpfile not ok 153 Longterm R/O pin is reliable # [RUN] R/O longterm GUP pin ... with huge zeropage not ok 154 Longterm R/O pin is reliable # [RUN] R/O longterm GUP pin ... with memfd hugetlb (2048 kB) not ok 155 Longterm R/O pin is reliable # [RUN] R/O longterm GUP pin ... with memfd hugetlb (1048576 kB) not ok 156 Longterm R/O pin is reliable # [RUN] R/O longterm GUP-fast pin ... with shared zeropage not ok 157 Longterm R/O pin is reliable # [RUN] R/O longterm GUP-fast pin ... with memfd not ok 158 Longterm R/O pin is reliable # [RUN] R/O longterm GUP-fast pin ... with tmpfile not ok 159 Longterm R/O pin is reliable # [RUN] R/O longterm GUP-fast pin ... with huge zeropage not ok 160 Longterm R/O pin is reliable # [RUN] R/O longterm GUP-fast pin ... with memfd hugetlb (2048 kB) not ok 161 Longterm R/O pin is reliable # [RUN] R/O longterm GUP-fast pin ... with memfd hugetlb (1048576 kB) not ok 162 Longterm R/O pin is reliable Signed-off-by: David Hildenbrand --- tools/testing/selftests/vm/cow.c | 28 +++++++++++++++++++++++++++- 1 file changed, 27 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/vm/cow.c b/tools/testing/selftests/vm/cow.c index fb07bd44529c..73e05b52c49e 100644 --- a/tools/testing/selftests/vm/cow.c +++ b/tools/testing/selftests/vm/cow.c @@ -561,6 +561,7 @@ static void test_iouring_fork(char *mem, size_t size) #endif /* LOCAL_CONFIG_HAVE_LIBURING */ enum ro_pin_test { + RO_PIN_TEST, RO_PIN_TEST_SHARED, RO_PIN_TEST_PREVIOUSLY_SHARED, RO_PIN_TEST_RO_EXCLUSIVE, @@ -593,6 +594,8 @@ static void do_test_ro_pin(char *mem, size_t size, enum ro_pin_test test, } switch (test) { + case RO_PIN_TEST: + break; case RO_PIN_TEST_SHARED: case RO_PIN_TEST_PREVIOUSLY_SHARED: /* @@ -1193,6 +1196,16 @@ static void test_cow(char *mem, const char *smem, size_t size) free(old); } +static void test_ro_pin(char *mem, const char *smem, size_t size) +{ + do_test_ro_pin(mem, size, RO_PIN_TEST, false); +} + +static void test_ro_fast_pin(char *mem, const char *smem, size_t size) +{ + do_test_ro_pin(mem, size, RO_PIN_TEST, true); +} + static void run_with_zeropage(non_anon_test_fn fn, const char *desc) { char *mem, *smem, tmp; @@ -1433,7 +1446,7 @@ struct non_anon_test_case { }; /* - * Test cases that target any pages in private mappings that are non anonymous: + * Test cases that target any pages in private mappings that are not anonymous: * pages that may get shared via COW ndependent of fork(). This includes * the shared zeropage(s), pagecache pages, ... */ @@ -1446,6 +1459,19 @@ static const struct non_anon_test_case non_anon_test_cases[] = { "Basic COW", test_cow, }, + /* + * Take a R/O longterm pin. When modifying the page via the page table, + * the page content change must be visible via the pin. + */ + { + "R/O longterm GUP pin", + test_ro_pin, + }, + /* Same as above, but using GUP-fast. 
*/ + { + "R/O longterm GUP-fast pin", + test_ro_fast_pin, + }, }; static void run_non_anon_test_case(struct non_anon_test_case const *test_case) From patchwork Wed Nov 16 10:26:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044697 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 2D330C433FE for ; Wed, 16 Nov 2022 10:31:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=+cUfE51WWQAapUjLGf08EHlbEtWTBVSRIlLfzVnb4Fs=; b=pNc/XhS9uNV0RX Kay013KgudoVtE3jjmp0EJ6VVrDkXpjKX5nVUxK4Amb0nKM2Qv7rUE4gST9WT0LOVNtsx1+2vUC99 fBeLsTAkS93Sl1k83sG6g3rE5fE1bbLgWkhViu4DTEDJVrkT6U8KzTR6KfoQAmjtE6HbeJvDumZBA CR342qf6GTh06ZJl8QCorKLu/xa5DyqQZ9ck5RhECKC50+/nOwSh4fIKsNsP+DojW71LNgGI0aPct gcVWR0/ml59PrP834ZUooUANb4o2gtazkd6AoSk+i+ZRBpvc0/4taRdD8J6OvHkQThKxI0wLgTEkh pVp6qLgG0UlNLamz6ftA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFfP-002DLV-6w; Wed, 16 Nov 2022 10:29:16 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFeA-002CZ4-VD for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:28:00 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1668594478; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=fetjyNVdxegk0QG4w8mo8gn6x0g2HdvFN8vnJqqr3gA=; b=Z+ENDj4rlT85nXTuJtcEuvsiRJENti9L4KyfcPwuHL3Aq/OxYpoNshhpIXnXhjhRHic8Wn GbZBaA6U1D7jpD5V2W3EJkmGvnHubd5uLgOdOSy9j/U27xrhQiYmFX0Is/FBHX18EZijHU ArjH3WhDMYDRDlzu59iAvgz6AYxtBkg= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-456-E78fxk5vOCySDX-MBsnqRQ-1; Wed, 16 Nov 2022 05:27:54 -0500 X-MC-Unique: E78fxk5vOCySDX-MBsnqRQ-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 3B6478027F5; Wed, 16 Nov 2022 10:27:53 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 9344E2027064; Wed, 16 Nov 2022 10:27:45 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, 
linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand Subject: [PATCH mm-unstable v1 04/20] mm: add early FAULT_FLAG_UNSHARE consistency checks Date: Wed, 16 Nov 2022 11:26:43 +0100 Message-Id: <20221116102659.70287-5-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_022759_137387_834022A3 X-CRM114-Status: GOOD ( 20.05 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org For now, FAULT_FLAG_UNSHARE only applies to anonymous pages, which implies a COW mapping. Let's hide FAULT_FLAG_UNSHARE early if we're not dealing with a COW mapping, such that we treat it like a read fault as documented and don't have to worry about the flag throughout all fault handlers. While at it, centralize the check for mutual exclusion of FAULT_FLAG_UNSHARE and FAULT_FLAG_WRITE and just drop the check that either flag is set in the WP handler. Signed-off-by: David Hildenbrand Reviewed-by: Vlastimil Babka --- mm/huge_memory.c | 3 --- mm/hugetlb.c | 5 ----- mm/memory.c | 23 ++++++++++++++++++++--- 3 files changed, 20 insertions(+), 11 deletions(-) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index ed12cd3acbfd..68d00196b519 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1267,9 +1267,6 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf) vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd); VM_BUG_ON_VMA(!vma->anon_vma, vma); - VM_BUG_ON(unshare && (vmf->flags & FAULT_FLAG_WRITE)); - VM_BUG_ON(!unshare && !(vmf->flags & FAULT_FLAG_WRITE)); - if (is_huge_zero_pmd(orig_pmd)) goto fallback; diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 1de986c62976..383b26069b33 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -5314,9 +5314,6 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma, unsigned long haddr = address & huge_page_mask(h); struct mmu_notifier_range range; - VM_BUG_ON(unshare && (flags & FOLL_WRITE)); - VM_BUG_ON(!unshare && !(flags & FOLL_WRITE)); - /* * hugetlb does not support FOLL_FORCE-style write faults that keep the * PTE mapped R/O such as maybe_mkwrite() would do. @@ -5326,8 +5323,6 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma, /* Let's take out MAP_SHARED mappings first. 
*/ if (vma->vm_flags & VM_MAYSHARE) { - if (unlikely(unshare)) - return 0; set_huge_ptep_writable(vma, haddr, ptep); return 0; } diff --git a/mm/memory.c b/mm/memory.c index 2d453736f87c..e014435a87db 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3344,9 +3344,6 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) struct vm_area_struct *vma = vmf->vma; struct folio *folio; - VM_BUG_ON(unshare && (vmf->flags & FAULT_FLAG_WRITE)); - VM_BUG_ON(!unshare && !(vmf->flags & FAULT_FLAG_WRITE)); - if (likely(!unshare)) { if (userfaultfd_pte_wp(vma, *vmf->pte)) { pte_unmap_unlock(vmf->pte, vmf->ptl); @@ -5161,6 +5158,22 @@ static void lru_gen_exit_fault(void) } #endif /* CONFIG_LRU_GEN */ +static vm_fault_t sanitize_fault_flags(struct vm_area_struct *vma, + unsigned int *flags) +{ + if (unlikely(*flags & FAULT_FLAG_UNSHARE)) { + if (WARN_ON_ONCE(*flags & FAULT_FLAG_WRITE)) + return VM_FAULT_SIGSEGV; + /* + * FAULT_FLAG_UNSHARE only applies to COW mappings. Let's + * just treat it like an ordinary read-fault otherwise. + */ + if (!is_cow_mapping(vma->vm_flags)) + *flags &= ~FAULT_FLAG_UNSHARE; + } + return 0; +} + /* * By the time we get here, we already hold the mm semaphore * @@ -5177,6 +5190,10 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address, count_vm_event(PGFAULT); count_memcg_event_mm(vma->vm_mm, PGFAULT); + ret = sanitize_fault_flags(vma, &flags); + if (ret) + return ret; + if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE, flags & FAULT_FLAG_INSTRUCTION, flags & FAULT_FLAG_REMOTE)) From patchwork Wed Nov 16 10:26:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044698 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C6D5FC433FE for ; Wed, 16 Nov 2022 10:31:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=r/wjJQYy0wn1aEIWwfhfNnR2mjkPfRIApKdcaJQp8/w=; b=XK9jRFOpKmddnK RVfZjCAIhswzryjRtjPAacBzeJXhkXxS476GR1SiD3VlVgErfLy+j2TnSsod/6hA22dwqZd8D2DAX ybbPorAzRSzyjiCK7ad17ga5ijtQ1eHQVM5RIPAEVN/IHQWb5h6DK/UW8dcL0raVPnW+3+eRHUHPu xnDmBwOdpQ8eaDzoD6wrATe575fkJaxt7+aUba+uoPphQ1k5euvFuknXA5fGLUzzFGLIW621W1q8T OT2CbydSAImqLZ0Q7P67SBS2MOMTY1yUdZcwNS/4AYDtM75uqDUl7YhFjZUUm8p8bDx5hGu6Zv26Y hzwCmYycVX9IVOAcg7/w==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFge-002EDY-PQ; Wed, 16 Nov 2022 10:30:32 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFeI-002CeW-Up for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:28:08 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1668594486; 
h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=dBNCmUN3Fl+gew+kF5uWxf/68ToHi+G0X+q3EDBSaEw=; b=BEua2z5yjijP2J8SxO+ie3xCa5TEiZDnVKAUkF7NT7EO3oC/MmzrbtlyeFXXVyQXRunbza sP7ml2nh/lvENjGGjEv1mJVfvcCIguYXodvHad5WA/SfrllxWgc9azoldOL4nBdfOrkj0K Kfo1llBN72A7/BTfz++MfQatti4q89g= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-634-sQYY2ixTM6-Iv3BZ-sVvtQ-1; Wed, 16 Nov 2022 05:28:02 -0500 X-MC-Unique: sQYY2ixTM6-Iv3BZ-sVvtQ-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id CFD7C3C10ECA; Wed, 16 Nov 2022 10:28:00 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 83B9A2028E8F; Wed, 16 Nov 2022 10:27:53 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand Subject: [PATCH mm-unstable v1 05/20] mm: add early FAULT_FLAG_WRITE consistency checks Date: Wed, 16 Nov 2022 11:26:44 +0100 Message-Id: <20221116102659.70287-6-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_022807_112811_05F0C2B6 X-CRM114-Status: GOOD ( 16.48 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Let's catch abuse of FAULT_FLAG_WRITE early, such that we don't have to care in all other handlers and might get "surprises" if we forget to do so. Write faults without VM_MAYWRITE don't make any sense, and our maybe_mkwrite() logic could have hidden such abuse for now. Write faults without VM_WRITE on something that is not a COW mapping is similarly broken, and e.g., do_wp_page() could end up placing an anonymous page into a shared mapping, which would be bad. 
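To illustrate the rules being enforced, below is a small self-contained C model of the sanitize_fault_flags() checks added by this patch together with the FAULT_FLAG_UNSHARE checks from the previous patch. The flag values and the standalone main() are made-up stand-ins for illustration only, not the kernel definitions; only the decision logic is meant to mirror the hunks shown in these patches, and is_cow_mapping() follows its usual definition (a private mapping that may become writable).

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in flag values, not the kernel definitions. */
#define VM_WRITE		0x1ul
#define VM_SHARED		0x2ul
#define VM_MAYWRITE		0x4ul
#define FAULT_FLAG_WRITE	0x1u
#define FAULT_FLAG_UNSHARE	0x2u

/* COW mapping: not VM_SHARED, but allowed to become writable. */
static bool is_cow_mapping(unsigned long vm_flags)
{
	return (vm_flags & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE;
}

/* Returns false where the kernel would return VM_FAULT_SIGSEGV. */
static bool sanitize_fault_flags(unsigned long vm_flags, unsigned int *flags)
{
	if (*flags & FAULT_FLAG_UNSHARE) {
		/* FAULT_FLAG_UNSHARE and FAULT_FLAG_WRITE are mutually exclusive. */
		if (*flags & FAULT_FLAG_WRITE)
			return false;
		/* UNSHARE only applies to COW mappings; else treat it as a read fault. */
		if (!is_cow_mapping(vm_flags))
			*flags &= ~FAULT_FLAG_UNSHARE;
	} else if (*flags & FAULT_FLAG_WRITE) {
		/* Write faults on mappings that may never be writable are bogus. */
		if (!(vm_flags & VM_MAYWRITE))
			return false;
		/* Without VM_WRITE, only FOLL_FORCE on a COW mapping may write-fault. */
		if (!(vm_flags & VM_WRITE) && !is_cow_mapping(vm_flags))
			return false;
	}
	return true;
}

int main(void)
{
	unsigned int flags = FAULT_FLAG_WRITE;

	/* Write fault on a R/O shared mapping: rejected early (prints 0). */
	printf("%d\n", sanitize_fault_flags(VM_SHARED | VM_MAYWRITE, &flags));

	/* Write fault on a R/O private (COW) mapping, e.g. FOLL_FORCE: allowed (prints 1). */
	printf("%d\n", sanitize_fault_flags(VM_MAYWRITE, &flags));
	return 0;
}
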
This is a preparation for reliable R/O long-term pinning of pages in private mappings, whereby we want to make sure that we will never break COW in a read-only private mapping. Signed-off-by: David Hildenbrand Reviewed-by: Vlastimil Babka --- mm/memory.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/mm/memory.c b/mm/memory.c index e014435a87db..c4fa378ec2a0 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -5170,6 +5170,14 @@ static vm_fault_t sanitize_fault_flags(struct vm_area_struct *vma, */ if (!is_cow_mapping(vma->vm_flags)) *flags &= ~FAULT_FLAG_UNSHARE; + } else if (*flags & FAULT_FLAG_WRITE) { + /* Write faults on read-only mappings are impossible ... */ + if (WARN_ON_ONCE(!(vma->vm_flags & VM_MAYWRITE))) + return VM_FAULT_SIGSEGV; + /* ... and FOLL_FORCE only applies to COW mappings. */ + if (WARN_ON_ONCE(!(vma->vm_flags & VM_WRITE) && + !is_cow_mapping(vma->vm_flags))) + return VM_FAULT_SIGSEGV; } return 0; } From patchwork Wed Nov 16 10:26:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044699 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A17A7C433FE for ; Wed, 16 Nov 2022 10:32:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=X9hEVHdE2MVYJhn0WdLHHIYj4FFmn6E1Tjl2Jd4qZXg=; b=OZtdVcJnPokFAQ DEEVYDB9CDaL+YJzuM488cZGLxWrrD5TS+Y6M5ibjz9FByXi9eNqm39LYOCQw2djwMSYa/ep7Aban loARgP/m5jhboam/RDDVzgYLcwGp2NroTaJynYZLtF0IcP2F0gFf5sgmp9bC2WrCGaL4ICf9MFo3l LKp1+UK4aT1pY44x7mEcyREULxa2OC/OvmHcm8DIEgNuiVWPVUdstD/L5QPaB+MHlIEzgywH+lMP8 MAabkCDiT+WVixtTMehuhjawJAmbNPvHine8oA+7JXRi1ALEI2tVbqs3Rl6etE8jYD4Xy2mBCX5Ko +nCyAfxXm+5b0ZRtRblw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFh7-002ERe-WE; Wed, 16 Nov 2022 10:31:02 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFeQ-002CjL-Rb for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:28:16 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1668594493; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=/xieaaG4AIgXK7pR548ohi1s1MYoDoPLtryAR7VPuVQ=; b=Yb14XkKnQ2YNMoSEpQTvq9wu/A7O/0oclIzrMsWvij2y5cxzpt5GZpo6MW8Dkz/04SGgos KgOYsRyIpmgx2ySgETHCX99bXVOxxwzh/+jLIMamaYn++BO6WLnQ37Lrj/giP3JzlRO5qz fpV17GMScOSg5WBF3iWcDr5my1AqJ1A= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 
us-mta-279-9BEEcW7rOKydj62ElCAQ0Q-1; Wed, 16 Nov 2022 05:28:10 -0500 X-MC-Unique: 9BEEcW7rOKydj62ElCAQ0Q-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id C77003817A7A; Wed, 16 Nov 2022 10:28:08 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 183A12028E8F; Wed, 16 Nov 2022 10:28:00 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand Subject: [PATCH mm-unstable v1 06/20] mm: rework handling in do_wp_page() based on private vs. shared mappings Date: Wed, 16 Nov 2022 11:26:45 +0100 Message-Id: <20221116102659.70287-7-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_022815_016093_728B27AF X-CRM114-Status: GOOD ( 19.73 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org We want to extent FAULT_FLAG_UNSHARE support to anything mapped into a COW mapping (pagecache page, zeropage, PFN, ...), not just anonymous pages. Let's prepare for that by handling shared mappings first such that we can handle private mappings last. While at it, use folio-based functions instead of page-based functions where we touch the code either way. Signed-off-by: David Hildenbrand Reviewed-by: Vlastimil Babka --- mm/memory.c | 38 +++++++++++++++++--------------------- 1 file changed, 17 insertions(+), 21 deletions(-) diff --git a/mm/memory.c b/mm/memory.c index c4fa378ec2a0..c35e6cd32b6a 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3342,7 +3342,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) { const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE; struct vm_area_struct *vma = vmf->vma; - struct folio *folio; + struct folio *folio = NULL; if (likely(!unshare)) { if (userfaultfd_pte_wp(vma, *vmf->pte)) { @@ -3360,13 +3360,12 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) } vmf->page = vm_normal_page(vma, vmf->address, vmf->orig_pte); - if (!vmf->page) { - if (unlikely(unshare)) { - /* No anonymous page -> nothing to do. 
*/ - pte_unmap_unlock(vmf->pte, vmf->ptl); - return 0; - } + /* + * Shared mapping: we are guaranteed to have VM_WRITE and + * FAULT_FLAG_WRITE set at this point. + */ + if (vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) { /* * VM_MIXEDMAP !pfn_valid() case, or VM_SOFTDIRTY clear on a * VM_PFNMAP VMA. @@ -3374,20 +3373,19 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) * We should not cow pages in a shared writeable mapping. * Just mark the pages writable and/or call ops->pfn_mkwrite. */ - if ((vma->vm_flags & (VM_WRITE|VM_SHARED)) == - (VM_WRITE|VM_SHARED)) + if (!vmf->page) return wp_pfn_shared(vmf); - - pte_unmap_unlock(vmf->pte, vmf->ptl); - return wp_page_copy(vmf); + return wp_page_shared(vmf); } + if (vmf->page) + folio = page_folio(vmf->page); + /* - * Take out anonymous pages first, anonymous shared vmas are - * not dirty accountable. + * Private mapping: create an exclusive anonymous page copy if reuse + * is impossible. We might miss VM_WRITE for FOLL_FORCE handling. */ - folio = page_folio(vmf->page); - if (folio_test_anon(folio)) { + if (folio && folio_test_anon(folio)) { /* * If the page is exclusive to this process we must reuse the * page without further checks. @@ -3438,19 +3436,17 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) /* No anonymous page -> nothing to do. */ pte_unmap_unlock(vmf->pte, vmf->ptl); return 0; - } else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) == - (VM_WRITE|VM_SHARED))) { - return wp_page_shared(vmf); } copy: /* * Ok, we need to copy. Oh, well.. */ - get_page(vmf->page); + if (folio) + folio_get(folio); pte_unmap_unlock(vmf->pte, vmf->ptl); #ifdef CONFIG_KSM - if (PageKsm(vmf->page)) + if (folio && folio_test_ksm(folio)) count_vm_event(COW_KSM); #endif return wp_page_copy(vmf); From patchwork Wed Nov 16 10:26:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044768 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id ED297C433FE for ; Wed, 16 Nov 2022 10:33:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=D+fygGTv8IUCPJtI6r1ryzOxdRuFUgRymKiLuCpBH+w=; b=iDZoeoG569qJ0J Cs3gSDrRscZWnALuSfx6saW+adC95P61jVkMj2uWmKzYuScc0OG1g/ri+dHFlz6m2FPHtQNmrJRuj 3H+G1bH3e/zMAl9tkgn2vDtEOWaIjSIQnMj0AFGJFRQG3i09DebHpHHa1TeTuecZDrmG+dtogPzHm pff9IAO6yiv3ehaGarfaGHzT479CKMN6SCyNzIllnx7Rxd9bntoHxgcW6d04fDruPXKCd1fA3Ok28 b4YV1h5uF+RrRgIy2J3d/0aDLPohudwfpCCh2Gnue5QwBURv/6q5EbkCc5BZqx9cLteAJlW5g8dUB X/5DKfhZB6BByamgm/tQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFhX-002Edg-9D; Wed, 16 Nov 2022 10:31:27 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFeY-002Coy-Iu 
for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:28:24 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1668594501; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=G7hQ0RjvfvFzJ7xHsADTRuI/sN47/U3pNJiDmnh86Zg=; b=DNtyZUGaBEoMHb4dk2woqqxcvh3uRZ5IxEIlqKOrO+29uwYW0/xZaduQ++/eOzZsw5fmzI kdJGAFGuXTBCdzm64Tb2jwx26vV05u/G2XDZxL/ezWrKFRfDchC9DtormYTQZ8VENNBJB2 ZgY5p7IHIJY+xPlR0R7fL0N9/2PtxZI= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-342-y1dt35sINka8k4sGs1wnag-1; Wed, 16 Nov 2022 05:28:17 -0500 X-MC-Unique: y1dt35sINka8k4sGs1wnag-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id D8854101CC62; Wed, 16 Nov 2022 10:28:15 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 140632024CC8; Wed, 16 Nov 2022 10:28:08 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand Subject: [PATCH mm-unstable v1 07/20] mm: don't call vm_ops->huge_fault() in wp_huge_pmd()/wp_huge_pud() for private mappings Date: Wed, 16 Nov 2022 11:26:46 +0100 Message-Id: <20221116102659.70287-8-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_022822_720627_C7F77ACC X-CRM114-Status: GOOD ( 16.80 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org If we already have a PMD/PUD mapped write-protected in a private mapping and we want to break COW either due to FAULT_FLAG_WRITE or FAULT_FLAG_UNSHARE, there is no need to inform the file system just like on the PTE path. Let's just split (->zap) + fallback in that case. This is a preparation for more generic FAULT_FLAG_UNSHARE support in COW mappings. 
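For reference, "COW mapping" throughout this series means a private writable mapping. A minimal sketch of that distinction, matching the kernel's is_cow_mapping() helper that a later patch in this series uses from gup_must_unshare():

    /* Private writable ("COW") mapping: writable or mprotect()-able to
     * writable (VM_MAYWRITE), but not shared (no VM_SHARED). */
    static inline bool is_cow_mapping(vm_flags_t flags)
    {
            return (flags & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE;
    }

The shared zeropage, pagecache pages and PFN mappings inside such a mapping are exactly what the later patches teach FAULT_FLAG_UNSHARE to replace by an exclusive anonymous page.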
Signed-off-by: David Hildenbrand Reviewed-by: Vlastimil Babka --- mm/memory.c | 24 +++++++++++++++--------- 1 file changed, 15 insertions(+), 9 deletions(-) diff --git a/mm/memory.c b/mm/memory.c index c35e6cd32b6a..d47ad33c6487 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4802,6 +4802,7 @@ static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf) static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf) { const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE; + vm_fault_t ret; if (vma_is_anonymous(vmf->vma)) { if (likely(!unshare) && @@ -4809,11 +4810,13 @@ static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf) return handle_userfault(vmf, VM_UFFD_WP); return do_huge_pmd_wp_page(vmf); } - if (vmf->vma->vm_ops->huge_fault) { - vm_fault_t ret = vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD); - if (!(ret & VM_FAULT_FALLBACK)) - return ret; + if (vmf->vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) { + if (vmf->vma->vm_ops->huge_fault) { + ret = vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD); + if (!(ret & VM_FAULT_FALLBACK)) + return ret; + } } /* COW or write-notify handled on pte level: split pmd. */ @@ -4839,14 +4842,17 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud) { #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \ defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) + vm_fault_t ret; + /* No support for anonymous transparent PUD pages yet */ if (vma_is_anonymous(vmf->vma)) goto split; - if (vmf->vma->vm_ops->huge_fault) { - vm_fault_t ret = vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PUD); - - if (!(ret & VM_FAULT_FALLBACK)) - return ret; + if (vmf->vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) { + if (vmf->vma->vm_ops->huge_fault) { + ret = vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PUD); + if (!(ret & VM_FAULT_FALLBACK)) + return ret; + } } split: /* COW or write-notify not handled on PUD level: split pud.*/ From patchwork Wed Nov 16 10:26:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044769 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 366CBC4332F for ; Wed, 16 Nov 2022 10:33:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=wsjgK5HZgkqeOcoWUYrU5Ykw80PRDiYw0I5f4GgJYxQ=; b=SnfBWVgeNVzwdk Xho51ium+swTXIkqpWPC92nrpODAm448Xi31npQja49JqVg5ozDmz10VVv4wPaoM68jj9tIW4JSUG o9wdpNwSDO5TrFJ+j2wMyq/ne+HQIl4nXg72rQOeIngh+8GZWwJrsWxb57lVqrwSqH+DO1WwAQgEl Werh94puGsDkNp+hjO6FQ2FQbBfkq///HHCLFQoykNlMY9dq0/6QBPX5AHwYFZdPnTGg9W3G2tnfd szfkilH1f6XP/ET3j+wB0wHkQYd/gNbWn96B68qiYUzUvaQqDJmltjARwIzmocM2W+3DtM4NJpGri glgXn4qipe4/31wGHEcw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFi5-002Ew1-6I; Wed, 16 Nov 2022 10:32:01 +0000 Received: from us-smtp-delivery-124.mimecast.com 
([170.10.133.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFef-002CtL-7D for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:28:31 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1668594508; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=B1T+ot5qRtqPN1LpUOFfkK/yXOB9NMbn5pFr5NKSN84=; b=RazRV4DhTF22N2z6R7dcS1xdxC6oq7mUXRhOHcchdx1SQWtKUe8D2d2KuxDGfHeJFBY2ZR XjSn4ofvEtJE52v+d+CjZbhD067R5STXgXFNqDiEHVI44Ch95/NZCT+8LqjH1MfJX4o0Bj 2KHFUEwFHLlHEqt8lQgrAiOjyZk0jro= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-412-r17LTSJHM5C5l1EBulZ-LQ-1; Wed, 16 Nov 2022 05:28:24 -0500 X-MC-Unique: r17LTSJHM5C5l1EBulZ-LQ-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id D71761C0A110; Wed, 16 Nov 2022 10:28:22 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 3E25A2024CCA; Wed, 16 Nov 2022 10:28:16 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand Subject: [PATCH mm-unstable v1 08/20] mm: extend FAULT_FLAG_UNSHARE support to anything in a COW mapping Date: Wed, 16 Nov 2022 11:26:47 +0100 Message-Id: <20221116102659.70287-9-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_022829_348960_DC1A2418 X-CRM114-Status: GOOD ( 20.58 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Extend FAULT_FLAG_UNSHARE to break COW on anything mapped into a COW (i.e., private writable) mapping and adjust the documentation accordingly. FAULT_FLAG_UNSHARE will now also break COW when encountering the shared zeropage, a pagecache page, a PFNMAP, ... 
inside a COW mapping, by properly replacing the mapped page/pfn by a private copy (an exclusive anonymous page). Note that only do_wp_page() needs care: hugetlb_wp() already handles FAULT_FLAG_UNSHARE correctly. wp_huge_pmd()/wp_huge_pud() also handles it correctly, for example, splitting the huge zeropage on FAULT_FLAG_UNSHARE such that we can handle FAULT_FLAG_UNSHARE on the PTE level. This change is a requirement for reliable long-term R/O pinning in COW mappings. Signed-off-by: David Hildenbrand Reviewed-by: Vlastimil Babka --- include/linux/mm_types.h | 8 ++++---- mm/memory.c | 4 ---- 2 files changed, 4 insertions(+), 8 deletions(-) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 5e7f4fac1e78..5e9aaad8c7b2 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -1037,9 +1037,9 @@ typedef struct { * @FAULT_FLAG_REMOTE: The fault is not for current task/mm. * @FAULT_FLAG_INSTRUCTION: The fault was during an instruction fetch. * @FAULT_FLAG_INTERRUPTIBLE: The fault can be interrupted by non-fatal signals. - * @FAULT_FLAG_UNSHARE: The fault is an unsharing request to unshare (and mark - * exclusive) a possibly shared anonymous page that is - * mapped R/O. + * @FAULT_FLAG_UNSHARE: The fault is an unsharing request to break COW in a + * COW mapping, making sure that an exclusive anon page is + * mapped after the fault. * @FAULT_FLAG_ORIG_PTE_VALID: whether the fault has vmf->orig_pte cached. * We should only access orig_pte if this flag set. * @@ -1064,7 +1064,7 @@ typedef struct { * * The combination FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE is illegal. * FAULT_FLAG_UNSHARE is ignored and treated like an ordinary read fault when - * no existing R/O-mapped anonymous page is encountered. + * applied to mappings that are not COW mappings. */ enum fault_flag { FAULT_FLAG_WRITE = 1 << 0, diff --git a/mm/memory.c b/mm/memory.c index d47ad33c6487..56b21ab1e4d2 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3432,10 +3432,6 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) } wp_page_reuse(vmf); return 0; - } else if (unshare) { - /* No anonymous page -> nothing to do. 
*/ - pte_unmap_unlock(vmf->pte, vmf->ptl); - return 0; } copy: /* From patchwork Wed Nov 16 10:26:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044770 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id F1BD3C4332F for ; Wed, 16 Nov 2022 10:33:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=2C/l9eVxI/Vf+4BOOpG9DBLRaoSq0yxWQ1NP0rivIek=; b=F9bW2CuVO8H2/S sfmii3kl8BgHNA31XA3DPrgZOeshFyjJDgO34QJqvQu2Rx5H5VpwZzGP+PrIrDyYWHortNg7J31Km DCS4+WlExXK9Xt6EQ/Kg3rFA/foUwAH4mSBmzK32R56CULEbEUOhXgdboS7JdFn47qF+bTiOJIQFd rpV7Vm9wZQosaTxxoDwoFR8IMxCuFJnKVIUBcMnGHtHj0m1FZAPGVQT+0SOXHVrnPwfXbIlFOxmOH +crsf4wnlHgJH3gkhkTcH8Qo4Rg65ZF13X40pqerNIUM1NsQy8yGQYYwWp/hdC1xZ0Bv4vfT6gyKy A3W/onlLrMzOF2YGXYUw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFiO-002F3i-DD; Wed, 16 Nov 2022 10:32:21 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFel-002CxR-V1 for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:28:38 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1668594515; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=MyRNKgE9VejcvK3KnE6hG0vB3Gd3R9NfE1yvuVtmrG0=; b=QWRb2GpGUY28wVCzrNoO2tCLUkHrY1JCG4m5qMj8BSfIz9CdqQcLp0kPiCIH0es1PnhTPM 7MSUd3tdQ88DnpcwKRFvqYJRHfLwpdiIoYhCLV74qoNFGgNPlE3stUFfHNr3491mopwfOB xEsMV9LzlMVEepivnF5abWUABDfelkg= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-1-x5l38RKeO-CqL_8lbQ5Wdg-1; Wed, 16 Nov 2022 05:28:31 -0500 X-MC-Unique: x5l38RKeO-CqL_8lbQ5Wdg-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 96DF7101A528; Wed, 16 Nov 2022 10:28:29 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 287292028E8F; Wed, 16 Nov 2022 10:28:23 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, 
dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand Subject: [PATCH mm-unstable v1 09/20] mm/gup: reliable R/O long-term pinning in COW mappings Date: Wed, 16 Nov 2022 11:26:48 +0100 Message-Id: <20221116102659.70287-10-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_022836_123692_4B5D69B8 X-CRM114-Status: GOOD ( 28.70 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org We already support reliable R/O pinning of anonymous memory. However, assume we end up pinning (R/O long-term) a pagecache page or the shared zeropage inside a writable private ("COW") mapping. The next write access will trigger a write-fault and replace the pinned page by an exclusive anonymous page in the process page tables to break COW: the pinned page no longer corresponds to the page mapped into the process' page table. Now that FAULT_FLAG_UNSHARE can break COW on anything mapped into a COW mapping, let's properly break COW first before R/O long-term pinning something that's not an exclusive anon page inside a COW mapping. FAULT_FLAG_UNSHARE will break COW and map an exclusive anon page instead that can get pinned safely. With this change, we can stop using FOLL_FORCE|FOLL_WRITE for reliable R/O long-term pinning in COW mappings. With this change, the new R/O long-term pinning tests for non-anonymous memory succeed: # [RUN] R/O longterm GUP pin ... with shared zeropage ok 151 Longterm R/O pin is reliable # [RUN] R/O longterm GUP pin ... with memfd ok 152 Longterm R/O pin is reliable # [RUN] R/O longterm GUP pin ... with tmpfile ok 153 Longterm R/O pin is reliable # [RUN] R/O longterm GUP pin ... with huge zeropage ok 154 Longterm R/O pin is reliable # [RUN] R/O longterm GUP pin ... with memfd hugetlb (2048 kB) ok 155 Longterm R/O pin is reliable # [RUN] R/O longterm GUP pin ... with memfd hugetlb (1048576 kB) ok 156 Longterm R/O pin is reliable # [RUN] R/O longterm GUP-fast pin ... with shared zeropage ok 157 Longterm R/O pin is reliable # [RUN] R/O longterm GUP-fast pin ... with memfd ok 158 Longterm R/O pin is reliable # [RUN] R/O longterm GUP-fast pin ... with tmpfile ok 159 Longterm R/O pin is reliable # [RUN] R/O longterm GUP-fast pin ... with huge zeropage ok 160 Longterm R/O pin is reliable # [RUN] R/O longterm GUP-fast pin ... with memfd hugetlb (2048 kB) ok 161 Longterm R/O pin is reliable # [RUN] R/O longterm GUP-fast pin ... 
with memfd hugetlb (1048576 kB) ok 162 Longterm R/O pin is reliable Note 1: We don't care about short-term R/O-pinning, because they have snapshot semantics: they are not supposed to observe modifications that happen after pinning. As one example, assume we start direct I/O to read from a page and store page content into a file: modifications to page content after starting direct I/O are not guaranteed to end up in the file. So even if we'd pin the shared zeropage, the end result would be as expected -- getting zeroes stored to the file. Note 2: For shared mappings we'll now always fallback to the slow path to lookup the VMA when R/O long-term pining. While that's the necessary price we have to pay right now, it's actually not that bad in practice: most FOLL_LONGTERM users already specify FOLL_WRITE, for example, along with FOLL_FORCE because they tried dealing with COW mappings correctly ... Note 3: For users that use FOLL_LONGTERM right now without FOLL_WRITE, such as VFIO, we'd now no longer pin the shared zeropage. Instead, we'd populate exclusive anon pages that we can pin. There was a concern that this could affect the memlock limit of existing setups. For example, a VM running with VFIO could run into the memlock limit and fail to run. However, we essentially had the same behavior already in commit 17839856fd58 ("gup: document and work around "COW can break either way" issue") which got merged into some enterprise distros, and there were not any such complaints. So most probably, we're fine. Signed-off-by: David Hildenbrand Acked-by: Daniel Vetter Reviewed-by: Vlastimil Babka Reviewed-by: John Hubbard --- include/linux/mm.h | 27 ++++++++++++++++++++++++--- mm/gup.c | 10 +++++----- mm/huge_memory.c | 2 +- mm/hugetlb.c | 7 ++++--- 4 files changed, 34 insertions(+), 12 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 6bd2ee5872dd..e8cc838f42f9 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3095,8 +3095,12 @@ static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags) * Must be called with the (sub)page that's actually referenced via the * page table entry, which might not necessarily be the head page for a * PTE-mapped THP. + * + * If the vma is NULL, we're coming from the GUP-fast path and might have + * to fallback to the slow path just to lookup the vma. */ -static inline bool gup_must_unshare(unsigned int flags, struct page *page) +static inline bool gup_must_unshare(struct vm_area_struct *vma, + unsigned int flags, struct page *page) { /* * FOLL_WRITE is implicitly handled correctly as the page table entry @@ -3109,8 +3113,25 @@ static inline bool gup_must_unshare(unsigned int flags, struct page *page) * Note: PageAnon(page) is stable until the page is actually getting * freed. */ - if (!PageAnon(page)) - return false; + if (!PageAnon(page)) { + /* + * We only care about R/O long-term pining: R/O short-term + * pinning does not have the semantics to observe successive + * changes through the process page tables. + */ + if (!(flags & FOLL_LONGTERM)) + return false; + + /* We really need the vma ... */ + if (!vma) + return true; + + /* + * ... because we only care about writable private ("COW") + * mappings where we have to break COW early. + */ + return is_cow_mapping(vma->vm_flags); + } /* Paired with a memory barrier in page_try_share_anon_rmap(). 
*/ if (IS_ENABLED(CONFIG_HAVE_FAST_GUP)) diff --git a/mm/gup.c b/mm/gup.c index 5182abaaecde..01116699c863 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -578,7 +578,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma, } } - if (!pte_write(pte) && gup_must_unshare(flags, page)) { + if (!pte_write(pte) && gup_must_unshare(vma, flags, page)) { page = ERR_PTR(-EMLINK); goto out; } @@ -2338,7 +2338,7 @@ static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr, goto pte_unmap; } - if (!pte_write(pte) && gup_must_unshare(flags, page)) { + if (!pte_write(pte) && gup_must_unshare(NULL, flags, page)) { gup_put_folio(folio, 1, flags); goto pte_unmap; } @@ -2506,7 +2506,7 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr, return 0; } - if (!pte_write(pte) && gup_must_unshare(flags, &folio->page)) { + if (!pte_write(pte) && gup_must_unshare(NULL, flags, &folio->page)) { gup_put_folio(folio, refs, flags); return 0; } @@ -2572,7 +2572,7 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr, return 0; } - if (!pmd_write(orig) && gup_must_unshare(flags, &folio->page)) { + if (!pmd_write(orig) && gup_must_unshare(NULL, flags, &folio->page)) { gup_put_folio(folio, refs, flags); return 0; } @@ -2612,7 +2612,7 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr, return 0; } - if (!pud_write(orig) && gup_must_unshare(flags, &folio->page)) { + if (!pud_write(orig) && gup_must_unshare(NULL, flags, &folio->page)) { gup_put_folio(folio, refs, flags); return 0; } diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 68d00196b519..dec7a7c0eca8 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1434,7 +1434,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma, if (pmd_protnone(*pmd) && !gup_can_follow_protnone(flags)) return NULL; - if (!pmd_write(*pmd) && gup_must_unshare(flags, page)) + if (!pmd_write(*pmd) && gup_must_unshare(vma, flags, page)) return ERR_PTR(-EMLINK); VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) && diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 383b26069b33..c3aab6d5b7aa 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -6195,7 +6195,8 @@ static void record_subpages_vmas(struct page *page, struct vm_area_struct *vma, } } -static inline bool __follow_hugetlb_must_fault(unsigned int flags, pte_t *pte, +static inline bool __follow_hugetlb_must_fault(struct vm_area_struct *vma, + unsigned int flags, pte_t *pte, bool *unshare) { pte_t pteval = huge_ptep_get(pte); @@ -6207,7 +6208,7 @@ static inline bool __follow_hugetlb_must_fault(unsigned int flags, pte_t *pte, return false; if (flags & FOLL_WRITE) return true; - if (gup_must_unshare(flags, pte_page(pteval))) { + if (gup_must_unshare(vma, flags, pte_page(pteval))) { *unshare = true; return true; } @@ -6336,7 +6337,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma, * directly from any kind of swap entries. 
*/ if (absent || - __follow_hugetlb_must_fault(flags, pte, &unshare)) { + __follow_hugetlb_must_fault(vma, flags, pte, &unshare)) { vm_fault_t ret; unsigned int fault_flags = 0; From patchwork Wed Nov 16 10:26:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044999 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8C227C4332F for ; Wed, 16 Nov 2022 11:46:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=CuO6vY18Q1cg/PtnACK51jhGls0I0/vWqNgJ/fRvNbU=; b=lcCWq6Q5Kn1IY8 TFMlLD/2a3g2UEolvYCFiS638bk+FAJahnMr4ZqSi+bacnAwzgC0w+qy4TRpvqNAb/XauzhJRv75L JB9DkillxdqrQX48oecNTpeZrmWucGW3YUdU45PsYkzQosUmOiotFz7GxSAB/fkU+3ESwpBb+52RM +6QDl4sgeEAwznz+kHD4OoBgI8+1hy1xL1EziS2JT/2AOAG4G1Ziscw9W8eL1idvFjGYHYC/Jie0d 9O+t6jj9hXfn4PUIaJPAYqMBjYOD7dTWRrSTlpHHkAZtNr1hBdedM/+zfO127AO1ZnlMRhgoHK88X cfeH9Q07BsGcHmfeXwPg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovGrO-0035GV-57; Wed, 16 Nov 2022 11:45:42 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFgO-002E6j-QS for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:30:24 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1668594615; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=xyj8jS+C9wqOk3KceDQTo75wMiWZgljnGSBlM8HQloY=; b=K+zwqjUbSN1QawrCbmSLu+2ZBH2nfPdtoUHXux0A9qLFKjfYekaNPqGN5PWn/IJmHbXvNf ErnTBPFA9bpgCq+ez329EcnFZmiYUyY+owFKV0P7YbzcZH/cp3S3IRjQF9Unj1+Hzk98hT VbgKFyJYZKQlAwWTHAiL0jwFS4gDs3A= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-156-1i0B0n5wO9KbojDzH-noSA-1; Wed, 16 Nov 2022 05:28:39 -0500 X-MC-Unique: 1i0B0n5wO9KbojDzH-noSA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 6CE1B86F132; Wed, 16 Nov 2022 10:28:37 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id D7D7C2028E8F; Wed, 16 Nov 2022 10:28:29 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, 
linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand , Leon Romanovsky , Leon Romanovsky Subject: [PATCH mm-unstable v1 10/20] RDMA/umem: remove FOLL_FORCE usage Date: Wed, 16 Nov 2022 11:26:49 +0100 Message-Id: <20221116102659.70287-11-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_023017_048469_67E973D3 X-CRM114-Status: GOOD ( 15.86 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org GUP now supports reliable R/O long-term pinning in COW mappings, such that we break COW early. MAP_SHARED VMAs only use the shared zeropage so far in one corner case (DAXFS file with holes), which can be ignored because GUP does not support long-term pinning in fsdax (see check_vma_flags()). Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop using FOLL_FORCE, which is really only for ptrace access. Tested-by: Leon Romanovsky # Over mlx4 and mlx5. 
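The flag handling applied here, and repeated in the follow-up driver conversions, boils down to the following pattern (a simplified sketch; start, npages, pages and writable are placeholder names, not the driver's exact variables):

    unsigned int gup_flags = FOLL_LONGTERM;

    if (writable)
            gup_flags |= FOLL_WRITE;
    /* No FOLL_FORCE: that flag is reserved for ptrace-style access. */

    pinned = pin_user_pages_fast(start, npages, gup_flags, pages);

R/O pins now rely on FOLL_LONGTERM alone to break COW up front, instead of pretending to be a forced write via FOLL_FORCE | FOLL_WRITE.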
Cc: Jason Gunthorpe Cc: Leon Romanovsky Signed-off-by: David Hildenbrand Reviewed-by: Jason Gunthorpe --- drivers/infiniband/core/umem.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c index 86d479772fbc..755a9c57db6f 100644 --- a/drivers/infiniband/core/umem.c +++ b/drivers/infiniband/core/umem.c @@ -156,7 +156,7 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr, struct mm_struct *mm; unsigned long npages; int pinned, ret; - unsigned int gup_flags = FOLL_WRITE; + unsigned int gup_flags = FOLL_LONGTERM; /* * If the combination of the addr and size requested for this memory @@ -210,8 +210,8 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr, cur_base = addr & PAGE_MASK; - if (!umem->writable) - gup_flags |= FOLL_FORCE; + if (umem->writable) + gup_flags |= FOLL_WRITE; while (npages) { cond_resched(); @@ -219,7 +219,7 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr, min_t(unsigned long, npages, PAGE_SIZE / sizeof(struct page *)), - gup_flags | FOLL_LONGTERM, page_list); + gup_flags, page_list); if (pinned < 0) { ret = pinned; goto umem_release; From patchwork Wed Nov 16 10:26:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044771 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 99EDEC4332F for ; Wed, 16 Nov 2022 10:34:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=D7msZH96kwJbF/2WgSCR73YnoHopd5UQjetGCCAcCTQ=; b=WoMDXu8CjdU7wv yn/oYomFoAvfSUMX4hH4Z5ZIE27DxonO/r+qlfGX057kI+4B4Se7wMDHo3xfhKS5E4MGwJ1QVvPIc JOTwnKOeEDDwrarXc/zlgxUMpiWI9gMYwbdwe0FdL3VFi23bv2/1HFududRFp7e/763daTtyuSJJj wNkU3ASKvKtvXyTMRq5m/y0bKY4fkHRCO+DpqQlgw8olXuhrtGTxw+7eSMTvm9AEDEeflvki6TCm+ ivu6eFk9wF/36Scl9kRjNGSvXXPsGE1vTZ3pXt6VNOPVXPx367euDRKH/hU9CLi8sscnRidLlYHMQ DOVr6RyO6wqnBz3HW3Jg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFj2-002FR1-7z; Wed, 16 Nov 2022 10:33:00 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFf2-002DAB-7j for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:28:54 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1668594531; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=AXY/UyjjpCuDyNIViwtXXXaWlWVq5fOjr9nC+t8u+wk=; b=hkWiEhdEw2mADjIJkiYlZI00jEhVCoKb4hWbLnACtfkSU/PLi9qsxgeTRObEUSvXNev+eP 
S+YIR9aQ5+7Q3Hw0RNI6wo140wxirCXesEeb6ZmDmREBsZ6f4aNxGQDVxZm9nuo/O/6hQ3 feh4CG5nMhwYkaro1zYnpoCfQttdD90= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-325-nf3OqAa0Mp2NEpiQ4Fl_4w-1; Wed, 16 Nov 2022 05:28:47 -0500 X-MC-Unique: nf3OqAa0Mp2NEpiQ4Fl_4w-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 43CBC1C0A10F; Wed, 16 Nov 2022 10:28:45 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id C3F5B2028E8F; Wed, 16 Nov 2022 10:28:37 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand , Christian Benvenuti , Nelson Escobar , Leon Romanovsky Subject: [PATCH mm-unstable v1 11/20] RDMA/usnic: remove FOLL_FORCE usage Date: Wed, 16 Nov 2022 11:26:50 +0100 Message-Id: <20221116102659.70287-12-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_022852_370041_D4E5557C X-CRM114-Status: GOOD ( 15.96 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org GUP now supports reliable R/O long-term pinning in COW mappings, such that we break COW early. MAP_SHARED VMAs only use the shared zeropage so far in one corner case (DAXFS file with holes), which can be ignored because GUP does not support long-term pinning in fsdax (see check_vma_flags()). Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop using FOLL_FORCE, which is really only for ptrace access. 
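The fsdax restriction mentioned above is enforced when GUP validates the VMA; roughly (paraphrased from check_vma_flags(), the exact check and error code may differ):

    /* Paraphrased sketch, not the literal check_vma_flags() code. */
    if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
            return -EOPNOTSUPP;     /* no long-term pins on fsdax VMAs */

So the DAXFS-file-with-holes corner case never reaches long-term pinning in the first place.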
Cc: Christian Benvenuti Cc: Nelson Escobar Cc: Jason Gunthorpe Cc: Leon Romanovsky Signed-off-by: David Hildenbrand Reviewed-by: Jason Gunthorpe --- drivers/infiniband/hw/usnic/usnic_uiom.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c index 67923ced6e2d..c301b3be9f30 100644 --- a/drivers/infiniband/hw/usnic/usnic_uiom.c +++ b/drivers/infiniband/hw/usnic/usnic_uiom.c @@ -85,6 +85,7 @@ static int usnic_uiom_get_pages(unsigned long addr, size_t size, int writable, int dmasync, struct usnic_uiom_reg *uiomr) { struct list_head *chunk_list = &uiomr->chunk_list; + unsigned int gup_flags = FOLL_LONGTERM; struct page **page_list; struct scatterlist *sg; struct usnic_uiom_chunk *chunk; @@ -96,7 +97,6 @@ static int usnic_uiom_get_pages(unsigned long addr, size_t size, int writable, int off; int i; dma_addr_t pa; - unsigned int gup_flags; struct mm_struct *mm; /* @@ -131,8 +131,8 @@ static int usnic_uiom_get_pages(unsigned long addr, size_t size, int writable, goto out; } - gup_flags = FOLL_WRITE; - gup_flags |= (writable) ? 0 : FOLL_FORCE; + if (writable) + gup_flags |= FOLL_WRITE; cur_base = addr & PAGE_MASK; ret = 0; @@ -140,8 +140,7 @@ static int usnic_uiom_get_pages(unsigned long addr, size_t size, int writable, ret = pin_user_pages(cur_base, min_t(unsigned long, npages, PAGE_SIZE / sizeof(struct page *)), - gup_flags | FOLL_LONGTERM, - page_list, NULL); + gup_flags, page_list, NULL); if (ret < 0) goto out; From patchwork Wed Nov 16 10:26:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044773 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id CABB6C4332F for ; Wed, 16 Nov 2022 10:36:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=uucwisRWOr3gUeh9rVR72Gc2dTwSJNAcJ9PAg3TqsBk=; b=goX8LxFtPyE5td Covca7azgKawJKlA/MkG3r4GxEzIw4EBz3U8jjCOfD8/VVskD+ReVA7vECZ2r9vBPkzBbybNp7X04 6fd/k86yL/qxJmUje0mObZbXQ3MeqL+dzZSvuI6kFle/uNFUP5QjWm72/ZmmiFdIV+68Oj1cdpZPn q6c8nGyn1b3GBT2UTzsR9+7purN2X8/q3rpv83/9E3cRgytVAmj9ncy8dxx5JDUQ3Vkn/Uc+YQ1bN +RajP9wt2dtWM2nTX5ZM41aVl+fk4nN25AmTsydbbHrQncngf1xmK3i5BvShxguPeGfvdyj5glSxn NdWwaZyliQ+JrBmiRhFA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFkl-002GXt-Se; Wed, 16 Nov 2022 10:34:48 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFfH-002DHl-0s for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:29:10 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1668594546; 
h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Os9mhvYmMDrU7oRu07Gt40tRhtITIyLN3UTJWIxCBGQ=; b=Np5mjxEQ6nkEvC55WFc+8lnTQduUxuP+2dfPKZNeS2rk/mVM8fhjpQwnWucL2zS360O6zL RbnP6kXzyBTFlhakIQvJ8B2E+axaTZrD34u0z2BLjSCx9ZqeftT79MQqpx70HDKLanLY2+ qjkb3nFkqOTWvmQeLXWZKPOlbxCxVc8= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-169-8bSBee3zN2WWjnQux77ZrA-1; Wed, 16 Nov 2022 05:28:54 -0500 X-MC-Unique: 8bSBee3zN2WWjnQux77ZrA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E444E29ABA07; Wed, 16 Nov 2022 10:28:52 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 8254B20290A5; Wed, 16 Nov 2022 10:28:45 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand , Bernard Metzler , Leon Romanovsky Subject: [PATCH mm-unstable v1 12/20] RDMA/siw: remove FOLL_FORCE usage Date: Wed, 16 Nov 2022 11:26:51 +0100 Message-Id: <20221116102659.70287-13-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_022907_197365_5FB03330 X-CRM114-Status: GOOD ( 15.19 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org GUP now supports reliable R/O long-term pinning in COW mappings, such that we break COW early. MAP_SHARED VMAs only use the shared zeropage so far in one corner case (DAXFS file with holes), which can be ignored because GUP does not support long-term pinning in fsdax (see check_vma_flags()). Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop using FOLL_FORCE, which is really only for ptrace access. 
Cc: Bernard Metzler Cc: Jason Gunthorpe Cc: Leon Romanovsky Signed-off-by: David Hildenbrand Reviewed-by: Jason Gunthorpe --- drivers/infiniband/sw/siw/siw_mem.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c index 61c17db70d65..b2b33dd3b4fa 100644 --- a/drivers/infiniband/sw/siw/siw_mem.c +++ b/drivers/infiniband/sw/siw/siw_mem.c @@ -368,7 +368,7 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable) struct mm_struct *mm_s; u64 first_page_va; unsigned long mlock_limit; - unsigned int foll_flags = FOLL_WRITE; + unsigned int foll_flags = FOLL_LONGTERM; int num_pages, num_chunks, i, rv = 0; if (!can_do_mlock()) @@ -391,8 +391,8 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable) mmgrab(mm_s); - if (!writable) - foll_flags |= FOLL_FORCE; + if (writable) + foll_flags |= FOLL_WRITE; mmap_read_lock(mm_s); @@ -423,8 +423,7 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable) while (nents) { struct page **plist = &umem->page_chunk[i].plist[got]; - rv = pin_user_pages(first_page_va, nents, - foll_flags | FOLL_LONGTERM, + rv = pin_user_pages(first_page_va, nents, foll_flags, plist, NULL); if (rv < 0) goto out_sem_up; From patchwork Wed Nov 16 10:26:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044772 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1EAD1C433FE for ; Wed, 16 Nov 2022 10:35:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=gxFVphBO/rkVxMRimHLegvl/QfGFJqnJnhwzDD+l+H0=; b=SS3IZCvKZYaJRC pMmHwd3b0X5D9WuLa8WhFiHOHuorEpaMtzh3/m7UwM3whoTCNT2o72XsvvqiLP65S0FTTT+xlRWRK AajJzl0B60vl0aVazR8BxRA442+XLCSBc87mKYDUE7RerI6L+6/fRDrrKjfmoKQap58lO8PUll0Wh C8F7x43pSHxhKcOJMOYFhGMsT/KeuZpNKTKyl4q4U42uCxQSktV0yu+Z3cTpXNWcRyTIvBgwyJ7bp nevlgsNh5KQVqcC8yuMfAjZPUuBr50MInxs2zmsZcK47Pt2Fg7gsLZSVvcF4Qn5W2MRWgloyv1MxP dIEWPU/K3lpB22JXPG6w==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFjh-002Fnx-Uu; Wed, 16 Nov 2022 10:33:43 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFfG-002DHQ-JQ for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:29:09 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1668594545; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=PtZFIm00BoukAq0r3vxYea1/oUxWby/yeQnBgXcmZG8=; 
b=aGyocTGgbaxkv52lA2ApOCBn4zlBzj2O88XfmOEA1Vwm+UakqnhjABvo3YbZ3nql9zJ27I WbmzpPKDsRQJownL58bgRa7nz57eZaW/hp/jH8gF4DaPFDrpKz1fp1eo0NwEJK6PPBlhzn YvDdfVUe0zeTpyqeSGuSrykODmOw7BU= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-618-uLqpp3ByOjmb0y4uEYa0gA-1; Wed, 16 Nov 2022 05:29:02 -0500 X-MC-Unique: uLqpp3ByOjmb0y4uEYa0gA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 95160811E7A; Wed, 16 Nov 2022 10:29:00 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 4EDB82028E8F; Wed, 16 Nov 2022 10:28:53 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand , Mauro Carvalho Chehab Subject: [PATCH mm-unstable v1 13/20] media: videobuf-dma-sg: remove FOLL_FORCE usage Date: Wed, 16 Nov 2022 11:26:52 +0100 Message-Id: <20221116102659.70287-14-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_022906_760382_E6C48136 X-CRM114-Status: GOOD ( 16.76 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org GUP now supports reliable R/O long-term pinning in COW mappings, such that we break COW early. MAP_SHARED VMAs only use the shared zeropage so far in one corner case (DAXFS file with holes), which can be ignored because GUP does not support long-term pinning in fsdax (see check_vma_flags()). Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop using FOLL_FORCE, which is really only for ptrace access. 
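The interesting detail in this conversion is how the DMA direction translates into GUP flags; a hypothetical helper (videobuf_gup_flags() does not exist in the driver, it merely restates the switch statement in the diff below):

    /* Hypothetical helper, equivalent to the converted switch below. */
    static unsigned int videobuf_gup_flags(enum dma_data_direction dir)
    {
            unsigned int gup_flags = FOLL_LONGTERM;

            if (dir == DMA_FROM_DEVICE)
                    gup_flags |= FOLL_WRITE;  /* device writes into the pages */
            /* DMA_TO_DEVICE: device only reads, an R/O pin is sufficient. */
            return gup_flags;
    }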
Cc: Mauro Carvalho Chehab Signed-off-by: David Hildenbrand Reviewed-by: Daniel Vetter Acked-by: Hans Verkuil --- drivers/media/v4l2-core/videobuf-dma-sg.c | 14 +++++--------- 1 file changed, 5 insertions(+), 9 deletions(-) diff --git a/drivers/media/v4l2-core/videobuf-dma-sg.c b/drivers/media/v4l2-core/videobuf-dma-sg.c index f75e5eedeee0..234e9f647c96 100644 --- a/drivers/media/v4l2-core/videobuf-dma-sg.c +++ b/drivers/media/v4l2-core/videobuf-dma-sg.c @@ -151,17 +151,16 @@ static void videobuf_dma_init(struct videobuf_dmabuf *dma) static int videobuf_dma_init_user_locked(struct videobuf_dmabuf *dma, int direction, unsigned long data, unsigned long size) { + unsigned int gup_flags = FOLL_LONGTERM; unsigned long first, last; - int err, rw = 0; - unsigned int flags = FOLL_FORCE; + int err; dma->direction = direction; switch (dma->direction) { case DMA_FROM_DEVICE: - rw = READ; + gup_flags |= FOLL_WRITE; break; case DMA_TO_DEVICE: - rw = WRITE; break; default: BUG(); @@ -177,14 +176,11 @@ static int videobuf_dma_init_user_locked(struct videobuf_dmabuf *dma, if (NULL == dma->pages) return -ENOMEM; - if (rw == READ) - flags |= FOLL_WRITE; - dprintk(1, "init user [0x%lx+0x%lx => %lu pages]\n", data, size, dma->nr_pages); - err = pin_user_pages(data & PAGE_MASK, dma->nr_pages, - flags | FOLL_LONGTERM, dma->pages, NULL); + err = pin_user_pages(data & PAGE_MASK, dma->nr_pages, gup_flags, + dma->pages, NULL); if (err != dma->nr_pages) { dma->nr_pages = (err >= 0) ? err : 0; From patchwork Wed Nov 16 10:26:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044774 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 18BD4C433FE for ; Wed, 16 Nov 2022 10:37:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=vy/R32ibb/yQSmF3zF9dPgjZvE81JHwuNpllcp//XAk=; b=hJ31x8UsnI18pu OetvKcMRpekz1boejKIP5+suAKoNM/zqEcRaDSi6ZB1eDY+9JHh7Ex/XjG3JL/rh4Uke19TySyC9z Ww1psJaeJYmBq3LcFH7Ccv1B727DklhLI/4xKcnFzeDfWvA1nblq9Y1hAKF48K7enaw4I8zBsHjJk m+X+yMG70ffoQ0Xmh6jRtLufua9TNFAZS5RfOT8JOl3FEj/kA53HlDA0j/SYUzZH9UgymalatYYgH FY3EjgKpyD3WT7LyfnRRma//nCZVGLPaJO7hAnk0peoNxzLhtRMLTi4RnmdXZjCWEUZ9XNVi3H654 KY3S2CsfYH6l7pHVMU2Q==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFlf-002HIK-OC; Wed, 16 Nov 2022 10:35:44 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFfP-002DQB-U4 for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:29:20 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1668594555; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: 
to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=ddMjxbE+d57ySmRYhTvAErVzb9ykjmc/iLClBCHgJ7c=; b=jLVy1BJvOFhASVEsDr0Tjg4CCxmPiRjzQw8kTUw62ZixEaN/sDyUROrdm/g+zRu6H4MvEr qohs8spvdywPV3sie1pP8u41xVIrnQIJb0DtG+ZzJc4Rj/vQswSDvZtst05U82fNJjU/Xs 14z8nmjrg2WlPQwaA3+/khkvPEyHVmw= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-281-3wHpF2z_Ol-w_Oz0Io_Gqg-1; Wed, 16 Nov 2022 05:29:09 -0500 X-MC-Unique: 3wHpF2z_Ol-w_Oz0Io_Gqg-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id ED87586F130; Wed, 16 Nov 2022 10:29:07 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 0046B2027064; Wed, 16 Nov 2022 10:29:00 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand , Daniel Vetter , Russell King , Christian Gmeiner Subject: [PATCH mm-unstable v1 14/20] drm/etnaviv: remove FOLL_FORCE usage Date: Wed, 16 Nov 2022 11:26:53 +0100 Message-Id: <20221116102659.70287-15-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_022916_075090_3B594A84 X-CRM114-Status: GOOD ( 16.31 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org GUP now supports reliable R/O long-term pinning in COW mappings, such that we break COW early. MAP_SHARED VMAs only use the shared zeropage so far in one corner case (DAXFS file with holes), which can be ignored because GUP does not support long-term pinning in fsdax (see check_vma_flags()). commit cd5297b0855f ("drm/etnaviv: Use FOLL_FORCE for userptr") documents that FOLL_FORCE | FOLL_WRITE was really only used for reliable R/O pinning. Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop using FOLL_FORCE, which is really only for ptrace access. 
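etnaviv pins through pin_user_pages_fast(), i.e., the GUP-fast path, where no VMA is available. With the gup_must_unshare() change earlier in this series, an R/O FOLL_LONGTERM pin of a non-anonymous page therefore bails out of the fast path and retries via the slow path, where the VMA can be inspected. Condensed from that hunk:

    /* Condensed from the gup_must_unshare() change earlier in the series. */
    if (!PageAnon(page)) {
            if (!(flags & FOLL_LONGTERM))
                    return false;            /* short-term R/O pins are fine */
            if (!vma)
                    return true;             /* GUP-fast: fall back to slow path */
            return is_cow_mapping(vma->vm_flags);
    }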
Cc: Daniel Vetter Cc: Lucas Stach Cc: Russell King Cc: Christian Gmeiner Cc: David Airlie Signed-off-by: David Hildenbrand Reviewed-by: Daniel Vetter --- drivers/gpu/drm/etnaviv/etnaviv_gem.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c index cc386f8a7116..efe2240945d0 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c @@ -638,6 +638,7 @@ static int etnaviv_gem_userptr_get_pages(struct etnaviv_gem_object *etnaviv_obj) struct page **pvec = NULL; struct etnaviv_gem_userptr *userptr = &etnaviv_obj->userptr; int ret, pinned = 0, npages = etnaviv_obj->base.size >> PAGE_SHIFT; + unsigned int gup_flags = FOLL_LONGTERM; might_lock_read(&current->mm->mmap_lock); @@ -648,14 +649,15 @@ static int etnaviv_gem_userptr_get_pages(struct etnaviv_gem_object *etnaviv_obj) if (!pvec) return -ENOMEM; + if (!userptr->ro) + gup_flags |= FOLL_WRITE; + do { unsigned num_pages = npages - pinned; uint64_t ptr = userptr->ptr + pinned * PAGE_SIZE; struct page **pages = pvec + pinned; - ret = pin_user_pages_fast(ptr, num_pages, - FOLL_WRITE | FOLL_FORCE | FOLL_LONGTERM, - pages); + ret = pin_user_pages_fast(ptr, num_pages, gup_flags, pages); if (ret < 0) { unpin_user_pages(pvec, pinned); kvfree(pvec); From patchwork Wed Nov 16 10:26:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044844 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8ACFDC4332F for ; Wed, 16 Nov 2022 10:38:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=i/axRsd/7YSQMeVZGqfCbm8Td4TMk2XNs+CSfO4XnSw=; b=PQ+MDnYOa1T1s9 +pxJYWhgb/5TP7tzmEfp/6eh+ucOQrKQh/uziFM96Gn7L7bWIzfeXfSDqe3H44TbiNO+eoCgIe/+m RDaT3XCVVYGSLTtdO0YDCMpMvM8v0hus9Xm0cz7SZxtK4L2lEUje7SV2a8EULhukW4nIfpoerlojv yVJPULRk5pe7//Ylv7CReyDGOXF1ZEabPVwpi6fpdRMBIqLqBLqphcy+YH4u1pc9s2vXCOjV4qoSH nwaeUqskDgrrzkQ/h5yMHzZ6JN4bI6iy7O1FUrPY2jkl9EQ/2UWepJ8uveUnC+FrRp/mW3iyyYY3H lukb7y3tiPYMiPsJkYGg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFme-002I7d-BQ; Wed, 16 Nov 2022 10:36:45 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFfZ-002DW6-4a for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:29:27 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1668594564; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references;
bh=1lgRYK0+yPuNJDFPZ9ndbDbQCvjtc3Xp7NDAue46xBY=; b=eaqcntVaFHDBOWLuG2u3kyaIjdzWompy1glzx8s1KTwQruc45+R0oPLk2q6gD6SF9Y1upw xqqDOO4hiAR3jtiCkEBH4VpOTo5vQ6EIZT36H02lM26ZkMgoX4O7XF6olQSS4hfiWeeF2L Q+5aXjrWC3xHRPz6yxUG7fljU+I9PBM= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-14-svSpfQNhOguMSNDl-MdQSA-1; Wed, 16 Nov 2022 05:29:18 -0500 X-MC-Unique: svSpfQNhOguMSNDl-MdQSA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 724BE86F147; Wed, 16 Nov 2022 10:29:15 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 3626A2028E8F; Wed, 16 Nov 2022 10:29:08 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand , Andy Walls , Mauro Carvalho Chehab Subject: [PATCH mm-unstable v1 15/20] media: pci/ivtv: remove FOLL_FORCE usage Date: Wed, 16 Nov 2022 11:26:54 +0100 Message-Id: <20221116102659.70287-16-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_022925_276974_D1C34AA4 X-CRM114-Status: GOOD ( 17.69 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org FOLL_FORCE is really only for ptrace access. R/O pinning a page is supposed to fail if the VMA misses proper access permissions (no VM_READ). Let's just remove FOLL_FORCE usage here; there would have to be a pretty good reason to allow arbitrary drivers to R/O pin pages in a PROT_NONE VMA. Most probably, FOLL_FORCE usage is just some legacy leftover. 
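To make the intended semantics concrete (an illustrative sketch, not part of the patch, reusing the identifiers from ivtv_udma_setup()): a plain read-only pin requires VM_READ on the VMA, so a PROT_NONE mapping is now rejected instead of being forced through.

	/* gup_flags == 0: read access must be permitted by the VMA. */
	err = pin_user_pages_unlocked(user_dma.uaddr, user_dma.page_count,
				      dma->map, 0);
	if (err != user_dma.page_count) {
		/* e.g. a PROT_NONE VMA: the pin fails, it is not forced */
		if (err > 0)
			unpin_user_pages(dma->map, err);
		return err < 0 ? err : -EINVAL;
	}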
Cc: Andy Walls Cc: Mauro Carvalho Chehab Signed-off-by: David Hildenbrand Acked-by: Hans Verkuil --- drivers/media/pci/ivtv/ivtv-udma.c | 2 +- drivers/media/pci/ivtv/ivtv-yuv.c | 5 ++--- 2 files changed, 3 insertions(+), 4 deletions(-) diff --git a/drivers/media/pci/ivtv/ivtv-udma.c b/drivers/media/pci/ivtv/ivtv-udma.c index 210be8290f24..99b9f55ca829 100644 --- a/drivers/media/pci/ivtv/ivtv-udma.c +++ b/drivers/media/pci/ivtv/ivtv-udma.c @@ -115,7 +115,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr, /* Pin user pages for DMA Xfer */ err = pin_user_pages_unlocked(user_dma.uaddr, user_dma.page_count, - dma->map, FOLL_FORCE); + dma->map, 0); if (user_dma.page_count != err) { IVTV_DEBUG_WARN("failed to map user pages, returned %d instead of %d\n", diff --git a/drivers/media/pci/ivtv/ivtv-yuv.c b/drivers/media/pci/ivtv/ivtv-yuv.c index 4ba10c34a16a..582146f8d70d 100644 --- a/drivers/media/pci/ivtv/ivtv-yuv.c +++ b/drivers/media/pci/ivtv/ivtv-yuv.c @@ -63,12 +63,11 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma, /* Pin user pages for DMA Xfer */ y_pages = pin_user_pages_unlocked(y_dma.uaddr, - y_dma.page_count, &dma->map[0], FOLL_FORCE); + y_dma.page_count, &dma->map[0], 0); uv_pages = 0; /* silence gcc. value is set and consumed only if: */ if (y_pages == y_dma.page_count) { uv_pages = pin_user_pages_unlocked(uv_dma.uaddr, - uv_dma.page_count, &dma->map[y_pages], - FOLL_FORCE); + uv_dma.page_count, &dma->map[y_pages], 0); } if (y_pages != y_dma.page_count || uv_pages != uv_dma.page_count) { From patchwork Wed Nov 16 10:26:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044845 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3B13DC43219 for ; Wed, 16 Nov 2022 10:39:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=5ZNhjhecmcpDE3SPm7Zb2noPIvrAqDvxMY+MvptXUjU=; b=k1vMOP+fsOyUp7 WjkFU0chyewHHlNOAIWSdTv2vFmw+jWL4YOhR3jRb9+iLwAECSFVD19t6ZZ58rZ7KioBfMxnr/GYv u/NOasuxEvlf6TbkCD/tkZSGjWpH/rppRzcEnydSDg7nL4kAl7TZauY5jmSqaA7p/BSIrRZG41Q6N wUj9VzElzWCpKoz9Z9degjdkmORDlyE/zqMsgS51Jf6pagUWs9AobEpMMvDjmiEWJljnRDrZFCVXW TTSKdzJtLsYUC1lMfgAOIyV5Wf9jVrQ0021gYuoomgowolKxSuHW42EF2IHRRgIcdyVUO5CZSum7p t+zxGhdyGPqZII/yv0/Q==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFni-002IrX-2v; Wed, 16 Nov 2022 10:37:50 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFfe-002DaK-PZ for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:29:32 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1668594569; 
h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=G0FpSSTO1Nfi3hHjZY1vs8kl28xrhbJvR/VmU2bZlrg=; b=c+lc3Q9ZJTbnMBMVTIGm57g+AWk9tBbJZmuB/aj9gmDfTcstw2nuRWDGVLHY+FvJ3JjVCc rREfTaMhmsZLDckloQtPPslPUcLZ24Tev0u0lrkXD2Q5DYTqsnvMIT+7W73WF0NVSb5lut FyoSEnpItVrJCRVevsX3IgZtsfhmlYo= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-652-smSH3VzZPoqE7rq7qnypxQ-1; Wed, 16 Nov 2022 05:29:26 -0500 X-MC-Unique: smSH3VzZPoqE7rq7qnypxQ-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 2D136101A528; Wed, 16 Nov 2022 10:29:25 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id AF3102024CC8; Wed, 16 Nov 2022 10:29:15 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand , Hans Verkuil , Marek Szyprowski , Tomasz Figa , Mauro Carvalho Chehab Subject: [PATCH mm-unstable v1 16/20] mm/frame-vector: remove FOLL_FORCE usage Date: Wed, 16 Nov 2022 11:26:55 +0100 Message-Id: <20221116102659.70287-17-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_022930_978178_B14D71D8 X-CRM114-Status: GOOD ( 17.28 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org FOLL_FORCE is really only for ptrace access. According to commit 707947247e95 ("media: videobuf2-vmalloc: get_userptr: buffers are always writable"), get_vaddr_frames() currently pins all pages writable as a workaround for issues with read-only buffers. FOLL_FORCE, however, seems to be a legacy leftover as it predates commit 707947247e95 ("media: videobuf2-vmalloc: get_userptr: buffers are always writable"). Let's just remove it. Once the read-only buffer issue has been resolved, FOLL_WRITE could again be set depending on the DMA direction. 
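For illustration, a possible shape of that follow-up (purely hypothetical; get_vaddr_frames() does not take a DMA direction today, so "dma_dir" is an assumed parameter):

	unsigned int gup_flags = FOLL_LONGTERM;

	/* Only request write access when the device writes into the buffer. */
	if (dma_dir == DMA_FROM_DEVICE || dma_dir == DMA_BIDIRECTIONAL)
		gup_flags |= FOLL_WRITE;

	ret = pin_user_pages_fast(start, nr_frames, gup_flags,
				  (struct page **)(vec->ptrs));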
Cc: Hans Verkuil Cc: Marek Szyprowski Cc: Tomasz Figa Cc: Mauro Carvalho Chehab Signed-off-by: David Hildenbrand Reviewed-by: Daniel Vetter Acked-by: Hans Verkuil --- drivers/media/common/videobuf2/frame_vector.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/media/common/videobuf2/frame_vector.c b/drivers/media/common/videobuf2/frame_vector.c index 542dde9d2609..062e98148c53 100644 --- a/drivers/media/common/videobuf2/frame_vector.c +++ b/drivers/media/common/videobuf2/frame_vector.c @@ -50,7 +50,7 @@ int get_vaddr_frames(unsigned long start, unsigned int nr_frames, start = untagged_addr(start); ret = pin_user_pages_fast(start, nr_frames, - FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM, + FOLL_WRITE | FOLL_LONGTERM, (struct page **)(vec->ptrs)); if (ret > 0) { vec->got_ref = true; From patchwork Wed Nov 16 10:26:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044846 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 7FAB7C4332F for ; Wed, 16 Nov 2022 10:40:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=05NF0kbK+baQ2IxqTRXnND3lUDuPen2kOI/YWANBcmM=; b=Uk7EIAUvnQny8B K4xqrrnOlJOve9OV6kLv1D2gpVkbrnEqr7lhEf7uScWvu/SyfnarKaoUM4sv2OBeyNCs7ubYE+JbT HKiptBr/gceZc9x8HQCetPnF/Jn6v5DQT95WY5d5RtJwZMDxvIWhCTbeOAMg2hRidAy75HMvexkfu E8BSWVhO/+1PSDVB/fPAWcLSvco3VmQyZs6TMNWYo6P4bTMW9CDLR+BFWdgv9pWgI7siCc81rqMRd oo/oWCJVHkHHosB9n05pqaiBZdt261+GkImYY2eobmc1d1ru2zrXkggB+kP6Jks0qsguiT3/IjBZr ZC9LOOX6D0Xz0Qs3SqwA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFp8-002Jn6-Uh; Wed, 16 Nov 2022 10:39:19 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFfo-002DiF-Uy for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:29:43 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1668594580; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=rGA6GaR1az8P6UEdLsolXmCEn/P8qFigXL+8tO23Tsc=; b=gntRlRuncboP1y0/G7cm+yumh7mTR8LIkxUMGdUCn9DHWBGsXnCll13dTDxmcwSxgd2eaQ 9epc5sep0sFq5yg69lIeuV1KDW8rDkVOO9GR0K96tPOJrnEUd2JRqKK97b7f/wnIWnGTjf HrbKge467zST921z2Th7YYZcwA2V6ho= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-166-zpGAvGWpOfO4G6I5WZe6qQ-1; Wed, 16 Nov
2022 05:29:34 -0500 X-MC-Unique: zpGAvGWpOfO4G6I5WZe6qQ-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 2B0731C0A10A; Wed, 16 Nov 2022 10:29:33 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 82BA82028E8F; Wed, 16 Nov 2022 10:29:25 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand , Inki Dae , Seung-Woo Kim , Kyungmin Park , Daniel Vetter , Krzysztof Kozlowski Subject: [PATCH mm-unstable v1 17/20] drm/exynos: remove FOLL_FORCE usage Date: Wed, 16 Nov 2022 11:26:56 +0100 Message-Id: <20221116102659.70287-18-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_022941_207911_E070C82C X-CRM114-Status: GOOD ( 16.61 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org FOLL_FORCE is really only for ptrace access. As we unpin the pinned pages using unpin_user_pages_dirty_lock(true), the assumption is that all these pages are writable. FOLL_FORCE in this case seems to be a legacy leftover. Let's just remove it. 
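As a sketch of why FOLL_WRITE (rather than FOLL_FORCE) is the flag that matters here (illustrative only, not taken from the driver; the helper name is made up):

/*
 * A writable long-term pin pairs naturally with a dirtying unpin:
 * requesting FOLL_WRITE is what makes a later
 * unpin_user_pages_dirty_lock(..., true) call consistent.
 */
static int example_pin_for_dma(unsigned long start, int npages,
			       struct page **pages)
{
	int ret;

	ret = pin_user_pages_fast(start, npages,
				  FOLL_WRITE | FOLL_LONGTERM, pages);
	if (ret != npages) {
		/* Drop any partial pin and report the failure. */
		if (ret > 0)
			unpin_user_pages(pages, ret);
		return ret < 0 ? ret : -EFAULT;
	}
	return 0;
}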
Cc: Inki Dae Cc: Seung-Woo Kim Cc: Kyungmin Park Cc: David Airlie Cc: Daniel Vetter Cc: Krzysztof Kozlowski Signed-off-by: David Hildenbrand Reviewed-by: Daniel Vetter --- drivers/gpu/drm/exynos/exynos_drm_g2d.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/exynos/exynos_drm_g2d.c b/drivers/gpu/drm/exynos/exynos_drm_g2d.c index 471fd6c8135f..e19c2ceb3759 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_g2d.c +++ b/drivers/gpu/drm/exynos/exynos_drm_g2d.c @@ -477,7 +477,7 @@ static dma_addr_t *g2d_userptr_get_dma_addr(struct g2d_data *g2d, } ret = pin_user_pages_fast(start, npages, - FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM, + FOLL_WRITE | FOLL_LONGTERM, g2d_userptr->pages); if (ret != npages) { DRM_DEV_ERROR(g2d->dev, From patchwork Wed Nov 16 10:26:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044847 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D57BEC4332F for ; Wed, 16 Nov 2022 10:41:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=D78CO0ZGkKECEoC/+JRVsGZYkmYWUifMo3NurqlajJE=; b=hMiWGbB1Hn713w ceMhqD7htJTA0LsAS4Laa7hAgWvR+9LEsfrNnEtSEALHxtTZsTYZMQ8yDrOIFTD1Z+tnJqmtKULLm 1R/VfGYfLXne6LQFU4FzIKQcUidFauinw9tt9xwYFF54453AgMISZSmYSNaZpfdCt22DG/R7WVwYz 6ul0WIOYYQ8mZfGag2Xuq3/lyJKfUMw+RUr3t3GhhA0PDETd7GohLdHLwPbOWtKET5Wfzgz1tybY2 YlzE9ziUiudIu0QuZlJZKm+kOQ3nblMSU/yTA6KiQ2gmi+HMX97A2TVmnwd/4bpHI7JmMMmFceSee KAexfDV4RGpcq0Re0nHA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFpu-002KTB-SC; Wed, 16 Nov 2022 10:40:07 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFfv-002Dna-Rb for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:29:50 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1668594587; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Vwm/knGeRKTOygKvA/Obl+ChXNhkjAzB2LSFVD/DetE=; b=LjiHv8xCyA2Wlwt1ZSXOYcMq1jumYWpyiy06+YTXqEt8njzY2Tc4By+nPv7U81Jmcc9zQ2 bzIUDY0EId+pWECWd1rE1iBbYs5F2hnyk+3/R0u24hEGpSt143qQ5iEcMt94gpzkOrGjr8 0JQ9qX9jKlOt6CoFlJU2eNsYDlhrtyc= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-454-brbllGSGP4mJ4Nc_w7a0rA-1; Wed, 16 Nov 2022 05:29:41 -0500 X-MC-Unique: brbllGSGP4mJ4Nc_w7a0rA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com [10.11.54.4]) (using 
TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 714823C0E216; Wed, 16 Nov 2022 10:29:40 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id 8A8D52028E8F; Wed, 16 Nov 2022 10:29:33 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand , Dennis Dalessandro , Leon Romanovsky Subject: [PATCH mm-unstable v1 18/20] RDMA/hw/qib/qib_user_pages: remove FOLL_FORCE usage Date: Wed, 16 Nov 2022 11:26:57 +0100 Message-Id: <20221116102659.70287-19-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_022947_987949_31D0D949 X-CRM114-Status: GOOD ( 16.16 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org FOLL_FORCE is really only for ptrace access. As we unpin the pinned pages using unpin_user_pages_dirty_lock(true), the assumption is that all these pages are writable. FOLL_FORCE in this case seems to be a legacy leftover. Let's just remove it. 
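The matching release path, for illustration (a sketch, not the driver's actual code): because the pages were pinned with FOLL_WRITE, marking them dirty when the pins are dropped is legitimate.

static void example_unpin_after_dma(struct page **pages, unsigned long npages)
{
	/* Safe to set dirty: write access was requested at pin time. */
	unpin_user_pages_dirty_lock(pages, npages, true);
}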
Cc: Dennis Dalessandro Cc: Jason Gunthorpe Cc: Leon Romanovsky Signed-off-by: David Hildenbrand --- drivers/infiniband/hw/qib/qib_user_pages.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c index f4b5f05058e4..f693bc753b6b 100644 --- a/drivers/infiniband/hw/qib/qib_user_pages.c +++ b/drivers/infiniband/hw/qib/qib_user_pages.c @@ -110,7 +110,7 @@ int qib_get_user_pages(unsigned long start_page, size_t num_pages, for (got = 0; got < num_pages; got += ret) { ret = pin_user_pages(start_page + got * PAGE_SIZE, num_pages - got, - FOLL_LONGTERM | FOLL_WRITE | FOLL_FORCE, + FOLL_LONGTERM | FOLL_WRITE, p + got, NULL); if (ret < 0) { mmap_read_unlock(current->mm); From patchwork Wed Nov 16 10:26:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13044848 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 54E84C43217 for ; Wed, 16 Nov 2022 10:42:51 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=w04eFCajR+vAS5UhSCHLaefQ3zvaakgf5gTm+TdtkcQ=; b=aneg3TLzLIV1Rt BBLrDnyZoaphyNt4iViSdrvmMi8hFVYOgYGY3GYO1blf1WsXxktoj99ipFChh2SrELeIbkXa/swB2 0EF2u5rvXIarGWxganCk/2htSm9OC9gzw76izRLsAw9mJQHADl5n8lUp1hJOTM8sQGP8Ih3KzVpi2 F/hVSwvRccjk5xsOF6gfp4iMapzzMzHSJcdbDXWdgIJmGyJQQO3EOHY7BanPo5hZbJsSYpus30/kC 9KMnxfuVMyKOzrqiV0jy0jQqU5CVPv6Up/yhwOcRzgctjSUeZ54g1bQL9MmUUjqwY7XNfAWYCYxUF pkUvo0HDph67p897ieTw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFqw-002L3K-CC; Wed, 16 Nov 2022 10:41:11 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1ovFg0-002Dru-UI for linux-arm-kernel@lists.infradead.org; Wed, 16 Nov 2022 10:29:55 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1668594591; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=AW87fgh+ucyp6GPfn9nwvdCWqoK4NWdmiMJ3qK4gV9o=; b=FIPi5jTyej6HUe2UE18uOXfwkFqx8XlEDlnXhihsM8NYAAIP6aQQ51GWfiYl25hng0a7Zv Vv+UtQzI3CnUMEZP4TXSq/Dv2YWMHHjMl+Pnnt1jGJsqWB1C2U4M5gSyRKnVY3ZWXGxm2P nH5BaiyNvRoNndqs7VLorKxLPVzsTM8= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-533-2lbboRpqNXmL5tQ6NMs1RA-1; Wed, 16 Nov 2022 05:29:49 -0500 X-MC-Unique: 2lbboRpqNXmL5tQ6NMs1RA-1 Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com 
[10.11.54.4]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E748387A9E1; Wed, 16 Nov 2022 10:29:47 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.193.216]) by smtp.corp.redhat.com (Postfix) with ESMTP id AD0942028E8F; Wed, 16 Nov 2022 10:29:40 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-perf-users@vger.kernel.org, linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org, Linus Torvalds , Andrew Morton , Jason Gunthorpe , John Hubbard , Peter Xu , Greg Kroah-Hartman , Andrea Arcangeli , Hugh Dickins , Nadav Amit , Vlastimil Babka , Matthew Wilcox , Mike Kravetz , Muchun Song , Shuah Khan , Lucas Stach , David Airlie , Oded Gabbay , Arnd Bergmann , Christoph Hellwig , Alex Williamson , David Hildenbrand Subject: [PATCH mm-unstable v1 19/20] habanalabs: remove FOLL_FORCE usage Date: Wed, 16 Nov 2022 11:26:58 +0100 Message-Id: <20221116102659.70287-20-david@redhat.com> In-Reply-To: <20221116102659.70287-1-david@redhat.com> References: <20221116102659.70287-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20221116_022953_116157_5A8B24DA X-CRM114-Status: GOOD ( 15.33 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org FOLL_FORCE is really only for ptrace access. As we unpin the pinned pages using unpin_user_pages_dirty_lock(true), the assumption is that all these pages are writable. FOLL_FORCE in this case seems to be due to copy-and-paste from other drivers. Let's just remove it. Acked-by: Oded Gabbay Cc: Oded Gabbay Cc: Arnd Bergmann Cc: Greg Kroah-Hartman Signed-off-by: David Hildenbrand --- drivers/misc/habanalabs/common/memory.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/misc/habanalabs/common/memory.c b/drivers/misc/habanalabs/common/memory.c index ef28f3b37b93..e35cca96bbef 100644 --- a/drivers/misc/habanalabs/common/memory.c +++ b/drivers/misc/habanalabs/common/memory.c @@ -2312,8 +2312,7 @@ static int get_user_memory(struct hl_device *hdev, u64 addr, u64 size, if (!userptr->pages) return -ENOMEM; - rc = pin_user_pages_fast(start, npages, - FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM, + rc = pin_user_pages_fast(start, npages, FOLL_WRITE | FOLL_LONGTERM, userptr->pages); if (rc != npages) {