From patchwork Tue Nov 16 22:00:36 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12623229
From: Pasha Tatashin <pasha.tatashin@soleen.com>
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-doc@vger.kernel.org, akpm@linux-foundation.org, rientjes@google.com,
    pjt@google.com, weixugc@google.com, gthelen@google.com, mingo@redhat.com,
    corbet@lwn.net, will@kernel.org, rppt@kernel.org, keescook@chromium.org,
    tglx@linutronix.de, peterz@infradead.org, masahiroy@kernel.org,
    samitolvanen@google.com, dave.hansen@linux.intel.com, x86@kernel.org,
    frederic@kernel.org, hpa@zytor.com, aneesh.kumar@linux.ibm.com
Subject: [RFC 1/3] mm: ptep_clear() page table helper
Date: Tue, 16 Nov 2021 22:00:36 +0000
Message-Id: <20211116220038.116484-2-pasha.tatashin@soleen.com>
In-Reply-To: <20211116220038.116484-1-pasha.tatashin@soleen.com>
References: <20211116220038.116484-1-pasha.tatashin@soleen.com>

We have ptep_get_and_clear() and ptep_get_and_clear_full() helpers to clear
a PTE from user page tables, but there is no variant for simply clearing a
present PTE from user page tables without going through the low-level
pte_clear(), which can be either native or para-virtualised.

Add a new ptep_clear() that can be used in common code to clear PTEs from
page tables. We will need this call later in order to add a hook for page
table check.
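For illustration only (a sketch, not part of this patch; the example_* callers
below are hypothetical), the intended split between the helpers is:

/*
 * Sketch: ptep_clear() is for callers that do not need the old PTE value,
 * while ptep_get_and_clear() remains the helper to use when the old value
 * is needed.
 */
static void example_discard_pte(struct vm_area_struct *vma,
				unsigned long addr, pte_t *ptep)
{
	/* Old value unused: a plain clear is enough. */
	ptep_clear(vma->vm_mm, addr, ptep);
}

static pte_t example_take_pte(struct vm_area_struct *vma,
			      unsigned long addr, pte_t *ptep)
{
	/* Old value needed (e.g. to preserve dirty/accessed bits). */
	return ptep_get_and_clear(vma->vm_mm, addr, ptep);
}

Architectures that want to hook the clear path (as the x86 patch later in this
series does for page table check) can define __HAVE_ARCH_PTEP_CLEAR and provide
their own ptep_clear(); everyone else gets the generic fallback added below,
which simply forwards to pte_clear().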
Signed-off-by: Pasha Tatashin --- Documentation/vm/arch_pgtable_helpers.rst | 6 ++++-- include/linux/pgtable.h | 8 ++++++++ mm/khugepaged.c | 12 ++---------- 3 files changed, 14 insertions(+), 12 deletions(-) diff --git a/Documentation/vm/arch_pgtable_helpers.rst b/Documentation/vm/arch_pgtable_helpers.rst index 552567d863b8..fbe06ec75370 100644 --- a/Documentation/vm/arch_pgtable_helpers.rst +++ b/Documentation/vm/arch_pgtable_helpers.rst @@ -66,9 +66,11 @@ PTE Page Table Helpers +---------------------------+--------------------------------------------------+ | pte_mknotpresent | Invalidates a mapped PTE | +---------------------------+--------------------------------------------------+ -| ptep_get_and_clear | Clears a PTE | +| ptep_clear | Clears a PTE | +---------------------------+--------------------------------------------------+ -| ptep_get_and_clear_full | Clears a PTE | +| ptep_get_and_clear | Clears and returns PTE | ++---------------------------+--------------------------------------------------+ +| ptep_get_and_clear_full | Clears and returns PTE (batched PTE unmap) | +---------------------------+--------------------------------------------------+ | ptep_test_and_clear_young | Clears young from a PTE | +---------------------------+--------------------------------------------------+ diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index e24d2c992b11..bc8713a76e03 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -258,6 +258,14 @@ static inline int pmdp_clear_flush_young(struct vm_area_struct *vma, #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ #endif +#ifndef __HAVE_ARCH_PTEP_CLEAR +static inline void ptep_clear(struct mm_struct *mm, unsigned long addr, + pte_t *ptep) +{ + pte_clear(mm, addr, ptep); +} +#endif + #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long address, diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 5f02fda6f265..6ae659ef7e08 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -756,11 +756,7 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page, * ptl mostly unnecessary. */ spin_lock(ptl); - /* - * paravirt calls inside pte_clear here are - * superfluous. - */ - pte_clear(vma->vm_mm, address, _pte); + ptep_clear(vma->vm_mm, address, _pte); spin_unlock(ptl); } } else { @@ -774,11 +770,7 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page, * inside page_remove_rmap(). */ spin_lock(ptl); - /* - * paravirt calls inside pte_clear here are - * superfluous. 
-			 */
-			pte_clear(vma->vm_mm, address, _pte);
+			ptep_clear(vma->vm_mm, address, _pte);
 			page_remove_rmap(src_page, false);
 			spin_unlock(ptl);
 			free_page_and_swap_cache(src_page);

From patchwork Tue Nov 16 22:00:37 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12623231
From: Pasha Tatashin <pasha.tatashin@soleen.com>
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-doc@vger.kernel.org, akpm@linux-foundation.org, rientjes@google.com,
    pjt@google.com, weixugc@google.com, gthelen@google.com, mingo@redhat.com,
    corbet@lwn.net, will@kernel.org, rppt@kernel.org, keescook@chromium.org,
    tglx@linutronix.de, peterz@infradead.org, masahiroy@kernel.org,
    samitolvanen@google.com, dave.hansen@linux.intel.com, x86@kernel.org,
    frederic@kernel.org, hpa@zytor.com, aneesh.kumar@linux.ibm.com
Subject: [RFC 2/3] mm: page table check
Date: Tue, 16 Nov 2021 22:00:37 +0000
Message-Id: <20211116220038.116484-3-pasha.tatashin@soleen.com>
In-Reply-To: <20211116220038.116484-1-pasha.tatashin@soleen.com>
References: <20211116220038.116484-1-pasha.tatashin@soleen.com>

Check user page table entries at the time they are added and removed. This
allows memory corruption issues related to double mapping to be caught
synchronously.

When a pte for an anonymous page is added into the page table, we verify that
this pte does not already point to a file backed page, and vice versa: when a
file backed page is being added, we verify that this page does not already
have an anonymous mapping. We also enforce that only read-only sharing is
allowed for anonymous pages (i.e. CoW after fork); all other sharing must be
for file pages.

Page table check helps protect and debug cases where "struct page" metadata
becomes corrupted for some reason, for example when the refcount or mapcount
become invalid.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 Documentation/vm/page_table_check.rst |  53 ++++++
 MAINTAINERS                           |   9 +
 arch/Kconfig                          |   3 +
 include/linux/page_table_check.h      | 147 ++++++++++++++
 mm/Kconfig.debug                      |  24 +++
 mm/Makefile                           |   1 +
 mm/page_alloc.c                       |   4 +
 mm/page_ext.c                         |   4 +
 mm/page_table_check.c                 | 264 ++++++++++++++++++++++++++
 9 files changed, 509 insertions(+)
 create mode 100644 Documentation/vm/page_table_check.rst
 create mode 100644 include/linux/page_table_check.h
 create mode 100644 mm/page_table_check.c

diff --git a/Documentation/vm/page_table_check.rst b/Documentation/vm/page_table_check.rst
new file mode 100644
index 000000000000..41435a45869f
--- /dev/null
+++ b/Documentation/vm/page_table_check.rst
@@ -0,0 +1,53 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+.. _page_table_check:
+
+================
+Page Table Check
+================
+
+Page table check allows hardening the kernel by ensuring that some types of
+memory corruption are prevented.
+
+Page table check performs extra verifications at the time when new pages become
+accessible from userspace by getting their page table entries (PTEs, PMDs,
+etc.) added into the table.
+
+In case of detected corruption, the kernel is crashed. There is a small
+performance and memory overhead associated with page table check. Therefore, it
+is disabled by default but can be optionally enabled on systems where the extra
+hardening outweighs the costs. Also, because page table check is synchronous, it
+can help with debugging double map memory corruption issues, by crashing the
+kernel at the time the wrong mapping occurs instead of later, which is often the
+case with memory corruption bugs.
+
+==============================
+Double mapping detection logic
+==============================
++-------------------+-------------------+-------------------+------------------+
+| Current Mapping   | New mapping       | Permissions       | Rule             |
++===================+===================+===================+==================+
+| Anonymous         | Anonymous         | Read              | Allow            |
++-------------------+-------------------+-------------------+------------------+
+| Anonymous         | Anonymous         | Read / Write      | Prohibit         |
++-------------------+-------------------+-------------------+------------------+
+| Anonymous         | Named             | Any               | Prohibit         |
++-------------------+-------------------+-------------------+------------------+
+| Named             | Anonymous         | Any               | Prohibit         |
++-------------------+-------------------+-------------------+------------------+
+| Named             | Named             | Any               | Allow            |
++-------------------+-------------------+-------------------+------------------+
+
+=========================
+Enabling Page Table Check
+=========================
+
+Build kernel with:
+
+- PAGE_TABLE_CHECK=y
+Note, it can only be enabled on platforms where ARCH_SUPPORTS_PAGE_TABLE_CHECK
+is available.
+- Boot with 'page_table_check=on' kernel parameter.
+
+Optionally, build kernel with PAGE_TABLE_CHECK_ENFORCED in order to have page
+table check support without the extra kernel parameter.
diff --git a/MAINTAINERS b/MAINTAINERS
index 74158b271cb7..13dcc9cc10c2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -14338,6 +14338,15 @@ F:	include/net/page_pool.h
 F:	include/trace/events/page_pool.h
 F:	net/core/page_pool.c
+PAGE TABLE CHECK
+M:	Pasha Tatashin <pasha.tatashin@soleen.com>
+M:	Andrew Morton <akpm@linux-foundation.org>
+L:	linux-mm@kvack.org
+S:	Maintained
+F:	Documentation/vm/page_table_check.rst
+F:	include/linux/page_table_check.h
+F:	mm/page_table_check.c
+
 PANASONIC LAPTOP ACPI EXTRAS DRIVER
 M:	Kenneth Chan
 L:	platform-driver-x86@vger.kernel.org
diff --git a/arch/Kconfig b/arch/Kconfig
index 26b8ed11639d..c5b03b3bd62d 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1287,6 +1287,9 @@ config HAVE_ARCH_PFN_VALID
 config ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	bool
 
+config ARCH_SUPPORTS_PAGE_TABLE_CHECK
+	bool
+
 config ARCH_SPLIT_ARG64
 	bool
 	help
diff --git a/include/linux/page_table_check.h b/include/linux/page_table_check.h
new file mode 100644
index 000000000000..38cace1da7b6
--- /dev/null
+++ b/include/linux/page_table_check.h
@@ -0,0 +1,147 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Copyright (c) 2021, Google LLC.
+ * Pasha Tatashin + */ +#ifndef __LINUX_PAGE_TABLE_CHECK_H +#define __LINUX_PAGE_TABLE_CHECK_H + +#ifdef CONFIG_PAGE_TABLE_CHECK +#include + +extern struct static_key_true page_table_check_disabled; +extern struct page_ext_operations page_table_check_ops; + +void __page_table_check_zero(struct page *page, unsigned int order); +void __page_table_check_pte_clear(struct mm_struct *mm, unsigned long addr, + pte_t pte); +void __page_table_check_pmd_clear(struct mm_struct *mm, unsigned long addr, + pmd_t pmd); +void __page_table_check_pud_clear(struct mm_struct *mm, unsigned long addr, + pud_t pud); +void __page_table_check_pte_set(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte); +void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr, + pmd_t *pmdp, pmd_t pmd); +void __page_table_check_pud_set(struct mm_struct *mm, unsigned long addr, + pud_t *pudp, pud_t pud); + +static inline void page_table_check_alloc(struct page *page, unsigned int order) +{ + if (static_branch_likely(&page_table_check_disabled)) + return; + + __page_table_check_zero(page, order); +} + +static inline void page_table_check_free(struct page *page, unsigned int order) +{ + if (static_branch_likely(&page_table_check_disabled)) + return; + + __page_table_check_zero(page, order); +} + +static inline void page_table_check_pte_clear(struct mm_struct *mm, + unsigned long addr, pte_t pte) +{ + if (static_branch_likely(&page_table_check_disabled)) + return; + + __page_table_check_pte_clear(mm, addr, pte); +} + +static inline void page_table_check_pmd_clear(struct mm_struct *mm, + unsigned long addr, pmd_t pmd) +{ + if (static_branch_likely(&page_table_check_disabled)) + return; + + __page_table_check_pmd_clear(mm, addr, pmd); +} + +static inline void page_table_check_pud_clear(struct mm_struct *mm, + unsigned long addr, pud_t pud) +{ + if (static_branch_likely(&page_table_check_disabled)) + return; + + __page_table_check_pud_clear(mm, addr, pud); +} + +static inline void page_table_check_pte_set(struct mm_struct *mm, + unsigned long addr, pte_t *ptep, + pte_t pte) +{ + if (static_branch_likely(&page_table_check_disabled)) + return; + + __page_table_check_pte_set(mm, addr, ptep, pte); +} + +static inline void page_table_check_pmd_set(struct mm_struct *mm, + unsigned long addr, pmd_t *pmdp, + pmd_t pmd) +{ + if (static_branch_likely(&page_table_check_disabled)) + return; + + __page_table_check_pmd_set(mm, addr, pmdp, pmd); +} + +static inline void page_table_check_pud_set(struct mm_struct *mm, + unsigned long addr, pud_t *pudp, + pud_t pud) +{ + if (static_branch_likely(&page_table_check_disabled)) + return; + + __page_table_check_pud_set(mm, addr, pudp, pud); +} + +#else + +static inline void page_table_check_alloc(struct page *page, unsigned int order) +{ +} + +static inline void page_table_check_free(struct page *page, unsigned int order) +{ +} + +static inline void page_table_check_pte_clear(struct mm_struct *mm, + unsigned long addr, pte_t pte) +{ +} + +static inline void page_table_check_pmd_clear(struct mm_struct *mm, + unsigned long addr, pmd_t pmd) +{ +} + +static inline void page_table_check_pud_clear(struct mm_struct *mm, + unsigned long addr, pud_t pud) +{ +} + +static inline void page_table_check_pte_set(struct mm_struct *mm, + unsigned long addr, pte_t *ptep, + pte_t pte) +{ +} + +static inline void page_table_check_pmd_set(struct mm_struct *mm, + unsigned long addr, pmd_t *pmdp, + pmd_t pmd) +{ +} + +static inline void page_table_check_pud_set(struct mm_struct *mm, + unsigned long 
addr, pud_t *pudp,
+					    pud_t pud)
+{
+}
+
+#endif /* CONFIG_PAGE_TABLE_CHECK */
+#endif /* __LINUX_PAGE_TABLE_CHECK_H */
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 1e73717802f8..e5724cd6946b 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -62,6 +62,30 @@ config PAGE_OWNER
 
 	  If unsure, say N.
 
+config PAGE_TABLE_CHECK
+	bool "Check for invalid mappings in user page tables"
+	depends on ARCH_SUPPORTS_PAGE_TABLE_CHECK
+	select PAGE_EXTENSION
+	help
+	  Check that an anonymous page is not being mapped twice with read-write
+	  permissions. Check that anonymous and file pages are not being
+	  erroneously shared. Since the checking is performed at the time
+	  entries are added to and removed from user page tables, leaking,
+	  corruption and double mapping problems are detected synchronously.
+
+	  If unsure say "n".
+
+config PAGE_TABLE_CHECK_ENFORCED
+	bool "Enforce the page table checking by default"
+	depends on PAGE_TABLE_CHECK
+	help
+	  Always enable page table checking. By default the page table checking
+	  is disabled, and can be optionally enabled via the page_table_check=on
+	  kernel parameter. This config enforces that page table check is always
+	  enabled.
+
+	  If unsure say "n".
+
 config PAGE_POISONING
 	bool "Poison pages after freeing"
 	help
diff --git a/mm/Makefile b/mm/Makefile
index d6c0042e3aa0..5c5a3a480fa6 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -112,6 +112,7 @@ obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
 obj-$(CONFIG_CMA) += cma.o
 obj-$(CONFIG_MEMORY_BALLOON) += balloon_compaction.o
 obj-$(CONFIG_PAGE_EXTENSION) += page_ext.o
+obj-$(CONFIG_PAGE_TABLE_CHECK) += page_table_check.o
 obj-$(CONFIG_CMA_DEBUGFS) += cma_debug.o
 obj-$(CONFIG_SECRETMEM) += secretmem.o
 obj-$(CONFIG_CMA_SYSFS) += cma_sysfs.o
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fee18ada46a2..4165071e2958 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -63,6 +63,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1299,6 +1300,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	if (memcg_kmem_enabled() && PageMemcgKmem(page))
 		__memcg_kmem_uncharge_page(page, order);
 	reset_page_owner(page, order);
+	page_table_check_free(page, order);
 	return false;
 }
 
@@ -1338,6 +1340,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	page_cpupid_reset_last(page);
 	page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 	reset_page_owner(page, order);
+	page_table_check_free(page, order);
 
 	if (!PageHighMem(page)) {
 		debug_check_no_locks_freed(page_address(page),
@@ -2418,6 +2421,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	}
 
 	set_page_owner(page, order, gfp_flags);
+	page_table_check_alloc(page, order);
 }
 
 static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
diff --git a/mm/page_ext.c b/mm/page_ext.c
index 2a52fd9ed464..a4d2c86c26a9 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 
 /*
  * struct page extension
@@ -75,6 +76,9 @@ static struct page_ext_operations *page_ext_ops[] = {
 #if defined(CONFIG_PAGE_IDLE_FLAG) && !defined(CONFIG_64BIT)
 	&page_idle_ops,
 #endif
+#ifdef CONFIG_PAGE_TABLE_CHECK
+	&page_table_check_ops,
+#endif
 };
 
 unsigned long page_ext_size = sizeof(struct page_ext);
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
new file mode 100644
index 000000000000..5a63a2a57da2
--- /dev/null
+++ b/mm/page_table_check.c
@@ -0,0 +1,264 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (c) 2021, Google LLC.
+ * Pasha Tatashin
+ */
+#include
+#include
+
+#undef pr_fmt
+#define pr_fmt(fmt)	"page_table_check: " fmt
+
+struct page_table_check {
+	atomic_t anon_map_count;
+	atomic_t file_map_count;
+};
+
+static bool __page_table_check_enabled __initdata =
+				IS_ENABLED(CONFIG_PAGE_TABLE_CHECK_ENFORCED);
+
+DEFINE_STATIC_KEY_TRUE_RO(page_table_check_disabled);
+
+static int __init early_page_table_check_param(char *buf)
+{
+	if (!buf)
+		return -EINVAL;
+
+	if (strcmp(buf, "on") == 0)
+		__page_table_check_enabled = true;
+
+	return 0;
+}
+
+early_param("page_table_check", early_page_table_check_param);
+
+static bool __init need_page_table_check(void)
+{
+	if (!__page_table_check_enabled)
+		return false;
+
+	return true;
+}
+
+static void __init init_page_table_check(void)
+{
+	if (!__page_table_check_enabled)
+		return;
+	static_branch_disable(&page_table_check_disabled);
+}
+
+struct page_ext_operations page_table_check_ops = {
+	.size = sizeof(struct page_table_check),
+	.need = need_page_table_check,
+	.init = init_page_table_check,
+};
+
+static struct page_table_check *get_page_table_check(struct page_ext *page_ext)
+{
+	BUG_ON(!page_ext);
+	return ((void *)(page_ext) + page_table_check_ops.offset);
+}
+
+static inline bool pte_user_accessible_page(pte_t pte)
+{
+	return (pte_val(pte) & _PAGE_PRESENT) && (pte_val(pte) & _PAGE_USER);
+}
+
+static inline bool pmd_user_accessible_page(pmd_t pmd)
+{
+	return pmd_leaf(pmd) && (pmd_val(pmd) & _PAGE_PRESENT) &&
+		(pmd_val(pmd) & _PAGE_USER);
+}
+
+static inline bool pud_user_accessible_page(pud_t pud)
+{
+	return pud_leaf(pud) && (pud_val(pud) & _PAGE_PRESENT) &&
+		(pud_val(pud) & _PAGE_USER);
+}
+
+/*
+ * An entry is removed from the page table; decrement the counters for that
+ * page and verify that it is of the correct type and that the counters do
+ * not become negative.
+ */
+static void page_table_check_clear(struct mm_struct *mm, unsigned long addr,
+				   unsigned long pfn, unsigned int pgcnt)
+{
+	struct page_ext *page_ext;
+	bool anon;
+	int i, count;
+
+	if (!pfn_valid(pfn))
+		return;
+
+	page_ext = lookup_page_ext(pfn_to_page(pfn));
+	anon = PageAnon(pfn_to_page(pfn));
+
+	for (i = 0; i < pgcnt; i++, pfn++) {
+		struct page_table_check *ptc = get_page_table_check(page_ext);
+
+		if (anon) {
+			BUG_ON(atomic_read(&ptc->file_map_count));
+			count = atomic_dec_return(&ptc->anon_map_count);
+		} else {
+			BUG_ON(atomic_read(&ptc->anon_map_count));
+			count = atomic_dec_return(&ptc->file_map_count);
+		}
+
+		BUG_ON(count < 0);
+		page_ext = page_ext_next(page_ext);
+	}
+}
+
+/*
+ * A new entry is added to the page table; increment the counters for that
+ * page and verify that it is of the correct type and is not being mapped
+ * with a different type to a different process.
+ */ +static void page_table_check_set(struct mm_struct *mm, unsigned long addr, + unsigned long pfn, unsigned long pgcnt, + bool rw) +{ + struct page_ext *page_ext; + bool anon; + int i, count; + + if (!pfn_valid(pfn)) + return; + + page_ext = lookup_page_ext(pfn_to_page(pfn)); + anon = PageAnon(pfn_to_page(pfn)); + + for (i = 0; i < pgcnt; i++, pfn++) { + struct page_table_check *ptc = get_page_table_check(page_ext); + + if (anon) { + BUG_ON(atomic_read(&ptc->file_map_count)); + count = atomic_inc_return(&ptc->anon_map_count); + BUG_ON(count > 1 && rw); + } else { + BUG_ON(atomic_read(&ptc->anon_map_count)); + count = atomic_inc_return(&ptc->file_map_count); + } + BUG_ON(count < 0); + page_ext = page_ext_next(page_ext); + } +} + +/* + * page is on free list, or is being allocated, verify that counters are zeroes + * crash if they are not. + */ +void __page_table_check_zero(struct page *page, unsigned int order) +{ + struct page_ext *page_ext = lookup_page_ext(page); + int i; + + BUG_ON(!page_ext); + for (i = 0; i < (1 << order); i++) { + struct page_table_check *ptc = get_page_table_check(page_ext); + + BUG_ON(atomic_read(&ptc->anon_map_count)); + BUG_ON(atomic_read(&ptc->file_map_count)); + page_ext = page_ext_next(page_ext); + } +} + +void __page_table_check_pte_clear(struct mm_struct *mm, unsigned long addr, + pte_t pte) +{ + if (&init_mm == mm) + return; + + if (pte_user_accessible_page(pte)) { + page_table_check_clear(mm, addr, pte_pfn(pte), + PAGE_SIZE >> PAGE_SHIFT); + } +} + +void __page_table_check_pmd_clear(struct mm_struct *mm, unsigned long addr, + pmd_t pmd) +{ + if (&init_mm == mm) + return; + + if (pmd_user_accessible_page(pmd)) { + page_table_check_clear(mm, addr, pmd_pfn(pmd), + PMD_PAGE_SIZE >> PAGE_SHIFT); + } +} + +void __page_table_check_pud_clear(struct mm_struct *mm, unsigned long addr, + pud_t pud) +{ + if (&init_mm == mm) + return; + + if (pud_user_accessible_page(pud)) { + page_table_check_clear(mm, addr, pud_pfn(pud), + PUD_PAGE_SIZE >> PAGE_SHIFT); + } +} + +void __page_table_check_pte_set(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pte) +{ + pte_t old_pte; + + if (&init_mm == mm) + return; + + old_pte = *ptep; + if (pte_user_accessible_page(old_pte)) { + page_table_check_clear(mm, addr, pte_pfn(old_pte), + PAGE_SIZE >> PAGE_SHIFT); + } + + if (pte_user_accessible_page(pte)) { + page_table_check_set(mm, addr, pte_pfn(pte), + PAGE_SIZE >> PAGE_SHIFT, + pte_write(pte)); + } +} + +void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr, + pmd_t *pmdp, pmd_t pmd) +{ + pmd_t old_pmd; + + if (&init_mm == mm) + return; + + old_pmd = *pmdp; + if (pmd_user_accessible_page(old_pmd)) { + page_table_check_clear(mm, addr, pmd_pfn(old_pmd), + PMD_PAGE_SIZE >> PAGE_SHIFT); + } + + if (pmd_user_accessible_page(pmd)) { + page_table_check_set(mm, addr, pmd_pfn(pmd), + PMD_PAGE_SIZE >> PAGE_SHIFT, + pmd_write(pmd)); + } +} + +void __page_table_check_pud_set(struct mm_struct *mm, unsigned long addr, + pud_t *pudp, pud_t pud) +{ + pud_t old_pud; + + if (&init_mm == mm) + return; + + old_pud = *pudp; + if (pud_user_accessible_page(old_pud)) { + page_table_check_clear(mm, addr, pud_pfn(old_pud), + PUD_PAGE_SIZE >> PAGE_SHIFT); + } + + if (pud_user_accessible_page(pud)) { + page_table_check_set(mm, addr, pud_pfn(pud), + PUD_PAGE_SIZE >> PAGE_SHIFT, + pud_write(pud)); + } +} From patchwork Tue Nov 16 22:00:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin 
X-Patchwork-Id: 12623233
From: Pasha Tatashin <pasha.tatashin@soleen.com>
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-doc@vger.kernel.org, akpm@linux-foundation.org, rientjes@google.com,
    pjt@google.com, weixugc@google.com, gthelen@google.com, mingo@redhat.com,
    corbet@lwn.net, will@kernel.org, rppt@kernel.org, keescook@chromium.org,
    tglx@linutronix.de, peterz@infradead.org, masahiroy@kernel.org,
    samitolvanen@google.com, dave.hansen@linux.intel.com, x86@kernel.org,
    frederic@kernel.org, hpa@zytor.com, aneesh.kumar@linux.ibm.com
Subject: [RFC 3/3] x86: mm: add x86_64 support for page table check
Date: Tue, 16 Nov 2021 22:00:38 +0000
Message-Id: <20211116220038.116484-4-pasha.tatashin@soleen.com>
In-Reply-To: <20211116220038.116484-1-pasha.tatashin@soleen.com>
References: <20211116220038.116484-1-pasha.tatashin@soleen.com>

Add page table check hooks into routines that modify user page tables.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 arch/x86/Kconfig               |  1 +
 arch/x86/include/asm/pgtable.h | 27 +++++++++++++++++++++++++--
 2 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index b1d4b481fcdd..9d28f2ac85ff 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -104,6 +104,7 @@ config X86
 	select ARCH_SUPPORTS_ACPI
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_DEBUG_PAGEALLOC
+	select ARCH_SUPPORTS_PAGE_TABLE_CHECK	if X86_64
 	select ARCH_SUPPORTS_NUMA_BALANCING	if X86_64
 	select ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP	if NR_CPUS <= 4096
 	select ARCH_SUPPORTS_LTO_CLANG
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 448cd01eb3ec..46f0112f0303 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 
 extern pgd_t early_top_pgt[PTRS_PER_PGD];
 bool __init __early_make_pgtable(unsigned long address, pmdval_t pmd);
@@ -1006,18 +1007,21 @@ static inline pud_t native_local_pudp_get_and_clear(pud_t *pudp)
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep, pte_t pte)
 {
+	page_table_check_pte_set(mm, addr, ptep, pte);
 	set_pte(ptep, pte);
 }
 
 static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 			      pmd_t *pmdp, pmd_t pmd)
 {
+	page_table_check_pmd_set(mm, addr, pmdp, pmd);
 	set_pmd(pmdp, pmd);
 }
 
 static inline void set_pud_at(struct mm_struct *mm, unsigned long addr,
 			      pud_t *pudp, pud_t pud)
 {
+	page_table_check_pud_set(mm, addr, pudp, pud);
 	native_set_pud(pudp, pud);
 }
 
@@ -1048,6 +1052,7 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long addr, pte_t *ptep)
 {
 	pte_t pte = native_ptep_get_and_clear(ptep);
+	page_table_check_pte_clear(mm, addr, pte);
 	return pte;
 }
 
@@ -1063,12
+1068,21 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm, * care about updates and native needs no locking */ pte = native_local_ptep_get_and_clear(ptep); + page_table_check_pte_clear(mm, addr, pte); } else { pte = ptep_get_and_clear(mm, addr, ptep); } return pte; } +#define __HAVE_ARCH_PTEP_CLEAR +static inline void ptep_clear(struct mm_struct *mm, unsigned long addr, + pte_t *ptep) +{ + page_table_check_pte_clear(mm, addr, *ptep); + pte_clear(mm, addr, ptep); +} + #define __HAVE_ARCH_PTEP_SET_WRPROTECT static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep) @@ -1109,14 +1123,22 @@ static inline int pmd_write(pmd_t pmd) static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp) { - return native_pmdp_get_and_clear(pmdp); + pmd_t pmd = native_pmdp_get_and_clear(pmdp); + + page_table_check_pmd_clear(mm, addr, pmd); + + return pmd; } #define __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm, unsigned long addr, pud_t *pudp) { - return native_pudp_get_and_clear(pudp); + pud_t pud = native_pudp_get_and_clear(pudp); + + page_table_check_pud_clear(mm, addr, pud); + + return pud; } #define __HAVE_ARCH_PMDP_SET_WRPROTECT @@ -1137,6 +1159,7 @@ static inline int pud_write(pud_t pud) static inline pmd_t pmdp_establish(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp, pmd_t pmd) { + page_table_check_pmd_set(vma->vm_mm, address, pmdp, pmd); if (IS_ENABLED(CONFIG_SMP)) { return xchg(pmdp, pmd); } else {