From patchwork Wed May 5 00:30:24 2021
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 12238707
From: Rick Edgecombe
To: dave.hansen@intel.com, luto@kernel.org, peterz@infradead.org,
    linux-mm@kvack.org, x86@kernel.org, akpm@linux-foundation.org,
    linux-hardening@vger.kernel.org, kernel-hardening@lists.openwall.com
Cc: ira.weiny@intel.com, rppt@kernel.org, dan.j.williams@intel.com,
    linux-kernel@vger.kernel.org, Rick Edgecombe
Subject: [PATCH RFC 1/9] list: Support getting most recent element in list_lru
Date: Tue, 4 May 2021 17:30:24 -0700
Message-Id: <20210505003032.489164-2-rick.p.edgecombe@intel.com>
In-Reply-To: <20210505003032.489164-1-rick.p.edgecombe@intel.com>
References: <20210505003032.489164-1-rick.p.edgecombe@intel.com>
In future patches, some functionality will use list_lru that also needs to
keep track of the most recently used element on a node. Since this
information is already contained within list_lru, add a function to get it
so that an additional list is not needed in the caller.

Do not support memcg aware list_lru's, since that is not needed by the
intended caller.

Signed-off-by: Rick Edgecombe
---
 include/linux/list_lru.h | 13 +++++++++++++
 mm/list_lru.c            | 28 ++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+)

diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index 9dcaa3e582c9..4bde44a5024b 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -103,6 +103,19 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item);
  */
 bool list_lru_del(struct list_lru *lru, struct list_head *item);
 
+/**
+ * list_lru_get_mru: gets and removes the tail from one of the node lists
+ * @lru: the lru pointer
+ * @nid: the node id
+ *
+ * This function removes the most recently added item from the list of the
+ * node id specified. It should not be used if the list_lru is memcg
+ * aware.
+ *
+ * Return value: the element removed
+ */
+struct list_head *list_lru_get_mru(struct list_lru *lru, int nid);
+
 /**
  * list_lru_count_one: return the number of objects currently held by @lru
  * @lru: the lru pointer.
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 6f067b6b935f..fd5b19dcfc72 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -156,6 +156,34 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item)
 }
 EXPORT_SYMBOL_GPL(list_lru_del);
 
+struct list_head *list_lru_get_mru(struct list_lru *lru, int nid)
+{
+	struct list_lru_node *nlru = &lru->node[nid];
+	struct list_lru_one *l = &nlru->lru;
+	struct list_head *ret;
+
+	/* This function does not attempt to search through the memcg lists */
+	if (list_lru_memcg_aware(lru)) {
+		WARN_ONCE(1, "list_lru: %s not supported on memcg aware list_lrus", __func__);
+		return NULL;
+	}
+
+	spin_lock(&nlru->lock);
+	if (list_empty(&l->list)) {
+		ret = NULL;
+	} else {
+		/* Get tail */
+		ret = l->list.prev;
+		list_del_init(ret);
+
+		l->nr_items--;
+		nlru->nr_items--;
+	}
+	spin_unlock(&nlru->lock);
+
+	return ret;
+}
+
 void list_lru_isolate(struct list_lru_one *list, struct list_head *item)
 {
 	list_del_init(item);

From patchwork Wed May 5 00:30:25 2021
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 12238711
From: Rick Edgecombe
To: dave.hansen@intel.com, luto@kernel.org, peterz@infradead.org,
    linux-mm@kvack.org, x86@kernel.org, akpm@linux-foundation.org,
    linux-hardening@vger.kernel.org, kernel-hardening@lists.openwall.com
Cc: ira.weiny@intel.com, rppt@kernel.org, dan.j.williams@intel.com,
    linux-kernel@vger.kernel.org, Rick Edgecombe
Subject: [PATCH RFC 2/9] list: Support list head not in object for list_lru
Date: Tue, 4 May 2021 17:30:25 -0700
Message-Id: <20210505003032.489164-3-rick.p.edgecombe@intel.com>
In-Reply-To: <20210505003032.489164-1-rick.p.edgecombe@intel.com>
References: <20210505003032.489164-1-rick.p.edgecombe@intel.com>

In future patches, there will be a need to keep track of objects with
list_lru where the list_head is not in the object (it will be in struct
page). Since list_lru automatically determines the node id from the
list_head, this will fail when using struct page.

So create a new function in list_lru, list_lru_add_node(), that allows
the node id of the item to be passed in. Otherwise it behaves exactly
like list_lru_add().
Signed-off-by: Rick Edgecombe
---
 include/linux/list_lru.h | 13 +++++++++++++
 mm/list_lru.c            | 10 ++++++++--
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index 4bde44a5024b..7ad149b22223 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -90,6 +90,19 @@ void memcg_drain_all_list_lrus(int src_idx, struct mem_cgroup *dst_memcg);
  */
 bool list_lru_add(struct list_lru *lru, struct list_head *item);
 
+/**
+ * list_lru_add_node: add an element to the lru list's tail
+ * @lru: the lru pointer
+ * @item: the item to be added.
+ * @nid: the node id of the item
+ *
+ * Like list_lru_add, but takes the node id as a parameter instead of
+ * calculating it from the list_head passed in.
+ *
+ * Return value: true if the list was updated, false otherwise
+ */
+bool list_lru_add_node(struct list_lru *lru, struct list_head *item, int nid);
+
 /**
  * list_lru_del: delete an element to the lru list
  * @list_lru: the lru pointer
diff --git a/mm/list_lru.c b/mm/list_lru.c
index fd5b19dcfc72..8e32a6fc1527 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -112,9 +112,8 @@ list_lru_from_kmem(struct list_lru_node *nlru, void *ptr,
 }
 #endif /* CONFIG_MEMCG_KMEM */
 
-bool list_lru_add(struct list_lru *lru, struct list_head *item)
+bool list_lru_add_node(struct list_lru *lru, struct list_head *item, int nid)
 {
-	int nid = page_to_nid(virt_to_page(item));
 	struct list_lru_node *nlru = &lru->node[nid];
 	struct mem_cgroup *memcg;
 	struct list_lru_one *l;
@@ -134,6 +133,13 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
 	spin_unlock(&nlru->lock);
 	return false;
 }
+
+bool list_lru_add(struct list_lru *lru, struct list_head *item)
+{
+	int nid = page_to_nid(virt_to_page(item));
+
+	return list_lru_add_node(lru, item, nid);
+}
 EXPORT_SYMBOL_GPL(list_lru_add);
 
 bool list_lru_del(struct list_lru *lru, struct list_head *item)

From patchwork Wed May 5 00:30:26 2021
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 12238713
From: Rick Edgecombe
To: dave.hansen@intel.com, luto@kernel.org, peterz@infradead.org,
    linux-mm@kvack.org, x86@kernel.org, akpm@linux-foundation.org,
    linux-hardening@vger.kernel.org, kernel-hardening@lists.openwall.com
Cc: ira.weiny@intel.com, rppt@kernel.org, dan.j.williams@intel.com,
    linux-kernel@vger.kernel.org, Rick Edgecombe
Subject: [PATCH RFC 3/9] x86/mm/cpa: Add grouped page allocations
Date: Tue, 4 May 2021 17:30:26 -0700
Message-Id: <20210505003032.489164-4-rick.p.edgecombe@intel.com>
In-Reply-To: <20210505003032.489164-1-rick.p.edgecombe@intel.com>
References: <20210505003032.489164-1-rick.p.edgecombe@intel.com>
For x86, setting memory permissions on the direct map results in fracturing
large pages. Direct map fracturing can be reduced by locating pages that
will have their permissions set close together.

Create a simple page cache that allocates pages from huge page size
blocks. Don't guarantee that a page will come from a huge page grouping;
instead fall back to non-grouped pages to fulfill the allocation if
needed. Also, register a shrinker so that the system can ask for the pages
back if needed. Since this is only needed when there is a direct map,
compile it out on highmem systems.

Free pages in the cache are tracked in per-node lists inside a list_lru.
NUMA_NO_NODE requests are serviced by checking each per-node list in a
round robin fashion. If pages are requested for a certain node but the
cache is empty for that node, a whole additional huge page size page is
allocated.
Signed-off-by: Rick Edgecombe
---
 arch/x86/include/asm/set_memory.h |  14 +++
 arch/x86/mm/pat/set_memory.c      | 151 ++++++++++++++++++++++++++++++
 2 files changed, 165 insertions(+)

diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 4352f08bfbb5..b63f09cc282a 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -4,6 +4,9 @@
 
 #include
 #include
+#include
+#include
+#include
 
 /*
  * The set_memory_* API can be used to change various attributes of a virtual
@@ -135,4 +138,15 @@ static inline int clear_mce_nospec(unsigned long pfn)
  */
 #endif
 
+struct grouped_page_cache {
+	struct shrinker shrinker;
+	struct list_lru lru;
+	gfp_t gfp;
+	atomic_t nid_round_robin;
+};
+
+int init_grouped_page_cache(struct grouped_page_cache *gpc, gfp_t gfp);
+struct page *get_grouped_page(int node, struct grouped_page_cache *gpc);
+void free_grouped_page(struct grouped_page_cache *gpc, struct page *page);
+
 #endif /* _ASM_X86_SET_MEMORY_H */
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 16f878c26667..6877ef66793b 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2306,6 +2306,157 @@ int __init kernel_unmap_pages_in_pgd(pgd_t *pgd, unsigned long address,
 	return retval;
 }
 
+#ifndef CONFIG_HIGHMEM
+static struct page *__alloc_page_order(int node, gfp_t gfp_mask, int order)
+{
+	if (node == NUMA_NO_NODE)
+		return alloc_pages(gfp_mask, order);
+
+	return alloc_pages_node(node, gfp_mask, order);
+}
+
+static struct grouped_page_cache *__get_gpc_from_sc(struct shrinker *shrinker)
+{
+	return container_of(shrinker, struct grouped_page_cache, shrinker);
+}
+
+static unsigned long grouped_shrink_count(struct shrinker *shrinker,
+					  struct shrink_control *sc)
+{
+	struct grouped_page_cache *gpc = __get_gpc_from_sc(shrinker);
+	unsigned long page_cnt = list_lru_shrink_count(&gpc->lru, sc);
+
+	return page_cnt ? page_cnt : SHRINK_EMPTY;
+}
+
+static enum lru_status grouped_isolate(struct list_head *item,
+				       struct list_lru_one *list,
+				       spinlock_t *lock, void *cb_arg)
+{
+	struct list_head *dispose = cb_arg;
+
+	list_lru_isolate_move(list, item, dispose);
+
+	return LRU_REMOVED;
+}
+
+static void __dispose_pages(struct grouped_page_cache *gpc, struct list_head *head)
+{
+	struct list_head *cur, *next;
+
+	list_for_each_safe(cur, next, head) {
+		struct page *page = list_entry(cur, struct page, lru);
+
+		list_del(cur);
+
+		__free_pages(page, 0);
+	}
+}
+
+static unsigned long grouped_shrink_scan(struct shrinker *shrinker,
+					 struct shrink_control *sc)
+{
+	struct grouped_page_cache *gpc = __get_gpc_from_sc(shrinker);
+	unsigned long isolated;
+	LIST_HEAD(freeable);
+
+	if (!(sc->gfp_mask & gpc->gfp))
+		return SHRINK_STOP;
+
+	isolated = list_lru_shrink_walk(&gpc->lru, sc, grouped_isolate,
+					&freeable);
+	__dispose_pages(gpc, &freeable);
+
+	/* Every item walked gets isolated */
+	sc->nr_scanned += isolated;
+
+	return isolated;
+}
+
+static struct page *__remove_first_page(struct grouped_page_cache *gpc, int node)
+{
+	unsigned int start_nid, i;
+	struct list_head *head;
+
+	if (node != NUMA_NO_NODE) {
+		head = list_lru_get_mru(&gpc->lru, node);
+		if (head)
+			return list_entry(head, struct page, lru);
+		return NULL;
+	}
+
+	/* If NUMA_NO_NODE, search the nodes in round robin for a page */
+	start_nid = (unsigned int)atomic_fetch_inc(&gpc->nid_round_robin) % nr_node_ids;
+	for (i = 0; i < nr_node_ids; i++) {
+		int cur_nid = (start_nid + i) % nr_node_ids;
+
+		head = list_lru_get_mru(&gpc->lru, cur_nid);
+		if (head)
+			return list_entry(head, struct page, lru);
+	}
+
+	return NULL;
+}
+
+/* Get and add some new pages to the cache to be used by VM_GROUP_PAGES */
+static struct page *__replenish_grouped_pages(struct grouped_page_cache *gpc, int node)
+{
+	const unsigned int hpage_cnt = HPAGE_SIZE >> PAGE_SHIFT;
+	struct page *page;
+	int i;
+
+	page = __alloc_page_order(node, gpc->gfp, HUGETLB_PAGE_ORDER);
+	if (!page)
+		return __alloc_page_order(node, gpc->gfp, 0);
+
+	split_page(page, HUGETLB_PAGE_ORDER);
+
+	for (i = 1; i < hpage_cnt; i++)
+		free_grouped_page(gpc, &page[i]);
+
+	return &page[0];
+}
+
+int init_grouped_page_cache(struct grouped_page_cache *gpc, gfp_t gfp)
+{
+	int err;
+
+	memset(gpc, 0, sizeof(struct grouped_page_cache));
+	gpc->gfp = gfp;
+
+	err = list_lru_init(&gpc->lru);
+	if (err)
+		goto out;
+
+	gpc->shrinker.count_objects = grouped_shrink_count;
+	gpc->shrinker.scan_objects = grouped_shrink_scan;
+	gpc->shrinker.seeks = DEFAULT_SEEKS;
+	gpc->shrinker.flags = SHRINKER_NUMA_AWARE;
+
+	err = register_shrinker(&gpc->shrinker);
+	if (err)
+		list_lru_destroy(&gpc->lru);
+
+out:
+	return err;
+}
+
+struct page *get_grouped_page(int node, struct grouped_page_cache *gpc)
+{
+	struct page *page;
+
+	page = __remove_first_page(gpc, node);
+
+	if (page)
+		return page;
+
+	return __replenish_grouped_pages(gpc, node);
+}
+
+void free_grouped_page(struct grouped_page_cache *gpc, struct page *page)
+{
+	INIT_LIST_HEAD(&page->lru);
+	list_lru_add_node(&gpc->lru, &page->lru, page_to_nid(page));
+}
+#endif /* !CONFIG_HIGHMEM */
 /*
  * The testcases use internal knowledge of the implementation that shouldn't
  * be exposed to the rest of the kernel. Include these directly here.
From patchwork Wed May 5 00:30:27 2021
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 12238715
From: Rick Edgecombe
To: dave.hansen@intel.com, luto@kernel.org, peterz@infradead.org,
    linux-mm@kvack.org, x86@kernel.org, akpm@linux-foundation.org,
    linux-hardening@vger.kernel.org, kernel-hardening@lists.openwall.com
Cc: ira.weiny@intel.com, rppt@kernel.org, dan.j.williams@intel.com,
    linux-kernel@vger.kernel.org, Rick Edgecombe
Subject: [PATCH RFC 4/9] mm: Explicitly zero page table lock ptr
Date: Tue, 4 May 2021 17:30:27 -0700
Message-Id: <20210505003032.489164-5-rick.p.edgecombe@intel.com>
In-Reply-To: <20210505003032.489164-1-rick.p.edgecombe@intel.com>
References: <20210505003032.489164-1-rick.p.edgecombe@intel.com>
In ptlock_init() there is a VM_BUG_ON_PAGE() check on the page table lock
pointer. Explicitly zero the lock in ptlock_free() so a page table lock
can be re-initialized without triggering the BUG_ON().

It appears this doesn't normally trigger because the private field shares
the same space in struct page as ptl, and page tables always return to the
buddy allocator before being re-initialized as new page tables. When the
page returns to the buddy allocator, private gets used to store the page
order, so it inadvertently clears ptl as well. In future patches, pages
will get re-initialized as page tables without returning to the buddy
allocator, so this is needed.
Signed-off-by: Rick Edgecombe
---
 mm/memory.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/memory.c b/mm/memory.c
index 5efa07fb6cdc..130f8c1e380a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5225,5 +5225,6 @@ bool ptlock_alloc(struct page *page)
 void ptlock_free(struct page *page)
 {
 	kmem_cache_free(page_ptl_cachep, page->ptl);
+	page->ptl = 0;
 }
 #endif

From patchwork Wed May 5 00:30:28 2021
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 12238717
From: Rick Edgecombe
To: dave.hansen@intel.com, luto@kernel.org, peterz@infradead.org,
    linux-mm@kvack.org, x86@kernel.org, akpm@linux-foundation.org,
    linux-hardening@vger.kernel.org, kernel-hardening@lists.openwall.com
Cc: ira.weiny@intel.com, rppt@kernel.org, dan.j.williams@intel.com,
    linux-kernel@vger.kernel.org, Rick Edgecombe
Subject: [PATCH RFC 5/9] x86, mm: Use cache of page tables
Date: Tue, 4 May 2021 17:30:28 -0700
Message-Id: <20210505003032.489164-6-rick.p.edgecombe@intel.com>
In-Reply-To: <20210505003032.489164-1-rick.p.edgecombe@intel.com>
References: <20210505003032.489164-1-rick.p.edgecombe@intel.com>
(imf03.hostedemail.com: domain of rick.p.edgecombe@intel.com has no SPF policy when checking 192.55.52.115) smtp.mailfrom=rick.p.edgecombe@intel.com; dmarc=fail reason="No valid SPF, No valid DKIM" header.from=intel.com (policy=none) X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: AAD40C0007CC X-Stat-Signature: j1rppkba88dke88fr4o8bdikz31f5eqt Received-SPF: none (intel.com>: No applicable sender policy available) receiver=imf03; identity=mailfrom; envelope-from=""; helo=mga14.intel.com; client-ip=192.55.52.115 X-HE-DKIM-Result: none/none X-HE-Tag: 1620174742-908269 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Change the page table allocation functions defined in pgalloc.h to use a cache of physically grouped pages. This will let the page tables to be set with PKS permissions later. For userspace page tables, they are gathered up using mmu gather, and freed along with other types of pages in swap.c. Reuse the PageTable page flag to communicate that swap needs to return this page to the cache of page tables, and not free it to the page allocator. Set this flag in the free_tlb() family of functions called by mmu gather. Do not set PKS permissions on the page tables, because the page table setting functions cannot handle it yet. This will be done in later patches. 
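The flag-based dispatch described above can be sketched in plain C. This is a userspace model, not kernel code: `fake_page`, the `PG_TABLE` bit, the free list, and the counter are all invented stand-ins; only the shape of the check mirrors the `release_pages()` hunk in this patch.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel's page flag and free paths. */
#define PG_TABLE (1u << 0)

struct fake_page {
	unsigned int flags;
	struct fake_page *next;	/* cache free-list linkage */
};

static struct fake_page *table_cache;	/* pages returned to the table cache */
static int allocator_frees;		/* pages handed back to the allocator */

/* Models free_table(): push the page onto the grouped-page cache. */
static void free_table(struct fake_page *page)
{
	page->next = table_cache;
	table_cache = page;
}

/* Mirrors the release_pages() hunk: flagged pages go back to the cache. */
static void release_page(struct fake_page *page)
{
	if (page->flags & PG_TABLE) {
		page->flags &= ~PG_TABLE;	/* __ClearPageTable() */
		free_table(page);
		return;
	}
	allocator_frees++;			/* __free_pages() */
}
```

The key design point is that the generic free path needs no new plumbing: the per-page flag alone tells it which of the two free routines owns the page.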
Signed-off-by: Rick Edgecombe
---
 arch/x86/include/asm/pgalloc.h |  4 ++
 arch/x86/mm/pgtable.c          | 75 ++++++++++++++++++++++++++++++++++
 include/asm-generic/pgalloc.h  | 42 +++++++++++++++----
 include/linux/mm.h             |  7 ++++
 mm/swap.c                      |  7 ++++
 mm/swap_state.c                |  6 +++
 6 files changed, 132 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h
index 62ad61d6fefc..e38b54853a51 100644
--- a/arch/x86/include/asm/pgalloc.h
+++ b/arch/x86/include/asm/pgalloc.h
@@ -7,6 +7,10 @@
 #include

 #define __HAVE_ARCH_PTE_ALLOC_ONE
+#ifdef CONFIG_PKS_PG_TABLES
+#define __HAVE_ARCH_FREE_TABLE
+#define __HAVE_ARCH_ALLOC_TABLE
+#endif
 #define __HAVE_ARCH_PGD_FREE
 #include
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index f6a9e2e36642..7ccd031d2384 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -6,12 +6,16 @@
 #include
 #include
 #include
+#include
+#include

 #ifdef CONFIG_DYNAMIC_PHYSICAL_MASK
 phys_addr_t physical_mask __ro_after_init = (1ULL << __PHYSICAL_MASK_SHIFT) - 1;
 EXPORT_SYMBOL(physical_mask);
 #endif

+static struct grouped_page_cache gpc_pks;
+static bool pks_page_en;
+
 #ifdef CONFIG_HIGHPTE
 #define PGTABLE_HIGHMEM __GFP_HIGHMEM
 #else
@@ -33,6 +37,46 @@ pgtable_t pte_alloc_one(struct mm_struct *mm)
 	return __pte_alloc_one(mm, __userpte_alloc_gfp);
 }

+#ifdef CONFIG_PKS_PG_TABLES
+struct page *alloc_table(gfp_t gfp)
+{
+	struct page *table;
+
+	if (!pks_page_en)
+		return alloc_page(gfp);
+
+	table = get_grouped_page(numa_node_id(), &gpc_pks);
+	if (!table)
+		return NULL;
+
+	if (gfp & __GFP_ZERO)
+		memset(page_address(table), 0, PAGE_SIZE);
+
+	if (memcg_kmem_enabled() &&
+	    gfp & __GFP_ACCOUNT &&
+	    !__memcg_kmem_charge_page(table, gfp, 0)) {
+		free_table(table);
+		return NULL;
+	}
+
+	VM_BUG_ON_PAGE(*(unsigned long *)&table->ptl, table);
+
+	return table;
+}
+
+void free_table(struct page *table_page)
+{
+	if (!pks_page_en) {
+		__free_pages(table_page, 0);
+		return;
+	}
+
+	if (memcg_kmem_enabled() && PageMemcgKmem(table_page))
+		__memcg_kmem_uncharge_page(table_page, 0);
+	free_grouped_page(&gpc_pks, table_page);
+}
+#endif /* CONFIG_PKS_PG_TABLES */
+
 static int __init setup_userpte(char *arg)
 {
 	if (!arg)
@@ -54,6 +98,8 @@ void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
 {
 	pgtable_pte_page_dtor(pte);
 	paravirt_release_pte(page_to_pfn(pte));
+	/* Set PageTable so swap knows how to free it */
+	__SetPageTable(pte);
 	paravirt_tlb_remove_table(tlb, pte);
 }

@@ -70,12 +116,16 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
 	tlb->need_flush_all = 1;
 #endif
 	pgtable_pmd_page_dtor(page);
+	/* Set PageTable so swap knows how to free it */
+	__SetPageTable(virt_to_page(pmd));
 	paravirt_tlb_remove_table(tlb, page);
 }

 #if CONFIG_PGTABLE_LEVELS > 3
 void ___pud_free_tlb(struct mmu_gather *tlb, pud_t *pud)
 {
+	/* Set PageTable so swap knows how to free it */
+	__SetPageTable(virt_to_page(pud));
 	paravirt_release_pud(__pa(pud) >> PAGE_SHIFT);
 	paravirt_tlb_remove_table(tlb, virt_to_page(pud));
 }
@@ -83,6 +133,8 @@ void ___pud_free_tlb(struct mmu_gather *tlb, pud_t *pud)
 #if CONFIG_PGTABLE_LEVELS > 4
 void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d)
 {
+	/* Set PageTable so swap knows how to free it */
+	__SetPageTable(virt_to_page(p4d));
 	paravirt_release_p4d(__pa(p4d) >> PAGE_SHIFT);
 	paravirt_tlb_remove_table(tlb, virt_to_page(p4d));
 }
@@ -411,12 +463,24 @@ static inline void _pgd_free(pgd_t *pgd)

 static inline pgd_t *_pgd_alloc(void)
 {
+	if (pks_page_en) {
+		struct page *page = alloc_table(GFP_PGTABLE_USER);
+
+		if (!page)
+			return NULL;
+		return page_address(page);
+	}
+
 	return (pgd_t *)__get_free_pages(GFP_PGTABLE_USER,
 					 PGD_ALLOCATION_ORDER);
 }

 static inline void _pgd_free(pgd_t *pgd)
 {
+	if (pks_page_en) {
+		free_table(virt_to_page(pgd));
+		return;
+	}
 	free_pages((unsigned long)pgd, PGD_ALLOCATION_ORDER);
 }
 #endif /* CONFIG_X86_PAE */
@@ -859,6 +923,17 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
 	return 1;
 }

+#ifdef CONFIG_PKS_PG_TABLES
+static int __init pks_page_init(void)
+{
+	pks_page_en = !init_grouped_page_cache(&gpc_pks, GFP_KERNEL | PGTABLE_HIGHMEM);
+
+	return !pks_page_en;
+}
+
+device_initcall(pks_page_init);
+#endif /* CONFIG_PKS_PG_TABLES */
+
 #else /* !CONFIG_X86_64 */

 int pud_free_pmd_page(pud_t *pud, unsigned long addr)
diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index 02932efad3ab..3437db2a2740 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -2,11 +2,26 @@
 #ifndef __ASM_GENERIC_PGALLOC_H
 #define __ASM_GENERIC_PGALLOC_H

+#include
+
 #ifdef CONFIG_MMU

 #define GFP_PGTABLE_KERNEL	(GFP_KERNEL | __GFP_ZERO)
 #define GFP_PGTABLE_USER	(GFP_PGTABLE_KERNEL | __GFP_ACCOUNT)

+#ifndef __HAVE_ARCH_ALLOC_TABLE
+static inline struct page *alloc_table(gfp_t gfp)
+{
+	return alloc_page(gfp);
+}
+#else /* __HAVE_ARCH_ALLOC_TABLE */
+extern struct page *alloc_table(gfp_t gfp);
+#endif /* __HAVE_ARCH_ALLOC_TABLE */
+
+#ifdef __HAVE_ARCH_FREE_TABLE
+extern void free_table(struct page *);
+#endif /* __HAVE_ARCH_FREE_TABLE */
+
 /**
  * __pte_alloc_one_kernel - allocate a page for PTE-level kernel page table
  * @mm: the mm_struct of the current context
@@ -18,7 +33,12 @@
  */
 static inline pte_t *__pte_alloc_one_kernel(struct mm_struct *mm)
 {
-	return (pte_t *)__get_free_page(GFP_PGTABLE_KERNEL);
+	struct page *page = alloc_table(GFP_PGTABLE_KERNEL);
+
+	if (!page)
+		return NULL;
+
+	return (pte_t *)page_address(page);
 }

 #ifndef __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL
@@ -41,7 +61,7 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
  */
 static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
 {
-	free_page((unsigned long)pte);
+	free_table(virt_to_page(pte));
 }

 /**
@@ -60,11 +80,11 @@ static inline pgtable_t __pte_alloc_one(struct mm_struct *mm, gfp_t gfp)
 {
 	struct page *pte;

-	pte = alloc_page(gfp);
+	pte = alloc_table(gfp);
 	if (!pte)
 		return NULL;
 	if (!pgtable_pte_page_ctor(pte)) {
-		__free_page(pte);
+		free_table(pte);
 		return NULL;
 	}

@@ -99,7 +119,7 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
 static inline void pte_free(struct mm_struct *mm, struct page *pte_page)
 {
 	pgtable_pte_page_dtor(pte_page);
-	__free_page(pte_page);
+	free_table(pte_page);
 }

@@ -123,11 +143,11 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
 	if (mm == &init_mm)
 		gfp = GFP_PGTABLE_KERNEL;
-	page = alloc_pages(gfp, 0);
+	page = alloc_table(gfp);
 	if (!page)
 		return NULL;
 	if (!pgtable_pmd_page_ctor(page)) {
-		__free_pages(page, 0);
+		free_table(page);
 		return NULL;
 	}
 	return (pmd_t *)page_address(page);
@@ -139,7 +159,7 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 {
 	BUG_ON((unsigned long)pmd & (PAGE_SIZE-1));
 	pgtable_pmd_page_dtor(virt_to_page(pmd));
-	free_page((unsigned long)pmd);
+	free_table(virt_to_page(pmd));
 }
 #endif

@@ -160,10 +180,14 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
 {
 	gfp_t gfp = GFP_PGTABLE_USER;
+	struct page *table;

 	if (mm == &init_mm)
 		gfp = GFP_PGTABLE_KERNEL;
-	return (pud_t *)get_zeroed_page(gfp);
+	table = alloc_table(gfp);
+	if (!table)
+		return NULL;
+	return (pud_t *)page_address(table);
 }
 #endif
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 64a71bf20536..d6dedfc02aab 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2185,6 +2185,13 @@ static inline bool ptlock_init(struct page *page) { return true; }
 static inline void ptlock_free(struct page *page) {}
 #endif /* USE_SPLIT_PTE_PTLOCKS */

+#ifndef CONFIG_PKS_PG_TABLES
+static inline void free_table(struct page *table_page)
+{
+	__free_pages(table_page, 0);
+}
+#endif /* !CONFIG_PKS_PG_TABLES */
+
 static inline void pgtable_init(void)
 {
 	ptlock_cache_init();
diff --git a/mm/swap.c b/mm/swap.c
index 31b844d4ed94..d6ff697be28e 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -36,6 +36,7 @@
 #include
 #include
 #include
+#include
 #include

 #include "internal.h"
@@ -888,6 +889,12 @@ void release_pages(struct page **pages, int nr)
 			continue;
 		}

+		if (PageTable(page)) {
+			__ClearPageTable(page);
+			free_table(page);
+			continue;
+		}
+
 		if (!put_page_testzero(page))
 			continue;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 3cdee7b11da9..a60ec3d4ab21 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 #include "internal.h"

 /*
@@ -310,6 +311,11 @@ static inline void free_swap_cache(struct page *page)
 void free_page_and_swap_cache(struct page *page)
 {
 	free_swap_cache(page);
+	if (PageTable(page)) {
+		__ClearPageTable(page);
+		free_table(page);
+		return;
+	}
 	if (!is_huge_zero_page(page))
 		put_page(page);
 }

From patchwork Wed May 5 00:30:29 2021
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 12238719
From: Rick Edgecombe
Subject: [PATCH RFC 6/9] x86/mm/cpa: Add set_memory_pks()
Date: Tue, 4 May 2021 17:30:29 -0700
Message-Id: <20210505003032.489164-7-rick.p.edgecombe@intel.com>
In-Reply-To: <20210505003032.489164-1-rick.p.edgecombe@intel.com>
References: <20210505003032.489164-1-rick.p.edgecombe@intel.com>

Add a function for setting a PKS protection key on kernel memory.
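The set/clear masks that set_memory_pks() passes to change_page_attr_set_clr() work because the protection key is a 4-bit field in the PTE: setting the bits of `key` while clearing the bits of its complement writes exactly `key`, whatever the field held before. A small userspace model of that arithmetic (the helper names here are invented; only the mask math matches the patch):

```c
#include <assert.h>

#define PKEY_MASK 0xFu	/* the pkey is a 4-bit field in the PTE */

/* Models what change_page_attr_set_clr() does to one field:
 * clear the "clr" bits, then set the "set" bits. */
static unsigned int apply_set_clr(unsigned int field, unsigned int set,
				  unsigned int clr)
{
	return (field & ~clr) | set;
}

/* set = key, clear = 0xF & ~key, as in set_memory_pks() */
static unsigned int set_pkey_field(unsigned int field, unsigned int key)
{
	return apply_set_clr(field, key & PKEY_MASK, PKEY_MASK & ~key);
}
```

Because `key` and `0xF & ~key` together cover all four bits, the old field value cancels out of the result entirely.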
Signed-off-by: Rick Edgecombe
---
 arch/x86/include/asm/set_memory.h | 1 +
 arch/x86/mm/pat/set_memory.c      | 7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index b63f09cc282a..a2bab1626fdd 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -52,6 +52,7 @@ int set_memory_decrypted(unsigned long addr, int numpages);
 int set_memory_np_noalias(unsigned long addr, int numpages);
 int set_memory_nonglobal(unsigned long addr, int numpages);
 int set_memory_global(unsigned long addr, int numpages);
+int set_memory_pks(unsigned long addr, int numpages, int key);

 int set_pages_array_uc(struct page **pages, int addrinarray);
 int set_pages_array_wc(struct page **pages, int addrinarray);
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 6877ef66793b..29e61afb4a94 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1914,6 +1914,13 @@ int set_memory_wb(unsigned long addr, int numpages)
 }
 EXPORT_SYMBOL(set_memory_wb);

+int set_memory_pks(unsigned long addr, int numpages, int key)
+{
+	return change_page_attr_set_clr(&addr, numpages, __pgprot(_PAGE_PKEY(key)),
+					__pgprot(_PAGE_PKEY(0xF & ~(unsigned int)key)),
+					0, 0, NULL);
+}
+
 int set_memory_x(unsigned long addr, int numpages)
 {
 	if (!(__supported_pte_mask & _PAGE_NX))

From patchwork Wed May 5 00:30:30 2021
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 12238721
From: Rick Edgecombe
Subject: [PATCH RFC 7/9] x86/mm/cpa: Add perm callbacks to grouped pages
Date: Tue, 4 May 2021 17:30:30 -0700
Message-Id: <20210505003032.489164-8-rick.p.edgecombe@intel.com>
In-Reply-To: <20210505003032.489164-1-rick.p.edgecombe@intel.com>
References: <20210505003032.489164-1-rick.p.edgecombe@intel.com>

Future patches will need to set permissions on pages in the cache, so
add callbacks that let grouped page cache callers provide hooks the
component can call when replenishing the cache or freeing pages via the
shrinker.

Signed-off-by: Rick Edgecombe
---
 arch/x86/include/asm/set_memory.h |  8 +++++++-
 arch/x86/mm/pat/set_memory.c      | 26 +++++++++++++++++++++++---
 arch/x86/mm/pgtable.c             |  3 ++-
 3 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index a2bab1626fdd..b370a20681db 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -139,14 +139,20 @@ static inline int clear_mce_nospec(unsigned long pfn)
  */
 #endif

+typedef int (*gpc_callback)(struct page *, unsigned int);
+
 struct grouped_page_cache {
 	struct shrinker shrinker;
 	struct list_lru lru;
 	gfp_t gfp;
+	gpc_callback pre_add_to_cache;
+	gpc_callback pre_shrink_free;
 	atomic_t nid_round_robin;
 };

-int init_grouped_page_cache(struct grouped_page_cache *gpc, gfp_t gfp);
+int init_grouped_page_cache(struct grouped_page_cache *gpc, gfp_t gfp,
+			    gpc_callback pre_add_to_cache,
+			    gpc_callback pre_shrink_free);
 struct page *get_grouped_page(int node, struct grouped_page_cache *gpc);
 void free_grouped_page(struct grouped_page_cache *gpc, struct page *page);

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 29e61afb4a94..6387499c855d 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2356,6 +2356,9 @@ static void __dispose_pages(struct grouped_page_cache *gpc, struct list_head *head)
 		list_del(cur);

+		if (gpc->pre_shrink_free)
+			gpc->pre_shrink_free(page, 1);
+
 		__free_pages(page, 0);
 	}
 }
@@ -2413,18 +2416,33 @@ static struct page *__replenish_grouped_pages(struct grouped_page_cache *gpc, int node)
 	int i;

 	page = __alloc_page_order(node, gpc->gfp, HUGETLB_PAGE_ORDER);
-	if (!page)
-		return __alloc_page_order(node, gpc->gfp, 0);
+	if (!page) {
+		page = __alloc_page_order(node, gpc->gfp, 0);
+		if (page && gpc->pre_add_to_cache &&
+		    gpc->pre_add_to_cache(page, 1)) {
+			__free_pages(page, 0);
+			return NULL;
+		}
+		return page;
+	}

 	split_page(page, HUGETLB_PAGE_ORDER);

+	/* If the pages cannot be prepared for the cache, clean up and free */
+	if (gpc->pre_add_to_cache && gpc->pre_add_to_cache(page, hpage_cnt)) {
+		if (gpc->pre_shrink_free)
+			gpc->pre_shrink_free(page, hpage_cnt);
+		for (i = 0; i < hpage_cnt; i++)
+			__free_pages(&page[i], 0);
+		return NULL;
+	}
+
 	for (i = 1; i < hpage_cnt; i++)
 		free_grouped_page(gpc, &page[i]);

 	return &page[0];
 }

-int init_grouped_page_cache(struct grouped_page_cache *gpc, gfp_t gfp)
+int init_grouped_page_cache(struct grouped_page_cache *gpc, gfp_t gfp,
+			    gpc_callback pre_add_to_cache,
+			    gpc_callback pre_shrink_free)
 {
 	int err = 0;

@@ -2442,6 +2460,8 @@ int init_grouped_page_cache(struct grouped_page_cache *gpc, gfp_t gfp)
 	if (err)
 		list_lru_destroy(&gpc->lru);

+	gpc->pre_add_to_cache = pre_add_to_cache;
+	gpc->pre_shrink_free = pre_shrink_free;
 out:
 	return err;
 }
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 7ccd031d2384..bcef1f458b75 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -926,7 +926,8 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
 #ifdef CONFIG_PKS_PG_TABLES
 static int __init pks_page_init(void)
 {
-	pks_page_en = !init_grouped_page_cache(&gpc_pks, GFP_KERNEL | PGTABLE_HIGHMEM);
+	pks_page_en = !init_grouped_page_cache(&gpc_pks, GFP_KERNEL | PGTABLE_HIGHMEM,
+					       NULL, NULL);

 	return !pks_page_en;

From patchwork Wed May 5 00:30:31 2021
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 12238723
From: Rick Edgecombe
Subject: [PATCH RFC 8/9] x86, mm: Protect page tables with PKS
Date: Tue, 4 May 2021 17:30:31 -0700
Message-Id: <20210505003032.489164-9-rick.p.edgecombe@intel.com>
In-Reply-To: <20210505003032.489164-1-rick.p.edgecombe@intel.com>
References: <20210505003032.489164-1-rick.p.edgecombe@intel.com>

Write protect page tables with PKS. Toggle writability inside the page
table modification functions defined in pgtable.h.

Do not protect the direct map page tables, as that is more complicated
and will come in a later patch.
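The enable/disable pairing this patch wraps around every page table write can be modeled in userspace. The bit layout follows the pkeys register format (two bits per key: access-disable, then write-disable); the function and macro names echo the patch, but this is a single-threaded illustrative model, not the kernel implementation (the real toggles act on a per-CPU MSR and must be preemption-safe):

```c
#include <assert.h>

/* Model of the per-key PKRS bits: 2 bits per key, WD is the high bit
 * of the pair. STATIC_TABLE_KEY mirrors the key the series reserves. */
#define PKR_WD_BIT(key)  (1u << ((key) * 2 + 1))
#define STATIC_TABLE_KEY 1

/* Page tables start out write-protected. */
static unsigned int pkrs = PKR_WD_BIT(STATIC_TABLE_KEY);

static void enable_pgtable_write(void)
{
	pkrs &= ~PKR_WD_BIT(STATIC_TABLE_KEY);	/* drop write-disable */
}

static void disable_pgtable_write(void)
{
	pkrs |= PKR_WD_BIT(STATIC_TABLE_KEY);	/* restore write-disable */
}

/* Mirrors the shape of the patched setters: open the window, write,
 * close the window again. */
static void set_pte(unsigned long *ptep, unsigned long pte)
{
	enable_pgtable_write();
	assert(!(pkrs & PKR_WD_BIT(STATIC_TABLE_KEY)));	/* write allowed here */
	*ptep = pte;
	disable_pgtable_write();
}
```

The point of the pairing is that the write-disable bit is only ever clear for the duration of one intended store, so a stray write to a page table from anywhere else in the kernel faults.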
Signed-off-by: Rick Edgecombe
---
 arch/x86/boot/compressed/ident_map_64.c |  5 ++
 arch/x86/include/asm/pgalloc.h          |  2 +
 arch/x86/include/asm/pgtable.h          | 26 ++++++++-
 arch/x86/include/asm/pgtable_64.h       | 33 ++++++++++--
 arch/x86/include/asm/pkeys_common.h     |  8 ++-
 arch/x86/mm/pgtable.c                   | 72 ++++++++++++++++++++++---
 mm/Kconfig                              |  6 ++-
 7 files changed, 140 insertions(+), 12 deletions(-)

diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index f7213d0943b8..2999be8f9347 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -349,3 +349,8 @@ void do_boot_page_fault(struct pt_regs *regs, unsigned long error_code)
 	 */
 	add_identity_map(address, end);
 }
+
+#ifdef CONFIG_PKS_PG_TABLES
+void enable_pgtable_write(void) {}
+void disable_pgtable_write(void) {}
+#endif
diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h
index e38b54853a51..f1062d23d7c7 100644
--- a/arch/x86/include/asm/pgalloc.h
+++ b/arch/x86/include/asm/pgalloc.h
@@ -6,6 +6,8 @@
 #include	/* for struct page */
 #include

+#define STATIC_TABLE_KEY 1
+
 #define __HAVE_ARCH_PTE_ALLOC_ONE
 #ifdef CONFIG_PKS_PG_TABLES
 #define __HAVE_ARCH_FREE_TABLE
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index b1529b44a996..da6bae8bef7a 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -117,6 +117,14 @@ extern pmdval_t early_pmd_flags;
 #define arch_end_context_switch(prev)	do {} while(0)
 #endif	/* CONFIG_PARAVIRT_XXL */

+#ifdef CONFIG_PKS_PG_TABLES
+void enable_pgtable_write(void);
+void disable_pgtable_write(void);
+#else /* CONFIG_PKS_PG_TABLES */
+static inline void enable_pgtable_write(void) { }
+static inline void disable_pgtable_write(void) { }
+#endif /* CONFIG_PKS_PG_TABLES */
+
 /*
  * The following only work if pte_present() is true.
  * Undefined behaviour if not..
@@ -1102,7 +1110,9 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
 static inline void ptep_set_wrprotect(struct mm_struct *mm,
 				      unsigned long addr, pte_t *ptep)
 {
+	enable_pgtable_write();
 	clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte);
+	disable_pgtable_write();
 }

 #define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)
@@ -1152,7 +1162,9 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 				      unsigned long addr, pmd_t *pmdp)
 {
+	enable_pgtable_write();
 	clear_bit(_PAGE_BIT_RW, (unsigned long *)pmdp);
+	disable_pgtable_write();
 }

 #define pud_write pud_write
@@ -1167,10 +1179,18 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
 		unsigned long address, pmd_t *pmdp, pmd_t pmd)
 {
 	if (IS_ENABLED(CONFIG_SMP)) {
-		return xchg(pmdp, pmd);
+		pmd_t ret;
+
+		enable_pgtable_write();
+		ret = xchg(pmdp, pmd);
+		disable_pgtable_write();
+
+		return ret;
 	} else {
 		pmd_t old = *pmdp;
+		enable_pgtable_write();
 		WRITE_ONCE(*pmdp, pmd);
+		disable_pgtable_write();
 		return old;
 	}
 }
@@ -1253,13 +1273,17 @@ static inline p4d_t *user_to_kernel_p4dp(p4d_t *p4dp)
  */
 static inline void clone_pgd_range(pgd_t *dst, pgd_t *src, int count)
 {
+	enable_pgtable_write();
 	memcpy(dst, src, count * sizeof(pgd_t));
+	disable_pgtable_write();
 #ifdef CONFIG_PAGE_TABLE_ISOLATION
 	if (!static_cpu_has(X86_FEATURE_PTI))
 		return;
 	/* Clone the user space pgd as well */
+	enable_pgtable_write();
 	memcpy(kernel_to_user_pgdp(dst), kernel_to_user_pgdp(src),
 	       count * sizeof(pgd_t));
+	disable_pgtable_write();
 #endif
 }
diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
index 56d0399a0cd1..a287f3c8a0a3 100644
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -64,7 +64,9 @@ void set_pte_vaddr_pud(pud_t *pud_page, unsigned long vaddr, pte_t new_pte);

 static inline void native_set_pte(pte_t *ptep, pte_t pte)
 {
+	enable_pgtable_write();
 	WRITE_ONCE(*ptep, pte);
+	disable_pgtable_write();
 }

 static inline void native_pte_clear(struct mm_struct *mm, unsigned long addr,
@@ -80,7 +82,9 @@ static inline void native_set_pte_atomic(pte_t *ptep, pte_t pte)

 static inline void native_set_pmd(pmd_t *pmdp, pmd_t pmd)
 {
+	enable_pgtable_write();
 	WRITE_ONCE(*pmdp, pmd);
+	disable_pgtable_write();
 }

 static inline void native_pmd_clear(pmd_t *pmd)
@@ -91,7 +95,12 @@ static inline void native_pmd_clear(pmd_t *pmd)
 static inline pte_t native_ptep_get_and_clear(pte_t *xp)
 {
 #ifdef CONFIG_SMP
-	return native_make_pte(xchg(&xp->pte, 0));
+	pteval_t pte_val;
+
+	enable_pgtable_write();
+	pte_val = xchg(&xp->pte, 0);
+	disable_pgtable_write();
+	return native_make_pte(pte_val);
 #else
 	/* native_local_ptep_get_and_clear,
 	   but duplicated because of cyclic dependency */
@@ -104,7 +113,12 @@ static inline pte_t native_ptep_get_and_clear(pte_t *xp)
 static inline pmd_t native_pmdp_get_and_clear(pmd_t *xp)
 {
 #ifdef CONFIG_SMP
-	return native_make_pmd(xchg(&xp->pmd, 0));
+	pmdval_t pmd_val;
+
+	enable_pgtable_write();
+	pmd_val = xchg(&xp->pmd, 0);
+	disable_pgtable_write();
+	return native_make_pmd(pmd_val);
 #else
 	/* native_local_pmdp_get_and_clear,
 	   but duplicated because of cyclic dependency */
@@ -116,7 +130,9 @@ static inline pmd_t native_pmdp_get_and_clear(pmd_t *xp)

 static inline void native_set_pud(pud_t *pudp, pud_t pud)
 {
+	enable_pgtable_write();
 	WRITE_ONCE(*pudp, pud);
+	disable_pgtable_write();
 }

 static inline void native_pud_clear(pud_t *pud)
@@ -127,7 +143,12 @@ static inline void native_pud_clear(pud_t *pud)
 static inline pud_t native_pudp_get_and_clear(pud_t *xp)
 {
 #ifdef CONFIG_SMP
-	return native_make_pud(xchg(&xp->pud, 0));
+	pudval_t pud_val;
+
+	enable_pgtable_write();
+	pud_val = xchg(&xp->pud, 0);
+	disable_pgtable_write();
+	return native_make_pud(pud_val);
 #else
 	/* native_local_pudp_get_and_clear,
 	 * but duplicated because of cyclic dependency
@@ -144,13 +165,17 @@ static inline void native_set_p4d(p4d_t *p4dp, p4d_t p4d)
 	pgd_t pgd;

 	if (pgtable_l5_enabled() || !IS_ENABLED(CONFIG_PAGE_TABLE_ISOLATION)) {
+		enable_pgtable_write();
 		WRITE_ONCE(*p4dp, p4d);
+		disable_pgtable_write();
 		return;
 	}

 	pgd = native_make_pgd(native_p4d_val(p4d));
 	pgd = pti_set_user_pgtbl((pgd_t *)p4dp, pgd);
+	enable_pgtable_write();
 	WRITE_ONCE(*p4dp, native_make_p4d(native_pgd_val(pgd)));
+	disable_pgtable_write();
 }

 static inline void native_p4d_clear(p4d_t *p4d)
@@ -160,7 +185,9 @@ static inline void native_p4d_clear(p4d_t *p4d)

 static inline void native_set_pgd(pgd_t *pgdp, pgd_t pgd)
 {
+	enable_pgtable_write();
 	WRITE_ONCE(*pgdp, pti_set_user_pgtbl(pgdp, pgd));
+	disable_pgtable_write();
 }

 static inline void native_pgd_clear(pgd_t *pgd)
diff --git a/arch/x86/include/asm/pkeys_common.h b/arch/x86/include/asm/pkeys_common.h
index 6917f1a27479..5682a922d60f 100644
--- a/arch/x86/include/asm/pkeys_common.h
+++ b/arch/x86/include/asm/pkeys_common.h
@@ -25,7 +25,13 @@
  *
  * NOTE: This needs to be a macro to be used as part of the INIT_THREAD macro.
  */
-#define INIT_PKRS_VALUE (PKR_AD_KEY(1) | PKR_AD_KEY(2) | PKR_AD_KEY(3) | \
+
+/*
+ * HACK: There is no global pkeys support yet. We want the page table key to be
+ * read only, not disabled. Assume the page table key will be key 1 and set it
+ * WD in the default mask.
+ */ +#define INIT_PKRS_VALUE (PKR_WD_KEY(1) | PKR_AD_KEY(2) | PKR_AD_KEY(3) | \ PKR_AD_KEY(4) | PKR_AD_KEY(5) | PKR_AD_KEY(6) | \ PKR_AD_KEY(7) | PKR_AD_KEY(8) | PKR_AD_KEY(9) | \ PKR_AD_KEY(10) | PKR_AD_KEY(11) | PKR_AD_KEY(12) | \ diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index bcef1f458b75..6e536fe77943 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -7,6 +7,7 @@ #include #include #include +#include #include #ifdef CONFIG_DYNAMIC_PHYSICAL_MASK @@ -16,6 +17,7 @@ EXPORT_SYMBOL(physical_mask); static struct grouped_page_cache gpc_pks; static bool pks_page_en; + #ifdef CONFIG_HIGHPTE #define PGTABLE_HIGHMEM __GFP_HIGHMEM #else @@ -49,8 +51,11 @@ struct page *alloc_table(gfp_t gfp) if (!table) return NULL; - if (gfp & __GFP_ZERO) + if (gfp & __GFP_ZERO) { + enable_pgtable_write(); memset(page_address(table), 0, PAGE_SIZE); + disable_pgtable_write(); + } if (memcg_kmem_enabled() && gfp & __GFP_ACCOUNT && @@ -607,9 +612,12 @@ int ptep_test_and_clear_young(struct vm_area_struct *vma, { int ret = 0; - if (pte_young(*ptep)) + if (pte_young(*ptep)) { + enable_pgtable_write(); ret = test_and_clear_bit(_PAGE_BIT_ACCESSED, (unsigned long *) &ptep->pte); + disable_pgtable_write(); + } return ret; } @@ -620,9 +628,12 @@ int pmdp_test_and_clear_young(struct vm_area_struct *vma, { int ret = 0; - if (pmd_young(*pmdp)) + if (pmd_young(*pmdp)) { + enable_pgtable_write(); ret = test_and_clear_bit(_PAGE_BIT_ACCESSED, (unsigned long *)pmdp); + disable_pgtable_write(); + } return ret; } @@ -631,9 +642,12 @@ int pudp_test_and_clear_young(struct vm_area_struct *vma, { int ret = 0; - if (pud_young(*pudp)) + if (pud_young(*pudp)) { + enable_pgtable_write(); ret = test_and_clear_bit(_PAGE_BIT_ACCESSED, (unsigned long *)pudp); + disable_pgtable_write(); + } return ret; } @@ -642,6 +656,7 @@ int pudp_test_and_clear_young(struct vm_area_struct *vma, int ptep_clear_flush_young(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) { + int ret; /* * On 
x86 CPUs, clearing the accessed bit without a TLB flush * doesn't cause data corruption. [ It could cause incorrect @@ -655,7 +670,10 @@ int ptep_clear_flush_young(struct vm_area_struct *vma, * shouldn't really matter because there's no real memory * pressure for swapout to react to. ] */ - return ptep_test_and_clear_young(vma, address, ptep); + enable_pgtable_write(); + ret = ptep_test_and_clear_young(vma, address, ptep); + disable_pgtable_write(); + return ret; } #ifdef CONFIG_TRANSPARENT_HUGEPAGE @@ -666,7 +684,9 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma, VM_BUG_ON(address & ~HPAGE_PMD_MASK); + enable_pgtable_write(); young = pmdp_test_and_clear_young(vma, address, pmdp); + disable_pgtable_write(); if (young) flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE); @@ -924,10 +944,50 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr) } #ifdef CONFIG_PKS_PG_TABLES +static int pks_key; + +static int _pks_protect(struct page *page, unsigned int cnt) +{ + /* TODO: do this in one step */ + if (set_memory_4k((unsigned long)page_address(page), cnt)) + return 1; + set_memory_pks((unsigned long)page_address(page), cnt, pks_key); + return 0; +} + +static int _pks_unprotect(struct page *page, unsigned int cnt) +{ + set_memory_pks((unsigned long)page_address(page), cnt, 0); + return 0; +} + +void enable_pgtable_write(void) +{ + if (pks_page_en) + pks_mk_readwrite(STATIC_TABLE_KEY); +} + +void disable_pgtable_write(void) +{ + if (pks_page_en) + pks_mk_readonly(STATIC_TABLE_KEY); +} + static int __init pks_page_init(void) { + /* + * TODO: Needs global keys to be initially set globally readable, for now + * warn if its not the expected static key + */ + pks_key = pks_key_alloc("PKS protected page tables"); + if (pks_key < 0) + goto out; + WARN_ON(pks_key != STATIC_TABLE_KEY); + pks_page_en = !init_grouped_page_cache(&gpc_pks, GFP_KERNEL | PGTABLE_HIGHMEM, - NULL, NULL); + _pks_protect, _pks_unprotect); + if (!pks_page_en) + pks_key_free(pks_key); out: 
return !pks_page_en; diff --git a/mm/Kconfig b/mm/Kconfig index 463e95ea0df1..0a856332fd38 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -812,7 +812,11 @@ config ARCH_HAS_SUPERVISOR_PKEYS bool config ARCH_ENABLE_SUPERVISOR_PKEYS def_bool y - depends on PKS_TEST + depends on PKS_TEST || PKS_PG_TABLES + +config PKS_PG_TABLES + def_bool y + depends on !PAGE_TABLE_ISOLATION && !HIGHMEM && !X86_PAE && PGTABLE_LEVELS = 4 config PERCPU_STATS bool "Collect percpu memory statistics" From patchwork Wed May 5 00:30:32 2021 X-Patchwork-Submitter: Rick Edgecombe X-Patchwork-Id: 12238725 X-Delivered-To:
linux-mm@kvack.org From: Rick Edgecombe To: dave.hansen@intel.com, luto@kernel.org, peterz@infradead.org, linux-mm@kvack.org, x86@kernel.org, akpm@linux-foundation.org, linux-hardening@vger.kernel.org, kernel-hardening@lists.openwall.com Cc: ira.weiny@intel.com, rppt@kernel.org, dan.j.williams@intel.com, linux-kernel@vger.kernel.org, Rick Edgecombe Subject: [PATCH RFC 9/9] x86, cpa: PKS protect direct map page tables Date: Tue, 4 May 2021 17:30:32 -0700 Message-Id: <20210505003032.489164-10-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210505003032.489164-1-rick.p.edgecombe@intel.com> References:
<20210505003032.489164-1-rick.p.edgecombe@intel.com> Protecting direct map page tables is a bit more difficult because a page table may be needed for a page split as part of setting the PKS permission on the new page table. So, in the case of an empty cache of page tables, the page table allocator could get into a situation where it cannot create any more page tables. Several solutions were looked at: 1. Break the direct map with pages allocated from the large page being converted to PKS. This would result in a window where the table could be written to right before it was linked into the page tables. It also depends on high order pages being available, and so would regress from the unprotected behavior in that respect. 2. Hold some page tables in reserve to be able to break the large page for a new 2MB page, but if there are no 2MB pages available we may need to add a single page to the cache, in which case we would use up the reserve of page tables needed to break a new page, but not get enough page tables back to replenish the reserve. 3. Always map the direct map at 4k when protecting page tables so that pages don't need to be broken to map them with a PKS key.
This would have an undesirable performance cost. 4. Lastly, the strategy employed in this patch: have a separate cache of page tables just used for the direct map. Early in boot, squirrel away enough page tables to map the direct map at 4k. This comes with the same memory overhead as mapping the direct map at 4k, but gets the other benefits of mapping the direct map as large pages. Some direct map page tables currently still escape protection, so there are a few todos. It is a rough sketch of the idea. Signed-off-by: Rick Edgecombe --- arch/x86/include/asm/set_memory.h | 2 + arch/x86/mm/init.c | 40 +++++++++ arch/x86/mm/pat/set_memory.c | 134 +++++++++++++++++++++++++++++- 3 files changed, 172 insertions(+), 4 deletions(-) diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h index b370a20681db..55e2add0452b 100644 --- a/arch/x86/include/asm/set_memory.h +++ b/arch/x86/include/asm/set_memory.h @@ -90,6 +90,8 @@ bool kernel_page_present(struct page *page); extern int kernel_set_to_readonly; +void add_pks_table(unsigned long addr); + #ifdef CONFIG_X86_64 /* * Prevent speculative access to the page by either unmapping diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c index dd694fb93916..09ae02003151 100644 --- a/arch/x86/mm/init.c +++ b/arch/x86/mm/init.c @@ -26,6 +26,7 @@ #include #include #include +#include /* * We need to define the tracepoints somewhere, and tlb.c @@ -119,6 +120,8 @@ __ref void *alloc_low_pages(unsigned int num) if (after_bootmem) { unsigned int order; + WARN_ON(IS_ENABLED(CONFIG_PKS_PG_TABLES)); + /* TODO: When does this happen, how to deal with the order?
*/ order = get_order((unsigned long)num << PAGE_SHIFT); return (void *)__get_free_pages(GFP_ATOMIC | __GFP_ZERO, order); } @@ -153,6 +156,11 @@ __ref void *alloc_low_pages(unsigned int num) clear_page(adr); } + printk("Allocing un-protected page table: %lx\n", (unsigned long)__va(pfn << PAGE_SHIFT)); + /* + * TODO: Save the va of this table to PKS protect post boot, but we need a small allocation + * for the list... + */ return __va(pfn << PAGE_SHIFT); } @@ -532,6 +540,36 @@ unsigned long __ref init_memory_mapping(unsigned long start, return ret >> PAGE_SHIFT; } +/* TODO: Check this math */ +static u64 calc_tables_needed(unsigned int size) +{ + unsigned int puds = size >> PUD_SHIFT; + unsigned int pmds = size >> PMD_SHIFT; + unsigned int needed_to_map_tables = 0; //?? + + return puds + pmds + needed_to_map_tables; +} + +static void __init reserve_page_tables(u64 start, u64 end) +{ + u64 reserve_size = calc_tables_needed(end - start); + u64 reserved = 0; + u64 cur; + int i; + + while (reserved < reserve_size) { + cur = memblock_find_in_range(start, end, HPAGE_SIZE, HPAGE_SIZE); + if (!cur) { + WARN(1, "Could not reserve HPAGE size page tables"); + return; + } + memblock_reserve(cur, HPAGE_SIZE); + for (i = 0; i < HPAGE_SIZE; i += PAGE_SIZE) + add_pks_table((long unsigned int)__va(cur + i)); + reserved += HPAGE_SIZE; + } +} + /* * We need to iterate through the E820 memory map and create direct mappings * for only E820_TYPE_RAM and E820_KERN_RESERVED regions. 
We cannot simply @@ -568,6 +606,8 @@ static unsigned long __init init_range_memory_mapping( init_memory_mapping(start, end, PAGE_KERNEL); mapped_ram_size += end - start; can_use_brk_pgt = true; + if (IS_ENABLED(CONFIG_PKS_PG_TABLES)) + reserve_page_tables(start, end); } return mapped_ram_size; diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index 6387499c855d..a5d21a664c98 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -69,6 +69,90 @@ static DEFINE_SPINLOCK(cpa_lock); #define CPA_PAGES_ARRAY 4 #define CPA_NO_CHECK_ALIAS 8 /* Do not search for aliases */ +#ifdef CONFIG_PKS_PG_TABLES +static LLIST_HEAD(tables_cache); +static LLIST_HEAD(tables_to_covert); +static bool tables_inited; + +struct pks_table_llnode { + struct llist_node node; + void *table; +}; + +static void __add_dmap_table_to_convert(void *table, struct pks_table_llnode *ob) +{ + ob->table = table; + llist_add(&ob->node, &tables_to_covert); +} + +static void add_dmap_table_to_convert(void *table) +{ + struct pks_table_llnode *ob; + + ob = kmalloc(sizeof(*ob), GFP_KERNEL); + + WARN(!ob, "Page table unprotected\n"); + + __add_dmap_table_to_convert(table, ob); +} + +void add_pks_table(unsigned long addr) +{ + struct llist_node *node = (struct llist_node *)addr; + + enable_pgtable_write(); + llist_add(node, &tables_cache); + disable_pgtable_write(); +} + +static void *get_pks_table(void) +{ + return llist_del_first(&tables_cache); +} + +static void *_alloc_dmap_table(void) +{ + struct page *page = alloc_pages(GFP_KERNEL, 0); + + if (!page) + return NULL; + + return page_address(page); +} + +static struct page *alloc_dmap_table(void) +{ + void *tablep = get_pks_table(); + + /* Fall back to un-protected table is something went wrong */ + if (!tablep) { + if (tables_inited) + WARN(1, "Allocating unprotected direct map table\n"); + tablep = _alloc_dmap_table(); + } + + if (tablep && !tables_inited) + add_dmap_table_to_convert(tablep); + + return 
virt_to_page(tablep); +} + +static void free_dmap_table(struct page *table) +{ + add_pks_table((unsigned long)virt_to_page(table)); +} +#else /* CONFIG_PKS_PG_TABLES */ +static struct page *alloc_dmap_table(void) +{ + return alloc_pages(GFP_KERNEL, 0); +} + +static void free_dmap_table(struct page *table) +{ + __free_page(table); +} +#endif + static inline pgprot_t cachemode2pgprot(enum page_cache_mode pcm) { return __pgprot(cachemode2protval(pcm)); @@ -1068,14 +1152,15 @@ static int split_large_page(struct cpa_data *cpa, pte_t *kpte, if (!debug_pagealloc_enabled()) spin_unlock(&cpa_lock); - base = alloc_pages(GFP_KERNEL, 0); + base = alloc_dmap_table(); + if (!debug_pagealloc_enabled()) spin_lock(&cpa_lock); if (!base) return -ENOMEM; if (__split_large_page(cpa, kpte, address, base)) - __free_page(base); + free_dmap_table(base); return 0; } @@ -1088,7 +1173,7 @@ static bool try_to_free_pte_page(pte_t *pte) if (!pte_none(pte[i])) return false; - free_page((unsigned long)pte); + free_dmap_table(virt_to_page(pte)); return true; } @@ -1100,7 +1185,7 @@ static bool try_to_free_pmd_page(pmd_t *pmd) if (!pmd_none(pmd[i])) return false; - free_page((unsigned long)pmd); + free_dmap_table(virt_to_page(pmd)); return true; } @@ -2484,6 +2569,47 @@ void free_grouped_page(struct grouped_page_cache *gpc, struct page *page) list_lru_add_node(&gpc->lru, &page->lru, page_to_nid(page)); } #endif /* !HIGHMEM */ + +#ifdef CONFIG_PKS_PG_TABLES +/* PKS protect reserved dmap tables */ +static int __init init_pks_dmap_tables(void) +{ + struct pks_table_llnode *cur_entry; + static LLIST_HEAD(from_cache); + struct pks_table_llnode *tmp; + struct llist_node *cur, *next; + + llist_for_each_safe(cur, next, llist_del_all(&tables_cache)) + llist_add(cur, &from_cache); + + while ((cur = llist_del_first(&from_cache))) { + llist_add(cur, &tables_cache); + + tmp = kmalloc(sizeof(*tmp), GFP_KERNEL); + if (!tmp) + goto out_err; + tmp->table = cur; + llist_add(&tmp->node, &tables_to_covert); + } + + 
tables_inited = true; + + while ((cur = llist_del_first(&tables_to_covert))) { + cur_entry = llist_entry(cur, struct pks_table_llnode, node); + set_memory_pks((unsigned long)cur_entry->table, 1, STATIC_TABLE_KEY); + kfree(cur_entry); + } + + return 0; +out_err: + WARN(1, "Unable to protect all page tables\n"); + llist_add(llist_del_all(&from_cache), &tables_cache); + return 0; +} + +device_initcall(init_pks_dmap_tables); +#endif + /* * The testcases use internal knowledge of the implementation that shouldn't * be exposed to the rest of the kernel. Include these directly here.