From patchwork Mon Dec 13 14:27:01 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12674035
From: "Matthew Wilcox (Oracle)"
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, Thomas Gleixner,
    linux-hardening@vger.kernel.org
Subject: [PATCH v3 1/3] mm/usercopy: Check kmap addresses properly
Date: Mon, 13 Dec 2021 14:27:01 +0000
Message-Id: <20211213142703.3066590-2-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20211213142703.3066590-1-willy@infradead.org>
References: <20211213142703.3066590-1-willy@infradead.org>

If you are copying to an address in the kmap region, you may not copy
across a page boundary, no matter what the size of the underlying
allocation.  You can't kmap() a slab page because slab pages always
come from low memory.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kees Cook
---
 arch/x86/include/asm/highmem.h   |  1 +
 include/linux/highmem-internal.h | 10 ++++++++++
 mm/usercopy.c                    | 16 ++++++++++------
 3 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index 032e020853aa..731ee7cc40a5 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include

 /* declarations for highmem.c */
 extern unsigned long highstart_pfn, highend_pfn;
diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
index 0a0b2b09b1b8..01fb76d101b0 100644
--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -149,6 +149,11 @@ static inline void totalhigh_pages_add(long count)
 	atomic_long_add(count, &_totalhigh_pages);
 }

+static inline bool is_kmap_addr(const void *x)
+{
+	unsigned long addr = (unsigned long)x;
+	return addr >= PKMAP_ADDR(0) && addr < PKMAP_ADDR(LAST_PKMAP);
+}
 #else /* CONFIG_HIGHMEM */

 static inline struct page *kmap_to_page(void *addr)
@@ -234,6 +239,11 @@ static inline void __kunmap_atomic(void *addr)
 static inline unsigned int nr_free_highpages(void) { return 0; }
 static inline unsigned long totalhigh_pages(void) { return 0UL; }

+static inline bool is_kmap_addr(const void *x)
+{
+	return false;
+}
+
 #endif /* CONFIG_HIGHMEM */

 /*
diff --git a/mm/usercopy.c b/mm/usercopy.c
index b3de3c4eefba..8c039302465f 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -228,12 +228,16 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (!virt_addr_valid(ptr))
 		return;

-	/*
-	 * When CONFIG_HIGHMEM=y, kmap_to_page() will give either the
-	 * highmem page or fallback to virt_to_page(). The following
-	 * is effectively a highmem-aware virt_to_head_page().
-	 */
-	page = compound_head(kmap_to_page((void *)ptr));
+	if (is_kmap_addr(ptr)) {
+		unsigned long page_end = (unsigned long)ptr | (PAGE_SIZE - 1);
+
+		if ((unsigned long)ptr + n - 1 > page_end)
+			usercopy_abort("kmap", NULL, to_user,
+				       offset_in_page(ptr), n);
+		return;
+	}
+
+	page = virt_to_head_page(ptr);

 	if (PageSlab(page)) {
 		/* Check slab allocator for flags and size. */

From patchwork Mon Dec 13 14:27:02 2021
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12674033
From: "Matthew Wilcox (Oracle)"
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, Thomas Gleixner,
    linux-hardening@vger.kernel.org
Subject: [PATCH v3 2/3] mm/usercopy: Detect vmalloc overruns
Date: Mon, 13 Dec 2021 14:27:02 +0000
Message-Id: <20211213142703.3066590-3-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20211213142703.3066590-1-willy@infradead.org>
References: <20211213142703.3066590-1-willy@infradead.org>

If you have a vmalloc() allocation, or an address from calling vmap(),
you cannot overrun the vm_area which describes it, regardless of the
size of the underlying allocation.
This probably doesn't do much for security because vmalloc comes with
guard pages these days, but it prevents usercopy aborts when copying
to a vmap() of smaller pages.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kees Cook
---
 mm/usercopy.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 8c039302465f..63476e1506e0 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -237,6 +238,21 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 		return;
 	}

+	if (is_vmalloc_addr(ptr)) {
+		struct vm_struct *vm = find_vm_area(ptr);
+		unsigned long offset;
+
+		if (!vm) {
+			usercopy_abort("vmalloc", "no area", to_user, 0, n);
+			return;
+		}
+
+		offset = ptr - vm->addr;
+		if (offset + n > vm->size)
+			usercopy_abort("vmalloc", NULL, to_user, offset, n);
+		return;
+	}
+
 	page = virt_to_head_page(ptr);

 	if (PageSlab(page)) {

From patchwork Mon Dec 13 14:27:03 2021
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 12674031
From: "Matthew Wilcox (Oracle)"
To: Kees Cook
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, Thomas Gleixner,
    linux-hardening@vger.kernel.org
Subject: [PATCH v3 3/3] mm/usercopy: Detect compound page overruns
Date: Mon, 13 Dec 2021 14:27:03 +0000
Message-Id: <20211213142703.3066590-4-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20211213142703.3066590-1-willy@infradead.org>
References: <20211213142703.3066590-1-willy@infradead.org>

Move the compound page overrun detection out of
CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kees Cook
---
 mm/usercopy.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 63476e1506e0..db2e8c4f79fd 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -163,7 +163,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 {
 #ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
 	const void *end = ptr + n - 1;
-	struct page *endpage;
 	bool is_reserved, is_cma;

 	/*
@@ -194,11 +193,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 	    ((unsigned long)end & (unsigned long)PAGE_MASK)))
 		return;

-	/* Allow if fully inside the same compound (__GFP_COMP) page. */
-	endpage = virt_to_head_page(end);
-	if (likely(endpage == page))
-		return;
-
 	/*
 	 * Reject if range is entirely either Reserved (i.e. special or
 	 * device memory), or CMA. Otherwise, reject since the object spans
@@ -258,6 +252,11 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (PageSlab(page)) {
 		/* Check slab allocator for flags and size. */
 		__check_heap_object(ptr, n, page, to_user);
+	} else if (PageHead(page)) {
+		/* A compound allocation */
+		unsigned long offset = ptr - page_address(page);
+		if (offset + n > page_size(page))
+			usercopy_abort("page alloc", NULL, to_user, offset, n);
 	} else {
 		/* Verify object does not incorrectly span multiple pages. */
 		check_page_span(ptr, n, page, to_user);