From patchwork Tue Jan 18 18:53:54 2022
X-Patchwork-Submitter: Yury Norov
X-Patchwork-Id: 12716779
From: Yury Norov <yury.norov@gmail.com>
To: Catalin Marinas, Will Deacon, Andrew Morton, Nicholas Piggin,
    Ding Tianhong, Anshuman Khandual, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Yury Norov
Subject: [RFC PATCH] arm64: don't vmap() invalid page
Date: Tue, 18 Jan 2022 10:53:54 -0800
Message-Id: <20220118185354.464517-1-yury.norov@gmail.com>
X-Mailer: git-send-email 2.30.2

vmap() takes struct page *pages as one of its arguments, and a caller may
pass an invalid pointer, which leads to a DABT (data abort) at address
translation later. Currently the kernel only checks the page pointers
against NULL. In my case, however, the pointer was not NULL, and it was
large enough that the hardware generated an Address Size Abort.
Interestingly, this abort happens even if copy_from_kernel_nofault() is
used, which is quite inconvenient for debugging purposes.

This patch adds an arch_vmap_page_valid() helper to the vmap() path, so
that architectures may add arch-specific checks on the pointer passed to
vmap(). On arm64, if a page passed to vmap() corresponds to a physical
address greater than the maximum supported by the system, as described by
the TCR_EL1.IPS field, the subsequent table walk would generate an Address
Size Abort. Instead of creating such an invalid mapping, the kernel now
fails with -ERANGE.
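To illustrate the failure mode, here is a minimal, purely hypothetical
reproducer-style sketch (not taken from the real report; the function name
and the poison value are made up for illustration):

/*
 * Hypothetical sketch only: vmap_with_bogus_entry() and the poison
 * value below are illustrative, not part of this patch.
 */
#include <linux/mm.h>
#include <linux/vmalloc.h>

static void *vmap_with_bogus_entry(struct page **pages, unsigned int count)
{
	/*
	 * Simulate corruption: the last entry is non-NULL garbage, so the
	 * existing WARN_ON(!page) check does not trip, yet page_to_phys()
	 * on it yields an address beyond what TCR_EL1.IPS allows.
	 */
	pages[count - 1] = (struct page *)0xdead4ead00000000UL;

	/*
	 * Before this patch: the PTE is installed and the first access to
	 * the mapping faults with an Address Size Abort, even through
	 * copy_from_kernel_nofault(). With this patch: the mapping is
	 * refused, -ERANGE propagates internally and vmap() returns NULL.
	 */
	return vmap(pages, count, VM_MAP, PAGE_KERNEL);
}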
Signed-off-by: Yury Norov <yury.norov@gmail.com>
---
 arch/arm64/include/asm/vmalloc.h | 41 ++++++++++++++++++++++++++++++++
 include/linux/vmalloc.h          |  7 ++++++
 mm/vmalloc.c                     |  8 +++++--
 3 files changed, 54 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
index b9185503feae..e9d43ee019ad 100644
--- a/arch/arm64/include/asm/vmalloc.h
+++ b/arch/arm64/include/asm/vmalloc.h
@@ -4,6 +4,47 @@
 
 #include
 #include
 
+static inline u64 pa_size(u64 ips)
+{
+	switch (ips) {
+	case 0b000:
+		return 1UL << 32;
+	case 0b001:
+		return 1UL << 36;
+	case 0b010:
+		return 1UL << 40;
+	case 0b011:
+		return 1UL << 42;
+	case 0b100:
+		return 1UL << 44;
+	case 0b101:
+		return 1UL << 48;
+	case 0b110:
+		return 1UL << 52;
+	/* All other values */
+	default:
+		return 1UL << 52;
+	}
+}
+
+#define arch_vmap_page_valid arch_vmap_page_valid
+static inline int arch_vmap_page_valid(struct page *page)
+{
+	u64 tcr, ips, paddr_size;
+
+	if (!page)
+		return -ENOMEM;
+
+	tcr = read_sysreg_s(SYS_TCR_EL1);
+	ips = (tcr & TCR_IPS_MASK) >> TCR_IPS_SHIFT;
+
+	paddr_size = pa_size(ips);
+	if (page_to_phys(page) >= paddr_size)
+		return -ERANGE;
+
+	return 0;
+}
+
 #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
 #define arch_vmap_pud_supported arch_vmap_pud_supported
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 6e022cc712e6..08b567d8bafc 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -119,6 +119,13 @@ static inline int arch_vmap_pte_supported_shift(unsigned long size)
 }
 #endif
 
+#ifndef arch_vmap_page_valid
+static inline int arch_vmap_page_valid(struct page *page)
+{
+	return page ? 0 : -ENOMEM;
+}
+#endif
+
 /*
  * Highlevel APIs for driver use
  */
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d2a00ad4e1dd..ee0384405cdd 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -472,11 +472,15 @@ static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
 		return -ENOMEM;
 	do {
 		struct page *page = pages[*nr];
+		int ret;
 
 		if (WARN_ON(!pte_none(*pte)))
 			return -EBUSY;
-		if (WARN_ON(!page))
-			return -ENOMEM;
+
+		ret = arch_vmap_page_valid(page);
+		if (WARN_ON(ret))
+			return ret;
+
 		set_pte_at(&init_mm, addr, pte, mk_pte(page, prot));
 		(*nr)++;
 	} while (pte++, addr += PAGE_SIZE, addr != end);
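
Not part of the patch, but for context on the new hook: another architecture
could opt in the same way the arm64 hunk does, by defining the helper in its
own asm/vmalloc.h, which linux/vmalloc.h pulls in ahead of the #ifndef
fallback. A hedged sketch, with MAX_POSSIBLE_PHYSMEM_BITS used purely as a
stand-in for whatever limit that architecture actually knows about:

/* Hedged sketch for a hypothetical arch's asm/vmalloc.h, not part of this patch. */
#define arch_vmap_page_valid arch_vmap_page_valid
static inline int arch_vmap_page_valid(struct page *page)
{
	if (!page)
		return -ENOMEM;

	/*
	 * Reject pages whose physical address the hardware cannot express;
	 * MAX_POSSIBLE_PHYSMEM_BITS stands in for the arch-specific limit.
	 */
	if (page_to_phys(page) >= (1ULL << MAX_POSSIBLE_PHYSMEM_BITS))
		return -ERANGE;

	return 0;
}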